
Mailing List Archive: Xen: Devel

[PATCH] xen/mm: do direct hypercall in xen_set_pte() if batching is unavailable

 

 



david.vrabel at citrix

Jun 1, 2012, 8:14 AM

Post #1 of 6
[PATCH] xen/mm: do direct hypercall in xen_set_pte() if batching is unavailable

From: David Vrabel <david.vrabel [at] citrix>

In xen_set_pte() if batching is unavailable (because the caller is in
an interrupt context such as handling a page fault) it would fall back
to using native_set_pte() and trapping and emulating the PTE write.

On 32-bit guests this requires two traps for each PTE write (one for
each dword of the PTE). Instead, do one mmu_update hypercall
directly.

This significantly improves page fault performance in 32-bit PV
guests.

lmbench3 test    Before    After     Improvement
------------------------------------------------
lat_pagefault    3.18 us   2.32 us   27%
lat_proc fork    356 us    313.3 us  11%

Signed-off-by: David Vrabel <david.vrabel [at] citrix>
---
arch/x86/xen/mmu.c | 16 ++++++++++++++--
1 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b8e2794..3bf5dfa 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -308,8 +308,20 @@ static bool xen_batched_set_pte(pte_t *ptep, pte_t pteval)

static inline void __xen_set_pte(pte_t *ptep, pte_t pteval)
{
- if (!xen_batched_set_pte(ptep, pteval))
- native_set_pte(ptep, pteval);
+ if (!xen_batched_set_pte(ptep, pteval)) {
+ /*
+ * Could call native_set_pte() here and trap and
+ * emulate the PTE write but with 32-bit guests this
+ * needs two traps (one for each of the two 32-bit
+ * words in the PTE) so do one hypercall directly
+ * instead.
+ */
+ struct mmu_update u;
+
+ u.ptr = virt_to_machine(ptep).maddr | MMU_NORMAL_PT_UPDATE;
+ u.val = pte_val_ma(pteval);
+ HYPERVISOR_mmu_update(&u, 1, NULL, DOMID_SELF);
+ }
}

static void xen_set_pte(pte_t *ptep, pte_t pteval)
--
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel [at] lists
http://lists.xen.org/xen-devel


konrad.wilk at oracle

Jun 5, 2012, 9:07 AM

Post #2 of 6
Re: [PATCH] xen/mm: do direct hypercall in xen_set_pte() if batching is unavailable [In reply to]

On Fri, Jun 01, 2012 at 04:14:54PM +0100, David Vrabel wrote:
> From: David Vrabel <david.vrabel [at] citrix>
>
> In xen_set_pte() if batching is unavailable (because the caller is in
> an interrupt context such as handling a page fault) it would fall back
> to using native_set_pte() and trapping and emulating the PTE write.
>
> On 32-bit guests this requires two traps for each PTE write (one for
> each dword of the PTE). Instead, do one mmu_update hypercall
> directly.

OK.
>
> This significantly improves page fault performance in 32-bit PV
> guests.

Nice!
>
> lmbench3 test    Before    After     Improvement
> ------------------------------------------------
> lat_pagefault    3.18 us   2.32 us   27%
> lat_proc fork    356 us    313.3 us  11%
>
> Signed-off-by: David Vrabel <david.vrabel [at] citrix>
> ---
> arch/x86/xen/mmu.c | 16 ++++++++++++++--
> 1 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index b8e2794..3bf5dfa 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -308,8 +308,20 @@ static bool xen_batched_set_pte(pte_t *ptep, pte_t pteval)
>
> static inline void __xen_set_pte(pte_t *ptep, pte_t pteval)
> {
> - if (!xen_batched_set_pte(ptep, pteval))
> - native_set_pte(ptep, pteval);
> + if (!xen_batched_set_pte(ptep, pteval)) {
> + /*
> + * Could call native_set_pte() here and trap and
> + * emulate the PTE write but with 32-bit guests this
> + * needs two traps (one for each of the two 32-bit
> + * words in the PTE) so do one hypercall directly
> + * instead.

Ouch.
> + */
> + struct mmu_update u;
> +
> + u.ptr = virt_to_machine(ptep).maddr | MMU_NORMAL_PT_UPDATE;
> + u.val = pte_val_ma(pteval);
> + HYPERVISOR_mmu_update(&u, 1, NULL, DOMID_SELF);
> + }
> }
>
> static void xen_set_pte(pte_t *ptep, pte_t pteval)
> --
> 1.7.2.5
>
>



konrad at darnok

Jun 10, 2012, 3:23 AM

Post #3 of 6
Re: [PATCH] xen/mm: do direct hypercall in xen_set_pte() if batching is unavailable [In reply to]

On Tue, Jun 05, 2012 at 12:07:46PM -0400, Konrad Rzeszutek Wilk wrote:
> On Fri, Jun 01, 2012 at 04:14:54PM +0100, David Vrabel wrote:
> > From: David Vrabel <david.vrabel [at] citrix>
> >
> > In xen_set_pte() if batching is unavailable (because the caller is in
> > an interrupt context such as handling a page fault) it would fall back
> > to using native_set_pte() and trapping and emulating the PTE write.
> >
> > On 32-bit guests this requires two traps for each PTE write (one for
> > each dword of the PTE). Instead, do one mmu_update hypercall
> > directly.
>
> OK.
> >
> > This significantly improves page fault performance in 32-bit PV
> > guests.
>
> Nice!

With this patch I keep on getting this (which is v3.5-rc2 plus my
patches in stable/for-linus-3.5 and yours):

Loading latest/xen.gz... ok
Loading latest/vmlinuz... ok
Loading latest/initramfs.cpio.gz... ok
[ASCII-art Xen boot banner]
(XEN) Xen version 4.1-120609 (konrad [at] dumpdata) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) Sat Jun 9 10:49:23 EDT 2012
(XEN) Latest ChangeSet: Fri May 25 08:18:47 2012 +0100 23298:435493696053
(XEN) Bootloader: unknown
(XEN) Command line: com1=115200,8n1 console=com1,vga guest_loglvl=all dom0_mem=1G,max:2G dom0_max_vcpus=2 cpufreq=performance,verbose loglvl=all apic=debug
(XEN) Video information:
(XEN) VGA is text mode 80x25, font 8x16
(XEN) VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN) EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN) Found 1 MBR signatures
(XEN) Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN) 0000000000000000 - 000000000009ec00 (usable)
(XEN) 000000000009ec00 - 00000000000a0000 (reserved)
(XEN) 00000000000e0000 - 0000000000100000 (reserved)
(XEN) 0000000000100000 - 0000000020000000 (usable)
(XEN) 0000000020000000 - 0000000020200000 (reserved)
(XEN) 0000000020200000 - 0000000040000000 (usable)
(XEN) 0000000040000000 - 0000000040200000 (reserved)
(XEN) 0000000040200000 - 00000000bad80000 (usable)
(XEN) 00000000bad80000 - 00000000badc9000 (ACPI NVS)
(XEN) 00000000badc9000 - 00000000badd1000 (ACPI data)
(XEN) 00000000badd1000 - 00000000badf4000 (reserved)
(XEN) 00000000badf4000 - 00000000badf6000 (usable)
(XEN) 00000000badf6000 - 00000000bae06000 (reserved)
(XEN) 00000000bae06000 - 00000000bae14000 (ACPI NVS)
(XEN) 00000000bae14000 - 00000000bae3c000 (reserved)
(XEN) 00000000bae3c000 - 00000000bae7f000 (ACPI NVS)
(XEN) 00000000bae7f000 - 00000000bb000000 (usable)
(XEN) 00000000bb800000 - 00000000bfa00000 (reserved)
(XEN) 00000000fed1c000 - 00000000fed40000 (reserved)
(XEN) 00000000ff000000 - 0000000100000000 (reserved)
(XEN) 0000000100000000 - 000000023fe00000 (usable)
(XEN) ACPI: RSDP 000F0450, 0024 (r2 ALASKA)
(XEN) ACPI: XSDT BADC9068, 0054 (r1 ALASKA A M I 1072009 AMI 10013)
(XEN) ACPI: FACP BADD0308, 00F4 (r4 ALASKA A M I 1072009 AMI 10013)
(XEN) ACPI: DSDT BADC9150, 71B5 (r2 ALASKA A M I 15 INTL 20051117)
(XEN) ACPI: FACS BAE0BF80, 0040
(XEN) ACPI: APIC BADD0400, 0072 (r3 ALASKA A M I 1072009 AMI 10013)
(XEN) ACPI: SSDT BADD0478, 0102 (r1 AMICPU PROC 1 MSFT 3000001)
(XEN) ACPI: MCFG BADD0580, 003C (r1 ALASKA A M I 1072009 MSFT 97)
(XEN) ACPI: HPET BADD05C0, 0038 (r1 ALASKA A M I 1072009 AMI. 4)
(XEN) ACPI: ASF! BADD05F8, 00A0 (r32 INTEL HCG 1 TFSM F4240)
(XEN) System RAM: 8104MB (8299140kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-000000023fe00000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000fcde0
(XEN) DMI 2.7 present.
(XEN) Using APIC driver default
(XEN) ACPI: 32/64X FACS address mismatch in FADT - bae0bf80/0000000000000000, using 32
(XEN) ACPI: wakeup_vec[bae0bf8c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x01] enabled)
(XEN) Processor #1 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
(XEN) Processor #3 6:10 APIC version 21
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode: Flat. Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0 buses 0 - 255
(XEN) PCI: Not using MMCONFIG.
(XEN) Table is not found!
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) IRQ limits: 24 GSI, 760 MSI/MSI-X
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3093.009 MHz processor.
(XEN) Initing memory sharing.
(XEN) mce_intel.c:1162: MCA Capability: BCAST 1 SER 0 CMCirtualisation disabled
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) ENABLING IO-APIC IRQs
(XEN) -> Using old ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) TSC deadline timer enabled
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 32 KiB.
(XEN) VMX: Supported advanced features:
(XEN) - APIC MMIO access virtualisation
(XEN) - APIC TPR shadow
(XEN) - Extended Page Tables (EPT)
(XEN) - Virtual-Processor Identifiers (VPID)
(XEN) - Virtual NMI
(XEN) - MSR direct-access bitmap
(XEN) - Unrestricted Guest
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB
(XEN) Brought up 4 CPUs
(XEN) ACPI sleep modes: S3
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0x7a3000
(XEN) elf_parse_binary: phdr: paddr=0x1800000 memsz=0x850e8
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x1e41000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff8189a210
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN) virt_base =3D 0xffffffff80000000
(XEN) elf_paddr_offset =3D 0x0
(XEN) virt_offset =3D 0xffffffff80000000
(XEN) virt_kstart =3D 0xffffffff81000000
(XEN) virt_kend =3D 0xffffffff81e41000
(XEN) virt_entry =3D 0xffffffff8189a210
(XEN) p2m_base =3D 0xffffffffffffffff
(XEN) Xen kernel: 64-bit, lsb, compat32
(XEN) Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x1e41000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.: 0000000224000000->0000000228000000 (187430 pages to be allocated)
(XEN) Init. ramdisk: 0000000231a26000->000000023fdffa00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN) Loaded kernel: ffffffff81000000->ffffffff81e41000
(XEN) Init. ramdisk: ffffffff81e41000->ffffffff9021aa00
(XEN) Phys-Mach map: ffffffff9021b000->ffffffff9041b000
(XEN) Start info: ffffffff9041b000->ffffffff9041b4b4
(XEN) Page tables: ffffffff9041c000->ffffffff904a3000
(XEN) Boot stack: ffffffff904a3000->ffffffff904a4000
(XEN) TOTAL: ffffffff80000000->ffffffff90800000
(XEN) ENTRY ADDRESS: ffffffff8189a210
(XEN) Dom0 has maximum 2 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff817a3000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81800000 -> 0xffffffff818850e8
(XEN) elf_load_binary: phdr 2 at 0xffffffff81886000 -> 0xffffffff81899280
(XEN) elf_load_binary: phdr 3 at 0xffffffff8189a000 -> 0xffffffff8193e000
(XEN) Scrubbing Free RAM: ..................................................................done.
(XEN) Xen trace buffers: disabled
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) Xen is relinquishing VGA console.
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Linux version 3.5.0-rc2upstream-00011-g3dccb5f-dirty (konrad@build.dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) #1 SMP Sat Jun 9 10:49:11 EDT 2012
[ 0.000000] Command line: earlyprintk=xen debug nofb console=tty console=hvc0 loglevel=10
[ 0.000000] Freeing 9e-100 pfn range: 98 pages freed
[ 0.000000] Freeing 20000-20200 pfn range: 512 pages freed
[ 0.000000] Released 610 pages of unused memory
[ 0.000000] Set 283999 page(s) to 1-1 mapping
[ 0.000000] Populating 40200-40462 pfn range: 610 pages added
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] Xen: [mem 0x0000000000000000-0x000000000009dfff] usable
[ 0.000000] Xen: [mem 0x000000000009ec00-0x00000000000fffff] reserved
[ 0.000000] Xen: [mem 0x0000000000100000-0x000000001fffffff] usable
[ 0.000000] Xen: [mem 0x0000000020000000-0x00000000201fffff] reserved
[ 0.000000] Xen: [mem 0x0000000020200000-0x000000003fffffff] usable
[ 0.000000] Xen: [mem 0x0000000040000000-0x00000000401fffff] reserved
[ 0.000000] Xen: [mem 0x0000000040200000-0x0000000080461fff] usable
[ 0.000000] Xen: [mem 0x0000000080462000-0x00000000bad7ffff] unusable
[ 0.000000] Xen: [mem 0x00000000bad80000-0x00000000badc8fff] ACPI NVS
[ 0.000000] Xen: [mem 0x00000000badc9000-0x00000000badd0fff] ACPI data
[ 0.000000] Xen: [mem 0x00000000badd1000-0x00000000badf3fff] reserved
[ 0.000000] Xen: [mem 0x00000000badf4000-0x00000000badf5fff] unusable
[ 0.000000] Xen: [mem 0x00000000badf6000-0x00000000bae05fff] reserved
[ 0.000000] Xen: [mem 0x00000000bae06000-0x00000000bae13fff] ACPI NVS
[ 0.000000] Xen: [mem 0x00000000bae14000-0x00000000bae3bfff] reserved
[ 0.000000] Xen: [mem 0x00000000bae3c000-0x00000000bae7efff] ACPI NVS
[ 0.000000] Xen: [mem 0x00000000bae7f000-0x00000000baffffff] unusable
[ 0.000000] Xen: [mem 0x00000000bb800000-0x00000000bf9fffff] reserved
[ 0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[ 0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed3ffff] reserved
[ 0.000000] Xen: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
[ 0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[ 0.000000] Xen: [mem 0x0000000100000000-0x000000023fdfffff] unusable
[ 0.000000] bootconsole [xenboot0] enabled
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] DMI 2.7 present.
[ 0.000000] DMI: MSI MS-7680/H61M-P23 (MS-7680), BIOS V17.0 03/14/2011
[ 0.000000] e820: update [mem 0x00000000-0x0000ffff] usable ==> reserved
[ 0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[ 0.000000] No AGP bridge found
[ 0.000000] e820: last_pfn = 0x80462 max_arch_pfn = 0x400000000
[ 0.000000] found SMP MP-table at [mem 0x000fcde0-0x000fcdef] mapped at [ffff8800000fcde0]
[ 0.000000] initial memory mapped: [mem 0x00000000-0x1021afff]
[ 0.000000] Base memory trampoline at [ffff880000098000] 98000 size 24576
[ 0.000000] init_memory_mapping: [mem 0x00000000-0x80461fff]
[ 0.000000] [mem 0x00000000-0x80461fff] page 4k
[ 0.000000] kernel direct mapping tables up to 0x80461fff @ [mem 0x00bf9000-0x00ffffff]
(XEN) mm.c:659:d0 Could not get page ref for pfn fffffffffffff
(XEN) mm.c:3460:d0 Could not get page for normal update
(XEN) mm.c:659:d0 Could not get page ref for pfn fffffffffffff
(XEN) mm.c:3460:d0 Could not get page for normal update
(XEN) mm.c:659:d0 Could not get page ref for pfn fffffffffffff
(XEN) mm.c:3460:d0 Could not get page for normal update
(XEN) mm.c:659:d0 Could not get page ref for pfn fffffffffffff
(XEN) mm.c:3460:d0 Could not get page for normal update
(XEN) mm.c:659:d0 Could not get page ref for pfn fffffffffffff
(XEN) mm.c:3460:d0 Could not get page for normal update
(XEN) mm.c:659:d0 Could not get page ref for pfn fffffffffffff



david.vrabel at citrix

Jun 11, 2012, 3:23 AM

Post #4 of 6
Re: [PATCH] xen/mm: do direct hypercall in xen_set_pte() if batching is unavailable [In reply to]

On 10/06/12 11:23, Konrad Rzeszutek Wilk wrote:
> On Tue, Jun 05, 2012 at 12:07:46PM -0400, Konrad Rzeszutek Wilk wrote:
>> On Fri, Jun 01, 2012 at 04:14:54PM +0100, David Vrabel wrote:
>>> From: David Vrabel <david.vrabel [at] citrix>
>>>
>>> In xen_set_pte() if batching is unavailable (because the caller is in
>>> an interrupt context such as handling a page fault) it would fall back
>>> to using native_set_pte() and trapping and emulating the PTE write.
>>>
>>> On 32-bit guests this requires two traps for each PTE write (one for
>>> each dword of the PTE). Instead, do one mmu_update hypercall
>>> directly.
>>
>> OK.
>>>
>>> This significantly improves page fault performance in 32-bit PV
>>> guests.
>>
>> Nice!
>
> With this patch I keep on getting this (which is v3.5-rc2 plus my
> patches in stable/for-linus-3.5 and yours):
[...]
> (XEN) mm.c:659:d0 Could not get page ref for pfn fffffffffffff
> (XEN) mm.c:3460:d0 Could not get page for normal update

Are you talking about these? I've not seen them. Do you know when they
happen?

The patch doesn't change which PTEs are written or their values, so I don't
think I've introduced a regression -- it only surfaces a new
warning/error.

David



konrad at darnok

Jun 11, 2012, 5:29 AM

Post #5 of 6
Re: [PATCH] xen/mm: do direct hypercall in xen_set_pte() if batching is unavailable [In reply to]

On Mon, Jun 11, 2012 at 11:23:11AM +0100, David Vrabel wrote:
> On 10/06/12 11:23, Konrad Rzeszutek Wilk wrote:
> > On Tue, Jun 05, 2012 at 12:07:46PM -0400, Konrad Rzeszutek Wilk wrote:
> >> On Fri, Jun 01, 2012 at 04:14:54PM +0100, David Vrabel wrote:
> >>> From: David Vrabel <david.vrabel [at] citrix>
> >>>
> >>> In xen_set_pte() if batching is unavailable (because the caller is in
> >>> an interrupt context such as handling a page fault) it would fall back
> >>> to using native_set_pte() and trapping and emulating the PTE write.
> >>>
> >>> On 32-bit guests this requires two traps for each PTE write (one for
> >>> each dword of the PTE). Instead, do one mmu_update hypercall
> >>> directly.
> >>
> >> OK.
> >>>
> >>> This significantly improves page fault performance in 32-bit PV
> >>> guests.
> >>
> >> Nice!
> >
> > With this patch I keep on getting this (which is v3.5-rc2 plus my
> > patches in stable/for-linus-3.5 and yours):
> [...]
> > (XEN) mm.c:659:d0 Could not get page ref for pfn fffffffffffff
> > (XEN) mm.c:3460:d0 Could not get page for normal update
>
> Are you talking about these? I've not seen them. Do you know when they
> happen?

During the bootup. I hadn't really done much investigation - but reverting
your patch (so v3.5-rc2+stable/for-linus-3.5 minus your patch) makes these
errors go away.
>
> The patch doesn't change what PTEs are written or their value so I don't
> think I've introduced a regression -- only it now prints a new
> warning/error.

The boot doesn't finish. It keeps on printing those forever. This is
of course dom0 - I hadn't gotten to trying out a domU guest.



david.vrabel at citrix

Jun 13, 2012, 9:10 AM

Post #6 of 6
Re: [PATCH] xen/mm: do direct hypercall in xen_set_pte() if batching is unavailable [In reply to]

On 11/06/12 13:29, Konrad Rzeszutek Wilk wrote:
> On Mon, Jun 11, 2012 at 11:23:11AM +0100, David Vrabel wrote:
>> On 10/06/12 11:23, Konrad Rzeszutek Wilk wrote:
>>> On Tue, Jun 05, 2012 at 12:07:46PM -0400, Konrad Rzeszutek Wilk wrote:
>>>> On Fri, Jun 01, 2012 at 04:14:54PM +0100, David Vrabel wrote:
>>>>> From: David Vrabel <david.vrabel [at] citrix>
>>>>>
>>>>> In xen_set_pte() if batching is unavailable (because the caller is in
>>>>> an interrupt context such as handling a page fault) it would fall back
>>>>> to using native_set_pte() and trapping and emulating the PTE write.
>>>>>
>>>>> On 32-bit guests this requires two traps for each PTE write (one for
>>>>> each dword of the PTE). Instead, do one mmu_update hypercall
>>>>> directly.
>>>>
>>>> OK.
>>>>>
>>>>> This significantly improves page fault performance in 32-bit PV
>>>>> guests.
>>>>
>>>> Nice!
>>>
>>> With this patch I keep on getting this (which is v3.5-rc2 plus my
>>> patches in stable/for-linus-3.5 and yours):
>> [...]
>>> (XEN) mm.c:659:d0 Could not get page ref for pfn fffffffffffff
>>> (XEN) mm.c:3460:d0 Could not get page for normal update
>>
>> Are you talking about these? I've not seen them. Do you know when they
>> happen?
>
> During the bootup. I hadn't really done much investigation - but reverting
> your patch (so v3.5-rc2+stable/for-linus-3.5 minus your patch) makes these
> errors go away.

Trying to update the PTE at:

pte: v: ffffffffff4f8000, p: 7f4f8000, m: fffffffffffff000

It seems we cannot get the MFN for the page containing this PTE. It
appears not to be in the p2m, which is understandable, as the PFN here is
outside of available RAM (this PFN is marked as UNUSABLE in the e820 map).

It's really not clear how this has ever worked.

David

