
Mailing List Archive: Linux: Kernel

[PATCH RFC V8 0/17] Paravirtualized ticket spinlocks

 

 



raghavendra.kt at linux

May 2, 2012, 3:06 AM

Post #1 of 34
[PATCH RFC V8 0/17] Paravirtualized ticket spinlocks

This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism, providing implementations
for both Xen and KVM. (Targeted for the 3.5 merge window.)

Note: this depends on the debugfs changes patch that should be in Xen / linux-next
https://lkml.org/lkml/2012/3/30/687

Changes in V8:
- Rebased patches to 3.4-rc4
- Combined the KVM changes with ticketlock + Xen changes (Ingo)
- Removed CAP_PV_UNHALT since it is redundant (Avi). But note that we
need newer qemu which uses KVM_GET_SUPPORTED_CPUID ioctl.
- Rewrite GET_MP_STATE condition (Avi)
- Make pv_unhalt = bool (Avi)
- Move out reset pv_unhalt code to vcpu_run from vcpu_block (Gleb)
- Documentation changes (Rob Landley)
- Have a printk to recognize that paravirt spinlock is enabled (Nikunj)
- Move out kick hypercall out of CONFIG_PARAVIRT_SPINLOCK now
so that it can be used for other optimizations such as
flush_tlb_ipi_others etc. (Nikunj)

Ticket locks have an inherent problem in a virtualized environment,
because the vCPUs are scheduled rather than running concurrently
(ignoring gang-scheduled vCPUs). This can result in catastrophic
performance collapse when the vCPU scheduler doesn't schedule the
correct "next" vCPU, and instead schedules a vCPU which burns its
entire timeslice spinning. (Note that this is not the same problem as
lock-holder preemption, which this series also addresses; that is also
a problem, but not a catastrophic one.)

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

Currently we deal with this by having PV spinlocks, which adds a layer
of indirection in front of all the spinlock functions, and defining a
completely new implementation for Xen (and for other pvops users, but
there are none at present).

PV ticketlocks keep the existing ticketlock implementation
(fastpath) as-is, but add a couple of pvops for the slow paths:

- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
iterations, then call out to the __ticket_lock_spinning() pvop,
which allows a backend to block the vCPU rather than spinning. This
pvop can set the lock into "slowpath state".

- When releasing a lock, if it is in "slowpath state", call
__ticket_unlock_kick() to kick the next vCPU in line awake. If the
lock is no longer in contention, it also clears the slowpath flag.

The "slowpath state" is stored in the LSB of the lock tail ticket.
This has the effect of halving the maximum number of CPUs (so a
"small ticket" can deal with 128 CPUs, and a "large ticket" with
32768).

For KVM, one hypercall is introduced in the hypervisor that allows a vCPU
to kick another vCPU out of halt state.
Blocking of the vCPU is done using halt() in the (lock_spinning) slowpath.

Overall, it results in a large reduction in code, it makes the native
and virtualized cases closer, and it removes a layer of indirection
around all the spinlock functions.

The fast path (taking an uncontended lock which isn't in "slowpath"
state) is optimal, identical to the non-paravirtualized case.

The inner part of the ticket lock code becomes:

	inc = xadd(&lock->tickets, inc);
	inc.tail &= ~TICKET_SLOWPATH_FLAG;

	if (likely(inc.head == inc.tail))
		goto out;

	for (;;) {
		unsigned count = SPIN_THRESHOLD;

		do {
			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
				goto out;
			cpu_relax();
		} while (--count);

		__ticket_lock_spinning(lock, inc.tail);
	}
out:
	barrier();
which results in:
push %rbp
mov %rsp,%rbp

mov $0x200,%eax
lock xadd %ax,(%rdi)
movzbl %ah,%edx
cmp %al,%dl
jne 1f # Slowpath if lock in contention

pop %rbp
retq

### SLOWPATH START
1: and $-2,%edx
movzbl %dl,%esi

2: mov $0x800,%eax
jmp 4f

3: pause
sub $0x1,%eax
je 5f

4: movzbl (%rdi),%ecx
cmp %cl,%dl
jne 3b

pop %rbp
retq

5: callq *__ticket_lock_spinning
jmp 2b
### SLOWPATH END

With CONFIG_PARAVIRT_SPINLOCKS=n, the code changes slightly: the
fastpath case is straight through (taking the lock without
contention), and the spin loop is out of line:

push %rbp
mov %rsp,%rbp

mov $0x100,%eax
lock xadd %ax,(%rdi)
movzbl %ah,%edx
cmp %al,%dl
jne 1f

pop %rbp
retq

### SLOWPATH START
1: pause
movzbl (%rdi),%eax
cmp %dl,%al
jne 1b

pop %rbp
retq
### SLOWPATH END

The unlock code is complicated by the need to both add to the lock's
"head" and fetch the slowpath flag from "tail". This version of the
patch uses a locked add to do this, followed by a test to see if the
slowflag is set. The lock prefix acts as a full memory barrier, so we
can be sure that other CPUs will have seen the unlock before we read
the flag (without the barrier the read could be fetched from the
store queue before it hits memory, which could result in a deadlock).

This is all unnecessary complication if you're not using PV ticket
locks, so the patch also uses the jump-label machinery to fall back to
the standard "add"-based unlock in the non-PV case.

	if (TICKET_SLOWPATH_FLAG &&
	    static_key_false(&paravirt_ticketlocks_enabled)) {
		arch_spinlock_t prev;

		prev = *lock;
		add_smp(&lock->tickets.head, TICKET_LOCK_INC);

		/* add_smp() is a full mb() */

		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
			__ticket_unlock_slowpath(lock, prev);
	} else
		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
which generates:
push %rbp
mov %rsp,%rbp

nop5 # replaced by 5-byte jmp 2f when PV enabled

# non-PV unlock
addb $0x2,(%rdi)

1: pop %rbp
retq

### PV unlock ###
2: movzwl (%rdi),%esi # Fetch prev

lock addb $0x2,(%rdi) # Do unlock

testb $0x1,0x1(%rdi) # Test flag
je 1b # Finished if not set

### Slow path ###
add $2,%sil # Add "head" in old lock state
mov %esi,%edx
and $0xfe,%dh # clear slowflag for comparison
movzbl %dh,%eax
cmp %dl,%al # If head == tail (uncontended)
je 4f # clear slowpath flag

# Kick next CPU waiting for lock
3: movzbl %sil,%esi
callq *pv_lock_ops.kick

pop %rbp
retq

# Lock no longer contended - clear slowflag
4: mov %esi,%eax
lock cmpxchg %dx,(%rdi) # cmpxchg to clear flag
cmp %si,%ax
jne 3b # If clear failed, then kick

pop %rbp
retq

So when not using PV ticketlocks, the unlock sequence just has a
5-byte nop added to it, and the PV case is reasonably straightforward
aside from requiring a "lock add".

TODO: 1) Remove CONFIG_PARAVIRT_SPINLOCK ?
2) Experiments on further optimization possibilities. (discussed in V6)
3) Use kvm_irq_delivery_to_apic() in kvm hypercall (suggested by Gleb)
4) Any cleanups for e.g. Xen/KVM common code for debugfs.

PS: The TODOs are not blockers for merging the current series.

Results:
=======
Various results based on V6 of the patch series are posted at the following links:

https://lkml.org/lkml/2012/3/21/161
https://lkml.org/lkml/2012/3/21/198

kvm results:
https://lkml.org/lkml/2012/3/23/50
https://lkml.org/lkml/2012/4/5/73

Benchmarking on the current set of patches will be posted soon.

Thoughts? Comments? Suggestions? It would be nice to see
Acked-by/Reviewed-by/Tested-by tags for the patch series.

Jeremy Fitzhardinge (9):
x86/spinlock: Replace pv spinlocks with pv ticketlocks
x86/ticketlock: Collapse a layer of functions
xen: Defer spinlock setup until boot CPU setup
xen/pvticketlock: Xen implementation for PV ticket locks
xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv
ticketlocks
x86/pvticketlock: Use callee-save for lock_spinning
x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
x86/ticketlock: Add slowpath logic
xen/pvticketlock: Allow interrupts to be enabled while blocking

Srivatsa Vaddagiri (3):
Add a hypercall to KVM hypervisor to support pv-ticketlocks
Added configuration support to enable debug information for KVM Guests
Paravirtual ticketlock support for linux guests running on KVM hypervisor

Raghavendra K T (3):
x86/ticketlock: Don't inline _spin_unlock when using paravirt
spinlocks
Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration
Add documentation on Hypercalls and features used for PV spinlock

Andrew Jones (1):
Split out rate limiting from jump_label.h

Stefano Stabellini (1):
xen: Enable PV ticketlocks on HVM Xen
---
PS: Had to trim down the recipient list because the LKML archive does not
support lists > 20, though many more people should have been in the To/CC list.

Ticketlock links:
V7 : https://lkml.org/lkml/2012/4/19/335
V6 : https://lkml.org/lkml/2012/3/21/161

KVM patch links:
V6: https://lkml.org/lkml/2012/4/23/123

V5 kernel changes:
https://lkml.org/lkml/2012/3/23/50
Qemu changes for V5:
http://lists.gnu.org/archive/html/qemu-devel/2012-03/msg04455.html

V4 kernel changes:
https://lkml.org/lkml/2012/1/14/66
Qemu changes for V4:
http://www.mail-archive.com/kvm [at] vger/msg66450.html

V3 kernel Changes:
https://lkml.org/lkml/2011/11/30/62
Qemu patch for V3:
http://lists.gnu.org/archive/html/qemu-devel/2011-12/msg00397.html

V2 kernel changes :
https://lkml.org/lkml/2011/10/23/207

Previous discussions (posted by Srivatsa V):
https://lkml.org/lkml/2010/7/26/24
https://lkml.org/lkml/2011/1/19/212

Ticketlock change history:
Changes in V7:
- Rebased patches to 3.4-rc3
- Added jump-label split patch (originally from Andrew Jones, rebased to
3.4-rc3)
- jump-label changes from Ingo and Jason taken; now using static_key_*
instead of static_branch
- using UNINLINE_SPIN_UNLOCK (which was split out as per suggestion from Linus)
- This patch series is rebased on the debugfs patch (that should already be in
Xen/linux-next https://lkml.org/lkml/2012/3/23/51)

Changes in V6 posting: (Raghavendra K T)
- Rebased to linux-3.3-rc6.
- used function+enum in place of macro (better type checking)
- use cmpxchg while resetting zero status for possible race
[suggested by Dave Hansen for KVM patches ]

KVM patch Change history:
Changes in V6:
- Rebased to 3.4-rc3
- Removed debugfs changes patch which should now be in Xen/linux-next.
(https://lkml.org/lkml/2012/3/30/687)
- Removed PV_UNHALT_MSR since currently we don't need guest communication,
and folded pv_unhalt into GET_MP_STATE (Marcelo, Avi [long back])
- Took jump-label changes from Ingo/Jason into use (static_key_slow_inc usage)
- Added inline to spinlock_init in the non-PARAVIRT case
- Moved arch-specific code to arch/x86 and added stubs for other archs (Marcelo)
- Added more comments on pv_unhalt usage etc. (Marcelo)

Changes in V5:
- rebased to 3.3-rc6
- added PV_UNHALT_MSR that would help in live migration (Avi)
- removed PV_LOCK_KICK vcpu request and pv_unhalt flag (re)added.
- Changed hypercall documentation (Alex).
- mode_t changed to umode_t in debugfs.
- MSR related documentation added.
- rename PV_LOCK_KICK to PV_UNHALT.
- host and guest patches not mixed. (Marcelo, Alex)
- kvm_kick_cpu now takes cpu so it can be used by flush_tlb_ipi_other
paravirtualization (Nikunj)
- coding style changes in variable declaration etc. (Srikar)

Changes in V4:
- rebased to 3.2.0-pre.
- use APIC ID for kicking the vcpu and use kvm_apic_match_dest for matching (Avi)
- fold vcpu->kicked flag into vcpu->requests (KVM_REQ_PVLOCK_KICK) and related
changes to the UNHALT path to make pv ticket spinlock migration friendly (Avi, Marcelo)
- Added documentation for CPUID, hypercall (KVM_HC_KICK_CPU)
and capability (KVM_CAP_PVLOCK_KICK) (Avi)
- Removed unneeded kvm_arch_vcpu_ioctl_set_mpstate call (Marcelo)
- cumulative variable type changed (int ==> u32) in add_stat (Konrad)
- remove unneeded kvm_guest_init for !CONFIG_KVM_GUEST case

Changes in V3:
- rebased to 3.2-rc1
- use halt() instead of wait-for-kick hypercall.
- modify kick hypercall to wake up the halted vcpu.
- hook kvm_spinlock_init to smp_prepare_cpus call (moved the call out of head##.c).
- fix the potential race when zero_stat is read.
- export debugfs_create_32 and add documentation to the API.
- use static inline and enum instead of ADDSTAT macro.
- add barrier() after setting kick_vcpu.
- empty static inline function for kvm_spinlock_init.
- combine patches one and two to reduce overhead.
- make KVM_DEBUGFS depend on DEBUGFS.
- include debugfs header unconditionally.

Changes in V2:
- rebased patches to -rc9
- synchronization-related changes based on Jeremy's changes
(Jeremy Fitzhardinge <jeremy.fitzhardinge [at] citrix>) pointed out by
Stephan Diestelhorst <stephan.diestelhorst [at] amd>
- enabled 32-bit guests
- split patches into two more chunks

Documentation/virtual/kvm/cpuid.txt | 4 +
Documentation/virtual/kvm/hypercalls.txt | 60 +++++
arch/x86/Kconfig | 10 +
arch/x86/include/asm/kvm_host.h | 4 +
arch/x86/include/asm/kvm_para.h | 16 +-
arch/x86/include/asm/paravirt.h | 32 +--
arch/x86/include/asm/paravirt_types.h | 10 +-
arch/x86/include/asm/spinlock.h | 128 +++++++----
arch/x86/include/asm/spinlock_types.h | 16 +-
arch/x86/kernel/kvm.c | 256 ++++++++++++++++++++
arch/x86/kernel/paravirt-spinlocks.c | 18 +-
arch/x86/kvm/cpuid.c | 3 +-
arch/x86/kvm/x86.c | 44 ++++-
arch/x86/xen/smp.c | 3 +-
arch/x86/xen/spinlock.c | 387 ++++++++++--------------------
include/linux/jump_label.h | 26 +--
include/linux/jump_label_ratelimit.h | 34 +++
include/linux/kvm_para.h | 1 +
include/linux/perf_event.h | 1 +
kernel/jump_label.c | 1 +
20 files changed, 673 insertions(+), 381 deletions(-)

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo [at] vger
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/


mingo at kernel

May 7, 2012, 1:29 AM

Post #2 of 34
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks

* Raghavendra K T <raghavendra.kt [at] linux> wrote:

> This series replaces the existing paravirtualized spinlock mechanism
> with a paravirtualized ticketlock mechanism. The series provides
> implementation for both Xen and KVM.
>
> [...]

This is looking pretty good and complete now - any objections
from anyone to trying this out in a separate x86 topic tree?

Thanks,

Ingo


avi at redhat

May 7, 2012, 1:32 AM

Post #3 of 34
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks

On 05/07/2012 11:29 AM, Ingo Molnar wrote:
> This is looking pretty good and complete now - any objections
> from anyone to trying this out in a separate x86 topic tree?

No objections, instead an

Acked-by: Avi Kivity <avi [at] redhat>

--
error compiling committee.c: too many arguments to function



raghavendra.kt at linux

May 7, 2012, 3:58 AM

Post #4 of 34
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks

On 05/07/2012 02:02 PM, Avi Kivity wrote:
> On 05/07/2012 11:29 AM, Ingo Molnar wrote:
>> This is looking pretty good and complete now - any objections
>> from anyone to trying this out in a separate x86 topic tree?
>
> No objections, instead an
>
> Acked-by: Avi Kivity<avi [at] redhat>
>

Thank you.

Here is a benchmark result with the patches.

3 guests with 8 vCPUs and 8GB RAM each; 1 used for kernbench
(kernbench -f -H -M -o 20), the others for cpuhog (a shell script
spinning in a while-true loop).

Unpinned scenario:
1x: no hogs
2x: 8 hogs in one guest
3x: 8 hogs each in two guests

BASE: 3.4-rc4 vanilla with CONFIG_PARAVIRT_SPINLOCK=n
BASE+patch: 3.4-rc4 + debugfs + pv patches with CONFIG_PARAVIRT_SPINLOCK=y

Machine: IBM xSeries with Intel(R) Xeon(R) X5570 2.93GHz CPU (non-PLE),
8 cores, 64GB RAM

(Less is better. Below is time elapsed in seconds for x86_64_defconfig
(3+3 runs).)

             BASE                 BASE+patch           %improvement
             mean (sd)            mean (sd)
case 1x:     66.0566 (74.0304)    61.3233 (68.8299)     7.16552
case 2x:     1253.2  (1795.74)    131.606 (137.358)    89.4984
case 3x:     3431.04 (5297.26)    134.964 (149.861)    96.0664


Will be working on further analysis with other benchmarks
(pgbench/sysbench/ebizzy...) and further optimization.



avi at redhat

May 7, 2012, 5:06 AM

Post #5 of 34
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks

On 05/07/2012 01:58 PM, Raghavendra K T wrote:
> [...]
>
> Here is a benchmark result with the patches.
>
> 3 guests with 8VCPU, 8GB RAM, 1 used for kernbench
> (kernbench -f -H -M -o 20) other for cpuhog (shell script while
> true with an instruction)
>
> unpinned scenario
> 1x: no hogs
> 2x: 8hogs in one guest
> 3x: 8hogs each in two guest
>
> BASE: 3.4-rc4 vanilla with CONFIG_PARAVIRT_SPINLOCK=n
> BASE+patch: 3.4-rc4 + debugfs + pv patches with
> CONFIG_PARAVIRT_SPINLOCK=y
>
> Machine : IBM xSeries with Intel(R) Xeon(R) x5570 2.93GHz CPU (Non
> PLE) with 8 core , 64GB RAM
>
> (Less is better. Below is time elapsed in sec for x86_64_defconfig
> (3+3 runs)).
>
> BASE BASE+patch %improvement
> mean (sd) mean (sd)
> case 1x: 66.0566 (74.0304) 61.3233 (68.8299) 7.16552
> case 2x: 1253.2 (1795.74) 131.606 (137.358) 89.4984
> case 3x: 3431.04 (5297.26) 134.964 (149.861) 96.0664
>

You're calculating the improvement incorrectly. In the last case, it's
not 96%, rather it's 2400% (25x). Similarly the second case is about
900% faster.

--
error compiling committee.c: too many arguments to function



raghavendra.kt at linux

May 7, 2012, 6:20 AM

Post #6 of 34 (798 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On 05/07/2012 05:36 PM, Avi Kivity wrote:
> On 05/07/2012 01:58 PM, Raghavendra K T wrote:
>> On 05/07/2012 02:02 PM, Avi Kivity wrote:
>>> On 05/07/2012 11:29 AM, Ingo Molnar wrote:
>>>> This is looking pretty good and complete now - any objections
>>>> from anyone to trying this out in a separate x86 topic tree?
>>>
>>> No objections, instead an
>>>
>>> Acked-by: Avi Kivity<avi [at] redhat>
>>>
[...]
>>
>> (Less is better. Below is time elapsed in sec for x86_64_defconfig
>> (3+3 runs)).
>>
>> BASE BASE+patch %improvement
>> mean (sd) mean (sd)
>> case 1x: 66.0566 (74.0304) 61.3233 (68.8299) 7.16552
>> case 2x: 1253.2 (1795.74) 131.606 (137.358) 89.4984
>> case 3x: 3431.04 (5297.26) 134.964 (149.861) 96.0664
>>
>
> You're calculating the improvement incorrectly. In the last case, it's
> not 96%, rather it's 2400% (25x). Similarly the second case is about
> 900% faster.
>

You are right;
my %improvement was intended to be read as:
1) base takes 100 sec ==> patch takes 93 sec
2) base takes 100 sec ==> patch takes 11 sec
3) base takes 100 sec ==> patch takes 4 sec

The above is more confusing (and incorrect!).

What you suggested is better, and it boils down to 10x and 25x
improvements in case 2 and case 3. And IMO, this *really* conveys the
magnitude of the improvement with the patches.

I'll change the script to report it that way :).



avi at redhat

May 7, 2012, 6:22 AM

Post #7 of 34 (797 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On 05/07/2012 04:20 PM, Raghavendra K T wrote:
> On 05/07/2012 05:36 PM, Avi Kivity wrote:
>> On 05/07/2012 01:58 PM, Raghavendra K T wrote:
>>> On 05/07/2012 02:02 PM, Avi Kivity wrote:
>>>> On 05/07/2012 11:29 AM, Ingo Molnar wrote:
>>>>> This is looking pretty good and complete now - any objections
>>>>> from anyone to trying this out in a separate x86 topic tree?
>>>>
>>>> No objections, instead an
>>>>
>>>> Acked-by: Avi Kivity<avi [at] redhat>
>>>>
> [...]
>>>
>>> (Less is better. Below is time elapsed in sec for x86_64_defconfig
>>> (3+3 runs)).
>>>
>>> BASE BASE+patch %improvement
>>> mean (sd) mean (sd)
>>> case 1x: 66.0566 (74.0304) 61.3233 (68.8299) 7.16552
>>> case 2x: 1253.2 (1795.74) 131.606 (137.358) 89.4984
>>> case 3x: 3431.04 (5297.26) 134.964 (149.861) 96.0664
>>>
>>
>> You're calculating the improvement incorrectly. In the last case, it's
>> not 96%, rather it's 2400% (25x). Similarly the second case is about
>> 900% faster.
>>
>
> You are right,
> my %improvement was intended to be like
> if
> 1) base takes 100 sec ==> patch takes 93 sec
> 2) base takes 100 sec ==> patch takes 11 sec
> 3) base takes 100 sec ==> patch takes 4 sec
>
> The above is more confusing (and incorrect!).
>
> Better is what you told which boils to 10x and 25x improvement in case
> 2 and case 3. And IMO, this *really* gives the feeling of magnitude of
> improvement with patches.
>
> I ll change script to report that way :).
>

btw, this is on non-PLE hardware, right? What are the numbers for PLE?

--
error compiling committee.c: too many arguments to function



raghavendra.kt at linux

May 7, 2012, 6:38 AM

Post #8 of 34 (798 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On 05/07/2012 06:52 PM, Avi Kivity wrote:
> On 05/07/2012 04:20 PM, Raghavendra K T wrote:
>> On 05/07/2012 05:36 PM, Avi Kivity wrote:
>>> On 05/07/2012 01:58 PM, Raghavendra K T wrote:
>>>> On 05/07/2012 02:02 PM, Avi Kivity wrote:
>>>>> On 05/07/2012 11:29 AM, Ingo Molnar wrote:
>>>>>> This is looking pretty good and complete now - any objections
>>>>>> from anyone to trying this out in a separate x86 topic tree?
>>>>>
>>>>> No objections, instead an
>>>>>
>>>>> Acked-by: Avi Kivity<avi [at] redhat>
>>>>>
>> [...]
>>>>
>>>> (Less is better. Below is time elapsed in sec for x86_64_defconfig
>>>> (3+3 runs)).
>>>>
>>>> BASE BASE+patch %improvement
>>>> mean (sd) mean (sd)
>>>> case 1x: 66.0566 (74.0304) 61.3233 (68.8299) 7.16552
>>>> case 2x: 1253.2 (1795.74) 131.606 (137.358) 89.4984
>>>> case 3x: 3431.04 (5297.26) 134.964 (149.861) 96.0664
>>>>
>>>
>>> You're calculating the improvement incorrectly. In the last case, it's
>>> not 96%, rather it's 2400% (25x). Similarly the second case is about
>>> 900% faster.
>>>
>>
>> You are right,
>> my %improvement was intended to be like
>> if
>> 1) base takes 100 sec ==> patch takes 93 sec
>> 2) base takes 100 sec ==> patch takes 11 sec
>> 3) base takes 100 sec ==> patch takes 4 sec
>>
>> The above is more confusing (and incorrect!).
>>
>> Better is what you told which boils to 10x and 25x improvement in case
>> 2 and case 3. And IMO, this *really* gives the feeling of magnitude of
>> improvement with patches.
>>
>> I ll change script to report that way :).
>>
>
> btw, this is on non-PLE hardware, right? What are the numbers for PLE?
>
Sure.
I'll get hold of a PLE machine and come up with the numbers soon, but
I expect the improvement to be around 1-3%, as it was in the last version.



vatsa at linux

May 7, 2012, 6:46 AM

Post #9 of 34 (797 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

* Raghavendra K T <raghavendra.kt [at] linux> [2012-05-07 19:08:51]:

> I 'll get hold of a PLE mc and come up with the numbers soon. but I
> 'll expect the improvement around 1-3% as it was in last version.

Deferring preemption (when the vcpu is holding a lock) may give us better
than 1-3% results on PLE hardware. Something worth trying, IMHO.

- vatsa



avi at redhat

May 7, 2012, 6:49 AM

Post #10 of 34 (798 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On 05/07/2012 04:46 PM, Srivatsa Vaddagiri wrote:
> * Raghavendra K T <raghavendra.kt [at] linux> [2012-05-07 19:08:51]:
>
> > I 'll get hold of a PLE mc and come up with the numbers soon. but I
> > 'll expect the improvement around 1-3% as it was in last version.
>
> Deferring preemption (when vcpu is holding lock) may give us better than 1-3%
> results on PLE hardware. Something worth trying IMHO.

Is the improvement so low because PLE is interfering with the patch, or
because PLE already does a good job?

--
error compiling committee.c: too many arguments to function



raghavendra.kt at linux

May 7, 2012, 6:53 AM

Post #11 of 34 (799 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On 05/07/2012 07:19 PM, Avi Kivity wrote:
> On 05/07/2012 04:46 PM, Srivatsa Vaddagiri wrote:
>> * Raghavendra K T<raghavendra.kt [at] linux> [2012-05-07 19:08:51]:
>>
>>> I 'll get hold of a PLE mc and come up with the numbers soon. but I
>>> 'll expect the improvement around 1-3% as it was in last version.
>>
>> Deferring preemption (when vcpu is holding lock) may give us better than 1-3%
>> results on PLE hardware. Something worth trying IMHO.
>
> Is the improvement so low, because PLE is interfering with the patch, or
> because PLE already does a good job?
>

It is because PLE already does a good job (of not burning cpu). The
1-3% improvement is because the patchset at least knows who is next to
hold the lock, which is lacking in PLE.
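As a toy illustration of that difference (purely a sketch, not the kernel implementation): a ticketlock's unlock path knows the exact next ticket number, so a paravirt kick can target one specific vcpu, whereas PLE can only deschedule the spinner and hope the scheduler picks the right one.

```python
# Hypothetical toy model of a pv ticketlock's "directed kick": the unlock
# side knows precisely which ticket runs next, hence which vcpu to wake.

class ToyPvTicketLock:
    def __init__(self):
        self.head = 0      # ticket currently allowed to hold the lock
        self.tail = 0      # next ticket number to hand out
        self.waiting = {}  # ticket -> vcpu id "halted" on that ticket

    def lock(self, vcpu):
        ticket = self.tail
        self.tail += 1
        if ticket != self.head:
            # Real code would spin a while, then halt the vcpu.
            self.waiting[ticket] = vcpu
        return ticket

    def unlock(self):
        self.head += 1
        # Directed kick: wake exactly the vcpu holding the next ticket.
        return self.waiting.pop(self.head, None)

lk = ToyPvTicketLock()
lk.lock("vcpu0")            # ticket 0 == head, runs immediately
lk.lock("vcpu1")            # ticket 1, must wait
print(lk.unlock())          # wakes "vcpu1", no guessing involved
```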



vatsa at linux

May 7, 2012, 6:55 AM

Post #12 of 34 (797 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

* Avi Kivity <avi [at] redhat> [2012-05-07 16:49:25]:

> > Deferring preemption (when vcpu is holding lock) may give us better than 1-3%
> > results on PLE hardware. Something worth trying IMHO.
>
> Is the improvement so low, because PLE is interfering with the patch, or
> because PLE already does a good job?

I think it's the latter (PLE already doing a good job).

- vatsa



raghavendra.kt at linux

May 7, 2012, 6:56 AM

Post #13 of 34 (796 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On 05/07/2012 07:16 PM, Srivatsa Vaddagiri wrote:
> * Raghavendra K T<raghavendra.kt [at] linux> [2012-05-07 19:08:51]:
>
>> I 'll get hold of a PLE mc and come up with the numbers soon. but I
>> 'll expect the improvement around 1-3% as it was in last version.
>
> Deferring preemption (when vcpu is holding lock) may give us better than 1-3%
> results on PLE hardware. Something worth trying IMHO.
>

Yes, sure. I'll take this up, along with any further scalability
improvements possible.



avi at redhat

May 7, 2012, 6:58 AM

Post #14 of 34 (801 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On 05/07/2012 04:53 PM, Raghavendra K T wrote:
>> Is the improvement so low, because PLE is interfering with the patch, or
>> because PLE already does a good job?
>>
>
>
> It is because PLE already does a good job (of not burning cpu). The
> 1-3% improvement is because, patchset knows atleast who is next to hold
> lock, which is lacking in PLE.
>

Not good. Solving a problem in software that is already solved by
hardware? It's okay if there are no costs involved, but here we're
introducing a new ABI that we'll have to maintain for a long time.

--
error compiling committee.c: too many arguments to function



raghavendra.kt at linux

May 7, 2012, 7:47 AM

Post #15 of 34 (800 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On 05/07/2012 07:28 PM, Avi Kivity wrote:
> On 05/07/2012 04:53 PM, Raghavendra K T wrote:
>>> Is the improvement so low, because PLE is interfering with the patch, or
>>> because PLE already does a good job?
>>>
>>
>>
>> It is because PLE already does a good job (of not burning cpu). The
>> 1-3% improvement is because, patchset knows atleast who is next to hold
>> lock, which is lacking in PLE.
>>
>
> Not good. Solving a problem in software that is already solved by
> hardware? It's okay if there are no costs involved, but here we're
> introducing a new ABI that we'll have to maintain for a long time.
>

Hmm, I agree that being a step ahead of mighty hardware (and just an
improvement of 1-3%) is no good for the long term (where PLE is the
future).

Having said that, it is hard for me to resist saying:
the bottleneck is somewhere else on PLE machines, and IMHO the answer
would be a combination of paravirt-spinlock + pv-flush-tlb.

But I need to come up with good numbers to argue in favour of the claim.

PS: Nikunj had found experimentally that pv-flush-tlb + paravirt-spinlock
is a win on PLE, where neither of them alone could prove the benefit.



avi at redhat

May 7, 2012, 7:52 AM

Post #16 of 34 (798 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On 05/07/2012 05:47 PM, Raghavendra K T wrote:
>> Not good. Solving a problem in software that is already solved by
>> hardware? It's okay if there are no costs involved, but here we're
>> introducing a new ABI that we'll have to maintain for a long time.
>>
>
>
> Hmm agree that being a step ahead of mighty hardware (and just an
> improvement of 1-3%) is no good for long term (where PLE is future).
>

PLE is the present, not the future. It was introduced on later Nehalems
and is present on all Westmeres. Two more processor generations have
passed meanwhile. The AMD equivalent was also introduced around that
timeframe.

> Having said that, it is hard for me to resist saying :
> bottleneck is somewhere else on PLE m/c and IMHO answer would be
> combination of paravirt-spinlock + pv-flush-tb.
>
> But I need to come up with good number to argue in favour of the claim.
>
> PS: Nikunj had experimented that pv-flush tlb + paravirt-spinlock is a
> win on PLE where only one of them alone could not prove the benefit.
>

I'd like to see those numbers, then.

Ingo, please hold on the kvm-specific patches, meanwhile.

--
error compiling committee.c: too many arguments to function



avi at redhat

May 7, 2012, 7:54 AM

Post #17 of 34 (799 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On 05/07/2012 05:52 PM, Avi Kivity wrote:
> > Having said that, it is hard for me to resist saying :
> > bottleneck is somewhere else on PLE m/c and IMHO answer would be
> > combination of paravirt-spinlock + pv-flush-tb.
> >
> > But I need to come up with good number to argue in favour of the claim.
> >
> > PS: Nikunj had experimented that pv-flush tlb + paravirt-spinlock is a
> > win on PLE where only one of them alone could not prove the benefit.
> >
>
> I'd like to see those numbers, then.
>

Note: it's probably best to try very wide guests, where the overhead of
iterating on all vcpus begins to show.

--
error compiling committee.c: too many arguments to function



mingo at kernel

May 7, 2012, 10:25 AM

Post #18 of 34 (782 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

* Avi Kivity <avi [at] redhat> wrote:

> > PS: Nikunj had experimented that pv-flush tlb +
> > paravirt-spinlock is a win on PLE where only one of them
> > alone could not prove the benefit.
>
> I'd like to see those numbers, then.
>
> Ingo, please hold on the kvm-specific patches, meanwhile.

I'll hold off on the whole thing - frankly, we don't want this
kind of Xen-only complexity. If KVM can make use of PLE then Xen
ought to be able to do it as well.

If both Xen and KVM make good use of it then that's a different
matter.

Thanks,

Ingo


tglx at linutronix

May 7, 2012, 1:42 PM

Post #19 of 34 (783 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On Mon, 7 May 2012, Ingo Molnar wrote:
> * Avi Kivity <avi [at] redhat> wrote:
>
> > > PS: Nikunj had experimented that pv-flush tlb +
> > > paravirt-spinlock is a win on PLE where only one of them
> > > alone could not prove the benefit.
> >
> > I'd like to see those numbers, then.
> >
> > Ingo, please hold on the kvm-specific patches, meanwhile.
>
> I'll hold off on the whole thing - frankly, we don't want this
> kind of Xen-only complexity. If KVM can make use of PLE then Xen
> ought to be able to do it as well.
>
> If both Xen and KVM makes good use of it then that's a different
> matter.

Aside from that, it's kinda strange that a dude named "Nikunj" is
referenced in the argument chain, but I can't find him on the CC list.

Thanks,

tglx


jeremy at goop

May 7, 2012, 4:15 PM

Post #20 of 34 (782 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On 05/07/2012 06:49 AM, Avi Kivity wrote:
> On 05/07/2012 04:46 PM, Srivatsa Vaddagiri wrote:
>> * Raghavendra K T <raghavendra.kt [at] linux> [2012-05-07 19:08:51]:
>>
>>> I 'll get hold of a PLE mc and come up with the numbers soon. but I
>>> 'll expect the improvement around 1-3% as it was in last version.
>> Deferring preemption (when vcpu is holding lock) may give us better than 1-3%
>> results on PLE hardware. Something worth trying IMHO.
> Is the improvement so low, because PLE is interfering with the patch, or
> because PLE already does a good job?

How does PLE help with ticket scheduling on unlock? I thought it would
just help with the actual spin loops.

J


raghavendra.kt at linux

May 7, 2012, 6:13 PM

Post #21 of 34 (778 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On 05/08/2012 04:45 AM, Jeremy Fitzhardinge wrote:
> On 05/07/2012 06:49 AM, Avi Kivity wrote:
>> On 05/07/2012 04:46 PM, Srivatsa Vaddagiri wrote:
>>> * Raghavendra K T<raghavendra.kt [at] linux> [2012-05-07 19:08:51]:
>>>
>>>> I 'll get hold of a PLE mc and come up with the numbers soon. but I
>>>> 'll expect the improvement around 1-3% as it was in last version.
>>> Deferring preemption (when vcpu is holding lock) may give us better than 1-3%
>>> results on PLE hardware. Something worth trying IMHO.
>> Is the improvement so low, because PLE is interfering with the patch, or
>> because PLE already does a good job?
>
> How does PLE help with ticket scheduling on unlock? I thought it would
> just help with the actual spin loops.

Hmm, this strikes a chord. I think I should replace the while-1 hog
with some *real job* to measure the over-commit case. I hope to see
greater improvements because of the fairness and scheduling of the
patch set.

Maybe all along I was measuring something equivalent to the 1x case.



raghavendra.kt at linux

May 7, 2012, 10:25 PM

Post #22 of 34 (774 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On 05/07/2012 08:22 PM, Avi Kivity wrote:
> On 05/07/2012 05:47 PM, Raghavendra K T wrote:
>>> Not good. Solving a problem in software that is already solved by
>>> hardware? It's okay if there are no costs involved, but here we're
>>> introducing a new ABI that we'll have to maintain for a long time.
>>>
>>
>>
>> Hmm agree that being a step ahead of mighty hardware (and just an
>> improvement of 1-3%) is no good for long term (where PLE is future).
>>
>
> PLE is the present, not the future. It was introduced on later Nehalems
> and is present on all Westmeres. Two more processor generations have
> passed meanwhile. The AMD equivalent was also introduced around that
> timeframe.
>
>> Having said that, it is hard for me to resist saying :
>> bottleneck is somewhere else on PLE m/c and IMHO answer would be
>> combination of paravirt-spinlock + pv-flush-tb.
>>
>> But I need to come up with good number to argue in favour of the claim.
>>
>> PS: Nikunj had experimented that pv-flush tlb + paravirt-spinlock is a
>> win on PLE where only one of them alone could not prove the benefit.
>>
>
> I'd like to see those numbers, then.
>
> Ingo, please hold on the kvm-specific patches, meanwhile.
>


Hmm, I think I got the facts wrong while saying the improvement on PLE
was 1-3%.

Going by what I had posted in https://lkml.org/lkml/2012/4/5/73 (with
the correct calculation):

        BASE                 BASE+patch           %improvement
        mean (sd)            mean (sd)
1x      70.475  (85.6979)    63.5033 (72.7041)    15.7%
2x      110.971 (132.829)    105.099 (128.738)    5.56%
3x      150.265 (184.766)    138.341 (172.69)     8.62%

It was around 12% with the optimization patch posted separately along
with that. (That one needs more experimentation, though.)

But anyway, I will come up with results for the current patch series.



nikunj at linux

May 7, 2012, 11:46 PM

Post #23 of 34 (775 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On Mon, 7 May 2012 22:42:30 +0200 (CEST), Thomas Gleixner <tglx [at] linutronix> wrote:
> On Mon, 7 May 2012, Ingo Molnar wrote:
> > * Avi Kivity <avi [at] redhat> wrote:
> >
> > > > PS: Nikunj had experimented that pv-flush tlb +
> > > > paravirt-spinlock is a win on PLE where only one of them
> > > > alone could not prove the benefit.
> > >
I do not have PLE numbers yet for pvflush and pvspinlock.

I have seen that on non-PLE, with the pvflush and pvspinlock patches,
kernbench, ebizzy, specjbb, hackbench and dbench all improved.

I am currently chasing a race on the pv-flush path; it is causing
file-system corruption. I will post those numbers along with my v2 post.

> > > I'd like to see those numbers, then.
> > >
> > > Ingo, please hold on the kvm-specific patches, meanwhile.
> >
> > I'll hold off on the whole thing - frankly, we don't want this
> > kind of Xen-only complexity. If KVM can make use of PLE then Xen
> > ought to be able to do it as well.
> >
> > If both Xen and KVM makes good use of it then that's a different
> > matter.
>
> Aside of that, it's kinda strange that a dude named "Nikunj" is
> referenced in the argument chain, but I can't find him on the CC list.
>
/me waves my hand

Regards
Nikunj



avi at redhat

May 8, 2012, 2:08 AM

Post #24 of 34 (777 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On 05/08/2012 02:15 AM, Jeremy Fitzhardinge wrote:
> On 05/07/2012 06:49 AM, Avi Kivity wrote:
> > On 05/07/2012 04:46 PM, Srivatsa Vaddagiri wrote:
> >> * Raghavendra K T <raghavendra.kt [at] linux> [2012-05-07 19:08:51]:
> >>
> >>> I 'll get hold of a PLE mc and come up with the numbers soon. but I
> >>> 'll expect the improvement around 1-3% as it was in last version.
> >> Deferring preemption (when vcpu is holding lock) may give us better than 1-3%
> >> results on PLE hardware. Something worth trying IMHO.
> > Is the improvement so low, because PLE is interfering with the patch, or
> > because PLE already does a good job?
>
> How does PLE help with ticket scheduling on unlock? I thought it would
> just help with the actual spin loops.

PLE yields to a random vcpu, hoping it is the lock holder. This
patchset wakes up the right vcpu. For small vcpu counts the difference
is a few bad wakeups (and even a bad wakeup sometimes works, since it
can put the spinner to sleep for a bit). I expect that large vcpu
counts would show a greater difference.
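The intuition that bad wakeups scale with guest width can be sketched with a toy simulation (an illustrative assumption, not measured data): if PLE's yield lands on one of n preempted vcpus uniformly at random, the number of yields until the right one runs is geometric, so it grows roughly linearly with n, while a directed kick always needs exactly one wakeup.

```python
# Toy model: average number of random yields until the intended vcpu
# (id 0) is picked, versus a directed kick which always takes 1.
import random

def yields_until_correct(n, trials=10_000, seed=0):
    """Mean yields to hit vcpu 0 when yielding uniformly among n vcpus."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        while rng.randrange(n) != 0:
            total += 1          # a "bad" wakeup
        total += 1              # the final, successful yield
    return total / trials

for n in (4, 16, 64):
    # Roughly n yields on average for random, vs 1 for a directed kick.
    print(n, round(yields_until_correct(n), 1), 1)
```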

--
error compiling committee.c: too many arguments to function



raghavendra.kt at linux

May 13, 2012, 10:59 AM

Post #25 of 34 (761 views)
Permalink
Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks [In reply to]

On 05/07/2012 05:36 PM, Avi Kivity wrote:
> On 05/07/2012 01:58 PM, Raghavendra K T wrote:
>> On 05/07/2012 02:02 PM, Avi Kivity wrote:
>>> On 05/07/2012 11:29 AM, Ingo Molnar wrote:
>> (Less is better. Below is time elapsed in sec for x86_64_defconfig
>> (3+3 runs)).
>>
>> BASE BASE+patch %improvement
>> mean (sd) mean (sd)
>> case 1x: 66.0566 (74.0304) 61.3233 (68.8299) 7.16552
>> case 2x: 1253.2 (1795.74) 131.606 (137.358) 89.4984
>> case 3x: 3431.04 (5297.26) 134.964 (149.861) 96.0664
>>
>
> You're calculating the improvement incorrectly. In the last case, it's
> not 96%, rather it's 2400% (25x). Similarly the second case is about
> 900% faster.
>

The speedup calculation is clear.

I think the confusion for me was more because of the types of benchmarks.

I always did

|(patched - base)| * 100 / base

So, for
(1) "less is better" sorts of benchmarks, the improvement calculation
would be

|(patched - base)| * 100 / patched

e.g. for kernbench,

suppose base = 150 sec
patched = 100 sec
improvement = 50% (equivalently, the patched run takes 33% less time
than base)


(2) for "higher is better" sorts of benchmarks, the improvement
calculation would be

|(patched - base)| * 100 / base

e.g. for pgbench / ebizzy...

base = 100 tps (transactions per sec)
patched = 150 tps

improvement = 50% over base (or, normalized by patched, base is 33% lower)


Is this what is generally done? I just wanted to be on the same page
before publishing benchmark results other than kernbench.
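The two formulas described above can be pinned down in a few lines (a minimal sketch of the arithmetic only, using the worked numbers from the message):

```python
# "Less is better" metrics (e.g. kernbench elapsed time):
#   |(patched - base)| * 100 / patched
def less_is_better_improvement(base, patched):
    return abs(patched - base) * 100 / patched

# "Higher is better" metrics (e.g. pgbench tps):
#   |(patched - base)| * 100 / base
def higher_is_better_improvement(base, patched):
    return abs(patched - base) * 100 / base

# kernbench-style example: base = 150 sec, patched = 100 sec
print(less_is_better_improvement(150, 100))    # 50.0  (relative to patched)
print(higher_is_better_improvement(150, 100))  # 33.3... (relative to base)

# pgbench-style example: base = 100 tps, patched = 150 tps
print(higher_is_better_improvement(100, 150))  # 50.0  (gain over base)
```

Note that the two conventions normalize by different quantities, which is exactly why the same run can be quoted as 50% or 33%; reporting a plain speedup factor (base/patched for times, patched/base for throughput) avoids the ambiguity.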

