
Mailing List Archive: Linux: Kernel

[PATCH v2 4/4] Enabling Access bit when doing memory swapping

 

 



xudong.hao at intel

May 20, 2012, 8:54 PM

[PATCH v2 4/4] Enabling Access bit when doing memory swapping

Enabling Access bit when doing memory swapping.

Signed-off-by: Haitao Shan <haitao.shan [at] intel>
Signed-off-by: Xudong Hao <xudong.hao [at] intel>
---
arch/x86/kvm/mmu.c | 13 +++++++------
arch/x86/kvm/vmx.c | 6 ++++--
2 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 07424cf..392bdf3 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1232,7 +1232,8 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
 	int young = 0;
 
 	/*
-	 * Emulate the accessed bit for EPT, by checking if this page has
+	 * When EPT Access and Dirty bits are not supported, emulate the
+	 * accessed bit for EPT by checking if this page has
 	 * an EPT mapping, and clearing it if it does. On the next access,
 	 * a new EPT mapping will be established.
 	 * This has some overhead, but not as much as the cost of swapping
@@ -1243,11 +1244,11 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
 
 	for (sptep = rmap_get_first(*rmapp, &iter); sptep;
 	     sptep = rmap_get_next(&iter)) {
-		BUG_ON(!(*sptep & PT_PRESENT_MASK));
+		BUG_ON(!is_shadow_present_pte(*sptep));
 
-		if (*sptep & PT_ACCESSED_MASK) {
+		if (*sptep & shadow_accessed_mask) {
 			young = 1;
-			clear_bit(PT_ACCESSED_SHIFT, (unsigned long *)sptep);
+			*sptep &= ~shadow_accessed_mask;
 		}
 	}

@@ -1271,9 +1272,9 @@ static int kvm_test_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
 
 	for (sptep = rmap_get_first(*rmapp, &iter); sptep;
 	     sptep = rmap_get_next(&iter)) {
-		BUG_ON(!(*sptep & PT_PRESENT_MASK));
+		BUG_ON(!is_shadow_present_pte(*sptep));
 
-		if (*sptep & PT_ACCESSED_MASK) {
+		if (*sptep & shadow_accessed_mask) {
 			young = 1;
 			break;
 		}
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index e8003b6..342ea2e 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -7259,8 +7259,10 @@ static int __init vmx_init(void)
 	vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_EIP, false);
 
 	if (enable_ept) {
-		kvm_mmu_set_mask_ptes(0ull, 0ull, 0ull, 0ull,
-				VMX_EPT_EXECUTABLE_MASK);
+		kvm_mmu_set_mask_ptes(0ull,
+			(enable_ept_ad_bits) ? VMX_EPT_ACCESS_BIT : 0ull,
+			(enable_ept_ad_bits) ? VMX_EPT_DIRTY_BIT : 0ull,
+			0ull, VMX_EPT_EXECUTABLE_MASK);
 		ept_set_mmio_spte_mask();
 		kvm_enable_tdp();
 	} else
--
1.5.6
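
For context on the vmx.c hunk: kvm_mmu_set_mask_ptes() only records the masks
that mmu.c consults later. A sketch of the setter as it reads in kernels of
this vintage (names taken from arch/x86/kvm/mmu.c; illustrative, not part of
the patch):

	/*
	 * Illustrative sketch, not part of the patch: the setter just
	 * stores the masks. Passing VMX_EPT_ACCESS_BIT here makes
	 * shadow_accessed_mask non-zero, so kvm_age_rmapp() tests and
	 * clears the hardware-maintained EPT Accessed bit instead of
	 * zapping the mapping and waiting for the next fault.
	 */
	void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
			u64 dirty_mask, u64 nx_mask, u64 x_mask)
	{
		shadow_user_mask = user_mask;
		shadow_accessed_mask = accessed_mask;
		shadow_dirty_mask = dirty_mask;
		shadow_nx_mask = nx_mask;
		shadow_x_mask = x_mask;
	}

With enable_ept_ad_bits clear, or on hardware without the feature,
shadow_accessed_mask stays zero and the fallback described in the mmu.c
comment above remains the only way to age pages.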



avi at redhat

May 21, 2012, 1:21 AM

Re: [PATCH v2 4/4] Enabling Access bit when doing memory swapping

On 05/21/2012 06:54 AM, Xudong Hao wrote:
> Enabling Access bit when doing memory swapping.
>
> @@ -1243,11 +1244,11 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
>
> 	for (sptep = rmap_get_first(*rmapp, &iter); sptep;
> 	     sptep = rmap_get_next(&iter)) {
> -		BUG_ON(!(*sptep & PT_PRESENT_MASK));
> +		BUG_ON(!is_shadow_present_pte(*sptep));
>
> -		if (*sptep & PT_ACCESSED_MASK) {
> +		if (*sptep & shadow_accessed_mask) {
> 			young = 1;
> -			clear_bit(PT_ACCESSED_SHIFT, (unsigned long *)sptep);
> +			*sptep &= ~shadow_accessed_mask;
> 		}
> 	}

As Marcelo already noted, this converts an atomic operation into a
non-atomic one.
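
For reference, a minimal sketch of how the clear could stay atomic while
still keying off shadow_accessed_mask; it assumes the mask has exactly one
bit set, which holds for both the x86 accessed bit and VMX_EPT_ACCESS_BIT:

	/*
	 * Sketch only: derive the bit index from the mask and use the
	 * atomic clear_bit() rather than the non-atomic
	 * read-modify-write "*sptep &= ~shadow_accessed_mask;".
	 */
	if (*sptep & shadow_accessed_mask) {
		young = 1;
		clear_bit(ffs(shadow_accessed_mask) - 1,
			  (unsigned long *)sptep);
	}

A plain "&=" compiles to a separate load and store, so a concurrent hardware
update to another bit in the same SPTE (for example the processor setting the
EPT Dirty bit) can be lost in between; clear_bit() keeps the read-modify-write
indivisible.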


--
error compiling committee.c: too many arguments to function

