
Mailing List Archive: Xen: Users

Xen4.0.1 : slow Disk IO on DomU

 

 



erwan.renier at laposte

Mar 16, 2011, 3:31 PM

Post #1 of 7 (2446 views)
Xen4.0.1 : slow Disk IO on DomU

Hi,
When I test the I/O bandwidth, it is much slower on the DomU:

Dom0 read: 180 MB/s, write: 60 MB/s
DomU read: 40 MB/s, write: 6 MB/s

The main storage is a software RAID 5 array of five 7200 rpm SATA2 disks.
I also tested with a physical partition on a single disk (no RAID and no
LVM) and the results are about the same.
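For reference, the kind of commands that reproduce these figures (the exact
tool isn't important; /mnt/test is only an example mount point on the
filesystem being measured):

# hdparm -Tt /dev/md127    (sequential read of the block device)
# dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=1024 conv=fdatasync
    (1 GB sequential write, flushed to disk before the timing stops)

Running the same in Dom0 against the backing device and in the DomU against
the exported xvd* device gives comparable numbers.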

The DomU disks are Dom0 logical volumes, the guests are paravirtualized, and
the filesystem type is ext4.

I already tried the ext4 options barrier=0,data=writeback; they don't
really change anything.
I also tried ext2 and ext3; the results are the same.
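To be explicit, this is the kind of mount entry I mean (the /data mount point
is only an example):

/dev/xvdb1  /data  ext4  defaults,barrier=0,data=writeback  0  2

(The data= journalling mode can't be changed on a live remount, so it needs
an unmount/mount or a reboot to take effect.)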

Is this normal?
If not, what do you think the problem is?
Thanks.


dist: Debian Squeeze
xen: Xen 4.0.1
kernel Dom0 & DomU: 2.6.32-5-xen-amd64
FS: ext4


_______________________________________________
Xen-users mailing list
Xen-users [at] lists
http://lists.xensource.com/xen-users


joost at antarean

Mar 17, 2011, 1:31 AM

Post #2 of 7 (2319 views)
Re: Xen4.0.1 : slow Disk IO on DomU [In reply to]

On Wednesday 16 March 2011 23:31:31 Erwan RENIER wrote:
> Hi,
> When I test the I/O bandwidth, it is much slower on the DomU:
>
> Dom0 read: 180 MB/s, write: 60 MB/s
> DomU read: 40 MB/s, write: 6 MB/s

Just did the same tests on my installation (not yet on Xen4):
Dom0:
# hdparm -Tt /dev/md5
/dev/md5:
Timing cached reads: 6790 MB in 1.99 seconds = 3403.52 MB/sec
Timing buffered disk reads: 1294 MB in 3.00 seconds = 430.94 MB/sec

(md5 = 6-disk RAID-5 software raid)

# hdparm -Tt /dev/vg/domU_sdb1
/dev/vg/domU_sdb1:
Timing cached reads: 6170 MB in 2.00 seconds = 3091.21 MB/sec
Timing buffered disk reads: 1222 MB in 3.00 seconds = 407.24 MB/sec

DomU:
# hdparm -Tt /dev/sdb1
/dev/sdb1:
Timing cached reads: 7504 MB in 1.99 seconds = 3761.93 MB/sec
Timing buffered disk reads: 792 MB in 3.00 seconds = 263.98 MB/sec

Like you, I do see some drop in performance, but not as severe as you are
experiencing.

> The DomU disks are Dom0 logical volumes, the guests are paravirtualized,
> and the filesystem type is ext4.

How do you pass the disks to the domU?
I pass them as such:
disk = ['phy:vg/domU_sda1,sda1,w',
(rest of the partitions removed for clarity)

> I already tried the ext4 options barrier=0,data=writeback; they don't
> really change anything.
> I also tried ext2 and ext3; the results are the same.

To avoid any "issues" with the filesystem, what does "hdparm -Tt <device>" give
you?

> Is this normal?

Some drop, yes; losing 90% of the performance isn't.

> If not what do you think the problem is?

Either you are hitting a bug or it's a configuration issue.
What is the configuration for your domU? And specifically the way you pass the
LVs to the domU.

--
Joost Roeleveld

_______________________________________________
Xen-users mailing list
Xen-users [at] lists
http://lists.xensource.com/xen-users


erwan.renier at laposte

Mar 17, 2011, 10:31 AM

Post #3 of 7 (2322 views)
Re: Xen4.0.1 : slow Disk IO on DomU [In reply to]

On 17/03/2011 09:31, Joost Roeleveld wrote:
> On Wednesday 16 March 2011 23:31:31 Erwan RENIER wrote:
>> Hi,
>> When I test the I/O bandwidth, it is much slower on the DomU:
>>
>> Dom0 read: 180 MB/s, write: 60 MB/s
>> DomU read: 40 MB/s, write: 6 MB/s
> Just did the same tests on my installation (not yet on Xen4):
> Dom0:
> # hdparm -Tt /dev/md5
> /dev/md5:
> Timing cached reads: 6790 MB in 1.99 seconds = 3403.52 MB/sec
> Timing buffered disk reads: 1294 MB in 3.00 seconds = 430.94 MB/sec
>
> (md5 = 6-disk RAID-5 software raid)
>
> # hdparm -Tt /dev/vg/domU_sdb1
> /dev/vg/domU_sdb1:
> Timing cached reads: 6170 MB in 2.00 seconds = 3091.21 MB/sec
> Timing buffered disk reads: 1222 MB in 3.00 seconds = 407.24 MB/sec
>
> DomU:
> # hdparm -Tt /dev/sdb1
> /dev/sdb1:
> Timing cached reads: 7504 MB in 1.99 seconds = 3761.93 MB/sec
> Timing buffered disk reads: 792 MB in 3.00 seconds = 263.98 MB/sec
>
> Like you, I do see some drop in performance, but not as severe as you are
> experiencing.
>
>> The DomU disks are Dom0 logical volumes, the guests are paravirtualized,
>> and the filesystem type is ext4.
> How do you pass the disks to the domU?
> I pass them as such:
> disk = ['phy:vg/domU_sda1,sda1,w',
> (rest of the partitions removed for clarity)
>
My DomU config is like this:
kernel = "vmlinuz-2.6.32-5-xen-amd64"
ramdisk = "initrd.img-2.6.32-5-xen-amd64"
root = "/dev/mapper/pvops-root"
memory = "512"
disk = [ 'phy:vg0/p2p,xvda,w', 'phy:vg0/mmd,xvdb1,w', 'phy:sde3,xvdb2,w' ]
vif = [ 'bridge=eth0' ]
vfb = [ 'type=vnc,vnclisten=0.0.0.0' ]
keymap = 'fr'
serial = 'pty'
vcpus = 2
on_reboot = 'restart'
on_crash = 'restart'

>> I already tried the ext4 options barrier=0,data=writeback; they don't
>> really change anything.
>> I also tried ext2 and ext3; the results are the same.
> To avoid any "issues" with the filesystem, what does "hdparm -Tt <device>"
> give you?
>
Dom0:
/dev/sde (single disk):
Timing cached reads: 6086 MB in 2.00 seconds = 3050.54 MB/sec
Timing buffered disk reads: 270 MB in 3.01 seconds = 89.81 MB/sec
/dev/md127 (RAID 5 of 5 disks):
Timing cached reads: 6708 MB in 1.99 seconds = 3362.95 MB/sec
Timing buffered disk reads: 1092 MB in 3.00 seconds = 363.96 MB/sec

DomU:
/dev/xvda:
Timing cached reads: 5648 MB in 2.00 seconds = 2830.78 MB/sec
Timing buffered disk reads: 292 MB in 3.01 seconds = 97.16 MB/sec
/dev/xvda2:
Timing cached reads: 5542 MB in 2.00 seconds = 2777.66 MB/sec
Timing buffered disk reads: 274 MB in 3.01 seconds = 90.94 MB/sec
/dev/xvdb1:
Timing cached reads: 5526 MB in 2.00 seconds = 2769.20 MB/sec
Timing buffered disk reads: 196 MB in 3.02 seconds = 64.85 MB/sec
/dev/xvdb2:
Timing cached reads: 5334 MB in 2.00 seconds = 2672.47 MB/sec
Timing buffered disk reads: 166 MB in 3.03 seconds = 54.70 MB/sec

>> Is this normal?
> Some drop, yes; losing 90% of the performance isn't.
>
>> If not what do you think the problem is?
> Either you are hitting a bug or it's a configuration issue.
> What is the configuration for your domU? And specifically the way you pass the
> LVs to the domU.
As you can see:
xvda is an LV exported as a whole disk with LVM inside it, so xvda2 is an LV
from a VG in an LV (ext4 => LV => VG => PV => virtual disk => LV => VG
=> PV => RAID 5 => disk).
xvdb1 is an LV exported as a partition (ext4 => virtual partition => LV => VG
=> PV => RAID 5 => disk).
xvdb2 is a physical partition exported as a partition (ext3 => virtual
partition => disk).

Curiously, it seems the more complicated the stack, the better it performs :/
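If it helps, the stacking can be inspected from Dom0 with something like this
(vg0 and the device names are the ones from my config above):

# lvs vg0          (logical volumes backing the DomU disks)
# pvs              (which physical devices the volume group sits on)
# dmsetup table    (how each device-mapper device is assembled)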

Thanks.

_______________________________________________
Xen-users mailing list
Xen-users [at] lists
http://lists.xensource.com/xen-users


joost at antarean

Mar 18, 2011, 1:00 AM

Post #4 of 7 (2309 views)
Re: Xen4.0.1 : slow Disk IO on DomU [In reply to]

On Thursday 17 March 2011 18:31:10 Erwan RENIER wrote:
> On 17/03/2011 09:31, Joost Roeleveld wrote:
> > [snip - benchmark figures and hdparm output quoted in full earlier in the thread]
>
> My DomU config is like this:
> kernel = "vmlinuz-2.6.32-5-xen-amd64"
> ramdisk = "initrd.img-2.6.32-5-xen-amd64"
> root = "/dev/mapper/pvops-root"
> memory = "512"
> disk = [ 'phy:vg0/p2p,xvda,w', 'phy:vg0/mmd,xvdb1,w', 'phy:sde3,xvdb2,w' ]
> vif = [ 'bridge=eth0' ]
> vfb = [ 'type=vnc,vnclisten=0.0.0.0' ]
> keymap = 'fr'
> serial = 'pty'
> vcpus = 2
> on_reboot = 'restart'
> on_crash = 'restart'

Seems OK to me.
Did you pin dom0 to a dedicated CPU core?
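For reference, this is roughly what I mean by pinning (the exact xen.gz path
and core numbers are only placeholders for your setup):

On the Xen line of the Dom0 GRUB entry, give Dom0 one vCPU pinned to core 0:
  /boot/xen-4.0.1.gz dom0_max_vcpus=1 dom0_vcpus_pin

Keep the domUs off that core, e.g. in the domU config:
  cpus = "1-3"

Or pin at runtime with xm:
  xm vcpu-pin Domain-0 0 0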

> > Either you are hitting a bug or it's a configuration issue.
> > What is the configuration for your domU? And specifically the way you
> > pass the LVs to the domU.
>
> As you can see:
> xvda is an LV exported as a whole disk with LVM inside it, so xvda2 is an LV
> from a VG in an LV (ext4 => LV => VG => PV => virtual disk => LV => VG
> => PV => RAID 5 => disk).
> xvdb1 is an LV exported as a partition (ext4 => virtual partition => LV => VG
> => PV => RAID 5 => disk).
> xvdb2 is a physical partition exported as a partition (ext3 => virtual
> partition => disk).
>
> Curiously, it seems the more complicated the stack, the better it performs :/

Yes, it does seem that way. I am wondering if adding more layers increases the
amount of in-memory caching, which then leads to a higher "perceived"
performance.

One other thing: I don't use "xvd*" for the device names, but am still using
"sd*". I wonder if that changes the way things behave internally?

_______________________________________________
Xen-users mailing list
Xen-users [at] lists
http://lists.xensource.com/xen-users


erwan.renier at laposte

Mar 18, 2011, 11:14 AM

Post #5 of 7 (2313 views)
Re: Xen4.0.1 : slow Disk IO on DomU [In reply to]

On 18/03/2011 09:00, Joost Roeleveld wrote:
> On Thursday 17 March 2011 18:31:10 Erwan RENIER wrote:
>> [snip - earlier messages quoted in full above]
> Seems OK to me.
> Did you pin dom0 to a dedicated CPU core?
Nope.
>> [snip]
>> Curiously, it seems the more complicated the stack, the better it performs :/
> Yes, it does seem that way. I am wondering if adding more layers increases the
> amount of in-memory caching, which then leads to a higher "perceived"
> performance.
>
> One other thing: I don't use "xvd*" for the device names, but am still using
> "sd*". I wonder if that changes the way things behave internally?
It doesn't change with sd*.
I noticed that the CPU I/O wait occurs in the DomU; nothing happens in Dom0.

Does someone know a way to debug this, at the kernel level or in the
hypervisor?
By the way, how can I see the hypervisor's activity? I don't think it shows
up in Dom0.
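Would something like this, run on both sides, be the right approach? (iostat
is in the sysstat package, xentop ships with Xen):

# xentop        (per-domain CPU usage in Dom0; the interactive view can also
                 show per-domain virtual block device read/write counters)
# iostat -x 1   (per-device utilisation and wait times, in Dom0 and in the DomU)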


_______________________________________________
Xen-users mailing list
Xen-users [at] lists
http://lists.xensource.com/xen-users


erwan.renier at laposte

Mar 26, 2011, 5:05 AM

Post #6 of 7 (2290 views)
Re: Xen4.0.1 : slow Disk IO on DomU [In reply to]

I've found that my motherboard with the AMD 890GX chipset doesn't support
IOMMU virtualisation ("(XEN) I/O virtualisation disabled").
Can you tell me if yours supports it (xm dmesg | grep 'I/O virtualisation')?
Thanks
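For reference, the exact check and what it prints here (on a board where the
IOMMU is active the same line should read "enabled" instead):

# xm dmesg | grep -i 'I/O virtualisation'
(XEN) I/O virtualisation disabled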

On 18/03/2011 19:14, Erwan RENIER wrote:
> [snip - full thread quoted below the original message]


_______________________________________________
Xen-users mailing list
Xen-users [at] lists
http://lists.xensource.com/xen-users


joost at antarean

Mar 29, 2011, 12:17 AM

Post #7 of 7 (2248 views)
Re: Xen4.0.1 : slow Disk IO on DomU [In reply to]

On Monday 28 March 2011 20:10:40 you wrote:
> On 28/03/2011 11:59, Joost Roeleveld wrote:
> > One thing I do have, though, is dedicating a single core to dom0.
> > This avoids the situation that dom0 has to wait for an available core.
> >
> > The advantage is that dom0 will always have CPU-resources available and
> > this will speed up I/O activities as it is dom0 that is involved in all
> > the disk-access.
>
> I tried with a dedicated CPU, but it doesn't change anything.

Hmm... then I'm at the end of my ideas here, I'm afraid.
When I get round to upgrading to Xen4.x, I'll do a performance test to see if
I get the same. But I'd rather not play around with the production system.

--
Joost

_______________________________________________
Xen-users mailing list
Xen-users [at] lists
http://lists.xensource.com/xen-users
