Mailing List Archive: Xen: API

Flood test with xen/openvswitch

 

 



sr at swisscenter

Sep 7, 2011, 4:29 AM

Post #1 of 9
Flood test with xen/openvswitch

Hi,

I just did a test to see how openvswitch handles a flood from a virtual
machine on a Xen host that uses it as the networking layer.

I issued:

vm1# hping3 -S -L 0 -p 80 -i u100 192.168.1.1

The options used are:
-S        set the SYN TCP flag
-L 0      set the TCP acknowledgment number field to 0 (hping3's --setack)
-p 80     destination port
-i u100   wait 100 microseconds between packets (roughly 10,000 packets per second)

This results in CPU usage of up to 97% by the ovs-vswitchd process in dom0.
Letting it run for a few minutes makes the whole Xen host unresponsive over
the network; it must then be accessed from the local console.

Is that expected behavior? I know the test is quite aggressive, but any
customer could issue such a flood and render the whole host unreachable.
Are there workarounds?

Thanks for your help.

Best regards,
Sébastien

Part of the openvswitch logs while issuing the flood:

Sep 07 13:13:26|01523|poll_loop|WARN|wakeup due to [POLLIN] on fd 18
(NETLINK_GENERIC<->NETLINK_GENERIC) at ../lib/netlink-socket.c:668 (85%
CPU usage)
Sep 07 13:13:26|01524|poll_loop|WARN|wakeup due to [POLLIN] on fd 21
(NETLINK_GENERIC<->NETLINK_GENERIC) at ../lib/netlink-socket.c:668 (85%
CPU usage)
Sep 07 13:13:27|01525|poll_loop|WARN|Dropped 5136 log messages in last 1
seconds (most recently, 1 seconds ago) due to excessive rate
Sep 07 13:13:27|01526|poll_loop|WARN|wakeup due to [POLLIN] on fd 18
(NETLINK_GENERIC<->NETLINK_GENERIC) at ../lib/netlink-socket.c:668 (85%
CPU usage)
Sep 07 13:13:27|01527|poll_loop|WARN|wakeup due to [POLLIN] on fd 21
(NETLINK_GENERIC<->NETLINK_GENERIC) at ../lib/netlink-socket.c:668 (85%
CPU usage)
Sep 07 13:13:28|01528|poll_loop|WARN|Dropped 5815 log messages in last 1
seconds (most recently, 1 seconds ago) due to excessive rate
Sep 07 13:13:28|01529|poll_loop|WARN|wakeup due to [POLLIN] on fd 18
(NETLINK_GENERIC<->NETLINK_GENERIC) at ../lib/netlink-socket.c:668 (85%
CPU usage)
Sep 07 13:13:28|01530|poll_loop|WARN|wakeup due to [POLLIN] on fd 21
(NETLINK_GENERIC<->NETLINK_GENERIC) at ../lib/netlink-socket.c:668 (85%
CPU usage)
Sep 07 13:13:29|01531|poll_loop|WARN|Dropped 8214 log messages in last 1
seconds (most recently, 1 seconds ago) due to excessive rate
Sep 07 13:13:29|01532|poll_loop|WARN|wakeup due to [POLLIN] on fd 18
(NETLINK_GENERIC<->NETLINK_GENERIC) at ../lib/netlink-socket.c:668 (82%
CPU usage)
Sep 07 13:13:29|01533|poll_loop|WARN|wakeup due to [POLLIN] on fd 21
(NETLINK_GENERIC<->NETLINK_GENERIC) at ../lib/netlink-socket.c:668 (82%
CPU usage)
Sep 07 13:13:30|01534|poll_loop|WARN|Dropped 5068 log messages in last 1
seconds (most recently, 1 seconds ago) due to excessive rate
Sep 07 13:13:30|01535|poll_loop|WARN|wakeup due to [POLLIN] on fd 18
(NETLINK_GENERIC<->NETLINK_GENERIC) at ../lib/netlink-socket.c:668 (82%
CPU usage)
Sep 07 13:13:30|01536|poll_loop|WARN|wakeup due to [POLLIN] on fd 21
(NETLINK_GENERIC<->NETLINK_GENERIC) at ../lib/netlink-socket.c:668 (82%
CPU usage)
Sep 07 13:13:31|01537|poll_loop|WARN|Dropped 5008 log messages in last 1
seconds (most recently, 1 seconds ago) due to excessive rate
Sep 07 13:13:31|01538|poll_loop|WARN|wakeup due to [POLLIN] on fd 18
(NETLINK_GENERIC<->NETLINK_GENERIC) at ../lib/netlink-socket.c:668 (82%
CPU usage)
Sep 07 13:13:31|01539|poll_loop|WARN|wakeup due to [POLLIN] on fd 21
(NETLINK_GENERIC<->NETLINK_GENERIC) at ../lib/netlink-socket.c:668 (82%
CPU usage)
Sep 07 13:13:32|01540|poll_loop|WARN|Dropped 4841 log messages in last 1
seconds (most recently, 1 seconds ago) due to excessive rate
Sep 07 13:13:32|01541|poll_loop|WARN|wakeup due to 40-ms timeout at
../ofproto/ofproto-dpif.c:622 (83% CPU usage)
Sep 07 13:13:32|01542|poll_loop|WARN|wakeup due to [POLLIN] on fd 18
(NETLINK_GENERIC<->NETLINK_GENERIC) at ../lib/netlink-socket.c:668 (83%
CPU usage)
Sep 07 13:13:33|01543|poll_loop|WARN|Dropped 92 log messages in last 1
seconds (most recently, 1 seconds ago) due to excessive rate
Sep 07 13:13:33|01544|poll_loop|WARN|wakeup due to [POLLIN] on fd 18
(NETLINK_GENERIC<->NETLINK_GENERIC) at ../lib/netlink-socket.c:668 (83%
CPU usage)
Sep 07 13:13:33|01545|poll_loop|WARN|wakeup due to [POLLIN] on fd 21
(NETLINK_GENERIC<->NETLINK_GENERIC) at ../lib/netlink-socket.c:668 (83%
CPU usage)
Sep 07 13:13:34|01546|poll_loop|WARN|Dropped 27 log messages in last 1
seconds (most recently, 1 seconds ago) due to excessive rate
Sep 07 13:13:34|01547|poll_loop|WARN|wakeup due to 53-ms timeout at
../lib/mac-learning.c:294 (83% CPU usage)
Sep 07 13:13:34|01548|poll_loop|WARN|wakeup due to [POLLIN] on fd 18
(NETLINK_GENERIC<->NETLINK_GENERIC) at ../lib/netlink-socket.c:668 (83%
CPU usage)




george.shuklin at gmail

Sep 7, 2011, 6:59 AM

Post #2 of 9
Re: Flood test with xen/openvswitch [In reply to]

A temporary solution: add more active CPUs to dom0.
echo 1 >/sys/devices/system/cpu/cpu1/online
echo 1 >/sys/devices/system/cpu/cpu2/online
echo 1 >/sys/devices/system/cpu/cpu3/online
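To check which dom0 vCPUs are currently online before and after, a quick
sketch (assuming the standard Linux sysfs layout; cpu0 is always online and
usually has no "online" file):

# show the online state of each hot-pluggable CPU
grep . /sys/devices/system/cpu/cpu*/online

# bring CPUs 1-3 online in a loop, equivalent to the three echo commands above
for n in 1 2 3; do echo 1 > /sys/devices/system/cpu/cpu$n/online; done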

On Wed, 07/09/2011 at 13:29 +0200, Sébastien Riccio wrote:
> Hi,
>
> I just did a test to see how openvswitch handles a flood from a virtual
> machine on a Xen host that uses it as the networking layer.
> [...]
> Is that expected behavior? I know the test is quite aggressive, but any
> customer could issue such a flood and render the whole host unreachable.
> Are there workarounds?
>
> [remainder of quoted message and Open vSwitch log trimmed]





amoya at moyasolutions

Sep 7, 2011, 7:43 AM

Post #3 of 9
Re: Flood test with xen/openvswitch [In reply to]

Wouldn't this give us the crashing issue that has been occurring in Xen?

Recently I had to run this command
echo "NR_DOMAIN0_VCPUS=1" > /etc/sysconfig/unplug-vcpus
to stop Xen from crashing; it's been running for 24 hours now.

Moya Solutions, Inc.
amoya [at] moyasolutions
0 | 646-918-5238 x 102
F | 646-390-1806

----- Original Message -----
From: "George Shuklin" <george.shuklin [at] gmail>
To: xen-api [at] lists
Sent: Wednesday, September 7, 2011 9:59:13 AM
Subject: Re: [Xen-API] Flood test with xen/openvswitch

A temporary solution: add more active CPUs to dom0.
[remainder of quoted message and Open vSwitch log trimmed]


george.shuklin at gmail

Sep 7, 2011, 8:04 AM

Post #4 of 9
Re: Flood test with xen/openvswitch [In reply to]

I think this can be a real issue.

We have a bunch of highly loaded hosts with multiple CPUs in dom0 (used to
reduce network latency during load peaks).

So I'm thinking the issue may not be with openvswitch, but with hardware
compatibility with Xen...

On Wed, 07/09/2011 at 10:43 -0400, Andres E. Moya wrote:
> Wouldn't this give us the crashing issue that has been occurring in Xen?
>
> Recently I had to run this command
> echo "NR_DOMAIN0_VCPUS=1" > /etc/sysconfig/unplug-vcpus
> to stop Xen from crashing; it's been running for 24 hours now.
>
> [remainder of quoted message and Open vSwitch log trimmed]




blp at nicira

Sep 7, 2011, 8:15 AM

Post #5 of 9
Re: Flood test with xen/openvswitch [In reply to]

Why did you post separate copies of this message to xen-api and
ovs-discuss, without even mentioning it in either copy? What is
the value of splintering discussion?

Sébastien Riccio <sr-dWg6jWm8wxMIjDr1QQGPvw [at] public> writes:

> I just did a test to see how openvswitch handles a flood from a virtual
> machine on a Xen host that uses it as the networking layer.
> [...]
> This results in CPU usage of up to 97% by the ovs-vswitchd process in
> the dom0. Letting it run for a few minutes makes the whole Xen host
> unresponsive over the network; it must then be accessed from the local
> console.

When you report a bug, it is important to give all the relevant
details. That is especially true for a performance-related bug like
this one.

Please pass along the following information:

  * The Open vSwitch version number (as output by "ovs-vswitchd
    --version").

  * The Git commit number (as output by "git rev-parse HEAD"),
    if you built from a Git snapshot.

  * Any local patches or changes you have applied (if any).

  * The kernel version on which Open vSwitch is running (from
    /proc/version) and the distribution and version number of
    your OS (e.g. "Centos 5.0").

  * The contents of the vswitchd configuration database (usually
    /etc/openvswitch/conf.db).

  * The output of "ovs-dpctl show".

  * If you have Open vSwitch configured to connect to an
    OpenFlow controller, the output of "ovs-ofctl show <bridge>"
    for each <bridge> configured in the vswitchd configuration
    database.
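If it helps, here is a rough shell sketch for gathering most of that in one
pass (assuming the default conf.db location and that ovs-vsctl is available;
adjust paths and bridge names to your installation):

# Open vSwitch and kernel versions
ovs-vswitchd --version
cat /proc/version
# if built from a Git snapshot, also record: git rev-parse HEAD

# datapath state and a copy of the configuration database
ovs-dpctl show
cp /etc/openvswitch/conf.db /tmp/conf.db.for-bug-report

# if an OpenFlow controller is configured, dump each bridge
for br in $(ovs-vsctl list-br); do ovs-ofctl show "$br"; done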




Rob.Hoes at citrix

Sep 7, 2011, 8:39 AM

Post #6 of 9
RE: Flood test with xen/openvswitch [In reply to]

Hi,

If you are worried that a VM overloads the network, you could try to use the QoS settings (on the VIF, or on the vswitch port) to limit the data rate through that VIF.
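For example, a rough sketch of both approaches (the VIF UUID and interface
name below are placeholders, and the rates are only illustrative):

# xapi: cap traffic on a VIF with the ratelimit QoS algorithm (kbps value
# is in kilobytes per second, so 10240 is roughly 10 MB/s)
xe vif-param-set uuid=<vif-uuid> qos_algorithm_type=ratelimit qos_algorithm_params:kbps=10240

# Open vSwitch: police ingress traffic on the vswitch port directly (kbit/s)
ovs-vsctl set interface vif1.0 ingress_policing_rate=10000 ingress_policing_burst=1000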

Cheers,
Rob

> -----Original Message-----
> From: xen-api-bounces [at] lists [mailto:xen-api-
> bounces [at] lists] On Behalf Of Sébastien Riccio
> Sent: 07 September 2011 12:29
> To: xen-api [at] lists
> Subject: [Xen-API] Flood test with xen/openvswitch
>
> [quoted message and Open vSwitch log trimmed]



pasik at iki

Sep 7, 2011, 10:41 AM

Post #7 of 9
Re: Flood test with xen/openvswitch [In reply to]

On Wed, Sep 07, 2011 at 07:04:27PM +0400, George Shuklin wrote:
> I think this can be real issue.
>
> We have a bunch of highloaded hosts with multiple CPUs in dom0 (used to
> reduce network latency in load peaks).
>

With XCP or with Debian+xapi ?


> So I'm thinking issue can be not with openvswitch, but with
> hardware/compability with xen...
>

Upstream Linux 3.0 as dom0 might/will behave differently from the XCP/XenServer Xenlinux dom0 kernel.


-- Pasi


> On Wed, 07/09/2011 at 10:43 -0400, Andres E. Moya wrote:
> [quoted thread and Open vSwitch logs trimmed]



sr at swisscenter

Sep 7, 2011, 11:32 AM

Post #8 of 9
Re: Flood test with xen/openvswitch [In reply to]

On 07.09.2011 17:15, Ben Pfaff wrote:
>
> Why did you post separate copies of this message to xen-api and
> ovs-discuss, without even mentioning it in either copy? What is
> the value of splintering discussion?
>

Hi Ben,

Sorry, that was unintentional. I wasn't sure which was the best place to
post this, so I first sent it to the openvswitch list, then sent it to the
xen list too and forgot to mention it.

For the details about the versions:

root [at] xen-blade1:~# ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 1.2.1+build0
Compiled Sep 6 2011 01:01:15
OpenFlow versions 0x1:0x1

It's the one from the Debian unstable repository.

The kernel is Linux version 3.0.0-scxen-amd64 (root [at] xen-blade1) (gcc
version 4.6.1 (Debian 4.6.1-4)) #2 SMP Fri Aug 5 08:12:00 CEST 2011.
It's the stock 3.0.0 kernel with the Debian .config, recompiled with a VGA
fix patch for Xen, running on Debian unstable.

root [at] xen-blade1:~# ovs-dpctl show
system [at] xapi:
lookups: frags:0, hit:542619, missed:83504, lost:10
port 0: xapi0 (internal)
port 1: eth1
port 2: eth0
port 3: bond0 (internal)
system [at] xapi:
lookups: frags:0, hit:1973235, missed:711045, lost:1416
port 0: xapi1 (internal)
port 1: xapi5 (internal)
port 2: xapi2 (internal)
port 3: xapi4 (internal)
port 4: eth2
port 5: eth3
port 6: bond1 (internal)
port 7: vif1.0
port 26: vif6.0
port 27: vif7.0

The conf.db looks quite huge; I don't know why. I've attached it to this
reply, zipped.
Attachments: conf.db.gz (72.6 KB)


sr at swisscenter

Sep 7, 2011, 11:38 AM

Post #9 of 9
Re: Flood test with xen/openvswitch [In reply to]

On 07.09.2011 17:39, Rob Hoes wrote:
> Hi,
>
> If you are worried that a VM overloads the network, you could try to use the QoS settings (on the VIF, or on the vswitch port) to limit the data rate through that VIF.
>
> Cheers,
> Rob
>

Hi Rob,

I'll have to try this. But I'm not too worried about the network being
flooded; it can handle it. I'm more concerned about the openvswitch process
using 100% CPU, which seems to render the whole host unresponsive to
network access.

Also, I forgot to mention I was trying this on xapi with Debian, so maybe
it's not the case on XCP / XenServer.

Just trying to figure out why this is happening :)

Cheers,
Sébastien

