
Mailing List Archive: Linux Virtual Server: Users

[lvs-users] Could LVS/DR cause a bottleneck with media servers?

 

 



roger at rlmedia

Feb 15, 2011, 6:56 PM

Post #1 of 4
[lvs-users] Could LVS/DR cause a bottleneck with media servers?

Hi,

I have a set up using ldirectord in direct routing to 30 media servers.

The director is a Dell R210, 4 cores with 8GB memory, and has two 1Gbps network connections, one public and one private.

Each real server is a Dell R610 16 core with 12GB memory and has the same network connections as above.

All the servers are running CentOS 5.

According to the data center, these servers are connected to pairs of 40Gb switches and there is ample capacity.

What happens is when the connections per real server get to around 1000-1200 concurrent connections, the outgoing bandwidth per server won't go above about 250Mbps, which works out to about 7.5Gbps across all servers. That is when the complaints about stream problems start coming in.

I guess the question is: could the director somehow be limiting the throughput on the real servers, or is the DC not telling the truth about capacity?
The bandwidth going through the NICs on the director is around 50-75Mbps, in on the public NIC and out on the private NIC to the real servers.

Before we started using LVS, we had 10 servers running with round robin DNS and these would easily handle 900Mbps each at the same time.

ldirectord config file:

# Global Directives
checktimeout=10
checkinterval=5
#fallback=127.0.0.1:80
autoreload=yes
callback="/etc/ha.d/syncsettings.sh"
logfile="/var/log/ldirectord.log"
#logfile="local0"
#emailalert="admin [at] x"
#emailalertfreq=3600
#emailalertstatus=all
quiescent=no

virtual=147
real=172.31.214.12 gate 100
real=172.31.214.13 gate 100
real=172.31.214.14 gate 100
real=172.31.214.15 gate 100
real=172.31.214.16 gate 100
real=172.31.214.17 gate 100
real=172.31.214.18 gate 100
real=172.31.214.19 gate 100
real=172.31.214.21 gate 100
real=172.31.214.22 gate 100
real=172.31.214.23 gate 100
real=172.31.214.24 gate 100
real=172.31.214.25 gate 100
real=172.31.214.26 gate 100
real=172.31.214.28 gate 100
real=172.31.214.29 gate 100
real=172.31.214.30 gate 100
real=172.31.214.31 gate 100
real=172.31.214.32 gate 100
real=172.31.214.33 gate 100
real=172.31.214.34 gate 100
real=172.31.214.35 gate 100
real=172.31.214.36 gate 100
real=172.31.214.37 gate 100
real=172.31.214.38 gate 100
real=172.31.214.39 gate 100
real=172.31.214.40 gate 100
real=172.31.214.41 gate 100
real=172.31.214.42 gate 100
scheduler=wlc
protocol=fwm
persistent=60
netmask=255.255.255.255
service=http
checkport=1935
request="/"
receive="Wowza Media Server 2"
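For reference, the checkport/request/receive lines above amount to this: fetch "/" on port 1935 and look for the receive string. A rough sketch of the same check, simulated against a canned response (the banner text below is made up for illustration; the live line is commented at the bottom):

```shell
# Sketch of the ldirectord health check above, run against a canned
# response so it is self-contained. The banner string is invented here;
# only the "Wowza Media Server 2" substring matters to the check.
response='HTTP/1.1 200 OK

Wowza Media Server 2 Perpetual Edition'

if printf '%s\n' "$response" | grep -q "Wowza Media Server 2"; then
  echo "check passed"
else
  echo "check failed"
fi

# Live equivalent against one real server (needs network access):
# curl -s http://172.31.214.12:1935/ | grep -q "Wowza Media Server 2"
```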

iptables

*mangle
:PREROUTING ACCEPT [438:421747]
:INPUT ACCEPT [438:421747]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [95:14749]
:POSTROUTING ACCEPT [122:21354]
-A PREROUTING -d *.*.*.147 -p tcp -m tcp --dport 80 -j MARK --set-mark 0x93
-A PREROUTING -d *.*.*.147 -p tcp -m tcp --dport 443 -j MARK --set-mark 0x93
-A PREROUTING -d *.*.*.147 -p tcp -m tcp --dport 554 -j MARK --set-mark 0x93
-A PREROUTING -d *.*.*.147 -p tcp -m tcp --dport 1935 -j MARK --set-mark 0x93
-A PREROUTING -d *.*.*.147 -p udp -m udp --dport 6970:9999 -j MARK --set-mark 0x93
COMMIT

The main port that is used is 1935.
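In case it helps, this is roughly how I look at the connection spread across the real servers: parse the `->` lines of `ipvsadm -L -n` (column 5 is ActiveConn). The sample output below is made up to match the FWM 147 service; on the director you would pipe the real command in instead:

```shell
# Sketch only: fabricated sample of `ipvsadm -L -n` output for the FWM 147
# service. On each "->" line, field 5 is the ActiveConn count.
sample='FWM  147 wlc persistent 60
  -> 172.31.214.12:0        Route   100    1100       37
  -> 172.31.214.13:0        Route   100    1080       41'

printf '%s\n' "$sample" | awk '
  /->/ { total += $5; n++ }
  END  { printf "real servers: %d, total active: %d, avg: %.0f\n", n, total, total / n }'
```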

Thanks in advance.

Regards,

Roger.
_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/

LinuxVirtualServer.org mailing list - lvs-users [at] LinuxVirtualServer
Send requests to lvs-users-request [at] LinuxVirtualServer
or go to http://lists.graemef.net/mailman/listinfo/lvs-users


malcolm at loadbalancer

Feb 16, 2011, 4:13 AM

Post #2 of 4
Re: [lvs-users] Could LVS/DR cause a bottleneck with media servers?

On 16 February 2011 02:56, Roger Littin <roger [at] rlmedia> wrote:
> [original message quoted in full; snipped]


Roger,

My gut says your switch is getting saturated (not the director).
I wonder if you could get some of the servers to reply via a different
switch (local subnet) but still through the director, i.e.
prove the director can handle more load if the outgoing traffic goes
through a different switch.
Also, CPU load on the director is a pretty good indicator of stress.
We've had customers running a similar kind of load (but every setup is different).
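One caveat on CPU figures: network receive work runs in softirq context, and on kernels of this era it often all lands on a single core, so one core can be flat out while the 4-core average looks idle. A rough way to eyeball it (a sketch, assuming the standard Linux /proc/stat layout where field 8 is cumulative softirq time):

```shell
# Sketch: print the cumulative softirq jiffies for each core from
# /proc/stat (field 1 is the cpu label, field 8 is softirq time).
# Run it twice a few seconds apart: a large delta on one core while the
# others stay flat points at a single-core softirq bottleneck.
awk '/^cpu[0-9]/ { print $1, "softirq_jiffies=" $8 }' /proc/stat
```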


--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



charlie at playlouder

Feb 16, 2011, 4:32 AM

Post #3 of 4
Re: [lvs-users] Could LVS/DR cause a bottleneck with media servers?

On Wed, Feb 16, 2011 at 12:13:05PM +0000, Malcolm Turnbull <malcolm [at] loadbalancer> wrote:
> On 16 February 2011 02:56, Roger Littin <roger [at] rlmedia> wrote:
> > Hi,
> >
> > I have a set up using ldirectord in direct routing to 30 media servers.
> >
> > The director is a dell R210 4 core with 8GB memory and has 2 1Gbps network connections, 1 public & 1 private.
> >
>
> Also CPU load on the director is a pretty good indicator of stress.
> We've had customers doing similar kind of load (but every setup is different).
>

Graphing the interrupts on your ethernet interfaces is a good idea as
well - depending on your method of choice, it may not show up as "CPU
usage".
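To put a number on it without a graphing tool: interrupt counts live in /proc/interrupts, and the delta between two snapshots is the interrupt rate. A sketch with made-up snapshot lines (on the director you would capture the real NIC lines a second apart):

```shell
# Sketch with fabricated /proc/interrupts snapshots: subtract two readings
# of the same IRQ line, taken one second apart, to get interrupts/sec.
# Field 2 is the per-CPU count column for this (invented) single-CPU line.
snap1=' 58:    1000000   PCI-MSI-edge   eth0'
snap2=' 58:    1025000   PCI-MSI-edge   eth0'

c1=$(printf '%s\n' "$snap1" | awk '{print $2}')
c2=$(printf '%s\n' "$snap2" | awk '{print $2}')
echo "eth0: $((c2 - c1)) interrupts in the interval"
```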

C.
--
+442077294797
http://mediaserviceprovider.com/



roger at rlmedia

Feb 17, 2011, 2:46 PM

Post #4 of 4
Re: [lvs-users] Could LVS/DR cause a bottleneck with media servers?

Hi,

I have done some more testing over the last day and I think it could be
something in the LVS setup.

Yesterday, I configured a second director on a separate server using the
same real servers but with a different VIP, and used round robin DNS to split
the incoming traffic between both directors. This is the first time we have
actually seen it go above 7.5Gbps outgoing, and it peaked yesterday at
around 450Mbps per real server.

Early this morning, I only had one director running and we hit 300Mbps per
server, with the video stopping every so often. Once I added the second
director into the DNS, the connections through the first director initially
still gave problems, but the connections through the second director were
good with no pauses. Once about 15% of the connections had dropped off the
first director, the streaming came good on that one as well.

At the moment, both directors are running and the bandwidth running through
each server is just over 500Mbps. ipvsadm is showing about 900 connections
per server for each director (1800 combined).

Looking at the graphs in Hyperic, the CPU load on the directors has not gone
above about 5%.

What I have noticed, though, is that the 5-minute load average goes to about
0.8-1.2 when the director connections start coming on, and the number of
received packets dropped per minute on the director starts going up a lot.
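One place I am going to look for those receive drops (a guess on my part, assuming the usual /proc layout on these 2.6.18 kernels): the second hex column of /proc/net/softnet_stat, which counts packets dropped on each CPU because the input backlog queue (sized by net.core.netdev_max_backlog) was full. A rough decoder:

```shell
# Sketch: decode /proc/net/softnet_stat. One row per CPU; column 1 is
# packets processed, column 2 is packets dropped because the per-CPU
# input backlog overflowed. Both columns are hex counters.
cpu=0
while read -r processed dropped _; do
  printf 'cpu%d processed=%d dropped=%d\n' "$cpu" "0x$processed" "0x$dropped"
  cpu=$((cpu + 1))
done < /proc/net/softnet_stat
```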

Originally, I had the directors set up in an active-passive configuration
using heartbeat for failover. I have also been running ipvs_sync between
them to keep them in sync in case of failover.

I am thinking it may be some sysctl settings that are wrong for the
amount of traffic I am trying to push through the directors. Any help with
these settings would be greatly appreciated. Below are the current sysctl
settings. I have removed what I didn't think was related, but if there
are other settings missing, please let me know.

[root [at] lbe ~]# sysctl -a
net.ipv4.vs.nat_icmp_send = 0
net.ipv4.vs.sync_threshold = 3 50
net.ipv4.vs.expire_quiescent_template = 0
net.ipv4.vs.expire_nodest_conn = 0
net.ipv4.vs.cache_bypass = 0
net.ipv4.vs.secure_tcp = 0
net.ipv4.vs.drop_packet = 0
net.ipv4.vs.drop_entry = 0
net.ipv4.vs.am_droprate = 10
net.ipv4.vs.amemthresh = 1024
net.ipv4.conf.eth1.promote_secondaries = 0
net.ipv4.conf.eth1.force_igmp_version = 0
net.ipv4.conf.eth1.disable_policy = 0
net.ipv4.conf.eth1.disable_xfrm = 0
net.ipv4.conf.eth1.arp_accept = 0
net.ipv4.conf.eth1.arp_ignore = 0
net.ipv4.conf.eth1.arp_announce = 0
net.ipv4.conf.eth1.arp_filter = 0
net.ipv4.conf.eth1.tag = 0
net.ipv4.conf.eth1.log_martians = 0
net.ipv4.conf.eth1.bootp_relay = 0
net.ipv4.conf.eth1.medium_id = 0
net.ipv4.conf.eth1.proxy_arp = 0
net.ipv4.conf.eth1.accept_source_route = 0
net.ipv4.conf.eth1.send_redirects = 1
net.ipv4.conf.eth1.rp_filter = 0
net.ipv4.conf.eth1.shared_media = 1
net.ipv4.conf.eth1.secure_redirects = 1
net.ipv4.conf.eth1.accept_redirects = 1
net.ipv4.conf.eth1.mc_forwarding = 0
net.ipv4.conf.eth1.forwarding = 1
net.ipv4.conf.eth0.promote_secondaries = 0
net.ipv4.conf.eth0.force_igmp_version = 0
net.ipv4.conf.eth0.disable_policy = 0
net.ipv4.conf.eth0.disable_xfrm = 0
net.ipv4.conf.eth0.arp_accept = 0
net.ipv4.conf.eth0.arp_ignore = 0
net.ipv4.conf.eth0.arp_announce = 0
net.ipv4.conf.eth0.arp_filter = 0
net.ipv4.conf.eth0.tag = 0
net.ipv4.conf.eth0.log_martians = 0
net.ipv4.conf.eth0.bootp_relay = 0
net.ipv4.conf.eth0.medium_id = 0
net.ipv4.conf.eth0.proxy_arp = 0
net.ipv4.conf.eth0.accept_source_route = 0
net.ipv4.conf.eth0.send_redirects = 1
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.eth0.shared_media = 1
net.ipv4.conf.eth0.secure_redirects = 1
net.ipv4.conf.eth0.accept_redirects = 1
net.ipv4.conf.eth0.mc_forwarding = 0
net.ipv4.conf.eth0.forwarding = 1

net.ipv4.conf.default.promote_secondaries = 0
net.ipv4.conf.default.force_igmp_version = 0
net.ipv4.conf.default.disable_policy = 0
net.ipv4.conf.default.disable_xfrm = 0
net.ipv4.conf.default.arp_accept = 0
net.ipv4.conf.default.arp_ignore = 0
net.ipv4.conf.default.arp_announce = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.tag = 0
net.ipv4.conf.default.log_martians = 0
net.ipv4.conf.default.bootp_relay = 0
net.ipv4.conf.default.medium_id = 0
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.shared_media = 1
net.ipv4.conf.default.secure_redirects = 1
net.ipv4.conf.default.accept_redirects = 1
net.ipv4.conf.default.mc_forwarding = 0
net.ipv4.conf.default.forwarding = 1
net.ipv4.conf.all.promote_secondaries = 0
net.ipv4.conf.all.force_igmp_version = 0
net.ipv4.conf.all.disable_policy = 0
net.ipv4.conf.all.disable_xfrm = 0
net.ipv4.conf.all.arp_accept = 0
net.ipv4.conf.all.arp_ignore = 0
net.ipv4.conf.all.arp_announce = 0
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.tag = 0
net.ipv4.conf.all.log_martians = 0
net.ipv4.conf.all.bootp_relay = 0
net.ipv4.conf.all.medium_id = 0
net.ipv4.conf.all.proxy_arp = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.send_redirects = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.shared_media = 1
net.ipv4.conf.all.secure_redirects = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.mc_forwarding = 0
net.ipv4.conf.all.forwarding = 1
net.ipv4.neigh.eth1.base_reachable_time_ms = 30000
net.ipv4.neigh.eth1.retrans_time_ms = 1000
net.ipv4.neigh.eth1.locktime = 99
net.ipv4.neigh.eth1.proxy_delay = 79
net.ipv4.neigh.eth1.anycast_delay = 99
net.ipv4.neigh.eth1.proxy_qlen = 64
net.ipv4.neigh.eth1.unres_qlen = 3
net.ipv4.neigh.eth1.gc_stale_time = 60
net.ipv4.neigh.eth1.delay_first_probe_time = 5
net.ipv4.neigh.eth1.base_reachable_time = 30
net.ipv4.neigh.eth1.retrans_time = 99
net.ipv4.neigh.eth1.app_solicit = 0
net.ipv4.neigh.eth1.ucast_solicit = 3
net.ipv4.neigh.eth1.mcast_solicit = 3
net.ipv4.neigh.eth0.base_reachable_time_ms = 30000
net.ipv4.neigh.eth0.retrans_time_ms = 1000
net.ipv4.neigh.eth0.locktime = 99
net.ipv4.neigh.eth0.proxy_delay = 79
net.ipv4.neigh.eth0.anycast_delay = 99
net.ipv4.neigh.eth0.proxy_qlen = 64
net.ipv4.neigh.eth0.unres_qlen = 3
net.ipv4.neigh.eth0.gc_stale_time = 60
net.ipv4.neigh.eth0.delay_first_probe_time = 5
net.ipv4.neigh.eth0.base_reachable_time = 30
net.ipv4.neigh.eth0.retrans_time = 99
net.ipv4.neigh.eth0.app_solicit = 0
net.ipv4.neigh.eth0.ucast_solicit = 3
net.ipv4.neigh.eth0.mcast_solicit = 3

net.ipv4.neigh.default.base_reachable_time_ms = 30000
net.ipv4.neigh.default.retrans_time_ms = 1000
net.ipv4.neigh.default.gc_thresh3 = 1024
net.ipv4.neigh.default.gc_thresh2 = 512
net.ipv4.neigh.default.gc_thresh1 = 128
net.ipv4.neigh.default.gc_interval = 30
net.ipv4.neigh.default.locktime = 99
net.ipv4.neigh.default.proxy_delay = 79
net.ipv4.neigh.default.anycast_delay = 99
net.ipv4.neigh.default.proxy_qlen = 64
net.ipv4.neigh.default.unres_qlen = 3
net.ipv4.neigh.default.gc_stale_time = 60
net.ipv4.neigh.default.delay_first_probe_time = 5
net.ipv4.neigh.default.base_reachable_time = 30
net.ipv4.neigh.default.retrans_time = 99
net.ipv4.neigh.default.app_solicit = 0
net.ipv4.neigh.default.ucast_solicit = 3
net.ipv4.neigh.default.mcast_solicit = 3
net.ipv4.udp_wmem_min = 4096
net.ipv4.udp_rmem_min = 4096
net.ipv4.udp_mem = 772896 1030528 1545792
net.ipv4.cipso_rbm_strictvalid = 1
net.ipv4.cipso_rbm_optfmt = 0
net.ipv4.cipso_cache_bucket_size = 10
net.ipv4.cipso_cache_enable = 1
net.ipv4.tcp_slow_start_after_idle = 1
net.ipv4.tcp_dma_copybreak = 4096
net.ipv4.tcp_workaround_signed_windows = 0
net.ipv4.tcp_base_mss = 512
net.ipv4.tcp_mtu_probing = 0
net.ipv4.tcp_abc = 0
net.ipv4.tcp_congestion_control = bic
net.ipv4.tcp_tso_win_divisor = 3
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_no_metrics_save = 0
net.ipv4.ipfrag_max_dist = 64
net.ipv4.ipfrag_secret_interval = 600
net.ipv4.tcp_low_latency = 0
net.ipv4.tcp_frto = 0
net.ipv4.tcp_tw_reuse = 0
net.ipv4.icmp_ratemask = 6168
net.ipv4.icmp_ratelimit = 1000
net.ipv4.tcp_adv_win_scale = 2
net.ipv4.tcp_app_win = 31
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_mem = 196608 262144 393216
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_ecn = 0
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_fack = 1
net.ipv4.tcp_orphan_retries = 0
net.ipv4.inet_peer_gc_maxtime = 120
net.ipv4.inet_peer_gc_mintime = 10
net.ipv4.inet_peer_maxttl = 600
net.ipv4.inet_peer_minttl = 120
net.ipv4.inet_peer_threshold = 65664
net.ipv4.igmp_max_msf = 10
net.ipv4.igmp_max_memberships = 20
net.ipv4.route.rt_cache_rebuild_count = 4
net.ipv4.route.secret_interval = 600
net.ipv4.route.min_adv_mss = 256
net.ipv4.route.min_pmtu = 552
net.ipv4.route.mtu_expires = 600
net.ipv4.route.gc_elasticity = 8
net.ipv4.route.error_burst = 5000
net.ipv4.route.error_cost = 1000
net.ipv4.route.redirect_silence = 20480
net.ipv4.route.redirect_number = 9
net.ipv4.route.redirect_load = 20
net.ipv4.route.gc_interval = 60
net.ipv4.route.gc_timeout = 300
net.ipv4.route.gc_min_interval_ms = 500
net.ipv4.route.gc_min_interval = 0
net.ipv4.route.max_size = 4194304
net.ipv4.route.gc_thresh = 262144
net.ipv4.route.max_delay = 10
net.ipv4.route.min_delay = 2
net.ipv4.icmp_errors_use_inbound_ifaddr = 0
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_echo_ignore_all = 0
net.ipv4.ip_local_port_range = 32768 61000
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_stdurg = 0
net.ipv4.tcp_abort_on_overflow = 0
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_fin_timeout = 60
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.ipfrag_time = 30
net.ipv4.ip_dynaddr = 0
net.ipv4.ipfrag_low_thresh = 196608
net.ipv4.ipfrag_high_thresh = 262144
net.ipv4.tcp_max_tw_buckets = 180000
net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_synack_retries = 5
net.ipv4.tcp_syn_retries = 5
net.ipv4.ip_nonlocal_bind = 0
net.ipv4.ip_no_pmtu_disc = 0
net.ipv4.ip_default_ttl = 64
net.ipv4.ip_forward = 1
net.ipv4.tcp_retrans_collapse = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.core.netdev_budget = 300
net.core.somaxconn = 128
net.core.xfrm_larval_drop = 0
net.core.xfrm_acq_expires = 30
net.core.xfrm_aevent_rseqth = 2
net.core.xfrm_aevent_etime = 10
net.core.optmem_max = 20480
net.core.message_burst = 10
net.core.message_cost = 5
net.core.netdev_max_backlog = 1000
net.core.dev_weight = 64
net.core.rmem_default = 129024
net.core.wmem_default = 129024
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
vm.vm_devzero_optimized = 1
vm.max_reclaims_in_progress = 0
vm.max_writeback_pages = 1024
vm.flush_mmap_pages = 1
vm.pagecache = 100
vm.min_slab_ratio = 5
vm.min_unmapped_ratio = 1
vm.zone_reclaim_interval = 30
vm.zone_reclaim_mode = 0
vm.swap_token_timeout = 300 0
vm.topdown_allocate_fast = 0
vm.legacy_va_layout = 0
vm.vfs_cache_pressure = 100
vm.block_dump = 0
vm.laptop_mode = 0
vm.max_map_count = 65536
vm.percpu_pagelist_fraction = 0
vm.min_free_kbytes = 11485
vm.drop_caches = 0
vm.lowmem_reserve_ratio = 256 256 32
vm.hugetlb_shm_group = 0
vm.nr_hugepages = 0
vm.swappiness = 60
vm.nr_pdflush_threads = 2
vm.dirty_expire_centisecs = 2999
vm.dirty_writeback_centisecs = 499
vm.mmap_min_addr = 4096
vm.dirty_ratio = 40
vm.dirty_background_ratio = 10
vm.page-cluster = 3
vm.overcommit_ratio = 50
vm.panic_on_oom = 0
vm.overcommit_memory = 0
kernel.vsyscall64 = 1
kernel.blk_iopoll = 1
kernel.max_lock_depth = 1024
kernel.compat-log = 1
kernel.hung_task_warnings = 10
kernel.hung_task_timeout_secs = 120
kernel.hung_task_check_count = 4194304
kernel.hung_task_panic = 0
kernel.softlockup_panic = 0
kernel.softlockup_thresh = 10
kernel.acpi_video_flags = 0
kernel.randomize_va_space = 1
kernel.bootloader_type = 113
kernel.panic_on_unrecovered_nmi = 0
kernel.unknown_nmi_panic = 0
kernel.ngroups_max = 65536
kernel.printk_ratelimit_burst = 10
kernel.printk_ratelimit = 5
kernel.panic_on_oops = 1
kernel.pid_max = 32768
kernel.overflowgid = 65534
kernel.overflowuid = 65534
kernel.pty.nr = 1
kernel.pty.max = 4096
kernel.random.uuid = c2b133d1-ab0b-4c02-abf8-3a3fe18711ad
kernel.random.boot_id = 09cf8e5d-99fa-4a37-8d3a-4c18335b437f
kernel.random.write_wakeup_threshold = 128
kernel.random.read_wakeup_threshold = 64
kernel.random.entropy_avail = 2574
kernel.random.poolsize = 4096
kernel.threads-max = 147456
kernel.cad_pid = 1
kernel.sysrq = 0
kernel.sem = 250 32000 32 128
kernel.msgmnb = 65536
kernel.msgmni = 16
kernel.msgmax = 65536
kernel.shmmni = 4096
kernel.shmall = 4294967296
kernel.shmmax = 68719476736
kernel.acct = 4 2 30
kernel.hotplug =
kernel.modprobe = /sbin/modprobe
kernel.printk = 6 4 1 7
kernel.ctrl-alt-del = 0
kernel.real-root-dev = 0
kernel.cap-bound = -257
kernel.tainted = 0
kernel.core_pattern = core
kernel.core_uses_pid = 1
kernel.print-fatal-signals = 0
kernel.exec-shield = 1
kernel.panic = 0
kernel.domainname = (none)
kernel.hostname = lbe1.istreamlive.net
kernel.version = #1 SMP Wed Jan 5 17:52:25 EST 2011
kernel.osrelease = 2.6.18-194.32.1.el5
kernel.ostype = Linux
kernel.sched_interactive = 2
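One thing I am going to try, given the receive drops and that net.core.netdev_max_backlog is still at the default 1000 in the dump above: raising the input backlog and the default socket buffers. These are untested guesses on my part, not recommended values:

```shell
# Untested starting points, not known-good values: raise the input
# backlog so bursts of inbound packets are queued instead of dropped,
# and bump the default socket buffers. Needs root; persist the same
# keys in /etc/sysctl.conf if they help, then re-check the drop counters.
sysctl -w net.core.netdev_max_backlog=10000
sysctl -w net.core.rmem_default=262144
sysctl -w net.core.wmem_default=262144
```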

Thanks,

Roger.

-----Original Message-----
From: Malcolm Turnbull
Sent: Thursday, February 17, 2011 1:13 AM
To: LinuxVirtualServer.org users mailing list.
Subject: Re: [lvs-users] Could LVS/DR cause a bottleneck with media servers?

> [quoted thread snipped]
