Mailing List Archive: nsp: foundry

ServerIron XL hard coded ICMP limit

 

 



drew.weaver at thenap | Feb 17, 2011, 8:41 AM | Post #1 of 5
ServerIron XL hard coded ICMP limit

Does anyone know if there is a hard-coded ICMP limit in a ServerIron XL, both for packets directed at the system and for packets passing through it?

I am seeing the strangest issue: ping monitoring of a ServerIron XL, and of anything directly connected to it, is disrupted even though there is no apparent cause on the network.

It is not configured (by me) with any sort of rate limit.

Anyone have any thoughts?

-Drew


drew.weaver at thenap | Feb 19, 2011, 11:12 AM | Post #2 of 5
Re: ServerIron XL hard coded ICMP limit [In reply to]

Howdy again,

I hate replying to my own messages, but I have made progress =)

It seems that the pings fail while the health checks are running.

Are health checks really that resource-intensive?

(especially ones like this):

healthck node-ssl tcp
dest-ip 222.222.222.222
port ssl
protocol ssl
protocol ssl url "GET /test/gif.gif"
protocol ssl use-complete
l7-check

I noticed that with the above configuration, pings to the switch fail quite regularly.

If I add 'interval 30' to the configuration, pings only fail about once every 30 seconds.
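For reference, this is what the check looks like with the longer interval (a sketch based on the block above; the 'interval 30' line is the only change):

```
healthck node-ssl tcp
 dest-ip 222.222.222.222
 port ssl
 protocol ssl
 protocol ssl url "GET /test/gif.gif"
 protocol ssl use-complete
 l7-check
 interval 30
```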

The goal is for them not to fail at all.

Has anyone seen this before, and does anyone know how to fix it?

Thanks,
-Drew




georgeb at gmail | Feb 19, 2011, 9:00 PM | Post #3 of 5
Re: ServerIron XL hard coded ICMP limit [In reply to]

SSL checks are *extremely* expensive, and I have run ServerIrons out of CPU doing them before. I suggested to Brocade many years ago (back when they were Foundry) that they not generate new RSA keys each time for health checks, and instead offer an option to re-use the same key for health checks only; that would greatly reduce the load. I am trying to remember how I remedied the situation, but it has been years. I think I ended up just doing a regular HTTP health check, since that would tell me whether the daemon was up and running on the server. I didn't need to verify that the remote host could generate keys, and the same process listened on both 80 and 443, so I just checked 80 and assumed 443 was working too.

It has been a while, though. Or maybe I just did a TCP check to the port to make sure it was listening; I don't remember. But an option not to generate new keys for each health check would greatly reduce the load of the checks. That suggestion is probably long down the memory hole at this point, though.
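To make the cost difference concrete, here is a minimal Python sketch (my own illustration, not ServerIron code) of the two kinds of probe. The host and port arguments are hypothetical; the point is that the L7-style check forces a full TLS handshake, including a key exchange, on every probe:

```python
# Sketch: an L4 "is the port open?" probe vs. a full TLS handshake probe.
# The TLS version performs a complete handshake (key exchange included)
# on every check, which is what makes frequent SSL health checks so costly.
import socket
import ssl


def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """L4 check: succeeds if the TCP three-way handshake completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def tls_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """L7-style check: additionally negotiates a full TLS session,
    paying for a key exchange on every single probe."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False   # availability check, not identity check
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except OSError:
        return False
```

Checking port 80 and assuming 443 is fine, as described above, amounts to calling only `tcp_check` (or an HTTP GET) and skipping `tls_check` entirely.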

George





drew.weaver at thenap | Feb 20, 2011, 8:56 AM | Post #4 of 5
Re: ServerIron XL hard coded ICMP limit [In reply to]

Hi George,

I am trying to figure out a way to mark all of the services on a real server as failed when only the HTTP health check fails.

I found the command 'hc-track-port' online, but it doesn't work in 7.4.

Do you (or anyone else) know how to do this in version 7.4?
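The behavior being asked for can be modeled in a few lines of Python (a hypothetical sketch of the 'hc-track-port' semantics, not ServerIron code or syntax): one designated port's check result gates every other service on the same real server.

```python
# Hypothetical model of port tracking: if the tracked check (HTTP here)
# fails, every service on that real server is reported down.
from dataclasses import dataclass, field


@dataclass
class RealServer:
    name: str
    # Latest per-port check results, e.g. {"http": True, "ssl": True}
    checks: dict[str, bool] = field(default_factory=dict)
    track_port: str = "http"  # the port all other services follow

    def port_up(self, port: str) -> bool:
        # A failed tracked port marks every port on this server down.
        if not self.checks.get(self.track_port, False):
            return False
        return self.checks.get(port, False)
```

For example, if the HTTP check on a server fails, `port_up("ssl")` reports the SSL service down as well, even though its own check passed.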

thanks,
-Drew




drew.weaver at thenap | Feb 20, 2011, 9:24 AM | Post #5 of 5
Re: ServerIron XL hard coded ICMP limit [In reply to]

I switched the SSL health checks to L4 instead of L7, and so far it looks okay.
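For anyone hitting the same problem, one plausible form of the L4 version is the original check with the SSL-payload lines and 'l7-check' removed, leaving only the TCP connect test (a sketch based on the config posted earlier, not verified against 7.4):

```
healthck node-ssl tcp
 dest-ip 222.222.222.222
 port ssl
```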

thanks,
-Drew


