Gossamer Forum

Re: [Alex] Large lists and speed

But we are talking about a niche application here: an outgoing list mailer. None of the messages is irreplaceable; the only "cost" is that some people might get a duplicate mailing. This seems to be a niche the RFC isn't directly addressing.
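
Just to make that concrete (a made-up sketch, not anything from Gossamer List): an outgoing-only mailer can checkpoint its position every so often, so a crash only means the last batch might go out twice -- duplicates, but nothing lost that can't be regenerated from the list itself.

    # Hypothetical outgoing mailer: checkpoint every BATCH recipients.
    # A crash between checkpoints re-sends at most the last batch.
    import os

    CHECKPOINT = "mailer.pos"   # hypothetical checkpoint file
    BATCH = 100                 # worst-case duplicate window after a crash

    def load_position():
        try:
            with open(CHECKPOINT) as f:
                return int(f.read().strip())
        except (FileNotFoundError, ValueError):
            return 0

    def save_position(pos):
        tmp = CHECKPOINT + ".tmp"
        with open(tmp, "w") as f:
            f.write(str(pos))
            f.flush()
            os.fsync(f.fileno())    # the only forced disk write per batch
        os.replace(tmp, CHECKPOINT)

    def send_all(recipients, send_one):
        start = load_position()
        for i, addr in enumerate(recipients[start:], start):
            send_one(addr)                  # e.g. hand the message to the local MTA
            if (i + 1) % BATCH == 0:
                save_position(i + 1)
        save_position(len(recipients))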

In that case, the risk of losing the queue to a RAM disk crash would be about the same as losing it to a hard-drive crash, which is always possible anyway.

But I don't understand the whole sendmail issue, since I've never needed more than basic, reasonable capacity from any mailer <G>, and our *mass* mailings are in the hundreds or low thousands at best. I understand in general how mail is more or less split apart, then reassembled, queued, and cached, but not the specifics (and my first look at a sendmail config file turned my hair white!).
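
For what it's worth, here's roughly how I picture the queueing step (a toy sketch, nothing like sendmail's actual code -- its real qf/df queue files are far more involved): the envelope and the body go to disk as separate files, each synced before the message counts as accepted, and that syncing is exactly the disk I/O being argued about.

    # Toy queue writer: envelope and body as separate spool files, each fsync'd.
    import os, uuid

    QUEUE_DIR = "mqueue"    # hypothetical spool directory

    def queue_message(sender, recipients, body):
        os.makedirs(QUEUE_DIR, exist_ok=True)
        qid = uuid.uuid4().hex
        files = {
            os.path.join(QUEUE_DIR, "df" + qid): body,
            os.path.join(QUEUE_DIR, "qf" + qid):
                "S=%s\nR=%s\n" % (sender, ",".join(recipients)),
        }
        for path, data in files.items():
            with open(path, "w") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())    # at least two synced writes per queued message
        return qid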

It seems that mail serving might actually be hard on a disk, and solid state devices might pay for themselves in the long run. At what point that performance/savings trade-off kicks in, I don't have a clue :) I've load-balanced server traffic indirectly, but never email. It just seems that at the high volumes of outgoing mail mentioned, the wear and tear on the disks would force replacement more often than their MTBF would suggest. Solid state devices should give 4-5 years of performance without degradation -- longer if properly cooled and climate controlled (but by that point, solid state technology would probably have improved enough to warrant an upgrade anyway).
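
Purely back-of-envelope, with numbers I'm making up, the write volume on the spool disk adds up fast:

    # Made-up figures only, to show the scale of synced queue writes.
    messages_per_day   = 500_000        # hypothetical daily volume
    writes_per_message = 2              # envelope file + body file (as above)
    years_of_service   = 5

    daily_writes = messages_per_day * writes_per_message
    total_writes = daily_writes * 365 * years_of_service
    print("synced writes/day: %d" % daily_writes)
    print("writes over %d years: %d" % (years_of_service, total_writes))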

I realize there are costs here, but I got the impression we weren't looking at a farm of 2 servers, but 3-5 servers, or even more, to handle the load in real time. At an 800% speed (or performance) increase, that's a lot of duplicate hardware a single solid state device can replace. Granted, I'm not sure where between the 200% and 800% figures the actual speedup falls, but I would imagine the more disk I/O that is saved, the higher the gain.
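
Again with made-up dollar figures, and reading the 200% and 800% numbers loosely as roughly 2x and 8x overall throughput, the trade-off works out something like this:

    # If queue I/O is the bottleneck, a box that pushes mail N times faster
    # can stand in for roughly N conventional boxes.
    def savings(speedup, per_server_cost, ssd_cost):
        """Hardware cost displaced by one sped-up box, minus the device cost."""
        saved = (speedup - 1) * per_server_cost   # boxes you no longer need
        return saved - ssd_cost                    # positive => the device pays off

    for speedup in (2, 8):
        print("%dx -> %d" % (speedup, savings(speedup, per_server_cost=3000,
                                              ssd_cost=10000)))

At the low end it doesn't pay; at the high end it clearly does, which is why where the real speedup falls matters so much.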

Would mailing to a list like this sit higher or lower on the spectrum of disk I/O? The higher up, the more cost-effective solid state devices would be.


PUGDOG® Enterprises, Inc.

The best way to contact me is to NOT use Email.
Please leave a PM here.