But we are talking about a niche application here: an outgoing list mailer. None of the messages is irreplaceable; the only "cost" is that some people might get a duplicate mailing. It would seem this is a "niche" the RFC isn't directly looking at.
In that case, the risk of a RAM disk crash would be about the same as a hard-drive crash, which is always possible.
But I don't understand the whole sendmail issue, since I've never needed more than basic/reasonable capacity from any mailer <G>, and our *mass* mailings are in the hundreds or low thousands at best. I understand in general how mail is stripped apart, then reassembled, queued, and cached, but not the specifics (and my first look at a sendmail config file turned my hair white!).
It seems that mail serving might actually be hard on a disk, and solid-state devices might pay for themselves in the long run. At what point that performance/savings trade-off kicks in, I don't have a clue :) I've load-balanced server loads indirectly, but not email. It just seems that at the high volumes of outgoing mail mentioned, the wear and tear on the disks would require replacement more often than their MTBF would suggest. Solid-state devices should give 4-5 years of performance without degradation -- longer if properly cooled and climate-controlled (but by that point, the solid-state technology would probably have improved enough to warrant an upgrade anyway).
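Just to put the MTBF point in rough numbers: an MTBF figure is rated under some assumed duty cycle, and a queue-hammering mail server runs the spindle much harder than that. A crude back-of-envelope derating (all the numbers here are made up for illustration -- the rated hours, the assumed duty cycles, and the linear derating model itself are my assumptions, not anyone's spec):

```python
# Crude derating sketch -- every figure below is hypothetical.
rated_mtbf_hours = 300_000  # vendor MTBF, assumed rated at light duty
rated_duty = 0.2            # assumed duty cycle behind that rating
mail_queue_duty = 1.0       # a busy outgoing queue seeking constantly

# Naive linear derating: scale rated hours by the duty-cycle ratio.
effective_hours = rated_mtbf_hours * (rated_duty / mail_queue_duty)
effective_years = effective_hours / (24 * 365)

print(f"effective life under constant load: ~{effective_years:.1f} years")
```

With these made-up inputs the "effective" life comes out well under the rated figure, which is all the original point amounts to: a drive that's seeking 24/7 won't see anywhere near its brochure MTBF.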
I realize there are costs here, but I got the impression we weren't looking at farming out 2 servers, but 3-5 servers, or even more, to handle the load in real time. At an 800% speed (or performance) increase, that's a lot of duplicate hardware a solid-state device can replace. Granted, I'm not sure where the 200% vs. 800% speed increase occurs, but I would imagine the more disk I/O that is saved, the higher the performance.
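The "duplicate hardware" argument is just division, but it's worth writing down. Reading the quoted "800%" loosely as an 8x throughput multiple (and the per-server rate and target volume below are purely hypothetical numbers I picked for illustration):

```python
import math

def servers_needed(target_msgs_per_hour: int, per_server_rate: float) -> int:
    """How many servers it takes to hit a target delivery rate."""
    return math.ceil(target_msgs_per_hour / per_server_rate)

# Hypothetical figures for illustration only.
baseline_rate = 50_000   # msgs/hour a disk-bound server might manage
speedup = 8.0            # the "800%" figure, taken loosely as an 8x multiple
target = 400_000         # msgs/hour the farm has to push out

conventional = servers_needed(target, baseline_rate)            # disk-bound farm
solid_state = servers_needed(target, baseline_rate * speedup)   # solid-state box

print(f"{conventional} conventional servers vs. {solid_state} solid-state")
```

With those inputs, one solid-state-backed box does the work of eight disk-bound ones -- which is the whole consolidation argument: the hardware savings scale directly with however much of the 200%-800% range the workload actually sees.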
Would mailing to a list like this fall higher or lower on the spectrum of disk I/O? The higher up, the more cost-effective solid-state devices would be.
PUGDOG® Enterprises, Inc.
The best way to contact me is to NOT use Email.
Please leave a PM here.