Sorry, Max, but physics and electrical engineering rule here. The sustained data transfer rate of a drive comes down to the number of bytes of actual data per track (averaged over the drive, since tracks towards the center are shorter and hold less total data).
No current drive can sustain transfers at the cable/interface rate.
SUSTAINED transfer rate (in Mbit/s) = (RPM / 60) * average data per track (in Mbit)
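In code, the formula above works out like this (the track-capacity and RPM numbers are made up for illustration, not real drive specs):

```python
# Sustained-transfer-rate estimate from the formula above:
# revolutions per second times user data read per revolution.
# All figures are illustrative, not actual drive specifications.

def sustained_rate_mbit_s(rpm: float, avg_track_mbit: float) -> float:
    """Sustained rate in Mbit/s for a drive spinning at `rpm` that holds
    `avg_track_mbit` megabits of user data on an average track."""
    return (rpm / 60.0) * avg_track_mbit

# Example: a 7200 RPM drive averaging 8 Mbit of user data per track
# does 120 revolutions/s * 8 Mbit = 960 Mbit/s sustained.
print(sustained_rate_mbit_s(7200, 8.0))  # → 960.0
```

Note that head repositioning between tracks eats into this figure, so a real drive lands somewhat below the formula's value.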
Sustained rate is therefore limited mainly by RPM and bit density, plus the track-to-track stepping speed (including settling), and that is why 4200 RPM laptop drives do not excel at video, etc.
Random-access performance additionally depends on buffer size, the caching algorithm, head mass (which governs how fast the head can be moved between tracks), track-to-track delay, track-spanning speed, and settling time (how long it takes the head to settle into position, taking deceleration etc. into account).
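A toy model of a single random access, putting the seek, settle, and rotational components together (all the timing figures are hypothetical, chosen just to show the arithmetic):

```python
# Toy model: one random access = seek + settle + average rotational latency.
# Average rotational latency is half a revolution, since on average the
# wanted sector is half a spin away when the head arrives.
# All timing figures are hypothetical examples.

def random_access_ms(seek_ms: float, settle_ms: float, rpm: float) -> float:
    half_rev_ms = (60.0 / rpm) * 1000.0 / 2.0  # half a revolution, in ms
    return seek_ms + settle_ms + half_rev_ms

# At 15,000 RPM the rotational term is only 2 ms; at 7200 RPM it is
# about 4.17 ms -- one reason high-RPM drives win at random I/O.
print(random_access_ms(3.5, 0.5, 15000))
```

This is why RPM matters for random workloads too, not just for the sustained formula above: the rotational term shrinks in direct proportion to spindle speed.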
SCSI drives have, since day 1, had built-in microprocessors that execute a high-level data transfer language, which takes that load off the CPU.
In fact, multiple SCSI drives can make transfers between themselves unattended once the commands are given. So multi-process situations are one place where SCSI drives destroy the competition. This helps in multi-access/multi-user DATABASES, web servers (!!!), etc.
Throughput can be increased by raising RPM, raising bit density, reducing the error-correction data needed, and cutting CPU processing time.
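To see how those levers interact, here is a toy calculation extending the sustained-rate formula with an error-correction overhead factor (every number here is invented for illustration):

```python
# Toy comparison of the drive-side levers: RPM and per-track user data
# (raw bit density minus the fraction lost to error-correction data).
# All figures are hypothetical, not real drive specs.

def sustained_mbit_s(rpm: float, raw_track_mbit: float,
                     ecc_fraction: float) -> float:
    """Sustained rate in Mbit/s after discounting ECC overhead."""
    user_mbit = raw_track_mbit * (1.0 - ecc_fraction)
    return (rpm / 60.0) * user_mbit

laptop = sustained_mbit_s(4200, 6.0, 0.10)   # slow spindle, 10% ECC
server = sustained_mbit_s(15000, 6.0, 0.10)  # same platter data, 15k RPM
print(laptop, server)
```

Same platters, same ECC overhead: the 15,000 RPM drive sustains roughly 3.5x the rate of the 4200 RPM one, purely from spindle speed.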
And SCSI320 uses 64-bit controllers with quite a bit of smarts, and is MUCH, MUCH faster than SATA for web servers and all mid-to-heavy-use applications.
Present SCSI drives are very fast at track-to-track stepping as well as random access, and their specs beat ALL presently made SATA drives, and that's not even counting the advantage of spinning at 15,000 RPM.
In fact, they're 4+ times as fast in web servers under load.