SCSI vs SATA research
In the hardware meeting I was asked to gather information on SATA vs SCSI
and I wanted to provide what I have found so far. Besides my own
benchmarks, which will hopefully be ready for the next hardware meeting,
this is the information I have found. I'm sending this to the UMCE list as
everyone in the hardware group is on this list, and additionally I think
others will find this information useful.
The Western Digital Raptor series SATA drives support Native Command
Queuing (NCQ). [http://www.seagate.com/products/interface/sata/native.html]
If you aren't familiar with NCQ, it's a feature seen on most SCSI drives. In
simple terms, NCQ allows the drive to dynamically reorder its pending
requests, so that sectors near each other are read together instead of
strictly in the order the requests arrived. Without reordering, a read of a
sector on the outside of a platter followed by a read of a sector on the
inside greatly reduces speed, as the head has to travel across a larger
area. The Seagate site discusses it more in depth.
Basically, it gives us more speed, and more reliability due to less
wear and tear.
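As a rough illustration of the idea (my own toy sketch, not the drive's actual firmware algorithm), NCQ-style reordering can be thought of as a nearest-sector-first scheduler over pending requests:

```python
# Toy sketch of NCQ-style reordering: service the pending request whose
# sector is closest to the current head position, instead of FIFO order.
# Sector numbers and distances are made up for illustration.

def fifo_travel(start, requests):
    """Total head travel when requests are serviced in arrival order."""
    travel, pos = 0, start
    for sector in requests:
        travel += abs(sector - pos)
        pos = sector
    return travel

def ncq_travel(start, requests):
    """Total head travel with nearest-sector-first reordering."""
    pending = list(requests)
    travel, pos = 0, start
    while pending:
        nearest = min(pending, key=lambda s: abs(s - pos))
        pending.remove(nearest)
        travel += abs(nearest - pos)
        pos = nearest
    return travel

requests = [900, 10, 880, 40, 860]   # alternating outer/inner sectors
print(fifo_travel(0, requests))      # 4320 units of head travel
print(ncq_travel(0, requests))       # 900 units
```

With a workload that bounces between the inside and outside of the platter, the reordered schedule covers the same requests with a fraction of the head travel, which is where the speed and wear benefits come from.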
Puget Custom Computers put out an article with their results on which is
faster, SCSI or SATA. [http://www.pugetsystems.com/articles.php?id=19]
To sum up their results, they found that the 10k RPM Raptor SATA drives,
when using NCQ, actually beat out the 15k RPM SCSI drives in most of their
testing. They also found that system speed did not make much of a
difference. In addition, they found that the Raptor SATA drives without
NCQ still perform on par with the 10k RPM SCSI drives.
In addition, Lansoft in New Zealand wrote a small article about
their findings testing SCSI and SATA RAID1.
[http://www.lansoft.co.nz/Research_7.htm] They also found that in their
testing, SATA outperformed SCSI.
In CERN's April 8th meeting, they pointed out some of their findings:
1) They found that there was no difference in reliability between
SCSI and SATA/IDE. The MTBF for both is around 200,000-250,000 hours. They
also found that some SATA drives *ARE NOT RATED FOR 24x7 USE*.
2) They've had bad batches of both SCSI and EIDE drives every 2-3 years.
3) They use high-capacity 7,200 RPM SATA drives rated for 24x7 use.
4) Not totally related, but their choice for RAID and filesystems
is hardware RAID5 with XFS. They found ReiserFS to be too immature for
their needs.
	- They RAID everything.
	- They keep hot spares for their RAIDs, especially in
large disk arrays, as the risk of a second failure during a rebuild
increases with array size.
5) They are not convinced that the TCO (total cost of ownership)
concerns justify the higher cost of SCSI.
	- They buy the disks from the lowest bidder, but use as
much pre-selection of the vendors as allowed, dual-source their
purchasing to minimize the risk of major problems due to systematic
failures, and require a 3-year warranty to encourage initial quality.
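To put CERN's MTBF figure in perspective, here's a back-of-the-envelope estimate (my own arithmetic, not from their slides) of how many failures per year to expect in a large array, which is why they keep hot spares:

```python
# Rough expected-failure estimate from MTBF. Assumes independent drives
# and a constant failure rate, which real drives only approximate.
MTBF_HOURS = 200_000          # low end of CERN's 200-250k hour range
HOURS_PER_YEAR = 24 * 365     # 8760

def expected_failures(n_drives, years=1, mtbf=MTBF_HOURS):
    """Expected drive failures across the array over the given period."""
    return n_drives * years * HOURS_PER_YEAR / mtbf

print(round(expected_failures(100), 1))  # ~4.4 failures/year for 100 drives
```

Even with a 200,000-hour MTBF per drive, a 100-drive farm should expect a failure roughly every few months, so spares and rebuild behavior matter as much as the per-drive number.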
Reviews by Tom's Hardware have shown that when speed and disk I/O on a
RAID truly matter, the Fulcrum Architecture by RAIDCore (part of
Broadcom) is the best way to go. They have a unique setup of spanning
across up to 4 PCI-X slots, with 8 drives per card, for a max of 32
drives. In their benchmarks, they saw up to 1.1GB/s in RAID 0 and
1GB/s in RAID 50.
In addition, their RAID 0 test generated about 2,000 I/O operations
per second during the file server, database, and workstation tests. For
their web server benchmark, the RAID 0 and RAID 50 performance was
close, as reads are more common there.
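For scale, a quick sanity check on those numbers (my arithmetic, using nominal PCI-X figures, not anything from the review):

```python
# Sanity check on the RAIDCore figures. Bus numbers are nominal
# PCI-X specs; the per-drive figure is derived, not measured.
PCIX_133_MB_S = 64 // 8 * 133  # 64-bit bus at 133 MHz: 1064 MB/s per slot
DRIVES = 32                    # 4 cards x 8 drives
RAID0_MB_S = 1100              # ~1.1 GB/s reported in RAID 0

print(PCIX_133_MB_S)               # 1064 -> one slot tops out near 1 GB/s
print(round(RAID0_MB_S / DRIVES))  # ~34 MB/s sustained per drive
```

A single PCI-X slot caps out near the reported aggregate, which is presumably why spanning the array across multiple slots is what lets the throughput scale past one bus.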
So, those are the main things so far that I think matter. I've provided
all the websites where this information came from so you can look at
it yourself. In addition, I will have benchmarks at the next hardware
meeting to show the comparison between our systems.
I know it's a lot of information, but I found all of it very good
to know.