Sun V490, E3500, V210 and StorEdge 3510, T3 disk benchmarks
Introduction
At My Place Of Work, we have a number of servers providing our services, but the most important of these is the one that runs the Library Management System. This machine is currently over 5 years old, extremely heavily loaded, and due for replacement this year. After some delays due to supply problems with the UltraSparc IV+, our shiny new Sun Fire V490 and StorEdge 3510 disk array arrived. It's part of my job to install and configure these, and as a result of some years' experience with our LMS we know where a lot of the bottlenecks are during day-to-day operation.
For us, these are memory usage and disk write speed. Our LMS maintains a separate keyword index outside of the Oracle database that sits behind it, to allow for fast keyword searching. This keyword index consists of 2 sets of files - the static index, which is generated periodically from the data in Oracle, and the dynamic index, which is added to as more items are catalogued on a daily basis. As more items are catalogued this index gets larger. And larger. And larger - so much so that the host can spend all its time rewriting the dynamic indexes while our users sit there waiting for things to happen. To minimise the impact of this, something with a very fast write speed is required, and this is where the SE3510 comes in.
There are a few ways to configure this disk array, and it's important that we get the best available performance out of it, so I decided to spend a couple of days running some decent benchmarks on it in various configurations, and comparing the same benchmarks against our old server.
Specifications
Sun Fire V490
- 4 x 1.5 GHz UltraSparc IV+ dual core CPUs
- 16 GB RAM
- 2 x 146 GB Fibre Channel disks
Sun StorEdge 3510
- 12 x 73 GB 10,000 rpm FC disks
- Dual RAID controllers with 1 GB cache each
- 2 x 2 Gb Fibre Channel connections to host
Sun Enterprise E3500
- 4 x 400 MHz UltraSparc II CPUs
- 3 GB RAM
- 8 x 18 GB Fibre Channel disks
Sun StorEdge T3
- 9 x 36 GB 10,000 rpm FC disks
- Dual RAID controllers with 256 MB cache each
- 1 x 1 Gb Fibre Channel connection to host
Sun Fire V210
- 2 x 1.34 GHz UltraSparc IIIi CPUs
- 2 GB RAM
- 2 x 73 GB SCSI disks
Tests
The tests were carried out using bonnie++ 1.03a. I've benchmarked each combination of machine and disk with a different number of threads depending on the number of CPUs available: up to 2 for the V210, up to 4 for the E3500 and up to 8 for the V490. I did not run the 8 thread tests on the smaller disks in the V490 as there was insufficient disk space to do so - that's the downside of trying to make sure there's no file caching happening on a machine with 16 GB of RAM. I also ran the 8 thread tests with just 8 GB of files per thread: still enough to defeat the cache, but using the full amount (32 GB per thread - 256 GB in total) would have meant I couldn't run the test on the RAID10 array due to lack of space.
Example Commandlines
For a 2 thread test we need the '-p 2' option - this sets up a semaphore so that the tests won't start until all threads are ready. First create the semaphore:
/usr/local/sbin/bonnie++ -d /scratch/ -m machine-disk-test -u root -p 2
Then start the first thread with '-y', which makes it wait on that semaphore:
/usr/local/sbin/bonnie++ -d /scratch/ -m machine-disk-test -u root -y
Then in a separate shell (as many times as necessary):
/usr/local/sbin/bonnie++ -d /scratch/ -m machine-disk-test -u root -y
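For the 8 thread runs that were limited to 8 GB of files per thread, the file size can be capped with bonnie++'s '-s' flag, which takes a size in megabytes (bonnie++ sanity-checks that against the RAM it detects, and '-r' can be used to adjust that). A sketch of one such thread, with the directory and label being the same placeholders as above:
# 8192 MB of files for this thread; directory and label are placeholders
/usr/local/sbin/bonnie++ -d /scratch/ -s 8192 -m machine-disk-test -u root -y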
Filesystems
In all cases the tests were carried out on a quiescent filesystem, with no other activity to slow things down. All partitions were formatted as UFS with logging enabled. All hosts are running Solaris 9, recently patched.
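For reference, UFS logging is just a mount option on Solaris 9; a minimal sketch of creating and mounting a scratch filesystem this way, where the device name below is a placeholder rather than the real layout on these hosts:
# build the filesystem, then mount it with logging enabled (slice name is a placeholder)
newfs /dev/rdsk/c2t0d0s6
mount -F ufs -o logging /dev/dsk/c2t0d0s6 /scratch
The equivalent for a permanent mount is a 'logging' entry in the mount options column of /etc/vfstab.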
Internal Disks
The internal disks on both the V490 and the V210 are simple RAID1 mirrors, using Solaris Volume Manager software RAID.
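A mirror like this is built with the usual Solaris Volume Manager commands; a rough sketch, where the metadevice and slice names are placeholders rather than the actual layout on these machines:
# state database replicas first, then two one-slice submirrors, then the mirror
# (all device and metadevice names here are placeholders)
metadb -a -f c1t0d0s7 c1t1d0s7
metainit d11 1 1 c1t0d0s0
metainit d12 1 1 c1t1d0s0
metainit d10 -m d11
metattach d10 d12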
V490 and StorEdge 3510
Since this is the piece of hardware I really wanted to test, this is where most of the benchmarks will be from. We'd already decided to configure the disk array as follows:
- 3 disks as a RAID5 device as an external mirror of the internal disks
- 1 disk as a global hot spare
- 8 disks to be arranged in the fastest redundant configuration possible. This means RAID10 or RAID5, really.
As a result, all the tests were carried out with the 8 disks configured first as RAID10 and then as RAID5, and also on the 3 disk RAID5 array as a comparison, to see how much of a difference the extra spindles made. The internal disks were tested as well, to give a baseline figure for a couple of decent direct-attached disks.
The SE3510 was connected to the V490 by a pair of separate 2 Gb Fibre Channel HBAs. MPxIO was set up to load-balance traffic between the interfaces and across both RAID controllers.
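On Solaris 9 with the Sun SAN drivers, MPxIO is enabled through the scsi_vhci driver configuration; a minimal sketch of the relevant settings (the exact contents of this file will vary from site to site):
# /kernel/drv/scsi_vhci.conf - enable multipathing and round-robin load balancing
mpxio-disable="no";
load-balance="round-robin";
A reconfiguration reboot afterwards brings the paths up under a single scsi_vhci device.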
E3500 and StorEdge T3
This server was already "in production" so there was limited scope for testing. There were 6 disks in a large RAID5 array available for testing purposes, and the tests were run late at night when the machine and disk array were quiescent.
The T3 array is connected to the E3500 by a single 1 Gb Fibre Channel interface.
Sun Fire V210
This is our development and test host (and so the machine I compiled bonnie++ on). It seemed like a good opportunity to get some performance figures out of a couple of average disks.
Results
The full results are available here.
For our workload, where heavy sequential writes are the problem, RAID5 appears to nudge into the lead over RAID10 - a bonus, since we gain more space that way. The RAID5 and RAID10 numbers on the 3510 are generally very close. It would be interesting to see how these benchmarks compare with anyone else's experience.
It's also interesting to see how well the T3 performs given that it's nearly 5 years old, with much slower disks.
One thing that the disk arrays do clearly show is that disk cache is king - write performance on the internal disks falls off badly once you throw multiple threads into the equation, but with the 3510 and T3 you actually gain, because disk write performance is no longer holding back your CPUs. For applications doing write-heavy I/O with many operations per second, the extra cache and spindles really make a difference.