Recently it has come to my attention that we desperately need to update our SSD test methodology. One thing I like to see as a consumer is a variety of different methodologies. Benchmarking to different extremes is important for a few reasons. First, different workloads are found in different environments. Second, varied methodologies ensure that vendors do not over-optimize for one benchmark over another. SandForce’s dominance of the ATTO benchmark years ago is a great example of how a single benchmark cannot be used as a guidepost. It is important that every site develops its own methodology. To this end, ServeTheHome will now do something different. It is time to make a change, while still presenting findings in the one-page summary format ServeTheHome uses.
My question is simply this: is there room for benchmarks other than on Intel PCH ports? Here is a look at the big SSD-reviewing tech sites and the controllers they have recently used in their SSD test platforms.
Here’s the thing: I looked at about two dozen sites that do SSD benchmarks, and all of them do something similar. They test each SSD on Intel SATA III controllers (save StorageReview’s enterprise testing). This makes a lot of sense. Intel has well-supported controllers and offers not just TRIM, but also TRIM in RAID. After all, most users just hook up a SATA cable and away they go. Not so fast! Power users often utilize LSI SAS 2008 or SAS 2308 based controllers. If you want to pass through drives to an ESXi VM, for example, LSI controllers are very well supported and can handle a lot of load. There is one great reason for this: the LSI SAS 2008 and SAS 2308 controllers have eight 6.0Gbps ports, while Intel still offers a maximum of two SATA III 6.0Gbps ports. AMD offers more on its AM3+ platform, but suffers from low market share. LSI controllers are also well supported with RAID under VMware ESXi. To be fair, the StorageReview.com test configuration uses the P67 chipset only for consumer drives; they do say Intel’s controller sets the standard, and they use the 9211-8i for enterprise drives. That makes sense, as they do a lot of great SAS reviews.
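The port-count argument comes down to simple arithmetic. Here is a quick back-of-the-envelope sketch (illustrative figures only): SATA III’s 6.0Gbps line rate uses 8b/10b encoding, so usable bandwidth works out to roughly 600MB/s per port.

```python
# Back-of-the-envelope port bandwidth comparison (illustrative figures).
# SATA III runs at 6.0 Gbps with 8b/10b encoding, so usable per-port
# bandwidth is roughly 6.0e9 * (8/10) / 8 bytes/s = ~600 MB/s.
SATA3_PORT_MBPS = 6.0e9 * (8 / 10) / 8 / 1e6  # ~600 MB/s per port

intel_pch_ports = 2  # maximum SATA III ports on Intel chipsets of this era
lsi_hba_ports = 8    # LSI SAS 2008 / SAS 2308: eight 6.0 Gbps ports

print(f"Per port:            {SATA3_PORT_MBPS:.0f} MB/s")
print(f"Intel PCH aggregate: {intel_pch_ports * SATA3_PORT_MBPS:.0f} MB/s")
print(f"LSI HBA aggregate:   {lsi_hba_ports * SATA3_PORT_MBPS:.0f} MB/s")
```

In other words, an eight-port HBA offers four times the full-speed aggregate of two PCH SATA III ports, which matters once you start filling every port with an SSD.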
For many applications, four or eight 2.5″ SATA SSDs are good enough and much less expensive than their enterprise counterparts. As a result, we will be changing the SSD test bed slightly:
- Motherboard: Supermicro X9DR7-LN4F (with onboard LSI SAS 2308 controller)
- OS Drive: Intel X25-M G2 SSD on a 3.0Gbps port
- Memory: Kingston 64GB (8x 8GB) Registered ECC DDR3 1600MHz (four DIMMs per CPU)
- CPU: Dual Intel Xeon E5-2690
- Operating System: Windows Server 2012
Here is the reasoning: the LGA 2011 platform is simply newer than the AMD G34 platform and has access to many PCIe 3.0 lanes. The Supermicro X9DR7-LN4F is a great test motherboard because it has four onboard Intel gigabit LAN ports along with a built-in LSI SAS 2308 chip. It also has an absolute ton of PCIe slots. This is similar to what StorageReview uses with its 9211-8i, just with the newer LSI controller and newer Intel platform. As we have seen in Jeff’s LSI 9207-8e review, the SAS 2308 is a monster performer. For those wondering, with one to three drives the SAS 2008 and SAS 2308 chips do not show a huge difference; loaded with eight SSDs, you will want the SAS 2308. I have also found that with the Xeon E5 chips, even a uniprocessor (UP) configuration delivers comparable results with the SAS 2308. We may move to lower-power Xeon E5s at some point, as the Xeon E5-2690s are decidedly overkill.
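For those curious why eight SSDs tip the balance toward the SAS 2308, the host link is the likely bottleneck: the SAS 2008 sits on a PCIe 2.0 x8 link while the SAS 2308 uses PCIe 3.0 x8. A rough sketch of the arithmetic (illustrative figures, ignoring protocol overhead beyond line encoding):

```python
# Rough host-link arithmetic for the two HBA generations.
# SAS 2008: PCIe 2.0 x8 -> 5 GT/s per lane with 8b/10b encoding
pcie2_lane = 5e9 * (8 / 10) / 8 / 1e6        # ~500 MB/s per lane
# SAS 2308: PCIe 3.0 x8 -> 8 GT/s per lane with 128b/130b encoding
pcie3_lane = 8e9 * (128 / 130) / 8 / 1e6     # ~985 MB/s per lane

sas2008_host = 8 * pcie2_lane  # ~4000 MB/s total host bandwidth
sas2308_host = 8 * pcie3_lane  # ~7877 MB/s total host bandwidth

# Eight fast SATA III SSDs pushing ~500 MB/s sequential each:
ssd_load = 8 * 500  # 4000 MB/s, right at the SAS 2008's host-link ceiling
print(f"SAS 2008 host link: {sas2008_host:.0f} MB/s vs. load {ssd_load} MB/s")
print(f"SAS 2308 host link: {sas2308_host:.0f} MB/s vs. load {ssd_load} MB/s")
```

Eight drives can sit right at the SAS 2008’s host-link ceiling, while the SAS 2308 leaves plenty of headroom.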
One other major change: we are going to increasingly focus on multi-SSD RAID configurations instead of just single drives. This is a major shift in the SSD test methodology. Most review sites test single drives; we are going to look not just at single-drive performance, but also at multi-drive performance. This is a key driver for moving to the LSI SAS 2308 controller.
A Quick Word on SSD Benchmarks
Here are the current thoughts on SSD benchmarks using the new setup. Please feel free to comment below and suggest alternate/different configurations. We are trying to balance the portfolio of SSD test scenarios while keeping results easy to reproduce.
- ATTO is a benchmark that became either famous or infamous around the time SandForce drives came to market. Vendors were quick to tout SandForce’s dominance of the benchmark. Very quickly, the press realized that these marketing numbers were at best measures of maximum throughput with highly compressible data.
- CrystalDiskMark – It has been a very tough decision, but the current thinking is that this will be removed from the test suite. AS SSD and Anvil’s Storage Utilities do a great job of showing similar scenarios.
- HD Tune Pro – This will only be used on traditional hard drives going forward. Results with HD Tune Pro have been less than exciting. Plenty of sites cover this. If you think it should stay in the methodology, please feel free to let us know.
- AS SSD – AS SSD is easy for most users to install and gives a better cross-section of SSD performance than ATTO.
- Anvil’s Storage Utilities – Anvil’s Storage Utilities is an excellent tool to gauge all-around SSD performance.
- Iometer – We will use custom Iometer profiles that will be released on the site shortly. Iometer is industry standard and therefore we are including it in our benchmark stable.
- Oracle Swingbench – This is one we have debated and are open to feedback on. Currently, we are inclined to leave it out to keep the CPU configuration more flexible, but we are open to suggestions.
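As a quick illustration of the ATTO point above: ATTO’s default test pattern is highly compressible, which is exactly what SandForce’s transparent compression thrived on, while real-world data is often far less compressible. Here zlib stands in for the controller’s compression engine (an assumption for illustration only; the controller’s actual algorithm is proprietary):

```python
import os
import zlib

MB = 1_048_576
compressible = b"\x00" * MB   # repeating pattern, the best case for compression
random_data = os.urandom(MB)  # incompressible data, the worst case

# A compression-based controller effectively writes len(compressed) bytes
# to NAND for every len(buf) bytes the host sends.
for name, buf in (("compressible", compressible), ("random", random_data)):
    ratio = len(buf) / len(zlib.compress(buf))
    print(f"{name}: ~{ratio:.2f}x compression")
```

A benchmark fed the first kind of buffer will report throughput numbers that say little about how the drive handles the second kind.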
Another SSD Test Methodology Change – Presentation
We will be making a few other changes to the presentation of the results. First, we will present SSD results using bar graphs with numbers, similar to the CPU charts seen in examples such as the Intel Xeon E3-1230V2 review. Simply put, we value our readers’ time. Switching back and forth between reviews stinks when you need only a simple comparison. We are also working on building a new SSD test results database. The idea is that users can sort through various reviews and quickly compare SSD test results across generations.
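To give a sense of what such a results database might look like, here is a minimal sketch using SQLite. The table and column names are hypothetical illustrations, not the actual STH schema:

```python
import sqlite3

# Hypothetical schema for cross-review SSD result comparisons.
# All names here are illustrative assumptions, not STH's real design.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE drives (
    id INTEGER PRIMARY KEY,
    model TEXT, controller TEXT, capacity_gb INTEGER, review_url TEXT
);
CREATE TABLE results (
    drive_id INTEGER REFERENCES drives(id),
    benchmark TEXT,       -- e.g. 'AS SSD', 'Anvil', 'Iometer'
    workload TEXT,        -- e.g. '4K random read, QD32'
    raid_drives INTEGER,  -- 1 for single-drive tests
    value REAL, unit TEXT -- e.g. MB/s or IOPS
);
""")

# Comparing drives across reviews then becomes a single sorted query:
rows = conn.execute(
    "SELECT d.model, r.value FROM results r "
    "JOIN drives d ON d.id = r.drive_id "
    "WHERE r.benchmark = 'AS SSD' ORDER BY r.value DESC"
).fetchall()
```

The key design point is keeping one row per benchmark result rather than one row per review, so sorting and filtering across drive generations stays a single query.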
Final Thoughts on the New SSD Test Methodology
First off, thanks to all of our readers for their continued support and encouragement. We are trying to expand coverage in new ways such as these, moving from relatively standard testing to developing our own methodology. Please feel free to leave suggestions as the new methodology will start being seen in late November 2012. Overall, it is an exciting departure and an interesting experiment for STH.