02 Sep 2008 88 Terabyte volume
Big storage is pretty easy these days.
It’s harder when you want a single namespace: individual volumes upwards of 100 terabytes in size.
It’s even harder when you try to do it with reasonably priced mid-tier products rather than the high-end stuff, which costs upwards of a million dollars for 100 usable terabytes plus something to back the data up.
Our goal was a single 100 terabyte volume. With RAID6 double parity and GFS filesystem overhead we got 88 usable terabytes in our first attempt.
The storage system consists of three internally redundant Fibre Channel-attached NexSan SATABeast 4G systems, each with 42 1-terabyte disks inside. With two drives in each chassis set aside as hot spares, we end up with 120TB of raw disk. The significant overhead of RAID6 and the minor overhead of LVM2 and GFS bring us down to 88T of “usable” space.
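The back-of-the-envelope math works out roughly like this. The post doesn’t state the exact RAID6 group geometry, so the 8-data + 2-parity layout below is an assumption; the point is just to show how 120 raw “vendor terabytes” shrink to the high-80s once parity and binary-vs-decimal units are accounted for.

```python
# Rough capacity math for the setup described above.
# ASSUMPTION: RAID6 groups of 10 disks (8 data + 2 parity); the actual
# group geometry on the SATABeasts isn't given in the post.
TB = 10**12    # the "terabyte" disk vendors sell
TiB = 2**40    # the "T" that df reports

chassis = 3
disks_per_chassis = 42
hot_spares = 2
active = disks_per_chassis - hot_spares      # 40 active disks per chassis

raw_disks = chassis * active                 # 120 disks -> 120 TB raw
data_disks = raw_disks * 8 // 10             # 96 disks hold data, 24 hold parity

usable_tib = data_disks * TB / TiB
print(raw_disks, data_disks, round(usable_tib, 1))
```

That lands at roughly 87 TiB before LVM2 and GFS take their cut, which is in the same ballpark as the 88T figure; a slightly wider RAID6 stripe would account for the difference.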
Attempt #2 will be when our 2nd fileserver shows up and we do a clustered LVM-GFS2 filesystem across redundant FC links and servers.
In the current setup the single fileserver is attached twice to each RAID LUN via two isolated and completely independent Fibre Channel switches (Linux multipath rocks for this, BTW). This gets us some measure of load balancing and protection against both storage controller failures and an FC switch failure. Adding a second clustered fileserver that can “see” the same disks will give us as much no-single-point-of-failure protection as we can reasonably get without spending a million dollars or more with one of the tier 1 vendors.
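For the curious, a dual-fabric setup like this needs very little multipath configuration. The fragment below is a hypothetical sketch, not our actual config: the NexSan vendor/product strings and policy choices are assumptions, and you’d want to check them against your own `multipath -ll` output.

```
# Hypothetical /etc/multipath.conf sketch for two independent FC fabrics.
# Vendor/product strings are assumed, not copied from the real config.
defaults {
    user_friendly_names yes
    path_grouping_policy multibus   # stripe I/O across both fabrics
}
devices {
    device {
        vendor       "NEXSAN"
        product      "SATABeast"
        path_checker tur
        no_path_retry queue         # queue I/O through a controller failover
    }
}
```

With `multibus` grouping, both paths carry I/O for the load-balancing win; if a switch or controller dies, multipathd just drops the dead paths and keeps going on the surviving fabric.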
The money shot in this image is the single entry for “/massive” showing capacity of 88T …
This storage is part of a 2-rack compute cluster backed by a significant UPS and a large Qualstar tape library robot. The all-inclusive price for all the gear was roughly $350,000.
More pics of the hardware and actual infrastructure will eventually be posted.