Local storage performance of AWS cluster compute instances

We collected lots more data over the weekend, as we were finally able to run bonnie++ against the local boot disk as well as single and striped versions of the ephemeral storage volumes that come with every cc1.4xlarge instance type.

Key Results:

  • Performance of the root/boot disk is far slower than any other type of block-based storage. This is to be expected, as the boot disk (even though it comes via an EBS-resident AMI) does not get the benefit of paravirtualization acceleration. The take-home message is that the boot/root disk volume should not really be used for anything beyond the OS. It also means that the blog post showing how to increase the size of the local OS disk is useful only for playing around, not for anything serious.
  • The performance of the ephemeral disks is better, and striping the two available drives together as a RAID0 volume has measurable benefits across the board.

What this means in the real world:

  1. Don’t use the boot/root disk for anything but the OS, and don’t bother trying to expand its size
  2. It is reasonable to stripe the ephemeral storage together and use it for “real” work, especially as indications are that it may be faster than an EBS-mounted volume (a setup sketch follows this list)
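For anyone who wants to try this, here is a minimal sketch of how the two ephemeral drives could be striped into a single RAID0 volume and formatted with XFS (the filesystem used for all of our tests). The device names, md device, and mount point below are assumptions for illustration; check which devices your instance actually exposes before running anything like this, and remember that anything on the array disappears along with the instance.

```python
#!/usr/bin/env python
"""Sketch: stripe the two ephemeral disks into one RAID0 volume (run as root).

Assumptions, not taken from the post: the ephemeral drives show up as
/dev/sdb and /dev/sdc, mdadm and xfsprogs are installed, and /mnt/scratch
is a reasonable mount point. Adjust all of these for your own instance.
"""
import subprocess

EPHEMERAL_DEVICES = ["/dev/sdb", "/dev/sdc"]  # hypothetical device names
MD_DEVICE = "/dev/md0"
MOUNT_POINT = "/mnt/scratch"

def run(cmd):
    """Echo a command and run it, failing loudly if it returns non-zero."""
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

# Build the RAID0 (striped) array from the two ephemeral drives.
run(["mdadm", "--create", MD_DEVICE, "--level=0",
     "--raid-devices=%d" % len(EPHEMERAL_DEVICES)] + EPHEMERAL_DEVICES)

# Format with XFS and mount the striped volume for scratch use.
run(["mkfs.xfs", "-f", MD_DEVICE])
run(["mkdir", "-p", MOUNT_POINT])
run(["mount", MD_DEVICE, MOUNT_POINT])
```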

Other people have mentioned that this is worth doing even when one includes the time it takes to rsync or stage data onto the ephemeral storage. Future BioTeam cluster-building practices may use the ~800GB of ephemeral storage to serve an NFS or parallel filesystem that offers input data to pipelines running on EC2 compute farms. Since we can’t trust ephemeral storage with anything unique, we’d keep a second shared filesystem (backed by EBS) to capture pipeline results.
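As a rough sketch of that staging step, something like the following could sync input data from an EBS-backed directory onto the striped ephemeral volume before a pipeline run; both paths are hypothetical placeholders.

```python
#!/usr/bin/env python
"""Sketch: stage input data onto the ephemeral scratch volume with rsync.

Assumptions, not taken from the post: the EBS-backed data lives under
/ebs/input and the striped ephemeral volume is mounted at /mnt/scratch.
"""
import subprocess

SOURCE = "/ebs/input/"          # hypothetical EBS-backed source directory
DEST = "/mnt/scratch/input/"    # hypothetical ephemeral destination

# -a preserves permissions/timestamps; --delete keeps the copy in sync.
subprocess.check_call(["rsync", "-a", "--delete", SOURCE, DEST])
```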

Obviously there is one other comparison to make: how do these performance numbers measure up against the 1-disk, 4-disk, and 8-disk EBS RAID0 stripesets that we’ve been testing all week?

That is a topic for the next blog posting …

Here are the results of our tests against local storage on a cc1.4xlarge instance. As usual, the raw data is available in our public spreadsheet.

We ran tests multiple times and averaged the results. All file systems were XFS.
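For anyone repeating the runs, a small script along these lines could average the machine-readable CSV lines that bonnie++ prints at the end of each run. The column positions and the results.csv filename are assumptions; verify the field layout against your bonnie++ version before trusting the numbers.

```python
#!/usr/bin/env python
"""Sketch: average selected fields across repeated bonnie++ runs.

Assumptions, not taken from the post: each run's machine-readable CSV line
has been appended to results.csv, and COLUMNS holds hypothetical field
positions -- check them against the output of your bonnie++ version.
"""
import csv

# Hypothetical column positions mapped to human-readable labels.
COLUMNS = {
    9: "sequential block write (K/sec)",
    11: "sequential rewrite (K/sec)",
    13: "sequential block read (K/sec)",
}

sums = dict((i, 0.0) for i in COLUMNS)
runs = 0

with open("results.csv") as handle:
    for row in csv.reader(handle):
        runs += 1
        for i in COLUMNS:
            sums[i] += float(row[i])

for i, label in sorted(COLUMNS.items()):
    print("%s: %.1f (average of %d runs)" % (label, sums[i] / runs, runs))
```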

Summarized/averaged values used to generate the graphs:

[Image: hpcLocaldisk-001.png]

Here is the graph showing the block- and character-based read & write tests. We did not capture character-based test data for the local root disk because it was already so slow.

[Image: hpcLocaldisk-002.png]


And here is a graph of the bonnie++ tests that deal with Seeks and Sequential/Random file creation & deletion:

[Image: hpcLocaldisk-004.png]


Filed Under: Employee Posts


About the Author

Chris is an infrastructure geek specializing in the applied use of IT to enable and enhance scientific research in life science informatics environments.
