TechOpsGuys.com Diggin' technology every day

October 8, 2010

How inefficient can you get?

Filed under: Storage — Nate @ 8:29 pm

[The page says the system was tested in January 2010, so it’s not recent, but I don’t recall seeing it on the site before now; in any case it’s still crazy.]

I was about to put my laptop down when I decided, hey, let’s go over to SPEC and see if there are any new NFS results posted.

So I did; you know me, I’m into that sort of stuff. I’m not a fan of NFS, but for some reason the SPEC results still interest me.

So I go and see that NEC has posted some results. NEC isn’t a very well-known server or even IT supplier in the U.S., at least as far as I know; I’m sure they’ve got decent market share over in Asia or something.

But anyways they posted some results, and I have to say I’m shocked. Either there is a glaring typo or that is just the worst NAS setup on the face of the planet.

It all comes down to usable capacity. I don’t know how you can pull this off, but they did – they apparently have 284 disks of 300GB each on the system but only 6.1 TB of usable space! That is roughly 83TB of raw storage, and they only manage to get a little over 7% capacity utilization out of the thing?
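For the curious, here’s the back-of-the-envelope math as a quick Python sketch (the disk count and the exported capacity come straight from the SPEC disclosure; the per-disk size is the nominal 300GB):

    disks = 284
    disk_size_gb = 300                  # nominal per-disk capacity
    raw_gb = disks * disk_size_gb       # 85,200 GB raw, i.e. the ~83TB mentioned above
    exported_gb = 6226.5                # "Total Exported Capacity" from the report
    utilization = exported_gb / raw_gb
    print(f"{raw_gb} GB raw, {utilization:.1%} exported")   # roughly 7%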

Why even bother with disks at all if you’re going to do that? Just go with a few SSDs.

But WAIT! … WAIT! It gets better. That 6.1 TB of space is spread across — wait for it — 24 filesystems.

12 filesystems were created and used per node. One of 24 filesystems consisted of 8 disks which were divided into two 4-disk RAID 1+0 pools, and each of the other 23 filesystems consisted of 12 disks which were divided into two 6-disk RAID 1+0 pools. There were 6 Disk Array Controllers. One Disk Array Controller controlled 47 disks, and each one of the other 5 controlled 48 disks.
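Just to sanity-check that layout, here’s a rough sketch (it assumes the full 300GB of every disk is handed to the RAID 1+0 pools):

    fs_disks = [8] + [12] * 23            # one 8-disk filesystem plus 23 twelve-disk ones
    total_disks = sum(fs_disks)           # 284, matching the disk count above
    mirrored_gb = total_disks * 300 / 2   # RAID 1+0 halves raw capacity: ~42,600 GB
    exported_gb = 6226.5
    print(f"{exported_gb / mirrored_gb:.0%} of the mirrored space is exported")   # ~15%

So even after giving up half the spindles to mirroring, only something like 15% of what’s left is actually exported, which points straight at short stroking (more on that in the comments below).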

I mean the only thing I can hope for is that the usable capacity is in fact a big typo.

Total Exported Capacity: 6226.5 GB

But if it’s not, I have to hand it to them for being brave enough to post such terrible results. That really takes some guts.

 

3 Comments

  1. The obvious reason to configure a system for benchmarking this way is to optimize IOPS. Spreading the load over more spindles decreases the IOPS each spindle has to serve. Since the metric reported for this test is “throughput” (ops/sec) and the associated response time, this is a rational thing to do for a vendor trying to optimize for the metric. Clearly this is a pretty silly configuration, but I don’t think NEC is alone in trying to game the benchmark like this. With that said, when compared with other vendors the performance of this thing is… underwhelming.

    A better benchmark might report two results for every system configuration tested: throughput-per-dollar and storage-capacity-per-dollar, which would encourage vendors to submit results for realistic, balanced systems.

    Comment by Nathan Schrenk — October 9, 2010 @ 9:35 am

  2. Oh, I absolutely agree; short stroking is the name of the game here. Sorry if I wasn’t clear on that in the post.

    Thanks for the feedback!!

    Comment by Nate — October 9, 2010 @ 9:51 am

  3. […] on a couple of other folks for the same, but this VSA sets a new standard. Well there is this NEC system with 6%, though in NEC’s case that was by choice. The current VSA architecture forces the low […]

    Pingback by New record holder for inefficient storage – VMware VSA « TechOpsGuys.com — December 2, 2011 @ 11:18 am
