TechOpsGuys.com Diggin' technology every day

October 8, 2010

How inefficient can you get?

Filed under: Storage — Nate @ 8:29 pm

[The page says the system was tested in Jan 2010, so it’s not recent, but I don’t recall seeing it on the site before now; in any case it’s still crazy.]

I was about to put my laptop down when I decided hey let’s go over to SPEC and see if there are any new NFS results posted.

So I did; you know me, I’m into that sort of stuff. I’m not a fan of NFS, but for some reason the SPEC results still interest me.

So I go and see that NEC has posted some results. NEC isn’t a very well-known server vendor, or even IT supplier, in the U.S., at least as far as I know. I’m sure they have decent market share over in Asia or something.

But anyways they posted some results, and I have to say I’m shocked. Either there is a glaring typo or that is just the worst NAS setup on the face of the planet.

It all comes down to usable capacity. I don’t know how you can pull this off, but they did – they apparently have 284 300GB disks on the system but only 6.1 TB of usable space! That is roughly 83TB of raw storage, and they only manage to get something like 7% capacity utilization out of the thing?
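For what it’s worth, here is the back-of-the-envelope math as a quick sketch in Python (the exported figure is the one from their SPEC disclosure, quoted further down):

# Quick sanity check of NEC's capacity utilization, using the figures from the post.
disks = 284
disk_gb = 300
exported_gb = 6226.5                   # "Total Exported Capacity" from the SPEC report

raw_gb = disks * disk_gb               # 85,200 GB raw, about 83 TiB
utilization = exported_gb / raw_gb

print(f"Raw     : {raw_gb:,} GB")
print(f"Exported: {exported_gb:,} GB")
print(f"Used    : {utilization:.1%}")  # roughly 7%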

Why even bother with disks at all if you’re going to do that? Just go with a few SSDs.

But WAIT! .. WAIT! It gets better. That 6.1 TB of space is spread across — wait for it — 24 file systems.

12 filesystems were created and used per node. One of 24 filesystems consisted of 8 disks which were divided into two 4-disk RAID 1+0 pools, and each of the other 23 filesystems consisted of 12 disks which were divided into two 6-disk RAID 1+0 pools. There were 6 Disk Array Controllers. One Disk Array Controller controlled 47 disks, and each one of the other 5 controlled 48 disks.
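If you tally up that layout (a rough sketch; I am only counting the disks behind the filesystems), even plain RAID 1+0 mirroring should leave far more usable space than what they exported:

# Disks behind the 24 filesystems and what RAID 1+0 alone would leave usable.
fs_disks = [8] + [12] * 23              # one 8-disk filesystem plus twenty-three 12-disk ones
total_disks = sum(fs_disks)             # 284 disks
mirrored_gb = total_disks * 300 / 2     # mirroring keeps half: 42,600 GB

print(total_disks, "disks in filesystems")
print(f"RAID 1+0 usable: {mirrored_gb:,.0f} GB vs 6,226.5 GB actually exported")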

I mean the only thing I can hope for is that the usable capacity is in fact a big typo.

Total Exported Capacity 6226.5GB

But if it’s not, I have to hand it to them for being brave enough to post such terrible results. That really takes some guts.


March 11, 2010

Panasas NFS performance posted

Filed under: Storage — Nate @ 5:48 pm

I have heard of Panasas on occasion, and for some reason I recently saw a story or a link to them, so I decided to poke around and see what they do. I like technology...

Anyways, I was shocked to see their system design. I mean, I’ve seen systems like Isilon, Xiotech, and Pillar that have embedded controllers in each of their storage shelves. It’s an interesting concept for boosting performance, though given the added complexity in each shelf I imagine it can boost the costs by quite a bit too, I don’t know.

But Panasas has taken it to an even further extreme, putting in a disk controller for every two disks in the system! I mean, I’m sure it’s great for maximum performance, but wow, it just seems like massive overkill (which can be good for certain apps, I’m sure). I was, and still am, shocked 🙂

So today I was poking around the latest SPEC SFS results for NFS again, and saw they finally posted some numbers.

Fairly impressive numbers, but I just can’t get past the number of CPUs they are using. They posted 77,137 IOPS with 160 disks hosting NAS data (80 SATA and 80 SSD). They used a total of 110 Intel CPUs (80 1.5GHz Celerons and 30 1.8GHz Pentium Ms) and 440 gigabytes of RAM cache.

By contrast, Avere, which I posted about recently (never used their stuff, never talked to them before), posted 131,591 IOPS with 72 disks hosting NAS data (48 15k SAS, 24 SATA), 14 Intel CPUs (2.5GHz quad core, so 56 cores), and 423 gigabytes of RAM cache. This is on a 6-node cluster. This Avere configuration is not using SSD (they have released an SSD version since these results were posted).

The bar certainly is being raised by these players implementing massive caches. NetApp showed off some pretty impressive numbers as well with their PAM last year, with more than 500GB of cache (PAM is a read-only cache), though again not nearly as effective per disk as Avere, since they came in at 60,507 IOPS with 56 15k RPM disks.
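To put that IOPS-per-disk point in numbers, here is a quick sketch using only the figures above:

# Ops/sec per data disk for the three results discussed above.
results = {
    "Panasas": (77_137, 160),   # 80 SATA + 80 SSD
    "Avere":   (131_591, 72),   # 48 15k SAS + 24 SATA, 6-node cluster
    "NetApp":  (60_507, 56),    # 56 15k RPM disks, PAM read cache
}

for name, (ops, disks) in results.items():
    print(f"{name:8s}: {ops / disks:5.0f} ops/sec per disk")

Roughly 480 per disk for Panasas, about 1,080 for NetApp, and around 1,830 for Avere.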

March 2, 2010

Avere front ending Isilon

Filed under: Storage — Nate @ 1:21 pm

UPDATED

How do all these cool people find our blog? A friendly fellow from Isilon commented that the article from The Register apparently isn’t accurate, in that Avere is front ending NetApp gear, not Isilon. But in any case I have been thinking about Avere and the Symantec stuff off and on recently anyways. END UPDATE

There is a really interesting article over at The Register about how Sony has deployed Avere cluster(s) to front end their Isilon (and perhaps other) gear too. A good quote:

The thing that grabs your attention here is that Avere is being used to accelerate some of the best scale-out NAS on the planet, not bog standard filers with limited scalability.

Avere certainly has some good performance metrics (pay attention to the IOPS per physical disk), and more recently they introduced a model that runs on top of SSD. I haven’t seen any performance results for it yet, but I’m sure it’s a significant boost. As The Register mentions in their article, if this technology really is good enough for this purpose it has the potential (of course) to be extremely disruptive in the industry, wreaking havoc with many of the remaining (and very quickly dwindling) smaller scale-out NAS vendors. Kind of funny, really, seeing how Isilon spun the news.

From Avere’s site, talking about comparing SPEC SFS results:

A comparison of these results and the number of disks required shows that Avere used dramatically fewer disks. BlueArc used 292 disks to achieve 146,076 ops/sec with 3.34 ms ORT. Exanet used 592 disks to achieve 119,550 ops/sec with 2.07ms ORT (overall response time). HP used 584 disks to achieve 134,689 ops/sec and 2.53 ms ORT. Huawei Symantec used 960 disks to achieve 176,728 ops/sec with 1.67ms ORT. NetApp used 324 disks to achieve 120,011 ops/sec with 1.95ms ORT. By contrast, Avere used only 79 drives to achieve 131,591 ops/sec with 1.38ms ORT. Doing a little math, Avere achieves 3.3, 8.2, 7.2, 9.0, and 4.5 times more ops/sec per disk used than the other vendors.
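Their arithmetic holds up if you redo it yourself; here is a quick sketch that reproduces the 3.3x through 9.0x figures from the quote:

# Reproduce Avere's ops/sec-per-disk comparison from the quote above.
vendors = {
    "BlueArc":         (146_076, 292),
    "Exanet":          (119_550, 592),
    "HP":              (134_689, 584),
    "Huawei Symantec": (176_728, 960),
    "NetApp":          (120_011, 324),
}
avere_per_disk = 131_591 / 79

for name, (ops, disks) in vendors.items():
    print(f"{name:16s}: {avere_per_disk / (ops / disks):.1f}x")   # 3.3, 8.2, 7.2, 9.0, 4.5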

Which got me thinking again: Symantec last year released their FileStore product, and my friends over at 3PAR were asking me if I was interested in it. To date I have not been, because the only performance numbers released so far have not been very efficient. And it’s still a new product, so who knows how well it works in the real world, granted that Symantec does have a history with file systems through the Veritas File System (VxFS) product.

Unfortunately there isn’t much technical info on the FileStore product on their web site.

Built to run on commodity servers and most storage arrays, FileStore is an incredibly simple-to-install soft appliance. This combination of low-cost hardware, “pay as you grow” scalability and easy administration give FileStore a significant cost advantage over specialized appliances. With support for both SAN and iSCSI storage, FileStore delivers the performance needed for the most demanding applications.

It claims N-way active-active or active-passive clustering, up to 16 nodes in a cluster, up to 2PB of storage, and 200 million files per file system, which for most people is more than enough. I don’t know how it is licensed, though, or how well it scales on a single node; could it run on the aforementioned 48-all-round system?

Where does 3PAR fit into this? Well, Symantec was the first company (so far the only one that I know of) to integrate Thin Reclamation into their file system, and it integrates really well with 3PAR arrays at least. The file system uses some sort of SCSI command that is passed back to the array when files are deleted/reclaimed, so the I/O never hits the spindles; the array transparently re-maps the blocks to be available for use.

3PAR Thin Reclamation for Veritas Storage Foundation keeps storage volumes thin over time by allowing granular, automated, non-disruptive space reclamation within the InServ array. This is accomplished by communicating deleted block information to the InServ using the Thin Reclamation API. Upon receiving this information, the InServ autonomically frees this allocated but unused storage space. The thin reclamation capabilities provide environments using Veritas Storage Foundation by Symantec an easy way to keep their thin volumes thin over time, especially in situations where a large number of writes and deletes occur.
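To make the idea concrete, here is a toy simulation of the concept; the class and method names are hypothetical and have nothing to do with the actual 3PAR/Veritas API:

# Toy model of thin reclamation: the filesystem tells the array which blocks were
# freed, and the array returns them to its free pool instead of leaving them allocated.
class ThinArray:
    def __init__(self):
        self.allocated = set()

    def write(self, block):
        self.allocated.add(block)        # thin provisioning: allocate on first write

    def reclaim(self, blocks):
        self.allocated -= set(blocks)    # freed blocks go back to the shared free pool

array = ThinArray()
for block in range(100):
    array.write(block)                   # a file lands on 100 blocks

array.reclaim(range(50, 100))            # file deleted; filesystem reports the freed blocks
print(len(array.allocated), "blocks still allocated")   # 50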

But I was thinking that you could front end one of these FileStore clusters with an Avere cluster and get some pretty flexible, high-performing storage.

Something I’d like myself to explore at some point.
