TechOpsGuys.com Diggin' technology every day

September 17, 2014

NetApp FlashRay ships… with one controller

Filed under: Storage — Nate @ 10:55 am

Well I suppose it is finally out, or at least in a “limited” way. NetApp apparently is releasing FlashRay, their ground-up rewrite of an all-flash product, based on a new “MARS” operating system (not related to Ontap).

When I first heard about MARS I heard some promising things. I suppose all of those things were just part of the vision – obviously not where the product is today on launch day. NetApp has been carefully walking back expectations all year, which turned out to be a smart move, but it seems they didn’t go far enough.

To me it is obvious that they felt severe market pressure and could no longer risk waiting for their next gen platform before going to market. It’s also obvious that Ontap doesn’t cut it for flash, or they wouldn’t have built FlashRay to begin with.

But shipping a system that only supports a single controller – I don’t care if it’s a controlled release or not – giving any customer such a system under any circumstance other than alpha-quality testing just seems absurd.

The “vision” they have is still a good one, on paper anyway — I’m really curious how long it takes them to execute on that vision, given the time it took to integrate the Spinnaker stuff into Ontap. Will it take several years?

In the meantime, while you’re waiting for this vision to come out, I wonder what NetApp will offer to get people to want to use this product vs any one of the competing solutions out there. Perhaps by the time this vision is complete this first or second generation of systems will be obsolete anyway.

The current FlashRay system seems to ship with less than 10TB of usable flash (in one system).

On a side note there was some chatter recently about an upcoming EMC XtremIO software update that apparently requires total data loss (or backup & restore) to perform. I suppose that is a sign that the platform is 1) not mature and 2) not designed right (not fully virtualized).

I told 3PAR management back at HP Discover three years ago that they could have counted me among the people who did not believe the 3PAR architecture would be able to adapt to this new era of all flash. I really didn’t have confidence at that time. What they’ve managed to accomplish over the past two years though has just blown me away, and gives me confidence their architecture has many years of life left in it. The main bit missing still is compression – though that is coming.

My new all flash array is of course a 7450 – to start with, 4 controllers and ~27TB raw flash (16 x 1.92TB SSDs), plus a pair of disk shelves so I can go to as much as ~180TB raw flash (in 8U) without adding any more shelves (before compression/dedupe of course). Cost per GB is obviously low (relative to their competition), performance is high (~105k IOPS @ 90% write in RAID 10 @ sub-1ms latency – roughly 20-fold faster than our existing 3PAR F200 with 80 x 15k RPM disks in RAID 5 — yes, my workloads are over 90% write from a storage perspective), and they have the mature, battle hardened 3PAR OS (formerly named InForm OS) running on it.

February 21, 2014

NetApp’s latest mid range SPC-1

Filed under: Storage — Nate @ 4:14 pm

NetApp came out with their latest generation of storage systems recently and they were good citizens and promptly published SPC-1 results for them.

When is clustering, clustering?

NetApp is running the latest Ontap 8.2 in cluster mode I suppose, though there is only a single pair of nodes in the tested cluster. I’ve never really considered this a real cluster; it’s more of a workgroup of systems. Volumes live on a controller (pair) and can be moved around if needed. They probably have some fancy global management thing for the “cluster” but it’s just a collection of storage systems that are only loosely integrated with each other. I like to compare the NetApp style of clustering to a cluster of VMware hosts (where the VMs would be the storage volumes).

This strategy has its benefits as well, the main one being less likelihood that the entire cluster could be taken down by a failure (normally I’d consider this failure to be triggered by a software fault). This is the same reason why 3PAR has elected to date not to go beyond 8 nodes in their cluster – the risk/return is not worth it in their mind. In their latest generation of high end boxes 3PAR decided to double up the ASICs to give them more performance/capacity rather than add more controllers, though technically there is nothing stopping them from extending the cluster further (to my knowledge).

The downside to workgroup style clustering is that optimal performance is significantly harder to obtain.

3PAR clustering is vastly more sophisticated and integrated by comparison. To steal a quote from their architecture document –

The HP 3PAR Architecture was designed to provide cost-effective, single-system scalability through a cache-coherent, multi-node, clustered implementation. This architecture begins with a multi-function node design and, like a modular array, requires just two initial Controller Nodes for redundancy. However, unlike traditional modular arrays, an optimized interconnect is provided between the Controller Nodes to facilitate Mesh-Active processing. With Mesh-Active controllers, volumes are not only active on all controllers, but they are autonomically provisioned and seamlessly load-balanced across all systems resources to deliver high and predictable levels of performance. The interconnect is optimized to deliver low latency, high-bandwidth communication and data movement between Controller Nodes through dedicated, point-to-point links and a low overhead protocol which features rapid inter-node messaging and acknowledgement.

Sounds pretty fancy right? It’s not something that is for high end only. They have extended the same architecture down as low as a $25,000 entry level price point on the 3PAR 7200 (that price may be out of date, it’s from an old slide).

I had the opportunity to ask what seemed to be a NetApp expert on some of the finer details of clustering in Ontap 8.1 (latest version is 8.2) a couple of years ago and he provided some very informative responses.

Anyway, on to the results. After reading up on them it was hard for me not to compare them with the now five-year-old 3PAR F400 results.

Also I want to point out that the 3PAR F400 is End of Life, and is no longer available to purchase new as of November 2013 (support on existing systems continues for another couple of years).

Metric                                           | NetApp FAS8040                         | 3PAR F400
Date tested                                      | 2/19/2014                              | 4/27/2009
Controller count                                 | 2                                      | 4 (hey, it's an actual cluster)
SPC-1 IOPS                                       | 86,072                                 | 93,050
SPC-1 Usable capacity                            | 32,219 GB (RAID 6)                     | 27,046 GB (RAID 1)
Raw capacity                                     | 86,830 GB                              | 56,377 GB
SPC-1 Unused storage ratio (may not exceed 45%)  | 41.79%                                 | 0.03%
Tested storage configuration pricing             | $495,652                               | $548,432
SPC-1 Cost per IOP                               | $5.76                                  | $5.89
Disk size and number                             | 192 x 450GB 10k RPM                    | 384 x 146GB 15k RPM
Data cache                                       | 64GB data cache + 1,024GB Flash Cache  | 24GB data cache

I find the comparison fascinating, myself at least. It is certainly hard to compare the pricing given the 3PAR results are five years old, and the 3PAR mid range pricing model has changed significantly with the introduction of the 7000 series in late 2012. I believe the pricing 3PAR provided to SPC-1 was discounted (I can’t find an indication either way, I just believe that based on my own 3PAR pricing from back then) whereas NetApp’s is list (it says so in the document). But again, it is hard to compare pricing given the massive difference in elapsed time between tests.
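If you want to sanity-check the cost-per-IOP line in the table yourself, it is simply the tested configuration price divided by the SPC-1 IOPS result. A quick sketch, using only the figures from the table above:

# Cost per SPC-1 IOP, computed from the table above.
systems = {
    "NetApp FAS8040": (495_652, 86_072),   # (tested config price in $, SPC-1 IOPS)
    "3PAR F400":      (548_432, 93_050),
}
for name, (price, iops) in systems.items():
    print(f"{name}: ${price / iops:.2f} per SPC-1 IOP")
# NetApp FAS8040: $5.76 per SPC-1 IOP
# 3PAR F400: $5.89 per SPC-1 IOP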

Unused storage ratio

What is this number and why is there such a big difference? Well, this is an SPC-1 metric and they say in the case of NetApp:

Total Unused Capacity (36,288.553 GB) divided by Physical Storage Capacity (86,830.090 GB) and may not exceed 45%.

An unused storage ratio of 42% is fairly typical for NetApp results.
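In code form the ratio is just unused capacity over physical capacity; here is a quick check using the figures from NetApp’s executive summary quoted above:

# SPC-1 unused storage ratio for the NetApp FAS8040 result (capacities in GB).
total_unused_capacity = 36_288.553
physical_storage_capacity = 86_830.090

ratio = total_unused_capacity / physical_storage_capacity
print(f"Unused storage ratio: {ratio:.2%}")   # 41.79%
assert ratio <= 0.45   # SPC-1 disqualifies results above 45%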

In the case of 3PAR, you have to go to the bigger full disclosure document (72 pages), as the executive summary has evolved over time and that specific quote is not in the 3PAR side of things.

So for the 3PAR F400, SPC says:

The Physical Storage Capacity consisted of 56,377.243 GB distributed over 384 disk drives each with a formatted capacity of 146.816 GB. There was 0.00 GB (0.00%) of Unused Storage within the Physical Storage Capacity. Global Storage Overhead consisted of 199.071 GB (0.35%) of Physical Storage Capacity. There was 61.203 GB (0.11%) of Unused Storage within the Configured Storage Capacity. The Total ASU Capacity utilized 99.97% of the Addressable Storage Capacity resulting in 6.43 GB (0.03%) of Unused Storage within the Addressable Storage Capacity.

3PAR F400 Storage Hierarchy Ratios

The full disclosure document is not (yet) available for NetApp as of 2/21/2014. It most certainly will become available at some point.

The metrics above and beyond the headline numbers are one of the main reasons I like SPC-1.

With so much wasted space on the NetApp side it is confusing to me why they don’t just use RAID 1 (I think the answer is they don’t support it).

Benefits from cache

The NetApp system is able to leverage its terabyte of flash cache to accelerate what is otherwise a slower set of 10k RPM disks, which is nice for them.

They also certainly have much faster CPUs, and more than double the data cache (3PAR’s architecture isolates data cache from the operating system, so I am not sure how much memory on the NetApp side is actually used for data cache vs operating system/metadata etc). 3PAR by contrast has their proprietary ASIC which is responsible for most of the magic when it comes to data processing on their systems.

3PAR does not have any flash cache capabilities so they do require (in this comparison) double the spindle count to achieve the same performance results. Obviously in a newer system configuration 3PAR would likely configure a system with SSDs and sub-LUN auto tiering to compensate for the lack of a flash based cache. This does not completely compensate however, and of course I have been hounding 3PAR and HP for at least four years now to develop some sort of cache technology that leverages flash. They announced SmartCache in December 2012 (host-based SSD caching for Gen8 servers), however 3PAR integration has yet to materialize.

However, keep in mind the NetApp flash cache only accelerates reads. If you have a workload like mine which is 90%+ write, the flash cache doesn’t help.
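To put a rough number on that, here is a back-of-the-envelope sketch (my own illustrative figures, not anything from NetApp) of how little a read-only cache can do for a write-heavy workload, Amdahl’s law style:

# Illustrative only: assumes a 90% cache hit rate on reads and cached reads
# being 20x faster than disk. Only the read fraction can ever be accelerated.
def overall_speedup(read_fraction, hit_rate=0.9, cache_speedup=20):
    accelerated = read_fraction * hit_rate
    return 1 / ((1 - accelerated) + accelerated / cache_speedup)

print(overall_speedup(read_fraction=0.10))   # ~1.09x for a 90% write workload
print(overall_speedup(read_fraction=0.70))   # ~2.5x for a 70% read workload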

Conclusion

NetApp certainly makes good systems, they offer a lot of features, and have respectable performance. The systems are very, very flexible and they have a very unified product lineup (the same software runs across the board).

For me personally, after seeing results like this I feel continually reassured that the 3PAR architecture was the right choice for my systems vs NetApp (or other 3-letter storage companies). But not everyone’s priorities are the same. I give NetApp props for continuing to support SPC-1 and being public with their numbers. Maybe some day these flashy storage startups will submit SPC-1 results… not holding my breath though.

June 22, 2012

NetApp Cluster SPC-1

Filed under: Storage — Nate @ 7:19 pm

Sorry for the off topic posts recently – here is something a little more on topic.

I don’t write about NetApp much, mainly because I believe they have some pretty decent technology – they aren’t a Pillar or an Equallogic. Though sometimes I poke fun. BTW did you hear about that senior Oracle guy that got canned recently and the comments he made about Sun? Oh my, was that funny. I can only imagine what he thought of Pillar. Then there are the folks that are saying Oracle is heavily discounting software so they can sell hardware at list price, thus propping up the revenues – the net result is Oracle software folks hate Sun. Not a good situation to be in. I don’t know why Oracle couldn’t have just been happy owning BEA’s JRockit JVM and let Sun wither away.

Anyways…

NetApp tried to make some big news recently when they released their newest OS, Ontap 8.1.1. For such a minor version number change (8.1 -> 8.1.1) they sure did try to raise a big fuss about it. Shortly after 8.1 came out I came across some NetApp guy’s blog touting this release quite heavily. I was interested in some of the finer points and tried to ask some fair technical questions – I like to know the details. Despite me being a 3PAR person I tried really hard to be polite and balanced, and the blogger was very thoughtful, informed and responsive and gave a great reply to my questions.

Anyways, I’m still sort of unclear what is really new in 8.1.1 vs 8.1 – it sounds to me like it’s just some minor changes on the technical side with some new marketing slapped on top. Well, I think the new hybrid aggregates are perhaps specifically new to 8.1.1 (also, I think, some new form of Ontap that can run in a VM for small sites). Maybe 8.1 by itself didn’t make a big enough splash. Or maybe 8.1.1 is what 8.1 was supposed to be (I think I saw someone mention that perhaps 8.1 was a release candidate or something). The SpecSFS results posted by NetApp for their clusters are certainly pretty impressive from a raw performance standpoint. They illustrate excellent scalability up to 24 nodes.

But the whole story isn’t told in the SpecSFS results – partially because things like cost are not disclosed in the results, and partially because they don’t illustrate the main weakness of the system: it’s not a single file system, and it’s not automatically balanced from either a load or a space perspective.

But I won’t harp on that much; this post is about their recent SPC-1 results which I just stumbled upon. These are the first real SPC-1 results NetApp has posted in almost four years – you sort of have to wonder what took them so long. I mean, they did release some SPC-1E results a while back but those are purely targeting energy measurements. For me at least, energy usage is probably not even in the top 5 things I look for when I want some new storage. The only time I really care about energy usage is if the installation is really, really small – I mean like the whole site being less than one rack. Energy efficiency is nice but there are a lot of things that are higher on my priority list.

This SPC-1 result from them is built using a 6-node cluster, 3TB of flash cache and 288GB of data cache spread across the controllers, and only 432 disks – 144 x 450GB per pair of controllers, protected with RAID DP. The cost given is $1.67M for the setup. They say it is list pricing – so, not being a customer of theirs, I’m not sure if it’s apples to apples compared to other setups – some folks show discounted pricing and some show list. I would think it would heavily benefit the tester to illustrate the typical price a customer would pay for the configuration.

  • 250,039 IOPS  @ 3.35ms latency  ($6.89 per SPC-1 IOP)
  • 69.8TB Usable capacity ($23,947 per Usable TB)

Certainly a very respectable I/O number and really amazing latency – I think this is the first real SPC-1 result that is flash accelerated (as opposed to being entirely flash).

What got me thinking though was the utilization. I ragged on what could probably be considered a tier 3 or 4 storage company a while back for just inching by the SPC-1 minimum efficiency requirements. The maximum unused storage cannot exceed 45% and that company was at 44.77%.

Where’s NetApp with this? Honestly, higher than I thought, especially considering RAID DP – they are at 43.20% unused storage. I mean really – would it not make more sense to simply use RAID 10 and get the extra performance? I understand that NetApp doesn’t support RAID 10 but it just seems a crying shame to have such low utilization of the spindles. I really would have expected the flash cache to allow them to drive utilization up. But I suppose they decided to inch out more performance at the cost of usable capacity. I’d honestly be fascinated to see results when they drive the unused storage ratio down to, say, 20%.

The flash cache certainly does a nice job at accelerating reads and letting the spindles run more writes as a result. Chuck over at EMC wrote an interesting post where he picked apart the latest NetApp release. What I found interesting from an outsider perspective is how so much of this new NetApp technology feels bolted on rather than integrated. They seem unable to adapt the core of their OS to this (now old) scale-out Spinnaker stuff even after this many years have elapsed. From a high level perspective the new announcements really do sound pretty cool. But once I got to know more about what’s on the inside, I became less enthusiastic about them. There’s some really neat stuff there but at the same time some pretty dreadful shortcomings in the system still (see the NetApp blog posts above for info).

The plus side though is that at least parts of NetApp are becoming more up front about where they target their technology. Some of the posts I have seen recently, both in comments on The Register as well as the NetApp blog above, have been really excellent. These posts are honest in that they acknowledge they can’t be everything to everyone, they can’t be the best in all markets. There isn’t one storage design to rule them all. As EMC’s Chuck said – compromise. All storage systems have some degree of compromise in them; NetApp always seems to have had less compromise on the features and more compromise on the design. That honesty is nice to see coming from a company like them.

I met with a system engineer of theirs about a year ago now when I had a bunch of questions to ask and I was tired of getting pitched nothing but dedupe. This guy from NetApp came out and we had a great talk for what must’ve been 90 minutes. Not once was the word dedupe used and I learned a whole lot more about the innards of the platform. It was the first honest discussion I had had with a NetApp rep in all the years I had dealt with them off and on.

At the end of the day I still wasn’t interested in using the storage, but felt that hey – if some day I really feel a need to combine the best storage hardware with what many argue is the best storage software (management headaches aside, e.g. no true active-active automagic load balanced clustering), I can – just go buy a V-series and slap it in front of a 3PAR. I did it once before (really only because there was no other option at the time). I could do it again. I don’t plan to (at least not at the company I’m at now), but the option is there. Just as long as I don’t have to deal with the NetApp team in the Northwest and their dirty underhanded threatening tactics. I’m in the Bay Area now so that shouldn’t be hard. The one surprising thing I heard from the reps here is they still can’t do evaluations, which just seems strange to me. The guy told me if a deal hinged on an evaluation he wouldn’t know what to do.

3PAR of course has no such flash cache technology shipping today, something I’ve brought up with the head of HP storage before. I have been wanting them to release something like it for some time now – more specifically something more like EMC’s FAST Cache. EMC has made some really interesting investments in flash over recent years, but like NetApp – at least for me – the other compromises involved in using an EMC platform don’t make me want to use it over a 3PAR even though they have this flash technology. I am going to be visiting 3PAR HQ soon and will learn a lot of cool things, I’m sure, that I won’t be able to talk about for some time to come.

April 10, 2012

What’s wrong with this picture?

Filed under: Datacenter, Virtualization — Nate @ 7:36 am

I was reading this article from our friends at The Register which has this picture for an entry level FlexPod from NetApp/Cisco.

It just seems wrong – I mean the networking stuff. Given NetApp’s strong push for IP-based storage, one would think an entry level solution would simply have 2 x 48-port 10gig stackable switches, or whatever Cisco’s equivalent is (maybe this is it).

This solution is supposed to provide scalability for up to 1,000 users – what those 1,000 users are actually doing I have no idea. Does it mean VDI? Database? Web site users? File serving users? ?????????????

It’s also unclear in the article if this itself scales to that level or if it just provides the minimum building blocks to scale to 1,000 users (I assume the latter) – and if so, what does a 1,000-user configuration look like? (Or put another way, how many users does the below picture support?)

I’ll be the first to admit I’m ignorant as to the details and the reason why Cisco needs 3 different devices with these things, but whatever the reason, it seems like major overkill for an entry level solution assuming the usage of IP-based storage.

The new entry level FlexPod

The choice of a NetApp FAS2000 array seems curious to me – at least given the fact that it does not appear to support that Flex Cache stuff which they like to tout so much.

November 14, 2011

NetApp challenge falls without winners

Filed under: Storage — Nate @ 9:52 am

This is just too funny.

Nobody won a million pounds of kit from NetApp because data centre nerds thought the offer was unbelievable or couldn’t be bothered with the paperwork.

NetApp UK offered an award of up to £1m in NetApp hardware, software, and services to a lucky customer that managed to reduce its storage usage by 50 per cent through using NetApp gear. The competition’s rules are still available on NetApp’s website.

For years, I have been bombarded with marketing from NetApp (indirectly from the VARs and consultants they have bribed over the years) about the efficiency of the NetApp platform, especially de-dupe, how it will save you tons of money on storage etc.

Indirectly because I absolutely refused to talk directly to the NetApp team in Seattle after how badly they treated me when I was interested in being a customer a few years ago. It seemed just as bad as EMC was (at the time). I want to be treated as a customer, not a number.

Ironically enough it was NetApp that drove me into the arms of 3PAR, long before I really understood what the 3PAR technology was all about. It was NetApp’s refusal to lend me an evaluation system for any length of time that sealed the deal for my first 3PAR purchase. Naturally, at the time 3PAR was falling head over heels at the opportunity to give us an eval, and so we did evaluate their product, an E200 at the time. That in the end directly led to 4 more array purchases (with a 5th coming soon I believe), and who knows how many more indirectly as a result of my advocacy, either here or as a customer reference.

Fortunately my boss at the time was kind of like me. When the end of the road came for NetApp – when they dropped their pants (prices) so low that most people could not have ignored it – my boss was to the point where NetApp could have given us the stuff for free and he would have still bought 3PAR. We asked for an eval for weeks and they refused us every time until the last minute.

I have no doubt that de-dupe is effective; how effective is dependent on a large number of factors. Suffice to say I don’t buy it yet myself, at least not as a primary reason to purchase primary storage for online applications.

Anyways, I think this contest, more than anything else, is a perfect example. You would think that the NetApp folks out there would have jumped on this, but there are no winners. That is too bad, but very amusing.

I can’t stop giggling.

I guess I should thank NetApp for pointing me in the right direction in the beginning of my serious foray into the storage realm.

So, thanks NetApp! 🙂

Four posts in one morning! And it’s not even 9AM yet! You’d think I have been up since 5AM writing and you’d be right.

November 8, 2011

EMC and their quad core processors

Filed under: Storage — Nate @ 8:48 am

I first heard that Fujitsu had storage maybe one and a half years ago, when someone told me that Fujitsu was one company seriously interested in buying Exanet at the time. That caused me to go look at their storage – I had no idea they had storage systems. Even today I really never see anyone mention them anywhere; my 3PAR reps say they never encounter Fujitsu in the field (at least in these territories – they suspect that over in Europe they go head to head more often).

Anyways, EMC folks seem to be trying to attack the high end Fujitsu system, saying it’s not “enterprise”. In the end, the main leg EMC has to stand on for what in their eyes is “enterprise” is mainframe connectivity, a myth Fujitsu rightly tries to debunk since there are a lot of organizations that consider themselves “enterprise” that don’t have any mainframes. It’s just stupid, but EMC doesn’t really have any other excuses.

What prompted me to write this, more than anything else, was this:

One can scale from one to eight engines (or even beyond in a short timeframe), from 16 to 128 four-core CPUs, from two to 16 backend- and front-end directors, all with up to 16 ports.

The four-core CPUs are what gets me. What a waste! I have no doubt that in EMC’s “short timeframe” they will be migrating to quad socket 10-core CPUs, right? After all, unlike someone like 3PAR who can benefit from a purpose built ASIC to accelerate their storage, EMC has to rely entirely on software. After seeing SPC-1 results for HDS’s VSP, I suspect the numbers for VMAX wouldn’t be much more impressive.

My main point is – and this just drives me mad – these big manufacturers banging the Intel CPU drum and then not exploiting the platform to its fullest extent. Quad core CPUs came out in 2007. When EMC released the VMAX in 2009, apparently Intel’s latest and greatest was still quad core. But here we are, practically 2012, and they’re still not onto at LEAST hex core yet? This is Intel architecture, it’s not that complicated. I’m not sure which quad core CPUs specifically are in the VMAX, but the upgrade from Xeon 5500 to Xeon 5600 for the most part was:

  1. Flash bios (if needed to support new CPU)
  2. Turn box off
  3. Pull out old CPU(s)
  4. Put in new CPU(s)
  5. Turn box on
  6. Get back to work

That’s the point of using general purpose CPUs!! You don’t need to pour 3 years of R&D into something to upgrade the processor.

What I’d like to see – something I mentioned in a comment recently – is a quad socket design for these storage systems. Modern CPUs have had integrated memory controllers for a long time now (well, only available on Intel since the Xeon 5500). So as you add more processors you add more memory too. (Side note: the documentation for VMAX seems to imply a quad socket design for a VMAX engine but I suspect it is two dual socket systems, since the Intel CPUs EMC is likely using are not quad-socket capable.) This page claims the VMAX uses the ancient Intel 5400 processors, which if I remember right were the first generation quad cores I had in my HP DL380 G5s many eons ago. If true, it’s even more obsolete than I thought!

Why not 8 socket, or more? Well, cost mainly. The R&D involved in an 8-socket design I believe is quite a bit higher, and the amount of physical space required is high as well. With quad socket blades commonplace, and even some vendors having quad socket 1U systems, the price point and physical size of quad socket designs are well within reach of storage systems.

So the point is, on these high end storage systems you start out with a single socket populated on a quad socket board with associated memory. Want to go faster? Add another CPU and associated memory. Go faster still? Add two more CPUs and associated memory (though I think it’s technically possible to run 3 CPUs – there have been 3-CPU systems in the past – it seems common/standard to add them in pairs). You’re spending probably at LEAST a quarter million for this system initially, probably more than that; the incremental cost of R&D to go quad socket, given this is Intel after all, is minimal.

Currently VMAX goes to 8 engines, and they say they will expand that to more. 3PAR took the opposite approach, saying that while their system is not as clustered as a VMAX is (not their words), they feel such a tightly integrated system (theirs included) becomes more vulnerable to “something bad happening” that impacts the system as a whole – more controllers is more complexity. Which makes some sense. EMC’s design is even more vulnerable, being that it’s so tightly integrated with the shared memory and stuff.

3PAR V-Class Cluster Architecture with low cost high speed passive backplane with point to point connections totalling 96 Gigabytes/second of throughput

3PAR goes even further in their design to isolate things – like completely separating the control cache, which is used for the operating system that powers the controllers and for the control data on top of it, from the data cache, which as you can see in the diagram below is only connected to the ASICs, not to the Intel CPUs. On top of that they separate the control data flow from the regular data flow as well.

One reason I have never been a fan of “stacking” or “virtual chassis” on switches is the very same reason: I’d rather have independent components that are not tightly integrated, in case “something bad” takes down the entire “stack”. Now if you’re running with two independent stacks, so that one full stack can fail without an issue, then that works around the issue, but most people don’t seem to do that. The chances of such a failure happening are low, but they are higher than something causing all of the switches to fail if the switches were not stacked.

One exception might be some problems related to STP which some people may feel they need when operating multiple switches. I’ll answer that by saying I haven’t used STP in more than 8 years, so there have been ways to build a network with lots of devices without using STP for a very long time now. The networking industry recently has made it sound like this is something new.

Same with storage.

So back to 3PAR. 3PAR changed their approach in their V-series of arrays: for the first time in the company’s history they decided to include TWO ASICs in each controller, effectively doubling the I/O processing abilities of the controller. Fewer, more powerful controllers. A 4-node V400 will likely outperform an 8-node T800. Given the system’s age, I suspect a 2-node V400 would probably be on par with an 8-node S800 (released around 2003 if I remember right).

3PAR V-Series ASIC/CPU/PCI/Memory Architecture

EMC is not alone, and not the worst abuser here though. I can cut them maybe a LITTLE slack given the VMAX was released in 2009. I can’t cut any slack to NetApp though. They recently released some new SPEC SFS results which, among other things, disclosed that their high end 6240 storage system is using quad core Intel E5540 processors. So basically a dual proc quad core system. And their lower end system is — wait for it — dual proc dual core.

Oh, I can’t describe how frustrated that makes me – these companies touting the use of general purpose CPUs and then going out of their way to cripple their systems. It would cost NetApp all of maybe, what, $1,200 to upgrade their low end box to quad cores? Maybe $2,500 for both controllers? But no, they’d rather you spend an extra, what, $50,000-$100,000 to get that functionality?

I have to knock NetApp more to some extent since these storage systems are significantly newer than the VMAX, but I knock them less because they don’t champion the Intel CPUs as much as EMC does, that I have seen at least.

3PAR is not a golden child either; their latest V800 storage system uses — wait for it — quad core processors as well. Which is just as disgraceful. I can cut 3PAR more slack because their ASIC is what provides the horsepower on their boxes, not the Intel processors, but still, that is no excuse for not using at LEAST 6-core processors. While I cannot determine precisely which Intel CPUs 3PAR is using, I know they are not ultra low power parts since the clock speeds are 2.8GHz.

Storage companies aren’t alone here; load balancing companies like F5 Networks and Citrix do the same thing. Citrix is better than F5 in that they offer software “upgrades” on their platform that unlock additional throughput. Even without the upgrade you still have full rein of all of the CPU cores on the box, which lets you run the more expensive software features that would normally otherwise impact performance. To do this on F5 you have to buy the next bigger box.

Back to Fujitsu storage for a moment: their high end box certainly seems like a very respectable system as far as paper numbers go anyways. I found it very interesting that a comment on the original article mentioned Fujitsu can run the system’s maximum capacity behind a single pair of controllers if the customer wanted to. Of course the controllers couldn’t drive all the I/O, but it is nice to see the capacity not so tightly coupled to the controllers like it is on the VMAX or even on the 3PAR platform. Especially when it comes to SATA drives, which aren’t known for high amounts of I/O – higher end storage systems such as the recently mentioned HDS, 3PAR and even VMAX tap out in “maximum capacity” long before they tap out in I/O if you’re loading the system with tons of SATA disks. It looks like Fujitsu can get up to 4.2PB of space, leaving, again, HDS, 3PAR and EMC in the dust. (Capacity utilization is another story of course.)

With Fujitsu’s ability to scale the DX8700 to 8 controllers, 128 fibre channel interfaces, 2,700 drives and 512GB of cache, that is quite a force to be reckoned with. No sub-disk distributed RAID, no ASIC acceleration, but I can certainly see how someone would be willing to put the DX8700 up against a VMAX.

EMC was way late to the 2+ controller hybrid modular/purpose built game and is still playing catch up. As I said to Dell last year, put your money where your mouth is and publish SPC-1 results for your VMAX, EMC.

With EMC so in love with Intel I have to wonder how hard they had to fight off Intel encouraging EMC to use the Itanium processor in their arrays instead of Xeons. Or has Intel given up completely on Itanium now? (Which, again, we have to thank AMD for – without AMD’s x86-64 extensions the Xeon processor line would have been dead and buried many years ago.)

For insight to what a 128-CPU core Intel-based storage system may perform in SPC-1, you can look to this system from China.

(I added a couple diagrams, I don’t have enough graphics on this site)

March 2, 2010

Avere front ending Isilon

Filed under: Storage — Nate @ 1:21 pm

UPDATED

How do all these cool people find our blog? A friendly fellow from Isilon commented that apparently the article from The Register isn’t accurate, in that Avere is front ending NetApp gear, not Isilon. But in any case I have been thinking about Avere and the Symantec stuff off and on recently anyways. END UPDATE

A really interesting article over at The Register about how Sony has deployed an Avere cluster(s) to front end their Isilon (and perhaps other) gear too. A good quote:

The thing that grabs your attention here is that Avere is being used to accelerate some of the best scale-out NAS on the planet, not bog standard filers with limited scalability.

Avere certainly has some good performance metrics (pay attention to the IOPS per physical disk), and more recently they introduced a model that runs on top of SSD. I haven’t seen any performance results for it yet but I’m sure it’s a significant boost. As The Register mentions in their article, if this technology really is good enough for this purpose it has the potential (of course) to be extremely disruptive in the industry, wreaking havoc with many of the remaining (and very quickly dwindling) smaller scale-out NAS vendors. Kind of funny really, seeing how Isilon spun the news.

From Avere’s site, in talking about comparing Spec SFS results:

A comparison of these results and the number of disks required shows that Avere used dramatically fewer disks. BlueArc used 292 disks to achieve 146,076 ops/sec with 3.34 ms ORT. Exanet used 592 disks to achieve 119,550 ops/sec with 2.07ms ORT (overall response time). HP used 584 disks to achieve 134,689 ops/sec and 2.53 ms ORT. Huawei Symantec used 960 disks to achieve 176,728 ops/sec with 1.67ms ORT. NetApp used 324 disks to achieve 120,011 ops/sec with 1.95ms ORT. By contrast, Avere used only 79 drives to achieve 131,591 ops/sec with 1.38ms ORT. Doing a little math, Avere achieves 3.3, 8.2, 7.2, 9.0, and 4.5 times more ops/sec per disk used than the other vendors.
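The per-disk math in that quote checks out; here is a quick sketch reproducing it from the numbers given above:

# Ops/sec per disk for each SpecSFS result quoted above, and how each compares
# to Avere on a per-disk basis.
results = {
    "BlueArc":         (146_076, 292),
    "Exanet":          (119_550, 592),
    "HP":              (134_689, 584),
    "Huawei Symantec": (176_728, 960),
    "NetApp":          (120_011, 324),
    "Avere":           (131_591, 79),
}
avere = results["Avere"][0] / results["Avere"][1]
for vendor, (ops, disks) in results.items():
    per_disk = ops / disks
    print(f"{vendor}: {per_disk:,.0f} ops/sec per disk ({avere / per_disk:.1f}x)")
# Yields roughly 3.3x, 8.2x, 7.2x, 9.0x and 4.5x in Avere's favour.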

Which got me thinking again: Symantec last year released their FileStore product, and my friends over at 3PAR were asking me if I was interested in it. To date I have not been, because the only performance numbers released so far have not been very efficient. And it’s still a new product, so who knows how well it works in the real world – granted, Symantec does have a history in file systems with the Veritas File System (VxFS) product.

Unfortunately there isn’t much technical info on the Filestore product on their web site.

Built to run on commodity servers and most storage arrays, FileStore is an incredibly simple-to-install soft appliance. This combination of low-cost hardware, “pay as you grow” scalability and easy administration give FileStore a significant cost advantage over specialized appliances. With support for both SAN and iSCSI storage, FileStore delivers the performance needed for the most demanding applications.

It claims N-way active-active or active-passive clustering, up to 16 nodes in a cluster, up to 2PB of storage and 200 million files per file system, which for most people is more than enough. I don’t know how it is licensed though, or how well it scales on a single node – could it run on an aforementioned 48-all-round system?

Where does 3PAR fit into this? Well, Symantec was the first company (so far the only one that I know of) to integrate Thin Reclamation into their file system, which integrates really well with 3PAR arrays at least. The file system uses some sort of SCSI command which is passed back to the array when files are deleted/reclaimed, so the I/O never hits the spindles; the array transparently re-maps the blocks to be available for use.

3PAR Thin Reclamation for Veritas Storage Foundation keeps storage volumes thin over time by allowing granular, automated, non-disruptive space reclamation within the InServ array. This is accomplished by communicating deleted block information to the InServ using the Thin Reclamation API. Upon receiving this information, the InServ autonomically frees this allocated but unused storage space. The thin reclamation capabilities provide environments using Veritas Storage Foundation by Symantec an easy way to keep their thin volumes thin over time, especially in situations where a large number of writes and deletes occur.
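To make the mechanism a little more concrete, here is a toy sketch of the idea (my own illustration – not Symantec’s or 3PAR’s actual API): the file system reports freed block ranges back to a thin-provisioned array, and the array returns them to its shared free pool.

# Toy model of thin provisioning plus thin reclamation.
class ThinArray:
    def __init__(self, total_blocks):
        self.free_pool = total_blocks   # blocks available to any volume
        self.allocated = {}             # volume name -> set of allocated block ids

    def write(self, volume, blocks):
        """Allocate physical blocks only on first write (thin provisioning)."""
        new = blocks - self.allocated.setdefault(volume, set())
        if len(new) > self.free_pool:
            raise RuntimeError("pool exhausted")
        self.free_pool -= len(new)
        self.allocated[volume] |= new

    def reclaim(self, volume, blocks):
        """File system reports deleted blocks; array gives them back to the pool."""
        released = self.allocated.get(volume, set()) & blocks
        self.allocated[volume] -= released
        self.free_pool += len(released)

array = ThinArray(total_blocks=1_000)
array.write("vol1", set(range(100)))    # file system writes 100 blocks
array.reclaim("vol1", set(range(50)))   # files deleted, 50 blocks reported free
print(array.free_pool)                  # 950 - the space goes back to the shared pool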

But I was thinking that you could front end one of these Filestore clusters with an Avere cluster and get some pretty flexible high performing storage.

Something I’d like myself to explore at some point.
