Stella & Dot relies on HP 3PAR StoreServ Storage
"Highly reliable HP 4-node storage architecture supports over 30,000 e-commerce Independent Business Owners worldwide"
This is also available on HP's case study website http://www.hp.com/go/success
I have had this blog since July 2009, and I don't believe I have ever once mentioned any of my employers' names. This will be an exception to that record.
HP came to us last year when we were in the market for their 3PAR 7450 all-flash system. There were some people in management over there who really liked our company's brand, and I'm told practically everyone in 3PAR is aware of me. So they wanted to do a case study with the company I work for on our usage of 3PAR. I have participated in one, maybe two, 3PAR case studies in the past, prior to the HP acquisition. The last one was in 2010 on a 3PAR T400. That particular company had a policy that nobody below a VP title could be quoted, so my boss's boss got all the credit even though of course I did all of the magic. Coincidentally I left that company just a few months later (for a completely different reason).
This company is different. I've had extremely supportive management for my entire four years here, and the newest management that joined in late 2013/early 2014 has been even more supportive. They really wanted me to get as much credit as possible for all the hard work I do, so it's my name all over the case study, not theirs. It's people like this, more than anything, that keep me happy in my current role; I don't see myself going anywhere anytime soon. (An added bonus is that the company is stable and, since we aren't tech oriented, I believe it will have no trouble surviving the next tech crash.)
Anyway, the experience with HP in making the case study was quite good. They are very patient: we said we needed time to work with the new system before we did the case study, and they told us to take all the time we wanted, no rush. About eight months into using the new 7450 they reached out again and we agreed to start the process.
I spent a couple of hours on the phone with them and exchanged a few emails. They converted a lot of my technical jargon into marketing jargon (I don't actually talk like that!), though the words seemed reasonable to me. The only real mistake in the article is that we don't leverage any of the 3PAR "Application Suites". I mentioned this to them, saying they could remove those references if they wished; I didn't care either way. At the end they also make reference to a support event I had with 3PAR five years ago, which was at the previous company, and they credited HP for it when technically it was well before the acquisition (I told them that as well, though it seems reasonable to credit HP to some extent, since I'd wager the same staff who performed those actions worked at HP for a while afterwards, or maybe they are still there).
I would wager that my feedback on the benefits I see with 3PAR is probably not typical among HP customers. The HP Solutions Architect assigned to our account has told me on several occasions that he believes I operate my 3PAR systems better than almost any other customer he's seen. That made me feel good, even though I already felt I operate my systems pretty well!
On that note, our SaaS monitoring service LogicMonitor is working with me to try to formalize the custom 3PAR monitoring I wrote (which gathers about 12,000 data points a minute from our three arrays) into something more customers can leverage. If they can get that done, I hope I can get HP to endorse their service for monitoring 3PAR, because it works well in general, and better than any other tool I've seen or used for my 3PAR monitoring needs at least.
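For scale, 12,000 data points a minute across three arrays is roughly 4,000 points per array per minute, or about 65 per second per array. A minimal sketch of what such a poller boils down to is below; the array names, `fetch_stats()` placeholder, and metric names are all hypothetical, not LogicMonitor's or HP's actual interfaces:

```python
# Hypothetical sketch of a per-minute array polling pass.
# fetch_stats() stands in for an SSH/CLI or API scrape of one array.
ARRAYS = ["array-1", "array-2", "array-3"]

def fetch_stats(array):
    """Placeholder: collect one array's performance counters."""
    return {"read_iops": 0, "write_iops": 0, "svc_time_ms": 0.0}

def poll_once(arrays):
    """One collection pass across all arrays."""
    return {a: fetch_stats(a) for a in arrays}

# ~12,000 data points/min over three arrays:
per_array = 12000 // len(ARRAYS)
print(per_array)  # 4000 data points per array per minute
```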
3PAR 8000 & 20450
I'm pretty excited about the new 8000-series and the new 20450 (4-node 20k series) that came out a few days ago. I would say really excited, but given 3PAR's common architecture, I was already expecting the newer form factors. I am quite happy that HP released an 8440 with the exact same specs as the 8450 (meaning 3X more data cache than the 8400, plus more and faster CPU cores); also, the 4-node 20450 has the same cache and CPU allocations per node as the all-flash 8-node 20850. This means you can get these systems without being "forced" into an all-flash configuration, since the 8450 and 20850 systems are "marketing limited" to all flash (to make people like Gartner happy).
(I'm in Vegas waiting for HP Discover to start (got here Friday afternoon). This time around I had my own company pay my way, so HP isn't providing me with the trip. Since I haven't been blogging much the past year, I didn't feel good about asking HP to cover me as a blogger.)
UPDATED - 6/4/2015
When flash first started becoming a force to be reckoned with in enterprise storage a few years ago, it was clear to me that controller performance wasn't up to the task of exploiting the performance potential of the medium. The ratio of controllers to SSDs was just way out of whack.
In my opinion this has been largely addressed in the 3PAR Generation 5 systems that are being released today.
NEW HP 3PAR 20800
I believe this replaces both the 10800/V800 (2-8 controllers) and the 10400/V400 (2-4 controllers) systems.
- 2-8 controllers, max 96 x 2.5GHz CPU cores (12/controller) and 16 Generation 5 ASICs (2/controller)
- 224GB of RAM cache per controller (1.8TB total - 768GB control / 1,024GB data)
- Up to 32TB of Flash read cache
- Up to 6PB of raw capacity (SSD+HDD)
- Up to 15PB of usable capacity w/deduplication
- 12Gb SAS back end, 16Gb FC (Max 160 ports) / 10Gb iSCSI (Max 80 ports) front end
- 10 Gigabit ethernet replication port per controller
- 2.5 Million Read IOPS under 1 millisecond
- Up to 75 Gigabytes/second throughput (read), up to 30 Gigabytes/second throughput (write)
NEW HP 3PAR 20850
This augments the existing 7450 with a high end 8-controller capable all flash offering, similar to the 20800 but with more adrenaline.
- 2-8 controllers, max 128 x 2.5GHz CPU cores (16/controller) and 16 Generation 5 ASICs (2/controller)
- 448GB of RAM cache per controller (3.6TB total - 1,536GB control - 2,048GB data)
- Up to 4PB of raw capacity (SSD only)
- Up to 10PB of usable capacity w/deduplication
- 12Gb SAS back end, 16Gb FC (Max 160 ports) / 10Gb iSCSI (Max 80 ports) front end
- 10 Gigabit ethernet replication port per controller
- 3.2 Million Read IOPS under 1 millisecond
- 75 Gigabytes/second throughput (read), 30 Gigabytes/second throughput (write)
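A quick arithmetic check on the cache figures in the two spec lists above (all values in GB, fully populated with 8 controllers):

```python
# Sanity-check the published per-controller cache vs system totals.
controllers = 8

# 20800: 224 GB RAM cache per controller
total_20800 = 224 * controllers     # 1,792 GB ~= 1.8 TB
assert total_20800 == 768 + 1024    # matches the control/data split

# 20850: 448 GB RAM cache per controller
total_20850 = 448 * controllers     # 3,584 GB ~= 3.6 TB
assert total_20850 == 1536 + 2048   # matches the control/data split

print(total_20800, total_20850)  # 1792 3584
```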
Even though flash is really, really fast, 3PAR leverages large amounts of cache to optimize writes to the back-end media, which not only improves performance but extends SSD life. If I recall correctly, nearly 100% of the cache on the all-SSD systems is write cache; reads are really cheap, so very little caching is done on reads.
It's only going to get faster
One of the bits of info I learned is that the claimed throughput by HP is a software limitation at this point. The hardware is capable of much more and performance will improve as the software matures to leverage the new capabilities of the Generation 5 ASIC (code named "Harrier 2").
Magazine sleds are no more
Since the launch of the early 3PAR high-end units more than a decade ago, 3PAR had leveraged custom drive magazines, which allowed them to scale to 40x3.5" drives in a 4U enclosure. That is quite dense, though they did not have similar ultra density for 2.5" drives. I was told that nearline drives represent around 10% of disks sold on 3PAR, so they decided to do away with these high-density enclosures and go with 2U enclosures without drive magazines. This allows them to scale to 48x2.5" drives in 4U (vs 40x2.5" in 4U before), but only 24x3.5" drives in 4U (vs 40 before). Since probably greater than 80% of the drives they ship now are 2.5", that is probably not a big deal. But as a 3PAR historian (perhaps) I found it interesting.
Along similar lines, the new 20k series systems are fully compatible with 3rd-party racks. The previous 10k series, as far as I know, was 3rd-party compatible in its 4-node variant, but the 8-node variant required a custom HP rack.
Inner guts of a 3PAR 20k-series controller
Each controller has dual internal SATA SSDs for the operating system (I assume some sort of mirroring is going on) and eight memory slots for data cache (max 32GB per slot), which are directly connected to the pair of ASICs (under the black heatsinks). Another six memory slots hold the control cache (operating system, metadata, etc.), also max 32GB per slot; those are controlled by the Intel processors under the giant heat sinks.
How much does it cost?
I'm told the new 20k series is surprisingly cost effective in the market; the price point is significantly lower than earlier-generation high-end 3PAR systems. It's priced higher than the all-flash 7450, for example, but not significantly more. Entry-level pricing is said to be in the $100,000 range, and I heard a number tossed around that was lower than that. I would assume that the blanket thin provisioning licensing model of the 7000 series extends to the new 20k series, but I am not certain.
It is only available as an 8-controller-capable system at this time, so it requires at least 16U of space for the controllers, since they are connected over the same sort of backplane as earlier models. Maybe in the future HP will release a smaller 2-4 controller-capable system, or they may leave that to whatever replaces the 7450. I hope they come out with a smaller model, because the port scalability of the 7000-series (and the F and E classes before them) is my #1 complaint about that platform; having only one PCIe expansion slot per controller is not sufficient.
NEW HP 3PAR 3.84TB cMLC SSD
HP says that this is the new SanDisk Optimus MAX 4TB drive, which is targeted at read-intensive applications. If a customer uses this drive for non-read-intensive (and non-3PAR) workloads, then the capacity drops to 3.2TB. So with 3PAR's adaptive sparing they are able to gain roughly 640GB of capacity on the drive while simultaneously supporting any workload without sacrificing anything.
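The adaptive sparing arithmetic from the paragraph above works out as follows (capacities in TB; the over-provisioning figure is my own back-of-envelope derivation from the numbers given, not an HP-published spec):

```python
# Adaptive sparing arithmetic for the new 3.84 TB cMLC drive.
raw_nand = 4.0         # SanDisk Optimus MAX nominal capacity
mixed_workload = 3.2   # usable elsewhere when provisioned for any workload
on_3par = 3.84         # usable on 3PAR with adaptive sparing

gain_tb = on_3par - mixed_workload            # capacity reclaimed
op_pct = (raw_nand - on_3par) / raw_nand * 100  # implied remaining OP
print(round(gain_tb, 2), round(op_pct, 1))    # 0.64 4.0
```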
This is double the size of the 1.92TB drive that was released last year. It will be available on at least the 20k and the 7450, most likely all other Gen4 3PAR platforms as well.
HP says this drops the effective cost of flash to $1.50/GB usable.
This new SSD comes with the same 5 year unconditional warranty that other 3PAR SSDs already enjoy.
I specifically mention the 7450 here because this new SSD effectively doubles the raw capacity of the system to 920TB of raw flash today (vs 460TB before). How many all flash systems scale to nearly a petabyte of raw flash?
NEW HP 3PAR Persistent Checksum
With the Generation 4 systems 3PAR had end-to-end T10 data integrity checking within the array itself, from the HBAs to the ASICs to the back-end ports and disks/SSDs. Today they are extending that to the host HBAs and fibre channel switches as well (I'm not sure if this extends to iSCSI connections or not).
The Generation 5 ASIC has a new line-rate SHA1 engine, which replaces the line-rate CRC engine in Generation 4 for even better data protection. I am not certain whether Persistent Checksum is Generation 5 specific (given they are extending it beyond the array, I would really expect it to be possible on Generation 4 as well).
NEW HP 3PAR Asynchronous Streaming Replication
I first heard about this almost two years ago at HP Storage Tech Day, but today it's finally here. HP adds another method of replication to the existing sets they already had:
- Synchronous replication - 0 data loss (strict latency limits)
- Synchronous long distance replication (requires 3 arrays) - 0 data loss (latency limits between two of the three arrays)
- Asynchronous replication - as low as 5 minutes of data loss (less strict latency limits)
- Asynchronous streaming replication - as low as 1 second of data loss (less strict latency limits)
HP compares this to EMC's SRDF async replication, which offers as low as 15 seconds of data loss, vs 3PAR with as low as 1 second.
If for some reason more data comes into the system than the replication link can handle, the 3PAR will automatically drop back to regular asynchronous replication mode until replication has caught up, then switch back to asynchronous streaming.
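The fallback behavior just described can be sketched roughly as a function of how far the replication link has fallen behind; the one-second threshold and the decision logic here are my illustration, not HP's actual algorithm:

```python
# Illustrative sketch (not HP's implementation) of streaming
# replication degrading to periodic async when the backlog
# outruns the link, then recovering once it catches up.
def replication_mode(backlog_mb, link_mb_per_s, catchup_threshold_s=1.0):
    """Return the mode a streaming replicator would run in,
    given the backlog queued for the replication link."""
    lag_s = backlog_mb / link_mb_per_s
    return "async-streaming" if lag_s <= catchup_threshold_s else "async-periodic"

print(replication_mode(backlog_mb=50, link_mb_per_s=100))    # async-streaming
print(replication_mode(backlog_mb=1000, link_mb_per_s=100))  # async-periodic
```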
This new feature is available on all Gen4 and Gen5 systems.
NEW Co-ordinated Snapshots
3PAR has long had the ability to snapshot multiple volumes on a single system simultaneously, and it's always been really easy to use. Now they have extended this to be able to snapshot across multiple arrays simultaneously and make them application aware (in the case of VMware initially, Exchange, SQL Server and Oracle to follow).
This new feature is available on all Gen4 and Gen5 systems.
HP 3PAR Storage Federation
Up to 60PB of usable capacity and 10 Million IOPS with zero overhead
HP has talked about Storage Federation in the past; today, with the new systems, the capacity knobs have of course been turned up a lot. They've made it easier to use than earlier versions of the software, though they don't yet have completely automatic load balancing between arrays.
This federation is possible between all Gen4 and Gen5 systems.
Benefits from ASIC Acceleration
3PAR has always used in-house custom ASICs in their systems, and these new models are no different.
The ASICs within each HP 3PAR StoreServ 20850 and 20800 Storage controller node serve as the high-performance engines that move data between three I/O buses, a four memory-bank data cache, and seven high-speed links to the other controller nodes over the full-mesh backplane. These ASICs perform RAID parity calculations on the data cache and inline zero-detection to support the system’s data compaction technologies. CRC Logical Block Guard used by T10-DIF is automatically calculated by the HBAs to validate data stored on drives with no additional CPU overhead. An HP 3PAR StoreServ 20800 Storage system with eight controller nodes has 16 ASICs totaling 224 GB/s of peak interconnect bandwidth.
NEW Online data import from HDS arrays
You are now able to do online import of data volumes from Hitachi arrays in addition to the EMC VMAX, CX4, VNX, and HP EVA systems.
HP touts the scalability of usable and raw flash capacity of these new systems + the new 3.84TB SSD against their competition:
- Consolidate thirty Pure Storage //m70 storage systems onto a single 3PAR 20850 (with 87% less power/cooling/space) ***
- Consolidate eight EMC XtremIO storage systems onto a single 3PAR 20850 (with 62% less power/cooling/space)
- Consolidate three EMC VMAX 400K storage systems onto a single 3PAR 20850 (with 85% less power/cooling/space)
HP also touts that their throughput numbers (75GB/second) are between two and ten times faster than the competition. The 7450 came in at only 5.5GB/second, so this is quite a step up.
*** HP revised their presentation at the last minute; their original claims were against the Pure 450, which was replaced by the //m70 on the same day as the 3PAR announcement. The numbers here are from memory from a couple of days ago and may not be completely accurate.
Fastest growing in the market
HP touted again that 3PAR was the fastest-growing all-flash platform in the market last year. They also said they have sold more than 1,000 all-flash systems in the first half of the year, which is more than Pure Storage sold in all of last year. In other talks with 3PAR folks specifically on market share, they say they are #1 in midrange in Europe and #2 in the Americas, with solid growth across the board for many quarters now. 3PAR is still #5 in the all-flash market; part of that is likely due to the lack of compression (see below), but I have no doubt this new generation of systems will have a big impact on the market.
Still to come
Compression remains a roadmap item; they are working on it, but it obviously is not ready for release today. Also, this marks probably the first 3PAR hardware release in more than a decade that wasn't accompanied by SPC-1 results. HP says SPC-1 is coming, and they will likely do their first SPC-2 (throughput) test on the new systems as well.
HP continues to show that its 3PAR architecture is fully capable of embracing the all-flash era and has a long life left in it. Not only are you getting the maturity of the enterprise-proven 3PAR systems (over a decade at this point), but you are not having to compromise on almost anything else related to all flash (compression being the last holdout).
Well, I suppose it is finally out, or at least in a "limited" way. NetApp apparently is releasing their ground-up rewrite all-flash product FlashRay, based on a new "MARS" operating system (not related to Ontap).
When I first heard about MARS I heard some promising things; I suppose all of those things were just part of the vision, obviously not where the product is today on launch day. NetApp has been carefully walking back expectations all year, which turned out to be a smart move, but it seems they didn't go far enough.
To me it is obvious that they felt severe market pressure and could no longer risk holding their next-gen platform back from market. It's also obvious that Ontap doesn't cut it for flash, or they wouldn't have built FlashRay to begin with.
But shipping a system that supports only a single controller (I don't care if it's a controlled release or not) and giving any customer such a system under any circumstance other than alpha-quality testing just seems absurd.
The "vision" they have is still a good one, on paper anyway. I'm really curious how long it takes them to execute on that vision, given the time it took to integrate the Spinnaker stuff into Ontap. Will it take several years?
In the meantime, while you're waiting for that vision to come to fruition, I wonder what NetApp will offer to get people to want to use this product versus any one of the competing solutions out there. Perhaps by the time the vision is complete, this first or second generation of systems will be obsolete anyway.
The current FlashRay system seems to ship with less than 10TB of usable flash (in one system).
On a side note, there was some chatter recently about an upcoming EMC XtremIO software update that apparently requires total data loss (or backup and restore) to perform. I suppose that is a sign that the platform is 1) not mature and 2) not designed right (not fully virtualized).
I told 3PAR management back at HP Discover that three years ago they could have counted me among the people who did not believe the 3PAR architecture would be able to adapt to this new era of all flash. I really didn't have confidence at that time. What they've managed to accomplish over the past two years, though, has just blown me away, and gives me confidence their architecture has many years of life left in it. The main bit still missing is compression, though that is coming.
My new all-flash array is of course a 7450, starting with 4 controllers and ~27TB raw flash (16x1.92TB SSDs), plus a pair of disk shelves so I can go to as much as ~180TB raw flash (in 8U) without adding any shelves (before compression/dedupe of course). Cost per GB is obviously low (relative to the competition), performance is high (~105k IOPS @ 90% write in RAID 10 @ sub-1ms latency -- roughly 20-fold faster than our existing 3PAR F200 with 80x15k RPM drives in RAID 5 -- yes, my workloads are over 90% write from a storage perspective), and it runs the mature, battle-hardened 3PAR OS (formerly named InForm OS).
(I don't know if I need one of those disclaimer things at the top here saying that HP paid for my hotel and such in Vegas for Discover, because I learned about this before I got here and was going to write about it anyway; but in any case, now you know.)
All about Flash
The 3PAR announcements at HP Discover this week are all about HP 3PAR's all-flash array, the 7450, which was announced at last year's Discover event in Las Vegas. HP has tried hard to convince the world that the 3PAR architecture is competitive even in the new world of all flash. Several of the other big players in storage, namely EMC, NetApp, and IBM, have all either acquired companies specializing in all flash or, in the case of NetApp, acquired one while simultaneously building a new system (apparently called FlashRay, which folks think will be released later in the year).
Dell and HDS, like HP, have decided not to do that, instead relying on in-house technology for all-flash use cases. Of course there have been a ton of all-flash startups, all trying to be market disruptors.
So first a brief recap of what HP has done with 3PAR to-date to optimize for all flash workloads:
- Faster CPUs, doubling of the data cache (7400 vs 7450)
- Sophisticated monitoring and alerting with SSD wear leveling (alert at 90% of max endurance, force fail the SSD at 95% max endurance)
- Adaptive Read cache - only read what you need, does not attempt to read ahead because the penalty for going back to the SSD is so small, and this optimizes bandwidth utilization
- Adaptive write cache - only write what you require; if 4kB of a 16kB page is written, then the array only writes 4kB, which reduces wear on the SSD, which typically has a shorter life span than spinning rust.
- Autonomic cache offload - more sophisticated cache flushing algorithms (this particular one has benefits for disk-based 3PARs as well)
- Multi-tenant I/O processing - multi-threaded cache flushing, supporting both large (typically sequential) and small (typically random) I/O sizes simultaneously in an efficient manner; it separates the large I/Os into more efficient small I/Os for the SSDs to handle.
- Adaptive sparing - basically allows them to unlock hidden storage capacity (upwards of 20%) on each SSD to use for data storage without compromising anything.
- Optimized the 7xxx platform by leveraging PCI Express' Message Signaled Interrupts, which allowed the system to reach a staggering 900,000 IOPS at 0.7 millisecond response times (caveat: that is a 100% read workload)
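The adaptive write cache item above can be sketched as bookkeeping over 4kB sub-pages of a 16kB page: track which sub-pages are dirty and flush only those, rather than the whole page. This toy model is my own illustration, not HP's implementation:

```python
# Sketch of the adaptive write cache idea: flush only the dirty
# 4 kB sub-pages of a 16 kB cache page, not the whole page.
PAGE_KB, SUB_KB = 16, 4

def dirty_subpages(writes):
    """writes: list of (offset_kb, length_kb) within one 16 kB page.
    Returns the set of 4 kB sub-page indexes that must be flushed."""
    dirty = set()
    for off, length in writes:
        first = off // SUB_KB
        last = (off + length - 1) // SUB_KB
        dirty.update(range(first, last + 1))
    return dirty

# A single 4 kB write dirties one sub-page, so only 4 kB hits the SSD.
flushed = dirty_subpages([(4, 4)])
print(len(flushed) * SUB_KB)  # 4 (kB written to flash, not 16)
```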
I learned at HP Storage Tech Day last year that among the features 3PAR was working on were:
- In line deduplication for file and block
- File+Object services running directly on 3PAR controllers
There were no specifics given at the time.
Well part of that wait is over.
In what I believe is an industry exclusive, 3PAR has somehow managed to find some spare silicon in their now 3-year-old Gen4 ASIC to provide completely CPU-offloaded inline deduplication for transactional workloads on their 7450 all-flash array.
They say the software will typically return 4:1 to 10:1 data reduction. This is not meant to compete against HP StoreOnce, which offers much higher levels of data reduction; this is for transaction processing (which StoreOnce cannot do) and primarily to reduce the cost of operating an all-flash system.
It has been interesting to see 3PAR evolve, as a customer of theirs for almost eight years now. I remember when NetApp came out and touted deduplication for transactional workloads, and 3PAR didn't believe in the concept due to the performance hit you would take (and they did take).
Now they have line-rate (I believe) hardware deduplication, so that argument no longer applies. The caveat, at least for this announcement, is that the feature is limited to the 7450. Nothing technically prevents it from reaching their other Gen4 systems, whether the 7200, 7400, 10400, or 10800, but support for those is not mentioned yet. I imagine 3PAR is beating their drum to those out there who might still be discounting 3PAR because they have a unified architecture across AFA, hybrid flash/disk, and disk-only systems (like mine).
One of 3PAR's main claims to fame is that you can crank up a lot of their features without impacting system performance, because most of the work is performed by the ASIC. It is nice to see that they have been able to continue this trend; while the capability obviously wasn't introduced on day one with the Gen4 ASIC, it does not require customers to wait for, or upgrade their existing systems to, the next-generation ASIC (whenever Gen5 comes out; I'd wager December 2015) to get this functionality.
The deduplication operates using fixed page sizes that are 16kB each, which is a standard 3PAR page size for many operations like provisioning.
For 3PAR customers note that this technology is based on Common Provisioning Groups(CPG). So data within a CPG can be deduplicated. If you opt for a single CPG on your system and put all of your volumes on it, then that effectively makes the deduplication global.
This is a patented approach which allows 3PAR to use significantly less memory than would be otherwise required to store lookup tables.
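Putting the pieces above together (16kB fixed pages, deduplication scoped to a CPG), a toy model might look like the following. The data structures here are purely illustrative; 3PAR's patented memory-efficient lookup tables are certainly nothing this naive, and the hash choice is mine:

```python
import hashlib

# Toy model of fixed-page inline dedup scoped to a CPG.
PAGE = 16 * 1024  # 16 kB fixed page size, per the text above

class CPG:
    """A Common Provisioning Group holding deduplicated pages."""
    def __init__(self):
        self.store = {}  # digest -> physical page data
        self.refs = {}   # digest -> logical reference count

    def write_page(self, data):
        assert len(data) == PAGE
        digest = hashlib.sha1(data).digest()
        if digest not in self.store:       # new unique page
            self.store[digest] = data
        self.refs[digest] = self.refs.get(digest, 0) + 1
        return digest

cpg = CPG()
for _ in range(10):                        # ten identical logical pages
    cpg.write_page(b"x" * PAGE)
print(len(cpg.store), sum(cpg.refs.values()))  # 1 10
```

Ten logical copies of the same page consume one physical page, which is the 10:1 end of the reduction range HP quotes. Scoping the table to a CPG is also why a single system-wide CPG makes the dedup effectively global.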
Thin clones are basically the ability to deduplicate VM clones (I imagine this needs hypervisor integration like VAAI) for faster deployment. So you could probably deploy clones at 5-20x the speed you could before.
NetApp here too has been touting a similar approach for a few years on their NFS platform anyway.
Two Terabyte SSDs
Well, almost 2TB: coming in at 1.92TB, these are actually 1.6TB cMLC SSDs, but the aforementioned adaptive sparing allows 3PAR to bump the usable capacity of the device way up without compromising any aspect of data protection or availability.
I'll also quote the aforementioned PDF:
The 1920GB is available only in the StoreServ 7450 until the end of September 2014.
It will then be available in other models as well October 2014.
These SSDs come with a five-year unconditional warranty, which is better than the included warranty on disks on 3PAR (three years). This 5-year warranty is extended to the 480GB and 920GB MLC SSDs as well. Assuming SanDisk is indeed the supplier, as they claim, the 5-year warranty exceeds the manufacturer's own 3-year warranty.
These are technically consumer grade; however, HP touts their sophisticated flash features that make the media effectively more reliable than it might otherwise be in another architecture, and that claim is backed by the new unconditional warranty.
These are much more geared towards reads vs writes, and are significantly lower cost on a per GB basis than all previous SSD offerings from HP 3PAR.
The cost impact of these new SSDs is pretty dramatic, with the per GB list cost dropping from about $26.50 this time last year to about $7.50 this year.
These new SSDs allow for up to 460TB of raw flash on the 7450, which HP claims is seven times more than Pure Storage (a massively funded AFA startup) and 12 times more than a four-brick EMC XtremIO system.
With deduplication the 7450 can get upwards of 1.3PB of usable flash capacity in a single system along with 900,000 read I/Os with sub millisecond response times.
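The implied reduction ratio and cost trend behind those claims are easy to back out; these are my derivations from the figures quoted above, not HP-published ratios:

```python
# Implied data-reduction ratio behind the 7450 capacity claims.
raw_tb = 460       # max raw flash with the 1.92 TB SSDs
usable_tb = 1300   # ~1.3 PB usable claimed with deduplication
print(round(usable_tb / raw_tb, 1))   # ~2.8 : 1 effective reduction

# List cost per GB: ~$26.50 a year ago vs ~$7.50 now
drop_pct = (26.50 - 7.50) / 26.50 * 100
print(round(drop_pct, 1))             # ~71.7 % drop year over year
```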
Dell Compellent about a year or so ago updated their architecture to leverage what they called read optimized low cost SSDs, and updated their auto tiering software to be aware of the different classes of SSDs. There are no tiering enhancements announced today, in fact I suspect you can't even license the tiering software on a 3PAR 7450 since there is only one "tier" there.
So what do you get when you combine this hardware accelerated deduplication and high capacity low cost solid state media?
Solid state at less than $2/GB usable
HP says this puts solid state costs roughly in line with that of 15k RPM spinning disk. This is a pretty impressive feat. Not a unique one, there are other players out there that have reached the same milestone, but that is obviously not the only arrow 3PAR has in their arsenal.
That arsenal is what HP believes is the reason you should go 3PAR for your all-flash workloads. Forget about the startups, forget about EMC's XtremIO, forget about NetApp FlashRay, forget about IBM's TMS flash systems, etc.
Six nines of availability, guaranteed
HP is now willing to put their money where their mouth is and sign a contract that guarantees six nines of availability on any 4-node 3PAR system (I originally thought it was 7450-specific; it is not). That is a very bold statement to make, in my opinion. It obviously comes as the result of an architecture that has been refined over roughly fifteen years and has some really sophisticated availability features, including:
- Persistent ports - very rapid fail over of host connectivity for all protocols in the event of planned or unplanned controller disruption. They have laser loss detection for fibre channel as well which will fail over the port if the cable is unplugged. This means that hosts do not require MPIO software to deal with storage controller disruptions.
- Persistent cache - rapid re-mirroring of cache data to another node in the event of planned or unplanned controller disruption. This prevents the system from going into "write through" mode which can otherwise degrade performance dramatically. The bulk of the 128 GB of cache(4-node) on a 7450 is dedicated to writes (specifically optimizing I/O for the back end for the most efficient usage of system resources).
- The aforementioned media wear monitoring and proactive alerting (for flash anyway)
They have other availability features that span systems(the guarantee does not require any of these):
- Synchronous short range, and long range(3 site) replication
- Peer persistence - a pair of 3PAR arrays act as active-active for a VMware cluster with zero downtime in the event of a failure.
I would bet that you'll have to follow very strict guidelines to get HP to sign on the dotted line, no deviation from supported configurations. 3PAR has always been a stickler for what they have been willing to support, for good reason.
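For perspective on what that guarantee actually promises, six nines is a very small annual downtime budget:

```python
# Downtime budget implied by a 99.9999% availability guarantee.
availability = 0.999999
seconds_per_year = 365 * 24 * 3600        # 31,536,000

downtime_s = seconds_per_year * (1 - availability)
print(round(downtime_s, 1))  # ~31.5 seconds of downtime per year
```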
HP won't sign on the dotted line for this one, but with the previously released Priority Optimization, customers can guarantee their applications:
- Performance minimum threshold
- Performance maximum threshold (rate limiting)
- Latency target
All in a very flexible manner. These capabilities, combined, are I believe still unique in the industry (some folks can do rate limiting alone).
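To illustrate how those three controls might compose, here is a sketch of a QoS decision for one application; the enforcement logic is my own guess at how a floor, a cap, and a latency target could interact, not HP's Priority Optimization algorithm:

```python
# Illustrative QoS sketch: cap demand at the maximum, never drop
# below the minimum, and shed load when latency overshoots target.
def allowed_iops(demand, minimum, maximum, latency_ms, target_ms):
    """Grant IOPS between the floor and the cap; back off toward
    the floor when observed latency exceeds the latency target."""
    grant = min(demand, maximum)
    if latency_ms > target_ms:     # overloaded: shed load, keep floor
        grant = max(minimum, grant // 2)
    return max(grant, minimum)

print(allowed_iops(demand=20000, minimum=2000, maximum=10000,
                   latency_ms=0.5, target_ms=1.0))  # 10000 (capped)
print(allowed_iops(demand=20000, minimum=2000, maximum=10000,
                   latency_ms=3.0, target_ms=1.0))  # 5000 (backing off)
```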
Online import from EMC VNX
This was announced about a month or so ago; basically, HP makes it easy to import data from an EMC VNX without any external appliances, professional services, or performance impact.
This product (which is basically a plugin written to interface with the VNX/CX's SMI-S management interface) went generally available, I believe, this past Friday. I watched a full demo of it at Discover and it was pretty neat. It does require direct fibre channel connections between the EMC and 3PAR systems, and it does require (in the case of Windows anyway, which was the demo) two outages on the server side:
- Outage 1: remove EMC PowerPath. Due to some damage PowerPath leaves behind, you must also uninstall and reinstall the Microsoft MPIO software using the standard control panel method. These steps require a restart.
- Outage 2: Configure Microsoft MPIO to recognize 3PAR (requires restart)
Once the 2nd restart is complete the client system can start using the 3PAR volumes as the data migrates in the background.
So "online import" may not be the right term for it, since the system does have to go down, at least in the Windows configuration.
The import process currently supports Windows and Linux. The product came about, I believe, as a result of the end-of-life status of the MPX200 appliance HP had been using to migrate data. Needing something to replace that functionality, they were able to take the Peer Motion technology already used on 3PAR for importing data from HP EVA storage and extend it to EMC. They are evaluating the possibility of extending this to more platforms; I guess VNX/CX was an easy first target, given there are a lot of old ones out there and there isn't an easier migration path than the EMC-to-3PAR import tool (which is apparently significantly easier and less complex than EMC's own options). One of the benefits HP touts of their approach is that it has no impact on host performance, as the data goes directly between the arrays.
The downside to this direct approach is that the 7000-series of 3PAR arrays are very limited in port counts, especially if you happen to have iSCSI HBAs in them (as were the F and E classes before them). The 10000-series has a ton of ports though. I learned last year at HP Storage Tech Day that HP was looking at possibly shrinking the bigger 10000-series controllers for the next round of mid range systems (rather than making the 7000-series controllers bigger) in an effort to boost expansion capacity. I'd like to see at least 8 FC host ports and 2 iSCSI host ports per controller on the mid range. Currently you get only 2 FC host ports if you have an iSCSI HBA installed in a 7000-series controller.
The tool is command line based, there are a few basic commands it does and it interfaces directly with the EMC and 3PAR systems.
The tool is free of charge as far as I know, and while HP likes to tout that no professional services are required, they say some customers (especially at larger scales) may need assistance planning migrations; if that is the case then HP has services ready to hold your hand through every step of the way.
What I'd like to see from 3PAR still
Read and write SSD caches
I've talked about it for years now, but still want to see a read (and write - my workloads are 90%+ write) caching system that leverages high endurance SSDs on 3PAR arrays. HP announced SmartCache for Proliant Gen8 systems I believe about 18 months ago with plans to extend support to 3PAR but that has not yet happened. 3PAR is well aware of my request, so nothing new here. David did mention that they do want to do this still, no official timelines yet. Also it sounded like they will not go forward with the server-side SmartCache integration with 3PAR (I'd rather have the cache in the array anyway and they seem to agree).
3PAR 7450 SPC-1
I'd like to see SPC-1 numbers for the 7450, especially with these new flash media; it ought to provide some pretty compelling cost and performance numbers. You can see some recent performance testing that was done (and that wasn't 100% read) on a four node 7450 on behalf of HP.
Demartek also found that the StoreServ 7450 was not very heavily taxed with a single OLTP database accessing the array. As a result, we proceeded to run a combination of database workloads including two online transaction processing (OLTP) workloads and a data warehouse workload to see how well this storage system would handle a fairly heavy, mixed workload.
HP says the lack of SPC-1 comes down to priorities, it's a decent amount of work to do the tests and they have had people working on other things, they still intend to do them but not sure when it will happen.
Would like to see compression support, not sure whether or not that will have to wait for Gen5 ASIC or if there are more rabbits hiding in the Gen4 hat.
Certainly want to see deduplication come to the other Gen4 platforms. HP touts a lot about the competition's flash systems being silos. I'll let you in on a little secret - the 3PAR 7450 is a flash silo as well. Not a technical silo, but one imposed on the product by marketing. While the reasons behind it are understandable, it is unfortunate that HP feels compelled to limit the product to appease certain market observers.
I was expecting a CPU refresh on the 7450, which was launched with an older generation of processor because HP didn't want to wait for Intel's newest chip to launch their new storage platform. I was told last year the 7450 is capable of operating with the newer chip, so it should just be a matter of plugging it in and doing some testing. That is supposed to be one of the benefits of using x86 processors: you don't need to wait years to upgrade. HP says the Gen4 ASIC is not out of gas; the performance numbers to-date are limited by the CPU cores in the system, so faster CPUs would certainly benefit the system further without much cost.
At the end of the day the 3PAR flash story has evolved into one of no compromises. You get the low cost flash, you get the inline hardware accelerated deduplication, you get the high performance with multitenancy and low latency - you get all of that and you're not compromising on any other tier 1 capabilities (too many to go into here, you can see past posts for more info). You're getting a proven architecture that has matured over the past decade, a common operating system, and the only storage platform that leverages custom ASICs to give uncompromising performance even with the bells & whistles turned on.
The only compromise here is you had to read all of this and I didn't give you many pretty pictures to look at.
When is clustering, clustering?
NetApp is running the latest Ontap 8.2 in cluster mode I suppose, though there is only a single pair of nodes in the tested cluster. I've never really considered this a real cluster, it's more of a workgroup of systems. Volumes live on a controller (pair) and can be moved around if needed, they probably have some fancy global management thing for the "cluster" but it's just a collection of storage systems that are only loosely integrated with each other. I like to compare the NetApp style of clustering to a cluster of VMware hosts (where the VMs would be the storage volumes).
This strategy has its benefits as well, the main one being less likelihood that the entire cluster could be taken down by a failure (normally I'd consider such a failure to be triggered by a software fault). This is the same reason why 3PAR has elected to-date not to go beyond 8 nodes in their cluster; the risk/return is not worth it in their mind. In their latest generation of high end boxes 3PAR decided to double up the ASICs to give them more performance/capacity rather than add more controllers, though technically there is nothing stopping them from extending the cluster further (to my knowledge).
The downside to workgroup style clustering is that optimal performance is significantly harder to obtain.
3PAR clustering is vastly more sophisticated and integrated by comparison. To steal a quote from their architecture document -
The HP 3PAR Architecture was designed to provide cost-effective, single-system scalability through a cache-coherent, multi-node, clustered implementation. This architecture begins with a multi-function node design and, like a modular array, requires just two initial Controller Nodes for redundancy. However, unlike traditional modular arrays, an optimized interconnect is provided between the Controller Nodes to facilitate Mesh-Active processing. With Mesh-Active controllers, volumes are not only active on all controllers, but they are autonomically provisioned and seamlessly load-balanced across all systems resources to deliver high and predictable levels of performance. The interconnect is optimized to deliver low latency, high-bandwidth communication and data movement between Controller Nodes through dedicated, point-to-point links and a low overhead protocol which features rapid inter-node messaging and acknowledgement.
Sounds pretty fancy right? It's not something that is for high end only. They have extended the same architecture down as low as a $25,000 entry level price point on the 3PAR 7200 (that price may be out of date, it's from an old slide).
I had the opportunity to ask what seemed to be a NetApp expert on some of the finer details of clustering in Ontap 8.1 (latest version is 8.2) a couple of years ago and he provided some very informative responses.
Anyway, on to the results. After reading up on them it was hard for me not to compare them with the now five year old 3PAR F400 results.
Also I want to point out that the 3PAR F400 is End of Life, and is no longer available to purchase as new as of November 2013 (support on existing systems continues for another couple of years).
| | NetApp | 3PAR F400 (hey, it's an actual cluster) |
|---|---|---|
| Physical storage capacity | 86,830 GB | 56,377 GB |
| Unused storage ratio (may not exceed 45%) | 42% | ~0% |
| Disk size and count | 192 x 450GB 10k RPM | 384 x 146GB 15k RPM |
| Data cache | 64GB data cache + 1,024GB Flash cache | 24GB data cache |
I find the comparison fascinating myself, at least. It is certainly hard to compare the pricing, given the 3PAR results are five years old and the 3PAR mid range pricing model changed significantly with the introduction of the 7000 series in late 2012. I believe the pricing 3PAR provided to SPC-1 was discounted (I can't find an indication either way, I just believe that based on my own 3PAR pricing from back then) vs NetApp's which is list (it says so in the document). But again, hard to compare pricing given the massive difference in elapsed time between tests.
Unused storage ratio
What is this number and why is there such a big difference? Well this is a SPC-1 metric and they say in the case of NetApp:
Total Unused Capacity (36,288.553 GB) divided by Physical Storage Capacity (86,830.090 GB) and may not exceed 45%.
An unused storage ratio of 42% is fairly typical for NetApp results.
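As a quick sanity check, that ratio can be reproduced from the two figures SPC-1 discloses (the GB numbers come straight from the quote above):

```python
# SPC-1 unused storage ratio: total unused capacity divided by
# physical storage capacity (figures from the NetApp disclosure above)
total_unused_gb = 36_288.553
physical_gb = 86_830.090

ratio = total_unused_gb / physical_gb
print(f"{ratio:.1%}")  # 41.8% - roughly the 42% figure, just under the 45% ceiling
```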
In the case of 3PAR, you have to go to the bigger full disclosure document (72 pages), as the executive summary has evolved over time and that specific quote is not in the 3PAR side of things.
So for 3PAR F400 SPC says:
The Physical Storage Capacity consisted of 56,377.243 GB distributed over 384 disk drives each with a formatted capacity of 146.816 GB. There was 0.00 GB (0.00%) of Unused Storage within the Physical Storage Capacity. Global Storage Overhead consisted of 199.071 GB (0.35%) of Physical Storage Capacity. There was 61.203 GB (0.11%) of Unused Storage within the Configured Storage Capacity. The Total ASU Capacity utilized 99.97% of the Addressable Storage Capacity resulting in 6.43 GB (0.03%) of Unused Storage within the Addressable Storage Capacity.
The full disclosure document is not (yet) available for NetApp as of 2/21/2014. It most certainly will become available at some point.
The metrics above and beyond the headline numbers is one of the main reasons I like SPC-1.
With so much wasted space on the NetApp side it is confusing to me why they don't just use RAID 1 (I think the answer is they don't support it).
Benefits from cache
The NetApp system is able to leverage its terabyte of flash cache to accelerate what is otherwise a slower set of 10k RPM disks, which is nice for them.
They also certainly have much faster CPUs, and more than double the data cache (3PAR's architecture isolates data cache from the operating system, so I am not sure how much memory on the NetApp side is actually used for data cache vs operating system/meta data etc). 3PAR by contrast has their proprietary ASIC which is responsible for most of the magic when it comes to data processing on their systems.
3PAR does not have any flash cache capabilities so they do require (in this comparison) double the spindle count to achieve the same performance results. Obviously in a newer system configuration 3PAR would likely configure a system with SSDs and sub LUN auto tiering to compensate for the lack of a flash based cache. This does not completely compensate however, and of course I have been hounding 3PAR and HP for at least four years now to develop some sort of cache technology that leverages flash. They announced SmartCache in December 2012 (host-based SSD caching for Gen8 servers), however 3PAR integration has yet to materialize.
However keep in mind the NetApp flash cache only accelerates reads. If you have a workload like mine which is 90%+ write the flash cache doesn't help.
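To put a rough number on that (my own back-of-envelope, not a vendor figure): Amdahl's law caps the overall benefit of a read-only cache by the fraction of I/O it can touch, even if every cached read were instantaneous.

```python
# With a 90%+ write workload, a read-only cache can touch at most ~10% of I/O.
# Amdahl's law gives an upper bound on overall speedup, assuming cached
# reads become infinitely fast (the most generous possible assumption).
p_reads = 0.10                 # fraction of I/O the cache can accelerate
bound = 1 / (1 - p_reads)      # best-case overall speedup
print(f"{bound:.2f}x")         # 1.11x - why flash cache barely helps here
```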
NetApp certainly makes good systems, they offer a lot of features, and have respectable performance. The systems are very very flexible and they have a very unified product line up (same software runs across the board).
For me personally after seeing results like this I feel continually reassured that the 3PAR architecture was the right choice for my systems vs NetApp (or other 3 letter storage companies). But not everyone's priorities are the same. I give NetApp props for continuing to support SPC-1 and being public with their numbers. Maybe some day these flashy storage startups will submit SPC-1 results.......not holding my breath though.
First off, sorry about the lack of posts, there just hasn't been very much in tech that has inspired me recently. I'm sure part of the reason is my job has been fairly boring for a long time now, so I'm not being exposed to a whole lot of stuff. I don't mind that trade off for the moment - still a good change of pace compared to past companies. Hopefully 2014 will be a more interesting year.
3PAR still manages to create some exciting news every now and then and they seem to be on a 6-month release cycle now, far more aggressive than they were pre acquisition. Of course now they have far more resources. Their ability to execute really continues to amaze me, whether it is on the sales or on the technology side. I think technical support still needs some work though. In theory that aspect of things should be pretty easy to fix; it's just a matter of spending the $$ to get more good people. All in all though they've done a pretty amazing job at scaling 3PAR up - basically they are doing more than 10X the revenue they had before the acquisition, in just a matter of a few short years.
This all comes from HP Discover - there is a bit more to write about but per usual 3PAR is the main point of interest for myself.
Turbo-charging the 3PAR 7000
Roughly six months ago 3PAR released their all-flash array, the 7450, which was basically a souped up 7400 with faster CPUs, double the memory, software optimized for SSDs, and a self imposed restriction that they would only sell it with flash (no spinning rust).
At the time they said they were still CPU bound and that their in house ASIC was nowhere near being taxed to the limit. Simultaneously they could not put more (or more powerful) CPUs in the chassis due to cooling restraints in the relatively tiny 2U package that a pair of controllers come in.
Given the fine grained software improvements they released earlier this year I (along with probably most everyone else) was not expecting that much more could be done. You can read in depth details, but highlights included:
- Adaptive read caching - mostly disabling read caching for SSDs, at the same time disabled prefetching of other blocks. SSDs are so fast that there is little benefit to doing either. Not caching reads to SSDs has a benefit of dedicating more of the cache to writes.
- Adaptive write caching - with disks 3PAR would write an entire 16kB block to disk because there is no penalty for doing so. With SSDs they are much more selective in only writing the small blocks that changed, they will not write 16kB if only 4kB has changed because there is no penalty with SSDs like there are with disks.
- Autonomic cache offload - More sophisticated cache management algorithms
- Multi tenant improvements - Multi threaded cache flushing, breaking up large sequential I/O requests into smaller chunks for the SSDs to ingest at a faster rate. 3PAR has always been about multi tenancy.
Net effect of all of these are more effective IOPS and throughput, more efficiency as well.
With these optimizations, the 7450 was rated at roughly 540,000 IOPS @ 0.6ms read latency (100% read). I guesstimated based on the SPC-1 results from the 7400 that a 7450 could perhaps reach around 410,000 IOPS. Just a guess though..
So imagine my surprise when they come out and say the same system with the same CPUs, memory etc is now performing at a level of 900,000 IOPS with a mere 0.7 milliseconds of latency.
The difference? Better software.
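The arithmetic behind that claim is worth spelling out - same controllers, same memory, and the gain comes purely from software:

```python
# Rated 100% read performance of the 3PAR 7450 before and after
# the MSI-X software work (figures quoted in the text above)
before_iops = 540_000   # @ 0.6 ms read latency
after_iops = 900_000    # @ 0.7 ms, identical hardware

print(f"{after_iops / before_iops:.2f}x")  # 1.67x from software alone
```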
Mid range I/O scalability
[Table: mid range I/O scalability - 100% random read performance for the 3PAR F200 and newer mid range systems, broken out by back end (between disks and controllers) and front end (between hosts and controllers); some cells marked "Not possible"]
Stop interrupting me
What allowed 3PAR to reach this level of performance is leveraging a PCI Express feature called Message Signaled Interrupts - specifically MSI-X - which Wikipedia describes as:
MSI-X (first defined in PCI 3.0) permits a device to allocate up to 2048 interrupts. The single address used by original MSI was found to be restrictive for some architectures. In particular, it made it difficult to target individual interrupts to different processors, which is helpful in some high-speed networking applications. MSI-X allows a larger number of interrupts and gives each one a separate target address and data word. Devices with MSI-X do not necessarily support 2048 interrupts but at least 64 which is double the maximum MSI interrupts.
I'm not a hardware guy to this depth for sure. But I did immediately recognize MSI-X from a really complicated troubleshooting process I went through several years ago with some Broadcom network chips on Dell R610 servers (though the issue wasn't Dell specific). It ended up being a bug with how the Broadcom driver was handling (or not) MSI-X (Redhat bug here). It took practically a year of (off and on) troubleshooting before I came across that bug report. The solution was to disable MSI-X via a driver option (which apparently the Dell-supplied drivers set by default, while the OS-supplied drivers did not).
So some fine grained kernel work improving interrupts gave them a 1.6 fold improvement in performance.
This performance enhancement applies to the SAS-based 3PAR 7000-series only; the 10000-series had equivalent functionality already in place, and the previous generations (F/T platforms) are PCI-X based (and I believe are all in their end of life phases), while this is a PCI Express specific optimization. I think this level of optimization might really only help SSD workloads, as they push the controllers to the limit, unlike spinning rust.
This optimization also reduces latency on the system by 25% - because the CPU is no longer being interrupted nearly as often, it can not only do more work but do the work faster too.
Give me more!
There are several capacity improvements here as well.
There are new 480GB and 920GB SSDs available, which takes the 3PAR 4-node 7400/7450 to a max raw capacity of 220TB (up from 96TB) on up to 240 SSDs.
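The new raw capacity figure follows directly from the drive count and the larger SSD size:

```python
# Max raw flash capacity on a 4-node 7400/7450 with the new 920GB SSDs
ssd_count = 240
ssd_size_gb = 920

raw_tb = ssd_count * ssd_size_gb / 1000
print(raw_tb)  # 220.8 - the ~220TB raw figure quoted above
```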
Bigger entry level
The 3PAR 7200's spindle capacity is being increased by two thirds - from 144 drives to 240 drives. The 7200 is equipped with only 8GB of data cache (4GB per controller - it is, I believe, the first/only 3PAR system with more control cache than data cache), though it still makes a good low cost bulk data platform with support for up to 400TB of raw storage behind two controllers (which is basically the capacity of the previous generation's 4-node T400, which had 48GB of data cache, 16GB of control cache, 24 CPU cores, and 4 ASICs - obviously the T400 had a much higher price point!).
Not a big shocker here just bigger drives - 4TB Nearline SAS is now supported across the 7k and 10k product lines, bringing the high end 10800 array to support 3.2PB of raw capacity, and the 7400 sporting up to 1.1PB now. These drives are obviously 3.5" so on the 7000 series you'll need the 3.5" drive cages to use them - the 10k line uses 3PAR's custom enclosures which support both 2.5" and 3.5" (though for 2.5" drives the enclosures are not compact like they are on 7k).
I was told at some point that the 3PAR OS would start requiring RAID 6 on volumes residing on nearline drives - perhaps that point is now (I am not sure). I was also told you would be able to override this at an OS level if you wish; the parallel chunklet architecture recovers from failures far faster than competing architectures. Obviously with the distributed architecture on 3PAR you are not losing any spindles to dedicated spares nor dedicated parity drives.
If you are really paranoid about disk failures you can on a per-volume basis if you wish use quadruple mirroring on a 3PAR system - which means you can lose up to 75% of the disks in the system and still be OK on those volume(s).
3PAR also uses dynamic sparing -- if the default spare reserve space runs out, and you have additional unwritten capacity(3PAR views capacity as portions of drives, not whole drives) the system can sustain even more disk failures without data loss or additional overhead of re-configuration etc.
Like almost all things on 3PAR the settings can be changed on the fly without application impact and without up front planning or significant effort on the part of the customer.
The 3PAR 10400 has received a memory boost - doubling its memory configuration from the original. Basically it seems like they decided it was a better idea to unify the 10800 and 10400 controller configurations, though the data sheet seems to have some typos in it (pending clarification). I believe the numbers are 96GB of cache per controller (64GB data, 32GB control), giving a 4-node system 384GB of memory.
Compare this to the 7400 which has 16GB of cache per controller (8GB data, 8GB control) giving a 4-node system 64GB of memory. The 10400 has six times the cache, and still supports 3rd party cabinets.
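The six-fold figure falls out of the per-controller numbers above:

```python
# Total cache on 4-node systems, from the per-controller figures above
cache_10400 = (64 + 32) * 4   # 64GB data + 32GB control per controller
cache_7400 = (8 + 8) * 4      # 8GB data + 8GB control per controller

print(cache_10400, cache_7400, cache_10400 // cache_7400)  # 384 64 6
```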
Now if they would just double the 7200 and 7400's memory that would be nice
Keeps getting better
Multi tenant improvements
Six months ago 3PAR released their storage quality of service software offering called Priority Optimization. As mentioned before 3PAR has always been about multi tenancy, and due to their architecture they have managed to do a better job at it than pretty much anyone else. But it still wasn't perfect obviously - there was a need for real array based QoS. They delivered on that earlier this year and now have announced some significant improvements on that initial offering.
Brief recap of what their initial release was about - you were able to define both IOPS and bandwidth threshold levels for a particular volume (or group of volumes), and the system would respond basically in real time to throttle the workload if it exceeded that level. 3PAR has tons of customers that run multi tenant configurations so they went further, making it possible to define both a customer as well as an application.
So as you can see from the picture above, the initial release allowed you to specify say 20,000 IOPS for a customer, and be able to over provision IOPS for individual applications that customer uses, allowing for maximum flexibility, efficiency and control at the same time.
So the initial release was all about basically rate limiting workloads on a multi tenant system. I suppose you could argue that there wasn't a lot of QoS to it - it was more rate limiting.
The new software is more QoS oriented - going beyond rate limiting they now have three new capabilities:
- Allows you to specify a performance minimum threshold for a given application/customer
- Allows you to specify a latency target for a given application
- Using 3PAR's virtual domains feature(basically carve a 3PAR up into many different virtual arrays for service providers) you can now assign a QoS to a given virtual domain! That is really cool.
Like almost everything 3PAR - configuring this is quite simple and does not require professional services.
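Priority Optimization's internals aren't public, but as a mental model an IOPS cap behaves a lot like a token bucket: each tenant gets a refill rate equal to its limit, and I/O beyond that gets queued. This is purely my own illustrative sketch (the class and names are hypothetical), not how 3PAR actually implements it:

```python
import time

class IopsCap:
    """Toy token-bucket IOPS limiter - a mental model only,
    not 3PAR's actual Priority Optimization implementation."""
    def __init__(self, max_iops):
        self.max_iops = max_iops
        self.tokens = float(max_iops)   # start with a full bucket
        self.last = time.monotonic()

    def try_io(self):
        now = time.monotonic()
        # refill in proportion to elapsed time, capped at one second's worth
        self.tokens = min(self.max_iops,
                          self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the cap: the array would delay this I/O

# e.g. a 20,000 IOPS cap for a tenant, with per-app caps nested beneath it
tenant = IopsCap(20_000)
print(tenant.try_io())  # True - bucket starts full
```

The nested customer/application limits described above would simply be two such buckets checked in sequence, with the per-app buckets allowed to oversubscribe the tenant bucket.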
3PAR Replication: M to N
With the latest software release 3PAR now supports M to N topologies for replication. Before this they supported 1 to 1, as well as synchronous long distance replication.
All configurable via point and click interface no less, no professional services required.
New though is M to N.
Need Bigger? How about nine arrays all in some sort of replication party? That's a lot of arrows.
More scalable replication
On top of the new replication topology they've also tripled (or more) the various limits around the maximum number of volumes that can be replicated in the various modes. A four node 3PAR can now replicate up to a maximum of 6,000 volumes in asynchronous mode and 2,400 volumes in synchronous mode.
You can also run up to 32 remote copy fibre channel links per system and up to 8 remote copy over IP links per system (RCIP links are dedicated 1GbE ports on each controller).
Peer motion enhancements
Peer motion is 3PAR's data mobility package which allows you to transparently move volumes between arrays. It came out a few years ago primarily as a means to provide ease of migration/upgrade between 3PAR systems, and was later extended to support EVA->3PAR migrations. HP's StoreVirtual platform also does peer motion, though as far as I know it is not yet directly inter-operable with 3PAR. Not sure if it ever will be.
Anyway like most sophisticated things there are always caveats - the most glaring of which in peer motion is they did not support SCSI reservations. Which basically means you couldn't use peer motion with VMware or other clustering software. With the latest software that limitation has been removed! VMware, Microsoft and Redhat clustering are all supported now.
Persistent port enhancements
Persistent Ports is an availability feature 3PAR introduced about a year ago which basically leverages NPIV at the array level - it allows a controller to assume the Fibre Channel WWNs of its peer in the event the peer goes offline. This means fail over is much faster, and it removes the dependency on multi pathing software to provide fault tolerance. That's not to say that you should not use MPIO software - you still should, if for nothing else other than better distribution of I/O across multiple HBAs, ports and controllers. But the improved recovery times are a welcome plus.
So what's new here?
- Added support for FCoE and iSCSI connections
- Laser loss detection - in the event a port is disconnected persistent ports kick in (don't need to have a full controller failure)
- The speed at which the fail over kicks in has been improved
Combine Persistent Ports with 3PAR Persistent cache on a 4-8 controller system and you have some pretty graceful fail over capabilities.
3PAR Persistent Cache was released back in 2010 I believe. No updates here; I just put the reference here for people that may not know what it is, since it is a fairly unique ability to have, especially in the mid range.
Also being announced is a new set of FIPS 140-2 validated self encrypting drives with sizes ranging from 450GB 10k to 4TB nearline.
3PAR also has a 400GB SSD encrypting drive as well though I don't see any mention of FIPS validation on that unit.
3PAR arrays can either be encrypted or not encrypted - they do not allow you to mix/match. Also once you enable encryption on a 3PAR array it cannot be disabled.
I imagine you probably aren't allowed to use Peer Motion to move data from an encrypted to a non encrypted system? Same goes for replication? I am not sure, I don't see any obvious clarifications in the docs.
SSDs, like hard drives all come with a chunk of hidden storage set aside for when blocks wear out or go bad, the disk transparently re-maps from this spare pool. I think SSDs take it to a new level with their wear leveling algorithms.
Anyway, 3PAR's Adaptive Sparing basically allows them to utilize some of the storage from this otherwise hidden pool on the SSDs. The argument is that 3PAR is already doing sparing at the sub-disk (chunklet) level; if a chunklet fails then it is reconstructed on the fly - much like an SSD would do to itself if a segment of flash went bad. If too many chunklets fail over time on a disk/SSD the system will proactively fail the device.
At the end of the day the customer gets more usable capacity out of the system without sacrificing any availability. Given the chunklet architecture I think this approach is probably going to be a fairly unique capability.
Lower cost SSDs
Take Adaptive Sparing, and combine it with the new SSDs that are being released, and you get SSD list pricing (on a per GB basis) which is reduced by 50%. I'd really love to see an updated SPC-1 for the 7450 with these new lower cost devices (plus MSI-X enhancements of course!) - I'd be surprised if they weren't working on one already.
3PAR came out with their first web services API a year ago. They've since improved upon that, as well as adding enhancements for Openstack Havana (3PAR was the reference implementation for Fibre Channel in Openstack).
3PAR is continuing to kick butt in the market place with their 7000-series, with El Reg reporting that their mid range products have had 300% year over year increases in sales and they have overtaken IBM and NetApp in market share to be #2 behind EMC (23% vs 17%).
This might upset the ethernet vendors but they also report that fibre channel is the largest and fastest growing storage protocol in the mid range space(at least year over year), I'm sure again largely driven by 3PAR who historically has been a fibre channel system. Fibre channel has 50% market share with 49% year over year growth.
Well the elephant in the room that is still not here is some sort of SSD-based caching. HP went so far as to announce something roughly a year ago with their SmartCache technology for Gen8 systems, though they opted not to mention much on that this time around. It's something I have hounded 3PAR for the past four years to get going, I'm sure they are working on something......
Also I would like to see them support, or at least explain why they might not support, the Seagate Enterprise Turbo SSHD - which is a hybrid drive providing 32GB of eMLC flash cache in front of what I believe is an otherwise 10k RPM 300-600GB disk with self proclaimed upwards of 3X improvement in random I/O over 15k disks. There's even a FIPS 140-2 model available. I don't know what the price point of this drive is but find it hard to believe that it's not a cost effective alternative to flash tiering when you do not have a flash-based cache to work off of.
Lastly I would like to see some sort of automatic workload load balancing with Peer motion - as far as I know that does not yet exist. Though moving TBs of data around between arrays is not something to be taken lightly anyway!
Before I forget again..
Travel to HP Storage Tech Day/Nth Generation Symposium was paid for by HP; however, no monetary compensation is expected nor received for the content that is written in this blog.
So, HP hammered us with a good seven to eight hours of storage related stuff today, the bulk of the morning was devoted to 3PAR and the afternoon covered StoreVirtual, StoreOnce, StoreAll, converged management and some really technical bits from HP Labs.
This post is all about 3PAR. They covered other topics of course but this one took so long to write I had to call it a night, will touch on the other topics soon.
I won't cover everything since I have covered a bunch of this in the past. I'll try not to be too repetitive...
I tried to ask as many questions as I could, they answered most .. the rest I'll likely get with another on site visit to 3PAR HQ after I sign another one of those Nate Disclosure Agreements (means I can't tell you unless your name is Nate). I always feel guilty about asking questions directly to the big cheeses at 3PAR. I don't want to take up any of their valuable time...
There wasn't anything new announced today of course, so none of this information is new, though some of is new to this blog, anyway!
I suppose if there is one major take away for me for this SSD deep dive, is the continued insight into how complex storage really is, and how well 3PAR does at masking that complexity and extracting the most of everything out of the underlying hardware.
Back when I first started on 3PAR in late 2006, I really had no idea what real storage was. As far as I was concerned one dual controller system with 15K disks was the same as the next. Storage was never my focus in my early career (I did dabble in a tiny bit of EMC Clariion (CX6/700) operations work - though when I saw the spreadsheets and visios the main folks used to plan and manage I decided I didn't want to get into storage), it was more servers, networking etc.
I learned a lot in the first few years of using 3PAR, and to a certain extent you could say I grew up on it. As far as I am concerned, being able to wide stripe, or have mesh active controllers, is all I've ever (really) known. Sure, since then I have used a few other sorts of systems. When I see the architectures and processes of doing things on other platforms I am often sort of dumbfounded why they do things that way. It's easy for me to forget that storage really was in the dark ages not so many years ago.
Case in point below, there's a lot more to (efficient, reliable, scalable, predictable) SSDs than just tossing a bunch of SSDs into a system and throwing a workload at them..
I've never tried to proclaim I am a storage expert here(or anywhere) though I do feel I am pretty adept at 3PAR stuff at least, which wasn't a half bad platform to land on early on in the grand scheme of things. I had no idea where it would take me over the years since. Anyway, enough about the past....
New to 3PAR
Still the focus of the majority of HP storage related action these days, they had a lot to talk about. None of this initial stuff is shipping yet (up until the 7450 stuff below); it's just what they are planning for at some point in the future (no time frames on anything that I recall hearing).
Asynchronous Streaming Replication
Just a passing mention of this on a slide, nothing in depth to report, but I believe the basic concept is that instead of asynchronous replication running on snapshots that kick off every few minutes (perhaps every five minutes), the replication process would run much more frequently (though still not synchronous), perhaps as often as every 30 seconds or so.
I've never used 3PAR replication myself - I've never really needed array based replication. I have built my systems in ways that don't require it, in part because I believe it makes life easier (I don't build them specifically to avoid array replication; it's merely a side effect), and of course the license costs associated with 3PAR replication are not trivial in many circumstances (especially if you're only replicating a small percentage of the data on the system). The main place where I could see leveraging array based replication is replicating a large number of files; doing this at the block layer is oftentimes far more efficient (and much faster) than trying to determine changed bits from a file system perspective.
I wrote/built a distributed file transfer architecture/system for another company a few years ago that involved many off-the-shelf components (highly customized) and was responsible for replicating several TB of data a day between WAN sites. It was an interesting project and proved to be far more reliable and scalable than I could have hoped for initially.
Increasing Maximum Limits
I think this is probably out of date, but it's the most current info I could dig up on HP's site. Though this dates back to 2010. These pending changes are all about massively increasing the various supported maximum limits of various things. They didn't get into specifics. I think for most customers this won't really matter since they don't come close to the limits in any case(maybe someone from 3PAR will read this and send me more up to date info).
The PDF says updated May 2013, though the change log says the last update was December. HP has put out a few revisions to the document (which is the Compatibility Matrix) that specifically address hardware/software compatibility, but the most recent Maximum Limits I can find are for what is now considered quite old - the 2.3.1 release - which predates their migration to a 64-bit OS (3.1.1).
Compression / De-dupe
They didn't talk about it, other than mention it on a slide, but this is the first time I've seen HP 3PAR publicly mention the terms. Specifically they mention in-line de-dupe for file and block, as well as compression support. Again, no details.
Personally I am far more interested in compression than I am de-dupe. De-dupe sounds great for very limited workloads like VDI(or backups, which StoreOnce has covered already). Compression sounds like a much more general benefit to improving utilization.
Myself I already get some level of "de duplication" by using snapshots. My main 3PAR array runs roughly 30 MySQL databases entirely from read-write snapshots, part of the reason for this is to reduce duplicate data, another part of the reason is to reduce the time it takes to produce that duplicate data for a database(fraction of a second as opposed to several hours to perform a full data copy).
File + Object services directly on 3PAR controllers
No details here other than the mention of layering native file/object services onto the existing block services. They did mention they believe this would fit well in the low-end segment; they don't believe it would work well at the high end, since things scale in different ways there. Obviously HP has file/object services in the IBRIX product (though HP did not get into specifics about what technology would be used, other than taking tech from several areas inside HP), and a 3PAR controller runs Linux after all, so it's not too far-fetched.
I recall several years ago back when Exanet went bust, I was trying to encourage 3PAR to buy their assets as I thought it would have been a good fit. Exanet folks mentioned to me that 3PAR engineering was very protective of their stuff and very paranoid about running anything other than the core services on the controllers - it is sensitive real estate after all. With more recent changes, such as supporting the ability to run their reporting software (System Reporter) directly on the controller nodes, I'm not sure if this is something engineering volunteered to do themselves or not. Both approaches have their strengths and weaknesses obviously.
Where are 3PAR's SPC-2 results?
This is a question I asked them (again). 3PAR has never published SPC-2 results. They love to tout their SPC-1 numbers, but SPC-2 results are nowhere to be found. I got a positive answer though: stay tuned. So I have to assume something is coming, at some point. They aren't outright disregarding the validity of the test.
In the past 3PAR systems have been somewhat bandwidth constrained due to their use of PCI-X. Though the latest generation of stuff (7xxx/10xxx) all leverage PCIe.
The 7450 tops out at 5.2 Gigabytes/second of throughput, a number which they say takes into account the overhead of a distributed volume system (it might otherwise be advertised as 6.4 GB/sec, since a 2-node system does 3.2GB/sec). Given that they now acknowledge the overhead of a distributed system, I wonder how, or if, that throws off their previously published throughput metrics for past arrays.
I have a slide here from a few years ago that shows an 8-controller T800 supporting up to 6.4GB/sec of throughput, and a T400 3.2GB/sec (both systems were released in Q3 of 2008). Obviously the newer 10400 and 10800 go higher (I don't recall off the top of my head how much higher).
This compares to published SPC-2 numbers from IBM XIV at more than 7GB/sec, as well as HP P9500/HDS VSP at just over 13GB/sec.
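To put a number on the overhead conceded above, here is the arithmetic - a back-of-the-envelope sketch using only the figures quoted in this post:

```python
# Figures quoted above: a 2-node system does 3.2 GB/sec, so a naive
# linear scale-up to 4 nodes would advertise 6.4 GB/sec.
two_node_gbps = 3.2
naive_four_node = 2 * two_node_gbps   # 6.4 GB/sec if scaling were free
advertised = 5.2                      # what HP actually quotes for the 7450

# Implied cost of running a distributed volume layer across 4 controllers
overhead = 1 - advertised / naive_four_node
print(f"distributed-volume overhead: {overhead:.1%}")  # 18.8%
```

So per these advertised numbers, roughly 19% of the theoretical linear throughput goes to the distributed volume layer.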
Announced more than a month ago now, the 7450 is of course the purpose built flash platform which is, at the moment all SSD.
Can it run with spinning rust?
One of the questions I had was why the 7450 is currently only available in an SSD-only configuration - no spinning rust is supported. The answer was pretty much what I expected: basically they were getting a lot of flak for not having something that was purpose built. So at least in the short term, the decision not to support spinning rust is purely a marketing one. The hardware is the same as the other 3PAR platforms (other than beefier CPUs and more RAM), and the software is identical. They just didn't want to give people more excuses to label the 3PAR architecture as something that wasn't fully flash ready.
It is unfortunate that the market has compelled HP to do this, as other workloads would still stand to gain a lot especially with the doubling up of data cache on the platform.
Still CPU constrained
One of the questions asked by someone was whether or not the ASIC is the bottleneck in the 7450 I/O results. The answer was a resounding NO - the CPU is still the bottleneck even at max throughput. So I followed up with: why did HP choose to go with 8-core CPUs instead of the 10-core parts Intel has of course had for some time? You know how I like more cores! The answer was twofold. The primary reason was cooling (the enclosure as-is has two sockets, two ASICs, two PCIe slots, 24 SSDs, 64GB of cache and a pair of PSUs in 2U). The second was that the system is technically Ivy Bridge capable, but they didn't want to wait around for those chips to launch before releasing the system.
They covered a bit about the competition being CPU limited as well especially with data services, and the amount of I/O per CPU cycle is much lower on competing systems vs 3PAR and the ASIC. The argument is an interesting one though at the end of the day the easy way to address that problem is throw more CPUs at it, they are fairly cheap after all. The 7000-series is really dense so I can understand the lack of ability to support a pair of dual socket systems within a 2U enclosure along with everything else. The 10400/10800 are dual socket(though older generation of processors).
I really have not much cared for Intel's code names for their recent generation of chips. I don't follow CPU stuff all that closely these days(haven't for a while), but I have to say it's mighty easy to confuse code name A from B, which is newer? I have to look it up. every. single. time.
I believe in the AMD world (AMD seems to have given up on the high end, sadly), while they have code names, they have numbers as well. I know 6200 is newer than 6100 ..6300 is newer than 6200..it's pretty clear and obvious. I believe this goes back to Intel and them not being able to trademark the 486.
On the same note, I hate Intel continuing to re-use the i7 brand in laptops. I have a Core i7 laptop from 3 years ago, and guess what the top end today still seems to be? I think it's i7 still. Confusing. again.
</ END TANGENT >
Effortless SSD management of each SSD with proactive alerts
I wanted to get this in before going deeper into the cache optimizations, since that is a huge topic. The basic gist is that they have good monitoring of the wear of each SSD in the platform (something I believe LeftHand got a year or two ago). In addition, the service processor (the dedicated on-site appliance that monitors the array) will alert the customer when an SSD is 90% worn out. When an SSD gets to 95%, the system proactively fails the drive and migrates data off of it (I believe). They repeated a statistic from Discover that something along the lines of 95% of all SSDs deployed in 3PAR arrays are still in service in the field - very few have worn out. I don't recall anyone mentioning the number of SSDs that have been deployed on 3PAR, but it's not an insignificant number.
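The thresholds described above amount to a trivial policy check. Here is my own illustrative model of it (the actual service processor logic is HP's and not public):

```python
def ssd_wear_action(wear_pct: float) -> str:
    """Return the array's reaction to an SSD's wear level, per the
    thresholds described in this post (illustrative only, not HP code)."""
    if wear_pct >= 95:
        return "proactive-fail"  # drive is failed and data migrated off
    if wear_pct >= 90:
        return "alert"           # customer notified, drive stays in service
    return "ok"

print(ssd_wear_action(50), ssd_wear_action(92), ssd_wear_action(96))
# ok alert proactive-fail
```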
SSD Caching Improvements in 3PAR OS 3.1.2
There have been a number of non-trivial caching optimizations in the 3PAR OS to maximize performance as well as the life span of SSDs. Some of these optimizations benefit spinning rust configurations as well - I have personally seen a noticeable drop in back end disk response time since I upgraded to 3.1.2 back in May (it was originally released in December), along with, I believe, better response times under heavy load on the front end.
Bad version numbers
I really dislike 3PAR's version numbering, they have their reasons for doing what they do, but I still think it is a really bad customer experience. For example going from 2.2.4 to 2.3.1 back in what was it 2009 or 2010. The version number implies minor update, but this was a MASSIVE upgrade. Going from 2.3.x to 3.1.1 was a pretty major upgrade too (as the version implied). 3.1.1 to 3.1.2 was also a pretty major upgrade. On the same note the 3.1.2 MU2 (patch level!) upgrade that was released last month was also a major upgrade.
I'm hoping they can fix this in the future, I don't think enough effort is made to communicate major vs minor releases. The version numbers too often imply minor upgrades when in fact they are major releases. For something as critical as a storage system I think this point is really important.
Adaptive Read Caching
One of the things they covered with regards to SSD caching is that read cache is really not as effective (vs with spinning rust): because the back end media is so fast, there is significantly less need to cache reads. So in general, significantly more cache is used for writes.
For spinning rust, 3PAR reads a full 16kB of data from the back end disk regardless of the size of the read on the front end (e.g. 4kB). This is because the operation to go to disk is so expensive already that there is no added penalty to grab the other 12kB while you're grabbing the 4kB you need. The next I/O request might want part of that 12kB, and you save yourself a second trip to the disk.
With flash things are different. Because the media is so fast, you are much more likely to become bandwidth constrained rather than IOPS constrained. So if, for example, you have 500,000 4kB read IOPS on the front end and you're performing those as 16kB read IOPS on the back end, that is 4x more bandwidth than the operations actually require. And because the flash is so fast, there is very little penalty to go back to the SSD again and again to retrieve those smaller blocks. It also improves the latency of the system.
So in short, read more from disks because you can and there is no penalty, read only what you need from SSDs because you should and there is (almost) no penalty.
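The read-sizing rule above can be summarized in a few lines. This is a sketch of my understanding, not actual 3PAR code; the 16kB page size is the one discussed above:

```python
PAGE = 16 * 1024  # 3PAR cache page size discussed above

def backend_read_size(requested: int, media: str) -> int:
    """How many bytes to fetch from back end media for a front end read."""
    if media == "disk":
        # Seek cost dominates, so grab whole 16kB pages - the extra 12kB
        # of a 4kB read is effectively free and may serve the next request.
        return -(-requested // PAGE) * PAGE  # ceiling division to whole pages
    # SSD: bandwidth is the scarce resource, so read only what was asked for.
    return requested

# A 4kB front end read costs 16kB of disk bandwidth but only 4kB of SSD
# bandwidth - at 500,000 such IOPS, that is the 4x difference mentioned above.
print(backend_read_size(4096, "disk"), backend_read_size(4096, "ssd"))  # 16384 4096
```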
Adaptive Write Caching
With writes the situation is similar to reads: to maximize SSD life span and minimize latency, you want to minimize the number of write operations to the SSD whenever possible.
With spinning rust, again, 3PAR works with 16kB pages: if a 4kB write comes in, the full 16kB is written to disk, because there is no additional penalty for writing 16kB vs 4kB. Unlike with SSDs, you're not likely to be bandwidth constrained when it comes to disks.
With SSDs, the optimizations they perform, again to maximize performance and reduce wear, is if a 4kB write comes in, a 16kB write occurs to the cache, but only the 4kB of changed data is committed to the back end.
If I recall right they mentioned this operation benefits RAID 1 (anything RAID 1 in 3PAR is RAID 10, same for RAID 5 - it's RAID 50) significantly more than it benefits RAID 5/6, but it still benefits RAID 5/6.
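The write-side saving is easy to quantify - this is just my arithmetic on the behavior described above, not HP's numbers:

```python
PAGE = 16 * 1024   # cache page size
IO = 4 * 1024      # a small random write
writes = 1000      # a hypothetical burst of 4kB writes

# Spinning disk: the full 16kB page is committed each time (no extra cost).
disk_bytes = writes * PAGE
# SSD: only the 4kB of changed data in each page is committed.
ssd_bytes = writes * IO

print(ssd_bytes / disk_bytes)  # 0.25 -> 4x less data written, 4x less flash wear
```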
Autonomic Cache offload
Here the system changes the frequency at which it flushes cache to back end media based on utilization. I think this plays a lot into the next optimization.
Multi Tenant I/O Processing
3PAR has long been about multi tenancy. The architecture lends itself well to running in this mode, though it wasn't perfect; I believe the addition of Priority Optimization, announced late last year and finally released last month, fills the majority of the remainder of that hole. I have run "multi tenant" 3PAR systems since the beginning. Now to be totally honest the tenants were all me - just different competing workloads, whether disparate production workloads or a mixture of production and non-production (and yes, in all cases they ran on the same spindles). It wasn't nearly as unpredictable as, say, a service provider with many clients running totally different things - that would sort of scare me on any platform. But there were still many times when rogue things (especially horrible SQL queries) overran the system (especially write cache). 3PAR handles this as well as, if not better than, anyone else, but every system has its limits.
Front end operations
The cache flushing process to back end media is now multi-threaded. This benefits both SSD and existing spinning rust configurations. There is significantly less (maybe no?) locking involved when flushing cache to disk.
Here is a graph from my main 3PAR array, you can see the obvious latency drop from the back end spindles once 3.1.2 was installed back in May (again the point of this change was not to impact back end disk latency as much as it was to improve front end latency, but there is a significant positive behavior change post upgrade):
There was a brief time when latency actually went UP on the back end disks. I was concerned at first but later determined this was the disk defragmentation processes running(again with improved algorithms), before the upgrade they took FAR too long, post upgrade they completed a big backlog in a few days and latency returned to low levels.
Back end operations
On the topic of multi tenant with SSDs an interesting point was raised which I had never heard of before. They even called it out as being a problem specific to SSDs, and does not exist with spinning rust. Basically the issue is if you have two workloads going to the same set of SSDs, one of them issuing large I/O requests(e.g. sequential workload), and the other issuing small I/O requests(e.g. 4kB random read), the smaller I/O requests will often get stuck behind the larger ones causing increases in latency to the app using smaller I/O requests.
To address this, 128kB I/Os are divided into four 32kB requests, which are issued in parallel and can be interleaved with the other workload's smaller I/Os. I suppose I could get clarification, but I assume that for a sequential read with 128kB requests there must not be any additional penalty for grabbing 32kB at a time, versus splitting it up even further into smaller I/Os.
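The splitting itself is simple to picture. An illustrative sketch (the 32kB chunk size is the one quoted above; the scheduling is the array's business):

```python
CHUNK = 32 * 1024  # sub-request size mentioned above

def split_io(offset: int, length: int):
    """Carve one large back end request into 32kB sub-requests so a
    competing tenant's small I/Os can be interleaved between them."""
    return [(offset + o, min(CHUNK, length - o))
            for o in range(0, length, CHUNK)]

print(split_io(0, 128 * 1024))  # four 32kB sub-requests
```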
Maintaining performance during media failures
3PAR has always done wide striping, and sub disk distributed RAID so the rebuild times are faster, the latency is lower and all around things run better(no idle hot spares) that way vs the legacy designs of the competition. The system again takes additional steps now to maximize SSD life span by optimizing the data reads and writes under a failure condition.
HP points out that SSDs are poor at large sequential writes, so as mentioned above they divide the 128kB writes that would be issued during a rebuild operation (since that is largely a sequential operation) into 32kB I/Os again to protect those smaller I/Os from getting stuck behind big I/Os.
They also mentioned that during one of the SPC-1 tests (not sure if it was 7400 or 7450) one of the SSDs failed and the system rebuilt itself. They said there was no significant performance hit(as one might expect given experience with the system) as the test ran. I'm sure there was SOME kind of hit especially if you drive the system to 100% of capacity and suffer a failure. But they were pleased with the results regardless. The competition would be lucky to have something similar.
What 3PAR is not doing
When it comes to SSDs and caching something 3PAR is not doing, is leveraging SSDs to optimize back end I/Os to other media as sequential operations. Some storage startups are doing this to gain further performance out of spinning rust while retaining high random performance using SSD. 3PAR doesn't do this and I haven't heard of any plans to go this route.
I continue to be quite excited about the future of 3PAR - even more so than before the acquisition. HP has executed wonderfully on the technology side of things. Sales, from all accounts at least of the 7000 series, are still quite brisk. Time will tell if things hold up after EVA is completely off the map, but I think they are doing many of the right things. I know even more of course but can't talk about it here (yet)!!!
That's it for tonight. At ~4,000 words (that number keeps going up, I should go to bed) this took three hours or more to write and proofread, and it's past 2AM. There is more to cover; the 3PAR stuff was obviously what I was most interested in. I have a few notes from the other sessions but they will pale in comparison to this.
Today I had a pretty good idea on how HP could improve its messaging around whether to choose 3PAR or StoreVirtual for a particular workload. The messaging to date has been very confusing and conflicting to me (HP tried to drive home a point about single platforms and reducing complexity, something this dual message seems to conflict with). I have been communicating with HP off and on for the past few months, and today out of the blue I came up with this idea which I think will help clear the air. I'll touch on it soon when I cover the other areas that were talked about today.
Tomorrow seems to be a busy day, apparently we have front row seats, and the only folks with power feeds. I won't be "live blogging"(as some folks tend to love to do), I'll leave that to others. I work better at spending some time to gather thoughts and writing something significantly longer.
If you are new to this site you may want to check out a couple of these other articles I have written about 3PAR(among the dozens...)
- 3PAR: The Next Generation (aka 7000 series) - December 2012 (also covers a ton of the new software features as well)
- 3PAR 7400: all SSD SPC-1 performance results - May 2013
- Capacity Utilization: an extreme example, but an easy one to illustrate the point. 3PAR F400 vs Pillar Axiom 600 (October 2010)
Thanks for reading!
[NOTE: I expect to revise this many times - I'm not at HP Discover (maybe next year!), so I am basing this post off what info I have seen elsewhere, I haven't yet got clarification on what NDA info specifically I can talk about yet so am trying to be cautious !]
[Update: HP's website now has the info]
I was hoping they would announce the SPC-1 results of this new system, and I was going to wait until that happens, but I am not sure if they have them finalized yet, I've heard the ballpark figures, but am waiting for the official results.
The upside is I am on the east coast so I am up bright and early relative to my normal Pacific time zone morning.
I thought it would be announced later in the week but my first hint was this Russian blog (google translated), which I saw on LinkedIn a few minutes ago(relative to the time I started the blog post which took me a good two hours to write), also came across this press release of sorts, and there is the data sheet for the new system.
In addition to mixed SSD/HDD and all-SSD configurations across the HP 3PAR StoreServ family, HP has announced the intent to develop an SSD-optimized hardware model based on the 3PAR operating system.
As fast as the all-SSD 7400 was, that was not the "optimized" hardware model - this one is (the one that was mentioned last December). I think the distinction with the word optimized vs using the phrase purpose built is important to keep in mind.
The changes from a hardware perspective are not revolutionary. 3PAR has, for the first time in their history (as far as I know anyway), fairly quickly leveraged newer x86 processors, upgrading both the CPUs and the memory (the ASIC is the same as the 7400's) to provide the faster data ingest rate. I had previously (incorrectly, of course) assumed that the ASIC was tapped out with the earlier results and perhaps they would need even more ASICs to drive the I/O needs of an all-SSD system. The ASIC will be a bottleneck at some point, but it doesn't seem to be today - the bottleneck was the x86 CPUs.
They also beefed up the cache, doubling what the 7400 has.
- 4-Node 7400: 4 x Intel Xeon 6-core 1.8 Ghz w/64GB Cache
- 4-Node 7450: 4 x Intel Xeon 8-core 2.3Ghz w/128GB Cache
It would have been nice to see them use the 10-core chips; maybe the turnaround for such a change would have been too difficult to pull off in a short time frame. An 8-core Intel is not bad though.
The Russian blog above touts a 55% increase in performance on the 7450 over the 7400, and the cost is about 6% more (the press release above quotes $99,000 as entry level pricing)
Throughput is touted as 5.5 Gigabytes/second, which won't win any SPC-2 trophies, but is no slouch either - 3PAR has always been more about random IOPS than sequential throughput (though they often tout they can do both simultaneously within a single array - more effectively than other platforms).
The new system is currently tested (according to the press release) at 540,000 read IOPS @ 0.6ms of latency. Obviously the SPC-1 number will be lower than a 100% random read test. This compares to the 7400, which was tested (under the same 100% read test, I believe) at 320,000 IOPS @ 1.6ms of latency. So roughly a 69% improvement in read IOPS at about 62% less latency.
Maybe we could extrapolate that number a bit here: the 7400 achieved 258,000 SPC-1 IOPS. 69% more would make the 7450 look like it would score around 435,000 SPC-1 IOPS, which is nearly the score of an 8-node P10800 - a system with 16 ASICs and 16 quad-core Xeon processors! (The P10800 requires basically a full rack for just the controllers vs 4U for the 7450, assuming they can get the full performance out of the controllers with only 48 SSDs.)
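Running the arithmetic on the quoted figures - (540K - 320K)/320K works out to 68.75% - here is my back-of-envelope, and a naive linear extrapolation at that (SPC-1 is a mixed read/write workload, so scaling its score by a pure-read improvement is optimistic):

```python
r7450, r7400 = 540_000, 320_000          # 100% random read IOPS quoted above
improvement = (r7450 - r7400) / r7400    # 0.6875 -> ~69% more read IOPS

latency_drop = (1.6 - 0.6) / 1.6         # 0.625 -> 62.5% lower latency

spc1_7400 = 258_000                      # the 7400's published SPC-1 result
estimate = spc1_7400 * (1 + improvement) # naive linear scaling
print(round(estimate))                   # 435375, i.e. roughly 435K SPC-1 IOPS
```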
The blog also talks about the caching improvements targeted at improving the performance and lifetime of the SSDs. The new 3PAR software also has a media wear gauge for the SSDs, something I believe the HP LeftHand stuff got a year or two ago (better late than never!). The graphics the Russian blog has are quite good; I didn't want to too shamelessly rip them from their blog to re-post here, so I encourage you to go there to see the details on the caching improvements that are specific to SSD.
This system is meant to go head to head with the all-flash offerings from the likes of EMC, IBM and NetApp (I'm not aware of any optimized flash systems from HDS yet - maybe they will buy one of those new startups to fill that niche. They do have an optimized flash module for their VSP, but I'd consider that a different class of product which may retain the IOPS constraints of the VSP platform).
However, unlike the competition, who have had to go outside of their core technology, HP 3PAR has been able to bring this all-flash offering under the same architecture as the spinning rust models - basically it's the same system with some tweaked software, faster processors and more memory. The underlying OS is the same, the features are the same, the administrative experience is the same. This is both good and bad, though for the moment I believe more good. (Granted, HP had to go buy 3PAR to get all of this stuff, but as this blog has had a lot of 3PAR-specific things I view this more in a 3PAR light than an HP light, if you get what I mean.)
Of the four major competitors, EMC is the only one that touts deduplication (which, IMO is only really useful for things like VDI in transactional workloads)
3PAR is the only one with a mature enterprise/service provider grade operating system. On top of that, obviously, 3PAR is the only one that has a common platform among all of its systems, from the 7200 all the way to the 10800.
3PAR and IBM are the only ones that are shipping now. Just confirmed from El Reg that the 7450 is available immediately.
None of the big four tout compression, which I think would be a greater value add than deduplication for most workloads. I'm sure it's on all of their minds though; it could be a non-trivial performance hit, and in 3PAR's case they'd likely need to implement it in the ASIC - if so, it means having to wait until the next iteration of the ASIC comes out. Gzip compression has been available in hardware form for many years, so I imagine it wouldn't be difficult to put into the silicon to keep performance up under such conditions.
The new system also supports a 400GB MLC self encrypting drive (along with other SEDs for other 3PAR platforms as well) - 3PAR finally has a native encryption option, for those that need it.
Who should buy this
This isn't an array for everyone (nor are the ones from the other big storage players). It's a specialized system for specific very high performance workloads where latency is critical, yet at the same time providing the availability and manageability of the 3PAR platform to an all SSD solution.
You can probably go buy a server and stuff it with a few PCIe flash boards and meet or exceed the IOPS at a similar latency and maybe less price. If your workload is just dumb IOPS and you care about the most performance at the least price then there are other options available to you (they probably won't work as well but you get what you (don't) pay for).
There clearly is a market for such a product though; the first hint of this was dropped when HP announced an all-flash version of its P10000 about a year ago. Customers really wanted an all-flash system and they really wanted the 3PAR OS on it. If you're not familiar with the high-end 3PAR systems: comparing the footprint needed to drive 400k+ SPC-1 IOPS on a P10800 vs a 7450, you would probably get a good chuckle out of how much floor space and how many power circuits are required for the P10800 (power draw would be light with SSDs of course, but there are hard requirements for power provisioning - most customers pay per circuit regardless of draw).
I think a lot of this may be in the banking sector, where folks are happy to buy tons of fancy low latency stuff to make sure their transactions are processed in milliseconds.
Fifteen milliseconds may not seem like a significant amount of time—it is literally shorter than a human blink of an eye, which takes 300 to 400 milliseconds. But in the age of super-high-speed computerized trading, Wall Street firms need less than a millisecond to execute a trade.
All told, Nanex calculated that $28 million worth of shares were exchanged in a short time[15 milliseconds] before the official release of the ISM data.
There have been a lot of skeptics out there wondering whether or not the 3PAR architecture could be extended to cover an all-flash offering (you can actually sort of count me in the skeptical camp as well - I was not sure even after they tried to reassure me; I want to see the numbers at the end of the day). I believe with this announcement they have shown, even more so than with the 7400, that they have a very solid all-flash offering that will in most cases beat the tar out of the competition - not only on performance, not only on latency, not only on enterprise grade availability and functionality, but on price as well.
Even with this high performance system, these all SSD systems illustrate quite well how a modern storage controller is not able to scale anywhere nearly as well with SSDs as with spinning rust. Most of the SSD offerings have a small number of SSDs before they tap out the controllers. No single controller(that I've seen) supports the multi millions of IOPS that would be required to drive many hundreds of SSDs at line rate simultaneously(like regular storage arrays would drive hundreds of disks today).
It is just interesting to me to see the massive bottleneck shift continues to be the controller, and will be for some time to come. I wonder when the processors will get fast enough that they might shift the bottleneck back to the storage media, a decade? Or perhaps by that time everyone will be running on some sort of mature grid storage technology, and the notion of controllers as most of us know them today will be obsolete as a concept. Certainly several cloud providers are already trying to provide grid storage as an alternative, though in most cases, while the cost can be low, the performance is very poor as well (relative to an HP 3PAR anyway).
There is always more work to do (in this case mainly dedupe and compression), and as you might expect HP, along with the other big storage companies, is constantly working to add more. I am very excited about what the future holds for 3PAR - I really haven't been this excited since the launch of the 7000 series last year (as a customer now for almost seven years) - and am very pleased with what HP has managed to accomplish with the technology thus far.
Other 3PAR announcements
- 3PAR Priority Optimization is made available now (first announced last December) - this is basically fine grained QoS for IOPS and throughput, something that will be a welcome enhancement to those running true multi tenant systems.
- 3PAR Recovery Manager for Hyper-V - sounds like they are bringing Hyper-V up to the same level of support as VMware.
- As mentioned earlier, the self encrypting drive options cited on the Russian blog include: 400GB MLC SSD, 450GB 10k, 900GB 10k, and 1TB 7.2k (all 2.5")
Side note: there are a few other things to write about later, such as the IBM XIV SPC-1, the HP StoreOnce VSA, and probably whatever else comes out at Discover. For sure I won't get to those today(or maybe even tomorrow, I am on a semi vacation/working week this week).
I came across this article on The Register which covered some of HP's storage woes (short story: legacy storage is nose diving and 3PAR is shining). El Reg linked to the conference call transcript and I just ran a quick keyword search for 3PAR and saw this
This has been one of our most successful product introductions and 3PAR has now exceeded the $1 billion run-rate revenue mark.
Converged storage products were up 48% year-over-year and within that 3PAR was up 82%
Congratulations 3PAR! Woohoo! All of us over here at Techopsguys are really proud of you - keep up the good work! <voice="Scotty">Almost brings a tear to me eye.</voice>
For a comparison, I dug up the 3PAR results on archive.org for the quarter immediately previous to them being acquired:
3PAR® (NYSE: PAR), the leading global provider of utility storage, today reported results for the first quarter of fiscal year 2011, which ended June 30th, 2010. Revenue for the first quarter was $54.3 million, an increase of 22% as compared to revenue of $44.5 million for the same period in the prior year, and an increase of 1% as compared to $53.7 million in the prior quarter, which ended March 31st, 2010.
I can't help but wonder how well Compellent is doing for Dell these days by contrast, since Dell withdrew from the bidding war for 3PAR with HP and went for them instead. (side note: I once saw some value in Compellent as an alternative to 3PAR but that all went away with the 3PAR 7000-series.) I looked at the transcript for Dell's latest conference call and the only thing they touched on about storage was declines of 10%, with no mention of any product lines as far as I could tell.
I've been waiting to see these final results for a while, and now they are out! The numbers(performance + cost + latency) are actually better than I was expecting.
You can see a massive write up I did on this platform when it was released last year.
(last minute edits to add a new Huawei results that was released yesterday)
(more last minute edits to add a HP P6500 EVA SPC-1E)
I'll say this again in case this happens to be read by someone who is new here. Myself, I see value in the SPC-1 as it provides a common playing field for reporting on performance in random transactional workloads (the vast majority of workloads are transactional). On top of the level playing field the more interesting stuff comes in the disclosures of the various vendors. You get to see things like
- Cost (SpecSFS for example doesn't provide this; vendors claiming high performance relative to others at a massive cost premium while not disclosing the costs is very sad)
- Utilization (SPC-1 minimum protected utilization is 55%)
- Configuration complexity (only available in the longer full disclosure report)
- Other compromises the vendor might have made (see the note about disabling cache mirroring)
- 3 year 24x7 4 hour on site hardware support costs
There is a brief executive summary as well as what is normally a 50-75 page full disclosure report with the nitty gritty details.
SPC-1 also has maximum latency requirements - no I/O request can take longer than 30ms to serve or the test is invalid.
There is another test suite - SPC-2, which tests throughput with various means. Much fewer systems participate in that test (3PAR never has, though I'd certainly like them to).
Having gone through several storage purchases over the years I can say from personal experience it is a huge pain to try to evaluate stuff under real workloads - often times vendors don't even want to give evaluation gear (that is in fact in large part why I am a 3PAR customer today). Even if you do manage to get something in house to test, there are many things out there, with wide ranging performance / utilization ratios. At least with something like SPC-1 you can get some idea how the system performs relative to other systems at non trivial utilization rates. This example is rather extreme but is a good illustration.
I have no doubt the test is far from perfect, but in my opinion at least it's far better than the alternatives, like people running 100% read tests with IOMeter to show they can get 1 million IOPS.
I find it quite strange that none of the new SSD startups have participated in SPC-1. I've talked to a couple different ones and they don't like the test; they give the usual line that it's not real world and that customers should take the gear and test it themselves. Typical stuff. Usually that means they would score poorly - especially those that leverage SSD as a cache tier: with the high utilization rates of SPC-1 you are quite likely to blow out that tier, and once that happens performance tanks. I have heard reports of some of these systems getting yanked out of production because they fail to perform after utilization goes up. The system shines like a star during a brief evaluation - then after several months of usage, with utilization increasing, performance no longer holds up.
One person said their system is optimized for multiple workloads and SPC-1 is a single workload. I don't really agree with that; SPC-1 does a ton of reads and writes all over the system, usually from multiple servers simultaneously. I look back to 3PAR specifically, who have been touting multiple workload (and mixed workload) support since their first array was released more than a decade ago. They have participated in SPC-1 for over a decade as well, so the argument that testing is too expensive doesn't hold water either. They did it when they were small, on systems that are designed from the ground up for multiple workloads (not just riding a wave of fast underlying storage and hoping that can carry them), so these new small folks can do it too. If they can come up with a better test with similar disclosures, I'm all ears too.
The one place where I think SPC-1 could be improved is in failure testing. Testing a system in a degraded state to see how it performs.
The below results are from all of the all-SSD SPC-1 results I could find. If there are any I have missed (other than TMS, see note below), let me know. I did not include the IBM servers with SSD, since those are.. servers.
| System | Results Published |
|---|---|
| HP 3PAR 7400 | May 23, 2013 |
| HP P6500 EVA (SPC-1E) | February 17, 2012 |
| IBM Storwize V7000 | June 4, 2012 |
| HDS Unified Storage 150 | March 26, 2013 |
| Huawei OceanStor Dorado2100 G2 | May 22, 2013 |
| Huawei OceanStor Dorado5100 | August 13, 2012 |
I left out the really old TMS (now IBM) SPC-1 results as they were from 2011, too old for a worthwhile comparison.
Performance / Latency
| System Name | SPC-1 IOPS | Avg latency (all utilization levels) | Avg latency (100% load) | # of times above 1ms latency | # of SSDs |
|---|---|---|---|---|---|
| HP 3PAR 7400 | 258,078 | 0.66ms | 0.86ms | 0 / 15 | 32 |
| HP P6500 EVA (SPC-1E) | 20,003 | 4.01ms | 11.23ms | 13 / 15 | 8 |
| IBM Storwize V7000 | 120,492 | 2.6ms | 4.32ms | 15 / 15 | 18 |
| HDS Unified Storage 150 | 125,018 | 0.86ms | 1.09ms | 12 / 15 | 20 |
| Huawei OceanStor Dorado2100 G2 | 400,587 | 0.60ms | 0.75ms | 0 / 15 | 50 |
| Huawei OceanStor Dorado5100 | 600,052 | 0.87ms | 1.09ms | 7 / 15 | 96 |
A couple of my own data points:
- Avg latency (All utilization levels) - I just took aggregate latency of "All ASUs" for each of the utilization levels and divided it by 6 (the number of utilization levels)
- Number of times above 1ms of latency - I just counted the number of cells in the I/O throughput table for each of the ASUs (15 cells total) that the test reported above 1ms of latency
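To make those two calculations concrete, here is a small Python sketch of how I derived them. The latency numbers in the lists below are made up for illustration - the real values come from the response-time and I/O throughput tables in each vendor's full disclosure report.

```python
# "All ASUs" aggregate latency (ms) at each of the 6 utilization
# levels SPC-1 reports (hypothetical example values).
all_asu_latency_ms = [0.50, 0.55, 0.62, 0.70, 0.74, 0.86]

# Average latency across all utilization levels: sum divided by 6.
avg_latency = sum(all_asu_latency_ms) / len(all_asu_latency_ms)

# The 15 per-ASU cells from the report's I/O throughput table
# (again hypothetical example values).
asu_cells_ms = [0.9, 1.2, 0.8, 1.5, 0.7, 1.1, 0.95, 1.3, 0.85,
                0.99, 1.05, 0.92, 1.4, 0.88, 0.97]

# Count how many of the 15 cells reported above 1ms of latency.
times_above_1ms = sum(1 for cell in asu_cells_ms if cell > 1.0)

print(round(avg_latency, 2), times_above_1ms)
```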
| System Name | Total Price | Cost per SPC-1 IOPS | Cost per Usable TB |
|---|---|---|---|
| HP 3PAR 7400 | $148,737 | $0.58 | $133,019 |
| HP P6500 EVA (SPC-1E) | $130,982 | $6.55 | $260,239 |
| IBM Storwize V7000 | $181,029 | $1.50 | $121,389 |
| HDS Unified Storage 150 | $198,367 | $1.59 | $118,236 |
| Huawei OceanStor Dorado2100 G2 | $227,062 | $0.57 | $61,186 |
| Huawei OceanStor Dorado5100 | $488,617 | $0.81 | $77,681 |
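The cost-per-IOPS numbers are easy to sanity-check yourself - dividing each system's total price by its SPC-1 IOPS reproduces the figures above:

```python
# Total price and SPC-1 IOPS for each system, taken from the tables above.
systems = {
    "HP 3PAR 7400":                   (148_737, 258_078),
    "HP P6500 EVA (SPC-1E)":          (130_982, 20_003),
    "IBM Storwize V7000":             (181_029, 120_492),
    "HDS Unified Storage 150":        (198_367, 125_018),
    "Huawei OceanStor Dorado2100 G2": (227_062, 400_587),
    "Huawei OceanStor Dorado5100":    (488_617, 600_052),
}

# Cost per SPC-1 IOPS is simply total price / IOPS.
for name, (total_price, iops) in systems.items():
    print(f"{name}: ${total_price / iops:.2f} per SPC-1 IOPS")
```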
| System Name | Physical Capacity | Usable Capacity | Protected Application Utilization |
|---|---|---|---|
| HP 3PAR 7400 | 3,250 GB | 1,159 GB | 70.46% |
| HP P6500 EVA (SPC-1E) | 1,600 GB | 515 GB | 64.41% |
| IBM Storwize V7000 | 3,600 GB | 1,546 GB | 84.87% |
| HDS Unified Storage 150 | 3,999 GB | 1,717 GB | 85.90% |
| Huawei OceanStor Dorado2100 G2 | 10,002 GB | 3,801 GB | 75.97% |
| Huawei OceanStor Dorado5100 | 19,204 GB | 6,442 GB | 67.09% |
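As a rough sanity check on that utilization column: for a mirrored (RAID 1) configuration, protected application utilization works out to roughly twice the usable capacity divided by the physical capacity - small gaps versus the published numbers come from spares, metadata and rounding. A quick sketch:

```python
# Rough protected application utilization for a mirrored config:
# the usable capacity is stored twice (two copies), so divide
# 2 x usable by the physical capacity.
def protected_utilization(usable_gb, physical_gb, copies=2):
    return 100.0 * copies * usable_gb / physical_gb

# Huawei OceanStor Dorado5100 from the table above:
# 6,442 GB usable on 19,204 GB physical -> ~67.09%, matching
# the published 67.09%.
print(round(protected_utilization(6442, 19204), 2))
```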
The new utilization charts in the latest 3PAR/Huawei tests are quite nice to see, really good illustrations as to where the space is being used. They consume a full 3 pages in the executive summary. I wish SPC would go back and revise previous reports so they have these new easier forms of disclosure in them. The data is there for users to compute on their own.
This is an SPC-1E result rather than SPC-1 - I believe the workload is the same(?), they just measure power draw in addition to everything else. The stark contrast between the new 3PAR and the older P6500 is remarkable from every angle, whether it is cost, performance, capacity, or latency. Any way you slice it (well, except power - I am sure the 3PAR draws more power)
It is somewhat interesting in the power results for the P6500 that there is only a 16 watt difference between 0% load and 100% load.
I noticed that the P6500 is no longer being sold (P6550 was released to replace it - and the 3PAR 7000-series was released to replace the P6550 which is still being sold).
While I don't expect Huawei to be a common rival for the other three outside of China perhaps, I find their configuration very curious. On the 5100, with such a large number of apparently low cost SLC(!) SSDs and "short stroking" (even though there are no spindles, I guess the term can still apply), they have managed to provide a significant amount of performance at a reasonable cost. I am confused though: they claim SLC, yet they have so many disks (you would think you'd need fewer with SLC) at a much lower cost. Doesn't compute..
Huawei appears to have absolutely no software options for these products - no thin provisioning, no snapshots, no replication, nothing. Usually vendors don't include any software options as part of the testing since they are not used. In this case the options don't appear to exist at all.
They seem to be more in line with something like the LSI/NetApp E-series, or Infortrend, rather than an enterprise storage system. Though looking at Infortrend's site earlier this morning shows them supporting thin provisioning, snapshots, and replication on some arrays. Even NetApp seems to include thin provisioning on their E-series.
3PAR's utilization in this test is hampered by (relatively) excessive metadata. The utilization results show only a 7% unused storage ratio, which on the surface is an excellent number, but that number excludes metadata, which in this case is 13% (418GB) of the system. Given the small capacity of the system this has a significant impact on utilization (compared to 3PAR's past results). They are working to improve this.
The next largest metadata size among the above systems is IBM's, at only 1GB (about 99.8% less than 3PAR). I would be surprised if 3PAR was not able to significantly slash the metadata size in the future.
In the grand scheme of things this problem is pretty trivial. It's not as if the metadata scales linearly with the system.
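The arithmetic behind those metadata numbers, for the curious:

```python
# Metadata sizes from the discussion above.
metadata_3par_gb = 418   # 3PAR 7400 metadata
physical_3par_gb = 3250  # 3PAR 7400 physical capacity
metadata_ibm_gb = 1      # IBM Storwize V7000 metadata

# 418 GB of metadata on a 3,250 GB system is ~12.9% (roughly 13%).
share = 100 * metadata_3par_gb / physical_3par_gb

# IBM's 1 GB of metadata is ~99.8% smaller than 3PAR's 418 GB.
reduction = 100 * (metadata_3par_gb - metadata_ibm_gb) / metadata_3par_gb

print(round(share, 1), round(reduction, 1))
```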
Only quad controller system
3PAR is the only SSD solution above tested with 4 controllers (totaling 4 Gen4 ASICs, 24 x 1.8GHz Xeon CPU cores, 64GB of data cache, and 32GB of control cache), meaning with their persistent cache technology (which is included at no extra cost) you can lose a controller and keep a fully protected and mirrored write cache. I don't believe any of the other systems are even capable of such a configuration, regardless of cost.
The 7400 managed to stay below 1 millisecond response times even at maximum utilization which is quite impressive.
Thin provisioning built in
The new license model of the 3PAR 7000 series means this is the first SPC-1 result to include thin provisioning, for a 3PAR system at least. I'm sure they did not use thin provisioning (there's no point when you're driving to max utilization), but from a cost perspective it is something good to keep in mind. In the past thin provisioning would add significant costs onto a 3PAR system. I believe thin provisioning is still a separate license on the P10000-series (though I would not be surprised if that changes as well).
Low cost model
They managed to do all of this while remaining a lower cost offering than the competition - the economics of this new 7000 series are remarkable.
IBM's poor latency
IBM's V7000 latency is really terrible relative to HDS and HP. I guess that is one reason they bought TMS. Though it may take some time for them to integrate TMS technology (assuming they even try) to have similar software/availability capabilities as their main enterprise offerings.
With these results I believe 3PAR is showing well that they too can easily compete in the all SSD market opportunities, without requiring the excessive amounts of rack space or power circuits that some of their previous systems required. All of that performance (only 32 of the 48 drive bays are occupied!) in a small 4U package. Previously you'd likely be looking at an absolute minimum of half a rack!
I don't know whether or not 3PAR will release performance results for the 7000 series on spinning rust; it's not too important at this point though. The system architecture is distributed and they have proven time and again they can drive high utilization, so it's just a matter of knowing the performance capacity of the controllers (which we have here) and throwing as much disk as you want at it. The 7400 series tops out at 480 disks at the moment - even if you loaded it up with 15k spindles you wouldn't come close to the peak performance of the controllers.
It is, of course nice to see 3PAR trouncing the primary competition in price, performance and latency. They have some work to do on utilization as mentioned above.