TechOpsGuys.com Diggin' technology every day

December 4, 2012

3PAR: The Next Generation

Filed under: Storage — Nate @ 12:40 am

(Cue Star Trek: The Next Generation theme music)

[Side note: I think this is one of my most popular posts ever with nearly 3,000 hits so far (excluding my own IPs). Thanks for reading!]

[I get the feeling I will get lots of people linking to this, since I suspect what is below will be the most complete guide to what was released. For those of you that haven't been here before: I am in no way associated with HP or 3PAR – or compensated by them in any way, of course! I've just been using it for a long time and it's one of the very few technologies that I am passionate about – I have written a ton about 3PAR over the past three years.]

HP felt their new storage announcements were so groundbreaking that they decided to have a special event a day before HP Discover is supposed to start. They say it's the biggest announcement for storage from HP in more than a decade.

I first got wind of what was coming last fall, though there wasn't much information available at the time other than a picture and some thoughts as to what might happen – stuff wasn't nailed down yet. I was fortunate enough to finally visit 3PAR HQ a couple of months ago and get a much more in-depth briefing on what was coming, and I'll tell you, it's been damn hard to contain my excitement.

HP announced a 75% year over year increase in 3PAR sales, along with more than 1,200 new customers in 2012 alone. Along with that HP said that their StoreOnce growth is 45% year over year.

By contrast, HP did not reveal any growth numbers for either their LeftHand StoreVirtual platform or their IBRIX StoreAll platform.

David Scott, former CEO of 3PAR, tried to set the tone of a general storage product launch: there are enhancements to primary storage, to scale-out file/object storage, and to backup/archive storage.

You know I’m biased, I don’t try to hide that. But it was obvious to me at the end of the presentation this announcement was all about one thing: David’s baby – 3PAR.

Based on the web site, I believe the T-class of 3PAR systems is finally retired now, replaced last year by the V-class (aka P10000 – the 10400 and 10800).

Biggest changes to 3PAR in at least six years

The products coming out today represent, in my opinion, the largest set of product (AND policy) enhancements/changes/etc from 3PAR in at least the past six years that I've been a customer.

First – a blast from the past.

The first mid range 3PAR system – the E200

Hello 2006!

There is some re-hashing of old concepts, specifically the concept of mid range. 3PAR introduced their first mid range system back in 2006 – the E200, which was the system I was able to deploy. The E200 was a dual node system that went up to 4GB of data cache per controller and up to 128 drives or 96TB of usable capacity, whichever came first. It was powered by the same software and the same second generation ASIC (code named Eagle if I remember right) as the high end S-class at the time.

The E200 was replaced by the F200, and the product line was extended to include the first quad controller mid range system, the F400, in 2009. The F-class, along with the T-class (which replaced the S-class), had the third generation ASIC (code named Osprey if I remember right?? maybe I have those reversed). The V-class, which was released last year, along with what came out today, has the 4th generation ASIC (code named Harrier).

To date – as far as I know – the F400 is still the most efficient SPC-1 result out there, with greater than 99% storage utilization. No other platform (3PAR included) before or since has come close.

These systems, while coined mid range in the 3PAR world, were still fairly costly. The main reason behind this was the 3PAR architecture itself. It is a high end architecture. Where other vendors like EMC and HDS chose radically different designs for their high end vs. their mid range, 3PAR aimed a shrink ray at their system and kept the design the same. NetApp was the exception among the others – they too have a single architecture that scales from the bottom on up. Though as you might expect – NetApp and 3PAR architectures aren't remotely comparable.

Here is a diagram of the V-series controller architecture, which is very similar to the 7200 and 7400, just at a much larger scale:

3PAR V-Series ASIC/CPU/PCI/Memory Architecture

Here is a diagram of the inter-node communications on an 8-node P10800 (or the T800 before it), again similar to the new 7000-series, just at a larger scale:

3PAR Cluster Architecture with low cost high speed passive backplane with point to point connections totalling 96 Gigabytes/second of throughput

Another reason for the higher costs was the capacity-based licensing (& associated support). Some things were licensed per controller pair, some based on raw capacity, some per system, etc. 3PAR licensing was not very friendly to the newbie.

Renamed Products

There were some basic name changes for the 3PAR product lines:

  • The HP 3PAR InServ is now the HP 3PAR StorServ
  • The HP 3PAR V800 is now the HP 3PAR 10800
  • The HP 3PAR V400 is now the HP 3PAR 10400

The 3PAR 7000-series – mid range done right

The 3PAR 7000-series leverages all of the same tier one technology that is in the high end platform and puts it in a very affordable package, starting at roughly $25,000 for a two-node 7200 system, and $32,000 for an entry level two-node 7400 system (which can later be expanded to four nodes, non-disruptively).

I’ve seen the base 7200 model (2 controllers, no disks, 3 year 24×7 4-hour on site support “parts only”) online for as low as $10,000.

HP says this puts 3PAR in a new $11 billion market that it was previously unable to compete in.

This represents roughly a 55-65% discount over the previous F-class mid range 3PAR solution. More on this later.

Note that it is not possible to upgrade a 7200 in place to a 7400. So if you want a 4-node capable system you still have to be sure to choose the 7400 up front (you can, of course, purchase a two-node 7400 and add the other two nodes later).

Dual vs quad controller

The controller configurations are different between the two and the 7400 has extra cluster cross connects to unify the cluster across enclosures. The 7400 is the first 3PAR system that is not leveraging a passive backplane for all inter-node communications. I don’t know what technology 3PAR is using to provide this interconnect over a physical cable – it may be entirely proprietary. They use their own custom light weight protocols on the connection, so from a software standpoint it is their own stuff. Hardware – I don’t have that information yet.

A unique and key selling point for having a 4-node 3PAR system is persistent cache, which keeps the cache in write back mode during planned or unplanned controller maintenance.

3PAR Persistent Cache mirrors cache from a degraded controller pair to another pair in the cluster automatically.

The 3PAR 7000 series is based on what I believe is the Xyratex OneStor SP-2224 enclosure, the same one IBM uses for their Storwize V7000 system (again, speculation). Speaking of the V7000, I learned tonight that this IBM system implemented RAID 5 in software, resulting in terrible performance. 3PAR RAID 5 is, well – you really can't get any faster than 3PAR RAID. That's another topic though.

3PAR 7000 Series StorServs

3PAR 7000 Series StorServs

3PAR has managed to keep its yellow color, and not go to the HP beige/grey. Somewhat surprising, though I'm told it's because it helps the systems stand out in the data center.

The 7000 series comes in two flavors – a two node 7200, and a two or four node 7400. Both will be available starting December 14.

2.5″ or 3.5″ (or both)

There is also a 3.5″ drive enclosure for large capacity SAS (up to 3TB today). There are also 3.5″ SSDs, but their capacities are unchanged from the 2.5″ variety – I suspect they are just 2.5″ drives in a caddy. This is based, I believe, on the Xyratex OneStor SP-2424.

Xyratex OneStor SP-2424

This is a 4U, 24-drive enclosure for disks only (controllers go in the 2U chassis). 3PAR kept their system flexible by continuing to allow customers to use large capacity disks. Do keep in mind that for the best availability you need to maintain at least two (RAID 10), three (RAID 5), or six (RAID 6) drive enclosures. You can forgo cage level availability if you want, but I wouldn't recommend it – it provides an extra layer of protection from hardware faults, at basically no cost of complexity on the software side (no manual layouts of volumes etc). Cage level availability is set per CPG, roughly as in the sketch below.
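
For the CLI-inclined, here's a minimal sketch of what that looks like. This assumes the createcpg flags I remember from recent InForm OS releases, and the CPG name is made up – double check against your release's CLI guide:

    # Hedged sketch - flags assumed, verify against your InForm OS release:
    # build a RAID 5 (2+1) CPG whose RAID sets span drive cages, so losing
    # an entire enclosure cannot take out any single RAID set
    createcpg -t r5 -ssz 3 -ha cage FC_r5_cage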

HP has never supported the high density 3.5″ disk chassis on the mid range systems, I believe primarily for cost reasons, as they are custom designed. By contrast, the high end systems only support the high density enclosures at this time.

3PAR High Density 3.5" Disk Chassis - not available on mid range systems

The high end chassis is designed for high availability. The disks are not directly accessible with this design. In order to replace disks, the typical process is to run a software task on the array which migrates all of the data from the disks in that particular drive sled (a pack of four drives) to other disks on the system (any disks of the same RPM); once the drive sled is evacuated it can be safely removed. Another method is to just pull the sled – the system will go into logging mode for writes to those disks (sending the writes elsewhere), and you have roughly seven minutes to do what you need to do and re-insert the sled before the system marks those drives as failed and begins the rebuild process. The evacuation workflow is sketched below.
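
From what I recall this evacuation is driven by the servicemag command; a hedged sketch of the workflow (the argument order varies by InForm OS release, so treat this as illustrative only):

    # Hedged sketch - check your release's CLI guide for exact arguments:
    servicemag start 2 5     # evacuate data off magazine 5 in cage 2
    servicemag status        # wait for the evacuation task to complete
    # ...physically swap the drive sled, then return it to service:
    servicemag resume 2 5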

The one thing that HP does not allow on the SP-2424-based 3.5″ drive chassis is high performance (10 or 15K RPM) drives. So you will not be able to build a 7000-series with the same 650GB 15K RPM drives that are available on the high end 10000-series. However they do have a nice 900GB 10K RPM option in a 2.5″ form factor, which I think is a good compromise. Or you could go with a 300GB 15K RPM 2.5″ drive. I don't think there is a technical reason behind this, so I imagine if enough customers really want this sort of setup and yell about it, HP will cave and start supporting it. Probably won't be enough demand though.

Basic array specifications

Array model | Max controller nodes | Max raw capacity | Max drives | Max host ports | Max data cache
7200  | 2 | 250TB  | 144  | Up to 12x 8Gbps FC, or 4x 8Gbps FC and 4x 10Gbps iSCSI | 24GB
7400  | 4 | 864TB  | 480  | Up to 24x 8Gbps FC, or 8x 8Gbps FC and 8x 10Gbps iSCSI | 64GB
10400 | 4 | 800TB  | 960  | Up to 96x 8Gbps FC; up to 16x 10Gbps iSCSI | 128GB
10800 | 8 | 1600TB | 1920 | Up to 192x 8Gbps FC; up to 32x 10Gbps iSCSI | 512GB

(Note: All current 3PAR arrays have dedicated gigabit network ports on each controller for IP-based replication)

In a nutshell, vs. the F-class mid range systems, the new 7000-series:

  • Doubles the data cache per controller to 12GB compared to the F200 (almost triple if you compare the 7400 to the F200/F400)
  • Doubles the control cache per controller to 8GB. The control cache is dedicated memory for the operating system, completely isolated from the data cache.
  • Brings PCI-Express support to the 3PAR mid range, allowing for 8Gbps Fibre Channel and 10Gbps iSCSI
  • Brings the mid range up to spec with the latest 4th generation ASIC, and latest Intel processor technology.
  • Nearly triples the raw capacity
  • Moves from an entirely Fibre Channel based system to a SAS back end with a Fibre Channel front end
  • Moves from exclusively 3.5″ drives to primarily 2.5″ drives with a couple of 3.5″ drive options
  • Brings FCoE support to the 3PAR mid range (in 2013) for the four customers who use FCoE.
  • Cuts the size of the controllers by more than half
  • Obviously dramatically increases the I/O and throughput of the system with the new ASIC, PCIe, faster CPU cores, more CPU cores (in the 7400) and the extra cache.

Where’s the Control Cache?

Control cache is basically dedicated memory associated with the Intel processors to run the Debian Linux operating system which is the base for 3PAR’s own software layer.

HP apparently has removed all references to the control cache from the specifications – I don't understand why. I verified with 3PAR last night that there was no re-design in that department; the separate control cache still exists, and as previously mentioned it is 8GB on the 7000-series. It's important to note that some other storage platforms share the same memory for both data and control cache and give you a single number for how much cache there is – when in reality the data cache can be quite a bit less.

Differences between the 7200 and 7400 series controllers

Unlike previous generations of 3PAR systems, where all controllers for a given class of system were identical, the new controllers for the 10400 vs 10800, as well as the 7200 vs 7400, are fairly different.

  • 7200 has quad core 1.8GHz CPUs, 7400 has hex core 1.8GHz CPUs.
  • 7200 has 12GB cache/controller, 7400 has 16GB/controller.
  • 7200 supports 144 disks/controller pair, 7400 supports 240 disks.
  • Along that same note, the 7200 supports five disk enclosures/pair, the 7400 supports nine.
  • 7400 has extra cluster interconnects to link two enclosures together forming a mesh active cluster.

iSCSI No longer a second class citizen

3PAR has really only sort of half-heartedly embraced iSCSI over the years; their customer base was solidly Fibre Channel. When you talked to them, of course they'd say they do iSCSI as well as anyone else, but the truth is they didn't. They didn't because the iSCSI HBA that they used was the 4000 series from QLogic. The most critical failing of this part is its pathetic throughput. Even though it has 2x1Gbps ports, the card itself is only capable of 1Gbps of throughput. So you look at your 3PAR array and make a decision:

  • I can install a 4x4Gbps Fibre Channel card and push the PCI-X bus to the limit
  • I can install a 2x1Gbps iSCSI card and hobble along with less capacity than a single Fibre Channel connection

I really don't understand why they did not go back and re-visit alternative iSCSI HBA suppliers, since they kept the same HBA for a whole six years. I would have liked to have seen at least a quad port 1Gbps card that could do 4Gbps of throughput. I hammered on them about it for years – it just wasn't a priority.

But no more! I don’t know what card they are using now, but it is PCIe and it is 10Gbps! Of course the same applies to the 10000-series – I’d assume they are using the same HBA in both but I am not certain.

Lower cost across the board for the SME

For me these details are just as much, if not more exciting than the new hardware itself. These are the sorts of details people don’t learn about until you actually get into the process of evaluating or purchasing a system.

Traditionally 3PAR has been all about margin – at one point I believe they were known to have the highest margins in the industry (pre-acquisition). I don't know where that point stands today, but from an up front standpoint they were not a cheap platform to use. I've always gotten a ton of value out of the platform, making the cost from my standpoint trivial to justify. But less experienced management out there often see cost per TB, or cost per drive, or support costs, or whatever – compared to other platforms at a high level they often cost more. How much value you derive from those costs can vary greatly.

Now it’s obvious that HP is shifting 3PAR’s strategy from something that is entirely margin focused to most likely lower margins but orders of magnitude more volume to make up for it.

I do not know if any of these apply to anything other than the 7000-series, for now assume they do not.

Thin licensing included in base software

Winning the no brainer of the year award in the storage category, HP is throwing in all thin licensing as part of the base license of the array. Prior to this there were separate charges to license thin functionality, based on how much written storage was used for thin provisioning. You could license only 10TB on a 100TB array if you wanted, but you lose the ability to provision new thin provisioned volumes if you exceed that license (I believe there is no impact on existing volumes, but the system will pester you on a daily basis that you are in violation of the license). This approach often caught customers off guard during upgrades – they sometimes thought they only needed to buy disks, but they needed software licenses for those disks, as well as support for those software licenses.

HP finally realized that thin provisioning is the norm rather than the exception. HP is borrowing a page from the Dell Compellent handbook here.
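
Creating a thin volume was always trivial from the CLI – the difference now is that there's no license meter running. A minimal sketch, with made-up CPG and volume names, assuming the createvv syntax I remember (size-unit suffixes and filter flags vary by release):

    # Hedged sketch - names made up, flags assumed from the InForm CLI:
    createvv -tpvv FC_r5_cage dbvol01 2T    # 2TB thin provisioned volume
    showvv -p -prov tpvv                    # list the thin volumes on the system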

Software License costs capped

Traditionally, most of 3PAR’s software features are based upon some measure of capacity of the system, in most cases it is raw capacity, for thin provisioning it is a more arbitrary value.

HP is once again following the Dell Compellent handbook, which caps license costs at a set value (in Dell's case I believe it is 96 spindles). For the 3PAR 7000-series the software license caps are:

  • 7200: 48 drives (33% of array capacity)
  • 7400: 168 drives (35% of array capacity)

Easy setup with Smart Start

Leveraging technology from the EVA line of arrays, HP has radically simplified the installation process of a 7000-series array, so much so that the customer can now perform the installation on their own without professional services. This is huge for this market segment. The up front professional services to install a mid range F200 storage system had a list price of $10,000 (as of last year anyway).

User serviceable components

Again for the first time in 3PAR’s history a customer will be allowed to replace their own components (disks at least, I assume controllers as well though). This again is huge – it will slash the entry level pricing for support for organizations that have local support staff available.

The 7000-series comes by default with 24x7x365 4-hour on site support (parts only). I believe software support and higher end on site services are available for an additional charge.

All SSD 7000 series

Like the 10000-series, the 7000-series can run on 100% SSDs – a configuration that for some reason was not possible on the previous F-class of mid range systems (I think the T-class could not either).

HP claims that with a maximum configuration – a 4-node 7400 maxed out with 240 x 100GB or 200GB SSDs – the system can achieve 320,000 IOPS, a number which HP claims is a 2.4x performance advantage over their closest priced competitor. This number is based on a 100% random read test with 8kB block sizes at 1.6 milliseconds of latency. SPC-1 numbers are coming – I'd guesstimate that SPC-1 for the 7400 will be in the ~110,000 IOPS range, since it's roughly one quarter the power of a 10800 (half the nodes, and each node has half the ASICs & CPUs and far less data cache).

HP is also announcing their intention to develop a purpose built all-SSD solution based on 3PAR technology.

Other software announcements

Most of them from here.

Priority Optimization

For a long time 3PAR has touted its ability to handle many workloads of different types simultaneously, providing multiple levels of QoS on a single array. This was true, to a point.

3PAR: Mixed quality of service in the same array

While it is true that you can provide different levels of QoS on the same system, 3PAR customers such as myself realized years ago that it could be better. A workload has the potential to blow out the caches on the controllers (my biggest performance headache with 3PAR – it doesn't happen often, and all things considered I'd say it's probably a minor issue compared to competing platforms, but for me it's a pain!). This is even more risky in a larger service provider environment where the operator has no idea what kind of workloads the customers will be running. Sure, you can do funky things like carve the system up so less of it is impacted when that sort of event happens, but there are trade-offs there as well.

Priority Optimization

The 3PAR world is changing. Priority Optimization – a feature that is essentially beta at this point – allows the operator to set thresholds from both an IOPS as well as a bandwidth perspective. The system reacts basically in real time. Now on a 3PAR platform you can guarantee a certain level of performance to a workload, whereas in the past there was a lot more hope involved. Correct me if I'm wrong, but I thought this sort of QoS was exactly the sort of thing that Oracle Pillar used to tout. I'm not sure if they had knobs like this, but I do recall them touting QoS a lot.

Priority Optimization will be available sometime in 2013 – I'd imagine early 2013, but I'm not sure.
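
I haven't touched the feature yet, so here is purely a hedged sketch of what the knobs might look like – the setqos command name, flags, units and volume set name are all assumptions based on the briefings, not confirmed syntax:

    # Hedged sketch - command/flags assumed, volume set name made up:
    # cap the "tenant1" volume set at 5,000 IOPS and 200MB/sec
    setqos -io 5000 -bw 200m vvset:tenant1
    showqos                    # review the active QoS rules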

Autonomic Replication

As I’ve said before – I’ve never used 3PAR replication – never needed it. I’ve tended to build things so that data is replicated via other means, and low level volume-based replication is just overkill – not to mention the software licensing costs.

3PAR Synchronous long distance replication: unique in the mid range

But many others I'm sure do use it, and this industry first, as HP called it, is pretty neat. Once you have your arrays connected and your replication policies defined, when you create a new volume on the source array, all details revolving around replication are automatically configured to protect that volume according to the policy that is defined. 3PAR replication was already a breeze to configure; this just made it that much easier.

Autonomic Rebalance

3PAR has long had the ability to re-stripe data across all spindles when new disks were added, however this was always somewhat of a manual process, and it could take a not insignificant amount of time because you're basically reading and re-writing every bit of data on the system. It was a very brute force approach. On top of that you had to have a software license for Dynamic Optimization in order to use it.

Autonomic rebalance is now included in the base software license and will automatically re-balance the system when resources change, new disks, new controllers etc. It will try, whenever possible, to move the least amount of data – so the brute force approach is gone, the system has the ability to be more intelligent about re-laying out data.

I believe this approach also came from the EVA storage platform.
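
I believe the manual entry point for this is a new tunesys command – a hedged sketch (I haven't run it on this release, so treat the invocation as an assumption):

    # Hedged sketch - re-balance data after adding drives or controllers:
    tunesys
    showtask -active           # monitor the background re-layout task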

Persistent Ports

This is a really cool feature as well – it gives the ability to provide redundant connectivity to multiple controllers on a 3PAR array without having to have host-based multipathing software. How is this possible? Basically it is NPIV for the array. Peer controllers can assume the World Wide Names for the ports on their partner controller. If a controller goes down, its peer assumes the identities of that controller's ports, instantaneously providing connectivity for hosts that were connected (through the fabric, not directly) to the ports on the downed controller. This eliminates pauses for MPIO software to detect faults and fail over, and generally makes life a better place.

HP claims that some other tier 1 vendors can provide this functionality for software changes, but today they do not provide it for hardware changes. 3PAR provides this technology for both hardware and software changes – on all of their currently shipping systems!

Peer Persistence

This is basically a pair of 3PAR arrays acting as a transparent fail over cluster over local or metro distances. From the PDF:

The Peer Persistence software achieves this key enhancement by taking advantage of the Asymmetric Logical Unit Access (ALUA) capability that allows paths to a SCSI device to be marked as having different characteristics.

Peer persistence also allows for active-active to maximize available storage I/O under normal conditions.

Initially Peer Persistence is available for VMware, other platforms to follow.

3PAR Peer Persistence

Virtualized Service Processor

All 3PAR systems have come with a dedicated server known as the Service Processor, which acts as a proxy of sorts between the array and 3PAR support. It is used for alerting as well as remote administration. The hardware configuration of this server was quite inflexible, and it made the system needlessly complex to deploy in some scenarios (mainly due to having only a single network port).

The Service Processor was also rated to consume a mind boggling 300W of power (it may have been a legacy typo, but that's the number that was given in the specs).

The Service Processor can now be deployed as a virtual machine!

Web Services API

3PAR has long had a CIM API (never really knew what that was, to be honest), and it has a very easy-to-use CLI as well (used that tons!), but now they'll have a RESTful Web Services API that uses JSON (ugh, I hate JSON as you might recall! If it's not friends with grep or sed it's not friends with me!). Fortunately for people like me we can keep using the CLI.

This API is, of course, designed to be integrated with other provisioning systems, whether it’s something off the shelf like OpenStack, or custom stuff organizations write on their own.
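
To make the grep-unfriendliness concrete, here's a hedged sketch of what talking to it looks like – the port, endpoints and session key header are my reading of the early WSAPI documentation, so verify against the official reference:

    # Hedged sketch - endpoints/header assumed from early WSAPI docs:
    # 1) authenticate; a session key comes back in the JSON response
    curl -sk -X POST https://array:8080/api/v1/credentials \
         -H 'Content-Type: application/json' \
         -d '{"user":"3paradm","password":"secret"}'
    # 2) present the key to list the virtual volumes on the system
    curl -sk https://array:8080/api/v1/volumes \
         -H 'X-HP3PAR-WSAPI-SessionKey: <key-from-step-1>'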

Additional levels of RAID 6

3PAR first introduced RAID 6 (aka RAID DP) with the aforementioned last major software release three years ago; with that version there were two options for RAID 6:

  • 6+2
  • 14+2

The new software adds several more options:

  • 4+2
  • 8+2
  • 10+2
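
For those who think in CPG terms, the data+parity counts above map to RAID set sizes – 4+2 is a set size of six. A hedged sketch, with the same createcpg assumptions (and made-up CPG name) as earlier:

    # Hedged sketch - a 4+2 RAID 6 CPG on nearline disks, cage-level HA:
    createcpg -t r6 -ssz 6 -ha cage NL_r6_cage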

Thick Conversion

I'm sure many customers have wanted this over the years as well. The new software will allow you to convert a thin volume to a thick (fat) volume. The main purpose of this, of course, is to save on licensing for thin provisioning when you have a volume that is fully provisioned (along with the likelihood of space reclamation on that volume being low as well). I know I could have used this years ago. I always shook my fist at 3PAR when they made it easy to convert to thin, but really impossible to convert back to thick (without service disruption anyway). Basically all that is needed is to flip a bit in the OS (I'm sure the nitty gritty is more complicated).

Online Import

This basically allows EVA customers to migrate to 3PAR storage without disruption (in most cases).

System Tuner now included by default

The System Tuner package is now included in the base operating system (at least on the 7000-series). System Tuner is a pretty neat little tool, written many years ago, that can look at a 3PAR system in real time and, based on thresholds that you define, recommend dynamic movement of data around the system to optimize the data layout. From what I recall it was written in response to a particular big customer request, to prove that they could do such data movement.

3PAR System Tuner moves chunklets around in real time

It is important to note that this is an on demand tool; when running, it gathers tens of thousands of additional performance statistics from the chunklets on the system. It's not something that can (or should) be run all the time. You need to run it while the workload you want to analyze is running, in order to see if further chunklet optimization would benefit you.

System Tuner will maintain all existing availability policies automatically.

In the vast majority of cases the use of this tool is not required. In fact, in my experience going back six years I've used it on a few different occasions, and in all cases it didn't provide any benefit. The system generally does a very good job of distributing resources. But if your data access patterns change significantly, System Tuner may be for you – and now it's included!

3PAR File Services

This announcement was terribly confusing to me at first. But I got some clarification. The file services module is based on the HP StoreEasy 3830 storage gateway.

  • Hardware platform is a DL380p Gen8 rack server attached to the 3PAR via Fibre Channel
  • Software platform is Microsoft Windows Storage Server 2012 Standard Edition
  • Provides NFS, CIFS for files and iSCSI for block
  • SMB 3.0 supported (I guess that is new, I don’t use CIFS much)
  • NFS 4.1 supported (I’ll stick to NFSv3, thanks – I assume that is supported as well)
  • Volumes up to 16TB in size
  • Integrated de-duplication (2:1 – 20:1)
  • VSS Integration – I believe that means no file system-based snapshots (e.g. transparent access of the snapshot from within the same volume)?
  • Uses Microsoft clustering for optional HA
  • Other “Windowsey” things

The confusion comes from them putting this device under the 3PAR brand. It doesn't take a rocket scientist to look at the spec sheets and see there are no Ethernet ports on the arrays for file serving. I'd be curious to find out the cost of this file services add-on myself, and what its user interface is like. I don't believe there is any special integration between this file services module and 3PAR – it's just a generic gateway appliance.

For someone with primarily a Linux background I have to admit I wouldn’t feel comfortable relying on a Microsoft implementation of NFS for my Linux boxes (by the same token I feel the same way about using Samba for serious Windows work – these days I wouldn’t consider it – I’d only use it for light duty simple stuff).

Oh, and while you're at it HP – gimme a VSA of this thing too.

Good-bye EVA and VSP, I never knew thee

Today, I think, was one of the last nails in the coffin for EVA. Nowhere was EVA present in the presentation other than in the tools to seamlessly migrate off of EVA onto 3PAR. Well, that and they have pulled some of the ease of use from EVA into 3PAR.

The Hitachi VSP (aka HP P9500) was literally nowhere to be seen. Since HP acquired 3PAR, the OEM'd Hitachi equipment has been somewhat of a fifth wheel in the HP storage portfolio. Like the EVA, HP had customers who wanted the VSP for things that 3PAR simply could not or would not do at the time, whether that was mainframe connectivity or perhaps ultra high speed data warehousing. When HP acquired 3PAR, the high end was still PCI-X based and there wasn't a prayer it was going to be able to dish out 10+ GB/second. The V800 changed that though. HP is finally making inroads into P9500 customers with the new 3PAR gear. I personally know of two shops that have massive deployments of HP P9500 that will soon have their first 3PAR in their respective data centers. I'm sure many more will follow.

Time will tell how long P9500 sticks around, but I’d be shocked – really shocked if HP decided to OEM whatever came next out of Hitachi.

What’s Missing

This is a massive set of announcements, the result of the blood, sweat and tears of many engineers. Assuming it all works as advertised, they did an awesome job!

BUT.

There’s always a BUT isn’t there.

There is one area that I have hammered on 3PAR about for what feels like three years now and haven't gotten anywhere; the second area is more of a question/clarification.

SSD-Accelerated write caching

Repeat after me – AO (Adaptive Optimization) is not enough. Sub-LUN auto tiering is not enough. I brought this up with David Scott himself last year, and I bring it up every time I talk to 3PAR. Please, I beg you, please come out with SSD-accelerated write caching technology. The last time I saw 3PAR in person I gave them two examples. The first was EMC FAST Cache, which is both a read and a write-back cache. The second was Dell Compellent's Data Progression technology. I've known about Compellent's storage technology for years, but there was one bit of information that I was not made aware of until earlier this year: their Data Progression technology by default automatically sends ALL writes (regardless of what tier the blocks live on) to the highest tier. On top of that, this feature is included in the base software license – it is not part of the add-on automatic tiering software.

The key is accelerating writes. Not reads, though reads are nice too. Reads are easy to accelerate compared to writes. The workload on my 3PAR here at my small company is roughly 92% write (yes you read that right). Accelerating reads on the 3PAR end of things won’t do anything for me!

If they can manage to pull themselves together and create a stable product, the Mt. Rainier technology from QLogic could be a stopgap. I believe NetApp is already partnered with them for those products. Mt. Rainier, other than being a mountain near Seattle, is a host-based read and write acceleration technology for Fibre Channel storage systems.

Automated Peer Motion

HP released this more than a year ago – however, to date I have not noticed anything revolving around automatic movement of volumes. Call it what you want: load balancing, tiering, or something else – as far as I know, at this point any actions involving Peer Motion are entirely manual. Another point is I'm not sure how many peers an array can have. HP tries to say it's near limitless – could you have 10? 20? 30? 100? I don't know the answer to that.

Again going back to Dell Compellent (sorry), their Live Volume software has automatic workload distribution. I asked HP about this last year and they said it was not in place then – I don't see it in place yet.

That said – especially with the announcements here – I'm doubling down on my 3PAR passion. I was seriously pushing Compellent earlier in the year (one of the main drivers was cost – one reseller I know calls them the Poor Man's 3PAR), but where things stand now, their platform isn't competitive enough, from either a cost or an architecture standpoint. I'd love to have my writes going to SSD the way Compellent's Data Progression does things, but now that the cost situation is reversed, it's a no brainer to stick with 3PAR.

More Explosions

HP needs to take an excursion and blow up some 3PAR storage to see how fast and how well it handles disaster recovery – and take that new Peer Persistence technology and use it in the test.

Other storage announcements

As is obvious by now, the rest of the announcements pale in comparison to what came out of 3PAR. This really is the first major feature release of 3PAR software in three years (the last one being 2.3.1 – my company at the time participated in the press event, and I was lucky enough to be the first production customer to run it in early January 2010; I had to for Exanet support – Exanet was going bust and I wanted to get on their latest code before they went *poof*).

StoreOnce Improvements

The StoreOnce product line was refreshed earlier in the year, and HP made some controversial performance claims. From what I can see the only improvement here is that they brought some performance enhancements down from the high end to the rest of the StoreOnce portfolio.

I would really like to see HP release a VMware VSA with StoreOnce – it really sounds like a no brainer. I'll keep waiting…

StoreAll Improvements

StoreAll is the new name for the IBRIX product line, HP's file and object storage offering. The main improvement here is something called Express Query, which I think is basically a metadata search engine that is thousands of times faster than using regular search functions for unstructured data. For me, I'd rather just structure the data a bit more. The example given was tagging all files for a particular movie to make them easier to retrieve later. I'd just have a directory tree and put all the files in the tree – I like to be organized. I think this new query tool depends on some level of structure anyway – the structure being the tags you can put on files/objects in the system.

HP Converged storage growth - 38% YoY - notice no mention of StoreAll/IBRIX! Also no mention of growth for Lefthand either

HP has never really talked a whole lot about IBRIX – and as time goes on I'm understanding why. Honestly it's not in the same league (or sport for that matter) as 3PAR for quality and reliability, not even close. It lacks features, and according to someone I know who has more than a PB on HP IBRIX storage (wasn't his idea, it's a big company), it's really not pleasant to use. I could say more, but I'll end by saying it's too bad that HP does not have a stronger NAS offering. IBRIX may scale well on paper, but there's a lot more to it than the paper specs of course. I went over the IBRIX+3PAR implementation guide for using 3PAR back end storage on an IBRIX system and wasn't impressed with some of the limitations.

Like everything else, I would like to see a full IBRIX cluster product deployable as a VMware VSA. It would be especially handy for small deployments (e.g. sub-1TB). The key here is the high availability.

HP also announced integration between StoreAll Express Query and Autonomy software. When the Autonomy guy came on stage I really just had one word to describe it: AWKWARD – given what happened recently, obviously!

StoreVirtual

This was known as the P4000, or LeftHand before that. It was also refreshed earlier in the year; nothing new was announced today. HP is trying to claim the P4000 VSA as Software Defined Storage (ugh).

Conclusion

Make no mistake people – this storage announcement was all about 3PAR. David Scott tried his best to share the love, but there just wasn’t much exciting to talk about outside of 3PAR.

6,000+ words! Woohoo. That took a lot of time to write – hopefully it's the most in-depth review of what is coming out.

51 Comments

  1. Nice summary, the big difference between Compellent and 3PAR is that you know 3PAR and have experience of both the reality and the marketing. The features you mention around Compellent are largely marketing and not in the same class as 3PAR, but the poor man's 3PAR tag line has served them well. You can do pretty much what Compellent is doing with data progression using Adaptive Optimization; no, it's not free, but it just got a hell of a lot cheaper and you can pin all new writes in SSD, so it may be enough. The caveat being in order to maintain performance you need to ensure space is available for all new writes to SSD; if you fill SSD then writes will go to the next tier. That's also why Dell recommend you change the default policy on Compellent. If using SSDs they recommend you disable the default policy and don't place all new writes to the SSD tier. There are a few reasons behind that really: firstly there's limited space on SSD as the write tier is also RAID 1, secondly the data progression algorithm can only move once a day, and thirdly SSD ain't that fast for writes unless you have plenty of them, which then starts to get expensive. But I do agree some other form of write cache would be beneficial. I know HP pre-announced SmartCache earlier this year, which you may now get to see at Discover.
    http://h30613.www3.hp.com/media/files/downloads/Non-FilmedSessions/TB2999_Gaudet.pdf

    BTW the Gen4 ASIC’s name is Harrier.

    Comment by John_H — December 4, 2012 @ 7:38 am

  2. Thanks for the comment! Per data progression vs AO – yes, I agree most of what data progression does is possible with AO, with the exception of this one use case where all new writes go to SSD. AO can't do that. When I was at 3PAR HQ I met with a bunch of folks including a senior software guy (I'm sure he wrote some AO stuff); he made the same argument, but after I clarified more he said he'd get back to me and so far hasn't. AO has to have moved those blocks to SSD before the writes come in for those writes to originate on SSD.

    Also Compellent’s write cache is limited to 1GB, even though they can have up to 64GB of read cache/controller. So with a 1GB write cache sending writes to SSD is pretty much essential.

Yeah, I recall the new SmartCache for Gen8 servers being announced, though that is still a read-only cache. I'm not sure what the big advantage of it is though (and why it's limited to Gen8). Fusion-io has several caching products that I could use today if I wanted read caching at the host layer. Also I don't have to go out and replace my servers to use Fusion-io's cache.

    Comment by Nate — December 4, 2012 @ 8:19 am

  3. I knew you would do a stellar job writing this up. A few things off the top of my head from what I had been told recently:

The key point of RESTful is to be used as the hook into Hadoop/Cloudera/HP Cloud/OpenStack. It looks like they will continue to sell the F-series into October but will EOL around June. They are going to be offering file services through the StoreEasy 3850 Gateway running Win2012 and using SMB 3.0 and the deduplication it has built in; I don't have experience with that configuration but I've heard good and bad. Persistent Ports – port identity is stored in memory when the controllers are active. The SSD based version will have a limit of 240 SSD drives and they are claiming 320K IOPS will be producible on the 7400. FCoE is forthcoming but not many customers have asked for it. You will probably see a limited number of customers attempt to use the flat SAN direct connect capabilities of these systems due to the limited number of FC ports.

There are also announcements around the revamping of IBRIX plus its integration with Autonomy tech; for me this space is still Isilon's to own. HP sees NAS as a high growth industry, that's why you see only 10G connectivity on the new units. iSCSI will be offloaded, as will FCoE when it's turned on. Some of the other stuff is listed here: http://h17007.www1.hp.com/us/en/storage/nextera/index.aspx

    Comment by gchapman — December 4, 2012 @ 9:04 am

  4. Hey Gabriel!

Per the API – yes, I totally agree it's for integration with things like OpenStack and other custom provisioning systems. It makes a lot of sense for that, though really just using SSH should be fine as well 🙂 I guess you can say 3PAR is Web 2.0 compliant now..

    IBRIX vs Isilon for scale-out NAS – absolutely right (though I’m not sure if Isilon has API-based object storage like EMC Atmos – so in that edge case IBRIX is good).

Isilon being a Seattle-based company, and having recently spent 10 years living there, I've come across a bunch of folks that use it and haven't heard many complaints over the years. Have a few friends that work there too. It sounds like very easy to use stuff for scale out. I'm surprised how little progress HP has made with IBRIX since the acquisition. From the looks of it they announced the acquisition in July 2009.

BUT IBRIX can operate as a gateway as well, with 3PAR or other storage on the back end; Isilon doesn't operate in that mode (that I'm aware of). Gateway mode doesn't make a lot of sense for massive deployments, but for small setups it can be really handy.

    That is of course another advantage that Dell has – they have Exanet tech that I suspect is better for NFS at least than IBRIX, and also can be used in gateway mode.

    Comment by Nate — December 4, 2012 @ 9:17 am

  5. […] administrator would want to have especially given the reliance upon thin provisioning with 3PAR. I won’t go into the ins-n-outs of all these various packages, Nate over at TechOpsGuys alrea… And trust me, he knows the systems far better than I do so his input is highly […]

    Pingback by Baby 3PAR to the rescue | Thankfully the RAID is Gone — December 4, 2012 @ 10:14 am

  6. Actually, if you create the volumes on SSD first, or even tune them later using Dynamic Optimization, and then apply the Adaptive Optimization policy, all new writes (and overwrites to data pinned in SSD) will go to the SSD tier. However if you have an established system it's going to take time and effort to first shuffle those volumes into SSD and ensure you have enough room to accommodate the volume until AO can perform the sub-LUN tuning. This comes back to the reason Data Progression is not all that it's cracked up to be in an SSD environment: typically you don't have enough of it to accommodate established volumes, and you still need quite a few SSDs to outperform lots of wide striped SAS drives. With AO you could perform the tunes more frequently, allowing you to drain SSD more quickly than you could with Data Progression. I'm not really convinced anyone has all bases covered on SSD, but AO is very powerful and flexible tech on a 3PAR array.

    Comment by John_H — December 4, 2012 @ 10:20 am

  7. Oh OK – I see your point there, that is an interesting strategy. It is sort of a hack, and yeah, that strategy would make it more difficult to enable on an existing system with a bunch of data. I will talk to 3PAR more about this! The other aspect is software licensing. I don't know how much AO licensing is now, but the added benefit on the Compellent side is that the feature was part of the core license; with 3PAR, all disks would have to have AO licensing in the current state. Though with the license caps on the 7000 that is much less painful.

    Thanks for clarifying! that makes a lot of sense.

    Comment by Nate — December 4, 2012 @ 10:31 am

  8. Nate, sorry, can't help with the licensing, but software is pretty much all margin so it's definitely up for negotiation. Also on the StoreServ 7000 you can go with the Optimization Suite, which provides Dynamic Optimization, Adaptive Optimization and Peer Motion in a single license, so you get massive flexibility. You can move data up, down or sideways with those licenses. With Compellent your only direction for online data is down, unless the data gets promoted after being hit consecutively over a number of days – hardly responsive.

    Comment by John_H — December 4, 2012 @ 10:42 am

  9. With the iBrix gateway the advantage is that it includes data tiering so you can lug a pair of gateways into a 3PAR and land your data their. However you can also have another tier based on a X9000 appliance or two. The gateways and the appliances operate as part of the same 16PB namespace. thsi then allows you to set data tiering policies based on different file properties etc. So you can retrieve recently written data from 3PAR via the gateways but dat say older than 3 months can be moves seamlessly to a secon appliance based on more economical storage like P2000 with SAS drives, data over 1 year old or some other criteria can them move to a a big X9000 data store with MDS6000’s and 3TB drives. Since it’s a single namespace retrieval of the data is transparent to the user, regardless of the tier the data sits on. Sounds a bit like data progression but for files 🙂

    Comment by John_H — December 4, 2012 @ 10:54 am

  10. Corrections – With the iBrix gateway the advantage is that it includes data tiering so you can plug a pair of gateways into a 3PAR and land your data there. However you can also have another tier based on addiyional X9000 appliances. The gateways and the appliances operate as part of the same 16PB namespace. This then allows you to set data tiering policies based on different file properties etc. So you can retrieve recently written data directly from the 3PAR via the gateways, but data that is say older than 3 months can be moved seamlessly off to a second X9000 appliance based on more economical storage system like P2000 with SAS drives. Data over 1 year old or some other criteria you set can then be moved to a larger X9000 data store with MDS6000’s and 3TB drives. Since it’s all a single namespace, retrieval of the data is transparent to the user or application, regardless of the tier the data sits on.
    Sounds a bit like data progression but for files 🙂

    Comment by John_H — December 4, 2012 @ 10:58 am

  11. File tiering on IBRIX huh! That is pretty neat. I recall being briefed on a similar technology at BlueArc a few years ago; in BlueArc's case (at the time anyway) they could use any NFS target (BlueArc or not) to tier. The example they gave at the time was the Data Domain NFS gateway: their system could transparently tier files over to a Data Domain system via NFS for cold storage, then upon access the BlueArc would retrieve the file(s) and serve them to the users.

Can IBRIX tiering work with any NFS target or is it IBRIX specific (I assume you're from HP, you seem to be coming from an HP proxy in Europe 🙂 )? Is there a similar NFS-style gateway for StoreOnce? Could IBRIX tier to StoreOnce for cold storage?

    thanks again for the info!!

    Comment by Nate — December 4, 2012 @ 11:04 am

  12. One more Q – going back to your 3PAR AO thing. Was thinking about it more and I want to clarify – if you put the volume on SSD and then tier it down using AO, once blocks are moved to a lower tier, writes to those blocks go to that tier, NOT to SSD. Only unwritten blocks would go to SSD.

    If that is the case then it’s not adequate still then. I’d like to see ALL writes regardless of tier end up on SSD first and then tiered down from there.

    Comment by Nate — December 4, 2012 @ 11:42 am

  13. AFAIK IBRIX tiering is internal to the IBRIX NAS platform only, but this can span multiple block storage devices with a single namespace. Also there are lots of other cool things around IBRIX: real converged infrastructure, snapshots, data validation, Express Query, scale up, scale out etc.

StoreOnce has an NFS interface as well as FC/iSCSI/CIFS and Catalyst, but the appliances aren't really designed for use as a general purpose NAS store. However, StoreOnce is designed as a portable deduplication engine. As a proof point of this, it's also available now as a software based appliance in HP Data Protector, allowing customers to use their own external storage to build a StoreOnce software store. Since it's portable it could potentially go anywhere.

Yes, AO would send all net new writes, and overwrites to data already residing in SSD, to the SSD tier. If the data doesn't reside in SSD because it's deemed cold and been demoted, then the overwrite occurs on the current tier. I'm told this was a conscious design decision, as write-back cache is there to soak up such transient spikes to cold data, although potentially the new QoS features could help.

Speak to your HP resource; demo and NFR licenses are available to allow you to try before you buy. I've seen plenty of customers who've insisted their workloads mandate SSD because another vendor told them so, but very few that I've seen would actually benefit greatly, especially in the cases where they're attempting to accelerate data system wide.

    Comment by John_H — December 4, 2012 @ 12:17 pm

  14. Yeah, that's what I'm trying to do – accelerate all writes to prevent, or significantly delay, the risk of blowing out the write cache and thus degrading the entire system. A bad SQL query can come in which causes MySQL to go nuts, forcing it to write ~10GB+ of data in order to get the query results – the query takes a long time and then the user issues the query again, and again... now you have a half dozen queries trying to write a massive amount of data as quickly as they can using very large I/O sizes = bad situation to be in.

    Being able to absorb quickly say 50-100GB of data on a 7000-class system in writes without blowing out the write cache for all storage tiers would be wonderful and what I’ve been pestering 3PAR about for years now.

A few companies ago I had a 4-node T400 (so 48GB of data cache between the 4 nodes) with similar processes (though those were predictable since they were scheduled). These in house applications would dump 10s of GB of data from up to 50 different servers simultaneously onto an Exanet cluster (with the T400 back end), and in many cases blew the caches out there as well. Back on my first E200, with 2GB data cache/node, I was unable to create file systems in Linux on RAID 5 larger than about 100GB without blowing out the caches and causing the latency to really spike. My solution in that case was to create them on RAID 1 and do a physical copy to RAID 5 (though “DO” would work too). Also we could not create Oracle data files of any significant size on RAID 5 without blowing out the measly 2GB cache; fortunately it was raw on ASM so the DBAs just created more, smaller data files and spaced out the operations. Those were the only two predictable operations with which I could overrun the E200.

    I agree most people don’t need SSDs for their volumes it is way overkill – hence using it as a cache as being a much better solution.

    thanks!!

    Comment by Nate — December 4, 2012 @ 12:34 pm

  15. The real problem with a write cache is cost: you need to mirror for data integrity, and the larger the cache, the bigger the pipes required. Also, even with a large cache you need to eventually be able to destage it to disk. In the case of a failure you need to be able to destage quickly to maintain data integrity on disk; there are also the write cliff and wear levelling issues in a system wide cache that need to be overcome. That's not to say it won't happen, I just have no idea how this would be implemented.

    Instead of trying to brute force this with SSD, QoS – Priority Optimization – might be a way of limiting the impact of such queries, rather than just accelerating them.

    Comment by John_H — December 4, 2012 @ 1:01 pm

  16. Yeah, that is all true. Though as long as you're at least mirroring it across different disk cages the risk should be manageable. I'd advocate of course using very high quality SLC flash for this purpose, not the lower cost stuff that 3PAR is using today for Tier 0 (assuming they are still using the Mach8IOPS – I don't know for sure; the size limit of 200GB matches the max size of the Mach8IOPS disk still). The cost of the flash could be much more too – but you won't need nearly as much of it. An upside of course is that since it's non-volatile you don't have to worry about flushing it in the event of a power failure or something.

    The QoS may help in the meantime, I can hope at least. It’ll probably be another 6 months until I try it out.

    Comment by Nate — December 4, 2012 @ 1:25 pm

  17. Nate, all of the 100GB & 200GB are SLC drives.

    Comment by John_H — December 4, 2012 @ 3:26 pm

Yeah, though there seem to be many different kinds of SLC, or at least vastly varying degrees of quality and performance..

    Comment by Nate — December 4, 2012 @ 3:32 pm

  19. Intel Ralston peak I believe.
    http://www.hgst.com/tech/techlib.nsf/techdocs/6638A549EBEF196888257A0700037239/$file/USSSD400S.B_OEMSpec_Enc_v1.01.pdf

    Comment by John_H — December 4, 2012 @ 4:31 pm

  20. Considering the 3PAR history of having really dense JBODs, it seems strange to me that they used the OneStor SP-2224 chassis. Do you think we may see a new T-class replacement using the likes of the much denser OneStor SP-2584?

    Comment by Andre Beukes — December 5, 2012 @ 1:07 am

  21. @Nate

The AO you are talking about actually sounds a lot more like the new Apple Fusion Drive – all writes go to SSD and are then moved per block depending on hotness rating. In a Compellent system the write profile is similar – all incoming writes are ingested at RAID 10 (fastest, least penalty) into the highest available tier. Even on a small system this is hugely efficient and can soak some pretty demanding workloads. Dynamic tiering runs once a day by default but you can set it to go more often – I really think if Dell beefed up the controllers then tiering could actually run hourly (or every 5 mins), making a small SSD tier 0 great value.

3PAR could easily implement this by re-writing the AO script. If it's still the same as the one I used before then it's pretty useless for real workloads. One of the techies I worked with wrote his own script that did exactly the same process, but more efficiently. I'm sure on the newer systems you could do something similar as they have more capacity in the controllers – AO pretty much sank the S400s we had. I'm sure I still actually have the script about someplace if anyone is interested?

    Comment by Andre Beukes — December 5, 2012 @ 1:13 am

  22. […] on here Rate this:Share this:TwitterEmailLinkedInPrintDiggFacebookGoogle +1 Leave a Comment by […]

    Pingback by New 3PAR – What’s under the hood ? « Storage CH Blog — December 5, 2012 @ 4:47 am

  23. John: Interesting! Thanks! So for the new SAS stuff they went with a SAS SSD instead of SATA – that makes sense.

Andre: Per the T-class with Xyratex – I would say the likelihood is low, but anything is possible. The 2584 claims to support longer cable lengths (it doesn't say how long). Also I don't see how drives are installed in that chassis. I was talking with someone very familiar with how DDN does stuff, and they too have a very dense enclosure, which in fact has to be pulled out from the rack in order to swap a drive because the drives are top loaded. The cables can sometimes get tweaked out when sliding the chassis around, which then causes very strange problems. DDN systems at least apparently are designed to be taken offline for maintenance for things like this. I don't know if the Xyratex high density enclosure is similar at all.

    Given the high end nature of the system and the fact that 3PAR has had their custom chassis go through multiple iterations and is a very mature product, there is significantly less need to OEM the chassis from someone else. I am curious to see if they come up with a further enhanced version on whatever comes after the “V” class that is more optimized for 2.5″ drives (perhaps doubling drive densities).

    Per the AO stuff: yeah, from what I understand it is still only possible to write to SSD those blocks that AO has already moved to SSD. I don't know if it is possible to change that; the placement of data may be limited by the ASIC in some way. I did bring it up with some of their senior software folks when I was at their HQ in early October and they were taking notes, though I'm not sure if they've thought about it since. But I will be pinging them on it again (I know many people at 3PAR have read this post).

    If you have the script I’d be interested in seeing it for sure.

    thanks for the comments!!

    Comment by Nate — December 5, 2012 @ 7:01 am

  24. Keep in mind AO isn't a static product; it is constantly evolving and being improved upon, with tweaking based on new firmware, features and hardware releases. If you're basing your opinions on the S-class then it's likely you'll be very pleasantly surprised by the newer 64-bit platforms and firmware levels. I would also be very careful about using scripting in an attempt to provide sub-LUN tiering. The testing, safeguards and support are just not there, and from experience, you won't always get what you were expecting.

    Comment by John_H — December 5, 2012 @ 10:24 am

  25. I was looking at the Hitachi SSD info; on paper at least they claim to be pretty reliable: “The 400GB capacity Ultrastar SSD endures up to 35 petabytes (PB) of random writes over the life of the drive – the equivalent of writing 19.2 terabytes (TB) per day for five years.”

    Comment by Nate — December 5, 2012 @ 11:58 am

  26. Your throwaway line re the IBM Storwize V7000 is unfortunate. It is true that the V7000 implements RAID in software, as do most other modern disk systems (VNX, DS8000, XIV, NetApp FAS, etc.), but your criticism of V7000 performance is not supported by anything I have seen out in the world. The RAID code IBM uses in the V7000 was lifted from the DS8000 product, and that is some of the best performance on the planet.

    It's true that everyone runs their benchmarks on RAID 10 (DS8870 451,000 IOPS, V7000 120,000 IOPS), but so does HP (3PAR V800 450,000 IOPS). It's best to stick to real evidence if we can. Check out http://www.storageperformance.org

    Thanks, Jim

    Comment by Jim Kelly — December 5, 2012 @ 12:07 pm

  27. Hey Jim!

    I was thinking about the chassis, which I think is the same between the V7000 and the 3PAR 7000 series. So I went over to the IBM site and, being bored, started reading the customer feedback there. I saw mostly positive feedback, but then one guy complained about RAID 5 performance vs EMC and said they had to use RAID 1 for their databases, and an IBM person responded confirming that RAID is done in software (for “cost” reasons) and is slower compared to the high-end platforms.

    Slow RAID 5 performance in this day & age is not really expected from such a product, so I thought it was worth mentioning. Otherwise the V7000 looks like a really nice product feature-wise, with the integrated SVC, the real-time compression (I'd love it if 3PAR had compression..), etc.

    But I suppose the upside is the V7000 can do RAID 5; the XIV can't (?!! is 64 CPU cores not enough for 180 disks ??), nor can it do RAID 6 (last I checked).

    Comment by Nate — December 5, 2012 @ 12:32 pm

  28. Fantastic write-up! I think for 3PAR to be successful in the SMB market, and for self-installs, two products need a big rewrite: System Reporter and Recovery Manager for VMware. Between usability, security, configuration, and architecture, they aren't in the same league as the competition. The hardware and the IMC are great, though, and those count the most.

    Comment by Derek — December 6, 2012 @ 10:21 pm

  29. thanks! And yes, I agree the software that doesn't run on the array needs work. I've complained about it myself on this blog on several occasions (I specifically called the VMware plugin near worthless). I'm optimistic about the new VMware Insight Control plugin that supports 3PAR. I should be able to try it out next month; I have it installed now but need to put a patch on the array to get the full functionality out of it, and all changes are frozen till next year due to end-of-year stuff.

    For me it hasn't been a real big deal (I even wrote my own performance trending tools for 3PAR many years ago that tie into Cacti; a stripped-down example of the approach is below). They are working on it though, can't say more than that right now 🙂
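
    The basic approach is nothing fancy: poll the InForm CLI on a schedule, parse the counters, and print them in the name:value format Cacti's script data sources expect. Something like this (a simplified Python sketch, not my actual tool; the host, user and column positions are assumptions, so check them against your CLI version):

        #!/usr/bin/env python
        # Simplified Cacti data-input script for 3PAR (illustrative, not my real tool).
        # Polls the InForm CLI over ssh and prints totals as "name:value" pairs.
        import subprocess

        CMD = ["ssh", "3paradm@my-3par", "statvv", "-iter", "1"]  # hypothetical host/user

        def main():
            out = subprocess.check_output(CMD).decode()
            total_iops = total_kbps = 0.0
            for line in out.splitlines():
                cols = line.split()
                # Skip headers/footers; data rows have a VV name followed by numbers.
                # (Column positions below are illustrative; verify on your CLI version.)
                if len(cols) < 6 or not cols[1].replace(".", "", 1).isdigit():
                    continue
                total_iops += float(cols[1])   # assumed column: total IO/s
                total_kbps += float(cols[4])   # assumed column: total KB/s
            print("iops:%d kbps:%d" % (total_iops, total_kbps))

        if __name__ == "__main__":
            main()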

    thanks again for reading

    Comment by Nate — December 6, 2012 @ 10:39 pm

  30. Hey Nate, we're looking at pricing on this new version and I wanted to ask you something since you're using SSDs. Our vendor says that SSDs must use their own shelf and cannot be combined with other SAS drives. Is that true? Thanks.

    Comment by Eddie — December 20, 2012 @ 8:46 am

  31. I don't believe there is any issue with mixing and matching SSD and spinning rust in the same enclosure (it has never been an issue in the past on 3PAR), but I am double-checking to confirm. There is a limitation on SSDs in the LFF enclosure where they say you cannot mix/match NL SAS and SSD within a specific column of disks, though you can still mix/match as long as they are in different columns.

    Should have confirmation shortly..

    Comment by Nate — December 20, 2012 @ 8:53 am

  32. Update – I am told that for SFF drives you can mix/match SSD and spinning rust in the same 2.5″ drive enclosure, though if you have multiple enclosures the SSDs need to be deployed in columns. So if you put in 16 SSDs across 4 enclosures you'd put 4 SSDs in each enclosure, in the same slot IDs in every enclosure (a quick illustration is below). The remaining slots can be used by spinning rust or other SSDs. I'm not sure if the column extends the entire length of the array or not, but for now I suppose it's safer to say it does, until that can be clarified further.
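
    To make the column idea concrete (purely illustrative; the slot numbering convention here is my assumption, not anything from HP):

        # Toy illustration of column-style SSD placement: "column" = same slot ID
        # in every enclosure. Assumes 24-slot SFF enclosures with slots 0-23.
        def column_layout(num_ssds, num_enclosures):
            """Return {enclosure: [slot, ...]} with SSDs in matching slot IDs."""
            assert num_ssds % num_enclosures == 0, "spread SSDs evenly"
            per_enc = num_ssds // num_enclosures
            return {enc: list(range(per_enc)) for enc in range(num_enclosures)}

        # 16 SSDs across 4 enclosures -> slots 0-3 in each enclosure:
        print(column_layout(16, 4))  # {0: [0, 1, 2, 3], 1: [0, 1, 2, 3], ...}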

    I do not see any mention of restrictions on 2.5″ drives and SSD on the data sheet, so I am asking for further clarification on whether what I was told really applies to 2.5″ or to 3.5″ only. When I get that info I'll post it here.

    Hope this helps?!

    Comment by Nate — December 20, 2012 @ 3:40 pm

  33. Thanks! That does help. I appreciate you taking the time to find that out.

    Comment by Eddie — December 20, 2012 @ 8:02 pm

  34. I got another update last night – the SSD column limitation applies only to the 3.5″ chassis as I originally suspected. There is no column limitation on the 2.5″ enclosures.

    Comment by Nate — December 28, 2012 @ 11:00 am

  35. yeah, so I run enterprise 3PAR in our large data center and it's not stacking up feature-wise or really performance-wise to what I can get with an equally [discounted] NetApp. If 3PAR were still 3PAR we would have seen a lot more innovation, but sadly now, with HP, they are going to capitalize on what they have and innovate as little as possible to wring every dollar possible out of the customer.

    Comment by Andy — March 28, 2013 @ 7:23 pm

  36. I suppose it depends on the workload; the architecture of the NetApp is quite archaic in that they don't balance resources well at all, so you can typically get much higher utilization rates out of a 3PAR box than a NetApp. If your workload is very conducive to the NetApp PAM modules then that may be a better platform, but more read cache has never been beneficial for my workloads (my present workload is 92% write). And of course if you want to run a NetApp in front of a 3PAR for file services, that has been possible for a long time now with the V-series.

    NetApp is a decent platform; I've never really knocked it here, unlike some other platforms. It's certainly not my first (or second) choice, but it's not terrible either.

    The main thing that got me excited about these next-gen 3PAR systems on the mid range is that the overall cost went down roughly 60-75% (purchase price, support, upgrades, etc.) vs the earlier platforms: lower margins for wider market opportunities.

    thanks for the post!

    Comment by Nate — March 28, 2013 @ 7:45 pm

  37. Hi Nate,
    Great read! I work for a company that has just purchased a 7400 and am looking for some articles on best-practice RAID configs.
    I have been reading about fast RAID 5 and 6, but there are no good articles on the number of data disks and parity drives on the 3PAR and what you could expect out of these configs.
    My previous storage background is EVAs and IBMs.
    We are a highly virtualised platform running many HP blade enclosures and Gen8 servers. We are only attaching 2 enclosures at this stage, but the articles I have don't say much about the number of data disks and parity disks for the base setup of the CPGs.
    I suspect I will use RAID 5 3+1 and RAID 6 4+2, but this is just based on articles and no real comparisons of, say, RAID 5 4+1 or RAID 6 6+2.
    I know that at 3+1, RAID 5 will give 91% of the performance of RAID 1.
    Would be really keen on any sites or your personal experience in this matter.
    Cheers
    Steve

    Comment by steve — March 30, 2013 @ 5:36 pm

  38. thanks! For RAID 6 on the 3PAR platform I suggest you check this post out as well.

    3PAR's best practice typically is to match the RAID set size to the number of drive enclosures on the system, for maximum availability (rough performance math for comparing RAID levels is sketched below). So if you have 2 enclosures the “best practice” is likely just RAID 10. If you have 3 enclosures then the best practice would be RAID 5 2+1; if you have 4 enclosures, then 3+1. This assumes “cage level availability” (the default), which protects against an entire shelf failing. With cage-level availability you should keep in mind that if you want to run large amounts of RAID 10 then you should have an even number of disk cages (shelves). The more evenly balanced the configuration, the better. You can of course have any configuration you wish; with an unbalanced configuration the system will balance itself as best it can, and when certain resources run out of space (e.g. one shelf has more disks than another) the system will adapt automatically, though availability may be impacted, as well as performance.
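
    For back-of-envelope comparisons between RAID levels, the standard write-penalty arithmetic works (a simplified model; real arrays with write-back caching and full-stripe writes do better than this, and the disk numbers below are just illustrative):

        # Effective host IOPS for a RAID level and read/write mix, using the
        # classic write penalties: RAID 10 = 2, RAID 5 = 4, RAID 6 = 6 backend
        # IOs per host write. Ignores caching and full-stripe write optimizations.
        WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

        def effective_iops(backend_iops, write_fraction, raid):
            penalty = WRITE_PENALTY[raid]
            # backend = reads + writes * penalty, solved for host IOPS
            return backend_iops / ((1 - write_fraction) + write_fraction * penalty)

        backend = 48 * 150   # e.g. 48 10k disks at ~150 IOPS each
        for raid in WRITE_PENALTY:
            print(raid, int(effective_iops(backend, 0.30, raid)))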

    Also you can get an evaluation copy of Dynamic Optimization (assuming you did not license it with the system) which will allow you to experiment more by changing the RAID levels on the fly.

    Any questions you have, I am happy to try to help; I've had about 7 years' experience on the platform and know it pretty well inside & out.

    oh and thanks for the comment 🙂

    Comment by Nate — March 30, 2013 @ 7:35 pm

  39. Great informative write-up about 3PAR! I will admit I’m a novice to SANs. I’m beginning to evaluate both 3PAR and Compellent. Now that they are both in the same price range I am wondering what about 3PAR’s architecture makes it better than Compellent? Why is Compellent a “Poor Man’s 3PAR”?

    Comment by Daniel — April 14, 2013 @ 10:37 pm

  40. I’m trying to think of the best way to word this for it to make sense. I got the term “poor man’s 3PAR” myself from one of the largest resellers of Compellent in the Northwest.

    Rather than expend that amount of effort right now (I may try again later), I'll say that I think the term no longer applies, not since the release of the 7000-series 3PAR last year, which dramatically slashes the cost (I believe it is very possible to get under Compellent's cost as well). So the 3PAR architecture is no longer out of reach of the mid-range buyer.

    (I removed a bit of my original comment; I didn't realize this comment was under the blog post of a link I was sending you to!)

    Comment by Nate — April 18, 2013 @ 2:56 pm

  41. Many thanks to the author for this informative compilation. My 3PAR boxes arrived today.

    Comment by Jeff Gray — April 26, 2013 @ 11:13 am

  42. Thanks Jeff! glad you found it useful.

    Comment by Nate — April 28, 2013 @ 4:09 pm

  43. […] can see a massive write up I did on this platform when it was released last […]

    Pingback by 3PAR 7400 SSD SPC-1 « TechOpsGuys.com — May 23, 2013 @ 10:48 am

  44. […] note: I once saw some value in Compellent as an alternative to 3PAR but that all went away with the 3PAR 7000-series.) Download article as […]

    Pingback by 3PAR up 82% YoY – $1 Billion run rate « TechOpsGuys.com — May 28, 2013 @ 9:10 am

  45. […] a nut shell the 7450 is the system that HP mentioned at the launch event for the 7400 last December – though the model number was not revealed they said In addition to mixed SSD/HDD and all-SSD […]

    Pingback by Pedal to the metal: HP 3PAR 7450 « TechOpsGuys.com — June 11, 2013 @ 8:43 am

  46. Congratulations!

    You did an excellent job describing the 3PAR features. I had the chance to play with one of the new 7200 systems and I am deeply impressed.
    If older systems are upgraded to the current OS (formerly InForm, now 3PAR OS), all systems will support the same features. One configuration tool across the whole product line; that is great. The GUI is so intuitive that I found most functions without any documentation.

    The RAID system is based on chunklets, and not until you have read the “3PAR Storage Concepts Guide” (just Google it) will you understand the brilliance of this concept.

    Greetings from a new 3PAR fanatic from Germany
    Ernesto

    Comment by Ernst Limbrunner — July 1, 2013 @ 9:48 am

  47. […] 3PAR: The Next Generation (aka 7000 series) – December 2012 (also covers a ton of the new software features as well) […]

    Pingback by HP Storage Tech Day – 3PAR « TechOpsGuys.com — July 30, 2013 @ 2:11 am

  48. Been looking into the capabilities of the arrays, and while most of it is good there are a couple of gotchas depending on what you're looking to get from the array.
    1) Remote Copy / Peer Persistence / active-active clustering / etc. can all be done over either RCFC (aka Fibre Channel; FCIP is not supported for some scenarios) or the built-in 1GbE copper (RCIP) ports. If you have a large deployment and are moving more than 1TB of data per hour across some long geography, there will be limitations and design overhead beyond what is advertised (see the quick bandwidth math after this list).
    2) The reporting, even with the optional reporting options, definitely has room for improvement. Exporting reports? Something looking a bit better than a poor FLOSS project would even be a start.
    3) No eMLC for the 7200/7400, 7450 only, which means you'll be paying the SLC premium. When comparing to other arrays in the general-purpose mid-range, many will balk at that pricing premium.
    4) HP may advertise low prices, but what they don't tell you are all the mandatory unboxing, racking, installing, configuring, dusting, and other fees, most of which IMO are profit padding. Oh, and if you bought a second one for a second location, you pay all these fees twice. If you want to charge, that's fine, but look at what the competition charges to keep yourself competitive.
    5) While advertised as an iSCSI and now FCoE box, if you want all the features in a 50TB+ environment you'll need FC. I wish they would stop treating 10GbE iSCSI as a second-class citizen when aiming at organizations who will spend under 200k, or under 100k, on arrays.
    6) Where's the marketing? Advertising?
    7) HP, yes HP. I can't understand why anyone would buy a product from a company that can't even talk between departments; the right index finger doesn't even know what the right thumb is doing. I understand there has been a large amount of senior leadership shuffling and turnover, but HP really needs to get their act together.
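
    (Quick bandwidth math on item 1, purely as a sanity check:)

        # Can a single 1GbE RCIP link sustain 1TB/hour of replication traffic?
        tb_per_hour = 1
        bytes_per_s = tb_per_hour * 1e12 / 3600   # ~278 MB/s
        gbit_per_s = bytes_per_s * 8 / 1e9        # ~2.2 Gbit/s
        print("%.0f MB/s = %.1f Gbit/s" % (bytes_per_s / 1e6, gbit_per_s))
        # ~2.2 Gbit/s of payload before protocol overhead: well past one 1GbE
        # RCIP port, so you're into multiple links or RCFC territory.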

    ………………………………….
    The biggest strengths I see compared to the competition

    1) Software feature development on previous-generation hardware, providing new software releases years later, unlike some others *cough* EMC.

    2) Management interface is simple enough most can figure out how to do most tasks within a few minutes.

    3) ASICs designed around a 7-year lifecycle, i.e. they have features baked into the ASICs which are not even active until a year or two after release to production.

    4) Phone-home support: if a drive has an issue, guess what, support will automatically send you a new one.

    IMO, most of the problems can be solved either through programmatic solutions or by HP getting their business act together.

    Comment by Patrick — August 29, 2013 @ 9:32 pm

  49. thanks for the excellent and detailed comment! I think the only clarification might be that the 7000 series doesn't (shouldn't?) require any custom installation fees, unlike the other 3PAR systems. I have gotten quotes for the 7000 and there are no such services (you can get them optionally though). For our first 7000 we had to specifically request installation services because it's for a remote site where we have no staff. There are some silly fees HP tries to tack on for things like Dynamic Optimization professional services (e.g. installing a license key), but you can usually get them removed. I think HP uses that kind of thing as a way to show bigger discounts.

    MLC is available on the 7200/7400, though it might be new. 3PAR released new MLC SSDs on August 19th, including an MLC drive for the older F-series. I am not sure if these new MLC drives are eMLC; I don't see any mention of it. But the MLC SSD part numbers for the 7450 vs the 7400/7200 are the same at the moment.

    Their reporting system needs a bunch of work for sure. I have been using it since 2007 and only recently figured out how to see, for example, the “top N volumes for I/O in the past 30 minutes”. I wrote my own monitoring system several years ago and I use that most of the time; once you are collecting per-volume stats anyway, the top-N question is easy to answer (see the fragment below).
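
    (An illustrative Python fragment; it assumes per-VV IOPS have already been collected into a dict, e.g. by something like the Cacti sketch in an earlier comment. The volume names here are made up:)

        # Top N volumes by IOPS, given {vv_name: iops} collected elsewhere.
        def top_n(vv_iops, n=10):
            return sorted(vv_iops.items(), key=lambda kv: kv[1], reverse=True)[:n]

        sample = {"vv-app01": 1200.0, "vv-db01": 5400.0, "vv-logs": 300.0}
        for name, iops in top_n(sample, 3):
            print("%-12s %8.0f IO/s" % (name, iops))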

    You *might* be interested in this too:
    https://www.techopsguys.com/2013/07/30/hp-storage-tech-day-3par/

    It mentions some of the new things that are coming (though no details without an NDA), along with the SSD-specific optimizations that were done in the latest OS revs (done specifically for the 7450, but they apply to all platforms).

    thanks again, excellent comment!!

    Comment by Nate — August 30, 2013 @ 10:54 am

  50. Nate,

    This is a very helpful article with better, more concrete information and justification for selecting the 3PAR than HP’s own specifications. I wish I could USE it to support purchase criteria.

    Thank you for taking the time to write it.

    Elisa

    Comment by elisa — September 18, 2013 @ 10:06 am
