TechOpsGuys.com Diggin' technology every day

June 19, 2014

My longest road trip to-date

Filed under: Random Thought — Tags: — Nate @ 11:37 am

On Tuesday I got back from the longest road trip I have ever personally driven to date.

Pictures in case you're interested; I managed to cut them down to roughly 600:

 

Long road trip June 2014 – 2,900 miles total

California

I decided to take the scenic route and went through Yosemite on the way over, specifically to see Glacier Point, a place I wasn't aware of and did not visit on my last trip through Yosemite last year. I ended up leaving too late, but still managed to get to Glacier Point and take some good pictures; by the time I was back on the normal route it was pretty much too dark to take pictures of anything else in Yosemite. I sped over towards Tonopah, NV for my first night's stay before heading to Vegas the next day. That was a fun route: once I got near the Nevada border at that time of night I didn't see anyone on the road for a good 30-40 or more miles (I had to slow down on some areas of the road, I was getting too much air! literally!). I did encounter some wildlife playing in the road, but fortunately managed to avoid casualties.

Las Vegas area

I took a ferry tour on Lake Mead, which was pretty neat (I was going to say cool, but damn was it hot as hell there; my phone claimed 120 degrees from its ambient temperature sensor, the car said it was 100). That ferry is the largest boat on the lake by far, and there weren't many people on that particular tour that day, maybe 40 or so out of the 250-300 it can probably hold. I was surprised, given the gajillions of gallons of water right there, that the surrounding area was still so dry and barren, so the pictures I took weren't as good as I thought they might have been otherwise.

I went to the Hoover Dam for a few minutes at least. I couldn't go inside as I had my laptop bag with me (I wasn't checked into the hotel yet) and they wouldn't let me in with the bag, and I wasn't going to leave it in my car!

HP Discover

(you can see all of my Discover related posts here)

A decent chunk of the trip was spent in Las Vegas at HP Discover, where I am grateful to the wonderful folks over there who really made the time quite pleasant.

Even though I know people at HP, I probably wouldn't attend an event like Discover if it weren't for the more personalized experience that we got. I don't like to wander around show floors and go into fixed sessions; I have never gotten anything out of that sort of thing.

Being able to talk in a somewhat more private setting in a room on the show floor with various groups was helpful. I didn't learn many new things, but I was able to confirm several ideas I already had in my head.

I did meet David Scott, head of HP Storage, for the first time, and ran into him again at the big HP party, where he came over and chatted with Calvin Zito and myself for a good 30 minutes. He's quite a guy; I was very impressed. I thought it was funny how he poked fun at the storage startups during the 3PAR announcements. It was very interesting to hear his thoughts on some topics. Apparently he reads most/all of my blog posts and my comments on The Register too.

We also went up on the High Roller at night, which was nice, though I couldn't take any good pictures; it was too dark and most things just ended up blurry.

All in all it was a good time, met some cool people, had some good discussions.

Arizona

I was in the neighborhood, so I decided to check out Arizona again, maybe for the last time. I was there a couple of times in the past to visit a friend who lived in the Tucson area, but he moved away early this year. I did plan to visit Sedona the last time I was in AZ, but decided to skip it in favor of the NFL playoffs. So I went to AZ again in part to visit Sedona, which I had heard was pretty.

Part of the expected route to Sedona was closed off due to the recent fire(s), so I had to take a longer way around.

I also decided to visit the Grand Canyon (North Rim), and was expecting to visit the South Rim the same day, but food poisoning hit me pretty good right about the time I got to the North Rim, so I was only there about 45 minutes and had to go straight back to the hotel (~200 miles away). I still managed to get some good pictures though. There is a little trail that goes out to the edge there, though for the most part it had no hand rails, which was pretty scary to me anyway, being so close to a very big drop off.

The food poisoning settled down by Monday morning. My company had asked me to extend my stay to support a big launch (which fortunately turned out to be nothing), so I was able to get out and about and visit more places before I headed back early Tuesday morning. I went through Vegas again and made a couple of pit stops before making the long trek back home.

It was a pretty productive trip; I got quite a bit accomplished, I suppose. One thing I wanted to do was get a picture of my car next to a "Welcome to Sedona" sign to send to one of my former bosses. There was a "secret" project at that company to move out of a public cloud, and it was so controversial that my boss gave it the code name Sedona so we wouldn't upset people in the earlier days of the project. So I sent him that pic and he liked it 🙂


Car’s trip meter – need some color on this blog (yes that is almost 60 hours of driving over 10 days)

One concern I had on my trip is that my car has a ticking time bomb waiting for the engine to explode. I've been planning on getting that fixed the next time I am in Seattle; I think I am still safe for the time being given the mileage. The dealership closest to me is really bad (and I complained loudly about them to Nissan) so I will not go there, and the next closest is pretty far away; the operation to repair the problem is a 4-5 hour one and I don't want to stick around. Besides, I really love the service department at the dealership I bought my car at, and I'll be back in that area soon enough anyway (for a visit).

 

June 15, 2014

HP Discover 2014: Datacenter services

Filed under: Datacenter — Tags: , — Nate @ 12:54 pm

(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc…)

I should be out sight seeing but have been stuck in my hotel room here in Sedona, AZ due to the worst food poisoning I’ve ever had from food I ate on Friday night.

X As a service

The trend towards "as a service" offerings, which seems to be more of an accounting exercise to shift dollars to another column in the books than anything else, continues with HP's Facility as a Service (FaaS).

HP will go so far as to buy you a data center (the actual building), fill it with equipment, and rent it back to you for a set fee, with entry-level systems starting at 150 kW (which would be as few as, say, 15 high-density racks). They can even manage it end to end if you want them to. I didn't realize myself the extent that their services go to. It requires a 5 or 10 year commitment, however (which I believe again has to do with accounting). HP says they are getting a lot of positive feedback on this new service.

This is really targeted at those that must operate on premise due to regulations and cannot rely on a 3rd party data center provider (colo).

Flexible capacity

FaaS doesn't cover the actual computer equipment though; that is just the building, power, cooling, etc. The equipment can either come from you or you can get it from HP using their Flexible Capacity program. This program also extends to the HP public cloud as a resource pool for systems.

HP Flexible Capacity program

Entry level for Flexible Capacity, we were told, is roughly a $500k contract ($100k/year).

I thought this was a good quote:

“We have designed more than 65 million square feet of data center space. We are responsible for more than two-thirds of all LEED Gold and Platinum certified data centers, and we’ve put our years of practical experience to work helping many enterprises successfully implement their data center programs. Now we can do the same for you.”

Myself I had no idea that was the case, not even close.

HP Discover 2014: Software defined

Filed under: Datacenter,Events — Tags: , , , — Nate @ 12:26 pm

(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc…)

I have tried to be a vocal critic of the whole software defined movement, in that much of it is hype today, has been for a while, and will likely continue to be for a while yet. My gripe is not so much about the approach; the world of "software defined" sounds pretty neat. My gripe is about the marketing behind it that tries to claim we're already there, and we are not, not even close.

I was able to vent a bit with the HP team(s) on the topic and they acknowledged that we are not there yet either. There is a vision, and there is technology. But there aren't a lot of products yet, at least not a lot of promising products.

Software defined networking is perhaps one of the more (if not the most) mature areas to look at. Last year I ripped pretty hard into the whole idea with what I thought were good points, basically that the technology solves a problem I do not have and have never had. I believe most organizations do not have a need for it either (outside of very large enterprises and service providers). See the link for a very in-depth 4,000+ word argument on SDN.

More recently HP tried to hop on the bandwagon of Software Defined Storage, which in their view is basically the StoreVirtual VSA. A product that to me doesn't fit the scope of software defined; it is just a brand propped up on a product that was already pretty old and already running in a VM.

Speaking of which, HP considers this VSA along with their ConvergedSystem 300 to be "hyper converged", and at least the people we spoke to do not see a reason to acquire the likes of Simplivity or Nutanix (why is the spelling of those names so hard to remember..). HP says most of the deals Nutanix wins are small VDI installations and aren't seen as a threat; HP would rather go after the VCEs of the world. I believe Simplivity is significantly smaller.

I’ve never been a big fan of StoreVirtual myself, it seems like a decent product, but not something I get too excited about. The solutions that these new hyper converged startups offer sound compelling on paper at least for lower end of the market.

The future is software defined

The future is not here yet.

It's going to be another 3-5 years (perhaps more). In the meantime customers will get drip fed the technology in products from various vendors that can do software defined in a fairly limited way (relative to the grand vision anyway).

When hiring for a network engineer, many customers would rather hire someone who has a few years of Python experience than more years of networking experience, because that is where they see the future in 3-5 years' time.

My push back to HP on that particular quote (not quoted precisely) is that that level of sophistication is very hard (and expensive) to hire for. A good comparison is hiring for something like Hadoop. It is very difficult to compete with the compensation packages of the largest companies, which offer $30-50k+ more than smaller (even billion-dollar) companies.

So my point is the industry needs to move beyond the technology and into products. Having a requirement of knowing how to code is a sign of an immature product. Coding is great for extending functionality, but need not be a requirement for the basics.

HP seemed to agree with this, and believes we are on that track but it will take a few more years at least for the products to (fully) materialize.

HP OneView

(here is the quick video they showed at Discover)

I'll start off by saying I've never really seriously used any of HP's management platforms (or anyone else's for that matter). All I know is that they (in general, not HP specifically) seem to be continuing to proliferate and fragment.

HP OneView is a product that builds on this promise of software defined. In the past five years of HP pitching converged systems, seeing the demo for OneView was the first time I've ever shown even a little bit of interest in converged.

HP OneView was released last October, I believe, and HP claims something along the lines of 15,000 downloads or installations. Version 1.10 was announced at Discover, which offers some new integration points including:

  • Automated storage provisioning and attachment to server profiles for 3PAR StoreServ Storage in traditional Fibre Channel SAN fabrics, and Direct Connect (FlatSAN) architectures.
  • Automated carving of 3PAR StoreServ volumes and zoning the SAN fabric on the fly, and attaching of volumes to server profiles.
  • Improved support for FlexFabric modules
  • Hyper-V appliance support
  • Integration with MS System Center
  • Integration with VMware vCenter Operations Manager
  • Integration with Red Hat RHEV
  • Similar APIs to HP CloudSystem

OneView is meant to be lightweight and act as a sort of proxy into other tools, such as Brocade's SAN manager in the case of Fibre Channel (I prefer QLogic management myself, but I know QLogic is getting out of the switch business). For several HP products such as 3PAR and BladeSystem, though, OneView seems to talk to them directly.

OneView aims to provide a view that starts at the data center level and can drill all the way down to individual servers, chassis, and network ports.

However the product is obviously still in its early stages. It currently only supports HP's Gen8 DL systems (G7 and Gen8 BL); HP is thinking about adding support for older generations, but their tone made me think they will drag their feet long enough that it's no longer demanded by customers. The bulk of what I have in my environment today is G7; I only recently deployed a few Gen8 systems two months ago. Also, all of my SAN switches are QLogic (and I don't use HP networking now), so OneView functionality would be severely crippled if I were to try to use it today.

The product on the surface does show a lot of promise though, there is a 3 minute video introduction here.

HP pointed out that you would not manage your cloud from this, but rather the other way around: cloud management platforms would leverage OneView APIs to bring that functionality to the management platform higher up in the stack.
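
To make that a bit more concrete, here is a minimal sketch of what "leveraging the OneView APIs" could look like from a higher-level tool. It assumes Python with the requests library; the endpoint paths and the X-API-Version value are my reading of the 1.x-era REST API, and the appliance address and credentials are obviously made up, so treat this as illustrative rather than as HP's documented example.

    # Illustrative only: authenticate to a (hypothetical) OneView appliance and
    # list server profiles over the REST API. Endpoints/headers are assumptions.
    import requests

    APPLIANCE = "https://oneview.example.com"      # hypothetical appliance address
    HEADERS = {"X-API-Version": "120", "Content-Type": "application/json"}

    # Log in and obtain a session token
    login = requests.post(APPLIANCE + "/rest/login-sessions",
                          json={"userName": "administrator", "password": "secret"},
                          headers=HEADERS, verify=False)   # appliances often use self-signed certs
    token = login.json()["sessionID"]

    # List server profiles, the objects OneView provisions servers against
    auth_headers = dict(HEADERS, Auth=token)
    profiles = requests.get(APPLIANCE + "/rest/server-profiles",
                            headers=auth_headers, verify=False).json()
    for profile in profiles.get("members", []):
        print(profile["name"], profile.get("serverHardwareUri"))

A cloud management platform higher up the stack would essentially do the same thing programmatically, calling these endpoints to build server profiles, attach storage, and so on, rather than a human clicking through the UI.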

HP has renamed their Insight Control products for vCenter and MS System Center to OneView.

The goal of OneView is automation that is reliable and repeatable. As with any such tool, it seems like you'll have to work within its constraints and go around it when it doesn't do the job.

"If you fancy being able to deploy an ESX cluster in 30 minutes or less on HP ProLiant Gen8 systems, HP networking and 3PAR storage, then this may be the tool for you." – me

The user interface seems quite modern and slick.

They expose a lot of functionality in an easy to use way, but one thing that struck me watching a couple of their videos is that it can still be made a lot simpler; there is a lot of jumping around to do different tasks. I suppose one way to address this might be broader wizards that cover multiple tasks in the order they should be done in, or something along those lines.

HP Discover 2014: Helion (Openstack)

Filed under: Datacenter,Events — Tags: , , , — Nate @ 10:36 am

(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc…)

HP Helion

This is a new brand for HP’s cloud platform based on OpenStack. There is a commercial version and a community edition. The community edition is pure OpenStack without some of the fancier HP management interfaces on top of it.

"The easiest thing about OpenStack is setting it up – organizations spend the majority of the time simply keeping it running after it is set up."

HP admits that OpenStack has a long way to go before it is considered a mature enterprise application stack. But they do have experience running a large OpenStack public cloud and have hundreds of developers working on the product. In fact HP says that most OpenStack community projects these days are basically run by HP; while other large contributors (even Rackspace) have pulled back on resource allocation to the project, HP has gone in full steam ahead.

HP has many large customers who specifically asked HP to get involved in the project and to provide a solution for them that can be supported end to end. I must admit the prospect does sound attractive, being that you can get HP storage, servers, and networking all battle tested and ready to run this new cloud platform; the OpenStack platform itself is by far the biggest weak point today.

It is not there yet though; HP does offer professional services for the customer's entire OpenStack deployment life cycle.

One key area that has been weak in OpenStack, and which recently made the news, is the networking component, Neutron.

"[..] once you get beyond about 50 nodes, Neutron falls apart"

So to stabilize this component, HP integrated support for their SDN controller into the lower levels of Neutron. This allowed it to scale much better while maintaining complete compatibility with existing APIs.

That is something HP is doing in several cases. They emphasize very strongly that they are NOT building a proprietary solution, and they are NOT changing any of the APIs in a way that breaks compatibility (they are helping change them upstream). They are, however, adding/moving some things around beneath the API level to improve stability.
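
For those curious what "beneath the API level" typically means for Neutron, the usual hook point is an ML2 mechanism driver that mirrors network and port events to an external controller. The skeleton below is a generic sketch of that pattern, not HP's code (their integration is certainly more involved); the module path and method names are from the Icehouse-era ML2 driver API, and the controller URL is made up.

    # Generic sketch of an ML2 mechanism driver that forwards Neutron events to
    # an external SDN controller. Not HP's implementation; names are illustrative.
    import requests
    from neutron.plugins.ml2 import driver_api as api

    CONTROLLER = "https://sdn-controller.example.com/api"   # hypothetical endpoint

    class ExampleSdnMechanismDriver(api.MechanismDriver):
        def initialize(self):
            # Called once when ML2 loads the driver (it is enabled via the
            # mechanism_drivers option in ml2_conf.ini).
            pass

        def create_network_postcommit(self, context):
            # Push the new network to the controller after Neutron has
            # committed it to its own database; API callers never notice.
            net = context.current
            requests.post(CONTROLLER + "/networks",
                          json={"id": net["id"], "name": net["name"]})

        def create_port_postcommit(self, context):
            port = context.current
            requests.post(CONTROLLER + "/ports",
                          json={"id": port["id"], "network_id": port["network_id"]})

The point is that all of this happens behind the standard Neutron API, which is why a vendor can claim better scaling without breaking compatibility for anything written against that API.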

The initial cost for the commercial version is $1,400/server/year, which is quite reasonable; I assume that includes basic support. The commercial version is expected to become generally available in the second half of 2014.

Major updates will be released every six months, and minor updates every three months.

Very limited support cycle

One thing that took almost everyone in the room by surprise is the support cycle for this product. Normally enterprise products have support for 3-5 years; Helion has support for a maximum of 18 months. HP says 12 of those months are general support and the last six are specifically geared towards migration to the next version, which they say is not a trivial task.

I checked Red Hat's policy, as they are another big distributor of OpenStack, and their policy is similar: they had one year of support on version 3 of their product and have one and a half years on version 4 (the current version). Despite the version numbers, version 3 was apparently the first release to the public.

So that should just reinforce the fact that OpenStack is not a mature platform at this point, and it will take some time before it is, probably another 2-3 years at least. They only recently got the feature that allows for upgrading the system.

HP does offer a fully integrated ConvergedSystem with Helion, though despite my best efforts I am unable to find a link that specifically mentions Helion or OpenStack.

HP is supporting ESXi and KVM as the initial hypervisors in Helion. OpenStack itself supports a much wider variety, but HP is electing to start with those two. Support for Hyper-V will follow shortly.

HP also offers full indemnification from legal issues.

This site has a nice diagram of what HP is offering, not sure if it is an HP image or not so sending you there to see it.

Conclusion

My own suggestion is to steer clear of OpenStack for a while yet; give it time to stabilize, and don't deploy it just because you can. Don't deploy it because it's today's hype.

If you really, truly need this functionality internally, then it seems like HP has by far the strongest offering from a product and support standpoint (they are willing and able to do everything from design to deployment to operationally running it). Keep in mind that depending on the scale of deployment you may be constantly planning for the next upgrade (or having HP plan it for you).

I would argue that the vast majority of organizations do not need OpenStack (in its current state) and would do themselves a favor by sticking to whatever they are already using until it's more stable. Your organization may have pains running whatever you're running now, but you're likely to just trade those pains for other pains going the OpenStack route right now.

When will it be stable? I would say a good indicator will be the support cycle: when HP (or Red Hat) starts offering a full 3 year support cycle on the platform (with back ported fixes etc.), it has probably hit a good milestone.

I believe OpenStack will do well in the future, it’s just not there yet today.

June 10, 2014

HP Discover Las Vegas 2014: Apollo 8000

Filed under: Datacenter — Tags: , — Nate @ 1:40 am

(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc…)

I witnessed what I'd like to say is one of the most insane unveilings of a new server in history. It sort of mimicked the launch of an Apollo spacecraft: lots of audio and video from NASA, and then when the system appeared, lots of compressed air/smoke (very very loud) and dramatic music.

Here's the full video in 1080p; beware, it is 230MB. I have 100Mbit of unlimited bandwidth connected to a fast backbone, so we'll see how it goes.

HP Apollo 8000 launch

HP Apollo 8000

Apollo is geared squarely at compute bound HPC, and is the result of a close partnership between HP, Intel and the National Renewable Energy Laboratory (NREL).

The challenge HP presented itself with is what it would take to drive a million teraflops of compute. They said with today's technology it would require one gigawatt of power and 30 football fields of space.
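
Some quick back-of-envelope arithmetic of my own to put that claim in perspective:

\[ 10^{6}\ \text{TFLOPS} = 1\ \text{EFLOPS}, \qquad \frac{1\ \text{EFLOPS}}{1\ \text{GW}} = 1\ \text{GFLOPS per watt} \]

In other words, a million teraflops is an exaflop, and HP's figure implies roughly one gigaflop per watt with today's technology; Apollo's claimed gains chip away at that power and space budget rather than eliminating it.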

Apollo is supposed to help fix that, though the real savings are still limited by today's technology; it's not as if they were able to squeeze a 10 fold improvement in performance out of the same power footprint. Intel said they were able to get, I want to say, 35% more performance out of the same power footprint using Apollo vs (I believe) the blades they were using before. I am assuming in both cases the CPUs were about the same and the savings came mainly from the more efficient design of the power and cooling.

They built the system as a rack-level design. You probably haven't been reading this blog very long, but four years ago I lightly criticized HP's SL series as not being visionary enough, in that they were not building for the rack. The comparison I gave was with another solution I was toying with at the time for a project, SGI's CloudRack.

Fast forward to today and HP has answered that, and then some, with a very sophisticated design that is integrated as an entire rack in the water cooled Apollo 8000. They have a mini version called the Apollo 6000 (if this product was available before today I had never heard of it myself, though I don't follow HPC closely), of which Intel apparently already has something like 20,000 servers deployed.

Apollo 8000 water cooling system


Anyway, one of the keys to this design is the water cooling. It's not just any water cooling though; the water in this case never gets into the server. They use heat pipes on the CPUs and GPUs and transfer the heat to what appears to be a heat sink of some kind on the outside of the chassis, which then "melds" with the rack's water cooling system to carry the heat away from the servers. Power is also managed at the rack level. Don't get me wrong, this is far more advanced than the SGI system of four years ago. HP is apparently not giving this platform a very premium price either.

Apollo 8000 server level water cooling


Their claims include:

  • 4 X the teraflops per square foot (vs air cooled servers)
  • 4 X density per rack per dollar (not sure what the per dollar means but..)
  • Deployment possible within days (instead of “years”)
  • More than 250 Teraflops per rack (the NREL Apollo 8000 system is 1.2 Petaflops in 18 racks …)
  • Apollo 6000 offering 160 servers per rack, and Apollo 8000 having 144
  • Some fancy new integrated management platform
  • Up to 80kW powering the rack (less if you want redundancy – 10kW per module)
  • One cable for ILO management of all systems in the rack
  • Can run on water temps as high as 86 degrees F (30 C)

The cooling system for the 8000 goes in another rack; it consumes half a rack and supports a maximum of four server racks. If you need redundancy then I believe a second cooling unit is required, so two racks. The cooling unit weighs over 2,000 pounds by itself, so it appears unlikely that you'd be able to put two of them in a single cabinet.

The system takes in 480V AC and converts it to DC with up to eight 10kW redundant rectifiers in the middle of the rack.

NREL integrated the Apollo 8000 system with their building's heating system, so that the Apollo cluster contributes to heating the entire building and the heat is not wasted.

It looks as if SGI discontinued the product I was looking at four years ago (at the time for a Hadoop cluster). It supported up to 76 dual socket servers in a rack at the time, supporting a max of something like 30kW (I don't believe any configuration at the time could draw more than 20kW), and had rack based power distribution as well as rack based (air) cooling. It seems as if it was replaced with a newer product called InfiniteData Cluster, which can go up to 80 dual socket servers in an air cooled rack (though GPUs are not supported, unlike Apollo).

This new system doesn't mean a whole lot to me personally; I don't deal with HPC so I may never use it, but the tech behind it seemed pretty neat, and obviously I was interested in HP finally answering my challenge to deploy a system based on an entire rack with rack based power and cooling.

The other thing that stuck out to me was the new management platform. HP said it is another unique management platform specific to Apollo, which was sort of confusing given that I sat through what I thought was a very compelling preview of HP OneView (the latest version announced today), HP's new converged management interface. It seems strange to me that they would not integrate Apollo into that from the start, but I guess that's what you get from a big company with teams working in parallel.

HP tries to justify this approach by saying there are several unique things in Apollo components so they needed a custom management package. I think they just didn’t have time to integrate with OneView, since there is no justification I can think of to not expose those management points via APIs that OneView can call/manage/abstract on behalf of the customer.

It sure looks pretty though (more so in person; I'm sure I'll get a better pic of it on the show floor in normal lighting conditions, along with pics of the cooling system).

UPDATE – some pics of the compute stuff

HP Apollo Series Server node Front part

HP Apollo Series Server node inside back part

HP Apollo  8000 Series server front end

HP Apollo 8000 Series server inside front part

HP Apollo 8000 Series front end


HP Apollo 8000 series heat pipes, one for each CPU or GPU in the server and passes heat to the side of the server


June 9, 2014

3PAR June 2014: The 7450 AFA keeps getting better

Filed under: Storage — Tags: , , — Nate @ 3:16 pm

(I don’t know if I need one of those disclaimer things at the top here that says HP paid for my hotel and stuff in Vegas for Discover because I learned about this before I got here and was going to write about it anyway, but in any case know that..)

All about Flash

The 3PAR announcements at HP Discover this week are all about HP 3PAR's all flash array, the 7450, which was announced at last year's Discover event in Las Vegas. HP has tried hard to convince the world that the 3PAR architecture is competitive even in the new world of all flash. Several of the other big players in storage (EMC, NetApp, and IBM) have acquired companies specializing in all flash; in NetApp's case they acquired one and have simultaneously been building a new system (apparently called FlashRay, which folks think will be released later in the year).

Dell and HDS, like HP, have decided not to do that, instead relying on in-house technology for all flash use cases. Of course there have been a ton of all flash startups, all trying to be market disruptors.

So first a brief recap of what HP has done with 3PAR to-date to optimize for all flash workloads:

  • Faster CPUs, doubling of the data cache (7400 vs 7450)
  • Sophisticated monitoring and alerting with SSD wear leveling (alert at 90% of max endurance, force fail the SSD at 95% max endurance)
  • Adaptive Read cache – only read what you need, does not attempt to read ahead because the penalty for going back to the SSD is so small, and this optimizes bandwidth utilization
  • Adaptive write cache – only write what you require; if 4kB of a 16kB page is written then the array only writes 4kB, which reduces wear on the SSD, which typically has a shorter life span than spinning rust.
  • Autonomic cache offload – more sophisticated cache flushing algorithms (this particular one has benefits for disk-based 3PARs as well)
  • Multi tenant I/O processing – multi threaded cache flushing, and supports both large (typically sequential) and small (typically random) I/O sizes simultaneously in an efficient manner – separates the large I/Os into more efficient small I/Os for the SSDs to handle.
  • Adaptive sparing – basically allows them to unlock hidden storage capacity (upwards of 20%) on each SSD to use for data storage without compromising anything.
  • Optimize the 7xxx platform by leveraging PCI Express Message Signaled Interrupts, which allowed the system to reach a staggering 900,000 IOPS at 0.7 millisecond response times (caveat: that is a 100% read workload)

I learned at HP Storage tech day last year that among the features 3PAR was working on were:

  • Inline deduplication for file and block
  • Compression
  • File+Object services running directly on 3PAR controllers

There were no specifics given at the time.

Well part of that wait is over.

Hardware-accelerated deduplication

In what I believe is an industry exclusive, 3PAR has somehow managed to find some spare silicon in their now three-year-old Gen4 ASIC to deliver fully CPU-offloaded inline deduplication for transactional workloads on their 7450 all flash array.

They say that the software will typically return 4:1 to 10:1 data reduction levels. This is not meant to compete against HP StoreOnce, which offers much higher levels of data reduction; this is for transaction processing (which StoreOnce cannot do) and primarily to reduce the cost of operating an all flash system.

It has been interesting to see 3PAR evolve, as a customer of theirs for almost eight years now. I remember when NetApp came out and touted deduplication for transactional workloads and 3PAR didn't believe in the concept, due to the performance hit you would (and they did) take.

Now they have line rate (I believe) hardware deduplication, so that argument no longer applies. The caveat, at least for this announcement, is that this feature is limited to the 7450. There is nothing technical that prevents it from getting to their other Gen4 systems, whether that is the 7200, 7400, 10400, or 10800, but support for those is not mentioned yet. I imagine 3PAR is beating their drum to the drones out there who might still be discounting 3PAR because they have a unified architecture between AFA, hybrid flash/disk, and disk-only systems (like mine).

One of 3PAR's main claims to fame is that you can crank up a lot of their features and they do not impact system performance, because most of the work is performed by the ASIC. It is nice to see that they have been able to continue this trend, and while this obviously wasn't introduced on day one with the Gen4 ASIC, it does not require customers to wait, or to upgrade their existing systems to the next generation ASIC (whenever the Gen5 comes out, I'd wager December 2015), to get this functionality.

The deduplication operates using fixed page sizes that are 16kB each, which is a standard 3PAR page size for many operations like provisioning.

For 3PAR customers, note that this technology is based on Common Provisioning Groups (CPGs), so data within a CPG can be deduplicated. If you opt for a single CPG on your system and put all of your volumes on it, that effectively makes the deduplication global.
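
To illustrate the general idea of fixed-page deduplication (this is a toy sketch of the concept, not 3PAR's ASIC-assisted implementation), carving data into 16 kB pages and hashing them is enough to see how identical pages collapse to a single stored copy:

    # Toy illustration of fixed-page deduplication at 16 kB granularity.
    import hashlib

    PAGE_SIZE = 16 * 1024  # 16 kB pages, the granularity mentioned above

    def dedupe(data, store):
        """store maps page-hash -> page; returns the list of hashes (the 'volume map')."""
        volume_map = []
        for offset in range(0, len(data), PAGE_SIZE):
            page = data[offset:offset + PAGE_SIZE]
            key = hashlib.sha256(page).hexdigest()
            if key not in store:          # only unique pages consume capacity
                store[key] = page
            volume_map.append(key)
        return volume_map

    store = {}
    written = dedupe(b"A" * PAGE_SIZE * 8, store)   # 8 identical pages
    print(len(written), "pages in the volume map,", len(store), "unique page(s) stored")

In a real array the hash lookups, page comparisons, and metadata updates are the expensive parts, which is exactly the work 3PAR is pushing into the Gen4 ASIC and the Express Indexing scheme described below.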

Express Indexing

This is a patented approach which allows 3PAR to use significantly less memory than would be otherwise required to store lookup tables.

Thin Clones

Thin clones are basically the ability to deduplicate VM clones (I imagine this would need hypervisor integration like VAAI) for faster deployment. So you could probably deploy clones at 5-20x the speed at which you could before.

NetApp, too, has been touting a similar approach for a few years now, on their NFS platform anyway.

Two Terabyte SSDs

Well, almost 2TB, coming in at 1.9TB. These are actually 1.6TB cMLC SSDs, but the aforementioned adaptive sparing allows 3PAR to bump the usable capacity of the device way up without compromising on any aspect of data protection or availability.

These appear to be SanDisk Lightning read-intensive SSDs. I found that info here (PDF).


Info on ordering Sandisk SSDs for HP 3PAR

I also quote the aforementioned PDF:

The 1920GB is available only in the StoreServ 7450 until the end of September 2014.

It will then be available in other models as well October 2014.


New 1.6TB SSD I believe comes from Sandisk, not the fastest around but certainly a bunch faster than a 15k RPM disk!

These SSDs come with a five year unconditional warranty, which is better than the included warranty on 3PAR disks (three years). This 5-year warranty is extended to the 480GB and 920GB MLC SSDs as well. Assuming SanDisk is indeed the supplier as it appears, the 5-year warranty exceeds the manufacturer's own 3-year warranty.

These are technically consumer grade, however HP touts their sophisticated flash features that make the media effectively more reliable than it otherwise might be in another architecture, and that claim is backed by the new unconditional warranty.

These are much more geared towards reads vs writes, and are significantly lower cost on a per GB basis than all previous SSD offerings from HP 3PAR.

The cost impact of these new SSDs is pretty dramatic, with the per GB list cost dropping from about $26.50 this time last year to about $7.50 this year.

These new SSDs allow for up to 460TB of raw flash on the 7450, which HP claims is seven times more than Pure Storage (a massively funded AFA startup), and 12 times more than a four brick EMC XtremIO system.

With deduplication the 7450 can get upwards of 1.3PB of usable flash capacity in a single system, along with 900,000 read IOPS at sub-millisecond response times.

Dell Compellent about a year or so ago updated their architecture to leverage what they called read optimized low cost SSDs, and updated their auto tiering software to be aware of the different classes of SSDs. There are no tiering enhancements announced today, in fact I suspect you can’t even license the tiering software on a 3PAR 7450 since there is only one “tier” there.

So what do you get when you combine this hardware accelerated deduplication and high capacity low cost solid state media?

Solid state at less than $2/GB usable

HP says this puts solid state costs roughly in line with those of 15k RPM spinning disk. This is a pretty impressive feat. Not a unique one (there are other players out there that have reached the same milestone), but that is obviously not the only arrow 3PAR has in its arsenal.
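
My own back-of-envelope math, using the list price and deduplication figures above:

\[ \frac{\$7.50\ \text{per raw GB}}{4\ \text{(the low end of the 4:1 to 10:1 range)}} \approx \$1.88\ \text{per usable GB} \]

Even at the conservative end of HP's claimed reduction range, the new SSD pricing lands under the $2/GB usable mark; higher ratios push it lower still.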

That arsenal is what HP believes is the reason you should go 3PAR for your all flash workloads. Forget about the startups, forget about EMC's XtremIO, forget about NetApp FlashRay, forget about IBM's TMS flash systems, etc. etc.

Six nines of availability, guaranteed

HP is now willing to put their money where their mouth is and sign a contract that guarantees six nines of availability on any 4-node 3PAR system (I originally thought it was 7450-specific; it is not). That is a very bold statement to make, in my opinion. This obviously comes as a result of an architecture that has been refined over roughly the past fifteen years and has some really sophisticated availability features, including:

  • Persistent ports – very rapid fail over of host connectivity for all protocols in the event of planned or unplanned controller disruption. They have laser loss detection for fibre channel as well which will fail over the port if the cable is unplugged. This means that hosts do not require MPIO software to deal with storage controller disruptions.
  • Persistent cache – rapid re-mirroring of cache data to another node in the event of planned or unplanned controller disruption. This prevents the system from going into “write through” mode which can otherwise degrade performance dramatically. The bulk of the 128 GB of cache(4-node) on a 7450 is dedicated to writes (specifically optimizing I/O for the back end for the most efficient usage of system resources).
  • The aforementioned media wear monitoring and proactive alerting (for flash anyway)

They have other availability features that span systems as well (the guarantee does not require any of these).

I would bet that you’ll have to follow very strict guidelines to get HP to sign on the dotted line, no deviation from supported configurations. 3PAR has always been a stickler for what they have been willing to support, for good reason.

Performance guaranteed

HP won’t sign on a dotted line for this, but with the previously released Priority Optimization, customers can guarantee their applications:

  • Performance minimum threshold
  • Performance maximum threshold (rate limiting)
  • Latency target

In a very flexible manner. These capabilities combined are, I believe, still unique in the industry (some folks can do rate limiting alone).

Online import from EMC VNX

This was announced about a month or so ago, but basically HP makes it easy to import data from an EMC VNX without any external appliances, professional services, or performance impact.

This product (which is basically a plugin written to interface with the VNX/CX's SMI-S management interface) went generally available, I believe, this past Friday. I watched a full demo of it at Discover and it was pretty neat. It does require direct Fibre Channel connections between the EMC and 3PAR systems, and it does require (in the case of Windows anyway, which was the demo) two outages on the server side:

  • Outage 1: remove EMC PowerPath – due to some damage PowerPath leaves behind, you must also uninstall and re-install the Microsoft MPIO software using the standard control panel method. These require a restart.
  • Outage 2: Configure Microsoft MPIO to recognize 3PAR (requires restart)

Once the 2nd restart is complete the client system can start using the 3PAR volumes as the data migrates in the background.

So online import may not be the right term for it, since the system does have to go down, at least in the case of a Windows configuration.

The import process currently supports Windows and Linux. The product came about as a result of, I believe, the end-of-life status of the MPX200 appliance which HP had been using to migrate data. They needed something to replace that functionality, so they leveraged the Peer Motion technology already on 3PAR, which was already used for importing data from HP EVA storage, and extended it to EMC. They are evaluating the possibility of extending this to more platforms; I guess VNX/CX was an easy first target given there are a lot of old ones out there and there isn't an easier migration path than the EMC to 3PAR import tool (which is apparently significantly easier and less complex than EMC's own options). One of the benefits HP touts of their approach is that it has no impact on host performance, as the data goes directly between the arrays.

The downside to this direct approach is that the 3PAR 7000-series arrays are very limited in port counts, especially if you happen to have iSCSI HBAs in them (as were the F and E classes before them). The 10000-series has a ton of ports though. What I learned last year at HP Storage tech day was that HP was looking at possibly shrinking the bigger 10000-series controllers for the next round of mid range systems (rather than making the 7000-series controllers bigger) in an effort to boost expansion capacity. I'd like to see at least 8 FC host ports and 2 iSCSI host ports per controller on the mid range. Currently you get only 2 FC host ports if you have an iSCSI HBA installed in a 7000-series controller.

The tool is command line based, there are a few basic commands it does and it interfaces directly with the EMC and 3PAR systems.

The tool is free of charge as far as I know, and while 3PAR likes to tout that no professional services are required, HP says some customers may need assistance planning migrations (especially at larger scales); if that is the case then HP has services ready to hold your hand through every step of the way.

What I’d like to see from 3PAR still

Read and write SSD caches

I've talked about it for years now, but I still want to see a read (and write – my workloads are 90%+ write) caching system that leverages high endurance SSDs on 3PAR arrays. HP announced SmartCache for ProLiant Gen8 systems, I believe, about 18 months ago with plans to extend support to 3PAR, but that has not yet happened. 3PAR is well aware of my request, so nothing new here. David did mention that they still want to do this, but there are no official timelines yet. Also it sounded like they will not go forward with the server-side SmartCache integration with 3PAR (I'd rather have the cache in the array anyway, and they seem to agree).


David Scott – SVP and GM of all of HP Storage talking with us bloggers in the lounge

3PAR 7450 SPC-1

I'd like to see SPC-1 numbers for the 7450, especially with this new flash media; it ought to provide some pretty compelling cost and performance numbers. You can see some recent performance testing (that wasn't 100% read) done on a four node 7450 on behalf of HP.

Demartek also found that the StoreServ 7450 was not very heavily taxed with a single OLTP database accessing the array. As a result, we proceeded to run a combination of database workloads including two online transaction processing (OLTP) workloads and a data warehouse workload to see how well this storage system would handle a fairly heavy, mixed workload.

HP says the lack of SPC-1 results comes down to priorities; it's a decent amount of work to do the tests and they have had people working on other things. They still intend to do them, but are not sure when it will happen.

Compression

I would like to see compression support; I am not sure whether that will have to wait for the Gen5 ASIC or if there are more rabbits hiding in the Gen4 hat.

Feature parity

I certainly want to see deduplication come to the other Gen4 platforms. HP talks a lot about the competition's flash systems being silos. I'll let you in on a little secret: the 3PAR 7450 is a flash silo as well. Not a technical silo, but one imposed on the product by marketing. While the reasons behind it are understandable, it is unfortunate that HP feels compelled to limit the product to appease certain market observers.

Faster CPUs

I was expecting a CPU refresh on the 7450, which was launched with an older generation of processor because HP didn't want to wait for Intel's newest chip to launch their new storage platform. I was told last year that the 7450 is capable of operating with the newer chip, so it should just be a matter of plugging it in and doing some testing. That is supposed to be one of the benefits of using x86 processors: you don't need to wait years to upgrade. HP says the Gen4 ASIC is not out of gas; the performance numbers to date are limited by the CPU cores in the system, so faster CPUs would certainly benefit the system further without much cost.

No compromises

At the end of the day the 3PAR flash story has evolved into one of no compromises. You get the low cost flash, you get the inline hardware accelerated deduplication, you get the high performance with multitenancy and low latency, and you get all of that without compromising on any other tier 1 capabilities (too many to go into here; you can see past posts for more info). You're getting a proven architecture that has matured over the past decade, a common operating system, and the only storage platform that leverages custom ASICs to give uncompromising performance even with the bells & whistles turned on.

The only compromise here is you had to read all of this and I didn’t give you many pretty pictures to look at.
