TechOpsGuys.com Diggin' technology every day

September 1, 2015

HP 3PAR Case study with my organization

Filed under: Storage — Nate @ 7:39 am
Stella & Dot HP 3PAR Case Study (click to download)

Stella & Dot relies on HP 3PAR StoreServ Storage

“Highly reliable HP 4-node storage architecture supports over 30,000 e-commerce Independent Business Owners worldwide”

This is also available on HP’s case study website http://www.hp.com/go/success

I have had this blog since July 2009, and I don't believe I have ever once mentioned any of my employers' names. This will be an exception to that record.

HP came to us last year when we were in the market for their all-flash 3PAR 7450 system. There were some people in management over there who really liked our company's brand, and I'm told practically everyone in 3PAR is aware of me, so they wanted to do a case study with the company I work for on our usage of 3PAR. I have participated in one, or maybe two, 3PAR case studies in the past, prior to the HP acquisition. The last one was in 2010 on a 3PAR T400. That particular company I was with had a policy where nobody with less than a VP title could be quoted, so my boss's boss got all the credit even though of course I did all of the magic. Coincidentally I left that company just a few months later (for a completely different reason).

This company is different. I've had extremely supportive management for my entire four years at this company, and the newest management that joined in late 2013/early 2014 has been even more supportive. They really wanted me to get as much credit as I could for all the hard work I do, so it's my name all over the case study, not theirs. It's people like this that, more than anything, keep me happy in my current role; I don't see myself going anywhere anytime soon (added bonus: the company is stable and I believe it will have no trouble surviving the next tech crash, since we aren't tech oriented).

Anyway, the experience with HP in making the case study was quite good. They are very patient; we said we needed time to work with the new system before we did the case study, and they told us to take all the time we wanted, no rush. About 8 months into using the new 7450 they reached out again and we agreed to start the process.

I spent a couple hours on the phone with them, and exchanged a few emails. They converted a lot of my technical jargon into marketing jargon (I don't actually talk like that!), though the words seemed reasonable to me. The only thing that is kind of a mistake in the article is that we don't leverage any of the 3PAR "Application Suites". I mentioned this to them, saying they could remove those references if they wished; I didn't care either way. At the end they also make reference to a support event I had with 3PAR five years ago, which was at the previous company, and they credited HP for it when technically it was well before the acquisition (I told them that as well, though it seems reasonable to credit HP to some extent since I'd wager the same staff that performed those actions worked at HP for a while afterwards, or maybe they are still there).

I would wager that my feedback on the benefits I see with 3PAR is probably not typical among HP customers. The HP Solutions Architect assigned to our account has told me on several occasions that he believes I operate my 3PAR systems better than almost any other customer he's seen; that made me feel good even though I already felt I operate my systems pretty well!

On that note, our SaaS monitoring service Logic Monitor is working with me to try to formalize the custom 3PAR monitoring I wrote (which gathers about 12,000 data points a minute from our three arrays) into something more customers can leverage. If they can get that done, I hope I can get HP to endorse their service for monitoring 3PAR, because it works really well in general, and better than any other tool I've seen or used for my 3PAR monitoring needs at least.
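
For anyone curious what that kind of collection looks like under the hood, here is a bare-bones sketch of the general idea: run a handful of InForm CLI stat commands over SSH once a minute and turn the output into datapoints. To be clear, this is not my actual script and not a LogicMonitor datasource; the host name, account, and even the exact CLI commands/flags are assumptions you'd want to verify against your own array's CLI reference.

    #!/usr/bin/env python3
    # Minimal sketch of per-minute 3PAR stat collection over SSH. NOT the actual
    # script or a LogicMonitor datasource; host, account, and the CLI commands/flags
    # below are assumptions -- verify them against your InForm CLI documentation.
    import subprocess
    import time

    ARRAY = "3par01.example.com"     # hypothetical array management address
    USER = "monitor"                 # hypothetical read-only CLI account
    COMMANDS = ["statcpu -iter 1", "statpd -iter 1", "statvv -iter 1"]

    def run_cli(command):
        """Run one CLI command on the array over SSH and return its stdout."""
        out = subprocess.run(["ssh", f"{USER}@{ARRAY}", command],
                             capture_output=True, text=True, timeout=60, check=True)
        return out.stdout

    def numeric_fields(output):
        """Yield every numeric token in the output. A real collector would map
        specific columns (IOPS, KB/s, service time...) to named datapoints."""
        for token in output.split():
            try:
                yield float(token)
            except ValueError:
                continue

    if __name__ == "__main__":
        now = int(time.time())
        for cmd in COMMANDS:
            values = list(numeric_fields(run_cli(cmd)))
            print(f"{now} {cmd.split()[0]} datapoints={len(values)}")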

3PAR 8000 & 20450

I'm pretty excited about the new 8000-series and the new 20450 (4-node 20k series) that came out a few days ago. I would say really excited, but given 3PAR's common architecture I already expected the newer form factors. I am quite happy that HP released an 8440 with the exact same specs as the 8450 (meaning 3X more data cache than the 8400 and more, faster CPU cores), and that the 4-node 20450 has the same cache and CPU allocations per-node as the all-flash 8-node 20850. This means you can get these systems and not be "forced" into an all-flash configuration, since the 8450 and the 20850 systems are "marketing limited" to be all flash (to make people like Gartner happy).

June 1, 2015

3PAR Gen5: True flash scale storage

Filed under: Storage — Nate @ 6:14 am

(I'm in Vegas, waiting for HP Discover to start (got here Friday afternoon). This time around I had my own company pay my way, so HP isn't providing me with the trip. Since I haven't been blogging much the past year I didn't feel good about asking HP to cover me as a blogger.)

UPDATED – 6/4/2015

When flash first started becoming a force to be reckoned with in enterprise storage a few years ago, it was clear to me that controller performance wasn't up to the task of exploiting the performance potential of the medium. The ratio of controllers to SSDs was just way out of whack.

In my opinion this has been largely addressed in the 3PAR Generation 5 systems that are being released today.

3PAR 20k system in the flesh

NEW HP 3PAR 20800

I believe this replaces both the 10800/V800 (2-8 controllers) and the 10400/V400 (2-4 controllers) systems.

  • 2-8 controllers, max 96 x 2.5GHz CPU cores (12/controller) and 16 Generation 5 ASICs (2/controller)
  • 224GB of RAM cache per controller (1.8TB total – 768GB control / 1,024GB data)
  • Up to 32TB of Flash read cache
  • Up to 6PB of raw capacity (SSD+HDD)
  • Up to 15PB of usable capacity w/deduplication
  • 12Gb SAS back end, 16Gb FC (Max 160 ports) / 10Gb iSCSI (Max 80 ports) front end
  • 10 Gigabit ethernet replication port per controller
  • 2.5 Million Read IOPS under 1 millisecond
  • Up to 75 Gigabytes/second throughput (read), up to 30 Gigabytes/second throughput (write)

NEW HP 3PAR 20850

This augments the existing 7450 with a high end 8-controller capable all flash offering, similar to the 20800 but with more adrenaline.

  • 2-8 controllers, max 128 x 2.5GHz CPU cores (16/controller), and 16 Generation 5 ASICs (2/controller)
  • 448GB of RAM cache per controller (3.6TB total – 1,536GB control / 2,048GB data)
  • Up to 4PB of raw capacity (SSD only)
  • Up to 10PB of usable capacity w/deduplication
  • 12Gb SAS back end, 16Gb FC (Max 160 ports) / 10Gb iSCSI (Max 80 ports) front end
  • 10 Gigabit ethernet replication port per controller
  • 3.2 Million Read IOPS under 1 millisecond
  • 75 Gigabytes/second throughput (read), 30 Gigabytes/second throughput (write)

Even though flash is really, really fast, 3PAR leverages large amounts of cache to optimize writes to the back end media, which not only improves performance but extends SSD life. If I recall correctly, nearly 100% of the cache on the all-SSD systems is write cache; reads are really cheap so very little caching is done on reads.

It’s only going to get faster

One of the bits of info I learned is that the claimed throughput by HP is a software limitation at this point. The hardware is capable of much more and performance will improve as the software matures to leverage the new capabilities of the Generation 5 ASIC (code named “Harrier 2”).

Magazine sleds are no more

Since the launch of the early 3PAR high end units more than a decade ago, 3PAR had leveraged custom drive magazines which allowed them to scale to 40×3.5″ drives in a 4U enclosure. That is quite dense, though they did not have similar ultra density for 2.5″ drives. I was told that nearline drives represent around 10% of disks sold on 3PAR, so they decided to do away with these high density enclosures and go with 2U enclosures without drive magazines. This allows them to scale to 48×2.5″ drives in 4U (vs 40×2.5″ in 4U before), but only 24×3.5″ drives in 4U (vs 40 before). Since probably greater than 80% of the drives they ship now are 2.5″, that is probably not a big deal. But as a 3PAR historian (perhaps) I found it interesting.

Along similar lines, the new 20k series systems are fully compatible with 3rd party racks. With the previous 10k series, as far as I know, the 4-node variant was 3rd party compatible but the 8-node required a custom HP rack.

Inner guts of a 3PAR 20k-series controller

There are dual internal SATA SSDs for the operating system (I assume some sort of mirroring is going on), and eight memory slots for data cache (max 32GB per slot) – these are directly connected to the pair of ASICs (under the black heatsinks). Another six memory slots for the control cache (operating system, metadata etc), also max 32GB per slot, are controlled by the Intel processors under the giant heat sinks.

3PAR 20k series controller

How much does it cost?

I'm told the new 20k series is surprisingly cost effective in the market. The price point is significantly lower than earlier generation high end 3PAR systems. It's priced higher than the all-flash 7450, for example, but not significantly more; entry level pricing is said to be in the $100,000 range, and I heard a number tossed around that was lower than that. I would assume that the blanket thin provisioning licensing model of the 7000 series extends to the new 20k series, but I am not certain.

It is only available as an 8-controller capable system at this time, so it requires at least 16U of space for the controllers since they are connected over the same sort of backplane as earlier models. Maybe in the future HP will release a smaller 2-4 controller capable system, or they may leave that to whatever replaces the 7450. I hope they come out with a smaller model because the port scalability of the 7000-series (and the F and E classes before them) is my #1 complaint on that platform; having only one PCIe expansion slot per controller is not sufficient.

NEW HP 3PAR 3.84TB cMLC SSD

HP says that this is the new Sandisk Optimus MAX 4TB drive, which is targeted at read intensive applications. If a customer uses this drive in a non read intensive (and non 3PAR) setting, the capacity drops to 3.2TB. So with 3PAR's adaptive sparing they are able to gain 600GB of capacity on the drive while simultaneously supporting any workload, without sacrificing anything.

This is double the size of the 1.92TB drive that was released last year. It will be available on at least the 20k and the 7450, most likely all other Gen4 3PAR platforms as well.

HP says this drops the effective cost of flash to $1.50/GB usable.

This new SSD comes with the same 5 year unconditional warranty that other 3PAR SSDs already enjoy.

I specifically mention the 7450 here because this new SSD effectively doubles the raw capacity of the system to 920TB of raw flash today (vs 460TB before). How many all flash systems scale to nearly a petabyte of raw flash?
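
As a quick sanity check, the two raw capacity figures imply the same maximum drive count; this is just my back-of-envelope math on the numbers in this post, not an HP spec quote:

    # Back-of-envelope: both raw-capacity figures above imply a ~240 SSD maximum.
    print(460 / 1.92)   # ~239.6 drives with the 1.92TB SSD ("460TB before")
    print(920 / 3.84)   # ~239.6 drives with the 3.84TB SSD ("920TB today")
    print(240 * 3.84)   # 921.6 TB raw, i.e. "nearly a petabyte"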

NEW HP 3PAR Persistent Checksum

With the Generation 4 systems 3PAR had end to end T10 data integrity checking within the array itself from the HBAs to the ASICs, to the back end ports and disks/SSDs. Today they are extending that to the host HBAs, and fibre channel switches as well (not sure if this extends to iSCSI connections or not).

The Generation 5 ASIC has a new line rate SHA-1 engine which replaces the line rate CRC engine in Generation 4 for even better data protection. I am not certain whether Persistent Checksum is Generation 5 specific (given they are extending it beyond the array, I would expect it to be possible on Generation 4 as well).
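
For reference, the "Logical Block Guard" that T10 DIF/PI appends to each 512-byte block is a 16-bit CRC. The snippet below computes it in software purely to illustrate what the HBAs and ASIC are doing at line rate in hardware; it is obviously not HP's implementation, and the parameters (polynomial 0x8BB7, zero initial value, no reflection) are the standard CRC-16/T10-DIF definition as I recall it, so double check before using this for anything real.

    def crc16_t10_dif(data: bytes) -> int:
        """CRC-16/T10-DIF guard tag: poly 0x8BB7, init 0x0000, no reflection (assumed standard parameters)."""
        crc = 0x0000
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
        return crc

    # Guard tag for one 512-byte block, the kind of value an HBA appends per T10 PI.
    block = bytes(range(256)) * 2
    print(hex(crc16_t10_dif(block)))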

NEW HP 3PAR Asynchronous Streaming Replication

I first heard about this almost two years ago at HP Storage Tech Day, but today it’s finally here. HP adds another method of replication to the existing sets they already had:

  • Synchronous replication – 0 data loss (strict latency limits)
  • Synchronous long distance replication (requires 3 arrays) – 0 data loss (latency limits between two of the three arrays)
  • Asynchronous replication – as low as 5 minutes of data loss (less strict latency limits)
  • Asynchronous streaming replication – as low as 1 second of data loss (less strict latency limits)

HP compares this to EMC's SRDF async replication, which has as low as 15 seconds of data loss, vs 3PAR with as low as 1 second.

If for some reason more data comes into the system than the replication link can handle, the 3PAR will automatically drop into regular asynchronous replication mode until replication is caught up, then switch back to asynchronous streaming.
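
Conceptually that fallback behaves something like the toy model below. This is purely my mental model of the behavior described above, not HP code, and the backlog threshold is invented; the array presumably makes this decision on internal state we never get to see.

    import collections

    class ReplicationLink:
        """Toy model of streaming vs. periodic async replication (illustration only)."""
        def __init__(self, link_mb_per_s, max_backlog_mb):
            self.capacity = link_mb_per_s        # what the WAN link can drain per second
            self.max_backlog = max_backlog_mb    # invented "falling behind" threshold
            self.backlog = collections.deque()   # pending replication writes, in MB
            self.mode = "streaming"

        def ingest(self, writes_mb):
            self.backlog.append(writes_mb)
            if self.mode == "streaming" and sum(self.backlog) > self.max_backlog:
                self.mode = "periodic-async"     # link can't keep up: fall back

        def drain_one_second(self):
            budget = self.capacity
            while self.backlog and budget > 0:
                sent = min(budget, self.backlog[0])
                budget -= sent
                if sent == self.backlog[0]:
                    self.backlog.popleft()
                else:
                    self.backlog[0] -= sent
            if self.mode == "periodic-async" and not self.backlog:
                self.mode = "streaming"          # caught up: back to ~1 second RPO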

This new feature is available on all Gen4 and Gen5 systems.

NEW Co-ordinated Snapshots

3PAR has long had the ability to snapshot multiple volumes on a single system simultaneously, and it’s always been really easy to use. Now they have extended this to be able to snapshot across multiple arrays simultaneously and make them application aware (in the case of VMware initially, Exchange, SQL Server and Oracle to follow).

This new feature is available on all Gen4 and Gen5 systems.

HP 3PAR Storage Federation

Up to 60PB of usable capacity and 10 Million IOPS with zero overhead

3PAR Federation

HP has talked about Storage Federation in the past. Today, with the new systems, the capacity knobs have of course been turned up a lot, and they've made it easier to use than earlier versions of the software, though they don't yet have completely automatic load balancing between arrays.

This federation is possible between all Gen4 and Gen5 systems.

Benefits from ASIC Acceleration

3PAR has always used in-house custom ASICs in their systems, and these are no different.

The ASICs within each HP 3PAR StoreServ 20850 and 20800 Storage controller node serve as the high-performance engines that move data between three I/O buses, a four memory-bank data cache, and seven high-speed links to the other controller nodes over the full-mesh backplane. These ASICs perform RAID parity calculations on the data cache and inline zero-detection to support the system’s data compaction technologies. CRC Logical Block Guard used by T10-DIF is automatically calculated by the HBAs to validate data stored on drives with no additional CPU overhead. An HP 3PAR StoreServ 20800 Storage system with eight controller nodes has 16 ASICs totaling 224 GB/s of peak interconnect bandwidth.
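
One way to read those interconnect numbers (this is my arithmetic on the quoted figures, not HP's published breakdown):

    # My arithmetic on the quoted backplane figures, not HP's own breakdown.
    nodes, asics, links_per_node, total_gb_s = 8, 16, 7, 224
    print(total_gb_s / asics)                     # 14.0 GB/s of interconnect bandwidth per ASIC
    print(total_gb_s / (nodes * links_per_node))  # 4.0 GB/s per node-to-node link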

3PAR 208x0 Controller Architecture

NEW Online data import from HDS arrays

You are now able to do online import of data volumes from Hitachi arrays in addition to the EMC VMAX, CX4, VNX, and HP EVA systems.

The competition

HP touts the scalability of usable and raw flash capacity of these new systems + the new 3.84TB SSD against their competition:

  • Consolidate thirty Pure Storage //m70 storage systems onto a single 3PAR 20850 (with 87% less power/cooling/space) ***
  • Consolidate eight XtremIO storage systems onto a single 3PAR 20850 (with 62% less power/cooling/space)
  • Consolidate three EMC VMAX 400K storage systems onto a single 3PAR 20850 (with 85% less power/cooling/space)

HP also touts that their throughput numbers (75GB/second) are between two and ten times faster than the competition. The 7450 came in at only 5.5GB/second, so this is quite a step up.

*** HP revised their presentation at the last minute; their original claims were against the Pure 450, which was replaced by the m70 on the same day as the 3PAR announcement. The numbers here are from memory from a couple of days ago, so they may not be completely accurate.

Fastest growing in the market

HP touted again that 3PAR was the fastest growing all-flash platform in the market last year. They also said they have sold more than 1,000 all-flash systems in the first half, which is more than Pure Storage sold in all of last year. In other talks with 3PAR folks specifically on market share, they say they are #1 in midrange in Europe and #2 in the Americas, with solid growth across the board consistently for many quarters now. 3PAR is still #5 in the all-flash market, likely due in part to the lack of compression (see below), but I have no doubt this new generation of systems will have a big impact on the market.

Still to come

Compression remains a road map item; they are working on it, but it is obviously not ready for release today. Also, this marks probably the first 3PAR hardware released in more than a decade that wasn't accompanied by SPC-1 results. HP says SPC-1 is coming, and it's likely they will do their first SPC-2 (throughput) test on the new systems as well.

Bottom line

HP continues to show that its 3PAR architecture is fully capable of embracing the all-flash era and has a long life left in it. Not only are you getting the maturity of the enterprise proven 3PAR systems (over a decade at this point), but you are not having to compromise on almost anything else related to all flash (compression being the last holdout).

June 15, 2014

HP Discover 2014: Datacenter services

Filed under: Datacenter — Nate @ 12:54 pm

(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc…)

I should be out sightseeing but have been stuck in my hotel room here in Sedona, AZ due to the worst food poisoning I've ever had, from food I ate on Friday night.

X As a service

The trend towards "as a service" (which seems to be more an accounting exercise to shift dollars to another column in the books than anything else) continues with HP's Facility as a Service.

HP will go so far as to buy you a data center (the actual building), fill it with equipment and rent it back to you for some set fee – with entry level systems starting at 150kW (which would be as few as say 15 high density racks). They can even manage it end to end if you want them to. I didn't realize myself the extent to which their services go. It requires a 5 or 10 year commitment however (which I believe has to do with accounting again). HP says they are getting a lot of positive feedback on this new service.

This is really targeted at those that must operate on premise due to regulations and cannot rely on a 3rd party data center provider (colo).

Flexible capacity

FaaS doesn't cover the actual computer equipment though; that is just the building, power, cooling etc. The equipment can either come from you or you can get it from HP using their Flexible Capacity program. This program also extends to the HP public cloud, as a resource pool for systems.

HP Flexible Capacity program

Entry level for Flexible capacity we were told was roughly a $500k contract ($100k/year).

Thought this was a good quote

“We have designed more than 65 million square feet of data center space. We are responsible for more than two-thirds of all LEED Gold and Platinum certified data centers, and we’ve put our years of practical experience to work helping many enterprises successfully implement their data center programs. Now we can do the same for you.”

Myself I had no idea that was the case, not even close.

HP Discover 2014: Software defined

Filed under: Datacenter,Events — Nate @ 12:26 pm

(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc…)

I have tried to be a vocal critic of the whole software defined movement, in that much of it is hype today, has been for a while, and will likely continue to be for a while yet. My gripe is not so much about the approach (the world of "software defined" sounds pretty neat); my gripe is about the marketing behind it that tries to claim we're already there, and we are not, not even close.

I was able to vent a bit with the HP team(s) on the topic and they acknowledged that we are not there yet either. There is a vision, and there is a technology. But there aren’t a lot of products yet, at least not a lot of promising products.

Software defined networking is perhaps one of the more (if not the most) mature platforms to look at. Last year I ripped into the whole idea pretty good, with what I thought were good points: basically that the technology solves a problem I do not have and have never had. I believe most organizations do not have a need for it either (outside of very large enterprises and service providers). See the link for a very in depth 4,000+ word argument on SDN.

More recently HP tried to hop on the bandwagon of Software Defined Storage, which in their view is basically the StoreVirtual VSA, a product that to me doesn't fit the scope of software defined; it is just a brand propped up onto a product that was already pretty old and already running in a VM.

Speaking of which, HP considers this VSA along with their ConvergedSystem 300 to be "hyper converged", and at least the people we spoke to do not see a reason to acquire the likes of Simplivity or Nutanix (why are those names so hard to remember the spelling of?). HP says most of the deals Nutanix wins are small VDI installations and aren't seen as a threat; HP would rather go after the VCEs of the world. I believe Simplivity is significantly smaller.

I’ve never been a big fan of StoreVirtual myself, it seems like a decent product, but not something I get too excited about. The solutions that these new hyper converged startups offer sound compelling on paper at least for lower end of the market.

The future is software defined

The future is not here yet.

It's going to be another 3-5 years (perhaps more). In the meantime customers will get drip fed the technology in products from various vendors that can do software defined in a fairly limited way (relative to the grand vision anyway).

When hiring for a network engineer, many customers would rather opt to hire someone who has a few years of python experience than more years of networking experience because that is where they see the future in 3-5 years time.

My push back to HP on that particular quote (not quoted precisely) is that that level of sophistication is very hard (and expensive) to hire for. A good comparative mark is hiring for something like Hadoop. It is very difficult to compete with the compensation packages of the largest companies, which offer $30-50k+ more than smaller (even billion $) companies.

So my point is the industry needs to move beyond the technology and into products. Having a requirement of knowing how to code is a sign of an immature product. Coding is great for extending functionality, but need not be a requirement for the basics.

HP seemed to agree with this, and believes we are on that track but it will take a few more years at least for the products to (fully) materialize.

HP Oneview

(here is the quick video they showed at Discover)

I’ll start off by saying I’ve never really seriously used any of HP’s management platforms(or anyone else’s for that matter). All I know is that they(in general not HP specific) seem to be continuing to proliferate and fragment.

HP Oneview 1.1 is a product that builds on this promise of software defined. In the past five years of HP pitching converged systems, seeing the demo for Oneview was the first time I've shown even a little bit of interest in converged.

HP Oneview was released last October, I believe, and HP claims something along the lines of 15,000 downloads or installations. Version 1.10 was announced at Discover, offering some new integration points including:

  • Automated storage provisioning and attachment to server profiles for 3PAR StoreServ Storage in traditional Fibre Channel SAN fabrics, and Direct Connect (FlatSAN) architectures.
  • Automated carving of 3PAR StoreServ volumes and zoning the SAN fabric on the fly, and attaching of volumes to server profiles.
  • Improved support for Flexfabric modules
  • Hyper-V appliance support
  • Integration with MS System Center
  • Integration with VMware vCenter Ops manager
  • Integration with Red Hat RHEV
  • Similar APIs to HP CloudSystem

Oneview is meant to be lightweight and act as a sort of proxy into other tools, such as Brocade's SAN manager in the case of Fibre Channel (myself I prefer Qlogic management, but I know Qlogic is getting out of the switch business). For several HP products such as 3PAR and Bladesystem, though, Oneview seems to talk to them directly.

Oneview aims to provide a view that starts at the data center level and can drill all the way down to individual servers, chassis, and network ports.

However the product is obviously still in its early stages – it currently only supports HP's Gen8 DL systems (G7 and Gen8 BL). HP is thinking about adding support for older generations, but their tone made me think they will drag their feet long enough that it's no longer demanded by customers. The bulk of what I have in my environment today is G7; I only recently deployed a few Gen8 systems two months ago. Also all of my SAN switches are Qlogic (and I don't use HP networking now), so Oneview functionality would be severely crippled if I were to try to use it today.

The product on the surface does show a lot of promise though, there is a 3 minute video introduction here.

HP pointed out you would not manage your cloud from this, but instead the other way around, cloud management platforms would leverage Oneview APIs to bring that functionality to the management platform higher up in the stack.
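
To give a feel for what "leverage Oneview APIs" means in practice, here is a minimal sketch of talking to the appliance's REST interface with nothing but the Python standard library. The endpoint paths, header names, field names and API version are from my recollection of the 1.x documentation and may well be wrong for your installation, so treat every name below as an assumption and check the official REST reference (self-signed appliance certificates would also need handling in real use).

    import json
    import urllib.request

    ONEVIEW = "https://oneview.example.com"   # hypothetical appliance address

    def call(path, payload=None, session=None, api_version="120"):
        """Small helper for OneView-style REST calls (GET if no payload, else POST)."""
        data = json.dumps(payload).encode() if payload is not None else None
        req = urllib.request.Request(ONEVIEW + path, data=data)
        req.add_header("Content-Type", "application/json")
        req.add_header("X-API-Version", api_version)   # assumed header/version, verify
        if session:
            req.add_header("Auth", session)            # assumed session header, verify
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    # Log in (assumed endpoint and placeholder credentials), then list managed
    # server hardware (assumed endpoint and response fields).
    login = call("/rest/login-sessions", {"userName": "administrator", "password": "secret"})
    for server in call("/rest/server-hardware", session=login["sessionID"])["members"]:
        print(server["name"], server.get("powerState"))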

HP has renamed their Insight Control systems for vCenter and MS System Center to Oneview.

The goal of Oneview is automation that is reliable and repeatable. As with any such tool, it seems like you'll have to work within its constraints and go around it when it doesn't do the job.

“If you fancy being able to deploy an ESX cluster in 30 minutes or less on HP Proliant Gen8 systems, HP networking and 3PAR storage, then this may be the tool for you.” – me

The user interface seems quite modern and slick.

They expose a lot of functionality in an easy to use way, but one thing that struck me watching a couple of their videos is that it can still be made a lot simpler – there is a lot of jumping around to do different tasks. I suppose one way to address this might be broader wizards that cover multiple tasks in the order they should be done in, or something along those lines.

HP Discover 2014: Helion (Openstack)

Filed under: Datacenter,Events — Nate @ 10:36 am

(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc…)

HP Helion

This is a new brand for HP’s cloud platform based on OpenStack. There is a commercial version and a community edition. The community edition is pure OpenStack without some of the fancier HP management interfaces on top of it.

“The easiest thing about OpenStack is setting it up – organizations spend the majority of the time simply keeping it running after it is set up.”

HP admits that OpenStack has a long way to go before it is considered a mature enterprise application stack. But they do have experience running a large OpenStack public cloud and have hundreds of developers working on the product. In fact HP says that most OpenStack community projects these days are basically run by HP; while other large contributors (even Rackspace) have pulled back on resource allocation to the project, HP has gone in full steam ahead.

HP has many large customers who specifically asked HP to get involved in the project and to provide a solution for them that can be supported end to end. I must admit the prospect does sound attractive, being that you can get HP storage, servers and networking all battle tested and ready to run this new cloud platform; the OpenStack platform itself is by far the biggest weak point today.

It is not there yet though; HP does offer professional services for the customer's entire OpenStack deployment life cycle.

One key area that has been weak in OpenStack which recently made the news, is the networking component Neutron.

“[..] once you get beyond about 50 nodes, Neutron falls apart”

So to stabilize this component, HP integrated support for their SDN controller into the lower levels of Neutron. This allowed it to scale much better and maintain complete compatibility with existing APIs.

That is something HP is doing in several cases, they emphasize very strongly they are NOT building a proprietary solution, and they are NOT changing any of the APIs (they are helping change them upstream) as to break compatibility. They are however adding/moving some things around beneath the API level to improve stability.

The initial cost for the commercial version is $1,400/server/year, which is quite reasonable; I assume that includes basic support. The commercial version is expected to become generally available in the second half of 2014.

Major updates will be released every six months, and minor updates every three months.

Very limited support cycle

One thing that took almost everyone in the room by surprise is the support cycle for this product. Normally enterprise products have support for 3-5 years; Helion has support for a maximum of 18 months. HP says 12 of those months are general support and the last six are specifically geared towards migration to the next version, which they say is not a trivial task.

I checked Red Hat's policy, as they are another big distributor of OpenStack, and their policy is similar – they had one year of support on version three of their product and have one and a half years on version four (the current version). Despite the version numbers, apparently version three was the first release to the public.

So given that, it should just reinforce the fact that OpenStack is not a mature platform at this point and it will take some time before it is, probably another 2-3 years at least. They only recently got the feature that allows for upgrading the system.

HP does offer a fully integrated ConvergedSystem with Helion, though despite my best efforts I am unable to find a link that specifically mentions Helion or OpenStack.

HP is supporting ESXi and KVM as the initial hypervisors in Helion. OpenStack itself supports a much wider variety, but HP is electing to start with those two. Support for Hyper-V will follow shortly.

HP also offers full indemnification from legal issues as well.

This site has a nice diagram of what HP is offering, not sure if it is an HP image or not so sending you there to see it.

Conclusion

My own suggestion is to steer clear of Openstack for a while yet, give it time to stabilize, don’t deploy it just because you can. Don’t deploy it because it’s today’s hype.

If you really, truly need this functionality internally then it seems like HP has by far the strongest offerings from a product and support standpoint(they are willing and able to do everything from design to deployment to operationally running it). Keep in mind depending on scale of deployment you may be constantly planning for the next upgrade (or having HP plan for you).

I would argue that the vast majority of organizations do not need OpenStack (in its current state) and would do themselves a favor by sticking to whatever they are already using until it's more stable. Your organization may have pains running whatever you're running now, but you're likely to just trade those pains for other pains going the OpenStack route right now.

When will it be stable? I would say a good indicator will be the support cycle: when HP (or Red Hat) starts having a full 3 year support cycle on the platform (with back ported fixes etc), it has probably hit a good milestone.

I believe OpenStack will do well in the future, it’s just not there yet today.

June 10, 2014

HP Discover Las Vegas 2014: Apollo 8000

Filed under: Datacenter — Nate @ 1:40 am

(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc…)

I witnessed what I'd like to say is one of the most insane unveilings of a new server in history. It sort of mimicked the launch of an Apollo spacecraft: lots of audio and video from NASA, and then when the system appeared, lots of compressed air/smoke (very very loud) and dramatic music.

Here’s the full video in 1080p, beware it is 230MB. I have 100Mbit of unlimited bandwidth connected to a fast backbone, will see how it goes.

HP Apollo 8000 launch

HP Apollo 8000

Apollo is geared squarely at compute bound HPC, and is the result of a close partnership between HP, Intel and the National Renewable Energy Laboratory (NREL).

The challenge HP presented itself with is what would it take to drive a million teraflops of compute. They said with today’s technology it would require one gigawatt of power and 30 football fields of space.

Apollo is supposed to help fix that, though the real savings are still limited by today's technology; it's not as if they were able to squeeze a 10 fold improvement in performance out of the same power footprint. Intel said they were able to get, I want to say, 35% more performance out of the same power footprint using Apollo vs (I believe) the blades they were using before. I am assuming in both cases the CPUs were about the same and the savings came mainly from the more efficient design of power/cooling.

They built the system as a rack-level design. You probably haven't been reading this blog long enough to remember, but four years ago I lightly criticized HP's SL series as not being visionary enough, in that they were not building for the rack. The comparison I gave was with another solution I was toying with at the time for a project, from SGI, called CloudRack.

Fast forward to today and HP has answered that, and then some, with a very sophisticated design that is integrated as an entire rack in the water cooled Apollo 8000. They have a mini version called the Apollo 6000 (if this product was available before today I had never heard of it myself, though I don't follow HPC closely), of which Intel apparently already has something like 20,000 servers deployed.

Apollo 8000 water cooling system

Anyway, one of the keys to this design is the water cooling. It's not just any water cooling though; the water in this case never gets into the server. They use heat pipes on the CPUs and GPUs and transfer the heat to what appears to be a heat sink of some kind on the outside of the chassis, which then "melds" with the rack's water cooling system to carry the heat away from the servers. Power is also managed at a rack level. Don't get me wrong, this is far more advanced than the SGI system of four years ago. HP is apparently not giving this platform a very premium price either.

Apollo 8000 server level water cooling

Their claims include:

  • 4 X the teraflops per square foot (vs air cooled servers)
  • 4 X density per rack per dollar (not sure what the per dollar means but..)
  • Deployment possible within days (instead of “years”)
  • More than 250 Teraflops per rack (the NREL Apollo 8000 system is 1.2 Petaflops in 18 racks …)
  • Apollo 6000 offering 160 servers per rack, and Apollo 8000 having 144
  • Some fancy new integrated management platform
  • Up to 80KW powering the rack (less if you want redundancy – 10kW per module)
  • One cable for ILO management of all systems in the rack
  • Can run on water temps as high as 86 degrees F (30 C)

The cooling system for the 8000 goes in a separate rack; it consumes half a rack and supports a maximum of four server racks. If you need redundancy then I believe a 2nd cooling unit is required, so two racks. The cooling system weighs over 2,000 pounds by itself, so it appears unlikely that you'd be able to put two of them in a single cabinet.

The system takes in 480V AC and converts it into DC in up to 8x10kW redundant rectifiers in the middle of the rack.

NREL integrated the Apollo 8000 system with their building’s heating system, so that the Apollo cluster contributes to heating their entire building so that heat is not wasted.

It looks as if SGI discontinued the product I was looking at four years ago (at the time for a Hadoop cluster). It supported up to 76 dual socket servers in a rack at the time, with a maximum of something like 30kW (I don't believe any configuration at the time could draw more than 20kW), and had rack based power distribution as well as rack based cooling (air cooling). It seems it was replaced with a newer product called Infinite Data Cluster, which can go up to 80 dual socket servers in an air cooled rack (though GPUs are not supported, unlike Apollo).

This new system doesn't mean a whole lot to me (I don't deal with HPC so I may never use it), but the tech behind it seemed pretty neat, and obviously I was interested in HP finally answering my challenge to deploy a system based on an entire rack with rack based power and cooling.

The other thing that stuck out to me was the new management platform. HP said it was another unique management platform specific to Apollo, which was sort of confusing given I had sat through what I thought was a very compelling preview of HP OneView (the latest version announced today), HP's new converged management interface. It seems strange to me that they would not integrate Apollo into that from the start, but I guess that's what you get from a big company with teams working in parallel.

HP tries to justify this approach by saying there are several unique things in Apollo components so they needed a custom management package. I think they just didn’t have time to integrate with OneView, since there is no justification I can think of to not expose those management points via APIs that OneView can call/manage/abstract on behalf of the customer.

It sure looks pretty though(more so in person, I’m sure I’ll get a better pic of it on the showroom floor in normal lighting conditions along with pics of the cooling system).

UPDATE – some pics of the compute stuff

HP Apollo Series Server node Front part

HP Apollo Series Server node inside back part

HP Apollo 8000 Series server front end

HP Apollo 8000 Series server inside front part

HP Apollo 8000 Series front end

HP Apollo 8000 series heat pipes, one for each CPU or GPU in the server, passing heat to the side of the server

June 9, 2014

3PAR June 2014: The 7450 AFA keeps getting better

Filed under: Storage — Nate @ 3:16 pm

(I don’t know if I need one of those disclaimer things at the top here that says HP paid for my hotel and stuff in Vegas for Discover because I learned about this before I got here and was going to write about it anyway, but in any case know that..)

All about Flash

The 3PAR announcements at HP Discover this week are all about HP 3PAR's all-flash array, the 7450, which was announced at last year's Discover event in Las Vegas. HP has tried hard to convince the world that the 3PAR architecture is competitive even in the new world of all flash. Several of the other big players in storage – EMC, NetApp, and IBM – have either acquired companies specializing in all flash or, in the case of NetApp, both acquired one and been simultaneously building a new system (apparently called Flash Ray, which folks think will be released later in the year).

Dell and HDS, like HP, have decided not to do that, instead relying on in-house technology for all-flash use cases. Of course there have been a ton of all-flash startups, all trying to be market disruptors.

So first a brief recap of what HP has done with 3PAR to-date to optimize for all flash workloads:

  • Faster CPUs, doubling of the data cache (7400 vs 7450)
  • Sophisticated monitoring and alerting with SSD wear leveling (alert at 90% of max endurance, force fail the SSD at 95% max endurance)
  • Adaptive Read cache – only read what you need, does not attempt to read ahead because the penalty for going back to the SSD is so small, and this optimizes bandwidth utilization
  • Adaptive write cache – only write what you require; if 4kB of a 16kB page is written then the array only writes 4kB, which reduces wear on the SSDs, which typically have a shorter life span than spinning rust (see the sketch after this list)
  • Autonomic cache offload – more sophisticated cache flushing algorithms (this particular one has benefits for disk-based 3PARs as well)
  • Multi tenant I/O processing – multi threaded cache flushing, supporting both large (typically sequential) and small (typically random) I/O sizes simultaneously in an efficient manner – separates the large I/Os into more efficient small I/Os for the SSDs to handle
  • Adaptive sparing – basically allows them to unlock hidden storage capacity (upwards of 20%) on each SSD to use for data storage without compromising anything.
  • Optimized the 7xxx platform by leveraging PCI Express Message Signaled Interrupts, which allowed the system to reach a staggering 900,000 IOPS at 0.7 millisecond response times (caveat: that is a 100% read workload)
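
To illustrate the adaptive write cache idea from the list above: the win comes from tracking dirtiness at sub-page granularity, so that a flush only sends the pieces that actually changed. The toy model below is just the concept and has nothing to do with how the cache firmware really tracks it.

    PAGE_KB, SECTOR_KB = 16, 4   # 16kB cache page tracked in 4kB pieces

    class CachePage:
        """Toy model: only flush the 4kB pieces of a 16kB page that were written."""
        def __init__(self):
            self.dirty = [False] * (PAGE_KB // SECTOR_KB)

        def write(self, offset_kb, length_kb):
            first = offset_kb // SECTOR_KB
            last = (offset_kb + length_kb - 1) // SECTOR_KB
            for i in range(first, last + 1):
                self.dirty[i] = True

        def flush_kb(self):
            """kB actually written to the SSD on flush (vs. always writing 16kB)."""
            written = sum(self.dirty) * SECTOR_KB
            self.dirty = [False] * len(self.dirty)
            return written

    page = CachePage()
    page.write(offset_kb=4, length_kb=4)
    print(page.flush_kb())   # 4 -- not 16, which means less wear on the SSD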

I learned at HP Storage tech day last year that among the features 3PAR was working on were:

  • In line deduplication for file and block
  • Compression
  • File+Object services running directly on 3PAR controllers

There were no specifics given at the time.

Well part of that wait is over.

Hardware-accelerated deduplication

In what I believe is an industry exclusive, somehow 3PAR has managed to find some spare silicon in their now 3-year old Gen4 ASIC to give complete CPU-offloaded inline deduplication for transactional workloads on their 7450 all flash array.

They say that the software will typically return 4:1 to 10:1 data reduction levels. This is not meant to compete against HP StoreOnce, which offers much higher levels of data reduction; this is for transaction processing (which StoreOnce cannot do) and primarily to reduce the cost of operating an all-flash system.

It has been interesting to see 3PAR evolve, as a customer of theirs for almost eight years now. I remember when NetApp came out and touted deduplication for transactional workloads and 3PAR didn’t believe in the concept due to the performance hit you would(and they did) take.

Now they have line rate (I believe) hardware deduplication, so that argument no longer applies. The caveat, at least for this announcement, is that this feature is limited to the 7450. There is nothing technical that prevents it from getting to their other Gen4 systems, whether that is the 7200, 7400, 10400, or 10800. But support for those is not mentioned yet; I imagine 3PAR is beating their drum to the drones out there who might still be discounting 3PAR because they have a unified architecture between AFA, hybrid flash/disk and disk-only systems (like mine).

One of 3PAR's main claims to fame is that you can crank up a lot of their features and they do not impact system performance, because most of the work is performed by the ASIC. It is nice to see that they have been able to continue this trend, and while it obviously wasn't introduced on day one with the Gen4 ASIC, it does not require customers to wait, or to upgrade their existing systems to the next generation ASIC (whenever the Gen5 comes out – I'd wager December 2015), to get this functionality.

The deduplication operates using fixed page sizes that are 16kB each, which is a standard 3PAR page size for many operations like provisioning.

For 3PAR customers, note that this technology is based on Common Provisioning Groups (CPGs), so data within a CPG can be deduplicated. If you opt for a single CPG on your system and put all of your volumes on it, that effectively makes the deduplication global.
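
Here is a toy illustration of what fixed-page dedup scoped to a CPG means: hash each 16kB page and only store pages whose fingerprint hasn't been seen in that CPG before. The real thing runs in the ASIC with Express Indexing for the lookup table; hashlib.sha1 below is just a stand-in content hash for the sketch, not a claim about how the Gen4 ASIC fingerprints or verifies pages.

    import hashlib

    PAGE = 16 * 1024   # 3PAR's standard 16kB page size

    class CpgDedupStore:
        """Toy fixed-page dedup store scoped to one CPG (concept only, not 3PAR's design)."""
        def __init__(self):
            self.index = {}      # page fingerprint -> physical page id
            self.pages = []      # "physical" pages actually stored

        def write_page(self, data):
            assert len(data) == PAGE
            key = hashlib.sha1(data).digest()    # stand-in fingerprint for the sketch
            if key not in self.index:            # new data: store it
                self.index[key] = len(self.pages)
                self.pages.append(data)
            return self.index[key]               # duplicates just get a reference

    cpg = CpgDedupStore()
    a = cpg.write_page(b"A" * PAGE)
    b = cpg.write_page(b"A" * PAGE)   # identical page, deduplicated
    print(a == b, len(cpg.pages))     # True 1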

Express Indexing

This is a patented approach which allows 3PAR to use significantly less memory than would be otherwise required to store lookup tables.

Thin Clones

Thin clones are basically the ability to deduplicate VM clones (I imagine this would need hypervisor integration like VAAI)  for faster deployment. So you could probably deploy clones at 5-20x the speed at which you could before.

NetApp here too has been touting a similar approach for a few years on their NFS platform anyway.

Two Terabyte SSDs

Well, almost 2TB, coming in at 1.92TB. These are actually 1.6TB cMLC SSDs, but the aforementioned adaptive sparing allows 3PAR to bump the usable capacity of the device way up without compromising on any aspect of data protection or availability.

These appear to be Sandisk Lightning Read-intensive SSDs. I found that info here(PDF).

Info on ordering Sandisk SSDs for HP 3PAR

I also quote aforementioned PDF

The 1920GB is available only in the StoreServ 7450 until the end of September 2014.

It will then be available in other models as well October 2014.

New 1.6TB SSD I believe comes from Sandisk, not the fastest around but certainly a bunch faster than a 15k RPM disk!

These SSDs come with a five year unconditional warranty, which is better than the included warranty on disks on 3PAR (three years). This 5-year warranty is extended to the 480GB and 920GB MLC SSDs as well. Assuming Sandisk is indeed the supplier as it appears, the 5-year warranty exceeds the manufacturer's own 3-year warranty.

These are technically consumer grade however HP touts their sophisticated flash features that make the media effectively more reliable than it otherwise might be in another architecture, and that claim is backed by the new unconditional warranty.

These are much more geared towards reads vs writes, and are significantly lower cost on a per GB basis than all previous SSD offerings from HP 3PAR.

The cost impact of these new SSDs is pretty dramatic, with the per GB list cost dropping from about $26.50 this time last year to about $7.50 this year.

These new SSDs allow for up to 460TB of raw flash on the 7450, which HP claims is seven times more than Pure Storage (a massively funded AFA startup), and 12 times more than a four brick EMC XtremIO system.

With deduplication the 7450 can get upwards of 1.3PB of usable flash capacity in a single system along with 900,000 read I/Os with sub millisecond response times.

Dell Compellent about a year or so ago updated their architecture to leverage what they called read optimized low cost SSDs, and updated their auto tiering software to be aware of the different classes of SSDs. There are no tiering enhancements announced today, in fact I suspect you can’t even license the tiering software on a 3PAR 7450 since there is only one “tier” there.

So what do you get when you combine this hardware accelerated deduplication and high capacity low cost solid state media?

Solid state at less than $2/GB usable

HP says this puts solid state costs roughly in line with that of 15k RPM spinning disk. This is a pretty impressive feat. Not a unique one, there are other players out there that have reached the same milestone, but that is obviously not the only arrow 3PAR has in their arsenal.

That arsenal is what HP believes is the reason you should go 3PAR for your all-flash workloads. Forget about the startups, forget about EMC's XtremIO, forget about NetApp Flash Ray, forget about IBM's TMS flash systems, etc etc.

Six nines of availability, guaranteed

HP is now willing to put their money where their mouth is and sign a contract that guarantees six nines of availability on any 4-node 3PAR system (I originally thought it was 7450-specific; it is not). That is a very bold statement to make in my opinion. This obviously comes as a result of an architecture that has been refined over roughly the past fifteen years and has some really sophisticated availability features including:

  • Persistent ports – very rapid fail over of host connectivity for all protocols in the event of planned or unplanned controller disruption. They have laser loss detection for fibre channel as well which will fail over the port if the cable is unplugged. This means that hosts do not require MPIO software to deal with storage controller disruptions.
  • Persistent cache – rapid re-mirroring of cache data to another node in the event of planned or unplanned controller disruption. This prevents the system from going into “write through” mode which can otherwise degrade performance dramatically. The bulk of the 128 GB of cache(4-node) on a 7450 is dedicated to writes (specifically optimizing I/O for the back end for the most efficient usage of system resources).
  • The aforementioned media wear monitoring and proactive alerting (for flash anyway)

They have other availability features that span systems (the guarantee does not require any of these).

I would bet that you'll have to follow very strict guidelines to get HP to sign on the dotted line: no deviation from supported configurations. 3PAR has always been a stickler for what they have been willing to support, for good reason.

Performance guaranteed

HP won’t sign on a dotted line for this, but with the previously released Priority Optimization, customers can guarantee their applications:

  • Performance minimum threshold
  • Performance maximum threshold (rate limiting)
  • Latency target

All in a very flexible manner; these capabilities (combined) are, I believe, still unique in the industry (some folks can do rate limiting alone).
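
As an aside for readers who haven't used storage QoS before, a "performance maximum threshold" is mechanically just rate limiting, usually some flavor of token bucket. The sketch below is a generic illustration of capping IOPS for a tenant, not how Priority Optimization is actually implemented:

    import time

    class IopsCap:
        """Generic token-bucket IOPS cap (illustration only, not Priority Optimization)."""
        def __init__(self, max_iops):
            self.rate = max_iops
            self.tokens = float(max_iops)
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False   # over the cap: a real array would queue/delay this I/O instead

    tenant = IopsCap(max_iops=5000)
    print(sum(tenant.allow() for _ in range(10000)))   # roughly 5000 allowed from the initial burst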

Online import from EMC VNX

This was announced about a month or so ago, but basically HP makes it easy to import data from an EMC VNX without any external appliances, professional services, or performance impact.

This product (which is basically a plugin written to interface with the VNX/CX's SMI-S management interface) went generally available, I believe, this past Friday. I watched a full demo of it at Discover and it was pretty neat. It does require direct fibre channel connections between the EMC and 3PAR systems, and it does require (in the case of Windows anyway, which was the demo) two outages on the server side:

  • Outage 1: remove EMC Powerpath – due to some damage Powerpath leaves behind, you must also uninstall and re-install the Microsoft MPIO software using the standard control panel method. These require a restart.
  • Outage 2: Configure Microsoft MPIO to recognize 3PAR (requires restart)

Once the 2nd restart is complete the client system can start using the 3PAR volumes as the data migrates in the background.

So online import may not be the right term for it, since the system does have to go down at least in the case of Windows configuration.

The import process currently supports Windows and Linux. The product came about, I believe, as a result of the end of life status of the MPX 2000 appliance which HP had been using to migrate data. They needed something to replace that functionality, so they took the Peer Motion technology already on 3PAR, which was already used for importing data from HP EVA storage, and extended it to EMC. They are evaluating the possibility of extending this to more platforms – I guess VNX/CX was an easy first target given there are a lot of those out there that are old, and there isn't an easier migration path than the EMC to 3PAR import tool (which is apparently significantly easier and less complex than EMC's options). One of the benefits HP touts of their approach is that it has no impact on host performance, as the data goes directly between the arrays.

The downside to this direct approach is that the 7000-series of 3PAR arrays are very limited in port counts, especially if you happen to have iSCSI HBAs in them (as were the F and E classes before them). The 10000-series has a ton of ports though. What I learned last year at HP Storage tech day was that HP was looking at possibly shrinking the bigger 10000-series controllers for the next round of mid range systems (rather than making the 7000-series controllers bigger) in an effort to boost expansion capacity. I'd like to see at least 8 FC host ports and 2 iSCSI host ports per controller on the mid range. Currently you get only 2 FC host ports if you have an iSCSI HBA installed in a 7000-series controller.

The tool is command line based, there are a few basic commands it does and it interfaces directly with the EMC and 3PAR systems.

The tool is free of charge as far as I know, and while 3PAR likes to tout that no professional services are required, HP says some customers may need assistance (especially at larger scales) planning migrations. If that is the case, HP has services ready to hold your hand through every step of the way.

What I’d like to see from 3PAR still

Read and write SSD caches

I’ve talked about it for years now, but still want to see a read (and write – my workloads are 90%+ write) caching system that leverages high endurance SSDs on 3PAR arrays. HP announced SmartCache for Proliant Gen8 systems I believe about 18 months ago with plans to extend support to 3PAR but that has not yet happened. 3PAR is well aware of my request, so nothing new here. David did mention that they do want to do this still, no official timelines yet. Also it sounded like they will not go forward with the server-side SmartCache integration with 3PAR (I’d rather have the cache in the array anyway and they seem to agree).

David Scott – SVP and GM of all of HP Storage talking with us bloggers in the lounge

3PAR 7450 SPC-1

I'd like to see SPC-1 numbers for the 7450, especially with this new flash media; it ought to provide some pretty compelling cost and performance numbers. You can see some recent performance testing that was done (that wasn't 100% read) on a four node 7450 on behalf of HP.

Demartek also found that the StoreServ 7450 was not very heavily taxed with a single OLTP database accessing the array. As a result, we proceeded to run a combination of database workloads including two online transaction processing (OLTP) workloads and a data warehouse workload to see how well this storage system would handle a fairly heavy, mixed workload.

HP says the lack of SPC-1 comes down to priorities; it's a decent amount of work to do the tests and they have had people working on other things. They still intend to do them, but aren't sure when it will happen.

Compression

I would like to see compression support; I'm not sure whether that will have to wait for the Gen5 ASIC or if there are more rabbits hiding in the Gen4 hat.

Feature parity

I certainly want to see deduplication come to the other Gen4 platforms. HP touts a lot about the competition's flash systems being silos. I'll let you in on a little secret – the 3PAR 7450 is a flash silo as well. Not a technical silo, but one imposed on the product by marketing. While the reasons behind it are understandable, it is unfortunate that HP feels compelled to limit the product to appease certain market observers.

Faster CPUs

I was expecting a CPU refresh on the 7450, which was launched with an older generation of processor because HP didn't want to wait for Intel's newest chip to launch their new storage platform. I was told last year the 7450 is capable of operating with the newer chip, so it should just be a matter of plugging it in and doing some testing. That is supposed to be one of the benefits of using x86 processors: you don't need to wait years to upgrade. HP says the Gen4 ASIC is not out of gas; the performance numbers to-date are limited by the CPU cores in the system, so faster CPUs would certainly benefit the system further without much cost.

No compromises

At the end of the day the 3PAR flash story has evolved into one of no compromises. You get the low cost flash, you get the inline hardware accelerated deduplication, you get the high performance with multitenancy and low latency, and you get all of that without compromising on any other tier 1 capabilities (too many to go into here; see past posts for more info). You're getting a proven architecture that has matured over the past decade, a common operating system, and the only storage platform that leverages custom ASICs to give uncompromising performance even with the bells & whistles turned on.

The only compromise here is you had to read all of this and I didn’t give you many pretty pictures to look at.

May 30, 2014

HP Discover 2014 – Las Vegas

Filed under: Uncategorized — Nate @ 10:31 am

I'm going to be attending my first HP Discover in two weeks in Las Vegas. HP has asked me for a while to go, but I do not like big trade shows (or anywhere with large crowds of people), so until now I have shied away.

I had a really good time at the HP Storage tech day and Nth symposium last year so I decided I wanted to try out Discover this year given that I know at least some folks that will be there and we’ll be in a somewhat organized group of “bloggers” led by Calvin Zito the HP Storage blogger.

I’ve never been to Las Vegas before but I’ll be there from June 8th and leaving on the 13th. After that I’m going to Arizona to check out the Grand Canyon and a few other places for a few days and return home the following week some time.

Looking forward to meeting some folks there, should be pretty fun.

December 19, 2013

Facebook deploying HP Vertica

Filed under: General — Nate @ 9:33 am

I found this interesting. Facebook – a company that designs their own servers (in custom racks no less), writes their own software, does fancy stuff in PHP to make it scale, is a big user of Hadoop, does massive horizontal scaling of sharded MySQL systems, and has developed an exabyte scale query engine – is going to be deploying HP Vertica as part of their big data infrastructure.

Apparently announced at HP Discover

“Data is incredibly important: it provides the opportunity to create new product enhancements, business insights, and a significant competitive advantage by leveraging the assets companies already have. At Facebook, we move incredibly fast. It’s important for us to be able to handle massive amounts of data in a respectful way without compromising speed, which is why HP Vertica is such a perfect fit.”

Not much else to report on; I just thought it was interesting given all the stuff Facebook tries to do on its own.

December 9, 2013

HP Moonshot: VDI Edition

Filed under: Virtualization — Nate @ 8:30 pm

To date I have not been too excited about HP's Moonshot system; I've been far more interested in AMD's SeaMicro. However HP has now launched a Moonshot based solution that does look very innovative and interesting.

HP Moonshot: VDI Edition (otherwise known as HP ConvergedSystem 100 for hosted desktops) takes advantage of (semi ironically enough) AMD APUs, which combine a CPU and GPU in a single chip, and allows you to host up to 180 users in a 4.3U system, each with their own dedicated CPU and GPU! The GPU is the key thing here; that is an area where most VDI has fallen far short.

Everything as you might imagine is self contained within the Moonshot chassis, there is no external storage.

EACH USER gets:

  • Quad core 1.5Ghz CPU
  • 128 GPU Cores (Radeon of course)
  • 8GB RAM (shared with GPU)
  • 32GB SSD
  • Windows 7 OS

That sounds luxurious! I’d be really interested to see how this solution stacks up against competing VDI solutions.

AMD X2150 APU: The brains of the HP Moonshot VDI experience

They claim you can be up and going in as little as two hours — with no hypervisor. This is bare metal.

You probably get superior availability as well: given there are 45 different cartridges in the chassis, if one fails you lose only four desktops. The operational advantages (on paper at least) for something like this seem quite compelling.

I swear it seems a day doesn't go by without an SSD storage vendor touting their VDI cost savings (and they never seem to mention things like, you know, servers, LICENSING, GPUs, etc etc – it really annoys me).

VDI is not an area I have expertise in but I found this solution very interesting, and it seems like it is VERY dense at the same time.

HP Moonshot M700 cartridge with four servers on it (45 of these in a chassis)

HP doesn't seem to get specific on power usage, other than that you save a lot vs desktop systems. The APUs themselves seem to be rated at 15W each on the specs, which implies a minimum power usage of 2,700W. Yet each power supply in the Moonshot appears to have a rated steady-state power output of 653W; with four of those that is about 2,600W for the whole chassis, though HP says the Moonshot supports only 1200W PSUs, so it's sort of confusing. The HP Power Advisor has no data for this module.
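
Running the numbers that are available (my arithmetic only, and the 15W APU rating is the only per-node figure I could find, so treat it as a rough floor):

    # Rough power math on the ConvergedSystem 100 numbers above (my arithmetic only).
    cartridges, servers_per_cartridge, apu_watts = 45, 4, 15
    psus, psu_steady_state_watts = 4, 653
    desktops = cartridges * servers_per_cartridge      # 180 desktops per chassis
    print(desktops * apu_watts)                        # 2700 W floor just for the APUs
    print(psus * psu_steady_state_watts)               # 2612 W combined steady-state PSU output

That the APU floor alone slightly exceeds the combined steady-state PSU figure is exactly why the published numbers are confusing.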

It wouldn’t surprise me if the power usage was higher than a typical VDI system, but given the target workload(and the capabilities offered) it still sounds like a very compelling solution.

Obviously the question is: might AMD one-up HP at their own game, given that AMD owns both these APUs and SeaMicro, and if so might that upset HP?
