On Tuesday I got back from the longest road trip I've ever driven to date.
Pictures, in case you're interested (I managed to cut them down to roughly 600):
- Yosemite (mainly Glacier Point)
- Las Vegas (just a few pics)
- Lake Mead
- Hoover Dam
- HP Discover (just a few pics)
- Red Rock Canyon (near Vegas)
- Grand Canyon (north end)
- Sedona, AZ
- Misc AZ
I decided to take the scenic route and went through Yosemite on the way over, specifically to see Glacier Point, a place I wasn't aware of and didn't visit on my trip through Yosemite last year. I ended up leaving too late, so while I managed to get to Glacier Point and take some good pictures, by the time I was back on the normal route it was pretty much too dark to photograph anything else in Yosemite. I sped over towards Tonopah, NV for my first night's stay before heading to Vegas the next day. That was a fun route; once I got near the Nevada border at that time of night I didn't see anyone on the road for a good 30-40 miles or more (I had to slow down on some sections of the road, I was getting too much air! Literally!). I did encounter some wildlife playing in the road, but fortunately managed to avoid casualties.
Las Vegas area
I took a ferry tour on Lake Mead, which was pretty neat (I was going to say cool, but damn was it hot as hell there; my phone claimed 120 degrees from its ambient temperature sensor, the car said it was 100). That ferry is the largest boat on the lake by far, and there weren't many people on that particular tour that day, maybe 40 or so out of the 250-300 it can probably hold. I was surprised, given the gajillions of gallons of water right there, that the surrounding area was still so dry and barren, so the pictures I took weren't as good as I thought they might have been otherwise.
I went to the Hoover Dam for a few minutes at least. I couldn't go inside because I had my laptop bag with me (I wasn't checked into the hotel yet) and they wouldn't let me in with the bag, and I wasn't going to leave it in my car!
(You can see all of my Discover-related posts here.)
A decent chunk of the trip was spent in Las Vegas at HP Discover, where I'm grateful to the wonderful folks there who really made the time quite pleasant.
I probably wouldn't attend an event like Discover, even though I know people at HP, if it weren't for the more personalized experience that we got. I don't like to wander around show floors and sit in fixed sessions; I have never gotten anything out of that sort of thing.
Being able to talk in a somewhat more private setting, in a room on the show floor, with various groups was helpful. I didn't learn many new things, but I was able to confirm several ideas I already had in my head.
I did meet David Scott, head of HP Storage, for the first time, and ran into him again at the big HP party, where he came over and chatted with Calvin Zito and myself for a good 30 minutes. He's quite a guy; I was very impressed. I thought it was funny how he poked fun at the storage startups during the 3PAR announcements, and it was very interesting to hear his thoughts on some topics. Apparently he reads most/all of my blog posts, and my comments on The Register too.
We also went up on the High Roller at night, which was nice, though I couldn't take any good pictures; it was too dark and most shots just ended up blurry.
All in all it was a good time, met some cool people, had some good discussions.
I was in the neighborhood, so I decided to check out Arizona again, maybe for the last time. I was there a couple of times in the past to visit a friend who lived in the Tucson area, but he moved away early this year. I did plan to visit Sedona the last time I was in AZ, but decided to skip it in favor of the NFL playoffs. So I went to AZ again in part to visit Sedona, which I had heard was pretty.
Part of the expected route to Sedona was closed off due to the recent fire(s), so I had to take a longer way around.
I also decided to visit the Grand Canyon (north end), and was expecting to visit the south end the same day, but the food poisoning hit me pretty hard right about the time I got to the north end, so I was only there about 45 minutes before I had to go straight back to the hotel (~200 miles away). I still managed to get some good pictures though. There is a little trail that goes out to the edge there, for the most part without handrails; it was pretty scary, to me anyway, being so close to a very big drop-off.
The food poisoning settled down by Monday morning and I was able to get out and about, after my company asked me to extend my stay to support a big launch (which fortunately turned out to be nothing), and visit more places before I headed back early Tuesday morning. I went through Vegas again and made a couple of pit stops before making the long trek home.
It was a pretty productive trip; I got quite a bit accomplished, I suppose. One thing I wanted to do was get a picture of my car next to a "Welcome to Sedona" sign to send to one of my former bosses. There was a "secret" project at that company to move out of a public cloud, and it was so controversial that my boss gave it the code name Sedona so we wouldn't upset people in the earlier days of the project. So I sent him that pic, and he liked it.
One concern I had on my trip is that my car has a ticking time bomb waiting for the engine to fail. I've been planning to get that fixed the next time I'm in Seattle; I think I'm still safe for the time being given the mileage. The dealership closest to me is really bad (I complained loudly about them to Nissan), so I will not go there, and the next closest is pretty far away; the repair is a 4-5 hour operation and I don't want to stick around. Besides, I really love the service department at the dealership where I bought my car, and I'll be back in that area soon enough anyway (for a visit).
(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc...)
I should be out sightseeing, but I have been stuck in my hotel room here in Sedona, AZ with the worst food poisoning I've ever had, from food I ate on Friday night.
X As a service
The trend towards "as a service" (which seems to be more of an accounting exercise than anything else, shifting dollars to another column in the books) continues with HP's Facility as a Service.
HP will go so far as to buy you a data center (the actual building), fill it with equipment, and rent it back to you for a set fee, with entry-level systems starting at 150kW (which could be as few as, say, 15 high-density racks). They can even manage it end to end if you want them to. I didn't realize myself the extent that their services go to. It requires a 5- or 10-year commitment, however (again, I believe, for accounting reasons). HP says they are getting a lot of positive feedback on this new service.
This is really targeted at those that must operate on premises due to regulations and cannot rely on a 3rd-party data center provider (colo).
FaaS doesn't cover the actual computing equipment though; that is just the building, power, cooling, etc. The equipment can either come from you, or you can get it from HP using their Flexible Capacity program. This program also extends to the HP public cloud, as a resource pool for systems.
Entry level for Flexible Capacity, we were told, is roughly a $500k contract ($100k/year).
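A trivial sanity check on those numbers (my own arithmetic, assuming the entry-level contract runs over a five-year term like the minimum commitments mentioned above):

```python
# My arithmetic, not HP's: the quoted $100k/year over a presumed
# 5-year minimum term works out to the quoted $500k entry contract.
annual = 100_000
years = 5
print(annual * years)  # 500000
```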
I thought this was a good quote:
"We have designed more than 65 million square feet of data center space. We are responsible for more than two-thirds of all LEED Gold and Platinum certified data centers, and we’ve put our years of practical experience to work helping many enterprises successfully implement their data center programs. Now we can do the same for you."
Myself I had no idea that was the case, not even close.
(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc...)
I have tried to be a vocal critic of the whole software defined movement, in that much of it is hype today, has been for a while, and will likely continue to be for a while yet. My gripe is not so much about the approach; the world of "software defined" sounds pretty neat. My gripe is about the marketing behind it that tries to claim we're already there, and we are not, not even close.
I was able to vent a bit with the HP team(s) on the topic and they acknowledged that we are not there yet either. There is a vision, and there is a technology. But there aren't a lot of products yet, at least not a lot of promising products.
Software defined networking is perhaps one of the more (if not the most) mature platforms to look at. Last year I ripped pretty hard into the whole idea, with what I thought were good points: basically that the technology solves a problem I do not have and have never had. I believe most organizations do not have a need for it either (outside of very large enterprises and service providers). See the link for a very in-depth 4,000+ word argument on SDN.
More recently HP tried to hop on the bandwagon of software defined storage, which in their view is basically the StoreVirtual VSA. To me that product doesn't fit the scope of software defined; it is just a brand propped up onto a product that was already pretty old and already running in a VM.
Speaking of which, HP considers this VSA, along with their ConvergedSystem 300, to be "hyper converged", and at least the people we spoke to do not see a reason to acquire the likes of SimpliVity or Nutanix (why are those names so hard to remember the spelling of?). HP says most of the deals Nutanix wins are small VDI installations and aren't seen as a threat; HP would rather go after the VCEs of the world. I believe SimpliVity is significantly smaller.
I've never been a big fan of StoreVirtual myself, it seems like a decent product, but not something I get too excited about. The solutions that these new hyper converged startups offer sound compelling on paper at least for lower end of the market.
The future is software defined
The future is not here yet.
It's going to be another 3-5 years (perhaps more). In the mean time customers will get drip fed the technology in products from various vendors that can do software defined in a fairly limited way (relative to the grand vision anyway).
When hiring a network engineer, many customers would rather opt for someone with a few years of Python experience over someone with more years of networking experience, because that is where they see the future in 3-5 years' time.
My pushback to HP on that particular quote (not quoted precisely) is that that level of sophistication is very hard (and expensive) to hire for. A good comparison is hiring for something like Hadoop: it is very difficult to compete with the compensation packages of the largest companies, which offer $30-50k+ more than smaller (even billion-dollar) companies.
So my point is the industry needs to move beyond the technology and into products. Having a requirement of knowing how to code is a sign of an immature product. Coding is great for extending functionality, but need not be a requirement for the basics.
HP seemed to agree with this, and believes we are on that track but it will take a few more years at least for the products to (fully) materialize.
(here is the quick video they showed at Discover)
I'll start off by saying I've never really seriously used any of HP's management platforms (or anyone else's, for that matter). All I know is that they (in general, not HP specifically) seem to continue to proliferate and fragment.
HP OneView 1.1 is a product that builds on this promise of software defined. In the past five years of HP pitching converged systems, seeing the demo for OneView was the first time I've ever shown even a little bit of interest in converged.
HP OneView was released last October, I believe, and HP claims something along the lines of 15,000 downloads or installations. Version 1.10 was announced at Discover and offers some new integration points, including:
- Automated storage provisioning and attachment to server profiles for 3PAR StoreServ Storage in traditional Fibre Channel SAN fabrics, and Direct Connect (FlatSAN) architectures.
- Automated carving of 3PAR StoreServ volumes and zoning the SAN fabric on the fly, and attaching of volumes to server profiles.
- Improved support for FlexFabric modules
- Hyper-V appliance support
- Integration with MS System Center
- Integration with VMware vCenter Ops manager
- Integration with Red Hat RHEV
- Similar APIs to HP CloudSystem
OneView is meant to be lightweight and act as a sort of proxy into other tools, such as Brocade's SAN manager in the case of Fibre Channel (myself, I prefer QLogic management, but I know QLogic is getting out of the switch business). For several HP products such as 3PAR and BladeSystem, however, OneView seems to talk to them directly.
OneView aims to provide a view that starts at the data center level and can drill all the way down to individual servers, chassis, and network ports.
However, the product is obviously still in its early stages. It currently only supports HP's Gen8 DL systems (and G7 and Gen8 BL); HP is thinking about adding support for older generations, but their tone made me think they will drag their feet long enough that it's no longer demanded by customers. The bulk of what I have in my environment today is G7; I only deployed a few Gen8 systems two months ago. Also, all of my SAN switches are QLogic (and I don't use HP networking), so OneView functionality would be severely crippled if I were to try to use it today.
The product on the surface does show a lot of promise though, there is a 3 minute video introduction here.
HP pointed out you would not manage your cloud from this, but rather the other way around: cloud management platforms would leverage OneView APIs to bring that functionality to the management platform higher up in the stack.
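As a sketch of what that could look like, here is a minimal Python example of building requests against OneView's REST API (the `/rest/login-sessions` and `/rest/server-hardware` endpoints follow OneView's documented REST conventions; the appliance address, credentials, and the `X-API-Version` value are placeholder assumptions, so check them against your appliance before use):

```python
# Hedged sketch: how a higher-level tool could consume OneView's REST API.
# Placeholder appliance/credentials; the API version value is an assumption.
import json
import urllib.request

API_VERSION = "120"  # assumption: match this to your appliance's API version

def login_request(appliance, username, password):
    """Build the POST that exchanges credentials for a session token."""
    body = json.dumps({"userName": username, "password": password}).encode()
    req = urllib.request.Request(
        "https://%s/rest/login-sessions" % appliance, data=body)
    req.add_header("Content-Type", "application/json")
    req.add_header("X-API-Version", API_VERSION)
    return req

def server_hardware_request(appliance, session_token):
    """Build the GET that lists the server hardware OneView manages."""
    req = urllib.request.Request("https://%s/rest/server-hardware" % appliance)
    req.add_header("Auth", session_token)  # session ID from the login call
    req.add_header("X-API-Version", API_VERSION)
    return req

# Building (not sending) a request against a placeholder appliance:
req = login_request("oneview.example.com", "administrator", "secret")
print(req.get_method(), req.full_url)
# POST https://oneview.example.com/rest/login-sessions
```

A real integration would send these over HTTPS, keep the session token from the login response, and layer its own abstractions on top, which is exactly the role HP described for cloud management platforms.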
HP has renamed their Insight Control systems for vCenter and MS System Center to OneView.
The goal of OneView is automation that is reliable and repeatable. As with any such tool, it seems like you'll have to work within its constraints and go around it when it doesn't do the job.
"If you fancy being able to deploy an ESX cluster in 30 minutes or less on HP ProLiant Gen8 systems, HP networking, and 3PAR storage, then this may be the tool for you." - me
The user interface seems quite modern and slick.
They expose a lot of functionality in an easy-to-use way, but one thing that struck me watching a couple of their videos is that it could still be made a lot simpler; there is a lot of jumping around to do different tasks. One way to address this might be broader wizards that cover multiple tasks in the order they should be done.
(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc...)
This is a new brand for HP's cloud platform based on OpenStack. There is a commercial version and a community edition. The community edition is pure OpenStack without some of the fancier HP management interfaces on top of it.
"The easiest thing about OpenStack is setting it up - organizations spend the majority of the time simply keeping it running after it is set up."
HP admits that OpenStack has a long way to go before it can be considered a mature enterprise application stack. But they do have experience running a large OpenStack public cloud and have hundreds of developers working on the product. In fact, HP says that most OpenStack community projects these days are basically run by HP; while other large contributors (even Rackspace) have pulled back on resource allocation to the project, HP has gone in full steam ahead.
HP has many large customers who specifically asked HP to get involved in the project and to provide a solution for them that can be supported end to end. I must admit the prospect does sound attractive, in that you can get HP storage, servers, and networking, all battle tested and ready to run this new cloud platform; the OpenStack platform itself is by far the biggest weak point today.
It is not there yet though; HP does offer professional services covering the customer's entire OpenStack deployment life cycle.
One key area that has been weak in OpenStack which recently made the news, is the networking component Neutron.
"[..] once you get beyond about 50 nodes, Neutron falls apart"
So to stabilize this component, HP integrated support for their SDN controller into the lower levels of Neutron. This allowed it to scale much better while maintaining complete compatibility with existing APIs.
That is something HP is doing in several cases. They emphasize very strongly that they are NOT building a proprietary solution and are NOT changing any of the APIs (they are helping change them upstream) in ways that would break compatibility. They are, however, adding/moving some things around beneath the API level to improve stability.
The initial cost for the commercial version is $1,400/server/year, which is quite reasonable; I assume that includes basic support. The commercial version is expected to become generally available in the second half of 2014.
Major updates will be released every six months, and minor updates every three months.
Very limited support cycle
One thing that took almost everyone in the room by surprise is the support cycle for this product. Normally enterprise products have support for 3-5 years; Helion has support for a maximum of 18 months. HP says 12 of those months are general support, and the last six are specifically geared towards migration to the next version, which they say is not a trivial task.
I checked Red Hat's policy, as they are another big distributor of OpenStack, and their policy is similar: they had one year of support on version three of their product and have one and a half years on version four (the current version). Despite the version numbers, version three was apparently the first public release.
Given that, it should just reinforce the fact that OpenStack is not a mature platform at this point, and it will take some time before it is, probably another 2-3 years at least. They only recently got the feature that allows for upgrading the system.
HP does offer a fully integrated ConvergedSystem with Helion, though despite my best efforts I am unable to find a link that specifically mentions Helion or OpenStack.
HP is supporting ESXi and KVM as the initial hypervisors in Helion. OpenStack itself supports a much wider variety, but HP is electing to start with those two. Support for Hyper-V will follow shortly.
HP also offers full indemnification from legal issues.
This site has a nice diagram of what HP is offering, not sure if it is an HP image or not so sending you there to see it.
My own suggestion is to steer clear of OpenStack for a while yet; give it time to stabilize, and don't deploy it just because you can. Don't deploy it because it's today's hype.
If you really, truly need this functionality internally, then it seems like HP has by far the strongest offering from a product and support standpoint (they are willing and able to do everything from design to deployment to operationally running it). Keep in mind that depending on the scale of deployment, you may be constantly planning for the next upgrade (or having HP plan it for you).
I would argue that the vast majority of organizations do not need OpenStack (in its current state) and would do themselves a favor by sticking to whatever they are already using until it's more stable. Your organization may have pains running what you run now, but you're likely to just trade those pains for other pains going the OpenStack route right now.
When will it be stable? I would say a good indicator will be the support cycle: when HP (or Red Hat) starts offering a full 3-year support cycle on the platform (with back-ported fixes, etc.), it has probably hit a good milestone.
I believe OpenStack will do well in the future, it's just not there yet today.
(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc...)
I witnessed what I'd call one of the most insane unveilings of a new server in history. It sort of mimicked the launch of an Apollo spacecraft: lots of audio and video from NASA, and then, when the system appeared, lots of compressed air/smoke (very, very loud) and dramatic music.
Here's the full video in 1080p; beware, it is 230MB. I have 100Mbit of unlimited bandwidth connected to a fast backbone, so we'll see how it goes.
Apollo is geared squarely at compute bound HPC, and is the result of a close partnership between HP, Intel and the National Renewable Energy Laboratory (NREL).
The challenge HP presented itself with is what it would take to drive a million teraflops of compute. They said with today's technology it would require one gigawatt of power and 30 football fields of space.
Apollo is supposed to help fix that, though the real savings are still limited by today's technology; it's not as if they were able to squeeze a 10-fold improvement in performance out of the same power footprint. Intel said they were able to get, I want to say, 35% more performance out of the same power footprint using Apollo vs (I believe) the blades they were using before. I am assuming in both cases the CPUs were about the same and the savings came mainly from the more efficient power/cooling design.
They built the system as a rack-level design. You probably haven't been reading this blog very long, but four years ago I lightly criticized HP's SL series as not being visionary enough, in that it was not built for the rack. The comparison I gave was with another solution I was toying with at the time for a project, SGI's CloudRack.
Fast forward to today and HP has answered that, and then some, with a very sophisticated design that is integrated as an entire rack in the water-cooled Apollo 8000. They have a mini version called the Apollo 6000 (if this product was available before today, I had never heard of it, though I don't follow HPC closely), on which Intel apparently already has something like 20,000 servers deployed.
Anyway, one of the keys to this design is the water cooling. It's not just any water cooling though: the water never gets into the server. They use heat pipes on the CPUs and GPUs and transfer the heat to what appears to be a heat sink of some kind on the outside of the chassis, which then "melds" with the rack's water cooling system to carry the heat away from the servers. Power is also managed at the rack level. Don't get me wrong, this is far more advanced than the SGI system of four years ago. HP is apparently not giving this platform a very premium price either.
Their claims include:
- 4 X the teraflops per square foot (vs air cooled servers)
- 4 X density per rack per dollar (not sure what the per dollar means but..)
- Deployment possible within days (instead of "years")
- More than 250 Teraflops per rack (the NREL Apollo 8000 system is 1.2 Petaflops in 18 racks ...)
- Apollo 6000 offering 160 servers per rack, and Apollo 8000 having 144
- Some fancy new integrated management platform
- Up to 80kW powering the rack (less if you want redundancy; 10kW per module)
- One cable for ILO management of all systems in the rack
- Can run on water temps as high as 86 degrees F (30 C)
The cooling system for the 8000 goes in another rack, consuming half a rack for a maximum of four server racks. If you need redundancy then I believe a second cooling unit is required, so two racks. The cooling system weighs over 2,000 pounds by itself, so it appears unlikely that you'd be able to put two of them in a single cabinet.
The system takes in 480V AC and converts it into DC in up to 8x10kW redundant rectifiers in the middle of the rack.
NREL integrated the Apollo 8000 system with their building's heating system, so that the Apollo cluster contributes to heating their entire building so that heat is not wasted.
It looks as if SGI discontinued the product I was looking at four years ago (at the time, for a Hadoop cluster). It supported up to 76 dual-socket servers in a rack, with a maximum of something like 30kW (I don't believe any configuration at the time could draw more than 20kW), and had rack-based power distribution as well as rack-based (air) cooling. It seems to have been replaced with a newer product called Infinite Data Cluster, which can go up to 80 dual-socket servers in an air-cooled rack (though, unlike Apollo, GPUs are not supported).
This new system doesn't mean a whole lot to me; I don't deal with HPC, so I may never use it, but the tech behind it seemed pretty neat, and obviously I was interested to see HP finally answer my challenge to deploy a system built around an entire rack with rack-based power and cooling.
The other thing that stuck out to me was the new management platform. HP said it was another unique management platform specific to Apollo, which was sort of confusing given that I had sat through what I thought was a very compelling preview of HP OneView (the latest version announced today), HP's new converged management interface. It seems strange to me that they would not integrate Apollo into that from the start, but I guess that's what you get from a big company with teams working in parallel.
HP tries to justify this approach by saying there are several unique things in the Apollo components, so they needed a custom management package. I think they just didn't have time to integrate with OneView, since I can think of no justification for not exposing those management points via APIs that OneView could call/manage/abstract on behalf of the customer.
It sure looks pretty though (more so in person; I'm sure I'll get a better pic of it on the show floor in normal lighting, along with pics of the cooling system).
UPDATE - some pics of the compute stuff
(I don't know if I need one of those disclaimers at the top here saying HP paid for my hotel and stuff in Vegas for Discover, because I learned about this before I got here and was going to write about it anyway, but in any case, now you know.)
All about Flash
The 3PAR announcements at HP Discover this week are all about HP 3PAR's all-flash array, the 7450, which was announced at last year's Discover event in Las Vegas. HP has tried hard to convince the world that the 3PAR architecture is competitive even in the new world of all flash. Several of the other big players in storage - EMC, NetApp, and IBM - have all either acquired companies specialized in all flash, or in the case of NetApp, acquired one and simultaneously been building a new system (apparently called FlashRay, which folks think will be released later in the year).
Dell and HDS, like HP, have decided not to do that, instead relying on in-house technology for all-flash use cases. Of course there have also been a ton of all-flash startups, all trying to be market disruptors.
So first a brief recap of what HP has done with 3PAR to-date to optimize for all flash workloads:
- Faster CPUs, doubling of the data cache (7400 vs 7450)
- Sophisticated monitoring and alerting with SSD wear leveling (alert at 90% of max endurance, force fail the SSD at 95% max endurance)
- Adaptive Read cache - only read what you need, does not attempt to read ahead because the penalty for going back to the SSD is so small, and this optimizes bandwidth utilization
- Adaptive write cache - only write what you require; if 4kB of a 16kB page is written, then the array only writes 4kB, which reduces wear on the SSD, which typically has a shorter life span than spinning rust.
- Autonomic cache offload - more sophisticated cache flushing algorithms (this particular one has benefits for disk-based 3PARs as well)
- Multi-tenant I/O processing - multi-threaded cache flushing that supports both large (typically sequential) and small (typically random) I/O sizes simultaneously in an efficient manner; it separates the large I/Os into more efficient small I/Os for the SSDs to handle.
- Adaptive sparing - basically allows them to unlock hidden storage capacity (upwards of 20%) on each SSD to use for data storage without compromising anything.
- Optimized the 7xxx platform by leveraging PCI Express Message Signaled Interrupts, which allowed the system to reach a staggering 900,000 IOPS at 0.7-millisecond response times (caveat: that is a 100% read workload)
I learned at HP Storage Tech Day last year that among the features 3PAR was working on were:
- In line deduplication for file and block
- File+Object services running directly on 3PAR controllers
There were no specifics given at the time.
Well part of that wait is over.
In what I believe is an industry exclusive, 3PAR has somehow managed to find some spare silicon in their now 3-year-old Gen4 ASIC to provide completely CPU-offloaded inline deduplication for transactional workloads on their 7450 all-flash array.
They say the software will typically return 4:1 to 10:1 data reduction levels. This is not meant to compete against HP StoreOnce, which offers much higher levels of data reduction; this is for transaction processing (which StoreOnce cannot do), primarily to reduce the cost of operating an all-flash system.
It has been interesting to see 3PAR evolve, as a customer of theirs for almost eight years now. I remember when NetApp came out and touted deduplication for transactional workloads, and 3PAR didn't believe in the concept due to the performance hit you would (and they did) take.
Now they have line-rate (I believe) hardware deduplication, so that argument no longer applies. The caveat, at least for this announcement, is that the feature is limited to the 7450. There is nothing technical that prevents it from getting to their other Gen4 systems, whether the 7200, 7400, 10400, or 10800, but support for those is not mentioned yet. I imagine 3PAR is beating their drum to the drones out there who might be discounting 3PAR still because they have a unified architecture across AFA, hybrid flash/disk, and disk-only systems (like mine).
One of 3PAR's main claims to fame is that you can crank up a lot of their features without impacting system performance, because most of the work is performed by the ASIC. It is nice to see that they have been able to continue this trend; while deduplication obviously wasn't introduced on day one with the Gen4 ASIC, it does not require customers to wait for, or upgrade their existing systems to, the next-generation ASIC (whenever Gen5 comes out; I'd wager December 2015) to get this functionality.
The deduplication operates on fixed 16kB pages, a standard 3PAR page size for many operations like provisioning.
For 3PAR customers, note that this technology is scoped to Common Provisioning Groups (CPGs), so data within a CPG can be deduplicated. If you opt for a single CPG on your system and put all of your volumes in it, that effectively makes the deduplication global.
This is a patented approach which allows 3PAR to use significantly less memory than would otherwise be required to store the lookup tables.
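To illustrate the general mechanics, here is a generic, illustrative sketch of fixed-block deduplication at the 16kB page size mentioned above. This is emphatically not 3PAR's patented, ASIC-accelerated implementation; a real array would also verify candidate pages byte for byte rather than trusting the hash alone.

```python
# Illustrative fixed-block dedupe at a 16kB page size (generic sketch,
# not 3PAR's implementation).
import hashlib

PAGE_SIZE = 16 * 1024  # the fixed 16kB page size noted above

def dedupe(data):
    """Split data into fixed pages; store each unique page only once."""
    store = {}  # hash -> page payload (the single-instance store)
    refs = []   # ordered hashes that reconstruct the original data
    for off in range(0, len(data), PAGE_SIZE):
        page = data[off:off + PAGE_SIZE]
        digest = hashlib.sha256(page).hexdigest()
        store.setdefault(digest, page)  # dedupe hit if already present
        refs.append(digest)
    return store, refs

# Ten identical pages reduce to one stored page: 10:1 on this (ideal) data.
store, refs = dedupe(bytes(PAGE_SIZE) * 10)
print(len(store), len(refs))  # 1 10
```

The CPG scoping described above maps onto this sketch naturally: each CPG would get its own `store`, so only volumes sharing a CPG can deduplicate against each other.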
Thin clones are basically the ability to deduplicate VM clones (I imagine this would need hypervisor integration like VAAI) for faster deployment. So you could probably deploy clones at 5-20x the speed you could before.
NetApp has been touting a similar approach here for a few years, on their NFS platform anyway.
Two Terabyte SSDs
Well, almost 2TB: coming in at 1.9TB, these are actually 1.6TB cMLC SSDs, but the aforementioned adaptive sparing allows 3PAR to bump the usable capacity of the device way up without compromising any aspect of data protection or availability.
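The arithmetic there is straightforward (my own back-of-the-envelope check, using the ~20% figure quoted for adaptive sparing earlier):

```python
# Reclaiming roughly 20% of hidden spare capacity turns a 1.6TB (1600GB)
# cMLC SSD into the 1.9TB (1920GB) device 3PAR ships.
raw_gb = 1600
unlocked_gb = round(raw_gb * 1.20)  # ~20% of reserved capacity unlocked
print(unlocked_gb)  # 1920
```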
I'll also quote the aforementioned PDF:
The 1920GB is available only in the StoreServ 7450 until the end of September 2014.
It will then be available in other models as well October 2014.
These SSDs come with a five year unconditional warranty, which is better than the included warranty on disks on 3PAR(three year). This 5-year warranty is extended to the 480GB and 920GB MLC SSDs as well. Assuming Sandisk is indeed the supplier as they claim the 5-year warranty exceeds the manufacturer's own 3-year warranty.
These are technically consumer-grade drives; however, HP touts sophisticated flash-management features that make the media effectively more reliable than it might otherwise be in another architecture, and that claim is backed by the new unconditional warranty.
These are much more geared towards reads than writes, and are significantly lower in cost on a per-GB basis than all previous SSD offerings from HP 3PAR.
The cost impact of these new SSDs is pretty dramatic, with the per GB list cost dropping from about $26.50 this time last year to about $7.50 this year.
These new SSDs allow for up to 460TB of raw flash on the 7450, which HP claims is seven times more than Pure Storage (a massively funded AFA startup), and 12 times more than a four-brick EMC XtremIO system.
With deduplication the 7450 can reach upwards of 1.3PB of usable flash capacity in a single system, along with 900,000 read IOPS at sub-millisecond response times.
Dell Compellent, about a year or so ago, updated their architecture to leverage what they called read-optimized low-cost SSDs, and updated their auto-tiering software to be aware of the different classes of SSDs. There are no tiering enhancements announced today; in fact, I suspect you can't even license the tiering software on a 3PAR 7450 since there is only one "tier" there.
So what do you get when you combine this hardware accelerated deduplication and high capacity low cost solid state media?
Solid state at less than $2/GB usable
HP says this puts solid state costs roughly in line with those of 15k RPM spinning disk. This is a pretty impressive feat. Not a unique one, as there are other players out there that have reached the same milestone, but that is obviously not the only arrow 3PAR has in their arsenal.
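The arithmetic here is easy to check. A quick back-of-the-envelope using the list prices above; note the 4:1 data reduction ratio is my own assumption for illustration, not an HP figure:

```python
# list price per raw GB, per the figures above
last_year, this_year = 26.50, 7.50
print(f"YoY drop: {(1 - this_year / last_year) * 100:.0f}%")  # about a 72% drop

# hypothetical data reduction ratio (dedup plus thin technologies);
# anything above ~3.75:1 brings $7.50/GB raw under $2/GB usable
assumed_reduction = 4.0
print(f"${this_year / assumed_reduction:.2f}/GB usable")  # $1.88/GB usable
```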
That arsenal is what HP believes is the reason you should go 3PAR for your all-flash workloads. Forget about the startups, forget about EMC's XtremIO, forget about NetApp FlashRay, forget about IBM's TMS-based flash systems, etc.
Six nines of availability, guaranteed
HP is now willing to put their money where their mouth is and sign a contract that guarantees six nines of availability on any 4-node 3PAR system (I originally thought it was 7450-specific; it is not). That is a very bold statement to make, in my opinion. It obviously comes as the result of an architecture that has been refined over roughly fifteen years and has some really sophisticated availability features, including:
- Persistent ports - very rapid failover of host connectivity for all protocols in the event of planned or unplanned controller disruption. They have laser-loss detection for Fibre Channel as well, which will fail the port over if the cable is unplugged. This means that hosts do not require MPIO software to deal with storage controller disruptions.
- Persistent cache - rapid re-mirroring of cache data to another node in the event of planned or unplanned controller disruption. This prevents the system from going into "write through" mode, which can otherwise degrade performance dramatically. The bulk of the 128GB of cache (4-node) on a 7450 is dedicated to writes (specifically, optimizing I/O to the back end for the most efficient use of system resources).
- The aforementioned media wear monitoring and proactive alerting (for flash anyway)
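To put six nines in perspective, the allowed downtime is tiny. This is just standard availability arithmetic, nothing HP-specific:

```python
availability = 0.999999                  # "six nines"
seconds_per_year = 365.25 * 24 * 3600    # 31,557,600 seconds
downtime = seconds_per_year * (1 - availability)
print(f"{downtime:.1f} seconds of downtime per year")  # about 31.6 seconds
```

That is roughly half a minute of outage per year, total, which is why contractually guaranteeing it is such a bold move.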
They have other availability features that span systems(the guarantee does not require any of these):
- Synchronous short range, and long range(3 site) replication
- Peer Persistence - a pair of 3PAR arrays acting as active-active for a VMware cluster, with zero downtime in the event of a failure.
I would bet that you'll have to follow very strict guidelines to get HP to sign on the dotted line: no deviation from supported configurations. 3PAR has always been a stickler for what they are willing to support, for good reason.
HP won't sign on the dotted line for this one, but with the previously released Priority Optimization, customers can guarantee their applications:
- Performance minimum threshold
- Performance maximum threshold (rate limiting)
- Latency target
All in a very flexible manner. These capabilities, combined, are I believe still unique in the industry (some folks can do rate limiting alone).
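To illustrate what rate limiting alone involves (the part "some folks" can do), here is a hypothetical token-bucket limiter in Python. The names and structure are my own, not 3PAR's; Priority Optimization's minimum-threshold and latency-target logic is considerably more involved than a cap:

```python
import time

class TokenBucket:
    """Simple IOPS cap: allow at most `rate` I/Os per second, with small bursts."""
    def __init__(self, rate, burst):
        self.rate = float(rate)       # tokens added per second
        self.capacity = float(burst)  # maximum bucket size (burst allowance)
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # refill based on elapsed time, capped at the bucket capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # admit this I/O
        return False      # over the cap: caller should queue or delay the I/O

# cap a hypothetical volume at ~1000 IOPS with a burst of 50
bucket = TokenBucket(rate=1000, burst=50)
```

A latency target works in the opposite direction: rather than the array refusing I/Os for one tenant, it throttles lower-priority tenants until the protected application's response time comes back under its target.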
Online import from EMC VNX
This was announced about a month or so ago, but basically HP makes it easy to import data from an EMC VNX without any external appliances, professional services, or performance impact.
This product (basically a plugin written to interface with the VNX/CX's SMI-S management interface) went generally available, I believe, this past Friday. I watched a full demo of it at Discover and it was pretty neat. It does require direct Fibre Channel connections between the EMC and 3PAR systems, and it does require (in the case of Windows, anyway, which was the demo) two outages on the server side:
- Outage 1: remove EMC PowerPath. Due to some damage PowerPath leaves behind, you must also uninstall and re-install the Microsoft MPIO software using the standard Control Panel method. These steps require a restart.
- Outage 2: Configure Microsoft MPIO to recognize 3PAR (requires restart)
Once the 2nd restart is complete the client system can start using the 3PAR volumes as the data migrates in the background.
So online import may not be the right term for it, since the host does have to go down, at least in the case of Windows.
The import process currently supports Windows and Linux. The product came about, I believe, as a result of the end-of-life status of the MPX 2000 appliance HP had been using to migrate data. They needed something to replace that functionality, so they leveraged the Peer Motion technology already on 3PAR (already used for importing data from HP EVA storage) and extended it to EMC. They are evaluating the possibility of extending this to more platforms. I guess VNX/CX was an easy first target, given there are a lot of old ones out there and there isn't an easier migration path than the EMC-to-3PAR import tool (which is apparently significantly easier and less complex than EMC's own options). One of the benefits HP touts of their approach is that it has no impact on host performance, as the data goes directly between the arrays.
The downside to this direct approach is that the 7000-series 3PAR arrays are very limited in port counts, especially if you happen to have iSCSI HBAs in them (as were the F- and E-Class systems before them). The 10000-series has a ton of ports, though. One thing I learned last year at HP Storage Tech Day was that HP was looking at possibly shrinking the bigger 10000-series controllers for the next round of midrange systems (rather than making the 7000-series controllers bigger) in an effort to boost expansion capacity. I'd like to see at least 8 FC host ports and 2 iSCSI host ports per controller on the midrange. Currently you get only 2 FC host ports if you have an iSCSI HBA installed in a 7000-series controller.
The tool is command-line based; it has a few basic commands and interfaces directly with the EMC and 3PAR systems.
The tool is free of charge as far as I know, and while 3PAR likes to tout that no professional services are required, HP says some customers may need assistance planning migrations (especially at larger scales). If that is the case, HP has services ready to hold your hand through every step of the way.
What I'd like to see from 3PAR still
Read and write SSD caches
I've talked about it for years now, but I still want to see a read (and write - my workloads are 90%+ write) caching system that leverages high-endurance SSDs on 3PAR arrays. HP announced SmartCache for ProLiant Gen8 systems, I believe, about 18 months ago, with plans to extend support to 3PAR, but that has not yet happened. 3PAR is well aware of my request, so nothing new here. David did mention that they still want to do this; no official timelines yet. It also sounded like they will not go forward with the server-side SmartCache integration with 3PAR (I'd rather have the cache in the array anyway, and they seem to agree).
3PAR 7450 SPC-1
I'd like to see SPC-1 numbers for the 7450, especially with this new flash media; it ought to provide some pretty compelling cost and performance numbers. You can see some recent performance testing (that wasn't 100% read) done on a four-node 7450 on behalf of HP.
Demartek also found that the StoreServ 7450 was not very heavily taxed with a single OLTP database accessing the array. As a result, we proceeded to run a combination of database workloads including two online transaction processing (OLTP) workloads and a data warehouse workload to see how well this storage system would handle a fairly heavy, mixed workload.
HP says the lack of SPC-1 results comes down to priorities; it's a decent amount of work to run the tests and their people have been working on other things. They still intend to do them, but aren't sure when it will happen.
I would also like to see compression support; I'm not sure whether that will have to wait for the Gen5 ASIC or if there are more rabbits hiding in the Gen4 hat.
I certainly want to see deduplication come to the other Gen4 platforms. HP touts a lot about the competition's flash systems being silos. I'll let you in on a little secret: the 3PAR 7450 is a flash silo as well. Not a technical silo, but one imposed on the product by marketing. While the reasons behind it are understandable, it is unfortunate that HP feels compelled to limit the product to appease certain market observers.
I was expecting a CPU refresh on the 7450, which was launched with an older generation of processor because HP didn't want to wait for Intel's newest chip before launching their new storage platform. I was told last year that the 7450 is capable of operating with the newer chip, so it should just be a matter of plugging it in and doing some testing. That is supposed to be one of the benefits of using x86 processors: you don't need to wait years to upgrade. HP says the Gen4 ASIC is not out of gas; the performance numbers to date are limited by the CPU cores in the system, so faster CPUs would certainly benefit the system further without much cost.
At the end of the day, the 3PAR flash story has evolved into one of no compromises. You get the low-cost flash, you get the inline hardware-accelerated deduplication, you get the high performance with multi-tenancy and low latency, and you get all of that without compromising on any other tier 1 capabilities (too many to go into here; see past posts for more info). You're getting a proven architecture that has matured over the past decade, a common operating system, and the only storage platform that leverages custom ASICs to give uncompromising performance even with the bells & whistles turned on.
The only compromise here is you had to read all of this and I didn't give you many pretty pictures to look at.
I'm going to be attending my first HP Discover in two weeks in Las Vegas. HP has asked me for a while to go, but I do not like big trade shows (or anywhere with large crowds of people), so until now I have shied away.
I had a really good time at the HP Storage Tech Day and Nth Symposium last year, so I decided I wanted to try out Discover this year, given that I know at least some folks who will be there, and we'll be in a somewhat organized group of "bloggers" led by Calvin Zito, the HP Storage blogger.
I've never been to Las Vegas before, but I'll be there from June 8th, leaving on the 13th. After that I'm going to Arizona to check out the Grand Canyon and a few other places for a few days, and I'll return home sometime the following week.
Looking forward to meeting some folks there, should be pretty fun.
(if you prefer you can skip my review and jump straight to the pictures, usual disclaimers apply)
This isn't directly related to tech, but I wanted to write about it a bit, since it was quite a blast. I went by myself, though I made a few new friends.
I'll apologize now for any misspelled names; I didn't get anything in writing, so I just had to wing it.
I learned about it a couple of weeks ago, though this was I believe their 8th annual event. I purchased a VIP ticket ($100) which included close to stage seating as well as a back stage pass(which was outdoors in 95 degree heat!). The venue was at the Saddle Rack in Fremont, CA. The staff there were very friendly and quick to serve out drinks, of which I had many.
There were representatives from the four bay area Hooters locations, 31 Hooters girls in all; last year, I was told, there were quite a few more. There were a handful of judges; the only ones I remember were a couple of radio DJs from 107.7 The Bone.
I have never been to this kind of event before and I wasn't sure what to expect, but my expectations were exceeded. It ended around 9:30PM and was packed with entertainment.
One of the hosts was Amanda, I think (someone behind me kept yelling her name, anyway); she was quite good as well.
Madman's Lullaby, a local band (from Campbell, it seems), played for quite a while, and I was very impressed with their talent (I've never followed local bands before). They had a very polished performance; by far the best live performance I have seen or heard in a club setting (granted, I haven't seen many, as I usually avoid places with live music because it is often too loud, though it wasn't in this case). I purchased two of their CDs (professionally made, with case, shrink wrap, etc.; no CD-R stuff here). They recently got signed by a record label (Kivel Records). The album is called Unhinged.
I had my phone, and later went and got my real camera. The lighting in the place was good for watching in person but made it difficult to take pictures (without flash); most of them were washed out by the bright spotlight. Autofocus was also very slow due to the low surrounding light. Video recording was more successful, and I was able to take snapshots from the video frames.
I live and work in San Bruno, CA - and the Hooters here is roughly two blocks from my apartment which is convenient. So of course I wanted the San Bruno girls to win.
During intermission there was a Surrender the Booty contest, which was very entertaining, and fortunately a San Bruno Hooters girl won that contest, so congrats to Dominique.
Winners of Hooterpalooza 2014
From right to left:
- First place went to Lexi from Hooters of Dublin, CA (?? not sure, the voice was difficult to understand)
- Second place went to Ariana from Hooters of San Bruno, CA
- Third place went to Brittney from Hooters of Dublin, CA
(Fifth place went to San Bruno as well)
For sure the most fun I've had (in the bay area) since I moved here almost three years ago. I'm looking forward to next year's event!
A co-worker pointed this video out to me, from a person at Microsoft Research; he starts out by saying the views are his own and do not represent Microsoft.
Cloud comes up at 5:36 into the video. The whole video is good (30 minutes), everything is spot on, and he manages to do it in a very entertaining way.
He is very entertaining throughout and does a good job of explaining some of the horrors around cloud services. Myself, I still literally suffer from PTSD from the few years I spent with the Amazon cloud. That is not an exaggeration, not a joke; it is real.
Sorry for not having posted recently; I have seen nothing that has gotten me interested enough to write about anything. Tech has been pretty boring. I did attend HOOTERPALOOZA 2014 last night, though; that was by far the best time I've had in the bay area since I returned to California almost 3 years ago. I will be writing a review of that, along with pictures and video, soon; it will take a day or two to sort through everything.
The above video was so great though I wanted to post it here.
About a month ago I wrote about my experience in the first 30 days of switching from the WebOS ecosystem to the Android ecosystem, specifically from the never-officially-released HP Pre3 to a Samsung Galaxy Note 3.
There were a few outstanding issues at the time, and I just wanted to write/rant a little bit about one of them.
Inductive charging technology has been with the WebOS platform since day one, I believe (2009). I had become accustomed to using it, and any future phone really would need to have it for me to feel satisfied. Long ago it moved from the "nice to have" category to "cannot live without it without much pain". Fortunately some other folks have picked up on wireless charging over recent years, though sadly it's still far from universal.
One of the reasons I liked the Note 3 was that it was going to get (and did get) official wireless charging from Samsung. I suppose that is where my happiness came to an end.
I suppose it is semi-obvious that I wouldn't be writing about it if my experience had been flawless.
Samsung charging accessories
What seems like a month ago now, I went to my local Fry's and picked up the one wireless charging back cover I liked for the Note 3, along with a Samsung charging base station. I didn't want to risk generating an unstable magnetic field in my bedroom, and a rip in the space-time continuum, by buying a second- or third-rate wireless charger.
There are other back covers available, but the other ones I saw also included a wrap-around front cover, which I did not want. This cover looks identical to the stock cover (same color even, and it seems like the same size as well, though I could be wrong; my perception is far from precise).
The Note 3 is a big phone, and it is fairly heavy too (slightly heavier than the HP Pre3) in stock configuration. With the regular back cover it was fine; with the new back cover the word brick comes to mind. I mean, it is a stark difference; I would say at least 25% heavier than stock. There are no specs I can find online or on the packaging for the weight of the cover, but it's heavy. I have gotten used to the heft over the weeks, though. The HP Pre3 (and I believe some of the WebOS phones before it, with the specific exception of at least the original Pre, which I owned as well) came with the charging cover built in, so I never had a with/without comparison to make at the time.
Anyway, I'm past the heft of the new back cover (though a co-worker's HTC One with a fancier back cover is, I think, heavier than my phone even though it is smaller; he does have a big cover on it).
UPDATE 2014: after a month of frustration I finally figured out the solution to this problem. I had to remove the back cover, place it face down on a table, and press it flat before putting it back on the phone. The connection from the cover to the phone wasn't good enough. Since I started doing this whenever I remove the back cover (rarely), I haven't had any issues with the phone not charging.
The next problem came with charging on the pad: it was spotty. There is a green light on the pad that is supposed to tell you when the pad is mated with the phone and is charging. Don't believe it; it lies to me often. Most of the time the phone would charge fine, other times it would not. In my earlier days (before I learned that the green light lies), I tried just leaving the phone on the pad overnight with the green light on, and woke up the next day with the battery at 10%.
The phone does indicate when it is charging wirelessly. Many times (including right now, which prompted me to write this) the phone just refuses to sit still and charge wirelessly. It will go in and out of charge mode every few seconds, then eventually it seems to give up and does not charge at all, unless of course I hook it to a USB cable. I don't understand how it could give up like that; it doesn't make any sense to me unless there is a software component. But how could software refuse electricity? I don't know.
I have spent literally 10 minutes trying every possible position on the pad, only to have the phone refuse to charge. Then other times it works 100% of the time for a day or so.
So I thought, hey, maybe it's the crappy Samsung pad. I had read and heard some good things about the Tylt Vu; specifically, they claim a better charging area, meaning you can have the phone at pretty much any angle and it will charge. They list wide compatibility but did not specifically mention the Note 3 at the time (I assume because the charging covers for the Note 3 were still new).
So I ordered two Vus and tried my phone on the first one: it did not charge. I tried again for 5 minutes or so in every possible position, and it would not charge. I took it to the Samsung pad, and I believe it would not charge there either. I filed a support ticket with Tylt to see if they had any ideas; meanwhile, the Samsung pad started working with the phone again. It charged all night. I got up the next day with the battery full; I played some games for a few minutes and the battery drained to ~93%; I took the phone to the Vu and it would not charge. I took the phone back to the Samsung charger and it would not charge there either.
Rinse and repeat several times. Eventually I got both Vus to charge my phone, though it is still sporadic. Tylt was going to replace the Vu, but I don't think it's the Vu's fault. Samsung support wasn't very helpful. I suppose it could be the back cover, but how complex can that be? I suspect more of a design flaw, or perhaps a software problem preventing the charging from working. I don't know. All three chargers give semi-sporadic charging results, so I suppose I can rule the chargers out as the cause of the problem.
Android Daydream
One of the long-time cool features of WebOS devices is something called Exhibition mode. Basically, when the phone is charging it can launch a screen saver of sorts; the default is a clock, but it can do a photo slide show as well as some other apps. The HP TouchPad took this to the next level and used a form of NFC to uniquely identify charging stations, so the device could launch a different mode depending on which station it is charging on.
I use this a lot with my TouchPads still; they make great digital picture frames. Just sit them in the charger and the slide show fires right up. If I want to use one, I just pick it up; no wires, and off I go.
Android has something similar called Daydream. However, a flaw in either Android or Samsung's code prevents it from working correctly. When Daydream is running, the configured application loads, which in my case is a slide show of sorts, and while the battery charges the slide show runs like it should.
The problem comes when the battery gets full: the OS kicks Daydream offline, brings back the home screen, and shows a notification that the battery is full and to disconnect the charger. The wireless charging unit stops charging for a minute or so, then the charging kicks in again, Daydream fires up again for about a minute, then it is booted off again; rinse and repeat.
It gets worse, though: if I want to use Daydream I have to turn it on during the day and turn it off before I go to sleep. If Daydream is in use at night, I hit the power button to turn off the screen before going to sleep; then, guess what, when the battery is full the screen lights up and shows that same stupid battery-is-full message (and the screen does not turn off again). Without Daydream the device turns the screen off automatically, and it stays off until I turn it on or remove the phone from the charging pad.
Stupid. I would have thought these were basic things that would have been solved a while ago.
The only problem I really EVER had with wireless charging on WebOS was with the HP Pre3 and the original wireless pucks, as the base stations were called. The design of the Pre3 is slightly different, so it doesn't fit the older charging stations precisely; even with the built-in magnet to help align the phone to the charger, it sometimes gets out of alignment and goes into a charging/beeping loop until corrected. Understandable, since they were not designed for each other. HP was going to release a newer, significantly more sophisticated charging station for the Pre3 (which included wireless audio out, too), but of course it never made it to market.
As far as I know, WebOS phones never "stopped" charging when the battery was full; they just kept going. I realize this is not good for the battery, but I'd live with having to replace the battery every year or so if it meant the above stuff worked right. In fact, I never replaced a battery on a WebOS device in the roughly four years I used them.
All in all, I'm still pretty happy with the Note 3. My phone usage has gone up significantly; I think I can compare it to originally moving from a feature phone to a smartphone. I really did not use the Pre3 very much anymore towards the end. The battery life is not up to my expectations. Video playback battery life is excellent (I think CNET recently rated the Note 3 at something like 14 hours), but drive that CPU a bunch and it will chew through the battery quickly; I think I could fairly easily burn through 30% in an hour at high usage. I haven't used any new apps since my last blog post, and in fact, other than the two games I mentioned that I do play, I haven't touched any of the other games I had installed either. I have loaded the thing up with pictures, though: easily 15,000. I also have all of my music on there, and lots of video, and I still have about 25GB available (96GB total).
I also edited the Super Bowl down to a 19-minute video and have watched that tons of times on my phone (it looks amazing). There is another video, an episode of NFL Films Presents on the Super Bowl, that I put on my phone too; it also looks incredible (and the episode itself is just awesome). I purchased a pair of Braven Bluetooth speakers (originally bought one, then got another) which can be paired to each other for stereo playback; they work quite well (and have NFC too).
My mobile data usage has been tiny, though, since the bulk of my time is spent either at home or at the office, where I use wifi. With the HP Pre3 I kept wifi off most of the time because it would interfere with Bluetooth. The phone claims that from Jan 21 to Feb 21 I used only 136MB of mobile data (I have a 5GB plan, mainly for travel with the phone's mifi hotspot mode).
Anyway that's enough for now.