TechOpsGuys.com Diggin' technology every day

June 15, 2014

HP Discover 2014: Software defined

Filed under: Datacenter, Events — Nate @ 12:26 pm

(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc…)

I have tried to be a vocal critic of the whole software defined movement: much of it is hype today, has been for a while, and will likely continue to be for a while yet. My gripe is not so much with the approach, the world of “software defined” sounds pretty neat; my gripe is with the marketing behind it that tries to claim we’re already there, and we are not, not even close.

I was able to vent a bit with the HP team(s) on the topic and they acknowledged that we are not there yet either. There is a vision, and there is technology. But there aren’t a lot of products yet, at least not a lot of promising ones.

Software defined networking is perhaps one of the more (if not the most) mature platforms to look at. Last year I ripped pretty hard into the whole idea, with good points I thought: basically, the technology solves a problem I do not have and have never had. I believe most organizations do not have a need for it either (outside of very large enterprises and service providers). See the link for a very in-depth 4,000+ word argument on SDN.

More recently HP tried to hop on the bandwagon of Software Defined Storage, which in their view is basically the StoreVirtual VSA. To me that product doesn’t fit the scope of software defined; it is just a brand propped up on a product that was already pretty old and already running in a VM.

Speaking of which, HP considers this VSA along with their ConvergedSystem 300 to be “hyper converged”, and at least the people we spoke to do not see a reason to acquire the likes of Simplivity or Nutanix (why are those names so hard to remember the spelling of..). HP says most of the deals Nutanix wins are small VDI installations and aren’t seen as a threat; HP would rather go after the VCEs of the world. I believe Simplivity is significantly smaller.

I’ve never been a big fan of StoreVirtual myself; it seems like a decent product, but not something I get too excited about. The solutions these new hyper converged startups offer sound compelling, on paper at least, for the lower end of the market.

The future is software defined

The future is not here yet.

It’s going to be another 3-5 years (perhaps more). In the meantime customers will get drip-fed the technology in products from various vendors that can do software defined in a fairly limited way (relative to the grand vision anyway).

When hiring a network engineer, many customers would rather opt for someone with a few years of Python experience than someone with more years of networking experience, because that is where they see the future being in 3-5 years’ time.

My push back to HP on that particular quote (not quoted precisely) is that this level of sophistication is very hard (and expensive) to hire for. A good comparison is hiring for something like Hadoop: it is very difficult to compete with the compensation packages of the largest companies, which offer $30-50k+ more than smaller (even billion-$) companies.

So my point is the industry needs to move beyond the technology and into products. Having a requirement of knowing how to code is a sign of an immature product. Coding is great for extending functionality, but it need not be a requirement for the basics.
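
As a concrete illustration of the kind of scripting being asked of network engineers here, below is a minimal Python sketch of pushing a VLAN to a set of switches over a REST API. The endpoint, payload and credentials are hypothetical rather than any particular vendor’s API; the point is only the shape of the work.

    #!/usr/bin/env python
    # Minimal sketch: rolling a VLAN out to switches over a REST API.
    # The URL scheme and payload are hypothetical, not any specific
    # vendor's API.
    import requests

    SWITCHES = ["https://switch1.example.com", "https://switch2.example.com"]
    AUTH = ("admin", "secret")  # placeholder credentials

    def create_vlan(switch, vlan_id, name):
        """Create a VLAN on one switch via a (hypothetical) REST endpoint."""
        resp = requests.post(
            switch + "/api/v1/vlans",
            json={"id": vlan_id, "name": name},
            auth=AUTH,
            verify=False,  # management interfaces often use self-signed certs
        )
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        # The win over a CLI or GUI: the same change, every switch, repeatably.
        for switch in SWITCHES:
            create_vlan(switch, 42, "prod-web")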

HP seemed to agree with this, and believes we are on that track but it will take a few more years at least for the products to (fully) materialize.

HP Oneview

(here is the quick video they showed at Discover)

I’ll start off by saying I’ve never really seriously used any of HP’s management platforms (or anyone else’s for that matter). All I know is that they (in general, not HP specifically) seem to be continuing to proliferate and fragment.

HP Oneview 1.10 is a product that builds on this promise of software defined. In the five years HP has been pitching converged systems, seeing the demo for Oneview was the first time I’ve ever shown even a little bit of interest in converged.

HP Oneview was released last October I believe, and HP claims something along the lines of 15,000 downloads or installations. Version 1.10, announced at Discover, offers some new integration points including:

  • Automated storage provisioning and attachment to server profiles for 3PAR StoreServ Storage in traditional Fibre Channel SAN fabrics, and Direct Connect (FlatSAN) architectures.
  • Automated carving of 3PAR StoreServ volumes and zoning the SAN fabric on the fly, and attaching of volumes to server profiles.
  • Improved support for FlexFabric modules
  • Hyper-V appliance support
  • Integration with MS System Center
  • Integration with VMware vCenter Ops manager
  • Integration with Red Hat RHEV
  • Similar APIs to HP CloudSystem

Oneview is meant to be lightweight, acting as a sort of proxy into other tools, such as Brocade’s SAN manager in the case of Fibre Channel (myself I prefer Qlogic management, but I know Qlogic is getting out of the switch business). For several HP products such as 3PAR and Bladesystem, though, Oneview seems to talk to them directly.

Oneview aims to provide a view that starts at the data center level and can drill all the way down to individual servers, chassis, and network ports.

However the product is obviously still in its early stages – it currently only supports HP’s Gen8 DL systems (G7 and Gen8 BL). HP is thinking about adding support for older generations, but their tone made me think they will drag their feet long enough that it’s no longer demanded by customers. The bulk of what I have in my environment today is G7; I only deployed a few Gen8 systems two months ago. Also all of my SAN switches are Qlogic (and I don’t use HP networking now), so Oneview functionality would be severely crippled if I were to try to use it today.

The product on the surface does show a lot of promise though, there is a 3 minute video introduction here.

HP pointed out you would not manage your cloud from this; rather the other way around: cloud management platforms would leverage Oneview APIs to bring that functionality to the management platform higher up in the stack.
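
For a sense of what “leveraging Oneview APIs” might look like from the outside, here is a rough Python sketch of a higher-level tool authenticating against Oneview’s REST API and pulling server profiles. The /rest/login-sessions and /rest/server-profiles paths follow HP’s published API, but treat the field names and the API version header as illustrative; check the documentation for the release you run.

    #!/usr/bin/env python
    # Rough sketch of a cloud management layer driving Oneview via REST.
    # Paths follow HP's published API docs; field names and the API
    # version header may vary by release, so verify against your docs.
    import requests

    APPLIANCE = "https://oneview.example.com"  # placeholder appliance address
    HEADERS = {"X-API-Version": "101", "Content-Type": "application/json"}

    def login(user, password):
        """Open a session; Oneview returns a session ID used as an auth header."""
        resp = requests.post(
            APPLIANCE + "/rest/login-sessions",
            json={"userName": user, "password": password},
            headers=HEADERS,
            verify=False,  # appliances commonly ship with self-signed certs
        )
        resp.raise_for_status()
        return resp.json()["sessionID"]

    def list_server_profiles(session_id):
        """Fetch server profiles, the objects storage/network attachments hang off."""
        headers = dict(HEADERS, auth=session_id)
        resp = requests.get(APPLIANCE + "/rest/server-profiles",
                            headers=headers, verify=False)
        resp.raise_for_status()
        return resp.json().get("members", [])

    if __name__ == "__main__":
        sid = login("administrator", "secret")  # placeholder credentials
        for profile in list_server_profiles(sid):
            print(profile["name"])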

HP has renamed their Insight Control systems for vCenter and MS System Center to Oneview.

The goal of Oneview is automation that is reliable and repeatable. As with any such tool, it seems like you’ll have to work within its constraints and go around it when it doesn’t do the job.

“If you fancy being able to deploy an ESX cluster in 30 minutes or less on HP Proliant Gen8 systems, HP networking and 3PAR storage, then this may be the tool for you.” – me

The user interface seems quite modern and slick.

They expose a lot of functionality in an easy to use way, but one thing that struck me watching a couple of their videos is that it could still be made a lot simpler – there is a lot of jumping around to do different tasks. I suppose one way to address this might be broader wizards that cover multiple tasks in the order they should be done.

2 Comments

  1. Hi Nate,

    Can I make a rather facetious comment?

    There has been so much talk of making systems resilient, redundant, and reliable against known failure expectancies; basically we have layers and layers and layers of things that tout abilities to avert complete disaster, or disruption to denial-of-service levels.

    But take SDNs. You can cobble together soft switches as VMs and get a whole load of useful functionality. Possibly programmatic functionality that hardware will not give you, not legacy hardware anyway.

    I am hand waving and being unspecific just to air the drift of this.

    At what point do we have enough things that are good enough?

    I mean, when do we down tools on engineering every detail and just say: well, chances are slim that by not really caring about this and that, we will fail, fail, fail?

    I think some probability math and statistics are important to heed. But there are so many links in the chain that five nines is almost too expensive an idea at any level.

    What I am wondering is how crappy we can let the engineering be before, say, a web service truly bites the dust?

    I start to think that my early agnosticism as to platforms was not so misguided. Everyone talks about breakpoints in scaling systems such as web services as the arch example of when early design thinking matters. But twenty-twenty hindsight, I believe, is very much a bias. I mean, who writes articles on the myriad small projects that go nowhere? Small business is where I am at in terms of thinking: “webifying” stuff. How many ways are there to put a sales record onto a web page? So many paths to doing that.

    Sometimes I think people who really know what they are doing constrict themselves by specialism. I know this technology well enough to compete on my domain knowledge, so I do the thing I pitch to a high degree, and I’m proud of it. But that could work against the results, not from over-engineering one aspect, but because if you know one technology deeply you are probably a capable generalist also. What if your energies were more thinly spread? E.g. I know loads about how to do backup from distributed databases on cloud platforms (not my field, just an example). But what if, when hired, instead of optimizing the heck out of a complex system, I help build the other parts, so that they will play nicely later? Say in my travels I know how to ensure individual nodes are coherent at a lesser interval, when real-time replication is not needed. So I might get involved in simplifying an entirely different strategy of file transfer, looking at how an application writes data and when it needs to have updates. An example here is sales offices who synchronize customer data. How often does a manager need a full fresh dataset of all customers worldwide to do an OLAP-style job? Is this even a candidate for a central database replicated to every office? Can we just grab the updates at end of day, or do it before a regular meeting?
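
    To make that last point concrete, the sort of thing I mean is a nightly delta pull keyed off a last-updated watermark, instead of live replication. A rough sketch in Python (the table and column names are invented for illustration):

        #!/usr/bin/env python
        # Rough sketch: copy only the day's changed customer rows, using an
        # updated_at watermark, instead of replicating in real time.
        # Table and column names are invented for illustration.
        import sqlite3  # stand-in for whatever database the office actually runs

        def pull_updates(remote, since):
            """Fetch only customer rows changed since the last sync."""
            cur = remote.execute(
                "SELECT id, name, updated_at FROM customers WHERE updated_at > ?",
                (since,),
            )
            return cur.fetchall()

        def sync(local, remote, watermark):
            """Apply the day's changes locally and advance the watermark."""
            rows = pull_updates(remote, watermark)
            local.executemany(
                "INSERT OR REPLACE INTO customers (id, name, updated_at) "
                "VALUES (?, ?, ?)",
                rows,
            )
            local.commit()
            # New watermark: the latest change we just copied.
            return max((r[2] for r in rows), default=watermark)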

    What I am saying is I think we have a robustness of software and hardware that’s good enough at every level, and the qualification to make one bit of it perform to very, very high standards is overused relative to what you could be doing looking low level into the objectives of a project.

    This was an actual example I am very loosely using to explain. The guy who pitched and won the contract was amazing at clustered databases and geo-diverse engineering. He could give many a big name a run for their money, intellectually and in engineering. He won the contract because he had implemented a super robust database that was used by a bank. Regulations meant this was one over-engineered gem. It took ages to start work because of induction, legal sign-offs, meetings and orientation to greet the other in-situ IT teams, their legal, their compliance, their managers and reports; collating the network topologies that might be relevant; documenting and signing releases for every piece of information, all of which had to be audited for security.

    Then one day it dawned on him that this was not a mission critical system. Rules and regs and litigation paranoia had inflated the specifications. Management demands came in about availability. Global audit wanted access to everything 24/7, please…

    But the whole thing was to update salespeople on their client portfolios as a research and management tool, not as an actual live view of the portfolios. The underlying data could be stale by a fair bit of time before it became useless. The reports were looked at according to a schedule, each month. The OLAP requirement was really not so online; it was merely some pivots pulled into Excel. The real accounts and bonuses and commissions and so on were reconciled once a year, and the payroll was contracted only as an advance, actually as a loan rather than a salary (this is common: making loans on the anticipated year-end bonus rather than laying out clear commission; I’ve seen top-three brokers’ employee contracts recently enough), and so all the amazing replication and network engineering to support that really wasn’t needed.

    What the job apparently ended up being was upgrading WAN infrastructure, just so these data ingests could be smoother, faster, better checked for integrity and so on. Some security aspects were improved: signing files specifically from each branch office, and installing a hardware key system (Thales) so that head office controlled the aggregate data. Local servers were upgraded. Some optimization was done for common queries. A simple contract was laid out for data release due to privacy laws and all that, grossly simplifying how the next people to work on the system could be cleared to work with the data.

    Basically, the guy who sold himself for the project lead sold himself on a highly specialized premise. But doing the job turned out to be none of that. Because of the human factors: concentrating the wetware on a flash new system would cost more than the system itself, and the wetware was to stay as a fixed cost. You cannot amortize salaries. So the consultant figured out how to just make clearer sense of the existing system, and said adios, call me if you really want a globally distributed fully redundant do-everything-instantly database.

    The fun thing is that the contract had a bonus linked to employee hours saved that was not limited to just IS/IT people. So after some diplomatic haggling, factoring in that huge license fees had not been paid and software contracts had not been pushed to meet new super SLAs, our man came home with a truly nice check.

    I dunno if this illustration is as good as I hoped for explaining my position, but I’ll try to wrap up:

    There are gains to be had in software defined things,

    Gains in new hardware,

    Gains in virtualization,

    But I feel that we pay too much heed to whether this or that is prime time, by which I mean hoping that ultra sophisticated layers can have a disproportionate benefit across a system.

    So much is good enough, already.

    That job apparently entailed many systems that could have benefitted from virtualization, or from the convergence talked about by HP; there were many parts to it, but not all parts needed to move in perfect synch.

    Getting to grips with the organization is where I think there are inroads to be made in gigs like this Oneview prospect. And although a pseudo-OLAP pitch like I described is different in nature, that one got its result in large part by breaking down the job, making selective upgrades, and knowing the real objectives.

    Once that is done, I can imagine that being able to move the “hardware” around with software defined networks and virtual machines might be the next stage. I don’t know the scale, save that it was hundreds of servers involved in collecting sales data at department level. Shoring up the WAN and network plus selective hardware upgrades made a huge difference and avoided a very big license charge for a more mainframe approach that would have had to have really high availability. The distributed gain was already in place at office level, and even upping the local office systems a touch, new tape drives and properly buffering the tape with disk, built on the gains.

    Basically, a guy who could have built a fantastic cluster and won many kudos points for geek bragging did a minimum impact job.

    We may win contracts based on what we know best, or even by specializing in what trends now and gets the attention of management.

    But the moral is to word your contract with scope to earn fairly from results that are not dependent on a particular technology being more mature than it is.

    Do you want to do any big gig with, sorry to quote you in vain, Nate, a product that “shows promise”?

    Some gigs will run long enough that, just in the time spent meeting the people involved, you might see a version number update for products you are quietly evaluating.

    Maybe I beat on the consultant war story too much. This guy totally won, and the client was totally relieved, especially the branch IT managers who were, as is always the case, nervous for their positions when head office started bragging about new global IT initiatives on investor relations conference calls. Instead of just a couple of references for his CV he got a small army of relieved local admins prepared to endorse him, even if only on LinkedIn.

    The real win, though, is figuring out how to get permission to change other components.

    If you are hired to fix the storage infrastructure, are you allowed to do anything to help manage demand on the storage infrastructure? Can you look at other pieces of the system to get gains?

    Try not to be contracted only to fix whatever whiz bang wonderful new thing, even if that’s your foot through the door, your bid.

    Because vendors want their new product to sell with immense margins, they all project as if it will have an effect across the whole system.

    Sure, if you have a home computer without an SSD, buying better storage will affect every bit of your day for the better. But real IT systems are driven by humans, human demand, human interaction, and have very different profiles.

    Imagine if you went into Formula One motor racing with only the idea that you could design the neatest nose cone? Or the best engine, even? Teams who win have balance. Force India, a back-of-the-grid team in terms of the recognition they get, punch above their weight all the time, for having balance and making super use of their components.

    I hope I’ve gone just far enough, not over the line, in my arguments.

    This software defined gig is potentially amazing. But will it pry the hardware out of the dead cold hands of many branch offices?

    And generally, the thing I learned is to pitch narrow if that’s the request, but contract widely. When I ask for terms that bonus me indirectly from what the job is, I explain that by knowing how to get the system working, by speaking to whoever manages what connects to it, I am providing a service to those departments; and I will say it is unusual that I can improve related systems’ performance, but not unlikely, so I want a little if I spot a windfall gain at low cost. If I get skepticism, I just say that sometimes when you come in with a fresh attitude, so long as you’re no threat, you get to hear so much from the trenches that you can learn things management would never hear otherwise. Not as a spy, but if you work across normal line reporting you can figure out if line management is hurting things, for example. Just notes and impressions can be so valuable to senior management. Make them aware you are aware, but not a disruptor.

    Okay, I’ve gone on a bit. The overall tech is so good now I wonder if some companies can tell which bit gets them the wins, before they start renewing systems..

    Comment by John ( other John ) — June 25, 2014 @ 4:28 am

  2. I think the gist of most of what you were trying to say is that we already have some pretty robust hardware and software out there, and this new software defined stuff is, at least in the short term, introducing some unknown variables in its robustness and reliability. In some cases you’re better off doing the task manually – automation is great, but unless it’s done really, really well it can be much more risky, especially if that automation involves a significant amount of complexity (coding, etc), vs say interfacing with a CLI or a GUI of some enterprise storage/hypervisor/etc type tool.

    I think we’ll get to a stable point eventually, but I see it as still being years away. It is annoying to me that most of the press around this stuff tries to claim we are already there when we are not. It was nice to hear directly from HP that they agree we are not there yet as well.

    Comment by Nate — June 30, 2014 @ 2:43 pm
