TechOpsGuys.com Diggin' technology every day

June 15, 2014

HP Discover 2014: Datacenter services

Filed under: Datacenter — Nate @ 12:54 pm

(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc…)

I should be out sightseeing, but I have been stuck in my hotel room here in Sedona, AZ due to the worst food poisoning I’ve ever had, from food I ate on Friday night.

X As a service

The trend towards “as a service” (which seems to be more of an accounting exercise to shift dollars to another column in the books than anything else) continues with HP’s Facility as a Service.

HP will go so far as to buy you a data center (the actual building), fill it with equipment and rent it back to you for a set fee, with entry-level systems starting at 150kW (which could be as few as 15 high-density racks). They can even manage it end to end if you want them to; I didn’t realize myself the extent their services go to. It does require a 5 or 10 year commitment, however (which has to do with accounting again, I believe). HP says they are getting a lot of positive feedback on this new service.

This is really targeted at those that must operate on premise due to regulations and cannot rely on a 3rd party data center provider (colo).

Flexible capacity

FaaS doesn’t cover the actual computer equipment though; that is just the building, power, cooling etc. The equipment can either come from you, or you can get it from HP using their Flexible Capacity program. That program also extends to the HP public cloud, which can act as an additional resource pool for systems.

HP Flexible Capacity program


Entry level for Flexible Capacity, we were told, is roughly a $500k contract ($100k/year).

Thought this was a good quote:

“We have designed more than 65 million square feet of data center space. We are responsible for more than two-thirds of all LEED Gold and Platinum certified data centers, and we’ve put our years of practical experience to work helping many enterprises successfully implement their data center programs. Now we can do the same for you.”

Myself I had no idea that was the case, not even close.

HP Discover 2014: Software defined

Filed under: Datacenter, Events — Nate @ 12:26 pm

(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc…)

I have tried to be a vocal critic of the whole software defined movement, in that much of it is hype today, has been for a while, and will likely continue to be for a while yet. My gripe is not so much about the approach (the world of “software defined” sounds pretty neat); my gripe is about the marketing behind it that tries to claim we’re already there, and we are not, not even close.

I was able to vent a bit with the HP team(s) on the topic and they acknowledged that we are not there yet either. There is a vision, and there is a technology. But there aren’t a lot of products yet, at least not a lot of promising products.

Software defined networking is perhaps one of the more (if not the most) mature platforms to look at. Last year I ripped pretty good into the whole idea, with what I thought were good points, basically that the technology solves a problem I do not have and have never had. I believe most organizations do not have a need for it either (outside of very large enterprises and service providers). See the link for a very in-depth 4,000+ word argument on SDN.

More recently HP tried to hop on the bandwagon of Software Defined Storage, which in their view is basically the StoreVirtual VSA. To me that product doesn’t fit the scope of software defined; it is just a brand propped up onto a product that was already pretty old and already running in a VM.

Speaking of which, HP considers this VSA along with their ConvergedSystem 300 to be “hyper converged”, and at least the people we spoke to do not see a reason to acquire the likes of Simplivity or Nutanix (why are those names so hard to remember the spelling of..). HP says most of the deals Nutanix wins are small VDI installations and they aren’t seen as a threat; HP would rather go after the VCEs of the world. I believe Simplivity is significantly smaller.

I’ve never been a big fan of StoreVirtual myself, it seems like a decent product, but not something I get too excited about. The solutions that these new hyper converged startups offer sound compelling on paper at least for lower end of the market.

The future is software defined

The future is not here yet.

It’s going to be another 3-5 years (perhaps more). In the meantime customers will get drip-fed the technology in products from various vendors that can do software defined in a fairly limited way (relative to the grand vision anyway).

When hiring for a network engineer, many customers would rather opt for someone who has a few years of Python experience over someone with more years of networking experience, because that is where they see the future being in 3-5 years’ time.

My push back to HP on that particular quote (not quoted precisely) is that that level of sophistication is very hard (and expensive) to hire for. A good comparison is hiring for something like Hadoop: it is very difficult to compete with the compensation packages of the largest companies, which offer $30-50k+ more than smaller (even billion-dollar) companies.

So my point is the industry needs to move beyond the technology and into products. Having a requirement of knowing how to code is a sign of an immature product. Coding is great for extending functionality, but need not be a requirement for the basics.
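
To put a concrete face on that hiring quote, here is a minimal sketch (mine, not HP’s) of the kind of work a “network engineer who codes” ends up doing: driving a controller’s REST API from Python instead of a switch CLI. The controller address, endpoint and payload are hypothetical placeholders, not any particular vendor’s API, which is sort of my point: this glue code should be optional, not table stakes.

# Minimal sketch: provision a VLAN via a (hypothetical) SDN controller REST API
# instead of a switch CLI. Endpoint and payload are made up for illustration.
import requests

CONTROLLER = "https://sdn-controller.example.com:8443"  # hypothetical address
TOKEN = "REPLACE_WITH_AUTH_TOKEN"                       # hypothetical token

def create_vlan(vlan_id, name):
    """Ask the (hypothetical) controller to provision a VLAN fabric-wide."""
    resp = requests.post(
        f"{CONTROLLER}/api/v1/vlans",
        headers={"X-Auth-Token": TOKEN},
        json={"vlan": {"id": vlan_id, "name": name}},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(create_vlan(42, "app-tier"))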

HP seemed to agree with this, and believes we are on that track but it will take a few more years at least for the products to (fully) materialize.

HP Oneview

(here is the quick video they showed at Discover)

I’ll start off by saying I’ve never really seriously used any of HP’s management platforms (or anyone else’s for that matter). All I know is that they (in general, not HP specifically) seem to be continuing to proliferate and fragment.

HP Oneview 1.10 is a product that builds on this promise of software defined. In the five years HP has been pitching converged systems, seeing the Oneview demo was the first time I’ve ever shown even a little bit of interest in converged.

HP Oneview was first released last October, I believe, and HP claims something along the lines of 15,000 downloads or installations. Version 1.10 was announced at Discover, and it offers some new integration points including:

  • Automated storage provisioning and attachment to server profiles for 3PAR StoreServ Storage in traditional Fibre Channel SAN fabrics, and Direct Connect (FlatSAN) architectures.
  • Automated carving of 3PAR StoreServ volumes and zoning the SAN fabric on the fly, and attaching of volumes to server profiles.
  • Improved support for Flexfabric modules
  • Hyper-V appliance support
  • Integration with MS System Center
  • Integration with VMware vCenter Ops manager
  • Integration with Red Hat RHEV
  • Similar APIs to HP CloudSystem

Oneview is meant to be lightweight and act as a sort of proxy into other tools, such as Brocade’s SAN manager in the case of Fibre Channel (myself I prefer Qlogic management, but I know Qlogic is getting out of the switch business). For several HP products such as 3PAR and BladeSystem, though, Oneview seems to talk to them directly.

Oneview aims to provide a view that starts at the data center level and can drill all the way down to individual servers, chassis, and network ports.

However the product is obviously still in its early stages. It currently only supports HP’s Gen8 DL systems and G7/Gen8 BL systems; HP is thinking about adding support for older generations, but their tone made me think they will drag their feet long enough that it’s no longer demanded by customers. The bulk of what I have in my environment today is G7, having only deployed a few Gen8 systems two months ago. Also, all of my SAN switches are Qlogic (and I don’t use HP networking now), so Oneview functionality would be severely crippled if I were to try to use it today.

The product on the surface does show a lot of promise though, there is a 3 minute video introduction here.

HP pointed out you would not manage your cloud from this, but rather the other way around: cloud management platforms would leverage Oneview APIs to bring that functionality to the management platform higher up in the stack.
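
To give a feel for what leveraging those APIs looks like from the consumer side, here is a rough sketch of talking to the Oneview REST interface from Python: log in, then list server profiles. Treat it as a sketch rather than gospel; the appliance address is a placeholder, and details like endpoint paths and the X-API-Version value vary by Oneview release, so check the API reference for your version.

# Rough sketch: authenticate to an Oneview appliance and list server profiles
# over its REST API. Hostname is a placeholder; API version is release-dependent.
import requests

APPLIANCE = "https://oneview.example.com"   # hypothetical appliance address
API_VERSION = "120"                         # depends on the Oneview release

def get_session(user, password):
    resp = requests.post(
        f"{APPLIANCE}/rest/login-sessions",
        json={"userName": user, "password": password},
        headers={"X-API-Version": API_VERSION},
        verify=False,   # appliances commonly ship with self-signed certs
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["sessionID"]

def list_server_profiles(session_id):
    resp = requests.get(
        f"{APPLIANCE}/rest/server-profiles",
        headers={"X-API-Version": API_VERSION, "Auth": session_id},
        verify=False,
        timeout=30,
    )
    resp.raise_for_status()
    return [p["name"] for p in resp.json().get("members", [])]

if __name__ == "__main__":
    sid = get_session("administrator", "secret")
    print(list_server_profiles(sid))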

HP has renamed their Insight Control systems for vCenter and MS System Center to Oneview.

The goal of Oneview is automation that is reliable and repeatable. As with any such tool, it seems like you’ll have to work within its constraints and go around it when it doesn’t do the job.

“If you fancy being able to deploy an ESX cluster in 30 minutes or less on HP Proliant Gen8 systems, HP networking and 3PAR storage, then this may be the tool for you.” – me

The user interface seems quite modern and slick.

They expose a lot of functionality in an easy-to-use way, but one thing that struck me watching a couple of their videos is that it could still be made a lot simpler; there is a lot of jumping around to do different tasks. I suppose one way to address this might be broader wizards that cover multiple tasks in the order they should be done.

HP Discover 2014: Helion (Openstack)

Filed under: Datacenter, Events — Nate @ 10:36 am

(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc…)

HP Helion

This is a new brand for HP’s cloud platform based on OpenStack. There is a commercial version and a community edition. The community edition is pure OpenStack without some of the fancier HP management interfaces on top of it.

“The easiest thing about OpenStack is setting it up – organizations spend the majority of the time simply keeping it running after it is set up.”

HP admits that OpenStack has a long way to go before it is considered a mature enterprise application stack. But they do have experience running a large OpenStack public cloud and have hundreds of developers working on the product. In fact HP says that most OpenStack community projects these days are basically run by HP; while other large contributors (even Rackspace) have pulled back on resources allocated to the project, HP has gone in full steam ahead.

HP has many large customers who specifically asked HP to get involved in the project and to provide a solution for them that can be supported end to end. I must admit the prospect does sound attractive, being that you can get HP storage, servers and networking all battle-tested and ready to run this new cloud platform; the OpenStack platform itself is by far the biggest weak point today.

It is not there yet though; HP does offer professional services covering the customer’s entire life cycle of an OpenStack deployment.

One key area of OpenStack that has been weak, and which recently made the news, is the networking component, Neutron.

“[..] once you get beyond about 50 nodes, Neutron falls apart”

So to stabilize this component, HP integrated their SDN controller into the lower levels of Neutron. This allows it to scale much better while maintaining complete compatibility with existing APIs.

That is something HP is doing in several cases. They emphasize very strongly that they are NOT building a proprietary solution and are NOT changing any of the APIs in ways that break compatibility (any API changes they do want are contributed upstream). They are, however, adding and moving some things around beneath the API level to improve stability.
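
A quick illustration of what “not changing the APIs” means in practice: the standard python-neutronclient calls below (creating a network and a subnet) are written exactly as you would against vanilla OpenStack, and the idea is that they keep working unchanged whether Neutron is running its stock plugin or HP’s SDN controller underneath. The Keystone endpoint and credentials are placeholders; this is my sketch, not HP code.

# Stock Neutron API usage via python-neutronclient; nothing vendor-specific here.
from neutronclient.v2_0 import client

neutron = client.Client(
    username="demo",
    password="secret",
    tenant_name="demo",
    auth_url="http://controller.example.com:5000/v2.0",  # placeholder Keystone endpoint
)

# Create a network and a subnet exactly as you would against vanilla OpenStack.
net = neutron.create_network(
    {"network": {"name": "app-net", "admin_state_up": True}})
neutron.create_subnet({"subnet": {
    "network_id": net["network"]["id"],
    "ip_version": 4,
    "cidr": "10.10.0.0/24",
}})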

The initial cost for the commercial version is $1,400/server/year, which is quite reasonable; I assume that includes basic support. The commercial version is expected to become generally available in the second half of 2014.

Major updates will be released every six months, and minor updates every three months.

Very limited support cycle

One thing that took almost everyone in the room by surprise is the support cycle for this product. Normally enterprise products have support for 3-5 years; Helion has support for a maximum of 18 months. HP says 12 of those months are general support and the last six are specifically geared towards migration to the next version, which they say is not a trivial task.

I checked Red Hat’s policy, as they are another big distributor of OpenStack, and it is similar: they had one year of support on version 3 of their product and have one and a half years on version 4 (the current version). Despite the version numbers, apparently version 3 was the first release to the public.

So that should just reinforce the fact that OpenStack is not a mature platform at this point and it will take some time before it is, probably another 2-3 years at least. They only recently got the feature that allows for upgrading the system.

HP does offer a fully integrated ConvergedSystem with Helion, though despite my best efforts I am unable to find a link that specifically mentions Helion or OpenStack.

HP is supporting ESXi and KVM as the initial hypervisors in Helion. OpenStack itself supports a much wider variety, but HP is electing to start with those two. Support for Hyper-V will follow shortly.

HP also offers full indemnification from legal issues.

This site has a nice diagram of what HP is offering; I’m not sure if it is an HP image or not, so I’m sending you there to see it.

Conclusion

My own suggestion is to steer clear of OpenStack for a while yet; give it time to stabilize. Don’t deploy it just because you can, and don’t deploy it because it’s today’s hype.

If you really, truly need this functionality internally then it seems like HP has by far the strongest offering from a product and support standpoint (they are willing and able to do everything from design to deployment to operationally running it). Keep in mind that depending on the scale of the deployment you may be constantly planning for the next upgrade (or having HP plan it for you).

I would argue that the vast majority of organizations do not need OpenStack (in its current state) and would do themselves a favor by sticking to whatever they are already using until it’s more stable. Your organization may have pains running whatever you’re running now, but you’re likely to just trade those pains for other pains going the OpenStack route right now.

When will it be stable? I would say a good indicator will be the support cycle: when HP (or Red Hat) starts offering a full 3 year support cycle on the platform (with back-ported fixes etc.), it has probably hit a good milestone.

I believe OpenStack will do well in the future, it’s just not there yet today.

June 10, 2014

HP Discover Las Vegas 2014: Apollo 8000

Filed under: Datacenter — Nate @ 1:40 am

(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc…)

I witnessed what I’d like to say is one of the most insane unveilings of a new server in history. It sort of mimicked the launch of an Apollo spacecraft: lots of audio and video from NASA, and then, when the system appeared, lots of compressed air/smoke (very, very loud) and dramatic music.

Here’s the full video in 1080p; beware, it is 230MB. I have 100Mbit of unlimited bandwidth connected to a fast backbone, so we’ll see how it goes.

HP Apollo 8000 launch

HP Apollo 8000

Apollo is geared squarely at compute bound HPC, and is the result of a close partnership between HP, Intel and the National Renewable Energy Laboratory (NREL).

The challenge HP set for itself was: what would it take to drive a million teraflops of compute? They said that with today’s technology it would require one gigawatt of power and 30 football fields of space.
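
For perspective, here is the quick arithmetic behind that framing (just me checking the math, nothing official from HP):

# A million teraflops (an exaflop) at one gigawatt works out to a kilowatt per
# teraflop, or roughly one watt per gigaflop, with today's technology.
target_teraflops = 1_000_000       # one million teraflops
power_watts = 1_000_000_000        # one gigawatt

print(power_watts / target_teraflops, "watts per teraflop")           # 1000.0
print(power_watts / (target_teraflops * 1000), "watts per gigaflop")  # 1.0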

Apollo is supposed to help fix that, though the real savings are still limited by today’s technology; it’s not as if they were able to squeeze a 10-fold improvement in performance out of the same power footprint. Intel said they were able to get (I want to say) 35% more performance out of the same power footprint using Apollo versus, I believe, the blades they were using before. I am assuming in both cases the CPUs were about the same and the savings came mainly from the more efficient power and cooling design.

They built the system as a rack-scale design. You probably haven’t been reading this blog very long, but four years ago I lightly criticized HP’s SL series for not being visionary enough, in that they were not building for the rack. The comparison I gave was with another solution I was toying with at the time for a project, SGI’s CloudRack.

Fast forward to today and HP has answered that, and then some, with a very sophisticated design that is integrated as an entire rack in the water-cooled Apollo 8000. They have a mini version called the Apollo 6000 (if this product was available before today I had never heard of it, though I don’t follow HPC closely), on which Intel apparently already has something like 20,000 servers deployed.

Apollo 8000 water cooling system


Anyway, one of the keys to this design is the water cooling. It’s not just any water cooling though: the water in this case never gets into the server. They use heat pipes on the CPUs and GPUs and transfer the heat to what appears to be a heat sink of some kind on the outside of the chassis, which then “melds” with the rack’s water cooling system to carry the heat away from the servers. Power is also managed at the rack level. Don’t get me wrong, this is far more advanced than the SGI system of four years ago. HP is apparently not giving this platform a very premium price either.

Apollo 8000 server level water cooling


Their claims include:

  • 4 X the teraflops per square foot (vs air cooled servers)
  • 4 X density per rack per dollar (not sure what the per dollar means but..)
  • Deployment possible within days (instead of “years”)
  • More than 250 Teraflops per rack (the NREL Apollo 8000 system is 1.2 Petaflops in 18 racks …)
  • Apollo 6000 offering 160 servers per rack, and Apollo 8000 having 144
  • Some fancy new integrated management platform
  • Up to 80KW powering the rack (less if you want redundancy – 10kW per module)
  • One cable for ILO management of all systems in the rack
  • Can run on water temps as high as 86 degrees F (30 C)

The cooling unit for the 8000 goes in another rack; it consumes half of a rack and supports a maximum of four server racks. If you need redundancy then I believe a second cooling unit is required, so two racks. The cooling unit weighs over 2,000 pounds by itself, so it appears unlikely that you’d be able to put two of them in a single cabinet.

The system takes in 480V AC and converts it to DC using up to eight 10kW redundant rectifiers in the middle of the rack.
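
A quick bit of arithmetic on those numbers (my own sanity check, not HP figures): the eight rectifiers line up with the 80kW-per-rack claim, redundancy costs you one 10kW module, and the NREL system works out to far less than the “250+ teraflops per rack” marketing number, presumably because those racks are not fully populated (my guess).

rectifiers = 8
kw_per_rectifier = 10
print(rectifiers * kw_per_rectifier, "kW per rack, non-redundant")          # 80
print((rectifiers - 1) * kw_per_rectifier, "kW with one redundant module")  # 70

nrel_petaflops = 1.2
nrel_racks = 18
print(round(nrel_petaflops * 1000 / nrel_racks, 1), "teraflops per rack at NREL")  # 66.7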

NREL integrated the Apollo 8000 system with their building’s heating system, so the cluster contributes to heating the entire building and that heat is not wasted.

It looks as if SGI discontinued the product I was looking at four years ago (at the time for a Hadoop cluster). It supported up to 76 dual-socket servers in a rack, with a maximum of something like 30kW (I don’t believe any configuration at the time could draw more than 20kW), and had rack-based power distribution as well as rack-based (air) cooling. It seems to have been replaced with a newer product called Infinite Data Cluster, which can go up to 80 dual-socket servers in an air-cooled rack (though, unlike Apollo, GPUs are not supported).

This new system doesn’t mean a whole lot to me personally; I don’t deal with HPC so I may never use it, but the tech behind it seemed pretty neat, and obviously I was interested to see HP finally answer my challenge to build a system designed around an entire rack, with rack-based power and cooling.

The other thing that stuck out to me was the new management platform. HP said it is another unique management platform specific to Apollo, which was sort of confusing given that I had just sat through what I thought was a very compelling preview of HP OneView (the latest version of which was announced today), HP’s new converged management interface. It seems strange to me that they would not integrate Apollo into that from the start, but I guess that’s what you get from a big company with teams working in parallel.

HP tries to justify this approach by saying there are several unique things in Apollo components so they needed a custom management package. I think they just didn’t have time to integrate with OneView, since there is no justification I can think of to not expose those management points via APIs that OneView can call/manage/abstract on behalf of the customer.

It sure looks pretty though (more so in person; I’m sure I’ll get a better pic of it on the show floor in normal lighting conditions, along with pics of the cooling system).

UPDATE – some pics of the compute stuff

HP Apollo Series Server node Front part

HP Apollo Series Server node inside back part

HP Apollo  8000 Series server front end

HP Apollo 8000 Series server inside front part

HP Apollo 8000 Series front end


HP Apollo 8000 series heat pipes, one for each CPU or GPU in the server and passes heat to the side of the server


January 30, 2014

More cloud FAIL

Filed under: Datacenter — Nate @ 3:57 pm

I guess I have to hand it to these folks, as they are up front about it. But I just came across this little gem, where a Seattle-area startup talks about a nearly $6 million loss they are taking for 2013, and what really caught my eye more than anything else is their cloud spend.

They spent 25% of their REVENUE on cloud services in 2013 (which for them comes to just over $7 million).

REVENUE, 25% of REVENUE. Oh. my. god.

Now I shouldn’t be surprised, having been at a company that was doing just that (that company has since collapsed and was acquired by some Chinese group recently), and I know many other companies that are massively overspending on cloud because they are simply clueless.

It is depressing.

What’s worse is it just makes everyone else’s life harder, because people read articles about public cloud, see all these companies signing up and spending a lot, and think it is the right thing to do, when more often than not (far more often than not) it is the wrong strategy. I won’t go into specifics AGAIN on when it is good or not; that is not the point of this post.

The signal-to-noise ratio of people moving OUT of public cloud vs going INTO it is still way off; rarely do you hear about companies moving out, or why they moved. I’ve talked to a BUNCH of companies in recent years who have moved out of public clouds (or feel they are stuck in their cloud), but those stories never seem to reach the press for some reason.

The point of this post is to illustrate how absurd some of the spending on cloud is out there. I am told this company in particular is building their own cloud now; apparently they saw the light.

My company moved out of public cloud about two years ago and obviously we have had great success ever since; the $$ saved is nothing compared to the improved availability, flexibility and massive ease of use over a really poor public cloud provider.

Oh, as a side note, if you use Firefox I highly encourage you to install this plugin; it makes reading about cloud more enjoyable. I’ve had it for months now and I love it. There is a version for Chrome as well, I believe.

May 14, 2013

Some pictures of large scale Microsoft ITPAC Deployments

Filed under: Datacenter — Nate @ 11:16 am

Just came across this at Data Center Knowledge. I have written about these in the past; from a high-level perspective they are incredibly innovative, far more so than any other data center container I have seen.

Once you walk outside, you begin to see the evolution of Microsoft’s data center design. The next building you enter isn’t really a building at all, but a steel and aluminum framework. Inside the shell are pre-manufactured ITPAC modules. Microsoft has sought to standardize the design for its ITPAC – short for Information Technology Pre-Assembled Component – but also allows vendors to work with the specifications and play with the design. These ITPACs use air side economization, and there are a few variations.

I heard about some of these being deployed a couple years ago from some friends, though this is the first time I’ve seen pictures of any deployment.

You can see a video on how the ITPAC works here.

April 18, 2013

Giant Explosion of companies moving off of AWS

Filed under: Datacenter — Nate @ 10:24 am

Maybe I’m not alone in this world after all. I have ranted and raved about how terrible Amazon’s cloud is for years now. I used it at two different companies for around two years (it’s been almost a year since I last used it at this point) and it was by far the worst experience of my professional career. I could go on for an entire afternoon listing all the problems and the lack of abilities, features etc. that I experienced — not to mention the costs.

But anyway, on to the article, which I found on Slashdot. It made my day; well, the day is young still, so perhaps something better will come along.

A week ago I quoted Piston’s CTO saying that there was a “giant explosion” of companies moving off of Amazon Web Services (AWS). At the time, I noted that he had good reason to say that, since he started a company that builds software used by companies to build private clouds.

[..]

Enterprises that are moving to private clouds tend to be those that had developers start using the cloud without permission.

[..]

Other businesses are “trying to get out,” he said. AWS has made its compute services very sticky by making them difficult for users to remove workloads like databases to run them elsewhere.

Myself I know of several folks who have stuff in Amazon, and for the most part the complaints are similar to mine. Very few that I have heard of are satisfied. The people that seem to be satisfied (in my experience) are those that don’t see the full picture, or don’t care. They may be satisfied because they don’t want to worry about infrastructure no matter the cost, or they want to be able to point the finger at an external service provider when stuff breaks (Amazon’s support is widely regarded as worthless).

“We have thousands of AWS customers, and we have not found anyone who is happy with their tech support,” says Laderman.

I was at a company paying six figures a month in fees and they refused to give us any worthwhile support. Any company in the enterprise space would have been more than happy to permanently station an employee on site to make sure the customer is happy for those kinds of payments. Literally everyone who used the Amazon stuff in the company hated it, and the company wanted Amazon to come help show us the way — and they said no.

I am absolutely convinced (as I’ve seen it first and second hand) that in many cases the investors in these startups have conflicts of interest and want their startups to use Amazon, because the investors benefit from Amazon growing as well. Amazon then uses this as marketing material to pitch to other customers. This of course happens all over the place with other companies, but relatively speaking, a lot of folks are invested in Amazon compared to most other companies.

There’s no need for me to go into specifics as to why Amazon sucks here – for those you can see some of the past posts. This is just a quickie.

Anyway, that’s it for now. I saw the article and it made me smile.

April 10, 2013

HP Project Moonshot micro servers

Filed under: Datacenter — Nate @ 12:11 am

HP made a bit of headlines recently when they officially unveiled their first set of ultra-dense micro servers, under the product name Moonshot. Originally speculated to be an ARM platform, it seems HP has surprised many by making this first round of products Intel Atom based.

Picture of HP Moonshot chassis with 45 servers

They are calling it the world’s first software defined server. Ugh. I can’t tell you how sick I feel whenever I hear the term software defined <insert anything here>.

In any case I think AMD might take issue with that, given their SeaMicro unit, which they acquired a while back. I was talking with SeaMicro as far back as 2009, I believe, when they had their high-density 10U virtualized Intel Atom-based platform (I have never used SeaMicro, though I knew a couple of folks who worked there), complete with integrated switching, load balancing and virtualized storage (the latter two of which HP is lacking).

Unlike legacy servers, in which a disk is unalterably bound to a CPU, the SeaMicro storage architecture is far more flexible, allowing for much more efficient disk use. Any disk can mount any CPU; in fact, SeaMicro allows disks to be carved into slices called virtual disks. A virtual disk can be as large as a physical disk or it can be a slice of a physical disk. A single physical disk can be partitioned into multiple virtual disks, and each virtual disk can be allocated to a different CPU. Conversely, a single virtual disk can be shared across multiple CPUs in read-only mode, providing a large shared data cache. Sharing of a virtual disk enables users to store or update common data, such as operating systems, application software, and data cache, once for an entire system
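
To make the idea in that quote concrete, here is a toy model of the concept in Python. It is purely illustrative (my own sketch, nothing to do with SeaMicro’s actual implementation): physical disks get carved into virtual disk slices, each read-write slice belongs to one CPU, and a read-only slice (say, a shared OS image) can be mounted by many.

# Toy model only: virtual disk slices carved from physical disks and mapped to CPUs.
class VirtualDisk:
    def __init__(self, name, physical_disk, size_gb, read_only=False):
        self.name = name
        self.physical_disk = physical_disk
        self.size_gb = size_gb
        self.read_only = read_only
        self.attached_cpus = set()

    def attach(self, cpu_id):
        # Read-write slices belong to exactly one CPU; read-only slices
        # (e.g. a shared OS image) can be mounted by any number of CPUs.
        if not self.read_only and self.attached_cpus:
            raise ValueError(f"{self.name} is read-write and already attached")
        self.attached_cpus.add(cpu_id)

# Carve one 500GB physical disk into slices for different CPUs.
boot_image = VirtualDisk("shared-os", "disk0", 20, read_only=True)
scratch_a = VirtualDisk("scratch-a", "disk0", 240)
scratch_b = VirtualDisk("scratch-b", "disk0", 240)

boot_image.attach("cpu-3")
boot_image.attach("cpu-7")   # shared read-only image mounted by two CPUs
scratch_a.attach("cpu-3")
scratch_b.attach("cpu-7")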

Really, the technology that SeaMicro has puts the Moonshot Atom systems to shame. SeaMicro has the advantage that this is their 2nd or 3rd (or perhaps later) generation product; Moonshot is on its first.

Picture of Seamicro chassis with 256 servers

Moonshot provides 45 hot pluggable single socket dual core Atom processors, each with 8GB of memory and a single local disk in a 4.5U package.

SeaMicro provides up to 256 sockets of dual core Atom processors, each with 4GB of memory and virtualized storage. Or you can opt for up to 64 sockets of either quad core Intel Xeon or eight core AMD Opteron, with up to 64GB/system (32GB max for Xeon). All of this in a 10U package.

Let’s expand a bit more: Moonshot can get 450 servers (900 cores) and 3.6TB of memory in a 47U rack. SeaMicro can get 1,024 servers (2,048 cores) and 4TB of memory in a 47U rack. If that is not enough memory you could switch to Xeon or Opteron with a similar power profile; at the high end that is 2,048 Opteron cores (AMD uses a custom Opteron 4300 chip in the SeaMicro system, a chip not available for any other purpose) with 16TB of memory. Or maybe you mix and match. There are also fewer systems to manage: HP has 10 chassis per rack, and SeaMicro has 4. I harped on HP’s SL series a while back for similar reasons.
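
Here is the same rack-level comparison worked out from each vendor’s building block (assuming you dedicate the whole 47U rack to compute chassis):

# Per-rack density from chassis size, servers per chassis and memory per server.
moonshot_chassis_u, moonshot_servers, moonshot_gb = 4.5, 45, 8
seamicro_chassis_u, seamicro_servers, seamicro_gb = 10, 256, 4
rack_u = 47

m_chassis = int(rack_u // moonshot_chassis_u)   # 10 chassis per rack
s_chassis = int(rack_u // seamicro_chassis_u)   # 4 chassis per rack

print("Moonshot:", m_chassis * moonshot_servers, "servers,",
      m_chassis * moonshot_servers * moonshot_gb / 1000, "TB RAM")   # 450 servers, 3.6 TB
print("SeaMicro:", s_chassis * seamicro_servers, "servers,",
      s_chassis * seamicro_servers * seamicro_gb / 1000, "TB RAM")   # 1024 servers, ~4.1 TB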

SeaMicro also has dedicated external storage, which I believe extends the virtualization layer within the chassis, but I am not certain.

All in all it appears SeaMicro was years ahead of Moonshot before Moonshot ever hit the market. Maybe HP should have scrapped Moonshot and taken out SeaMicro when they had the chance.

At the end of the day I don’t see anything to get excited about with Moonshot, unless perhaps it’s really cheap (relative to SeaMicro anyway). The micro server concept is somewhat risky in my opinion. I mean, if you really have your workload nailed down to something specific and you can fit it into one of these designs, then great. Obviously the flexibility of such micro servers is very limited. SeaMicro of course wins here too, given that an 8-core Opteron with 64GB of memory is quite flexible compared to the tiny Atom with tiny memory.

I have seen time and time again people get excited about this and talk about how many more servers per watt they can get vs the higher-end chips. Most of the time they fail to realize how few workloads are CPU bound; simply slapping a hypervisor on top of a system with a boatload of memory can get you significantly more servers per watt than a micro server could hope to achieve. HOWEVER, if your workload can effectively exploit the micro servers, drive utilization up etc., then it can be a really good solution — in my experience those sorts of workloads are the exception rather than the rule, I’ll put it that way.
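
To show the shape of that argument with numbers, here is a tiny illustration. Every figure in it is an assumption I picked for the example (clearly not measured data for Moonshot or any particular virtualization host):

# Illustrative assumptions only, not measured figures.
big_host_watts = 400           # assumed draw of a 2-socket host with lots of RAM
vms_per_big_host = 40          # assumed count of light, non-CPU-bound workloads

micro_node_watts = 30          # assumed per-node draw incl. share of chassis
servers_per_micro_node = 1

print(round(vms_per_big_host / big_host_watts, 3), "servers per watt, virtualized")    # 0.1
print(round(servers_per_micro_node / micro_node_watts, 3), "servers per watt, micro")  # 0.033

Flip the assumptions (CPU-bound work, high per-node utilization) and the micro server side wins, which is exactly the point about it being workload dependent.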

It seems that HP is still evaluating whether or not to deploy ARM processors in Moonshot. In the end I think they will, but they won’t have a lot of success; the market is too niche. You really have to go to fairly extreme lengths to have a true need for something as specialized as ARM, and the complexities in software compatibility are not trivial.

I think HP will not have an easy time competing in this space. The hyperscale folks like Rackspace, Facebook, Google, Microsoft etc. all seem to be doing their own thing, and are unlikely to purchase much from HP. At the same time there is of course SeaMicro, among other competitors (Dell DCS etc.) making similar systems. I really don’t see anything that makes Moonshot stand out, at least not at this point. Maybe I am missing something.

December 18, 2012

Top 10 outages of the year

Filed under: Datacenter — Nate @ 11:02 am

It’s that time of the year again; top N lists are popping up everywhere, and I found this list from Data Center Knowledge to be interesting.

Of note, two big cloud companies were on the list with multiple outages – Amazon having at least three outages and Azure right behind it at two. Outages have been a blight on both services for years.

I don’t know about you, but short of a brief time at a poor hosting facility in Seattle (I joined a company in Spring of 2006 that was hosted there and we were moved out by Fall of 2006 – we did go through one power outage while I was there, if I recall right), the number of infrastructure-related outages I’ve been through over the past decade has been fairly minimal compared to the number experienced by these cloud companies. The number of application-related outages (and total downtime minutes incurred by said applications) outnumbers infrastructure-related things for me by at least 1,000:1, I’d say.

Amazon has caused far more downtime for companies I have worked for (either before or after I was there) than any infrastructure-related outages at the companies where we hosted our own stuff; I’d say it’s safe to say an order of magnitude more. Of course not all of these are called outages by Amazon; they leave themselves enough wiggle room in their SLAs to drive an aircraft carrier through. My favorite one was probably the forced reboot of their entire infrastructure.

Unlike infrastructure related outages at individual companies, obviously these large service provider outages have much larger consequences for very large numbers of customers.

Speaking of cloud, I heard that HP brought their own cloud platform out of beta recently. I am not a fan of this cloud either; basically they tried to clone what Amazon is doing in their cloud, which infrastructure-wise is a totally 1990s way of doing things (with APIs on top to make it feel nice). Wake me up when these clouds can pool CPU/memory/storage and dynamically configure systems without fixed configurations.

If the world happens to continue on after December 22nd @ 3:11AM Pacific time, and I don’t happen to see you before Christmas – have a good holiday from all of us monkeys at Techopsguys.

New Cloud provider Profitbricks

Filed under: Datacenter — Nate @ 9:02 am

(originally I had this on the post above this but I thought it better to split it out since it morphed into something that suited a dedicated post)

Also on the topic of cloud, I came across this other post on Data Center Knowledge’s site a few days ago talking about a new cloud provider called ProfitBricks.

I dug into their web site a bit and they really seem to have some interesting technology. They are based out of Europe, but have a U.S. data center somewhere too. They claim more than 1,000 customers, and well over 100 engineers working on the software.

While ProfitBricks does not offer pooling of resources, they do have several key architectural advantages that other cloud offerings I’ve come across lack.

They really did a good job, at least on paper. I haven’t used the service, though I did play around with their data center designer.

ProfitBricks Data Center designer

Their load balancing offering appears to be quite weak (weaker than Amazon’s own offering), but you can deploy a software load balancer like Riverbed Stingray (formerly Zeus). I emailed them about this and they are looking into Stingray; perhaps they can get a partnership going and offer it with their service. Amazon has recently improved their load balancing partnerships, and you can now run at least Citrix NetScaler as well as A10 Networks’ SoftAX in EC2, in addition to Riverbed Stingray. Amazon’s own Elastic Load Balancer is worse than useless in my experience; I’d rather rely on external DNS-based load balancing from the likes of Dynect than use ELB. Even with Stingray it can take several seconds (up to about 30) for the system to fail over with Elastic IPs, vs normally sub-second failover when you’re operating your own infrastructure.

Anyway, back to ProfitBricks. I was playing around with their designer tool and I was not sure how best to connect servers that would be running load balancers (assuming they don’t provide the ability to do IP takeover). I thought maybe have one LB in each zone and advertise both data center IP addresses (this is a best practice in any case, at least for larger providers), though in the above I simplified it a bit to a single internet access point, using one of ProfitBricks’ round-robin load balancers to distribute layer 4 traffic to the servers behind it (running Stingray). Some real testing, and further discussions, would of course have to happen before I’d run production stuff on it (and I have no need for IaaS cloud right now anyway).

So they have all this, and still their pricing is very competitive. They also claim a very high level of support, which is good to see.

I’ll certainly keep them in mind in the event I need IaaS in the future; they seem to know the failings of first-generation cloud companies and are doing good things to address them. Now if they could only address the lack of resource pooling, I’d be really happy!
