TechOpsGuys.com Diggin' technology every day

September 25, 2011

Real picture of Microsoft IT PAC

Filed under: Datacenter — Tags: , — Nate @ 10:34 am

I’ve mentioned it here and here, but finally there’s a real picture of their dense server design, and it looks pretty nice –

I haven’t tried to count, but there should be 96 servers per 57U rack (taller racks because they are in shipping containers), with integrated UPSs, and I am happy to see they are not placing all of their switches at the top of the rack as earlier diagrams seemed to indicate.

There you have it, the most innovative server/rack design in the industry, at least the most innovative that I have come across. Too bad they aren’t reselling these things to other companies.

We also get another indication of just how many jobs these data centers generate when they come to town (i.e. not many, and certainly not enough to be worthy of the tax breaks Washington state was handing out)

Microsoft will invest an additional $150 million to expand its new data center in southern Virginia,
[..]
The expansion will add 10 jobs, bringing the total expected employment in Boydton to 60 positions.

I see occasional references to how much jobs cost when the government tries to create them; here is a good contrast for people making those comparisons – $15 million invested per job created (at least as measured at the end point).

September 21, 2011

HP considering replacing CEO

Filed under: General — Tags: — Nate @ 8:53 am

This would be great news if it happens: it seems HP is considering replacing their CEO. I don’t know whether Meg Whitman would be a good candidate or not (that’s not really my area of expertise), but Leo’s reign has been a disaster to date, not only for the company but for its customers and its stock price.

Not that the previous CEO was good; he was terrible too, known for slashing budgets, cutting corners for short-term profits, and killing morale across the company. HP execs seem to be doing everything they can to kill the company.

I’m seeing on CNBC that a “significant number of HP board members want to replace Apotheker”, which is somewhat ironic considering people believed Leo had recently rebuilt the board around people who would support him. I guess they may not be so loyal after all.

If the CEO goes, hopefully the bid for that UK software company Autonomy goes with him. HP should re-invest in their own stuff; several folks have pointed out that HP’s lack of R&D in recent years has almost forced them to acquire a lot of companies, and in the end they pay a much higher premium for that than doing things themselves. Just look at how much HP has spent on acquisitions vs IBM (the data for both is incomplete, but the trend shows HP running at about double IBM’s rate for what data is available).

HP has Vertica, they have 3PAR; they could use some more advanced networking, since 3COM just isn’t there, nor is ProCurve (I still believe HP bought 3COM for nothing more than market presence in China).

Microsoft’s cloud takes another hit

Filed under: Datacenter — Tags: — Nate @ 8:47 am

Just came across this from our friends at The Register. Nice to see Microsoft was up front about what caused the outage; full disclosure is a good policy.

“A tool that helps balance network traffic was being updated and the update did not work correctly. As a result, configuration settings were corrupted, which caused a service disruption,” he wrote.

It took some hours for normal service levels to resume and time for the changes to replicate across the planet.

Just a reminder for the less technical or non-technical out there: even the big clouds can have major downtime, even with all their fancy buzzword-compliant services. This is of course not the first outage for Microsoft, more like the third or fourth in recent months.

Microsoft isn’t alone either, of course; whether it’s Microsoft, Amazon, or Rackspace (among many smaller names), all have had their day in the spotlight on more than one occasion.

It seems cloud outages occur more frequently than outages outside the cloud, at least in my experience; then again, maybe I’ve just been lucky. It helps to be in a good data center.

September 19, 2011

BOOM! What was that noise?

Filed under: Random Thought — Tags: — Nate @ 9:24 am

It was Netflix shooting themselves in the other foot.

Seems they can’t get enough of pissing off their die-hard fans. I’ve seen a lot of people claim that the DVD-by-mail business is dragging them down because it’s so expensive to mail DVDs (really I think it’s the opposite these days). I don’t think it’s too expensive; if there are abusers out there (the ones that rent, rip and return), Netflix should target those users and have them pay more or cut them off, much like consumer internet services do for people who abuse the service.

One good comment from the Netflix blog (which has around 12,000 comments at this point) sums it up pretty well (I’d link to it directly but I don’t see a way to do that; it’s near the top of the list though) –

Thanks for the explanation and apology. That helps, but your arrogance is still so thick it’s palpable. The “I’m sorry if you were offended” is no apology at all. It just makes things worse.

I have been a Netflix customer and fan for many years. Have been a Netflix evangelist, turning on many friends to your service. I am still a customer but no longer a fan — I feel betrayed.

(I feel similarly about VMware at this point)

Though it sounds more like they are going to try to grab onto more hype, spin off the DVD-by-mail service and stick to streaming. In any case it seems like their remaining customers lose from pretty much any angle you look at it. I think the jig is up for streaming; I mean, if Hulu can’t pull it off with such big content producers as investors, who can? I don’t think anyone can, not for a while at least. Which is too bad.

The upside is that maybe by the time the licensing and legal stuff is worked out, the internet architecture will be at the point where it can better support streaming (I’m looking at you, multicast over IPv6, assuming you’re ever widely deployed and assuming you work at large scale).

This is obviously a panic move in response to the plunge in their stock price (the least they could have done is announce this last week with their other news; since the change won’t be happening for several weeks, it seems this decision was made in the past few days). Otherwise they would be taking their time and making the sites interoperable with each other (whether or not one of them is spun out).

Netflix recently predicted losing as many as a million subscribers; I would expect this change to increase that number significantly.

OK, no more posts about Netflix for a while – there just isn’t much happening in the tech industry that interests me these days (enough to write something about, anyway).

September 15, 2011

Netflix in a pickle

Filed under: Random Thought — Tags: , — Nate @ 7:53 pm

I wrote about why I canceled my Netflix subscription when they jacked up their rates; it all came down to content, or the lack thereof. I see people tout Netflix quite frequently, claiming to be willing and able to “cut the cord” to cable or satellite or whatever and go Netflix/Hulu/etc. I’m in the opposite boat myself: I’m more than happy to pay more to get more content. Netflix is too little content for, obviously, too little money. They haven’t struck the right balance for me (and it’s not as if they had a more expensive tier with more content).

They announced today that they expect to lose a million subscribers over this. Compound that with recently losing a content deal with Starz, and things are not looking so hot for Netflix; if I were a betting person I’d wager their best days are behind them (their content costs are skyrocketing and their growth will likely slow significantly vs past years). Their stock is down roughly 41% from the high it hit when they announced the change.

I understand Netflix had to raise rates because their costs have gone up and will continue to rise; they just handled the situation very poorly and are paying for it as a result. It is too bad. At one point it seemed Netflix could be ‘the thing’, as in having a model where they could potentially be the world leader in content distribution or something (and they had the market pretty saturated as far as the types of devices that can stream from Netflix – except of course for WebOS devices), but with the way their negotiations with the content producers are going, that seems unlikely at this point. As a side note, I read this about Netflix as well, and it made me kind of chuckle at their operations too. Though I’m sure in the grand scheme of things pissing a few million down the tubes for “cloud services” is nothing compared to their content costs.

Something I learned in the midst of these price changes and the uproar about them, which I really didn’t know before, is that streaming titles come and go on Netflix; what is available today may not be available tomorrow (for no obvious reason, unlike losing a content deal with Starz for example). Could it be that they have rights to put up only x% of someone’s content at any given time? I don’t know. But I was kind of surprised when I read (from multiple sources) claims that the same titles can be available, then not available, then available again. There is apparently some means of getting a gauge as to how long something might be available (I don’t remember what it was). It just goes to show how far we have to go until we ever get to this.

Next up – the impact of the vSphere 5 licensing fiasco. This will take longer to unfold, probably a year or more, but I have no doubt it will have a measurable impact (dare I say a significant impact) on VMware’s market share in the coming years. I was talking to a local VMware rep about this not too long ago and it was like talking to a brick wall – sad really.

I’ve spent more money in the past week buying old movies and TV shows that I want to have copies of than a year’s Netflix subscription would have cost me (I went on somewhat of a spree; I don’t do it all that often). But at least I know I have these copies and they aren’t going anywhere; I just have to rip them and upload them to my colo’d server for safe off-site backup.

Delaying IPOs..

Filed under: Random Thought — Tags: — Nate @ 7:31 am

It seems the other major social media companies are delaying their IPOs: first Zynga, now Facebook. Even Groupon delayed, though it seems they may be going forward now. I know it’s a different situation, but every time I heard news of these delays it reminded me of a brief (3-month) stint I had at freeinternet.com as the bubble was bursting. The company was gathered in a hotel meeting hall and the CEO was talking about stuff; the only thing I remember from the meeting was that their investors were delaying the IPO due to “market conditions”, that the investors wanted their “quality companies” to wait till things improved (this was about July 2000 if I recall right).

The conspiracy theorist in me thinks that Groupon is doing anything they can to get their IPO out the door because their model is by far the shakiest and they need to cash out before it’s too late. Zynga is not far behind. Facebook makes some real money at the moment, though the hype doesn’t hold water, and I’m sure they hope that in 2012 or 2013 they may be luckier.

The joke’s on them though; the economy is still going to be in the crapper in 2012, and 2013, and 2014 and 2015, and probably 2016 and 2017 too.

One thing these IPOs do seem to have an impact on is the local housing market in the Bay Area; a lot of folks (especially in Palo Alto) apparently want to sell their houses but are holding off until these IPOs happen, hoping some newly minted executive will buy them for big bucks.

Life without a bubble is tough. Whether it’s social media, cloud computing, or the government trying to re-inflate housing (and other assets, with low interest rates and such), there’s a lot of interest in building another massive bubble.

September 8, 2011

What an outage..

Filed under: Random Thought — Tags: — Nate @ 10:32 pm

I’ve caused my share of outages, whether it’s applications, systems, networking, storage. My ratio of fixing outages to causing outages is quite good though, so overall I think I do alright.

But every time I am the cause of an outage it’s hard not to feel guilty in some way, right? Even if it was an honest mistake. I was just looking at the local news reporting on the power outage in southern California and Arizona; an Arizona power company believes an employee working at a substation triggered the cascading failure, causing:

  • A power outage for up to 5 million people in two states
  • A killed commute for those in San Diego tonight
  • The shutdown of a San Diego airport
  • School closures in San Diego tomorrow
  • Even a nuclear reactor taken offline for safety

I’m not sure what kind of person this employee is, of course. It may have just been an honest mistake, or it may not have been a mistake at all; maybe they were doing exactly the right thing and something else failed, who knows. But I certainly do feel for them; that level of guilt has got to be hard to bear.

But at the same time how many people can brag that they single handedly took out a nuclear reactor?

I suppose the bigger issue is the design of the grid, and how one fault can cascade to impact so many; it’s reported that the outage even spread to the northern portion of Mexico. Stuff like this really makes me fear the wide-scale deployment of “smart grid” technology, which I believe will make the grid far, far more vulnerable than it already is today.

HDS absorbs BlueArc

Filed under: Storage — Tags: , , — Nate @ 12:07 am

It seems HDS has finally decided to buy out BlueArc after what was either two or three failed attempts at an IPO.

BlueArc, along with my buddies over at 3PAR, is among the few storage companies that put real silicon to work in their systems for the best possible performance. Their architecture is quite impressive, and the performance (that is, of their mid-range system) shows it.

I have only been exposed to their older stuff (5-6 year old technology) directly, not their newer technology. But even their older stuff was very fast and efficient, very reliable and had quite a few nifty features as well. I think they were among the first to do storage tiering (for them at the file level).

[ warning – a wild tangent gets thrown in here somewhere ]

While their NAS technology was solid (IMO), their disk technology was not. They relied on LSI storage, and the quality of that storage was very poor overall. First off, whoever set up the system we had configured everything as RAID 5 12+1; then there were the long RAID rebuild times, the constantly moving hot spots because of the number of different tiers of storage we had, and the fact that the three Titan head units were not clustered, so we had to take hard downtime for software upgrades (not BlueArc’s fault, other than perhaps making clustered heads too expensive at the time the company bought the stuff, long before I was there). Every time we engaged with BlueArc, 95% of our complaints were about the disk. For the longest time they tried to insist that “disk doesn’t matter”, that you could put any storage system behind the BlueArc and it would be the same.
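As an aside, here is a rough back-of-the-envelope sketch (my own illustrative math with a placeholder drive size, not BlueArc’s or LSI’s figures) of the trade-off a wide RAID 5 group like 12+1 makes: a bit more usable space in exchange for a lot more data that has to be read back to rebuild a single failed drive.

```python
def raid5_usable_fraction(data_disks: int) -> float:
    """Fraction of raw capacity that is usable in a RAID 5 group of data_disks + 1 parity."""
    return data_disks / (data_disks + 1)

def rebuild_read_tb(data_disks: int, disk_tb: float) -> float:
    """Data that must be read from the surviving disks to rebuild one failed drive."""
    return data_disks * disk_tb

for data_disks, label in [(12, "12+1 (what we had)"), (5, "5+1"), (3, "3+1")]:
    print(f"RAID 5 {label}: usable {raid5_usable_fraction(data_disks):.0%}, "
          f"rebuild reads {rebuild_read_tb(data_disks, 0.5):.1f} TB "
          f"(assuming 500 GB drives)")
```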

After the third or fourth engagement BlueArc significantly changed their tune (not sure what prompted it): they now acknowledged the weakness of the low-tier storage and promoted the use of HDS AMS storage (the USP was, well, waaaaaaaay out of our price range), since they were an HDS partner back then as well. The HDS proposal fell far short of the design I had put together with 3PAR and Exanet, which was 3PAR’s NAS partner of choice at the time.

If I could have chosen, I would have used BlueArc for NAS and 3PAR for disk. 3PAR was open to the prospect of course; BlueArc claimed they had contacted 3PAR to start working with them, but 3PAR said that never happened. Later BlueArc acknowledged they were not going to try to work with 3PAR (or any other storage company besides LSI or HDS – I think 3PAR was one digit too long for them to handle).

Given that the BlueArc system lacked the ability to provide us with any truly useful disk performance statistics, it was tough coming up with a configuration that I thought would work as a replacement. There were a large number of factors involved, and any one of them had a fairly wide margin of error. You could say I pulled a number out of my ass, but I did do more calculations than that (I have about a dozen pages of documentation I wrote at the time on the project); still, at the end of the day the initial configuration was a stab in the dark.

BlueArc as a company, at the time, didn’t really have their support operation figured out yet. The first sign was when scheduled downtime for a software upgrade that was intended to take 2-3 hours ended up taking 10-11 hours, because there was a problem and BlueArc lacked the proper escalation procedures to resolve it quickly enough. Their CEO later sent us a letter saying they had fixed that process in the company. The second sign was when I asked them to confirm the drive type/size of all of our disks so I could do some math for the replacement system. They did a new audit (someone had to be on site to do it for some reason), and it turned out we had about 80 more spindles than they thought we had (and we had bought everything through them). I don’t know how you lose track of that many disks in your support records, but somehow it fell through the cracks. Another issue: we paid BlueArc to relocate the system to another facility (again before I was at the company), and whoever moved it didn’t do a good job; they accidentally plugged both power supplies of a single shelf into the same PDU. Fortunately it was a non-production system. A PSU blew at one point and took out the PDU, which then took out that shelf, which then took out the file system the shelf was part of.

Even after all of that, my main problem with their solution was the disks. LSI was not up to snuff and the proposal from HDS wasn’t going to cut it. I told my management that there was no doubt HDS could come up with a solution that would work; it’s just that what they had proposed would not (they didn’t even have thin provisioning at the time; 3PAR was telling me HDS was pairing USP-Vs with AMSs in order to compete in the meantime, but they did not propose that to us). It was a combination of poor-performing SATA (on RAID 6, no less) for bulk storage and higher-performing 15k RPM disks for a smaller, higher tier. HDS/BlueArc felt it was equivalent to what I had specified through 3PAR and Exanet, not understanding the architectural advantages the 3PAR system had over the proposed HDS design (going into specifics would take too long, and you probably know them by now anyway if you’re here). Not to mention what seemed like sheer incompetence on the HDS team that was supporting us; it seemed nothing I asked them could be answered without engaging someone from Japan, and even then I rarely got a comprehensible answer.

So in the end we replaced a 4-rack BlueArc system with what could have been a single-rack 3PAR plus a few rack units for the Exanet, though we had to split the 3PAR across two racks due to weight constraints in the data center. We went from 500+ disks (a mix of SATA-I and 10k RPM FC) to 200 disks of SATA-II (RAID 5 5+1). With the change we also gained the ability to run Fibre Channel (which we ran to all of our VM boxes as well as the primary databases) and iSCSI (which we used here and there; 3PAR’s iSCSI support has never been as good as I would have liked, though for anything serious I’d rather use FC anyway, and that’s what 3PAR’s customers did, which led to some neglect on the iSCSI front).

Half the floor space, half the power usage, roughly the same amount of usable storage, about the same amount of raw storage. We put the 3PAR/Exanet system through its paces with our most I/O-intensive workload at the time and it absolutely screamed. I mean it exceeded everyone’s expectations (mine included). But that was only the beginning.

This is a story I like to tell on 3PAR reference calls when I do them, which is becoming more and more rare these days. In the early days of our 3PAR/Exanet deployment the Exanet engineer assured me that they were thin-provisioning friendly; he had personally used 3PAR+Exanet in the past and it worked fine. So, with time constraints and all, I provisioned a file system on the Exanet box without thinking too much about the 3PAR end of things. It’s thin-provisioning friendly, right? RIGHT?

Well, not so much. Before you knew it the system was in production and we started dumping large amounts of data on it, and deleting large amounts of data from it, and within a few weeks I found out the Exanet box preferred to allocate new space rather than reclaim deleted space. I did some calculations and the result was not good: if we let the system continue at this rate, the Exanet file system, allowed to grow to its full potential, was going to exceed the capacity of the 3PAR box. Not good. Compound that with the fact that we were at the maximum addressable capacity of a 2-node 3PAR box; if I had to add even one more disk to the system (not that adding one disk is possible in that system given the way disks are added, the minimum is four), I would have had to put in two more controllers, which as you might expect is not exactly cheap. So I was looking at either a very costly downtime to do data migration or a very costly upgrade to correct my mistake.
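The projection itself was nothing fancy; it amounted to something along these lines, with made-up numbers here since the real figures are long gone:

```python
def days_until_full(allocated_tb: float, array_tb: float, net_growth_tb_per_day: float) -> float:
    """Days of headroom left if the filer keeps allocating new space
    instead of reclaiming what was deleted underneath it."""
    return (array_tb - allocated_tb) / net_growth_tb_per_day

# hypothetical: 60 TB usable on the array, 40 TB already allocated,
# net allocation creeping up by 0.25 TB/day despite the deletes
print(f"~{days_until_full(40, 60, 0.25):.0f} days of headroom")  # ~80 days
```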

Dynamic Optimization to the rescue. This really saved my ass. I mean really, it did. When I built the system I used RAID 5 3+1 for performance (for 3PAR that is roughly ~8% slower than RAID 10, and 3PAR’s fast RAID 5 is probably on par with many other vendors’ RAID 10 due to the architecture).

So I ran some more calculations and determined that if I could get to RAID 5 5+1 I would have enough space to survive. So I began the process, converting roughly a half dozen LUNs at a time, 24 hours a day, 7 days a week. It took longer than I expected; the 3PAR was routinely getting hammered by daily activity from all sides, and in the end it took about 5 months to convert all of the volumes. Throughout the process nobody noticed a thing. The array was converting volumes 24 hours a day for 5 months straight and nobody noticed (except me, who was babysitting it hoping I could beat the window). If I recall right I had maybe 3-4 weeks of buffer; if my conversions had taken an extra month I would have exceeded the capacity of the system. So I got lucky, I suppose, but I had also bought the system knowing I could make exactly this kind of course correction online without impacting applications; I just didn’t expect the event to come so soon and on such a large scale.
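The capacity math behind that decision is simple enough to sketch (illustrative only; it ignores sparing, chunklet overhead and metadata):

```python
raw_tb = 100.0                 # hypothetical raw capacity behind these volumes

usable_3p1 = raw_tb * 3 / 4    # RAID 5 3+1: 75% of raw is usable
usable_5p1 = raw_tb * 5 / 6    # RAID 5 5+1: ~83% of raw is usable

print(f"RAID 5 3+1: {usable_3p1:.1f} TB usable")
print(f"RAID 5 5+1: {usable_5p1:.1f} TB usable")
print(f"Gain from converting: {usable_5p1 / usable_3p1 - 1:.1%}")  # ~11% more usable space
```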

One of the questions I had for HDS at the time we were evaluating them was whether they could do the same online RAID conversions. The answer? Absolutely they can. But the fine print was (and I assume still is) that you needed blank disks to migrate to. Since 3PAR’s RAID is performed at the sub-disk level, no blank disks are required, only blank “chunklets” as they call them. Basically you just need enough empty space on the array to mirror the LUN/volume to the new RAID level; then the mirror is broken and the source is eliminated (this is all handled transparently with a single command and some patience, depending on system load and the volume of data in flight).
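To make the mechanics concrete, here is a toy model of that mirror-and-break flow, tracked purely as capacity accounting. The function name and the numbers are mine for illustration; on the real array this is one command, not an API like this.

```python
def convert_volume(volume_tb: float, old_overhead: float, new_overhead: float,
                   free_tb: float) -> float:
    """Convert one volume to a new RAID layout; return free capacity afterwards."""
    old_footprint = volume_tb * old_overhead     # space the volume occupies today
    new_footprint = volume_tb * new_overhead     # space the mirror copy will need
    if free_tb < new_footprint:
        raise RuntimeError("not enough free chunklets to build the mirror")
    free_tb -= new_footprint                     # mirror is built alongside the original
    free_tb += old_footprint                     # mirror broken, old chunklets reclaimed
    return free_tb

free = 12.0                                      # TB of free chunklets (hypothetical)
for _ in range(6):                               # half a dozen 4 TB volumes, 3+1 -> 5+1
    free = convert_volume(4.0, old_overhead=4 / 3, new_overhead=6 / 5, free_tb=free)
print(f"{free:.1f} TB free after the batch")     # each conversion frees a little more space
```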

As time went on we loaded the system with ever more processes and connected ever more systems to it as we got off the old BlueArc(s). I kept seeing the IOPS (and disk response times) on the 3PAR going up... and up... I really thought it was going to choke; I mean, we were pushing the disks hard, with sustained disk response times in the 40-50ms range at times (and rare spikes to well over 100ms). I just kept hoping for the day when we would be done and the increase would level off, and eventually, for the most part, it did. I built my own custom monitoring system for the array for performance trending, since I didn’t like the query-based tool they provided as much as what I could generate myself (despite the massive amount of time it took to configure my tool).
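For the curious, the home-grown tool amounted to little more than this pattern: poll the array on a schedule, keep the raw samples somewhere queryable, and chart the trends later. The sample_array() function below is a stand-in I made up; the real data came from the array’s own performance reporting, which I’m not reproducing here.

```python
import random
import sqlite3
import time

def sample_array() -> dict:
    """Stand-in for a real poll of array-wide IOPS and service times."""
    return {"iops": random.randint(4000, 9000),
            "svc_ms": random.uniform(10, 60)}

db = sqlite3.connect("array_perf.db")
db.execute("CREATE TABLE IF NOT EXISTS perf (ts INTEGER, iops INTEGER, svc_ms REAL)")

for _ in range(3):                               # in real life: a cron job every few minutes
    s = sample_array()
    db.execute("INSERT INTO perf VALUES (?, ?, ?)",
               (int(time.time()), s["iops"], s["svc_ms"]))
    db.commit()

avg_ms = db.execute("SELECT AVG(svc_ms) FROM perf").fetchone()[0]
print(f"average service time over the stored window: {avg_ms:.1f} ms")
```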

I did not know a 7200RPM SATA disk could do 127 IOPS of random I/O.
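For context, the textbook math for a 7200 RPM drive says it shouldn’t be able to (the figures below are typical-ish assumptions, not measurements from those particular drives):

```python
rpm = 7200
avg_seek_ms = 8.5                               # a plausible average seek for a 7200 RPM SATA drive
rotational_latency_ms = 60_000 / rpm / 2        # half a revolution, ~4.2 ms

iops = 1000 / (avg_seek_ms + rotational_latency_ms)
print(f"~{iops:.0f} random IOPS expected per drive")   # ~79 IOPS

# 127 sustained random IOPS is well beyond that; command queueing, caching and
# some locality in the workload are what make up the difference.
```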

We had this one process that would dump up to say 50GB of data from upwards of 40-50 systems simultaneously, as fast as they could go. Needless to say, when this happened it blew out all of the caches across the board and brought things to a grinding halt for some time (typically 30-60 seconds). I would see the behavior on the NAS system, log in to my monitoring tool and just watch it hang while it tried to query the database (which was on the same storage). I would cringe and wait for the system to catch up. We tried to get them to re-design the application so it was more thoughtful of the storage, but they weren’t able to. Well, they did re-design it one time (for the worse). I tried to convince them to put it on Fusion-io on local storage in the servers, but they would have no part of it. Ironically, not long after I left the company they went out and bought some Fusion-io for another project; I guess as long as the idea was not mine it was a good one. The storage system was entirely a back-office thing, no real-time end-user transactions ever touched it, which meant we could live with the higher latency from pushing the SATA drives 30-50% beyond engineering specifications.
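Some quick arithmetic shows why a burst like that hurts; every number below is a placeholder (link speeds, per-spindle drain rate, cache size), the point is the ratio:

```python
# assume the bad case: each writer pushes ~100 MB/s over GigE, all at once
writers = 45
ingest_mb_s = writers * 100
print(f"incoming: ~{ingest_mb_s / 1024:.1f} GB/s")

# what ~200 7200 RPM SATA spindles can realistically drain for random-ish writes
drain_mb_s = 200 * 10                            # ~10 MB/s each, and that is generous
print(f"drain rate: ~{drain_mb_s / 1024:.1f} GB/s")

write_cache_gb = 16                              # hypothetical array write cache
seconds = write_cache_gb * 1024 / (ingest_mb_s - drain_mb_s)
print(f"cache fills in ~{seconds:.0f} seconds, then everything waits on the disks")
```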

At the end of the first full year of operation we finally got budget to add capacity to the system. We had shrunk the overall theoretical I/O capacity by probably two thirds vs the previous array, had absorbed what seemed like almost 200% growth on top of that during the first year, and the system held up. I probably wouldn’t have believed it if I hadn’t seen it (and lived it) personally. I hammered 3PAR as often as I could to increase the addressable capacity of their systems, which was limited by the operating system architecture; it doesn’t take a rocket scientist to see that their systems had 4GB of control cache (per controller), which is a common limit for 32-bit software. But the software enhancement never came, while I was there at least. It is there in some respect in the new V-class, though as mentioned the V-class seems to have had an arbitrary raw capacity limit placed on it that does not align with the amount of control cache it can have (up to 32GB per controller). With 64-bit software and more control cache I could have doubled or tripled the capacity of the system without adding controllers.
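For what it’s worth, the 4GB figure lines up exactly with the reach of a 32-bit address space, which is why I pinned the limit on the software rather than the hardware:

```python
print(f"{2**32 / 2**30:.0f} GiB addressable with 32-bit pointers")  # 4 GiB
```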

Adding the two extra controllers did give us one thing I wanted – persistent cache. That’s just an awesome technology to have, and you simply can’t do that kind of thing on a 2-controller system. It also gave us more ports than I knew what to do with.

What happened to the BlueArc? Well, after about 10 months of trying to find someone to sell it to – or give it to – we ended up paying someone to haul it away. When HDS/BlueArc were negotiating with us on their solution, they tried to harp on how we could leverage our existing BlueArc disk in the new solution as another tier. I didn’t have to say it, my boss did, which made me sort of giggle: he said the operational costs of running the old BlueArc disk (support was really high, plus power and co-lo space) were more than the disks were worth. BlueArc/HDS didn’t have any real response to that, other than perhaps to nod their heads, acknowledging that we were smart enough to realize that fact.

I would still like to use BlueArc again; I think it’s a fine platform, I just want to use my own storage behind it 🙂

This ended up being a lot longer than I expected! Hope you didn’t fall asleep. It came in at right around 2,600 words... there.

September 2, 2011

EMC’s Server strategy: use our arrays?

Filed under: Security — Tags: — Nate @ 8:13 am

I just read this from our friends at The Register. I just have one question after reading it:

Why?

Why would anyone want to use extremely premium CPU/memory resources on a high-end enterprise storage system to run virtual servers? What’s the advantage? You could probably buy a mostly populated blade enclosure from almost any vendor for the cost of a VMAX controller.

If EMC wants in on the server-based flash market they should just release some products of their own or go buy one of the suppliers out there.

If EMC wants to get in on the server business they should just do it, don’t waste people’s time on this kinda stuff. Stupid.

 
