TechOpsGuys.com Diggin' technology every day

June 14, 2012

Nokia’s dark future

Filed under: Random Thought — Tags: — Nate @ 8:00 am

Nokia made some headlines today, chopping a bunch of jobs, closing factories and stuff. With sales crashing and market share continuing to slip, their reliance on Windows Phone has been a bet that has gone bad, at least so far.

What I find more interesting though is what Microsoft has gotten Nokia to inflict upon itself. Nokia is basically investing much of its remaining resources into turning itself into a Microsoft shop. Meanwhile its revenues decline and its market valuation plunges. There were apparently talks last year about Microsoft buying Nokia outright, but they fell through. For good reason – all Microsoft has to do is wait. Nokia is doing their bidding already, and the valuation of the company shrinks as time goes on. From a brand name standpoint Nokia doesn't really exist in the smart phone world anymore, so there isn't much to lose (other than the really good employees that may be jumping ship in the meantime – though I'm sure Nokia keeps MS aware of who is leaving so MS can contact them if they want to try to hire them back).

At some point, barring a miracle, Nokia will get acquired. By investing itself so heavily in Microsoft technologies now, Nokia is naturally preparing itself for assimilation – and at the same time making itself less attractive to most other buyers, precisely because it is so committed to the Microsoft platform. Other buyers may come in and say we want to buy the patents, or this piece or that piece. But then Microsoft can come in and offer a much higher price, because all of the other parts of the company have much more value to them.

Not that I think going the Microsoft way was a mistake. All too often I see people say all Nokia had to do was embrace Android and they'd be fine. I don't agree at all. Look at the Android marketplace: there are very few standouts, Samsung being the main one these days (Apple and Samsung receive 90%+ of the mobile phone profits) – though I believe as recently as perhaps one year ago it was HTC, and they have fallen from grace as well. There's not enough to differentiate in the Android world. There are tons of handset makers, most of them absolute crap (very cheap components, breaks easily, names you've never heard of), and the tablets aren't much better.

So the point here is that being just another me too supplier of Android wasn't going to cut it. To support an organization that large they needed something more extraordinary. Of course that is really hard to come up with, so they went to Microsoft. It's too bad that Nokia, like RIM and even Palm (despite me being a WebOS fan and user, the WebOS products were the only Palm-branded products I have ever owned), floundered so long before they came up with a real strategy.

HP obviously realized this as well, given the HP Touchpad was originally supposed to run Android – before the Palm acquisition. Which would explain the random Touchpad showing up (from RMA) in customers' hands running Android.

Palm's time of course prematurely ran out last year (HP's original plan had a three year runway for Palm). Nokia and RIM still have a decent amount of cash on hand, and it remains to be seen if they have enough time to execute on their plans. I suspect they won't, with Nokia ending up at Microsoft. RIM I don't know – I think it would make another good MS fit, primarily for the enterprise subscribers, though by the time the valuation is good enough (keeping in mind MS will acquire Nokia first) there may not be enough of those subscribers left. Unless RIM breaks apart, sells the enterprise biz to someone like MS, and maintains a smaller global organization supporting the users where they still have a lot of growth, which seems to be in emerging markets.

Of course Nokia is not the only one making Windows Phone handsets, but at least that market is still so new (at least with the latest platform) that there is a better opportunity for them to stand out amongst the other players.

Speaking of the downfall of Nokia and RIM, there was a fascinating blog post a while back about the decline of Apple now that the founder is gone. It generated a ton of buzz, and I think the author makes a lot of valid points.

Now that I’ve written that maybe my mind can move on to something else.

June 11, 2012

3PAR and NPIV

Filed under: Storage — Tags: , — Nate @ 7:20 am

I was invited to a little preview of some of the storage things being announced at HP Discover last week; I just couldn't talk about it until the announcement. Since I was busy in Amsterdam all last week I really didn't have a lot of time to think about blogging here.

But I'm back and, I hope, mostly adjusted to the time zone differences. HP had at least two storage related announcements last Monday: one related to scaling of their StoreOnce dedupe setup and another related to 3PAR. The StoreOnce announcement seems to be controversial; since I have minimal exposure to that sort of product I won't talk about it much. On the surface it sounded pretty impressive, but if the EMC claims are true then it's unfortunate.

Anyways, onto the 3PAR announcement, which, while it had a ton of marketing around it, basically comes down to three words:

3PAR Supports NPIV (finally)

NPIV, in a nutshell the way I understand it, is a way of virtualizing connections between points in a fibre channel network. Most often in the past it seems to have been used to present storage directly to VM hosts, via FC switches. NPIV is also used by HP's Virtual Connect technology on the FC side to connect the VC modules to an NPIV-aware FC switch (which is pretty much all of them these days?), and then the switch connects to the storage (duh). I assume that NPIV is required by Virtual Connect because the VC module isn't really a switch, it's more of a funky bridge.

Because 3PAR did not support NPIV (for what reason I don't know – I kept asking them about it for years but never got a solid response as to why not or when they might support it), there was no way to directly connect a Virtual Connect module (either the new FlexFabric or the older dedicated FC VC modules) to a 3PAR array; you had to have a switch as a middleman. Which just seemed like a waste. I mean, here you have a T or now a V-class system with tons of ports, you have these big blade chassis with a bunch of servers in them, with the VC modules acting like a switch (acting as in aggregating points), and you can't directly connect it to the 3PAR storage! It was an unfortunate situation. Even going back to the 3cV, which was a bundle of sorts of 3PAR, HP c-Class Blades and VMware (long before HP bought 3PAR of course), I would have thought getting NPIV support would have been a priority, but it didn't happen until now (well, last Monday I suppose).

So at scale you have up to 96 host fibre channel ports on a V400 or 192 FC ports on a V800, operating at 8Gbps. At a maximum you could get by with 48 blade enclosures (2 FC/VC modules each with a single connection) on a V400, or of course double that to 96 on a V800. Cut it in half if you want higher redundancy with dual paths on each FC/VC module. That's one hell of a lot of systems directly connected to the array. Users may wish to stick to a single connection per VC module, allowing the 2nd connection to be connected to something else, maybe another 3PAR array – you still have full redundancy with two modules and one path per module. 3PAR 4Gbps HBAs (note the V-class has 8Gbps) have queue depths of something like 1,536 (not sure what the 8Gbps HBAs have). If you're leveraging full height blades you get 8 per chassis, so in the absolute worst case scenario you could set a queue depth of 192/server (I use 128/server on my gear). You could probably pretty safely go quite a bit higher, though more thought may be needed in certain circumstances. I've found 128 has been more than enough for my own needs.
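To make that worst case queue depth arithmetic concrete (a quick sketch, taking the ~1,536 figure above as a given – the 8Gbps parts may differ):

  # One 3PAR HBA port in the worst case serves a single VC uplink,
  # with 8 full-height blades behind it sharing that port's queue
  port_queue_depth   = 1536                    # per 4Gbps HBA port
  blades_per_chassis = 8                       # full height blades
  puts port_queue_depth / blades_per_chassis   # => 192 per server, worst case
  puts 128 * blades_per_chassis                # => 1024, comfortably under 1536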

It's cost effective today to easily get 4TB worth of memory per blade chassis, memory being the primary driver of VM density, so you're talking anywhere from 96 – 384 TB of memory hooked up to a single 3PAR array. From a CPU perspective, anywhere from 7,680 CPU cores all the way up to 36,864 CPU cores in front of a single storage system – a system that has been tested to run at over 450,000 SPC-1 IOPS. The numbers are just insane.
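A quick sanity check on the memory math – the chassis counts follow from the FC port counts in the previous paragraph:

  min_chassis = 96 / 4    # => 24 (V400, dual paths on each FC/VC module)
  max_chassis = 192 / 2   # => 96 (V800, one path per FC/VC module)
  puts min_chassis * 4    # => 96 TB of memory at 4TB per chassis
  puts max_chassis * 4    # => 384 TB of memory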

All we need now is a flat ethernet fabric to connect the Virtual Connect switches to – oh wait, we have that too, though it's not from HP. A single pair of Black Diamond X-Series switches could scale to the max here as well, supporting a full eight 10Gbit/second connections per blade chassis with 96 blade chassis directly connected – which, guess what, is the maximum number of 10GbE ports on a pair of FlexFabric Virtual Connect modules (assuming you're using two ports for FC). Of course all of the bandwidth is non-blocking. I don't know what the state of interoperability is, but Extreme touts their VEPA support in scaling up to 128,000 VMs in an X-Series, and Virtual Connect appears to tout their own VEPA support as well. Given the lack of more traditional switching functionality in the VC modules, it would probably be advantageous to leverage VEPA (whether or not this extends to the hypervisor I don't know – I suspect not, based on what I last heard from VMware, though I believe it is doable in KVM) to route that inter-server traffic through the upstream switches in order to gain more insight into it and even control it. If you have upwards of 80Gbps of connectivity per chassis anyways, there'd seem to be abundant bandwidth to do it. All HP needs to do now is follow Dell and revise their VC modules to natively support 40GbE (the Dell product, by contrast, is a regular blade Ethernet switch and is not yet shipping).

You'd have to cut at least one chassis out of that configuration (or reduce port counts) in order to have enough ports on the X-Series to uplink to other infrastructure. (When I did the original calculations I forgot there would be two switches, not one, so there are actually more than enough ports to support 96 blade chassis between a pair of X-8s going full bore with 8x10GbE/chassis, and you could even use M-LAG to go active-active if you prefer.) I'm thinking load balancers, and some sort of scale-out NAS for file sharing, maybe the interwebs too.
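The corrected port budget works out like this (assuming a maximum of 768 10GbE ports per X-8 at full density, which is where the 14U density figures come from):

  ports_per_x8 = 768              # assumed max 10GbE per X-8 via 40GbE breakout
  ports_needed = 96 * 8           # => 768 total, 8x10GbE per blade chassis
  per_switch   = ports_needed / 2 # => 384 on each X-8 of an M-LAG pair
  puts ports_per_x8 - per_switch  # => 384 ports per switch left for uplinks, NAS, etc.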

Think about that: up to 30,000 cores and more than 300 TB of memory – sure, you do have a bunch of bridges, but all of it is connected by only two switches and one storage array (perhaps two). Just insane.

One HP spokesperson mentioned that even a single V800 isn't spec'd to support their maximum blade system configuration of 25,000 VMs. 25k VMs on a single array does seem quite high (that comes to an average of 18 SPC-1 IOPS/VM), but it really depends on what those VMs are doing. I don't see how folks can go around tossing out solutions saying X number of VMs when workloads and applications can vary so widely.
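(For reference, that 18 IOPS/VM figure is just the tested SPC-1 result spread evenly across the VMs:)

  puts 450_000 / 25_000   # => 18 SPC-1 IOPS per VM, on average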

So in short, the announcement was simple – 3PAR supports NPIV now – the benefits of that simple feature addition are pretty big though.

Back from Amsterdam

Filed under: Uncategorized — Tags: — Nate @ 5:20 am

I'm back from Amsterdam – it was about what I expected. I basically stuck to the hotel and the data center – I even skipped out on that little cruise I pre-paid for, just didn't feel like going. I knew I disliked traveling and this trip was a massive reminder as to why. About the only thing that was a positive surprise for me was the long haul flights. I was dreading them at first, but the nice reclining seats and big screen LCDs allowed me to kick back and stretch my legs without getting the usual cramps and discomfort. My flight to Amsterdam was on a single airline, with a stop in Chicago where the transfer of planes was amazingly short – it was about 150 feet between the gates. I was afraid that it was going to be far and maybe I'd miss the flight (I don't have much recent flying experience; the last time I had to make a connecting flight was, I'd wager, 20 years ago).

I got confused as to my flight schedule (not for the first time) and I arrived in Amsterdam about eight hours before I thought I was going to arrive. The hotel was alright, I mean for the price at least – it was around $200/night or something, which seemed pretty typical for a city room. First thing I noticed is it took me a good 3-5 minutes to figure out how the lights worked (had to put the hotel key card in a slot to activate them). Took a shower after the long flight – no washcloth? Maybe that is not typical in Europe, I don't know; I seem to recall washcloths at hotels I was at in Asia growing up. The toilet was a very strange design, it was like this, which had a couple drawbacks. The mini bar in the room was automatic – I didn't notice that until the 2nd day – so you can't even take something out to look at it without being charged. I ended up taking quite a bit of things out.

There was a sort of mini mart at a Shell station about a half mile away that I walked to to buy drinks and stuff on a couple of occasions; the selection paled in comparison to similar stores in the U.S. The first time I went I literally saw a line of cars at the pumps. I don't know if gas was cheap, or if it was rush hour, or if it was the only gas station in the area, but it really reminded me of the pictures of the gas shortages in the 70s in the U.S. There weren't many pumps – I think 4 or 5, less than half a typical gas station here.

On the first leg of my flight the passenger next to me said watch out for the bikes – but didn't elaborate. Wow – I had not seen so many bikes since I lived in China in 89-90. They certainly have their bike infrastructure laid out pretty well, with dedicated pathways for bikes as well as dedicated street crossing signals etc. On one of my walks around the hotel area I walked through what appeared to be their version of the Park and Ride. Where here the park and rides are parking lots filled with cars, this one was filled with bikes and was pretty much entirely under a freeway overpass. It seemed like a large number of the bikes weren't even locked up. The overall quality of the bikes seemed low – I suppose that is at least partly to reduce theft, by not having nice fancy bikes, but I'm not sure. More than anything, when I saw the bike stuff it made me think this must be what those hippies in Seattle and SFO want. It was certainly an interesting design, too much of a culture shock for me though.

I found the intersections very confusing and am even more glad I did not try to rent a car while I was there.

Speaking of cars, wow are they small over there. I can't recall seeing even a single pickup truck (of any size) while I was there. I saw a bunch of cars like mine, and there was this other really tiny car which made those tiny Smart cars look big – it was smaller than a golf cart. I missed a few opportunities to take pictures of them; I'm sure I could find them online somewhere. The taxi drivers drove sort of crazy, drifting between lanes and stuff. One of them blew way through a red light (the other lights must've turned green already), which was freaky. I recall on that same trip we were behind some kind of small van that had a radiation warning sign on it.

The data center was – interesting, I guess. Everyone had to wear protective booties over their shoes while on the floor, which was a first for me – way overkill I think. Nothing really exciting there; I got everything done that I needed to get done.

I spent hours looking online for places to go but could not find anything that I was interested in. Well, there was one thing, I just couldn't figure out how to do it. I was really interested in seeing the big water structures they use to hold back the sea. The biggest of them appeared to be a 2 hour drive from the city (too far). There were a couple of tours that hit them, but they were a minimum 8 hour commitment, which was too long. This was my first trip where I did not have a car at the destination, and that was a good part of why I didn't do anything or go anywhere – normally I would just roam around, but relying on taxis I really had to have a precise destination. I wasn't about to rent a car; I really did not feel anywhere near comfortable enough to drive in a foreign country like that.

While everyone said "they all speak English!" – and most people did speak great English – the destinations for me were for the most part unpronounceable (Schepenbergweg was the street the data center was on – I heard it pronounced at least a dozen times and at the end was no closer to being able to pronounce it myself than after hearing it the first time). Because of the $20 per megabyte roaming data fees on my phone I kept the data services disabled throughout my trip, which of course limited my ability to find stuff while not at the hotel or data center. I was especially worried about getting lost and having to call for a taxi, not being able to pronounce where I was, and the taxi not being able to find me. I don't know what it was like in the real downtown parts of town, but in all the places I visited while growing up in Asia there were taxis everywhere that you could just flag down. I did not see that in the areas I was at in Amsterdam. The hotel called me a taxi to go to the data center, and I asked the security guards at the data center to call me a taxi to get back.

So in the end I ate most of my meals at the hotel and never went to the downtown part of town. I walked around a bit near the hotel and took some pictures of the area, nothing special. It really reminded me how much I dislike traveling in general.

The flight back was a little more frustrating: having to stop in London and go through customs and immigration, plus a pretty long trip to change terminals – I feel like I barely made the flight despite having a 2 hour stopover. I had to ask multiple people for help while there too, because while I had a boarding pass it didn't tell me which gate or even which terminal to go to. Even once I knew where to go, getting there wasn't clear either. The whole place was very confusing, and as a result very frustrating.

This is the first trip I’ve taken in recent memory where I was really excited about going home. I wasn’t looking forward to it to begin with and it turned out about the way I expected. Hopefully that’s my last trip for a long time to come.

I thought about going somewhere fancy to eat or something, but I really couldn't find anything of interest. Add to that I don't like going out alone; if I'm with a friend things are different. When it comes to things like fancy steak or pasta or whatever, I really don't have the sensitivity to tell the difference between most of them, so I wouldn't be able to appreciate the good stuff, and there really isn't a whole lot of point in me going. There was a BBQ + grill near the data center (emphasis on was) – the sign was still up but the building was empty. When I was in Atlanta I went with a local friend to two different nice places that I really enjoyed; I tried finding something sort of along those lines in Amsterdam but came up with nothing. Most of the places seemed too exotic or too fancy/upper class.

Apparently I left on the day things were going to get crazy – some special soccer game was being played on Saturday afternoon (I left at around noon). I've never been much of a soccer fan, at least not since I played it back in 5th grade and earlier. About the only sport I do enjoy watching is pro football, and even that interest has been waning over recent years.

I did all of my shopping at the airport – picked up a bunch of Dutch chocolate, going to give most of it away. I tried some of it and it tastes like regular chocolate to me. I live a mile or so away from a pretty big See's Candy operation – I bought some of their stuff for Christmas gifts last year, and it tastes similar to the Dutch stuff, if not better. Also picked up a couple of picture books of the area, along with some shot glasses for friends and/or family or something.

I got back a full day earlier than I expected. I was absolutely sure yesterday was Monday, until I woke up at 5:30AM and turned to CNBC only to see it was Sunday. I got back on Saturday afternoon.

Contrast that with my next trip, which I think will be in early July at this point: a road trip up to Seattle. I decided to take the coast up north at least to Crescent City, CA. I've been wanting to take my new car along the coast since I bought it over a year ago. I made the coastal trip a couple of times several years ago, but not in a car as fun to drive as the one I have at the moment. I'm not sure if I will spend two or three days driving up. I'm really looking forward to that. I think it may have been really cool to drive along the coast of the Netherlands too, but I really didn't have a way to make that happen while I was there.

One of my friends from SEA is in town for a few days; I intend to take tomorrow off and go see him down in Morgan Hill, CA (60 miles away). Should be good times to catch up and hang out at this nice place he's been talking about.

June 1, 2012

London Internet Exchange downed by Loop

Filed under: Networking — Tags: , — Nate @ 8:08 am

This probably doesn't happen very often at these big internet exchanges, but I found the news sort of interesting.

I had known for a few years that the LINX was a dual vendor environment – one side was Foundry/Brocade, the other was Extreme. They are one of the few places that go out of their way to advertise what they use; I'm sure it gets them a better discount :) It seems the LINX replaced the Foundry/Brocade side with Juniper at some point since I last checked (less than a year ago), though their site still mentions usage of EAPS (Extreme's ring protocol) and MRP (Foundry's ring protocol). I assume Juniper has not adopted MRP, though they probably have something similar. Looking at the design of the Juniper LAN vs the Extreme LAN (and the Brocade LAN before Juniper), the Juniper one looks a lot more complicated. I wonder if they are using Juniper's new protocol(s) to manage it – QFabric I think it's called? It seems LINX still has some Brocade in one of their edge networks.

Apparently the Juniper side is what suffered the loop –

“Linx is trying to determine where the loop originated and we are also addressing why the protection on Juniper’s LAN didn’t work.”

I wanted to point out again, since it's been a while since I covered it (and even then it was buried in the post, not part of the title), that Extreme has a protocol that can detect and, in some cases, recover from loops automatically. As far as I know it is unique – let me know if there is another vendor or protocol that is similar (note of course I am not referring to anything like STP). I've only used it in detect mode to date. I was also telling someone about this protocol who was learning the ropes on Extreme gear after coming from a Juniper background, so I thought I would mention it again.

The protocol is the Extreme Loop Recovery Protocol (ELRP). The documentation does a better job at explaining it than I can.

The Extreme Loop Recovery Protocol (ELRP) is used to detect network loops in a Layer 2 network. A switch running ELRP transmits multicast packets with a special MAC destination address out of some or all of the ports belonging to a VLAN. All of the other switches in the network treat this packet as a regular, multicast packet and flood it to all of the ports belonging to the VLAN.

When the packets transmitted by a switch are received back by that switch, this indicates a loop in the Layer 2 network. After a loop is detected through ELRP, different actions can be taken such as blocking certain ports to prevent loop or logging a message to system log. The action taken is largely dependent on the protocol using ELRP to detect loops in the network.
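To illustrate, here's a toy sketch in Ruby of the detection logic the docs describe – this is just an illustration, not Extreme's implementation; the switch/port objects and the reserved MAC address are made up:

  # Toy model of ELRP-style loop detection - purely illustrative
  PROBE_MAC = "01:00:5e:00:00:99"   # hypothetical reserved multicast address

  def send_probes(switch, vlan)
    # transmit a probe carrying our own switch ID out every port in the VLAN
    switch.ports(vlan).each do |port|
      port.transmit(dst: PROBE_MAC, payload: { origin: switch.id, vlan: vlan })
    end
  end

  def on_receive(switch, port, frame)
    return unless frame[:dst] == PROBE_MAC
    if frame[:payload][:origin] == switch.id
      # our own probe came back to us: there is a layer 2 loop via this port
      switch.log("ELRP: loop detected on port #{port}")
      switch.block(port) if switch.recovery_enabled?  # optional recovery action
    else
      switch.flood(frame, except: port)  # other switches just flood it on
    end
  end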

The design seems simple enough to me; I'm not sure why others haven't come up with something similar (or if they have, let me know!).

It's rare to have a loop in a data center environment, but I do remember a couple of loops I came across in an office environment many years ago that ELRP helped trace down. I'm not sure what method one would use to trace down a loop without something like ELRP – perhaps just looking at port stats, trying to determine where the bulk of the traffic is, and disabling ports or unplugging cables until it stops.

[Tangent]

I remember an outage one company I was at took one time to upgrade some of our older 10/100 3COM switches to gigabit Extreme switches. It was a rushed migration; I was working with the network engineer that we had, and the switches were installed in a centralized location with tons of cables, none of which were labeled. So I guess it comes as little surprise that during the migration someone (probably me) happened to plug the same cable back into one of the switches, causing a loop. It took a few minutes to track down. At one point our boss was saying get ready to roll back. The network engineer and I looked at each other and laughed – there was no rollback, well not one that was going to be smooth; it would have taken another hour of downtime to remove the Extreme switches, re-install the 3COMs and re-cable everything. Fortunately I found the loop. This was about a year or so before I was aware of the existence of ELRP. We discovered the loop mainly after all the switch lights started blinking in sequence – normally a bad thing. Then users reported they lost all connectivity.

One of my friends, another network engineer, told me a story when I was in Atlanta earlier in the year about a customer that was a university or something. They had major network performance problems but could not track them down – these problems had been going on for literally months. My friend went out as a consultant, and when they brought him into their server/network room his jaw dropped: they had probably 2 dozen switches and ALL of them were blinking in sequence. He knew what the problem was right away and informed the customer. But the customer was adamant that the lights were supposed to blink that way and the problem was elsewhere (not kidding here). The customer had other issues too, like running overlapping networks on the same VLAN etc. My friend had a lot of suggestions for the customer, but the customer felt insulted by him telling them their network had so many problems, so they kicked him out and told the company not to send him back. A couple of months later the customer went through some sort of audit process, failed miserably, and grudgingly asked (begged) to get my friend back, since he was the only one they knew who seemed to know what he was doing. He went back and fixed the network, I assume (I forget that last bit of the story).

[End Tangent]

ELRP can detect a loop immediately and give a very informative system log entry as to the port(s) the loop is occurring on, so you can take action. It works best of course if it is running on all ports, so you can pinpoint down to the edge port itself. But if for some reason the edge is not an Extreme switch, at least you can catch it at a higher layer and isolate it further from there.

You can either leave it running periodically – every X seconds it will send a probe out – or you can run it on demand for a real time assessment. There is also integration with ESRP, which I wrote about a while ago, although I don't use the integrated mode (see the original post as to how that works and why). I normally leave it running, sending requests out at least once every 30 seconds.
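For reference, turning it on looks roughly like this on ExtremeXOS – I'm going from memory here, so treat the exact syntax as an approximation and check the ELRP chapter of the ExtremeXOS docs:

  enable elrp-client
  # probe VLAN "servers" out all ports every 30 seconds, log and trap on a loop
  configure elrp-client periodic "servers" ports all interval 30 log-and-trap
  # or run a one-off check on demand:
  configure elrp-client one-shot "servers" ports all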

LINX had another outage a couple of years ago (which was the last time I looked at their vendor stats). That one affected me, since my company had gear hosted in London at the time and our monitors were tripped by the event, though there was no mention of which LAN the outage occurred on. One user wrote:

It wasn’t a port upgrade, a member’s session was being turned up and due to miscommunication between the member’s engineer and the LINX engineer a loop was somehow introduced in to the network which caused a broadcast storm and a switches CPU to max out cue packet loss and dropped BGP sessions.

That was given as the cause of the outage that occurred two years ago. So I guess it was another loop! For all I know LINX is not running ELRP in their environment either.

It's not exactly advertised by Extreme if you talk to them; it's one of those things that's buried in the docs. Same goes for ESRP. Two really useful protocols that Extreme almost never mentions, two things that make them stand out in the industry, and they don't talk about them. I'm told one reason could be that they are proprietary (vs EAPS, which is not – and Extreme touts EAPS a lot, but EAPS is layer 2 only!), though as I have mentioned in the past, ESRP doesn't require any software at the edge to function and can support managed and unmanaged devices. So you don't require an Extreme-only network to run it (just at the core, like most any other protocol). ELRP is even less stringent – it can be run on any Extreme switch, with no interoperability issues. If there were open variants of the protocols that'd be better of course, but again, these seem to be unique in the industry, so tout what you got! Customers don't have to use them if they don't want to, and they can make a network administrator's life vastly simpler in many cases by leveraging what you have available to you. Good luck integrating Extreme or Cisco or Brocade into Juniper's QFabric, or into Force10's distributed core setup. Interoperability issues abound with most of the systems out there.

May 29, 2012

More Chef pain

Filed under: General — Tags: — Nate @ 10:30 am

I wrote a while back about growing pains with Chef, the newish, hyped-up systems management tool. I've been having a couple of other frustrations with it in the past few months and needed a place to gripe.

The first issue started a couple of months ago, where some systems were for some reason restarting Splunk every single time chef ran. It may have been going on longer than that, but that's when I first noticed it. After a couple hours of troubleshooting I tracked it down to chef seemingly randomizing the order of the attributes for the configuration, resulting in writing a new configuration (the same configuration, just in a different order) every time and triggering a restart. I think it was isolated primarily to the newer version(s) of chef (maybe specific to 0.10.10). My co-worker, who knows more chef than I do (and the more I use chef the more I really want cfengine – disclaimer: I've only used cfengine v2 to date), says after spending some time troubleshooting it himself that the only chef solution might be to somehow force the order of the attributes to be static (probably some ruby thing that lets you do that? I don't know). In any case he hasn't spent time on doing that and it's over my head, so these boxes just sit there restarting splunk once or twice an hour. They make up a small portion of the systems; the vast majority are not affected by this behavior.
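If we ever do get around to a fix, my understanding is it would look something like this – render the attributes in a deterministic order, so identical data can't produce a "changed" file and a spurious restart (a hypothetical sketch, not our actual recipe):

  # Hypothetical sketch: sort the attribute keys before writing, so the file
  # contents (and the restart notification) only change when the data does
  inputs = node['splunk']['inputs'].to_hash
  body = inputs.sort_by { |key, _| key.to_s }
               .map { |key, value| "#{key} = #{value}" }
               .join("\n") + "\n"

  file '/opt/splunk/etc/system/local/inputs.conf' do
    content body
    notifies :restart, 'service[splunk]'  # fires only on a real content change
  end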

So this morning I was alerted to a failure in some infrastructure that still lives in EC2 (oh how I hate thee); it turns out the disk is going bad and I need to build a new system to replace it. So I do, and chef spits out one of its usual helpful error messages:

 [Tue, 29 May 2012 16:35:36 +0000] ERROR: link[/var/log/myapp] (/var/cache/chef/cookbooks/web/recipes/default.rb:50:in `from_file') had an error:
 link[/var/log/myapp] (web::default line 50) had an error: TypeError: can't convert nil into String
 /usr/lib/ruby/vendor_ruby/chef/file_access_control/unix.rb:106:in `stat'
 /usr/lib/ruby/vendor_ruby/chef/file_access_control/unix.rb:106:in `stat'
 /usr/lib/ruby/vendor_ruby/chef/file_access_control/unix.rb:61:in `set_owner'
 /usr/lib/ruby/vendor_ruby/chef/file_access_control/unix.rb:30:in `set_all'
 /usr/lib/ruby/vendor_ruby/chef/mixin/enforce_ownership_and_permissions.rb:33:in `enforce_ownership_and_permissions'
 /usr/lib/ruby/vendor_ruby/chef/provider/link.rb:96:in `action_create'
 /usr/lib/ruby/vendor_ruby/chef/resource.rb:454:in `send'
 /usr/lib/ruby/vendor_ruby/chef/resource.rb:454:in `run_action'
 /usr/lib/ruby/vendor_ruby/chef/runner.rb:49:in `run_action'
 /usr/lib/ruby/vendor_ruby/chef/runner.rb:85:in `converge'
 /usr/lib/ruby/vendor_ruby/chef/runner.rb:85:in `each'
 /usr/lib/ruby/vendor_ruby/chef/runner.rb:85:in `converge'
 /usr/lib/ruby/vendor_ruby/chef/resource_collection.rb:94
 /usr/lib/ruby/vendor_ruby/chef/resource_collection/stepable_iterator.rb:116:in `call'
 /usr/lib/ruby/vendor_ruby/chef/resource_collection/stepable_iterator.rb:116:in `call_iterator_block'
 /usr/lib/ruby/vendor_ruby/chef/resource_collection/stepable_iterator.rb:85:in `step'
 /usr/lib/ruby/vendor_ruby/chef/resource_collection/stepable_iterator.rb:104:in `iterate'
 /usr/lib/ruby/vendor_ruby/chef/resource_collection/stepable_iterator.rb:55:in `each_with_index'
 /usr/lib/ruby/vendor_ruby/chef/resource_collection.rb:92:in `execute_each_resource'
 /usr/lib/ruby/vendor_ruby/chef/runner.rb:80:in `converge'
 /usr/lib/ruby/vendor_ruby/chef/client.rb:330:in `converge'
 /usr/lib/ruby/vendor_ruby/chef/client.rb:163:in `run'
 /usr/lib/ruby/vendor_ruby/chef/application/client.rb:254:in `run_application'
 /usr/lib/ruby/vendor_ruby/chef/application/client.rb:241:in `loop'
 /usr/lib/ruby/vendor_ruby/chef/application/client.rb:241:in `run_application'
 /usr/lib/ruby/vendor_ruby/chef/application.rb:70:in `run'
 /usr/bin/chef-client:25

So I went to look at this file, at line 50, and it looked perfectly reasonable – there haven't been any changes to this file in a long time and it has worked up until now. What a TypeError is I don't know (it's been explained to me before but I forgot what it was 30 seconds after it was explained) – I'm not a developer (hey, fancy that). I have seen it tons of times before though, and it was usually a syntax problem (tracking down the right syntax has been a bane for me in Chef, it's so cryptic – just like the stack trace above).

So I went to the Chef website to verify the syntax, and yep, at least according to those docs it was right. So, WTF?

I decided to delete the user and group config values, ran chef again, and it worked! Well, until the next TypeError. Rinse and repeat about four more times and I finally got chef to complete. Now for all I know my modifications to make the recipes work on this chef will break on the others. Fortunately I was able to figure this syntax error out; usually I just bang my head on my desk for two hours until it's covered in blood and then wait for my co-worker to come figure it out (he's in a vastly different time zone from me).
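For what it's worth, my best guess at what chef was choking on, as a hypothetical reconstruction of that line 50 (not our real recipe): the link resource gets its owner/group from attributes that resolve to nil on this node, and chef's permission enforcement code can't stat/chown a nil:

  # Hypothetical reconstruction of the failing resource - if either attribute
  # is nil on this node, set_owner raises the TypeError in the trace above
  link '/var/log/myapp' do
    to '/mnt/data/logs/myapp'      # hypothetical link target
    owner node['myapp']['user']    # nil here -> "can't convert nil into String"
    group node['myapp']['group']   # deleting these two lines made the run work
  end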

So what's next? I get an alert for the number of apache processes on this host, and that brings back another memory with regard to Chef attributes. I haven't specifically looked into this issue again, but I am quite certain I know what it is – just no idea how to fix it. The issue the last time this came up was that Chef could not decide what type of EC2 (ugh) instance this system is, and there are different thresholds for different sizes. Naturally one would expect chef to just check what size it is – it's not as if Amazon has the ability to dynamically change sizes on you, right? But for some reason chef thinks it is size A on one run and size B on another run. Makes no sense. Thus the alerts, when the threshold gets incorrectly set for the wrong size. Again – this only seems to impact the newest version(s) of Chef.
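The pattern in question is roughly this (a hypothetical attributes file – ours differs, but the idea is the same): monitoring thresholds keyed off the instance size ohai reports, which falls apart if that value flaps between runs:

  # Hypothetical attributes/default.rb - the threshold is derived from the
  # instance size ohai reports; if that flaps between runs, so do the alerts
  case node['ec2']['instance_type']
  when 'm1.large'  then default['monitoring']['apache_procs_max'] = 40
  when 'm1.xlarge' then default['monitoring']['apache_procs_max'] = 80
  else                  default['monitoring']['apache_procs_max'] = 20
  end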

I'm sure it's something we're doing wrong (or, if this was VMware, it would be something Chef was doing wrong before and is doing right now) – what we're doing hasn't changed, and now all of a sudden it is broken. I believe another part of the issue is that the legacy EC2 bootstrap process pulls in the latest chef during build, vs our new stuff (non EC2) which maintains a static version – fewer surprises.

Annoying to come back from a nice short holiday and have to immediately deal with two things I hate dealing with – Chef and EC2.

This coming trip to Amsterdam will provide the infrastructure to move the vast majority of the remaining EC2 stuff out of EC2, so I am excited about that portion of the trip at least. Getting off of chef is another project, one I don't feel like tackling now since I'm in the minority in my feelings about it. I just try to minimize my work in it for my own sanity; there are lots of other things I do instead.

On an unrelated note, for some reason during a recent apt-get upgrade my Debian system pulled in what feels like a significantly newer version of WordPress, though I think the version number only changed a little (I don't recall what the original version number was). I did a major Debian 5.0 -> 6.0 upgrade a couple of months ago, but this version came in after, and has a bunch of UI changes. I'm not sure if it breaks anything; I think I need to re-test how the site renders in IE9, as I manually patched a file after getting a report that it didn't work right, and the most recent update may have overwritten that fix.

May 21, 2012

Off to Amsterdam next week

Filed under: General — Tags: — Nate @ 7:58 pm

[DANGER: NON TECHNICAL CONTENT BELOW]

Well, next Saturday at least. The project kept getting delayed but I guess it's finally here – going to Amsterdam for a week to build out a little co-location for the company. Whenever I brought up the subject over the past 7-8 months or so, people almost universally responded with jealousy, wanting to go themselves. I suppose this may be a shock, but I really don't want to go. I'm not fond of long trips, especially on airplanes (don't mind long road trips though – at least I can stop whenever and stretch, or take a scenic route and take pictures).

The more I looked at Amsterdam the less interested I was in going there – or Europe in general. It seems like a very old, quaint, cultured place. Really the polar opposite of what I'm interested in. I traveled a lot around Asia growing up (lived in Asia for nearly six years), and had a quick trip down to Australia. So I feel I can confidently say that I have traveled a bunch and really don't feel like traveling more – I've seen a lot, a lot of things that I am not interested in, and am not (yet) aware of other things that might interest me. I also don't like big crowded cities. I swear I've spent more time in Seattle than San Francisco since I moved here almost a year ago (not that I enjoy Seattle – but I have a few specific destinations to go to there, where I really have none in SFO at this point).

People obviously bring up how there are certain things you can do legally in the Netherlands that you can't do legally here (like that stops anybody from doing it here). It's really not a big deal to me. I actually looked quite a bit at the red light district, and didn't see anything that got me excited.

The one thing I did sign up for when I was booking the trip on Orbitz (my friend's site hipmunk.com directed me to Orbitz) was a canal pizza cruise, which seems up my alley – if only they served the only alcoholic drink I order. I really can't stand the taste or even the smell of things like wine or beer (or coffee or tea while I'm at it – I'm sure there's something wrong with me in that dept…). I've looked around for other things, but so much of it seems culture related, and my attention span for that stuff lasts about 60 seconds. I also don't know how much free time I may have there – maybe little. On my two trips to Atlanta for the build out there I had maybe 7 hours of free time COMBINED (total time there about 12 days), with many 15+ hour work days and more than one dinner at a gas station well past midnight.

I would like to find a place like this over there, since it's so close to Germany. I asked one guy who is based near Amsterdam (runs a VMware blog), though he didn't have a whole lot of suggestions. I have a good friend from when I was a kid who lives in Denmark, which I thought was close – not close enough though (~700km away); haven't seen that guy in maybe 17-18 years.

I emailed two of my very well traveled friends who know me well and know what I like, and neither of them had any ideas either.

In the remote chance that any of the 9 readers I have knows of a good place(s) to visit or thing(s) to do in Amsterdam let me know.

So I suspect, short of that little canal cruise, I'll spend all of my time at the Holiday Inn and at the data center, and save my cash for my next trip – I intend to spend a week in Seattle towards the end of next month. My company has a satellite office there, so I plan to work during the day and have fun at night. Much celebration will ensue. I'm obviously very excited to go back to my favorite places, neither of which I have found replacements for in the Bay Area.

May 15, 2012

Facebook’s race to the bottom

Filed under: Random Thought — Tags: — Nate @ 8:36 am

I'm as excited as everyone else about Facebook's IPO – hopefully it will mark the passing of an era, so people can move on to talking about something else.

For me it started back in 2006 when I went to work for my first social media startup, a company that very quickly seemed to lose its way and just wanted to capitalize on Facebook somehow. Myself, I did not (and still don't) care about social media. One of the nice things about being on the operations/IT/internet side of the company is that it doesn't really matter what the company's vision is or what they do – my job stays pretty much the same. Optimize things, monitor things, make things faster, etc. All of those sorts of tasks improve the user experience no matter what, and I don't need to get involved in the innards of company strategy or whatever. I mistakenly joined another social media startup a few years later and that place was just a disaster any way you slice it. No more social media companies for me! Tired of the "I wanna be Facebook too!" crowd.

Anyways, back on topic: the forthcoming Facebook IPO – the most anticipated IPO in the past decade, I believe. Obviously there is tons of hype around it, but I learned a couple of interesting things, yesterday I think, that made me chuckle. This information comes from analysts on CNBC – I don't care enough about Facebook to research the IPO myself, it's a waste of time.

Facebook has a big problem, and the problem is mobile. No company has really been able to monetize mobile (weird, considering I worked at a mobile payments company back in 2003 that was later acquired by AMDOCS, a huge billing provider for the carriers) in the same way companies have been able to monetize the traditional PC-based web browsing platform with advertising. There are companies like Apple that make tons of money off their mobile stuff, but that's a different and somewhat unique model. The point is advertising. Whether it's Google, Pandora, or Facebook – and I'm confident Twitter is in the same boat – nobody is making profits on mobile advertising, despite all the hype and efforts. I guess the screen is too small.

Expanding on that a bit: this analyst said yesterday that outside of the U.S. and parts of Europe, the bulk of the population using Facebook uses it almost exclusively on mobile – so there's no real revenue for Facebook from them at this time.

Add to that, apparently Facebook has written China off as a growth market for some specific reason (I don't recall what). Which seems contrary to the recent trend where companies are falling head over heels to try to get into China, giving up their intellectual property to the Chinese government (why.. why?!) to get into that market.

So that leaves the U.S. and a few other developed markets that are still, for the most part, using their computers to interact with Facebook.

So Facebook is in a race – to be the first company to monetize mobile, before the lucrative subscriber base they have in those few developed markets shifts away from the easy-to-advertise-on computer platform.

Not only that, but there's another challenge facing them as well: employee retention. I of course would never work for Facebook. I've talked to several people that have interviewed there, and a couple that have worked there, and I've never really heard anything positive come out of anyone about the company.

Basically it seems like the only thing holding it together is the impending IPO. In fact, at one point I believe it was reported that Zuckerberg delayed the IPO in order to get employees to re-focus on the company and the software and not get sidetracked by the IPO.

So why IPO now? One big reason seems to be taxes, of all things. With many tax rates currently scheduled to go up on Jan 1, 2013, Facebook wants to IPO now – with the employee lock up preventing anyone from selling shares for six months, that gets you pretty close to the new year, and the potential new taxes.

The IPO is also expected to trigger a housing boom in and around Palo Alto, CA. I remember seeing a report about a year ago that mentioned many people in the area wanted to sell their houses but were holding off for the IPO – as a result the housing market (at least at the time, not sure what the state is now) was very tight, with only a few dozen properties on the market out of tens of thousands.

There were even a California politician or two earlier in the year who said the state's finances weren't in as bad of shape as some people were making out, because those people weren't taking into account the effect of the Facebook IPO. Of course, recently it was announced that things were, in fact, much worse than had previously been communicated.

I'm not saying the hype won't drive the stock really high on opening day – it wouldn't surprise me if it went to $90 or $100 or more. Even the IPO road show Facebook did felt more like a formality than anything else. I just saw someone mention that in Asia the IPO is 25X oversubscribed.

One stock person I saw recently mentioned her company has received more requests about the Facebook IPO than any other IPO in the past 20 years.

Maybe they can pull mobile off before it’s too late, I’m not holding my breath though.

I really didn't participate in the original dot com bubble – I worked at a dot com for about 3 months in the summer of 2000, but that was about it. So this comparison may not be accurate, but the hype around this IPO really reminds me of that time. I'm not sure how many of the original dot com companies you would have to combine to reach a market cap of $100B – hopefully it's at least 100. It's sort of like a mini dot com bubble all contained within one company, with so many other wannabe hopefuls in the wings not able to get any momentum to capitalize on it beyond their initial VC investments. The two social media companies I worked for got, I want to say, around $90M in funding combined.

Another point along these lines: the esteemed CEO of Facebook seems to be on a social mission and cares more about the mission than the money. That reminds me so much of the dot com days – it's just another way of saying we want even more traffic, more web site hits! Sure, it's easy to not care much about the money now, because people have bought the hype hook, line and sinker and are just throwing money at it. Obviously it won't last though 🙂

I, of course, will not buy any Facebook stock – or any other stock. I'm not an investor, or trader, or whatever.

May 12, 2012

HP Launches IaaS cloud

Filed under: Datacenter — Tags: — Nate @ 8:01 am

I've seen a couple of different articles from our friends at The Register on the launch of the HP IaaS cloud as a public beta. There really isn't a whole lot of information yet, but one thing seems unfortunately clear – HP has embraced the same backwards thinking as Amazon when it comes to provisioning, going against the knowledge and experience we've all gained in the past decade around sharing resources and oversubscription.

Yes – it seems they are going to have fixed instance sizes and no apparent support for resource pools. This is especially depressing from someone like HP, who has technology like thin provisioning and partnerships with all of the major hypervisor players.

Is the software technology at the hypervisor level just not there yet to provide such a service? vSphere 5, for example, supports 1,600 resource pools per cluster. I don't like the licensing model of 5, so I built my latest cluster on 4.1 – which supports 512 resource pools per cluster. Not a whole lot in either case, but then again cluster sizes are fairly limited anyways.

There's no doubt that, gigabyte for gigabyte, DAS is cheaper than something like a 3PAR V800. But with fixed allocation sizes from the likes of Amazon, it's not uncommon to have disk utilization rates hovering in the low single digits. I've seen it at two different companies – and guess what, everyone else on the teams (all of whom have had more Amazon experience than me) was just as unsurprised as I was.

So you take this cheap DAS and you apply a 4 or 5% utilization rate to it – and all of a sudden it's not so cheap anymore, is it? Why is utilization so low? Well, in Amazon (since I haven't used HP's cloud), it's primarily low because that DAS is not protected: if the server goes down or the VM dies, the storage is gone. So people use other methods to protect their more important data. You can have the OS and log files and stuff on there, no big deal if that goes away – but again, you're talking about maybe 3-5GB of data (which is typical for me at least). The rest of the disk goes unused.

Go to the most inefficient storage company in the world and even they will drool at the prospect of replacing storage that you're only getting 5% utilization out of! Because really, even the worst efficiency is maybe 20% on older systems w/o thin provisioning.
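The arithmetic is trivial but worth spelling out (the prices here are made up – it's the ratio that matters):

  raw_cost_per_gb = 0.10               # hypothetical sticker price for cheap DAS
  utilization     = 0.05               # 5% of the allocated space actually used
  puts raw_cost_per_gb / utilization   # => 2.0 per *useful* GB, 20x the sticker price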

Even if the storage IS fully protected – the fixed allocation units are still way out of whack, and they can't be shared! I may need a decent amount of CPU horsepower and/or (more likely) memory to run a bloated application, but I don't need several hundred gigabytes of storage attached to each system when 20GB would be MORE than enough (my average OS+App installation comes in at under 3GB, and that's with a lot of stuff installed)! I'd rather take those several hundred gigabytes, both in terms of raw space and IOPS, and give them to database servers or something like that (in theory at least – the underlying storage in this case is poor, so I wouldn't want to use it for that anyways).

This is what 3PAR was built to solve – drive utilization (way) up, while simultaneously providing the high availability and software features of a modern storage platform. Others do the same too of course, with varying degrees of efficiency.

So that's storage – next, take CPU. The industry average pre-virtualization was in the sub 20% utilization range – my own personal experience says it's in the sub 10% range for the most part. There was a quote from a government official a couple of years back about how their data centers were averaging about 7% utilization. I've done a few virtualization projects over the years, and my experience shows that even after systems have been virtualized, the vmware hosts themselves are at low utilization from a CPU perspective.

Two projects in particular that I documented while I was at a company a few years back – the most extreme perhaps being roughly 120 VMs on 5 servers, four of them being HP DL585 G1s, which were released in 2005. They had 64GB of memory on them, but they were old boxes. I calculated that the newer Opteron 6100, when it was released, had literally 12 times the CPU power (according to SPEC numbers at least) of the Opteron 880s that we had at the time. Anyways, even with these really old servers the cluster averaged under 40% CPU, with peaks to maybe 50 or 60%. Memory usage was pretty constant at around 70-75%. Imagine translating that workload from those ancient servers onto something more modern and you'd likely see CPU usage rates drop to single digits while memory usage remains constant.

I have no doubt that the likes of HP and Amazon are building their clouds specifically not to oversubscribe – to assume that people will utilize all of the CPU allocated to them, as well as the memory and disk space. So they have fixed building blocks to deal with, and they carve them up accordingly.

The major fault with the design, of course, is that the vast majority of workloads do not fit in such building blocks, and will never come close to utilizing all of the resources that are provided – thus wasting an enormous amount of resources in the environment. What's Amazon's solution to this? Build your apps to better utilize what they provide – basically, work around their limitations. Which, naturally, most people don't do, so resources end up being wasted on a massive scale.

I've worked for nothing but software development companies for almost 10 years now, and I have never really seen even one company or group or developer ever EVER design/build for the hardware. I have been part of teams that have tried to benchmark applications and buy the right sized hardware, but it really never works out in the end, because a simple software change can throw all those benchmarks and testing out the window overnight (not to mention how traditionally difficult it is to replicate real traffic in a test environment – I've yet to see it done right myself for any even moderately complex application). The far easier solution is, of course, resource pools and variably allocated resources.

Similarly, this model, along with the per-VM licensing model of so many different products out there, goes against the trend that has allowed us to have more VM sprawl, I guess. Instead of running a single server with a half dozen different apps, it's become a good practice to split those apps up. The fixed allocation unit of the cloud discourages such behavior by dramatically increasing the cost of doing it. You still incur additional costs by doing it on your own gear – memory overhead for multiple copies of the OS (assuming that memory de-dupe doesn't work – which for me on Linux it doesn't), or disk overhead (assuming your array doesn't de-dupe – which 3PAR doesn't, but the overhead is so trivial here that it is a rounding error). But those incremental costs pale in comparison to the massive increases in cost in the cloud, because again of those fixed allocation units.

I have seen no mention of it yet, but I hope HP has at least integrated the ability to do live migration of VMs between servers. The hypervisor they are using supports it, of course; I haven't seen any details from people using the service as to how it operates yet.

I can certainly see a need for cheap VMs on throwaway hardware. I see an even bigger need for the more traditional customers (who make up the vast, vast majority of the market) to have this model of resource pools instead. If HP were to provide both services – and a unified management UI – that really would be pretty nice to see.

The concept is not complicated, and is so obvious that it dumbfounds me why more folks aren't doing it (my only thought is perhaps the technology these folks are using isn't capable). IaaS won't be worthwhile to use, in my opinion, until we have that sort of system in place.

HP is obviously in a good position when it comes to providing 3PAR technology as a cloud: since they own the thing, their support costs would be a fraction of what their customers pay, and they would be able to consume unlimited software for nothing. Software typically makes up at least half the cost of a 3PAR system (the SPC-1 results and costs of course only show the bare minimum software required). Their hardware costs would be significantly less as well, since they would not need much (any?) margin on it.

I remember SAVVIS a few years ago wanting to charge me ~$200,000/year for 20TB usable of 10k RPM storage on a 3PAR array, when I could have bought 20TB usable of 15k RPM storage on a 3PAR array (+ hosting costs) for less than one year's costs at SAVVIS. I heard similar stories from 3PAR folks where customers would go out to the cloud to get pricing, thinking it might be cheaper than doing it in house, but always came back being able to show massive cost savings by keeping things in house.

They are also in a good position as a large server manufacturer to get amazing discounts on all of their stuff, and again of course they don't have to make as much margin for these purposes (I imagine at least). Of course it's a double edged sword, pissing off current and potential customers that may use your equipment to try to compete in that same space.

I still have hope that, given HP’s strong presence in the enterprise plus their in-house technology and technology partners, they will at some point offer an enterprise-grade cloud: something where I can allocate a set amount of CPU and memory – maybe even get access to a 3PAR array using their Virtual Domain software – and then provision whatever I want within those resources. Billing would be based on some combination of a fixed price for base services and a variable price based on actual utilization (bill for what you use rather than what you provision), with perhaps some minimum usage thresholds (because someone has to buy the infrastructure to run the thing). So say I want a resource pool with 1TB of RAM and 500GHz of CPU. Maybe I am forced to pay for 200GB of RAM and 50GHz of CPU as a baseline, and then anything above that is measured and billed accordingly.
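Here’s a minimal sketch of that billing model. The pool sizes and baselines are the ones from my example above; the dollar rates are made up:

```python
# "Baseline plus metered usage" billing, as described above. The 200GB RAM
# and 50GHz CPU baselines come from my example; the rates are invented.

BASELINE_RAM_GB = 200     # minimum RAM you pay for every month
BASELINE_CPU_GHZ = 50     # minimum CPU you pay for every month
RAM_RATE = 0.50           # hypothetical $ per GB-month actually used
CPU_RATE = 2.00           # hypothetical $ per GHz-month actually used

def monthly_bill(used_ram_gb: float, used_cpu_ghz: float) -> float:
    """Bill for what you use, never less than the baseline."""
    billable_ram = max(used_ram_gb, BASELINE_RAM_GB)
    billable_cpu = max(used_cpu_ghz, BASELINE_CPU_GHZ)
    return billable_ram * RAM_RATE + billable_cpu * CPU_RATE

# A 1TB RAM / 500GHz pool that actually used 350GB and 120GHz this month:
print(f"${monthly_bill(350, 120):,.2f}")   # $415.00 - usage-based
# A quiet month below both baselines still pays the floor:
print(f"${monthly_bill(80, 10):,.2f}")     # $200.00 - the minimum
```

The point being: the pool’s headroom (the full 1TB/500GHz) costs you nothing until you actually consume it – exactly the opposite of the fixed-allocation-unit model.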

Don’t let me down, HP.


May 11, 2012

More 10GbaseT Coming..

Filed under: Networking — Tags: , — Nate @ 2:38 pm

I wrote a couple of times about the return of 10GbaseT, a standard that tried to come out a few years ago but for various reasons didn’t quite make it. I just noticed that two new 10GbaseT switching products were officially announced a few days ago at Interop Las Vegas. They are, of course, from Extreme, and they are, of course, not shipping yet (and knowing Extreme’s recent history with product announcements it may be a while before they actually do ship – though they say the 1U switch will ship by end of year).

The new products are:

  • 48-port 10Gbase-T module for the Black Diamond X-series – for up to 384 x 10GbaseT ports in a 14U chassis. Note this is, of course, half the density you can achieve using the 40GbE modules and breakout cables – there are only so many plugs you can put in 14U! (see the quick math after this list)
  • Summit X670V-48t (I assume that’s what it’ll be called) – a 48-port 10GbaseT switch with 40GbE uplinks; similar to the Arista 7100 – the only 48-port 10GbaseT switch I’m personally aware of – just with faster uplinks (and I’m sure there will be stacking support for those that like to stack)
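The density claim works out like this – note the 8 I/O slots and the 24-port 40GbE module are my assumptions about the X-series chassis, so treat the exact figures accordingly:

```python
# Quick math behind the "half the density" remark above. The 8 I/O slots
# and 24x40GbE-per-module figures are my assumptions about the chassis.

SLOTS = 8                                 # assumed I/O slots in 14U

ports_10gbase_t = SLOTS * 48              # new 48-port 10GbaseT modules
ports_40gbe = SLOTS * 24                  # assumed 24x40GbE per module
ports_10gbe_breakout = ports_40gbe * 4    # each 40GbE splits into 4x10GbE

print(ports_10gbase_t)       # 384 ports of 10GbaseT
print(ports_10gbe_breakout)  # 768 ports of 10GbE via breakout - 2x
```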

From this article the claimed list price is about $25k for the 1U switch, which is a good price – about the same as the existing 24-port X650 10GbaseT product, and also in line with the current-generation X670V-48x, which is a 48-port SFP+ product. So there is little to no premium for the copper, which is nice to see! (Note there is a lower-cost X670 (non-“V”) that does not have 40GbE capability, available for about half the cost of the “V” model.)

Much of the hype seems to be around the new Intel 10Gbase-T controller that is coming out alongside their latest CPUs:

With the Intel Ethernet Controller X540, Intel is delivering on its commitment to drive down the costs of 10GbE. We’ve ditched two-chip 10GBASE-T designs of the past in favor of integrating the media access controller (MAC) and physical layer (PHY) controller into a single chip. The result is a dual-port 10GBASE-T controller that’s not only cost-effective, but also energy-efficient and small enough to be included on mainstream server motherboards. Several server OEMs are already lined up to offer Intel Ethernet Controller X540-based LOM connections for their Intel Xeon processor E5-2600 product family-based servers.

Broadcom, meanwhile, recently announced (and is perhaps already shipping?) their own next-generation 10GbaseT chips, built for LOM among other things, which apparently can push power utilization down to under 2W per port using a 10-meter mode – and 10m is plenty long for most connections, of course! Given that Broadcom also has a quad-port version of this chipset, could they be the ones powering the newest boxes from Oracle?

Will Broadcom be able to keep their strong hold on the LOM market? (I really can’t remember the last time I came across Intel NICs on a motherboard outside of maybe Supermicro.)

So the question remains: when will the rest of the networking industry jump on board, after having been burned somewhat in the past by the first iterations of 10GbaseT?

April 23, 2012

MS Shooting themselves in their mobile feet again?

Filed under: General,Random Thought — Tags: — Nate @ 10:03 am

I’ve started to feel sorry for Microsoft recently. Back in the 90s and early 00s I was firmly in the anti-MS camp, but over the past few years I have slowly moved out of it, mainly because MS isn’t the beast it used to be. It’s a company that just fumbles about at what it does now and doesn’t appear to be much of a threat anymore. It still has a massive amount of cash but for some reason can’t figure out how to use it. I suppose the potential is still there.

Anyways, I was reading this article on Slashdot just now about Skype on Windows Phone 7. The most immediate complaint was that the design of WP7 prevents Skype from receiving calls while in the background, because – with a few exceptions like streaming media – any background app is suspended. There is no multitasking on WP7? Like some others I’ve seen comment, I haven’t seen a WP7 phone on anyone yet, so I haven’t seen the platform in action. Back when what was left of Palm was gutted last year and the hardware divisions shut down, many people were saying WP7 was a good platform to move to from WebOS, especially the 7.5 release, which was pretty new at the time.

I don’t multitask much on my phone or tablets, but it’s certainly nice to have the option there. WebOS has a nice messaging interface with full Skype integration, so Skype can run completely in the background. I don’t use it that way, mainly because the company I’m at uses Skype as a sort of full-on chat client, so the device would be hammered by people talking (to other people) in group chats, which is really distracting. Add to that the fact that the audible notifications for messaging on WebOS apply to all protocols – I use a very loud panic alarm for SMS messages for my on-call stuff, and having that sound off every couple of seconds while a Skype discussion is going on is not workable! So I keep it off unless I specifically need it. 99.9% of my Skype activity is work related; otherwise I wouldn’t even use the thing. Multitasking has been one of the biggest selling points of WebOS since it was released several years ago, really seeming to be the first platform to support it (why it took even that long sort of baffles me).

So no multitasking, and apparently no major upgrades coming either – I’ve come across a few articles like this one that say it is very unlikely WP7 users will be able to upgrade to Windows 8/WP8. Though the lack of mobile phone upgrades seems pretty common: Android in particular has been the subject of investigations illustrating the varying degrees to which – or whether – the various handsets get upgrades. WebOS was in the same boat here, with the original phones not getting past version 1.4.5 or so, the next generation of phones not getting past 2.x, and only the Touchpad (with a UI mostly incompatible with phones, apparently) getting 3.x. For me, I don’t see anything in WebOS 3.x that I would need on my WebOS 2.x devices, and I remember when I was on WebOS 1.x I didn’t see anything in 2.x that made me really want to upgrade; the phone worked pretty well as it was. iOS seems to shine the most here, providing longer-term updates for what has got to be a very mature OS at this point.

But for a company with as many resources as Microsoft – especially given that they seem to maintain tighter control over the hardware the phones run on – it’s really unfortunate that they may not be willing or able to provide the major update to WP8.

Then there was the apparent ban Microsoft put on all handset makers, preventing them from releasing multi-core phones in order to give Nokia time to make one themselves. Instead of pouring even more resources into making sure Nokia could succeed, they held the other players back – which not only hurts all of their partners (minus Nokia, or maybe not?) but of course hurt the platform as a whole.

I’m stocked up on enough WebOS devices to last me a while on a GSM network, so I don’t have to think about what I might upgrade to in the future – I suspect my phones might outlive the network technologies they use.

To come back to the original topic – the lack of multitasking, specifically the inability for Skype to operate in the background, is really sad. Perhaps the only thing worse is that it took this long for Skype to show up on the platform in the first place. Even the zombie’d WebOS has had Skype for almost a year on the Touchpad, and if you happened to have a Verizon Pre2 phone at the time, Skype for that was released just over a year ago (again with full background support). I would have thought, given that Microsoft bought Skype about a year ago, that they would have/could have had a release for WP7 within a very short period of time (30 days?). But at least it’s here for the 8 people that use the phone, even if the version is crippled by the design of the OS. Even Linux has had Skype (which I use daily) for longer. There have been some big bugs in Skype on WebOS – most of them, I think, related to video/audio – but they don’t really impact me since most of my Skype usage is for text chat.

While I’m here chatting about mobile, I find it really funny and ironic that Microsoft apparently makes more money off of Android than off its own platform (estimated at five times more last year), and Google apparently makes four times more money off of iOS than off its own platform.

While there are no new plans for WebOS hardware at this point, it wouldn’t surprise me at all if people inside HP were working to make the new ARM-based WP8 tablets hackable in some way to get a future version of WebOS on them – even though MS is going to do everything they can to prevent that from happening.
