Diggin' technology every day

July 25, 2011

Netflix acknowledges significant customer backlash

Filed under: General — Tags: , — Nate @ 5:16 pm

Was watching some CNBC recently and they were talking about the upcoming Netflix results and how much of an impact their recent price hikes may cause.

I wandered over to Yahoo! and came across this:

SAN FRANCISCO (AP) — Netflix Inc. is bracing for customer backlash that could result in its slowest subscriber growth in more than three years amid changes to its online video and DVD rental service that will raise prices by as much as 60 percent.


The shortfall stems from an anticipated slowdown in Netflix’s subscriber growth amid the most radical change in the company’s pricing since it began renting DVDs through the mail 12 years ago.

Nice to see. I don't blame Netflix for the price hikes; I didn't like them so I quit the service. But it seems clear they are losing money pretty badly (apparently they've been using some fancy accounting to obscure this), and their licensing costs are about to skyrocket.

Netflix spent nearly $613 million on streaming rights in the second quarter, a more than nine-fold increase from the same time last year. The company so far has signed long-term contracts committing it to pay $2.44 billion for streaming rights.

So they're doing what they have to do. Though I'm sure most everyone agrees they could have handled the situation far better than they did. They also apparently face some stiff competition in the Latin American markets they are expanding into, places where bandwidth pipes are smaller (making streaming less feasible) and cable bills are much cheaper than they can be here in the states.

While Netflix's price hikes have gotten quite a bit of press, at least in the business news, I am kind of surprised the same hasn't been true of the VMware price hikes (outside of the tech community at least). The outrage continues to build..

For me it all comes down to selection: increase the streaming catalog to at least match whatever they have on DVD now and I would probably jump back on board. In the meantime I'll stick to cable (+Tivo); I'll pay more but I get a lot more value out of it.

July 20, 2011

I called it! – Force10 bought by Dell

Filed under: Networking — Tags: , — Nate @ 11:35 am

Not that it matters to me too much either way but Dell just bought Force10. I called it! Well it matters to me in that I didn’t want Dell near my Extreme Networks 🙂

It is kind of sad that Force10 was never able to pull off their IPO. I have heard they have been losing quite a bit of talent recently, but I don't know to what degree. It's also unfortunate they weren't able to fully capitalize on their early leadership in the 10 gigabit arena; Arista seems to be the new Force10 in some respects, though it wouldn't surprise me if they have a hard time growing too, barring some next-gen revolutionary product.

I wonder if anyone will scoop up BlueArc; they have been trying to IPO as well for a couple of years now, and I'd be surprised if they can pull it off in this market. They have good technology, just a whole lot of debt. Though recently I read they started turning a profit..


VMware Licensing models

Filed under: Virtualization — Tags: , — Nate @ 5:38 am

[ was originally combined with another post but I decided to split out ]

VMware has provided its own analysis of their customers' hardware deployments and is telling folks that ~95% of their customers won't be impacted by the licensing changes. I feel pretty confident that most of those customers are likely massively under-utilizing their hardware. I feel confident because I went through that phase as well. Very, very few workloads are truly CPU bound, especially with 8-16+ cores per socket.

It wouldn't surprise me at all if many of those customers, when they go to refresh their hardware, change their strategy pretty dramatically, provided the licensing permits it. The new licensing makes me think we should bring back 4GB memory sticks and 1GbE. It is very wasteful to assign 11 CPU licenses to a quad socket system with 512GB of memory; memory-only licenses should be available at a significant discount over CPU+memory licenses, at the absolute minimum. Not only that, but large amounts of memory are actually affordable now. You can get a machine with a TB of memory for around $100k; it wasn't TOO long ago that it would have run you 10 times that.

And as for VMware's own claims that this new scheme will help align ANYTHING better by using memory pools across the cluster, just keep this in mind: before this change we didn't have to care about memory at all, whether we used 1% or 95%, whether some hosts used all of their ram and others used hardly any. It didn't matter. VMware is not making anything simpler. I read somewhere about them saying some crap about aligning more with IT-as-a-service. Are you kidding me? How many buzzwords do we need here?

The least VMware can do is license based on usage. Remember: pay for what you use, not what you provision. When I say usage I mean actual usage, not charging me for the memory my Linux systems are allocating to (frequently empty) disk buffers (which goes to the memory balloon argument). If I allocate 32GB of ram to a VM that is only using 1GB of memory, I should be charged for 1GB, not 32GB. Using vSphere's own active memory counter would be an OK start.

Want to align better and be more dynamic? Align based on memory usage and CPU usage: let me run unlimited cores on the cluster and monitor actual usage on a per-socket basis, so if on average (say, billing at the 95th percentile similar to bandwidth) you're using 40% of your CPU, then you only need 40% licensing. I still much prefer the flat licensing model in almost any arrangement rather than usage based, but if you're going to make it usage based, really make it usage based.
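The bandwidth-style billing mentioned above works roughly like this: sample utilization at a fixed interval, sort the samples, and discard the top 5% before picking the billable rate. A minimal sketch in Python (the sampling interval and utilization numbers here are made up for illustration):

```python
def billable_rate(samples, percentile=95):
    """Classic 95th-percentile billing: sort the usage samples and
    bill at the value below which ~95% of them fall, so that brief
    spikes are not charged."""
    ordered = sorted(samples)
    # 0-based index of the sample sitting at the given percentile
    idx = min(len(ordered) - 1, int(len(ordered) * percentile / 100))
    return ordered[idx]

# Hypothetical hourly per-socket CPU utilization over a month:
# mostly 35-40% busy, with a handful of short spikes to 95%.
samples = [0.35] * 650 + [0.40] * 50 + [0.95] * 30
print(billable_rate(samples))  # the 0.95 spikes fall in the discarded top 5%
```

Under a scheme like that, the 40%-average example above really would only need ~40% licensing; whether VMware would ever bill this way is of course pure speculation on my part.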

Oh yeah, and forget about anything that charges you per VM too (hello SRM). That's another bogus licensing scheme. It goes completely against the trend of splitting workloads up into more isolated VMs, and instead favors fewer, much larger VMs that do a lot of things at the same time. Even on my own personal co-located ESXi server I have 5 VMs; I could consolidate to two and provide similar end-user services, but it's much cleaner to do it in 5 for my own sanity.

All of this new licensing stuff also makes me think back to a project I was working on about a year ago, trying to find some way of doing DR in the cloud. The payback period for doing it in house vs. any cloud on the market (I looked at about 5 different ones at the time) was never more than 3 months; in one case the up-front costs for the cloud were 4 times the cost of doing it internally. The hardware needs were modest in my opinion, with the physical hardware not even requiring two full racks of equipment. The #1 cost driver was memory, #2 was CPU; storage was a distant third, assuming the storage the providers spec'd could meet the IOPS and throughput requirements, coming in at about 10-15% of the total cost of the cloud solution.

Since most of my VMware deployments have been in performance-sensitive situations (lots of Java), I run the systems with zero swapping; everything in memory has to stay in physical ram.

Cluster DRS

Filed under: Virtualization — Tags: , — Nate @ 12:05 am

Given the recent price hikes that VMware is imposing on its customers (because they aren't making enough money, obviously), and looking at the list of new things in vSphere 5 and being, well, underwhelmed (compared to vSphere 4), I brainstormed a bit and thought about what kind of things I'd like to see VMware add.

VMware seems to be getting more aggressive in going after service providers (their early attempts haven't been successful; it seems they have fewer partners now than a year ago. Btw, I am a vCloud express end-user at the moment). An area VMware has always struggled in is the scalability of their clusters (granted, such figures have not been released for vSphere 5, but I am not holding my breath for a 10-100x+ increase in scale).

Whether it's the number of virtual machines in a cluster, the number of nodes, the scalability of the VMFS file system itself (assuming that's what you're using), etc.

A cluster is, for the most part, a management domain, which makes it in a way a single point of failure. So it's pretty common for people to build multiple clusters when they have a decent number of systems; if someone has 32 servers, it is unlikely they are going to build a single 32-node cluster.

A feature I would like to see is Cluster DRS, and Cluster HA. Say for example you have several clusters: some are very memory heavy for loading a couple hundred VMs/host (typically 4-8 socket with several hundred gigs of ram), others are compute heavy with very low CPU consolidation ratios (probably dual socket with 128GB or less of memory). Each cluster by itself is a standalone cluster, but there is loose logic binding them together to allow the seamless transport of VMs between clusters, either for load balancing or for fault tolerance. Combine and extend regular DRS to span clusters; on top of that you may need transparent storage vMotion (if required), along with the possibility of mapping storage on the target host (on the fly) in order to move the VM over (the forthcoming storage federation technologies could really help make hypervisor life simpler here, I think).

Maybe a lot of this could be done using yet another management cluster of some kind, a sort of independent proxy (running on independent hardware and perhaps even dedicated storage). In the unlikely event of a catastrophic cluster failure, the management cluster would pick up on this, move the VMs to other clusters and restart them (provided there are sufficient resources of course!). In very large environments it may not be possible to map everything to everywhere, which would require multiple storage vMotions to get a VM from the source to a destination the target host can access; if this can be done at the storage layer via the block-level replication stuff first introduced in VAAI, that could greatly speed up what otherwise might be a lengthy process.
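The watchdog idea above could be sketched roughly like this (a toy model in Python; all names are hypothetical, and it deliberately ignores the hard parts such as failure detection and storage mapping):

```python
class Cluster:
    """A toy model of a vSphere-style cluster for the watchdog sketch."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.vms = {}              # VM name -> memory footprint in GB
        self.healthy = True

    def free_gb(self):
        return self.capacity_gb - sum(self.vms.values())

def failover(clusters):
    """One watchdog pass: for every failed cluster, restart each of
    its VMs on the surviving cluster with the most free memory that
    can still fit it. Returns the VMs that could not be placed."""
    stranded = []
    for failed in (c for c in clusters if not c.healthy):
        for vm, mem in list(failed.vms.items()):
            candidates = [c for c in clusters
                          if c.healthy and c.free_gb() >= mem]
            if candidates:
                target = max(candidates, key=Cluster.free_gb)
                target.vms[vm] = failed.vms.pop(vm)
            else:
                stranded.append(vm)   # insufficient global spare capacity
    return stranded
```

A real implementation would of course also have to detect failure reliably (avoiding split-brain), remap or replicate the underlying storage before restart, and respect per-VM placement rules; the sketch only captures the capacity-aware placement part.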

Since it is unlikely anyone is going to be able to build a single cluster with shared storage that spans a great many systems (100s+) and have it be bulletproof enough to provide 99.999% uptime, this kind of capability would be a stopgap, providing the flexibility and availability of a single massive cluster while reducing the complexity of trying to build software that can actually pull off the impossible (or what seems impossible today).

On the topic of automated cross-cluster migrations, having global spare hardware would be nice too, much like most storage arrays have global hot spares, which can be assigned to any degraded RAID group on the system regardless of what shelf it resides on. Global spare servers would be shared across clusters and assigned on demand. A high-end VM host is likely to cost upwards of $50,000+ in hardware these days; multiply that by X number of clusters and, well, you get the idea.

While I'm here, I might as well say I'd like the ability to hot-remove memory; Hyper-V has dynamic memory which seems to provide this functionality. I'm sure guest OSes would need to be reworked a bit to support it, since in the physical world it's not too common to need to yank live memory from a system. In the virtual world it can be very handy.

Oh and I won’t forget – give us an ability to manually control the memory balloon.

Another area that could use some improvement is vMotion compatibility. There is EVC, but last I read you still couldn't cross processor manufacturers when doing vMotion with EVC. KVM apparently can do it today.

July 19, 2011

Something I hope I don’t miss – Seattle Weather

Filed under: General — Tags: — Nate @ 11:09 pm

I've been seeing/hearing increasing numbers of folks in the area complaining about an apparent lack of summer round these parts. Myself, I welcome the cooler weather; in fact, despite the cooler temperatures, I have had my ACs running almost nonstop for at least a couple months now (the compressors weren't always running) with a target temp of 68 degrees.

Actually, shortly after I moved here I was watching a local newscast on one of the channels, and the weather guy complained about the weather every chance he got; apparently he hated being here. After a couple months I stopped watching them.

I never minded the rain here; much of my life I have lived in places with roughly equal if not more annual rainfall than here. I don't mind gray skies either, it doesn't really matter to me.

People tell me it’s less about the total amount of rain and more about how spread out throughout the year it is. I guess I don’t get out enough to notice.

I have been casually keeping track of the weather of where I am moving to, to see how much of an adjustment I may have ahead of me as a result of the move, after all I am moving roughly 850 miles to the south.

For the most part the temperature seems to be about the same, at least since I started looking in June. My new place is fairly close to both the Pacific Ocean and the San Francisco bay, which helps keep it cooler than if I were more inland.

I just got done watching yet another news report on people complaining about how cold it is here (61 degrees at the moment at 11PM). My own ideal temperature is highs in the mid to upper 50s, lows in the mid 40s (mainly because my apartment is fairly consistently 10-20 degrees warmer inside than out), and good sleeping temperatures seem to be in the 60-68 degree range.


10 day forecast for Bellevue, WA

Now the weather where I’m moving to –


10 day forecast for San Bruno, CA


A little bit more rain in the forecast up here, but not much. In my 11 years of living in Bellevue it seems it hardly rains here at all; it's as if the rain goes around the city (it rains a lot up in the northern Puget Sound region, in what is known as the convergence zone).

But as far as people up here complaining that it's cold and not summer-like: I don't know if the above weather is typical for where I am going, but given the close proximity to water on both sides, I am not surprised it stays cool.

Just a few miles to the south and further inland the high temperatures jump a full 10+ degrees, I almost moved there, but decided against it because of the ~1 hour commute in each direction.

I wouldn’t mind it being drier wherever I was, quite frequently we are above 85% humidity in the Seattle area, I don’t suppose that will improve at my new place.

July 18, 2011

What I’ll miss most about Seattle

Filed under: General — Tags: — Nate @ 10:46 am

[ I will of course miss all my friends more than anything else, this post is not about friends but rather places ]

I have been thinking about my move coming up – my last full day in the area is this Thursday July 21st. (originally was going to be Saturday but because of moving issues it had to be moved sooner).

I mentioned in my original post that I don't like Seattle. I can't think of a single part of Seattle that I like: I don't like the one-way streets, the lack of parking (which a recent news report said was the #1 complaint of tourists), or the traffic, and I'm really not a fan of the culture in general. I don't think anything gets me more frustrated as a driver than driving in downtown Seattle. I was there yesterday in fact to try to find something in particular; I ended up just coming back home in frustration, and never bothered to go in any stores because parking was such a pain. Part of the culture in Seattle that I don't like is that it is increasingly anti-car. Which is a fine view to have; you just won't catch me dead living there! I do like the east side though (where I live). It is interesting because a lot of folks I know in Seattle are the opposite: they hate the east side but love Seattle.

So I tried to think of if there was anything I did like about Seattle. I came up with two places in Seattle that I do love, and will miss. I’m not sure if I’ll be able to find a close replacement for either down in the Bay Area though.

The first place is far and away number one; there is no competition.

Cowgirls Inc

If you haven't been to Cowgirls you really should check it out; words cannot properly describe the Cowgirls experience on a Friday or Saturday night after 9 PM. Don't bother going in at 5, 6 or 7, it's pretty much like any other bar. After 9 things change though. I usually get there at 9, and by about midnight I have a tough time standing up so it's time to go. (I missed their 5th Anniversary party; I happened to go on that particular night but could not stay an extra 30 minutes to see the show because, well, I was destroyed.) Oh, and you'll be doing a lot of standing, as they remove the bar stools at around 9:30-10pm.

On these nights it is often wall-to-wall people; I stick close to the bar and defend my position (you'll understand why if you go). The words "kick ass" don't do it justice. I took another friend there for the first time this past Friday and he was blown away too, totally exceeded his expectations. I took another guy a couple of months ago, he moved to Seattle last year; he seemed like he was in shock most of the time and absolutely loved it.

I worked across the street from this place (which is on the corner of 1st and King street in Pioneer Square), back when it opened in 2004, though I was a very different person back then. We used to do software deployments that would start at 10pm and typically end around 2 or 3AM.

My co-workers liked to go there (at what seemed to be around 6-7pm) to play pool and eat the peanuts, because they have you throw the shells on the floor and my co-workers loved that. The place is often pretty dead that early so there's no action.

My Favorite Cowgirl (moved away in 2010)

Even though I only went a few times a year, the staff knew me pretty well when I came in, so that was nice. I am terrible with names, and although I heard some of their names over the years I never retained them. My favorite cowgirl (left) was awesome; she did the craziest things, including dancing on my shoulders on several occasions. The heels hurt the first couple of times but it was good pain 🙂

She moved away from Seattle last year which made me sad.

A funny story about the company I worked at across the street from this place: the last boss I had there was a really heavy drinker. One night he, the VP and a bunch of other folks (not including me, I wasn't at the company at that point) went to Cowgirls and drank a bunch.

The VP dared the director (my former boss) to jump up on the bar and start dancing. He apparently did it, and as a result one of the bouncers pulled him off, got him in a headlock and kicked him out. Speaking of bouncers (they are called regulators there), there are quite a few; I would say upwards of 10 at any given time.

Another funny story about Cowgirls and VMware, of all things: a local VMware rep was coming to meet me at the last company I was at, and his boss told him the wrong address (I gave the right address). So he called me up right when he was supposed to be there and asked where I was, and I told him he'd gone to the wrong place. Here is how the conversation went (this was almost a year ago so the words are not completely accurate):

  1. Him: so where are you located then?
  2. Me: on the corner of 1st and King street
  3. Him: where is that near?
  4. Me: it’s in Pioneer square
  5. Him: hmm, any more tips you can give?
  6. Me: It’s near the stadiums
  7. Him: Still not completely sure
  8. Me: Do you know where Cowgirls Inc is?
  9. Him: OH YEAH! Cowgirls!
  10. Me: I’m right across the street from that
  11. Him: I’ll be there in 10 minutes!

The place is run really smoothly; despite the crowds they really have it nailed down. I've never seen anything get out of hand in all the times I've been there. The staff is very talented, friendly and skilled at churning out drinks at a rapid rate.

If you go and have a good time, don't be afraid to tip big. Drink-wise it's a very affordable bar; even drinking with a friend I don't think my tab (before tip) has ever been above $100. My tips at Cowgirls are anywhere from 80% to 130% (+/-; I don't try to do percentages, I just come up with some number based on the bill). I also tip in cash; I have on a couple occasions had companies reject my tips on my CC bill, I guess because they thought it was too much. Can't reject cash though! They are worth every penny.

There is really no other bar I'd rather go to, at least of the ones I have been to in the U.S. It's not a place to go if you want to have a casual conversation, because you'll just be yelling at each other to hear each other. So depending on the situation, perhaps go to a quiet bar first and talk about whatever you want to talk about, then go to Cowgirls and have some real fun.

I have only two complaints about the place. One, they don't keep their web site up to date (they had a pop-up running that was more than a year old at one point). The second, which has been more of a drawback for out-of-town friends than for me, is that they aren't open every night. They are open every Friday and Saturday, and some other days if there is a sporting event on. I can't tell you how many times I've driven folks there on off nights only to find them closed.

I will miss the place greatly, and I do plan to come back and visit. I've known several of the staff there for years now, so I should see some familiar faces if I am back in town in a few months to a year.

On to number two, it is a distant but solid second.

Pecos Pit

I was introduced to this place while I was working at that same company in Pioneer Square back in 2003. I don’t know how famous it might be but it is the only place I will order a pulled pork bbq sandwich from. I’m actually going there today at noon to meet some friends.

I don’t believe I had ever had pulled pork until I had it at Pecos Pit, I’m not even sure if I had even heard of pulled pork until Pecos.


Pecos Pit is located on 1st Ave, about a mile south of the stadiums in Seattle. They are open Monday-Friday only as far as I know, and hours are something like 11AM-3PM. Outdoor seating only (or take out). Parking can be limited at times, and it's cash only too (there is an ATM across the street in a pinch, though I've never used it).

Probably the main reason I love Pecos is the sauce & spice. My standard order (some of the staff know me so well that I don't even have to say it) is Pecos Pork, Hot, Spike & Beans. Yes, I order the hottest thing on their menu; few people do, but I have been having it for so long that I got used to it a long time ago. I started out with medium way back when, but at some point it didn't seem hot enough (I think they adjusted their recipe to make things less spicy, but I'm not sure). I switched to hot, and while it really is hot, for me at least it's by no means too hot. Of all the friends I have taken there or met there, I think only one or two others have gotten hot; most usually get mild (I know of at least one who complains that even mild is too hot).

I live in Bellevue, very close to Dixie's BBQ, which is much more famous in the area because of the man. While the man passed away a year or two ago, they still have the Man sauce, which really is the hottest thing I have ever had. I enjoy the heat it gives but I really do not enjoy the flavor, and I don't generally enjoy the flavor of the pulled pork at Dixie's either. I'd much, MUCH rather drive to Seattle and get Pecos over Dixie's. The only reason I would go to Dixie's myself is to get a jar of the Man sauce to use at home; they sell what I think are 2-ounce jars for something like $10. While I haven't used it at home in many years, back when I did, one jar of that stuff literally lasted me a year. I would apply it to meat with toothpicks to get the heat inside before cooking. Really was good (and very hot). The Man sauce is probably 3-5x+ hotter than the hottest thing at Pecos.

I've had pulled pork at a couple other places as well, but for me nothing compares to Pecos (I'm sure real bbq from down south or back east is as good or better, but I haven't been able to try any of that). So for the most part when I see pulled pork on a menu I don't bother ordering it, unless I'm at Pecos.

Pecos is not a place to go if you're not a meat eater; their menu is limited to pork, beef, and beans for the most part (and the beans have meat in them). I've never tried the beef, never felt the need. I have heard that sometimes they run out of pork on really busy days, but I haven't come across that myself. Lines can be long on good-weather days, so be prepared to wait.

Well, there you have it. There are more places I will miss from up here, but they aren't unique and I'll be able to find replacements for them pretty easily down in the Bay Area.

These places will be harder. I know there are a bunch of other bars sort of like Cowgirls around the country; I haven't been to any myself, but one of my friends who travels a bunch has, and he told me that at least of the ones he's been to, nothing compares to Cowgirls Inc.

July 12, 2011

VMware jacks up prices too

Filed under: Virtualization — Tags: , — Nate @ 4:34 pm

Not exactly hot on the heels of Red Hat’s 260% price increase, VMware has done something similar with the introduction of vSphere 5 which is due later this year.

The good: They seem to have eliminated the # of core/socket limit for each of the versions, and have raised the limit of vCPUs per guest to 8 from 4 on the low end, and to 32 from 8 on the high end.

The bad: They have tied licensing to the amount of memory on the server. Each CPU license is granted a set amount of memory it can address.

The ugly: The amount of memory addressable per CPU license is really low.

Example 1 – 4x[8-12] core CPUs with 512GB memory

  • vSphere 4 cost with Enterprise Plus w/o support (list pricing)  = ~$12,800
  • vSphere 5 cost with Enterprise Plus w/o support (list pricing)  = ~$38,445
  • vSphere 5 cost with Enterprise w/o support (list pricing)         = ~$46,000
  • vSphere 5 cost with Standard w/o support (list pricing)           = ~$21,890

So you pay almost double for the low end version of vSphere 5 vs the highest end version of vSphere 4.

Yes you read that right, vSphere 5 Enterprise costs more than Enterprise Plus in this example.

Example 2 – 8×10 core CPUs with 1024GB memory

  • vSphere 4 cost with Enterprise Plus w/o support (list pricing) = ~$25,600
  • vSphere 5 cost with Enterprise Plus w/o support (list pricing) = ~$76,890
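For what it's worth, the list totals above can be reproduced from the announced per-license vRAM entitlements (24/32/48GB for Standard/Enterprise/Enterprise Plus) plus per-socket list prices consistent with those totals. A small sketch in Python (the per-license prices are back-calculated from the totals quoted here, so treat them as my assumption):

```python
import math

def licenses_needed(sockets, ram_gb, vram_per_license_gb):
    """vSphere 5 requires one license per socket, AND enough licenses
    to cover the host's physical RAM at the edition's vRAM entitlement."""
    return max(sockets, math.ceil(ram_gb / vram_per_license_gb))

# (vRAM entitlement GB, approx. list price per license) -- prices
# inferred from the totals in the examples above.
editions = {
    "Standard":        (24, 995),
    "Enterprise":      (32, 2875),
    "Enterprise Plus": (48, 3495),
}

for name, (vram, price) in editions.items():
    n = licenses_needed(4, 512, vram)     # Example 1: 4 sockets, 512GB
    print(f"{name}: {n} licenses, ~${n * price:,}")
```

The same formula gives the Enterprise Plus figure for Example 2: ceil(1024/48) = 22 licenses on the 8-socket/1TB host, or ~$76,890.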

It really is an unfortunate situation. While it is quite common to charge per CPU socket, or in some cases per CPU core, I had not heard of a licensing scheme that charged for memory.

I have been saying that I would expect to be using VMware vSphere myself until the 2012 time frame at which point I hope KVM is mature enough to be a suitable replacement (I realize there are some folks out there using KVM now it’s just not mature enough for my own personal taste).

The good news, if you can call it that, is that as far as I can tell you can still buy vSphere 4 licenses, and you can even convert vSphere 5 licenses to vSphere 4 (or 3). Hopefully VMware will keep vSphere 4 licensing around for the life of the product, which would take customers to roughly 2015.

I have not seen much info about what is new in vSphere 5; for the most part all I see are scalability enhancements for the ultra high end (e.g. 36Gbit/s network throughput, 1 million IOPS, supporting more vCPUs per VM; the number of customers that need that I can probably count on one hand). With vSphere 4 there were many good technological improvements that made it compelling for pretty much any customer to upgrade (unless you were using RDM with SAN snapshots); I don't see the same in vSphere 5 (at least at the core hypervisor level). My own personal favorites among vSphere 4's enhancements over 3 were ESXi boot from SAN, Round Robin MPIO, and the significant improvements in the base hypervisor code itself.

I can't think of a whole lot of things I would want to see in vSphere 5 that aren't already in vSphere 4, though my needs are somewhat limited. Most of the features in vSphere 4 are nice to have, though for my own needs they are not requirements. For the most part I'd be happy on vSphere Standard edition (with vMotion, which was added to Standard edition's licensed feature list about a year ago); the only reason I go for higher-end versions is license limitations on hardware. The base hypervisor has to be solid as a rock though.

In my humble opinion, the memory limits should look more like

  • Standard = 48GB (Currently 24GB)
  • Enterprise = 96GB (Currently 32GB)
  • Enterprise Plus = 128GB (Currently 48GB)

It just seems wrong to have to load 22 CPU licenses of vSphere on a host with 8 CPUs and 1TB of memory.

I remember upgrading from ESX 3.5 to 4.0, it was so nice to see that it was a free upgrade for those with current support contracts.

I have been a very happy, loyal and satisfied user & customer of VMware's products since 1999. Put simply, they have created some of the most robust software I have ever used (second perhaps to Oracle). Maybe I have just been lucky over the years, but the number of real problems (e.g. caused downtime) I have had with their products has been tiny; I don't think I'd need more than one hand to count them. I have never once had an ESX or GSX server crash, for example. I see mentions of the PSOD that ESX belches out on occasion, but I have yet to see it in person myself.

I've really been impressed by the quality and performance (even going back as far as my first e-commerce launch on VMware GSX 3.0 in 2004, when we did more transactions the first day than we were expecting for the entire first month), so I'm happy to admit I have become loyal to them over the years (for good reason IMO). Pricing moves like this though are very painful, and it will be difficult to break that addiction.

This also probably means that if you want to use the upcoming Opteron 6200 16-core CPUs (also due in Q3) with vSphere, you'll have to use vSphere 5, since 4 is restricted to 12 cores per socket (though it would be interesting to see what would happen if you tried).

If I’m wrong about this math please let me know, I am going by what I read here.

Microsoft’s gonna have a field day with these changes.

And people say there’s no inflation going on out there..


Netflix jacks up rates – I cancelled

Filed under: Random Thought — Tags: — Nate @ 2:58 pm

All that trouble tracking down why my Netflix HD streaming was not working, for nothing? I guess so. Netflix sent me an email a short time ago saying they were going to increase the cost of my plan from $10 to $16, so I closed my account. They had already raised the price by a buck, from $9 to $10, last November.

I normally wouldn't mind the increase in charges if I were using the service, but I checked my email archives: I've had the same DVD sitting waiting to be played since May 31st, and the last time I streamed a "full" movie or TV show from their streaming service looks to be January 2010, based on the "How was the quality of X?" emails. I didn't think it was that long ago. I have streamed short segments of a bunch of stuff over the past year, but I always got bored of what I was watching, so I never watched more than a few minutes at a time.

If they only had a better selection, especially on the streaming side... I swear every time I've gone there in the past 6 months I have not noticed a single thing I wanted to stream. I suppose part of that is having a Tivo for so long; I really don't keep track of what kind of things come out, and frequently come across TV shows for the first time long after they've been canceled.

I have a week to return this DVD that has been sitting here for almost 2 months, I guess I will go pop it in the mail because I likely won’t get around to watching it in the next week.

What would have been nicer of course is if Netflix were better at billing based on actual usage; if so my bill probably should have been $0.99/mo 🙂

Netflix’s content costs are apparently about to skyrocket, so they need to get ready for that by raising rates.

[..] Barclays analyst Douglas Anmuth: He figures Netflix will have a total streaming commitment of $2 billion by the end of 2011.

Let me know when we have a video streaming service that fulfills the dream of this Qwest commercial. I’m not holding my breath.

I’m more than happy to pay for premium services or products, in this case I was just paying them for the convenience that I might use it. I’ve rented 11 DVDs (10 of which I have watched) so far in 2011 through Netflix, and 17 in 2010.
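Putting rough numbers on that convenience: a quick back-of-the-envelope calculation using the figures above (assuming the $10/month plan held for the first seven months of 2011):

```python
# Back-of-the-envelope cost per disc, from the numbers in this post:
# $10/month plan, 7 months of 2011 so far, 11 DVDs rented, 10 watched.
monthly_fee = 10.00
months = 7      # January through July 2011
rented = 11
watched = 10

total_paid = monthly_fee * months
print(f"Paid so far in 2011: ${total_paid:.2f}")                      # $70.00
print(f"Per rented disc: ${total_paid / rented:.2f}")                 # $6.36
print(f"Per disc actually watched: ${total_paid / watched:.2f}")      # $7.00
```

Roughly $7 a disc is a lot of convenience fee compared to a $1 kiosk rental.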

July 11, 2011

The Decline of Mozilla

Filed under: General,Random Thought — Tags: — Nate @ 11:45 am

It’s quite possible that Firefox (and Mozilla in general) may have peaked already.

There’s been a lot of discussion and reporting recently on some pretty big changes either being implemented or being pushed by influential members of the Mozilla organization around their flagship product Firefox.


Ad for Firefox v4 viewed a short time ago


Much of the controversy is around one vocal member of Mozilla saying they should move to a much faster release cycle and not be afraid of breaking things for users in the process because it’s what’s best for the Internet.

It seems that Mozilla’s shift in policy is because of Google’s Chrome browser which already does this and has been gaining market share among those who don’t care about their privacy.

Mozilla gets a very large percentage of their revenue from the little Google search bar on the top right of the browser (and just in case you’re wondering, yes I do block all of Google’s cookies). Apparently this contract with Google expires later this year. Who knows what the new deal may look like (I’d say it’s safe to assume there will be a new deal).

Google obviously wasn’t too happy with the lack of progress in the web browser arena which is why they launched Chrome.

Now Firefox feels more threatened by Chrome, so it appears to be trying to stem the losses by adopting a more Chrome-like approach, which has upset a decent part of their user base, who, like myself, just want a stable browser.

The web standards world has been clearly lagging, being that HTML 4.01 was released in 1999, and we don’t have a ratified HTML 5 yet.

And despite what some folks say, version numbers are important when used properly at least. A major version number for the most part implies a high level of compatibility(hopefully 100% compatibility) for all minor versions residing under the major version.

When used improperly, as with the Firefox 4 to Firefox 5 migration, it causes needless confusion (also consider MS Win95 – 98 – XP – Vista – 7). If version numbers really don’t matter then perhaps they should use the release date as the version number, so at least people know roughly how old it is.

Stories like this certainly don’t help either.

It is unfortunate that Mozilla seems to lack the resources of a more traditional model of software development: building newer, more feature-full versions while simultaneously providing security and other minor fixes for more established, stable versions of the product.

Combine what will likely be a less lucrative contract with Google, the rise of Chrome, and the alienation of (what seems to me at least) a pretty large portion of their potential market (whether or not they are current users), and it really seems like Firefox, and Mozilla, have peaked and will likely face significant declines in the coming few years.

It is sad for me, as someone who has used Firefox since it was Phoenix 0.3 and who has been using Mozilla Seamonkey for a long time as well (I usually put “work” things in Seamonkey, so if a browser crashes it only takes a portion of my stuff away).

There are a few plug-ins I use for Firefox (by no means is it a long list!) that for the most part have kept me on Firefox; otherwise I probably would have jumped ship to Opera or something (I originally stopped using Opera on Linux what seems like almost 10 years ago because of memory leaks with SSL).

I can only hope that the various long term distributions (Red Hat, Debian, Ubuntu LTS, etc.) can band together to support a stable version of Firefox in the event it’s completely abandoned by Mozilla. Ubuntu has already mentioned they are considering Chrome(?!) for some future release of LTS.

The privacy implications of Chrome are just too much for me to even consider using it as a browser.

While there are some bugs, myself I am quite satisfied with Firefox 3.6 on this Ubuntu 10.04 laptop.

At one point it seemed plausible that Gecko, the engine that powers Firefox, was going to take over the world, especially in the mobile/embedded space, but it never caught on there, with most everyone going to Webkit instead. In the mobile space, again, Opera seems to have poured a lot more work into mobile versions of their browser than Mozilla ever did. I was just looking at my Sharp Zaurus PDAs a couple of days ago (before giving them to a friend ahead of my move) and saw they all had a mobile version of Opera, going back to the 2003-2004 time frame.

If Firefox simply wanted a bigger version number, they could’ve just pulled a Slackware and skipped a few major version numbers (Slackware was the first distribution I used, until I switched to Debian in 1998).

The big winner in all of this I think is Microsoft, who is already not wasting any time in wooing their corporate customers, many of whom were already using Firefox to some extent or at least had it on their radar.

I guess this is just another sign I’m getting older. There was a time when, for no real reason, I would get excited about compiling the latest version of XFree86, the latest Linux kernel, and downloading the latest beta of KDE (yes, that’s what the screen shot to the left shows! – from 1998).

Now, for the most part, things are good enough, that the only time I seek newer software is if what I have is not yet compatible with some new hardware.

(Seeing that Ad on Yahoo! earlier today is what prompted this post, ironically when I clicked on the Firefox 4 link it took me to a page to download Firefox 5. Apparently Firefox development moves too fast for advertisers.)

July 8, 2011

Wired or Wireless?

Filed under: Networking,Random Thought,Uncategorized — Tags: — Nate @ 9:58 am

I’ll start out by saying I’ve never been a fan of wifi; it’s always felt like a nice gimmick-like feature to have, but other than that I usually steered clear. Wifi has been deployed at all the companies I have worked at in the past 7-8 years, though in all cases I was never responsible for it (I haven’t done internal IT since 2002, at which time wifi was still in its early stages (assuming it was out at all yet? I don’t remember) and was not deployed widely at all, including at my company). I could probably count on one hand the number of public wifi networks I have used over the years, excluding hotels (of which there were probably ten).

In the early days it was mostly because of paranoia around security/encryption, though over the past several years encryption has really picked up and helped that area a lot. There is still a little bit of fear in me that the encryption is not up to snuff, and I would prefer using a VPN on top of wifi to make it even more secure; only then would I really feel comfortable with wifi from a security standpoint.

From a security standpoint I am less concerned about people intercepting my transmissions over wifi than I am about people breaking into my home network over wifi (which usually happens by intercepting transmissions). My point is more about the content of what I’m transferring: if it is important, it is always protected by SSL or SSH, and in the case of communicating with my colo or cloud hosted server there is an OpenVPN SSL layer under that as well.

Many years ago, I want to say in the 2005-2006 time frame, there was quite a bit of hype around the Linksys WRT-54G wifi router for being easy to replace the firmware with custom stuff and get more functionality out of it. So I ordered one at the time and put dd-wrt on it, a custom firmware that was talked about a lot back then (is there something better out there? I haven’t looked). I never ended up hooking it to my home network, just a crossover cable to my laptop to look at the features.

Then I put it back in its box and put it in storage.

Until earlier this week, when I decided to break it out again to play with in combination with my new HP Touchpad, which can only talk over Wifi.

My first few days with the Touchpad involved having it use my Sprint 3G/4G Mifi access point. As I mentioned earlier I don’t care about people seeing my wifi transmissions I care about protecting my home network. Since the Mifi is not even remotely related to my home network I had no problem using it for extended periods.

The problem with the Mifi, from my apartment, is the performance. At best I can get 20% signal strength for 4G, and maybe 80% signal strength for 3G; latency is quite bad in both cases, and throughput isn’t the best either. A lot of times it felt like I was on a 56k modem; other times it was faster. For the most part I used 3G because it was more reliable for my location. However, I do have a 5 gig/month data cap for 3G, so considering I started using the Touchpad on the 1st of the month, I got kind of concerned I might run into it playing with the new toy during the first month. I just checked Sprint’s site and I don’t see a way to see intra-month data usage, only data usage for the month once it’s completed. The Mifi tracks data usage while it is running, but this data is not persisted across reboots, and I think it’s also reset if the Mifi changes between 3G and 4G services. I have unlimited 4G data, but the signal strength where I’m at just isn’t strong enough.

I looked into the possibility of replacing my Mifi with newer technology, but after reading some customer reviews of the newer stuff it seemed unlikely I would get a significant improvement in performance at my location, enough to justify the cost of the upgrade at least so I decided against that for now.

So I broke out the WRT-54G access point and hooked it up. Installed the latest recommended version of firmware, configured the thing and hooked up the touchpad.

I knew there was a pretty high number of personal access points deployed near me; it was not uncommon to see more than 20 SSIDs being broadcast at any given time. So interference was going to be an issue. At one point my laptop showed me that 42 access points were broadcasting SSIDs. And that of course does not even count the ones that are not broadcasting; who knows how many of those there are, I haven’t tried to get that number.

With my laptop and touchpad being located no more than 5 feet away from the AP, I had signal strengths of roughly 65-75%. To me that seemed really low given the proximity. I suspected significant interference was causing signal loss. Only when I put the touchpad within say 10 inches of the antenna from the AP did the signal strength go above 90%.


Looking into the large number of receive errors told me that those errors are caused almost entirely by interference.

So then I wanted to see what channels were most being used and try to use a channel that has less congestion, the AP defaulted to channel 6.

The last time I mucked with wifi on Linux there seemed to be an endless stream of wireless scanning, cracking, and hacking tools. Much to my shock and surprise, these days most of those tools haven’t been maintained in 5-6-7-8+ years. There aren’t many left. Sadly enough, the default Ubuntu wifi apps do not report channels; they just report SSIDs. So I went on a quest to find a tool I could use. I finally came across something called wifi radar, which did the job more or less.

I counted about 25 broadcasting SSIDs using wifi radar; nearly half of them, if I recall right, were on channel 6. A bunch more were on 11 and 1, the other two major channels. My WRT54G had channels going all the way up to 14. I recall reading several years ago about frequency restrictions in different places, but in any case I tried channel 14 (which is banned in the US). The wifi router said it was on channel 14, but neither my laptop nor the Touchpad would connect; I suspect they flat out don’t support it. No big deal.
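For anyone wanting to do the same kind of channel census without hunting down wifi radar, the output of `iwlist scan` can be tallied with a few lines of Python. This is a rough sketch only; the exact `iwlist` output format varies a bit between drivers and versions (some print `Channel:N`, others only a `Frequency:` line), so treat the regex as an assumption:

```python
import re
import subprocess
from collections import Counter

def channel_census(interface="wlan0", scan_output=None):
    """Count broadcasting SSIDs per channel from `iwlist <iface> scan` output."""
    if scan_output is None:
        # Needs root (or at least scan permission) on most systems.
        scan_output = subprocess.run(
            ["iwlist", interface, "scan"],
            capture_output=True, text=True, check=True).stdout
    # Each cell in iwlist output typically has a line like "Channel:6"
    return Counter(int(ch) for ch in re.findall(r"Channel:(\d+)", scan_output))

# Trimmed-down example of what iwlist output looks like:
sample = """\
          Cell 01 - Address: 00:11:22:33:44:55
                    Channel:6
                    ESSID:"neighbor1"
          Cell 02 - Address: 66:77:88:99:AA:BB
                    Channel:11
                    ESSID:"neighbor2"
          Cell 03 - Address: CC:DD:EE:FF:00:11
                    Channel:6
                    ESSID:"neighbor3"
"""
print(channel_census(scan_output=sample))  # Counter({6: 2, 11: 1})
```

The channel with the smallest count (among 1, 6, and 11, the non-overlapping ones) is the one to pick.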

Then I went to channel 13. Laptop immediately connected, Touchpad did not. Channel 13 is banned in many areas, but is allowed in the U.S. if the power level is low.

Next I went to channel 12. Laptop immediately connected again, Touchpad did not. This time I got suspicious of the Touchpad. So I fired up my Palm Pre, which uses an older version of the same operating system. It saw my wifi router on channel 12 no problem. But the Touchpad remained unable to connect even if I manually input the SSID. Channel 12 is also allowed in the U.S. if the power level is low enough.

So I ended up on channel 11. Everything could see everything at that point. I enabled WPA2 encryption and MAC address filtering (yes, I know you can spoof MACs pretty easily on wifi, but at the same time I have only 2 devices I’ll ever connect, so blah). I don’t have a functional VPN yet, mainly because I don’t have a way (yet) to access VPN on the Touchpad; it has built-in support for two types of Cisco VPNs but that’s it. I installed OpenVPN on it but I have no way to launch it on demand without being connected to the USB terminal. I suppose I could just leave it running, and in theory it should automatically connect when it finds a network, but I haven’t tried that.
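If I do end up just leaving OpenVPN running in the background, the client config would need to be tolerant of the network coming and going. Something along these lines should keep retrying and re-establish the tunnel whenever wifi comes back (a sketch only; the hostname and file paths are placeholders, not my actual setup):

```
# Hypothetical OpenVPN client config fragment for a device that
# drifts on and off the network (hostname/paths are placeholders).
client
dev tun
proto udp
remote vpn.example.com 1194
resolv-retry infinite   # keep retrying DNS resolution while offline
keepalive 10 120        # ping every 10s, assume link down after 120s
persist-key             # keep keys loaded across restarts
persist-tun             # keep the tun device open across restarts
ca /etc/openvpn/ca.crt
cert /etc/openvpn/client.crt
key /etc/openvpn/client.key
```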

So on to my last point on wifi: interference. As I mentioned earlier, signal quality was not good even a few feet away from the access point. I decided to run a basic throughput test on both the Touchpad and the laptop. All tests used the same Comcast consumer broadband connection.

  • HP Touchpad – 802.11g wireless: 18 milliseconds latency, 5.32 Megabits download, 4.78 Megabits upload
  • Toshiba dual core laptop (Ubuntu 10.04, Firefox 3.6) – 802.11g wireless: 13 milliseconds latency, 9.46 Megabits download, 4.89 Megabits upload
  • Toshiba dual core laptop (Ubuntu 10.04, Firefox 3.6) – 1 gigabit ethernet: 9 milliseconds latency, 27.48 Megabits download, 5.09 Megabits upload

The test runs in Flash, and as you can see the Touchpad’s browser (or Flash) is not nearly as fast as the laptop’s, which isn’t too unexpected.

Comparing LAN transfer speeds was even more of a joke of course. I didn’t bother involving the Touchpad in this test, just the laptop. I used iperf to test throughput (no special options, just default settings).

  • Wireless – 7.02 Megabits/second (3.189 milliseconds latency)
  • Wired – 930 Megabits/second (0.3 milliseconds latency)

What honestly surprised me though was how much slower wifi was over the WAN on the laptop vs the wired connection; it’s almost 1/3rd the performance on the same laptop/browser. I just measured to be sure: my laptop’s screen (where I believe the antenna is) is 52 inches from the WRT54G router.
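To put actual numbers on “almost 1/3rd”, here are the ratios computed from the speedtest and iperf figures above:

```python
# Wireless vs wired throughput on the same laptop, using the
# download numbers measured earlier in this post (Megabits/sec).
wifi_down = 9.46    # 802.11g, over the WAN
wired_down = 27.48  # gigabit ethernet, over the WAN

ratio = wifi_down / wired_down
print(f"WAN: wireless is {ratio:.0%} of wired download throughput")  # 34%

# On the LAN (iperf, default settings) the gap is far bigger:
lan_wifi, lan_wired = 7.02, 930.0
print(f"LAN: wireless is {lan_wifi / lan_wired:.1%} of wired")       # 0.8%
```

So over the WAN the wifi link is the bottleneck about two-thirds of the time, and on the LAN it is nothing but bottleneck.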

It’s “fast enough” for the Touchpad’s casual browsing, but I certainly wouldn’t want to run my home network on it; it defeats the purpose of paying for the faster connectivity.

I don’t know how typical these results are out there. One place I recently worked at was plagued with wireless problems; performance was so terrible and unreliable. They upgraded the network and I wasn’t able to maintain a connection for more than two minutes, which sucks for SSH. To make matters worse, the vast majority of their LAN was in fact wireless; there was very little cable infrastructure in the office. Smart people hooked up switches and stuff for their own tables, which made things more usable, though still a far cry from optimal.

In a world where populations are getting ever more dense and technology continues to penetrate, driving more deployments of wifi, I suspect interference problems will only get worse.

I’m sure it’s great if the only APs within range are your own, if you live or work at a place that is big enough. But small/medium businesses frequently won’t be so lucky, and if you live in a condo or apartment like me, ouch…

My AP is not capable of operating in the 5GHz range (802.11a/n), which could well be significantly less congested. I don’t know if it is accurate or not, but wifi radar claims every AP within range of my laptop (47 at the moment) is 802.11g (same as me). My laptop’s specs say it supports 802.11b/g/n, so I’d expect that if anyone around me was using N then wifi radar would pick it up, assuming the data being reported by wifi radar is accurate.

Since I am moving in about two weeks I’ll wait till I’m at my new apartment before I think more about the possibility of going to an 802.11n capable device for reduced interference. On that note, do any of my 3-4 readers have AP suggestions?

Hopefully my new place will get better 4G wireless coverage as well, I already checked the coverage maps and there are two towers within one mile of me, so it all depends on the apartment itself, how much interference is caused by the building and stuff around it.

I’m happy I have stuck with ethernet for as long as I have at my home, and will continue to use ethernet at home and at work wherever possible.
