TechOpsGuys.com Diggin' technology every day

February 24, 2010

SSD Not ready yet?

Filed under: Storage — Nate @ 7:26 pm

SSD and storage tiering seem to be hot topics these days. Certain organizations are pushing them pretty hard, though it seems the “market” isn’t buying the hype, or doesn’t see the cost benefit (yet).

In the consumer space SSD seems to be problematic, with seemingly widespread firmware issues, performance issues, and even reliability issues. In the enterprise space most storage manufacturers have yet to adopt it, and I’ve yet to see a storage array that has enough oomph to drive SSD effectively (TMS units aside). It seems SSD really came out of nowhere, and none of the enterprise players have systems that can push the IOPS that SSD can deliver.

And today I see news that STEC stock has tanked because they once again came out and said EMC customers aren’t buying SSD, so they aren’t selling as much as they had expected.

With this delay in adoption in the enterprise space, it makes me wonder whether STEC will even be around in the future. HDD manufacturers, like the enterprise storage companies, sort of missed the boat when it came to SSD, but such a slow adoption rate may allow the manufacturers of spinning rust to catch up and win back the business they lost to STEC in the meantime.

Then there’s the whole concept of automagic storage tiering at the sub-volume level. It sounds cool on paper, though I’m not yet convinced of its effectiveness in the real world, mainly due to the delay involved in a system detecting particular hot blocks/regions and moving them to SSD; maybe by the time they are moved the data is no longer needed. I’ve not yet talked with anyone with real-world experience with this sort of thing, so I can only speculate at this point. Compellent of course has the most advanced automagic storage tiering today and promotes it pretty heavily. I’ve only talked to one person who has worked with Compellent, and he said he specifically only recommended their gear for smaller installs. I’ve never seen SPC-1 numbers posted by Compellent, so at least in my mind their implementation remains in question, even though the core technology certainly sounds nice.

Coincidentally, Compellent’s stock took a similar 25% haircut recently after their earnings were released; I guess expectations were too high.

I’d like to see a long-running test, along the lines of what NetApp submitted for SPC-1: the same array, two tests, one with automagic storage tiering turned on and the other without, and see the difference. I’m not sure how SPC-1 works internally, or whether it is a suitable test to illustrate automagic storage tiering, but at least it’s a baseline that can be used to compare with other systems.

February 23, 2010

Uptime matters

Filed under: Uncategorized — Nate @ 11:04 am

A friend of mine sent me a link to this xkcd comic and said it reminded him of me; I thought it was fitting given the slogan on the site.

Devotion to Duty

AMD 12-core chips on schedule

Filed under: News — Nate @ 10:31 am

I came across this article a few days ago on Xbitlabs and was surprised it didn’t seem to get replicated elsewhere. I found it while playing with a stock tracking tool on my PDA (I was looking at news regarding AMD). I’m not an investor, but I find the markets interesting and entertaining at times.

Anyways, it mentioned some good news from my perspective: the 12-core Opterons (I’d rather call them that than use their code name, since the code names quickly become confusing; I used to stay on top of all the CPU specs back in the Socket 7 days) are on track to ship this quarter. I was previously under the impression, incorrectly I guess, that they would ship by the end of next quarter, and that it was Intel’s 8-core chips that would ship this quarter.

From the article:

AMD Opteron “Magny-Cours” processor will be the first chip for the AMD G34 “Maranello” platform designed for Opteron processors 6000-series with up to 16 cores, quad-channel memory interface, 2 or 4 sockets, up to 12 memory modules per socket and some server and enterprise-specific functionality. Magny-Cours microprocessors feature two six-core or quad-core dies on one piece of substrate.

I read another article recently on The Register which mentioned AMD’s plans to take the chip to 16 cores in 2011. I’ve been eagerly awaiting the 12-core chips for some time now, mainly for virtualization: having the extra cores gives the CPU scheduler more options when scheduling multi-vCPU virtual machines. It further increases the value of dual socket systems, allowing 24 real cores in a dual socket configuration, which to me is just astonishing. And the ability to have 24 memory sockets on a dual socket system is also pretty amazing. I have my doubts that anyone can fit 24 memory modules on a single half-height blade, but who knows. Right now, to my knowledge, HP has the densest half-height blade as far as memory is concerned, with 18 DIMMs for a Xeon 5500-based system and 16 DIMMs for a six-core Opteron-based system. IBM recently announced a new, denser blade with 18 slots, but it appears to be full height, so it doesn’t really qualify; I think a dual socket full-height blade is a waste of space. Some Sun blades have good densities as well, though I’m not well versed in their technology.

February 9, 2010

Why I hate the cloud

Filed under: Virtualization — Nate @ 4:26 pm

Ugh, I hate all this talk about the cloud. For the most part, from what I can see, it’s a scam to sell mostly overpriced, high-margin services to organizations that don’t know any better. I’m sure there are plenty of organizations out there with IT staff that aren’t as smart as my cat, but there are plenty that have people who are smarter, too.

The whole cloud concept is sold pretty well, I have to admit. It frustrates me so much I don’t know how to properly express it. The marketing behind the cloud gives some people the impression that they can get nearly unlimited resources at their disposal, with good SLAs and good performance, and pay pennies on the dollar.

It’s a fantasy. That reality doesn’t exist. Now sure, the cost models of some incompetent organizations out there might be bad enough that clouds make a lot of sense. But again, there are quite a few that already have a cost-effective way of operating. I suppose I am not the target customer, as every cloud provider I have talked to or seen a cost analysis for has come in at a MINIMUM of 2.5-3x more expensive than doing it in house, going as high as 10x. Even the cheap crap that Amazon offers is a waste of money.

From my perspective, a public cloud (by which I mean an external cloud service provider, vs. hosting a “cloud” in house by way of virtual machines, grid computing and the like) has a few use cases:

  1. Outsourced infrastructure for very small environments. I’m talking single-digit server counts here, low utilization, etc.
  2. Outsourced “managed” cloud services, which would replace managed hosting (in the form of dedicated physical hardware), primarily to gain an abstraction layer from the hardware to handle things like fault tolerance and DR better. Again, really only cost-effective for small environments.
  3. Peak capacity processing – sounds good on paper, but you really need a scale-out application to be able to handle it, and very few applications can handle such a situation gracefully. That is, being able to nearly transparently shift compute resources to a remote cloud on demand for short periods of time to handle peak load. I can’t emphasize enough that the application really has to be built from the ground up to handle this. A lot of the newer “Web 2.0” type shops are building (or have built) such applications, but of course the VAST majority of applications most organizations use were never designed with this concept in mind. There are also frequently significant concerns surrounding privacy and security.

I’m sure you can extract other use cases, but in my opinion they assume a (nearly?) completely incompetent IT/Operations staff and/or management layers that prevent the organization from operating efficiently. I believe this is unfortunately common in many larger organizations, which is one reason I steer clear of them when looking for employment.

It just drives me nuts when I encounter someone who either claims the cloud is going to save them all the money in the world, or someone who is convinced that it will (but they haven’t yet found the provider that can do it).

Outside of the above use cases, I would bet money that any reasonably efficient IT shop (usually a team of 10 or fewer people) can do this cloud thing far cheaper than any service provider would offer it to them. And if a service provider did happen to offer at- or below-cost pricing, I would call BS on them: either they are overselling oversubscribed systems that they won’t be able to sustain, or they are buying customers so they can build a customer base. Even Amazon, which people often call the low-cost leader for cloud, is FAR more expensive than doing it in house in every scenario I have seen.

Almost equally infuriating to me are those who believe all virtualization solutions are created equal, and that they can just go use the free stuff (i.e. “free” Xen) rather than pay for vSphere. I am the first to admit that vSphere Enterprise Plus is not worth the $$ for virtually all customers out there; there is a TON of value available in the lower-end versions of VMware. Much like with Oracle, sadly it seems that when many people think of VMware they immediately gravitate towards the ultra high end and say “oh no, it’s too expensive!” I’ve been running ESX for a few years now and have gotten by just fine without DRS, without host profiles, without distributed switches, without vMotion, without storage vMotion, the list goes on! Not saying they aren’t nice features, but if you are cost conscious you often need to ask yourself whether you really need them, nice as they are to have. I’d wager the answer is frequently no.

February 4, 2010

Is Virtualisation ready for prime time?

Filed under: Virtualization — Nate @ 12:06 pm

The Register asked that question and some people responded; anyone familiar?

When was your first production virtualisation deployment and what did it entail? My brief story is below (copied from the comments of the first article; easier than re-writing it).

My first real production virtualization deployment was back in mid 2004 I believe, using VMware GSX, I think v3.0 at the time (now called VMware Server).

The deployment was an emergency decision that followed a failed software upgrade to a cluster of real production servers shared by many customers. The upgrade was supposed to add support for a new customer that was launching within the week (they had already started a TV advertising campaign). Every attempt was made to make the real deployment work, but there were critical bugs and it had to be rolled back. After staying up all night working on it, people started asking what we were going to do next.

One idea (forgot whose, maybe it was mine) was to build a new server with VMware, transfer the QA VM images to it (1 Tomcat web server, 1 BEA WebLogic app server, 1 Win2k SQL/IIS server; the main DB was on Oracle and we used another schema for that cluster on our existing DB), and use it for production; that would be the fastest turnaround to get something working. The expected load was supposed to be really low, so we went forward. I spent what felt like 60 of the next 72 hours getting the systems ready and tested over the weekend with some QA help, and we launched on schedule the following Monday.

Why VMs and not real servers? Well we already had the VM images, and we were really short on physical servers, at least good ones anyways. Back then building a new server from scratch was a fairly painful process, though not as painful as integrating a brand new environment. What would usually take weeks of testing we pulled off in a couple of days. I remember one of the tough/last issues to track down was a portion of the application failing due to a missing entry in /etc/hosts (a new portion of functionality that not many were aware of).

This is the second time I’ve managed to make The Register (yay!); the first was a response to my Xiotech speculations a few months back.

January 5, 2010

Acknowledge Nagios Alerts Via Email Replies

Filed under: Monitoring — @ 3:59 pm

Monitoring should be annoying by design – when something is broken, we need to fix it and we need to be reminded it needs fixing until it gets fixed. That’s why we monitor in the first place. In that vein, I’ve configured our Nagios server to notify every hour for most alerts. However, there are times when a certain alert can be ignored for a while and I might not have a computer nearby to acknowledge it.

The solution: acknowledge Nagios alerts via email. A quick reply on my smartphone and I’m done.

Setting it up is fairly simple and involves a few components: an MTA (Postfix in my case), procmail (might need to install it), a perl script and the nagios.cmd file. I used the info in this post to get me started. My instructions below were done on two different CentOS 5.4 installs running Nagios 3.0.6 and Nagios 3.2.0.

Procmail
Make a /home/nagios/.procmailrc file (either su to the nagios user or chown to nagios:nagios afterwards) and paste in the following:

LOGFILE=$HOME/.procmailrc.log
MAILDIR=$HOME/Mail
VERBOSE=yes
PATH=/usr/bin
:0
* ^Subject:[    ]*\/[^  ].*
| /usr/lib/nagios/eventhandlers/processmail "${MATCH}"
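If you want to sanity-check the recipe by hand, you can feed a fake message straight through procmail as the nagios user once the processmail script from the Perl section below is in place. A rough sketch (it assumes your procmail accepts an rcfile on the command line; the subject mirrors the example reply shown further down):

# run as the nagios user; the recipe should pipe the matched Subject to processmail
printf 'Subject: whatever ack ** PROBLEM alert - server1/CPU Load is WARNING **\n\ntest body\n' | procmail /home/nagios/.procmailrc
tail /home/nagios/.procmailrc.log

The recipe logs to $HOME/.procmailrc.log (per the LOGFILE line above), so the tail should show the Subject match and the pipe to processmail.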

Postfix
Tell Postfix to use procmail by adding the following line to /etc/postfix/main.cf (restart Postfix when finished):
mailbox_command = /usr/bin/procmail
You might want to search your main.cf file for mailbox_command to make sure procmail isn’t already configured/turned on. You also might want to do whereis procmail to make sure it’s in the /usr/bin folder. If your Nagios server hasn’t previously been configured to receive email, you’ve got some configuration to do – that’s outside of the scope of this article, but I would suggest getting that up and running first.
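Once Postfix is restarted, it's worth confirming that mail to the nagios user actually ends up in procmail. A quick sketch, assuming the box accepts mail addressed to its own hostname (nagios1 is just a placeholder here):

# send a throwaway message to the nagios user, then watch the procmail log
echo "test body" | mail -s "delivery test" nagios@nagios1
tail -f /home/nagios/.procmailrc.log

If nothing shows up in the procmail log, check /var/log/maillog before digging any deeper.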

Perl Script
Next up is the perl script that procmail references. Create a /usr/lib/nagios/eventhandlers/processmail file and chmod 755 it – paste in the code below:

#!/usr/bin/perl

$correctpassword = 'whatever';   # more of a sanity check than a password and can be anything
$subject = "$ARGV[0]";
$now = `/bin/date +%s`;
chomp $now;
$commandfile = '/usr/local/nagios/var/rw/nagios.cmd';

if ($subject =~ /Host/ ){      # this parses the subject of your email
        ($password, $what, $junk, $junk, $junk, $junk, $junk, $host) = split(/ /, $subject);
        ($host) = ($host) =~ /(.*)\!/;
} else {
        ($foo, $bar) = split(/\//, $subject);
        ($password, $what, $junk, $junk, $junk, $junk, $junk, $host) = split(/\ /, $foo);
        ($service) = $bar =~ /^(.*) is.*$/;
}

$password =~ s/^\s+//;
$password =~ s/\s+$//;

print "$password\t$what\t$host\t$service\n";

unless ($password =~ /$correctpassword/i) {
        print "exiting...wrong password\n";
        exit 1;
}

# ack - this is where the acknowledgement happens
# you could get creative with this and pass all kinds of things via email
# a list of external commands here: http://www.nagios.org/development/apis/externalcommands/
if ($subject =~ /Host/ ) {
        $ack =
"ACKNOWLEDGE_HOST_PROBLEM;$host;1;1;1;email;email;acknowledged through email";
} else {
        $ack = "ACKNOWLEDGE_SVC_PROBLEM;$host;$service;1;1;1;email;acknowledged through email";
}

if ($what =~ /ack/i) {
        sub_print("$ack");
} else {
        print "no valid commands...exiting\n";
        exit 1;
}


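# write the timestamped external command into the Nagios command file,
# which is normally a named pipe that the Nagios daemon reads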
sub sub_print {
        $narf=shift;
        open(F, ">$commandfile") or die "cant";
        print F "[$now] $narf\n";
        close F;
}

The script above assumes certain things about how your email subject line is formatted, and you might have to tweak it if you’ve done much (or any) customization to the notification commands in the default commands.cfg file. One thing you will need to change is the host macro in the subject: the default uses $HOSTALIAS$, and you’ll need to replace that with $HOSTNAME$, as that is what the nagios.cmd interface expects. If you don’t change it, the perl script above will pass the $HOSTALIAS$ value to the nagios.cmd file and Nagios won’t know what to do with it. Below is a sample of my notify-service-by-email command:

define command{
        command_name    notify-service-by-email
        command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nComment: $SERVICEACKCOMMENT$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n\nMonitoring Page: http://nagios1/nagios\n" | /bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTNAME$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$

Example
So, when I get an alert that has a subject something like this:
** PROBLEM alert - server1/CPU Load is WARNING **
I can just reply and add “whatever ack” to the beginning of the subject line:
whatever ack RE: ** PROBLEM alert - server1/CPU Load is WARNING **
and the alert will be acknowledged.

Troubleshooting
As I said earlier, you will want to make sure Postfix is configured correctly for receiving email for the Nagios user – this might be an area where you’ll have issues if it’s not set up correctly. The other thing that fouled me up a few times was the Notification command section I mentioned above. By passing commands directly to the nagios.cmd file and by watching the log files, you should be able to spot any misconfigs.
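For reference, here is roughly what passing a command directly looks like. This is just a sketch using the documented ACKNOWLEDGE_SVC_PROBLEM format and the same command file path as the perl script above; swap in a host/service that actually exists in your config, and adjust the log path if you didn't install from source:

# manually acknowledge a service problem by writing to the external command file
now=$(date +%s)
printf "[%s] ACKNOWLEDGE_SVC_PROBLEM;server1;CPU Load;1;1;1;cli-test;acked by hand\n" "$now" > /usr/local/nagios/var/rw/nagios.cmd
tail -f /usr/local/nagios/var/nagios.log

If the acknowledgement shows up from the command above but not from an email reply, the problem is somewhere in the mail/procmail/perl chain rather than in Nagios itself.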

January 4, 2010

Uptime of various web properties

Filed under: Monitoring — Nate @ 6:19 pm

Came across a post on Techcrunch, which then led me to Alertsite, which seems to maintain a list of various web sites in various industries and their average uptime and response time. I thought it was interesting that Amazon is up only 97% of the time, and LinkedIn only 95% of the time, for example. It kind of puts things in perspective, I think: an increasing number of people and organizations are “demanding” higher levels of uptime, and while it’s certainly achievable, it seems in many cases the costs are just not worth it. Taking it to an extreme level, this topic reminds me of this article written several years ago by our best friends at The Register.

When Microsoft goofed the DNS settings on its microsoft.com servers recently, he figured the site would have to be up for the next two hundred years to achieve five-nines uptime.

Don’t know why I remember things like that but can’t remember other things like birthdays.
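For what it’s worth, the five-nines math is easy enough to check with a quick back-of-the-envelope calculation:

# 99.999% uptime leaves 0.001% of the year for downtime
echo "scale=2; 365.25 * 24 * 60 * 0.00001" | bc
# => roughly 5.26 minutes of allowed downtime per year

So a day-long outage needs a couple of centuries of otherwise perfect uptime to average back out to five nines, which is where numbers like the one in that quote come from.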

December 13, 2009

Save MySQL from Oracle

Filed under: News — Nate @ 11:02 am

One of the creators (the creator?) of MySQL is pleading with the public to write to the EC to save MySQL.

Myself, I’m not so sure of the future of MySQL in any case; it seems that since Sun bought them it has gotten into nothing but trouble. I’m sure the MySQL guys enjoyed the big payout, but it may have cost them even more. I’m still using versions of MySQL that were released before Sun bought them, because there is so much uncertainty around the versions that are out now. It’s been forked at least once, and I have questions about the stability of the latest official branches.

Keep in mind, if you do use MySQL and want to secure some sort of guarantee from Oracle, that Oracle already owns InnoDB and BerkeleyDB, InnoDB of course being probably the most widely deployed engine for MySQL.

I for one am against the merger, not for MySQL but for Java. Split Java (and MySQL I suppose) out and Oracle can have the rest of Sun. Oracle already has one of the big enterprise JVMs – JRockit, acquired when they bought BEA. The only other big JVM I know of is from IBM.

December 10, 2009

Lesser of two evils

Filed under: General — Nate @ 10:05 pm

Thanks to The Register for another interesting thing to write about. This time it’s about a Mozilla guy, apparently the one who wrote Firefox (I still miss Phoenix; it was a really light browser, unlike Firefox today), suggesting people should switch their search engines from Google to Bing because Bing has a better privacy policy.

So which is the lesser of the two evils, Microsoft or Google? For me, at least for the moment, it is Google, but with each passing day my distrust of them grows. I have never signed up for their services, I have never accepted their cookies, and while I do use their search engine, it’s unlikely the searches I do provide much value to their advertisers. I used to use alltheweb.com as my search engine; I resisted Google for as long as I could. The thing that drove me away from alltheweb at the time was when they introduced banner ads. I even told them as much, and they thanked me for the feedback and said they would take it under consideration for future improvements, or something along those lines. I notice now they do not have banner ads. I’m not against advertising myself (I do not and never have used ad-blocking browser plugins), but I am against the collection of data on me for that purpose. I don’t bother trying to opt out of such systems, because I don’t trust the opt-out in the first place; I would much rather take the time to block the data collection on my end (wherever possible). I’m sure I won’t get them all, but I’ll get most.

Anyways, on the topic of privacy on the net: I’m probably one of the few who take it fairly seriously. That is, I rarely sign up for any offers, and I create unique email addresses for each organization I have a relationship with (which as of last count is roughly 230 unique email addresses, each with an associated inbox). I host my own email, DNS, and web services on a server I physically own at a local co-location facility that I pay for. I have hosted my own email+web+DNS for more than ten years now. This blog is not hosted there because it wasn’t set up by me, and there isn’t much private information here anyways.

On to web browsing. I have had my web browser prompt me for each and every web cookie that comes in for at least the last five years now (I do love that feature that saves the preference for each site). Checking the sqlite database in Firefox reveals the following (there’s a rough query sketch after the list):

  • Reject cookies outright from 2,099 web sites
  • Accept cookies from 216 web sites
  • Accept cookies from 470 web sites for “session” only
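If you’re curious where those numbers live, they can be pulled straight out of Firefox’s permissions database. A rough sketch, assuming the Firefox 3.x-era schema where cookie permissions sit in the moz_hosts table of permissions.sqlite (permission 1 = allow, 2 = block, 8 = allow for session); adjust the profile path for your setup:

# count cookie permissions by type; best run while Firefox is closed
sqlite3 ~/.mozilla/firefox/*.default/permissions.sqlite "SELECT permission, COUNT(*) FROM moz_hosts WHERE type='cookie' GROUP BY permission;"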

I read recently that Flash cookies are becoming a more common means of tracking users as well, because they are more difficult to detect/delete. In fact I had no idea there was such a thing as cookies in Flash until I read the article (thanks again to The Reg). I have been using the Prefbar Firefox plugin for years now (since the Phoenix days I believe), which provides a couple of handy things for Flash: one is to enable/disable the plugin on demand, the other is to immediately kill all Flash objects on the page. It works pretty well. I usually keep Flash off unless I specifically need it, not for privacy reasons but more for performance and stability reasons (and most Flash advertisements are very annoying). I know there are more advanced plugins that deal with Flash and advertisements in general; I’m just too lazy to try them. I’ve used the same basic plugins for several years and haven’t really tried anything new.

I am becoming more convinced as time goes on that Google is nothing more than a front for the NSA/CIA or some other three-letter organization you’ve never heard of, trying to get you to willingly give them all of your information, whether it is email, IM, DNS, voice mail, phone calls, or pictures; hell, I can’t think of all of the services they offer since I don’t use them. I see comments on Slashdot and am shocked to see people say things like they’d rather Google have their private data than their ISP. Me, I’m the opposite. I’d rather my ISP have my data: there’s a lot less chance they’ll have any interest in it, and even less of a chance they’ll be able to use it against me as effectively as the data-mining masterminds at Google.

I have (to put it mildly, as anyone who knows me will attest) a deep-rooted mistrust of Microsoft as well; it has bonded with my DNA at this point. That is somewhat of a different post, though.

I’m not quite to the point where I tunnel my internet traffic over a VPN to my co-located server but who knows, perhaps in a few years that’s what I will have to resort to. My DNS traffic is tunneled to my co-located server today, mainly because I host my own internal DNS and the master zones live on the other end of the connection so I rely on it for my internal + external DNS.

So, the lesser of two evils, Microsoft or Google? Tough choice indeed. Perhaps the one or two readers of this blog can contribute links to other search engines, hopefully less obvious ones that might be worthwhile to use.

December 9, 2009

AT&T plans on stricter mobile data plans

Filed under: Networking — Nate @ 6:03 pm

You know one thing that really drives me crazy about users? It’s those people that think they have a right to megabits, if not tens of megabits of bandwidth for pennies a month. Those people that complain $50/mo is such a ripoff for 5-10Mbit broadband!

I have always had a problem with unlimited plans myself; I recall in the mid 90s getting kicked off more than a few ISPs for being connected to their modems 24/7 for days on end. The plan was unlimited, so I used it. I asked, even pleaded, for the ISPs to tell me what the real limit was. You know what? Of all of the ones I tried at the time, only one would. I was in Orange County, California at the time, and the ISP was neptune.net. I still recall the owner’s answer to this day: he did the math calculating the number of hours in a day/week/month and said that’s how many I could use. So I signed up and used that ISP for a few years (until I moved to Washington) and he never complained (and I almost never got a busy signal). I have absolutely no problem paying more for premium service; I just appreciate full disclosure on any service I get, especially if it is advertised as unlimited.

Companies are starting to realize that the internet wasn’t built to scale at the edge. It’s somewhat fast at the core, but the pipes from the edge to the core are a tiny fraction of what they could be (and if we increased those edge pipes, you would need to increase the core by an order or two of magnitude). Take streaming video, for example. There is almost nonstop chatter on the net about how people are going to ditch TV and watch everything (or many things) on the internet. I had a lengthy job interview with one such company that wanted to try to make that happen; they are now defunct, but they specialized in peer-to-peer video streaming with a touch of CDN. I remember the CTO telling me a stat he saw from Akamai, one of the largest CDNs out there (certainly the most well known, I believe): at one point they were bragging about having something like 10,000 simultaneous video streams flowing through their system (or maybe it was 50,000 or something).

To put that in perspective, think about the region around you, how many cable/satellite subscribers there are, and how well your local broadband provider could handle unicast video streaming to so many of them from sites out on the net. Things would come to a grinding halt very quickly.

It certainly is a nice concept to be able to stream video (I love that video; it’s the perfect example illustrating the promise of the internet) and other high-bit-rate content (maybe video games), but the fact is it just doesn’t scale. It works fine when there are only a few users. We need an order (or orders) of magnitude more bandwidth towards the edge to be able to handle this. Or, in theory at least, high-grade multicast and vast amounts of edge caching. But multicast is complicated enough that I’m not holding my breath for it being deployed on a wide scale on the internet anytime soon; the best hope might be when everyone is on IPv6, but I’m not sure. On paper it sounds good; I don’t know how well it might work in practice on a massive scale.

So as a result companies are wising up: a small percentage of users are abusing their systems by actually using them for what they are worth, and the rest of the users haven’t caught on yet. These power users are forcing the edge bandwidth providers to realize that the plans dreamed up by the marketing departments just aren’t going to cut it (at least not right now, maybe in the future). So they are doing things like capping data transfers, charging slightly excessive fees, or cutting users off entirely.

The biggest missing piece of the puzzle has been an easy way for the end user to know how much bandwidth they are using, so they can control the usage themselves and not blow their monthly cap in 24 hours. It seems that Comcast is working on this now, and AT&T is working on it for their wireless subscribers. That’s great news: provide solid limits for the various tiers of service, and provide an easy way for users to monitor their progress against those limits. I only wish wireless companies did that for their voice plans (how hard can it be for a phone to keep track of your minutes?). That said, I did sign up for Sprint’s Simply Unlimited plan so I wouldn’t have to worry about minutes myself; it saved a good chunk off my previous 2000-minute plan. Even though I don’t use anywhere near what I used to (I seem to average 300-500 minutes/month at the most), I still like the unlimited plan just in case.

Anyways, I suppose it’s unfortunate that the users get the shaft in the end. They should have gotten the shaft from the beginning, but I suppose the various network providers wanted to get their foot in the door with the users, get them addicted (or at least try), then jack up the rates later once they realized their original ideas were not possible.

Bandwidth isn’t cheap; at low volumes it can cost upwards of $100/Mbit or even more at a data center (where you don’t need to be concerned about telco charges or things like local loops). So if you think you’re getting the shaft for paying $50/mo for a 10Mbit+ burstable connection, shut up and be thankful you’re not paying more than 10x that.
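A quick back-of-the-envelope comparison using that (round-number) $100/Mbit/month figure:

# 10 Mbit of committed data-center bandwidth at roughly $100/Mbit/month
echo $(( 10 * 100 ))   # => $1000/month, versus $50/month for a consumer 10Mbit connection, about 20x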

So no, I’m not holding my breath for wide-scale deployment of video streaming over the internet, or for wireless data plans that let you download at multi-megabit speeds while remaining truly unlimited at consumer-level pricing. The math just doesn’t work.

I’m not bitter or anything; you’d probably be shocked at how little bandwidth I actually use on my own broadband connection. It’s a tiny amount, mainly because there isn’t a whole lot of stuff on the internet that I find interesting anymore. I was much more excited back in the 90s, but as time has gone on my interest in the internet in general has declined (it probably doesn’t help that my job for the past several years has been supporting various companies whose main business was internet-facing).

I suppose the next step beyond basic bandwidth monitoring might be something along the lines of internet roaming, in which you can get a data plan with a very high cap (or unlimited), but only for certain networks (perhaps mainly local ones, to avoid going over the backbones), and pay a different rate for general access to the rest of the internet. Myself, I’m very much for net neutrality only where it relates to restricting bandwidth providers from directly charging content companies for access to their users (e.g. Comcast charging Google extra so Comcast users can watch YouTube). They should be charging the users for that access, not the content providers.

(In case you’re wondering what inspired this post, it was the AT&T iPhone data plan changes that I linked to above.)
