TechOpsGuys.com Diggin' technology every day

October 23, 2012

Should System admins know how to code?

Filed under: linux — Tags: — Nate @ 11:57 am

Just read the source article, and the discussion on slashdot was far more interesting.

It’s been somewhat of a delicate topic for me, having been a system admin of sorts for about sixteen years now, primarily on the Linux platform.

For me, more than anything else, you have to define what code is. Long ago I drew a line in the sand that I have no interest in being a software developer. I do plenty of scripting in Perl & Bash, primarily for monitoring purposes and to aid in some of the more basic areas of running systems.

Since this blog covers 3PAR I suppose I should start there – I’ve written scripts to do snapshots and integrate them with MySQL (still in use today) and Oracle (haven’t used that side of things since 2008). That’s a couple thousand lines of script (I don’t like to use the word code because to me it implies some sort of formal application). I’d wager 99% of that is to support the Linux end of things and 1% to support 3PAR. At one company I left, I turned these scripts over to the people who were going to take on my responsibilities. They had minimal scripting experience and their eyes glazed over pretty quickly while I walked them through the process. They feared the 1,000 line script, even though for the most part the system was very reliable and not difficult to recover from failures, even with no scripting experience. In this case the job was managing snapshots with MySQL, integrated with a storage platform – I’m not aware of any out of the box tool that can handle this, so you sort of have no choice but to glue your own together. With Oracle and MSSQL such tools are common, maybe even with DB2 – but MySQL is left out in the cold.
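
For anyone curious what that kind of glue looks like, here’s a heavily stripped down sketch of the general idea – not my actual script, just the shape of it (the array hostname, volume names and createsv options below are placeholders and vary by InForm OS version):

#!/bin/bash
# Illustrative sketch only, not my production script. Assumes ~/.my.cnf holds
# the MySQL credentials, passwordless SSH to the array, and a 3PAR CLI that
# supports "createsv" for read-only snapshots (options vary by InForm OS).
ARRAY="3par-array.example.com"                 # hypothetical array hostname
BASEVOL="mysql-data"                           # hypothetical virtual volume
SNAPVOL="$BASEVOL-snap-$(date +%Y%m%d-%H%M)"

# Keep one mysql session open in a coprocess so the read lock stays held
# for the few seconds the snapshot takes.
coproc MYSQL { mysql --batch --skip-column-names 2>&1; }

echo "FLUSH TABLES WITH READ LOCK; SELECT 'LOCKED';" >&"${MYSQL[1]}"
read -r line <&"${MYSQL[0]}"                   # blocks until the lock is taken
if [ "$line" != "LOCKED" ]; then
    echo "failed to lock MySQL: $line" >&2
    exit 1
fi

# Array-side snapshot while the database is quiesced
if ssh "$ARRAY" "createsv -ro $SNAPVOL $BASEVOL"; then
    echo "created snapshot $SNAPVOL of $BASEVOL"
    RC=0
else
    echo "snapshot of $BASEVOL failed" >&2
    RC=1
fi

# Release the lock and let the mysql session exit
echo "UNLOCK TABLES;" >&"${MYSQL[1]}"
eval "exec ${MYSQL[1]}>&-"                     # close its stdin
wait "$MYSQL_PID"
exit "$RC"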

I wrote my own Perl-based tool to log in to 3PAR arrays, collect their metrics and populate RRD files (I use Cacti to present that data since it has a nice UI, but Cacti could not collect the data the way I can, so that part runs outside of Cacti). Another thousand lines of script there.
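
The collection side of something like that is conceptually simple; here is a rough bash sketch of the idea (my real tool is Perl and much longer – the hostname, RRD layout and awk parsing are placeholders, since the exact CLI output varies by version):

#!/bin/bash
# Rough bash illustration of the collector idea; my real tool is Perl and far
# longer. Array hostname, RRD path, and the awk parsing are placeholders –
# the exact statvv output format depends on the InForm OS version.
ARRAY="3par-array.example.com"
RRD="/var/lib/cacti/rra/3par_iops.rrd"

# Create the RRD once: a single gauge for total IOPS, 5-minute step,
# keeping roughly a year of 5-minute samples.
[ -f "$RRD" ] || rrdtool create "$RRD" --step 300 \
    DS:iops:GAUGE:600:0:U \
    RRA:AVERAGE:0.5:1:105120

# Pull one statistics sample from the array over SSH and extract a number.
IOPS=$(ssh "$ARRAY" "statvv -iter 1" | awk '/^total/ {print $3}')

# Feed the sample into the RRD; cacti just graphs the file from there.
[ -n "$IOPS" ] && rrdtool update "$RRD" "N:$IOPS"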

Perhaps one of the coolest things I think I wrote was a file distribution system a few years ago to replace a product we used in house called R1 Repliweb (though it looks like they have since been acquired by somebody else). Repliweb is a fancy file distribution system that primarily ran on Windows, but the company I was at was using the Linux agents to pass files around. I suppose I could write a full ~1200 word post about that project alone (if you’re interested in hearing that, let me know), but basically I replaced it with an architecture of load balancers, VMs, a custom version of SSH and rsync, with some help from CFengine and about 200 lines of script. That not only dramatically improved scalability, but reliability went literally to 100%: never a single failure while I was there (the system was self healing – though I did have to turn off rsync’s auto resume feature because it didn’t work for this project), and it had been in place about 12-16 months when I left.
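
The heart of the replacement was nothing more exotic than rsync over a locked-down SSH driven from a loop – something shaped very roughly like this (greatly simplified, with made-up paths and hosts; the load balancer and CFengine pieces aren’t shown):

#!/bin/bash
# Greatly simplified sketch of the distribution loop; paths, hosts and key
# locations are made up, and the load balancer / CFengine pieces aren't shown.
MASTER_DIR="/srv/content/"                # trailing slash: sync the contents
TARGETS_FILE="/etc/filedist/targets"      # one destination hostname per line
SSH_KEY="/etc/filedist/id_rsa"            # key restricted to rsync on the far end

FAILED=0
while read -r host; do
    [ -z "$host" ] && continue
    # --delete keeps each remote copy an exact mirror; the timeout keeps one
    # dead host from wedging the whole run. No --partial/--append here – as
    # noted above, resuming half-written files didn't work out for this.
    if ! rsync -a --delete --timeout=60 \
            -e "ssh -i $SSH_KEY -o BatchMode=yes" \
            "$MASTER_DIR" "$host:/srv/content/"; then
        echo "push to $host failed" >&2
        FAILED=1
    fi
done < "$TARGETS_FILE"

exit "$FAILED"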

So back to the point – to code or not to code. I say not to code (again, back to what code means – in my context it means programming: if you’re directly using APIs then you’re programming, if you’re using tools to talk to APIs then you’re scripting) – for the most part at least. Don’t make things too complicated. I’ve worked with a lot of system admins over the years and the number that can script well, let alone code, is very small. I don’t see that number increasing. Network engineers are even worse – I’ve never seen a network engineer do anything other than work completely manually. I think storage is similar.

If you start coding your infrastructure you make it even more difficult to bring new people on board, to maintain this stuff, and to run it moving forward. If you happen to be in an environment that is experiencing explosive growth and you’re adding dozens or hundreds of servers constantly, then yes, this can make a lot of sense. But most companies aren’t like that and never will be.

It’s hard enough to hire people these days; if you go about raising the bar to even higher levels you’re never going to find anyone. Look at the Hadoop end of the market – those folks are always struggling to hire because the skill set is so specialized, and there are so few people out there who can do it. Most companies can’t compete with the likes of Microsoft, Yahoo and other big orgs with their compensation and benefits packages.

You will, no doubt, spend more on things like software and hardware for things that some fancy DevOps god could do in 10 lines of Ruby in their sleep. Good luck finding and retaining such a person though, and if you feel you need redundancy so someone can take a real vacation, yeah, that’s gonna be tough. There is a lot more risk, in my opinion, in having a lot of code running things if you lack the resources to properly maintain it. This is a problem even at scale. I’ve heard on several occasions that the big Amazon themselves customized CFengine v1 way back when with so much extra stuff that when v2 (and since then v3) came around with all sorts of new things, guess what – Amazon couldn’t upgrade because they had customized it too much. I’ve heard similar things about other technologies Amazon has adopted: they are stuck because they customized them too much and can’t upgrade.

I’ve talked to a ton of system admin candidates over the past year, and the number I would feel comfortable handing over the “code” on our end to, I think it’s fair to say, is zero. Granted, not even I can handle the excellent code written by my co-worker. I like to tell people I can do simple stuff in 10 minutes on CFengine, while it will take me four hours to do the same thing the Chef way on Chef, and my eyes will bleed and my blood will boil in the process.

The method I’d use on CFengine you could say “sucks” compared to Chef, but it works, and it is far easier to manage. I can bring almost anyone up to speed on the system in a matter of hours, whereas Chef takes a strong Ruby background to use (I am going on nearly two and a half years with Chef myself and I haven’t made much progress, other than feeling I can speak with authority on how complex it is).

Sure, it can be nice to have APIs for everything and fancy automation everywhere – but you need to pick your battles. When you’re dealing with a cloud organization like Amazon you almost have to code – to deal with all of their faults and failures and just overall stupid broken designs and everything that goes along with it. Learning to code most likely takes the experience from absolutely infuriating (where I stand) to almost manageable (costs and architecture aside here).

When you’re dealing with your own stuff, where you don’t have to worry about IPs changing at random because some host has died, and where you can change your CPU or memory configuration with a few mouse clicks rather than re-building your system from scratch, the amount of code you need shrinks dramatically, lowering the barriers to entry.

After having worked in the Amazon cloud for more than two years, both I and my co-workers (who have much more experience in it than me) believe that it actually takes more effort and expertise to properly operate something in there than to do it on your own. It’s the total opposite of how cloud is viewed by management.

Obviously it is easier said than done – just look at the sheer number of companies that go down every time Amazon has an outage or their service is degraded. The most recent one was yesterday. It’s easy for some to blame the customer for not doing the right thing, but at the end of the day most companies would rather work on the next feature to attract customers and let something else handle fault tolerance. Only the most massive companies have the resources to devote to true “web scale” operation. Shoehorning such concepts onto small and medium businesses is just stupid, and the wrong set of priorities.

Someone made a comment recently that made me laugh (not at them, but more at the situation). They said they performed some task to make my life easier in the event we need to rebuild a server (a common occurrence in EC2). I couldn’t help but laugh because we hadn’t rebuilt a single server since we left EC2 (coming up on one year in a few months here).

I think it’s great that equipment manufacturers are making their devices more open, more programmatic. Adding APIs, and other things to make automation easier. I think it’s primarily great because then someone else can come up with the glue that can tie it all together.

I don’t believe system admins should have to interact with such interfaces directly.

At the same time I don’t expect developers to understand operations in depth. Hopefully they have enough experience to handle basic concepts like load balancing (e.g. store session data in some central place, preferably not a traditional SQL database). The whole world often changes from running an application in a development environment to running it in production. The developers take their experience to write the best code that they can, and the systems folks manage the infrastructure (whether it is cloud based or home grown) and operate it in the best way possible – whether that means separating out configuration files so people can’t easily see passwords, inserting load balancers in between tiers, splitting out how application code is deployed, or something as simple as log rotation scripts.
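
To take the most trivial of those examples, a bare-bones log rotation script is the kind of thing I mean – a sketch with made-up paths (on most systems logrotate handles this for you anyway):

#!/bin/bash
# Minimal log rotation sketch (made-up path; the real scripts also tell the
# application to reopen its log file after the move).
LOG_DIR="/var/log/myapp"        # hypothetical application log directory
KEEP_DAYS=14
STAMP=$(date +%Y%m%d)

for log in "$LOG_DIR"/*.log; do
    [ -s "$log" ] || continue            # skip empty or missing logs
    mv "$log" "$log.$STAMP"              # rotate
    gzip "$log.$STAMP"                   # compress the rotated copy
    : > "$log"                           # recreate an empty live log
done

# Prune compressed logs older than KEEP_DAYS days
find "$LOG_DIR" -name '*.log.*.gz' -mtime +"$KEEP_DAYS" -delete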

If you were to look at my scripts you may laugh (depending on your skill level) – I try to keep them clean but they are certainly not up to programmer standards; no, I’ve never used “use strict” in Perl, for example. My scripting is simple, so doing things sometimes takes me many more lines than it would take someone more experienced in the trade. This has its benefits though – it makes it easier for more people to follow the logic should they need to, and it still gets the job done.

The original article seemed to focus more on scripts, while the discussion on Slashdot at some points really got into programming, with one person saying they wrote Apache modules?!

As one person in the discussion thread on Slashdot pointed out, heavy automation can hurt just as much as help. One mistake in the wrong place and you could take systems down far faster than you can recover them. This has happened to me on more than one occasion, of course. One time in particular I was looking at a CFEngine configuration file, saw some logic that appeared to be obsolete, and removed a single character (a ! which told CFEngine not to apply that configuration to that class) – then CFEngine went and wiped out my Apache configurations. When I made the change I was very sure that what I was doing was right, but in the end it wasn’t. That happened seven years ago but I still remember it like it was yesterday.
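
To illustrate the kind of logic involved (in bash terms – this is not actual CFEngine syntax, just the shape of the mistake):

# Not CFEngine syntax – just the shape of the mistake, in bash terms.
# The "!" meant "every host EXCEPT this class", i.e. the test below.
if [[ "$HOST_CLASS" != "apache_server" ]]; then     # the "!" I removed lived here
    push_default_config /etc/apache2                # hypothetical helper that overwrites configs
fi
# Remove the "!" and the test flips to ==, so the config push suddenly
# runs on exactly the hosts it used to protect – the Apache servers.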

System administrators should not have to program – scripting certainly is handy and I believe it is important (not critical – it’s not at the top of my list of skills when hiring), just keep an eye out for complexity and supportability when you’re doing that stuff.

October 16, 2012

Caught red handed!

Filed under: Random Thought — Tags: — Nate @ 7:46 am

[UPDATED] Woohoo! I am excited. I was checking my e-mail and saw a message from Bank of America saying there was another fraud alert on my credit card (as you might imagine I am very careful, but for some reason I get hit at least once or twice a year). My card was locked out until I verified some transactions.

I tried to use their online service but it said my number couldn’t be processed online so I had to call.

So I called and gave my secret information to them, and they cited some of the transactions that had been attempted, including:

  • World friends – people who like to travel? Or maybe the upscale dating service?
  • Al shop – online electronics store in DUBAI
  • Payless shoe stores – yeah they don’t carry my size unless I wear their shoe boxes as shoes
  • Paypal authorization attempt

All of the charges were declined – because the number they attempted to use is a ShopSafe number, a service that BofA offers that I have written about in the past, where I generate single use credit cards for either single purchases or recurring subscriptions. These cards are only good for a single merchant; once charged, nobody else can use them.

In this case it was a recurring payment number, which on top of the single merchant has a defined monthly credit limit.

Naturally of course since they are only valid for a single merchant I only give the number out to a single merchant.

Apparently it was my local CABLE COMPANY that had this recurring credit card number assigned to them. I gave this number to them over the phone a couple of months ago after my credit card was re-issued again. So either they had a security breach or some employee tried to snag it. They don’t appear to be a high tech organization given they are a local cable company that only serves the city I am in.  They have no online billing or anything like that which I am aware of.  In any case it made it really easy to determine the source of the breach since this number was only ever given to one organization. The fraud attempts were made less than 24 hours after the cable company charged my bill.

Unlike the last credit card fraud alert – which was also on a ShopSafe card, this time the customer service rep said she did not have to cancel my main card – which makes total sense since only the ShopSafe card was compromised. I believe the last time only the ShopSafe card was compromised as well, but the customer service rep insisted the entire card be canceled. I think that original rep didn’t fully understand what ShopSafe was.

You could even say there is no real need to cancel the ShopSafe card – it is compromised but it is not usable by anyone other than the cable company – but they canceled it anyways. Not a big deal; it takes me two minutes to generate a new one, though I have to call the cable company and give them the new number. Or go see them in person, I guess. I tried calling a short time ago but the office wasn’t open yet.

The BofA customer service rep I spoke to this morning said I was one of only a few customers over the years that she has talked to that used ShopSafe (I use it ALL the time).

Of all the fraud activity on my card over the years (and the other times when merchants reported they had been compromised but there were no fraudulent charges on my card), this is the first time that I know with certainty who dun it, so I’m excited. I wonder what the cable company will say…

One of the downsides to ShopSafe is that because it is single merchant I do have to pay attention when buying stuff from marketplaces. I frequently buy from buy.com (long time customer), which is a pretty big merchant site; I have to make sure my orders come from only a single merchant, which on big orders can sometimes mean going through checkout 3-4 times and issuing a different credit card # for each round. I try to keep the list of numbers saved on their site fairly pruned, though at the moment they have 38 cards stored for me. There was one time about a year ago when buy.com contacted me about a purchase they had forgotten to charge me for, about 4 months after the fact. The card I issued was only valid for two months, so it was expired when they found the missing transaction in their system. Someone I know who is well versed in the credit card area said that technically they can’t force me to pay for it at that point (I think 60 or maybe 90 days is the limit, I forget exactly what he said). But I did get the product and I am a happy customer so I had no issue paying for it.

Yay ShopSafe! I wish more companies had such a service; it’s very surprising to me how rare it seems to be.

UPDATE – I spoke to one of the managers at the cable company and he was obviously surprised, and said they will start an investigation. I think that manager may end up signing up for BofA credit cards himself; he sounded very impressed with ShopSafe.

October 15, 2012

Ubuntu 10.04 LTS upgrade bug causes issues

Filed under: linux — Tags: , — Nate @ 11:41 am

[UPDATE] – after further testing it seems it is machine specific; I guess my RAM is going bad. Dag nabbit.

I’ve been using Ubuntu for about five years, and I have been using Debian since 1998.

This is a first for me. I came into the office and Ubuntu was prompting to upgrade some packages; I run 10.04 LTS, which is the stable long term support build. I said go for it, and it tried, and failed.

I tried again, and failed, and again and failed.

I went to the CLI and it failed there too – dpkg/apt was seg faulting –

[1639992.836460] dpkg[31986]: segfault at 500006865496 ip 000000000040b7bf sp 00007fff71efdee0 error 4 in dpkg[400000+65000]
[1640092.698567] dpkg[32069] general protection ip:40b7bf sp:7fff73b2f750 error:0 in dpkg[400000+65000]
[1640115.056520] dpkg[32168]: segfault at 500008599cb2 ip 000000000040b7bf sp 00007fff20fc2da0 error 4 in dpkg[400000+65000]
[1640129.103487] dpkg[32191] general protection ip:40b7bf sp:7fffd940d700 error:0 in dpkg[400000+65000]
[1640172.356934] dpkg[32230] general protection ip:40b7bf sp:7fffbb361e80 error:0 in dpkg[400000+65000]
[1640466.594296] dpkg-preconfigu[32356]: segfault at d012 ip 00000000080693e4 sp 00000000ff9d1930 error 4 in perl[8048000+12c000]
[1640474.724925] apt-get[32374] general protection ip:406a67 sp:7fffea1e6c68 error:0 in apt-get[400000+1d000]
[1640920.178714] frontend[720]: segfault at 4110 ip 00000000080c50b0 sp 00000000ffa52ab0 error 4 in perl[8048000+12c000]

I have a 32-bit chroot to run things like 32-bit Firefox, and I had the same problem there. For a moment I thought maybe I had bad RAM or something, but it turned out that was not the case. There is some sort of bug in the latest apt, 0.7.25.3ubuntu9.14 (I did not see a report on it, though the UI for Ubuntu bugs seems more complicated than Debian’s bug system), which causes this. I was able to get around it with the following steps (rough commands after the list):

  • Manually downloading the older apt package (0.7.25.3ubuntu9.13)
  • Installing the package via dpkg (dpkg -i <package>)
  • Exporting the list of packages (dpkg --get-selections >selections)
  • Editing the list, changing apt from install to hold
  • Importing the list of packages (dpkg --set-selections <selections)
  • After that, apt-get works fine again on 64-bit
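
In command form the workaround looked roughly like this (the mirror path and .deb filename are illustrative, and I’m showing the one-liner equivalent of the export/edit/import of the selections list):

# Roughly the workaround in command form. The mirror path and .deb filename
# are illustrative; grab whichever 0.7.25.3ubuntu9.13 build your mirror has.
wget http://archive.ubuntu.com/ubuntu/pool/main/a/apt/apt_0.7.25.3ubuntu9.13_amd64.deb
dpkg -i apt_0.7.25.3ubuntu9.13_amd64.deb

# Put apt on hold so the broken 9.14 build doesn't come right back on the
# next upgrade (same effect as exporting the selections, editing apt from
# install to hold, and importing them again).
echo "apt hold" | dpkg --set-selections
dpkg --get-selections apt          # verify: should now read "hold"

apt-get update                     # apt-get works again on the 64-bit side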

However my 32-bit chroot has a hosed package status file, something else that has never happened to me in the past 14 years on Debian. So I will have to figure out how to correct that, or worst case I suppose wipe out the chroot and reinstall it – since it is a chroot, that’s not a huge deal. Fortunately the corruption didn’t hit the 64-bit status file. There is a backed up status file but it was corrupt too (I think because I tried to run apt-get twice).

64-bit status file:

/var/lib/dpkg/status: UTF-8 Unicode English text, with very long lines

32-bit status file:

/var/lib/dpkg/status: data
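
One avenue worth trying before wiping the chroot: dpkg keeps daily copies of the status file under /var/backups, so restoring the newest intact copy from inside the chroot may be enough. A rough, untested sketch (the chroot path here is made up):

# Untested sketch: dpkg keeps daily copies of the status file in /var/backups,
# so one of those may predate the corruption. The chroot path here is made up.
CHROOT=/srv/chroot/lucid-i386

ls -l "$CHROOT"/var/backups/dpkg.status.*        # pick the newest intact copy
file "$CHROOT"/var/backups/dpkg.status.0         # should look like the 64-bit one above

cp -a "$CHROOT"/var/lib/dpkg/status "$CHROOT"/var/lib/dpkg/status.broken
cp -a "$CHROOT"/var/backups/dpkg.status.0 "$CHROOT"/var/lib/dpkg/status

# Then let dpkg/apt reconcile anything that changed since that backup was taken
chroot "$CHROOT" dpkg --configure -a
chroot "$CHROOT" apt-get -f install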

I’m pretty surprised that this bug got through. Not the quality I’ve come to know and love…

October 10, 2012

TiVO: 11 years and counting

Filed under: Random Thought — Tags: — Nate @ 10:37 am

It has been on my mind a bit recently, wondering how long my trusty Phillips Tivo Series 1 has been going. I checked just now and it’s been going for about 11 years – April 24th, 2001 is when Outpost.com (now Fry’s) sent me the order confirmation that my first Tivo was on the way. It was $699 for the 60-hour version that I originally purchased, plus $199 for lifetime service (lifetime service today is $499 for new customers), which at the time was still difficult to swallow given I had never used a DVR before that.

My TiVo sandwiched between a cable box with home made IR-shield and a VCR (2002)

There were rampant rumors that Tivo was dead and they’d be out of business soon; I believe that’s also about the time Replay TV (RIP) was fighting with the media industry over commercial skipping.

The Tivo faithful hoped Tivo would conquer the DVR market, but that never happened. There were always rumors of big cable companies deploying Tivo, but I don’t recall reading about wide scale deployments (certainly none of the cable companies I had over the years offered Tivo in my service area).

To this day Tivo is still held up as the strongest player from a technology standpoint (for good reason I’m sure). Tivo has been involved in many patent lawsuits over the years and to my knowledge they’ve won pretty much every one. Many folks hate them for their patents, but I’ve always thought the patents were innovative and worth having. I’m sure to some the patents were obvious, but to most, myself included – they were not.

I believe Tivo got a new CEO in recent years and they have been working more seriously with cable providers; I believe there have been much larger scale deployments in Europe than anywhere else at this point.

Tivo recently announced support for Comcast Xfinity on demand on the Tivo platform. The one downside to something like Tivo, or anything that is not a cable box really, is that there is no bidirectional communication with the cable company, so things like on demand or PPV are not possible directly through Tivo. I don’t think any Tivo since the Series 2 supports working with an external cable box; they all use cable cards now. The cable card standard hasn’t moved very far over the years – I saw recently people talking about how difficult it is to find TVs on the market with cable card support, as the race to reduce costs has cut them out of the picture.

Back to my Tivo Series 1, it was a relatively “hacker friendly” box, unlike post Series 2 equipment. At one point I added a “TivoNet Cache Card” which allowed the system to get program data and software updates over ethernet instead of phone lines by plugging into an exposed PCI-like connector on the motherboard. At the same time it gave the system a 512MB read cache on a single standard DIMM, to accelerate the various databases on the system.

Tivo Cache Card plus TurboNet ethernet port

The Tivo Series 1 came with only 16MB of RAM and a 54MHz(!) PowerPC 403GCX. Some people used to do more invasive hacking to upgrade the system to 32MB, but that was too risky for my taste.

Picture of process needed to upgrade TiVo Series 1 memory

I’ve been really impressed with the reliability of the hardware (and software). I replaced the internal hard disks back in 2004 because the original ones were emitting a soft but high pitched whine which was annoying. The replacements also upgraded the unit from the original 60 hour rating to 246 hours.

After one of the replacement disks was essentially DOA, I got it replaced, and the Tivo has been running 24/7 since then – 8 years of reading and writing data to that pair of disks, 4200 RPM if I remember right. I’ve treated it well: the Tivo has always been connected to a UPS, occasionally I shut it off and clean out all the dust, and it’s almost always had plenty of airflow around it. It tends to crash a couple of times a year (power cycling fixes it). I have TV shows saved on that Tivo going back to what I think is probably the 2004-2005 time frame, including my favorite Star Trek: Original Series episodes from before they wrecked it with the modern CGI (which looks so out of place!).

I’m also able to telnet into the Tivo and do very limited tasks. There is an FTP application that allows you to download the shows/movies/etc that are stored on the Tivo, but in my experience I could not get it to work (the video was unwatchable). On Tivo Series 3 and up you can download shows via their fancy desktop application or directly over HTTPS, though many titles are flagged as copyright protected and are not downloadable.

Oh yeah, I remember now what got me thinking about Tivo again – I was talking to AT&T this morning, adding the tethering option to my plan since my Sprint MiFi is canceled now, and they tried to upsell me to U-Verse, which as far as I know is not compatible with Tivo (maybe the Series 1 would work, but I also have a Series 3 which uses cable cards). So I explained to them it’s not compatible with Tivo and I have no interest in leaving Tivo at this point.

There was a time when I read in a forum that the Tivo “lifetime subscription” was actually only good for a few years (this was back in ~2002), and that they disclosed it in tiny print in the contract. I don’t recall if I tried to verify it or not, but I suspect they opted to ignore that clause in order to keep their subscriber base; in any case the lifetime service I bought in 2001 is still active today.

The Tivo has outlasted the original TV I had (and the one that followed), gone through four different apartments and two states, and even outlasted the company that sold it to me (Outpost.com). It also outlasted the analog cable technology that it relied upon – for several years I’ve had to have a digital converter box so that the Tivo can get even the most basic channels.

The newest Tivos aren’t quite as interesting to me mainly because they focus so much on internet media, as well as streaming to iOS/Android (neither of which I use of course). I don’t do much internet video. My current Tivo has hookups to Netflix, Amazon, a crappy Youtube app, and a few other random things. I don’t use any of them.

The Series 3 is obviously my main unit, purchased almost five years ago. It too has had its disks replaced once (maybe twice, I don’t recall) – though in that case the disks were failing; fortunately I was able to transfer all of the data to the new drives.

The main thing I’d love from Tivo after all these years (maybe they have it now on the new platforms) is to be able to back up the season passes/wishlists and stuff, so you can migrate them to a new system (or recover faster from a failed hard disk). I’ve had the option of remote scheduling via the Tivo website since I got my Series 3 – but never had a reason to use it. The software+hardware on all of my units (I bought a 2nd Series 1 unit with lifetime service back in 2004-2005, and eventually gave it to my sister who uses it quite a bit) has been EOL for many years now, so there’s no support anymore.

Eleven years and still ticking. I don’t use it (Series 1) all that much, but even if I’m not actively using it, it’s always recording (live tv buffer) regardless.

October 9, 2012

Backblaze’s answer to the Thai flooding

Filed under: Storage — Tags: , — Nate @ 10:37 am

Saw an interesting article over at Slashdot, then went to GigaOm, and then went to the source. Aside from the sick feeling I got seeing a cloud storage provider sourcing their equipment through Costco or Newegg, the more interesting aspect of the Backblaze story, one that I wasn’t aware of before, is the people in the Slashdot thread pointing out the limitations of their platform.

Here I was thinking Backblaze is a cheap way to store stuff off site, but it’s a lot more complicated than that. For my own off site backups I use my own hardware hosted in a co-location facility that is nearby. The cost is reasonable considering the flexibility I have (it seems far cheaper than any cloud storage I have come across anyways, which honestly surprises me given I have no leverage to buy hardware).

Anyways, back to Backblaze – their model really reminds me of the one so many people complain about when it comes to broadband and wireless data usage plans. The price is really cheap – they did that part well.

The most startling thing to me is they delete data 30 days after you delete it locally. They don’t allow storage of many common types of files, like ISO images and virtual machine images. They acknowledge:

Backblaze is not designed as an additional storage system when you run out of space.

(having written this, I could just as easily see consumer internet companies saying that they are not designed to replace your cable/satellite with Netflix+Hulu)

At the same time they advertise unlimited storage. They talk about how much cheaper they are than (shudder) Amazon, and other providers (as well as doing things in house), but don’t mention these key comparison points. I believe one of the posts on slashdot even claimed that Backblaze goes out of their way to detect network drives and perhaps iSCSI attached network storage and blocks that from being backed up as well.

For all I know the other players in the space have similar terms – I haven’t investigated. I was just kind of surprised to see such mixed messages coming from them: on one side they say unlimited storage for a cheap rate, while at the same time they put all sorts of restrictions on it.

The upside is of course that they seem to be fairly up front about what they limit when you dig more into the details – but then the broadband and wireless data providers are fairly up front as well, and that doesn’t stop people from complaining at ever increasing volumes.

I’d think they could do a lot more if they expanded the scope of their service with tiers – for example extending the retention window from 30 days to some arbitrarily longer period for some marginal increase in cost. But maybe not.

I’m sure they run a fine service for the target market. I was always sort of curious how they managed the cost model, outside of the hardware anyways, and reading this today really enlightened me as to how that strategy works.

Learn something new every day (almost).

October 5, 2012

HP Releases new 3PAR vCenter Plugin

Filed under: Storage — Tags: , — Nate @ 3:31 pm

[I haven’t written a story about 3PAR in the past five minutes so I suppose I’m due..]

Well, it’s not that new – to be honest I don’t know how old it is (maybe it’s 6 months old!). I was complaining to 3PAR recently about the lack of functionality in their vCenter plugin and was told that they had a newer version that had some good stuff in it.

The only caveat is this version couldn’t be downloaded from the HP website (no versions can – I looked as recently as yesterday afternoon). It’s only available in the media kit, aka CDROM. I didn’t remember which version was the newer one, and when I was told about the newer one I didn’t know which version I had. So I asked the Seattle account team what the current version is, because the version I was handed with our array, which was installed in December, was 2.2.0. It had some marginal improvements in the VMware Recovery Manager (I don’t need the recovery manager), but the vCenter plugin itself was sorely lacking; it felt like it had gone nowhere since it was first released, what seems like three years ago (maybe it was two).

I track 3PAR pretty closely as you might imagine, and if I had absolutely no idea there was a new version then I suspect there are a lot of customers out there that have no idea. I never noticed any notifications, and there’s no “upgrade checker” on the software side, etc.

Anyways, sure enough they got back to me and said 2.2.3 is the latest, sent me an electronic copy of the ISO, and I installed it. I can’t say it’s massively better but it does address two basic sets of functionality that were lacking previously:

  • Ability to cache user credentials to the array in vCenter itself (before you had to re-login to the array every time you loaded the vCenter client)
  • Ability to provision storage from vCenter (tried this – it said I had to configure a storage template before it would function – I’ve never needed templates on 3PAR before so not sure why I do now – I suppose it just makes things more simple, though it’d be nice if there was an advanced check box to continue without a template)

There may be other things too that I haven’t noticed. I don’t think it is top notch yet – I’m fairly certain both EMC and NetApp’s integration packages are much more in depth. Though it wouldn’t surprise me if 3PAR now has the resources to fix the situation on their end; client side software was never really a strong point of theirs. For all I know they are busy re-writing it in a better language – to run on the new vCenter web console.

HP 3PAR vCenter Plugin

Based on the UI, I didn’t get the impression that the plugin could export storage to the whole cluster, since the provision storage option was available under each server but wasn’t visible at the cluster level. But who knows, maybe if I took the time to make a template I’d see that it could export to the whole cluster at once…

Not that I needed to provision storage from vCenter; for me it’s much simpler to just ssh in and do it –

  • Create the volume of whatever size I want
  • Export the volume to the cluster (all servers with 1 command)

It really is just two commands – well, three if you count the ssh command line itself to log in to the system. I can see the value for less technical folks though, so I think it’s important functionality to have. I can accomplish that in a fraction of the amount of time it takes me to log in to vCenter, fire up Silverlight and go through a wizard.
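
For the curious, the two commands look roughly like this (the CPG, volume and host set names are made up, and the exact options vary by InForm OS version):

# Roughly the two commands (the CPG, volume and host set names are made up;
# exact options vary by InForm OS version)
ssh 3paradm@3par-array.example.com "createvv -tpvv FC_r5_CPG vmfs-vol-01 500g"
ssh 3paradm@3par-array.example.com "createvlun vmfs-vol-01 auto set:esx-cluster-01"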

Something I have wanted to see is more integration from the performance monitoring/management standpoint. I don’t know what all hooks are available in vCenter for this sort of thing.

The 3PAR plugin is built using Microsoft Silverlight, which was another thorn in my side earlier this year – because Silverlight did not support 64-bit Windows. So I couldn’t run the plugin from the vCenter server itself (normally I just remote desktop to the vCenter server and run the client locally – the latency running it over the WAN can get annoying). But to my surprise Microsoft released an update at some point in the past several months and Silverlight now works in 64-bit!

So if you happen to want this newer version of the software (the plugin is free), contact your HP account team or file a support ticket to get it. Be sure to tell them to make it available for download – there’s no reason not to. The VMware Recovery Manager, by contrast, is not free (both are distributed together); however the Recovery Manager checks the license status on the array, so you can install it, but it won’t work unless the array has the license key.

On a somewhat related note, I installed a Qlogic management plugin in vCenter a couple of months back; among other things it allows you to upgrade the firmware of their cards from vCenter itself. The plugin isn’t really high quality though – the documentation is poor and it was not too easy to get up and going (unlike the 3PAR plugin, the Qlogic plugin cannot be installed on the vCenter server – I tried a dozen times). But it is sort of neat to see what it has: it shows all of the NICs and HBAs and what they are connected to. I think I have so many paths and connections that it seems to make the plugin go unresponsive and hang the vCenter client much of the time (eventually it unfreezes). Because of that I have not trusted it to do firmware upgrades.

Qlogic vCenter Plugin

The Qlogic plugin requires software to be installed on each physical server that you want Qlogic information for (which also requires a reboot). The host software, from what I remember, is also not compatible with VMware Update Manager, so I had to install it from the CLI. You can download the Qlogic plugin from their website; here is one link.

Both plugins need a lot of work. Qlogic’s is pretty much unusable – I have a small environment here and it’s dog slow. 3PAR’s, well, it is more usable now; performance is fine, and at least the two new features above bring it out of unusable territory for me (I probably still won’t use it, but it provides at least some value now for less technical folks, where before it did not).

October 4, 2012

IE – Do not track and false senses of security

Filed under: General — Tags: — Nate @ 9:52 am

Came across another article on Slashdot which talks about advertisers blasting Microsoft for turning on the Do Not Track option by default in IE 10.

Someone in the Apache organization went as far as to submit a patch that would cause Apache to ignore the setting if the browser is IE 10; based on the comments it seems more people think the patch is a bad idea.

Of course the argument is that if enough people say “don’t track me” then the value of advertising goes down and advertisers will stop honoring the setting – something that many advertisers already say they don’t plan to honor anyway.

Two big associations, the Interactive Advertising Bureau and the Digital Advertising Alliance, represent 90% of advertisers. Downey says those big groups have devised their own interpretation of Do Not Track. When the servers controlled by those big companies encounter a DNT=1 header, says Downey, “They have said they will stop serving targeted ads but will still collect and store and monetize data.”

I can’t help but think this whole DNT thing is somewhat of a conspiracy to induce users into a false sense of security that they are not being tracked. Even if the advertisers hadn’t admitted they will keep tracking you – why trust them?

Instead, these privacy advocates (assuming that’s who is pushing this initiative) should promote better methods of stopping tracking on the user’s end, rather than relying on the advertisers to play nice.

If you haven’t tried this recently I suggest you do: clear your browser cookies, enable the function in the browser that prompts for every cookie, and go about your normal day. The growth in tracking cookies over the years is just insane. I’ve had my main browser set to prompt me for many years now.

The worst offender among the regular sites I visit, at least, is the Cyanide & Happiness comic strip. While the comics are wonderful, they must have signed up with every ad serving company in the world, because I can’t leave the page open for long without getting hit with a cookie from a domain my browser has never seen before. Ads are one thing – I’m just going to make it more difficult for you to track me by not allowing cookies.

Sometimes I get in trouble, when I reject a cookie that somehow breaks site functionality and I have to go try to track it down and re-enable cookies (or in some cases I use another browser that accepts all cookies), but the number of cases like that is pretty rare. I guess I am a little surprised how much cookies are used, while at the same time most of the web remains fully functional even without them.

I worked for an internet behavior targeting company a few years ago; they are still around, though their stuff never really took off. For them at least, they were pretty honest about tracking vs not tracking, etc. There was a little uproar about a year ago when they, amongst other companies, were found to be setting tracking cookies even when you opted out. I emailed the developer (*the* developer, since there is only one) that works on that product and he quickly shot back saying it was a bug and they fixed it within hours of being notified – and that even though the cookie was being set, the back end was not using it, so it was more cosmetic.

At the end of the day people don’t value their privacy very much at all on average. I think I saw a survey recently that showed people would sell out their privacy online for less than $1; if that is the case, what’s the point of Do Not Track? Give people a choice: do they want to be tracked, or do they want to pay a fee to use Facebook? or Google? or whatever. Obviously 99.9999999999% of people will choose to be tracked.

For me, I will keep blocking cookies and doing my part to make it just a little harder to track me (at least at the advertising company I was at there was no attempt to track via IP, etc. – if you didn’t have the cookie you weren’t tracked). At the same time I do not use any ad blocking software, though I do have a plugin called Remove it Permanently, where I can right click on an object (an ad or whatever) and remove it.

Checking on the cookie settings I have here in Firefox –

  • 296 hosts or domains that I accept/trust all cookies for
  • 2,186 hosts or domains that I accept cookies “For session only”
  • 6,071 hosts or domains that I reject cookies for

I don’t visit many sites either; most of the ones I visit are tech related and a lot of them don’t even have ads on them (blogs, documentation, etc). The addiction I had to the Internet (more IRC than anything else, for those that used it) in the mid/late 90s is long dead (it died as the IRC places I hung out at withered along with the first tech bubble). Sort of ironically, I saw recently that some medical folks were going to start classifying internet addiction as some sort of formal problem. I sort of see the Internet like many see TV – billions of channels and not much happening.

You want my attention? I think back to all of the internet ads I have seen over the years, and honestly the only one that I can remember clicking on that actually resulted in a sale was from that X10 home automation company. This has got to be back in the 1999-2002 time frame. I think the ad was on one of the internet tech news sites (not ‘el Reg – more of a Ziff Davis type of site). Why did it get my attention? Hooters! After I clicked on it and saw what the technology was about, it looked pretty neat, so I got some of their wireless video stuff. It was interesting and I used it off and on, though it never worked quite as well as I would have liked – the picture quality was not so hot. I’m sure there have been some others that I have made purchases from as well, though none of them have stayed in my memory like that X10 ad 🙂

I’ll end with: good job, Microsoft. I can’t believe I’m giving Microsoft more kudos, but I think it’s a good setting to have as a default. It’s one less thing the user has to click when running their computer.

October 3, 2012

Oracle doesn’t care if Sun hardware goes to zero

Filed under: General — Tags: — Nate @ 11:08 am

I saw an interesting interview with Larry over at Oracle yesterday. It was pretty good; it was nice to see him being honest, not trying to sugar coat anything.

He says they have two hardware businesses – one that they care about (engineered systems), and another one that they don’t (commodity x86 stuff mainly though I have to think it encompasses everything that is not the engineered integrated products). He also says they don’t care if/when the Sun hardware business goes to $0. Pretty brutal.

This is somewhat contrary to some comments I saw recently where people were claiming Oracle was heavily discounting their software and keeping Sun hardware discounts at 0 so they could show higher revenue on the hardware side.

Given that, is there any hope for what’s left of Pillar? I suspect not. I suppose that funky acquisition of Pillar that Oracle did a while back probably won’t result in anyone getting a dime, and may or may not allow Larry to recoup his investment in the company. Sad.

October 2, 2012

Cisco drops price on Nexus vSwitch to free

Filed under: Networking,Virtualization — Tags: , , , — Nate @ 10:02 am

I saw news yesterday that Cisco dropped the price of their vSwitch to $free; they still have a premium version which has a few more features.

I’m really not all that interested in what Cisco does, but what got me thinking again is the lack of participation by other vendors in making a similar vSwitch, or in integrating their stack down to the hypervisor itself.

Back in 2009, Arista Networks launched their own vSwitch (though now that I read more on it, it wasn’t a “real” vSwitch), but you wouldn’t know that by looking at their site today. I tried a bunch of different search terms because I thought they still had it, but it seems the product is dead and buried. I have not heard of any other manufacturers making a software vSwitch of any kind (for VMware at least). I suppose the customer demand is not there.

I asked Extreme back then if they would come out with a software vSwitch, and at the time at least they said there were no plans; instead they were focusing on direct attach, a strategy that, at least for VMware, appears to be dead for the moment, as the manufacturer of the NICs needed to make it happen is no longer making NICs (as of about 1-2 years ago). I don’t know why they still have the white paper on their site – I guess to show the concept, since you can’t build it today.

Direct attach – at least taken to its logical conclusion – is a method to force all inter-VM switching out of the host and into the physical switch layer. I was told that this is possible with Extreme (and possibly others too) with KVM today (I don’t know the details), just not with VMware.

They do have a switch that runs in VMware, though it’s not a vSwitch – more of a demo type of thing where you can play with commands. Their switching software has run on Intel CPUs since the initial release in 2003 (and they still have switches today that use Intel CPUs), so I imagine the work involved in making a vSwitch happen would not be herculean if they wanted to do it.

I have seen other manufacturers (Brocade at least if I remember right) that were also looking forward to direct attach as the approach to take instead of a vSwitch. I can never remember the official networking name for the direct attach technology…

With VMware’s $1.2B purchase of Nicira it seems they believe the future is not direct attach.

Myself I like the concept of switching within the host, though I have wanted to have an actual switching fabric (in hardware) to make it happen. Some day..

Off topic – but it seems the global economic cycle has now passed its peak and is now for sure headed downhill? One of my friends said yesterday the economy is “complete garbage”; I see tech company after company missing or warning, and layoffs abound, whether it’s the massive layoffs at HP or the smaller layoffs at Juniper that were announced this morning. Meanwhile the stock market is hitting new highs quite often.

I still maintain we are in a great depression. Lots of economists try to dispute that, though if you take away the social safety nets that we have now but did not have in the ’20s and ’30s during the last depression, I am quite certain you’d see massive numbers of people lined up at soup kitchens and the like. I think the economists dispute it more because they fear a self fulfilling prophecy than out of a willingness to have a serious talk on the subject. Whether or not we can get out of the depression, I don’t know. We need a catalyst – last time it was WWII – and at least the last two major economic expansions were bubbles; it’s been a long time since we’ve had a more normal economy. If we don’t get a catalyst then I see stagnation for another few years, perhaps a decade, while we drift downwards towards a more serious collapse (something that would make 2008 look trivial by comparison).
