TechOpsGuys.com Diggin' technology every day

April 23, 2011

Palm Pixis as PDA/Media player

Filed under: General — Tags: , — Nate @ 1:43 pm

So my pair of Palm Pixi Pluses arrived yesterday. I’m by no means a hardware hacker, and I’ve never had much interest in “breaking in” to my systems unless I really needed to (e.g. replacing a TiVo hard drive that was out of warranty).

Palm Pixi Plus for Verizon

I have read a lot over the years about how friendly the WebOS platform is to “hacking”. For one thing, you don’t have to root the device; there is an official method to enable developer mode, at which point you can install whatever you want.

First thing I wanted to do was upgrade the OS software to the latest revision; the units shipped with 1.4.0 and the latest is 1.4.5. Given that Palm is working on WebOS 2 and WebOS 3, I’m not really expecting any more major updates to WebOS 1. Upgrading was very painless: just download WebOS Doctor for your version/phone/carrier and run it. It re-flashes the phone with the full operating system, then reboots it.

One of the things I didn’t really think of when I ordered my Pixis was the fact that they would require activation in order to use (on initial boot they prompt you to call Verizon to register, and of course I am not a Verizon customer and have no intention of using these as phones).

Fear not though: a few minutes of research turned up an official tool to bypass this registration process, and it is really easy to use:

nate@nate-laptop:~/Downloads$ java -jar devicetool.jar
Found device: pixie-bootie
Copying ram disk................
Rebooting device...
Configuring device...
File descriptor 3 (socket:[1587]) leaked on lvm.static invocation. Parent PID 947: novacomd
 Reading all physical volumes.  This may take a while...
 Found volume group "store" using metadata type lvm2
File descriptor 3 (socket:[1587]) leaked on lvm.static invocation. Parent PID 947: novacomd
 6 logical volume(s) in volume group "store" now active
File descriptor 3 (socket:[1587]) leaked on lvm.static invocation. Parent PID 947: novacomd
 0 logical volume(s) in volume group "store" now active
Rebooting device...
Device is ready.

So now I have the latest OS, and I have bypassed registration (which also turns on developer mode by default). I do lose some functionality in this mode, such as:

  • No access to online software updates (don’t care)
  • No access to Palm App Catalog (not the end of the world)

I had installed OpenSSH on my Pre in the past (though never tested it). This time around I was looking for a way to get a shell on the Pixi, and I looked high and low for a way to get SSH on it, to no avail (the documentation is gone, and I can’t find any ssh packages for some reason). In the end it didn’t really matter, because I could just use novaterm, another official Palm tool, to get root access. It doesn’t get much simpler than this:

nate@nate-laptop:~$ novaterm
root@palm-webos-device:/# df -h
Filesystem                Size      Used Available Use% Mounted on
rootfs                  441.7M    394.8M     46.9M  89% /
/dev/root                31.0M     11.3M     19.7M  37% /boot
/dev/mapper/store-root
 441.7M    394.8M     46.9M  89% /
/dev/mapper/store-root
 441.7M    394.8M     46.9M  89% /dev/.static/dev
tmpfs                     2.0M    152.0k      1.9M   7% /dev
/dev/mapper/store-var
 248.0M     22.7M    225.3M   9% /var
/dev/mapper/store-log
 38.7M      4.6M     34.1M  12% /var/log
tmpfs                    64.0M    160.0k     63.8M   0% /tmp
tmpfs                    16.0M     28.0k     16.0M   0% /var/run
tmpfs                    97.9M         0     97.9M   0% /media/ram
cryptofs                  6.4G    549.6M      5.8G   8% /media/cryptofs
/dev/mapper/store-media
 6.4G    549.6M      5.8G   8% /media/internal

I don’t think I will need to access the shell beyond this initial configuration so I am not going to bother with SSH going forward.

I have a bunch of apps and games on my Palm Pre and wanted to try to transfer them to my Pixis. I was hoping for an ipkg variation of dpkg-repack but was unable to find such a thing, so I had to resort to good ol’ tar/gzip. All of the apps (as far as I can tell) are stored in /media/cryptofs/apps. So I tarred up that directory on my Pre, transferred it to my first Pixi, overwrote the apps directory on it, then rebooted to see what happened.
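The transfer itself was nothing fancy; here is a sketch of the approach (the /media paths are the real WebOS ones mentioned above, but the stand-in directory and file names in the runnable part are made up just to demonstrate the tar round-trip):

```shell
# On the Pre (shell via novaterm): bundle up the apps directory.
#   tar -czf /media/internal/apps-backup.tar.gz -C /media/cryptofs apps
# Copy the tarball off (e.g. USB mode), then on the Pixi:
#   tar -xzf /media/internal/apps-backup.tar.gz -C /media/cryptofs
#
# The same round-trip, demonstrated locally with a stand-in directory:
mkdir -p /tmp/demo/pre/cryptofs/apps
echo "com.example.app" > /tmp/demo/pre/cryptofs/apps/installed.txt
tar -czf /tmp/demo/apps-backup.tar.gz -C /tmp/demo/pre/cryptofs apps
mkdir -p /tmp/demo/pixi/cryptofs
tar -xzf /tmp/demo/apps-backup.tar.gz -C /tmp/demo/pixi/cryptofs
cat /tmp/demo/pixi/cryptofs/apps/installed.txt   # the "transferred" app list
```

The -C flag keeps the archive paths relative to the cryptofs mount, so extracting on the target drops everything into the same layout.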

It worked, much better than I had expected. Several of the games (especially the fancier ones) did not work, I suspect because of the different screen size, and a couple of the other fancy games started up but with the edges of the screen clipped. There are probably Pixi versions of many of them, but that wasn’t a big deal; all of the apps worked.

I put the phone in airplane mode to disable the 3G radio, installed a few patches (which modify system behavior), and added a few more free apps/games via WebOS Quick Install. I copied over some music to test it out, and it works awesome. The speakers on the Pixi sound really good, in my opinion.

After the apps are installed there is roughly 6.3-6.5 GB of available storage for media.

The only thing missing? Touchstone charging. The custom case to support that looks like it starts at $20; I already have 3 Touchstone docks, but if I did not, a dock runs $50.

The UI in WebOS is really great, with full multitasking, a great notification system, and everything integrated really well.

Having all of these apps, some games, full wifi (which I can use on 3G/4G with the Sprint MiFi that I have), media playback, a keyboard, a nice-resolution screen, a camera with flash, GPS, a user-replaceable battery, and no carrier contracts, all for $40?! I really wish I had bought more than two.

Really looking forward to the Pre 3 and the Touchpad.

March 25, 2011

RIP: F-22 Raptor

Filed under: General — Tags: — Nate @ 10:55 am

This isn’t really directly related to the IT field but is related to technology so is relevant to the tag line of the site.

First off, let me disclose that I am not a pilot and do not closely follow military matters, so I quite likely have some things wrong; this is more of a comment (like most things here) than me trying to report on something.

The F-22 Raptor, seemingly the most technologically advanced aircraft ever developed, appears to be close to retirement before it ever saw action.

The F-22 Raptor

Like most people, when the No Fly Zone over Libya was announced I thought: hey, this is a great opportunity for the Raptor. After all, with its next-generation stealth technology, in theory the Raptor could enforce the no-fly zone without even attacking the anti-aircraft systems, since they wouldn’t see it anyway (short of getting lucky with some flak or something).

I’ve looked at the Raptor in awe for what seems like almost 20 years now. I remember back in high school I was a pretty avid reader of Aviation Week and Space Technology (I think I will subscribe to it again now that I am thinking about it). I haven’t really read it since high school, but the Raptor sounded so cool at the time; I still remember that I had a laminated artist’s conception of the Raptor for years. It was beautiful.

As time went on I got bits and pieces of information here and there from various sources. More recently there was a documentary called Dogfights of the Future (one cool segment is available here) which renewed my enthusiasm for the fighter. I remember being quite disappointed when the Comanche was canned, but there was still hope with the Raptor!

But it seems the Raptor has too many issues, or is too expensive, or is deemed unnecessary in a world where we already have such dominance in the sky. Although, with China working hard on building a stealth fighter and the general rise of China as a world power, it wouldn’t surprise me if we have conflicts with them in the future, if over nothing else then over natural resources.

The one incident I do remember with the Raptor was several years ago when a bunch of them were flying to Asia, and when they crossed the international date line their computers all crashed.

Maj. Gen. Don Sheppard (ret.): “…At the international date line, whoops, all systems dumped and when I say all systems, I mean all systems, their navigation, part of their communications, their fuel systems. They were—they could have been in real trouble. They were with their tankers. The tankers – they tried to reset their systems, couldn’t get them reset. The tankers brought them back to Hawaii.”

It seems that support for the F-22 Raptor is all but gone at this point; another reason for my recent interest in the Raptor was that report from CNBC I saw yesterday. I don’t see lack of communications with other aircraft as a reason not to use the jet in Libya; after all, it’s stealthy, so you don’t know it’s there. It can see you, but you can’t see or shoot it, well, unless you get in close with a heat seeker or get lucky with guns (assuming the stealth works as well as it is hyped, anyway). It’s clear that almost 14 years after its first flight, the powers that be have lost patience and confidence in the program.

The next solution seems to be (I think?) the F-35 Joint Strike Fighter, which obviously doesn’t have nearly the amount of bling that makes the Raptor so badass (like supercruise). I don’t know if it is still an issue, but at one point some of the F-35 partners were threatening to withdraw support because they weren’t allowed access to the source code of the software that powers it. Not to mention that the F-35 has its own delays and budget overruns.

The F-35 Joint Strike Fighter

In a world where we can’t seem to kill a program because the people who make it have vested political interests in their districts (despite the military saying it has too many and doesn’t want any more), it makes me sad that things like the F-22, and the Comanche for that matter, go away.

The Air Force hasn’t asked for more money to buy C-17s since 2007. That year the Air Force wanted 12, and Congress bought 22. In 2008, the Air Force wanted none, but Congress bought 15. In 2009, the request was also zero, and Congress bought eight. In 2010, the Air Force once again asked for no C-17s, and lawmakers bought 10.

I don’t know the inside story of course, maybe the systems really are plagued and it’s not realistic to fund them further, whatever the real reason is, it is too bad.

February 24, 2011

So easy it could be a toy, but it’s not

Filed under: General,Random Thought — Tags: — Nate @ 8:44 pm

I was at a little event thrown for the Vertica column-based database, as well as Tableau Software, a Seattle-based data visualization company. Vertica was recently acquired by HP for an undisclosed sum. I had not heard of Tableau until today.

I went in not really knowing what to expect; I have heard good things about Vertica from my friend over there, but it’s really not an area I have much expertise in.

I left with my jaw on the floor. I mean, holy crap, that combination looks wicked: the world’s fastest column-based data warehouse combined with a data visualization tool so easy that some of my past managers could even run it. I really don’t have words to describe it.

I had never really considered Vertica for storing IT-related data, but they brought up a case study with one of their bigger customers, Comcast, who sends more than 65,000 events a second into a Vertica database (including logs, SNMP traps and other data): hundreds of terabytes of data with sub-second query response times. I don’t know if they use Tableau’s products or not, but it was a good use case for storing IT data in Vertica.
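To put that ingest rate in perspective, a quick back-of-the-envelope calculation of what 65,000 events a second adds up to over a day:

```shell
# 65,000 events/second, around the clock:
awk 'BEGIN {
  per_day = 65000 * 86400              # 86,400 seconds in a day
  printf "%.1f billion events/day\n", per_day / 1e9
}'
# prints: 5.6 billion events/day
```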

(from Comcast case study)

The test included a snapshot of their application running on a five-node cluster of inexpensive servers with 4 CPU AMD 2.6 GHz core processors with 64-bit 1 MB cache; 8 GB RAM; and ~750 GBs of usable space in a RAID-5 configuration.
To stress-test Vertica, the team pushed the average insert rate to 65K samples per second; Vertica delivered millisecond-level performance for several different query types, including search, resolve and accessing two days’ worth of data. CPU usage was about 9%, with a fluctuation of +/- 3%, and disk utilization was 12% with spikes up to 25%.

That configuration could of course easily fit on a single server. How about a 48-core Opteron with 256GB of memory and some 3PAR storage? Or maybe a DL385 G7 with 24 cores, 192GB of memory (24x8GB), and 16x500GB 10k RPM SAS disks in RAID 5, with dual SAS controllers each holding 1GB of flash-backed cache (one controller per 8 disks)? Maybe throw some Fusion-io in there too?

Now, I suspect there will be additional overhead in feeding IT data into a Vertica database, since you probably have to format it in some way.

Another really cool feature of Vertica: all of its data is mirrored at least once to another server. Nothing special about that, right? Well, they go one step further and give you the ability to store the data pre-sorted in two different ways, so mirror #1 may be sorted by one field and mirror #2 by another, maximizing the usefulness of every copy of the data while maintaining data integrity.
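The idea can be sketched with plain coreutils (the data and field names here are made up for illustration; Vertica calls these pre-sorted copies projections, and the real feature is of course far more sophisticated):

```shell
# One dataset, two copies, each pre-sorted by a different column:
printf 'carol,2011-01-09\nalice,2011-01-05\nbob,2011-01-02\n' > /tmp/events.csv
sort -t, -k1,1 /tmp/events.csv > /tmp/events.by_user   # "mirror #1": sorted by user
sort -t, -k2,2 /tmp/events.csv > /tmp/events.by_date   # "mirror #2": sorted by date
# A query that wants the earliest event reads mirror #2 and needs no re-sort:
head -1 /tmp/events.by_date
# prints: bob,2011-01-02
```

Each copy still protects against losing the other; the sort order just makes one copy or the other cheaper to answer a given query from.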

Something Tableau did really well: you don’t need to know in advance how you want to present your data. You just drag things around and it tries to make intelligent decisions about how to represent them. It’s amazingly flexible.

Tableau does something else well: there is no language to learn. You don’t need to know SQL, you don’t need to learn custom commands; the guy giving the presentation basically never touched his keyboard. And he published some really kick-ass reports to the web in a matter of seconds, fully interactive, so users could click on something and drill down really easily and quickly.

This is all with the caveat that I don’t know how complicated it might be to get the data into the database in the first place.

Maybe there are other products out there as easy to use as Tableau; I don’t know, as it’s not a space I spend much time looking at. But this combination looks incredibly exciting.

Both products have fully functional free evaluation versions available to download on the respective sites.

Vertica licensing is based on the amount of data stored (I assume regardless of the number of copies, but I haven’t investigated too much): no per-user, no per-node, no per-CPU licensing. If you want more performance, add more servers and you don’t pay anything more. Vertica automatically re-balances the cluster as you add servers.

Tableau is licensed as far as I know on a named-user basis or a per-server basis.

Both products are happily supported in VMware environments.

This blog entry really does not do the presentation justice; I don’t have the words for how cool this stuff was to see in action. There aren’t a lot of products or technologies that I get this excited about, but these have shot to near the top of my list.

Time to throw your Hadoop out the window and go with Vertica.

February 15, 2011

IBM Watson does well in Jeopardy

Filed under: General — Nate @ 10:17 am

I’m not a fan of Jeopardy and don’t really watch game shows in general, though I do miss the show Greed, I think it was called, which was on about 10 years ago for a brief time.

I saw a few notes yesterday about how Watson was going to compete, and I honestly wasn’t all that interested for some reason. But I was reading the comments on the story at The Register, someone posted a link (part 1, part 2) to it on YouTube, and I started watching. I couldn’t stop; the more I saw, the more it interested me.

It really was amazing to see some of the brief history behind it and how it evolved, and it was even more exciting to see such innovation still occurring. I really gotta give IBM some mad props for doing that sort of thing. It’s not the first time they’ve done it, of course, but in an age where we are increasingly thinking shorter and shorter term, it’s really inspiring (I think that’s the word I’m looking for) to see an organization like IBM invest the time and money over several years to do something like this.

Here are the questions and answers from the show (as usual I could answer less than 10% of them), and here is more information on Watson.

My favorite part of the show, aside from the historical background, was when Watson gave the same wrong response that another contestant had given right before it (though Watson was unable to hear or see anything, so I can’t fault it for that, but it was a funny moment).

Thanks IBM – keep up the good work!

(Maybe it’s just me, but the avatar Watson has, with its cycle of little expanding circles, reminds me of WarGames and the computer in that movie running nuclear war simulations.)

January 31, 2011

Terremark snatched by Verizon

Filed under: General,Virtualization — Tags: , — Nate @ 9:34 pm

Sorry to my three readers out there for not posting recently; I’ve been pretty busy! And to me there haven’t been many events in the tech world in the past month or so that have gotten me interested enough to write about them.

One recent event that did was Verizon’s acquisition of Terremark, a service I started using about a year ago.

I was talking with a friend of mine recently; he was thinking about either throwing a 1U server in a local co-location facility or playing around with one of the cloud service providers. Since I am still doing both (been too lazy to completely move out of the co-lo…), I gave him my own thoughts, and it made me think more about the cloud in general.

What do I expect from a cloud?

When I’m talking cloud I’m mainly referring to IaaS, or Infrastructure as a Service. Setting aside cost modelling for a moment, I expect the IaaS to more or less just work. I don’t want to have to care about:

  • Power supply failure
  • Server failure
  • Disk drive failure
  • Disk controller failure
  • Scheduled maintenance (e.g. host server upgrades either software or hardware, or fixes etc)
  • Network failure
  • UPS failure
  • Generator failure
  • Dare I say it ? A fire in the data center?
  • And I absolutely want to be able to run whatever operating system I want, and manage it the same way I would if it were sitting on a table in my room or office. That means booting from an ISO image and installing like I would anything else.

Hosting it yourself

I’ve been running my own servers for my own personal use since the mid 90s. I like the level of control it gives me and the amount of flexibility I have with running my own stuff; it also gives me a playground on the internet where I can do things. After multiple power outages over the first part of the decade (one of which lasted 28 hours), and the acquisition of my DSL provider for the ~5th time, I decided to go co-lo. I already had a server, and I put it in a local Tier 2 or Tier 3 data center; I could not find a local Tier 4 data center that would lease me 1U of space. So I lacked:

  • Redundant Power
  • Redundant Cooling
  • Redundant Network
  • Redundant Servers (if my server chokes hard I’m looking at days to a week+ of downtime here)

For the most part I guess I had been lucky; the facility had one, maybe two outages since I moved in about three years ago. The bigger issue was that my server was aging and its disks were failing; it was a pain to replace them, and it wasn’t going to be cheap to replace the system with something modern and capable of running ESXi in a supported configuration (my estimates put the cost at a minimum of $4k). Add to that the fact that I need such a tiny amount of server resources.

Doing it right

So I had heard of Terremark from my friends over at 3PAR, and you know I like 3PAR, and they use VMware and I like VMware. So I decided to go with them rather than the other providers out there; they had a decent user interface and I got up and going fairly quickly.

So I’ve been running it for almost a year with pretty much no issues. I wish they had a bit more flexibility in the way they provision networking, but nothing is perfect (well, unless you have the ability to do it yourself).

From a design perspective, Terremark has done it right, whether it’s providing an easy-to-use interface to provision systems, using advanced technology such as VMware, 3PAR, and NetScaler load balancers, or building their data centers to be, well, fireproof.

Having the ability to do things like vMotion or Storage vMotion is absolutely critical for a service provider; I can’t imagine anyone being able to run a cloud without such functionality, at least with a diverse set of customers. Having things like 3PAR’s persistent cache is critical as well, to keep performance up in the event of planned or unplanned downtime of a storage controller.

I look forward to the day where the level of instrumentation and reporting in the hypervisors allow billing based on actual usage, rather than what is being provisioned up front.

Sample capabilities

In case you’re a less technical user, I wanted to outline a few of the abilities the technology Terremark uses offers their customers:

Memory Chip Failure (or any server component failure or change)

Most modern servers have sensors on them and for the most part are able to accurately predict when a memory chip is behaving badly, warning the operator of the machine to replace it. But unless you’re running some very high-end specialized equipment (which I assume Terremark is not, because it would cost too much for their customers to bear), the operator needs to take the system offline in order to replace the bad hardware. So what do they do? They tell VMware to move all of the customer virtual machines off the affected server onto other servers. This is done without customer impact; the customer never knows it is happening. The operator can then take the machine offline, replace the faulty components, and reverse the process.

Same applies to if you need to:

  • Perform firmware or BIOS updates/changes
  • Perform Hypervisor updates/patches
  • Maybe you’re retiring an older type of server and moving to a more modern system

Disk failure

This one is pretty simple: a disk fails in the storage system and the vendor is dispatched to replace it, usually within four hours. They may opt to wait longer for whatever reason; with 3PAR it doesn’t really matter. There are no dedicated hot spares, so you’re really in no danger of losing redundancy; the system rebuilds quickly using a many-to-many RAID relationship and is fully redundant once again in a matter of hours (vs. days with older systems and whole-disk-based RAID).

Storage controller software upgrade

There are fairly routine software upgrades on modern storage systems; the feature set just seems to grow and grow. So the ability to perform an upgrade without disrupting users for too long (maybe a few seconds) is really important with a diverse set of customers, because there will probably be no good time at which all customers agree to some downtime. Having highly available storage that can maintain performance with a controller offline, by mirroring the cache elsewhere, is a very useful feature to have.

Storage system upgrade (add capacity)

Being able to add capacity without disruption and dynamically re-distribute all existing user data across all new as well as current disk resources on-line to maximize performance is a boon for customers as well.

UPS failure (or power strip/PDU failure)

Unlike the small dinky UPS you may have in your house or office, UPSs in data centers are typically powering up to several hundred machines, so if one fails you may be in for some trouble. But with redundant power you have little to worry about; the other power feed takes over without interruption.

If a server power supply blows up it has the ability to take out the entire branch or even whole circuit that it’s connected to. But once again redundant power saves the day.

Uh-oh I screwed up the network configuration!

Well, now you’ve done it: you hosed the network (or maybe your system just dropped off the network, perhaps due to a flaky network driver or something) and you can’t connect to it via SSH or RDP or whatever you were using. Fear not: establish a VPN to the Terremark servers and you can get console access to your system. If only the console worked from Firefox on Linux… can’t have everything, I guess. Maybe they will introduce support for vSphere 4.1’s virtual serial concentrators soon.

It just works

There are some applications out there that don’t need the level of reliability that infrastructure like Terremark’s can provide, and their owners prefer to distribute things over many machines or many data centers; that’s fine too. But most apps, almost all apps in fact, make the same common assumption (perhaps you can call it the lazy assumption): they assume that it will just work. That shouldn’t surprise anyone, because achieving that level of reliability at the application layer alone is an incredibly complex task to pull off. So instead you have multiple layers of reliability under the application, each handling a subset of availability, layers that have been evolving for years, or in some cases even decades.

Terremark just works. I’m sure there are other cloud service providers out there that work too; I haven’t used them all by any stretch (nor am I seeking them out, for that matter).

Public clouds make sense, as I’ve talked about in the past, for a subset of functionality; they have a very long way to go to replace what you can build yourself in a private cloud (assuming anyone ever gets there). For my own use case, this solution works.

October 8, 2010

Windows mobile: the little OS that couldn’t

Filed under: General — Nate @ 8:15 pm

I don’t know how to feel; a mixture of happiness and sorrow, I suppose. Happy that MS is still unable to get any traction in the mobile space, and sad that they have spent so much time and effort and gotten nowhere. I feel sorry for them, just a little sorry though.

From our best friends at The Register, an article quoting some folks at Gartner –

Gartner said that Windows Phone 7 will provide a fillip to Microsoft’s worldwide mobile market share, pushing it up from 4.7 per cent this year to 5.2 per cent in 2011, but it’s share will fall again to 3.9 per cent by 2014.

That’s just sad for the world’s biggest software company. I mean, I have a Palm Pre and I know Palm’s market share is in the toilet, not that it really matters at this point since they sold out to HP and made their money. But Palm had a microscopic amount of resources compared to Microsoft, and Microsoft has been trying this for, I have to think, at least a decade now. If the above is true and by 2014 they have 4% market share, what would you do if you were MS, having spent 14 years to get 4% market share?

I never understood the mobile space, and I worked at one of the earliest companies that capitalized on it earlier this decade, selling ringtones and wallpapers like nobody’s business. All I could think was: what are these people thinking, buying all this crap? But they just kept coming and coming, like moths to a flame. Nobody was worse than Sprint, though, before they ended up partnering with the company I was with. I remember back in… 2004? I looked to see what Sprint had to offer as far as ringtones and such, and they actually wanted you to rent them: that’s right, pay $2.99 or whatever for a ringtone and then have to buy it again in 90 days. That practice stopped after they bought Nextel, which was already a customer of ours at the time, and Sprint was merged into the service we provided.

If it wasn’t for Classic, I would still be using a trusty Sanyo feature phone: it booted up in about 15 seconds, crashed maybe once or twice a year, worked really well as a phone, and its battery lasted a good long time too.

I noticed the battery life on my Palm went through the roof practically after I stopped using the MS Exchange plugin. All this time I thought the battery life was crap, when it was that stupid plugin draining it.

Looking forward to the PalmPad from HP myself. I won’t go near an Apple product. I don’t trust Google either, so Android is out 🙂

 

New TechOpsGuy: Scott

Filed under: General — Nate @ 7:46 pm

Well, what do you know: I make a passing comment, in response to Robin’s comments, about whether or not someone will join up with me on this site in the future, and not long after I get a volunteer. He sounds really good, focusing more on the software end of things than hardware, but from what I have read of him he is similar to me: seeking out best-of-breed technologies to make his life better.

He seems to be a Linux expert and an automation expert, and knows everything there is to know about open source. He even seems to know networking, having built an Extreme-based network recently (sorry, couldn’t resist).

So welcome, Scott, to the site; I’m sure he will post something introducing himself at some point in the not-too-distant future.

I hope he knows what he is getting into; the bar is high for publishing content! It does take time to get into the groove; it took me a few months to figure out how to write in this manner.

This site gets far more traffic than I ever really thought it would; I’m really surprised, impressed, and flattered. I get enormous positive feedback, and I really feel this site has done more for my professional career than, well, any job I have had. I’m really glad it’s here and will do my best to continue posting my thoughts.

Thanks a lot to all of the readers, whether you like what I have to say or not 🙂

I’m getting close to launching my non-technical blog, hopefully this weekend. Can you believe that I have so much more to say that I need a second blog? You have probably seen hints of what this new blog will contain in past posts. Those of you who know me, well, know what I have to say. I will try to keep it as tame and objective as this site (regardless of how I may come across sometimes, I try very hard to be calm and controlled; those that know me know this is true, as I am RAW and honest, too raw for some on occasion, I’m sure), but I can’t make promises.

October 5, 2010

Keeping TechOpsGuys around a bit longer

Filed under: General — Nate @ 7:52 am

Well, before the domain transferred, Robin from StorageMojo sent a good comment my way and it made sense. He’s a much more experienced blogger than me, so I decided to take his advice and do a couple of things:

  • Keep the TechOpsGuys name for now – even though it’s just me – until I manage to find something better
  • Keep the original layout – it annoys me, but I can live with it using the Firefox zoom feature (zoomed in to 150%)

Thanks, Robin, for the good suggestions. (I don’t know enough about MySQL to recover the old data, since I already did the original migration.)

Maybe someone else will join my blogging in the future..

The old TechOpsGuys is officially dead. Well, you may be able to hit it if you have the IP (not that you care!); my former partners in crime are welcome to contribute to the site still if they want.

I’ll bring up www.techopsguys.com again, probably this weekend, to rave about non-technical topics, so I can keep this site technical. Since I run the server, I can run as many blogs as I want! Well, as many as I have time for.

October 3, 2010

Welcome to the new site

Filed under: General — Nate @ 1:43 pm

Hey there, welcome to the new blog site. I migrated the data from http://www.techopsguys.com/ (well, my posts at least). Let me know if you see anything that’s really broken. I had to edit a bunch of SQL to change the names, paths, etc., and put in symlinks to fix other things, but I think it’s working. New theme too! I like to read things that are easier on the eyes in low(er) light levels; white is very… bright! It hurts my eyes.

September 23, 2010

Using open source: how do you give back?

Filed under: General,linux,Random Thought — Tags: — Nate @ 10:11 pm

After reading an article on The Register (yeah, you probably realize by now I spend more time on that site than pretty much any other), I got thinking about a topic that bugs me.

The article is from last week and is written by the CEO of the organization behind Ubuntu. It basically talks about how using open source software is a good way to save costs in a down (or up) economy, and gives a bunch of examples of companies basing their stuff on open source.

That’s great. I like open source myself; I fired up my first Slackware Linux box in 1996, I think it was (Slackware 3.0). I remember picking Slackware over Red Hat at the time specifically because Slackware was known to be more difficult to use, and it would force me to learn Linux the hard way. Believe me, I learned a lot. To this day people ask me what they should study or do to learn Linux, and I don’t have a good answer; there is no quick and easy way to learn Linux the way I learned it. It takes time: months, even years, of just playing around with it. With so many “easy” distributions these days I’m not sure how practical my approach is now, but I’m getting off topic here.

So back to what bugs me. What bugs me is people out there, or more specifically organizations out there, that do nothing but leech off the open source community. Companies that may make millions (or billions!) in revenue in large part because they are leveraging free stuff. But it’s not the usage of the free stuff that I have a problem with; more power to them. I get annoyed when those same organizations feel absolutely no moral obligation to contribute back to those who have given them so much.

You don’t have to do much. Over the years, most of what I have contributed back has been participation on mailing lists, whether the Debian users list (it’s been many years since I was active there), the Red Hat mailing list (a few years), or the CentOS mailing list (several months). I try to help where I can. I have a good deal of Linux experience, which often means the questions I have nobody else on the list has answers to, but I do (well, did) answer a ton of questions; I’m happy to help. I’m sure at some point I will re-join one of those lists (or maybe another one) and help out again, but I’ve been really busy these past few months. I remember even buying a bunch of Loki games to try to do my part in helping them (despite the games not being open source, they were supporting Linux indirectly), several of which I never ended up playing (not much of a gamer). VMware of course was also a really early Linux supporter (I still have my VMware 1.0.2 Linux CD; I believe that was the first version they released on CD, previous versions were download-only), though I have gotten tired of waiting for vCenter for Linux.

The easiest way for a corporation to contribute back is to use and pay for Red Hat Enterprise Linux, or SUSE, or whatever: pay the companies that hire the developers who make the open source software go. I’m partial to Red Hat myself, at least in a business environment, though I use Debian-based distributions in my personal life.

There are a lot of big companies that do contribute code back, and that is great too, if you have the expertise in house. Opscode is one such company I have been working with recently on their Chef product. They leverage all sorts of open source stuff in their product (which is itself open source). I asked them what their policy is for getting things fixed in the open source code they depend on — do they just file bugs and wait, or do they contribute code — and they said they contribute a bunch of code, constantly. That’s great; I have enormous respect for organizations like that.

Then there are the companies that leech off open source and not only contribute nothing officially in any way whatsoever, but actively prevent their own employees from doing so. That’s really frustrating & stupid.

Imagine where Linux, and everything else, would be if more companies contributed back. It’s not hard: go get a subscription to Red Hat, or Ubuntu, or whatever for your servers (or desktops!). If you can’t contribute code, and you can’t contribute back in the form of supporting the community on mailing lists, or helping out with documentation or the wikis or whatever, then write a check. You actually get something in return; it’s not like it’s a donation. But donations are certainly accepted by the vast number of open source non-profits too.

HP has been a pretty big backer of open source for a long time, they’ve donated a lot of hardware to support kernel.org and have been long time Debian supporters.

Another way to give back is to leverage your infrastructure: if you have a lot of bandwidth, excess server capacity, disk space, or whatever, set up a mirror or sponsor a project. Looking at the Debian page as an example, it seems AboveNet is one such company.

I don’t use open source everywhere; I’m not one of those folks who has to make sure everything is GPL or whatever.

So all I ask, is the next time you build or deploy some project that is made possible by who knows how many layers of open source products, ask yourself how you can contribute back to support the greater good. If you have already then I thank you 🙂

Speaking of Debian, did you know that Debian powers 3PAR storage systems? Well, it did at one point; I haven’t checked recently, but I do recall telnetting to my arrays on port 22 and seeing a Debian SSH banner. The underlying Linux OS was never exposed to the user. And it seems 3PAR reports bugs upstream, which is another important way to contribute back. Also, as of 3PAR’s 2.3.1 release (I believe) they finally started officially supporting Debian as a platform to connect to their storage systems; by contrast, they do not support CentOS.
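The port-22 trick works because an SSH server announces itself in plain text the moment you connect, before any crypto starts, so even a bare telnet or netcat to port 22 shows the banner. Here is a minimal sketch of what that banner tells you; the banner string below is a hypothetical example of a Debian-flavored OpenSSH banner, not an actual capture from a 3PAR array:

```python
# An SSH server sends its identification string in plain text on connect,
# so "telnet host 22" shows it immediately. Hypothetical example banner:
banner = "SSH-2.0-OpenSSH_5.1p1 Debian-5"

# Per RFC 4253 the format is: SSH-protoversion-softwareversion SP comments
proto, software = banner.split("-", 2)[1:]
software, _, comments = software.partition(" ")

print(proto)     # protocol version, e.g. 2.0
print(software)  # e.g. OpenSSH_5.1p1
print(comments)  # e.g. Debian-5 -- the giveaway that the box runs Debian
```

The "comments" field is where distributions typically stamp their package revision, which is exactly the kind of detail that outed Debian on those arrays.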

Extreme Networks’ ExtremeWare XOS is also based on Linux, though I think it’s a special embedded version. I remember in the early days they didn’t want to admit it was Linux; they said “Unix-based”. I just dug this up from a backup from 2005 — once I saw this on my core switch booting up, I was pretty sure it was Linux!

Extreme Networks Inc. BD 10808 MSM-R3 Boot Monitor
Version 1.0.1.5 Branch mariner_101b5 by release-manager on Mon 06/14/04
Copyright 2003, Extreme Networks, Inc.
Watchdog disabled.
Press and hold the <spacebar> to enter the bootrom.

Boot path is /dev/fat/wd0a/vmlinux
(elf)
0x85000000/18368 + 0x85006000/6377472 + 0x8561b000/12752(z) + 91 syms/
Running image boot…

Starting Extremeware XOS 11.1.2b3
Copyright (C) 1996-2004 Extreme Networks.  All rights reserved.
Protected by U.S. Patents 6,678,248; 6,104,700; 6,766,482; 6,618,388; 6,034,957

Then there’s my TiVo that runs Linux; my TV runs Linux (a Philips); my Qlogic FC switches run Linux; I know F5 equipment runs on Linux; my phone runs Linux (Palm Pre). It really is pretty crazy how far Linux has come in the past 10 years. And I’m pretty convinced the GPL played a big part, making it more difficult to fork it off and keep the changes for yourself. A lot of momentum built up in Linux, and companies and everyone just flocked to it. I do recall early F5 load balancers used BSDI, but they switched over to Linux (didn’t the company behind BSDI go out of business earlier this decade? Or maybe they got bought, I forget). It seems Linux is everywhere, and in most cases you never notice it. The only way I knew it was in my TV is that the instructions came with all sorts of GPL disclosures.

In theory the BSD licensing scheme should make the *BSDs much more attractive, but for the most part *BSD has not been able to keep pace with Linux (outside some specific niches; I do love OpenBSD‘s pf), so it never got anywhere close to the critical mass Linux has.

Of course now someone will tell me about some big fancy device that runs BSD, in every data center and every household, and I don’t even know it’s there! If I recall right, Juniper’s JunOS is based on FreeBSD? And I think Force10 uses NetBSD.

I also recall being told by some EMC consultants back in 2004/2005 that the EMC Symmetrix ran Linux too. I do remember the Clariions of the time (at least, maybe still) ran Windows, probably because EMC bought the company that made that product rather than creating it themselves.

