TechOpsGuys.com Diggin' technology every day

August 7, 2012

ESXi 5 Uptake still slow?

Filed under: General — Tags: — Nate @ 10:10 am

Just came across this article from our friends at The Register, and two things caught my eye –

HP is about to launch a new 2U quad socket system – the HP DL560 Gen8, which is what the article is about. I really can’t find any information on this server online, so it seems it is not yet officially announced. I came across this PDF from 2005, which says the 560 has existed in the past – though I never recall hearing about it and I’ve been using HP gear off and on since before that. Anyways, on the HP site the only 500-series systems I see are the 580 and 585, nothing new there.

HP has taken its sweet time joining the 4-socket 2U gang. I recall Sun was among the first several years ago with the Opteron; later Dell and others joined in, but HP stayed bulky, with its only quad-socket rack option being 4U.

The more interesting thing to me, though, was the lack of ESXi 5.0 results posted with VMware’s own benchmark utilities. Of the 23 results posted since ESXi 5 was made generally available, only four are running on the newest hypervisor. I count six systems using ESX 4.1U2 and vCenter 5.0 (a combination I chose for my company’s infrastructure). Note I said ESX – not ESXi. I looked at a couple of the disclosure documents and would expect them to specifically call out ESXi if that is in fact what was used.

So not only are they NOT using ESXi 5.0 but they aren’t even using ESXi period with these newest results (there is not a single ESXi 4.x system on the site as far as I can tell).

I find that fascinating. Why would they be testing with an older version of the hypervisor and not even using ESXi? I have my own reasons for preferring ESX over ESXi, but I’d really expect that for benchmark purposes they’d go with the lighter hypervisor. It certainly takes significantly less time to install onto a system since it’s so small.

I have to assume they are using this configuration because it’s what the bulk of their customers are still deploying today; otherwise it makes no sense to be testing the latest and greatest Intel processors, on server hardware that’s not even released yet, on an OS kernel that is going on three years old at this point. I thought there were supposed to be some decent performance boosts in ESXi 5?

I’m not really a fan of the VMmark benchmark itself. It seems rather confusing to interpret results, there are no cost disclosures, and I suspect it only runs on VMware, making it difficult or impossible to compare with other hypervisors. The format of the results is not ideal either; I’d like to see at least CPU/memory/storage benchmarks included so it’s easier to tell how each subsystem performed. Testing brand X with processor Y and memory Z against brand W with processor Y and memory Z by itself doesn’t seem very useful.

SPEC has another VM benchmark that seems similarly confusing to interpret, though at least they have results for more than one hypervisor.

vSphere, aka ESX 4, really was revolutionary when it was released: it ditched the older 32-bit system for a more modern 64-bit one, and introduced a ton of new things as well.

I was totally underwhelmed by ESXi 5, even before the new licensing change was announced. I mean just compare What’s New between vSphere 4 and vSphere 5.

August 1, 2012

Oracle loses 2nd major recent legal battle

Filed under: General — Tags: — Nate @ 5:15 pm

Not long ago, Oracle lost the battle against Google’s Android, and now it seems they have lost the battle with HP on Itanium.

A California court has ruled that Oracle is contractually obligated to produce software for Hewlett-Packard’s Itanium-based servers and must continue to do so for as long as HP sells them.

That’s quite a ruling – for as long as HP sells them. That could be a while! Though I think a lot of damage has already been done to Itanium; all of the uncertainty I’m sure prompted a bunch of customers to migrate to other platforms, since they thought Oracle was gone. I suspect it won’t stop either – I think customers will assume they will get poor levels of support on Itanium because Oracle is being forced to provide it kicking and screaming.

Couldn’t have happened to a nicer company (even though I am a long-time fan of Oracle DB itself..)

July 30, 2012

Super Micro out with mini UPSs for servers

Filed under: General — Tags: — Nate @ 9:56 pm

It’s been talked about for quite a while; I think Google was probably the first to widely deploy a battery with their servers, removing the need for larger batteries at the data center or rack level.

Next came Microsoft with what I consider to be a better design (more efficient, at least), since Google’s apparently uses AC power to the servers (though the pictures could well be outdated – who knows what they use now). Microsoft took the approach of rack-level UPSs and DC power to the servers.

I was at a Data Center Dynamics conference a couple years back where a presenter talked about a similar topic, though he didn’t use batteries – it was more along the lines of big capacitors (that had the risk of exploding, no less).

Anyways, I was wandering along and came across this, which seems really new. It goes beyond the notion that most power events last only two seconds, and gives a server an internal battery runtime of anywhere from 30 seconds to 7 minutes depending on sizing and load.

It looks like a really innovative design, and it’s nice to see a commercial product in this space being brought to market. I’m sure you can get similar things from the depths of the bigger players if you’re placing absolutely massive orders of servers, but for more normal folks I’m not aware of a similar technology being available.

These can be implemented in 1+1+1 (2 AC modules + 1 UPS Module), 1+2 (1 AC + 2 UPS @ 2000W) or 2+2 (2 AC + 2 UPS @ 2000W) configurations.

It does not appear to be an integrated PSU+battery, but rather a battery module that fits alongside a PSU, in place of what otherwise could be another PSU.

You may have issues running these battery units in 3rd party data centers – I don’t see any integration for Emergency Power Off (EPO), and some facilities are picky about that kind of thing. I can imagine the look on some uninformed tech’s face when they hit the EPO switch and the lights go out but hundreds or thousands of servers keep humming along. That would be a funny sight to see.

While I’m here I guess I should mention the FatTwin systems that they released a few weeks ago, equally innovative compared to the competition in the space at least. Sort of puts the HP SL-series to shame, really. I don’t think you’d want to run mission critical stuff on this gear, but for the market it’s aimed at – HPC, web farms, Hadoop, etc. – they look efficient, flexible and very dense, quite a nice step up from their previous Twin systems.

It’s been many years since I used Super Micro. I suppose the thing they have traditionally lacked more than anything else, in my experience (which again isn’t recent – maybe this is fixed), is better fault detection and reporting of memory errors, along the lines of HP’s Advanced ECC or IBM’s Chipkill (the damn thing was made for NASA, what more do you need!).

I recall some of the newer Intel chips have something similar in the newer chipsets, though the HP and IBM stuff is more CPU agnostic (e.g. supports AMD 🙂 ). I don’t know how the new Intel memory protection measures up to Advanced ECC / Chipkill. Note I didn’t mention Dell – because Dell has no such technology either (they too rely on the newer Intel chips to provide that similar function, for their Intel boxes at least).

The other aspect is reporting. When a memory error is reported on an HP system, for example (at least one of the better ones, 300-series and above), typically a little LED lights up next to the socket having errors, along with perhaps even a more advanced diagnostics panel on the system, before you even open it up, showing which socket has issues. Since memory errors were far and away the #1 issue I had when I ran Super Micro systems, these features became sorely missed very quickly. Another issue was remote management, but they have addressed this to some extent in their newer KVM management modules (now that I think about it, the server that powers this blog is a somewhat recent Super Micro with KVM management – but from a company/work/professional perspective it’s been a while since I used them).

July 27, 2012

Microsoft Licenses Linux to Amdocs

Filed under: General — Tags: , — Nate @ 3:00 pm

Microsoft has been fairly successful in strong-arming licensing fees out of various Android makers, though less successful in getting fees directly from operators of Linux servers.

It seems one large company, Amdocs, has caved in though.

The patent agreement provides mutual access to each company’s patent portfolio, including a license under Microsoft’s patent portfolio covering Amdocs’ use of Linux-based servers in its data centers.

I almost worked for Amdocs way back in the day. A company I was at was acquired by them, I want to say less than two months after I left. Fortunately I still had the ability to go back and buy my remaining stock options, and got a little payout from it. One of my former co-workers said that I walked away from a lot of money. I don’t know how much he got, but he assured me he spent it quickly and was broke once again! I don’t know many folks at the company still, since I left it many years ago, but everything I have heard suggests the company turned out to be as bad as I expected, and I don’t think I would have been able to put up with the politics or red tape for the retention periods following the acquisition – it was already bad enough to drive me away from the company before they were officially acquired.

I am not really surprised Amdocs licensed Linux from Microsoft. I was told an interesting story a few years ago about the same company. They were a customer of Red Hat for Enterprise Linux, and Oracle enticed them to switch to Oracle Enterprise Linux for half the cost they were paying Red Hat. So they opted to switch.

The approval process had to go through something like a dozen layers to get processed, and at one point it ended up on the desk of the head legal guy at Amdocs corporate. He quickly sent an email to the new company they had acquired about a year earlier saying that the use of Linux or any open source software was forbidden, and that they had to immediately shut down any Linux systems they had. If I recall right this was the day before a holiday weekend. My former company was sort of stunned, and laughed a bit; they had to send another letter up the chain of command, which I think reached the CEO (or the person immediately below the CEO) of the big parent, who went to the lawyer and said they couldn’t shut down their Linux systems because all of the business flowed through Linux, and they weren’t about to shut down the business on a holiday weekend – well, that and the thought of migrating to a new platform so quickly was sort of out of the question given all the other issues going on at the time.

So they got a special exclusion to run Linux and some other open source software, which I assume is still run to this day. It was the first of three companies (in a row no less) that I worked at that started out as Microsoft shops, then converted to Linux (in all three cases I was hired on a minimum of 6-12 months after they made the switch).

Another thing the big parent did when they came over to take over the corporate office was re-wire everything into separate secure and insecure networks. The local Linux systems were not allowed on the secure network, only the insecure one (and they couldn’t do things like check email from the insecure network). They tried re-wiring it over a weekend, and if I recall right they were still having problems a week later.

Fun times I had at that company. I like to tell people I took 15 years of experience and compressed it into three – though given some of the resumes I have come across recently, 15 years may not be long enough. It was a place of endless opportunity, and endless work hours. I’d do it again if I could go back, I don’t regret it, though it came at a very high personal cost which took literally a good five years to fully recover from after I left (I’m sure some of you know the feeling).

I wouldn’t repeat the experience again though – I’m no longer willing to put up with outages that last 10+ hours (had a couple that lasted more than 24 hours) and work weeks that extend into the 100 hour range with no end in sight. If I could go back in time and tell myself whether or not to do it, I’d say do it, but I would not accept a position at a company today to repeat that experience – just not worth it. A few years ago some of the execs from that company started a new company in a similar market and tried to recruit a bunch of us former employees, pitching the idea that “it’ll be like the good ol’ days” – they didn’t realize how much of a turn off that was to so many of us!

I’d be willing to bet the vast majority of Linux software at Amdocs is run by the company I was at, at last check I was told it was in the area of 2,000 systems (all of which ran in VMware) – and they had switched back to Red Hat Enterprise again.

July 11, 2012

Tree Hugging SFO stops buying Apple

Filed under: General — Tags: , — Nate @ 8:31 am

I saw this headline over on Slashdot just now and couldn’t help but laugh. Following Apple’s withdrawal from an environmental standards body, the city of San Francisco – pretty much in Apple’s back yard – is going to stop buying Macs because of it. I imagine they will have to stop buying iPads and iPhones too (assuming they were buying any to begin with), since those are just as integrated as the latest Mac laptops.

Apparently the tightly integrated devices are too difficult to recycle to be compliant, so rather than make the devices compliant, Apple goes its own way.

I don’t care either way myself, but I can just see the conflict within the hardcore environmentalists, who seem to almost universally (from what I’ve seen, anyways) adopt Apple products across the board. For me it’s really funny, at least.

It is an interesting choice though, given Apple’s recent move to make one of their new data centers much more green by installing tons of extra solar capacity. On the one hand the devices are not green, but on the other hand the cloud that powers them is. But you can’t use the cloud unless you use the devices, what is an environmentalist to do?!

I suppose the question remains – given many organizations have bans on equipment that is not certified by this environmental standards body – once these bans become more widespread, how long will it be until some of those organizations cave to their own internal politics and to the withdrawal their users go through from not being able to use Apple? I wonder if some may try to skirt the issue by implementing BYOD and letting users expense their devices.

Speaking of environmental stuff, I came across this interesting article on The Register a couple weeks ago, which talks about how futile it is to try to save power by unplugging your devices – the often talked about power drain as a result of standby mode. The key takeaway from that story for me was this:

Remember: skipping one bath or shower saves as much energy as switching off a typical gadget at the wall for a year.

In the comments of the story, one person wrote how his girlfriend or wife would warm up the shower for 4-5 minutes before getting in. The same person wanted to unplug their gadgets to save power, but she didn’t want to NOT warm up the shower, thus obviously wasting a ton more energy than anything that could be saved by unplugging their gadgets. For me, the apartment I live in now has some sort of centralized water heater (the first one I’ve ever seen in a multi-home complex); all of my previous places have had dedicated water heaters. So WHEN the hot water works (I’ve had more outages of hot water in the past year than in the previous 20), the shower warms up in about 30-45 seconds.

So if you want to save some energy, take a cold shower once in a while – or skip a shower once in a while. Or if you’re like Kramer and take 60+ minute showers, cut the time down (for him it seems even 27 minutes wasn’t long enough). If you really want to save some energy, have fewer children.

I’m leaving on my road trip to Seattle tomorrow morning, going to drive the coast from the Bay Area to Crescent City, then cut across to Grants Pass Oregon before stopping for the night. Then take I-5 up to Bellevue on Friday so I can make it in time for Cowgirls that night. Going to take a bunch of pictures with my new camera and test my car out on those roads. I made a quicker trip down south last Sunday – drove the coast to near LA and got some pretty neat pictures there too. I drove back on the 4th of July (started at around 5PM from Escondido, CA), for the first time ever for me at least there was NO TRAFFIC. I drove all the way through LA and never really got below 50-60MPH. I was really shocked even given the Holiday. I drove through LA on Christmas eve last year and still hit a ton of traffic then.

June 30, 2012

Java and DNS caching

Filed under: General — Nate @ 12:20 pm

I wanted to write a brief note on this since it’s a fairly widespread problem that I’ve encountered when supporting Java-based applications (despite this problem, I much prefer supporting Java-based apps over any other language at this point, by leaps and bounds).

The problem is a really, really stupid default setting with regards to DNS caching in the java.security file. It’s an old setting – I recall first coming across it in, I want to say, 2004 or even 2003. But it’s still the default today, and some big names out there apparently are not aware or do not care, because I come across this issue from a client perspective on what feels like a semi-regular basis.

I’ll let the file speak for itself:

#
# The Java-level namelookup cache policy for successful lookups:
#
# any negative value: caching forever
# any positive value: the number of seconds to cache an address for
# zero: do not cache
#
# default value is forever (FOREVER). For security reasons, this
# caching is made forever when a security manager is set. When a security
# manager is not set, the default behavior in this implementation
# is to cache for 30 seconds.
#
# NOTE: setting this to anything other than the default value can have
#       serious security implications. Do not set it unless
#       you are sure you are not exposed to DNS spoofing attack.
#
#networkaddress.cache.ttl=-1

If you’re experienced with DNS at all you can probably tell right away the above is a bad default to have. The idea that you open yourself to a DNS spoofing attack otherwise is just brain dead, I’m sorry – you may very well be opening yourself to DNS spoofing attacks by caching those responses forever. I think back to a recent post of mine related to the Amazon cloud, specifically their Elastic Load Balancers – as terrible as they are, they also by design change IP addresses at random intervals, sometimes resulting in really bad things happening.

“Amazon Web Services’ Elastic Load Balancer is a dynamic load-balancer managed by Amazon. Load balancers regularly swapped around with each other which can lead to surprising results; like getting millions of requests meant for a different AWS customer.”

Swapping IPs at random is obviously heavily dependent upon all portions of DNS resolution operating perfectly. Ask anyone experienced with DNS what they do when they migrate from one IP to another and you’ll very likely hear they keep the old IP active for X number of DAYS or WEEKS regardless of their DNS TTL settings, because some clients or systems simply don’t obey them. This is pretty standard practice. When I moved that one company out of Fisher Plaza (previous post) to the new facility, I stuck a basic Apache proxy server in the old data center for a month forwarding all requests to the new site (other things like SMTP/DNS were handled by other means).
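
That stopgap doesn’t need to be fancy. Here is a minimal sketch of the kind of forwarding vhost I mean (the hostnames are made up, and it assumes mod_proxy is loaded); everything still hitting the old IP simply gets relayed to the new site:

 <VirtualHost *:80>
     ServerName www.example.com
     # never act as an open forward proxy
     ProxyRequests Off
     # pass the original Host header through to the new site
     ProxyPreserveHost On
     ProxyPass        / http://newsite.example.com/
     ProxyPassReverse / http://newsite.example.com/
 </VirtualHost>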

Java takes it to a new level though, I’ll admit that. Why that is the default is just, well, I really don’t have words to answer that.
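
For what it’s worth, the fix is a one-line change in that same java.security file (or a call to Security.setProperty("networkaddress.cache.ttl", "60") early in application startup, before the first lookup). The value is up to you; something in the 30-60 second range honors normal DNS behavior without hammering your resolvers, and the 60 below is just my own arbitrary pick:

 # cache successful lookups for 60 seconds instead of forever
 networkaddress.cache.ttl=60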

Fortunately, Amazon EC2 customers have another solution considering how terrible ELB is: they can use Zeus (oh sorry, I meant Stingray). I’ve been using it for the past 7-8 months and it’s quite nice – easy to use, very flexible and powerful, much like a typical F5 or Citrix load balancer (much easier to manage than Citrix). It even integrates with the EC2 APIs – it can use an Elastic IP to provide automatic failover (fails over in 10-20 seconds if I recall right, much faster than any DNS-based system could), and because of the Elastic IP the IP of the load balancer never changes. The only real downside to Zeus is that it’s limited to a single IP address (not Zeus’ fault, this is an Amazon limitation), so you can only do one SSL cert per Zeus cluster; the costs can add up quick if you need lots of certs, since the cost of Zeus is probably 5-10x the cost of ELB (and worth every stinking penny too).

Oh, that and the Elastic IP is only external (another Amazon limitation – you may see a trend here – ELB has no static internal IP either). So if you want to load balance internal resources, say web server 1 talking to web server 2, you either have to route that traffic out to the internet and back in, or point at the internal IP of the Zeus load balancer and manually update the configuration if/when the load balancer fails over, because the internal IP will change. I was taught a long time ago to put everything behind a VIP, which means extensive use of internal load balancing for everything from HTTP to HTTPS to DNS to NTP to SMTP – everything. With the Citrix load balancers we’ve extended our intelligent load balancing to MySQL, since Citrix has native support for MySQL now (I’m not aware of anyone else that does – Layer 4 load balancing doesn’t count here).

June 26, 2012

In Seattle area July 13 – 22nd – and at Velocity 2012 tonight

Filed under: General — Tags: — Nate @ 7:41 am

Hey folks!

For my friends up in SEA I wanted to announce that I plan to be in Bellevue from July 13th until the 22nd. Looking at that Wikipedia article –

In 2008, Bellevue was named number 1 in CNNMoney’s list of the best places to live and launch a business. More recently, Bellevue was ranked as the 4th best place to live in America.

I guess I was pretty lucky to both live and work in Bellevue in 2008! (I lived right across the street from work, even.) I do miss it, though it was growing too fast (for good reason I’m sure) – the downtown area just exploded during the 11 years I lived there (I lived downtown). My new home doesn’t seem to have won any awards, at least according to Wikipedia. I still walk to work – but it’s much further away (especially after a recent office move, now 0.8 miles each way, double what it used to be).

As I mentioned before, I’m driving up the coast (at least part way; I haven’t decided how far past Crescent City, CA I will go). I thought about it and ordered a new camera, which should get here soon, and I’m pretty excited to try it out. I wanted something with a more powerful optical zoom than the 12X I have now, so I thought I would have to go DSLR. After doing a bit of checking it seems that typical DSLR zoom lenses aren’t that impressive (from a zoom factor standpoint at least). Then I came across the Nikon P510 and its unbelievable 42X optical zoom (I was expecting to get something more like 20X), so it wasn’t a very hard decision to make. The camera is bigger than what I have now, but it’s about the same size as a Kodak camera I had before, which had 10X zoom (bought in 2005).

I’m very much an amateur picture taker (I couldn’t bring myself to use the term photographer because I suck), but I really did like the scenery along the western coast of the country the last trips I took.

So not only does it have a massive zoom, just what I wanted, but the price is good too! To think I was considering spending $1500 for a short while, until I determined the DSLR zoom factor wasn’t as high as I thought it would be (given the price). I know the picture quality of DSLRs is much better – or at least can be in the right hands (my hands are not the right hands).

Anyways, back on topic – my trip to Seattle, I mean to Bellevue. I’m staying at the Larkspur Hotel, a chain I had not heard of before, but it looks like a really nice place and is close to my former home. The plans are not 100% finalized but I think I am 95-98% sure at this point. I opted for a refundable room just in case 🙂

Of course I plan to be at the usual places while I’m there, I’ll be working for a few days at least since most of the action will be at night and weekends anyways.

Also, if you’re in the Bay Area and aren’t doing anything else tonight, there is a Music + Tech party sponsored by Dynect (a DNS provider I’ve been using for a few years – they have great service) in Santa Clara tonight at 8PM. Barring an emergency or something I’m planning to be there. Unlike the main Velocity 2012 conference, this event does not require a pass/tickets to attend.

June 17, 2012

Can Sprint do anything else to drive me away as a customer

Filed under: General — Tags: — Nate @ 2:15 pm

I write this in the off chance it shows up in one of those Google alert searches that someone over at Sprint may be reading.

I’ll keep it pretty short.

I’ve been a customer of Sprint for 12 years now, and for the most part I have been a happy customer. The only problems I have had with Sprint were when I had to deal with customer service; the last time I had a customer service issue was in 2005.

I dropped Sprint last year for AT&T in order to use the GSM HP Pre 3, which I ordered at high cost from Europe. My Sprint Pre basically died early last year and I sat with a feature phone to fill the gap. Sprint charged me something like $100 or so to change to AT&T even though I was no longer in contract (I paid full price for that feature phone). I think that was wrong, but whatever, I didn’t care.

Fast forward to late last year/early this year – I’m still a Sprint customer – not a phone customer but a Mifi 3G/4G customer. I bought the Mifi almost two years ago primarily for on call type stuff. I hardly ever use it. I’d be surprised if I did 15GB of data transfer over the past 2 years.

Sprint sent me my usual bill, and it had some note about e-billing in it. Whatever, I paid my bill and didn’t think about it. I don’t like e-billing, I don’t want e-billing.

Given I don’t use the Mifi much, it was about a month later that I tried to use it, only to find my service disconnected. Whatever, it wasn’t that important at the time.

Later I got a letter saying “Hey, pay your bill to reconnect your service!” I still didn’t care since I wasn’t using it anyways.

Later I got some collection notices from a collection agency (the first time I’ve ever gotten one of those), so I figured I should pay. I called Sprint, paid the bill, and the first rep I talked to swore I was on paper bills and that it must be a problem with the post office. I could see perhaps the post office missing one bill (I don’t recall them ever missing any in the past 15 years), but not more than one. She swore it was set right on their side. So I hung up, not really knowing what to do next.

I logged onto their site and it clearly said I was signed up for e-billing. So without changing it back to paper I called back and got another rep. This rep said I was signed up for e-billing. I asked her why – she said it would only have happened if I had requested it. I told her I did not, and she looked at the call logs going back SEVEN YEARS and confirmed I never requested it. I asked whether Sprint did some kind of bulk change to e-billing; she said no. I asked how it got set; she didn’t know.

I told her first I want to change it back to paper billing, and then I want some kind of notification whenever it changes. She said notifications go out via SMS. Well, I am on a Mifi – no SMS here. She updated the system to send me notifications via the mail (I would hope their system would detect that the only device they have for me doesn’t support SMS and automatically use another method, but no, it doesn’t). There wasn’t more she could do, so I was transferred to another rep who said about the same thing. They assured me it was fixed.

Shortly thereafter I got a paper bill, and I paid it like I always do, yay no more problems.

Fast forward more time (a month or two, who knows) to today – I get another bill-like thing in the mail from Sprint. But it’s not a bill – it’s another one of those “Hey, pay your bill so you can get your service back” things. Here I was thinking it felt like too much time had elapsed since my last bill.

Ok, now I’m F$##$ pissed. WTF. I would hope that their system would say “this guy stopped paying his bills right when he was switched to e-billing, so this is probably not a coincidence.” Obviously they have no such system.

So I logged onto Sprint again, and could not find the ‘billing preferences’ part of the site that I found earlier in the year; the only link was ‘switch to e-billing to be green’. I didn’t want to click it as I wasn’t sure what it would do – I did not want to click it and have it sign me up for e-billing. I wanted the transaction to be logged a different way; I didn’t want them to be able to say “HEY, you asked our system today to switch you.”

So I called again, and sort of like my experience in 2005 I had a hard time getting to a customer service rep. But I managed to, by telling their system I wanted to be a new customer and then asking that rep to transfer me to the billing department. On one of my attempts I hit zero so many times (that sometimes works to get to an operator) that their system just stopped responding – it sat there for a good two minutes in silence before I hung up and tried again.

THAT rep then confirmed that I HAD requested to be set to PAPER BILLS earlier in the year, but apparently the other reps forgot a little hidden setting that distinguishes between “PAPER BILLS ALL THE TIME” and “PAPER BILL ON DEMAND”. They didn’t have ALL THE TIME set, despite my repeated questions to the original reps.

Despite how pissed off I was on both occasions, I was very polite to the reps. I didn’t want to be one of those customers; this time around it was significantly more difficult to keep calm, but I managed to do so. Still, Sprint sure is trying hard to lose me as a customer. There’s really no reason for me to stay other than being locked into the contract until September – on a device I hardly ever use (maybe I’ve transferred 150 megabytes the entire year so far).

Sprint is ditching the WiMax network (which is what the 4G portion of my Mifi uses); they’ve neutered their Premier membership stuff (as a 12 year customer I was and still am a member); they’ve bet their future on the iPhone (a device I will never use); and they’ve messed up my billing twice in less than six months and then treated me like the bad guy for not paying my bills (well, the reps didn’t say that, but it certainly feels that way when collection agencies start contacting me).

On top of that I’ve committed a lot of money to GSM Palm phones which means I pretty much have to stick to AT&T for the foreseeable future, if I want to use those phones. I suppose T-mobile is technically an option but the frequency differences make for a degraded experience.

There are so many other ways these situations could have been handled better by technology – the simplest, as I mentioned, being that I don’t pay the bill unless I get a paper bill in the mail. I don’t like e-bills. Sprint did not make any attempt to contact me by mail or by phone (I’m not a voice customer anymore, though I expect they still have my phone # as a contact number since it hasn’t changed in 12 years) when they put me back on e-billing for the second time in six months.

I’m probably one of those sought-after customers – pretty low usage – but I subscribe to unlimited plans because I don’t like surprises in the mail and I don’t want to have to worry about minutes in the event something comes up and I have a long phone call to make. I must be gold for their Mifi stuff since I almost never use the thing, but I still pay the $50/mo to keep the service up.

Will Sprint F*$@ up again between now and 9/5/2012? I’m counting the days until my contract is up; I really don’t see what they could possibly do to keep me as a customer at this point.

I guess 1385 words is short for me.

June 15, 2012

Life extending IPv4 Patent issued

Filed under: General — Tags: — Nate @ 8:23 am

I’ve worked at a few companies over the years; more than one of them has crashed and burned, but one in particular has managed to hold on, against the odds. Don’t ask me how, because I don’t know. Their stock price was $0.26 more than a decade ago (I recall my opening option price was in the $5 range back in the summer of 2000).

CNBC screen shot from a long time ago; it must have been good to be a treasury investor then, look at that yield!

They didn’t get into the patent game until years after they laid my friends and me off in 2002. The primary business model they had at the time was making thin client software; their main competition was the likes of Citrix and such. I worked on the side that made the Unix/Linux variant, doing IT support/system administration for various Linux, Solaris, HP-UX, Tru64 and AIX systems along with a few select Windows NT boxes. One of my long time friends worked on the Windows side of things at the other end of the country. The company acquired that technology from Corel, I believe in the late 90s, and still develops that product today. I’m not sure whether or not the Unix product is still developed – for a long time they had just a single developer on it.

Anyways, I write this because I was browsing their site yesterday while talking to that friend on the phone, and it turns out they were granted a groundbreaking new patent for cloud computing a couple of months ago. Though I think you’ll agree that it’s much more applicable to extending the life of IPv4 – without this technology we would have run out of IPs a while ago.

U.S. Patent 8,117,298, issued February 14, 2012, is directed towards an easily configurable Web server that has the capability to host (or provide a home for) multiple Web sites at the same time. This patent helps small companies or individuals create the same kind of Web presence enjoyed by larger companies that are able to afford the cost of multiple, dedicated Web server machines.

This patent protects GraphOn’s unique technology for “configuring” a Web server that has the capability to host multiple Web sites at the same time – called “multi-homing”. This multi-homing capability of the Web server provides the appearance to users of multiple distinct and independent servers at a lower cost.
Functionally, a multi-homed Web server consists of, in effect, multiple virtual Web servers running on the same computer. The patent claims a design in which features can be readily added to one or more of the virtual servers. For example, a new software module having additional features or different capabilities might be substituted for an existing module on one of the virtual servers. The new features or capabilities may be added without affecting any other of the virtual servers and without the need to rebuild the Web server.

You can see the uses for this patent right? I mean pretty much everyone out there will immediately want to step in and license it because it really is groundbreaking.

Another thing I learned from the patent itself which I was not aware of is that most web servers run under inetd –

Web servers, most of which are written for UNIX, often run under INETD (“eye-net-D”), an Internet services daemon in UNIX. (A daemon is a UNIX process that remains memory resident and causes specified actions to be taken upon the occurrence of specified events.) Typically, if more than sixty connection requests occur within a minute, INETD will shut down service for a period of time, usually five minutes, during which service remains entirely unavailable.

This is not the first groundbreaking patent they’ve been issued over the years.

Back in 2008 they were issued

  • Patent 7,386,880 for some form of load balancing.
  • Patent 7,424,737 for some sort of bridging technology that converts between IP and non IP protocols (the example given is Satellite protocols).
  • Patent 7,360,244 for two-factor authentication against a firewall.

Back in 2007 they were issued

  • Patent 7,269,591 which talks about a useful business model where you can charge a fee to host someone’s personal web site
  • Patent 7,269,847 which is a very innovative technology involving configuration of firewalls using standard web browsers.
  • Patent 7,249,376 which covers multi homed firewalls and dynamic DNS updates
  • Patent 7,249,378 which seems to be associating a dedicated DNS for users utilizing a VPN.

Unfortunately for GraphOn, they did not license the patent that allows them to display more than the most recent five years of press releases on their site, so I tasked our investigative sasquatch with finding the information I required to finish this post. Most/all of the patents below were acquired with the acquisition of Network Engineering Software.

Harry, our investigative sasquatch

Back in 2006 they were issued

  • Patent 7,028,034 covers web sites that dynamically update and pull information from a database to do so. Fortunately for you non-profits out there the scope is limited to pay sites.

..in 2005

  • Patent 6,850,940 which I’m honestly surprised someone like Oracle didn’t think of earlier, it makes a lot of sense. This covers maintaining a network accessible database that can receive queries from remote clients.

..And Waaay back in 2001

  • Patent 6,324,528 which covers something along the lines of measuring the amount of time a service is in use – I think this would be useful for the cloud companies too; you need to be able to measure how much your users are using the service to really bill based on usage rather than on what is provisioned.

I suppose I should have bought my options when I had the chance – I mean, if I had invested at the $5 option price I would be able to retire. Well, maybe not, given the stock is trading in the $0.12 range. I felt compelled to get this news out again so that investors can wise up and see what an incredible IP portfolio this company has, and so the rest of the world can stand ready to license these key technologies.

Then I can go back in time and tell myself to buy those options, only to come forward in time and retire early. I’ll publish my time travel documents soon; I won’t ask you to license them from me, but you will have to stop at my toll booth in the year 2009 to pay the toll. You’re free to travel between now and Jan 1 2009 without any fees though – think of it as a free trial.

May 29, 2012

More Chef pain

Filed under: General — Tags: — Nate @ 10:30 am

I wrote a while back about growing pains with Chef, the newish, hyped-up system management tool. I’ve been having a couple of other frustrations with it in the past few months and needed a place to gripe.

The first issue started a couple of months ago, when some systems were for some reason restarting Splunk every single time chef ran. It may have been going on longer than that, but that’s when I first noticed it. After a couple hours of troubleshooting I tracked it down to chef seemingly randomizing the attributes for the configuration, resulting in it writing a new configuration (the same configuration, just in a different order) on every run and triggering a restart. I think it was isolated primarily to the newer version(s) of chef (maybe specific to 0.10.10). My co-worker, who knows more chef than I do (and the more I use chef the more I really want cfengine – disclaimer: I’ve only used cfengine v2 to date), says after spending some time troubleshooting it himself that the only chef-side solution might be to somehow force the order of the attributes to be static (probably some Ruby thing that lets you do that? I don’t know). In any case he hasn’t spent time on doing that and it’s over my head, so these boxes just sit there restarting Splunk once or twice an hour. They make up a small portion of the systems; the vast majority are not affected by this behavior.
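
For what it’s worth, the workaround we keep talking about (and have not implemented) is along these lines. This is only a sketch, with the attribute and file names made up, and it assumes the settings live under a single node attribute; the idea is to sort the keys before they reach the template so the rendered file, and therefore its checksum, is identical between runs and the restart notification never fires:

 # render the settings in a fixed key order so the file content does not
 # change between runs when the underlying data has not changed
 settings = node['splunk']['settings'].to_hash    # attribute name is hypothetical

 template '/opt/splunk/etc/system/local/inputs.conf' do
   source 'inputs.conf.erb'
   variables(:settings => settings.sort_by { |key, _value| key })
   # assumes a service[splunk] resource exists elsewhere in the recipe;
   # this only fires when the rendered content actually changes
   notifies :restart, 'service[splunk]'
 end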

So this morning I am alerted to a failure in some infrastructure that still lives in EC2 (oh how I hate thee); it turns out the disk is going bad and I need to build a new system to replace it. So I do, and chef spits out one of its usual helpful error messages –

 [Tue, 29 May 2012 16:35:36 +0000] ERROR: link[/var/log/myapp] (/var/cache/chef/cookbooks/web/recipes/default.rb:50:in `from_file') had an error:
 link[/var/log/myapp] (web::default line 50) had an error: TypeError: can't convert nil into String
 /usr/lib/ruby/vendor_ruby/chef/file_access_control/unix.rb:106:in `stat'
 /usr/lib/ruby/vendor_ruby/chef/file_access_control/unix.rb:106:in `stat'
 /usr/lib/ruby/vendor_ruby/chef/file_access_control/unix.rb:61:in `set_owner'
 /usr/lib/ruby/vendor_ruby/chef/file_access_control/unix.rb:30:in `set_all'
 /usr/lib/ruby/vendor_ruby/chef/mixin/enforce_ownership_and_permissions.rb:33:in `enforce_ownership_and_permissions'
 /usr/lib/ruby/vendor_ruby/chef/provider/link.rb:96:in `action_create'
 /usr/lib/ruby/vendor_ruby/chef/resource.rb:454:in `send'
 /usr/lib/ruby/vendor_ruby/chef/resource.rb:454:in `run_action'
 /usr/lib/ruby/vendor_ruby/chef/runner.rb:49:in `run_action'
 /usr/lib/ruby/vendor_ruby/chef/runner.rb:85:in `converge'
 /usr/lib/ruby/vendor_ruby/chef/runner.rb:85:in `each'
 /usr/lib/ruby/vendor_ruby/chef/runner.rb:85:in `converge'
 /usr/lib/ruby/vendor_ruby/chef/resource_collection.rb:94
 /usr/lib/ruby/vendor_ruby/chef/resource_collection/stepable_iterator.rb:116:in `call'
 /usr/lib/ruby/vendor_ruby/chef/resource_collection/stepable_iterator.rb:116:in `call_iterator_block'
 /usr/lib/ruby/vendor_ruby/chef/resource_collection/stepable_iterator.rb:85:in `step'
 /usr/lib/ruby/vendor_ruby/chef/resource_collection/stepable_iterator.rb:104:in `iterate'
 /usr/lib/ruby/vendor_ruby/chef/resource_collection/stepable_iterator.rb:55:in `each_with_index'
 /usr/lib/ruby/vendor_ruby/chef/resource_collection.rb:92:in `execute_each_resource'
 /usr/lib/ruby/vendor_ruby/chef/runner.rb:80:in `converge'
 /usr/lib/ruby/vendor_ruby/chef/client.rb:330:in `converge'
 /usr/lib/ruby/vendor_ruby/chef/client.rb:163:in `run'
 /usr/lib/ruby/vendor_ruby/chef/application/client.rb:254:in `run_application'
 /usr/lib/ruby/vendor_ruby/chef/application/client.rb:241:in `loop'
 /usr/lib/ruby/vendor_ruby/chef/application/client.rb:241:in `run_application'
 /usr/lib/ruby/vendor_ruby/chef/application.rb:70:in `run'
 /usr/bin/chef-client:25

So I went to look at this file, at line 50, and it looked perfectly reasonable; there haven’t been any changes to this file in a long time and it has worked up until now. What a TypeError is I don’t know (it’s been explained to me before but I forgot what it was 30 seconds after it was explained) – I’m not a developer (hey, fancy that). I have seen it tons of times before though, and it was usually a syntax problem (tracking down the right syntax has been a bane for me in Chef, it’s so cryptic, just like the stack trace above).

So I went to the Chef website to verify the syntax, and yep, at least according to those docs it was right. So, WTF?

I decided to delete the user and group config values, ran chef again, and it worked! Well, until the next TypeError; rinse and repeat about four more times and I finally got chef to complete. Now for all I know my modifications to make the recipes work on this chef will break on the others. Fortunately I was able to figure this syntax error out; usually I just bang my head on my desk for two hours until it’s covered in blood and then wait for my co-worker to come figure it out (he’s in a vastly different time zone from me).
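
In hindsight the error makes a bit more sense: the recipe was most likely feeding a nil attribute into the owner/group of that link resource, and chef blows up on the nil when it goes to set ownership. I don’t have the original recipe handy, so the paths and attribute names below are made up (and it assumes the node['myapp'] attribute hash itself exists, defined in the cookbook’s attributes file), but a guarded version looks roughly like this and would have avoided the whole dance:

 # a guess at what line 50 of web/recipes/default.rb is doing (target path invented);
 # fall back to root if the role or environment never defined the app user/group
 link '/var/log/myapp' do
   to '/mnt/ephemeral/log/myapp'
   owner node['myapp']['user']  || 'root'
   group node['myapp']['group'] || 'root'
 end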

So what’s next? I get an alert for the number of apache processes on this host, and that brings back another memory with regards to Chef attributes. I haven’t specifically looked into this issue again, but I am quite certain I know what it is – I just have no idea how to fix it. The last time this came up, the issue was that Chef could not decide what type of EC2 (ugh) instance this system is, and there are different thresholds for different sizes. Naturally one would expect chef to check the size – it’s not as if Amazon has the ability to dynamically change sizes on you, right? But for some reason chef thinks it is size A on one run and size B on another run. Makes no sense. Thus the alerts when it gets incorrectly set to the wrong size. Again – this only seems to impact the newest version(s) of Chef.
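
If it keeps happening, one workaround I’ve been tempted to try (just a sketch; the monitoring attribute name and the thresholds are made up) is to sidestep the flapping ohai attribute entirely and ask the EC2 metadata service directly from the recipe, since that answer cannot change between runs on the same instance:

 require 'net/http'

 # ask the EC2 metadata service for the instance type instead of trusting
 # the ohai attribute that seems to change from one chef run to the next
 instance_type = begin
   http = Net::HTTP.new('169.254.169.254', 80)
   http.open_timeout = 2
   http.read_timeout = 2
   http.get('/latest/meta-data/instance-type').body.strip
 rescue StandardError
   node['ec2'] && node['ec2']['instance_type']    # fall back to whatever ohai saw
 end

 node.default['monitoring']['apache_proc_warn'] =
   case instance_type
   when 'm1.xlarge' then 80    # thresholds are purely illustrative
   when 'm1.large'  then 40
   else 20
   end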

I’m sure it’s something we’re doing wrong – or, if this was VMware, it would be something Chef was doing wrong before and is doing right now – what we’re doing hasn’t changed and now all of a sudden it is broken. I believe another part of the issue is that the legacy EC2 bootstrap process pulls in the latest chef during the build, whereas our new (non-EC2) stuff maintains a static version – fewer surprises.

Annoying to come back from a nice short holiday and immediately have to deal with two things I hate – Chef and EC2.

This coming trip to Amsterdam will provide the infrastructure to move the vast majority of the remaining EC2 stuff out of EC2, so I am excited about that portion of the trip at least. Getting off of Chef is another project, one I don’t feel like tackling now since I’m in the minority in my feelings about it. I just try to minimize my work in it for my own sanity; there are lots of other things I can do instead.

On an unrelated note, for some reason during a recent apt-get upgrade my Debian system pulled in what feels like a significantly newer version of WordPress, though I think the version number only changed a little (I don’t recall what the original version number was). I did a major Debian 5.0 -> 6.0 upgrade a couple of months ago, but this version came in after that and has a bunch of UI changes. I’m not sure if it breaks anything; I think I need to re-test how the site renders in IE9, as I manually patched a file after getting a report that it didn’t work right, and the most recent update may have overwritten that fix.

