TechOpsGuys.com Diggin' technology every day

June 4, 2013

Break out the bubbly, 100k SPAM comments

Filed under: Random Thought — Nate @ 10:49 am

It seems we have crossed the 100,000 mark for spam comments blocked by Akismet (see the counter on the right side of the page).

Saved for history: crossing the 100k comment spam marker

That is just insane. 100k. I don’t even know what I would do without them. Well I guess I do — I’d have to keep comments off. That low cost annual subscription pays for itself pretty quick.

I verified on archive.org that on October 10, 2012 this site was at ~32k spam comments blocked. On May 17, 2012 only ~23k.

75,000 spam comments in about one year? For this tiny site?

*shakes head*

Side note: for some reason comments from HP employees are always blocked by Akismet; I don’t know why. I think they are the only ones who have contacted me saying their comments were (incorrectly) blocked.


May 10, 2013

Activist investors and not wanting to be public

Filed under: Random Thought — Nate @ 11:49 am

Another somewhat off-topic post, but still kinda tech related. There has been somewhat of a rash of companies the past few years that say they don’t want to go public, or that are already public and want to go private again.

Obviously the leading reason cited for this is that companies don’t want to have to deal with the pressure of short-term investors, some of whom are more problematic than others.

Two events caught my eye this morning: the first is Carl Icahn’s attempt to crush Dell. The second is apparently a two-pronged assault against the board of Emulex for not taking the buyout offer from Broadcom a few years ago.

The average amount of time people hold stocks has collapsed over the past decade or two (I saw some charts of this at one point but can’t find them). People are becoming less and less patient, demanding higher returns in shorter amounts of time. The damage is being done everywhere, from companies surrendering their intellectual property in order to be allowed to do business in China, to companies just squeezing their staff trying to get everything they can out of them. I saw someone who claimed to be an IBMer saying that IBM (widely regarded as giving investors what they want) is being gutted from the inside out, all the good people are leaving, and it’s only a matter of time. This certainly may not be true, I don’t know. But it wouldn’t surprise me given IBM’s performance.

Corporate profits are at record highs, as many like to cite. But I see severe long-term damage being done across the board in order to get to these profits in the short term.

HP is probably a prime example of this; as a result they felt they had to go spend a ton of money acquiring companies to make up for the lack of investment in R&D in previous years.

But of course HP is not alone. This problem is everywhere, and it’s really depressing to see. The latest attempt from Icahn to kill Dell is just sad. Here is Michael Dell, the founder and CEO, trying to save his company for the long term, and here comes Icahn, who just wants to squeeze it dry. Dell may not succeed if they go private; maybe they go bust and investors lose out. Maybe Icahn’s deal is better for investors. For some reason I’d rather view the situation in terms of what’s best for the company. Dell has done some smart acquisitions over the past few years; they need (more) time to sort through them. If investors don’t like what Dell is doing they are free to sell their shares.

Not long ago BMC got acquired by private equity as well (I really have no knowledge of BMC’s products), and it’s quite possible the same thing happens there too.

HP attacked Dell when the plan to go private was announced, saying things like “Dell is distracted now” and that HP is a better fit for customers because they are not distracted. I’ve got news for anyone who believed that: HP has more distractions than Dell. I have absolutely no doubt that if HP could manage a way to go private they would do it in a heartbeat. I think HP will get past their distractions; they have some great products in ProLiant, 3PAR, Vertica, and others I’m sure I can’t cite off the top of my head.

It’s the same sort of short-sightedness I see all the time with folks wanting to embrace the cloud, and the whole concept of shifting things from CAPEX to OPEX (though some accountants and CFOs are wising up to this scam).

The same thing is happening in governments around the world as well. It’s everywhere.

It’s just one of those things that has kept my vision of the future of our economy at such low levels (well if I said what I really thought you might think I’m even more crazy than I already come off as being 🙂 )

anyway, sad to see….

May 9, 2013

Lost almost all respect for Consumer Reports today

Filed under: Random Thought — Tags: — Nate @ 10:30 pm

I was getting my morning dose of CNBC (I’m not an investor – I watch CNBC purely for entertainment purposes) when news came over the wire that Tesla had gotten a 99/100 rating from Consumer Reports on their ultra-luxury green car.

I watched the interview with the guy at Consumer Reports and I was shocked, and I still am. A bit disgusted too.

Let me start out by saying I have no issues with the car itself. I’m sure it’s a fine automobile. My issue is with the double talk from Consumer Reports in this particular case.

(forgive me, the quotes will not be precise; see the video link above for the full interview)

The guy who wrote this report starts off by saying it’s better than the other $90,000 cars in its price range… (he also goes on to say it’s better than pretty much ANY car they have EVER EVER EVER TESTED – not just better than any other electric car – ANY CAR)

..but..

It is an electric car, while it has a long range compared to other electric cars, I can take a Toyota Corolla and drive to Cleveland from New York  — I can’t do that in this car yet.

You can only go about 200 miles before charging it up, that is a severe limitation. (those last two are his words)

CNBC goes on to quote him as saying that if you leave it unplugged, you experience what he describes as a parasitic loss of energy (Consumer Reports’ words) that amounts to 12-15 miles per day, and asks him about that topic. He responds –

The concern is this is a new vehicle from a new automaker and there’s going to be growing pains. If you’re really looking for something to be trouble free off the bat – look elsewhere (his words too!)

I’m really glad I saw the interview. My disgust here is not with the car. I have no doubt it is a fine car! I think many people should go buy it. But if these sorts of flaws knock only a single point off the score… that just seems wrong. Very wrong. Especially the last bit – if you want something to be trouble-free look elsewhere – for something that got rated 99 out of 100!!!

He goes on to talk about how it takes 30 minutes to charge the battery to half strength at one of Tesla’s charging stations. He thinks people would be happier (duh) if they could fully charge it in four minutes.

CNBC half-seriously asked him if you could charge a Tesla from a hotel room power outlet. The Consumer Reports guy said yes, but it would take a VERY long time.

People buying this $90k car obviously are not concerned about the price of gas. It’s really more for the image of trying to show you’re green than anything else, which is sad in itself (you can be more green by buying carbon offset credits – folks that can afford a $90k car should have no trouble buying some). But that’s fine.

Again my issue isn’t with the car. It’s with the rating. Maybe it should be a 75, or an 85. I don’t know. If it was me I’d knock at least 20 points off for the lack of range and lack of charging stations alone. Now if gas was say fifteen dollars a gallon I could see giving it some good credit for saving on that.

I think what Tesla is doing is probably a good thing; it seems like decent technology, with good range (relative to other electric vehicles). You likely can’t take it on a road trip between SFO and Seattle any time soon, though…

I have relied at least in part on Consumer Reports for a few different purchases I have made (including my current car – which, if I recall right, Consumer Reports had no rating on at the time because it was too new; one of my friends just bought the 2013 model year of my car a few weeks ago). It was extremely disappointing to see this result today. Maybe I should not have been surprised. I don’t pay too close attention to what Consumer Reports does; I check them usually once every couple of years at most. This may be par for the course for them.

Internet tech review sites have often had a terrible reputation for giving incorrect ratings for some reason or another (most often it seems to be because they want to keep getting free stuff from the vendors). I had thought (hoped) Consumer Reports was in a league of its own for independent reporting.

I think at the end of the day the rating doesn’t matter – people who are in the market for a $90k car will do their homework. It just sort of reeks to me of a back-room deal, or perhaps a bunch of hippies over at Consumer Reports dreaming of a future where all vehicles are electric, never mind the fact the power grid can’t handle it. (I can hear the voice of Cartman in the back of my head – Screw you hippies!) Tesla wants all the positive press they can get, after all.

So let’s see. We have a perfect score of 99/100, alongside which we have the words severe limitation, parasitic loss of energy, and look elsewhere if you want a trouble-free experience……

I’ll say it one more time – my problem is with the perfect rating of 99/100 not with the car itself.

April 9, 2013

Influx of SPAM – batten down the hatches!

Filed under: Random Thought — Tags: — Nate @ 9:09 pm

I don’t know what is going on, but for some reason this blog has been getting a lot more SPAM comments recently. I mean, normally Akismet takes care of everything and MAYBE one gets through a MONTH; eleven have gotten through today alone (update: now 14).

I haven’t been keeping track, but that little counter on the right side is up to almost 75,300 now — the last time I recall noticing it I thought it was below 30,000..

The Akismet plugin says it is operational, the API key I am using is valid, and all servers are reachable.

I wonder what is going on, maybe today is just my lucky day.

Opscode Chef folks still have a lot to learn

Filed under: Random Thought — Tags: — Nate @ 8:01 pm

The theme for this post is: BASIC STUFF. This is not rocket science.

A while back I wrote a post (wow, has it really been over a year since that post!) about Chef and my experience with it over what was, at the time, the past two years. I think I chose a good title for it –

Making the easy stuff hard, and the hard stuff possible

Which still sums up my thoughts today. This post was inspired by something I just read on the Opscode Chef status site.

While I’m on the subject of that damn status site I’ll tell you what – I filed a support ticket with them back in AUGUST 2012 – yes people, that is EIGHT MONTHS ago – to report to them that their status site doesn’t #@$@ work. Well, at least most of the time it doesn’t #@$@! work. You see, a lot of the time the site returns an invalid Location: header which is relative instead of absolute, and standards-based browsers (e.g. Firefox) don’t like that, so I get a pretty error message that basically says the site is down. I can usually get it to load after forcing a refresh 5-25 times.
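(For what it’s worth, here is a minimal sketch of the kind of check I’m talking about: fetch the page without following redirects and see whether the Location header on a 3xx response is an absolute URI, which the HTTP spec of the day, RFC 2616, required, or just a relative path. The URL below is a placeholder, not their actual status site.)

# Minimal sketch: warn when a 3xx response carries a relative Location header.
# status.example.com is a placeholder URL, not Opscode's real status site.
import http.client
from urllib.parse import urlparse

def check_location(url):
    parts = urlparse(url)
    conn_cls = http.client.HTTPSConnection if parts.scheme == "https" else http.client.HTTPConnection
    conn = conn_cls(parts.netloc, timeout=10)
    conn.request("GET", parts.path or "/")
    resp = conn.getresponse()
    location = resp.getheader("Location")
    if 300 <= resp.status < 400 and location:
        kind = "absolute" if urlparse(location).scheme else "RELATIVE (invalid per RFC 2616)"
        print(resp.status, "redirect with", kind, "Location:", location)
    else:
        print(resp.status, "(no redirect)")
    conn.close()

check_location("http://status.example.com/")  # placeholder URL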

This is not the kind of message you want to serve from your "status" site

I first came across this site when Opscode was in the midst of a fairly major outage. So naturally I feel it’s important that the web site that hosts your status page work properly. So I filed the ticket; after going back and forth with support, I determined the reason for the browser errors and they said they’d look into it. There wasn’t a lot they claimed they could do because the site was hosted with another provider (Tumblr or something??).

That’s no excuse.

So time passes, and nothing gets done. I mentioned a while back that I met some of the senior Opscode staff a few years ago, so I directly reached out to the Chief Operating Officer of Opscode (who is a very technical guy himself) to plead with him to FIX THE DAMN SITE. If Tumblr is not working then host it elsewhere; it is trivial to set up that sort of site, I mean just look at the content on the site! I was polite in my email to him. He responded and thanked me.

So more time passes, and nothing happens. So in early January I filed another support ticket outlining the reason behind their web site errors and asked that they fix their site. This time I got no reply.

More time passes. I was bored tonight so I decided to hit the site again, guess what? Yeah, they haven’t done squat.

How incompetent are these people? Sorry, maybe it is not incompetence but laziness. If you can’t be bothered to properly operate the site, take the site down.

So anyway I was on their site and noticed this post from last week

Chef 0.9.x Client EOL

Since we stopped supporting Chef 0.9.x June 11, 2012 we decided it is a good time to stop all API support for Chef 0.9.x completely.

Starting tomorrow the api.opscode.com service will no longer support requests from Chef 0.9.x clients.

ref: http://www.opscode.com/blog/2012/05/10/chef-0-9-eol/

I mean, it doesn’t take a rocket scientist to read that and immediately think how absurd it is. It’s one thing to say you are going to stop supporting something – that is fine. But to say OH WE DECIDED TO STOP SUPPORT, TODAY IS YOUR LAST DAY.

So I go to the page they reference above and it says

On or after June 11th, we’ll deploy a change to Hosted Chef that will disable all access to Hosted Chef for 0.9 clients, so you will want to make sure you’ve upgraded before then.

Last I checked, it is nowhere near June 11th (now that I think of it, maybe they meant last year; they don’t say for sure). In any case there was extremely poor notification on this – and how much work does it take to maintain servers running Chef 0.9? So you can stop development on it, no new patches. Big deal.

This has absolutely no impact on anything I do because we have been on Chef 0.10 forever. But the fact they would even consider doing something like this just shows how poorly run things are over there.

How can they expect customers to take them seriously by doing stuff like this? This is BASIC STUFF. REAL BASIC.

Something else that caught my eye recently, as I was doing some stuff in Chef, was that their APIs seemed to be down completely. So I hopped on over to the status site (after forcing a refresh a dozen or more times to get it to load) and saw

Hosted Chef Maintenance Underway

The following systems are unavailable while Hosted Chef is migrated from MySQL to PostgreSQL.

– The Hosted Chef Platform including the API and Management Console

– Opscode Support Ticketing System

– Chef Community Site

Apparently they had announced it on the site one or more days prior (can’t tell for sure now since both posts say posted 1 week ago). But they took the APIs down at 2:00 PM Pacific time! (They are based in Seattle so that’s local time for them.) Who in their right mind takes their stuff down in the middle of the afternoon intentionally for a data migration? BASIC STUFF PEOPLE. And their method of notification was poor as well; nobody at my company (we are a paying customer) had any idea it was happening. Fortunately it had only a minor impact on us. I just got lucky when I happened to try to use their API at the exact moment they took it down.

Believe me, there are plenty of times when one of our developers comes up to me and says OH #@$ WE NEED THIS CONFIGURATION SETTING IN PRODUCTION NOW! As you might imagine most of that is in Chef, so we rely on it functioning for us at all times. Unscheduled downtime is one thing, but this is not excusable. At the very least you could migrate customers in smaller batches (with downtime for any given customer measured in seconds – maybe the really big customers take longer, but they can work with those individually to schedule a good time). If they didn’t build the product to do that they should go back to the drawing board.

My co-worker was recently playing around with a slightly newer build of Chef 0.10.x that he thinks we should upgrade to (ours is fairly out of date – primarily because we had some major issues on a newer build at the time). He ran into a bunch of problems, including Opscode changing some major things around within a minor release and breaking a bunch of stuff. Just more signs of how cavalier they are, typical modern “web 2.0” developer types who don’t know anything about stability.

Maybe I was lucky, I don’t know. But I basically ran the same version of CFengine v2 for nearly 7 years without any breakage (hell, I can’t remember encountering a single issue I considered a bug!), across three different companies. I want my configuration system to be stable, fast and simple to manage. Chef is none of those; the more I use it the more I dislike it. I still believe it is a good product and has its niche, but it’s got a looooooooong way to go to win over people like me.

As a CFengine employee put it in my last post, Chef views things as configuration as code, and CFengine views them as configuration as documentation. I’m far in the documentation camp. I believe in proper naming conventions, whether it is servers, or load balancer addresses, or storage volumes, mount points on servers, etc. Also I believe strongly in a good descriptive domain name (I have always used the airport codes like most other folks). None of this randomly generated crap (here’s looking at you, Amazon). If you are deploying 10,000 servers that are doing the same thing you can still number them in some sort of sane manner. I’ve always been good at documentation; it does take work, and I find more often than not most people are overwhelmed by what I write (you may get the idea from what I have written here) so they often don’t read it – but it is there and I can direct them to it. I take lots of screen shots and do a lot of step by step guides.

On a side note, this configuration as documentation is a big reason why I do not look forward to IPv6.

Chef folks will say go read the code! That can be a pretty dangerous thing to say – really, it is. I mean, just yesterday or was it the day before, I was trying to figure out how a template on a server was getting a particular value. Was it coming from the cookbook attributes? From the role? From the environment? I looked everywhere and I could not find the values that were being populated – and the values I specified were being ignored. So I passed this task to my co-worker, who I have to acknowledge has been a master in Chef; he has written most of what we have, and while I can manage to tweak stuff here and there, the difficult stuff I give him, because if I don’t my fist will go through the desk or perhaps the monitor (the desk is closer) after a couple of hours working with Chef. A tool is not supposed to make you get so frustrated.

So I ask him to look into it, and quickly I find HIM FIGHTING CHEF! OH MY, THE IRONY. He was digging up and down and trying to set things but Chef was undoing them, and he was cursing and everything. I loved it. It’s what I go through all the time. After some time he eventually found the issue: the values were being set in another cookbook and they conflicted.

So he worked on it for a bit, and decided to hard-code the values for a time while he looked into a better solution. Then he deployed this better solution and it had more problems. The most recent thing is that for some reason Chef was no longer able to successfully complete a run on certain types of servers (other types were fine though). He’s working on fixing it.

I know he can do it, he’s a really smart guy. I just wanted to write about that story – I’m not the only one that has these problems.

Sure, I’d love to replace Chef with something else. But it’s not a priority I want to try to shove in my boss’s face (he likes the concept of Chef). I have other fish to fry, and as long as I have this guy doing the dirty work, well, it’s not as much of a pain for me.

Tracking down conflicting things in CFengine was really simple for me – probably because I wasn’t trying anything too over the top with configuration. Opscode guys liked to say, oh wouldn’t it be great if you could have one configuration stanza that could adapt to ANY SITUATION.

I SAY NO. —-  IT! IS! NOT! GREAT!

It might be nice in some situations but in many others it just gives me a headache. I like to be able to look at a config and say THAT IS GOING TO SERVER X, EXACTLY HOW IT SITS NOW. Sure I have to duplicate configs and files for different environments and such but really at the end of the day – at all of the companies I have worked at — IT’S NOT A BIG DEAL. In the grand scheme of things. If your configuration is so complex that you need all of this maybe you should step back and consider if you are doing something wrong – does it really need to be that complex? Why?

Oh, and don’t get me started on that #$@ damn Ruby syntax in the Chef configuration files. Oh, you need quotes around a string that is nothing more than a word? You puke with a cryptic stack trace if you don’t have that? Oh, you puke with a cryptic stack trace unless these two configuration settings are on their own lines? Come on, this is stupid. I go back to this post on Ruby, and how I am reminded of it almost every time I use Chef. I had to support Ruby+Rails apps back in 2006-2008 and it was a bad experience. Left a bad taste in my mouth for Ruby. Chef just keeps on piling on the crap. I’ll fully admit I am very jaded against Ruby (and Chef for that matter). I think for good reason. How’s that saying go? Burn me once shame on you, burn me 500 times shame on me?

With the background that some of these folks have at Opscode it’s absolutely stunning to me the number of times they have shot themselves in the foot over the past few years, on such BASIC THINGS. Maybe that’s how things are done at the likes of Amazon, I don’t know, never worked there (I knew many that did and do though; general consensus is stay away).

In my neck of the woods people take more care in what they do.

I’ll end this again by mentioning I could train someone on CFEngine in an afternoon; Chef – here I am two and a half years later and still struggling.

(In case you’re wondering, YES I run Ubuntu 10.04 LTS on my laptop and desktop (guess what – it is about to go EOL too) – I have no plans to change, because it’s stable, and it does the job for me. I run Debian STABLE on my servers because – IT’S STABLE. No testing, no unstable, no experimental. Tried and true. The new UI stuff in the newer Ubuntu is just scary for me; I have no interest in trying it.)

Ok that’s enough for this rant I guess.  Thanks for listening.

April 7, 2013

Upgraded to 64-bit Debian

Filed under: Random Thought — Nate @ 2:31 pm

Just a quick note — I am in the midst of upgrading this server from 32-bit Debian to 64-bit. I really didn’t think I needed 64-bit, but as time has gone on the processes on this system seem to have outgrown the 32-bit kernel. I recently doubled the memory size on the host server to 16GB, so there’s plenty of RAM to go around for the moment.

If you see anything around here that appears more broken than normal let me know, thanks.

April 1, 2013

Public cloud will grow when I die

Filed under: Random Thought — Tags: — Nate @ 8:27 am

El Reg a few days ago posted some commentary from the CTO of Rackspace

Major adoption of public cloud computing services by large companies won’t happen until the current crop of IT workers are replaced by kiddies who grew up with Facebook, Instagram, and other cloud-centric services

Which is true to some extent — though I still feel the bigger issues with the public cloud are cost and features. If a public cloud company can offer comparable capabilities vs operating in-house at a comparable (or lower – given the cloud company should have bigger economies of scale) cost, then I can see cloud really taking off.

As I’ve harped on again and again – one of the key cost things would be billing based on utilization, not based on what is provisioned (you could have a minimum commit rate, as is often negotiated in deals for internet bandwidth). But if you provision a 2 CPU VM with 6GB of memory and 90% of the time it sits at 1% CPU usage and 1GB of memory, then you must not be charged the same as if you were consuming 100% CPU and 95% memory.
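To make the idea concrete, here is a minimal sketch of what utilization-based billing with a minimum commit could look like; all the rates, samples and the 10% commit level are made up for illustration, not anything a real provider charges.

# Minimal sketch of utilization-based billing with a minimum commit.
# All rates and sample data are illustrative only.
def utilization_bill(samples_gb, rate_per_gb_month, commit_fraction, provisioned_gb):
    # Bill on measured average usage, but never below the committed level.
    avg_used = sum(samples_gb) / len(samples_gb)
    billable = max(avg_used, commit_fraction * provisioned_gb)
    return billable * rate_per_gb_month

# A 6GB VM that idles around 1GB with occasional spikes, sampled hourly:
samples = [1.0] * 700 + [5.5] * 20
print(utilization_bill(samples, rate_per_gb_month=5.0,
                       commit_fraction=0.10, provisioned_gb=6.0))   # ~5.6
# Flat provisioned billing would be 6.0 GB * $5 = $30 regardless of usage.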

Some folks think it is a good idea to host non-production stuff in a cloud and host production in-house — to me non-production is where even more of the value comes from. The non-production environments (at least at the companies I have worked at in the past decade) operate at VERY low utilization rates 99.9% of the time. So they can be oversubscribed even more. At my organization, for example, we started out with basically two or three non-production environments; now we are up to 10, and the costs to support the extra 7-8 were minimal (relative to hosting them in a public cloud). For the databases I set up a snapshot system for these environments, so not only can we refresh the data with minimal downtime to the environments (about 2 minutes each vs a full day each), but each environment typically consumes less than 10% of the disk space that would normally be consumed had the environment had a full independent copy of the data.

Another thing is to give the customers the benefit of things like thin provisioning, data compression, and deduplication. Some workloads behave better than others; present this utilization data to the customer and include it in the billing. Myself, I like to provision multi-TB volumes for almost everything, and I use LVM to restrict their growth. So if the time comes and some volume needs to get bigger, I just lvextend the volume and resize the file system (both are online operations); I don’t have to touch the hypervisor, the storage, or anything. If some application may need a massive amount of storage (have not had one that did yet that used storage through the hypervisor) – as in many, many TB – then I could allocate many volumes at once to the system, and grow them the same way over time. Perhaps a VM would have 2 to 10TB of space provisioned to it but may only use a few hundred gigs for the next year or so – nothing is wasted, because the excess is not used. There’s no harm in doing that. Though I have not seen or heard of a cloud company that offers something like this. I think a large chunk of the reason is the technology doesn’t exist yet to do it for hundreds or thousands of small/medium customers.
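For the curious, the online-grow workflow I’m describing boils down to two commands; here is a minimal sketch wrapped in Python, with placeholder device names, assuming an ext4 filesystem sitting on an LVM logical volume.

# Minimal sketch of growing a thin-provisioned volume online.
# Device path, size, and filesystem type (ext4) are placeholder assumptions.
import subprocess

def grow_volume(lv_path, add_size="+100G"):
    # Extend the logical volume...
    subprocess.run(["lvextend", "-L", add_size, lv_path], check=True)
    # ...then grow the ext4 filesystem in place while it stays mounted.
    subprocess.run(["resize2fs", lv_path], check=True)

# Example (run as root): grow_volume("/dev/vg_data/lv_app", "+500G")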

Most important of all – the cloud needs to be a true utility – 99.99% uptime demonstrated over a long period of time. No requirements for “built to fail”; all failures should be transparent to the end user. Bonus points if you have the ability to have VMware-style fault tolerance (though something that can support multiple CPUs) with millisecond failover and no data loss. It will take a long time for the IaaS providers of the world to get there, but perhaps SaaS can be there already. PaaS I’m not sure; I’ve never dealt with that. All of the major IaaS companies have had large scale outages and/or degraded performance.

The one area where public cloud does well is the ability to get something from nothing up and going quickly, or perhaps up and going in a part of the country or world where you don’t have a facility. Though the advantage there isn’t all that great. Even at my company, back when we were hosted at Amazon on the east coast, the time came to bring up a site for our UK customers and we decided to host it on the east coast, because the time frame to adapt everything (Chef etc.) to work properly in another Amazon region was too tight to pull off. So we never used that region. Eventually we provisioned real equipment, which I deployed in Amsterdam last summer to replace the last of our Amazon stuff.

Another article on a similar topic, this time from ComputerWorld, noted the shift from in-house data centers to service providers, though it seems more focused on literally in-house data centers (vs “in house” with a co-location provider). They cite a lack of available talent to manage these resources; these employees would rather work for a larger organization with more career opportunities than a small shop.

I’m sort of the opposite — I would not like to work for a large company of any kind. Much prefer small companies, with small teams. The average team size I have worked in since 2006 has been 3 people. The amount of work required to maintain our own infrastructure is actually quite a bit less than managing cloud stuff.

I guess I am the exception rather than the rule again here. I had my annual review recently, and in it I wrote there was no career advancement for me at the current company; I had higher growth expectations of the company I am at — but I am not complaining. I’ll admit that the stuff I am doing now is not as exciting as it has been in the past. I’m fairly certain we could not hire someone else in the team because they would get bored and leave quickly. Me — at least for now — I don’t mind being bored. It is a good change of pace after my run in the trenches along the front lines of technology from 2003-2011. I could do this for another year I imagine (if not longer).

As I watch the two previous companies I worked for wither and die slow deaths (and the one before them died years ago — so basically all the jobs I had from 2006-2011 were at companies that are dead or dying) it’s a good reminder to me to be thankful for where I am at. Still a small growing company with a good culture, good people, and everything runs really really well (sometimes so well it sort of scares me for some reason).

Another good reminder is that I had lunch with a couple of friends while up in Seattle — they work for a company that has been on its deathbed for years now. I asked them what is keeping the company going and they said hope (I also never knew why they stuck around for as long as they have). Or something like that. Not long after I left, the company laid off a bunch of folks (they were not included in the layoff). The company is dying every bit as much as the other two companies I worked for. I guess the main difference is I decided to jump ship long ago while they stuck it out for reasons unknown.

Time to close techopsguys?

I apologize again for not posting nearly as much as I used to — there just continues to be a dearth of topics and news that I find interesting in recent months. I am not sure if it is me that is changing or if things have really gotten boring since the new year. I have contemplated closing the blog entirely, just to lower people’s expectations (or eliminate them) about seeing new stuff here. I’ve poured myself out all over this site the past few years and it’s just become really hard to find things to write about now. I don’t want the site to turn into a blog that is updated a couple of times a year.

So I will likely close it in the coming months unless the situation changes. It has been a good run, from an idea from my former co-workers that I thought I’d be a minor contributor on, to a full-fledged site where I wrote nearly 400 articles, and a few hundred thousand words. Wow, that is a lot… My former co-workers bailed on the site years ago citing lack of time. Time is certainly something I have; what I have more of is a lack of things to write about.

I’ve had an offer to become an independent contributor to our friends over at El Reg – something that sounded cool at first, though now that I’ve thought about it I am not going to do it. I don’t feel comfortable about the level of quality I could bring (granted, not all of their stuff is high quality either, but I tend to hold myself to high standards). On a personal blog I can compromise more on quality, lean more into my own personal biases, and I have less responsibility in general.

I have seen them take on a couple of other bloggers such as myself in recent months and have noticed the quality of their work is not good. In some cases it is sort of depressing (why would you write about that?????????). That sort of stuff belongs on a personal blog, not on a news site.

I’ll have to settle for the articles where they mentioned my name; those I am still sort of proud of for some reason 🙂


March 6, 2013

Another trip to Seattle

Filed under: Random Thought — Tags: — Nate @ 10:29 am

Well, I’m going again… one of my best friends works at Microsoft over in Boston and finally found a training class to give him an excuse to come out to Seattle – his last trip was about four years ago. So I decided to go up and hang out with him and other friends. Go to my favorite places (COW GIRLS COW GIRLS..!) and have a lot of fun…

I’ll be there from this Friday the 8th until the 17th.

I’m pretty excited.

As much as I miss Seattle, I’ve come to the conclusion in recent months that I can’t move back — at least not any time real soon. I have been hammered so hard by recruiters these past few months (especially since the new year). They have just been relentless. Including opportunities in Seattle. I miss friends and places up there – but from a career perspective the Bay Area is a better place to be. I’m not focused on my career at the moment (if I was, I might have jumped ship, as my job has gotten relatively boring and dull the past 6 months as things have gotten to be very stable and growth has leveled out). I’m happy where I am at, with the flexibility that I get and the management that is in place. I think back to past companies where oftentimes I got to a point of stability in operations but other things were blowing up – be it management, or the economy, or both – which drove me away (always ended up being a good decision in hindsight). But at my current position I feel no similar pressure. So I have been tweaking and tuning and fixing little things here and there, and documenting like crazy.

I could even move back and still keep my same job at the same company — but I wouldn’t be able to walk to the office any more. I’d have to commute, and pay for parking, and the weather isn’t as nice as it is here (and I mean right here – I don’t like the weather in the South Bay Area vs here – which is San Bruno).

So things are going as well as I could hope for, I think. I’d love to have more toys to play with; this is the smallest company from an infrastructure perspective I have worked for pretty much ever (past companies would have compared to some extent had virtualization been leveraged to the extent that it has here). That is my only gripe, but it is a small one. It’s an easy trade-off to make. I have little doubt that if another person tried to join my group, especially a senior one, they would probably quit pretty fast because there is nothing interesting for them to do. For once I am happy to be bored, happy to have stress levels that could practically register in negative numbers!

It was a hard decision to make (deciding not to go back), but I’ve made it now, so it’ll be easier to answer that question when friends and recruiters ask.

But I do intend to keep visiting..!

October 16, 2012

Caught red-handed!

Filed under: Random Thought — Tags: — Nate @ 7:46 am

[UPDATED] Woohoo! I am excited. I was checking my e-mail and got an email from Bank of America that there was another fraud alert on my credit card (as you might imagine I am very careful but for some reason I get hit at least once or twice a year). My card was locked out until I verified some transactions.

I tried to use their online service but it said my number couldn’t be processed online so I had to call.

So I called and gave my secret information to them, and they cited some of the transactions that someone had attempted to charge, including:

  • World friends – people who like to travel? Or maybe the upscale dating service?
  • Al shop – online electronics store in DUBAI
  • Payless shoe stores – yeah they don’t carry my size unless I wear their shoe boxes as shoes
  • Paypal authorization attempt

All of the charges were declined – because – the number they attempted to use is a ShopSafe number, a service that BofA offers (and that I have written about in the past) where I generate single-use credit cards for either single purchases or recurring subscriptions. These cards are only good for a single merchant; once charged, nobody else can use them.

In this case it was a recurring payment number, which on top of the single-merchant restriction has a defined monthly credit limit.
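Just to illustrate the idea (this is my own toy model of the concept, not BofA’s actual implementation), the authorization rule for one of these numbers boils down to something like this:

# Toy model of a ShopSafe-style virtual card number: it locks to the first
# merchant that charges it and enforces its own monthly limit.
# All names and numbers below are made up for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VirtualCard:
    monthly_limit: float
    locked_merchant: Optional[str] = None
    spent_this_month: float = 0.0

    def authorize(self, merchant, amount):
        if self.locked_merchant is None:
            self.locked_merchant = merchant      # first charge locks the number
        if merchant != self.locked_merchant:
            return False                         # any other merchant is declined
        if self.spent_this_month + amount > self.monthly_limit:
            return False                         # over the monthly limit
        self.spent_this_month += amount
        return True

card = VirtualCard(monthly_limit=150.0)
print(card.authorize("Local Cable Co", 89.99))   # True - locks to the cable company
print(card.authorize("Al shop (Dubai)", 40.00))  # False - wrong merchant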

Naturally of course since they are only valid for a single merchant I only give the number out to a single merchant.

Apparently it was my local CABLE COMPANY that had this recurring credit card number assigned to them. I gave this number to them over the phone a couple of months ago after my credit card was re-issued again. So either they had a security breach or some employee tried to snag it. They don’t appear to be a high-tech organization, given they are a local cable company that only serves the city I am in. They have no online billing or anything like that that I am aware of. In any case it made it really easy to determine the source of the breach, since this number was only ever given to one organization. The fraud attempts were made less than 24 hours after the cable company charged my bill.

Unlike the last credit card fraud alert (which was also on a ShopSafe card), this time the customer service rep said she did not have to cancel my main card – which makes total sense, since only the ShopSafe card was compromised. I believe the last time only the ShopSafe card was compromised as well, but the customer service rep insisted the entire card be canceled. I think that original rep didn’t fully understand what ShopSafe was.

You could even say there is not a real need to cancel the ShopSafe card – it is compromised, but it is not usable by anyone other than the cable company – but they canceled it anyways. Not a big deal; it takes me two minutes to generate a new one, though I have to call them and give them the new number. Or go see them in person, I guess. I tried calling a short time ago but the office wasn’t open yet.

The BofA customer service rep I spoke to this morning said I was one of only a few customers over the years that she has talked to that used ShopSafe (I use it ALL the time).

Of all the fraud activity on my card over the years (and the other times when merchants reported they had been compromised but there were no fraudulent charges on my card), this is the first time that I know with certainty who dun it, so I’m excited. I wonder what the cable company will say…

One of the downsides to ShopSafe is that because it is single-merchant I do have to pay attention when buying stuff from marketplaces. I frequently buy from buy.com (long time customer), which is a pretty big merchant site. I have to make sure my orders come from only a single merchant, which on big orders can sometimes mean going through checkout 3-4 times and issuing different credit card #s for each round. I try to keep the list of numbers saved on their site fairly pruned, though at the moment they have 38 cards stored for me. There was one time about a year ago that buy.com contacted me about a purchase I made that they had forgotten to charge me for, about 4 months earlier. The card I issued was only valid for two months, so it was expired when they found the missing transaction in their system. Someone I know who is well versed in the credit card area said that apparently they technically can’t force me to pay for it at that point (I think 60 or maybe 90 days is the limit, I forget exactly what he said). But I did get the product and I am a happy customer, so I had no issue paying for it.

Yay ShopSafe. I wish more companies had such a service; it’s very surprising to me how rare it seems to be.

UPDATE – I spoke to one of the managers at the cable company and he was obviously surprised, and said they would start an investigation. I think that manager may end up signing up for BofA credit cards; he sounded very impressed with ShopSafe.

October 10, 2012

TiVo: 11 years and counting

Filed under: Random Thought — Tags: — Nate @ 10:37 am

It has been on my mind a bit recently, wondering how long my trusty Philips TiVo Series 1 has been going. I checked just now and it’s been going about 11 years: April 24th, 2001 is when Outpost.com (now Fry’s) sent me the order confirmation that my first TiVo was on the way. It was $699 for the 60-hour version that I originally purchased, plus $199 for lifetime service (lifetime service today is $499 for new customers), which at the time was still difficult to swallow given I had never used a DVR before that.

My TiVo sandwiched between a cable box with home-made IR shield and a VCR (2002)

There were rampant rumors that TiVo was dead and they’d be out of business soon; I believe that’s also about the time ReplayTV (RIP) was fighting with the media industry over commercial skipping.

The TiVo faithful hoped TiVo would conquer the DVR market, but that never happened. There were always rumors of big cable companies deploying TiVo, but I don’t recall reading about wide-scale deployments (certainly none of the cable companies I had over the years offered TiVo in my service area).

To this day TiVo is still held up as the strongest player from a technology standpoint (for good reason, I’m sure). TiVo has been involved in many patent lawsuits over the years and to my knowledge they’ve won pretty much every one. Many folks hate them for their patents, but I’ve always thought the patents were innovative and worth having. I’m sure to some the patents were obvious, but to most – myself included – they were not.

I believe TiVo got a new CEO in recent years and they have been working more seriously with cable providers; I believe there have been much larger scale deployments in Europe than anywhere else at this point.

TiVo recently announced support for Comcast Xfinity on demand on the TiVo platform. The one downside to something like TiVo, or anything that is not a cable box really, is that there is no bi-directional communication with the cable company, so things like on demand or PPV are not possible directly through TiVo. I don’t think any TiVo since the Series 2 supports working with an external cable box; they all use cable cards now. The cable card standard hasn’t moved very far over the years; I saw recently people talking about how difficult it is to find TVs on the market with cable card support, as the race to reduce costs has cut them out of the picture.

Back to my TiVo Series 1: it was a relatively “hacker friendly” box, unlike post-Series 2 equipment. At one point I added a “TivoNet Cache Card” which allowed the system to get program data and software updates over Ethernet instead of phone lines by plugging into an exposed PCI-like connector on the motherboard. At the same time it gave the system a 512MB read cache on a single standard DIMM, to accelerate the various databases on the system.

Tivo Cache Card plus TurboNet ethernet port

The TiVo Series 1 came with only 16MB of RAM and a 54MHz(!) PowerPC 403GCX. Some people used to do more invasive hacking to upgrade the system to 32MB; that was too risky for my taste though.

Picture of process needed to upgrade TiVo Series 1 memory

I’ve been really impressed with the reliability of the hardware (and software). I replaced the internal hard disks back in 2004 because the original ones were emitting a soft but high-pitched whine, which was annoying. The replacements also upgraded the unit from the original 60-hour rating to 246 hours.

After one of the replacement disks was essentially DOA, I got it replaced, and the TiVo has been running 24/7 since then – 8 years of reading and writing data to that pair of disks – 4200 RPM if I remember right. I’ve treated it well: the TiVo has always been connected to a UPS, occasionally I shut it off and clean out all the dust, and it’s almost always had plenty of airflow around it. It tends to crash a couple of times a year (power cycling fixes it). I have TV shows going back to what I think is probably the 2004-2005 time frame saved on that TiVo. Including my favorite Star Trek: The Original Series episodes, from before they wrecked it with the modern CGI (which looks so out of place!).

I’m also able to telnet into the TiVo and do very limited tasks. There is an FTP application that allows you to download the shows/movies/etc. that are stored on the TiVo, but in my experience I could not get it to work (video was unwatchable). On TiVo Series 3 and up you can download shows via their fancy desktop application or directly with HTTPS, though many titles are flagged as copyright protected and are not downloadable.

Oh yeah, I remember now what got me thinking about TiVo again – I was talking to AT&T this morning, adding the tethering option to my plan since my Sprint MiFi is canceled now, and they tried to upsell me to U-verse, which as far as I know is not compatible with TiVo (maybe the Series 1 would work, but I also have a Series 3 which uses cable cards). So I explained to them it’s not compatible with TiVo and I don’t have interest in leaving TiVo at this point.

There was a time when I read in a forum that the TiVo “lifetime subscription” was actually only good for a few years (this was back in ~2002), and that they disclosed it in tiny print in the contract. I don’t recall if I tried to verify it or not, but I suspect they opted to ignore that clause in order to keep their subscriber base; in any case the lifetime service I bought in 2001 is still active today.

The TiVo has outlasted the original TV I had (and the one that followed), gone through four different apartments and two states, and it even outlasted the company that sold it to me (Outpost.com). It also outlasted the analog cable technology that it relied upon; for several years I’ve had to have a digital converter box so that the TiVo can get even the most basic channels.

The newest TiVos aren’t quite as interesting to me, mainly because they focus so much on internet media, as well as streaming to iOS/Android (neither of which I use, of course). I don’t do much internet video. My current TiVo has hookups to Netflix, Amazon, a crappy YouTube app, and a few other random things. I don’t use any of them.

The Series 3 is obviously my main unit, which was purchased almost five years ago. It too has had its disks replaced once (maybe twice, I don’t recall) – though in that case the disks were failing; fortunately I was able to transfer all of the data to the new drives.

The main thing I’d love from TiVo after all these years (maybe they have it now on the new platforms) is to be able to back up the season passes/wishlists and stuff, so you can migrate them to a new system (or be able to recover faster from a failed hard disk). I’ve had the option of remote scheduling via the TiVo website since I got my Series 3 – but never had a reason to use it. The software+hardware on all of my units (I bought a 2nd Series 1 unit with lifetime back in 2004-2005, and eventually gave it to my sister, who uses it quite a bit) has been EOL for many years now, so there’s no support.

Eleven years and still ticking. I don’t use it (the Series 1) all that much, but even if I’m not actively using it, it’s always recording (live TV buffer) regardless.

