TechOpsGuys.com Diggin' technology every day

February 23, 2011

Certifiably not qualified

Filed under: Random Thought — Tags: — Nate @ 10:12 am

What is it with people and certifications? I’ve been following Planet V12n for a year or more now, and I swear I’ve never seen so many kids advertise how excited they are that they passed some test or earned some certification.

Maybe I’m old and still remember the vast number of people out there with really pointless certs like MCSE and CCNA (at least the older versions; maybe they are better now). When interviewing, I purposely gave candidates negative marks for such low-level certifications. I remember one candidate even advertising that he had his A+ certification, I mean come on!

I haven’t looked into the details behind VMware certification. I’m sure the process of getting the certs has some value (to VMware, who cashes in), but certifications still carry a seriously negative stigma with me.

I hope the world of virtualization and “cloud” isn’t in the process of being overrun with unqualified idiots, much like the dot com / Y2K days were overrun with MCSEs and CCNAs. What would be even worse is if it were the same unqualified idiots as before.

There’s a local shop in my neck of the woods that does VMware training. They do a good job in my opinion, cost less, and you won’t get a certification at the end (though maybe you learn enough to take the test, I don’t know). My only complaint is that they are too Cisco-focused on networking and too NetApp-focused on storage; it would be nice to see more vendor-neutral material, but I can understand they are a small shop and can only support so much. NetApp makes a good storage platform for VMware, I have to admit, but Cisco is just terrible in every way.

February 19, 2011

Flash not good for offline storage?

Filed under: Random Thought,Storage — Tags: , , — Nate @ 9:36 am

A few days ago I came across an article on Datacenter Knowledge talking about flash reliability. As much as I’d love to think that just because it’s solid state it will last much longer, real-world tests to date haven’t shown that to be true in many cases.

I happened to have the manual for the Seagate Pulsar SSD open on my computer, and saw something that was really interesting to me. On page 15 it says:

As NAND Flash devices age with use, the capability of the media to retain a programmed value begins to deteriorate. This deterioration is affected by the number of times a particular memory cell is programmed and subsequently erased. When a device is new, it has a powered off data retention capability of up to ten years. With use the retention capability of the device is reduced. Temperature also has an effect on how long a Flash component can retain its programmed value with power removed. At high temperature the retention capabilities of the device are reduced. Data retention is not an issue with power applied to the SSD. The SSD drive contains firmware and hardware features that can monitor and refresh memory cells when power is applied.

I am of course not an expert in this kind of stuff, so I was operating under the assumption that if the data is written then it’s written, and won’t get “lost” if the drive is turned off for an extended period of time.

Seagate rates their Pulsar to retain data for up to one year without power at a temperature of 25 C (77 F).

Compare that to what tape can do: 15-30 years of data retention.

Not that I think that SSD is a cost effective method to do backups!

I don’t know what other manufacturers can do, and I’m not picking on Seagate, but I found that data tidbit really interesting.

(I originally had the manual open to try to find reliability/warranty specs on the drive to illustrate that many SSDs are not expected to last multiple decades as the original article suggested).

February 8, 2011

New WebOS announcements tomorrow

Filed under: Events,Random Thought — Tags: , — Nate @ 9:11 pm

Looking forward myself to the new WebOS announcements coming from HP/Palm, which seem to be set for about noon tomorrow. I’ve been using a Palm Pre for almost two years now, I think, and recently the keyboard on it stopped working, so I’m hoping to see some good stuff announced tomorrow. Not sure what I will do; I don’t trust Google or Apple or Microsoft, so for smart phones it’s Palm and Blackberry for me. WebOS is a really nice software platform, and from a user experience standpoint it’s quite polished. I’ve read a lot of complaints about the hardware from some folks, though until recently my experience had been pretty good. As an email device the Blackberry rocked, though I really don’t have to deal with much email (or SMS for that matter).

Maybe I’ll go back to a ‘feature phone’ and get a WebOS tablet, combined with my 3G/4G MiFi, and use that as my web-connected portable device or something. My previous Sanyo phones worked really well. I’m not sure where I’m at with my Sprint contract, and Sprint no longer carries the Pre and doesn’t look like it will carry the Pre 2. I tried the Pixi when it first came out, but the keyboard keys were too small for my fingers, even when using the tips of my fingers.

I found a virtual keyboard app which lets me hobble along on my Pre in the meantime while I figure out what to do.

February 2, 2011

Oh no! We Ran outta IPs yesterday!

Filed under: Networking,Random Thought — Nate @ 9:37 pm

The Register put it better than I could:

World shrugs as IPv4 addresses finally exhausted

Count me among those that shrugged; I commented on this topic a few months ago.

December 12, 2010

OpenBSD installer: party like it’s 2000

Filed under: linux,Random Thought,Security — Tags: , , — Nate @ 12:07 am

[Random Thought] The original title was going to be “OpenBSD: only trivial changes in the installer in one heck of a long time”, a take-off of the blurb on their site about remote exploits in the default install.

I like OpenBSD; well, I like it as a firewall, and I love pf. I’ve used ipchains, iptables, ipfwadm, ipf (which I think pf was originally based on, spawned by a licensing dispute with the ipf author(s)), ipfw, Cisco PIX, and probably one or two more firewall interfaces, and pf is far and away the best I’ve come across. I absolutely detest Linux’s firewall interfaces by contrast, going back almost 15 years now.
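To give a flavor of why pf wins for me, here is a hypothetical minimal ruleset (the interface names and network are made up; this isn’t from any box I actually run) that does NAT and default-deny stateful filtering in a dozen readable lines:

```pf
# pf.conf sketch: macros, NAT, and default-deny stateful filtering.
# Interface names and addresses are illustrative only.
ext_if = "vr0"
int_if = "vr1"
lan    = "192.168.1.0/24"

set skip on lo

# NAT the LAN out the external interface (OpenBSD 4.7+ syntax)
match out on $ext_if from $lan to any nat-to ($ext_if)

# default deny, then allow only what we want
block log all
pass out on $ext_if from ($ext_if) to any keep state
pass in  on $int_if from $lan to any keep state
pass in  on $ext_if proto tcp to ($ext_if) port ssh keep state
```

Try expressing that in iptables and you will see my point.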

I do hate the OpenBSD userland tools, though, probably as much as the *BSD folks hate the Linux userland tools. I mean, how hard is it to include an init script of sorts to start and stop a service? But I do love pf, so in situations where I need a firewall I tend to opt for OpenBSD wherever possible (and when it’s not possible I don’t resort to Linux; I’d rather use a commercial solution, perhaps a Juniper Netscreen or something).
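What I mean is something as trivial as this sketch (here “sleep” stands in for a real daemon, and the paths are illustrative), which is about all it would take:

```shell
#!/bin/sh
# Minimal start/stop wrapper of the kind I wish base shipped.
# DAEMON and PIDFILE are placeholders for whatever service you run.
DAEMON="sleep"
PIDFILE="/tmp/mydaemon.pid"

do_start() {
    # launch the daemon in the background and record its PID
    $DAEMON 60 &
    echo $! > "$PIDFILE"
    echo "started, pid $(cat "$PIDFILE")"
}

do_stop() {
    # kill the recorded PID and clean up the pidfile
    kill "$(cat "$PIDFILE")" 2>/dev/null
    rm -f "$PIDFILE"
    echo "stopped"
}

case "${1:-}" in
    start) do_start ;;
    stop)  do_stop ;;
    *)     echo "usage: $0 {start|stop}" ;;
esac
```

Drop something like that in for each service and starting or stopping things stops being a scavenger hunt.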

But this isn’t about pf, or userland. This is about the OpenBSD installer. I swear it’s had only the most trivial changes and improvements in at least the past 10 years, since I first decided to try it out. To me that is sad, and the worst part is of course the disk partitioning interface. It’s just horrible.

I picked up my 2nd Soekris net5501 system and installed OpenBSD 4.8 on it this afternoon, and was kind of saddened, though not surprised, that the installer still hasn’t changed. My other Soekris runs OpenBSD 4.4 and has been running for a couple of years now. I first used pf back in about 2004, so I have been running it quite a while; nothing too complicated, it’s really simple to understand and manage. My first experience with OpenBSD was back in 2000; I’m not sure, but I want to say it was something like v2.8. I didn’t get very far with it, because for some reason it would kernel panic on our hardware after about a day of very light activity, so I went back to Linux.

I know pf has been ported to FreeBSD, and there is soon to be a fully supported Debian kFreeBSD distribution with the next major release of Debian, whenever that is, so perhaps that will be worthwhile switching to for my pf needs, I don’t know. Debian is another system which has been criticized over the years for having a rough installer, though I have to say in the past 4-5 years it has gotten to be a good installer, in my opinion. As a Debian user for more than 12 years now, it hasn’t given me a reason to switch away, but I still prefer Red Hat based distros for “work” stuff.

First impressions are important, and the installer is that first impression. While I am not holding out hope they will improve their installer, it would be nice.

December 9, 2010

Java fallout from Oracle acquisition intensifies

Filed under: News,Random Thought — Tags: , — Nate @ 1:51 pm

I was worried about this myself; almost a year ago to the day I raised my concerns about Oracle getting control of Java, and the fallout continues. Oracle already had BEA’s JRockit; it’s too bad they had to get Sun’s JVM too.

Apache seems to have withdrawn from most things related to Java today according to our friends at The Register.

On Thursday, the ASF submitted its resignation from JCP’s Java Standard and Enterprise Edition (SE/EE) Executive Committee as a direct consequence of the Java Community Process (JCP) vote to approve Oracle’s roadmap for Java 7 and 8.

The ASF said it’s removing all official representatives from all JSRs and will refuse to renew its JCP membership and EC position.

Java was too important a technology to be put in the hands of Oracle.

Too bad..

November 11, 2010

10% Tale of two search engines

Filed under: News,Random Thought — Tags: , — Nate @ 8:41 pm

Saw! an! article! today! and! thought! of! a! somewhat! sad! situation,! at! least! for! those! at! Yahoo!

Not long ago, Google announced they would be giving every employee in the company a 10% raise starting January 2011. One super badass engineer is apparently getting a $3.5M retention bonus not to go to the competition. Lucky for him, perhaps, that Google is based in California, where non-competes are not enforceable.

Now Yahoo! has announced somewhat of the opposite, no raises, in fact they are going to give the axe to 10% of their employees.

It’s too bad that Yahoo! lost its way so long ago. There was a really good blog post about what went wrong with Yahoo!, going back more than a decade; really interesting insight into the company.

November 6, 2010

The cool kids are using it

Filed under: Random Thought — Tags: , , — Nate @ 8:24 pm

I just came across this video: an animated PHP web developer ranting to a psychologist about how stupid the entire Ruby movement is. It’s really funny.

I remember being in a similar situation a few years ago. The company had a Java application which drove almost all of its revenue (90%+), and a Perl application they had acquired from a ~2 person company and were busy trying to re-write in Java.

Enter stage left: Ruby. At that point (sometime in 2006/2007), I honestly don’t think I had ever heard of Ruby before. But a bunch of the developers really seemed to like it, specifically the whole Ruby on Rails thing. We ran it on top of Apache with FastCGI. It really didn’t scale well at all (for fairly obvious reasons that are documented everywhere online). As time went on the company lost more and more interest in the Java applications and wanted to do everything in Ruby. It was cool (for them). Fortunately scalability was never an issue for this company, since they had no traffic. At their peak they had four web servers that on average peaked out at about 30-35% CPU.

It was a headache for me because of all of the modules they wanted to install on the system, and I was not about to use “gem install” for them (that is the “Ruby way”; I won’t install directly from CPAN either, BTW). I wanted proper version-controlled RPMs. So I built them, for the five operating platforms we supported at the time (CentOS 4 32/64-bit, CentOS 5 32/64-bit, and Fedora Core 4 32-bit; we were in transition to CentOS 5 32/64-bit). Looking back at my cfengine configuration file, there were a total of 108 packages I built while I was there to support them, and it wasn’t a quick task.
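For those who have never wrapped a gem in an RPM, the general shape of the spec files I was cranking out looked something like this (purely illustrative: the gem name, version, and install paths here are hypothetical, not the actual packages from that job):

```spec
# Hypothetical spec sketch for repackaging a gem as a
# version-controlled RPM instead of ad-hoc "gem install" runs.
Name:           rubygem-mygem
Version:        1.0.0
Release:        1%{?dist}
Summary:        Example Ruby gem repackaged as an RPM
License:        MIT
Source0:        mygem-%{version}.gem
BuildRequires:  ruby, rubygems
Requires:       ruby, rubygems

%description
Gem repackaged as an RPM so every host gets the exact same version
through yum/cfengine rather than per-host "gem install" runs.

%prep
# nothing to unpack; the .gem is installed directly

%install
mkdir -p %{buildroot}%{_libdir}/ruby/gems/1.8
gem install --local --install-dir %{buildroot}%{_libdir}/ruby/gems/1.8 \
    %{SOURCE0}

%files
%{_libdir}/ruby/gems/1.8/
```

Multiply that by 108 packages across five platforms and you see why it wasn’t a quick task.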

Then add the fact that they were running on top of Oracle (which is a fine database, IMO), mainly because that was what they already had running with their Java app. But using Oracle wasn’t the issue; the issue was that their Oracle database driver didn’t support bind variables. If you have spent time with Oracle you know this is a bad thing. We used a hack which involved setting a per-session environment variable in the database to force bind variables to be enabled. This was OK most of the time, but it did cause major issues for a few months when a bad query got into the system, knocked the execution plans out of whack, and caused massive latch contention. The fastest way to recover the system was to restart Oracle. The developers, and my boss at the time, were convinced it was a bug in Oracle. I was convinced it was not, because I had seen latch contention in action several times in the past. After a lot of debugging of the app and the database, in consultation with our DBA consultants, they figured out what the problem was: bad queries being issued from the app. Oracle was doing exactly what they told it to do, even if that meant causing a big outage. Latch contention is one of the performance limits of Oracle that you cannot solve by adding more hardware. It can seem like a hardware problem at first, because the result is that throughput drops to the floor and CPUs go to 100% usage instantly.
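To make the bind-variable point concrete, here is a sketch using Python’s built-in sqlite3 (just to illustrate the concept; our stack was Ruby on Oracle, and this is not our code). With bind variables the database sees one statement text and can reuse the parsed plan; without them, every distinct literal looks like a brand-new statement that has to be hard-parsed, and in Oracle that parse storm is exactly what drives library-cache latch contention:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, "user%d" % i) for i in range(100)])

# Without bind variables: every distinct literal produces a different
# SQL text, so the server must hard-parse each one.
no_bind = {"SELECT name FROM users WHERE id = %d" % i for i in range(100)}

# With bind variables: one statement text, executed 100 times, so the
# parsed plan is reused.
bind = {"SELECT name FROM users WHERE id = ?" for _ in range(100)}

print(len(no_bind))  # 100 distinct statements to parse
print(len(bind))     # 1 statement, plan reused

row = conn.execute("SELECT name FROM users WHERE id = ?", (42,)).fetchone()
print(row[0])        # user42
```

Same 100 lookups either way; the difference is whether the database parses one statement or one hundred.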

At one point, to try to improve performance and get rid of memory leaks, I migrated the Ruby apps from FastCGI to mod_fcgid, which has a built-in ability to automatically restart its processes after they have served a set number of requests. This worked out great and really helped improve operations. I don’t recall whether it had any real impact on performance, but since the memory leaks were no longer a concern, that was one less thing to worry about.
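For reference, the mod_fcgid knob in question looks roughly like this (directive names changed across mod_fcgid versions and the values here are illustrative, so check the docs for your build):

```apache
# Recycle each FastCGI process after it has served N requests, so
# leaked memory gets reclaimed automatically. Values are examples.
LoadModule fcgid_module modules/mod_fcgid.so

FcgidMaxRequestsPerProcess 500    # restart a process after 500 requests
FcgidMaxProcesses          32     # cap the total number of processes
FcgidIdleTimeout           300    # reap processes idle for 5 minutes
```

With a cap like that in place a slow leak never gets the chance to matter.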

Then one day we got in some shiny new HP DL380 G5s with dual quad-core processors and either 8 or 16GB of memory. Very powerful, very nice servers for the time. So what was the first thing I tried? I wanted to try out 64-bit, to be able to take better advantage of the larger amount of memory. So I compiled our Ruby modules for 64-bit, installed 64-bit CentOS (5.2, I think it was at the time; the other production web servers were running CentOS 5.2 32-bit), installed 64-bit Ruby, etc. Launched the apps, and from a functional perspective they worked fine. But from a practical perspective it was worthless. I enabled the web server in production and it immediately started gagging on its own blood: load shot through the roof and requests were slow as hell. So I disabled it, and things returned to normal. I tried that a few more times and ended up giving up; I went back to 32-bit. The 32-bit system could handle 10x the traffic of the 64-bit system. I never found out what the issue was before I left the company.

From an operational perspective, my own personal preference for web apps is to run Java. I’m used to running Tomcat myself, but really the container matters less to me. I like WAR files; they make deployment so simple. And in the WebLogic world I liked EAR files (I suspect they’re not WebLogic-specific; it’s just the only place I’ve ever used them). One archive file that has everything you need built into it. Any extra modules etc. are all there. I don’t have to go compile anything: just install a JVM, install a container, and drop in a single file to run the application. OK, maybe some applications have a few config files (one I used to manage had literally several hundred XML config files; poor design, of course).

Maybe it’s not cool anymore to run Java, I don’t know. But seeing this video reminded me of those days when I did have to support Ruby on production and pre-production systems; it wasn’t fun, or cool.

November 4, 2010

Chicken and the egg

Filed under: Random Thought,Storage,Virtualization — Tags: , , , , , , — Nate @ 5:24 pm

Random thought time! I came across an interesting headline on Chuck’s Blog: Attack of the Vblock Clones.

Now, I’m the first to admit I didn’t read the whole thing, but the basic gist is that if you want a fully tested, integrated stack (of course you know I don’t like these stacks, they restrict you too much; the point of open systems is that you can connect many different types of systems together and have them work, but anyways), then you should go with their Vblock, because it’s there now, tested, deployed, etc. The recently announced initiatives from others are responses to the Vblock and VCE, Arcadia (sp?), etc.

I’ve brought up 3cV before, something that 3PAR coined almost 3 years ago now, which is, in their words, a “Validated Blueprint of 3PAR, HP, and VMware Products [that] Can Halve Costs and Floor Space”.

And for those that don’t know what 3cV is, a brief recap –

The Elements of 3cV
3cV combines the following products from 3PAR, HP, and VMware to deliver the virtual data center:

  • 3PAR InServ Storage Server featuring Virtual Domains and thin technologies—The leading utility storage platform, the 3PAR InServ is a highly virtualized tiered-storage array built for utility computing. Organizations creating virtualized IT infrastructures for workload consolidation use the 3PAR InServ to reduce the cost of allocated storage capacity, storage administration, and the SAN infrastructure.
  • HP BladeSystem c-Class—The No. 1 blade infrastructure on the market for datacenters of all sizes, the HP BladeSystem c-Class minimizes energy and space requirements and increases administrative productivity through advantages in I/O virtualization, power and cooling, and manageability. (1)
  • VMware Infrastructure—Infrastructure virtualization suite for industry-standard servers. VMware Infrastructure delivers the production-proven efficiency, availability, and dynamic management needed to build the responsive data center.

Sounds to me like 3cV beat the Vblock to the punch by quite a ways. It would have been interesting to see how Dell would have handled the 3cV solution had they managed to win the bidding war, given they don’t have anything that competes effectively with the c-Class. But fortunately HP won out, so 3cV can be just that much more official.

It’s not sold as a pre-packaged product, I guess you could say, but how hard is it to say: I need this much CPU, this much RAM, this much storage; HP, go get it for me. Really, it’s not hard. The hard part is all the testing and certification. Even if 3cV never existed, you could bet your ass it would work regardless. It’s not that complicated, really. Even if Dell had managed to buy 3PAR and killed off the 3cV program because they wouldn’t want to directly promote HP’s products, you could still buy the 3PAR from Dell and the blades from HP and have it work. But of course you know that.

The only thing missing from 3cV is a more powerful networking stack, or at least sFlow support. I’ll take Flex-10 (or FlexFabric) over Cisco any day of the week, but I’d still like more.

I don’t know why this thought didn’t pop into my head until I read that headline, but it gave me something to write about.

But whatever, that’s my random thought of the day/week.

October 28, 2010

Compellent beats expectations

Filed under: News,Random Thought,Storage — Tags: , — Nate @ 11:10 am

Earlier in the year, Compellent‘s stock price took a big hit following lowered sales expectations, and a bunch of legal stuff followed that. It seems yesterday they redeemed themselves, though, with their stock going up nearly 33% after they tripled their profits or something.

I’ve had my eye on Compellent for a couple of years now; I don’t remember where I first heard about them. They have similar technology to 3PAR, just implemented entirely in software on Intel CPUs as far as I know, vs. 3PAR leveraging ASICs (3PAR has Intel CPUs too, but they aren’t used for much).

I have heard field reports that, because of this, their performance is much more on the lower end. They have never published a SPC-1 result, and I don’t know anyone that uses them, so I don’t know how they really perform.

They seem to use the same Xyratex enclosures that most everyone else uses. Compellent’s controllers do seem to be somewhat on the low end; I really have nothing to go on other than cache. With their high-end controller coming in at only 3.5GB of cache (I assume 7GB mirrored for a pair of controllers?), it is very light on cache. The high end has a dual-core 3.0GHz CPU.

The lower amount of cache, combined with their software-only design, only two CPUs per controller, and the complex automated data movement, makes me think the systems are built for the lower end and are not as scalable, but I’m sure they are perfectly useful for the market they are in.

It would be nice to see how/if their software can scale if they were to put, say, a pair of 8 or 12 core CPUs in their controllers. After all, since they are leveraging x86 technology, performing such an upgrade should be pretty painless! Their controller specs have remained the same for a while now (as far back as I can remember). The bigger CPUs will use more power, but from a storage perspective I’m happy to give a few hundred more watts if I can get 5x+ the performance; I wouldn’t have to think twice.

They were, I believe, the first to have automagic storage tiering, and for that they deserve big props, though again there are no performance numbers posted (that I am aware of) that illustrate the benefits this technology can bring to the table. I mean, if anybody can prove this strategy works, it should be them, right? On paper it certainly sounds really nice, but in practice I don’t know; I haven’t seen indications that it’s as ready as the marketing makes it out to be.

My biggest issue with automagic storage tiering is how fast the array can respond to “hot” blocks and optimize itself, which is why, from a conceptual perspective, I like the EMC FAST Cache approach more (they have FAST LUN and sub-LUN tiering too). Not that I have any interest in using EMC stuff, but they do have cool bits here and there.

Maybe Compellent is the next to get bought out (as a block storage company; yeah, I know they have their zNAS). I believe from a technology standpoint they are in a stronger position than the likes of Pillar or Xiotech.

Anyway, that’s my random thought of the day.

