TechOpsGuys.com Diggin' technology every day

June 17, 2012

The old Linux ABI compatibility argument

Filed under: Random Thought — Tags: — Nate @ 12:45 pm

[WARNING: Rambling ahead]

I was just reading a somewhat interesting discussion over on Slashdot (despite being a regular reader there I don't have an account, and have posted as an Anonymous Coward maybe 4 times over the past twelve years).

The discussion is specifically about Linus slamming NVIDIA for their lack of cooperation in open sourcing their drivers and integrating them into the kernel.

As an NVIDIA customer going back to at least 1999 – I think that's when I got a pair of cheap white-boxed TNT2 PCI graphics cards from Fry's Electronics – I can say I've been quite happy with their support of Linux. Sure, I would love it if it were more open source and integrated and stuff, but it's not that big of a deal for me to grab the binary drivers from their site and install them.

I got a new Dell desktop at work late last year, and specifically sought out a model that had Nvidia in it because of my positive experience with them (my Toshiba laptop is also Nvidia-based). I went ahead and installed Ubuntu 10.04 64-bit on it, and it just so happens that the Nvidia driver in 10.04 did not support whatever version of graphics chip was in the Dell box – it worked OK in safe mode but not in regular 3D/high performance/normal mode. So I went to download the driver from Nvidia's site only to find I had no network connectivity. It seems the e1000e driver in 10.04 also did not support the network chip that happened to be in that desktop. So I had to use another computer to track down the source code for the driver and copy it over via USB or something, I forget. Ever since then, whenever Ubuntu upgrades the kernel on me I have to boot to text mode to recompile the e1000e driver and re-install the Nvidia driver. As an experienced Linux user this is not a big deal to me. I have read too many bad things about Ubuntu and Unity, so I would much rather put up with the pain of the occasional driver re-install than have constant pain because of a messed up UI. A more normal user should perhaps use a newer version of the distro that hopefully has built-in support for all the hardware (perhaps one of the Ubuntu offshoots that doesn't have Unity – I haven't tried any of the offshoots myself).
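
In case anyone is curious, the post-upgrade routine amounts to roughly the following – treat it as a sketch, since the paths and version numbers are just examples, and it assumes Intel's standalone e1000e source tarball plus NVIDIA's .run installer:

    # from a text mode console, after booting the new kernel
    cd ~/src/e1000e-1.x.x/src             # Intel's out-of-tree driver source (example path/version)
    sudo make install                     # builds against the running kernel's headers
    sudo depmod -a && sudo modprobe e1000e

    # then re-run the NVIDIA installer so it rebuilds its kernel module
    sudo sh NVIDIA-Linux-x86_64-xxx.xx.run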

One of the other arguments is that the Nvidia code taints the kernel, making diagnostics harder – this is true – though I struggle to think of a single time I had a problem where I thought the Nvidia driver was getting in the way of finding the root cause. I tend to run a fairly conservative set of software (I recently rolled back Firefox 13 64-bit on my Ubuntu box at work to Firefox 3.6 32-bit due to massive stability problems with the newer Firefox – 5 crashes in the span of about 3 hours), so system crashes and stuff really aren't that common.

It's sad that apparently the state of ATI video drivers on Linux is still so poor despite significant efforts over the years in the open source community to make it better. I believe I'm remembering right that in the late 90s Weather.com invested a bunch of resources in getting ATI drivers up to speed to power the graphics on their sets. AMD seems to have contributed quite a bit of stuff themselves. But the results still don't seem to cut it. I've never, to my knowledge at least, used an ATI video card in a desktop/laptop setting on one of my own systems anyways. I keep watching to see if their driver/hardware situation on Linux is improving but haven't seen much to get excited about over the years.

From what I understand Nvidia's drivers are fairly unified across platforms, and a lot of their magic sauce is in the drivers, less of it in the chips. So I can understand them wanting to protect that competitive edge – provided they keep supplying a quality product anyways.

Strangely enough the most recent kernel upgrade didn't impact the Nvidia driver but still of course broke the e1000e driver. I'm not complaining about that though, it comes with the territory. (My Toshiba laptop on the other hand is fully supported by Ubuntu 10.04, no special drivers needed – though I do need to restart X11 after suspend/resume if I expect to get high performance video, mainly in intensive 3D games. My laptop doesn't travel much and stays on 24×7, so it's not a big deal.)

The issue more than anything else is that even now, after all these years, there isn't a good enough level of compatibility across kernel versions or even across user land. So many headaches for the user would be fixed if this was made more of a priority. The counter argument of course is: open source the code and integrate it and it will be better all around. Except unless the product is particularly popular it's much more likely (even if open source) that it will just die on the vine, not being able to compile against more modern libraries, and the binaries themselves will just end up segfaulting. "Use the source, Luke" comes to mind here – I could technically try to hire someone to fix it for me (or learn to code myself), but it's not that important. I wish product X would still work and there isn't anything realistic I can do to make it work.

But even if the application (or game or whatever) is old and not being maintained anymore it still may be useful to people. Microsoft has obviously done a really good job in this department over the years. I was honestly pretty surprised when I was able to play the game X-Wing vs. TIE Fighter (1997) on my dual processor Opteron with XP Professional (and reports say it works fine in Windows 7 provided you install it using another OS, because the installer is 16-bit which doesn't work in Windows 7 64-bit). I very well could be wrong but 1997 may have been even before Linux moved from libc5 to glibc.

I had been quietly hoping that as time went on, at some point things would stabilize as being good enough for some of these interfaces, but it doesn't seem to be happening. One thing that does seem to have stabilized is the use of iptables as the firewall of choice on Linux. I of course went through ipfwadm in kernel 2.0, and ipchains in 2.2, then by the time iptables came out I had basically moved on to FreeBSD for my firewalls (later OpenBSD when pf came out). I still find iptables quite a mess compared to pf, but about the most complicated thing I have to do with it is transparent port redirection, and for that I just copy/paste examples of config I have from older systems. Doesn't bug me if I don't end up using it.
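
For reference, the transparent redirection bit I keep copying around boils down to a one-liner like this (interface and ports are just examples), with the rough pf equivalent included for comparison:

    # iptables: redirect inbound port 80 on eth0 to a local proxy listening on port 3128
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128

    # roughly the same thing in (older) pf syntax
    rdr on em0 inet proto tcp from any to any port 80 -> 127.0.0.1 port 3128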

Another piece of software that I had grown to like over the years – this time something that really has been open source – is xmms (version 1). Basically a lookalike of the popular Winamp software, xmms v1 is a really nice, simple MP3/OGG player. I even used it in its original binary-only incarnation. Version 1 was abandoned years ago (they list Red Hat 9 binaries if that gives you an idea), and version 2 seems to be absolutely nothing remotely similar to version 1. So I've tried to stick to version 1. With today's screen resolutions I like to keep it in double size mode. Here's a bug report on Debian from 2005 to give you an idea how old this issue is, but fortunately the workaround still works. Xmms still does compile (though I did need to jump through quite a few hoops if I recall right) – for how long I don't know.

I remember a few months ago wanting to hook up my old Sling boxes again, which are used to stream TV over the internet (especially since I was going to be doing some traveling late last year/this year). I bought them probably 6-8 years ago and have not had them hooked up in years. Back then I was able to happily use WINE to install the Windows-based Sling Player and watch video. This was in 2006. I tried earlier this year and it doesn't work anymore. The same version of Sling Player (same .exe from 5+ years ago) doesn't work on today's WINE. I wasn't the only one, a lot of other people had problems too (I could not find any reports of it working for anyone). Of course it still worked in XP. I keep the Sling box turned off so it doesn't wear out prematurely unless I plan to use it. Of course I forgot to plug it in before I went on my recent trip to Amsterdam.

I look at a stack of old Linux games from Loki Software and am quite certain none of them will ever run again, but the Windows versions of such games will still happily run (some of them even run in Wine of all things). It's disappointing to say the least.

I'm sure I am more committed to Linux on the desktop/laptop than most Linux folks out there (who are more often than not using OS X), and I don't plan to change – just keep chugging along. From the early days of staying up all night compiling KDE 0.something on Slackware to what I have in Ubuntu today…

I'm grateful that Nvidia has been able to put out such quality drivers for Linux over the years and as a result I opt for their chipsets in my Linux laptops/desktops at every opportunity. I've been running it (Linux) since, I want to say, 1998 when my patience with NT4 finally ran out. Linux was the first system I was exposed to at a desktop level that didn't seem to slow down or become less stable the more software you loaded on it (that stays true for me today as well). I never quite understood what I was doing, or what the OS was doing, that would prompt me to re-install from the ground up at least once a year back in the mid 90s with Windows.

I don't see myself ever going to OS X. I gave it an honest run for about two weeks a couple years ago and it was just so different from what I'm used to that I could not continue using it. Even putting Ubuntu as the base OS on the hardware didn't work, because I couldn't stand the track pad (I like the nipple, who wouldn't like a nipple? My current laptop has both and I always use the nipple) and the keyboard had a bunch of missing keys. I'm sure if I tried to forget all of the habits that I have developed over the years and do things the Apple way it could have worked, but going and buying a Toshiba and putting Ubuntu 10.04 on it was (and remains) the path of least resistance for me to becoming productive on a new system (second least resistance, next to Linux, is a customized XP).

I did use Windows as my desktop at work for many years but it was heavily, heavily customized with Blackbox for Windows as well as cygwin and other tools. So much so that the IT departments didn't know how to use my system (no Explorer shell, no Start menu). But it gave Windows a familiar, Linux-like feel with mouse-over activation (via the XP PowerToys – another feature OS X lacked outside of the terminal emulator anyways) and virtual desktops (though no edge flipping). It took some time to configure but once up and going it worked well. I don't know how well it would work in Windows 7; the version of BB I was using came out in the 2004/2005 time frame, though there are newer versions.

I do fear what may be coming down the pike from a Linux UI perspective though, so I plan to stick to Ubuntu 10.04 for as long as I can. The combination of Gnome 2 + some software called brightside, which allows for edge flipping (otherwise I'd be in KDE), works pretty well for me – even though I have to manually start brightside every time I login, because when it starts automatically it doesn't work for some reason. The virtual desktop implementation isn't as good as Afterstep, something I used for many years, but Gnome makes up for it in other areas where Afterstep fell short.
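
I haven't figured out why the autostart fails – my guess is it races with something else in the session coming up – so one hack (purely a guess on my part, nothing official) would be to point the GNOME startup entry at a tiny wrapper that waits a few seconds before launching it:

    #!/bin/sh
    # ~/bin/brightside-delayed - give the GNOME 2 session a moment to settle, then start brightside
    sleep 10
    exec brightside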

I’ve gotten even more off topic than I thought I would.

So – thanks Nvidia for making such good drivers over the years, because of them it’s made Linux on the desktop/laptop that much easier to deal with for me. The only annoying issue I recall having was on my M5 laptop, which wasn’t limited to Linux and didn’t seem specific to Nvidia (or Toshiba).

Also thank you to Linus for making Linux and getting it to where it is today.

June 14, 2012

Nokia’s dark future

Filed under: Random Thought — Tags: — Nate @ 8:00 am

Nokia made some headlines today, chopping a bunch of jobs, closing factories and stuff. With sales crashing and market share continuing to slip, their reliance on Windows Phone has been a bet that has gone bad, at least so far.

What I find more interesting though is what Microsoft has gotten Nokia to inflict upon itself. It's basically investing much of its remaining resources to turn into a Microsoft shop. Meanwhile their revenues decline, and their market valuation plunges. There were apparently talks last year about Microsoft buying Nokia outright, but they fell through. For good reason – all Microsoft has to do is wait, Nokia is doing their bidding already, and the valuation of the company gets even lower as time goes on. From a brand name standpoint Nokia doesn't (really) exist in the smart phone world, so there really isn't much to lose (other than the really good employees that may be jumping ship in the meantime – though I'm sure Nokia keeps MS aware of who is leaving so MS can contact them in the event they want to try to hire them back).

At some point, barring a miracle, Nokia will get acquired. By investing itself so heavily in Microsoft technologies now, until that acquisition happens they are naturally preparing themselves for assimilation – and at the same time making themselves less attractive to most other buyers because they are so committed to the Microsoft platform. Other buyers may come in and say we want to buy the patents or this piece or that piece. But then Microsoft can come in and offer a much higher price because all of the other parts of the company have much more value to them.

Not that I think going the Microsoft way was a mistake. All too often I see people say all Nokia had to do was embrace Android and they'd be fine. I don't agree at all here. Look at the Android marketplace: there are very few select standouts, Samsung being the main one these days (Apple and Samsung receive 90%+ of the mobile phone profits) – though I believe as recently as perhaps one year ago it was HTC, and they have fallen from grace as well. There's not enough to differentiate in the Android world; there are tons of handset makers, most of them absolute crap (very cheap components, breaks easily, names you've never heard of), and the tablets aren't much better.

So the point here is just being another me-too supplier of Android wasn't going to cut it. To support an organization that large they needed something more extraordinary. Of course that is really hard to come up with, so they went to Microsoft. It's too bad that Nokia, like RIM and even Palm (despite me being a WebOS fan and user, the WebOS products were the only Palm-branded products I have ever owned), floundered so long before they came up with a real strategy.

HP obviously realized this as well given the HP Touchpad was originally supposed to run Android – before the Palm acquisition. That would explain the random Touchpad showing up (from RMA) in a customer's hands running Android.

Palm's time of course prematurely ran out last year (HP's original plan had a three year runway for Palm); Nokia and RIM still have a decent amount of cash on hand and it remains to be seen if they have enough time to execute on their plans. I suspect they won't, with Nokia ending up at Microsoft, and RIM I don't know. It could be another good fit for MS, primarily for the enterprise subscribers, though by the time the valuation is good enough (keeping in mind MS will acquire Nokia) there may not be enough of them left. Unless RIM breaks apart, sells the enterprise biz to someone like MS, and maintains a smaller global organization supporting users in the places where they still have a lot of growth, which seems to be emerging markets.

Of course Nokia is not the only one making Windows Phone handsets, but at least that market is still so new (at least with the latest platform) that there was a better opportunity for them to stand out amongst the other players.

Speaking of the downfall of Nokia and RIM, there was a fascinating blog post a while back about the decline of Apple since the founder is gone now. It generated a ton of buzz, I think the person makes a lot of very good and valid points.

Now that I’ve written that maybe my mind can move on to something else.

May 15, 2012

Facebook’s race to the bottom

Filed under: Random Thought — Tags: — Nate @ 8:36 am

I’m as excited as everyone else about Facebook’s IPO – hopefully it will mark the passing of an era, and people can move on to talk about something else.

For me it started back in 2006 when I went to work for my first social media startup, a company that very quickly seemed to lose its way and just wanted to try to capitalize on Facebook somehow. Myself, I did not (and still don't) care about social media; one of the nice things about being on the operations/IT/internet side of the company is it doesn't really matter what the company's vision is or what they do, my job stays pretty much the same. Optimize things, monitor things, make things faster, etc. All of those sorts of tasks improve the user experience no matter what and I don't need to get involved in the innards of company strategy or whatever. I mistakenly joined another social media startup a few years later and that place was just a disaster any way you slice it. No more social media companies for me! Tired of the "I wanna be Facebook too!" crowd.

Anyways, back on topic: the forthcoming Facebook IPO. The most anticipated IPO in the past decade, I believe anyways. Obviously tons of hype around it, but I learned a couple interesting things, yesterday I think, that made me chuckle. This information comes from analysts on CNBC – I don't care enough about Facebook to research the IPO myself, it's a waste of time.

Facebook has a big problem, and the problem is mobile. There haven't been any companies that have been able to monetize mobile (weird thinking I worked at a mobile payments company going back to 2003 that was later acquired by AMDOCS, which is a huge billing provider for the carriers) in the same way that companies have been able to monetize the traditional PC-based web browsing platform with advertising. There have been companies like Apple that make tons of money off their mobile stuff but that's a different and somewhat unique model. The point is advertising. Whether it's Google, Pandora, or Facebook – and I'm confident Twitter is in the same boat – nobody is making profits on mobile advertising, despite all the hype and efforts. I guess the screen is too small.

So expanding on that a bit, this analyst said yesterday that outside of the U.S. and parts of Europe the bulk of the populations using Facebook use it almost exclusively on mobile – so there’s no real revenue for Facebook from them at this time.

Add to that, apparently Facebook has written China off as a growth market for some specific reason (don't recall what). Which seems contrary to the recent trend where companies are falling head over heels to try to get into China, giving up their intellectual property to the Chinese government (why.. why?!) to get into that market.

So that leaves the U.S. and a few other developed markets that are still, for the most part, using their computers to interact with Facebook.

So Facebook is in a race – to be the first company to monetize mobile before their lucrative subscriber base that they have in these few developed markets shifts away from the easy-to-advertise-on computer platform.

Not only that but there's another challenge that faces them as well: employee retention. I of course would never work for Facebook; I've talked to several people that have interviewed there, and a couple that have worked there, and I've never really heard anything positive come out of anyone about the company.

Basically it seems like the only thing holding it together is the impending IPO. In fact at one point I believe it was reported that Zuckerberg delayed the IPO in order to get employees to re-focus on the company and software and not get side tracked by the IPO.

So why IPO now? One big reason seems to be taxes, of all things. With many tax rates currently scheduled to go up on Jan 1, 2013, Facebook wants to IPO now – with the employee lock up preventing anyone from selling shares for six months, that gets you pretty close to the New Year, and the potential new taxes.

The IPO is also expected to trigger a housing boom in and around Palo Alto, CA. I remember seeing a report about a year ago that mentioned many people in the area wanted to sell their houses but were holding off for the IPO – as a result the housing market (at least at the time, not sure what the state is now) was very tight with only a few dozen properties on the market out of tens of thousands.

There was even a California politician or two earlier in the year that said the state’s finances weren’t in as bad of shape as some people were making out because they weren’t taking into account the effect of the Facebook IPO. Of course recently it was announced that things were in fact, much worse than some had previously communicated.

I'm not saying the hype won't drive the stock really high on opening day – it wouldn't surprise me if it went to $90 or $100 or more. The IPO road show that Facebook did felt like a formality more than anything else. I just saw someone mention that in Asia the IPO is 25X oversubscribed.

One stock person I saw recently mentioned her company has received more requests about the Facebook IPO than any other IPO in the past 20 years.

Maybe they can pull mobile off before it’s too late, I’m not holding my breath though.

I really didn't participate in the original dot com bubble; I worked at a dot com for about 3 months in the summer of 2000 but that was about it. So this comparison may not be accurate, but the hype around this IPO really reminds me of that time. I'm not sure how many of the original dot com companies you would have to combine to reach a market cap of $100B – hopefully it's 100 at least. But it's sort of like a mini dot com bubble all contained within one company, with so many other wannabe hopefuls in the wings not able to get any momentum to capitalize on it beyond their initial VC investments. The two social media companies I worked for combined got, I want to say, around $90M in funding alone.

Another point along these lines is that the esteemed CEO of Facebook seems to be on a social mission and cares more about the mission than the money. That reminds me so much of the dot com days; it's just another way of saying we want even more traffic, more web site hits! Sure, it's easy to not care much about the money now because people have bought the hype hook, line and sinker and are just throwing money at it. Obviously it won't last though 🙂

Myself of course, will not buy any Facebook stock – or any other stock. I’m not an investor, or trader or whatever.

April 23, 2012

MS Shooting themselves in their mobile feet again?

Filed under: General,Random Thought — Tags: — Nate @ 10:03 am

I’ve started to feel sorry for Microsoft recently. Back in the 90s and early 00s I was firmly in the anti MS camp, but the past few years I have slowly moved out of that camp mainly because MS isn’t the beast that it used to be. It’s a company that just fumbles about at what it does now and doesn’t appear to be much of a threat anymore. It has a massive amount of cash still but for some reason can’t figure out how to use it. I suppose the potential is still there.

Anyways, I was reading this article on Slashdot just now about Skype on Windows Phone 7. The most immediate complaint was that the design of WP7 prevents Skype from receiving calls while in the background, because with few exceptions like streaming media and stuff any background app is suspended. There is no multi tasking on WP7? Like some others I have seen comment – I haven't seen a WP7 phone on anyone yet, so haven't seen the platform in action. Back when what was Palm was gutted last year and the hardware divisions shut down, many people were saying how WP7 was a good platform to go to from WebOS, especially the 7.5 release which was pretty new at the time.

I don't multi task too much on my phone or tablets, but it's certainly nice to have the option there. WebOS has a nice messaging interface with full Skype integration so Skype can run completely in the background. I don't use it in this way mainly because the company I'm at uses Skype as a sort of full on chat client, so the device would be hammered by people talking (to other people) in group chats, which is really distracting. Add to that the audible notifications for messaging on WebOS apply to all protocols, so I use a very loud panic alarm for SMS messages for my on call stuff, and having that sound off every couple of seconds when a Skype discussion is going is not workable! So I keep it off unless I specifically need it. 99.9% of my Skype activity is work related. Otherwise I wouldn't even use the thing. Multi tasking has been one of the biggest selling points of WebOS since it was released several years ago, really seeming to be the first platform to support it (why it took even that long sort of baffles me).

So no multi tasking, and apparently no major upgrades coming either – I've come across a few articles like this one that say it is very unlikely that WP7 users will be able to upgrade to Windows 8/WP8. Though lack of mobile phone upgrades seems pretty common; Android in particular has had some investigations done to illustrate the varying degrees to which, and when or if, the various handsets get upgrades. WebOS was in the same boat here, with the original phones not getting past version 1.4.5 or something, the next generation of phones not getting past 2.x, and only the Touchpad (with a mostly incompatible UI for phones apparently) having 3.x. For me, I don't see anything in WebOS 3.x that I would need on my WebOS 2.x devices, and I remember when I was on WebOS 1.x I didn't see anything in 2.x that made me really want to upgrade, the phone worked pretty well as it was. iOS seems to shine the best in this case, providing longer term updates for what has got to be a very mature OS at this point.

But for a company that has as much resources as Microsoft, especially given the fact that they seem to be maintaining tighter control over the hardware the phones run on, it’s really unfortunate that they may not be willing/able to provide the major update to WP8.

Then there was the apparent ban Microsoft put on all players, preventing them from releasing multi core phones in order to give Nokia time to make one themselves. Instead of giving even more resources to making sure Nokia could succeed, they held the other players back, which not only hurts all of their partners (minus Nokia, or not?) but of course hurt the platform as a whole.

I’m stocked up on WebOS devices to last me a while on a GSM network. So I don’t have to think about what I may upgrade to in the future, I suspect my phones might outlive the network technologies they use.

To come back to the original topic – lack of multi tasking – specifically the inability for Skype to operate in the background is really sad. Perhaps the only thing worse is it took this long for Skype to show up on the platform in the first place. Even the zombie'd WebOS has had Skype for almost a year on the Touchpad, and if you happened to have a Verizon Pre2 phone at the time, Skype for that was released just over a year ago (again with full background support). I would have thought, given Microsoft bought Skype about a year ago, that they would have/could have had a release for WP7 within a very short period of time (30 days?). But at least it's here for the 8 people that use the phone, even if the version is crippled by the design of the OS. Even Linux has had Skype (which I use daily) for longer. There have been some big bugs in Skype on WebOS – most of them I think related to video/audio, which doesn't really impact me since most of my Skype usage is for text chat.

While I'm here chatting about mobile, I find it really funny and ironic that apparently Microsoft makes more money off of Android than it does off its own platform (estimated to be five times more last year), and Google apparently makes four times more money off of iOS than off its own platform.

While there are no new plans for WebOS hardware at this point – it wouldn’t surprise me at all if people inside HP were working to make the new ARM-based WP8 tablets hackable in some way to get a future version of WebOS on them, even though MS is going to do everything they can to prevent that from happening.

April 6, 2012

Flash from the past – old game review

Filed under: General,Random Thought — Tags: — Nate @ 11:39 pm

I was talking to a friend that I've known for more than 20 years a few days ago (these posts really are making me feel old 🙂 ) and we were talking about games, Wing Commander Saga among them, and he brought up an old game that we tried playing for a while. He couldn't remember the name, but I did: it was Outpost, a sci-fi SimCity or Civilization style game from 1994. It had amazing visuals, being one of the earlier CDROM-based games. I bought it after seeing the visuals and really wanted to like it, but no matter what I did I couldn't get very far without losing the game – I would run out of resources, air, food, water, whatever it was, all my people would die and I would have to start again. Rinse and repeat a few times and I gave up on it eventually. I really liked the concept, being a long time sci fi fan (well, hard core sci fi fans would probably refer to me as a space opera fan).

ANYWAYS, I hadn't looked for anything about this game since, well, probably the mid 90s. I ran a basic search earlier today and came across this 10-minute video review from an interactive CDROM magazine published back in 1994. I had no idea how much of a mess the game really was; the review was incredibly funny to watch as they rip the game apart. They are in awe of the visuals like I was, but other than that the game was buggy and unfinished. They make it sound like it was one of the most unfinished games of all time. They talk about entire sections of the strategy guide that are completely left out of the game, functions that are mentioned that just don't exist, in-game artificial intelligence that gives absolutely worthless data, and no explanation on how to plan to play the game up front. It's quite humorous! I'm going to go watch it again.

From Wikipedia

Initial reviews of Outpost were enthusiastic about the game. Most notoriously, the American version of PC Gamer rated the game at 93%, one of its highest ratings ever for the time. It was later made known that the reviewers had in fact played beta versions of the game, and had been promised certain features would be implemented, but never were.

[..]

Following the release of the game, the game’s general bugginess and perceived mediocre gameplay, along with the lack of features described in most of the game’s reviews and the game’s own documentation led to a minor backlash against the computer game magazines of the time by consumers who bought the game based on their reviews.

April 5, 2012

Built a new computer – first time in 10 years

Filed under: linux,Random Thought — Tags: , — Nate @ 8:51 am

I was thinking this morning about when the last time was that I built a computer from scratch, and I think it was about ten years ago, maybe longer – I remember the processor was a Pentium 3 800MHz. It very well may have been almost 12 years ago. Up until around the 2004 time frame I had built and re-built computers re-using older parts and some newer components, but as far as going out and buying everything from scratch, it was 10-12 years ago.

I had two of them, one was a socket-based system, the other was a "Slot 2"-based system. I also built a couple systems around dual-slot (Slot 1) Pentium 2 systems with the Intel L440GX+ motherboard (probably my favorite motherboard of all time). For those of you who think that I use nothing but AMD, I'll remind you that aside from the AMD K6-3 series I was an Intel fanboy up until the Opteron 6100 was released. I especially liked the K6-3 for its on-chip L2 cache; combined with 2 Megabytes of L3 cache on the motherboard it was quite zippy. I still have my K6-3 CPU itself in a drawer around here somewhere.

So I decided to build a new computer to move my file serving functions out of my HP xw9400 workstation, which I bought about a year and a half ago, into something smaller, so I could turn the HP box into something less serious to play some games on my TV with (like WC: Saga!). Maybe get a better video card for it, I'm not sure.

I have a 3Ware RAID card and 4x2TB disks in my HP box so I needed something that could take that. This is what I ended up going with, from Newegg –

Seemed like an OK combination; the case is pretty nice, having a 5-port hot swap SATA backplane and supporting up to 7 HDDs. PC Power & Cooling (I used to swear by them and so thought I might as well go with them again) had a calculator that said for as many HDDs as I had to get a 500W unit, so I got that.

There are a lot of things here that are new to me anyways, and it's been interesting to see how technology has changed since I last did this in the Pentium 3 era.

Mini ITX. Holy crap is that small. I knew it was small based on dimensions but it really didn't set in until I held the motherboard box in my hand and it seemed about the same size as a retail box for a CPU 10 years ago. It's no wonder the board uses laptop memory. The amount of integrated features on it is just insane as well, from ethernet to USB 2 and USB 3, eSATA, HDMI, DVI, PS/2, optical audio output, analog audio out, and even wireless, all crammed into that tiny thing. Oh! Bluetooth is thrown in as well. During my quest to find a motherboard I even came across one that had a parallel port on it – I thought those died a while ago. The thing is just so tiny and packed.

On the subject of motherboards – the very advanced overclocking functions are just amazing. I will not overclock since I value stability over performance, and I really don't need the performance in this box. I took the overclocking friendliness of this board to hopefully mean higher quality components and the ability to run more stable at stock speeds. Included tweaking features –

  • 64-step DRAM voltage control
  • Adjustable CPU voltage at 0.00625V increments (?!)
  • 64-step chipset voltage control
  • PCI Express frequency tuning from 100MHz up to 150MHz in 1MHz increments
  • HT tuning from 100MHz to 550MHz in 1MHz increments
  • ASUS C.P.R. (CPU Parameter Recall) – no idea what that is
  • Option to unlock the 4th CPU core on my CPU
  • Options to run on only 1, 2, or all 3 cores

The last time I recall overclocking stuff there were maybe 2-3 settings for voltage and the difference was typically at least 5-15% between them. I remember the only CPU I ever overclocked was a Pentium 200 MMX (o/c to 225MHz – no voltage changes needed, ran just fine).

I seem to recall from a PCI perspective, back in my day there were two settings for PCI frequencies: whatever the normal was, and one setting higher (which was something like 25-33% faster).

ASUS M4A88T-I Motherboard

Memory – wow it's so cheap now, I mean 8GB for $45?! The last time I bought memory was for my HP workstation, which requires registered ECC – and it was not so cheap! This system doesn't use ECC of course. Though given how dense memory has been getting and the possibility of memory errors only increasing, I would think at some point soon we would want some form of ECC across the board? It was certainly a concern 10 years ago when building servers with even, say, 1-2GB of memory; now we have many desktops and laptops coming standard with 4GB+. Yet we don't see ECC on desktops and laptops – I know it's because of cost, but my question is more that there doesn't appear to be a significant (or perhaps in some cases even noticeable) hit in reliability of these systems with larger amounts of memory without ECC, which is interesting.

Another thing I noticed was how massive some video cards have become, consuming as many as 3 PCI slots in some cases for their cooling systems. Back in my day the high end video cards didn’t even have heat sinks on them! I was a big fan of Number Nine back in my day and had both their Imagine 128 and Imagine 128 Series 2 cards, with a whole 4MB of memory (512kB chips if I remember right on the series 2, they used double the number of smaller chips to get more bandwidth). Those cards retailed for $699 at the time, a fair bit higher than today’s high end 3D cards (CAD/CAM workstation cards excluded in both cases).

Modular power supplies – the PSU I got was only partially modular but it was still neat to see.

I really dreaded the assembly of the system since it is so small. I knew the power supply was going to be an issue, as someone on Newegg said that you really don't want a PSU that is longer than 6″ because of how small the case is. I think PC Power & Cooling said mine was about 6.3″ (with wiring harness). It was gonna be tight – and it was tight. I tried finding a shorter power supply in that class range but could not. It took a while to get the cables all wrapped up. My number one fear of course, after doing all that work, was hitting the power button and finding out there's a critical problem (bought the wrong RAM, bad CPU, bad board, plugged in the power button the wrong way, whatever).

I was very happy to see when I turned it on for the first time it lit up and the POST screen came right up on the TV. There was a bad noise coming from one of the fans because a cable was touching it, so I opened it up again and tweaked the cables more so they weren't touching the fan, and off I went.

First I ran it without any HDs, just to make sure it turned on, the keyboard worked, I could get into the BIOS screen, etc. All that worked fine; then I opened up the case again and installed an old 750GB HD in one of the hot swap slots. Hooked up a USB CDROM with a CD of Ubuntu 10.04 64-bit and installed it on the HD.

Since this board has built in wireless I was looking forward to trying it out – didn't have much luck. It could see the 50 access points in the area but it was not able to login to mine for some reason. I later found that it was not getting a DHCP response, so I hard wired an IP and it worked – but then other issues came up like DNS not working and very, very slow transfer speeds (as in sub 100 BYTES per second). After troubleshooting for about 20 minutes I gave up and went wired, and it was fast. I upgraded the system to the latest kernel and stuff but that didn't help the wireless. Whatever, not a big deal, I didn't need it anyways.

I installed SSH, logged into it from my laptop, shut off X-Windows, and installed the Cerberus Test Suite (something else I used to swear by back in the mid 00s). Fortunately there is a packaged version of it for Ubuntu, as last I checked it hasn't been maintained in about seven years. I do remember having problems compiling it on a 64-bit RHEL system a few years ago (though 32-bit worked fine and the resulting binaries worked fine on 32-bit too).

The Cerberus test suite (or ctcs as I call it) is basically a computer torture test – a very effective one, the most effective I've ever used myself. I found that if a computer can survive my custom test (which is pretty basic) for 6 hours then it's good; I've run the tests for as long as 72 hours and never saw a system take more than 6 hours to fail. Normally it would be a few minutes to a couple hours. It would find problems with memory that memtest wouldn't find after 24 hours of testing.

What cerberus doesn't do is tell you what failed or why; if your system just freezes up you still have to figure it out. On one project I worked on that had a lot of "white box" servers in it, we deployed them about a rack at a time, and I would run this test. Maybe 85% of them would pass, and the others had some problem, so I told the vendor to go fix it – I don't know what it is, but these are not behaving like the others so I know there is an issue. Let them figure out what component is bad (90% of the time it was memory).

So I fired up ctcs last night, and watched it for a few minutes, wondering if there is enough cooling on the box to keep it from bursting into flames. To my delight it ran great, with the CPU topping out at around 54C (honestly I have no idea if that is good or not, I think it is OK though). I ran it for 6 hours overnight and no issues when I got up this morning. I fired it up again for another 8 hours (the test automatically terminates after a predefined interval).
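
If you want to keep an eye on temperatures from the SSH session while a burn-in like this runs, lm-sensors makes it easy (assuming the package is installed and sensors-detect finds the board's monitoring chip):

    sudo apt-get install lm-sensors
    sudo sensors-detect        # answer the prompts so the right monitoring modules get loaded
    watch -n 5 sensors         # refresh CPU/board temperatures and fan readings every 5 seconds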

I'm not testing the HD, because it's just a temporary disk until I move my 3ware stuff over. I'm mainly concerned about the memory, and CPU/MB/cooling. The box is still running silent (I have other stuff in my room so I'm sure it makes some noise but I can't hear it). It has 4 fans in it including the CPU fan: a 140mm, a 120mm, and the PSU fan, which I am not sure how big that is.

My last memory of ASUS was running an Athlon on an A7A-266 motherboard (I think in 2000); that combination didn't last long. The IDE controller on the thing corrupted data like nobody's business. I would install an OS, and everything would seem fine, then the initial reboot kicked in and everything was corrupt. I still felt that ASUS was a pretty decent brand, maybe that was just specific to that board or something. I'm so out of touch with PC hardware at this level – the different CPU sockets, CPU types; I remember knowing everything backwards and forwards in the Socket 7 days, back when things were quite interchangeable. Then there was my horrible year or two experience with the ABIT BP-6, a somewhat experimental dual socket Celeron system. What a mistake that was, oh what headaches that thing gave me. I think I remember getting it based on a recommendation at Tom's Hardware Guide, a site I used to think had good information (maybe it does now, I don't know). But that experience with the BP6 fed back into my thoughts about Tom's Hardware and I really didn't go back to that site ever again (sometimes these days I stumble upon it by accident). I noticed a few minutes ago that Abit as a company is out of business now; they seemed to be quite the innovator back in the late 90s.

Maybe this weekend I will move my 3ware stuff over and install Debian (not Ubuntu) on the new system and set it up. While I like Red Hat/CentOS for work stuff, I like Debian for home. It basically comes down to if I am managing it by hand I want Debian, if I’m using tools like CFEngine to manage it I want RH. If it’s a laptop, or desktop then it gets Ubuntu 10.04 (I haven’t seen the nastiness in the newer Ubuntu release(s) so not sure what I will do after 10.04).

I really didn’t think I’d ever build a computer again, until this little side project came up.

Another reason I hate SELinux

Filed under: linux,Random Thought — Tags: , , — Nate @ 7:43 am

I don’t write too much about Linux either but this is sort of technical I guess.

I've never been a fan of SELinux. I'm sure it's great if you're in the NSA, or the FBI, or some other 3 letter agency, but for most of the rest of the people it's a needless pain to deal with, and provides little benefit.

I remember many moons ago, back when I dealt with NT4, encountering situations where I, as an administrator, could not access a file on the NTFS file system. It made no sense – I am administrator, get me access to that file – but no, I could not get access. HOWEVER, I could change the security settings and take ownership of the file, and THEN I could get access. Since I have that right to begin with it should just give me access and not make me jump through those hoops. That's what I think at least. I recall someone telling me back in the 90s that Netware was similar and even went to further extremes where you could lock the admin out of files entirely, and in order to back data up you had another backup user which the backup program used, and that was somehow protected too. I can certainly understand the use case, but it certainly makes things frustrating. I've never been at a company that needed anywhere remotely that level of control (I go out of my way to avoid them actually, since I'm sure that's only a small part of the frustrations of being there).

On the same token I have never used (for more than a few minutes anyways) file system ACLs on Linux/Unix platforms either. I really like the basic permissions system – it works for 99.9% of my own use cases over the years, and is very simple to manage.

I had a more recent experience that was similar, but even more frustrating on Windows 7. I wanted to copy a couple files into the system32 directory, but no matter what I did (including take ownership, change permissions etc) it would not let me do it. It’s my #$#@ computer you piece of #@$#@.

Such frustration is not limited to Windows however; Linux has its own similar functionality called SELinux, which by default is turned on in many situations. I turn it off everywhere, so when I encounter it I am not expecting it to be on, and the resulting frustration is annoying to say the least.

A couple weeks ago I installed a test MySQL server, and exposed a LUN to it which had a snapshot of a MySQL database from another system. My standard practice is to turn /var/lib/mysql into a link which points to this SAN mount point. So I did that, and started MySQL… failed. MySQL complained about not having write access to the directory. So I spent the next probably 25 minutes fighting this thing only to discover it was SELinux that was blocking access to the directory. Disable SELinux, reboot, and MySQL came up fine w/o issue. #@$#@!$
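
For the record, I believe the "proper" fix (rather than disabling SELinux outright) is to label the SAN mount point with the MySQL context so the policy lets mysqld write there – something along these lines, from memory, with the mount point being just an example:

    # confirm SELinux is what's in the way
    getenforce
    grep denied /var/log/audit/audit.log | tail

    # label the real datadir location with the MySQL context and apply it
    yum install policycoreutils-python           # provides semanage
    semanage fcontext -a -t mysqld_db_t "/san/mysql(/.*)?"
    restorecon -Rv /san/mysql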

Yesterday I had another, more interesting encounter with SELinux. I installed a few CentOS 6.2 systems to put an evaluation of Vertica on. These were all built by hand since we have no automation stuff to deal with CentOS/RH; everything we have is Ubuntu. So I did a bunch of basic things including installing some SSH keys so I could login as root w/my key. Only to find out that didn't work. No errors in the logs, nothing, it just rejected my key. I fired up another SSH daemon on another port and my key was accepted no problem. I put the original SSH daemon in debug mode and it gave nothing either, just said it rejected my key. W T F.

After fighting for another probably 10 minutes I thought, HEY maybe SELinux is blocking this, and I checked and SELinux was in enforcing mode. So I disabled it, and rebooted – now SSH works again. I didn't happen to notice any logs anywhere related to SELinux and how/why it was blocking this, and it was only blocking it on port 22, not on any other ports (I tried two other ports), but there you have it, another reason to hate SELinux.
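
My best guess after the fact: sshd on port 22 runs confined by the targeted policy and won't read an authorized_keys file that carries the wrong label (keys copied in by hand often end up mislabeled), while a second daemon started manually from a shell runs unconfined, which would explain why the other ports worked. If that's what it was, relabeling root's .ssh directory should have been enough:

    # restore the default SELinux labels on root's .ssh directory
    restorecon -R -v /root/.ssh
    ls -Z /root/.ssh/authorized_keys       # should show ssh_home_t once the label is right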

You can protect your system against the vast majority of threats fairly easily. I mean, the last system I dealt with that got compromised was a system that sat out on the internet (with tons of services running) that hadn't had an OS upgrade in at least 3 years. The system before that I recall was another Linux host (internet-connected as well – it was a firewall), this time back in 2001, and it probably hadn't had upgrades in a long time either. The third – a FreeBSD system that was hacked because of me really – I told my friend who ran it to install SSH as he was using telnet to manage it. So he installed SSH and SSH got exploited (back in 2000-2001). I've managed probably 900-1000 different hosts over that time frame without an issue. I know there is value in SELinux, just not in the environments I work in.

Oh, and while I'm here, I came across a new feature in CentOS 6.2 yesterday which I'm sure probably applies to RHEL too. When formatting an ext4 file system, by default it discards unused blocks. The man page says this is good for thin provisioned file systems and SSDs. Well I'll tell you it's not good for thin provisioned file systems: the damn thing sent 300 Megabytes a second of data (450,000-500,000+ sectors per second according to iostat) to my little storage array with a block size of 2MB (never seen a block size that big before), which had absolutely no benefit other than to flood the storage interfaces and possibly fill up the cache. I ran this on three different VMs at the same time. After a few seconds my front end latency on my storage went from 1.5-3ms to 15-20ms. And the result on the volumes themselves? Nothing, there was no data being written to them. So what's the point? My point is: disable this stupid function with the -K option when running mke2fs on CentOS 6.2. On Ubuntu 10.04 (what we use primarily), it uses ext4 too, but it does not perform this function when a file system is created.
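
So on CentOS 6.2 boxes sitting on thin provisioned LUNs I now format with something like this (the device name is just an example):

    # -K = keep: skip the discard pass that mke2fs otherwise runs at format time
    mke2fs -t ext4 -K /dev/sdb1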

Something that was strange when this operation ran – and I have a question to my 3PAR friends on it – is that the performance statistics for the virtual LUN showed absolutely no data flowing through the system, but the performance stats for the volume itself were there (a situation I have never seen before in 6 years on 3PAR), and the performance stats of the fibre channel ports were there. There was no noticeable hit on back end I/O that I could see, so the controllers were eating it. My only speculation is that because RHEL/CentOS 6 has built in support for SCSI UNMAP, these commands were actually UNMAP commands rather than actual data. I'm not sure though.

April 1, 2012

Wing Commander Saga: The Darkest Dawn

Filed under: General,Random Thought — Tags: — Nate @ 11:00 pm

I don't write very often about games, since I don't play too many of them. Somehow I missed the release of Wing Commander Saga: The Darkest Dawn when it was announced (did not even know it existed); fortunately one of my friends pointed it out and I played it a bit over the past few days.

Here is a trailer for Wing Commander Saga: The Darkest Dawn.

Wing Commander Saga is 10 years in the making(!), an open source project – they were able to take (fortunately without pissing people off) a lot of content from the original series and re-use it, including sounds, music, ships, tons of stuff. They put all of that, along with the open source 3D engine from FreeSpace 2 (of the Descent: FreeSpace series), a ton of new content, voice acting, and story lines together to make what seems to me the best Wing Commander yet.

They had 50,000 downloads in the first 24 hours of launch (which was on March 22nd).

I really can't put into words how amazing this game is. I was there about 20 years ago playing the first Wing Commander on an HP 286 12MHz. I was talking with a friend of mine who played it with me at the time and we both recall using a keyboard to fly, at least initially – didn't get a joystick until later.

It is one of my favorite games of all time; I remember so much of it even today (played it through again about two years ago). The missions, the story, the attention to detail that was put into the game was just awesome. Then came Wing Commander 2 with more advanced AI – I struggled with that one. Later came Wing Commander 3 and its breakthrough video sequences that could be run on ordinary computers; the level of quality was just outstanding. I remember playing it on a 486 DX 33MHz with, if I remember right, a Cirrus Logic VLB video card. Later came Wing Commander 4 and Wing Commander Prophecy (wasn't a big fan of the last one).

From Wikipedia on Wing Commander 3:

The game made the transition from animated cut-scenes to full motion video, one of the first computer games to do so; it was frequently marketed as the world’s first interactive movie. It pioneered the use of CGI backgrounds and greenscreen work; all sets were added digitally during post-production, nearly a decade before George Lucas would use the same tactic in Star Wars Episode III: Revenge of the Sith.

To give an example of the attention to detail, take a look at the manual (16MB), more than a hundred pages (the vast, vast majority of which are not directly related to playing the actual game but rather history and stories and stuff).

The original Wing Commander series to some extent was limited in what they could put on the screen by the hardware. Wing Commander Saga seems to have no limits: massive dogfights with tons of fighters, multiple capital ships engaging each other head on – things that I never recall seeing in the original Wing Commander series (though to be honest my memory of things other than WC1 is kind of fuzzy, it was a long time ago). In the last mission I played tonight I barely escaped with my life (had to try the mission twice since the first time I was shot down), and limped away with 36 kills – a number far higher than in any other WC game that I can recall.

I've played probably 8-10 missions at this point. In those missions I already have war stories to talk about.

Beyond the added visuals comes the really excellent voices, story, smalltalk etc. It adds an amazing level of realism to the game.

Confederation Strike force going to their target

The controls are quite advanced as well, quite similar to those available in the X-Wing series of flight simulators. Again my memory of WC:4 and WC:P is fuzzy at best; those very well may have been available then too, I don't remember.

If I had one complaint it is that the game is too strict with the rules; I get in trouble quite often when all I want to do is go in and skin some cats. In one mission I was ordered to retreat, but I didn't want to, so I turned around and watched some action. I never engaged or got close to the enemy, yet at the end of the scene they abandoned me.

It's really amazing this team of people was able to hold together for a decade and release such a quality product; I really still have no words for how incredible of a game this is. If you haven't tried the Wing Commander series this is something you should check out – in order to play it properly you will need a joystick though. I use a trusty old CH Flightstick Pro which works pretty well. I re-bought it about two years ago when I went and re-bought the original WC series (excluding Prophecy). I had intended to play the series through but haven't had as much time as I would have liked, so I didn't get past part way into WC2. The games play well in DOSBox so when I get time in the future I can go back to them again.

I don’t see any way to donate to them on their site, I read some speculation that they would probably get in trouble with the content owners (Electronic Arts I think) if they did take any money for the game.

March 19, 2012

Apple and their dividends

Filed under: Random Thought — Nate @ 4:20 pm

I watch quite a bit of CNBC despite never having invested a dime in the markets. I found this news of Apple doing a stock buyback and offering a dividend curious.

People have been clamoring for a dividend from Apple for a while now, wanting Apple to return some of that near $100B cash stockpile to investors. I'm no fan of Apple (to put it mildly), never really have been, but for some reason I always thought Apple should NOT offer a dividend nor do a stock buyback.

The reason? Just look at the trajectory of the stock; investors are getting rewarded by a large amount already, from about $225 two years ago to near $600 now. Keep the cash – sooner or later their whole iOS ecosystem will start to fizzle and they will go into decline again. When that is I don't know, but the cash could be used to sustain them for quite some time to come. As long as the stock keeps going up at a reasonable amount year over year (I hardly think what has happened to their stock is reasonable but that's why I'm not an investor), with some room for corrections here and there, keep the cash.

Same goes for stock buybacks: these seem to be tools for companies that really don't have any other way to boost their stock price – a sort of last resort when they are all out of ideas. Apple doesn't seem to be near that situation.

Even with the dividend I saw people complaining this morning that it was too low.

Apple could do something crazy with their cash stockpile too – perhaps buy HP (market cap of $48B), or Dell ($30B market cap), if for nothing else then for diversification. Not that I'd be happy with an Apple acquisition of HP… Apple wouldn't have to do anything with the companies, just let them run like they are now. For some reason I thought both HP and Dell's market caps were much higher (I suppose they were, just a matter of time frame).

What do I think Apple should do with their stock? A massive split (10:1?), something they are apparently considering. Not for any other reason than to allow the little guy to get in on the action more easily; having a $600/share price is kind of excessive – not Berkshire Hathaway excessive ($122k/share), but still excessive.

With the Nasdaq hitting new 10+ year highs in recent days/weeks, I would be curious to see how the charts of the indexes that have Apple in them would look without the ~50-fold increase in Apple's stock over the past 10 years. I seem to recall hearing that the Dow won't put Apple in the DJIA because it would skew the numbers too much due to how massive the stock is. One article here says that if Apple had been put in the DJIA in 2009 instead of Cisco, the Dow would be past 15,000 now.

CNBC reported that Steve Jobs was against dividends.

March 17, 2012

Who uses Legacy storage?

Filed under: Random Thought,Storage — Tags: — Nate @ 3:34 pm

Still really busy these days and haven't had time to post much, but I was just reading the LinkedIn profile of someone who works at a storage company and it got me thinking.

Who uses legacy storage? It seems almost everyone these days tries to benchmark their storage system against legacy storage. Short of something like direct-attached storage, which has no real functionality, legacy storage has been dead for a long time now. What should the new benchmark be? How can you go about (trying to) measure it? I'm not sure what the answer is.

When is thin, thin?

One thing that has been on my mind a lot on this topic recently is how 3PAR constantly harps on about their efficient allocation in 16kB blocks. I think I've tried to explain this in the past, but I wanted to write about it again. I wrote a comment on it on an HP blog recently; I don't think they published the comment though (I haven't checked for a few weeks, maybe they did). They try to say they are more efficient (by dozens or hundreds of times) than other platforms because of this 16kB allocation thing-a-ma-bob.

I've never seen this as an advantage of their platform. Whether you allocate in 16kB chunks or perhaps 42MB chunks in the case of Hitachi, it's still a tiny amount of data either way and really is a rounding error. If you have 100 volumes and they all have 42MB of slack hanging off the back of them, that's 4.2GB of data; it's nothing.
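Just to put rough numbers on that, here is a quick back-of-the-envelope sketch in Python (my own arithmetic, assuming the worst case where every single volume wastes one full allocation unit):

# Worst-case slack if every volume wastes one full allocation unit
volumes = 100
for label, unit_mb in [("16kB (3PAR)", 16 / 1024.0), ("42MB (Hitachi)", 42.0)]:
    total_mb = volumes * unit_mb
    print("%-15s about %.1f MB of slack across %d volumes" % (label, total_mb, volumes))

Either way you end up with megabytes, or at worst a few gigabytes, of waste on an array that is likely tens of terabytes in size.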

What 3PAR doesn't tell you is that this 16kB allocation unit is what a volume draws from a storage pool (a Common Provisioning Group in 3PAR terms, which is basically a storage template or policy defining things like RAID type, disk type, placement of data, protection level, etc). They don't tell you up front in what increments these storage pools themselves grow, which is based in part on the number of controllers in the system.

If your volumes max out a CPG's allocated space and it needs more, it won't grab 16kB; it will grab (usually at least) 32GB, and this is adjustable. This is, I believe, part of how 3PAR minimizes the impact of thin provisioning under large amounts of I/O: it allocates these pools in larger chunks of data up front. They even suggest that if you have a really large amount of growth you increase the growth increment even higher.

Growth Increments for CPGs on 3PAR

I bet you haven’t heard HP/3PAR say their system grows in 128GB increments recently 🙂
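Here is how I understand the two levels of allocation to work, expressed as a simplified model in Python. This is just my own illustration of the behavior described above (16kB pages drawn by volumes, a 32GB growth increment for the CPG), not anything resembling 3PAR's actual code:

# Simplified model of two-level thin allocation:
# volumes draw 16kB pages from their CPG; when the CPG runs dry
# it grows by a much larger increment drawn from the raw storage.

PAGE_KB = 16                       # per-volume allocation unit
GROWTH_KB = 32 * 1024 * 1024       # 32GB CPG growth increment (adjustable)

class CPG:
    def __init__(self):
        self.allocated_kb = 0      # space the CPG has claimed from the array
        self.used_kb = 0           # space actually handed out to volumes

    def write(self, kb):
        """A volume writes kb of new data; draw pages, growing the CPG if needed."""
        pages_kb = -(-kb // PAGE_KB) * PAGE_KB        # round up to whole 16kB pages
        while self.used_kb + pages_kb > self.allocated_kb:
            self.allocated_kb += GROWTH_KB            # grab another full increment up front
        self.used_kb += pages_kb

cpg = CPG()
cpg.write(10 * 1024 * 1024)        # a volume writes 10GB of new data
print("used %.0fGB, allocated %.0fGB, slack %.0fGB" % (
    cpg.used_kb / 1024.0 ** 2,
    cpg.allocated_kb / 1024.0 ** 2,
    (cpg.allocated_kb - cpg.used_kb) / 1024.0 ** 2))

Run that and the volume has used 10GB while the CPG has already claimed 32GB, which is the slack the marketing material doesn't talk about.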

It is important to note, or to remember, that a CPG can be home to hundreds of volumes, so it's up to the user; if they only have one drive type, for example, maybe they only want one CPG. But I think as they use the system they will likely go down a similar path to the one I have and end up with more.

If you only have one or two CPGs on the system it's probably not a big deal, though the space does add up. Still, I think for the most part even this level of allocation can be a rounding error, unless you have a large number of CPGs.

Myself, on my 3PAR arrays I use CPGs not just for determining the data characteristics of the volumes but also for organizational purposes and space management. That way I can look at one number and see that all of the volumes dedicated to development purposes are X in size, or set an aggregate growth warning on a collection of volumes. I think CPGs work very well for this purpose. The flip side is you can end up wasting a lot more space. Recently on my new 3PAR system I went through and manually set the growth increment of a few of my CPGs from 32GB down to 8GB because I know the growth of those CPGs will be minimal. At the time I had maybe 400-450GB of slack space in the CPGs (I have around 10 CPGs on this array), so not as thin as they may want you to think. I changed the growth increment and compacted the CPGs, which reclaimed a bunch of space.
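To put a rough bound on it (my own back-of-the-envelope math, assuming the worst case of one full growth increment of allocated-but-unused space sitting in each CPG; as noted above the real increment also scales with the number of controllers, so actual numbers can run higher, as mine did):

# Worst case: each CPG is sitting on one full growth increment of slack
cpgs = 10
for growth_gb in (32, 8):
    print("%d CPGs x %dGB increment: up to ~%dGB of slack" % (cpgs, growth_gb, cpgs * growth_gb))

Dropping the increment on the low-growth CPGs and then compacting is what pulled that number back down for me.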

Again, in the grand scheme of things that’s not that much data.

For me 3PAR has always been more about the higher utilization made possible by the chunklet design and true wide striping; the true active-active clustered controllers (one of the only, perhaps the first, storage designs in the industry to go beyond two controllers); and the ASIC acceleration, which is at the heart of the performance and scalability. Then there is the ease of use and such, but I won't talk about that anymore, I've already covered it many times. One of my favorite aspects of the platform is that they use the same design on everything from the low end to the high end; the only difference really is scale. It's also part of the reason why their entry-level pricing can be quite a bit higher than entry-level pricing from others, since there is extra sauce in there that the competition isn't willing or able to put in their low-end box(es).

Sacrificing for data availability

I was talking to Compellent recently, learning about some of their stuff for a project over in Europe, and they told me their best practice (not a requirement) is to have one hot spare of each drive type (drive type meaning SAS or SATA, I think; I don't think drive size matters, but I'm not sure) per drive chassis/cage/shelf.

They, like many other array makers, don't seem to support the use of low-parity RAID (like RAID 50 3+1 or 4+1). They (like others) lean towards higher data:parity ratios, I think in part because they have dedicated parity disks (they either had a hard time explaining to me how data is distributed or I had a hard time understanding, or both), and dedicating 25% of your spindles to parity is very excessive. In the 3PAR world, though, dedicating 25% of your capacity to parity is not excessive (when compared to RAID 10, where there is a 50% overhead anyway).

There are no dedicated parity disks or dedicated spares on a 3PAR system, so you do not lose any I/O capacity; in fact you gain it.

The benefits of a RAID 50 3+1 configuration are a couple-fold: you get pretty close to RAID 10 performance, and you can most likely (depending on the number of shelves) suffer a shelf failure without data loss or downtime (downtime may vary depending on your I/O requirements and the I/O capacity left after those disks are gone).

It's a best practice (again, not a requirement) in the 3PAR world to provide this level of availability (surviving the loss of an entire shelf), not because you lose shelves often but just because it's so easy to configure and is self-managing. With a 4- or 8-shelf configuration I do like RAID 50 3+1. In an 8-shelf configuration maybe I have some data volumes that don't need as much performance, so I could go with a 7+1 configuration and still retain shelf-level availability.
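To make the shelf-level availability point concrete, here is a toy sketch (my own illustration, not how 3PAR actually lays out chunklets): if each member of a 3+1 set sits on a different shelf, a single shelf failure takes out at most one member of any set, which RAID 5/50 can tolerate.

# Toy layout: spread the members of each 3+1 RAID set round-robin
# across shelves so no shelf holds two members of the same set.
shelves = 4
set_width = 4          # 3 data + 1 parity
raid_sets = 6

layout = {s: [(s + i) % shelves for i in range(set_width)] for s in range(raid_sets)}

failed_shelf = 2
for rs, members in sorted(layout.items()):
    lost = members.count(failed_shelf)
    assert lost <= 1                     # never more than one member per set
    print("RAID set %d lost %d of %d members" % (rs, lost, set_width))
print("every set survives the shelf failure")

The same idea extends to 7+1 across 8 shelves; once your set width exceeds the number of shelves, shelf-level availability is no longer possible.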

Or, with CPGs, you could have some volumes retain shelf-level availability and other volumes not; it's up to you. I prefer to keep all volumes with shelf-level availability. The added space you get with a higher data:parity ratio really has diminishing returns.
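The capacity side of the trade-off is simple arithmetic (my numbers below, usable capacity only; the 3PAR graphic that follows shows the I/O cost as well):

# Usable capacity fraction by layout; the incremental gain shrinks
# quickly as the data:parity ratio goes up.
layouts = [("RAID 10 (mirror)", 1, 1),
           ("RAID 50 3+1", 3, 1),
           ("RAID 50 7+1", 7, 1),
           ("RAID 50 15+1", 15, 1)]

prev = None
for name, data, parity in layouts:
    usable = data / float(data + parity)
    gain = "" if prev is None else " (+%.1f points vs previous row)" % (100 * (usable - prev))
    print("%-18s usable capacity %.1f%%%s" % (name, usable * 100, gain))
    prev = usable

Going from 3+1 to 7+1 only buys you 12.5 points of usable capacity, and 7+1 to 15+1 only about 6 more, while the I/O hit keeps growing, which is what the graphic below illustrates.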

Here's a graphic from 3PAR which illustrates the diminishing returns (at least on their platform; I think the application they used to measure was an Oracle DB):

The impact of RAID on I/O and capacity

3PAR takes this to an even higher extreme on their lower-end F-class series, which uses daisy chaining in order to get to full capacity (max chain length is 2 shelves). There is an availability level called port-level availability, which I always knew was there but never really learned what it truly was until last week.

Port-level availability applies only to systems that have daisy-chained chassis and protects the system from the failure of an entire chain, so basically two drive shelves. Like the other forms of availability this is fully automated, though if you want to go out of your way to take advantage of it you need to use a RAID level that fits your setup; otherwise the system will automatically default to a lower level of availability (or will prevent you from creating the policy in the first place because it is not possible on your configuration).

Port-level availability does not apply to the S/T/V series of systems, as there is no daisy chaining done on those boxes (unless you have a ~10-year-old S-series system, which did support chaining, up to 2,560 drives on that first-generation S800, back in the days of 9-18GB disks).

