TechOpsGuys.com Diggin' technology every day

February 24, 2011

So easy it could be a toy, but it’s not

Filed under: General,Random Thought — Tags: — Nate @ 8:44 pm

I was at a little event thrown for the Vertica column-based database, as well as Tableau Software, a Seattle-based data visualization company. Vertica was recently acquired by HP for an undisclosed sum. I had not heard of Tableau until today.

I went in not really knowing what to expect; I had heard good things about Vertica from my friend over there, but it’s really not an area I have much expertise in.

I left with my jaw on the floor. I mean holy crap, that combination looks wicked. Combining the world’s fastest column-based data warehouse with a data visualization tool that is so easy some of my past managers could even run it. I really don’t have words to describe it.

I never really considered Vertica for storing IT-related data, but they brought up a case study with one of their bigger customers, Comcast, who sends more than 65,000 events a second into a Vertica database (including logs, SNMP traps and other data). Hundreds of terabytes of data with sub-second query response times. I don’t know if they use Tableau’s products or not, but it made a good case for storing IT data in Vertica.

(from Comcast case study)

The test included a snapshot of their application running on a five-node cluster of inexpensive servers with 4 CPU AMD 2.6 GHz core processors with 64-bit 1 MB cache; 8 GB RAM; and ~750 GBs of usable space in a RAID-5 configuration.
To stress-test Vertica, the team pushed the average insert rate to 65K samples per second; Vertica delivered millisecond-level performance for several different query types, including search, resolve and accessing two days’ worth of data. CPU usage was about 9%, with a fluctuation of +/- 3%, and disk utilization was 12% with spikes up to 25%.

That configuration could of course easily fit on a single server. How about a 48-core Opteron with 256GB of memory and some 3PAR storage or something? Or maybe a DL385 G7 with 24 cores, 192GB memory (24x8GB), and 16x500GB 10k RPM SAS disks with RAID 5 and dual SAS controllers with 1GB of flash-backed cache (1 controller per 8 disks). Maybe throw some Fusion-io in there too?

Now I suspect that there will be additional overhead in feeding IT data into a Vertica database, since you probably have to format it into a structured form first.
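For example, something as unstructured as syslog has to be split into columns before it can be bulk loaded. Here is a minimal Python sketch of the kind of pre-processing I mean (the field layout and the regular expression are my own assumptions for illustration, nothing Vertica-specific):

import csv
import re
import sys

# Very loose syslog-style pattern: "Feb 24 20:15:01 host1 sshd[1234]: message..."
SYSLOG_RE = re.compile(
    r'^(?P<ts>\w{3}\s+\d+\s[\d:]{8})\s(?P<host>\S+)\s'
    r'(?P<prog>[^:\[]+)(\[(?P<pid>\d+)\])?:\s(?P<msg>.*)$'
)

def parse_line(line):
    """Split one raw syslog line into (timestamp, host, program, pid, message)."""
    m = SYSLOG_RE.match(line.rstrip('\n'))
    if not m:
        return None  # unparseable lines would need their own handling
    return (m.group('ts'), m.group('host'), m.group('prog'),
            m.group('pid') or '', m.group('msg'))

# Turn a raw log file into a delimited stream suitable for a bulk COPY-style load.
writer = csv.writer(sys.stdout, delimiter='|')
with open('/var/log/syslog') as f:
    for line in f:
        row = parse_line(line)
        if row:
            writer.writerow(row)

From there it is just a matter of pointing whatever bulk loader the database provides at the delimited output.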

Another really cool feature of Vertica: all of its data is mirrored at least once to another server. Nothing special about that, right? Well, they go one step further and give you the ability to store the data pre-sorted in two different ways, so mirror #1 may be sorted by one field and mirror #2 by another, maximizing the usefulness of every copy of the data while maintaining data integrity.
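A toy Python illustration of why that matters (this is just an analogy for the concept, not how Vertica actually stores anything): keep one copy of the data sorted by time and another sorted by host, and either kind of lookup becomes a cheap binary search instead of a full scan.

import bisect

# The same events kept as two "copies" with different sort orders.
events = [
    ("2011-02-24 20:15:01", "web01", "disk warning"),
    ("2011-02-24 20:15:05", "db02", "replication lag"),
    ("2011-02-24 20:16:12", "web01", "disk warning cleared"),
    ("2011-02-24 20:17:44", "lb01", "health check failed"),
]

by_time = sorted(events, key=lambda e: e[0])   # copy #1: sorted by timestamp
by_host = sorted(events, key=lambda e: e[1])   # copy #2: sorted by host

def events_since(ts):
    """Range query answered from the time-sorted copy via binary search."""
    keys = [e[0] for e in by_time]
    return by_time[bisect.bisect_left(keys, ts):]

def events_for_host(host):
    """Point query answered from the host-sorted copy via binary search."""
    keys = [e[1] for e in by_host]
    return by_host[bisect.bisect_left(keys, host):bisect.bisect_right(keys, host)]

print(events_since("2011-02-24 20:16:00"))   # the two events from 20:16 onwards
print(events_for_host("web01"))              # both web01 events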

Something that Tableau did really well was that you don’t need to know how you want to present your data; you just drag stuff around and it will try to make intelligent decisions about how to represent it. It’s amazingly flexible.

Tableau does something else well: there is no language to learn. You don’t need to know SQL, you don’t need to know custom commands to do things; the guy giving the presentation basically never touched his keyboard. And he published some really kick-ass reports to the web in a matter of seconds, fully interactive, so users could click on something and drill down really easily and quickly.

This is all with the caveat that I don’t know how complicated it might be to get the data into the database in the first place.

Maybe there are other products out there that are as easy to use as Tableau, I don’t know, as it’s not a space I spend much time looking at. But this combination looks incredibly exciting.

Both products have fully functional free evaluation versions available to download on the respective sites.

Vertica licensing is based on the amount of data that is stored (I assume regardless of the number of copies stored, but I haven’t investigated too much); there is no per-user, no per-node, no per-CPU licensing. If you want more performance, add more servers and you don’t pay anything more. Vertica automatically re-balances the cluster as you add more servers.

Tableau is licensed as far as I know on a named-user basis or a per-server basis.

Both products are happily supported in VMware environments.

This blog entry really does not do the presentation justice; I don’t have the words for how cool this stuff was to see in action. There aren’t a lot of products or technologies that I get this excited about, but these two have shot to near the top of my list.

Time to throw your Hadoop out the window and go with Vertica.

16-core 3.5GHz Opterons coming?

Filed under: News — Tags: , — Nate @ 11:32 am

Was just reading an article from our friends at The Register with some news on the upcoming Opteron 6200 (among other chips). It seems AMD is cranking up both the core counts and clock speeds in the same power envelope; the smaller manufacturing process certainly helps! I think they’re going from 45nm to 32nm.

McIntyre said that AMD was targeting clock speeds of 3.5 GHz and higher with the Bulldozer cores within the same power envelop as the current Opteron 4100 and 6100 processors.

Remember that the 6200 is socket compatible with the 6100!

Can you imagine a blade chassis with 512 x 3.5GHz CPU cores and 4TB of memory in only 10U of space, drawing roughly 7,000 watts peak? Seems unreal, but it sounds like it’s already on its way.
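The back-of-the-envelope math behind those numbers, assuming a c7000-style 10U enclosure with eight full-height quad-socket blades (the per-blade memory and power figures are my own assumptions):

# Rough math behind the "512 cores / 4TB / 10U / ~7,000W" claim.
blades_per_enclosure = 8        # full-height blades in a 10U c7000-class chassis
sockets_per_blade = 4           # e.g. a BL685c-style quad-socket blade
cores_per_socket = 16           # the rumored 16-core Opteron 6200
memory_per_blade_gb = 512       # assuming 32 DIMM slots x 16GB
watts_per_blade_peak = 875      # assumed peak draw per blade, incl. its share of fans/switches

total_cores = blades_per_enclosure * sockets_per_blade * cores_per_socket
total_memory_tb = blades_per_enclosure * memory_per_blade_gb / 1024
total_watts = blades_per_enclosure * watts_per_blade_peak

print(f"{total_cores} cores, {total_memory_tb:.0f}TB RAM, ~{total_watts}W in 10U")
# -> 512 cores, 4TB RAM, ~7000W in 10U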

February 23, 2011

Certifiably not qualified

Filed under: Random Thought — Tags: — Nate @ 10:12 am

What is it with people and certifications? I’ve been following Planet V12n for a year or more now and I swear I’ve never seen so many kids advertise how excited they are that they passed some test or gotten some certification.

Maybe I’m old and still remember the vast number of people out there with really pointless certs like MCSE and CCNA (at least the older versions of them; maybe they are better now). When interviewing people I purposely gave candidates negative marks if they had such low-level certifications. I remember one candidate even advertising he had his A+ certification, I mean come on!

I haven’t looked into the details behind VMware certification; I’m sure the process of getting the certs has some value (to VMware, who cashes in), but certifications still carry a seriously negative stigma with me.

I hope the world of virtualization and “cloud” isn’t in the process of being overrun with unqualified idiots, much like the dot com / Y2K days were overrun with MCSEs and CCNAs. What would be even worse is if it were the same unqualified idiots as before.

There’s a local shop in my neck of the woods that does VMware training. They do a good job in my opinion, it costs less, and you won’t get a certification at the end (but maybe you learn enough to take the test, I don’t know). My only complaint about their stuff is that they are too Cisco-focused on networking and too NetApp-focused on storage; it would be nice to see more neutral material, but I can understand they are a small shop and can only support so much. NetApp makes a good storage platform for VMware, I have to admit, but Cisco is just terrible in every way.

February 19, 2011

Flash not good for offline storage?

Filed under: Random Thought,Storage — Tags: , , — Nate @ 9:36 am

A few days ago I came across an article on Datacenter Knowledge that was talking about flash reliability. As much as I’d love to think that just because it’s solid state it will last much longer, real-world tests to date haven’t shown that to be true in many cases.

I happened to have the manual for the Seagate Pulsar SSD open on my computer, and just saw something that was really interesting to me; on page 15 it says:

As NAND Flash devices age with use, the capability of the media to retain a programmed value begins to deteriorate. This deterioration is affected by the number of times a particular memory cell is programmed and subsequently erased. When a device is new, it has a powered off data retention capability of up to ten years. With use the retention capability of the device is reduced. Temperature also has an effect on how long a Flash component can retain its programmed value with power removed. At high temperature the retention capabilities of the device are reduced. Data retention is not an issue with power applied to the SSD. The SSD drive contains firmware and hardware features that can monitor and refresh memory cells when power is applied.

I am of course not an expert in this kind of stuff, so I was operating under the assumption that if the data is written then it’s written, and won’t get “lost” if the drive is turned off for an extended period of time.

Seagate rates their Pulsar to retain data for up to one year without power at a temperature of 25 C (77 F).

Compare that to what tape can do: 15-30 years of data retention.

Not that I think SSDs are a cost-effective way to do backups!

I don’t know what other manufacturers’ drives can do, and I’m not picking on Seagate, but I found that tidbit really interesting.

(I originally had the manual open to try to find reliability/warranty specs on the drive to illustrate that many SSDs are not expected to last multiple decades as the original article suggested).

February 15, 2011

IBM Watson does well in Jeopardy

Filed under: General — Nate @ 10:17 am

I’m not a fan of Jeopardy and don’t really watch game shows in general, though I do miss the show Greed (I think it was called), which was on about 10 years ago for a brief time.

I saw a few notes yesterday on how Watson was going to compete and I honestly wasn’t all that interested for some reason, but I was reading the comments on the story at The Register and someone posted a link (part 1, part 2) to it on YouTube, and I started watching. I couldn’t stop; the more I saw, the more it interested me.

It really was amazing to me to see some of the brief history behind it and how it evolved, and it was even more exciting to see such innovation still occurring. I really gotta give IBM some mad props for doing that sort of thing; it’s not the first time they’ve done it of course, but in an age where we are increasingly thinking shorter and shorter term, it’s really inspiring (I think that is the word I’m looking for) to see an organization like IBM invest the time and money over several years to do something like this.

Here are the questions and answers from the show (as usual I could answer less than 10% of them), and here is more information on Watson.

My favorite part of the show, aside from the historical background, was when Watson gave the same wrong response that another contestant had just given (though Watson was unable to hear or see anything, so I can’t fault it for that, but it was a funny moment).

Thanks IBM – keep up the good work!

(Maybe it’s just me, but the avatar that Watson has cycles through a lot of little expanding circles, which reminds me of WarGames and the computer in that movie running nuclear war simulations.)

February 14, 2011

Lackluster FCoE adoption

Filed under: Networking — Tags: — Nate @ 9:22 pm

I wrote back in 2009 (wow, was it really that long ago?), in one of my first posts, about how I wasn’t buying into the FCoE movement; at first glance it sounded really nice, until you got into the details and that’s when it fell apart. Well, it seems I’m not alone: not long ago in an earnings announcement Brocade said they were seeing lackluster FCoE adoption, lower than they expected.

He discussed what Stifel’s report calls “continued lacklustre FCoE adoption.” FCoE is the running of Fibre Channel storage networking block-access protocol over Ethernet instead of using physical Fibre Channel cabling and switchgear. It has been, is being, assumed that this transition to Ethernet would happen, admittedly taking several years, because Ethernet is cheap, steamrolls all networking opposition, and is being upgraded to provide the reliable speed and lossless transmission required by Fibre Channel-using devices.

Maybe it’s just something specific to investors. I was at a conference for Brocade products, I think it was in 2009 even, where they talked about FCoE among many other things, and if memory serves they didn’t expect much out of FCoE for several years, so maybe it was management higher up that was setting the wrong expectations or something, I don’t know.

Then more recently I saw this article posted on Slashdot which basically talks about the same thing.

Even today I am not sold on FCoE. I do like Fibre Channel as a protocol but don’t see a big advantage at this point to running it over native Ethernet. These days people seem to be consolidating on fewer, larger systems; I would expect the people more serious about consolidation are using quad-socket systems and much, much larger memory configurations (hundreds of gigs). You can power that quad-socket system with hundreds of gigs of memory with a single dual-port 8Gbps Fibre Channel HBA. Those that know about storage and random I/O understand more than anyone how much I/O it would really take to max out an 8Gbps Fibre Channel card; you’re not likely to ever really manage to do it with a virtualization workload, or even with most database workloads. And if you do, you’re probably running at a 1:1 ratio of storage arrays to servers.
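To put rough numbers on that (the block size and per-disk IOPS figures below are my own assumptions, nothing vendor-specific):

# How much random I/O it takes to saturate a single 8Gbps Fibre Channel port.
fc_usable_bytes_per_sec = 800 * 1000 * 1000   # ~800MB/s usable per 8Gbps port after encoding overhead
io_size_bytes = 8 * 1024                      # assume 8KB random I/Os, typical of OLTP-style workloads

iops_to_saturate = fc_usable_bytes_per_sec / io_size_bytes
print(f"~{iops_to_saturate:,.0f} random 8KB IOPS to fill one 8Gbps port")    # ~97,656

# And roughly how many spindles that implies on the back end.
iops_per_15k_disk = 180                       # rough figure for a 15K RPM drive
print(f"~{iops_to_saturate / iops_per_15k_disk:.0f} x 15K disks behind it")  # ~540 disks

Call it on the order of a hundred thousand small random IOPS to fill one port of one HBA, which is several hundred fast spindles’ worth of back-end disk.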

The cost of the Fibre Channel network is trivial at that point (assuming you have more than one server). I really like the latest HP blades because you just get a ton of bandwidth options with them right out of the box. Why stop at running everything over a measly single dual-port 10GbE NIC when you can have double the NICs, AND throw in a dual-port Fibre Channel adapter for not much more cash? Not only does this give more bandwidth, but more flexibility and traffic isolation as well (storage/network etc). On the blades at least it seems you can go even beyond that (more 10gig ports), though I was reading in one of the spec sheets for the PCIe 10GbE cards that on the ProLiant servers no more than two adapters are supported:

NOTE: No more than two 10GbE I/O devices are supported in a single ProLiant server.

I suspect that NOTE may be out of date with the more recent ProLiant systems that have been introduced; after all, they are shipping a quad-socket Intel ProLiant blade with three dual-port 10GbE devices on it from the get go. And I can’t help but think the beast DL980 has enough PCI busses to handle a handful of 10GbE ports. The 10GbE FlexFabric cards list the BL685c G7 as supported as well, meaning you can get at least six ports on that blade too. So who knows…

Do the math; the added cost of a dedicated Fibre Channel network really is nothing. Now if you happen to go out and choose the most complicated-to-manage Fibre Channel infrastructure along with the most complicated Fibre Channel storage array(s), then all bets are off. But just because there are really complicated things out there doesn’t mean you’re forced to use them, of course.

Another factor is staff, I guess. If you have monkeys running your IT department, maybe Fibre Channel is not a good thing and you should stick to something like NFS, and you can secure your network by routing all of your VLANs through your firewall while you’re at it, because you know your firewall can keep up with your line-rate gigabit switches, right? Riiight.

I’m not saying FCoE is dead, I think it’ll get here eventually, I’m not holding my breath for it though, it’s really more of a step back than a step forwards with present technology.

Vertica snatched by HP

Filed under: News — Tags: , , — Nate @ 9:00 pm

Funny timing! One of my friends who used to work for 3PAR left not long after HP completed the acquisition and went to Vertica, which is a scale-out, column-based, distributed high-performance database. It’s certainly not an area I am well versed in, but I got a bit of info a couple weeks ago and the performance numbers are just outstanding, the kind of performance gains that you probably have to see to believe. Fortunately for users their software is free to download, and it sounds like it is easy to get up and running (I have no personal experience with it, but would like to see it in action at some point soon). Performance gains of up to 10,000% over traditional databases are not uncommon.

It really sounds like an awesome product that can do more real-time analysis on large amounts of data (from a few gigs to over a petabyte). Something that Hadoop users out there should take notice of. If you recall, last year I wrote a bit about organizations I have talked to that were trying to do real time with Hadoop, with (most likely) disastrous results; it’s not built for that and never was, which is why Google abandoned it (well, not Hadoop, since they never used the thing, but MapReduce technology in general, at least as far as their search index is concerned; they may use it for other things). Vertica is unique in that it is the only product of its kind in the world with a software connector that can connect Hadoop to Vertica. Quite a market opportunity. Of course a lot of the PHB types are attracted to Hadoop because it is a buzzword and because it’s free. They’ll find out the hard way that it’s not the holy grail they thought it was going to be and go to something like Vertica kicking and screaming.

So back to my friend, he’s back at HP again, he just couldn’t quite escape the gravitational pull that was HP.

Also somewhat funny, as it wasn’t very long ago that HP announced a partnership with Microsoft to do data warehousing applications. Sort of reminds me of when NetApp tried to go after Data Domain; mere days before they announced their bid, they put out a press release saying how good their dedupe was.

Oh and here’s the news article from our friends at The Register.

The database runs in parallel across multiple machines, but has a shared-nothing architecture, so the query is routed to the data and runs locally. And the data for each column is stored in main memory, so a query can run anywhere from 50 to 1,000 times faster than a traditional data warehouse and its disk-based I/O – according to Vertica.

The Vertica Analytics Database went from project to commercial status very quickly – in under a year – and has been available for more than five years. In addition to real-time query functions, the Vertica product continuously loads data from production databases, so any queries done on the data sets is up to date. The data chunks are also replicated around the x64-based cluster for high availability and load balancing for queries. Data compression is heavily used to speed up data transfers and reduce the footprint of a relational database, something on the order of a 5X to 10X compression.
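As a toy illustration of why sorted column data compresses so well, here is plain run-length encoding in Python (Vertica’s actual encodings are obviously more sophisticated, and the column values are made up):

from itertools import groupby

# A low-cardinality column (say, an order "status") as a row store would see it...
column = ["active"] * 400_000 + ["closed"] * 350_000 + ["pending"] * 250_000

# ...and the same column run-length encoded once it is stored sorted.
rle = [(value, sum(1 for _ in run)) for value, run in groupby(column)]

print(len(column))  # 1000000 values
print(rle)          # [('active', 400000), ('closed', 350000), ('pending', 250000)]

A million values collapse into three (value, count) pairs, and a query scanning that column barely has to touch the disk.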

Vertica’s front page now has a picture of a c-Class blade enclosure. Just think of what you could analyze with an enclosure filled with 384 x 2.3GHz Opteron 6100 cores (which were released today as well, and HP announced support for them on my favorite BL685c G7) and 4TB of memory, all squeezed into 10U of space.

If you’re in the market for a data warehouse / BI platform of sorts, I urge you to at least see what Vertica has to offer. It really does seem revolutionary, and they make it easy enough to use that you don’t need an army of PhDs to design and build it yourself (i.e. Google).

Speakin’ of HP, I did look at what the new Palm stuff will be and I’m pretty excited; I just wish it was going to get here sooner. I went out and bought a new phone in the interim until I can get my hands on the Pre 3 and the TouchPad. My Pre 1 was not even on its last legs, it was in a wheelchair with an oxygen bottle. The new phone isn’t anything fancy, just a feature phone, but it does have one thing I’m not used to having: battery life. The damn thing can easily go 3 days and the battery doesn’t even go down by 1 bar. And I have heard from folks that the new stuff will be available on Sprint, which makes me happy as a Sprint customer. Still didn’t take a chance and extend my contract, just in case that changes.

February 8, 2011

New WebOS announcements tomorrow

Filed under: Events,Random Thought — Tags: , — Nate @ 9:11 pm

Looking forward myself to the new WebOS announcements coming from HP/Palm, which seem to be set for about noon tomorrow. I’ve been using a Palm Pre for almost two years now, I think, and recently the keyboard on it stopped working, so I’m hoping to see some good stuff announced tomorrow. Not sure what I will do; I don’t trust Google or Apple or Microsoft, so for smartphones it’s Palm and BlackBerry. WebOS is a really nice software platform; from a user-experience standpoint it’s quite polished. I’ve read a lot of complaints about the hardware from some folks, though until recently my experience has been pretty good. As an email device the BlackBerry rocked, though I really don’t have to deal with much email (or SMS for that matter).

Maybe I’ll go back to a ‘feature phone’ and get a WebOS tablet, combined with my 3G/4G MiFi, and use that as my web-connected portable device or something. My previous Sanyo phones worked really well. Not sure where I’m at with my Sprint contract for my phone, and Sprint no longer carries the Pre and doesn’t look like it will carry the Pre 2. I tried the Pixi when it first came out, but the keyboard keys were too small for my fingers even when using the tips of my fingers.

I found a virtual keyboard app which lets me hobble along on my Pre in the meantime while I figure out what to do.

February 6, 2011

Debian 6.0 Released

Filed under: linux,News — Nate @ 8:49 pm

I’ve been using Debian since version 2.0 back in 1998; it is still my favorite distribution for systems that I maintain by hand (instead of using fancier automated tools), mainly because of the vast number of packages and ease of integration. I still prefer Red Hat for “work” stuff and anything larger than small-scale installations.

Anyways, I happened to peek in on the progress they were making a few days ago, and they were down to something like 7 release-critical bugs, so I was kinda-sorta excited that another new revision was coming out. I remember some of the leader(s) back in 2009 set some pretty aggressive targets for this version of Debian; like most people out there I just laughed and knew it wasn’t achievable. I’m patient: release when it’s ready, not before. Debian was pretty quick to say they weren’t official targets (I believe), more like best-effort estimates. For some reason this particular Debian press release is not on their main site, maybe a hiccup in the site redesign, as the news-from-2009 page shows a bunch of stuff from 2008.

Almost a year after that original goal, Debian 6.0 is here. To be honest I’m not sure what all is really new; of course there are a lot of updated packages and such, but Linux has pretty much gotten to the point where it’s good enough for me. The only thing I really look forward to in upgrades is better hardware support (and even then that’s just for my own laptops/desktops etc; otherwise everything I run is in a virtual machine and hardware support has never been an issue there).

Normally I’m not one to upgrade right away, but today was a different day; maybe it was the weather, maybe it was just waiting for the Super Bowl to come on (watching it now, paused on TiVo while I write this). But I decided to upgrade my workstation at home today: more than 1,000 package updates, and for the first time in a decade the installation instructions recommended a reboot mid-upgrade. The upgrade went off without a hitch. My desktop isn’t customized much: I re-installed my Nvidia driver, told VMware Workstation to rebuild its kernel drivers, fired off X, and then went back to my laptop (my workstation is connected to my TV, so I have to decide which input I want to use; I’d like my next TV to have picture-in-picture, if any TVs out there still have that ability, it was pretty popular back in the… 80s?).

My workstation, for reference:

  • HP xw9400 Workstation
  • 2 x Opteron 2380 CPUs (8 cores total)
  • 12GB ECC memory
  • Nvidia GeForce GT 240 (what lspci says at least)
  • 3Ware 9650 SE SATA RAID controller with battery-backed write-back cache
  • 4x2TB Western Digital green (I think) drives in RAID 1+0
  • 1x64GB Corsair SSD (forgot what type) for OS

I got a really good deal on the base system at the time. I bought it through HP’s refurb dept for a configuration that retailed brand new on their own site for about $5,000 (note that is not the above config; I have added a bunch to it); my cost was about $1,500, and that included a 3-year warranty. I wanted something that should last a good long time, and of course it’s connected to an APC Smart-UPS, gotta have that sine-wave power…

I have had my eye on Debian‘s kFreeBSD port for some time, and I decided, what the hell, let’s try that out too. I have two Soekris boxes (one is a backup), so I took the one that was not in use, put a fresh CompactFlash card in there, and poked around for how to install Debian kFreeBSD on it, because you know I hate the BSD userland but really like to use pf.

First off, I did get it working… eventually!

kFreeBSD is a technology preview, not a fully supported release, so it is rough around the edges. Documentation for what I wanted to do was sparse at best, and there seemed to be only one other person trying this out on a Soekris box, so the mailing list thread from nearly a year ago was helpful.

Official documentation was lacking in a few areas:

  • Documentation on how to set up the TFTP server was mostly good, except it wasn’t exactly easy to find the files to use; I had to poke around quite a bit to find them.
  • There was no documentation on how to enable the serial console for the installer; there was no mention of serial console at all except for here, and no mention of how to set those various variables.
    • For those that want to know, you need to edit grub.cfg (Debian 6.0 uses GRUB 2 now, which I guess is good but it’s more confusing to me) and add the parameters -D -h to the kernel line, for example:
menuentry "Default install" {
 echo "Loading ..."
 kfreebsd /kfreebsd.gz -D -h
 kfreebsd_module /initrd.gz type=mfs_root
 set kFreeBSD.vfs.root.mountfrom=ufs:/dev/md0
 set DEBIAN_FRONTEND=text
}

I tried setting the DEBIAN_FRONTEND variable as you can see, but it didn’t seem to do anything, the installer behavior was unchanged from the default.

It took me a significant amount of time to figure out that I could not use minicom to install Debian kFreeBSD; instead I had to use cu (something I’ve never used before). I’ve used minicom for everything from switches, to routers, to load balancers, to OpenBSD installs, to Red Hat Linux installs (I had never tried to install Debian over serial until today). But on Debian kFreeBSD the terminal emulation is not compatible between minicom and the installer; the result was I could never get past the assign-a-hostname screen, it just kept sending random escape characters setting me back to previous screens. It was pretty frustrating.

Since there is no VGA port on the Soekris I did a TFTP net install over serial console, and when it came to installing the various base packages it took forever. I think at least part of it is due to the CF card being put in PIO mode instead of DMA mode, though looking at my OpenBSD Soekris system it says it is using PIO mode 4 too. I am using the same model and size of CF card in both systems; I specifically used this one (Lexar 1GB, have had it for 5-6 years) because it seemed to run really fast on my systems, whereas my Kingston CF cards ran like dogs. Anyways, it took upwards of two hours to install the base packages (around ~400MB installed). Doing the same in a VMware VM took about 5 minutes tops (much faster system, mind you).

I chose to install the base operating system along with the SSH option (which I swear was “SSH server”). And everything installed.

Then I rebooted and was greeted by a blank screen where GRUB should be. It took a little time to figure it out, but I managed to edit the PXE GRUB configuration so that it would boot my local CF card over the serial port.

So there we go, the kFreeBSD kernel is booting on the Soekris –

Copyright (c) 1992-2010 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
 The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
#0 Tue Jan  4 16:41:50 UTC 2011 i386
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: Geode(TM) Integrated Processor by AMD PCS (499.91-MHz 586-class CPU)
 Origin = "AuthenticAMD"  Id = 0x5a2  Family = 5  Model = a  Stepping = 2
 Features=0x88a93d<FPU,DE,PSE,TSC,MSR,CX8,SEP,PGE,CMOV,CLFLUSH,MMX>
 AMD Features=0xc0400000<MMX+,3DNow!+,3DNow!>
real memory  = 536870912 (512 MB)
avail memory = 511774720 (488 MB)
module_register_init: MOD_LOAD (vesa, 0xc0952d8e, 0) error 19
kbd1 at kbdmux0
K6-family MTRR support enabled (2 registers)
ACPI Error: A valid RSDP was not found (20100331/tbxfroot-309)
ACPI: Table initialisation failed: AE_NOT_FOUND
ACPI: Try disabling either ACPI or apic support.
pcib0: <Host to PCI bridge> pcibus 0 on motherboard
[..]

And a bunch of services started, including PostgreSQL (?!?!), and then it just sat there. No login prompt.

I could ping it but could not SSH to the system; the only port open was the portmapper. I had told it to install SSH-related things (I forget exactly what the menu option was, but I find it hard to believe there would be an OpenSSH client option and not a server option; I can go back and look, maybe later).

So, now I was stuck. I rebooted back into the installer and had some trouble mounting the CF card in the rescue shell but managed to do it. I chroot’d into the mount point, enabled the serial console per the examples in /etc/inittab, and used apt-get to install OpenSSH, only that failed; some things weren’t properly configured for the SSH setup to complete. So I thought… and thought…

Telnet to the rescue! I haven’t used telnet on a server in I don’t know how many years, probably since I worked at a Unix software company in 2002 where we had a bunch of different Unixes and most did not have SSH. Anyways, I installed telnet on the system via the chroot, unmounted the file system, and rebooted, and the system came up, but still no login prompt on the serial console. Fortunately I was able to telnet to the thing, install SSH along with a few other packages, and remove PostgreSQL; I do not want to run a SQL database on this tiny machine.

I did more futzing around trying to get DMA enabled on the CF card to see if that would make it go faster, to no avail. top does not report any I/O wait, but I think that is a compatibility issue rather than there not being any I/O wait on the system.

After poking around more I determined why the login prompt wasn’t appearing on the serial console: the examples in /etc/inittab were not right, at least not for the Soekris; I can’t speak to other platforms. They mention using /dev/ttyd0 when in fact I have to use /dev/ttyu0 (see the example line below). Oh, and another thing on serial console and this kFreeBSD: from what I read, setting a custom baud rate (other than the default 9600) is difficult if not impossible. I have not tried, so instead I changed the Soekris default baud rate from 19200 to 9600.
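For reference, a getty line along these lines is what makes the serial console login work; the runlevels, speed and terminal type here are just the stock Debian example values (my assumption), the important part being ttyu0 rather than ttyd0:

T0:23:respawn:/sbin/getty -L ttyu0 9600 vt100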

I also did some hand editing of grub.cfg to enable serial console in grub and stuff, because I was unable to figure out how to do it in the grub v2 templates.

So all in all, it certainly feels like a technology preview, very very very rough around the edges, but I’m sure it will get there in time. My own needs are really minimal; I run a tiny set of infrastructure services on my home firewalls like DHCP, DNS, OpenVPN, Network UPS Tools and stuff, no desktop, no web servers, nothing fancy. So I can probably use this to replace my OpenBSD system; I will test pf out maybe next weekend, I’ve spent enough time on it for now.

root@ksentry:~# cat /etc/debian_version
6.0
root@ksentry:~# uname -a
GNU/kFreeBSD ksentry 8.1-1-486 #0 Tue Jan  4 16:41:50 UTC 2011 i586 i386 Geode(TM) Integrated Processor by AMD PCS GNU/kFreeBSD

February 2, 2011

Oh no! We Ran outta IPs yesterday!

Filed under: Networking,Random Thought — Nate @ 9:37 pm

The Register put it better than I could:

World shrugs as IPv4 addresses finally exhausted

Count me among those that shrugged; I commented on this topic a few months ago.
