TechOpsGuys.com Diggin' technology every day

March 2, 2011

Innovation Unleashed

Filed under: Random Thought — Nate @ 11:13 pm

This has really nothing to do with IT, but it has to do with innovation, and my three readers know I like innovation, whether it is in IT systems or other technology.

So, in a nutshell, I bought a new car this past weekend. I'm very happy and excited about it; it's really the first new car I've owned, since past vehicles I've always bought used.

The tag line for the car is Innovation Unleashed.

My previous vehicle had 113,000 miles on it and was 10 years old. The check engine light seemed to be coming on once every 3-4 months and I was getting tired of it. Bottom line: if I had known how much it was going to cost to maintain for the next two years I would have been happy, but for all I knew it could be another $5k in repairs and parts. I'm not a car guy.

So a couple of weeks ago the check engine light came on again and I started thinking about the possibility of a new car. I wanted:

  • Something I could fit into; I need leg room, I'm not a small person
  • Something that was smallish on the outside, so it's easier to park than my previous SUV
  • An SUV of sorts; I didn't want to have to seriously climb down into a really low-riding car
  • Something that was more fun to drive (more speed for passing)

So after some research, and a few test drives…

I decided on the 2011 Nissan Juke SV. There is a good video-review of it here.

Those are generic pictures, not of my car specifically.

Innovations in the Juke

First off, let me start by saying cars have come a long way since I last really looked at them; I mean, features that I would have expected on $50,000+ cars now seem standard on cars that cost half as much.

Torque Vectoring All Wheel Drive

That just sounds cool, doesn't it? Anyways, I learned something new from this buying experience (again, I'm not a car person so I don't keep up to date on this stuff). Traditional all wheel drive systems transfer power between the front and back wheels to increase traction. That much I knew of course.

Torque vectoring all wheel drive goes one step further: in addition to front and back, it can control power side to side as well, so individual wheels can have their power levels adjusted for maximum control. By default the car tries to stay in 2WD mode to improve fuel economy, but of course it automatically switches to 4WD/AWD when it feels a disturbance in the force and needs more traction.

Here is a video that shows it in action.

This really does make it pretty fun to drive; you can make some crazy tight turns and it doesn't seem to lose any grip.

In between the gauges for MPH and RPM is a dynamic LCD with many modes. One of them shows real-time information about which wheels are getting power applied to them, so you can see that when making tight turns one wheel typically has almost all of its power removed while the others get more.

I-CON System

The I-CON, or Integrated Control, system sits just below the stereo / navigation system and is a really neat, very easy to use way to control the car.

The same set of controls manages two different modes: either climate controls, or driver controls which change how the car performs. The same buttons and interface are used and the functions change seamlessly at the touch of a button. Here is part of a video of it in action.

Climate controls are pretty typical, hot/cold, fan speed etc. The graphics on the LCD are neat to see though.

Driver mode is a bit different though; there are 3 modes: Normal, Sport, and Eco. Changing modes adjusts a few things dynamically in the car, tuning it more towards sporty driving or more towards fuel economy. At the moment I like Sport mode; the only downside is that the car defaults to Normal whenever it starts, so I have to manually switch it to Sport each time since it doesn't remember the last mode it was in.

Then there are more, how can I say, cosmetic things the LCD displays, such as:

  • Boost level (the car is turbocharged)
  • Torque level
  • Fuel economy information (MPG over the last X number of starts, or last X number of days etc)
  • G-force information

Performance and Fuel Economy

The car has a 4-cylinder, direct-injected, turbocharged gas engine. To me that says it has a smaller engine for fuel economy but a turbocharger for performance, so you get a balance of both. It really works well together.

The official specs are 188 horsepower and 177 pound feet of torque. If only they could give me a number that measured performance in IOPS…

Fuel economy for the AWD version is 25 city, 30 highway, the FWD version gets slightly better economy. My previous vehicle was 12 city, 17 highway, so I’m coming out ahead in either case! Not that fuel economy is at the top of my list of priorities.

Other misc features

It has a standard (but to me fancy, since I don't think I've ever used such a system before) keyless entry and operation system with a push-button starter. I put a big fancy audio system in it with multiple amps and a subwoofer (which turned out a lot bigger than I expected, and got a custom fiberglass enclosure), plus a high-end navigation system (which is Windows based; it's already crashed on me once and I had to turn the car off then on again to reboot it, though there might be another way to reboot it, I'm not sure), and an aftermarket backup camera (again, heard of them, never used one before).

It comes standard with a CVT, or Continuously Variable Transmission, which does not have traditional fixed gears; instead it varies the gear ratio continuously, which provides for smoother shifting. AWD models are automatic only; a manual transmission is not available. But even the automatic has a manual mode, which emulates a six speed transmission. The only thing lacking is paddle shifters…some day hopefully someone will come out with some. I do prefer manual, but if I have to make a choice between AWD and manual I'll take AWD. My last vehicle didn't have the best of traction (even with new tires) on slick surfaces.

It comes standard with 17″ wheels.

The transmission comes standard with a 120,000 (or is it 110,000?) mile / 10 year warranty, and I opted for the 100,000 mile / 7 year extended warranty as well. Cars are so complicated now, and given this is the first model year for this car and it has a lot of brand new things, who knows what might break in the coming years or how much it'll cost to fix.

How it drives

It's a mean little car with some solid power to it. I haven't pushed it too hard yet; the manual says to keep it under 4,000 RPM for the first 1,200 miles (I have about 400 on it now), so I'm doing my best to stay under that. It's sometimes unavoidable with the turbo though: since there is some lag before the turbo kicks in, the RPMs tend to spike really high, so I try to back off quickly so it doesn't stay above 4,000 RPM for more than a couple of seconds.

I can’t help but think I’m driving a cross between a Prius and a Porsche.

The sound system is pretty amazing and the navigation system is nice. I have no sense of direction, so navigation is a must. For the past few years I have been using Sprint Navigation on my various phones; it gets the job done but it's certainly not as nice as an in-dash unit, especially one that doesn't rely on a 3G signal. That requirement has screwed me up on Sprint Navigation more than once, since it needs 3G connectivity to get map data.

It has a really tight turning radius and is significantly shorter than all the other SUV-type vehicles on the road, so it's easy to park. Despite its small exterior it has a lot of space in the front seats. The back seats are cramped, as is the trunk, but all I really care about is the front seats.

What’s missing

Nothing is perfect, and the Juke is no exception; there are a few minor things I would like to see:

  • Paddle shifter option (mentioned above)
  • Some sort of compartment to put sunglasses in
  • An arm rest for the driver's right arm

Only Complaint

This isn't about the car itself but rather the process of buying it. For the most part it went very smoothly and I was very happy with the service I got. When trading in my previous vehicle, the sales rep came back and said there was an accident reported on my vehicle by carfax and that it would lower the resale value. I asked him: What? Why is there an accident reported? He said there just is, so I asked to see the report, and there it was.

I bought the vehicle used in late January 2004 in Washington. I have traveled to Washington, Oregon, California and Arizona in it, that’s it.

So you can imagine my surprise when he said there was an accident reported on my vehicle in New York. In 2008. I've never been to New York, and I never intend to visit that city in my lifetime (too crowded). So I was kind of confused. I owned the car in 2008, and it never got further east than Arizona.

I of course ran a carfax report when I bought the vehicle in 2004, and it came back clean. So I naturally wasn’t too happy.

It turns out that my vehicle came from New Jersey and was sold at an auction in 2004 in the northwest. So I can only assume, for some really stupid #$%@ reason, that someone decided to wait 4-5 years before reporting it to whatever system carfax uses to get its data. I mean, I can understand a few months, six months, maybe a year, but practically half a decade? That's not right. Maybe it was a mistake, I don't know. It cost me about $1000 in value though.

I'm sure there may be things I could have done to contest it, but I just wanted to be done with the whole situation, so I said screw it, I don't care, put it behind me and move on. So I did.

Overall

Overall I am very satisfied with the Juke so far (I've only been driving it for 4 days now), and it is a good value (the base price of the version I got is roughly $24,000). It's small enough for easy parking and has good space up front (I compared it to much larger SUVs and it has comparable or even better space, for the driver's seat at least). I can't wait to take it on some kind of road trip, at least a couple hundred miles; that will be fun.

While I have seen some comments online about how some people hate the way it looks (for whatever reason), I think it looks fine, and so far everyone I have come across really likes it as well, so I wouldn't be surprised if it became a really successful model for Nissan, especially given its low cost.

The Juke looks even meaner at night, with the various gauges and the illuminated kick plates that have the Juke logo.

At the moment Jukes are made only in Japan and imported to the U.S., and supplies are tight. In fact there was only one Juke SV without a factory navigation system (remember, I put in an aftermarket system) in the entire northwest region, and it's the one I bought. It came from somewhere in Oregon; they managed to get it to the dealership here in a matter of hours and I picked it up the next day. My dealership didn't even have an AWD model to test drive, so my test drives were only on FWD.

Compellent gets Hyper efficient storage tiering

Filed under: Storage — Nate @ 9:24 am

So according to this article from our friends at The Register, Compellent is considering going to absurdly efficient storage tiering, taking the size of the data chunks being migrated down to 32kB from their current, already insanely efficient, 512kB.

That’s just insane!

For reference, as far as I know:

  • 3PAR moves data around in 128MB chunks
  • IBM moves data around in 1GB chunks (someone mentioned that XIV uses 1MB)
  • EMC moves data around in 1GB chunks
  • Hitachi moves data around in 42MB chunks (I believe this is the same data size they use for allocating storage to thin provisioned volumes)
  • NetApp has no automagic storage tiering functionality, though they do have PAM cards which they claim are better.

I have to admit I do like Compellent's approach the best here; hopefully 3PAR can follow. I know 3PAR allocates data to thin provisioned volumes in 16kB chunks; what I don't know is whether or not their system is adjustable enough to get down to a more granular level of storage tiering.

There’s just no excuse for the inefficient IBM and EMC systems though, really, none.

Time will tell if Compellent actually follows through with going as granular as 32kB; I can't help but suspect the CPU overhead of monitoring so many chunks will be too much for the system to bear.
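To put that in perspective, here is a rough back-of-the-envelope sketch (Python) of how many chunks an array has to track per TB at each granularity. The 64 bytes of tracking metadata per chunk is a number I made up purely to illustrate scale:

# How many chunks does the array have to track per TB of data at each
# tiering granularity? Chunk sizes are from the list above; the 64 bytes
# of metadata per chunk is an assumed figure, just for illustration.

SIZES = {
    "Compellent (proposed) 32kB": 32 * 1024,
    "Compellent (current) 512kB": 512 * 1024,
    "Hitachi 42MB": 42 * 1024 ** 2,
    "3PAR 128MB": 128 * 1024 ** 2,
    "EMC / IBM 1GB": 1024 ** 3,
}

TB = 1024 ** 4
ASSUMED_METADATA_PER_CHUNK = 64  # bytes, hypothetical

for name, chunk_bytes in SIZES.items():
    chunks_per_tb = TB // chunk_bytes
    metadata_mb = chunks_per_tb * ASSUMED_METADATA_PER_CHUNK / 1024 ** 2
    print(f"{name:28s} {chunks_per_tb:>12,} chunks/TB  ~{metadata_mb:,.0f} MB metadata/TB")

Tracking tens of millions of chunks per TB, plus the access statistics needed to decide what to move, is a very different proposition from tracking a few thousand, which is why I wonder about the overhead.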

Maybe if they had a purpose-built ASIC…

 

February 24, 2011

So easy it could be a toy, but it’s not

Filed under: General, Random Thought — Nate @ 8:44 pm

I was at a little event thrown for the Vertica column-based database, as well as Tableau Software, a Seattle-based data visualization company. Vertica was recently acquired by HP for an undisclosed sum. I had not heard of Tableau until today.

I went in not really knowing what to expect; I have heard good things about Vertica from my friend over there, but it's really not an area I have much expertise in.

I left with my jaw on the floor. I mean, holy crap, that combination looks wicked: combining the world's fastest column-based data warehouse with a data visualization tool that is so easy some of my past managers could even run it. I really don't have words to describe it.

I never really considered Vertica for storing IT-related data, but they brought up a case study with one of their bigger customers, Comcast, who sends more than 65,000 events a second into a Vertica database (including logs, SNMP traps and other data): hundreds of terabytes of data with sub-second query response times. I don't know if they use Tableau's products or not, but it was a good use case for storing IT data in Vertica.

(from the Comcast case study)

The test included a snapshot of their application running on a five-node cluster of inexpensive servers with 4 CPU AMD 2.6 GHz core processors with 64-bit 1 MB cache; 8 GB RAM; and ~750 GBs of usable space in a RAID-5 configuration.
To stress-test Vertica, the team pushed the average insert rate to 65K samples per second; Vertica delivered millisecond-level performance for several different query types, including search, resolve and accessing two days’ worth of data. CPU usage was about 9%, with a fluctuation of +/- 3%, and disk utilization was 12% with spikes up to 25%.

That configuration could of course easily fit on a single server. How about a 48-core Opteron box with 256GB of memory and some 3PAR storage or something? Or maybe a DL385 G7 with 24 cores, 192GB of memory (24x8GB), and 16x500GB 10k RPM SAS disks in RAID 5 with dual SAS controllers, each with 1GB of flash-backed cache (1 controller per 8 disks)? Maybe throw some Fusion-io in there too?

Now I suspect there will be additional overhead in trying to feed IT data into a Vertica database, since you probably have to format it in some way.
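For example, here is a minimal sketch (mine, not from the case study) of what that formatting might look like: flattening log events into a pipe-delimited file that could then be bulk loaded with something like Vertica's COPY statement. The table layout and the load command in the comment are assumptions on my part:

import csv
import time

# Hypothetical example: flatten log/SNMP events into a pipe-delimited file
# that a columnar database like Vertica could bulk load. The column layout
# (event_time, host, source, severity, message) is assumed.

events = [
    {"host": "web01", "source": "syslog", "severity": "warn",
     "message": "disk 87% full"},
    {"host": "db02", "source": "snmp", "severity": "crit",
     "message": "fan failure"},
]

with open("events.psv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="|")
    for ev in events:
        writer.writerow([
            time.strftime("%Y-%m-%d %H:%M:%S"),  # event_time
            ev["host"], ev["source"], ev["severity"], ev["message"],
        ])

# The file could then be loaded in batches with something along the lines of
# (syntax from memory, check the Vertica docs):
#   COPY it_events FROM LOCAL 'events.psv' DELIMITER '|';

None of that is hard, but at 65,000 events a second the collection and batching pipeline becomes its own little engineering project.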

Another really cool feature of Vertica: all of its data is mirrored at least once to another server. Nothing special about that, right? Well, they go one step further and give you the ability to store the data pre-sorted in two different ways, so mirror #1 may be sorted by one field and mirror #2 sorted by another field, maximizing the usefulness of every copy of the data while maintaining data integrity.
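Conceptually it is something like this toy Python sketch (my illustration of the idea, not how Vertica actually implements it): the same rows kept in two copies, each pre-sorted on a different key, so a lookup on either key can search its own copy instead of scanning everything:

from bisect import bisect_left

# Toy illustration: the same rows stored twice, each copy pre-sorted by a
# different column, so queries on either column get a cheap ordered lookup.

rows = [("web01", 503), ("db02", 200), ("app03", 404), ("web02", 200)]

by_host = sorted(rows, key=lambda r: r[0])    # "mirror" #1, sorted by host
by_status = sorted(rows, key=lambda r: r[1])  # "mirror" #2, sorted by status code

def find_by_host(host):
    keys = [r[0] for r in by_host]
    i = bisect_left(keys, host)               # binary search the host-sorted copy
    return by_host[i] if i < len(keys) and keys[i] == host else None

print(find_by_host("db02"))   # fast lookup against the host-sorted copy
print(by_status[0])           # min/range queries are cheap on the status-sorted copy

The clever part is that Vertica does this with copies it already has to keep for redundancy, so the second sort order comes essentially for free.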

Something that Tableau did really well: you don't need to know ahead of time how you want to present your data, you just drag stuff around and it will try to make intelligent decisions about how to represent it. It's amazingly flexible.

Tableau does something else well: there is no language to learn. You don't need to know SQL, you don't need to know custom commands to do things; the guy giving the presentation basically never touched his keyboard. And he published some really kick-ass reports to the web in a matter of seconds, fully interactive, where users could click on something and drill down really easily and quickly.

This is all with the caveat that I don’t know how complicated it might be to get the data into the database in the first place.

Maybe there are other products out there that are as easy to use as Tableau, I don't know, as it's not a space I spend much time looking at. But this combination looks incredibly exciting.

Both products have fully functional free evaluation versions available to download on the respective sites.

Vertica licensing is based on the amount of data that is stored (I assume regardless of the number of copies stored, but I haven't investigated too much): no per-user, no per-node, no per-CPU licensing. If you want more performance, add more servers or whatever and you don't pay anything more. Vertica automatically re-balances the cluster as you add more servers.

Tableau is licensed as far as I know on a named-user basis or a per-server basis.

Both products are happily supported in VMware environments.

This blog entry really does not do the presentation justice; I don't have the words for how cool this stuff was to see in action. There aren't a lot of products or technologies that I get this excited about, but these have shot to near the top of my list.

Time to throw your Hadoop out the window and go with Vertica.

16-core 3.5GHz Opterons coming?

Filed under: News — Nate @ 11:32 am

I was just reading an article from our friends at The Register about some new details on the upcoming Opteron 6200 (among other chips). It seems AMD is cranking up both the core counts and the clock speeds in the same power envelope; the smaller manufacturing process certainly does help! I think they're going from 45nm to 32nm.

McIntyre said that AMD was targeting clock speeds of 3.5 GHz and higher with the Bulldozer cores within the same power envelope as the current Opteron 4100 and 6100 processors.

Remember that the 6200 is socket compatible with the 6100!

Can you imagine a blade chassis with 512 x 3.5GHz CPU cores and 4TB of memory in only 10U of space, drawing roughly 7,000 watts peak? Seems unreal, but it sounds like it's already on its way.
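Here is the quick math behind that; the blade, socket and DIMM counts are my assumptions about how you would get there:

# Hypothetical sizing check for a 10U blade enclosure full of 16-core
# Opteron 6200s. The blade/socket/DIMM counts below are assumptions.

blades_per_enclosure = 8      # full-height, 4-socket blades in a 10U chassis
sockets_per_blade = 4
cores_per_socket = 16         # the upcoming 16-core Opteron 6200
memory_per_blade_gb = 512     # e.g. 32 DIMM slots x 16GB DIMMs

cores = blades_per_enclosure * sockets_per_blade * cores_per_socket
memory_tb = blades_per_enclosure * memory_per_blade_gb / 1024

print(cores, "cores")          # 512 cores
print(memory_tb, "TB memory")  # 4.0 TB

Adjust the assumptions however you like; the point is that 512 cores and 4TB of memory in 10U is within reach of a single chassis.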

February 23, 2011

Certifiably not qualified

Filed under: Random Thought — Nate @ 10:12 am

What is it with people and certifications? I’ve been following Planet V12n for a year or more now and I swear I’ve never seen so many kids advertise how excited they are that they passed some test or gotten some certification.

Maybe I'm old and still remember the vast number of people out there with really pointless certs like MCSE and CCNA (at least the older versions of them; maybe they are better now). When interviewing people I purposely gave candidates negative marks if they had such low-level certifications; I remember one candidate even advertising that he had his A+ certification, I mean come on!

I haven't looked into the details behind VMware certification, and I'm sure the process of getting the certs has some value (to VMware, who cashes in), but certifications still carry a seriously negative stigma with me.

I hope the world of virtualization and "cloud" isn't in the process of being overrun with unqualified idiots, much like the dot com / Y2K days were overrun with MCSEs and CCNAs. What would be even worse is if it were the same unqualified idiots as before.

There's a local shop in my neck of the woods that does VMware training. They do a good job in my opinion, it costs less, and you won't get a certification at the end (but maybe you learn enough to take the test, I don't know). My only complaint is that they are too Cisco-focused on networking and too NetApp-focused on storage; it would be nice to see more vendor-neutral material, but I can understand they are a small shop and can only support so much. NetApp makes a good storage platform for VMware, I have to admit, but Cisco is just terrible in every way.

February 19, 2011

Flash not good for offline storage?

Filed under: Random Thought, Storage — Nate @ 9:36 am

A few days ago I came across an article on Datacenter Knowledge that was talking about flash reliability. As much as I'd love to think that just because it's solid state it will last much longer, real-world tests to date haven't shown that to be true in many cases.

I happened to have the manual for the Seagate Pulsar SSD open on my computer, and saw something in it that was really interesting to me. On page 15 it says:

As NAND Flash devices age with use, the capability of the media to retain a programmed value begins to deteriorate. This deterioration is affected by the number of times a particular memory cell is programmed and subsequently erased. When a device is new, it has a powered off data retention capability of up to ten years. With use the retention capability of the device is reduced. Temperature also has an effect on how long a Flash component can retain its programmed value with power removed. At high temperature the retention capabilities of the device are reduced. Data retention is not an issue with power applied to the SSD. The SSD drive contains firmware and hardware features that can monitor and refresh memory cells when power is applied.

I am of course not an expert in this kind of stuff, so I was operating under the assumption that if the data is written then it's written, and won't get "lost" if the drive is turned off for an extended period of time.

Seagate rates their Pulsar to retain data for up to one year without power at a temperature of 25 C (77 F).

Compare that to what tape can do: 15-30 years of data retention.

Not that I think that SSD is a cost effective method to do backups!

I don’t know what other manufacturers can do, I’m not picking on Seagate, but found the data tidbit really interesting.

(I originally had the manual open to try to find reliability/warranty specs on the drive to illustrate that many SSDs are not expected to last multiple decades as the original article suggested).

February 15, 2011

IBM Watson does well in Jeopardy

Filed under: General — Nate @ 10:17 am

I'm not a fan of Jeopardy and don't really watch game shows in general, though I do miss the show Greed (I think that's what it was called), which was on about 10 years ago for a brief time.

I saw a few notes yesterday about how Watson was going to compete and I honestly wasn't all that interested for some reason, but I was reading the comments on the story at The Register and someone posted a link (part 1, part 2) to it on YouTube, and I started watching. I couldn't stop; the more I saw, the more it interested me.

It really was amazing to me to see some of the brief history behind it and how it evolved, and it was even more exciting to see such innovation still occurring. I really gotta give IBM some mad props for doing that sort of thing. It's not the first time they've done it of course, but in an age where we are increasingly thinking shorter and shorter term, it's really inspiring (I think that's the word I'm looking for) to see an organization like IBM invest the time and money over several years to do something like this.

Here are the questions and answers from the show (as usual I could answer less than 10% of them), and here is more information on Watson.

My favorite part of the show, aside from the historical background, was when Watson gave the same wrong response that another contestant had just given (though Watson was unable to hear or see anything, so I can't fault it for that, but it was a funny moment).

Thanks IBM – keep up the good work!

(Maybe it's just me, but the avatar Watson has cycles through showing a lot of little circles expanding; it reminds me of WarGames and the computer in that movie running nuclear war simulations.)

February 14, 2011

Lackluster FCoE adoption

Filed under: Networking — Nate @ 9:22 pm

I wrote back in 2009 (wow, was it really that long ago?), in one of my first posts, about how I wasn't buying into the FCoE movement; at first glance it sounded really nice until you got into the details, and that's when it fell apart. Well, it seems that I'm not alone: not long ago in an earnings announcement Brocade said they were seeing lackluster FCoE adoption, lower than they expected.

He discussed what Stifel’s report calls “continued lacklustre FCoE adoption.” FCoE is the running of Fibre Channel storage networking block-access protocol over Ethernet instead of using physical Fibre Channel cabling and switchgear. It has been, is being, assumed that this transition to Ethernet would happen, admittedly taking several years, because Ethernet is cheap, steamrolls all networking opposition, and is being upgraded to provide the reliable speed and lossless transmission required by Fibre Channel-using devices.

Maybe it's just something specific to investors. I was at a conference for Brocade products, I think it was in 2009 even, where they talked about FCoE among many other things, and if memory serves they didn't expect much out of FCoE for several years. So maybe it was management higher up that was setting the wrong expectations or something, I don't know.

Then more recently I saw this article posted on Slashdot which basically talks about the same thing.

Even today I am not sold on FCoE. I do like Fibre Channel as a protocol, but I don't see a big advantage at this point to running it over native Ethernet. These days people seem to be consolidating on fewer, larger systems; I would expect the people more serious about consolidation are using quad-socket systems and much, much larger memory configurations (hundreds of gigs). You can power that quad-socket system with hundreds of gigs of memory with a single dual-port 8Gbps Fibre Channel HBA. Those who know about storage and random I/O understand better than anyone how much I/O it would really take to max out an 8Gbps Fibre Channel card; you're not likely to ever manage it with a virtualization workload, or even with most database workloads. And if you do, you're probably running at a 1:1 ratio of storage arrays to servers.
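Here is the back-of-the-envelope math behind that claim (a rough sketch; the usable-bandwidth figure and the 8kB I/O size are assumptions on my part):

# How much random I/O would it take to saturate one 8Gbps Fibre Channel port?
# Assumes roughly 800MB/s of usable bandwidth per port (8Gb FC uses 8b/10b
# encoding) and 8kB random I/Os -- both figures are assumptions.

usable_mb_per_sec = 800
io_size_kb = 8

iops_to_saturate = usable_mb_per_sec * 1024 // io_size_kb
print(iops_to_saturate, "IOPS per 8Gbps port")           # ~102,400 IOPS
print(iops_to_saturate * 2, "IOPS for a dual-port HBA")  # ~204,800 IOPS

Call it roughly 100,000 small random IOPS per port; you would need a very large pile of spindles behind a single host to generate that kind of load, which is the point.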

The cost of the Fibre Channel network is trivial at that point (assuming you have more than one server). I really like the latest HP blades because you just get a ton of bandwidth options with them right out of the box: why stop at running everything over a measly single dual-port 10GbE NIC when you can have double the NICs AND throw in a dual-port Fibre Channel adapter for not much more cash? Not only does this give more bandwidth, but more flexibility and traffic isolation as well (storage/network etc). On the blades at least it seems you can go even beyond that (more 10-gig ports); I was reading in one of the spec sheets for the PCIe 10GbE cards that on the ProLiant servers no more than two adapters are supported:

NOTE: No more than two 10GbE I/O devices are supported in a single ProLiant server.

I suspect that NOTE may be out of date with the more recent ProLiant systems that have been introduced; after all, they are shipping a quad-socket Intel ProLiant blade with three dual-port 10GbE devices on it from the get go. And I can't help but think the beast DL980 has enough PCI busses to handle a handful of 10GbE ports. The 10GbE FlexFabric cards list the BL685c G7 as supported as well, meaning you can get at least six ports on that blade too. So who knows…

Do the math: the added cost of a dedicated Fibre Channel network really is nothing. Now if you happen to go out and choose the most complicated-to-manage Fibre Channel infrastructure along with the most complicated Fibre Channel storage array(s), then all bets are off. But just because there are really complicated things out there doesn't mean you're forced to use them.

Another factor is staff, I guess. If you have monkeys running your IT department, maybe Fibre Channel is not a good thing and you should stick to something like NFS, and you can secure your network by routing all of your VLANs through your firewall while you're at it, because you know your firewall can keep up with your line-rate gigabit switches, right? Riiight.

I'm not saying FCoE is dead; I think it'll get here eventually, but I'm not holding my breath for it. With present technology it's really more of a step back than a step forward.

Vertica snatched by HP

Filed under: News — Nate @ 9:00 pm

Funny timing! One of my friends who used to work for 3PAR left not long after HP completed the acquisition and went to Vertica, which is a scale-out, column-based, distributed, high performance database. It's certainly not an area I am well versed in, but I got a bit of info a couple weeks ago and the performance numbers are just outstanding, the kind of performance gains that you really have to see to believe. Fortunately for users their software is free to download, and it sounds like it is easy to get up and running (I have no personal experience with it, but would like to see it in action at some point soon). Performance gains of up to 10,000% versus traditional databases are not uncommon.

It really sounds like an awesome product that can do more real-time analysis on large amounts of data (from a few gigs to over a petabyte), something Hadoop users out there should take notice of. If you recall, last year I wrote a bit about organizations I have talked to that were trying to do real time with Hadoop, with (most likely) disastrous results. It's not built for that, never was, which is why Google abandoned it (well, not Hadoop, since they never used the thing, but MapReduce technology in general, at least as far as their search index is concerned; they may use it for other things). Vertica is unique in that it is the only product of its kind in the world that has a software connector that can connect Hadoop to Vertica. Quite a market opportunity. Of course a lot of the PHB types are attracted to Hadoop because it is a buzzword and because it's free. They'll find out the hard way that it's not the holy grail they thought it was going to be, and go to something like Vertica kicking and screaming.

So back to my friend: he's back at HP again; he just couldn't quite escape the gravitational pull that is HP.

It's also somewhat funny, as it wasn't very long ago that HP announced a partnership with Microsoft to do data warehousing applications. Sort of reminds me of when NetApp tried to go after Data Domain; mere days before they announced their bid, they put out a press release saying how good their own dedupe was.

Oh and here’s the news article from our friends at The Register.

The database runs in parallel across multiple machines, but has a shared-nothing architecture, so the query is routed to the data and runs locally. And the data for each column is stored in main memory, so a query can run anywhere from 50 to 1,000 times faster than a traditional data warehouse and its disk-based I/O – according to Vertica.

The Vertica Analytics Database went from project to commercial status very quickly – in under a year – and has been available for more than five years. In addition to real-time query functions, the Vertica product continuously loads data from production databases, so any queries done on the data sets is up to date. The data chunks are also replicated around the x64-based cluster for high availability and load balancing for queries. Data compression is heavily used to speed up data transfers and reduce the footprint of a relational database, something on the order of a 5X to 10X compression.

Vertica's front page now has a picture of a c-Class blade enclosure. Just think of what you can analyze with an enclosure filled with 384 x 2.3GHz Opteron 6100 cores (the new chips were released today as well, and HP announced support for them on my favorite BL685c G7) and 4TB of memory, all squeezed into 10U of space.

If you're in the market for a data warehouse / BI platform of sorts, I urge you to at least see what Vertica has to offer. It really does seem revolutionary, and they make it easy enough to use that you don't need an army of PhDs to design and build it yourself (i.e. Google).

Speakin' of HP, I did look at what the new Palm stuff will be and I'm pretty excited, I just wish it was going to get here sooner. I went out and bought a new phone in the interim until I can get my hands on the Pre 3 and the TouchPad. My Pre 1 was not even on its last legs; it was in a wheelchair with an oxygen bottle. The new phone isn't anything fancy, just a feature phone, but it does have one thing I'm not used to having: battery life. The damn thing can easily go 3 days and the battery doesn't even go down by 1 bar. And I have heard from folks that it will be available on Sprint, which makes me happy as a Sprint customer. I still didn't take a chance and extend my contract, just in case that changes.

February 8, 2011

New WebOS announcements tomorrow

Filed under: Events, Random Thought — Nate @ 9:11 pm

I'm looking forward to the new WebOS announcements coming from HP/Palm, which seem to be set for about noon tomorrow. I've been using a Palm Pre for almost two years now I think, and recently the keyboard on it stopped working, so I'm hoping to see some good stuff announced tomorrow. Not sure what I will do; I don't trust Google or Apple or Microsoft, so for smartphones it's Palm and BlackBerry. WebOS is a really nice software platform; from a user experience standpoint it's quite polished. I've read a lot of complaints about the hardware from some folks, though until recently my experience had been pretty good. As an email device the BlackBerry rocked, though I really don't have to deal with much email (or SMS for that matter).

Maybe I'll go back to a 'feature phone' and get a WebOS tablet, combined with my 3G/4G MiFi, and use that as my web-connected portable device or something. My previous Sanyo phones worked really well. I'm not sure where I'm at with my Sprint contract for my phone, and Sprint no longer carries the Pre and doesn't look like it will carry the Pre 2. I tried the Pixi when it first came out, but the keyboard keys were too small for my fingers even when using the tips of my fingers.

I found a virtual keyboard app which lets me hobble along on my Pre in the meantime while I figure out what to do.

