Diggin' technology every day

December 19, 2013

Facebook deploying HP Vertica

Filed under: General — Nate @ 9:33 am

I found this interesting. Facebook – a company that designs their own servers (in custom racks no less), writes their own software, does fancy stuff in PHP to make it scale, is a big user of Hadoop, runs massive horizontally scaled sharded MySQL systems, and has developed an exabyte-scale query engine – is going to be deploying HP Vertica as part of their big data infrastructure.

Apparently announced at HP Discover

“Data is incredibly important: it provides the opportunity to create new product enhancements, business insights, and a significant competitive advantage by leveraging the assets companies already have. At Facebook, we move incredibly fast. It’s important for us to be able to handle massive amounts of data in a respectful way without compromising speed, which is why HP Vertica is such a perfect fit.”

Not much else to report on, just thought it was interesting given all the stuff Facebook tries to do on its own.

October 7, 2013

Verizon looks to Seamicro for next gen cloud

Filed under: General — Nate @ 10:01 am

Last week Verizon made big news in the cloud industry by announcing that they were shifting gears significantly and would not build their clouds on top of traditional enterprise equipment from the likes of HP, Cisco, EMC, etc.

I can’t find an article on it, but I recall hearing on CNBC that AT&T announced something similar – one that was going to save them $2 billion over some period of time that I can’t remember.

Today our friends at The Register reveal that this design win actually comes from AMD‘s Seamicro unit. AMD says they have been working closely with Verizon for two years on designs for a highly flexible and efficient platform to scale with.

Seamicro has a web page dedicated to this announcement.

Some of the capabilities include:

  • Fine-grained server configuration options that match real-life requirements, not just small/medium/large sizing, including processor speed (500 MHz to 2,000 MHz) and DRAM (0.5 GB increments) options
  • Shared disks across multiple server instances versus requiring each virtual machine to have its own dedicated drive
  • Defined storage quality of service, specifying performance up to 5,000 IOPS to meet the demands of the application being deployed, compared to best-effort performance
  • Strict traffic isolation, data encryption, and data inspection with full featured firewalls that achieve Department of Defense and PCI compliance levels
  • Reserved network performance for every virtual machine up to 500 Mbps

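As a rough sketch of what that first bullet implies – the sizing rules and function below are my own illustration, not Verizon's or SeaMicro's actual API:

```python
# Hypothetical illustration (not a real cloud API): fine-grained sizing
# clamps/rounds a request to the advertised granularity instead of
# forcing it into a small/medium/large t-shirt size.

def size_instance(cpu_mhz: float, ram_gb: float) -> dict:
    """Round a requested config to the granularity in the announcement:
    CPU between 500 and 2,000 MHz, RAM in 0.5 GB increments."""
    cpu = min(max(cpu_mhz, 500), 2000)
    ram = max(0.5, round(ram_gb * 2) / 2)  # nearest 0.5 GB, at least 0.5
    return {"cpu_mhz": cpu, "ram_gb": ram}

# A request for 1.3 GHz / 2.7 GB gets close-to-exact resources, rather
# than being bumped to a "large" with 4 GB it doesn't need.
print(size_instance(1300, 2.7))   # {'cpu_mhz': 1300, 'ram_gb': 2.5}
```

The point of the sketch is the granularity: with 0.5 GB steps you pay for at most half a gigabyte of waste, versus potentially gigabytes with fixed instance sizes.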
I don’t see much more info than that. Questions that remain for me are what level of SMP they will support, and which processor(s) they are using – specifically whether AMD or Intel, since Seamicro can use both. Intel has obviously been dominating the cloud landscape, so it would be nice to see a new large-scale deployment of AMD.

I have written about SeaMicro a couple of times in the past, most recently comparing HP’s Moonshot to the AMD platform. In those posts I mentioned how I felt that Moonshot fell far short of what Seamicro seems to be capable of offering. Given Verizon’s long history as an HP customer, I can’t help but assume that HP tried hard to get them to consider Moonshot but fell short on the technology (or timing, or both).

Seamicro, to my knowledge (I don’t follow micro servers too closely), is the only micro server platform that offers fully virtualized storage, both inside the chassis as well as more than 300TB of external storage. One of the unique abilities that sounds nice for larger scale deployments is being able to export essentially read-only snapshots of base operating systems to many micro servers for easier management (and, you could argue, better security given they are read-only), without needing fancy SAN storage. It’s also fairly mature relative to the competition, given it has been on the market for several years now.

Verizon/Terremark obviously had some trouble competing with the more commodity players with their enterprise platform, both on cost and on capabilities. I was a vCloud Express user for about a year, and worked through an RFP with them at one of my former companies for a disaster recovery project. Their cost model, like most cloud providers’, was pretty insane. Our assumption at the time was that, as a small company without much purchasing leverage, we would see pretty decent pricing given the volumes a cloud provider can command. Reality set in quickly when their cost was at least 5-6 fold what ours was for the same capabilities from similar enterprise vendors.

Other providers had similar pricing models, and I continue to hear stories to this day about various providers costing too much relative to doing things in house (there really is no exception), with the payback period for going in house rarely exceeding 12 months. I think I’ve said it many times but I’ll say it again – I’ll be the first one willing to pay a premium for something that gives premium abilities. None of them come close to meeting that bar though. Not even in the same solar system at this point.

This new platform will certainly make Verizon’s cloud offering more competitive, though they are having to build an entirely new control platform for it – there isn’t much off-the-shelf software here, simply because none of it is built to that level of scale. Such problems are difficult to address, and until you encounter them you probably won’t anticipate what is required to solve them.

I am mainly curious whether the custom capabilities AMD built for Verizon will be available to other cloud players. I assume they will.

September 18, 2013

RIP Blackberry – Android is the Windows of the mobile world

Filed under: General, linux, Random Thought — Nate @ 4:32 pm

You can certainly count me in the camp of folks who believed RIM/Blackberry had a chance to come back. More recently, however, I no longer feel this is possible.

The news today that Blackberry may cut upwards of 40% of their staff before the end of the year is not the reason I don’t think a comeback is possible; it just gave me an excuse to write about it.

The problem stems mainly from the incredibly fast paced maturation (can’t believe I just used that word) of the smart phone industry especially in the past three years. There was an opportunity for the likes of Blackberry, WebOS, and even Windows Phone to participate but they were not in the right place at the right time.

I can speak most accurately about WebOS so I’ll cover a bit on that. WebOS had tons of cool concepts and ideas, but they lacked the resources to put together a fully solid product – it was always a work in progress (fix coming next version). I felt even before HP bought them – and the feeling never went away, even in the days of HP’s big product announcements – that every day that went by, WebOS fell further and further behind (some of WebOS’ key technologies took years for the competition to copy, but go outside that narrow niche of cool stuff and it’s pretty deserted). As much as I wanted to believe they had a chance in hell of catching up again (throw enough money at anything and you can do it), there just wasn’t (and isn’t) anyone willing to commit to that level. It makes sense too – really the last major player left willing to commit to that level is Microsoft, whose business is software and operating systems.

Though even before WebOS was released, Palm was obviously a mess, going through its various spin-offs, splitting the company’s divisions up, licensing things around, etc. They floundered without a workable (new) operating system for many years. I did not become a customer of Palm until I purchased a Pre back in 2009, so don’t look at me as some Palm die-hard, because I was not. I did own a few Handspring Visors a long time ago, and the PalmOS compatibility layer that was available as an app on the Pre is what drove me to the Pre to begin with.

So on to a bit about RIM. I briefly used a Blackberry back in 2006-2008 – I forget the model; it was a strange sort of color device, I want to say monochrome-like color (I think this was it). It was great for email. I used it for a bit of basic web browsing but that was it – I never used it as a phone. I don’t have personal experience supporting BIS/BES or whatever it’s called, but I have read/heard almost universal hatred for those systems over the years. RIM obviously sat on their hands too long and the market got away from them. They tried to come up with something great with QNX and BB10, but the market has spoken – it’s not great enough to stem the tide of switchers, or to bring (enough) customers back to make a difference.

Windows Phone… or is it Windows Mobile… Pocket PC anyone? Microsoft has been in the mobile game for a really long time (it annoys me that press reporters often don’t realize exactly how long Microsoft has been doing mobile – and tablets – not that they were good products, but they have been in the market). They kept re-inventing themselves and breaking backwards compatibility every time. Even after all that effort, what do they have to show for it? ~3.5% global market share? Isn’t that about what the Apple Mac has? (Maybe the Mac is a bit higher.)

The mobile problem is compounded further though. At least with PCs there are (and have been for a long time) standards. Things were open & compatible. You can take a computer from HP or from Dell or from some local whitebox company and they’ll all be able to run pretty much the same stuff, and even have a lot of similar components.

Mobile is different though: ARM SoCs, while sharing a common ancestor in the ARM instruction set, seem to be different enough that compatibility is a real issue between platforms. Add on top of that the disaster of the lack of a stable Linux driver ABI, which complicates things for developers even more (this is in large part why, I believe I read, FirefoxOS and/or Ubuntu phone run on top of Android’s kernel/drivers).

All of that just means the barrier to entry is really high, even at the most basic level of a handset. This obviously wasn’t the case with the standardized form factors, components, and software of the PC era.

So with regards to the maturation of the market, the signs are clear now – with Apple and Samsung having absolutely dominated the revenues and profits in the mobile handset space for years, both players have shown for probably the past 12 to 18 months that growth is really levelling out.

With no other players showing even the slightest hint of competition against these behemoths, that levelling of growth tells me, sadly enough, that the opportunity is for the most part gone now. The market is becoming a commodity faster than I thought it would, and I think many others feel the same way.

I don’t believe Blackberry – or Nokia for that matter – would have been very successful as Android OEMs. Certainly not at the scale that they were at – perhaps with drastically reduced workforces they could have gotten by with a very small market share – but they would have been a shadow of their former selves regardless. Both companies made big bets going it alone and I admire them for trying – though neither worked out in the end.

Samsung may even go out as well, with the likes of Xiaomi (never heard of them till last week), or perhaps Huawei or Lenovo, coming in and butchering margins below where anyone can make money on the hardware front.

What really prompted this line of thinking, though, was re-watching the movie Pirates of Silicon Valley a couple of weeks ago, following the release of that movie about Steve Jobs. I watched Pirates a long time ago but hadn’t seen it since; this quote from the end of the movie really sticks with me when it comes to the whole mobile space:

Jobs, fresh from the launch of the Macintosh, is pitching a fit after realizing that Microsoft’s new Windows software utilizes his stolen interface and ideas. As Gates retreats from Jobs’ tantrum, Jobs screeches, “We have better stuff!”

Gates, turning, simply responds, “You don’t get it. That doesn’t matter.”

(The whole concept really gives me chills to think about.)

Android is the Windows of the mobile generation (just look at the rash of security-related news events reported about Android..). Ironically enough the more successful Android is the more licensing revenue Microsoft gets from it.

I suppose in part I should feel happy being that it is based on top of Linux – but for some reason I am not.

I suppose I should feel happy that Microsoft is stuck at 3-4% market share despite all of the efforts of the world’s largest software company. But for some reason I am not.

I don’t know if it’s because of Google and their data gathering, or because I didn’t want to see any one platform dominate as much as Android (and previously iOS) has.

I suppose there is a shimmer of hope in the Cyanogen folks incorporating to become a more formalized alternative to the Android that comes out of Google.

All that said, I do plan to buy a Samsung Galaxy Note 3 soon, as mentioned before. I’ve severed the attachment I had to WebOS and am ready to move on.

September 3, 2013

Microsoft buys Nokia division – was Nokia about to jump ship?

Filed under: General — Nate @ 6:39 pm

So obviously the big news of the day is Microsoft buying Nokia’s handset division for a big chunk of change. Both seem to be spinning it as a good thing, a logical next step in their partnership. For Nokia it probably is a good thing, as it gives them an exit from a business which hasn’t been doing so hot. For Microsoft the deal is less attractive, with investors obviously agreeing, sending the stock down ~5% on the day.

Some folks are saying a big reason for this was perhaps Nokia’s patents, which Microsoft apparently gets a ten-year license to rather than acquiring outright (I can only wonder what ownership would have done for their war on Android). Many folks speculate that the CEO of Nokia may be the successor to Ballmer, who recently announced his retirement.

I’m going to go out on a limb here as I have nothing to lose and say this is because Nokia was seriously looking at throwing in the towel on the Windows Phone platform.

I think that because there really was no reason for Microsoft to buy Nokia (YET). Nokia was doing Microsoft’s bidding, taking all the risk and reaping none of the rewards. They were sacrificing themselves slowly on the sword of Microsoft, and the investors were getting upset. I fully believe(d) that they would be acquired by Microsoft but not until the viability of Nokia was called into question or perhaps if Nokia was going to give up. I suppose the optimistic point of view would be Windows Phone is about to catapult and the acquisition cost is cheap relative to where it would be in the future. I’m not an optimist like that though! Microsoft obviously has a ton of money and has a strong track record of paying a large premium for companies. So I don’t think value played a key role here.

Someone on CNBC this morning asked why Ballmer didn’t leave an acquisition of this magnitude (at least the 2nd largest in the company’s history) to his successor – the person who will be driving the future of the company. Though if Ballmer seriously thinks this Elop fella is the one to take the reins, I think that would probably be a mistake, given Elop’s recent track record of basically burning the company to the ground to make a bet on a new platform. Microsoft has a ton of businesses, and they need to not burn them to the ground in an effort to chase after the next shiny thing. Elop sounds like a great leader for devices. I don’t know who would make a good MS CEO – that’s not an area I claim any level of expertise in!

So I think Nokia was at least talking seriously about a major shift in strategy internally – perhaps just calling Microsoft’s bluff – in order to get Microsoft to finally move and acquire them while their share price is where it is now.

In the end it doesn’t matter to me of course; I’m not an investor, and I’m not vested for or against the platform. I do admire Microsoft a bit for not giving up though. They have had some major adoption issues with their new platform, forcing Nokia to make major price cuts. They’ve also been able to capitalize on the chaos at Blackberry and wrestle the #3 spot from them. Though globally that #3 spot, as it stands today, is still a rounding error in the grand scheme of things.

I just hope for the sake of their users they don’t do to Windows Phone 8 what they did to 7, and 6.x, and perhaps prior versions in basically abandoning them and making the newer versions completely incompatible. Windows on desktops has been able to sustain such a large presence in a big part due to such massive amounts of compatibility. I’m honestly still shocked I can run a game that came out in 1995 on a modern 64-bit Windows 7 system without any modifications. To even propose such an idea for the Linux platform just makes me laugh, or cry, or maybe a little bit of both.

August 8, 2013

Nth Symposium 2013: HP Bladesystem vs Cisco UCS

Filed under: General — Nate @ 11:00 pm

Travel to HP Storage Tech Day/Nth Generation Symposium was paid for by HP; however, no monetary compensation is expected nor received for the content that is written in this blog.

I can feel the flames I might get for this post but I’m going to write about it anyway because I found it interesting. I have written about Cisco UCS in the past (very limited topics), have never been impressed with it, and really at the end of the day I can’t buy Cisco on principle alone – it doesn’t matter if it were $1, I can’t do it (in part because I know that $1 cost would come from screwing over many other customers to make that price possible for me).

Cisco has gained a lot of ground in the blade market since they came out with this system a few years ago; I think they are in 3rd place, maybe getting close to 2nd (last I saw, 2nd was a very distant position behind HP).

So one of the keynotes (I guess you can call it that? It was on the main stage) was from someone from HP who said they re-joined HP earlier in the year (or perhaps last year) after spending a couple of years at Cisco both selling UCS and training partners on how to sell it to customers. So obviously that was interesting to me, hearing this person’s perspective on the platform. There was a separate break-out session on this topic that went into more detail, but it was NDA-only so I didn’t attend.

I suppose what was most striking is HP going out of their way to compare themselves against UCS; that says a lot right there. They never mentioned Dell or IBM, just Cisco. So Cisco obviously has gotten some good traction (as sick as that makes me feel).

Out of band management

HP claims that Cisco has no out-of-band management on UCS; there are primary and backup data paths, but if those are down then you are SOL. HP obviously has (optionally) redundant out-of-band management on their blade system.

I love out-of-band management myself, especially full lights-out. My own HP VMware servers have dedicated in-band (1GbE) as well as the typical iLO out-of-band management interfaces. This is on top of the 4x10GbE and 2x4Gbps FC for storage. Lots of connectivity. When I was having issues with our Qlogic 10GbE NICs last year this came in handy.

Fault domains

This can be a minor issue – mainly an implementation one. Cisco apparently allows UCS to have a fault domain of up to 160 servers, vs. HP’s 16 (one chassis). You can, of course, lower your fault domain on UCS if you think about this aspect of things – but how many customers realize this and actually do something about it? I don’t know.

HP Smart Update Manager

I found this segment quite interesting. HP touts their end to end updates mechanism which includes:

  • Patch sequencing
  • Driver + Firmware management
  • Unified service pack (1 per quarter)

HP claims Cisco has none of these: they cannot sequence patches, their management system does not manage drivers (it does manage firmware), and their service packs are not unified.

At this point the HP speaker described a situation a customer faced recently: they used the UCS firmware update system to update the firmware on their platform, then rebooted their ESX systems (I guess for the firmware to take effect), and the systems could no longer see their storage. It took the customer, on the line with Cisco, VMware, and the storage company, 20 hours to figure out that the problem was drivers out of sync with the firmware.
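The fix for that class of outage is bookkeeping: don't activate firmware that the installed driver isn't certified against. A minimal sketch of that check (component names, versions, and the compatibility table are all made up for illustration):

```python
# Illustrative sketch only: the kind of driver/firmware pairing check
# that prevents the mismatch outage described above. All names and
# versions here are invented.

COMPATIBLE = {   # firmware version -> driver versions known to work with it
    "nic-fw-2.10": {"nic-drv-5.1", "nic-drv-5.2"},
    "nic-fw-2.11": {"nic-drv-5.2"},
}

def safe_to_reboot(firmware: str, driver: str) -> bool:
    """Return True only if the installed driver is certified
    against the firmware we are about to activate."""
    return driver in COMPATIBLE.get(firmware, set())

# Flashing firmware 2.11 while still on driver 5.1 is exactly the
# out-of-sync state that cost that customer 20 hours.
print(safe_to_reboot("nic-fw-2.11", "nic-drv-5.1"))  # False
print(safe_to_reboot("nic-fw-2.11", "nic-drv-5.2"))  # True
```

This is essentially what a unified service pack buys you: the vendor has already computed the compatibility table and sequences the updates so you never reboot into an uncertified pairing.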

I recall another ~20-hour outage a few years ago on a Cisco UCS platform at a sizable company in Seattle for similar reasons. I don’t know why in both cases it took so long to resolve; in the Seattle case there was a known firmware bug causing link flapping, and as a result a massive outage, because I believe the storage was not very forgiving of that. Fortunately Cisco had a patch, but it took them ~20 hours of hard downtime to figure out the problem.

I’m sure there are similar stories on the HP end of things too… I have heard of some nasty issues with FlexFabric and Virtual Connect. There is one feature I like about FlexFabric and Virtual Connect: the chassis-based MAC/WWN assignments. Everything else they can keep. I don’t care about converged Ethernet, and I don’t care about reducing my cable count (having a few extra fibre cables for storage per chassis really is nothing)…

Myself, the only outages I have had that lasted that long have been because of application stack failures; I think the longest infrastructure-related outage I’ve been involved with in the past 15 years was roughly six, maybe eight hours. I have had outages that took longer than 20 hours to fully recover from – but for the bulk of that time the system was running; we just had recovery steps to perform. I have never had a 20-hour outage where, 15 hours in, nobody had any idea what the problem was or how to fix it.

The longest outage ever, though, was probably ~48-72 hours – and that was entirely application stack failure. That was the time we got all the senior software developers and architects in a room and asked them, “How do we fix this?” They gave us blank stares and said, “We don’t know, it’s not supposed to do this.” Not a good situation to be in!

Anyway, back on topic.

HP says since December 2011 they have released 9 critical updates, and Cisco have released 38 critical updates.

The case for intelligent compute

I learned quite a bit from this segment as well. Back in 2003 the company I was at was using HP and Compaq gear; it ran well, though obviously was pretty expensive. Everything was DL360s, some DL380s, some DL580s. When it came time to do a big data center refresh we wanted to use SATA disks to cut some costs, so we ended up going with a white box company instead of HP (this was before HP had the DL100 series). I learned a lot from that experience, and was very happy to return to HP as a customer at my next company (though I certainly realize that, given the right workload, HP’s premium may not be worth it – but for highly consolidated virtualized stuff I really don’t want to use anything else). The biggest issue I had with white box stuff was bad RAM. It seemed to be everywhere. Not long after we started deployment I began using the Cerberus Test Suite to burn in our systems, which caught a lot of it. Cerberus is awesome if you haven’t tried it. I even used it on our HP gear, mainly to drive CPU and memory to 100% usage to burn them in (no issues found).
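The core idea behind memory burn-in is simple: write known bit patterns, read them back, and count mismatches. Here is a toy sketch of that pattern-write/read-back loop (this is not Cerberus itself – real tools stress far more memory, for hours, with many more patterns):

```python
# Toy illustration of the pattern-write/read-back idea behind memory
# burn-in tools. A healthy machine returns 0 mismatches; flaky RAM
# shows up as nonzero counts under sustained load.
import array

def burn_in_pass(n_words: int = 1 << 16) -> int:
    """Write alternating bit patterns to a buffer, read them back,
    and return the number of mismatched words (0 = healthy)."""
    errors = 0
    for pattern in (0x00000000, 0xFFFFFFFF, 0xAAAAAAAA, 0x55555555):
        buf = array.array("I", [pattern] * n_words)   # fill with pattern
        errors += sum(1 for word in buf if word != pattern)
    return errors

print(burn_in_pass())  # 0 on healthy RAM
```

The 0xAAAA…/0x5555… pair alternates every bit between neighbors, which is good at exposing stuck or coupled bits; real burn-in suites add walking-ones, address-in-address, and random patterns on top of this.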

HP Advanced ECC Outcomes

HP has a technology called Advanced ECC, which they’ve had since I believe 1996, and which is standard on at least all 300-series servers and up. Ten years ago, when our servers rarely had more than 2GB of memory in them (I don’t think we went 64-bit until at least 2005), Advanced ECC wasn’t a huge deal – 2GB of memory is not much. Today, with my servers having 384GB, I really refuse to run any high-memory configuration without something like it. IBM has Chipkill, which is similar. Dell has nothing in this space. Not sure about Cisco (betting they don’t – more on that in a moment).

HP Advanced ECC

HP talked about their massive numbers of sensors, with some systems (I imagine the big ones!) having up to 1,600 sensors in them. (Here is a neat video on the Sea of Sensors from one of the engineers who built them – one thing I learned is the C7000 chassis has 104 different fan speeds for maximum efficiency.) HP introduced pre-failure alerting in 1995, and has had pre-failure warranties for a long time (perhaps back to 1995 as well). They obviously have complete hypervisor integration – one thing I wasn’t sure of myself until recently: while upgrading our servers, one of the new sticks went bad, an alert popped up in vCenter, and I was able to evacuate the host and get the stick replaced without any impact. This failure wasn’t caught by burn-in, just regular processing; I didn’t have enough spare capacity at that point to dedicate many systems to burn-in.

What does Cisco have? According to HP, not much. Cisco doesn’t treat the server with much respect, apparently; they treat it as something that can fail, at which point you just get it replaced or repaired.

UCS: Post failure response

That model reminds me of what I call built to fail, which is the model that public clouds like Amazon run on. It’s pretty bad. Though at least in Cisco’s case the storage is shared and the application can be restarted on another system easily enough; in a public cloud you have to build a new system and configure it from scratch.

The point here is obvious: HP works hard to prevent the outage in the first place; Cisco doesn’t seem to care.

Simplicity Matters

I’ll just put the full slide here; there’s not a whole lot to cover. HP’s point is that the Cisco way is more complicated and seems angled to drive more revenue for the network. HP is less network-oriented, and they show you can directly connect the blade chassis to 3PAR storage system(s). I think HP’s diagram is even a bit too complicated – for all but the largest setups you could easily eliminate the distribution layer.

BladeSystem vs UCS: Simplicity matters

The cost of the 17th server

I found this interesting as well: Cisco goes around telling folks that their systems are cheaper, but they don’t do an apples-to-apples comparison – they use a Smart Play bundle, not a system that is built to scale.

HP put a couple of charts up showing the difference in cost between the two solutions.

BladeSystem vs UCS TCO: UCS Smart Play bundle

BladeSystem vs UCS TCO: UCS Built to scale

Portfolio Matters

Lastly, HP went into some depth comparing the two product portfolios and showed how Cisco was lacking in pretty much every area, whether server coverage, storage coverage, blade networking options, software suites, or the integration between them.

They talked about how Cisco has one way to connect networking to UCS, while HP has many, whether converged Ethernet (similar to Cisco), regular Ethernet, native Fibre Channel, InfiniBand, or even SAS to external disk enclosures. The list goes on and on for the other topics, but I’m sure you get the point: HP offers more options so you can build a more optimal configuration for your application.

BladeSystem vs UCS: Portfolio matters

Then they went into analyst stuff and I took a nap.

In reviewing the slide deck, they do mention Dell once – in a slide, not by the speaker:

HP vs Dell in drivers/firmware management

Attending this didn’t teach me anything that would affect my future purchasing; as I mentioned, I won’t buy Cisco for any reason anyway. But it was still interesting to hear about.

August 7, 2013

Nth Symposium 2013: HP Moonshot

Filed under: General — Nate @ 10:05 pm

Travel to HP Storage Tech Day/Nth Generation Symposium was paid for by HP; however, no monetary compensation is expected nor received for the content that is written in this blog.

HP launched Moonshot a few months ago, I wrote at the time I wasn’t overly impressed. At the Nth Symposium there were several different HP speakers that touched on Moonshot.

HP has been blasting the TV airwaves with Moonshot ads – something I think is a waste of money, just as much as it would be if HP were blasting the TV with 3PAR ads. Moonshot obviously is a special type of system, and those in its target market will (to me anyway) obviously know about it. Perhaps it’s more of an ad to show HP is still innovating, in which case it’s pretty decent (not as good as the IBM Linux commercials from years back though!).

Initial node for HP Moonshot for Intel Atom processors

Sure it is cute; the form factor certainly grabs your eye. Micro servers are nothing new though – HP is just the latest entrant into the market. I immediately got into tech mode and wanted to know how it measured up to AMD’s Seamicro platform. In my original post I detail several places where I feel Moonshot falls short of Seamicro, which has been out for years.

Seamicro node for Intel Atom processors – note: no storage! All of that is centralized in the chassis, virtualized so that it is very flexible.

HP touts this as a shift in thinking – going from building apps around the servers to building servers around the apps (though they sort of forgot to mention we’ve been building servers around apps in the form of VMs for many years now). I had not heard the approach described that way until last week; it is an interesting description.

HP was great in being honest about who should use this system – they gave a few different use cases, but they were pretty adamant that Moonshot is not going to take over the world; it’s not going to invade every SMB and replace your x86 systems with ARM or whatever. It’s a purpose-built system for specific applications. There is only one Moonshot node today; in the future there will be others, each targeted at a specific application.

One of them will even have DSPs on it I believe, which is somewhat unique. HP calls Moonshot out as:

  • 77% less costly
  • 89% less energy
  • 80% less space
  • 97% less complex

Certainly very impressive numbers. If I had an application that was suitable for Moonshot then I’d certainly check it out.

One of the days that I was there I managed to get myself over to the HP Moonshot booth and ask the technical person there some questions. I don’t know what his role was, but he certainly seemed well versed in the platform and qualified to answer my basic questions.

My questions were mainly around comparing Moonshot to Seamicro – specifically the storage virtualization layers, and networking as well. His answers were about what I expected: they don’t support that kind of thing, and there are no immediate plans to. Myself, I think the concept of being able to export read-only file system(s) from central SSD storage to dozens or hundreds of nodes within the Seamicro platform is a pretty cool idea. The storage virtualization sounds very flexible and optionally extends well beyond the Seamicro chassis, up to ~1,400 drives.

Same for networking – Moonshot is pretty basic stuff. (At one point Seamicro advertised integrated load balancing, but I don’t see that now.) The HP person said Moonshot is aimed squarely at web applications, scale-out, etc. Future modules may be aimed at memcache nodes or other things. There will also be a storage module (I forget the specifics, but it was nothing too exciting).

I believe the HP rep also mentioned how they were going to offer units with multiple servers on a single board (Seamicro has done this for a while as well).

Not to say Moonshot is a bad system – I’m sure HP will do pretty well with it – but I find it hard to get overly excited when Seamicro seems to be years ahead of Moonshot already. Apparently Moonshot sat in HP Labs for a decade, and it wasn’t until one of the recent CEOs (I think a couple of years ago) came around to HP Labs and asked something like “What do you have that I can sell?” and the masterminds responded “We have Moonshot!” – it still took them a bit of time to productize it.

(I have no personal experience with either platform nor have I communicated with anyone who has told me personal experience with either platform so I can only go by what I have read/been told of either system at this point)

June 4, 2013

Infoworld suggests radical Windows 8 changes

Filed under: General — Tags: , — Nate @ 8:46 am

Saw this come across on slashdot, an article over at Infoworld on how Microsoft can fix Windows 8.

They suggest ripping out almost all of the new stuff (as defaults) and replacing it with a bunch of new options that users can pick from.

Perhaps I am lucky in that I’ve never used Windows 8. (I briefly held an MS Surface RT in my hands – a friend who is an MS employee got one for free, as did all employees I believe, and handed it to me to show me some pictures on it.)

Some of the suggestions from Infoworld sound pretty good to me, though hard to have a firm opinion since I’ve never used the Metro UI (oh, sorry they changed the name to something else).

Windows 8 (as it stands today) certainly sounds pretty terrible from a UI standpoint. The only positive I have read about Windows 8 is that people say it is faster. Which isn’t worth much these days – machines have been fast enough for many years (which at least in part has led to the relative stagnation of the PC market). My computers have been fast enough for years. The laptop I am typing on is almost 3 years old, and I plan to keep it around for at least another year as my primary machine – I have another year of on-site support so I’m covered from that angle.

It has been interesting to see that, really since XP was released, there hasn’t been anything truly exciting on the Windows desktop front – it’s a mature product, and the results have shown it: pretty much every OS launch they’ve done has had a weaker reception than the previous one (Windows 7 being sort of an exception among the hard-core community, though in a broader sense even it seemed weak). It’s come a long way from the mess many of us dealt with in the 90s (instability in NT4 was one big driver for me to attempt Linux on my primary desktop 15 years ago, and I’m still with Linux today).

I don’t use Windows enough to be able to leverage the new features. I’m still used to using the XP interface, so am not fond of many of the new UI things that MS has come up with over the years. Since I don’t use it much,  it’s not critical.

The last time I used Windows seriously was at a few different companies where I had Windows as my primary desktop. But you probably wouldn’t have known it if you saw it. It was customized with Cygwin and Blackbox for Windows. The most recent was about three years ago (the company was still on XP at the time). Most of the time my screen was filled with rxvt X terminals (there is a native Windows port of rxvt in Cygwin that works wonderfully) and Firefox. Sometimes I had Outlook open, or Visio, or in rare cases IE.

Not even the helpdesk IT guy could figure my system out (“Can you launch control panel for me?”). It gave me a nice Linux look & feel (I would have killed for proper virtual desktop edge flipping, but I never found a solution for that) with the common Windows apps.

Ironically enough I’ve purchased more copies of Windows 7 (I think I have 7 now – 2 or 3 licenses are not in use yet – stocked up so I wouldn’t have to worry about Windows 8 for a long time) than all previous MS operating systems combined. I’ve bought more Microsoft software in the past 3-4 years (Visio Pro 2010 is another one) than in the previous decade combined. As my close friends will attest I’m sure – I have not been a “hater” of Microsoft for some time now (12 years ago I threatened to quit if they upgraded from NT4 to Windows 2000 – and they didn’t at least not as long as I was there – those were the days when I was pretty hard core anti MS – I was working on getting Samba-tng and LDAP to replace NT4 – I never deployed the solution, and today of course I wouldn’t bother)

Some new Linux UIs suck too

Microsoft is not alone in crappy UIs, though. Linux is right up there too (many would probably argue it always was – that very well could be true, though I was fine with what I have used over the years, from KDE 0.x to AfterStep to GNOME 1.x/2.x). GNOME 3 (and the new Unity stuff from Ubuntu) looks at least as terrible as the new Microsoft stuff (if not more so).

I really don’t like how organizations are trying to unify the UI between mobile and PC. Well maybe if they did it right I’d like it (not knowing what “right” would be myself).

By the same token, I find it ludicrous that LG would want to put WebOS on a TV! Maybe they know something I don’t, though, and they will actually accomplish something positive. Don’t get me wrong – I love WebOS (well, the concept; the implementation needs a lot of work and billions of investment to make it competitive) – but I just don’t see any advantage to WebOS on a device like a TV. The one exception is ecosystem: if there were an ecosystem of WebOS devices that could seamlessly inter-operate with each other. There isn’t such an ecosystem today – what’s left has been a rotting corpse for the past two years (yes, I still use my HP Pre3 and Touchpad daily). There’s no sign LG has a serious interest in building such an ecosystem, and even if they did, there’s no indication they have the resources to pull it off (I’d wager they don’t).

I haven’t used Unity but last weekend I did install Debian 7 on my server at home (upgraded from 6). 99% of the time from a UI perspective this system just cycles through tens of thousands of images as a massive slide show (at some point I plan to get a 40″+ screen and hang it on my wall as a full sized digital picture frame, I downloaded thousands of nice 1080p images from interfacelift as part of the library).

I was happy to see that Debian 7 included a “GNOME 2 like” option, as a moderately customized GNOME 2 is really what I am used to, and I have absolutely, positively no interest in changing it.

It gets mostly there – maybe 50-75% of the way. The first thing I noticed was that the new GNOME did not seem to import any of the previous settings. I got a stock look – stock wallpaper, stock settings, and no desktop icons(?). I tried to right click on the desktop to change the wallpaper – that didn’t work. I tried to right click on the menu bar to add some widgets – that didn’t work either. I went from 0 to very annoyed almost immediately. And this was with the “compatibility” GNOME desktop! Imagine if I had tried to log in to regular GNOME 3 – I probably would have thrown my laptop against the wall before it finished logging in! 🙂 (Fortunately for my laptop’s sake I have never gotten to that point.)

Eventually I found the way to restore the desktop icons and right click on the desktop, and I managed to set one of my wonderful high-res NSFW desktop backgrounds. I still can’t add widgets to the menu bar; I assume it is not possible. I haven’t checked whether I can do virtual desktop edge flipping with brightside (or with something built in) – I’d wager that doesn’t work either.
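For anyone else hitting the same wall, the knobs turned out to be reachable from the command line. This is a sketch from memory for the GNOME 3.4-era desktop in Debian 7 – the schema and key names may differ on other GNOME versions, so treat them as assumptions to verify:

```shell
# Re-enable desktop icons (which also brings back right-click on the desktop)
# under the GNOME 3.4-era desktop shipped with Debian 7. Schema/key names are
# from memory for that version -- verify on your own system with:
#   gsettings list-recursively | grep -i desktop
gsettings set org.gnome.desktop.background show-desktop-icons true

# The wallpaper can be set the same way instead of fighting the UI
# (path below is just an example):
gsettings set org.gnome.desktop.background picture-uri "file:///home/nate/wallpaper.jpg"
```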

I’m not sure what I will do on my main laptop/desktop which are Ubuntu 10.04 which is now unsupported. I hear there are distros/packages out there that are continuing to maintain/upgrade the old Gnome 2 stuff (or have replaced Gnome 3’s UI with Gnome 2), so will probably have to look into that, maybe it will be easy to integrate into Debian or Ubuntu 12.04(or both).

I saw a fantastic comment on slashdot recently that perfectly describes the typical OSS developer mindset on this stuff:


What X11 is, is old. And developers are bored with it. And they want something new and shiny and a chance to play with the hardware without abstraction throwing a wet blanket over their benchmark scores.

The benchmark of success for Wayland is that _users_ don’t actually notice that anything changed. They’ll fall short of that benchmark because too many people like using X11, and even the backward compatibility inevitably will cause headaches.

But developers will enjoy it more, and in the FOSS world those are the only consumers that matter.

(the last sentence especially)

That was in a conversation about replacing X11 (the main GUI base for Linux) with something completely different (apparently being developed by some of the same folks who worked on X11) that has been under development for many, many years. Myself, I have no issues with X11 – it works fine for me. The last time I had major issues with X11 was probably 10+ years ago.

As someone who has worked closely with developers for the past 13 years, I see a lot of this first hand. Often the outcome is good; many other times, not so much.

One system I worked with was so architecturally complex that two people on my team left the company within a year of starting, and their primary complaint was that the application was too complicated to learn (they had been working with it almost daily for their time there). It was complex for sure (many, many sleepless nights and long outages too) – though it didn’t make me want to throw my laptop against the wall like Chef does.

In the case of Microsoft, I found it really funny that one of (if not the) main managers behind Windows 8 suddenly resigned mere weeks after the Windows 8 launch.

June 2, 2013

Travel tips for Washington DC area?

Filed under: General — Nate @ 8:38 am

I am planning on being in the Washington DC area next week to visit a friend I haven’t seen in a couple of years. I have another friend in that area and will visit them too.

I’ve never been there before, so if anyone knows something/place cool to visit please send a note my way! Doesn’t have to be in DC – but within say a 2 or maybe even 3 hour drive is fine.

I arrive at 10 AM on Saturday and can’t check in till 3 PM, so my first thought to kill some time is to drive to Philadelphia to grab one of those famous original cheesesteak sandwiches at either Geno’s Steaks or Pat’s King of Steaks (which are apparently right across the street from each other).

Five hours round trip is a long way to drive, but in the grand scheme of life, who knows if I’ll ever be in the area again, so I figure it’s worth it. Even better if I can find some place else to visit and take some pictures along the way (preferably with a nice view – my camera has a 42X zoom). I’m not much for historical stuff or museums (though I may make some exceptions on this trip).

I browsed ~250 potential places on TripAdvisor in Philadelphia but sadly did not see anything that really interested me (except maybe this). Given the sheer number of ideas on that site, I figure it may be difficult to find things on other sites that aren’t just repeats.

One day during the week (perhaps Sunday the 9th) I plan to visit Norfolk, VA – a full six hours round trip. But it looks like it will be worth it too – mainly to see the military stuff there. Of the three major destinations, it’s the one I am most looking forward to.

Possibilities in Norfolk include

One day hit Baltimore up for their Blue Crab, possibilities for this trip include

Then the rest of the time in DC – most of these are places to visit just so I can say I visited them; I’m really not excited about any of them (I specifically avoided places that don’t allow pictures, like the Mint) –

Hopefully I can easily hit all the above sites in less than one day.

  • Sweetwater Tavern – local friend says there is good food there
  • Ted’s Montana Grill – enjoyed this place when I was in Atlanta – it was the first time I had Buffalo (that I can recall).
  • Tilted Kilt would be nice but is ~70 miles away so probably will wait till I’m in Atlanta next to hit that place up.


I’ll likely be working remotely for a couple days while there, not sure yet.

May 20, 2013

When a server is a converged solution

Filed under: General — Tags: — Nate @ 3:56 pm

Thought this was kind of funny/odd/ironic/etc…

I got an email a few minutes ago talking about the HP AppSystem for Vertica, which, among other things, HP describes as being able to

This solution delivers system performance and reduces implementation from months to hours.

I imagine they are referring to competing solutions and not comparing to running Vertica on bare metal. In fact it may be kind of misleading, as Vertica is very open – you can run it on physical hardware (any hardware really), virtual hardware, and even some cloud services (it is supported in *shudder* Amazon even..). So you can get a basic Vertica system implemented without buying anything new.

But if you are past the evaluation stage, and perhaps outgrew your initial deployment and want to grow into something more formal/dedicated, then you may need some new hardware.

HP pitches this as a Converged Solution. So I was sort of curious what HP solutions are they converging here?

Basically it’s just a couple base configurations of HP DL380G8s with internal storage (these 2U servers support up to 25 2.5″ disks).  They don’t even install Vertica for you

HP Vertica Analytics Platform software installation is well documented and can be installed by customers.

They are kind enough to install the operating system, though (no mention of any special tuning – other than saying it is “Standard”, so I guess no tuning).

No networking is included (outside of what’s in the servers, as far as I can tell), and the only storage is the internal DAS. A minimum of three servers is required, so some sort of 10GbE switches are needed too (the servers are 10GbE, though you can run Vertica fine on 1GbE for smaller data sets).

I would have expected the system to come with Vertica pre-installed, or automatically installed as part of setup, with a trial license built into the system.

Vertica is very easy to install and configure the basics, so in the grand scheme of things this AppSystem might save the average Vertica customer a few minutes.
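To put “a few minutes” in perspective, here is roughly what a basic cluster install looks like – a sketch from memory of the Vertica 6.x docs, so the exact paths, flags, and the RPM filename below are assumptions, not the AppSystem’s actual procedure:

```shell
# Install the Vertica RPM on one node, then run the cluster installer, which
# pushes the package to the other hosts over SSH and creates the dbadmin user.
# Hostnames and the RPM filename are made up for illustration.
rpm -Uvh vertica-6.x.x86_64.RHEL5.rpm
/opt/vertica/sbin/install_vertica -s node1,node2,node3 \
    -r vertica-6.x.x86_64.RHEL5.rpm

# Then create a database across the three nodes as the dbadmin user:
su - dbadmin -c "/opt/vertica/bin/adminTools -t create_db \
    -d mydb -s node1,node2,node3"
```

That really is about the extent of it for a basic setup, which is why pre-installing the OS alone doesn’t save you much.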

Vertica is normally licensed by the amount of data stored in the cluster (pre-compression/encoding). The node count, CPU count, memory, and spindle count don’t matter. There is a community edition that goes up to 3 nodes and 1TB of data (it has some other software limitations – and as far as I know there is no direct migration path from community to enterprise without a data export/import).

Don’t get me wrong I think this is a great solution, very solid server, with a lot of memory and plenty of I/O to provide a very powerful Vertica experience. Vertica’s design reduces I/O requirements by up to ~90% in some cases, so you’d be probably shocked the amount of performance you’d get out of just one of these 3 node clusters, even without any tuning at the Vertica level.

Vertica does not require a fancy storage system, it’s really built with DAS in mind. Though I know there are bunches of customers out there that run it on big fancy storage because they like the extra level of reliability/availability.

I just thought it was kind of strange some of the marketing behind it, saving months of time, being converged infrastructure and what not..

It makes me think (if I had not installed Vertica clusters before) that if I want Vertica and don’t get this AppSystem then I am in a world of hurt when it comes to setting Vertica up. Which is not the right message to send.

Here is this wonderful AppSystem that is in fact — just a server with RHEL installed.

For some reason I expected more.

May 17, 2013

Big pop in Tableau IPO

Filed under: General — Tags: , — Nate @ 9:35 am

I was first introduced to Tableau (and Vertica) a couple of years ago at a local event in Seattle. Both products really blew me away (and still do to this day). Though it’s not an area I spend a lot of time in – my brain struggles with anything analytics related (even when using Tableau; same goes for Splunk, or SQL). I just can’t make the connections. When I come across crazy Splunk queries that people write, I just stare at them for a while in wonder (as in, I can’t possibly imagine how someone could have come up with such a query, even after working with Splunk for the past six years).. then I copy+paste and hope it works.

Sample Tableau reports pulled from Google Images

But that doesn’t stop me from seeing an awesome combination that is truly ground breaking both in performance and ease of use.

I’ve seen people try to use Tableau with MySQL for example and they fairly quickly give up in frustration at how slow it is. I remember being told that Tableau used to get a bunch of complaints from users years ago saying how slow it seemed to be — but it really wasn’t Tableau’s fault it was the slow back end data store.

Vertica unlocks Tableau’s potential by providing a jet engine to run your queries against. Millions of rows? Hundreds of millions? No problem.. billions? It’ll take a bit longer but shouldn’t be an issue either. Try that with most other back ends and you’ll be waiting for days if not weeks.

Tableau is a new generation of data visualization technology that is really targeted at the Excel crowd. It can read in data from practically anything(Excel files included), and it provides a seamless way to analyze your data and provide fancy charts and graphs, tables and maps..

It’s not really for the hard core power users who want to write custom queries. Though I still think it is useful for those folks. A great use case for Tableau is for the business users to play around with it, and come up with the reports that they find useful, then the data warehouse people can take those results and optimize the warehouse for those types of queries (if required). It’s a lot simpler and faster than the alternative..

I remember two years ago I was working with a data warehouse guy at a company, and we were testing Tableau with MySQL at the time (small tables), just playing around. He poked around, created some basic graphs and drilled down into them. In all we spent about 5 minutes on this task, and we found some interesting information. He said if he had to do that with MySQL queries himself it would have taken him roughly two days – running query after query and then building new queries based on the results, etc. From two days to roughly five minutes – for a very experienced SQL/data warehouse person.

Tableau has a server component as well, to which you can publish your reports for others to see in a web browser or on a mobile device; the server can also, of course, link directly to your data to get updates as frequently as you want them.

You can have profiles and policies. One example Tableau gave me last year: one big customer enforces certain color codes across their organization, so no matter what people are looking at, they know blue means X and orange means Y. This is enforced at the server level, so it’s not something people have to worry about remembering. They can also enforce policies around reporting so that the term “XYZ” is always the result of “this+that” – people get consistent results every time, not a situation where one person interprets something one way and another person another way. Again, this is enforced at the server level, reducing the need for double checking and additional training.

They also have APIs – users can embed Tableau reports directly into their applications and web sites (through the server component). I know one organization where almost all of their customer reporting is presented with Tableau – I’m sure it saved them a ton of time over trying to replicate the behavior in their own code. I’ve seen folks try to write reporting UIs at past companies, and usually what comes out is significantly sub-par, because it’s a complicated thing to get right. Tableau makes it easy, and is probably very cost effective relative to full-time developers taking months or years to do it yourself.

It’s one of the few products out there that I am really excited about, and I’ve seen some amazing stuff done with the software in a very minimal amount of time.

Tableau has a 15-day evaluation period if you want to try it out – it really should be longer, but whatever. Vertica has a community edition which you can use as a sort of long-term evaluation – it’s limited to 1TB of data and 3 cluster nodes. You can get a full-fledged enterprise evaluation from Vertica as well if you want to test all of the features.

I wrote some scripts at my current company to refresh/import about 150GB of data from our MySQL systems into Vertica each night. It is interesting to see MySQL struggle to read the data out while Vertica sits practically idle as it ingests it (I’d otherwise normally expect the writing of the data to be more intensive than the reading). To improve performance I compiled a few custom MySQL binaries that allowed me to run MySQL queries and pipe the results directly into Vertica (instead of writing tens of GBs to disk only to read it back again). The need for the custom binaries is that MySQL by default only supports tab-delimited results, which was not sufficient for this data set (I actually compiled 3-4 different binaries with different delimiters depending on the tables – managed to get ~99.99999999% of the rows in without further effort). I also wrote a quick perl script to fix some of the invalid data, like the invalid time stamps which MySQL happily allows but Vertica does not.
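The timestamp-fixing perl script isn’t shown here, but a minimal stand-in for that kind of filter could look like this (the zero-date pattern is the classic MySQL offender; the real script handled more cases than this):

```shell
#!/bin/sh
# data_fix.sh -- hypothetical stand-in for the data-fixing filter: reads rows
# on stdin, rewrites MySQL's "zero" dates (which Vertica rejects) to NULL,
# and passes everything else through untouched.
sed -e 's/0000-00-00 00:00:00/NULL/g' \
    -e 's/0000-00-00/NULL/g'
```

Dropped into the middle of the pipeline it adds almost no overhead, since sed streams rows through rather than buffering the whole result set.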

Sample command:

$MYSQL --raw --batch --quick --skip-column-names -u $DB_USERNAME --password="${DB_PASSWORD}" --host=${DB_SERVER} $SOURCE_DBNAME -e "select * from $MY_TABLE" | $DATA_FIX | vsql -w $VERTICA_PASSWORD -c "COPY ${VERTICA_SCHEMA}.${MY_TABLE} FROM STDIN DELIMITER '|' RECORD TERMINATOR '##' NULL AS 'NULL' DIRECT"
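The nightly refresh just loops a command like that over each table. A sketch of what such a wrapper might look like (the table names and the dry-run switch are invented here; the real script ran directly against live MySQL/Vertica hosts):

```shell
#!/bin/sh
# refresh_tables.sh -- hypothetical wrapper that runs the sample pipeline once
# per table. With DRY_RUN=1 (the default here) it only prints what it would
# refresh, since the real command needs live MySQL and Vertica endpoints.
VERTICA_SCHEMA=${VERTICA_SCHEMA:-public}
TABLES="users orders events"
DRY_RUN=${DRY_RUN:-1}

for MY_TABLE in $TABLES; do
    if [ "$DRY_RUN" = "1" ]; then
        echo "would refresh ${VERTICA_SCHEMA}.${MY_TABLE}"
    else
        # the pipeline from the sample command above goes here,
        # with $MY_TABLE substituted into both the SELECT and the COPY
        :
    fi
done
```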


Oh, and back to the topic of the post – Tableau IPO’d today (ticker: DATA) – as of last check it is up 55%.

So, congrats Tableau on a great IPO!


