TechOpsGuys.com Diggin' technology every day

June 15, 2014

HP Discover 2014: Software defined

Filed under: Datacenter, Events — Nate @ 12:26 pm

(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc…)

I have tried to be a vocal critic of the whole software defined movement, in that much of it is hype today, has been for a while, and will likely continue to be for a while yet. My gripe is not so much about the approach (the world of “software defined” sounds pretty neat), my gripe is about the marketing behind it that tries to claim we’re already there, and we are not, not even close.

I was able to vent a bit with the HP team(s) on the topic and they acknowledged that we are not there yet either. There is a vision, and there is a technology. But there aren’t a lot of products yet, at least not a lot of promising products.

Software defined networking is perhaps one of the more (if not the most) mature platforms to look at. Last year I ripped pretty hard into the whole idea, with what I thought were good points: basically the technology solves a problem I do not have and have never had. I believe most organizations do not have a need for it either (outside of very large enterprises and service providers). See the link for a very in depth 4,000+ word argument on SDN.

More recently HP tried to hop on the bandwagon of Software Defined Storage, which in their view is basically the StoreVirtual VSA. That is a product that, to me, doesn’t fit the scope of software defined; it is just a brand propped up onto a product that was already pretty old and already running in a VM.

Speaking of which, HP considers this VSA along with their ConvergedSystem 300 to be “hyper converged”, and at least the people we spoke to do not see a reason to acquire the likes of Simplivity or Nutanix (why are those names so hard to remember how to spell..). HP says most of the deals Nutanix wins are small VDI installations and aren’t seen as a threat; HP would rather go after the VCEs of the world. I believe Simplivity is significantly smaller.

I’ve never been a big fan of StoreVirtual myself, it seems like a decent product, but not something I get too excited about. The solutions that these new hyper converged startups offer sound compelling on paper at least for lower end of the market.

The future is software defined

The future is not here yet.

It’s going to be another 3-5 years (perhaps more). In the meantime customers will get drip-fed the technology in products from various vendors that can do software defined in a fairly limited way (relative to the grand vision anyway).

When hiring for a network engineer, many customers would rather opt to hire someone who has a few years of Python experience than more years of networking experience, because that is where they see the future in 3-5 years’ time.

My pushback to HP on that particular quote (not quoted precisely) is that that level of sophistication is very hard (and expensive) to hire for. A good comparative mark is hiring for something like Hadoop. It is very difficult to compete with the compensation packages of the largest companies, which offer $30-50k+ more than smaller (even billion $) companies.

So my point is the industry needs to move beyond the technology and into products. Having a requirement of knowing how to code is a sign of an immature product. Coding is great for extending functionality, but need not be a requirement for the basics.

HP seemed to agree with this, and believes we are on that track but it will take a few more years at least for the products to (fully) materialize.

HP Oneview

(here is the quick video they showed at Discover)

I’ll start off by saying I’ve never really seriously used any of HP’s management platforms (or anyone else’s for that matter). All I know is that they (in general, not HP specific) seem to be continuing to proliferate and fragment.

HP Oneview 1.1 is a product that builds on this promise of software defined. In the past five years of HP pitching converged systems, seeing the demo for Oneview was the first time I’ve ever shown even a little bit of interest in converged.

HP Oneview was released last October, I believe, and HP claims something along the lines of 15,000 downloads or installations. Version 1.10 was announced at Discover and offers some new integration points, including:

  • Automated storage provisioning and attachment to server profiles for 3PAR StoreServ Storage in traditional Fibre Channel SAN fabrics, and Direct Connect (FlatSAN) architectures.
  • Automated carving of 3PAR StoreServ volumes and zoning the SAN fabric on the fly, and attaching of volumes to server profiles.
  • Improved support for FlexFabric modules
  • Hyper-V appliance support
  • Integration with MS System Center
  • Integration with VMware vCenter Ops manager
  • Integration with Red Hat RHEV
  • Similar APIs to HP CloudSystem

Oneview is meant to be lightweight, and act as a sort of proxy into other tools, such as Brocade’s SAN manager in the case of Fibre Channel (myself I prefer Qlogic management, but I know Qlogic is getting out of the switch business). Though for several HP products, such as 3PAR and BladeSystem, Oneview seems to talk to them directly.

Oneview aims to provide a view that starts at the data center level and can drill all the way down to individual servers, chassis, and network ports.

However the product is obviously still in its early stages – it currently only supports HP’s Gen8 DL systems (G7 and Gen8 BL). HP is thinking about adding support for older generations, but their tone made me think they will drag their feet long enough that it’s no longer demanded by customers. Myself, the bulk of what I have in my environment today is G7; I only recently deployed a few Gen8 systems two months ago. Also all of my SAN switches are Qlogic (and I don’t use HP networking now), so Oneview functionality would be severely crippled if I were to try to use it today.

The product on the surface does show a lot of promise though, there is a 3 minute video introduction here.

HP pointed out you would not manage your cloud from this, but instead the other way around, cloud management platforms would leverage Oneview APIs to bring that functionality to the management platform higher up in the stack.
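
For a sense of what leveraging those APIs might look like from a tool higher up the stack, here is a minimal Python sketch against the Oneview REST API. The endpoint paths, header names and field names here are my assumptions from memory of the public docs, not something HP showed us, so treat it as illustrative only.

```python
# Hypothetical sketch: pull server profiles out of HP Oneview via its REST API.
# Endpoint paths, headers and field names are assumptions -- check the API docs
# for the Oneview version you actually run.
import requests

ONEVIEW = "https://oneview.example.com"   # placeholder appliance address

# Log in and grab a session token (assumed endpoint: /rest/login-sessions)
resp = requests.post(
    f"{ONEVIEW}/rest/login-sessions",
    json={"userName": "administrator", "password": "secret"},
    headers={"X-API-Version": "120"},      # assumed API version header
    verify=False,                          # lab appliance, self-signed cert
)
resp.raise_for_status()
token = resp.json()["sessionID"]

# List server profiles (assumed endpoint: /rest/server-profiles)
profiles = requests.get(
    f"{ONEVIEW}/rest/server-profiles",
    headers={"Auth": token, "X-API-Version": "120"},
    verify=False,
).json()

for p in profiles.get("members", []):
    print(p.get("name"), p.get("serverHardwareUri"))
```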

HP has renamed their Insight Control systems for vCenter and MS System Center to Oneview.

The goal of Oneview is automation that is reliable and repeatable. As with any such tool it seems like you’ll have to work within its constraints and go around it when it doesn’t do the job.

“If you fancy being able to deploy an ESX cluster in 30 minutes or less on HP Proliant Gen8 systems, HP networking and 3PAR storage then this may be the tool for you.” – me

The user interface seems quite modern and slick.

They expose a lot of functionality in an easy to use way, but one thing that struck me watching a couple of their videos is that it can still be made a lot simpler – there is a lot of jumping around to do different tasks. I suppose one way to address this might be broader wizards that cover multiple tasks in the order they should be done, or something.

HP Discover 2014: Helion (Openstack)

Filed under: Datacenter, Events — Nate @ 10:36 am

(Standard disclaimer HP covered my hotel and stuff while in Vegas etc etc…)

HP Helion

This is a new brand for HP’s cloud platform based on OpenStack. There is a commercial version and a community edition. The community edition is pure OpenStack without some of the fancier HP management interfaces on top of it.

“The easiest thing about OpenStack is setting it up – organizations spend the majority of the time simply keeping it running after it is set up.”

HP admits that OpenStack has a long way to go before it is considered a mature enterprise application stack. But they do have experience running a large OpenStack public cloud and have hundreds of developers working on the product. In fact HP says that most OpenStack community projects these days are basically run by HP; while other large contributors (even Rackspace) have pulled back on resource allocation to the project, HP has gone in full steam ahead.

HP has many large customers who specifically asked HP to get involved in the project and to provide a solution for them that can be supported end to end. I must admit the prospect does sound attractive, being that you can get HP storage, servers and networking all battle tested and ready to run this new cloud platform – the OpenStack platform itself is by far the biggest weak point today.

It is not there yet though; HP does offer professional services for the customer’s entire life cycle of OpenStack deployment.

One key area that has been weak in OpenStack which recently made the news, is the networking component Neutron.

“[..] once you get beyond about 50 nodes, Neutron falls apart”

So to stabilize this component, HP integrated support for their SDN controller into the lower levels of Neutron. This allowed it to scale much better while maintaining complete compatibility with existing APIs.

That is something HP is doing in several cases. They emphasize very strongly they are NOT building a proprietary solution, and they are NOT changing any of the APIs (they are helping change them upstream) in a way that would break compatibility. They are however adding/moving some things around beneath the API level to improve stability.
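
To illustrate the point about not changing the APIs: standard OpenStack tooling should work against Helion exactly as it does against any other OpenStack cloud. Here is a minimal sketch using python-novaclient, with placeholder credentials, endpoint, image and flavor names (the client’s exact signature varied a bit between releases of that era):

```python
# Hypothetical sketch: booting an instance on an OpenStack (Helion) cloud with
# the stock python-novaclient -- no vendor-specific API calls involved.
# Auth URL, credentials, image and flavor names are all placeholders.
from novaclient import client

nova = client.Client(
    "2",                                   # compute API version
    "demo-user",
    "demo-password",
    "demo-tenant",
    "https://keystone.example.com:5000/v2.0",
)

flavor = nova.flavors.find(name="m1.small")
image = nova.images.find(name="ubuntu-14.04")

server = nova.servers.create(
    name="helion-test-01",
    image=image,
    flavor=flavor,
)
print(server.id, server.status)
```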

The initial cost for the commercial version is $1,400/server/year, which is quite reasonable; I assume that includes basic support. The commercial version is expected to become generally available in the second half of 2014.

Major updates will be released every six months, and minor updates every three months.

Very limited support cycle

One thing that took almost everyone in the room by surprise is the support cycle for this product. Normally enterprise products have support for 3-5 years; Helion has support for a maximum of 18 months. HP says 12 of those months are general support and the last six are specifically geared towards migration to the next version, which they say is not a trivial task.

I checked Red Hat’s policy, as they are another big OpenStack distributor, and their policy is similar – they had one year of support on version three of their product and have one and a half years on version four (the current version). Despite the version numbers, apparently version three was the first release to the public.

So given that, it should just reinforce the fact that OpenStack is not a mature platform at this point, and it will take some time before it is – probably another 2-3 years at least. They only recently got the feature that allows for upgrading the system.

HP does offer a fully integrated ConvergedSystem with Helion, though despite my best efforts I am unable to find a link that specifically mentions Helion or OpenStack.

HP is supporting ESXi and KVM as the initial hypervisors in Helion. OpenStack itself supports a much wider variety, but HP is electing to start with those two anyway. Support for Hyper-V will follow shortly.

HP also offers full indemnification from legal issues as well.

This site has a nice diagram of what HP is offering, not sure if it is an HP image or not so sending you there to see it.

Conclusion

My own suggestion is to steer clear of Openstack for a while yet, give it time to stabilize, don’t deploy it just because you can. Don’t deploy it because it’s today’s hype.

If you really, truly need this functionality internally then it seems like HP has by far the strongest offerings from a product and support standpoint (they are willing and able to do everything from design to deployment to operationally running it). Keep in mind depending on the scale of deployment you may be constantly planning for the next upgrade (or having HP plan it for you).

I would argue that the vast majority of organizations do not need OpenStack (in its current state) and would do themselves a favor by sticking to whatever they are already using until it’s more stable. Your organization may have pains running whatever you’re running now, but you’re likely to just trade those pains for other pains going the OpenStack route right now.

When will it be stable? I would say a good indicator will be the support cycle: when HP (or Red Hat) starts having a full 3 year support cycle on the platform (with back ported fixes etc), that means it’s probably hit a good milestone.

I believe OpenStack will do well in the future, it’s just not there yet today.

May 14, 2014

Hooterpalooza 2014

Filed under: Events, Random Thought — Nate @ 7:18 pm

(if you prefer you can skip my review and jump straight to the pictures, usual disclaimers apply)

This isn’t directly related to tech but I wanted to write about it a bit, since it was quite a blast. I just went by myself though I made a few new friends.

I’ll apologize now for all misspellings of names, didn’t get any written stuff so just had to wing it.

I learned about it a couple of weeks ago, though this was, I believe, their 8th annual event. I purchased a VIP ticket ($100) which included close to stage seating as well as a back stage pass (which was outdoors in 95 degree heat!). The venue was the Saddle Rack in Fremont, CA. The staff there were very friendly and quick to serve out drinks, of which I had many.

It had representatives from the four Bay Area Hooters locations, 31 Hooters girls in all; last year I was told there were quite a few more. There were a handful of judges, and the only ones I remember were a couple of radio DJs from 107.7 The Bone.

I have never been to this kind of event before and I wasn’t sure what to expect, but my expectations were exceeded; it ended around 9:30PM and it was packed with entertainment.

One of the hosts was Amanda, I think (someone behind me kept yelling her name anyway); she was quite good as well.

HooterPalooza hostess

Madman’s Lullaby, which is a local band (from Campbell it seems), played here for quite a while and I was very impressed with their talent (I’ve never followed local bands before). They had a very polished performance, by far the best live performance I have seen/heard in a club setting (granted I haven’t seen many, I usually avoid places with live music as it is often too loud – wasn’t the case here). I purchased two of their CDs (professionally made with case, shrink wrap etc, no CD-R stuff here); they recently got signed by a record label (Kivel Records). The album is called Unhinged.

Dave Friday – Lead Vocals for Madman’s Lullaby

Madman’s Lullaby performing at Hooterpalooza 2014

I had my phone, and then later went and got my real camera. The lighting in the place was good for watching in person but made it difficult to take pictures (without flash), most of which were washed out by the bright spotlight. Auto focus was also very slow due to the low surrounding lighting. Video recording was more successful and I was able to take snapshots from the video frames.

I’ll put most of my pictures here if you want to see more in depth coverage. Here is the video of the top five contestants.

I live and work in San Bruno, CA – and the Hooters here is roughly two blocks from my apartment which is convenient. So of course I wanted the San Bruno girls to win.

Surrender the booty winner

During intermission there was a Surrender the booty contest which was very entertaining, and fortunately a San Bruno Hooters girl won that contest so congrats to Dominique.

Winners of Hooterpalooza 2014

HooterPalooza - Top 3

From right to left:

  1. First place went to Lexi from Hooters of Dublin, CA (?? not sure, the voice was difficult to understand)
  2. Second place went to Ariana from Hooters of San Bruno, CA
  3. Third place went to Brittney from Hooters of Dublin, CA

(Fifth place went to San Bruno as well)

For sure the most fun I’ve had (in the bay area) since I moved here almost three years ago. Looking forward to next year’s event!

July 30, 2013

HP Storage Tech Day – 3PAR

Filed under: Events, Storage — Nate @ 2:04 am

Before I forget again..

Travel to HP Storage Tech Day/Nth Generation Symposium was paid for by HP; however, no monetary compensation is expected nor received for the content that is written in this blog.

So, HP hammered us with a good seven to eight hours of storage related stuff today, the bulk of the morning was devoted to 3PAR and the afternoon covered StoreVirtual, StoreOnce, StoreAll, converged management and some really technical bits from HP Labs.

This post is all about 3PAR. They covered other topics of course but this one took so long to write I had to call it a night, will touch on the other topics soon.

I won’t cover everything since I have covered a bunch of this in the past. I’ll try not to be too repetitive…

I tried to ask as many questions as I could, they answered most .. the rest I’ll likely get with another on site visit to 3PAR HQ after I sign another one of those Nate Disclosure Agreements (means I can’t tell you unless your name is Nate). I always feel guilty about asking questions directly to the big cheeses at 3PAR. I don’t want to take up any of their valuable time…

There wasn’t anything new announced today of course, so none of this information is new, though some of it is new to this blog, anyway!

I suppose if there is one major take away for me for this SSD deep dive, is the continued insight into how complex storage really is, and how well 3PAR does at masking that complexity and extracting the most of everything out of the underlying hardware.

Back when I first started on 3PAR in late 2006, I really had no idea what real storage was. As far as I was concerned one dual controller system with 15K disks was the same as the next. Storage was never my focus in my early career (I did dabble in a tiny bit of EMC Clariion (CX6/700) operations work – though when I saw the spreadsheets and visios the main folks used to plan and manage I decided I didn’t want to get into storage), it was more servers, networking etc.

I learned a lot in the first few years of using 3PAR, and to a certain extent you could say I grew up on it. As far as I am concerned being able to wide stripe, or have mesh active controllers is all I’ve ever (really) known. Sure since then I have used a few other sorts of systems. When I see architectures and processes of doing things on other platforms I am often sort of dumbfounded why they do things that way. It’s sometimes not obvious to me that storage used to be really in the dark ages many years ago.

Case in point below, there’s a lot more to (efficient, reliable, scalable, predictable) SSDs than just tossing a bunch of SSDs into a system and throwing a workload at them..

I’ve never tried to proclaim I am a storage expert here(or anywhere) though I do feel I am pretty adept at 3PAR stuff at least, which wasn’t a half bad platform to land on early on in the grand scheme of things. I had no idea where it would take me over the years since. Anyway, enough about the past….

New to 3PAR

Still the focus of the majority of HP storage related action these days, they had a lot to talk about. None of this initial stuff is there yet (up until the 7450 stuff below); it is just what they are planning for at some point in the future (no time frames on anything that I recall hearing).

Asynchronous Streaming Replication

Just a passive mention of this on a slide, nothing in depth to report about, but I believe the basic concept is that instead of having asynchronous replication running on snapshots that kick off every few minutes (perhaps every five minutes), the replication process would run much more frequently (though still not synchronous), perhaps as frequently as every 30 seconds or something.

I’ve never used 3PAR replication myself. Never needed array based replication really. I have built my systems in ways that don’t require array based replication, in part because I believe it makes life easier (I don’t build them specifically to avoid array replication, it’s merely a side effect), and of course the license costs associated with 3PAR replication are not trivial in many circumstances (especially if you only need to replicate a small percentage of the data on the system). The main place where I could see leveraging array based replication is if I was replicating a large number of files; doing this at the block layer is often times far more efficient (and much faster) than trying to determine changed bits from a file system perspective.

I wrote/built a distributed file transfer architecture/system for another company a few years ago that involved many off the shelf components (highly customized) and was responsible for replicating several TB of data a day between WAN sites. It was an interesting project and proved to be far more reliable and scalable than I could have hoped for initially.

Increasing Maximum Limits

I think this is probably out of date, but it’s the most current info I could dig up on HP’s site. Though this dates back to 2010. These pending changes are all about massively increasing the various supported maximum limits of various things. They didn’t get into specifics. I think for most customers this won’t really matter since they don’t come close to the limits in any case(maybe someone from 3PAR will read this and send me more up to date info).

3PAR OS 2.3.1 supported limits (2010)

The PDF says updated May 2013, though the change log says last update is December. HP has put out a few revisions to the document(which is the Compatibility Matrix) which specifically address hardware/software compatibility, but the most recent Maximum Limits that I see are for what is now considered quite old – 2.3.1 release – this was before their migration to a 64-bit OS (3.1.1).

Compression / De-dupe

They didn’t talk about it, other than mention it on a slide, but this is the first time I’ve seen HP 3PAR publicly mention the terms. Specifically they mention in-line de-dupe for file and block, as well as compression support. Again, no details.

Personally I am far more interested in compression than I am de-dupe. De-dupe sounds great for very limited workloads like VDI(or backups, which StoreOnce has covered already). Compression sounds like a much more general benefit to improving utilization.

Myself I already get some level of “de duplication” by using snapshots. My main 3PAR array runs roughly 30 MySQL databases entirely from read-write snapshots. Part of the reason for this is to reduce duplicate data; another part is to reduce the time it takes to produce that duplicate data for a database (a fraction of a second as opposed to several hours to perform a full data copy).

File + Object services directly on 3PAR controllers

No details here other than just mentioning having native file/object services on top of the existing block services. They did mention they believe this would fit well in the low end segment; they don’t believe it would work well at the high end since things can scale in different ways there. Obviously HP has file/object services in the IBRIX product (though HP did not get into specifics on what technology would be used other than taking tech from several areas inside HP), and a 3PAR controller runs Linux after all, so it’s not too far-fetched.

I recall several years ago, back when Exanet went bust, I was trying to encourage 3PAR to buy their assets as I thought it would have been a good fit. Exanet folks mentioned to me that 3PAR engineering was very protective of their stuff and were very paranoid about running anything other than the core services on the controllers; it is sensitive real estate after all. With more recent changes such as supporting the ability to run their reporting software (System Reporter) directly on the controller nodes, I’m not sure if this is something engineering volunteered to do themselves or not. Both approaches have their strengths and weaknesses obviously.

Where are 3PAR’s SPC-2 results?

This is a question I asked them (again). 3PAR has never published SPC-2 results. They love to tout their SPC-1, but SPC-2 is not there……. I got a positive answer though: Stay tuned.  So I have to assume something is coming.. at some point. They aren’t outright disregarding the validity of the test.

In the past 3PAR systems have been somewhat bandwidth constrained due to their use of PCI-X. Though the latest generation of stuff (7xxx/10xxx) all leverage PCIe.

The 7450 tops out at 5.2 Gigabytes/second of throughput, a number which they say takes into account overhead of a distributed volume system (it otherwise might be advertised as 6.4 GB/sec as a 2-node system does 3.2GB/sec). Given they admit the overhead to a distributed system now, I wonder how, or if, that throws off their previous throughput metrics of their past arrays.

I have a slide here from a few years ago that shows a 8-controller T800 supporting up to 6.4GB/sec of throughput, and a T400 having 3.2GB/sec (both of these systems were released in Q3 of 2008). Obviously the newer 10400 and 10800 go higher(don’t recall off the top of my head how much higher).

This compares to published SPC-2 numbers from IBM XIV at more than 7GB/sec, as well as HP P9500/HDS VSP at just over 13GB/sec.

3PAR 7450

Announced more than a month ago now, the 7450 is of course the purpose built flash platform, which is, at the moment, all SSD.

Can it run with spinning rust?

One of the questions I had was that I noticed the 7450 is currently only available in an SSD-only configuration. No spinning rust is supported. I asked why this was and the answer was pretty much what I expected. Basically they were getting a lot of flak for not having something that was purpose built. So at least in the short term, the decision not to support spinning rust is purely a marketing one. The hardware and software are the same (other than being more beefy in CPU & RAM) as the other 3PAR platforms. The software is identical as well. They just didn’t want to give people more excuses to label the 3PAR architecture as something that wasn’t fully flash ready.

It is unfortunate that the market has compelled HP to do this, as other workloads would still stand to gain a lot especially with the doubling up of data cache on the platform.

Still CPU constrained

One of the questions asked by someone was about whether or not the ASIC is the bottleneck in the 7450 I/O results. The answer was a resounding NO – the CPU is still the bottleneck even at max throughput. So I followed up with why HP chose to go with 8 core CPUs instead of the 10-core parts which Intel of course has had for some time. You know how I like more cores! The answer was two-fold. The primary reason was cooling (the enclosure as is has two sockets, two ASICs, two PCIe slots, 24 SSDs, 64GB of cache and a pair of PSUs in 2U). The second answer was the system is technically Ivy Bridge capable, but they didn’t want to wait around for those chips to launch before releasing the system.

They covered a bit about the competition being CPU limited as well, especially with data services, and the amount of I/O per CPU cycle being much lower on competing systems vs 3PAR and the ASIC. The argument is an interesting one, though at the end of the day the easy way to address that problem is to throw more CPUs at it; they are fairly cheap after all. The 7000-series is really dense so I can understand the lack of ability to support a pair of dual socket systems within a 2U enclosure along with everything else. The 10400/10800 are dual socket (though an older generation of processors).

TANGENT TIME

I really have not much cared for Intel’s code names for their recent generations of chips. I don’t follow CPU stuff all that closely these days (haven’t for a while), but I have to say it’s mighty easy to confuse code name A with B. Which is newer? I have to look it up. every. single. time.

I believe in the AMD world (AMD seems to have given up on the high end, sadly), while they have code names, they have numbers as well. I know 6200 is newer than 6100 ..6300 is newer than 6200..it’s pretty clear and obvious. I believe this goes back to Intel and them not being able to trademark the 486.

On the same note, I hate Intel continuing to re-use the i7 name in laptops. I have a Core i7 laptop from 3 years ago, and guess what the top end today still seems to be? I think it’s i7 still. Confusing. again.
</ END TANGENT >

Effortless SSD management of each SSD with proactive alerts

I wanted to get this in before going deeper into the cache optimizations since that is a huge topic. But the basic gist of this is they have good monitoring of the wear of each SSD in the platform (something I think was available on LeftHand a year or two ago); in addition to that, the service processor (a dedicated on site appliance that monitors the array) will alert the customer when an SSD is 90% worn out. When the SSD gets to 95% the system proactively fails the drive and migrates data off of it (I believe). They raised a statistic that was brought up at Discover: something along the lines of 95% of all SSDs deployed in 3PAR systems are still in the field – very few have worn out. I don’t recall anyone mentioning the # of SSDs that have been deployed on 3PAR but it’s not an insignificant number.

SSD Caching Improvements in 3PAR OS 3.1.2

There have been a number of non trivial caching optimizations in the 3PAR OS to maximize performance as well as the life span of SSDs. Some of these optimizations also benefit spinning rust configurations – I have personally seen a noticeable drop in back end disk response time since I upgraded to 3.1.2 back in May (it was originally released in December), along with, I believe, better response times under heavy load on the front end.

Bad version numbers

I really dislike 3PAR’s version numbering, they have their reasons for doing what they do, but I still think it is a really bad customer experience. For example going from 2.2.4 to 2.3.1 back in what was it 2009 or 2010. The version number implies minor update, but this was a MASSIVE upgrade.  Going from 2.3.x to 3.1.1 was a pretty major upgrade too (as the version implied).  3.1.1 to 3.1.2 was also a pretty major upgrade. On the same note the 3.1.2 MU2 (patch level!) upgrade that was released last month was also a major upgrade.

I’m hoping they can fix this in the future, I don’t think enough effort is made to communicate major vs minor releases. The version numbers too often imply minor upgrades when in fact they are major releases. For something as critical as a storage system I think this point is really important.

Adaptive Read Caching

3PAR Adaptive Read Caching for SSD (the extra bits being read there from the back end are to support the T10 Data Integrity Feature- available standard on all Gen4 ASIC 3PAR systems, and a capability 3PAR believes is unique in the all flash space for them)

One of the things they covered with regards to caching with SSDs is that the read cache is really not as effective (vs with spinning rust); because the back end media is so fast, there is significantly less need for caching reads. So in general, significantly more cache is used for writes.

For spinning rust 3PAR reads a full 16kB of data from the back end disk regardless of the size of the read on the front end (e.g. 4kB). This is because the operation to go to disk is so expensive already and there is no added penalty to grab the other 12kB while you’re grabbing the 4kB you need. The next I/O request might request part of that 12kB and you can save yourself a second trip to the disk when doing this.

With flash things are different. Because the media is so fast, you are much more likely to become bandwidth constrained rather than IOPS constrained. So if for example you have 500,000 4kB read IOPS on the front end, and you’re performing those same reads as 16kB IOPS on the back end, that is 4x more bandwidth required to perform those operations. And because the flash is so fast, there is significantly less penalty to go back to the SSD again and again to retrieve those smaller blocks. It also improves latency of the system.

So in short, read more from disks because you can and there is no penalty, read only what you need from SSDs because you should and there is (almost) no penalty.
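
A quick back-of-the-envelope using the example numbers above (illustrative only, not a benchmark result):

```python
# Back-of-the-envelope: front-end 4kB reads served from SSD, comparing a
# back end that always reads full 16kB pages vs one that reads only the 4kB
# actually requested. Numbers are illustrative only.
front_end_iops = 500000
full_page_kb = 16      # read per I/O when pulling whole pages
exact_read_kb = 4      # read per I/O when reading only what's needed

def backend_gbps(iops, io_kb):
    """Back-end bandwidth in GB/sec for a given IOPS rate and I/O size."""
    return iops * io_kb / (1024.0 * 1024.0)

print("16kB back-end reads: %.2f GB/sec" % backend_gbps(front_end_iops, full_page_kb))
print(" 4kB back-end reads: %.2f GB/sec" % backend_gbps(front_end_iops, exact_read_kb))
# ~7.6 GB/sec vs ~1.9 GB/sec -- a 4x difference, which on an all-SSD box
# is the difference between being bandwidth-bound and having headroom.
```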

Adaptive Write Caching

With writes the situation is similar to reads, to maximize SSD life span, and minimize latency you want to minimize the number of write operations to the SSD whenever possible.

With spinning rust again 3PAR works with 16kB pages; if a 4kB write comes in then the full 16kB is written to disk, again because there is no additional penalty for writing the 16kB vs writing 4kB. Unlike with SSDs, you’re not likely to be bandwidth constrained when it comes to disks.

With SSDs, the optimizations they perform, again to maximize performance and reduce wear, is if a 4kB write comes in, a 16kB write occurs to the cache, but only the 4kB of changed data is committed to the back end.

If I recall right they mentioned this operation benefits RAID 1 (anything RAID 1 in 3PAR is RAID 10, same for RAID 5 – it’s RAID 50) significantly more than it benefits RAID 5/6, but it still benefits RAID 5/6.
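
A similar back-of-the-envelope for the write path shows why committing only the changed 4kB matters for wear; the write rate below is a made up number, purely for illustration, not anything HP published:

```python
# Illustrative only: bytes actually committed to SSD per day for a stream of
# small random writes, full-page commits vs changed-data-only commits.
writes_per_sec = 50000         # hypothetical front-end 4kB write rate
seconds_per_day = 86400
page_kb = 16                   # cache page size
changed_kb = 4                 # data that actually changed

def tb_written_per_day(io_kb):
    return writes_per_sec * seconds_per_day * io_kb / (1024.0 ** 3)

print("Full 16kB page commits: %.1f TB/day" % tb_written_per_day(page_kb))
print("Changed 4kB only:       %.1f TB/day" % tb_written_per_day(changed_kb))
# Committing only the changed data writes a quarter of the bytes, which
# translates directly into less flash wear and lower back-end bandwidth.
```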
 

Autonomic Cache Offload

Here the system changes the frequency at which it flushes cache to back end media based on utilization. I think this plays a lot into the next optimization.

Multi Tenant I/O Processing

3PAR has long been about multi tenancy in their systems. The architecture lends itself well to running in this mode, though it wasn’t perfect; I believe the addition of Priority Optimization, which was announced late last year and finally released last month, fills the majority of the remainder of that hole. I have run “multi tenant” 3PAR systems since the beginning. Now to be totally honest the tenants were all me, just different competing workloads, whether it was disparate production workloads or a mixture of production and non production (and yes, in all cases they ran on the same spindles). It wasn’t nearly as unpredictable as, say, a service provider with many clients running totally different things; that would sort of scare me on any platform. But there were still many times where rogue things (especially horrible SQL queries) overran the system (especially write cache). 3PAR handles it as well as, if not better than, anyone else, but every system has its limits.

Front end operations

The cache flushing process to back end media is now multi threaded. This benefits both SSD as well as existing spinning rust configurations. Significantly less (no?) locking is involved when flushing cache to disk.

Here is a graph from my main 3PAR array, you can see the obvious latency drop from the back end spindles once 3.1.2 was installed back in May (again the point of this change was not to impact back end disk latency as much as it was to improve front end latency, but there is a significant positive behavior change post upgrade):

Latency Change on back end spinning rust with 3.1.2

There was a brief time when latency actually went UP on the back end disks. I was concerned at first but later determined this was the disk defragmentation processes running (again with improved algorithms); before the upgrade they took FAR too long, post upgrade they completed a big backlog in a few days and latency returned to low levels.

Multi Tenant Caching

Back end operations

On the topic of multi tenancy with SSDs an interesting point was raised which I had never heard of before. They even called it out as being a problem specific to SSDs, one that does not exist with spinning rust. Basically the issue is that if you have two workloads going to the same set of SSDs, one of them issuing large I/O requests (e.g. a sequential workload), and the other issuing small I/O requests (e.g. 4kB random reads), the smaller I/O requests will often get stuck behind the larger ones, causing increased latency for the app using smaller I/O requests.

To address this, the large 128kB I/Os are divided up into four 32kB I/O requests and issued in parallel, so the other workload’s smaller I/Os can be serviced in between. I suppose I can get clarification, but I assume for a sequential read operation with a 128kB I/O request there must not be any additional penalty for grabbing the 32kB chunks, vs splitting it up even further into even smaller I/Os.
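
Conceptually the splitting looks something like the toy sketch below – my own illustration of the queueing behavior described, not anything resembling 3PAR code:

```python
# Toy illustration of why splitting large I/Os helps small-I/O latency:
# a 128kB request is broken into 32kB chunks so 4kB requests from another
# tenant can be serviced in between, instead of waiting behind the full 128kB.
CHUNK_KB = 32

def split_io(offset_kb, length_kb, chunk_kb=CHUNK_KB):
    """Break one large I/O into chunk-sized sub-requests."""
    subs = []
    pos = offset_kb
    while pos < offset_kb + length_kb:
        size = min(chunk_kb, offset_kb + length_kb - pos)
        subs.append((pos, size))
        pos += size
    return subs

# A 128kB sequential read becomes four 32kB sub-requests...
print(split_io(0, 128))        # [(0, 32), (32, 32), (64, 32), (96, 32)]
# ...which the scheduler can interleave with another tenant's 4kB reads
# rather than making them queue behind a single 128kB transfer.
```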

Maintaining performance during media failures

3PAR has always done wide striping and sub disk distributed RAID, so the rebuild times are faster, the latency is lower and all around things run better (no idle hot spares) that way vs the legacy designs of the competition. The system again takes additional steps now to maximize SSD life span by optimizing the data reads and writes under a failure condition.

HP points out that SSDs are poor at large sequential writes, so as mentioned above they divide the 128kB writes that would be issued during a rebuild operation (since that is largely a sequential operation) into 32kB I/Os again to protect those smaller I/Os from getting stuck behind big I/Os.

They also mentioned that during one of the SPC-1 tests (not sure if it was 7400 or 7450) one of the SSDs failed and the system rebuilt itself. They said there was no significant performance hit(as one might expect given experience with the system) as the test ran. I’m sure there was SOME kind of hit especially if you drive the system to 100% of capacity and suffer a failure. But they were pleased with the results regardless. The competition would be lucky to have something similar.

What 3PAR is not doing

When it comes to SSDs and caching something 3PAR is not doing, is leveraging SSDs to optimize back end I/Os to other media as sequential operations. Some storage startups are doing this to gain further performance out of spinning rust while retaining high random performance using SSD. 3PAR doesn’t do this and I haven’t heard of any plans to go this route.

Conclusion

I continue to be quite excited about the future of 3PAR, even more so than before the acquisition. HP has been able to execute wonderfully on the technology side of things. Sales, from all accounts at least on the 7000 series, are still quite brisk. Time will tell if things hold up after EVA is completely off the map, but I think they are doing many of the right things. I know even more of course but can’t talk about it here (yet)!!!

That’s it for tonight; at ~4,000 words (that number keeps going up, I should go to bed) this took three hours or more to write and proofread, and it’s also past 2AM. There is more to cover, but the 3PAR stuff was obviously what I was most interested in. I have a few notes from the other sessions but they will pale in comparison to this.

Today I had a pretty good idea on how HP could improve its messaging around whether to choose 3PAR or StoreVirtual for a particular workload. The messaging to date, to me, has been very confusing and conflicting (HP tried to drive home a point about single platforms and reducing complexity, something this dual message seems to conflict with). I have been communicating with HP off and on for the past few months, and today out of the blue I came up with this idea which I think will help clear the air. I’ll touch on this soon when I cover the other areas that were talked about today.

Tomorrow seems to be a busy day, apparently we have front row seats, and the only folks with power feeds. I won’t be “live blogging”(as some folks tend to love to do), I’ll leave that to others. I work better at spending some time to gather thoughts and writing something significantly longer.

If you are new to this site you may want to check out a couple of these other articles I have written about 3PAR(among the dozens…)

Thanks for reading!

July 24, 2013

Going to Nth Symposium 2013 – Anaheim, CA

Filed under: Events — Nate @ 9:28 am

I have been MIA for a while, sorry about that. I hope to write more soon; there are a few things I want to touch on, though I have been fairly busy (and lazy, I admit) and haven’t gotten around to writing.

I wanted to mention though that I’ll be going to the Nth Symposium 2013 down in Anaheim, CA next week. HP invited me and I was able to take them up on this one. This is the first time I’ve accepted anything like this where HP is covering travel and lodging costs for myself and a dozen or so other bloggers etc. They say I’m under no obligation to write anything, good or bad, about anything I see. I imagine there will be at least a couple of posts.. I’m way late writing about the new 3PAR flash stuff; I’ve been busy enough that I haven’t gotten the briefing from them on the details. I’ll get that next week, and be able to ask any questions I may have on it. Oh, I have to remember HP gave me a special disclaimer to put on those blog posts to say how they paid my way, to avoid FTC problems..

While the Nth event does have costs, they seem to waive them if you’re qualified (you have to work in IT).

There is also an HP Tech Day on Monday I believe (private event?); I was told that was a sort of mini Discover.

I’ve never been much for IT conferences(or any event with lots of people), though I hope this one will be better than past ones I have attended, since at least some of the topics are more my pace, and I’ll know a few people there.

I’ll be in Orange County all of next week (driving down on Sunday, leaving Saturday or the following Sunday). HP is covering a few of the days that the conference is on; the rest is out of my pocket. I lived in OC for the latter half of the 90s, so I have some friends there, and the bulk of my immediate family resides there as well.

If you’re in the area and want to get some drinks drop me a line.. I don’t know what my schedule is yet, other than Thur/Fri night I am available for sure (Mon-Wed nights may be partly taken up by HP, I don’t know).

November 6, 2012

Hazards of Multi Master MySQL Webinar

Filed under: Events — Nate @ 8:27 am

Stupid me, here I was thinking if you run MySQL in multi master mode it should not have issues with writes coming in to multiple locations. I’ve never heard of any issues (at the same time it’s been a while since I’ve heard anyone talk about running multi master MySQL themselves), but apparently there are some, as this guy is indicating; he has a webinar about it on November 15th. It sounds interesting to me, though I’ll be on the road that day so won’t be able to listen in; hopefully he posts the data afterwards.

At my current organization we do have multi master MySQL, though we have yet to run them active-active (with writes going to both) for more than a short period at a time (usually just during fail-over events – “oh my god MySQL is about to crash, fail over!”). The load balancing is handled by our Netscalers and their MySQL-aware load balancing. Overall the load is low enough (avg under 10% CPU and avg 75 write IOPS/DB – all reads being served from RAM) that I don’t think it’d provide any performance benefit to us anyway.
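
For what it’s worth, the sort of write-rate figure quoted above can be sampled from MySQL itself. A rough sketch using PyMySQL and the InnoDB status counters (host and credentials are placeholders, and which counter best approximates “write IOPS” is debatable):

```python
# Rough sketch: estimate MySQL write activity by sampling the
# Innodb_data_writes counter over an interval. Host and credentials are
# placeholders; this is an approximation, not the same thing as array IOPS.
import time
import pymysql

def innodb_data_writes(conn):
    with conn.cursor() as cur:
        cur.execute("SHOW GLOBAL STATUS LIKE 'Innodb_data_writes'")
        _, value = cur.fetchone()
        return int(value)

conn = pymysql.connect(host="db-master.example.com", user="monitor",
                       password="secret")

interval = 60
before = innodb_data_writes(conn)
time.sleep(interval)
after = innodb_data_writes(conn)

print("approx write ops/sec: %.1f" % ((after - before) / float(interval)))
```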

From the MySQL performance blog post

This talk gives an overview and concrete examples of how writing across dual-masters can and will break your assumptions about ACID compliance, how you can work around it, and some alternative solutions that are on the market today that attempt to address this problem.  This will be a great session for DBAs just getting into this problem space, are moving from hot-cold architectures to hot-warm or even hot-hot, and even for developers to get a sense of the difficultly of this problem.

August 19, 2011

Touchpad fire sale starts tomorrow

Filed under: Events, General — Nate @ 4:57 pm

UPDATED HP has apparently given their partners notice to liquidate everything – 16GB Touchpads will go for $99, and 32GB go for $149 according to Pre Central.

 

HP Touchpad

Sale starts tomorrow apparently, I will go get some 16GBs myself, 3? 4?  5? Not sure yet..

Even if you use it for nothing other than web browsing (or watching video) it’s a nice device. Pair it with a Touchstone (I love the Touchstone technology) and you got yourself a high resolution picture frame that can do far more for about the same cost as a 1024×768 standalone frame. I assume the accessories for the Touchpad will be dirt cheap as well.

Beats the hell out of a fire sale Android tablet I saw on Daily Steals recently.

Apparently some users in Europe received the Pre3 already, so they have manufactured some; hopefully I can get my hands on a couple.

UPDATE – I visited about a dozen stores in the bay area and none of them had any Touchpads in stock this morning. Best Buy is refusing to participate in the fire sale and I think supply is generally constrained. I suspect mostly what is happening is most stores probably have single digit numbers of Touchpads in stock; I’d be surprised if in many cases employees weren’t buying some of those right away, and then all it takes is one or two customers to consume the remainder of the stock. More than one place I went to today had others like me racing around the area trying to find stock.

Thought it was kind of funny, one of the office supply stores – forgot which – had little advertisements with the reduced pricing (though for them it was $129 for the 16GB), and said good from now till the end of the year while supplies last – only the supplies didn’t even last through the first morning.

I read many similar stories on Pre Central in other parts of the country. Then I saw a link mentioning the HP SMB store had Touchpads available for order (the “Home” store lists them as out of stock).

So I tried to order — and was confronted with out of memory errors on the HP servers

again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and

I’ll save you the next 100 lines of that, and just say I was pretty persistent, working for more than two hours to try to get an order through. Not long after I started I found that Firefox and Seamonkey, which I normally use in Linux, were not rendering the billing address page on HP’s site; I suspect the page was somehow broken because HP’s servers were in a hissy fit at the moment. So I tried Opera – fortunately Opera was able to render the page. After trying again and again and again and.. I finally got an order confirmed for 4 x 16GB Touchpads for $99/ea.

I hope HP honors it, I suspect they will have enough to ship given the massive numbers of units they will be receiving from Best buy (apparently something like 250,000 units).

Woohoo.

Though earlier today I was really thinking to myself it really would have been nice if HP had given preferential treatment to those early adopters, giving them the opportunity to buy more before the rest of the public. I suppose there wasn’t any time, and possibly wasn’t anyone around that cared enough to try to arrange something like that. I just hate the idea that small numbers of people may be able to buy large numbers of Touchpads (I saw one report of a person buying 35 of them at once) and try to profit off of it on eBay (reminds me of a lady that mortgaged her home and tried to buy every iPhone at a store one day, only to be told she was only allowed to buy one). For me I will use one of the four for work use only, at least two more will likely be gifts for family (I think they will like them a lot), and the 4th I haven’t figured out yet, probably a spare unit or something, since HP apparently is not going to honor warranties on these units (though I believe they are honoring them on the ones sold at full price).

Since I was in such a rush to order – I’m contemplating trying to go through the process again to get a 32GB touchpad – so I can return it to best buy for $599 (what I paid for my current TP). I thought about just returning my current one but migrating the data and stuff would be a pain – and I know my current TP works great – there’s always a chance that a replacement might have a glitch.

Best buy extended the return period for Touchpad to 60 days, which gives me about 11 days if I wanted to return mine.

UPDATE 2 Just checked and HP charged my CC .. so I should be good to go

UPDATE 3 Best Buy changed their policy again and is allowing customers to buy 1 per customer while supplies last; reports seem to be stores selling out within minutes, or stores haven’t gotten the word yet (Best Buy removed the SKUs from their systems and are having to put them back in again). See more info here. Looks like they’ll refund the difference in price for my launch day 32GB TP, just need to find that receipt…

April 14, 2011

Palm Pixi Plus $40 w/no contract today only

Filed under: Events — Nate @ 11:25 am

Just came across this on PreCentral, seems like a good deal to me, so I ordered two. I don’t know if I will ever really use them; I think I could give one as a gift or something and keep the other for.. I don’t know. I’m a big fan of WebOS though, and these phones are wifi equipped. I suppose they could be fancy MP3 players if nothing else; they have 7GB usable storage, and I don’t believe they have any expansion storage.

Here are the specs for the Palm Pixi Plus.

Offer is good for today only. I had not heard of the Daily Steals web site until today so I can’t vouch for them (yet). As usual I used a temporary credit card to make the purchase.

I retired my Palm Pre 1 while I wait for a Pre 3, and I must say since I’ve had the Pre in airplane mode the battery lasts forever, wow. I was really truly shocked. It could go all day and maybe drop 2-3% at the most. Battery life was at its lowest when I had my Pre integrated with Exchange; I’d have to charge it a couple of times a day. Thankfully the Touchstone made that easy.

February 8, 2011

New WebOS announcements tomorrow

Filed under: Events, Random Thought — Nate @ 9:11 pm

Looking forward myself to the new WebOS announcements coming from HP/Palm, which seem to be at about noon tomorrow. I’ve been using a Palm Pre for almost two years now I think, and recently the keyboard on it stopped working, so I’m hoping to see some good stuff announced tomorrow. Not sure what I will do – I don’t trust Google or Apple or Microsoft, so for smart phones it’s Palm and Blackberry. WebOS is a really nice software platform; from a user experience standpoint it’s quite polished. I’ve read a lot of complaints about the hardware from some folks, though until recently my experience has been pretty good. As an email device the Blackberry rocked, though I really don’t have to deal with much email (or SMS for that matter).

Maybe I’ll go back to a ‘feature phone’ and get a WebOS tablet, combined with my 3G/4G Mifi and use that as my web-connected portable device or something. My previous Sanyo phones worked really well. Not sure where I’m at with my Sprint contract for my phone, and Sprint no longer carries the Pre and doesn’t look like it will carry the Pre 2. I tried the Pixi when it first came out but the keyboard keys were too small for my fingers even when using the tips of my fingers.

I found a virtual keyboard app which lets me hobble along on my Pre in the meantime while I figure out what to do.

August 23, 2010

HP to the rescue

Filed under: Datacenter, Events, News, Storage — Nate @ 6:03 am

Knock knock.. HP is kicking down your back door 3PAR..

Well that’s more like it: HP offered $1.6 billion to acquire 3PAR this morning, topping Dell’s offer by 33%. Perhaps the 3cV solution can finally be fully backed by HP. More info from The Register here. And more info on what this could mean to HP and 3PAR products from the same source here.

3PAR’s website is having serious issues; this obviously has spawned a ton of interest in the company. I get intermittent blank pages and connection refused messages.

I didn’t wake my rep up for this one.

The 3cV solution was announced about three years ago –

Elements of the 3cV solution include:

  • 3PAR InServ® Storage Servers—highly virtualized, tiered-storage arrays built for utility computing. Organizations creating virtualized IT infrastructures for workload consolidation use InServ arrays to reduce the cost of allocated storage capacity, storage administration, and SAN infrastructure.
  • HP BladeSystem c-Class Server Blades—the leading blade server infrastructure on the market for datacenters of all sizes. HP BladeSystem c-Class server blades minimize energy and space requirements and increase administrative productivity through advantages in I/O virtualization, powering and cooling, and manageability.
  • VMware vSphere—the leading virtualization platform for industry-standard servers. VMware vSphere helps customers reduce capital and operating expenses, improve agility, ensure business continuity, strengthen security, and go green.

While I could not find the image that depicts the 3cV solution(not sure how long it’s been gone for), here is more info on it for posterity.

The Advantages of 3cV
3cV offers combined benefits that enable customers to manage and scale their server and storage environments simply, allowing them to halve server, storage and operational costs while lowering the environmental impact of the datacenter.

  • Reduces storage and server costs by 50%—The inherently modular architectures of the HP BladeSystem c-Class and the 3PAR InServ Storage Server—coupled with the increased utilization provided by VMware Infrastructure and 3PAR Thin Provisioning—allow 3cV customers to do more with less capital expenditure. As a result, customers are able to reduce overall storage and server costs by 50% or more. High levels of availability and disaster recovery can also be affordably extended to more applications through VMware Infrastructure and 3PAR thin copy technologies.
  • Cuts operational costs by 50% and increases business agility—With 3cV, customers are able to provision and change server and storage resources on demand. By using VMware Infrastructure’s capabilities for rapid server provisioning and the dynamic optimization provided by VMware VMotion and Distributed Resource Scheduler (DRS), HP Virtual Connect and Insight Control management software, and 3PAR Rapid Provisioning and Dynamic Optimization, customers are able to provision and re-provision physical servers, virtual hosts, and virtual arrays with tailored storage services in a matter of minutes, not days. These same technologies also improve operational simplicity, allowing overall server and storage administrative efficiency to increase by 3x or more.
  • Lowers environmental impact—With 3cV, customers are able to cut floor space and power requirements dramatically. Server floor space is minimized through server consolidation enabled by VMware Infrastructure (up to 70% savings) and HP BladeSystem density (up to 50% savings). Additional server power requirements are cut by 30% or more through the unique virtual power management capabilities of HP Thermal Logic technology. Storage floor space is reduced by the 3PAR InServ Storage Server, which delivers twice the capacity per floor tile as compared to alternatives. In addition, 3PAR thin technologies, Fast RAID 5, and wide striping allow customers to power and cool as much as 75% less disk capacity for a given project without sacrificing performance.
  • Delivers security through virtualization, not dedicated hardware silos—Whereas traditional datacenter architectures force tradeoffs between high resource utilization and the need for secure segregation of application resources for disparate user groups, 3cV resolves these competing needs through advanced virtualization. For instance, just as VMware Infrastructure securely isolates virtual machines on shared severs, 3PAR Virtual Domains provides secure “virtual arrays” for private, autonomous storage provisioning from a single, massively-parallel InServ Storage Server.

Though due to the recent stack wars it’s been hard for 3PAR to partner with HP to promote this solution since I’m sure HP would rather push their own full stack. Well hopefully now they can. The best of both worlds technology wise can come together.

More details from 3PAR’s VMware products site.

From HP’s offer letter

We propose to increase our offer to acquire all of 3PAR outstanding common stock to $24.00 per share in cash. This offer represents a 33.3% premium to Dell’s offer price and is a “Superior Proposal” as defined in your merger agreement with Dell. HP’s proposal is not subject to any financing contingency. HP’s Board of Directors has approved this proposal, which is not subject to any additional internal approvals. If approved by your Board of Directors, we expect the transaction would close by the end of the calendar year.

In addition to the compelling value offered by our proposal, there are unparalleled strategic benefits to be gained by combining these two organizations. HP is uniquely positioned to capitalize on 3PAR’s next-generation storage technology by utilizing our global reach and superior routes to market to deliver 3PAR’s products to customers around the world. Together, we will accelerate our ability to offer unmatched levels of performance, efficiency and scalability to customers deploying cloud or scale-out environments, helping drive new growth for both companies.
As a Silicon Valley-based company, we share 3PAR’s passion for innovation.
[..]

We understand that you will first need to communicate this proposal and your Board’s determinations to Dell, but we are prepared to execute the merger agreement immediately following your termination of the Dell merger agreement.

Music to my ears.

[tangent — begin]

My father worked for HP in the early days back when they were even more innovative than they are today, he recalled their first $50M revenue year. He retired from HP in the early 90s after something like 25-30 years.

I attended my freshman year at Palo Alto Senior High School, and one of my classmates/friends (actually I don’t think I shared any classes with him, now that I think about it) was Ben Hewlett, grandson of one of the founders of HP. Along with a couple of other friends, Ryan and Jon, we played a bunch of RPGs (I think the main one was Twilight 2000, something one of my other friends, Brian, introduced me to in 8th grade).

I remember asking Ben one day why he took Japanese as his second language course when it was significantly more difficult than Spanish(which was the easy route, probably still is?) I don’t think I’ll ever forget his answer. He said “because my father says it’s the business language of the future..”

How times have changed.. Now it seems everyone is busy teaching their children Chinese. I’m happy knowing English, and a touch of bash and perl.

I never managed to keep in touch with my friends from Palo Alto, after one short year there I moved back to Thailand for two more years of high school there.

[tangent — end]

HP could do some cool stuff with 3PAR; they have much better technology overall. I have no doubt HP has their eyes on their HDS partnership, and the possibility of replacing their XP line with 3PAR technology in the future has got to be pretty enticing. HDS hasn’t done a whole lot recently, and I read not long ago that regardless of what HP says, they don’t have much (if any) input into the HDS product line.

The HP USP-V OEM relationship is with Hitachi SSG. The Sun USP-V reseller deal was struck with HDS. Mikkelsen said: “HP became a USP-V OEM in 2004 when the USP-V was already done. HP had no input to the design and, despite what they say, very little input since.” HP has been a Hitachi OEM since 1999.

Another interesting tidbit of information from the same article:

It [HDS] cannot explain why it created the USP-V – because it didn’t, Hitachi SSG did, in Japan, and its deepest thinking and reasons for doing so are literally lost in translation.

The loss of HP as an OEM customer of HDS, so soon after losing Sun as an OEM customer, would be a really serious blow to HDS (one person I know claimed it accounts for ~50% of their business), who seem to have a difficult time selling stuff in western countries; I’ve read it’s mostly because of their culture. Similarly it seems Fujitsu has issues selling stuff, in the U.S. at least; they seem to have some good storage products but not much attention is paid to them outside of Asia (and maybe Europe). Will HDS end up like Fujitsu as a result of HP buying 3PAR? Not right away for sure, but longer term they stand to lose a ton of market share in my opinion.

And with the USP getting a little stale (rumor has it they are near to announcing a technology refresh for it), it would be good timing for HP to get 3PAR, to cash in on the upgrade cycle by getting customers to go with the T class arrays instead of the updated USP whenever possible.

I read on an HP blog earlier in the year an interesting comment –

The 3PAR is drastically less expensive than an XP, but is an active/active concurrent design, can scale up to 8 clustered controllers, highly virtualized, customers can self-install, self-maintain, and requires no professional services. Its on par with the XP in terms of raw performance, but has the ease of use of the EVA. Like the XP, the 3PAR can be carved up into virtual domains so that service providers or multi-tenant arrays can have delegated administration.

I still think 3PAR is worth more, and should stay independent, but given the current situation would much rather have them in the arms of HP than Dell.

Obviously those analysts that said Dell paid too much for 3PAR were wrong, and didn’t understand the value of the 3PAR technology. HP does otherwise they wouldn’t be offering 33% more cash.

After the collapse of so many of 3PAR’s NAS partners over the past couple of years, the possibility of having Ibrix available again for a longer term solution is pretty good. Dell bought Exanet’s IP earlier in the year. LSI owns Onstor, HP bought Polyserve and Ibrix. Really just about no “open” NAS players left. Isilon seems to be among the biggest NAS players left but of course their technology is tightly integrated into their disk drive systems, same with Panasas.

Maybe that recent legal investigation into the board at 3PAR had some merit after all.

Dell should take their $billion and shove it in Pillar’s(or was it Compellent ? I forgot) face, so the CEO there can make his dream of being a billion dollar storage company come true, if only for a short time.

I’m not a stock holder or anything, I don’t buy stocks(or bonds).
