TechOpsGuys.com Diggin' technology every day

November 9, 2010

New NetApp boxes

Filed under: Storage — Nate @ 8:19 pm

So it looks like NetApp launched some beefy new systems yesterday, though I have to say if I was a customer of theirs I would feel kind of cheated on the 3200 series systems, since they have stuck with dual core processors when quad core has been available forever. In the "world of Intel", in my eyes there's no excuse to release anything that's not at least quad core unless you're trying to squeeze your customers for every last bit (which I'm sure they are…).

Companies like NetApp could take a hint from someone like Citrix, who has a few Netscaler load balancers where the throughput is rate limited in software but you get the same hardware as the higher end boxes. Take the 17500 model rated for 20Gbps: you can software upgrade it to 50Gbps, more than double the throughput. But the point isn't the increased throughput via the software upgrade. The point is having the extra CPU horsepower on the smaller end box so that you can enable more CPU intensive features without incurring a noticeable performance hit, because you have so much headroom on the system CPU wise.
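Just to illustrate the general idea of capping throughput in software while shipping identical hardware, here is a minimal, generic token-bucket sketch (this is not how Citrix actually implements their licensing, just the textbook technique):

```python
import time

class TokenBucket:
    """Generic software rate limiter: cap throughput to rate_bps
    regardless of what the underlying hardware could actually push."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # bytes per second
        self.capacity = burst_bytes       # maximum burst size in bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # refill tokens based on elapsed time, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True                   # forward the packet
        return False                      # drop or queue it

# A "20Gbps" license is just a different constant; the hardware is the same.
limiter = TokenBucket(rate_bps=20e9, burst_bytes=2_000_000)
```

The point being: the cap is one number in software, so the vendor can leave plenty of CPU headroom in the box and still segment the product line.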

NetApp introduced compression as one of their new features (I think it's new, maybe I'm wrong). That is of course likely to be a fairly CPU intensive operation. If they had quad or hex core CPUs in there you could do a lot more, even if they limited your IOPS or throughput to X amount. Maybe they don't have a good way of artificially rate limiting.

But even without rate limiting, it costs them a trivial amount of money to put in quad core processors; they just want to squeeze their customers.

Even 3PAR put quad core processors in their F400 system more than a year ago. This is despite the Intel CPUs not doing much work on the 3PAR side, most of the work is done by their Gen3 ASIC. But they realize it’s a trivial cost to put in the beefier processor so they do it.

Their new 6200 series controllers do have quad core processors, among other improvements I'm sure. The previous 6000 series was quad socket. (In case you're wondering where I'm getting these processor stats from, it's from the SPEC disclosures.)

NetApp was fast to post both SPEC SFS results for their 3200 and 6200 series, as well as SPC-1E results for their 3200.

All in all very impressive results for SPEC SFS, and very efficient results for SPC-1, both heavily assisted by 1TB of their flash cache. Interestingly enough, at least on the SPC-1 side where full cost disclosures are available, the cost per usable TB and cost per IOP still don't match those of the F400, which has many more drives, is running RAID 1+0, and is more than a year old, so I would consider the F400 to be at a great disadvantage, yet it still wins out. SPC-1E isn't a full SPC-1 test though, it's more about power efficiency than raw performance. So time will tell if they do a "regular" SPC-1 test. Their SPC-1E IOPS is about the same as their 3170, and the 3270 has much faster CPUs, so I'd think it's pretty safe to say the controllers have capacity to go beyond 68,000 IOPS.

Nice upgrade for their customers in any case.

 

November 4, 2010

Chicken and the egg

Filed under: Random Thought,Storage,Virtualization — Nate @ 5:24 pm

Random thought time! I came across an interesting headline on Chuck's Blog – Attack of the Vblock Clones.

Now I'm the first to admit I didn't read the whole thing, but the basic gist is that if you want a fully tested integrated stack (of course you know I don't like these stacks, they restrict you too much; the point of open systems is that you can connect many different types of systems together and have them work, but anyways), then you should go with their Vblock because it's there now, and tested, deployed, etc. Other recently announced initiatives are responses to the Vblock and VCE, Arcadia (sp?) etc.

I've brought up 3cV before, something that 3PAR coined almost 3 years ago now, which is, in their words, a "Validated Blueprint of 3PAR, HP, and VMware Products Can Halve Costs and Floor Space".

And for those that don’t know what 3cV is, a brief recap –

The Elements of 3cV
3cV combines the following products from 3PAR, HP, and VMware to deliver the virtual data center:

  • 3PAR InServ Storage Server featuring Virtual Domains and thin technologies—The leading utility storage platform, the 3PAR InServ is a highly virtualized tiered-storage array built for utility computing. Organizations creating virtualized IT infrastructures for workload consolidation use the 3PAR InServ to reduce the cost of allocated storage capacity, storage administration, and the SAN infrastructure.
  • HP BladeSystem c-Class—The No. 1 blade infrastructure on the market for datacenters of all sizes, the HP BladeSystem c-Class minimizes energy and space requirements and increases administrative productivity through advantages in I/O virtualization, power and cooling, and manageability. (1)
  • VMware Infrastructure—Infrastructure virtualization suite for industry-standard servers. VMware Infrastructure delivers the production-proven efficiency, availability, and dynamic management needed to build the responsive data center.

Sounds to me like 3cV beat VBlock to the punch by quite a ways. It would have been interesting to see how Dell would have handled the 3cV solution had they managed to win the bidding war, given they don't have anything that competes effectively with c-Class. But fortunately HP won out, so 3cV can be just that much more official.

It's not sold as a pre-packaged product I guess you could say, but how hard is it to say: I need this much CPU, this much RAM, this much storage, HP go get it for me. Really, it's not hard. The hard part is all the testing and certification. Even if 3cV never existed you can bet your ass that it would work regardless. It's not that complicated, really. Even if Dell had managed to buy 3PAR and killed off the 3cV program because they wouldn't want to directly promote HP's products, you could still buy the 3PAR from Dell and the blades from HP and have it work. But of course you know that.

The only thing missing from 3cV is I’d like a more powerful networking stack, or at least sFlow support. I’ll take Flex10 (or Flexfabric) over Cisco any day of the week but I’d still like more.

I don’t know why this thought didn’t pop into my head until I read that headline, but it gave me something to write about.

But whatever, that’s my random thought of the day/week.

October 28, 2010

Compellent beats expectations

Filed under: News,Random Thought,Storage — Nate @ 11:10 am

Earlier in the year Compellent's stock price took a big hit following lowered expectations for sales, and a bunch of legal stuff followed that. It seems yesterday they redeemed themselves though, with their stock going up nearly 33% after they tripled their profits or something.

I've had my eye on Compellent for a couple of years now, though I don't remember where I first heard about them. They have similar technology to 3PAR, it's just implemented entirely in software using Intel CPUs as far as I know, vs 3PAR leveraging ASICs (3PAR has Intel CPUs too but they aren't used for too much).

I have heard field reports that because of this their performance is much more on the lower end of things. They have never published a SPC-1 result and I don't know anyone that uses them, so I don't know how they really perform.

They seem to use the same Xyratex enclosures that most everyone else uses. Compellent's controllers do seem to be somewhat on the low end of things, though I really have nothing to go on other than cache. With their high end controller coming in at only 3.5GB of cache (I assume 7GB mirrored for a pair of controllers?), it is very light on cache. The high end has a dual core 3.0GHz CPU.

The lower amount of cache combined with their software-only design and only two CPUs per controller and the complex automated data movement make me think the systems are built for the lower end and not as scalable, but I’m sure perfectly useful for the market they are in.

Would be nice to see how/if their software can scale if they were to put, say, a pair of 8 or 12 core CPUs in their controllers. After all, since they are leveraging x86 technology, performing such an upgrade should be pretty painless! Their controller specs have remained the same for a while now (as far back as I can remember). The bigger CPUs will use more power, but from a storage perspective I'm happy to give a few hundred more watts if I can get 5x+ the performance; I don't have to think once, let alone twice.

They were, I believe, the first to have automagic storage tiering, and for that they deserve big props, though again no performance numbers have been posted (that I am aware of) that illustrate the benefits this technology can bring to the table. I mean, if anybody can prove this strategy works it should be them, right? On paper it certainly sounds really nice, but in practice I don't know; I haven't seen indications that it's as ready as the marketing makes it out to be.

My biggest issue with automagic storage tiering is how fast the array can respond to "hot" blocks and optimize itself, which is why from a conceptual perspective I really like the EMC FAST Cache approach more (they do have FAST LUN and sub-LUN tiering too). Not that I have any interest in using EMC stuff, but they do have cool bits here and there.

Maybe Compellent is the next to get bought out (as a block storage company, yeah I know they have their zNAS). I believe from a technology standpoint they are in a stronger position than the likes of Pillar or Xiotech.

Anyway that’s my random thought of the day

October 10, 2010

Intel or ASIC

Filed under: Random Thought,Storage — Nate @ 11:33 am

Just another one of my random thoughts I have been having recently.

Chuck wrote a blog post not too long ago about how he believes everyone is going to go to Intel (or x86 at least) processors in their systems and move away from ASICs.

He illustrated his point by saying some recent SPEC SFS results showed an Intel based system outperforming everything else. The results were impressive; the only flaw in them is that costs are not disclosed for SPEC. An EMC VMAX with 96 EFDs isn't cheap. And the better your disk subsystem is, the faster your front end can be.

Back when Exanet was still around they showed me some results from one of their customers testing SPEC SFS on Exanet with LSI (IBM OEM'd) back end storage vs 3PAR storage, and for the same number of disks the SPEC SFS results were twice as high on 3PAR.

But that's not my point here, or my question. A couple of years ago NetApp posted some pretty dismal SPC-1 results for the EMC CX-3 with snapshots enabled. EMC doesn't do SPC-1, so NetApp did it for them. Interesting.

After writing up that Pillar article where I illustrated the massive efficiency gains of the 3PAR architecture (which are in part driven by their own custom ASICs), it got me thinking again, because as far as I can tell Pillar uses x86 CPUs.

Pillar offers multiple series of storage controllers to best meet the needs of your business and applications. The Axiom 600 Series 1 has dual-core processors and supports up to 24GB cache. The Axiom 600 Slammer Series 2 has quad-core processors and double the cache providing an increase in IOPS and throughput over the Slammer Series 1.

Now I can only assume they are using x86 processors, for all I know I suppose they could be using Power, or SPARC, but I doubt they are using ARM 🙂

Anyways, back to the 3PAR architecture and their micro RAID design. I have written in the past about how you can have tens to hundreds of thousands of mini RAID arrays on a 3PAR system depending on the amount of space that you have. This is, of course, to maximize distribution of data over all resources, to maximize performance and predictability. When running RAID 5 or RAID 6 there are of course parity calculations involved. I can't help but wonder what sort of chance in hell a bunch of x86 CPU cores have of calculating RAID in real time for 100,000+ RAID arrays. With 3 and 4TB drives not too far out, you can take that 100,000+ and make it 500,000.

Taking the 3PAR F400 SPC-1 results as an example, here is my estimate of the number of RAID arrays on the system (fortunately it's mirrored so the math is easier; a quick sketch of the arithmetic follows the list):

  • Usable capacity = 27,053 GB (27,702,272 MB)
  • Chunklet size = 256MB
  • Total Number of RAID-1 arrays = ~ 108,212
  • Total Number of data chunklets = ~216,424
  • Number of data chunklets per disk = ~563
  • Total data size per disk = ~144,128 MB (140.75 GB)
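A minimal back-of-the-envelope sketch of how I arrived at those numbers; the inputs are simply the figures from the list above, everything else is division:

```python
# Rough estimate of chunklet-level RAID-1 sets on the tested F400
usable_gb = 27_053             # SPC-1 usable capacity
chunklet_mb = 256              # 3PAR chunklet size
disks = 384                    # 146GB 15k spindles tested

usable_mb = usable_gb * 1024                         # 27,702,272 MB
raid1_sets = usable_mb // chunklet_mb                # ~108,212 mirrored pairs
data_chunklets = raid1_sets * 2                      # ~216,424 chunklets
chunklets_per_disk = data_chunklets // disks         # ~563 per spindle
data_per_disk_mb = chunklets_per_disk * chunklet_mb  # ~144,128 MB (~140.75 GB)

print(raid1_sets, data_chunklets, chunklets_per_disk, data_per_disk_mb)
```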

For legacy RAID designs it’s probably not a big deal, but as disk drives grow ever bigger I have no doubt that everyone will have to move to a distributed RAID architecture, to reduce disk rebuild times and lower the chances of a double/triple disk failure wiping out your data. It is unfortunate (for them) that Hitachi could not pull that off in their latest system.

3PAR does use Intel CPUs in their systems as well, though they aren't used too heavily; on the systems I have had, even at peak spindle load I never really saw CPU usage above 10%.

I think ASICs are here to stay for some time; on the low end you will be able to get by with generic CPU stuff, but on the higher end it will be worth the investment to do it in silicon.

Another place to look at generic CPUs vs ASICs is the networking space. Network devices are still heavily dominated by ASICs because generic CPUs just can't keep up. Now of course generic CPUs are used for what I guess could be called the control plane, but the heavy lifting is done by silicon. ASICs often draw a fraction of the power that generic CPUs do.

Yet another place to look at generic CPUs vs ASICs is the HPC space – the rise of GPU-assisted HPC has allowed systems to scale to what were (to me anyways) unimaginable heights.

Generic CPUs are of course great to have and they have come a long way, but there is a lot of cruft in them, so it all depends on what you're trying to accomplish.

The fastest NAS in the world is still BlueArc, which is powered by FPGAs, though their early cost structures put them out of reach for most folks. Their new mid range looks nice; my only long time complaint about them has been their back end storage – either LSI or HDS, take it or leave it. So I leave it.

The only SPEC SFS results posted by BlueArc are for the mid range, nothing for their high end (which they tested on the last version of SFS, nothing yet for the current version).

 

October 9, 2010

Capacity Utilization: Storage

Filed under: Storage — Nate @ 9:49 am

So I was browsing through that little drop down address bar in Firefox, hitting the sites I usually hit, and I decided hey, let's go look at what Pillar is doing. I've never used their stuff but I dig technology, you know, so I like to try to keep tabs on companies and products that I haven't used and may never consider using. It's good to see what the competition is up to, because you never know, they may come out with something good.

Tired of the thin argument

So the CEO of Pillar has a blog, and he went on a mini rant about how 3PAR^H^H^H^HHP is going around telling people you can get 100TB of capacity in one of their 70TB arrays. I haven't read too deeply into what the actual claim they are making is, but being so absolutely well versed in 3P..HP technology I can comment with confidence on what their strategy is and how they can achieve those results. Whether or not they are effective at communicating that is another story; I don't know, because, well, I don't read everything they say.

Pillar notes that HP is saying that thanks to the 3PAR technologies you can get by with less, and he's tired of hearing that old story over and over.

Forget about thin for the moment!

So let me spray paint another angle for everyone to see. As you know I do follow SPC-1 numbers pretty carefully. Again not that I really use them to make decisions, I just find the numbers and disclosure behind them very interesting and entertaining at times. It is “fun” to see what others can do with their stuff in a way that can be compared on a level playing field.

I wrote what I consider a good article on SPC-1 benchmarks a while back. EMC gave me some flak because they don't believe SPC-1 is a valid test, when I believe EMC just doesn't like the disclosure requirements, but I'm sure you won't ever hear EMC say that.

SPC-1 Results

So let's take the one and only number that Pillar has published, because, well, that's all I have to go on; I have no personal experience with their stuff, and I don't know anyone that uses it. So if this information is wrong, it's wrong because the results they submitted were wrong.

So, the Pillar Axiom 600's results have not stood the test of time well at all, as you would have noticed in my original article, but to highlight:

  • System tested: January 12, 2009
  • SPC-1 IOPS performance: 64,992
  • SPC-1 Usable space: 10TB
  • Disks used: 288 x 146G 15k RAID 1
  • IOPS per disk: 226 IOPS/disk
  • Average Latency at peak load: 20.92ms
  • Capacity Utilization (my own metric I just thought of): 34.72 GB/disk
  • Cost per usable TB (my own metric extrapolated from SPC-1): $57,097 per TB
  • Cost per IOP (SPC-1 publishes this): $8.79

The 3PAR F400, by contrast, was tested just 105 days later and absolutely destroyed the Pillar numbers, and unlike the Pillar numbers the F400 has held up very well against the test of time, all the way to the present day even (a short script reproducing the derived metrics for both systems follows this list):

  • System tested: April 27, 2009
  • SPC-1 IOPS performance: 93,050
  • SPC-1 Usable space: 27 TB
  • Disks used: 384 x 146G 15k RAID 1
  • IOPS per disk: 242 IOPS/disk
  • Average Latency at peak load: 8.85ms
  • Capacity Utilization: 70.432 GB/disk
  • Cost per usable TB: $20,312 per TB
  • Cost per IOP: $5.89
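For what it's worth, here is the little bit of arithmetic behind the derived metrics in both lists. The IOPS, usable capacity, disk counts and cost per IOP are the published SPC-1 figures; the total prices are back-calculated from cost per IOP, so treat the cost-per-TB outputs as approximations of the numbers above:

```python
def derived(name, iops, usable_tb, disks, cost_per_iop):
    total_cost = iops * cost_per_iop          # approximate total tested price
    print(f"{name}:")
    print(f"  IOPS per disk:      {iops / disks:8.0f}")
    print(f"  GB per disk:        {usable_tb * 1000 / disks:8.1f}")
    print(f"  Cost per usable TB: ${total_cost / usable_tb:,.0f}")
    print(f"  Cost per IOP:       ${cost_per_iop:.2f}")

derived("Pillar Axiom 600", iops=64_992, usable_tb=10, disks=288, cost_per_iop=8.79)
derived("3PAR F400",        iops=93_050, usable_tb=27, disks=384, cost_per_iop=5.89)
```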

Controller Capacity

Now in my original post I indicated stark differences in some configurations that tested substantially less physical disks than the controllers supported, there are a couple of possibilities I can think of for this:

  • The people running the test didn’t have enough disks to test (less likely)
  • The controllers on the system couldn’t scale beyond the configuration tested, so to illustrate the best bang for your $ they tested with the optimal number of spindles to maximize performance (more likely)

So in Pillar’s case I think the latter is the case as they tested with a pretty small fraction of what their system is advertised as being capable of supporting.

Efficiency

So taking that into account, the 3PAR gives you 27TB of usable capacity. Note that here we aren't even taking into account the thin technologies; just throw those out the window for a moment, let's simplify this.

The Pillar system gives you 10TB of usable capacity; the 3PAR system gives you about 170% more space and roughly 43% more performance, for less money.

What would a Pillar system look like (or systems, I guess I should say, since we need more than one) that could give us 27TB of usable capacity and 93,000 SPC-1 IOPS using 146G 15k RPM disks (again, trying to keep a level playing field here)?

Well, I can only really guess. To reach the same level of performance Pillar would need an extra 124 disks, so 412 spindles. Maintaining the same level of short stroking that they are doing (34.7GB/disk), those extra 124 spindles only get you to roughly 14.3TB.

And I'm assuming here, because of my earlier comments about the optimal number of disks to achieve performance, that if you wanted to get those extra 124 spindles in you would need a 2nd Axiom 600, and all the costs that come with the extra controllers and stuff. Controllers obviously carry a hefty premium over the disk drives. While the costs are published in Pillar's results, I don't want to spend the time trying to extrapolate that angle.

And if you do in fact need more controllers (the Pillar system was tested with two; the tested 3PAR F400 has four), 3PAR has another advantage completely unrelated to SPC-1: the ability to maintain performance levels under degraded conditions (controller failure, software upgrade, whatever) with Persistent Cache. Run your same SPC-1 test and yank a controller out from each system (3PAR and Pillar) and see what the results are. The numbers would be even more embarrassingly in 3PAR's favor thanks to their architecture and this key caching feature. Unlike most of 3PAR's feature add-ons, this one comes at no cost to the customer; the only requirement is you must have at least 4 controllers on the box.

So you still need to get to 27 TB of usable capacity. From here it can get really fuzzy, because you need to add enough spindles to get that high, but then you need to adjust the level of short stroking you're doing so you use more of the space per drive. It wouldn't surprise me if this wasn't even possible on the Pillar system (not sure if any system can do it really, but I don't know).

If Pillar can't adjust the amount of short stroking then the numbers are easy: at 34.7GB/disk they need 778 drives to get to 27TB of usable capacity, roughly double what 3PAR has.
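A quick sketch of that scaling math; these are straight-line extrapolations from the published Pillar numbers above, nothing more:

```python
pillar_iops_per_disk = 64_992 / 288      # ~226 IOPS per short-stroked spindle
pillar_gb_per_disk   = 10_000 / 288      # ~34.7 GB usable per spindle

target_iops = 93_050                     # what the F400 delivered
target_tb   = 27                         # usable TB the F400 delivered

disks_for_iops   = target_iops / pillar_iops_per_disk          # ~412 spindles
tb_at_that_count = disks_for_iops * pillar_gb_per_disk / 1000  # ~14.3 TB

disks_for_capacity = target_tb * 1000 / pillar_gb_per_disk     # ~778 spindles

print(round(disks_for_iops), round(tb_at_that_count, 1), round(disks_for_capacity))
```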

Of course a two-system Axiom 600 setup with 778 drives would likely outperform a 384-disk F400 (I should hope so at least), but you see where I'm going.

I'm sure Pillar engineers could come up with a way to configure the system more optimally; my 778 drive solution is crazy, but from a math perspective it's the easiest and quickest thing I could come up with, with the data I have available to me.

This is also a good illustration of why, when I go looking at what Xiotech posts, I really can't compare them against 3PAR or anybody else, because they only submit results for ~16 drive systems. To me, it is not valid to compare a ~16 drive system to something that has hundreds of drives and try to extrapolate results. Xiotech really does give stellar results as far as capacity utilization and IOPS/disk and stuff, but they haven't yet demonstrated that those numbers are scalable beyond a single ISE enclosure, let alone to several hundred disks.

I also believe the 3PAR T800 results could be better too; the person at 3PAR who was responsible for running the test was new to the company at the time, and the way he laid out the system was odd, to say the least. The commands he used were even deprecated. But 3PAR isn't all that interested in re-testing; they're still the record holder for spinning rust in a single system (more than two years running now, no doubt!).

Better response times to boot

You can see the 3PAR system performs with less than half the latency of the Pillar system, despite the Pillar system short stroking its disks. Distributed RAID with a full mesh architecture at work, baby. I didn't even mention it, but the Pillar system has double the cache of the F400. I mean, the comparison really almost isn't fair.

I'm sure Pillar has bigger and better things out now since they released the SPC-1 numbers for the Axiom, so this post comes with the obvious caveat that I am reporting based on what is published. They'd need to pull more than a rabbit out of a hat to make up these massive gaps though, I think.

Another Angle

We could look at this another way as well. Assuming for simplicity's sake for a moment that both systems scale linearly up or down, we can configure a 3PAR F400 with the same performance specs as the Pillar that was tested.

You'd need 268 disks on the 3PAR F400 to match the performance of the Pillar system. With those 268 disks you'd get 18.4 TB of usable space: same performance, fewer disks, and 84% more usable capacity. And scaling the cost down like we scaled the performance down, the cost would drop to roughly $374,000, a full $200,000 less than Pillar for the same performance and more space.
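And the reverse scaling, again just a straight linear extrapolation from the published F400 numbers (the cost figure is derived from the published cost per usable TB, so it's a rough number):

```python
f400_iops_per_disk = 93_050 / 384        # ~242 IOPS per spindle
f400_gb_per_disk   = 27_053 / 384        # ~70.4 GB usable per spindle
f400_cost_per_tb   = 20_312              # published cost per usable TB

target_iops = 64_992                     # match the tested Pillar Axiom 600

disks_needed = round(target_iops / f400_iops_per_disk)  # ~268 spindles
usable_tb    = disks_needed * f400_gb_per_disk / 1024   # ~18.4 TB (same rounding as above)
approx_cost  = usable_tb * f400_cost_per_tb             # roughly $374,000

print(disks_needed, round(usable_tb, 1), round(approx_cost))
```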

Conclusion

So hopefully this answers, with more clarity, why you can buy less storage with the 3PAR F400 and still get the same or better performance and usable capacity than going with a Pillar Axiom 600. At the end of the day 3PAR drives higher capacity utilization and delivers superior results for significantly fewer greenbacks. And I didn't even take 3PAR's thin technologies into account; the math there can become even more fuzzy depending on the customer's actual requirements and how well they can leverage the built-in thin capabilities.

You may be able to understand why HP was willing to go to the ends of the earth to acquire 3PAR technology. And you may be able to understand why I am so drawn to that very same technology. And here I'm just talking about performance, something that, unlike other things (ease of use etc.), is really easy to put hard numbers on.

The numbers are pretty simple to understand, and you can see why the big cheese at HP responsible for spearheading the 3PAR purchase said:

The thin provisioning, automatic storage tiering, multi-tenancy, shared-memory architecture, and built-in workload management and load balancing in the 3PAR arrays are years ahead of the competition, according to Donatelli, and therefore justify the $2.4bn the company paid to acquire 3PAR in a bidding war with rival Dell.

Maybe if I'm lucky I can trigger interest from The Register again by starting a blog war or something and make tech news! Woohoo! That would be cool. Of course, now that I've said that it probably won't happen.

I'm told by people who know the Pillar CEO that he is "raw", much like me, so it will be interesting to see the response. I think the best thing they can do is post new SPC-1 numbers with whatever their latest technology is, preferably on 146G 15k disks!

Side note

It was in fact my 3PAR rep that inspired me to write about this SPC-1 stuff. I was having a conversation with him earlier in the year where he didn't think the F400 was as competitive against the HDS AMS2500 as he felt it needed to be. I pointed out to him that despite the AMS2500 having similar SPC-1 IOPS and similar cost, the F400 offered almost twice the usable capacity, and the cost per usable TB was far higher on the 2500. He hadn't realized this; I did see that angle, so I felt the need to illustrate it. Hence my cost per SPC-1 usable TB. It's not a performance metric, but in my opinion from a cost perspective it's a very essential metric, at least for highly efficient systems.

(In case it wasn't obvious, I am by no means compensated by 3PAR in any way for anything I write. I have a deep passion for technology, they have some really amazing technology, and they make it easy to use and cost effective to boot.)

October 8, 2010

How inefficient can you get?

Filed under: Storage — Nate @ 8:29 pm

[The page says the system was tested in January 2010, so it's not recent, but I don't recall seeing it on the site before now; in any case it's still crazy.]

I was about to put my laptop down when I decided hey let’s go over to SPEC and see if there are any new NFS results posted.

So I did, you know me I am into that sort of stuff. I’m not a fan of NFS but for some reason the SPEC results still interest me.

So I go and see that NEC has posted some results. NEC isn't a very well known server or even IT supplier in the U.S., at least as far as I know. I'm sure they have decent market share over in Asia or something.

But anyways they posted some results, and I have to say I’m shocked. Either there is a glaring typo or that is just the worst NAS setup on the face of the planet.

It all comes down to usable capacity. I don't know how you can pull this off but they did – they apparently have 284 300GB disks on the system but only 6.1 TB of usable space! That is roughly 83TB of raw storage, and they only manage to get something like 7% capacity utilization out of the thing?
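The quick math, taking the numbers on the disclosure at face value:

```python
disks, disk_gb = 284, 300
exported_gb = 6226.5                  # "Total Exported Capacity" from the disclosure

raw_gb = disks * disk_gb              # 85,200 GB, roughly 83 TiB
utilization = exported_gb / raw_gb    # ~0.073, i.e. about 7%

print(raw_gb, f"{utilization:.1%}")
```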

Why even bother with disks at all if you're going to do that? Just go with a few SSDs.

But WAIT! .. WAIT! It gets better. That 6.1 TB of space is spread across — wait for it — 24 file systems.

12 filesystems were created and used per node. One of 24 filesystems consisted of 8 disks which were divided into two 4-disk RAID 1+0 pools, and each of the other 23 filesystems consisted of 12 disks which were divided into two 6-disk RAID 1+0 pools. There were 6 Disk Array Controllers. One Disk Array Controller controlled 47 disks, and each one of the other 5 controlled 48 disks.

I mean the only thing I can hope for is that the usable capacity is in fact a big typo.

Total Exported Capacity 6226.5GB

But if it’s not I have to hand it to them for being brave enough to post such terrible results. That really takes some guts.

 

September 27, 2010

Bye Bye 3PAR, Hello HP!

Filed under: News,Storage — Nate @ 2:14 pm

Wow, that was fast! HP completed its purchase of 3PAR this morning.

HP today announced that it has completed the acquisition of 3PAR Inc., a leading global provider of utility storage, for a price of $33 per share in cash, or an enterprise value of $2.35 billion.

3PAR technologies expand HP’s storage portfolio into enterprise-class public and private cloud computing environments, which are key growth markets for HP. Complementary with HP’s current storage portfolio, 3PAR brings market-differentiating technology to HP that will enable clients to maximize storage utilization, balance workloads and automate storage tiering. This allows clients to improve productivity and more efficiently operate their storage networks.

With a worldwide sales and channel network, coupled with extensive service operations, HP is uniquely positioned to rapidly expand 3PAR’s market opportunity. As part of the HP Converged Infrastructure portfolio, which integrates servers, storage, networking and management technologies, 3PAR solutions will further strengthen HP’s ability to simplify data center environments for clients.

Further details on product integration will be announced at a later date.

Certainly not messing around!

September 26, 2010

Still waiting for Xiotech..

Filed under: Random Thought,Storage — Nate @ 2:55 pm

So I was browsing the SPC-1 pages again to see if there was anything new and lo and behold, Xiotech posted some new numbers.

But once again, they appear too timid to release numbers for their 7000 series, or the 9000 series that came out somewhat recently. Instead they prefer to extrapolate performance from their individual boxes and aggregate the results. That doesn't count, of course; performance can be radically different at higher scale.

Why do I mention this? Well, nearly a year ago their CEO blogged in response to one of my posts, and that was one of the first times I made news in The Register (yay! – I really was excited), and in part the CEO said:

Responding to the Techopsguy blog view that 3PAR’s T800 outperforms an Emprise 7000, the Xiotech writer claims that Xiotech has tested “a large Emprise 7000 configuration” on what seems to be the SPC-1 benchmark; “Those results are not published yet, but we can say with certainty that the results are superior to the array mentioned in the blog (3PAR T800) in several terms: $/IOP, IOPS/disk and IOPS/controller node, amongst others.”

So here we are almost a year later, and more than one SPC-1 result later, and still no sign of Xiotech’s SPC-1 numbers for their higher end units. I’m sorry but I can’t help but feel they are hiding something.

If I were them I would put my customers more at ease by publishing said numbers, and be prepared to justify the results if they don’t match up to Xiotech’s extrapolated numbers from the 5000 series.

Maybe they are worried they might end up like Pillar, whose CEO was pretty happy with their SPC-1 results. Shortly afterwards the 3PAR F400 launched and absolutely destroyed the Pillar numbers from every angle. You can see more info on those results here.

At the end of the day I don’t care of course, it just was a thought in my head and gave me something to write about 🙂

I just noticed that these past two posts put me over the top as far as the most posts I have done in a month since this TechOpsGuys thing started. I'm glad I have my friends Dave, Jake and Tycen generating tons of content too; after all, this site was their idea!

September 16, 2010

Fusion IO now with VMware support

Filed under: Storage,Virtualization — Nate @ 8:58 am

About damn time! I read earlier in the year on their forums that they were planning on ESX support for their next release of code, originally expected sometime in March/April or so. But that time came and went and I saw no new updates.

I saw that Fusion IO put on a pretty impressive VDI demonstration at VMworld, so I figured they must have VMware support now, and of course they do.

I would be very interested to see how performance could be boosted and VM density increased by leveraging local Fusion IO storage for swap in ESX. I know of a few 3PAR customers that say they get double the VM density per host vs other storage because of the better I/O they get from 3PAR, though of course Fusion IO is quite a bit snappier.

With VMware's ability to set swap file locations on a per-host basis, it's pretty easy to configure. In order to take advantage of it, though, you'd have to disable memory ballooning in the guests, I think, in order to force the host to swap. I don't think I would go so far as to put individual swap partitions on the local Fusion IO for the guests to swap to directly, at least not when I'm using a shared storage system.

I just checked again, and as far as I can tell, from a blade perspective at least, the only player offering Fusion IO modules for their blades is still the HP c-Class, in the form of their IO Accelerator. With up to two expansion slots on the half width blades and three on the full width blades, there's plenty of room for the 80 or 160 GB SLC models or the 320GB MLC model. And if you were really crazy I guess you could use the "standard" Fusion IO cards with the blades by using the PCI Express expansion module, though that seems more geared towards video cards as upcoming VDI technologies leverage hardware GPU acceleration.

HP’s Fusion IO-based I/O Accelerator

Fusion IO claims to be able to write 5TB per day for 24 years; even if you cut that to 2TB per day for 5 years, it's quite an amazing claim.
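Just to put that endurance claim in perspective, here is the simple arithmetic on those two figures (my own calculation, not Fusion IO's math):

```python
def lifetime_writes_pb(tb_per_day, years):
    """Total petabytes written over the device's life at a constant daily write rate."""
    return tb_per_day * 365 * years / 1000

print(lifetime_writes_pb(5, 24))   # ~43.8 PB for the claimed rating
print(lifetime_writes_pb(2, 5))    # ~3.7 PB even in the heavily discounted case
```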

From what I have seen (I can't speak from personal experience just yet), the biggest advantage Fusion IO has over more traditional SSDs is write performance; of course, to get optimal write performance on the system you do need to sacrifice space.

Unlike drive form factor devices, the ioDrive can be tuned to achieve a higher steady-state write performance than what it is shipped with from the factory.

September 12, 2010

Crazy Seagate Statistics

Filed under: Storage — Nate @ 1:09 pm

It's been a while since I clicked on their blog, but I just did, and as the most current entry says, these are pretty eye popping:

  • A drive’s recording head hovers above the disks at a height of 100 atoms, 100 times thinner than a piece of paper
  • Seagate clean rooms are 100 times cleaner than a hospital operating room
  • Seagate can analyze over 1.5 Million drives at a time
  • Seagate builds 6 hard drives, hybrid drives, and solid state drives every second
  • Every single drive travels through over 1000 manufacturing steps

[Begin First Tangent –]

If you're using a Seagate SATA disk, do yourself a favor and don't let the temperature of the drive drop below 20 degrees Celsius 🙂

I read an interesting article recently on the various revenue numbers of the big drive manufacturers, and the numbers were surprising to me.

Hitachi GST had revenues of $4.8bn in 2009.
[..]
Seagate’s fiscal 2010 revenue of $11.4bn
[..]
Western Digital’s latest annual revenue of $9.8bn

I really had no idea Western Digital was so big! After all, since they do not (and I'm not sure if they ever did) participate in the SCSI / Fibre Channel / SAS arena, that leaves them out of the enterprise space for the most part (I never really saw their Raptor line of drives get adopted, too bad!). Of course "Enterprise SATA" has taken off quite a bit in recent years, but I would think that would still pale in comparison to enterprise SAS/SCSI/FC. But maybe not, I don't know, I haven't looked into the details.

I thought Hitachi was a lot bigger, especially since Hitachi bought the disk division from IBM way back when. I used to be a die hard fan of IBM disks, up until the 75GXP fiasco. I'm still wary of them even now. I still have a CDROM filled with "confidential" information with regards to the class action suit that I played a brief part in (the judge kicked me out because he wanted to consolidate the people in the suit to folks in California). Very interesting stuff, not that I remember much of it; I haven't looked at it in years.

The 75GXP was the only drive where I've ever suffered a "double disk failure" before I could get a replacement in. It only happened once. My company had 3 "backup" servers, one at each office site. Each one had, I think, 5 x 100GB disks (or was it another size? this was back in 2001), RAID 5, connected to a 3Ware 7000-series controller. One Friday afternoon one of the disks in my local office failed, so I called to get an RMA; about 2 hours later another disk failed in a remote office, so I called to get that one RMA'd too. The next day the replacement disk for my local server arrived, but it was essentially DOA from what I recall, so the system kept running in degraded mode (come on, how many people's servers in 2001 had hot spares? That's what I thought). There was nobody in the office for the other server in degraded mode, so its drive was set to arrive on Monday to be replaced. On Sunday that same weekend a 2nd disk in the remote server failed, killing the RAID array of course. In the end that particular case wasn't a big deal, it was a backup server after all, and everything on the disk was duplicated at least once to another site. But it was still a pain. If memory serves I had a good 15-20 75GXP disks fail over the period of a year or so (both home and work), all of them what I would consider low duty cycle, hardly being stressed that much. In all cases the data lost wasn't a big deal; it was more of a big deal to be re-installing the systems, which took more time than anything else. Especially the Solaris systems..

[End First Tangent –]
[Begin Second Tangent — ]

One thing that brings back fond childhood memories related to Seagate is where they are based out of – Scotts Valley, California. I wouldn't consider it to be in Silicon Valley itself, but it is about as close as you can get. I spent a lot of time in Scotts Valley as a kid; I grew up in Boulder Creek, California (up until I was about 12 anyways), which is about 10 miles from Scotts Valley. I considered it (and probably still do) the first "big town" near home, where it had things like movie theaters and arcades. I didn't find out Seagate was based there until a few years ago, but for some reason it makes me proud(?) for such a big giant to be located in such a tiny town so close to what I consider home.

[End Second Tangent –]

