TechOpsGuys.com Diggin' technology every day

September 16, 2010

Fusion IO now with VMware support

Filed under: Storage, Virtualization — Nate @ 8:58 am

About damn time! I read earlier in the year on their forums that they were planning ESX support for their next release of code, originally expected sometime in March/April or something. But that time came and went, and I saw no new updates.

I saw that Fusion IO put on a pretty impressive VDI demonstration at VMworld, so I figured they must have VMware support now, and of course they do.

I would be very interested to see how performance could be boosted and VM density increased by leveraging local Fusion IO storage for swap in ESX. I know of a few 3PAR customers who say they get double the VM density per host vs other storage because of the better I/O they get from 3PAR, though of course Fusion IO is quite a bit snappier.

With VMware’s ability to set swap file locations on a per-host basis, it’s pretty easy to configure. To actually take advantage of it, though, I think you’d have to disable memory ballooning in the guests in order to force the host to swap. I don’t think I would go so far as to put individual swap partitions on the local Fusion IO for the guests to swap to directly, at least not when I’m using a shared storage system.
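For the curious, here is roughly what pointing a host at a local swap datastore looks like through the vSphere API using pyVmomi. This is a minimal sketch, assuming a local datastore named "local-fusionio" and hypothetical hostnames/credentials, and assuming the HostSystem UpdateLocalSwapDatastore call behaves as documented; the cluster’s swapfile policy also has to allow host-local placement.

# Minimal sketch: place VM swapfiles on a host-local (e.g. Fusion IO)
# datastore via pyVmomi. All names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab use only, skips cert checks
si = SmartConnect(host="vcenter.example.com", user="admin",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Find the host and its local flash-backed datastore.
host = content.searchIndex.FindByDnsName(None, "esx01.example.com", False)
swap_ds = next(ds for ds in host.datastore if ds.name == "local-fusionio")

host.UpdateLocalSwapDatastore(swap_ds)  # host now places swapfiles locally
Disconnect(si)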

I just checked again, and as far as I can tell the HP c-Class is still the only blade platform offering Fusion IO modules, in the form of their IO Accelerator. With up to two expansion slots on the half-width blades and three on the full-width blades, there’s plenty of room for the 80 or 160 GB SLC models or the 320 GB MLC model. And if you were really crazy I guess you could use the “standard” Fusion IO cards with the blades via the PCI Express expansion module, though that seems more geared towards video cards, as upcoming VDI technologies leverage hardware GPU acceleration.

HP’s Fusion IO-based I/O Accelerator

FusionIO claims the cards can handle writing 5TB per day for 24 years; even if you cut that to 2TB per day for 5 years, it’s quite an amazing claim.
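The back-of-the-envelope totals behind that claim (my arithmetic, not Fusion IO’s datasheet):

# Rough lifetime-write totals implied by the endurance claims above.
for tb_per_day, years in ((5, 24), (2, 5)):
    total_pb = tb_per_day * 365 * years / 1000
    print(f"{tb_per_day} TB/day for {years} years = ~{total_pb:.1f} PB written")
# -> 5 TB/day for 24 years = ~43.8 PB written
# -> 2 TB/day for 5 years = ~3.6 PB written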

From what I have seen (I can’t speak from personal experience just yet), the biggest advantage Fusion IO has over more traditional SSDs is write performance, though of course to get optimal write performance you do need to sacrifice space.

Unlike drive form factor devices, the ioDrive can be tuned to achieve a higher steady-state write performance than what it is shipped with from the factory.
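My understanding is that this tuning is essentially over-provisioning: format the card below its advertised capacity and the controller gains more spare area to absorb writes. A sketch of the trade-off with made-up numbers (the raw NAND figure is an assumption, not a Fusion IO spec):

# Illustrative only: formatting a card smaller grows the spare area the
# controller can use for garbage collection, which is what lifts
# steady-state write performance. The raw capacity here is hypothetical.
raw_gb = 176  # assumed raw NAND behind a "160 GB" card
for formatted_gb in (160, 140, 120):
    spare = (raw_gb - formatted_gb) / raw_gb
    print(f"format to {formatted_gb} GB -> {spare:.0%} spare area")
# format to 160 GB -> 9% spare area
# format to 140 GB -> 20% spare area
# format to 120 GB -> 32% spare area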

June 19, 2010

40 Million IOPS in two racks

Filed under: Storage — Nate @ 7:43 am

Fusion IO does it again: another astonishing level of performance in such an efficient design. From the case study:

LLNL used Fusion’s ioMemory technology to create the world’s highest performance storage array. Using Fusion’s ioSANs and ioDrive Duos, the cluster achieves an unprecedented 40,800,000 IOPS and 320GB/s aggregate bandwidth.
Incredibly, Fusion’s ioMemory allowed LLNL to accomplish this feat in just two racks of appliances – something that would take a comparable hard disk-based solution over 43 racks. In fact, it would take over 100 of the SPC-1 benchmark’s leading all-flash vendor systems combined to match the performance, at a cost of over $300 million.

At ~250 IOPS per 15K RPM disk, 40 million IOPS means you’re talking about 160,000 disk drives.
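For those following along, the exact quoted figure works out a bit higher than my rounded number:

# 15K RPM drive equivalence for the LLNL numbers quoted above.
iops_total = 40_800_000
iops_per_disk = 250  # typical high-end 15K RPM drive
print(f"{iops_total / iops_per_disk:,.0f} drives")  # -> 163,200 drives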

Not all flash is created equal, of course; many people don’t understand that. They just see “ooh, this one is cheap, this one is not,” without having any clue (shocker).

It’s just flat out irresponsible to ignore such an industry-changing technology, especially for workloads that deal with small (sub-TB) amounts of data.

April 2, 2010

Grid Iron decloaks

Filed under: News, Storage — Nate @ 10:30 am

Grid Iron Systems seems to have left stealth mode somewhat recently; they are another startup that makes an accelerator appliance that sits in between your storage and your server(s). Kind of like what Avere does on the NAS side, Grid Iron does on the SAN side with their “TurboCharger”.

Certainly looks like an interesting product, but it appears they make it “safe” by having it cache only reads. I want an SSD system that can cache writes too! (Yes, I know that wears the SSDs out faster I’m sure, but just do warranty replacement.) I look forward to seeing some SPC-1 numbers on how Grid Iron can accelerate systems, and at the same time I look forward to SPC-1 numbers on how automatic storage tiering can accelerate systems as well.
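To illustrate the read-only distinction, here’s a toy read-through cache in Python; this is my own sketch of the general approach, not anything based on Grid Iron’s implementation:

# Toy read-through cache: reads populate the cache, writes bypass it
# (write-through to the backend, invalidating any cached copy). This is
# the "safe" read-only caching style described above, nothing more.
class ReadCache:
    def __init__(self, backend, capacity=1024):
        self.backend = backend      # dict-like block store (the "array")
        self.capacity = capacity
        self.cache = {}

    def read(self, block):
        if block not in self.cache:
            if len(self.cache) >= self.capacity:
                self.cache.pop(next(iter(self.cache)))  # crude FIFO eviction
            self.cache[block] = self.backend[block]     # miss: fetch from array
        return self.cache[block]

    def write(self, block, data):
        self.backend[block] = data  # writes go straight to the array
        self.cache.pop(block, None) # invalidate so reads stay consistent

backend = {0: b"old"}
c = ReadCache(backend)
c.read(0)           # miss, now cached
c.write(0, b"new")  # hits the backend directly, drops the stale copy
assert c.read(0) == b"new"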

I’d also be interested in seeing how Grid Iron can accelerate NetApp systems vs using NetApp’s own read-only PAM (since Grid Iron specifically mentions NetApp in their NAS accelerator, although yes I’m sure they just used NetApp as an example).

December 8, 2009

Fusion IO throughput benchmarks

Filed under: Storage — Nate @ 4:51 pm

I don’t visit the MySQL performance blog too often, but today I happened to run across a very interesting post here comparing a Fusion IO card to an 8-disk 15K RPM RAID 1+0 array. Myself, I’ve been interested in Fusion IO since I first heard about it; very interesting technology, though I have not used it personally yet.

The most interesting numbers to me were the comparatively poor sequential write performance vs random write performance on the same card. Random write was upwards of 3 times faster.
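If you want a crude feel for the sequential vs random gap on your own hardware, something like this works; note it goes through the page cache (no O_DIRECT), so it is nowhere near the rigor of the benchmarks in that post:

# Crude sequential- vs random-write timing over a 256 MB file.
# Buffered I/O, so this only hints at device behavior.
import os, random, time

PATH, BLOCK, COUNT = "testfile.bin", 4096, 65536  # 256 MB total
buf = os.urandom(BLOCK)

def run(offsets):
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT)
    t0 = time.time()
    for off in offsets:
        os.pwrite(fd, buf, off)   # write one block at the given offset
    os.fsync(fd)                  # flush dirty pages before timing stops
    os.close(fd)
    return COUNT * BLOCK / (time.time() - t0) / 1e6  # MB/s

seq = [i * BLOCK for i in range(COUNT)]
rnd = random.sample(seq, COUNT)   # same offsets, shuffled
print(f"sequential: {run(seq):.0f} MB/s, random: {run(rnd):.0f} MB/s")
os.unlink(PATH)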

November 18, 2009

Xiotech goes SSD

Filed under: Storage — Nate @ 8:00 am

Just thought it was kind of funny timing. Xiotech came to my company a few weeks ago touting their ISE systems and the raw IOPS they can deliver (apparently they do something special with the SCSI protocol that gets them 25% more IOPS than you can normally get). I asked them about SSD and they knocked it, saying it wasn’t reliable enough for them (compared to their self-healing ISEs).

Well apparently that wasn’t the case, because it seems they might be using STEC SSDs in the near future, according to The Register. What? No Seagate? As you may or may not be aware, Xiotech’s unique features come from extremely tight integration with the disk drives, something they can only achieve by using a single brand, which is Seagate (who helped create the technology and later spun it out into Xiotech). Again, The Register has a great background on Xiotech and their technology.

My own take on their technology is that it certainly looks interesting; their Emprise 5000 looks like a great little box as a standalone unit. It scales down extremely well. I’m not as convinced about how well it can scale up with the Emprise 7000 controllers, though; they tried to extrapolate SPC-1 numbers from a single ISE 5000 to the same number of drives as a 3PAR T800, which I believe still holds the SPC-1 record, at least for spinning disks anyways. Myself, I’d like to see them actually test a high-end 64-node ISE 7000 system for SPC-1 and show the results.

If you’re an MS shop you might appreciate Xiotech’s ability to integrate with MS Excel; as a Linux user myself I did not, of course. I prefer something like perl. Funny that they said their first-generation products integrated with perl, but their current ones do not at the moment.

This sort of about-face on SSD in such a short time frame reminds me of when NetApp put out a press release touting their de-duplication technology as being the best for customers, only to come out a week later and say they were trying to buy Data Domain because Data Domain had better de-duplication technology. I mean, I would have expected Xiotech to say something along the lines of “we’re working on it” or something. Perhaps the STEC news was an unintentional leak, or maybe their regional sales staff here was just not informed or something.

November 17, 2009

Three thousand drives in the palm of your hand

Filed under: Storage — Nate @ 2:59 pm

I was poking around again and came across a new product from Fusion IO which looked really cool. Their new ioDrive Octal packs 800,000 IOPS onto a single card with 6 Gigabytes/second of sustained bandwidth. To put this in perspective, a typical high-end 15,000 RPM SAS/Fibre Channel disk drive can do about 250 IOPS; as far as I/O goes, this one card is roughly the same as 3,200 drives. The densest high-performance storage I know of is 3PAR, who can pack 320 15,000 RPM drives into a rack in their S-class and T-class systems (others can do high-density SATA; I’m not personally aware of others that can do high-density 15,000 RPM drives for online data processing).

But anyways, in 3PAR’s case that is 10 racks of drives, plus three more racks for disk controllers (24 controllers), roughly 25,000 pounds of equipment (performance-wise) in the palm of your hand with the ioDrive Octal. Most other storage arrays top out at between 200 and 225 disks per rack.
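The equivalence spelled out, using the figures from the paragraphs above:

# Drive and rack equivalence for one ioDrive Octal.
card_iops = 800_000
disk_iops = 250               # per 15K RPM drive
drives = card_iops // disk_iops
racks = drives / 320          # 3PAR density: 320 drives per rack
print(f"{drives:,} drives, {racks:.0f} racks of spindles")
# -> 3,200 drives, 10 racks of spindles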

The Fusion IO solutions aren’t for everyone, of course; they are targeted mostly at specialized applications with smaller data sets that require massive amounts of I/O, or those that are able to distribute their applications amongst several systems using the PCIe cards.
