TechOpsGuys.com Diggin' technology every day

August 24, 2010

EMC and IBM’s Thick chunks for automagic storage tiering

Filed under: Storage, Virtualization — Nate @ 12:59 pm

If you recall, not long ago IBM released some SPC-1 numbers with their automagic storage tiering technology, Easy Tier. It was noted that they are using 1GB blocks of data to move between the tiers. To me that seemed like a lot.

Well, EMC has announced the availability of FAST v2 (aka sub-volume automagic storage tiering), and according to our friends at The Register they too are using 1GB blocks of data to move between tiers.

Still seems like a lot. I was pretty happy when 3PAR said they use 128MB blocks, which is half the size of their chunklets. When I first heard of this sub-LUN tiering I thought to myself that you might want a block size as small as, I don't know, 8-16MB. At the time 128MB still seemed kind of big (before I had learned of IBM's 1GB size).

Just think of how much time it takes to read 1GB of data off a SATA disk (since the big target for automagic storage tiering seems to be SATA + SSD).
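To put rough numbers on that, here is a quick back-of-the-envelope sketch in Python. The IOPS, transfer-size and throughput figures are my own assumptions for a generic 7200RPM SATA drive, not anything published by EMC or IBM:

    # Rough cost of pulling one 1GB tier-migration chunk off a 7200RPM SATA drive.
    # All figures below are illustrative assumptions, not vendor numbers.

    CHUNK_BYTES = 1 * 1024**3        # 1GB migration chunk
    RANDOM_IOPS = 80                 # ballpark random IOPS for a 7200RPM SATA drive
    IO_SIZE_BYTES = 64 * 1024        # assumed 64kB back-end transfer size
    SEQ_MB_PER_SEC = 100             # ballpark sequential throughput

    ios_needed = CHUNK_BYTES / IO_SIZE_BYTES
    random_minutes = ios_needed / RANDOM_IOPS / 60
    sequential_seconds = (CHUNK_BYTES / 1024**2) / SEQ_MB_PER_SEC

    print(f"I/Os per 1GB chunk at 64kB each: {ios_needed:,.0f}")
    print(f"Random reads at {RANDOM_IOPS} IOPS:       ~{random_minutes:.1f} minutes")
    print(f"Best case, purely sequential:    ~{sequential_seconds:.0f} seconds")

With those (rough) numbers, even a purely sequential read ties up a SATA spindle for about ten seconds per chunk, and migration I/O on a busy array won't be purely sequential.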

Anyone know what size Compellent uses for automagic storage tiering?

5 Comments

  1. I believe that Compellent uses 2MB blocks

    Comment by rgt — August 24, 2010 @ 3:44 pm

  2. Cool! That is amazingly efficient. Thanks for the info.

    Comment by Nate — August 24, 2010 @ 3:51 pm

  3. […] IBM moves data around in 1GB chunks […]

    Pingback by Compellent gets Hyper efficient storage tiering « TechOpsGuys.com — March 2, 2011 @ 9:24 am

  4. Nate, question about the 1 GB chunk

    Let’s say I have a 10 GB LUN and it’s 50% full of 1 MB text files.

    Let’s say I only read two of the text files, stored on completely different physical disks comprising the LUN, and I access them repeatedly. Does the EMC grab those two 1 MB files plus the physically adjacent 1023 MB worth of files on disk, or would it grab some other collection? Just wondering: is it 1 GB of my hottest data, or does it bring cold data along too?

    Comment by A guy you know — August 22, 2012 @ 2:58 pm

  5. I’m not familiar with the thresholds EMC has, but in all cases some cold data can be moved as well, whether it is EMC, IBM, 3PAR, etc. The only way to avoid moving cold (or less hot) data is to make the granularity very small, though there are trade-offs there in the overhead of tracking such small units of data, especially on a large system (see the rough sizing sketch after the comments).

    One extreme example would be this post I wrote on something Compellent was thinking about implementing:
    http://www.techopsguys.com/2011/03/02/compellent-gets-hyper-efficient-storage-tiering/

    In it they speculated they could (or perhaps would, or may have by now, I’m not sure) go as small as 32kB, vs. the 512kB blocks they said they were moving stuff around in at the time.

    I believe in IBM’s case their XIV has more granular movement of data vs their older platforms. I haven’t come across anything about EMC that shows any products using a size of less than 1GB.

    This movement of such large chunks of data is one of the reasons I have yet to buy into the concept: it takes a lot of I/Os to move a GB of data off a SATA disk (and it’s even more expensive to put it back!). While 3PAR’s 128MB is significantly smaller, it’s still quite big in the grand scheme of things.

    Thanks for the comment!

    Comment by Nate — August 22, 2012 @ 3:13 pm
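For a rough sense of the tracking-overhead trade-off described in the last comment, here is a quick sketch comparing how many extents a tiering engine would have to watch at the granularities mentioned in this post and thread. The 100TB array size is an arbitrary assumption for illustration:

    # Number of extents to track at the granularities discussed above.
    # The 100TB array size is an arbitrary example, not any real configuration.

    ARRAY_BYTES = 100 * 1024**4  # assume a 100TB array

    granularities = {
        "1GB   (EMC FAST v2 / IBM Easy Tier)":   1 * 1024**3,
        "128MB (3PAR)":                          128 * 1024**2,
        "2MB   (Compellent, per comment #1)":    2 * 1024**2,
        "512kB (Compellent, per comment #5)":    512 * 1024,
        "32kB  (Compellent's speculated floor)": 32 * 1024,
    }

    for label, extent_bytes in granularities.items():
        print(f"{label:40s} {ARRAY_BYTES // extent_bytes:>14,} extents")

Dropping from 1GB to 32kB extents means moving far less cold data per promotion, but the bookkeeping on the same array grows from roughly a hundred thousand extents to over three billion, which is the overhead trade-off in a nutshell.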
