Diggin' technology every day

March 2, 2011

Compellent gets Hyper efficient storage tiering

Filed under: Storage — Nate @ 9:24 am

So according to this article from our friends at The Register, Compellent is considering going to absurdly efficient storage tiering, shrinking the size of the data blocks it migrates to 32kB from their current, already insanely efficient, 512kB.

That’s just insane!

For reference, as far as I know:

  • 3PAR moves data around in 128MB chunks
  • IBM moves data around in 1GB chunks (someone mentioned that XIV uses 1MB)
  • EMC moves data around in 1GB chunks
  • Hitachi moves data around in 42MB chunks (I believe this is the same data size they use for allocating storage to thin provisioned volumes)
  • NetApp has no automagic storage tiering functionality, though they do have PAM cards which they claim are better.

I have to admit I do like Compellent’s approach the best here; hopefully 3PAR can follow. I know 3PAR allocates data to thin provisioned volumes in 16kB chunks; what I don’t know is whether or not their system can be adjusted down to a more granular level of storage tiering.

There’s just no excuse for the inefficient IBM and EMC systems though, really, none.

Time will tell if Compellent actually follows through with going as granular as 32kB; I can’t help but suspect the CPU overhead of monitoring so many chunks will be too much for the system to bear.
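For a sense of scale, here’s a quick back-of-envelope script comparing how many extents an array would have to track per TiB at each of the granularities listed above. The 64-byte per-extent metadata cost is purely a made-up illustrative figure, not any vendor’s real number:

```python
# Back-of-envelope: how many extents must an array track per TiB of
# capacity at each vendor's reported migration granularity?
# The per-extent metadata cost below is an invented illustrative figure.

TB = 1024 ** 4  # bytes in one TiB

granularities = {
    "Compellent (proposed)": 32 * 1024,         # 32 kB
    "Compellent (current)":  512 * 1024,        # 512 kB
    "XIV":                   1 * 1024 ** 2,     # 1 MB
    "Hitachi":               42 * 1024 ** 2,    # 42 MB
    "3PAR":                  128 * 1024 ** 2,   # 128 MB
    "EMC / IBM":             1 * 1024 ** 3,     # 1 GB
}

ASSUMED_METADATA_BYTES = 64  # hypothetical per-extent bookkeeping cost

for vendor, size in sorted(granularities.items(), key=lambda kv: kv[1]):
    extents = TB // size
    overhead_mib = extents * ASSUMED_METADATA_BYTES / 1024 ** 2
    print(f"{vendor:>22}: {extents:>10,} extents/TiB, "
          f"~{overhead_mib:.1f} MiB metadata/TiB")
```

At 32kB you’re tracking over 33 million extents per TiB versus about a thousand at 1GB, which is why the monitoring overhead question matters.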

Maybe if they had a purpose-built ASIC…



  1. FYI:

    IBM’s XIV uses 1MB chunks

    Comment by thegreatsatan — March 2, 2011 @ 11:25 am

  2. good to know! thanks!

    Comment by Nate — March 2, 2011 @ 1:39 pm

  3. Was talking with an EMC SE the other day about this, and he brought up the point that if you have very small chunks that you are moving around constantly, exactly how much processor time on the array is dedicated to the management of those chunks, and how quickly can it react to hot data?

    IIRC the Compellent boxes perform data migration once a day, so yeah, moving those chunks around just once a day and dedicating a specific time to their movement would make sense. On the flip side, moving 1MB/1GB chunks around as needed at any time during the day might offer better response times for hot data.

    In the end, the devil tends to be in the details, and while small chunk size has its benefits, there will be drawbacks as well.

    Comment by thegreatsatan — March 9, 2011 @ 10:43 am

  4. I’m just going to say this: “The Register” lot did dirty deeds to get their pitch.

    Maybe best left unsaid, but they played company law like pros when they had little.


    Not connected, but I heard a lot.

    – j

    Comment by John (other John) — March 9, 2011 @ 6:49 pm

  5. @thegreatsatan … 1MB chunks sound great… only XIV doesn’t employ tiering, as it has one tier of crappy SATA disk and nothing more.

    Comment by Adam Wolfson — March 18, 2011 @ 11:41 am

  6. True, it’s not tiering; the point was the size of the chunklets that are moved about the array.

    I will say, even with my “crappy SATA disk” I can pull in excess of 30k IOPS without any real degradation of performance, at around 2ms latency. Not bad for crappy SATA disk.

    Comment by thegreatsatan — March 30, 2011 @ 3:32 pm

  7. The post was about chunk/page size for tiering, not about page sizes in general for allocation of writes. 1MB is still pretty crappy for page sizes; if you look at 3PAR and Compellent, or even VMAX, their allocation sizes are FAR smaller.

    Also.. is your XIV still pumping out 30k+ IOps when you surpass 60-65% storage utilization? Or did your IBM rep convince you to buy a 2nd frame before you got to that point?

    Comment by adam — July 14, 2011 @ 7:41 am

  8. @ thegreatsatan … where did you hear Compellent only relevels data once a day? I’d love some meat behind that claim, as I’ve been told the opposite by Compellent many times…

    Comment by adam — July 14, 2011 @ 7:44 am

  9. 3PAR allocates storage in 16kB increments from CPGs (Storage pools) to VVs (Virtual volumes) – that’s pretty efficient. They tout this highly.

    Myself, I don’t really care if the allocation size is 16kB or 42MB (HDS); in the grand scheme of things, unless you’re running thousands of volumes on your system the impact is not going to be noticeable.

    Comment by Nate — July 14, 2011 @ 7:45 am

  10. One of the biggest advantages in this discussion about Compellent is that ALL writes go to the fastest tier in your volume (always RAID 10, btw), and it then relevels your blocks to tiers 2 and 3 (other RAID levels or other disk types). Depending on the usage of the data, you will be reading it from one of those tiers. There is no competitor at this point that does this.

    It is correct that the default data progression schedule runs once a day, starting at 7PM. Restriping also occurs when adding drives or when changing the tier redundancy (e.g. changing tier 3 from RAID 5 to RAID 6). I don’t know at this point whether changing the defaults is limited to the time schedule (starting hour) or also covers the number of runs per day.

    The biggest advantage for 3PAR is the superior hardware: they have their own ASIC and the ability to use 4 controllers in one system (HUGE advantage over competitors!!!)

    disclaimer: I am a Dell Compellent & HP 3PAR partner consultant.

    Comment by HansDeLeenheer — July 20, 2011 @ 1:00 am

  11. […] moves dynamically. The core strengths of Compellent Storage center are Dynamic block architecture, Intelligent Tiering, Data Progression. Virtualization at disk level helps Compellent automatically creates tiers from […]

    Pingback by Today’s Dell Digest September 8, 2011 » ServerKing — September 8, 2011 @ 4:40 pm

  12. […] The AssuredSAN Pro 5000 has a pretty bad name,  but borrows the autonomic term that 3PAR loves to use so much(I was never too fond of it, preferred the more generic term automatic), in describing their real time tiering. According to their PDF, they move data around in 4MB increments, far below most other array makers. […]

    Pingback by Real time storage auto tiering? « — August 23, 2012 @ 9:42 am

  13. I know this thread is a bit old; it came up in one of my Google searches. I would like to make a couple of points regarding EMC (disclaimer: I am a pre-sales engineer for EMC).

    First, the 1GB sub-LUN slice is accurate for FAST VP auto-tiering. This decision really came down to the fact that EMC did not want to run into potential controller issues from overrunning the processors. While the majority of customers do not come close to overrunning the processors on the VNXs, there could be a few utilizing a large amount of EFDs that would end up saturating the processors, and we all know that any bad experience is immediately put on the internet, whereas good experiences are not always.

    Second, EMC has a software product called FAST Cache which is not mentioned once here. FAST Cache was built to help overcome some of the issues around the 1GB size. FAST Cache utilizes enterprise flash drives (EFDs) as an extension of DRAM cache. When a LUN/pool is enabled for FAST Cache, blocks of data are cached in real time to the EFDs. FAST Cache works at a 64K IO size or smaller and is aimed at small-block IO. FAST Cache is also read/write enabled; this is very important, as any cached data also gets write benefits. Depending on the model of VNX, you can go up to 2.1TB of usable FAST Cache. I personally have several customers with 200GB of usable FAST Cache (4x100GB EFD, mirrored) that have 50-60% of their IOs being serviced out of FAST Cache.

    Third – I am not the most familiar with 3PAR hardware, so I cannot say one way or another whether they do in fact have superior hardware. I do know that EMC is the first and only (to my knowledge) validation partner of Intel’s. This means we are developing on pre-release chips and are able to efficiently optimize our code in reasonable time for the latest Intel processors coming out. The Intel Westmere processors in use today are extremely fast, and I have not seen a single customer come close to saturating them.

    Just my two cents. I try to be vendor neutral and keep an open mind when reading and posting on forums. I know that no one product is perfect; while I do personally feel that EMC is a market leader with strong differentiators to back that, I am always open to hearing other opinions and learning new things about other products.



    Comment by Justin — September 12, 2012 @ 6:01 pm
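A toy sketch of the promote-on-access idea Justin describes above, for readers curious how 64KiB-granularity caching differs from scheduled tiering. The hit threshold and LRU eviction here are invented for illustration; EMC’s actual promotion policy may differ:

```python
# Toy model of a flash-cache layer that promotes 64 KiB blocks after
# repeated hits, loosely inspired by the promote-on-access behavior
# described above. The threshold and eviction policy are invented for
# illustration and are not EMC's real algorithm.
from collections import OrderedDict

BLOCK = 64 * 1024          # cache granularity in bytes
PROMOTE_AFTER = 3          # hypothetical hit threshold before promotion
CAPACITY_BLOCKS = 4        # tiny cache so eviction is visible

hits = {}                  # access counts for not-yet-promoted blocks
cache = OrderedDict()      # promoted blocks, least-recently-used first

def access(offset):
    """Return 'flash' if the 64 KiB block is served from cache, else 'disk'."""
    block = offset // BLOCK
    if block in cache:
        cache.move_to_end(block)        # refresh LRU position
        return "flash"
    hits[block] = hits.get(block, 0) + 1
    if hits[block] >= PROMOTE_AFTER:
        if len(cache) >= CAPACITY_BLOCKS:
            cache.popitem(last=False)   # evict least-recently-used block
        cache[block] = True
        del hits[block]
    return "disk"

# A block is served from flash only after it has proven itself hot:
results = [access(0) for _ in range(5)]
print(results)  # first three reads miss to disk, later reads hit flash
```

The contrast with once-a-day data progression is that a block here starts benefiting within a handful of accesses rather than after the next scheduled sweep.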

  14. thanks for the comment! Yes, I did not mention FAST Cache since it is a different tech (the article was specifically about tiering), though I have mentioned on many occasions my, how should I word it, admiration? for the FAST Cache technology, and how I wish others had something similar. It’s not enough to get me to want to use EMC myself, but I still think it’s great tech (on paper anyway; I haven’t talked to anyone that has used it, and I struggle to think of anyone off the top of my head that I know who uses EMC right now anyway). I’m sure it works pretty well though.

    Couple places that I mention fast cache:
    (though in that last one I misspelled the name; easy to get the names mixed up)

    so hopefully I can redeem myself somewhat 🙂

    thanks again

    Comment by Nate — September 12, 2012 @ 6:10 pm
