So according to this article from our friends at The Register, Compellent is considering taking its storage tiering to absurdly efficient levels, shrinking the size of the data chunks it migrates from the current, already insanely efficient, 512kB down to 32kB.
That’s just insane!
For reference, as far as I know (see the quick math after the list):
- 3PAR moves data around in 128MB chunks
- IBM moves data around in 1GB chunks (someone mentioned that XIV uses 1MB)
- EMC moves data around in 1GB chunks
- Hitachi moves data around in 42MB chunks (I believe this is the same data size they use for allocating storage to thin provisioned volumes)
- NetApp has no automagic storage tiering functionality, though they do have PAM cards, which they claim are better.
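To put those chunk sizes in perspective, here's a quick back-of-envelope sketch (nothing vendor-specific beyond the sizes listed above; I'm assuming binary units) of how many regions a tiering engine would have to track per TiB at each granularity:

```python
# Regions a tiering engine must track per TiB of capacity at each
# of the chunk sizes listed above (binary units assumed).
CHUNK_SIZES = {
    "Compellent (current)":  512 * 1024,     # 512kB
    "Compellent (proposed)": 32 * 1024,      # 32kB
    "3PAR":                  128 * 1024**2,  # 128MB
    "Hitachi":               42 * 1024**2,   # 42MB
    "IBM / EMC":             1024**3,        # 1GB
}

TIB = 2**40  # one TiB in bytes

for vendor, chunk in CHUNK_SIZES.items():
    print(f"{vendor:>22}: {TIB // chunk:>12,} regions per TiB")
```

That works out to roughly 1,024 regions per TiB at 1GB, about 8,200 at 128MB, and over 33 million at 32kB, which is the scale of the gap I'm talking about.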
I have to admit I do like Compellent's approach the best here; hopefully 3PAR can follow. I know 3PAR allocates data to thin provisioned volumes in 16kB chunks; what I don't know is whether their system can be tuned down to a similarly granular level of storage tiering.
There’s just no excuse for the inefficient IBM and EMC systems though, really, none.
Time will tell whether Compellent actually follows through with going as granular as 32kB; I can't help but suspect the CPU overhead of monitoring so many chunks will be too much for the system to bear.
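For a rough sense of why, here's a sketch of the bookkeeping alone, assuming the array keeps one small access counter per chunk (the 16 bytes per counter is purely my assumption, not a Compellent figure):

```python
# Rough bookkeeping cost of monitoring at 32kB granularity.
TIB = 2**40
CHUNK = 32 * 1024     # proposed 32kB chunk size
COUNTER_BYTES = 16    # assumed: access count + last-access timestamp

for capacity_tib in (1, 10, 100):
    chunks = capacity_tib * TIB // CHUNK
    overhead_gib = chunks * COUNTER_BYTES / 2**30
    print(f"{capacity_tib:>4} TiB -> {chunks:>13,} chunks, "
          f"~{overhead_gib:.1f} GiB of counters")
```

Half a GiB of tracking state per TiB of capacity, before the system has moved a single byte, and every one of those counters has to be updated in the I/O path.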
Maybe if they had a purpose-built ASIC…