TechOpsGuys.com Diggin' technology every day

23 Aug 2012

Real time storage auto tiering?

Was reading this article over at our friends at The Register, and apparently there is some new system from DotHill that claims to provide real time storage tiering - that is, moving data between tiers every 5 seconds.

The AssuredSAN Pro 5000 has a pretty bad name, and it borrows the autonomic term that 3PAR loves to use so much (I was never too fond of it, and preferred the more generic automatic) to describe its real time tiering. According to their PDF, they move data around in 4MB increments, a much finer granularity than most other array makers use.

From the PDF -

  • Scoring to maintain a current page ranking on each and every I/O using an efficient process that adds less than one microsecond of overhead. The algorithm takes into account both the frequency and recency of access. For example, a page that has been accessed 5 times in the last 100 seconds would get a high score.
  •  Scanning for all high-scoring pages occurs every 5 seconds, utilizing less than 1.0% of the system’s CPU. Those pages with the highest scores then become candidates for promotion to the higher-performing SSD tier.
  •  Sorting is the process that actually moves or migrates the pages: high scoring pages from HDD to SSD; low scoring pages from SSD back to HDD. Less than 80 MB of data are moved during any 5 second sort to have minimal impact on overall system performance.
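The three steps above can be sketched roughly like this. This is a toy model only; the decay constant and promotion threshold are my assumptions, not DotHill's actual values, and the real implementation obviously runs inside the array firmware, not in Python:

```python
from collections import defaultdict

PAGE_SIZE = 4 * 1024 * 1024        # 4MB pages, per the DotHill PDF
SORT_BUDGET = 80 * 1024 * 1024     # move at most 80MB per 5-second sort cycle
DECAY = 0.5                        # hypothetical recency decay applied each scan

class TieringEngine:
    """Toy score/scan/sort model of the tiering described in the PDF."""

    def __init__(self, threshold=5.0):
        self.scores = defaultdict(float)   # page id -> activity score
        self.on_ssd = set()                # pages currently on the SSD tier
        self.threshold = threshold         # promotion cutoff (made up)

    def record_io(self, page):
        # Scoring: bump the page's score on every I/O (frequency);
        # recency is captured by decaying all scores on each scan.
        self.scores[page] += 1.0

    def scan_and_sort(self):
        # Scanning: every 5 seconds, rank all pages by score.
        ranked = sorted(self.scores, key=self.scores.get, reverse=True)
        budget = SORT_BUDGET
        moves = []
        for page in ranked:
            if budget < PAGE_SIZE:
                break  # hit the 80MB-per-cycle cap; stop moving data
            if self.scores[page] >= self.threshold and page not in self.on_ssd:
                moves.append((page, "HDD->SSD"))   # promote hot page
                self.on_ssd.add(page)
                budget -= PAGE_SIZE
            elif self.scores[page] < self.threshold and page in self.on_ssd:
                moves.append((page, "SSD->HDD"))   # demote cold page
                self.on_ssd.discard(page)
                budget -= PAGE_SIZE
        # Decay so that recency matters, not just lifetime access count.
        for page in self.scores:
            self.scores[page] *= DECAY
        return moves
```

The interesting design point is the hard 80MB cap per cycle: the sort loop simply stops when the budget runs out, which bounds how much background movement can compete with foreground I/O in any given 5-second window.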

80MB of data every 5 seconds is quite a bit for a small storage system. I have heard of situations where auto tiering moved so much data around internally that it actually made things worse, and had to be disabled. I would hope they have some other safeguards in there, like watching spindle latency, scheduling the movements at low priority, and perhaps even canceling movements if the job can't be completed within some period of time.
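A safeguard along those lines might look something like the sketch below. This is purely illustrative; DotHill's PDF doesn't describe any such mechanism, and the latency ceiling, deadline, and every function name here are made up:

```python
import time

LATENCY_CEILING_MS = 20.0   # hypothetical: skip tiering if spindles are busy
MOVE_DEADLINE_S = 5.0       # cancel a sort cycle that overruns its window

def migrate_pages(moves, read_latency_ms, copy_page, now=time.monotonic):
    """Illustrative migration throttle: defer the whole cycle when backend
    latency is already high, and abandon it if it can't finish within the
    5-second window. Returns the number of pages actually moved."""
    if read_latency_ms() > LATENCY_CEILING_MS:
        return 0  # spindles are saturated; defer tiering entirely
    deadline = now() + MOVE_DEADLINE_S
    done = 0
    for page, direction in moves:
        if now() > deadline:
            break  # cancel the remainder rather than drag down foreground I/O
        copy_page(page, direction)
        done += 1
    return done
```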

The long delay between data movements is perhaps the #1 reason why I have yet to buy into the whole automagic storage tiering concept. My primary source of storage bottlenecks over the years has been some out-of-nowhere job that generates a massive amount of writes and blows out the caches on the system. Sometimes the number of inbound I/Os is really small, but the I/O size can be really big (5-10x normal); it then gets split up into smaller I/Os on the back end, and the IOPS are multiplied as a result.

The worst offender was a special application I supported a few years ago which, as part of its daily process, would dump tens of GB of data from dozens of servers in parallel to the storage system as fast as it could. This didn't end well, as you can probably imagine; at the peak we had roughly 60GB of RAM cache between the SAN and NAS controllers. I tried to get them to re-architect the app to use something like local Fusion-io storage, but they said they needed shared NFS. I suspect this sort of process would not be helped much by automatic storage tiering, because I'm sure the blocks change each day to some degree.

This is also why I have not been a fan of things like SSD read caches (hey there NetApp!). And of course having an SSD-accelerated write cache on the server end doesn't make a lot of sense either, since you could lose data in the event the server fails - unless you have some sort of fancy mirroring system to mirror the writes to another server, but that sounds complicated and problematic, I suspect.

Compellent does have one type of tiering that is real time, though: all writes by default go to the highest tier and are then moved down to lower tiers later. This particular tiering is even included in the base software license. It's a feature I wish 3PAR had.

This DotHill system also supports thin provisioning, though there is no mention of thin reclamation - not too surprising given the market this is aimed at.

They also claim to have some sort of rapid rebuild, by striping volumes over multiple RAID sets. I suppose this is less common in the markets they serve (it certainly isn't possible on some of the DotHill models), though it has of course been the norm for a decade or more on larger systems. Rapid rebuild to me obviously involves sub-disk distributed RAID.

Given that DotHill is a brand that others frequently OEM, I wonder whether this particular tech will bubble up under another name, or whether the bigger names will pass on it for fear it may eat into their own storage system sales.

TechOps Guy: Nate
