TechOpsGuys.com – Diggin' technology every day

November 15, 2011

LSI quietly smothers Onstor

Filed under: Storage — Nate @ 8:35 pm

About a year ago I was in the market for a new NAS box to hook up to my 3PAR T400, something to eventually replace the Exanet cluster that was hooked up to it, since Exanet as a company had gone bust.

There weren’t many options left. Onstor had been bought by LSI, I really couldn’t find anything on the Ibrix offering from HP at the time (at least for a SAN-attached Ibrix rather than a scale-out Ibrix), and then there was of course the NetApp V-Series. I could not find any trace of Polyserve, which HP acquired as well, other than something related to SQL Server.

Old 3PAR graphic that I dug up from the Internet Archive

3PAR was suggesting I go with Onstor (this was, of course, before the HP/Dell bidding war), claiming they still had a good relationship with Onstor through LSI. I think it was less about the partnership and more about NetApp using the V-Series to get their foot in the door and then try to replace the back end disk with their own, a situation 3PAR (or any other competitor) understandably doesn’t like to be in.

My VAR, on the other hand, had another story to tell: after trying to reach out to LSI/Onstor, they determined that Onstor was basically on its death bed, with only a single reseller in the country authorized to sell the boxes, and it seemed like there were maybe a half dozen employees left working on the product.

So, I went with NetApp, then promptly left the company and left things in the hands of the rest of the team (there’s been more than 100% turnover since I left, both in the team and in the management).

One of my other friends who used to work for Exanet suggested to me that LSI bought Onstor with the possible intention of integrating the NAS technology into their existing storage systems, to be able to offer a converged storage option to their customers, and that the standalone gateway would probably be going away.

Another product I had my eyes on at the time, and one 3PAR was working hard to integrate with, was the Symantec Filestore product. I was looking forward to using it, and other companies were looking to Filestore to replace their Exanet clusters as well. Though I got word through unofficial channels that Symantec planned to kill the software-only version and go the appliance route. It took longer than I was expecting, but they finally did it; I was on their site recently and noticed that the only way to get it now is with integrated storage from their Chinese partner.

I kept tabs on Onstor now and then, wondering if it would come back to life in some form. The current state of the product, at least from a SAN connectivity perspective, seemed to be very poor – in most cases you couldn’t do things like live software upgrades on a cluster; the system had to take a full outage. But no obvious updates ever came.

Then LSI sold their high end storage division to NetApp. I suppose that was probably the end of the road for Onstor.

So tonight, I was hitting some random sites and decided to check in on Onstor again, only to find most traces of the product erased from LSI’s site.

The only things I ever really heard about Onstor were how the likes of BlueArc and Exanet were replacing Onstor clusters. I did talk to one service provider who had an Onstor system (I think connected to a 3PAR too); I spoke with them briefly while I was thinking about which NAS gateway to move to about a year ago, and they seemed fairly content with it, no major complaints. Though it seemed like if they were to buy something new (at that time) they probably wouldn’t buy Onstor, due to the uncertainty around it.

It seemed to be an interesting design – using dual-processor, quad-core MIPS CPUs of all things.

RIP Onstor, another one bites the dust.

Hopefully LSI doesn’t do the same to 3ware. I always wondered why 3ware technology was never integrated (as far as I know anyways) into server motherboards, even after LSI acquired them, given that a lot of integrated RAID controllers (Dell etc.) are LSI. I think for the most part the 3ware technology is better (if not, why did they get acquired, and why did their products keep being developed?). I’ve been a 3ware user for what seems like 12 years now, and really have no complaints.

I really hope the HP X9000 NAS gateway works out, though the entry-level pricing for it as-is seems quite high to me.

Dell’s distributed core

Filed under: Networking — Nate @ 9:59 am

Dell’s Force10 unit must be feeling the heat from the competition. I came across this report, which the test lab Tolly did on behalf of Dell/Force10.

Normally I think Tolly reports are halfway decent, although they are usually heavily biased towards the sponsor (not surprisingly). This one, though, felt light on details, like they rushed it to market.

Basically what Force10 is talking about is a distributed core architecture, with their 32-port 40GbE Z9000 switches as what they call the spine (though sometimes they are used as the leaf), and their 48-port 10GbE S4810 switches as what they call the leaf (though sometimes they are used as the spine).

They present 3 design options:

Force10 Distributed Core Design

I find three things interesting about these options they propose:

  • The minimum node count for spine is 4 nodes
  • They don’t propose an entirely non-blocking fabric until you get to “large” (see the port-math sketch after this list)
  • The “large” design is composed entirely of Z9000s, yet they keep the same spine/leaf configuration; what’s keeping them from being entirely spine?
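
For what it’s worth, the non-blocking question mostly comes down to simple port math on the leaf switches. Here’s a back-of-the-envelope sketch using the published port counts of the two boxes; the 4x40GbE uplinks on the S4810 and the 16/16 port split on a Z9000 leaf are my own assumptions about how you’d wire it, not something spelled out in the Tolly report:

```python
# Rough leaf/spine port math. Port counts are the published ones for the
# S4810 (48x10GbE + 4x40GbE) and Z9000 (32x40GbE); how the ports get split
# between hosts and uplinks is my assumption, not the report's.

def oversubscription(host_ports, host_gbps, uplink_ports, uplink_gbps):
    """Ratio of host-facing bandwidth to spine-facing bandwidth on a leaf."""
    return (host_ports * host_gbps) / (uplink_ports * uplink_gbps)

# S4810 as leaf: 48 x 10GbE to hosts, 4 x 40GbE up to the spine
print(oversubscription(48, 10, 4, 40))   # 3.0 -> 3:1, not non-blocking

# Z9000 as leaf (the all-Z9000 "large" design): 16 x 40GbE down, 16 x 40GbE up
print(oversubscription(16, 40, 16, 40))  # 1.0 -> 1:1, non-blocking
```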

The distributed design is very interesting, though it would be a conceptual hurdle I’d have a hard time getting over if I were in the market for this sort of setup. It’s nothing against Force10 specifically; I just feel safer with a less complex design (I mentioned before that I’m not a fan of stacking for the same reason), with fewer things talking to each other in such a tightly integrated fashion.

That aside, I have a couple of other issues with the report. The first is that while they do provide the configuration of the switches (that IOS-like interface makes me want to stab my eyes with an ice pick), I’m by no means familiar with Force10 configuration and they don’t talk about how the devices are managed. Are the spine switches all stacked together? Are the spine and leaf switches stacked together? Are they using something along the lines of Brocade’s VCS technology? Are the devices managed independently, relying on other protocols like MLAG? The web site mentions using TRILL at layer 2, which would be similar to Brocade.

The other issue I have with the report is the lack of power information. Specifically, I would be interested (slightly; in the grand scheme of things I really don’t think this matters all that much) in the power per usable port (ports that aren’t being used for uplinks or cross connects). They do rightly point out that power usage can vary depending on the workload, so it would be nice to get power usage based on the same workload. Though conversely it may not matter as much: the specs for the Extreme X670V (48x10GbE + 4x40GbE) show only 8 watts of difference between 30% traffic load and 100% traffic load on that particular switch, which seems like a trivial amount.

Extreme Networks X670V Power Usage

As far as I know the Force10 S4810 switch uses the same Broadcom chipset as the X670V.
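
Just to spell out the arithmetic I’m after: power per usable port means dividing the whole box’s draw by only the ports left over for hosts, not the ones burned on uplinks or cross connects. The wattages below are placeholders I made up purely for illustration (the report doesn’t publish per-switch draw); the port counts are the real ones.

```python
# Power per *usable* port: total switch draw divided by only the ports that
# remain for hosts after uplinks/cross-connects are taken out.
# Wattages here are made-up placeholders; port counts match the switches above.

def watts_per_usable_port(total_watts, host_ports, ports_lost_to_uplinks=0):
    return total_watts / (host_ports - ports_lost_to_uplinks)

# A 48x10GbE leaf (S4810/X670V class) uplinking over its 4x40GbE ports keeps
# all 48 10GbE ports usable:
print(watts_per_usable_port(220.0, 48))      # ~4.6 W/port (hypothetical 220W draw)

# A Z9000 used as a leaf gives up 16 of its 32 40GbE ports to the spine,
# so the same chassis draw is spread over half the ports:
print(watts_per_usable_port(800.0, 32, 16))  # 50 W/port (hypothetical 800W draw)
```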

On their web site they have a nifty little calculator where you input your switch fabric capacity and it spits out power/space/unit numbers. The numbers there don’t sound as impressive:

  • 10Tbps fabric = 9.6kW / 12 systems / 24RU
  • 15Tbps fabric = 14.4kW / 18 systems / 36RU
  • 20Tbps fabric = 19.2kW / 24 systems / 48RU

The many-times-aforementioned Black Diamond X-Series comes in at somewhere around 4kW (well, if you want to be really conservative you could say 6.1kW assuming 8.1W/port, though that report’s figure was likely high considering the system configuration), and it is a single system to get up to 20Tbps of fabric (you could perhaps technically say it has 15Tbps of fabric, since the last 5Tbps is there for redundancy; 192 x 80Gbps = 15.36Tbps). 14.5RU worth of rack space too.
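
Putting the calculator output next to the chassis numbers, here’s the quick comparison I’m making in my head, using only the figures quoted above (so the usual caveats about vendor-supplied numbers apply):

```python
# kW and rack units per Tbps of fabric, using only the figures quoted above.

rows = [  # (label, fabric Tbps, kW, systems, RU)
    ("Force10 10Tbps",     10,  9.6, 12, 24),
    ("Force10 15Tbps",     15, 14.4, 18, 36),
    ("Force10 20Tbps",     20, 19.2, 24, 48),
    ("BD X-Series 20Tbps", 20,  4.0,  1, 14.5),
]

for name, tbps, kw, systems, ru in rows:
    print(f"{name}: {kw / tbps:.2f} kW/Tbps, {ru / tbps:.2f} RU/Tbps, {systems} system(s)")

# The Force10 design works out to roughly 0.96 kW and 2.4 RU per Tbps at every
# size; the single chassis comes in around 0.20 kW and 0.73 RU per Tbps.
```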

Dell claims non-blocking scalability up to 160Tbps, which is certainly a lot! Though I’m not sure what it would take for me to make the leap into a distributed system such as TRILL. Given that TRILL is a layer 2 only protocol (which I complained about a while ago), I wonder how they handle layer 3 traffic: is it distributed in a similar manner? What is the performance at layer 3? Honestly I haven’t read much on TRILL at this point (mainly because it hasn’t really interested me yet), but one thing that is not clear to me (maybe someone can clarify) is whether TRILL is just a traffic management protocol, or whether it also includes more transparent system management (e.g. managing multiple devices as one), or whether that system management part requires more secret sauce from the manufacturer.

My own (of course biased) thoughts on this architecture, innovative as it is:

  • Uses a lot of power / consumes a lot of space
  • Lots of devices to manage
  • Lots of connections – complicated physical network
  • Worries over resiliency of TRILL (or any tightly integrated distributed design – getting this stuff right is not easy)
  • On paper at least seems to be very scalable
  • The Z9000 32-port 40GbE switch certainly seems to be a nice product from a pure hardware/throughput/form factor perspective. I just came across Arista’s new 1U 40GbE switch, and I think I’d still prefer the Force10 design at twice the size and twice the ports, purely for more line-rate ports in a single unit.

It would be interesting to read a bit more in depth about this architecture.

I wonder if this is going to be Force10’s approach going forward, the distributed design, or if they are going to continue to offer more traditional chassis products for customers who prefer that type of setup. In theory it should be pretty easy to do both.
