TechOpsGuys.com Diggin' technology every day

June 25, 2012

Exanet – two years later – Dell Fluid File system

Filed under: Storage — Nate @ 10:21 pm

I was an Exanet customer a few years ago, up until they crashed. They had a pretty nice scale-out NFS cluster – well, at least it worked well for us at the time, and it was really easy to manage.

Dell bought them over two years ago, hired many of the developers, and has been making the product better, I guess, over the past couple of years. Really I think they could have released a product – wait for it – a couple of years ago, given that Exanet was simply a file system that ran on top of CentOS 4.x at the time. Dell was in talks with Exanet at the time they crashed to make Exanet compatible with an iSCSI back end (because really, who else makes a NAS head unit that can use iSCSI as back end disk?). So even that part of the work was pretty much done.

It was about as compatible as you could get, really. It would be fairly trivial to certify it against pretty much any back end storage. But Dell didn't do that; they sat on it, making it better (one would have to hope, at least). I think at some point along the line, perhaps even last year, they released something in conjunction with Equallogic – I believe that was going to be their first target at least, but with so many different names for their storage products I'm honestly not sure if it has come out yet or not.

Anyways that’s not the point of this post.

Exanet clustering, as I've mentioned before, was sort of like 3PAR for file storage. It treated files like 3PAR treats chunklets. It was highly distributed (but lacked the data movement and re-striping abilities that 3PAR has had for ages).

Exanet File System Daemon (FSD) – a software controller for files in the file system, typically one per CPU core. Each file had a primary FSD and a secondary FSD, and new files would be distributed evenly across all FSDs.

One of the areas where the product needed more work, I thought, was being able to scale up more. It was a 32-bit system, so it inherited your typical 32-bit problems like memory performance going in the tank when you try to address large amounts of memory. When their sun was about to go supernova they told me they had even tested up to 16-node clusters on their system; they could go higher, there just wasn't customer demand.

3PAR too was a 32-bit platform for the longest time, but those limitations were less of an issue for it because so much of the work was done in hardware – it even has physical separation of the memory used for the software vs the data cache. Unlike Exanet, which did everything in software and of course shared memory between the OS and the data cache. Each FSD had its own data cache, something like up to 1.5GB per FSD.

Requests could be sent to any controller, any FSD; if that FSD was not the owner of the file it would send a request over a back end cluster interconnect and proxy the data for you, much like 3PAR does in its clustering.
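To make that a bit more concrete, here is a rough sketch of how I picture the FSD model working: round-robin placement of new files across all FSDs in the cluster, a primary and secondary FSD per file, and any FSD proxying requests it does not own over the back end interconnect. This is my own toy illustration in Python, not Exanet's actual code; the names are made up and the numbers (one FSD per core, roughly 1.5GB of cache each) are just the figures mentioned above.

from itertools import cycle

class FSD:
    """One file system daemon: owns a slice of the files and has its own data cache."""
    def __init__(self, node, core, cache_mb=1536):          # roughly 1.5GB of cache per FSD
        self.node, self.core, self.cache_mb = node, core, cache_mb
        self.files = {}                                     # path -> data (stand-in for real storage)

class Cluster:
    def __init__(self, nodes=2, cores_per_node=8):
        # one FSD per CPU core, across every node in the cluster
        self.fsds = [FSD(n, c) for n in range(nodes) for c in range(cores_per_node)]
        self._placement = cycle(range(len(self.fsds)))      # new files distributed round-robin
        self.owners = {}                                    # path -> (primary FSD, secondary FSD)

    def create(self, path, data):
        primary = next(self._placement)
        secondary = (primary + 1) % len(self.fsds)          # assumption: secondary is simply the next FSD
        self.owners[path] = (primary, secondary)
        self.fsds[primary].files[path] = data

    def read(self, path, entry_fsd):
        primary, _ = self.owners[path]
        if entry_fsd == primary:
            return self.fsds[primary].files[path]           # request landed on the owner: serve locally
        # request landed on a non-owning FSD: proxy it to the owner over the
        # back end cluster interconnect (just a direct lookup in this toy model)
        return self.fsds[primary].files[path]

cluster = Cluster()
cluster.create("/exports/home/nate/file.bin", b"hello")
print(cluster.read("/exports/home/nate/file.bin", entry_fsd=5))     # b'hello', proxied from the owning FSD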

I believed it was a great platform to just throw a bunch of CPU cores and gobs of memory at; it ran on an x86-64 PC platform (an IBM dual socket quad core server was their platform of choice at the time). 8, 10 and 12 core CPUs were just around the corner, as were servers which could easily get to 256GB or even 512GB of memory. When you're talking software licensing costs in the tens of thousands of dollars – give me more cores and RAM, the cost is minimal on such a commodity platform.

So you can probably understand my disappointment when I came across this a few minutes ago, which tries to hype up the upcoming Exanet platform.

  • Up to 8 nodes and 1PB of storage (Exanet could do this and more 4 years ago – though in this case it may be a Compellent limitation, as they may not support more than two Compellent systems behind an Exanet cluster – the docs are unclear) — Originally Exanet was marketed as a system that could scale to 500TB per 2-node pair. Unofficially they preferred you had less storage per pair (how much less was not made clear – at my peak I had around, I want to say, 140TB raw managed by a 2-node cluster? It didn't seem to have any issues with that; we were entirely spindle bound)
  • Automatic load balancing (this could be new – assuming it does what it implies – though the more I think about it, I'd bet it does not do what I think it should do, and probably does the same load balancing Exanet did four years ago, which was less load balancing and more round-robin distribution)
  • Dual processor quad core with 24GB – Same controller configuration I got in 2008 (well, the CPU cores are newer) — Exanet's standard was 16GB at the time but you could special order 24GB, though there was some problem with 24GB at the time that we ran into during a system upgrade – I forget what it was.
  • Back end connectivity – 2 x 8Gbps FC ports (switch required) — my Exanet was 4Gbps I believe and was directly connected to my 3PAR T400, queue depths maxed out at 1500 on every port.
  • Async replication only – Exanet had block-based async replication in late 2009/early 2010. Prior to that they used a bastardized form of rsync (I never used either technology)
  • Backup power – one battery per controller. Exanet used old-fashioned UPSs in their time; not sure if Dell integrated batteries into the new systems or what.
  • They dropped support for the Apple Filing Protocol (AFP). That was one thing Exanet prided themselves on at the time – they even hired one of the guys that wrote the AFP stack for Linux, and they were the only NAS vendor (that I can recall) at the time that supported AFP.
  • They added support for NDMP – something BlueArc touted to us a lot at the time, but we never used it; it wasn't a big deal. I'd rather have more data cache than NDMP.

I mean, from what I can see there just isn't much progress over the past two years. I really wanted to see things like:

  • 64-bit (the max memory being 24GB implies to me it's still 32-bit OS and file system code)
  • Large amounts of memory – at LEAST 64GB per controller – maybe make it fancy and make it flash-backed? RAM IS CHEAP.
  • More cores! At least 16 cores per controller, though I'd be happier to see 64 per controller (4x Opteron 6276 @ 2.3GHz per controller) – especially for something that hasn't even been released yet. Maybe based on the Dell R815 or R820.
  • At least a 16-node configuration (the number of blades you can fit in a Dell blade chassis, perhaps running the Dell M620 – not to mention this level of testing was pretty much complete two and a half years ago).
  • SSD integration of some kind – metadata at least? There is quite a bit of metadata mapping all those files to FSDs and LUNs etc.
  • Clearer indication that the system supports dynamic re-striping as well as LUN evacuation (LUN evacuation especially was something I wanted to leverage at the time, as the more LUNs you had the longer the system took to fail over. In my initial Exanet configuration the 3PAR topped out at 2TB LUNs; later they expanded this to 16TB, but there was no way from the Exanet side to migrate to them, and Exanet, being fully distributed, worked best if the back end was balanced – so it wasn't a best practice to have a bunch of 2TB LUNs and then start growing by adding 16TB LUNs, you get the idea). The more I look at this pdf the less confident I am in them having added this capability (that PDF also indicates using iSCSI as a back end storage protocol).
  • No clear indication that they support read-write snapshots yet (all indications point to no). For me at the time it wasn't a big deal; snapshots were mostly used for recovering things that were accidentally deleted. They claim high performance with their redirect-on-write – though in my experience performance was not high. It was adequate with some tuning; they claimed unlimited snapshots at the time, but performance did degrade on our workloads with a lot of snapshots.
  • A low-end version that can run in VMware – I know they can do it because I have an email here from 2 years ago with step-by-step instructions for installing an Exanet cluster on top of VMware.
  • Thin provisioning friendly – Exanet wasn't too thin provisioning friendly at the time Dell bought them, and nothing I've seen indicates that has changed (especially with regard to reclaiming storage). The last version Exanet released was a bit more thin provisioning friendly, but I never tested that feature before I left the company – by then the LUNs had grown to full size and there wasn't any point in turning it on.

I can only react based on what I see on the site – Dell isn't talking too much about this at the moment it seems, unless perhaps you're a close partner and sign an NDA.

Perhaps at some point I can connect with someone who has in-depth technical knowledge as to what Dell has done with this fluid file system over the past two years, because really all I see from this vantage point is that they added NDMP.

I’m sure the code is more stable, easier to maintain perhaps, maybe they went away from the Outlook-style GUI, slapped some Dell logos on it, put it on Dell hardware.

It just feels like they could have launched this product more than two years ago, minus the NDMP support (take about 1 hour to put in the Dell logos, and say another week to certify some Dell hardware configuration).

I wouldn’t imagine the SpecSFS performance numbers would have changed a whole lot as a result; maybe it would be 25-35% faster with the newer CPU cores (those SpecSFS results are almost four years old). Well, performance could be boosted more by the back end storage. Exanet used to use the same cheap LSI crap that BlueArc used to use (perhaps still does in some installations on the low end). Exanet even went to the IBM OEM version of LSI and wow, have I heard a lot of horror stories about that too (like entire arrays going off line for minutes at a time and IBM not being able to explain how or why, then all of a sudden they come back as if nothing happened). But one thing Exanet did see time and time again was that performance on their systems literally doubled when 3PAR storage was used (vs their LSI storage). So I suspect fancy Compellent tiered storage with SSDs and such would help quite a bit in improving front end performance on SpecSFS. But that was true when the original results were put out four years ago too.

What took so long? Exanet had promise, but at least so far it doesn’t seem Dell has been able to execute on that promise. Prove me wrong please because I do have a soft spot for Exanet still 🙂

October 18, 2011

Cisco’s new 10GbE push – a little HP and Dell too

Filed under: Networking — Nate @ 7:56 pm

Just got done reading this from our friends at The Register.

More than anything else this caught my eye:

On the surface it looks pretty impressive. I mean, it would be interesting to see exactly how Cisco configured the competing products – as in, which 60 Juniper devices or 70 HP devices did they use, and how were they connected?

One thing that would have been interesting to call out in such a configuration is the number of logical devices needed for management. For example I know Brocade’s VDX product is some fancy way of connecting lots of devices, sort of like more traditional stacking just at a larger scale, for ease of management. I’m not sure whether or not the VDX technology extends to their chassis product, as Cisco’s configuration above seems to imply using chassis switches. I believe Juniper’s Qfabric is similar. I’m not sure if HP or Arista have such technology (I don’t believe they do). I don’t think Cisco does either – but they don’t claim to need it with this big switch. So a big part of the question is managing so many devices, or just managing one. Cost of the hardware/software is one thing..

HP recently announced a revamp of their own 10GbE products, at least the 1U variety. I’ve been working off and on with HP people recently and there was a brief push to use HP networking equipment, but they gave up pretty quick. They mentioned they were going to have “their version” of the 48-port 10-gig switch soon, but it turns out it’s still a ways away – early next year is when it’s supposed to ship, and even if I wanted it (which I don’t) – it’s too late for this project.

I dug into their fact sheet, which was really light on information, to see what, if anything, stood out with these products. I did not see anything that stood out in a positive manner, but I did see this, which I thought was kind of amusing –

Industry-leading HP Intelligent Resilient Framework (IRF) technology radically simplifies the architecture of server access networks and enables massive scalability—this provides up to 300% higher scalability as compared to other ToR products in the market.

Correct me if I’m wrong – but that looks like what other vendors would call Stacking, or Virtual Chassis. An age-old technology, but the key point here was the up to 300% higher scalability. Another way of putting it is at least 50% less scalable – when you’re comparing it to the Extreme Networks Summit X670V (which is shipping – I just ordered some).

The Summit X670 series is available in two models: Summit X670V and Summit X670. Summit X670V provides high density for 10 Gigabit Ethernet switching in a small 1RU form factor. The switch supports up to 64 ports in one system and 448 ports in a stacked system using high-speed SummitStack-V160*, which provides 160 Gbps throughput and distributed forwarding. The Summit X670 model provides up to 48 ports in one system and up to 352 ports in a stacked system using SummitStack-V longer distance (up to 40 km with 10GBASE-ER SFP+) stacking technology.

In short, it’s twice as scalable as the HP IRF feature, because it goes up to 8 devices (56x10GbE each), and HP’s goes up to 4 devices (48x10GbE each — or perhaps they can do 56 too with breakout cables since both switches have the same number of physical 10GbE and 40GbE ports).
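To put actual numbers on that (my own arithmetic, just multiplying out the figures quoted above):

# stacked 10GbE port counts, using the numbers quoted above
extreme_stack = 8 * 56      # SummitStack-V160: 8 units x 56 10GbE ports each = 448, matching the quote
hp_irf_group  = 4 * 48      # HP IRF: 4 units x 48 10GbE ports each = 192
print(extreme_stack, hp_irf_group, round(extreme_stack / hp_irf_group, 2))    # 448 192 2.33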

The list price on the HP switches is WAY high too; The Register calls it out at $38,000 for a 24-port switch. The X670 from Extreme has a list price of about $25,000 for 48 ports (I see it online for as low as about $17k). There was no disclosure of HP’s pricing for their 48-port switch.

Extreme has another 48-port switch which is cheaper (almost half the cost if I recall right – I see it online going for as low as $11,300) but it’s for very specialized applications where latency is really important. If I recall right they removed the PHY (?) from the switch, which dramatically reduces functionality and introduces things like very short cable length limits, but also slashes the latency (and cost). You wouldn’t want to use those for your VMware setup (well, if you were really cost constrained these are probably better than some other alternatives, especially if you’re considering this or 1GbE), but you may want them if you’re doing HPC or something with shared memory, or high frequency stock trading (ugh!).

The X670 also has (or will have? I’ll find out soon) a motion sensor on the front of the switch, which I thought was curious, but seems like a neat security feature – being able to tell if someone is standing in front of your switch screwing with it. It also apparently has the ability (or will have the ability) to turn off all of the LEDs on the switch when someone gets near it, and turn them back on when they go away.

(ok back on topic, Cisco!)

I looked at the Cisco slide above and thought to myself, really, can they be that far ahead? I certainly do not go out on a routine basis and count how many devices, and how much connectivity between them, I need to achieve X number of line rate ports. I’ll keep it simple: if you need a large number of line rate ports, just use a chassis product (you may need a few of them). It is interesting to see though, assuming it’s anywhere close to being accurate.

When I asked myself the question “Can they be that far ahead?” I wasn’t thinking of Cisco, I think I’m up to 7 readers now — you know me better than that! 🙂

I was thinking of the Extreme Networks Black Diamond X-Series which was announced (note not yet shipping…) a few months ago.

  • Cisco claims to do 768 x 10GbE ports in 25U (Extreme will do it in 14.5U)
  • Cisco claims to do 10W per 10GbE port (Extreme will do it in 5W/port)
  • Cisco claims to do it with 1 device .. Well that’s hard to beat but Extreme can meet them, it’s hard to do it with less than one device.
  • Cisco’s new top end taps out at a very respectable 550Gbit per slot (Extreme will do 1.2Tb)
  • Cisco claims to do it with a list price of $1200/port. I don’t know what Extreme’s pricing will be but typically Cisco is on the very high end for costs.

Though I don’t know how Cisco gets to 768 ports, Extreme does it via 40GbE ports and breakout cables (as far as I know), so in reality the X-Series is a 40GbE switch (and I think 40GbE only – to start with, unless you use the breakout cables to get to 10GbE). It was a little over a year ago that Extreme was planning on shipping 40GbE at a cost of $1,000/port. Certainly the X-Series is a different class of product than what they were talking about a while ago, but prices have also come down since.

The X-Series is shipping “real soon now”. I’m sure if you ask them they’ll tell you more specifics.

It is interesting to me, and kind of sad, how far Force10 has fallen in the 10GbE area. I mean, they seemed to basically build themselves on the back of 10GbE (or at least tried to), but I look at their current products on the very high end and, aside from the impressive little 40GbE switch they have, they seem to top out at 140 line rate 10GbE ports in 21U. Dell will probably do well with them; I’m sure it’ll be a welcome upgrade to those customers using Procurve, uh, I mean Powerconnect? That’s what Dell call(ed) their switches, right?

As much as it pains me I do have to give Dell some props for doing all of these acquisitions recently and beefing up their own technology base; whether it’s in storage or networking they’ve come a long way (more so in storage, need more time to tell in networking). I have not liked Dell myself for quite some time, a good chunk of it because they really had no innovation, but part of it goes back to the days before Dell shipped AMD chips, when Dell was getting tons of kickbacks from Intel for staying an Intel-exclusive provider.

In the grand scheme of things such numbers don’t mean a whole lot, I mean how many networks in the world can actually push this kind of bandwidth? Outside of the labs I really think any organization would be very hard pressed to need such fabric capacity, but it’s there — and it’s not all that expensive.

I just dug up an old price list I had from Extreme – from late November 2005. A 6-port 10GbE module for their Black Diamond 10808 switch (I had two at the time) had a list price of $36,000. For you math buffs out there that comes to $9,000 per line rate port.

That particular product was oversubscribed (hence it not being $6,000/port), as well as having a mere 40Gbps of switch fabric capacity per slot, or a total of 320Gbps for the entire switch (it was marketed as a 1.2Tb switch but the hardware never came out to push the backplane to those levels – I had to dig into the depths of the documentation to find that little disclosure. Naturally I found it after I purchased; it didn’t matter for us though, I’d be surprised if we pushed more than 5Gbps at any one point!). If I recall right the switch was 24U too. My switches were 1GbE only, cost reasons 🙂
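For the math buffs, here’s the arithmetic behind that $9,000 figure (my own back-of-the-envelope, using the list price and fabric numbers above):

module_price    = 36_000                    # late 2005 list price, 6-port 10GbE module
nominal_ports   = 6
fabric_per_slot = 40                        # Gbps of switch fabric actually available per slot
line_rate_ports = fabric_per_slot // 10     # only 4 of the 6 ports could run at line rate at once

print(module_price / nominal_ports)         # 6000.0 -> $/port if the module weren't oversubscribed
print(module_price / line_rate_ports)       # 9000.0 -> $/line-rate port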

How far we’ve come..

July 20, 2011

I called it! – Force10 bought by Dell

Filed under: Networking — Nate @ 11:35 am

Not that it matters to me too much either way but Dell just bought Force10. I called it! Well it matters to me in that I didn’t want Dell near my Extreme Networks 🙂

It is kind of sad that Force10 was never able to pull off their IPO. I have heard that they have been losing quite a bit of talent recently, but don’t know to what degree. It’s also unfortunate they weren’t able to fully capitalize on their early leadership in the 10 gigabit arena; Arista seems to be the new Force10 in some respects, though it wouldn’t surprise me if they have a hard time growing too, barring some next-gen revolutionary product.

I wonder if anyone will scoop up BlueArc; they have been trying to IPO as well for a couple of years now, and I’d be surprised if they can pull it off in this market. They have good technology, just a whole lot of debt. Though recently I read they started turning a profit..

 

December 12, 2010

Dell and Exanet: MIA

Filed under: Storage — Nate @ 9:37 pm

The thoughts around Dell buying Compellent made me think back to Dell’s acquisition of the IP and some engineering employees of Exanet – as The Register put it, a crashed NAS company.

I was a customer and user of Exanet gear for more than a year, and at least in my experience it was a solid product, very easy to use, decent performance and scalable. The back end architecture to some extent mirrored the 3PAR hardware-based architecture but in software, really a good design in my opinion.

Basic Exanet Architecture

Their standard server at the time they went under was an IBM x3650, a dual proc quad core Intel Xeon 5500-based platform with 24GB of memory.

Each server ran multiple software processes called fsds, or file system daemons – one fsd per core. Each fsd was responsible for a portion of the file system (x number of files), and they load balanced it quite well; I never had to manually re-balance or anything. Each fsd was allocated its own memory space used for itself as well as cache; if I recall right the default was around 1.6GB per fsd.

Each NAS head unit had back end connectivity to all of the other NAS units in the cluster (minimum 2, maximum tested at the time they went under was 16). A request for a file could come in on any node, any link. If the file wasn’t homed to that node it would transparently forward the request to the right node/fsd to service the request on the back end – much like how 3PAR’s backplane forwards requests between controllers.

Standard for back end network was 10Gbps on their last models.

As far as data protection goes, the use of “commodity” servers did have one downside: they had to use UPS systems as their battery backup to ensure enough time for the nodes to shut down cleanly in the event of a power failure. This could present problems at some data centers, as operating a UPS in your own rack can be complicated from a co-location point of view (think EPO etc). Another design Exanet shared with 3PAR is the use of internal disks to flush cache to, which is something I suppose Exanet was forced into doing; other storage manufacturers use battery-backed cache in order to survive power outages of some duration. But both Exanet and 3PAR dump their cache to an internal disk, so the power outage can last for a day, a week, or even a month and it won’t matter – data integrity is not compromised.
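A minimal sketch of that idea (my own illustration in Python, not Exanet’s or 3PAR’s actual code, and the file path is made up): on loss of utility power the controller dumps its dirty write cache to an internal disk before shutting down, then replays it on the next boot, so the length of the outage doesn’t matter.

import json, os

CACHE_DUMP = "/var/local/cache.dump"    # hypothetical location on the controller's internal disk

dirty_cache = {}                        # block address (string) -> data not yet written to back end storage

def write_to_backend(addr, data):
    pass                                # stand-in for the real destage path to back end storage

def on_power_loss():
    """Called when the UPS signals that utility power is gone: persist the dirty
    write cache to the internal disk so the node can shut down cleanly."""
    with open(CACHE_DUMP, "w") as f:
        json.dump({addr: data.hex() for addr, data in dirty_cache.items()}, f)
        f.flush()
        os.fsync(f.fileno())

def on_boot():
    """On the next power-up, replay whatever was in cache when the power died.
    Because it sits on disk, the outage can last a day or a month and it won't matter."""
    if not os.path.exists(CACHE_DUMP):
        return
    with open(CACHE_DUMP) as f:
        for addr, data in json.load(f).items():
            write_to_backend(addr, bytes.fromhex(data))
    os.remove(CACHE_DUMP)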

32-bit platform

The only thing that held it back was that they didn’t have enough time or resources to make the system fully 64-bit before they went under; that would have unlocked a whole lot of additional performance. Being locked into a 32-bit OS really limited what they could do on a single node, and as processors became ever more powerful they really had to make the jump to 64-bit.

Exanet was entirely based on “commodity” hardware – not only were they using x86 CPUs, but their NAS controllers were IBM 2U rackmount servers running CentOS 4.4 or 4.5, if I recall right.

To me, as previous posts have implied, if you’re going to base your stuff on x86 CPUs, go all out, it’s cheap anyways. I would have loved to have seen a 32-48 core Exanet NAS controller with 512GB-1TB of memory on it.

Back to Dell

Dell originally went into talks with Exanet a while back because Exanet was willing to certify Equallogic storage as a back end provider of disk to an Exanet cluster, using iSCSI in between the Exanet cluster and the Equallogic storage – since nobody else in the industry seemed willing to have their NAS solution talk to a back end iSCSI system. As far as I know the basic qualification for this solution was completed in 2009, quite a ways before they ran out of cash.

Why did Exanet go under? I believe primarily because the market they were playing in was too small, with too few players in it and not enough deals to go around, so whoever had the most resources to outlast the rest would come out on top. In this case I believe it was Isilon, even though they too were taken out by EMC – from the looks of their growth it didn’t seem like they were in a fine position to continue to operate independently. With Ibrix and Polyserve going to HP, and Onstor going to LSI, I’m still convinced BlueArc will go to HDS at some point (they are once again filing for IPO but word on the street is they aren’t in very good shape) – I suspect after they fail to IPO and go under. They have a very nice NAS platform, but HDS has their hands tied in supporting 3rd party storage other than HDS product; BlueArc OEMs LSI storage like so many others.

About a year ago SGI OEM’d one of BlueArc’s products, though recently I have looked around the SGI site and see no mention of it. Either they have abandoned it (more likely) or are just really quiet. Since I know SGI is also a big LSI shop I wonder if they are making the switch to Onstor. One industry insider I know suspects LSI is working on integrating the Onstor technology directly into their storage systems rather than having an independent head unit, which makes sense if they can make it work.

But really my question is why hasn’t Dell announced anything related to the Exanet technology? They could have, quite possibly within a week or two, had a system running and certified on Dell PowerEdge equipment and selling to both existing Exanet customers as well as new ones. The technology worked fine, it was really easy to set up and use, and it’s not as if Dell has another solution in house that competes with it. AND since it was an entirely software-based solution there were really no costs involved in manufacturing. Exanet had more than one PB-sized deal in the works at the time they went under; that’s a lot of goodwill Dell just threw away. But hey, what do you expect, it’s Dell. Thankfully they didn’t get their dirty paws on 3PAR.

When I looked at how a NetApp system was managed compared to the Exanet, my only response was: You’re kidding, right?

Time will tell if anything ever comes of the technology.

I really wanted 3PAR to buy them of course; they were very close partners with 3PAR and both pitched each other’s products at every opportunity. Exanet would go out of their way to push 3PAR storage whenever possible because they knew how much trouble the LSI storage could be, and they were happy to get double the performance per spindle off 3PAR vs LSI. But I never did get an adequate answer out of 3PAR as to why they did not pursue Exanet – they were in the early running but pulled out for whatever reason. The price tag of less than $15M was a steal.

Now that 3PAR is with HP we’ll see what they can do with Ibrix. I knew of more than one customer that migrated off of things like Ibrix and Onstor to Exanet, and HP has been pretty silent about Ibrix since they bought them, as far as I know. I have no idea how much R&D they have pumped into it over the years or what their plans might be.

Dell going after Compellent

Filed under: Storage — Nate @ 12:26 am

I know this first made news a couple of days ago but I can’t tell you how busy I’ve been recently. It seems like after Dell got reamed by HP in the 3PAR bidding war they are going after Compellent, one of the only other storage technology companies utilizing distributed RAID, and as far as I know the main pioneer of automagic storage tiering.

This time around nobody else is expected to bid; it seems the stock speculators were a bit disappointed when the talks were announced, as they had already bid the stock up far higher than what is being discussed as the acquisition price.

While their previous generation of controllers seemed rather weak, their latest and greatest look to be a pretty sizable step up, and apparently can be leveraged by their existing customers, no need to buy a new storage system.

I can’t wait to see how EMC responds myself. Dell must be really frustrated with them to go after Compellent so soon after losing 3PAR.

August 30, 2010

Dell vs HP in R&D

Filed under: News — Nate @ 9:50 am

Came across this link on Data Center Knowledge to Forbes online

In fiscal 2010 (ended January 31st), Dell spent $617 million for R&D, or 1.2% of sales [..] an R&D budget like that isn’t going to cut it.

[..]Hewlett Packard, the larger company, already has more going on. In the trailing 12 months, it spent $2.849 billion here, or 2.3% of sales.

[..] Assuming both want to stay relevant five years hence, 3Par looks like it will be a bargain for whichever firm wins this bidding war and likely there will be some incredibly long and tense meetings in the conference rooms of the firm that loses.

And another link from Data Center Knowledge to the Boston Globe, which says something I don’t really agree with –

EMC has also partnered with Dell to allow the computer company to resell high-end network storage products made by EMC. But that arrangement would be severely tested if Dell winds up buying 3Par, giving Dell its own high-end storage provider.

For that reason Kerravala said EMC will most likely fare better if HP ends up winning the 3Par bidding war.

“At least that will preserve EMC’s partnership with Dell,’’ he said.

In the short term it will of course preserve the EMC partnership, but the rift has been created by Dell, showing EMC it’s not willing to sit by and just refer sales along to the EMC direct sales team much longer. I’m sure EMC realizes its days are numbered as a tight partner with Dell (hence its partnership with Cisco on UCS, which I’m sure didn’t make Dell a happy camper).

I don’t see Dell going to HDS if they lose out on 3PAR; they probably wouldn’t look that hot running to HDS’s arms so soon after HP and Sun/Oracle ditched them.

August 16, 2010

Trying not to think about it

Filed under: News,Storage — Nate @ 6:54 am

Hell just got a little colder. It seems 3PAR was bought by Dell for ~$1.15 billion this morning (the news is so fresh as of this posting that the official 3PAR press release isn’t posted yet, just a blank page). I woke my rep up and asked him what happened and he wasn’t aware that it had gone down; they did a good job of keeping it quiet.

It’s not like 3PAR was in any trouble; they had no debt, and the highest margins in the industry along with good sales. They haven’t been making much profit, mainly because they are hiring so many new people to grow the company. In my area, since I started using 3PAR they’ve gone from 1 sales rep and 1 SE to 3 sales reps and 2 SEs, and they’ve really expanded overseas and stuff. I would have expected them to hold out for a few more billion – a little over $1 billion seems far too cheap.

I have read several complaints about how Equallogic has gone downhill since Dell bought them (from original Equallogic users – not that I’ve ever used that stuff, so I don’t know whether or not they are accurate), and I fear the same may happen to 3PAR. But it will take a little while for it to start.

I think the only hope 3PAR has at this point is if Dell keeps them independent for as long as possible. Outside of their DCS division Dell really shows they have no ability to innovate.

I wonder what Marc Farley thinks, as a former Equallogic/Dell employee now he’s at 3PAR, and Dell came and found him again..

Maybe I’ll get lucky and this will just turn out to be a bad dream, some evil hacker out there manipulating the stock price by planting news.

Do me one favor Dell, stay the hell away from Extreme Networks! With Brocade having bought Foundry and HP having bought 3COM, there aren’t many other enterprise/service provider independent Ethernet companies still around. I was told by a Citrix guy that Juniper tried to buy Extreme shortly after they bought Netscreen, instead of making their own switches; from what I recall he said Juniper bought Netscreen for $500M, which was way over inflated, and Extreme demanded $1 billion at the time. There is Force10 – Dell can go buy them, it would be a lot cheaper too.

I suppose more than anything else, Dell buying 3PAR is Dell admitting the Equallogic technology doesn’t hold a candle to 3PAR technology, ok maybe a candle but not much more than that!

It may be 6:51AM but I think I need a drink.

July 26, 2010

Dell settles with the SEC

Filed under: News — Nate @ 8:52 am

This story makes me sick. Everyone in the industry knew it was going on: several years ago Intel was paying off their customers to stick to their products and not deploy the superior Opteron processors. Intel’s strategy to convert the world to Itanium was going down in flames thanks to AMD’s extension of x86 – x86-64 – combined with a superior hardware architecture derived from the Alpha (and created by many former engineers who worked on the Alpha – AMD hired many of them). Whether it was the HyperTransport design, the integrated memory controllers, or the multi-core designs, in so many areas AMD was showing such massive innovation that the only way Intel could respond at the time was by paying their customers to not use their stuff.

In no place was it more obvious than Dell – a company that I’ve never had respect for, for other reasons (the biggest being they don’t innovate at all outside of their supply chain). Dell was the only big OEM that did not use AMD processors at all for the longest time.

The Register has posted a couple of articles on this recently with Dell settling with the SEC for a mere $100M.

What upsets me more than anything else is not the fact that this went on, but the pocket change of penalties that resulted. Intel paid AMD $1.25 billion to settle all outstanding legal cases last year, a small fraction of what otherwise should have been paid. Dell pays only $100M; maybe that’s enough for the SEC, but on antitrust grounds it should be far more. $100M is not a deterrent. It accounts for a small fraction of what Intel paid them!

Same goes for the settlement Intel paid to AMD. I have absolutely no doubt, as you should have none as well, that Intel benefited FAR more than the $1.25 billion it paid to AMD. It should have been $10 billion, if not higher. Intel wrote that settlement off in one quarter!

It really is depressing to see these big companies get away with this sort of thing, whether it’s Dell, or Intel, or the recent Goldman Sachs SEC settlement. The penalties are pocket change compared to what they should be to make them a real deterrent. And moreover, individuals are not punished in a lot of cases; the company takes the hit, and of course in all cases nobody ever admits any wrongdoing. Goldman, like Intel, wrote their settlement off in one quarter!

Dell wasn’t alone – we all knew it – but no other OEM was being so blatantly obvious in their strategy.

Intel’s rebates amounted to 38 per cent of Dell’s operating profit in the fiscal year 2006, and rose to 76 per cent (or $720m) in one quarter alone, Q1 2007. While almost all of the Intel funds were incorporated into Dell’s component costs, Dell did not disclose the existence, much less the magnitude, of the Intel exclusivity payments.

[..]

New York State’s lawsuit suggests that the reach of the funding was wide indeed. It alleges that IBM benefited by $130m from Intel simply for not launching an AMD product. HP benefited by almost $1bn. Again, you might suppose Intel might have found better use for such resources – such as R&D.

A lot of the big companies do this sort of thing; it’s a wonder that tech startups even bother to start up anymore when there is really nobody keeping the playing field fair. One other similarly despicable business deal, which I was informed of by two different people on both sides of the table, was a networking deal Cisco was competing for at AT&T along with some other vendors. AT&T was (and probably still is) the largest user and re-seller of Cisco gear. The competition was the obvious players – there’s only so many out there! Anyways, the deal went down and Cisco lost hands down on many accounts. Their technology just isn’t competitive in so many areas. So how did Cisco respond? They came back to AT&T with 95% off list pricing. They bought the business. They didn’t win on any real merits, and they took a major loss on the deal, which will result in all of their other customers having to continue to pay more to compensate for that. That just makes me sick.

But nothing seems to be on such a grand scale as what Intel did to keep AMD at bay. It was shocking to me seeing the pundits saying “oh well, the consumer wasn’t hurt by those practices”, not taking into account how close AMD came to the brink, with the massive (still massive!) amount of debt they have incurred over the years. An incredible market opportunity for them was there for several years, something Intel kept small by throwing cash at their customers because they had nothing else to offer.

Intel can’t afford to lose AMD from an antitrust standpoint, but they also don’t want them to succeed too much – a pretty fine line they walk.

March 26, 2010

Enterprise EqualLogic

Filed under: Storage — Nate @ 6:33 am

So, I attended that Dell/Denali event I mentioned recently. They covered some interesting internals on the architecture of Exchange 2010, covering technical topics like migrating to it, how it protects data, etc. It was interesting from that standpoint; they didn’t just come out and say “Hey, we are the big market leader, you will use us, resistance is futile”. So I certainly appreciated that, although honestly I don’t really deal with MS stuff in my line of work – I was just there for the food, and mainly because it was walking distance (and an excuse to get out of the office).

The other topic that was heavily covered was Dell EqualLogic storage. This I was more interested in. I have known about EqualLogic for years, and never really liked their iSCSI-only approach (I like iSCSI but I don’t like single-protocol arrays, and iSCSI is especially limiting as far as extending array functionality with other appliances – e.g. you can optionally extend a Fibre Channel-only array with iSCSI but not vice versa – please correct me if I’m wrong).

I came across another blog entry last year which I found extremely informative – “Three Years of EqualLogic” – which listed some great pros and some serious and legitimate cons to the system after nearly three years of using it.

Anyways, being brutally honest, if there is anything I did really “take away” from the conference with regards to EqualLogic storage it is this – I’m glad I chose 3PAR for my storage needs (and thanks to my original 3PAR sales rep for making the cold call to me many years ago; I knew him from an earlier company).

So, where to begin? I’ve had a night to sleep on this information and absorb it in a more logical way, so I’ll start out with what I think are the pros of the EqualLogic platform:

  • Low cost – I haven’t priced it personally but people say over and over it’s low cost, which is important
  • Easy to use – It certainly looks very easy to use and very easy to set up; I’m sure they could get 20TB of EqualLogic storage up and running in less time than 3PAR could, no doubt.
  • Virtualized storage makes it flexible. It pales in comparison to 3PAR virtualization but it’s much better than legacy storage in any case.
  • All software is included – this is great too, no wild cards with licensing. 3PAR by contrast heavily licenses their software and at times it can get complicated in some situations (their decision to license the zero detection abilities of their new F/T class arrays was a surprise to me)

So it certainly looks fine for low(ish) cost workgroup storage. One of the things the Dell presenter tried to hammer on is how it is “Enterprise ready”. And yes, I agree it is ready – lots of enterprises use workgroup storage I’m sure, for some situations (probably because their real legacy enterprise storage is too expensive to add more applications to, or doesn’t scale to meet mixed workloads simultaneously).

Here’s where I get down & dirty.

As far as being really ready for enterprise storage – no way, it’s not ready. Not in 2010; maybe if it was 1999.

EqualLogic has several critical architectural deficiencies that would prevent me from wanting to use it or advising others to use it:

  • Active/passive controller design – I mean come on, in 2010 you’re still doing active/passive? They tried to argue the point that you don’t need to “worry” about balancing the load between controllers and then losing that performance when a controller fails. Thanks, but I’ll take the extra performance from the other active controller(s) [with automagic load balancing, no worrying required], and keep performance high with 3PAR Persistent Cache in the event of a controller failure (or software/hardware upgrade/change).
  • Need to reserve space for volumes/snapshots. Hello, 21st century here, we have the technology for reservationless systems; ditching reservations is especially critical when dealing with thin provisioning.
  • Lack of storage pools. This compounds the effects of a reservation-based storage system. Maybe EqualLogic has storage pools – I just did not hear it mentioned at the conference or anywhere else. Having to reserve space for each and every volume is just stupidly inefficient. At the very least you should be able to reserve a common pool of space and point multiple volumes to it to share. Again, it hints at their lack of a completely virtualized design. You get a sense that a lot of these concepts were bolted on after the fact and not designed into the system when you run into system limitations like this.
  • No global hot spares – so the more shelves you have, the more spindles are sitting there idle, doing nothing. 3PAR by contrast does not use dedicated spares; each and every disk in the system has spare capacity on it. When a RAID failure occurs the rebuild is many:many instead of many:one, which improves rebuild times by 10x+ (see the sketch after this list). Also due to this design, 3PAR can take advantage of the I/O available on every disk on the array. There aren’t even dedicated parity disks; parity is distributed evenly across all drives on the system.
  • Narrow striping. They were talking about how the system distributes volumes over all of the disks in the system. So I asked them how far you can stripe, say, a 2TB volume. They said over all of the shelves if you wanted to, but there is overhead from iSCSI because apparently you need an iSCSI session to each system that is hosting data for the volume; due to this overhead they don’t see people “wide striping” a single volume over more than a few shelves. 3PAR by contrast by default stripes across every drive in the system, and the volume is accessible from any controller (up to 8 in their high end) transparently. Data moves over an extremely high speed backplane to the controller that is responsible for those blocks. In fact the system is so distributed that it is impossible to know where your data actually is (e.g. data resides on controller 1 so I’ll send my request to controller 1), and the system is so fast that you don’t need to worry about such things anyways.
  • Cannot easily sustain the failure of a whole shelf of storage. I asked the Dell rep sitting next to me if it was possible; he said it was, but you had to have a special sort of setup, and it didn’t sound like it was going to be something transparent to the host – perhaps involving synchronous replication from one array to another, where in the event of failure you probably had to re-point your systems to the backup. I don’t know, but my point is I have been spoiled by 3PAR, in that by default their system uses what they call cage level availability, which means data is automatically spread out over the system to ensure a failure of a shelf does not impact system availability. This requires no planning in advance vs other storage systems; it is automatic. You can turn it off if you want, as there are limitations as far as what RAID levels you can use depending on the number of shelves you have (e.g. you cannot run RAID 5 with cage level availability with only 2 shelves because you need at least 3), and the system will prevent you from making mistakes.
  • One RAID level per array (enclosure), from what the Dell rep sitting next to me said. Apparently even on their high end 48-drive arrays you can only run a single level of RAID on all of the disks? Seems very limiting for an array that has such grand virtualization claims. 3PAR of course doesn’t limit you in this manner; you can run multiple RAID levels on the same enclosure, you can even run multiple RAID levels on the same DISK, it is that virtualized.
  • Inefficient scale out – while scale-out is probably linear, the overhead involved with so many iSCSI sessions to so many arrays has to have some penalty. Ideally what I’d like to see is at least some sort of optional InfiniBand connectivity between the controllers to give them higher bandwidth and lower latency, and then do like 3PAR does – traffic can come in on any port and be routed to the appropriate active controller automatically. But their tiny controllers probably don’t have the horsepower to do that anyways.
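As promised in the hot spares bullet above, here is a rough sketch of why distributed sparing rebuilds so much faster than a dedicated hot spare. It is a toy model in Python with made-up throughput numbers, not either vendor's actual code: with a dedicated spare every rebuild write funnels into a single disk, while with distributed sparing every surviving disk absorbs a small slice of the rebuild in parallel.

def rebuild_hours(disk_tb, disks, per_disk_mbps, dedicated_spare):
    """Toy model of RAID rebuild time after a single disk failure.
    dedicated_spare=True  -> many:one  (all rebuild writes funnel into one spare disk)
    dedicated_spare=False -> many:many (spare space is spread over every surviving disk)"""
    data_mb = disk_tb * 1024 * 1024
    write_streams = 1 if dedicated_spare else disks - 1
    return data_mb / (per_disk_mbps * write_streams) / 3600

# 2TB drives, a 48-disk shelf, ~60MB/s sustained rebuild rate per disk (assumed figures)
print(round(rebuild_hours(2, 48, 60, dedicated_spare=True), 1))     # ~9.7 hours onto one hot spare
print(round(rebuild_hours(2, 48, 60, dedicated_spare=False), 1))    # ~0.2 hours spread across 47 disks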

There might be more, but those are the top offenders at the top of my list. One part of the presentation which I didn’t think was very good was when the presenter streamed a video from the array and tested various failure scenarios. The amount of performance capacity needed to transfer a video under failure conditions of a storage array is a very weak illustration of how seamless a failure can be. Pulling out a hard disk, or a disk controller, or a power supply really is trivial. To the uninformed I suppose it shows the desired effect (or lack of one), which is why it’s done. A better test I think would be running something like IOzone on the array and showing real-time monitoring of IOPS and latency while doing failure testing (preferably with at least 45-50% of the system loaded).

You never know what you’re missing until you don’t have it anymore. You can become complacent in what you have as being “good enough” because you don’t know any better. I remember feeling this especially strongly when I changed jobs a few years ago and went from managing systems in a good tier 4 facility to another “tier 4” facility which had significant power issues (at least one major outage a year, it seemed like). I took power for granted at the first facility because we had gone so many years without so much as a hiccup. It’s times like this I realize (again) the value that 3PAR storage brings to the market, and am very thankful that I can take advantage of it.

What I’d like to see though is some SPC-1 numbers posted for a rack of EqualLogic arrays. They say it is enterprise ready, and they talk about the clouds surrounding iSCSI. Well put your money where your mouth is and show the world what you can do with SPC-1.
