TechOpsGuys.com Diggin' technology every day

October 10, 2010

Intel or ASIC

Filed under: Random Thought, Storage — Nate @ 11:33 am

Just another one of my random thoughts I have been having recently.

Chuck wrote a blog post not too long ago about how he believes everyone is going to move to Intel (or at least x86) processors in their systems and away from ASICs.

He illustrated his point with some recent SPEC SFS results showing an Intel-based system outperforming everything else. The results were impressive; the only flaw is that SPEC doesn't disclose costs. An EMC VMAX with 96 EFDs isn't cheap. And the better your disk subsystem is, the faster your front end can be.

Back when Exanet was still around, they showed me results from one of their customers testing SPEC SFS on Exanet's LSI (IBM OEM'd) back end storage vs 3PAR storage; for the same number of disks, the SPEC SFS results were twice as high on 3PAR.

But that’s not my point here or question. A couple of years ago NetApp posted some pretty dismal results for the CX-3 with snapshots enabled. EMC doesn’t do SPC-1 so NetApp did it for them. Interesting.

After writing up that Pillar article, where I illustrated the massive efficiency gains of the 3PAR architecture (driven in part by their own custom ASICs), it got me thinking again, because as far as I can tell Pillar uses x86 CPUs.

Pillar offers multiple series of storage controllers to best meet the needs of your business and applications. The Axiom 600 Series 1 has dual-core processors and supports up to 24GB cache. The Axiom 600 Slammer Series 2 has quad-core processors and double the cache providing an increase in IOPS and throughput over the Slammer Series 1.

Now I can only assume they are using x86 processors, for all I know I suppose they could be using Power, or SPARC, but I doubt they are using ARM 🙂

Anyways, back to the 3PAR architecture and its micro RAID design. I have written in the past about how you can have tens to hundreds of thousands of mini RAID arrays on a 3PAR system, depending on the amount of space you have. This is, of course, to maximize distribution of data over all resources, for performance and predictability. When running RAID 5 or RAID 6 there are parity calculations involved, and I can't help but wonder what chance in hell a bunch of x86 CPU cores would have calculating RAID in real time for 100,000+ RAID arrays. With 3 and 4TB drives not too far out, you can take that 100,000+ and make it 500,000.
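To put the parity work in concrete terms, here is a tiny Python sketch of what one mini RAID array's parity calculation boils down to. The block sizes and contents are made up for illustration; this is obviously not 3PAR's actual code, just the textbook XOR that RAID 5 style parity reduces to:

def xor_parity(data_blocks):
    # Parity for one mini RAID array: XOR every byte across the data members.
    parity = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# Three toy 4KB "chunklets"; a real write means doing this over and over.
blocks = [bytes([0xAA] * 4096), bytes([0x55] * 4096), bytes([0xFF] * 4096)]
parity = xor_parity(blocks)

Now picture doing that XOR on every write, across 100,000+ of these little arrays, and the appeal of dedicated silicon becomes obvious.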

Taking the 3PAR F400 SPC-1 results as an example, here is my estimate of the number of RAID arrays on the system (a quick sketch of the arithmetic follows the list); fortunately it's mirrored so the math is easier:

  • Usable capacity = 27,053 GB (27,702,272 MB)
  • Chunklet size = 256MB
  • Total Number of RAID-1 arrays = ~ 108,212
  • Total Number of data chunklets = ~216,424
  • Number of data chunklets per disk = ~563
  • Total data size per disk = ~144,128 MB (140.75 GB)
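If you want to check my numbers, here is the arithmetic in a few lines of Python. The 384-disk count is my assumption pulled from the F400 SPC-1 configuration; everything else falls out of the math:

usable_mb = 27_053 * 1024                 # 27,702,272 MB usable
chunklet_mb = 256
num_disks = 384                           # assumed from the F400 SPC-1 config

raid1_arrays = usable_mb // chunklet_mb            # 108,212 mini RAID-1 arrays
data_chunklets = raid1_arrays * 2                  # 216,424 chunklets (mirrored)
chunklets_per_disk = data_chunklets // num_disks   # ~563 per disk
data_per_disk_mb = chunklets_per_disk * chunklet_mb  # 144,128 MB, about 140.75 GB

print(raid1_arrays, data_chunklets, chunklets_per_disk, data_per_disk_mb)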

For legacy RAID designs it's probably not a big deal, but as disk drives grow ever bigger I have no doubt that everyone will have to move to a distributed RAID architecture, to reduce disk rebuild times and lower the chances of a double/triple disk failure wiping out your data. It is unfortunate (for them) that Hitachi could not pull that off in their latest system.
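Here is a rough back-of-the-envelope sketch of why distributed RAID wins on rebuilds. The write rate and drive count are assumptions I made up for illustration, not vendor specs; the point is that a lone hot spare is bottlenecked on one drive's write speed, while a many-to-many rebuild spreads the copy across the whole shelf:

drive_mb = 2 * 1000 * 1000      # a 2TB drive, in MB
spare_write_mbs = 80            # assumed sustained write rate of one drive
rebuild_drives = 100            # drives sharing a distributed rebuild

legacy_hours = drive_mb / spare_write_mbs / 3600                          # ~6.9 hours
distributed_hours = (drive_mb / rebuild_drives) / spare_write_mbs / 3600  # minutes, not hours

print(round(legacy_hours, 1), round(distributed_hours, 2))

Double the drive size and the legacy number doubles with it, while the distributed number stays small as long as you keep adding spindles.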

3PAR does use Intel CPUs in their systems as well, though they aren't worked too hard; on the systems I have had, even at peak spindle load I never really saw CPU usage above 10%.

I think ASICs are here to stay for some time. On the low end you will be able to get by with generic CPUs, but on the higher end it will be worth the investment to do it in silicon.

Another place to look in the generic CPUs vs ASICs debate is the networking space. Network devices are still heavily dominated by ASICs because generic CPUs just can't keep up. Generic CPUs do handle what I guess could be called the control plane, but the heavy lifting is done in silicon. ASICs often draw a fraction of the power that generic CPUs do.

Yet another place to look is the HPC space, where the rise of GPU-assisted HPC has allowed systems to scale to what were (to me anyways) unimaginable heights.

Generic CPUs are of course great to have and they have come a long way, but there is a lot of cruft in them, so it all depends on what you're trying to accomplish.

The fastest NAS in the world is still BlueArc, which is powered by FPGAs. Their early cost structures put them out of reach for most folks, though their new mid range looks nice. My only long-standing complaint about them has been their back end storage: either LSI or HDS, take it or leave it. So I leave it.

The only SPEC SFS results BlueArc has posted are for the mid range, nothing for their high end (which they tested on the last version of SFS, with nothing yet for the current version).

 

October 6, 2010

Who’s next

Filed under: Networking, Random Thought — Nate @ 9:42 pm

I was thinking about this earlier this week, or maybe late last week, I forget.

It wasn’t long ago that IBM acquired Blade Network Technologies, a long time partner of IBM as Blade made a lot of switches for the Blade Center, and also for the HP blade system as well I believe.

I don’t think that Blade Networks was really well known outside of their niche of being a supplier to HP and IBM (and maybe others I don’t recall and haven’t checked recently) on the back end. I certainly never heard of them until in the past year or two and I do keep my eyes out there for such companies.

Anyways, that is what started my train of thought. The next step was watching several reports on CNBC about companies pulling their IPOs due to market conditions, which to me is confusing considering how high the "market" has come recently. Apparently it just boils down to investors and IPO companies not being able to agree on a "market price" or whatever. I don't really care what the reason is, but the point is this: earlier this year Force10 Networks filed for an IPO, and we haven't heard much of a peep since.

Given the recent fight over 3PAR between Dell and HP, and the continuing saga of stack wars, it got me speculating.

What I think should happen is Dell should go buy Force10 before they IPO. Dell obviously has no networking talent in house; last I recall their PowerConnect crap was OEM'd from someone like SMC or one of those really low tier providers. I remember someone else making the decision to use that product last year, and when we tried to send 5% of our network traffic to the site running those switches they flat out died; we had to get remote hands to reboot them. Shortly afterwards one of them bricked itself during a firmware upgrade and had to be RMA'd. I just pointed and laughed, since I knew it was a mistake to go with them to begin with; the people making the decisions just didn't know any better. Several outages later they ended up replacing them, and I taught them the benefits of a true layer 3 network, no more static routes.

Then HP should go buy Extreme Networks, my favorite network switching company; I think HP could do well with them. Yes, we all know HP bought 3Com last year, but we also know HP didn't buy 3Com for the technology (no matter what the official company line is), they bought them for their presence in China. 3Com was practically a Chinese company by the time HP bought them, really! And yes, I did read the news that HP finished kicking Cisco out of their data centers, replacing their stuff with a combination of ProCurve and 3Com. Juniper tried and failed to buy Extreme a few years ago, shortly after they bought NetScreen.

That would make my day though: a c-Class blade system with an Extreme XOS-powered VirtualConnect Ethernet fabric combined with 3PAR storage on the back end. Hell, that'd make my year 🙂

And after that, given that HP bought Palm earlier in the year (yes, I own a Palm Pre, mainly so I can run older Palm apps, otherwise I'd still be on a feature phone), and given HP likes the consumer space, they should go buy Tivo and break into the set top box market. Did I mention I use Tivo too? I have three of them.

September 26, 2010

Still waiting for Xiotech..

Filed under: Random Thought, Storage — Nate @ 2:55 pm

So I was browsing the SPC-1 pages again to see if there was anything new, and lo and behold, Xiotech posted some new numbers.

But once again, they appear too timid to release numbers for their 7000 series, or the 9000 series that came out somewhat recently. Instead they prefer to extrapolate performance from their individual boxes and aggregate the results. That doesn't count, of course; performance can be radically different at higher scale.
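To illustrate why I say that doesn't count, here's a toy model. Every number in it is made up; the point is only that clustered systems pay coordination overhead, so N boxes rarely deliver N times one box:

single_box_iops = 60_000
nodes = 4
efficiency = 0.75   # assumed scaling efficiency, purely illustrative

extrapolated = single_box_iops * nodes            # what aggregating datasheets gives you
measured_guess = int(extrapolated * efficiency)   # what an actual benchmark might show

print(extrapolated, measured_guess)   # 240000 vs 180000

Whether the real efficiency factor is 0.95 or 0.6 is exactly what a published SPC-1 run would tell us, which is the whole point of asking for one.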

Why do I mention this? Well, nearly a year ago their CEO blogged in response to one of my posts, which was one of the first times I made news in The Register (yay! I really was excited), and in part the CEO said:

Responding to the Techopsguy blog view that 3PAR’s T800 outperforms an Emprise 7000, the Xiotech writer claims that Xiotech has tested “a large Emprise 7000 configuration” on what seems to be the SPC-1 benchmark; “Those results are not published yet, but we can say with certainty that the results are superior to the array mentioned in the blog (3PAR T800) in several terms: $/IOP, IOPS/disk and IOPS/controller node, amongst others.”

So here we are, almost a year and more than one SPC-1 result later, and still no sign of Xiotech's SPC-1 numbers for their higher end units. I'm sorry, but I can't help but feel they are hiding something.

If I were them I would put my customers at ease by publishing said numbers, and be prepared to justify the results if they don't match up to Xiotech's numbers extrapolated from the 5000 series.

Maybe they are worried they might end up like Pillar, whose CEO was pretty happy with their SPC-1 results. Shortly afterwards the 3PAR F400 launched and absolutely destroyed the Pillar numbers from every angle. You can see more info on those results here.

At the end of the day I don’t care of course, it just was a thought in my head and gave me something to write about 🙂

I just noticed that these past two posts put me over the top for the most posts I have done in a month since this TechOpsGuys thing started. I'm glad I have my friends Dave, Jake and Tycen generating tons of content too; after all, this site was their idea!

Overhead associated with scale out designs

Filed under: Random Thought — Nate @ 2:33 pm

Was reading a neat article over at The Register again, about the new Google indexing system. This caught my eye:

“The TPC-E results suggest a promising direction for future investigation. We chose an architecture that scales linearly over many orders of magnitude on commodity machines, but we’ve seen that this costs a significant 30-fold overhead compared to traditional database architectures.”

Kind of makes you think… I guess if you're operating at the scale they are, the overhead is not a big deal; they'll probably find a way to reduce (ha ha, MapReduce, get it? sorry) it over time.

September 23, 2010

Using open source: how do you give back?

Filed under: General, linux, Random Thought — Nate @ 10:11 pm

After reading an article on The Register (yeah, you probably realize by now I spend more time on that site than pretty much any other), it got me thinking about a topic that bugs me.

The article is from last week and is written by the CEO of the organization behind Ubuntu. It basically talks about how using open source software is a good way to save costs in a down (or up) economy, and gives a bunch of examples of companies basing their stuff on open source.

That’s great. I like open source myself; I fired up my first Slackware Linux box in 1996 I think it was (Slackware 3.0). I remember picking Slackware over Red Hat at the time specifically because Slackware was known to be more difficult to use, and it would force me to learn Linux the hard way, and believe me, I learned a lot. To this day people ask me what they should study or do to learn Linux and I don't have a good answer; I don't have a quick and easy way to learn Linux the way I learned it. It takes time, months, years of just playing around with it. With so many "easy" distributions these days I'm not sure how practical my approach is now, but I'm getting off topic here.

So back to what bugs me. What bugs me is people, or more specifically organizations, out there that do nothing but leech off the open source community. Companies that may make millions (or billions!) in revenue in large part because they are leveraging free stuff. But it's not the usage of the free stuff that I have a problem with; more power to them. I get annoyed when those same organizations feel absolutely no moral obligation to contribute back to those that have given them so much.

You don't have to do much. Over the years the most I have contributed back has been participating in mailing lists, whether the Debian users list (been many years since I was active there), the Red Hat mailing list (a few years), or the CentOS mailing list (several months). I try to help where I can. I have a good deal of Linux experience, which often means the questions I have nobody else on the list has answers to, but I do (well, did) answer a ton of questions. I'm happy to help, and I'm sure at some point I will re-join one of those lists (or maybe another one) and help out again, but I've been really busy these past few months. I remember even buying a bunch of Loki games to try to do my part in helping them (despite the games not being open source, they were supporting Linux indirectly), several of which I never ended up playing (not much of a gamer). VMware of course was also a really early Linux supporter (I still have my VMware 1.0.2 Linux CD; I believe that was the first version they released on CD, previous versions were download only), though I have gotten tired of waiting for vCenter for Linux.

The easiest way for a corporation to contribute back is to use and pay for Red Hat Enterprise, or SuSE, or whatever. Pay the companies that hire the developers who make the open source software go. I'm partial to Red Hat myself, at least in a business environment, though I use Debian-based distributions in my personal life.

There are a lot of big companies that do contribute code back, and that is great too, if you have the expertise in house. Opscode is one such company; I have been working with their Chef product recently. They leverage all sorts of open source stuff in their product (which is itself open source). I asked them what their policy is for getting things fixed in the open source code they depend on: do they just file bugs and wait, or do they contribute code? They said they contribute a bunch of code, constantly. That's great; I have enormous respect for organizations like that.

Then there are the companies that leech off open source and not only don't officially contribute in any way whatsoever, but actively prevent their own employees from doing so. That's really frustrating and stupid.

Imagine where Linux and everything else would be if more companies contributed back. It's not hard: go get a subscription to Red Hat, or Ubuntu, or whatever for your servers (or desktops!). You don't have to contribute code, and if you can't contribute back in the form of supporting the community on mailing lists, helping out with documentation, the wikis or whatever, then write a check; you actually get something in return, it's not like it's a donation. But donations are certainly accepted by the vast numbers of open source non-profits.

HP has been a pretty big backer of open source for a long time, they’ve donated a lot of hardware to support kernel.org and have been long time Debian supporters.

Another way to give back is to leverage your infrastructure: if you have a lot of bandwidth, excess server capacity, disk space or whatever, set up a mirror and sponsor a project. Looking at the Debian page as an example, it seems AboveNet is one such company.

I don’t use open source everywhere, I’m not one of those folks who has to make sure everything is GPL or whatever.

So all I ask is, the next time you build or deploy some project that is made possible by who knows how many layers of open source products, ask yourself how you can contribute back to support the greater good. If you have already, then I thank you 🙂

Speaking of Debian, did you know that Debian powers 3PAR storage systems? Well, it did at one point, I haven't checked recently; I do recall telnetting to my arrays on port 22 and seeing a Debian SSH banner. The underlying Linux OS was never exposed to the user. And it seems 3PAR reports bugs, which is another important way to contribute back. And as of 3PAR's 2.3.1 release (I believe) they finally officially started supporting Debian as a platform to connect to their storage systems. By contrast they do not support CentOS.
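For the curious, here is the same banner trick in a few lines of Python instead of telnet. The SSH daemon volunteers its version string before any authentication, so a plain TCP connect is enough; the address below is a placeholder, not a real array:

import socket

def ssh_banner(host, port=22, timeout=5.0):
    # Connect and read whatever the server announces first.
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return sock.recv(256).decode("ascii", "replace").strip()

# A Debian-packaged OpenSSH answers with something along the lines of
# "SSH-2.0-OpenSSH_5.1p1 Debian-5"; the "Debian" suffix is the giveaway.
print(ssh_banner("192.0.2.10"))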

Extreme Networks' ExtremeWare XOS is also based on Linux, though I think it's a special embedded version. I remember in the early days they didn't want to admit it was Linux; they said "Unix based". I just dug this up from a backup from back in 2005; once I saw this on my core switch booting up I was pretty sure it was Linux!

Extreme Networks Inc. BD 10808 MSM-R3 Boot Monitor
Version 1.0.1.5 Branch mariner_101b5 by release-manager on Mon 06/14/04
Copyright 2003, Extreme Networks, Inc.
Watchdog disabled.
Press and hold the <spacebar> to enter the bootrom.

Boot path is /dev/fat/wd0a/vmlinux
(elf)
0x85000000/18368 + 0x85006000/6377472 + 0x8561b000/12752(z) + 91 syms/
Running image boot…

Starting Extremeware XOS 11.1.2b3
Copyright (C) 1996-2004 Extreme Networks.  All rights reserved.
Protected by U.S. Patents 6,678,248; 6,104,700; 6,766,482; 6,618,388; 6,034,957

Then there's my Tivo that runs Linux, my TV runs Linux (Philips TV), my Qlogic FC switches run Linux, I know F5 equipment runs on Linux, and my phone runs Linux (Palm Pre). It really is pretty crazy how far Linux has come in the past 10 years. And I'm pretty convinced the GPL played a big part, making it more difficult to fork it off and keep the changes for yourself; a lot of momentum built up in Linux and companies just flocked to it. I do recall early F5 load balancers used BSDI, but switched over to Linux (didn't the company behind BSDI go out of business earlier this decade? or maybe they got bought, I forget). Seems Linux is everywhere and in most cases you never notice it. The only way I knew it was in my TV is because the instructions came with all sorts of GPL disclosures.

In theory the BSD licensing scheme should make the *BSDs much more attractive, but for the most part *BSD has not been able to keep pace with Linux (outside some specific niches; I do love OpenBSD's pf), so it never got anywhere close to the critical mass Linux has.

Of course now someone will tell me about some big fancy device that runs BSD, sits in every data center and every household, and I don't even know it's there! If I recall right, Juniper's JunOS is based on FreeBSD, and I think Force10 uses NetBSD.

I also recall being told by some EMC consultants back in 2004/2005 that the EMC Symmetrix ran Linux too. I do remember the Clariions of the time (at least, maybe still) ran Windows, probably because EMC bought the company that made that product rather than creating it themselves.

September 17, 2010

No more Cranky Geeks?

Filed under: News, Random Thought — Nate @ 7:46 am

What!! I just noticed that the only online video feed I watch, Cranky Geeks, seems to be coming to an end. That sucks! I didn't stumble upon the series until about a year and a half ago, on my Tivo. Been a big fan ever since. I rarely learned anything from the shows, but I did like observing the conversations; it's not quite the technical depth I get into, but it's a far cry from the typical "tech tv" videos/shows that don't seem to go beyond things like overclocking and what motherboard and video card to use for the latest games.

I know I'm a hell of a lot more cranky than anyone I ever saw on the show, but they did bitch about some things. There seem to have been quite a few video blogs, for lack of a better word, that have bitten the dust in recent months; I guess the economy is taking its toll.

[Begin Another Tangent –]

I believe we are entering the second phase of the great depression (how long until we are solidly in the second phase I'm not sure, we won't know until we're there): the phase where states realize their budget shortfalls are too big for short term budget gimmicks and make drastic cuts and tax hikes, which further damage the economy. I don't blame anyone in particular for our situation; it has been festering for more than thirty years. It's like trying to stop an avalanche with, I don't know, a snow plow?

This is what happens when you give people every incentive possible to pull demand forward: eventually you run out of gimmicks and are faced with a very large chasm that will only be healed with time. Just look at Japan.

I have seen lots of folks say this is not as bad as the real Great Depression, but they aren't taking into account the massive social safety nets deployed over the past 40-50+ years. I just saw a news report last night that said the rate of poverty among children is the same as it was in the 1960s. And the cost of living in the U.S. is so high that a U.S. poverty-level income would put you in the upper middle class in many other countries.

Not sustainable, and as time goes on more and more people are realizing this, unfortunately too late for many they will be left behind, permanently.

My suggestion? Read the infrastructure report card. Yes, I know infrastructure spending is not a short term stimulus, but we need to take advantage of lower prices for wages and materials and rebuild the country. It will take years, maybe even a couple of decades, but we need it. Long term problems call for long term solutions.

[End Another Tangent –]

I hope it doesn't go, but it looks like it's essentially gone, and I just added the link to the blog roll a few days ago!

Noticed this from John in the comments:

The two companies couldn’t come to any agreement. This is a problem when you personally do not own the show. The fact is the show is not what advertising agencies want. They want two minute shows with a 15 second pre-roll ad at the beginning. They see no market for anything with a long format unless it is on network TV.

The irony is that the demographics for the show should be at $100/per k levels if they understood anything at all.

It’s amazing that we managed to get 4 1/2 years out of the show.

RIP

Sigh

RIP Cranky Geeks, I shall miss you greatly.

September 16, 2010

How High?

Filed under: Random Thought — Nate @ 6:35 pm

I've got this little applet on my Ubuntu desktop that tracks a few stocks of companies I am interested in (I don't invest in anything), and I thought it was pretty crazy how close to the offer price the 3PAR stock got today: as high as $32.98. Everyone of course knows the final price will be $33, so to think folks are trading the stock with only $0.02 of margin is pretty insane to me.

It looks a fair sight better than the only public company I have ever worked for; I'm surprised they are still around, even!

I never exercised any options. Good thing, I guess, because from the day I was hired the stock never did anything but go down; I think my options were in the ~$4.50 range (this was 2000-2002).

Just dug this up. I remember being so proud my company was on TV! Not quite as weird as watching the freeinternet.com commercials back when I worked there: a company that spent $7 million a month on bandwidth it didn't know it had and wasn't utilizing. Of course, by the time they found out it was too late.

My company at the top of the list! I miss Tom Costello, he was a good NASDAQ floor guy. The screen shot is from March 2002. Also crazy that the DOW is only 68 points higher today than it was eight years ago.

September 9, 2010

ZFS Free and clear.. or is it?

Filed under: News, Random Thought, Storage — Nate @ 7:03 pm

So, NetApp and Oracle kissed and made up recently over the ZFS lawsuits they had against each other, from our best friends at The Register:

Whatever the reasons for the mutual agreement to dismiss the lawsuits, ZFS technology product users and end-users can feel relieved that a distracting lawsuit has been cleared away.

Since the terms of the settlement, or whatever you want to call it, have not been disclosed, and there has been no apparent further comment from either side, I certainly wouldn't jump to the conclusion that other ZFS users are in the clear. I view it as: if you're running ZFS on Solaris you're fine, and if you're using OpenSolaris you're probably fine too. But if you're using it on BSD, or even Linux (or whatever other platforms folks have tried to port ZFS to over the years), anything that isn't directly controlled by Oracle, I wouldn't be wiping the sweat from my brow just yet.

As is typical with such cases, the settlement (at least from what I can see) is specifically between the two companies; there have been no statements or promises from either side from a broader technology standpoint.

I don't know what OS folks like Coraid and Compellent use on their ZFS devices, but I recall when investigating NAS options for home use I was checking out Thecus, a model like the N770+, and among the features was a ZFS option. The default file system was ext3, with XFS supported as well. While I am not certain, I was pretty convinced the system was running Linux rather than OpenSolaris, given that it supported XFS and ext3. I ended up not going with Thecus because as far as I could tell they were using software RAID. Instead I bought a new workstation (my previous computer was many years old) and put in a 3Ware 9650SE RAID controller (with a battery backup unit and 256MB of write back cache) along with four 2TB disks (RAID 1+0).

Now, as an end user I can see not really being concerned; it is unlikely NetApp or Oracle will go after end users running ZFS on Linux or BSD or whatever. But if you're building a product based on it (with the intention of selling/licensing it), and you aren't using an "official" version, I would stay on my toes. If your product doesn't compete against any of NetApp's product lines you may skirt by without attracting attention. And as long as you're not too successful, Oracle probably won't come kicking down your door.

Unless of course further details are released and the air is cleared more about ZFS as a technology in general.

Interestingly enough, I was reading a discussion on Slashdot I think, around the time Oracle bought Sun, when folks became worried about the future of ZFS in the open source world. Some were suggesting, as far as Linux was concerned, btrfs, which is the Linux community's response to ZFS. Something I didn't know at the time was that apparently btrfs is also heavily supported by Oracle (or at least it was, I don't track progress on that project).

Yes, I know btrfs is GPL, but as I'm sure you know, a file system is a complicated beast to get right. If Oracle's involvement in the project is significant and they choose, for whatever reason, to drop support and move resources to ZFS, that could leave a pretty big gap that will be hard to fill. Just because the code is there doesn't mean it's going to magically write itself. I'm sure others contribute; I don't know the ratio of Oracle vs outside contributions. I recall reading at one point that for OpenOffice something like 75-85% of the development was done directly by Sun engineers. Just something to keep in mind.

I miss reiserfs. I really did like reiserfs v3 way back when. And v4 certainly looked promising (never tried it).

Reminds me of the classic argument so many make for using open source (not that I don't like open source, I use it all the time): that if there is a bug in the program you can go in and fix it yourself. My own experience at many companies is the opposite: they encounter a bug and go through the usual community channels to try to get a fix. I would say it's a safe assumption that in excess of 98% of users of open source code have no ability to comprehend or fix the source they are working with. And that comes from my own experience of working for, really, nothing but software companies over the past 10 years. And before anyone asks, I believe it's equally improbable that a company would hire a contractor to fix a bug in an open source product. I'm sure it does happen, but it's pretty rare given the number of users out there.

September 7, 2010

We need a new theme

Filed under: General, Random Thought — Nate @ 11:42 pm

Do you know WordPress? Good because I sure as hell don’t.

We need a new theme, can you give us some suggestions? My main complaint about the theme we have now is that it doesn't make effective use of screen real estate at larger resolutions; I feel like I'm stuck in the 90s when viewing our page at 1080p. With a Firefox zoom plugin it's more usable, I have to zoom in 170%, and even then there's quite a bit of dead space.

Beyond that, just something that is pretty simple I guess? I don't know. None of us are web developers, I don't think, so we aren't able to customize it much ourselves.

Only HP has it

Filed under: Datacenter, Random Thought, Virtualization — Nate @ 11:32 pm

I commented in response to an article on The Register recently, but figured since I'm here writing stuff I might as well bring this up too.

Unless you’ve been living under a rock and/or not reading this site you probably know that AMD launched their Opteron 6100 series CPUs earlier this year. One of the highlights of the design is the ability to support 12 DIMMs of memory per socket, up from the previous eight per socket.

Of all the servers that have launched, though, HP seems to have the clear lead in AMD technology; for starters, as far as I am aware they are the only ones currently offering Opteron 6100-based blades.

Secondly, I have looked around at the offerings of Dell, IBM, HP, and even Supermicro and Tyan, and as far as I can tell only HP is offering Opteron systems with the full 12 DIMMs/socket support. The only reason I can think of is that the other companies have a hard time making a board that can accommodate that many DIMMs; after all, it is a lot of memory chips. I'm sure if Sun were still independent they would have a new cutting edge design for the 6100. After all, they were the first (as far as I know) to launch a quad socket, 2U AMD system with 32 memory slots, nearly three years ago.

The new Barcelona four-socket server comes with dual TCP offloading enabled gigabit NIC cards, redundant power supplies, and 32 DIMM slots for up to 256 GBs of memory capacity  [..] Half the memory and CPU are stacked on top of the other half and this is a rather unusual but innovative design.
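The appeal of 12 DIMMs per socket is easy to see with some napkin math; the 8GB DIMM size below is my assumption, picked as a common part, not something from any vendor's spec sheet:

sockets = 4
dimm_gb = 8                     # assumed commodity DIMM size
print(sockets * 8 * dimm_gb)    # legacy 8 DIMMs/socket:  256 GB
print(sockets * 12 * dimm_gb)   # 6100's 12 DIMMs/socket: 384 GB

Same CPUs, same DIMM prices, 50% more memory capacity just from the extra slots; or put another way, you can hit a given capacity with cheaper, smaller DIMMs.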

Anyways, if you're interested in the Opteron 6100, it seems HP is the best bet in town, whether it's rack mount or blades:

Kind of fuzzy shot of the HP DL165 G7, anyone got a clearer picture?

HP DL385 G7

HP BL685c G7 – I can understand why they couldn't fit 48 DIMMs on this blade (note: two of the CPUs are under the hard disks)!

HP BL465c G7 – again, really no space for 24 DIMMs! (damnit)

Tyan Quad Socket Opteron 6100 motherboard, tight on space, guess the form factor doesn’t cut it.

Twelve cores not enough? Well, you'll be able to drop 16-core Opteron 6200 CPUs into these systems in the not too distant future.

