TechOpsGuys.com – Diggin' technology every day

April 26, 2010

40GbE for $1,000 per port

Filed under: Networking, News — Nate @ 8:32 am

It seems it wasn’t too long ago that 10GbE broke the $1,000/port price barrier. Now 40GbE has reached it as well, courtesy of my personal favorite networking company, Extreme Networks, which today announced the availability of an expansion module adding 40GbE support to the X650 and X480 stackable switches. Top-of-rack line-rate 10GbE just got more feasible.

LAS VEGAS, NV, Apr 26, 2010 (MARKETWIRE via COMTEX News Network) — Extreme Networks, Inc. (NASDAQ: EXTR) today announced highly scalable 40 Gigabit Ethernet (GbE) network solutions at Interop Las Vegas. The VIM3-40G4X adds four 40 GbE connections to the award-winning Summit(R) X650 Top-of-Rack stackable switches for $3,995, or less than $1,000 per port. The new module is fully compatible with the existing Summit X650 and Summit X480 stackable switches, preserving customers’ investments while providing a smooth upgrade to greatly increased scalability of both virtualized and non-virtualized data centers.

[..]

Utilizing Ixia’s IxYukon and IxNetwork test solutions, Extreme Networks demonstrates wire-speed 40Gbps performance and can process 60 million packets per second (120Mpps full duplex) of data center traffic between ToR and EoR switches.
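For context, here is a quick back-of-the-envelope sketch in Python using only the figures quoted above; the 20 bytes of per-frame overhead is standard Ethernet preamble plus inter-frame gap, not anything from the press release.

```python
# Back-of-the-envelope math for the VIM3-40G4X figures quoted above.
module_price = 3995          # USD list price, four 40GbE ports per module
ports = 4
print(f"price per port: ${module_price / ports:,.2f}")        # -> $998.75

# Wire-speed sanity check for one 40GbE port at minimum frame size:
# every 64-byte frame also carries 20 bytes of preamble + inter-frame gap.
line_rate_bps = 40e9
bits_per_frame = (64 + 20) * 8
print(f"line rate at 64B frames: {line_rate_bps / bits_per_frame / 1e6:.1f} Mpps")  # ~59.5 Mpps
```

So the quoted 60 million packets per second is roughly what a single 40GbE port needs to sustain line rate with minimum-size frames.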

April 19, 2010

Arista ignites networks with groundbreaking 10GbE performance

Filed under: Networking, News — Nate @ 8:53 am

In a word: Wow

I just read an article from our friends at The Register on a new 384-port chassis 10GbE switch that Arista is launching. From a hardware perspective the numbers are just jaw-dropping.

A base Arista 7500 costs $140,000, and a fully configured machine with all 384 ports and other bells and whistles runs to $460,800, or $1,200 per port. This machine will draw 5,072 watts of juice and take up a little more than a quarter of a rack.

Compare this to a Cisco Nexus 7010 setup to get 384 wirespeed ports and deliver the same 5.76 Bpps of L3 throughput, and you need to get 18 of the units at a cost of $13.7m. Such a configuration will draw 160 kilowatts and take up 378 rack units of space – nine full racks. Arista can do the 384 ports in 1/34th the space and 1/30th the price.
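A rough sketch of the ratios being described, using only the numbers above; the 11 rack units for the Arista chassis is my assumption for "a little more than a quarter of a rack", so treat the space ratio as approximate.

```python
# Ratios from the figures quoted above. The ~11 rack units for the Arista 7500
# is my assumption for "a little more than a quarter of a rack".
arista = {"price": 460_800, "ports": 384, "watts": 5_072, "rack_units": 11}
cisco  = {"price": 13_700_000, "ports": 384, "watts": 160_000, "rack_units": 378}

print(f"Arista price per port: ${arista['price'] / arista['ports']:,.0f}")    # ~$1,200
print(f"price ratio: {cisco['price'] / arista['price']:.0f}x")                # ~30x
print(f"space ratio: {cisco['rack_units'] / arista['rack_units']:.0f}x")      # ~34x
print(f"power ratio: {cisco['watts'] / arista['watts']:.0f}x")                # ~32x
```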

I love the innovation that comes from these smaller players, really inspiring.

April 14, 2010

First SPC-1 Numbers with automagic storage tiering

Filed under: News, Storage — Nate @ 8:38 am

IBM recently announced that they are adding an “Easy Tier” of storage to some of their storage systems. This seems to be their form of what I have been calling automagic storage tiering. They are doing it at the sub-LUN level in 1GB increments, and they recently posted SPC-1 numbers for this new system. Finally, someone posted numbers.

Configuration of the system included:

  • 1 IBM DS8700
  • 96 1TB SATA drives
  • 16 146GB SSDs
  • Total ~100TB raw space
  • 256GB Cache

Performance of the system:

  • 32,998 IOPS
  • 34.1 TB Usable space

Cost of the system:

  • $1.58 Million for the system
  • $47.92 per SPC-1 IOP
  • $46,545 per usable TB

Now I’m sure the system is fairly power efficient given that it only has 96 spindles on it, but I don’t think that justifies the price tag. Just take a look at this 3PAR F400 which posted results almost a year ago:

  • 384 disks, 4 controllers, 24GB data cache
  • 93,050 SPC-1 IOPS
  • 26.4 TB Usable space (~56TB raw)
  • $548k for the system (I’m sure prices have come down since)
  • $5.89 per SPC-1 IOP
  • $20,757 per usable TB

The tested system used 146GB disks; today the 450GB disks seem priced very reasonably, so I would opt for those instead and get the extra space for not much of a premium.

Take a 3PAR F400 with 130 450GB 15k RPM disks; that would be about 26TB of usable space with RAID 1+0 (the tested configuration above is 1+0). That would give about 33.8% of the performance of the 384-disk system above, so say 31,487 SPC-1 IOPS, very close to the IBM system, and I bet the price of the 3PAR would be close to half of the $548k above (taking into account that the controllers in any system are a good chunk of the cost). 3PAR has near-linear scalability, making extrapolations like this possible and accurate. And you can sleep well at night knowing you can triple your space/performance online without service disruption.
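Here is that extrapolation spelled out as a minimal sketch, assuming the near-linear scaling claimed above; the figures come straight from the two result summaries.

```python
# Scaling the published F400 result down to the hypothetical 130-disk config,
# assuming near-linear scaling with spindle count (the post's simplification,
# not something SPC-1 itself guarantees).
published_iops  = 93_050   # 384 x 146GB 15k disks, RAID 1+0
published_disks = 384
proposed_disks  = 130      # 450GB 15k disks instead

ratio = proposed_disks / published_disks
print(f"spindle ratio: {ratio:.1%}")                        # ~33.9%
print(f"estimated IOPS: {published_iops * ratio:,.0f}")     # ~31,500 vs IBM's 32,998
```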

Note that you can of course equip a 3PAR system with SSDs and use automagic storage tiering as well (they call it Adaptive Optimization) if you really wanted to. By contrast, the 3PAR system moves data around in 128MB increments.

It seems the cost of the SSDs and the massive amount of cache IBM dedicated to the system more than offset the benefits of using lower cost nearline SATA disks in the system. If you do that, what’s the point of it then?

So consider me not impressed with the first results of automagic storage tiering. I expected significantly more out of it. Maybe it’s IBM specific, maybe not, time will tell.

April 9, 2010

Found a use for the cloud

Filed under: News, Virtualization — Nate @ 1:42 pm

Another interesting article on Datacenter Knowledge mentioned the U.S. Government’s use of the Terremark cloud. I recall reading about it briefly when it first launched, but seeing the numbers again made me do another double take.

“One of the most troubling aspects about the data centers is that in a lot of these cases, we’re finding that server utilization is actually around seven percent,” Federal Chief Information Officer Vivek Kundra said.

[..]

Yes, you read that correctly. A government agency was going to spend $600,000 to set up a blog.

[..]

The GSA previously paid $2.35 million in annual costs for USA.gov, including $2 million for hardware refreshes and software re-licensing and $350,000 in personnel costs, compared to the $650,000 annual cost to host the site with Terremark.

For $650k/yr I bet the site runs on only a few servers (a dozen or less) and has less than a TB of total disk space.

April 2, 2010

Grid Iron decloaks

Filed under: News, Storage — Nate @ 10:30 am

Grid Iron Systems seems to have left stealth mode somewhat recently. They are another startup that makes an accelerator appliance that sits in between your storage and your server(s). Kind of like what Avere does on the NAS side, Grid Iron does on the SAN side with their “TurboCharger”.

It certainly looks like an interesting product, but it appears they make it “safe” by having it cache only reads. I want an SSD system that can cache writes too! (Yes, I know that wears the SSDs out faster, but just do warranty replacement.) I look forward to seeing some SPC-1 numbers on how Grid Iron can accelerate systems, and at the same time I look forward to SPC-1 numbers on how automatic storage tiering can accelerate systems as well.
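To illustrate why caching only reads is the “safe” approach: writes always land on the backing array first, so the cache never holds the only copy of anything, and losing the SSDs costs performance rather than data. What follows is a minimal, purely hypothetical sketch of the idea; it is not based on anything Grid Iron has published.

```python
class ReadCacheAppliance:
    """Hypothetical read-only caching layer in front of a backing array.

    Not Grid Iron's implementation -- just an illustration of why caching
    only reads is the "safe" design: the cache never holds the sole copy
    of any data, so losing it costs performance rather than data.
    """

    def __init__(self, backing_store):
        self.backing = backing_store   # the real SAN storage
        self.cache = {}                # block address -> data (stand-in for SSD)

    def read(self, block):
        if block in self.cache:                 # hit: served from the cache
            return self.cache[block]
        data = self.backing.read(block)         # miss: fetch and populate
        self.cache[block] = data
        return data

    def write(self, block, data):
        self.backing.write(block, data)         # writes go straight to the array
        self.cache.pop(block, None)             # drop any stale cached copy
```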

I’d also be interested in seeing how Grid Iron can accelerate NetApp systems vs using NetApp’s own read-only PAM (since Grid Iron specifically mentions NetApp in their NAS accelerator, although yes I’m sure they just used NetApp as an example).

April 1, 2010

New IBM blades based on Intel 7500 announced

Filed under: News, Virtualization — Nate @ 7:46 pm

The Register had the scoop a while back, but apparently today they were officially announced. IBM did some trickery with the new 7500 series Intel Xeons to accomplish two things:

  • Expand the amount of memory available to the system
  • Be able to “connect” two dual socket blades to form a single quad socket system

Pretty creative, though the end result wasn’t quite as impressive as it sounded up front. Their standard blade chassis is 9U and has 14 slots on it.

  • Each blade is dual socket, maximum 16 cores, and 16 DIMMs
  • Each memory extender offers 24 additional DIMMs

So for the chassis as a whole you’re talking about 7 dual-socket systems with 40 DIMMs each, or 3 quad-socket systems with 80 DIMMs each plus 1 dual-socket with 40.

Compare that to an Opteron 6100 system, where you could get 8 quad-socket systems with 48 DIMMs each in a single enclosure (granted, such a system has not been announced yet, but I am confident it will be). The quick sketch below runs the numbers.

  • Intel 7500-based system: 112 CPU cores (1.8GHz), 280 DIMM slots – 9U
  • Opteron 6100-based system: 384 CPU cores (2.2GHz), 384 DIMM slots – 10U
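Here is a quick sketch of that math, using the slot, core, and DIMM counts above; the numbers are my arithmetic, not IBM’s published configurations.

```python
# Checking the chassis math above (my arithmetic, not IBM's official configs).
slots = 14                                        # 9U BladeCenter chassis
blade = {"sockets": 2, "cores": 16, "dimms": 16}  # HX5 two-socket blade
extender_dimms = 24                               # memory extender: 24 DIMMs, no CPUs

# Option 1: seven blade + extender pairs fill all 14 slots.
pairs = slots // 2
print(pairs, "two-socket nodes:",
      pairs * blade["cores"], "cores,",                          # 112 cores
      pairs * (blade["dimms"] + extender_dimms), "DIMM slots")   # 280 DIMM slots

# Option 2: three four-socket nodes (2 blades + 2 extenders = 4 slots each)
# plus one two-socket node in the remaining 2 slots.
print(3, "four-socket nodes with", 2 * (blade["dimms"] + extender_dimms),
      "DIMMs each, plus", (slots - 3 * 4) // 2, "two-socket node")

# The hypothetical Opteron 6100 enclosure from the comparison: 8 quad-socket nodes.
print(8 * 4 * 12, "cores,", 8 * 48, "DIMM slots")                # 384 and 384
```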

And the price of the IBM system is even less impressive –

In a base configuration with a single four-core 1.86 GHz E7520 processor and 8 GB of memory, the BladeCenter HX5 blade costs $4,629. With two of the six-core 2 GHz E7540 processors and 64 GB of memory, the HX5 costs $15,095.

They don’t seem to show pricing for the 8 core 7500-based blade, and say there is no pricing or ETA on the arrival of the memory extenders.

They do say this which is interesting (not surprising) –

The HX5 blade cannot support the top-end eight-core Xeon 7500 parts, which have a 130 watt thermal design point, but it has been certified to support the eight-core L7555, which runs at 1.86 GHz, has 24 MB of L3 cache, and is rated at 95 watts.

I only hope AMD has enough manufacturing capacity to keep up with demand; Opteron 6100s will wipe the floor with the Intel chips on price/performance (for the first time in a while).

March 30, 2010

Tough choice

Filed under: News — Nate @ 3:16 pm

I thought the Intel Xeon 7500 (Nehalem-EX) processors were supposed to launch today, and I was confused earlier when I didn’t see any news on it. Well, finally The Register posted some news.

I will keep it short and sweet this time:

  • Intel Nehalem-EX L7555 1.86GHz 95W 8-core processor price is $3,157 (in 1,000-unit quantities)
  • AMD Opteron 6174 2.2GHz 80W 12-core processor price is $1,165 (in 1,000-unit quantities)

Tough choice indeed.
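Put another way, on a simple price-per-core basis (list prices from the two bullets above) the gap is roughly four to one:

```python
# List prices and core counts from the two bullets above.
intel_price, intel_cores = 3_157, 8    # Xeon L7555
amd_price, amd_cores = 1_165, 12       # Opteron 6174

print(f"Intel: ${intel_price / intel_cores:,.0f}/core")   # ~$395
print(f"AMD:   ${amd_price / amd_cores:,.0f}/core")       # ~$97
print(f"ratio: {(intel_price / intel_cores) / (amd_price / amd_cores):.1f}x")  # ~4.1x
```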

If you want more than four sockets, of course, you will likely not be able to use an off-the-shelf Opteron 6100 system (you may need to build your own chipset), but the justification behind this decision by AMD seems very reasonable:

That 100 GB/sec of memory bandwidth in the forthcoming four-socket Opteron 6100 machine is one of the reasons why AMD decided not to do eight-socket configurations with these chips. “It is a lot of development work for a not very large market,” says Fruehe, who reckons that there are only about 1,800 x64-based eight-socket servers sold worldwide each quarter and that the number is dwindling as four-socket boxes get more powerful.

“Intel is raving about having 15 different designs for its upcoming Nehalem EX machines, but how many is each vendor going to get out of those 1,800 units?” Fruehe says that the 8P boxes account for less than two-tenths of a percent of current shipments each quarter, and that while 4P boxes are only accounting for around 4 per cent, “even though it is a small space, AMD needs to be there.”

March 29, 2010

Opteron 6100s are here

Filed under: News, Virtualization — Nate @ 8:28 am

UPDATED I’ve been waiting for this for quite some time, and finally the 12-core AMD Opteron 6100s have arrived. AMD did the right thing this time by not waiting to develop a “true” 12-core chip and instead bolted a pair of CPUs together into a single package. You may recall AMD lambasted Intel when it released its first four-core CPUs a few years ago (composed of a pair of two-core chips bolted together), a strategy that paid off well for Intel. AMD’s market share was hurt badly as a result, a painful lesson which they learned from.

I’d of course rather have a “true” 12-core processor, but I’m very happy to make do with these Opteron 6100s in the meantime; I don’t want to have to wait another 2-3 years to get 12 cores in a socket.

Some highlights of the processor:

  • Clock speeds ranging from 1.7GHz (65W) to 2.2GHz (80W), with a turbo-boost 2.3GHz model coming in at 105W
  • Prices ranging from $744 to $1,396 in 1,000-unit quantities
  • Twelve-core and eight-core models, 512KB of L2 cache per core, 12MB of shared L3 cache
  • Quad-channel LV & U/RDDR3, ECC, support for online spare memory
  • Supports up to 3 DIMMs/channel, up to 12 DIMMs per CPU
  • Quad 16-bit HyperTransport™ 3 technology (HT3) links, up to 6.4 GT/s per link (more than triple HT1 performance)
  • AMD SR56x0 chipset with I/O Virtualization and PCIe® 2.0
  • Socket compatibility with the planned AMD Opteron™ 6200 Series processors (16 cores?)
  • New advanced idle states allowing the processor to idle with less power usage than the previous six-core systems (AMD seems to have long had the lead in idle power conservation)


The new I/O virtualization looks quite nice as well – AMD-V 2.0, from their site:

Hardware features that enhance virtualization:

  • Unmatched Memory Bandwidth and Scalability – Direct Connect Architecture 2.0 supports a larger number of cores and memory channels so you can configure robust virtual machines, allowing your virtual servers to run as close as possible to physical servers.
  • Greater I/O virtualization efficiencies – I/O virtualization to help increase I/O efficiency by supporting direct device assignment, while improving address translation to help reduce the levels of hypervisor intervention.
  • Improved virtual machine integrity and security – With better isolation of virtual machines through I/O virtualization, this helps increase the integrity and security of each VM instance.
  • Efficient Power Management – AMD-P technology is a suite of power management features that are designed to drive lower power consumption without compromising performance. For more information on AMD-P, click here
  • Hardware-assisted Virtualization – AMD-V technology to enhance and accelerate software-based virtualization so you can run more virtual machines, support more users and transactions per virtual machine with less overhead. This includes Rapid Virtualization Indexing (RVI) to help accelerate the performance of many virtualized applications by enabling hardware-based VM memory management. AMD-V technology is supported by leading providers of hypervisor and virtualization software, including Citrix, Microsoft, Red Hat, and VMware.
  • Extended Migration – a hardware feature that helps virtualization software enable live migration of virtual machines between all available AMD Opteron™ processor generations. For a closer look at Extended Migration, follow this link.

I’m happy that AMD is returning to the chipset design business as well; I was never comfortable with Nvidia as a server chipset maker.

The Register has a pair of great articles on the launch as well, though with the main one I was kind of annoyed that I had to scroll so much to get past the Xeon news; I don’t think they needed to recap it in such detail in an article about the Opterons, but oh well.

I thought this was an interesting note on the recent Intel announcement of integrated silicon for encryption –

While Intel was talking up the fact that it had embedded cryptographic instructions in the new Xeon 5600s to implement the Advanced Encryption Standard (AES) algorithm for encrypting and decrypting data, Opterons have had this feature since the quad-core “Barcelona” Opterons came out in late 2007, er, early 2008.

And as for performance –

Generally speaking, bin for bin, the twelve-core Magny-Cours chips provide about 88 per cent more integer performance and 119 per cent more floating point performance than the six-core “Istanbul” Opteron 2400 and 8400 chips they replace..

AMD seems geared towards reducing costs and prices as well with –

The Opteron 6100s will compete with the high-end of the Xeon 5600s in the 2P space and also take the fight on up to the 4P space. But, AMD’s chipsets and the chips themselves are really all the same. It is really a game of packaging some components in the stack up in different ways to target different markets.

Sounds like a great way to keep costs down by limiting the amount of development required to support the various configurations.

AMD themselves also blogged on the topic with some interesting tidbits of information –

You’re probably wondering why we wouldn’t put our highest speed processor up in this comparison. It’s because we realize that while performance is important, it is not the most important factor in server decisions.  In most cases, we believe price and power consumption play a far larger role.

[..]

Power consumption – Note that to get to the performance levels that our competitor has, they had to utilize a 130W processor that is not targeted at the mainstream server market, but is more likely to be used in workstations. Intel isn’t forthcoming on their power numbers so we don’t really have a good measurement of their maximum power, but their 130W TDP part is being beaten in performance by our 80W ACP part.  It feels like the power efficiency is clearly in our court.  The fact that we have doubled cores and stayed in the same power/thermal range compared to our previous generation is a testament to our power efficiency.

Price – This is an area that I don’t understand.  Coming out of one of the worst economic times in recent history, why Intel pushed up the top Xeon X series price from $1386 to $1663 is beyond me.  Customers are looking for more, not less for their IT dollar.  In the comparison above, while they still can’t match our performance, they really fall short in pricing.  At $1663 versus our $1165, their customers are paying 42% more money for the luxury of purchasing a slower processor. This makes no sense.  Shouldn’t we all be offering customers more for their money, not less?

In addition to our aggressive 2P pricing, we have also stripped away the “4P tax.” No longer do customers have to pay a premium to buy a processor capable of scaling up to 4 CPUs in a single platform.  As of today, the 4P tax is effectively $0. Well, of course, that depends on you making the right processor choice, as I am fairly sure that our competitor will still want to charge you a premium for that feature.  I recommend you don’t pay it.

As a matter of fact, a customer will probably find that a 4P server, with 32 total cores (4 x 8-core) based on our new pricing, will not only perform better than our competitor’s highest end 2P system, but it will also do it for a lower price. Suddenly, it is 4P for the masses!

For the most part I am mainly interested in their 12-core chips, but I also see significant value in the 8-core chips; being able to replace a pair of 4-core chips with a single-socket 8-core system is very appealing in certain situations. There is a decent premium on motherboards that need to support more than one socket. Being able to get 8 (and maybe even 12) cores in a single-socket system is just outstanding.

I also found this interesting –

Each one is capable of 105.6 Gigaflops (12 cores x 4 32-bit FPU instructions x 2.2GHz).  And that score is for the 2.2GHz model, which isn’t even the fastest one!

I still have a poster up on one of my walls from the 1995-1996 era on the world’s first teraflop machine, which was –

The one-teraflops demonstration was achieved using 7,264 Pentium Pro processors in 57 cabinets.

With the same number of these new Opterons you could get three-quarters of the way to a petaflop.
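A quick sketch of that math, using the per-chip formula from the quote and the 7,264-CPU count from the one-teraflop demonstration:

```python
# Per-socket peak from the quote above, compared against the 1996 teraflop run.
cores, flops_per_clock, ghz = 12, 4, 2.2
gflops_per_chip = cores * flops_per_clock * ghz      # 105.6 GFLOPS
pentium_pros = 7_264                                 # CPUs in the one-teraflop demo

total_tflops = gflops_per_chip * pentium_pros / 1_000
print(f"{gflops_per_chip:.1f} GFLOPS per chip")
print(f"{total_tflops:,.0f} TFLOPS from {pentium_pros:,} of them")   # ~767, ~3/4 of a petaflop
```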

SGI is raising the bar as well –

This means as many as 2,208 cores in a single rack of our Rackable™ rackmount servers. And in the SGI ICE Cube modular data center, our containerized data center environment, you can now scale within a single container to 41,760 cores! Of course, density is only part of the picture. There’s as much to be excited about when it comes to power efficiency and the memory performance of SGI servers using AMD Opteron 6100 Series processor technology

Other systems announced today include:

  • HP DL165G7
  • HP SL165z G7
  • HP DL385 G7
  • Cray XT6 supercomputer
  • There is mention of a Dell R815, though it doesn’t seem to be officially announced yet. The R815 specs seem kind of underwhelming in the memory department, with it only supporting 32 DIMMs (the HP systems above support the full 12 DIMMs/socket). It is only 2U, however. Sun has had 2U quad-socket Opteron systems with 32 DIMMs for a couple of years now in the form of the X4440; it is strange that Dell did not step up to max out the system with 48 DIMMs.

I can’t put into words how happy and proud I am of AMD for this new product launch. Not only is it an impressive technological achievement, but the fact that they managed to pull it off on schedule is just amazing.

Congratulations AMD!!!

March 28, 2010

Vulnerable Smart Grid

Filed under: News, Security — Nate @ 9:27 am

As some of you who know me may know, I have been against the whole concept of a “smart grid” for a few years now. The main reason behind this is security. The more intelligence you put into something, especially with regard to computer technology, the more complex it becomes, and the more complex it becomes, the harder it is to protect.

Well, it seems the mainstream media has picked up on this with an article from the AP:

SAN FRANCISCO – Computer-security researchers say new “smart” meters that are designed to help deliver electricity more efficiently also have flaws that could let hackers tamper with the power grid in previously impossible ways.

Kind of reminds me of the RFID-based identification schemes that have been coming online in the past few years, which are just as prone to security issues. In the case of the smart grid, my understanding is that the goal is to improve energy efficiency by allowing the power company to intelligently inform downstream customers of power conditions, so that things like heavy appliances can be proactively turned off in the event of a surge in usage to prevent brownouts and blackouts.

Sounds nice in theory, like many things, but as someone who has worked with technology for about 20 years now, I see the quality of stuff that comes out of companies, and I just have no confidence that such technology can be made “secure” at the same time it is made “cost-effective”. At least not at our current level of technological sophistication; from an evolutionary standpoint “technology” is still a baby, and we’re still figuring out this brand-new stuff. I don’t mean to knock any company or organization in particular, as they are not directly at fault; I just don’t believe technology in general is ready for such a role, not in a society such as ours.

Today in many cases you can’t get a proper education in modern technology because the industries are moving too fast for the schools to keep up. Don’t get me started on organizations like OLPC and others trying to pitch laptop computers to schools in an attempt to make education better.

If you want to be green, in my opinion, get rid of the coal-fired power plants. I mean, it’s the 21st century and coal still generates roughly half (or more) of our electricity? Hasn’t anyone played Sim City?

Of course this concept doesn’t just apply to the smart grid; it applies to everything as our civilization tries to put technology to work to improve our lives. Whether it’s Wi-Fi, RFID, or online banking, all of these (and many others) expose us to significant security threats when not deployed properly, and in my experience the implementations that are not secure outnumber the ones that are by probably 1000:1. So we have a real, significant trend of this in action (technology being deployed and then actively exploited). I’m sure you agree that our power grid is a fairly important resource; it was declared the most important engineering achievement of the 20th century.

While I don’t believe it is possible yet, we are moving down the road where scenes like those portrayed in the movie Eagle Eye (I saw it recently, so it was on my mind) will be achievable, especially now that many nations have spun up formal hacker teams to fight future cyber wars. And you have to admit, we are a pretty tempting target.

There will be a very real cost to this continued penetration of technology into our lives. In the end I think the cost will be too high, but time will tell I guess.

You could say I long for the earlier days of technology where, for the most part, security “threats” were just people that wanted to poke around in systems, or compromise a host to “share” its bandwidth and disk space for hosting pirated software. Rarely was there any real malice behind any of it; that’s not true anymore.

And for those that are wondering – the answer is no. I have never, ever had a wireless access point hooked to my home network, and I do my online banking from Linux.

March 16, 2010

IBM partners with Red Hat for KVM cloud

Filed under: News, Virtualization — Nate @ 6:32 pm

One question: Why?

IBM has bombarded the IT world for years now with claims of how they can consolidate hundreds to thousands of Linux VMs onto a single mainframe.

IBM has recently announced a partnership with Red Hat to use KVM in a cloud offering. At first I thought, well, maybe they are doing it to offer Microsoft applications as well, but that doesn’t appear to be the case:

Programmers who use the IBM Cloud for test and dev will be given RHEV to play with Red Hat Enterprise Linux or Novell SUSE Linux Enterprise Server images with a Java layer as they code their apps and run them through regression and other tests.

Let’s see: Linux and Java. Why not use the mainframes to do this? Why KVM? As far as the end users are concerned it really shouldn’t matter; after all, it’s Java and Linux.

Seems like a slap in the face to their mainframe division (I never bought into the mainframe/Linux/VM marketing myself, and I suppose they don’t either). I do remember briefly having access to an S/390 running a SuSE VM about 10 years ago; it was... interesting.
