TechOpsGuys.com Diggin' technology every day

August 28, 2010

What a mouthful

Filed under: Networking, Random Thought — Nate @ 9:25 am

I’ve thought about this off and on, and figured I’d better write about it so I can forget about it.

I think Force10 is way too verbose in the phrase they chose to describe their company; it’s quite a mouthful:

Force10 Networks, Inc., a global technology leader that data center, service provider and enterprise customers rely on when the network is their business[..]

I like Force10; I have been watching them for five years now. I just think any phrase you choose to describe your company should be short enough to say in one (casual) breath.

How about “Force10 Networks, Inc., a global networking technology leader”?

Force10’s marketers are very nice folks. I’ve sent them two corrections to their web site over the years (one concerning the number of ports a competitor offers in their products, the other a math error in a graphic showing how much you can save with their products), and they were very kind and responsive (and fixed both problems pretty quickly too). This one I won’t send to them directly since it’s more than a cosmetic change 🙂

August 27, 2010

CNBC Videos on 3PAR

Filed under: News, Storage — Nate @ 12:32 pm

I’ve watched CNBC for a long time; I find it pretty entertaining, even though I don’t invest.

So often these mergers involve industries and companies I have no interest in, where I can’t really gauge whether the analysts know what they are talking about.

This one is different, of course: as a user of 3PAR products for the past three years or so, I know their stuff inside and out. And I’m constantly looking out for other interesting technologies.

Here are several videos.

HP Now offering $2 billion

Filed under: News, Storage — Nate @ 8:46 am

Dell apparently is being a little bitch again and matched HP’s $27 offer for 3PAR, so HP came right back and offered $30 a share, or $2 billion, up from Dell’s original $1.1 billion offer ($18/share).

PALO ALTO, Calif., Aug 27, 2010 (BUSINESS WIRE) — HP (HPQ) today announced that it has increased its proposal to acquire all of the outstanding shares of 3PAR Inc. (PAR) to $30 per share in cash, or an enterprise value of $2.0 billion. The proposal represents an 11 percent premium above the most recent price offered by Dell Inc. of $27 per share. HP’s proposal is not subject to any financing contingency and has been approved by HP’s board of directors. Once approved by 3PAR’s board, HP expects the transaction to close by the end of the calendar year.

Cut your losses and run, Dell. Go buy Compellent.

What should HP do with 3PAR

Filed under: Storage — Nate @ 7:36 am

Assuming HP gets them, which I am optimistic will occur, this is what I think HP should do:

* Phase out the current USP-based XP line with the 800-series of 3PAR systems, currently the T800
* Phase out the EVA Cluster with the enterprise 400-series of 3PAR systems, currently the T400
* Phase out the EVA 6400 and 8400 with the mid-range 400-series of 3PAR systems, currently the F400
* Phase out the 3PAR F200 and replace it with the EVA 4400-series

I’m sure this is all pretty obvious but it gives me something to write about 🙂

Why the changes to the EVA offerings, and why drop the F200 from 3PAR? To me, it all comes down to 3PAR’s four-node architecture and the ability to offer Persistent Cache.

3PAR Persistent Cache is a resiliency feature designed to gracefully handle component failures by eliminating the substantial performance penalties associated with “write-through” mode. Supported on all quad-node and larger InServ arrays, Persistent Cache leverages the InServ’s unique Mesh-Active design to preserve write-caching by rapidly re-mirroring cache to the other nodes in the cluster in the event of a controller node failure.


Persistent Cache allows service providers to operate at higher levels of utilization because they know they can maintain high performance even when a controller fails (or when two or three controllers fail in a 6- or 8-node T800, as long as they are the right nodes!). One of my former employers has a bunch of NetApp stuff, and I’m told they run it pretty much entirely active/passive so as to protect performance in the event a controller fails. I’m sure that is a fairly common setup.

This is also useful during software upgrades, where the controllers have to be rebooted, or during hardware upgrades (adding more FC ports or whatever).
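Just to illustrate what falling back to write-through mode costs you, here is a little back-of-the-envelope sketch. The latency figures are round numbers I made up for the example, not 3PAR’s (or anyone’s) actual specs:

    # Toy model: what losing write-back cache ("write-through" mode) does
    # to write performance when a controller fails. Numbers are hypothetical.

    CACHE_WRITE_MS = 0.5   # ack from mirrored controller cache (write-back)
    DISK_WRITE_MS = 8.0    # ack only after the disk commits (write-through)

    def max_write_iops(latency_ms, outstanding_ios=16):
        """Little's law ceiling: IOs in flight / time per IO."""
        return outstanding_ios / (latency_ms / 1000.0)

    wb = max_write_iops(CACHE_WRITE_MS)
    wt = max_write_iops(DISK_WRITE_MS)
    print(f"write-back:    ~{wb:,.0f} write IOPS per queue")
    print(f"write-through: ~{wt:,.0f} write IOPS per queue")
    print(f"penalty:       ~{wb / wt:.0f}x slower")

    # A quad-node array that re-mirrors cache to a surviving node stays in
    # write-back mode after a controller failure, so this cliff never happens.

With a 16x difference in that toy model, it’s easy to see why you’d run an active/passive pair at half utilization if you didn’t have something like Persistent Cache.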

Another reason is the ease of use around configuring multi-site replication, and the ability to do synchronous long-distance replication on the midrange systems.

3PAR® is the first storage vendor to offer autonomic disaster recovery (DR) configuration that enables you to set up and test your entire DR environment—including multi-site, multi-mode replication using both mid-range and high-end arrays—in just minutes.

[..]

Synchronous Long Distance replication combines the best of both worlds by offering the data integrity of synchronous mode disaster recovery and the extended distances (including cross-continental reach) possible with asynchronous replication. Remote Copy makes all of this possible without the complexity or professional services required by the monolithic vendors that offer multi-target disaster recovery products, and at half the cost or less.

I can understand why 3PAR came up with the F200; it is a bit cheaper, and the only difference is the chassis the nodes go in: the nodes are the same, everything else is the same. So to me it’s a no-brainer to spend the extra, what, 10-15% up front and get the capability to go to four controllers even if you don’t need it right away. It takes an extra 4U of rack space. If you really want to be cheap, go with the small 2-node EVA.

I find it kind of funny that on the main EVA page, the EVA-4000’s “Ideal for” blurb is blank.


August 26, 2010

Thank you HP

Filed under: News, Storage — Nate @ 4:05 pm

That’s more like it. HP knows what they are doing; they just boosted their offer for 3PAR to $27/share.

SEATTLE (AP) — Hewlett-Packard Co. has again raised its bid for 3Par Inc. above an offer from rival Dell Inc., suggesting that the little-known data-storage maker could be worth more with one of the PC companies’ marketing muscle behind it.

The latest offer from HP for $27 per share in cash, or about $1.69 billion, is nearly three times what 3Par had been trading at before Dell made the first bid last week.

Bring it on. Did I ever mention 3PAR went IPO on my birthday? Coincidence, yeah I know, but maybe it was a sign. If I recall right, they were supposed to IPO one day earlier but something delayed it by a day. I never did buy any stock (as I mentioned before, I don’t buy stocks or bonds).

This is a joke, right?

Filed under: News, Storage — Nate @ 7:55 am


So today, right after the jobless claims came out, Dell increased their bid for 3PAR to $24.30: thirty cents above HP’s offer, which in turn was $6.00 above Dell’s original offer.

Even now, hours later, I can’t help but laugh. I mean, this is a good example of what kind of company Dell is. Why are they wasting everyone’s time with a mere 1% increase in their bid?

A survey recently done by Reuters came up with an estimated $29 final price for 3PAR.

[..] That’s why some analysts say traditional metrics aren’t sufficient in assessing the value of 3PAR — a small company with unique technology that could grow exponentially with the massive salesforces of either Dell or HP.

This morning on Squawk Box folks were saying the next step is for HP to bid up again and get 3PAR to eliminate the price matching clause with Dell to level the playing field.

I keep seeing people ask who needs 3PAR more. I think it’s clear Dell needs them more; Dell has nothing right now. But I’m sure Dell would do a lot to screw up the 3PAR technology over time, so HP is the better fit: a more innovative company with more market leadership and, of course, a lot more resources from pretty much every angle.

August 25, 2010

Moving on up to Number two

Filed under: Networking — Nate @ 4:06 pm

Brings a tear to me eye. My favorite switching vendor had a pretty impressive announcement today:

Extreme Networks commanded the #2 revenue position for data center Top-of-Rack switches according to the quarterly Ethernet market share report, behind only Cisco, driven by its industry leading Summit(R) X650, Summit X450 and Summit X480 switches. In the “Top of Rack” switch port shipment category, Extreme Networks increased its port shipments by 194% compared to the same quarter one year ago. This demonstrates continued momentum for the Company in the dynamic and demanding data center Ethernet market.

If you haven’t already seen the X650, X480, and even X450 series of switches, check them out. They offer several capabilities that no other vendor on the market provides. And they are very affordable.

I have blogged about some of my favorite aspects of their technology in the past. I’ve been using Extreme stuff for just about 10 years now, I think.

[tangent — begin]

I remember the second switch I bought (this one for my employer), a Summit 48 with an external power supply; I think it was in 2001. I bought it off eBay from what I assume was a crashed dot-com or something. Anyway, they didn’t include the cable to connect the switch to the redundant power supply (it was sold “as is”). I hunted around trying to find the right part to order and couldn’t find anything, so I called support.

The support tech had me recite the serial number of the unit to him, and he said they didn’t have a part number for that cable, so they couldn’t sell me one. But he happened to have a few cables laying around, so he put one in a FedEx pouch and shipped it to me, free. I didn’t have a support contract (and didn’t get one until I made a much larger purchase several years later). But I guess you could say that friendly support engagement certainly played a factor in me keeping tabs on the company and its products going forward, leading up to a million-dollar purchase several years later (at a different company) of more than 3,000 ports.

I used my first switch, also a Summit 48, as my home network switch for a good five years, before I decided it drew too much power for what I needed (a 48-port switch running maybe 5-6 ports total) and was pretty noisy (as are pretty much all switches from that era; I think it was manufactured in ’98). I got a good deal on a Summit 48si and upgraded to that for another year, then retired it to a shelf. It drew half the power, and after replacing all of the fans in the unit (the original fans were too loud) it was quieter, but my network needs shrank even more, from ~5-6 systems to ~2-3 (yay VMware), and I wanted to upgrade to gigabit.

From the Summit 48 article above, I thought this was a good indication of how easy their stuff is to use, even more than 10 years ago:

[..]We tested it with and without the QoS enabled. Without the QoS enabled, I began to see glitches in the video. The video halted abruptly at rates over 98 percent. With two commands, I enabled QoS on the Summit switches. Summit48 intelligently discarded the packets with lower priority, preserving the video stream’s quality even at 100 percent utilization.

A couple of years ago I eventually recycled my Summit 48, along with an old Cisco switch (which I never used) and a couple of really old Foundry load balancers (never used them either). I was too lazy to try to eBay them or put them on Craigslist. I still have my 48si; it’s a really nice switch and I like it a lot. They still sell it even today, and still release updates (ExtremeWare 7.x) for it. The Summit 48 code base (ExtremeWare 1.x-4.x) was retired probably in 2002, so nothing new has been released for it in a long time.

[tangent — end]

So, congratulations Extreme for doing such a great job.

August 24, 2010

EMC and IBM’s Thick chunks for automagic storage tiering

Filed under: Storage, Virtualization — Nate @ 12:59 pm

If you recall, not long ago IBM released some SPC-1 numbers with their automagic storage tiering technology, Easy Tier. It was noted that they move 1GB blocks of data between the tiers. To me that seemed like a lot.

Well, EMC announced the availability of FAST v2 (aka sub-volume automagic storage tiering), and according to our friends at The Register they too are using 1GB blocks of data to move between tiers.

Still seems like a lot. I was pretty happy when 3PAR said they use 128MB blocks, which is half the size of their chunklets. When I first heard of this sub-LUN tiering I thought to myself that you might want a block size as small as, I don’t know, 8-16MB. At the time 128MB still seemed kind of big (before I had learned of IBM’s 1GB size).

Just think of how much time it takes to read 1GB of data off a SATA disk (since the big target for automagic storage tiering seems to be SATA + SSD).
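Some quick napkin math on that; the drive throughput figures here are my own rough assumptions, not vendor numbers:

    # How long does promoting one "chunk" off a SATA tier take?
    # Throughput figures are rough assumptions for illustration.

    SEQ_MBPS = 70    # idle 7200RPM SATA drive, sequential read
    BUSY_MBPS = 10   # same drive while also serving random I/O

    for chunk_mb, label in [(1024, "IBM/EMC 1GB"), (128, "3PAR 128MB"), (16, "16MB")]:
        idle = chunk_mb / SEQ_MBPS
        busy = chunk_mb / BUSY_MBPS
        print(f"{label:>12}: ~{idle:5.1f}s idle, ~{busy:6.1f}s on a busy drive")

Under those assumptions, a 1GB chunk ties up a busy SATA spindle for the better part of two minutes, while a 128MB chunk is done in a dozen seconds or so.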

Anyone know what size Compellent uses for automagic storage tiering?

August 23, 2010

HP FlexFabric module launched

Filed under: Datacenter, Networking, Storage, Virtualization — Nate @ 5:03 pm

While they announced it a while back, it seems the HP VirtualConnect FlexFabric Module is now available for purchase for $18,500 (web price). Pretty impressive technology: sort of a mix of FCoE with a Fibre Channel switch and a 10Gbps Flex10 switch combined into one. The switch has two ports on it that can (apparently) uplink directly to 2/4/8Gbps Fibre Channel. I haven’t read too much into it yet, but I assume it can uplink directly to a storage array, unlike the previous Fibre Channel Virtual Connect module, which had to be connected to a switch first (due to NPIV).

HP Virtual Connect FlexFabric 10Gb/24-port Modules are the simplest, most flexible way to connect virtualized server blades to data or storage networks. VC FlexFabric modules eliminate up to 95% of network sprawl at the server edge with one device that converges traffic inside enclosures and directly connects to external LANs and SANs. Using Flex-10 technology with Fibre Channel over Ethernet and accelerated iSCSI, these modules converge traffic over high speed 10Gb connections to servers with HP FlexFabric Adapters (HP NC551i or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapters or HP NC553i 10Gb 2-port FlexFabric Converged Network Adapter). Each redundant pair of Virtual Connect FlexFabric modules provides 8 adjustable connections (six Ethernet and two Fibre Channel, or six Ethernet and two iSCSI, or eight Ethernet) to dual-port 10Gb FlexFabric Adapters. VC FlexFabric modules avoid the confusion of traditional and other converged network solutions by eliminating the need for multiple Ethernet and Fibre Channel switches, extension modules, cables and software licenses. Also, Virtual Connect wire-once connection management is built-in enabling server adds, moves and replacement in minutes instead of days or weeks.

[..]

  • 16 x 10Gb Ethernet downlinks to server blade NICs and FlexFabric Adapters
  • Each 10Gb downlink supports up to 3 FlexNICs and 1 FlexHBA or 4 FlexNICs
  • Each FlexHBA can be configured to transport either Fiber Channel over Ethernet/CEE or Accelerated iSCSI protocol.
  • Each FlexNIC and FlexHBA is recognized by the server as a PCI-e physical function device with adjustable speeds from 100Mb to 10Gb in 100Mb increments when connected to a HP NC553i 10Gb 2-port FlexFabric Converged Network Adapter or any Flex-10 NIC and from 1Gb to 10Gb in 100Mb increments when connected to a NC551i Dual Port FlexFabric 10Gb Converged Network Adapter or NC551m Dual Port FlexFabric 10Gb Converged Network Adapter
  • 4 SFP+ external uplink ports configurable as either 10Gb Ethernet or 2/4/8Gb auto-negotiating Fibre Channel connections to external LAN or SAN switches
  • 4 SFP+ external uplink ports configurable as 1/10Gb auto-negotiating Ethernet connected to external LAN switches
  • 8 x 10Gb SR, LR fiber and copper SFP+ uplink ports (4 ports also support 10Gb LRM fiber SFP+)
  • Extended list of direct attach copper cable connections supported
  • 2 x 10Gb shared internal cross connects for redundancy and stacking
  • HBA aggregation on FC configured uplink ports using ANSI T11 standards-based N_Port ID Virtualization (NPIV) technology
  • Allows up to 255 virtual machines running on the same physical server to access separate storage resources
  • Up to 128 VLANs supported per Shared Uplink Set
  • Low latency (1.2 µs Ethernet ports and 1.7 µs Enet/Fibre Channel ports) throughput provides switch-like performance.
  • Line Rate, full-duplex 240Gbps bridging fabric
  • MTU up to 9216 Bytes – Jumbo Frames
  • Configurable up to 8192 MAC addresses and 1000 IGMP groups
  • VLAN Tagging, Pass-Thru and Link Aggregation supported on all uplinks
  • Stack multiple Virtual Connect FlexFabric modules with other VC FlexFabric, VC Flex-10 or VC Ethernet Modules across up to 4 BladeSystem enclosures allowing any server Ethernet port to connect to any Ethernet uplink

Management

  • Pre-configure server I/O configurations prior to server installation for easy deployment
  • Move, add, or change server network connections on the fly without LAN and SAN administrator involvement
  • Supported by Virtual Connect Enterprise Manager (VCEM) v6.2 and higher for centralized connection and workload management for hundreds of Virtual Connect domains. Learn more at: www.hp.com/go/vcem
  • Integrated Virtual Connect Manager included with every module, providing out-of-the-box, secure HTTP and scriptable CLI interfaces for individual Virtual Connect domain configuration and management.
  • Configuration and setup consistent with VC Flex-10 and VC Fibre Channel Modules
  • Monitoring and management via industry standard SNMP v.1 and v.2
  • Role-based security for network and server administration with LDAP compatibility
  • Port error and Rx/Tx data statistics displayed via CLI
  • Port Mirroring on any uplink provides network troubleshooting support with Network Analyzers
  • IGMP Snooping optimizes network traffic and reduces bandwidth for multicast applications such as streaming applications
  • Recognizes and directs Server-Side VLAN tags
  • Transparent device to the LAN Manager and SAN Manager
  • Provisioned storage resource is associated directly to a specific virtual machine – even if the virtual server is re-allocated within the BladeSystem
  • Server-side NPIV removes the storage management constraint of a single physical HBA on a server blade
  • Does not add to SAN switch domains or require traditional SAN management
  • Centralized configuration of boot from iSCSI or Fibre Channel network storage via Virtual Connect Manager GUI and CLI
  • Remotely update Virtual Connect firmware on multiple modules using Virtual Connect Support Utility 1.5.0

Options

  • Virtual Connect Enterprise Manager (VCEM), provides a central console to manage network connections and workload mobility for thousands of servers across the datacenter
  • Optional HP 10Gb SFP+ SR, LR, and LRM modules and 10Gb SFP+ Copper cables in 0.5m, 1m, 3m, 5m, and 7m lengths
  • Optional HP 8 Gb SFP+ and 4 Gb SFP optical transceivers
  • Supports all Ethernet NICs and Converged Network adapters for BladeSystem c-Class server blades: HP NC551i 10Gb FlexFabric Converged Network Adapters, HP NC551m 10Gb FlexFabric Converged Network Adapters, 1/10Gb Server NICs including LOM and Mezzanine card options and the latest 10Gb KR NICs
  • Supports use with other VC modules within the same enclosure (VC Flex-10 Ethernet Module, VC 1/10Gb Ethernet Module, VC 4 and 8 Gb Fibre Channel Modules).

So in effect this allows you to cut down on the number of switches per chassis from four to two, which can save quite a bit. HP had a cool graphic showing the number of cables saved even against Cisco UCS, but I can’t seem to find it at the moment.

The most recently announced G7 blade servers have the new FlexFabric technology built in(which is also backwards compatible with Flex10).
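To make the FlexNIC/FlexHBA carving from the spec sheet above a bit more concrete, here is a toy sketch of the partitioning rules as I read them (up to four functions per 10Gb downlink, at most one FlexHBA, speeds adjustable in 100Mb increments). The function and the names are mine, and this obviously is not HP’s actual validation logic:

    # Toy check of one FlexFabric downlink layout: up to 4 PCIe functions
    # (3 FlexNICs + 1 FlexHBA, or 4 FlexNICs) sharing a 10Gb downlink,
    # each sized in 100Mb increments. Rules paraphrased from HP's specs;
    # note the real minimum speed depends on the adapter (100Mb vs 1Gb).

    DOWNLINK_MB = 10_000
    STEP_MB = 100

    def validate(funcs):
        """funcs: list of (name, speed_mb, is_hba) tuples for one downlink."""
        assert len(funcs) <= 4, "at most 4 functions per downlink"
        assert sum(1 for _, _, hba in funcs if hba) <= 1, "at most 1 FlexHBA"
        for name, speed, _ in funcs:
            assert speed % STEP_MB == 0, f"{name}: speed must be a 100Mb multiple"
            assert 0 < speed <= DOWNLINK_MB, f"{name}: speed out of range"
        assert sum(s for _, s, _ in funcs) <= DOWNLINK_MB, "downlink oversubscribed"

    # Example: three NICs for VM, vMotion and management traffic,
    # plus an FCoE FlexHBA, all fitting in one 10Gb downlink.
    validate([("vm_net", 4_000, False), ("vmotion", 2_000, False),
              ("mgmt", 500, False), ("fcoe_hba", 3_500, True)])
    print("layout fits in one 10Gb downlink")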

VCEM seems pretty scalable

Built on the Virtual Connect architecture integrated into every BladeSystem c-Class enclosure, VCEM provides a central console to administer network address assignments, perform group-based configuration management, and rapidly deploy, move and fail over server connections for 250 Virtual Connect domains (up to 1,000 BladeSystem enclosures and 16,000 blade servers).

With each enclosure consuming roughly 5kW (with low-voltage memory and power capping), 1,000 enclosures should consume roughly 5 megawatts. From what I see, “experts” say it costs roughly ~$18 million per megawatt to build a data center, so one VCEM system can manage a $90 million data center. That’s pretty bad ass. I can’t think of who would need so many blades.
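Checking my own math there, with the same rough figures:

    # Napkin math for a maxed-out VCEM domain, using the rough figures
    # above (5kW per enclosure, ~$18M per megawatt, 16 blades/enclosure).

    enclosures = 1_000
    kw_per_enclosure = 5
    cost_per_mw = 18e6

    total_mw = enclosures * kw_per_enclosure / 1_000
    print(f"{enclosures:,} enclosures -> ~{total_mw:.0f} MW")               # ~5 MW
    print(f"data center build-out: ~${total_mw * cost_per_mw / 1e6:.0f}M")  # ~$90M
    print(f"blade servers managed: {enclosures * 16:,}")                    # 16,000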

If I were building a new system today I would probably get this new module, but I would have to think hard about sticking with the regular Fibre Channel module, to let the technology bake a bit more on the storage side.

The module is based on QLogic technology.

Solaris reboot

Filed under: News — Nate @ 4:12 pm

Most everyone saw it coming, but I suppose it’s more ‘official’ now, from The Register:

The OpenSolaris board has suspended operations and symbolically handed all responsibility for the open variant of Solaris back to database giant Oracle.
[..]
Turns out now, instead of OpenSolaris being coded well ahead of the commercial Solaris, the only open source version of any future Solaris stack will come after the commercial product.

While I don’t recall what the license was, I do remember ordering copies of Solaris source code about 8-10 years ago for the company I was at (they were developing apps that ran on, among other things, Solaris).

Too bad OpenSolaris never really got off the ground. It was pretty close: apparently only a few things were left to be open sourced, including libc (I think; pretty critical).

While I did not like the userland tools for Solaris (and really hated patch management under Solaris 7 and 8; I don’t recall 9, and never really used 10), the kernel was very impressive and solid. It would have been nice to have seen a Debian kSolaris distribution along the lines of Debian kFreeBSD.
