TechOpsGuys.com Diggin' technology every day

October 25, 2011

Netflix’s Buzzsaw

Filed under: Random Thought — Tags: — Nate @ 6:39 am

The blurb I just heard on CNBC, which was kind of funny: was anyone willing to go long into that buzzsaw last night?

I know I said it’d be a while since I wrote about Netflix but I have written some other things since. The situation is just so funny I wanted to laugh in public with their stock down to $76 in pre-market trading from about $118 when the market closed yesterday (off original highs of $280 in early July).

Again from CNBC, headlines from a few of the analyst reports:

  • Broken story
  • Unsustainable model
  • Nuclear winter
| Name of Firm | Price target (today) | Price target (yesterday) |
|---|---|---|
| Susquehanna | $60 | $124 |
| Goldman Sachs | $75 | $200 |
| Citi | $95 | $220 |
| Barclays | $125 | $260 |
| JP Morgan | $67 | $205 |

The boldest call on Netflix came from Janney Capital, with a target of $51.

For some reason I was thinking of it more along the lines of a wheel falling off a car going 60 MPH down the highway. It was a momentum stock, after all, right?

The main story seems to be the loss of subscribers, and not being profitable over the next year or so due to costs associated with expansion into other markets. They lost more than 800,000 subscribers in the quarter, myself being one of them.

“What we are seeing is a second wave of cancellations from the pricing increase,” said Chief Executive Reed Hastings, during a conference call with analysts.

Oh, and this is AFTER they reversed course on the whole Qwikster thing. I would love to see what Netflix management is talking about now; let me get some popcorn first though.

A guy from CNET says Netflix actually had a net swing of roughly 1.8M subscribers for the quarter, since Netflix typically adds 1M subscribers in a quarter and this quarter they lost 800k. So 800k lost + 1M not gained = 1.8M, on a service that has fewer than 24M subscribers.
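To put that swing in perspective, here is the back-of-envelope math in a few lines of Python (all figures are the rough ones quoted above):

expected_adds = 1_000_000   # subscribers Netflix typically adds in a quarter
actual_change = -800_000    # reported net loss this quarter
base = 24_000_000           # rough size of the subscriber base

swing = expected_adds - actual_change
print(f"{swing:,} subscriber swing vs a typical quarter")  # 1,800,000
print(f"{swing / base:.1%} of the base")                   # ~7.5%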

Hey Netflix, I'll probably be willing to come back (and pay more) if you increase your content catalog by at least, say, 500%.

Content certainly seems to win over distribution in this story.

(I’m not an investor in anything, situations like this are why I like to watch CNBC – it’s so funny for some reason)

October 23, 2011

That was fast

Filed under: General — Tags: — Nate @ 8:58 pm

This isn’t really what I expected when I said Sprint would likely take away it’s unlimited data plans, in fact I explicitly thought they would not touch their 4G service because iPhones don’t do Wimax, and I figure it’s fairly uncommon for iPhone people to get Mifis.

Sprint Nextel is ending unlimited data plans for all devices except smartphones, bringing the era of all-you-can-eat mobile data in the U.S. nearer to a close.

[..]

However, Sprint will continue to offer its plans for unlimited data use on phones, including on the Apple iPhone, which Sprint introduced just last week.

I wonder what prompted this move; it's not as if you can incur roaming charges on their 4G network. Is there even such a thing as roaming on WiMax? I'm sure it's theoretically possible, but I am quite confident it just doesn't happen to Sprint 4G/Clearwire customers. Sprint owns a majority of Clearwire, and has invested billions in it.

For Mobile Hotspot plans on phones, data usage will be capped at 5GB per month of either 3G or combined 3G/4G service, depending on whether the phone can use 4G. Use above that cap will be charged at $0.05 per megabyte.

It is a very unfortunate and surprising move; maybe the executive who came up with this idea came from Netflix.

Their excess charges are, well, excessive. AT&T, I believe, charges $10/GB for going over, which to me is reasonable; Sprint, at their $0.05/MB rate, would charge about $51 for 1GB over the limit.

The AT&T excess charge is very similar to just getting a higher plan, which was nice to see. I mean, when I signed up I got the 2GB plan, and if I happened to use 3-4GB, the charges for that would be no different than if I had signed up for a larger plan.
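To make the difference concrete, here is a quick Python sketch of the two overage schemes; the $0.05/MB and roughly $10/GB rates come from the text above, everything else is just arithmetic:

MB_PER_GB = 1024

def sprint_overage(gb_over):
    # $0.05 per megabyte beyond the 5GB cap
    return gb_over * MB_PER_GB * 0.05

def att_overage(gb_over):
    # roughly $10 per GB, i.e. about the same as stepping up a plan tier
    return gb_over * 10.0

for gb in (1, 2, 3):
    print(f"{gb}GB over: Sprint ${sprint_overage(gb):.2f} vs AT&T ${att_overage(gb):.2f}")
# 1GB over: Sprint $51.20 vs AT&T $10.00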

It would be less bad if the devices had an automatic cutoff once you hit the limit, but of course they don't. My Mifi is nice enough to tell me how much data I use in a particular session but has no aggregation features. I've always found it strange that phones and other devices don't have functionality to keep track of that kind of stuff; I suppose it could be intentional, and if it were, it would be even more unfortunate.

Luckily for me this change won't impact me much at all, I don't believe, but still, very unfortunate all round.

I came across a post from someone who has kept much closer track as to the changes Sprint has implemented than I have:

  • Removal of the Sprint Premier Membership Program and all of its benefits
  • Using your phone as a Mobile Hotspot no longer has unlimited data but is now capped. It still costs the same $30.
  • Adding a $10 a month 4G charge to every 4G line on an account regardless of whether you get 4G reception or not. This charge was then expanded to include all smartphones on the Sprint network, even if they weren't capable of 4G.
  • No more Billing to Account.
  • An increase in administrative fees per line.
  • Raising the Early Termination Fee on an account by $150 to $350 for each phone line.
  • Changing the arbitration rules for settling customer disputes in a way that heavily favors Sprint.
  • Stopping people from leaving Sprint because of the arbitration changes without being charged the ETF, even though Federal Courts have ruled that changes in arbitration rules are a material change in the contract.
  • Eliminating unlimited 4G data from its Mobile Broadband plans.
  • Dropping WiMax for their new LTE 4G network. This not only means that if you do not have 4G currently, you will never have it on your current 4G phone, but also that all Sprint 4G phones being sold today, even if you are within a current 4G area, will stop operating as 4G at the end of next year because they will not work on Sprint's new network.

Sprint, you're not doing me any favors encouraging me to use your service, after charging what seemed to be a $50 fee to terminate my phone service even though I had not been on a contract for over a year (and had been a customer for 10 years), and now eliminating my unlimited data on my remaining service with you. I recently purchased an AT&T 3G/4G mifi from dailysteals for $60. It's unlocked; I thought I'd keep it handy as a backup, but now it looks like it may be a full-time replacement for my Mifi when my contract is up next year.

Someone else mentioned a federal law saying that with such a material change in the contract, existing users have the legal right to terminate without penalty. I may just do that. Even if I have to return the Mifi I have, it's not as if I need it anyway.

Trip to Seattle: Nov 18th – 21st

Filed under: Random Thought — Tags: — Nate @ 8:52 am

Hey folks! I just had an idea. For my birthday I am going to come back to the Seattle area for a few days to see friends and hit my favorite places. I’m planning to arrive on Friday the 18th in time to hit Pecos, then Cowgirls on Fri night, and probably Saturday night too. Then leave on Monday the 21st.

Things could change, I could stay longer as my company has a satellite office in Seattle, but for the moment this is my current idea.

I booked a hotel there already, seemed like a really good rate and a pretty nice place.

Embassy Suites Seattle – Bellevue – 3225 158th Avenue SE, Bellevue, Washington,

October 22, 2011

IBM posts XIV SPC-2 results

Filed under: Storage — Tags: , , — Nate @ 8:35 pm

[UPDATED – as usual I re-read my posts probably 30 times after I post them and refine them a bit if needed; this one got quite a few changes. I don't run a newspaper here, so I don't aim to have a completely thought-out article when I hit post for the first time]

IBM finally came out and posted some SPC-2 results for their XIV platform, which is better than nothing but unfortunately they did not post SPC-1 results.

SPC-2 is a sequential throughput test, geared more towards things like streaming media and data warehousing than random I/O, which represents a more typical workload.

The numbers are certainly very impressive though, coming in at 7.3 gigabytes/second, besting most other systems out there at 42 megabytes/second per disk. IBM's earlier high-end storage array was only able to eke out 12 megabytes/second per disk (with 4 times the number of disks), using disks that were twice as fast. So: at least 8 times the I/O capacity for only about 25% more performance vs XIV. That's a stark contrast!
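The per-disk figure is easy to sanity-check from the published aggregate (a rough sketch in Python, taking a GB as 1024MB):

aggregate_mb_s = 7.3 * 1024   # ~7.3 GB/s of SPC-2 throughput
disks = 180
print(f"{aggregate_mb_s / disks:.0f} MB/s per disk")  # ~42 MB/s per disk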

SATA/Nearline/7200RPM SAS disks are typically viewed as good at sequential operations, though I would expect 15k RPM disks to do at least as well, since the faster spindle speed should result in more data traveling under the head at a faster rate; perhaps this is a sign of a good architecture in XIV with its distributed mirrored RAID.

While the results are quite good, again they don't represent the most common type of workload out there, which is random I/O.

The $1.1M discounted price of the system seems quite high for something that has only 180 disks in it (discounts on the system seem, for the most part, to be around 70%), though there is more than 300 gigabytes of cache. I bought a 2-node 3PAR T400 with 200 SATA disks shortly after the T was released in 2008 for significantly less; of course it only had 24GB of data cache!

I hope the $300 modem IBM is using (after the 70% discount) is a USR Courier! (Your Price: $264.99 still leaves a good profit for IBM.) Such fond memories of the Courier.

I can only assume the reason IBM has refrained from posting SPC-1 results at this point is that with a SATA-only system the results would not be impressive. In a fantasy world, with nearline disks and a massive 300GB cache, maybe they could achieve 200-250 IOPS/disk, which would put the $1.1M, 180-disk system at 36,000-45,000 SPC-1 IOPS, or $24-30/IOP.

A more realistic number is probably 25,000 or less ($44/IOP), making it one of the most expensive systems out there for I/O (even if it could score 45,000 SPC-1). By contrast, 3PAR would do 14,000 IOPS (not SPC-1 IOPS mind you; the SPC-1 number would probably be lower) with 180 SATA disks and RAID 10, based on their I/O calculator with an 80% read/20% write workload, for about 50% less cost (after discounts) on a 4-node F400.
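For what it's worth, here is the guesswork above expressed as a few lines of Python; the $1.1M price and 180-disk count come from the disclosure, while the per-disk IOPS figures are pure speculation on my part:

price = 1_100_000
disks = 180

# 250 and 200 IOPS/disk bound the fantasy case; ~139 IOPS/disk is the
# "more realistic" ~25,000 SPC-1 IOPS case.
for iops_per_disk in (250, 200, 139):
    total = disks * iops_per_disk
    print(f"{total:>6,} SPC-1 IOPS -> ${price / total:.0f}/IOP")
# 45,000 -> $24/IOP, 36,000 -> $31/IOP, 25,020 -> $44/IOP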

One of the weak spots on 3PAR is addressable capacity per controller pair. For I/O and disk-connectivity purposes a 2-node F200 (much cheaper) could easily handle 180 2TB SATA disks, but from a software perspective that is not the case. I have been complaining about this for more than 3 years now. They've finally addressed it to some extent in the V-class, but I am still disappointed with the extent to which it has been addressed per the supported limits that exist today (1.6PB; it should be more than double that). At least with the V they have enough memory on the box to scale it up with software upgrades (time will tell whether such upgrades come about, however).

If it were me I would not even use an F400 for this, opting instead for a T800 (800TB) or a V-class (800-1600TB), because 360TB raw on the system is very close to the limit of the F400's addressable capacity (384TB), or the T400's (400TB). You could of course get a 4-node T800 (or a 2-node V400 or V800) to start, then add additional controllers to get beyond 400TB of capacity if/when the need arises. With the 4-controller design you also get the wonderful persistent cache feature built in (one of the rare software features that is not separately licensed).

But for this case, comparing a nearly maxed-out F400 against a maxed-out XIV is still fair – this is one of the main reasons I did not consider XIV during my last couple of storage purchases.

So these results do show a strong use case for XIV – throughput-oriented workloads! The XIV would absolutely destroy the F400 in throughput; the F400 tops out at 2.6GB/sec (to disk).

With software such as Vertica out there, which slashes the need for disk I/O on data warehouses given its advanced design, and systems such as Isilon being so geared towards things like scale-out media serving (NFS seems like a more ideal protocol for media serving anyway), I can't help but wonder what XIV's place is in the market, at this price point at least. It does seem like a very nice platform from a software perspective, and with their recent switch from 1 Gigabit Ethernet to InfiniBand a good part of their hardware has been improved as well; it also has SSD read cache coming.

I will say though that this XIV system will handily beat even a high-end 3PAR T800 for throughput. While 3PAR has never released SPC-2 numbers, the T800 tops out at 6.4 gigabytes/second (from disk), and it's quite likely its SPC-2 results would be lower than that.

With the 3PAR architecture being as optimized as it is for random I/O, I do believe it would suffer vs other platforms on sequential I/O. Not that the 3PAR would run slow, but it would quite likely run slower due to how data is distributed on the system. That is just speculation though, a result of not having real numbers to base it on. My own production random I/O workloads in the past have had 15k RPM disks running in the range of 3-4MB/second (numbers are extrapolated, as I have only had SATA and 10k RPM disks in my 3PAR arrays to date, though my new one that is coming is 15k RPM). As such, with a random I/O workload you can scale up pretty high before you run into any throughput limits on the system (in fact, if you max out a T800 with 1,280 drives you could do as much as 5MB/second/disk before you would hit the limit). Though XIV is distributed RAID too, so who knows..
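That 5MB/second/disk figure falls straight out of the published throughput ceiling (a sketch using the numbers above):

t800_throughput_mb_s = 6.4 * 1024   # ~6.4 GB/s to disk on a maxed-out T800
max_disks = 1280
print(f"{t800_throughput_mb_s / max_disks:.1f} MB/s per disk at the ceiling")  # ~5.1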

Likewise, I suspect 3PAR/HP have not released SPC-2 numbers because they would not reflect the system in the most positive light, unlike SPC-1.

Sorry for the tangents on 3PAR  🙂

October 19, 2011

HP Storage strategy – some hits, some misses

Filed under: Random Thought,Storage — Tags: , — Nate @ 9:24 pm

[UPDATED – Some minor updates since my original post] I was at an Executive Briefing by HP, given by HP's head of storage and former 3PAR CEO David Scott. I suppose this is one good reason to be in the Bay Area – events like this didn't really happen in Seattle.

I really didn’t know what to expect, I was looking forward to seeing David speak as I had not heard him before, his accent, oddly enough surprised me.

He covered a lot of topics. It was clear, of course, that he was more passionate about 3PAR than about Lefthand, Ibrix, XP or EVA; not surprising.

The meeting was not technical enough to get any of my previously mentioned questions answered; it seemed very geared towards the PHB crowd.

HP Storage hits

3PAR

One word: Duh. The crown jewel of the HP storage strategy.

He emphasized the 3PAR architecture over and over: how it's the platform powering 7 of the top 10 clouds out there, the design that lets them handle unpredictable workloads – you know this stuff by now, I don't need to repeat it.

David pointed out an interesting tidbit with regards to their latest SPC-1 announcement: he compared the I/O performance and cost per usable TB of the V800 to a Texas Memory Systems all-flash array that was tested earlier this year.

The V800 outperformed the TMS system by 50,000 IOPS, and came in at a cost per usable TB of only about $13,000 vs $50,000/TB for the TMS box.

Cost per I/O, which he did not mention, certainly favored the TMS system ($1.05), but the comparison was still a good one I thought – we can give you the performance of flash and still give you a metric ton of disk space at the same time. Well, if you want to get technical, I guesstimate the fully loaded V800 weighs in at 13,160 pounds, or about 6 metric tons.
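Working backwards from the quoted figures gives a rough idea of what the comparison looked like (hedged: these are my inferences from the numbers above, not figures from the presentation):

v800_iops = 450_212              # V800 SPC-1 result
tms_iops = v800_iops - 50_000    # "outperformed ... by 50,000 IOPS"
tms_cost_per_iop = 1.05
print(f"implied TMS price: ~${tms_iops * tms_cost_per_iop:,.0f}")  # ~$420k
# The all-flash box wins handily on $/IOP; the V800 wins on $/usable TB
# ($13k vs $50k), which was the point being made.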

Of course flash certainly has its use cases; if you don't have a lot of data it doesn't make sense to invest in 2,000 spinning rust buckets to get to 450,000 IOPS.

Peer Motion

Peer Motion is both a hit and a miss. A hit because it's a really neat technology – the ability to non-disruptively migrate between storage systems without 3rd-party appliances; the miss, well, I'll talk about that below.

He compared Peer Motion to the likes of Hitachi's USP/VSP, IBM's SVC, and EMC's VPLEX, which are all expensive, complicated bolt-on solutions. Seemed reasonable.

Lefthand VSA

It’s a good concept, and it’s nice to see it as a supported platform. David mentioned that Vmware themselves originally tried to acquire Lefthand (or wanted to acquire I don’t know if anything official was made) because they saw the value in the technology – and of course recently Vmware introduced something kinda-sorta-similar to the Lefthand VSA in vSphere 5. Though it seems not quite as flexible or as scalable.

I’m not sure I see much value in the P4000 appliances by contrast, I hear that doing RAID 5 or worse yet RAID 6 on P4000 is just asking for a whole lotta pain.

StoreOnce De-duplication

It sounds like it has a lot of promise; I'll put it in the hit column for now, as it's still a young technology and it'll take time to see where it goes. But the basic concept is a single de-duplication technology for all of your data. He contrasted this with EMC's strategy, for example, where they have client-side de-dupe with their software backup products, inline de-dupe with Data Domain, and primary storage de-dupe – none of which are compatible with each other. Who knows, by the time HP gets it right with StoreOnce maybe EMC and others will get it right too.

I’m still not sold myself on the advantages of dedupe outside of things like backups, and things like VDI. I’ve sat through what seems like a dozen NetApp presentations on the topic so I have had the marketing shoved down my neck many times. I came to this realization a few years ago during an eval test of some data domain gear, I’ll be honest and admit I did not fully comprehend the technicals behind de-duplication at the time and I expected pretty good results from feeding it tens of gigabytes of uncompressed text data. But turns out I was wrong and realized why I was under an incorrect assumption to begin with.

Now data compression, on the other hand, is another animal entirely. Being able to support inline data compression without suffering much or any I/O hit would really be nice to see (I am aware there are one or more vendors out there that offer this technology now).

HP Storage Misses

Nobody is perfect, HP and 3PAR are no exception no matter how much I may sing praises for them here.

Peer Motion

When I first heard about this technology being available on both the P4000 and 3PAR platforms, I assumed it was compatible between the two, meaning you could Peer Motion data to/from P4000 and 3PAR. One of my friends at 3PAR clarified a few weeks ago that this is not the case, and David Scott mentioned that again today.

He tried to justify it by comparing it to vSphere vMotion, where you can't do a vMotion between a vSphere system and a Hyper-V system. He could have gone deeper and said you can't do vMotion even between vSphere hosts if the CPUs are not compatible; that would have been a better example.

So he said that most federation technologies are usually homogeneous in nature, and you should not expect to be able to peer motion from a HP P4000 to a HP 3PAR system.

Where HP’s argument kind of falls apart here is that the bolt on solutions he referred to as inferior previously do have the ability to migrate data between systems that are not the same. It may be ugly, it may be kludgey, but it can work. Hitachi even lists 3PAR F400, S400 and T800 as supported platforms behind the USP. IBM lists 3PAR and HP storage behind their SVC.

So, what I want from HP is the ability to do Peer Motion between at least all of their higher-end storage platforms (I can understand if they never have Peer Motion on the P2000/MSA, since it's just a low-end box). I'm not willing to accept any excuses, other than "sorry, we can't do it because it's too complicated". Don't tell me I shouldn't expect to have it; I fully expect to have it.

Just another random thought, but when I think of storage federation and homogeneous, I can't help but think of this scene from Star Trek VI:

GORKON: I offer a toast. …The undiscovered country, …the future.
ALL: The undiscovered country.
SPOCK: Hamlet, act three, scene one.
GORKON: You have not experienced Shakespeare until you have read him in the original Klingon.
CHANG: (in Klingonese) ‘To be or not to be.’
KERLA: Captain Kirk, I thought Romulan ale was illegal.
KIRK: One of the advantages of being a thousand light years from Federation headquarters.
McCOY: To you, Chancellor Gorkon, one of the architects of our future.
ALL: Chancellor!
SCOTT: Perhaps we are looking at something of that future here.
CHANG: Tell me, Captain Kirk, would you be willing to give up Starfleet?
SPOCK: I believe the Captain feels that Starfleet’s mission has always been one of peace.
CHANG: Ah.
KIRK: Far be it for me to dispute my first officer. Starfleet has always been…
CHANG: Come now, Captain, there’s no need to mince words. In space, all warriors are cold warriors.
UHURA: Er. General, are you fond of …Shakes ….peare?
CHEKOV: We do believe all planets have a sovereign claim to inalienable human rights.
AZETBUR: Inalien… If only you could hear yourselves? ‘Human rights.’ Why the very name is racist. The Federation is no more than a ‘homo sapiens’ only club.
CHANG: Present company excepted, of course.
KERLA: In any case, we know where this is leading. The annihilation of our culture.
McCOY: That’s not true!
KERLA: No!
McCOY: No!
CHANG: ‘To be, or not to be!’, that is the question which preoccupies our people, Captain Kirk. …We need breathing room.
KIRK: Earth, Hitler, nineteen thirty-eight.
CHANG: I beg your pardon?
GORKON: Well, …I see we have a long way to go.

For the most basic workloads it's not such a big deal if you have vSphere and Storage vMotion (or some other similar technology). You cannot fully compare Storage vMotion with Peer Motion, but for offering the basic ability to move data live between different storage platforms it does (mostly) work.

HP Scale-out NAS (X9000)

I want this to be successful, I really do, because I like to use 3PAR disks and there just aren't many NAS options out there these days that are compatible. I'm not a big fan of NetApp; I very reluctantly bought a V3160 cluster to try to replace an Exanet cluster on my last 3PAR box because, well, Exanet kicked the bucket (not the product we had installed but the company itself). I left the company not long after that, and barely a year later the company is already going to abandon NetApp and go with the X9000 (of all things!). Meanwhile their unsupported Exanet cluster keeps chugging along.

Back to X9000. It sounds like a halfway decent product. They say the main thing they lacked was snapshot support, and that is there now (or will be soon); it is kind of strange that Ibrix has been around for as long as it has and did not have file system snapshots till now. I really have not heard much about Ibrix from anyone other than HP, who obviously sings the praises of the product.

I am still somewhat bitter about 3PAR not buying Exanet when they had the chance; Exanet is a far better technology than Ibrix. Exanet was sold for, if I remember right, $12 million – a drop in the bucket. Exanet had deals on the table (at the time) that would each have brought in more than $12 million in revenue alone. Multi-petabyte deals. Here is the Exanet architecture (and file system) as it stood in 2005 – in my opinion, very similar to the 3PAR approach (completely distributed, clustered design – Exanet treats files like 3PAR treats chunklets), except Exanet did not have any special ASICs; everything was done in software. Exanet had more work to do on their product, it was far from perfect, but it had a pretty solid base to build upon.

So, given that, I do hope X9000 does well. I mean, my needs are not that great. What I'd really like to see is a low-end VSA for the X9000 along the lines of their P4000 iSCSI VSA, just for simple file storage in an HA fashion. I don't need to push 30 gigabits/second; I just want something that is HA, has decent performance and is easy to manage.

Legacy storage systems (EVA especially)

Let it die already. HP has been adamant they will continue to support and develop the EVA platform for their legacy customers. That sort of boggles my mind. Why waste money on that dead-end platform? Use the money to give discounted incentives to upgrade to 3PAR when the time comes. I can understand supporting existing installs with bug-fix upgrades, but don't waste money on bringing out whole new revisions of hardware and software for this dead-end product. David said so himself – supporting the install base of EVA and XP is supporting their 11% market share; for the other 89% of the market that they don't have, they are going to push 3PAR/Lefthand/Ibrix.

I would find a way to craft a message to that 11% install base – subliminal messaging (a la Max Headroom, not sure why that came to my head) – to make them want to upgrade to a 3PAR box, not the next EVA system.

XP/P9500 I can kinda-sorta see keeping around; I mean, there are some things it is good at that even 3PAR can't do today. But the market for such things is really tiny, and shrinking all the time. Maybe HP doesn't put much effort into this platform because it is OEM'd from Hitachi, in which case it doesn't cost a lot to re-sell, and it doesn't make a big difference whether they keep selling it or stop. I don't know.

I can just see what goes through a typical 3PAR SE's mind (at least those who were present before HP acquired 3PAR) when they are faced with selling an EVA. If the deal closes, perhaps they scream NOooooooooooooooooooo like Darth Vader in Return of the Jedi. Sure, they'd rather have the customer buy HP than go buy a Clariion or something. But throw these guys a bone: kill EVA and use the money to discount 3PAR more in the marketplace.

P2000/MSA – gotta keep that stuff, there will probably always be some market for DAS

Insights

I had the opportunity to ask some high-level questions of David and got some interesting responses; he seems like a cool guy.

3PAR Competition

David harped a lot on how other storage architectures from the big manufacturers were designed 15-20 years ago. I asked him: given that 3PAR technology is 10+ years old at this point, why does he think these other manufacturers haven't copied it to some degree? It has obvious technological advantages; it just baffles me why others haven't copied it.

His answer came down to a couple of things. The main point was that 3PAR was basically just lucky. They were in the right place, at the right time, with the right product. They successfully navigated the tech recession when at least two other utility storage startups could not and folded (I forget their names; I'm terrible with names). He said the big companies pulled back on R&D spending as a result of the recession and as such didn't innovate as much in this area, which left a window of opportunity for 3PAR.

He also mentioned two other companies that were founded at about the same time to address the same market – utility computing. He mentioned Vmware as one of them; the other was the inventor of the first blade system, whose name I forget. Vmware I think I have to dispute though. I seem to recall Vmware "stumbling" into the server market by accident rather than targeting it directly. I mean, I remember using Vmware before it was even Vmware Workstation or GSX – it was just a program used to run another OS on top of Linux (the only OS Vmware ran on at the time). I recall reading that the whole server virtualization movement came way later and caught Vmware off guard, as much as it caught everyone else off guard.

He also gave an example in EMC and their VMAX product line. He said that EMC misunderstood what the 3PAR architecture was about, in that they thought it was just a basic cluster design, so EMC re-worked their system to be a cluster – the result is VMAX. But it still falls short in several design aspects; EMC wasn't paying attention.

I was kind of underwhelmed when the VMAX was announced. I mean, sure, it is big, and bad, and expensive, but they didn't seem to do anything really revolutionary in it. Same goes for the Hitachi VSP. I fully expected both to do at least some sort of sub-disk distributed RAID. But they didn't.

Utilizing flash as a real time cache

David harped a lot on 3PAR's ability to respond to unpredictable workloads. This is true, I've seen it time and time again; it's one reason why I really don't want to use any other storage platform at this point in time, given the opportunity.

Something I thought really was innovative out of EMC in the past year or two is their flash cache product (I think that's the right name) – the ability to use high-speed flash as both a read and a write cache, and to bulk the cache levels up into multiple terabytes for big cloud operations.

His response was: we already do that, with RAM cache. I clarified a bit more, saying I meant scaling out the cache even further with flash, well beyond what you can do with RAM. He kind of ducked the question, saying it was a bit too technical/architectural for the crowd in the room. 3PAR needs to have this technology. My key point to him was that 3PAR tools like Adaptive Optimization and Dynamic Optimization are great tools – but they are not real time. I want something that is real time. He seemed to acknowledge that point – the lack of real-time behavior in the existing technologies is a weak point – and hopefully HP/3PAR addresses it soon in some form.

In my previous post, Gabriel commented on how the next-gen IBM XIV will be able to have up to 7.5TB of read cache via SSD. I know NetApp can have a couple TB worth of read cache in their higher-end boxes. As far as I know only EMC has the technology to do both read and write. I can't say how well it works, since I've never used it and know nobody that has this EMC gear, but it is a good technology to have, especially as flash matures more.

I just think how neat it would be to have, say, a 1.5-2PB system running SATA disks with an extra 100TB (2.5-5% of total storage) of flash cache on top of it.

Bringing storage intelligence to the application layer

Another question I asked him was about his thoughts on a broader industry trend, which seems to be trying to take the intelligence of storage out of the storage system and put it higher up in the stack. Given the advanced functionality of a 3PAR system, are they threatened at all by this? The examples I gave were Exchange and Oracle ASM.

He focused on Oracle, mentioning that Oracle was one of the original investors in 3PAR and as a result there was a lot of close collaboration between the two companies, including the development of the ASM technology itself.

He mentioned one of the VPs of Oracle – I forget if he was a key ASM architect or developer, but someone high up in the storage strategy involving ASM. In the early days this guy was very gung-ho, absolutely convinced that running the world on DAS with ASM was the way to go. Don't waste your money on enterprise storage; we can do it higher in the stack, you can use cheap storage and save yourself a lot of money.

David said that once this Oracle guy saw 3PAR storage powering an Oracle system, he changed his mind; he no longer believed that DAS was the future.

The key point David was trying to make was: bringing storage intelligence higher up in the stack is OK if your underlying storage sucks. But if you have a good storage system, you can't really match that functionality/performance that high up in the stack, and it's not worth considering.

Whether he is right or not is another question; for me I think it depends on the situation, but any chance I get I will of course lean towards 3PAR for my back-end disk needs rather than use DAS.

In short – he does not feel threatened at all by this "trend". Though if HP is unwilling or unable to get Peer Motion working between their own products while things like Storage vMotion and Oracle ASM can do this higher up in the stack, there certainly is a case for storage intelligence at the application layer.

Best of Breed

David also seemed to harp a lot on best of breed. He knocked his competitors for having a mishmash of technologies – specifically, he said they have market-leading technologies instead of best-of-breed ones. Early in his presentation he touted HP's market-leading position in servers, and their #2 position in networking (you could say that is market leading).

He also tried to justify that the HP integrated cloud stack comprises best-of-breed technologies; it just happens that two out of the three are considered market leading – no coincidence there.

Best of breed is really a perception issue when you get down to it. Where do you assign value in the technology? Do you just want the cheapest you can get? The most advanced software? The fastest system? The most reliable? Ease of use? Interoperability? Flexibility? Buzzword compliance? A big name brand?

Because of that, many believe these vertically integrated stacks won't go very far. There will be some customer wins of course, but those will more often than not be wins based not on technology but on other factors: political (most likely), financial (buy from us and we'll finance it all, no questions asked), or maybe just the warm and fuzzy feeling incompetent CIOs get when they buy from a big name that says it will stand behind the products.

I did ask David what HP's stance is on a more open design for this "cloud" thing – not building a cloud based on a vertically integrated stack. His response was sort of what I expected: none of the other stack vendors are open, we aren't either, so we don't view it as an important point.

I was kind of sad that he never used the term 3cV; really, I think that was likely the first stack out there, and it wasn't even official – there was no big marketing or sales push behind it.

For me, my best-of-breed storage is 3PAR. It may have 1 or 2% market share (more now), so it surely is not market leading (it might make it there with HP behind it), but for my needs it's best of breed.

Switching, likewise: Extreme – maybe 1-1.5% market share, not market leading either, but for me, best of breed.

Fibre Channel – I like Qlogic. Probably not best of breed, certainly not market leading, at least for switches, but damn easy to use and it gets the job done for me. Ironically enough, while digging up links for this post I came across this, an article from 2009 suggesting Qlogic should buy Extreme. I somewhat fear the most likely company to buy Extreme at this point is Oracle. I hope Oracle does not buy them, but Oracle is trying to play the whole stack game too and they don't really have any in-house networking, unlike the other players. Maybe Oracle will jump on someone like Arista instead, at a cheaper price, a la Pillar.

Servers – I do like HP best, of course, for the enterprise space – they don't compete as well in scale-out though.

Vmware, on the other hand, happens to be in a somewhat unique position, being both the market leader and, for the most part, best of breed. Though others are rapidly closing the gap in the breed area, Vmware had many years with no competition.

Summary

All in all it was pretty good – a lot more formal than I was expecting. I saw 3 people I knew from 3PAR; I sort of expected to see more (I was fully expecting to see Marc Farley there! Where were you Marc!).

David did harp quite a bit on using Intel processors, something that Chuck from EMC likes to harp on too. I did not ask this question of David, because I think I know the answer. The question would be: does he think HP will migrate to Intel CPUs and away from their purpose-built ASIC? I think the answer to that question is no, at least not in the next few years (barring some revolution in general-purpose processors). With 3PAR's distributed design I'm just not sure how a general-purpose CPU could handle calculating the parity for what could be as many as half a million RAID arrays on a storage system like the V800 without the assistance of an ASIC or FPGA. I really do not like HP pushing the Intel brand so much, partner of Intel or not, at least with regards to 3PAR. Because really – the Intel CPU does not do much at all in the 3PAR boxes; it never has.

Just look at IBM XIV – they do distributed RAID, though mirroring only, and they top out at 180 disks, even with 60 2.4GHz Intel CPU cores (120 threads) and a combined 180MB of CPU cache. Mirroring is a fairly simple task vs doing parity calculations.

Frankly, I’d rather see an AMD processor in there, especially one with 12-16 cores. The chipsets that power the higher end Intel processors are fairly costly, vs AMD’s chipset scales down to 1 CPU socket without an issue. I look at a storage system that has dual or quad core CPUs in it and I think what a waste. Things may be different if the storage manufacturers included the latest & greatest Intel 8 and 10 core processors but thus-far I have not seen anything close to that.

David also mentioned a road Vmware is traveling: moving away from file systems to support VMs, towards a 1:1 relationship between LUNs and VMs, making life a lot more granular. He postulated (there's a word I've never used before) that this technology (I forget the name) will make protocols like NFS obsolete (at least when it comes to hosting VMs), which was a really interesting concept to me.

At the end of the day, for the types of storage systems I have managed and the types of companies I have worked for, I don't get enough bang for the buck out of many of the more advanced software technologies on these storage systems. Whether it is simple replication, space reclamation, Adaptive Optimization or other automatic storage tiering techniques (which I'm still not sold on), or even application-aware snapshots: sure, these are all nice features to have, but they are low on my priority list; I have other things I want to buy first. If money is no object – sure, load up on everything you've got! I feel the same way about Vmware and their software value-add strategy. Give me the basics and let me go from there – basics being a solid underlying system that is high performance, predictable and easy to manage.

There was a lot of talk about cloud and their integrated stacks and stuff like that, but that was about as interesting to me as sitting through a NetApp presentation. At least with most of the NetApp presentations I sat through I got some fancy steak to go with it; just some snacks at this HP event.

One more question I have for 3PAR – what the hell is your service processor running that requires 317W of power?! Is it using Intel technology circa 2004?

This actually ended up being a lot longer than I had originally anticipated, nearly 4200 words!

Linear scalability

Filed under: Storage — Tags: , — Nate @ 9:56 am

So 3PAR released their SPC-1 results for their Mac daddy P10000, and the results aren’t as high as I originally guessed they might be.

HP claims it is a world record result for a single system. I haven’t had the time yet to try to verify but they are probably right.

I’m going to a big HP/3PAR event later today and will ask my main question – was the performance constrained by the controllers or by the disks? I’m thinking disks, given the IOPS/disk numbers below.

Here’s some of the results

| System | Date Tested | SPC-1 IOPS | IOPS per Disk | SPC-1 Cost per IOP | SPC-1 Cost per usable TB |
|---|---|---|---|---|---|
| 3PAR V800 | 10/17/2011 | 450,212 | 234 | $6.59 | $12,900 |
| 3PAR F400 | 4/27/2009 | 93,050 | 242 | $5.89 | $20,308 |
| 3PAR T800 | 9/2/2008 | 224,989 | 175 | $9.30 | $26,885 |

The cost per TB number was slashed because they are using disks that are much larger (300GB vs 147GB on earlier tests).

The cost was pretty reasonable as well, coming in at under $7/IOP, which is actually less than their previous result on the T800 from 2008, which was already cheap at $9.30/IOP.

It is interesting that they used Windows to run the test, which is a first for them I believe, having used real Unix in the past (AIX and Solaris for the T800 and F400 respectively).

The one kind of strange thing, which is typical of 3PAR SPC-1 submissions, is the sheer number of volumes they used (almost 700). I'm not sure what the advantage of doing that would be; another question I will try to seek the answer to.

The system was, as expected, remarkably easy to configure; the entire storage configuration process consisted of this:

# Configure the host-facing FC ports on all 8 nodes: take each port
# offline, set it to point-to-point host mode, then reset it.
# Slot 1 uses ports 2 and 4; slots 4 and 7 use port 2 only.
for n in {0..7}; do
    for s in 1 4 7; do
        if (( s == 1 )); then
            for p in 4; do
                controlport offline -f $n:$s:$p
                controlport config host -ct point -f $n:$s:$p
                controlport rst -f $n:$s:$p
            done
        fi
        for p in 2; do
            controlport offline -f $n:$s:$p
            controlport config host -ct point -f $n:$s:$p
            controlport rst -f $n:$s:$p
        done
    done
done

# node:slot:port suffixes for the four host HBAs each node's volumes
# are exported across
PORTS[0]=":7:2"
PORTS[1]=":1:2"
PORTS[2]=":1:4"
PORTS[3]=":4:2"

# Per node: create a RAID 1 CPG using only that node's disks (-p -nd),
# then carve the three SPC-1 ASUs out of it and export each volume as
# a LUN on one of the four HBA ports.
for nd in {0..7}; do
    createcpg -t r1 -rs 120 -sdgs 120g -p -nd $nd cpgfc$nd

    for hba in {0..3}; do
        # ASU-1: 60 x 240GB volumes per node (15 per HBA)
        for i in {0..14}; do
            id=$((1+60*nd+15*hba+i))
            createvv -i $id cpgfc${nd} asu1.${id} 240g
            createvlun -f asu1.${id} $((15*nd+i+1)) ${nd}${PORTS[$hba]}
        done
        # ASU-3: 8 x 360GB volumes per node (2 per HBA)
        for i in {0..1}; do
            id=$((681+8*nd+2*hba+i))
            j=$((id-680))
            createvv -i $id cpgfc${nd} asu3.${j} 360g
            createvlun -f asu3.${j} $((2*nd+i+181)) ${nd}${PORTS[$hba]}
        done
        # ASU-2: 16 x 840GB volumes per node (4 per HBA)
        for i in {0..3}; do
            id=$((481+16*nd+4*hba+i))
            j=$((id-480))
            createvv -i $id cpgfc${nd} asu2.${j} 840g
            createvlun -f asu2.${j} $((4*nd+i+121)) ${nd}${PORTS[$hba]}
        done
    done
done

Think about that: a $3 million storage system (after discount) configured in about 50 lines of script?

Not a typical way to configure a system (I had to look at it a couple of times), but it seems they are still pinning volumes to particular controller pairs, and LUNs to particular FC ports. This is what they have done in the past, so it's nothing new, but I would like to see how the system runs without such pinning of resources, letting the inter-node routing do its magic, since that is how customers would run the system.

But that’s what full disclosure is all about right! Another reason I like the SPC-1, is the in depth configuration information that you don’t need an NDA to see(and in 3PAR’s case you probably don’t need to attend a 3-week training course to understand!)

I’m trying to think of one but I can’t think of another storage architecture out there that scales as well as the 3PAR Inspire architecture from the low end(F200) to the high end(V800).

The cost of the V800 was a lot more reasonable than I was fearing it might be. It's only roughly 45% more expensive than the T800 tested in September 2008, and for that extra 45% you get 50% more disks, double the I/O capacity, and almost three times the usable capacity. Oh, and five times more data cache and 8 times more control cache to boot!

I’m suspecting the ASICs are not being pushed to their limits here in the V800, and that the system can go quite a bit faster provided there is not a I/O bottleneck on the disks behind the controllers.

On the back of these numbers, The Register is reporting that HP is beefing up the 3PAR sales team after experiencing massive growth over the past year – at least a roughly 300% increase in sales, so much that they are having a hard time keeping up with demand.

I haven’t been happy with the hefty price increases HP has put into the 3PAR product line though in a lot of cases those come back out in the form of discounts. I guess it’s what the market will bear right – as long as things are selling as fast as they can make them HP doesn’t have any need to reduce the price.

I saw an interview with the chairman of HP a few weeks ago, when they announced their new CEO. He mentioned how 3PAR had significantly exceeded their sales expectations as justification for paying that lofty price to acquire them about a year ago.

So congrats to 3PAR, I knew you could do it!

October 18, 2011

Cisco’s new 10GbE push – a little HP and Dell too

Filed under: Networking — Tags: , , , , — Nate @ 7:56 pm

Just got done reading this from our friends at The Register.

More than anything else this caught my eye:

On the surface it looks pretty impressive. I mean, it would be interesting to see exactly how Cisco configured the competing products – as in, which 60 Juniper devices or 70 HP devices did they use, and how were they connected?

One thing that would have been interesting to call out in such a comparison is the number of logical devices needed for management. For example, I know Brocade's VDX product is some fancy way of connecting lots of devices, sort of like more traditional stacking just at a larger scale, for ease of management. I'm not sure whether the VDX technology extends to their chassis product, as Cisco's configuration above seems to imply using chassis switches. I believe Juniper's Qfabric is similar. I'm not sure if HP or Arista have such technology (I don't believe they do). I don't think Cisco does either – but they don't claim to need it with this big switch. So a big part of the question is managing so many devices versus managing just one. Cost of the hardware/software is one thing..

HP recently announced a revamp of their own 10GbE products, at least the 1U variety. I've been working off and on with HP people recently and there was a brief push to use HP networking equipment, but they gave up pretty quickly. They mentioned they were going to have "their version" of the 48-port 10-gig switch soon, but it turns out it's still a ways away – early next year is when it's supposed to ship. Even if I wanted it (which I don't), it's too late for this project.

I dug into their fact sheet, which was really light on information, to see what, if anything, stood out with these products. I did not see anything that stood out in a positive manner, but I did see this, which I thought was kind of amusing –

Industry-leading HP Intelligent Resilient Framework (IRF) technology radically simplifies the architecture of server access networks and enables massive scalability—this provides up to 300% higher scalability as compared to other ToR products in the market.

Correct me if I’m wrong – but that looks like what other vendors would call Stacking, or Virtual Chassis. An age-old technology, but the key point here was the up to 300% higher scalability. Another way of putting it is at least 50% less scalable – when your comparing it to the Extreme Networks Summit X670V(which is shipping I just ordered some).

The Summit X670 series is available in two models: Summit X670V and Summit X670. Summit X670V provides high density for 10 Gigabit Ethernet switching in a small 1RU form factor. The switch supports up to 64 ports in one system and 448 ports in a stacked system using high-speed SummitStack-V160*, which provides 160 Gbps throughput and distributed forwarding. The Summit X670 model provides up to 48 ports in one system and up to 352 ports in a stacked system using SummitStack-V longer distance (up to 40 km with 10GBASE-ER SFP+) stacking technology.

In short, it’s twice as scalable as the HP IRF feature, because it goes up to 8 devices (56x10GbE each), and HP’s goes up to 4 devices (48x10GbE each — or perhaps they can do 56 too with breakout cables since both switches have the same number of physical 10GbE and 40GbE ports).

The list price on the HP switches is WAY high too; The Register calls it out at $38,000 for a 24-port switch. The X670 from Extreme has a list price of about $25,000 for 48 ports (I see it on-line for as low as about $17k). There was no disclosure of HP's pricing for their 48-port switch.

Extreme has another 48-port switch which is cheaper (almost half the cost if I recall right – I see it on-line going for as low as $11,300), but it's for very specialized applications where latency is really important. If I recall right they removed the PHY (?) from the switch, which dramatically reduces functionality and introduces things like very short cable-length limits, but also slashes the latency (and cost). You wouldn't want to use those for your VMware setup (well, if you were really cost constrained these are probably better than some other alternatives, especially if you're deciding between this and 1GbE), but you may want them if you're doing HPC, or something with shared memory, or high-frequency stock trading (ugh!).

The X670 also has (or will have? I'll find out soon) a motion sensor on the front of the switch, which I thought was curious but seems like a neat security feature – being able to tell if someone is standing in front of your switch screwing with it. It also apparently has (or will have) the ability to turn off all of the LEDs on the switch when someone gets near it, and turn them back on when they go away.

(ok back on topic, Cisco!)

I looked at the Cisco slide above and thought to myself: really, can they be that far ahead? I certainly do not go out on a routine basis and check how many devices, and how much connectivity between them, I need to achieve X number of line-rate ports. I'll keep it simple: if you need a large number of line-rate ports, just use a chassis product (you may need a few of them). It is interesting to see though, assuming it's anywhere close to being accurate.

When I asked myself the question "Can they be that far ahead?" I wasn't thinking of Cisco – I think I'm up to 7 readers now, and you know me better than that! 🙂

I was thinking of the Extreme Networks Black Diamond X-Series which was announced (note not yet shipping…) a few months ago.

  • Cisco claims to do 768 x 10GbE ports in 25U (Extreme will do it in 14.5U)
  • Cisco claims to do 10W per 10GbE port (Extreme will do it in 5W/port)
  • Cisco claims to do it with 1 device.. well, that's hard to beat, but Extreme can meet them; it's hard to do it with less than one device.
  • Cisco's new top end taps out at a very respectable 550Gbit per slot (Extreme will do 1.2Tb)
  • Cisco claims to do it at a list price of $1200/port. I don't know what Extreme's pricing will be, but typically Cisco is on the very high end for costs.

Though I don’t know how Cisco gets to 768 ports, Extreme does it via 40GbE ports and breakout cables (as far as I know), so in reality the X-series is a 40GbE switch (and I think 40GbE only – to start with unless you use the break out cables to get to 10GbE).  It was a little over a year ago that Extreme was planning on shipping 40GbE at a cost of $1,000/port. Certainly the X-series is a different class of product than what they were talking about a while ago, but prices have also come down since.

The X-Series is shipping "real soon now". I'm sure if you ask them they'll tell you more specifics.

It is interesting to me, and kind of sad, how far Force10 has fallen in the 10GbE area. I mean, they seemed to basically build themselves on the back of 10GbE (or at least tried to), but I look at their current products on the very high end and, aside from the impressive little 40GbE switch they have, they seem to top out at 140 line-rate 10GbE ports in 21U. Dell will probably do well with them; I'm sure it'll be a welcome upgrade for those customers using Procurve, uh, I mean Powerconnect? That is what Dell call(ed) their switches, right?

As much as it pains me, I do have to give Dell some props for doing all of these acquisitions recently and beefing up their own technology base; whether it's in storage or networking they've come a long way (more so in storage; we need more time to tell in networking). I have not liked Dell myself for quite some time. A good chunk of it is because they really had no innovation, but part of it goes back to the days before Dell shipped AMD chips, when Dell was getting tons of kickbacks from Intel for staying an Intel-exclusive provider.

In the grand scheme of things such numbers don't mean a whole lot. I mean, how many networks in the world can actually push this kind of bandwidth? Outside of the labs I really think any organization would be very hard pressed to need such fabric capacity, but it's there – and it's not all that expensive.

I just dug up an old price list I had from Extreme, from late November 2005. A 6-port 10GbE module for their Black Diamond 10808 switch (I had two at the time) had a list price of $36,000. For you math buffs out there, that comes to $9,000 per line-rate port.

That particular product was oversubscribed (hence it not being $6,000/port), having a mere 40Gbps of switch fabric capacity per slot – enough for only 4 of the 6 ports at line rate – or a total of 320Gbps for the entire switch (it was marketed as a 1.2Tb switch, but hardware never came out to push the backplane to those levels; I had to dig into the depths of the documentation to find that little disclosure, and naturally I found it after I purchased. It didn't matter for us though; I'd be surprised if we pushed more than 5Gbps at any one point!). If I recall right the switch was 24U too. My switches were 1GbE only, for cost reasons 🙂
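The 2005 per-port math, spelled out (using the list price and fabric figures above):

module_price = 36_000          # 6-port 10GbE module, list
fabric_gbps_per_slot = 40      # only enough fabric for 4 ports at line rate
line_rate_ports = fabric_gbps_per_slot // 10
print(module_price // 6)                # $6,000 per physical port
print(module_price // line_rate_ports)  # $9,000 per line-rate port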

How far we’ve come..

October 7, 2011

Sprint makes drastic changes

Filed under: Random Thought — Tags: — Nate @ 6:08 pm

I’ve been a Sprint customer for about 11 years now, never really had an issue, well I had a billing issue in 2005 I think which was annoying to deal with, but besides that for the most part no issues over the years.

Sprint was known to have bad customer service for a while; fortunately I rarely had to deal with them. Sprint, like T-Mobile, has been bleeding customers in recent years for lack of the iPhone. Sprint of course jumped in head first with the new iPhone 4S – not too surprising I guess; it is unfortunate that the market has come to that though.

Sprint has committed to buy at least 30.5 million iPhones, even though it would likely lose money on the deal until 2014, according to people familiar with the matter.

What is more surprising (to me at least) is today's announcement that Sprint is going to more or less abandon WiMax and build out an LTE network. I haven't looked into the nitty-gritty details yet, but it seems like a risky move after having invested all this in WiMax, just to walk away and commit to spending $10+ billion to upgrade their existing network with LTE instead of enhancing WiMax even further. After all, they are the majority shareholder of Clearwire, a company that has its HQ not far from where I used to live.

One reason Sprint seems to give for the switch is that LTE will support more users – they expect to be able to support 250 million by 2013, vs 120 million on WiMax. For a company with 52 million customers, building a network to support 250 million seems kind of excessive.

I understand that LTE is more of a standard and will be better supported, but it's still a huge loss to have invested all that in WiMax and not get enough out of it to continue developing it. I see this article mentions Sprint spending $2.5 billion on WiMax through the end of 2008.

What was most surprising to me was not anything that hit the major headlines: Sprint is abandoning their customer loyalty program, Sprint Premier. I got the letter in the mail about a week ago.

What is next? I think Sprint's unlimited data plans will probably be the next to go. iPhone users use a lot of data – more than Sprint is expecting, I think. Verizon killed their unlimited data plan just months after they got the iPhone. This probably won't impact 4G WiMax service (at least as long as Sprint offers it); since iPhone users can't use 4G, they won't kill that network.

I recently switched off of Sprint for cell phone service onto AT&T so I could use my new GSM Pre3 handsets. I'm still a Sprint customer with my 3G/4G Mifi, at least until my contract is up next year.

I hope Sprint recovers, we really need them to stick around.

Sad times.

October 6, 2011

Fusion IO enhances MySQL performance further

Filed under: Storage — Tags: , , , — Nate @ 7:12 am

This seems pretty neat. Not long ago Fusion IO announced their first real product refresh in quite a while, which offers significantly enhanced performance.

Today I see another related article that goes into something more specific, from Data Center Knowledge

Fusion-io also announced a new extension to its VSL (Virtual Storage Layer) software subsystem for conducting Atomic Writes in the popular MySQL open source database. Atomic Writes are an operation in which a processor can simultaneously write multiple independent storage sectors as a single storage transaction. This accelerates mySQL and gives new features powered by the flexibility of sophisticated flash architectures. With the new Atomic Writes extension, Fusion-io testing has observed 35 percent more transactions per second and a 2.5x improvement in performance predictability compared to conducting the same MySQL tests without the Atomic Writes feature.

I know that Facebook is a massive user of Fusion IO for their MySQL database farms; I suspect this feature was made for them! Though it can benefit everyone.

My only question would be: can this atomic write capability be used by MySQL when running through the ESX storage layer, or does there need to be more native access from the OS?
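For context on why atomic writes help MySQL at all: InnoDB normally writes every page twice (the doublewrite buffer) to guard against torn pages, because a 16KB page spans many storage sectors that may not persist atomically on a crash. If the storage layer guarantees atomic multi-sector writes, that second write becomes redundant. A hedged sketch of the relevant knob – verify against your MySQL version and Fusion-io's documentation before relying on it:

# my.cnf fragment (illustrative only): disable InnoDB's doublewrite buffer
# when, and only when, the device guarantees atomic page writes.
[mysqld]
innodb_doublewrite = 0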

About the new product lines, from The Register

The ioDrive 2 comes in SLC form with capacities of 400GB and 600GB. It can deliver 450,000 write IOPS working with 512-byte data blocks and 350,000 read IOPS. These are whopping great increases, 3.3 times faster for the write IOPS number, over the original ioDrive SLC model which did 135,000 write IOPS and 140,000 read IOPS. It delivered sequential data at 750-770MB/sec whereas the next-gen product does it at 1.5GB/sec, around two times faster.
[..]
All the products will ship in November. Prices start from $5,950

The cards aren’t available yet, wonder how accurate those numbers will end up being? But in any case, even if they were over inflated by a large amount  that’s still an amazing amount of I/O.

On a related note, I was just browsing the Fusion IO blog, which mentions this MySQL functionality as well, and saw that Fusion IO was/is showing off a beefy 8-way HP DL980 with 14 HP-branded IO accelerators at Oracle Openworld:

We’re running Oracle Enterprise Edition database version 11g Release 2 on a single eight processor HP ProLiant DL980 G7 system integrated with 14 Fusion ioMemory-based HP IO Accelerators, achieving performance of more than 600,000 IOPS with over 6GB/s bandwidth using a real world, read/write mixed workload.

[..]

the HP Data Accelerator Solution for Oracle is configured with up to 12TB of high performance flash[..]

After reading that I could not help but think how HP's own Vertica, with its extremely optimized encoding and compression scheme, would run on such a beast. I mean, if you can get 10x compression out of the system (Vertica's best-case real world is 30:1, for reference), get a pair of these boxes (Vertica would mirror between the two) and you have upwards of 240TB of data to play with.

I say 240TB because of the way Vertica mirrors the data: it allows you to store the mirror in a different sort order, allowing for even faster access if you're querying the data in different ways. Who knows – with the compression you may be able to get much better than 10:1, depending on your data.
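The 240TB figure unpacked (a sketch assuming the quoted 12TB of flash per box and a flat 10:1 compression ratio):

flash_per_box_tb = 12
boxes = 2          # Vertica mirrors across the pair, but each copy is kept
                   # in a different sort order, so both copies do useful work
compression = 10
print(flash_per_box_tb * boxes * compression, "TB of logical data")  # 240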

Vertica is so fast that you will probably end up CPU bound more than anything else – 80 cores per server is quite a lot though! The DL980 supports up to 16 PCI Express slots, so even with 14 cards that still leaves room for a couple of 10GigE ports and/or Fibre Channel or some other form of connectivity beyond what's on the motherboard (which seems to have an optional dual-port 10GbE NIC).

With Vertica’s licensing (last I checked) starting in the 10s of thousands of dollars per raw TB (before compression), it falls into the category for me to blow a ton of money on hardware to make it run the best it possibly can (same goes for Oracle – though Standard Edition to a lesser degree). Vertica is coming out with a Community Edition soon which I believe is free, I don’t recall what the restrictions are I think one of them was it was limited to a single server, I don’t recall yet hearing on what the storage limits might be(I’d assume there would be some limit maybe half a TB or something)

October 5, 2011

Cisco: the future was here 40 years ago

Filed under: Random Thought — Tags: — Nate @ 2:27 pm

Just read this from our friends at The Register

“You may not agree, but I believe video will be the basis of all communication going forward,” he told attendees at the Oracle OpenWorld conference in San Francisco. “It’s where we see ourselves going – we no longer make devices that aren’t video-capable.”

I don’t know myself, I personally do not like video calls, the other caller can’t see that your playing a video game, or on the toilet, or driving in your car or whatever. Then there’s the compatibility/interoperability issues,  Apple has this face time thing, then there is Skype, and I think several of the IM clients do video. Skype seems the closest thing to a de facto standard. I use skype all day every day but it’s for text chat (99% of it is work related), maybe a couple times a week do a voice conference call, but video, few times a year at best, for me anyways.

Then I can’t help but think back to all of those Star Trek episodes where the enterprise is struggling to communicate with someone else either over video or over voice, I want to scream JUST SEND A TEXT MESSAGE AND REPEAT IT 5000 TIMES FOR REDUNDANCY! It will take less bandwidth and you’re almost certain to get the message across.

I remember back in my BBS days I came across an archiver called UC2, Ultra Compressor 2. It had some pretty crazy redundancy tricks in it. My memory is very foggy, as this was about 20 years ago, but there were a couple of times when my modem connection was really bad and my modem program registered literally hundreds of errors uploading and downloading data, yet UC2 was able to recover the files for me. Send your messages with that kind of redundancy, keep it simple; you don't need to fill the 140″ flat screen TV on your ship's bridge with garbled video hoping to get the message across.

“Captain! Please repeat, your last transmission was garbled  … DO YOU READ, Come in captain! We’re not reading you!”

Not only that, but text can be more secure. How many times have you seen Star Trek episodes where the crew is talking really softly into their communicator so they aren't heard by the bad guys? A nanosecond of time used to broadcast a text message is also probably harder to trace than a big burst of data used for video or voice.

It looks like AT&T had their first video phone booths back in the 60s, and companies have slowly tried time and again to do video, but it's never caught on; it's a solution looking for a problem that really isn't there. There are times when video is nice, but I don't see a time coming when video communications will dominate over regular voice, or text, or whatever.
