Diggin' technology every day

May 28, 2013

3PAR up 82% YoY – $1 Billion run rate

Filed under: Storage — Tags: — Nate @ 8:49 am

I came across this article on The Register which covered some of HP’s storage woes (short story: legacy storage is nose-diving and 3PAR is shining). El Reg linked to the conference call transcript, and a quick keyword search for 3PAR turned up this:

This has been one of our most successful product introductions and 3PAR has now exceeded the $1 billion run-rate revenue mark.


Converged storage products were up 48% year-over-year and within that 3PAR was up 82%

Congratulations 3PAR! Woohoo! All of us over here at Techopsguys are really proud of you – keep up the good work! <voice="Scotty">Almost brings a tear to me eye.</voice>

For comparison, I dug up the 3PAR results for the quarter immediately prior to their acquisition:

3PAR® (NYSE: PAR), the leading global provider of utility storage, today reported results for the first quarter of fiscal year 2011, which ended June 30th, 2010. Revenue for the first quarter was $54.3 million, an increase of 22% as compared to revenue of $44.5 million for the same period in the prior year, and an increase of 1% as compared to $53.7 million in the prior quarter, which ended March 31st, 2010.

I can’t help but wonder how well Compellent is doing for Dell these days by contrast, since Dell withdrew from the bidding war with HP over 3PAR and went for Compellent instead. (Side note: I once saw some value in Compellent as an alternative to 3PAR, but that all went away with the 3PAR 7000-series.) I looked at the transcript of Dell’s latest conference call, and the only thing they touched on about storage was a 10% decline – no mention of any product lines as far as I could tell.

May 23, 2013

3PAR 7400 SSD SPC-1

Filed under: Storage — Tags: , , — Nate @ 10:41 am

I’ve been waiting to see these final results for a while, and now they are out! The numbers (performance + cost + latency) are actually better than I was expecting.

You can see a massive write up I did on this platform when it was released last year.

(last minute edits to add a new Huawei result that was released yesterday)

(more last minute edits to add an HP P6500 EVA SPC-1E result)

SPC-1 Recap

I’ll say this again in case this happens to be read by someone who is new here: I see value in the SPC-1 because it provides a common playing field for reporting on performance under random transactional workloads (the vast majority of workloads are transactional). On top of the level playing field, the more interesting stuff comes in the disclosures from the various vendors. You get to see things like:

  • Cost (SpecSFS, for example, doesn’t provide this – vendors claiming high performance relative to others at a massive cost premium, while not disclosing those costs, is very sad)
  • Utilization (SPC-1 minimum protected utilization is 55%)
  • Configuration complexity (only available in the longer full disclosure report)
  • Other compromises the vendor might have made (see the note about disabling cache mirroring)
  • 3 year 24×7 4 hour on site hardware support costs

There is a brief executive summary as well as what is normally a 50-75 page full disclosure report with the nitty gritty details.

SPC-1 also has maximum latency requirements – no I/O request can take longer than 30ms to serve or the test is invalid.

There is another test suite – SPC-2 – which tests throughput in various ways. Far fewer systems participate in that test (3PAR never has, though I’d certainly like them to).

Having gone through several storage purchases over the years, I can say from personal experience that it is a huge pain to evaluate products under real workloads – oftentimes vendors don’t even want to provide evaluation gear (that is, in fact, a large part of why I am a 3PAR customer today). Even if you do manage to get something in house to test, there are many products out there, with wide-ranging performance/utilization ratios. At least with something like SPC-1 you can get some idea of how a system performs relative to others at non-trivial utilization rates. This example is rather extreme, but it is a good illustration.

I have no doubt the test is far from perfect, but in my opinion it’s far better than the alternatives, like people running 100% read tests with IOMeter to show they can get 1 million IOPS.

I find it quite strange that none of the new SSD startups have participated in SPC-1. I’ve talked to a couple of different ones, and they don’t like the test; they give the usual lines: it’s not real world, customers should take the gear and test it themselves. Typical stuff. It usually means they would score poorly – especially those that leverage SSD as a cache tier, since at SPC-1’s high utilization rates you are quite likely to blow out that tier, and once that happens performance tanks. I have heard reports of some of these systems getting yanked out of production because they fail to perform once utilization goes up. The system shines like a star during a brief evaluation – then after several months of usage, as utilization increases, the performance no longer holds up.

One person said their system is optimized for multiple workloads and SPC-1 is a single workload. I don’t really agree with that; SPC-1 does a ton of reads and writes all over the system, usually from multiple servers simultaneously. Look at 3PAR specifically, who have been touting multiple-workload (and mixed-workload) support since their first array was released more than a decade ago. They have participated in SPC-1 for over a decade as well, so the argument that testing is too expensive doesn’t hold water either. They did it when they were small, on systems designed from the ground up for multiple workloads (not just riding a wave of fast underlying storage and hoping that can carry them) – these new small folks can do it too. If they can come up with a better test with similar disclosures, I’m all ears.

3PAR Architecture with mixed workloads

The one place where I think SPC-1 could be improved is in failure testing. Testing a system in a degraded state to see how it performs.

The results below are from all of the all-SSD SPC-1 results I could find. If there are one or more I have missed (other than TMS – see note below), let me know. I did not include the IBM servers with SSD, since those are.. servers.

Test Dates

System Name                     | Test Date
HP 3PAR 7400                    | May 23, 2013
HP P6500 EVA (SPC-1E)           | February 17, 2012
IBM Storwize V7000              | June 4, 2012
HDS Unified Storage 150         | March 26, 2013
Huawei OceanStor Dorado2100 G2  | May 22, 2013
Huawei OceanStor Dorado5100     | August 13, 2012

I left out the really old TMS (now IBM) SPC-1 results as they were from 2011, too old for a worthwhile comparison.

Performance / Latency

System Name                     | SPC-1 IOPS | Avg Latency (all levels) | Avg Latency (max load) | # of times above 1ms | # of SSDs
HP 3PAR 7400                    | 258,078    | 0.66ms  | 0.86ms  | 0 / 15  | 32x
HP P6500 EVA (SPC-1E)           | 20,003     | 4.01ms  | 11.23ms | 13 / 15 | 8x
IBM Storwize V7000              | 120,492    | 2.6ms   | 4.32ms  | 15 / 15 | 18x
HDS Unified Storage 150         | 125,018    | 0.86ms  | 1.09ms  | 12 / 15 | 20x
Huawei OceanStor Dorado2100 G2  | 400,587    | 0.60ms  | 0.75ms  | 0 / 15  | 50x
Huawei OceanStor Dorado5100     | 600,052    | 0.87ms  | 1.09ms  | 7 / 15  | 96x

A couple of my own data points:

  • Avg latency (all utilization levels) – I took the aggregate latency of “All ASUs” at each of the six utilization levels and divided the sum by 6 (the number of utilization levels)
  • Number of times above 1ms of latency – I counted the number of cells in the I/O throughput table across the ASUs (15 cells total) where the test reported more than 1ms of latency
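The two derived columns can be reproduced from any full disclosure report. Here is a minimal Python sketch of the method described above – the latency values below are hypothetical placeholders, not numbers from any actual report:

```python
# Sketch: derive the two latency columns from an SPC-1 full disclosure report.
# Both data sets below are made-up placeholders -- substitute the values
# from a real report's tables.

# Aggregate "All ASUs" latency (ms) at each of the 6 utilization levels
all_asu_latency = [0.55, 0.58, 0.62, 0.68, 0.72, 0.86]

# The 15 cells of the I/O throughput table (3 ASUs x 5 reported points)
asu_cells_ms = [0.6, 0.7, 0.8, 0.9, 1.2,
                0.5, 0.6, 0.7, 0.8, 1.1,
                0.4, 0.5, 0.6, 0.7, 0.9]

# Average latency across all utilization levels
avg_latency = sum(all_asu_latency) / len(all_asu_latency)

# How many of the 15 cells came in above 1ms
times_above_1ms = sum(1 for cell in asu_cells_ms if cell > 1.0)

print(f"Avg latency (all levels): {avg_latency:.2f}ms")
print(f"Cells above 1ms: {times_above_1ms} / {len(asu_cells_ms)}")
```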


Cost

System Name                     | Total Cost | Cost per SPC-1 IOP | Cost per Usable TB
HP 3PAR 7400                    | $148,737   | $0.58 | $133,019
HP P6500 EVA (SPC-1E)           | $130,982   | $6.55 | $260,239
IBM Storwize V7000              | $181,029   | $1.50 | $121,389
HDS Unified Storage 150         | $198,367   | $1.59 | $118,236
Huawei OceanStor Dorado2100 G2  | $227,062   | $0.57 | $61,186
Huawei OceanStor Dorado5100     |            |       |
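The cost-per-IOP column is just the total tested price divided by the SPC-1 IOPS result. A quick Python sanity check against the published figures (total costs and IOPS taken from the tables above):

```python
# Sanity-check the cost-per-SPC-1-IOP column: total tested price / SPC-1 IOPS.
# (total cost in dollars, SPC-1 IOPS) per system, from the tables above
systems = {
    "HP 3PAR 7400":                   (148_737, 258_078),
    "HP P6500 EVA (SPC-1E)":          (130_982,  20_003),
    "IBM Storwize V7000":             (181_029, 120_492),
    "HDS Unified Storage 150":        (198_367, 125_018),
    "Huawei OceanStor Dorado2100 G2": (227_062, 400_587),
}

cost_per_iop = {name: round(cost / iops, 2)
                for name, (cost, iops) in systems.items()}

for name, value in cost_per_iop.items():
    print(f"{name}: ${value:.2f} per IOP")
```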

Capacity Utilization

System Name                     | Physical Capacity | Application Capacity | Protected Utilization
HP 3PAR 7400                    | 3,250 GB  | 1,159 GB | 70.46%
HP P6500 EVA (SPC-1E)           | 1,600 GB  | 515 GB   | 64.41%
IBM Storwize V7000              | 3,600 GB  | 1,546 GB | 84.87%
HDS Unified Storage 150         | 3,999 GB  | 1,717 GB | 85.90%
Huawei OceanStor Dorado2100 G2  | 10,002 GB | 3,801 GB | 75.97%
Huawei OceanStor Dorado5100     | 19,204 GB | 6,442 GB | 67.09%


The new utilization charts in the latest 3PAR/Huawei tests are quite nice to see – really good illustrations of where the space is going. They consume a full 3 pages in the executive summary. I wish SPC would go back and revise previous reports to include these new, easier forms of disclosure. The data is there for users to compute on their own.


This is an SPC-1E result rather than SPC-1 – I believe the workload is the same(?); they just measure power draw in addition to everything else. The stark contrast between the new 3PAR and the older P6500 is remarkable from every angle, whether it is cost, performance, capacity, or latency. Any way you slice it (well, except power – I am sure the 3PAR draws more 🙂 )

It is somewhat interesting in the power results for the P6500 that there is only a 16 watt difference between 0% load and 100% load.

I noticed that the P6500 is no longer being sold (the P6550 was released to replace it – and the 3PAR 7000-series was released to replace the P6550, which is still being sold).


While I don’t expect Huawei to be a common rival for the other three outside of China, I find their configuration very curious. On the 5100, with such a large number of apparently low-cost SLC(!) SSDs and “short stroking” (even though there are no spindles, I guess the term can still apply), they have managed to provide a significant amount of performance at a reasonable cost. I am confused, though: they claim SLC, yet they have so many disks (you’d think you’d need fewer with SLC), at a much lower cost. Doesn’t compute..

No software

Huawei appears to have absolutely no software options for these products – no thin provisioning, no snapshots, no replication, nothing. Usually vendors don’t include software options as part of the testing since they are not used; in this case the options don’t appear to exist at all.

They seem to be more in line with something like the LSI/NetApp E-Series or Infortrend than with an enterprise storage system. Though looking at Infortrend’s site earlier this morning shows them supporting thin provisioning, snapshots, and replication on some arrays. Even NetApp seems to include thin provisioning on their E-Series.


3PAR’s metadata

3PAR’s utilization in this test is hampered by (relatively) excessive metadata. The utilization results report only a 7% unused storage ratio, which on the surface is an excellent number – but that number excludes metadata, which in this case is 13% (418GB) of the system. Given the small capacity of the system, this has a significant impact on utilization (compared to 3PAR’s past results). They are working to improve this.
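For what it’s worth, the ~13% figure falls straight out of the disclosed numbers:

```python
# The metadata share quoted above follows from the report's own numbers:
# 418GB of metadata on a 3,250GB system.
metadata_gb = 418      # metadata, per the full disclosure report
physical_gb = 3250     # total physical capacity of the tested system

metadata_share = metadata_gb / physical_gb
print(f"Metadata overhead: {metadata_share:.1%}")
```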

The next largest metadata size among the above systems is IBM’s, at only 1GB (about 99.8% less than 3PAR’s). I would be surprised if 3PAR were not able to significantly slash the metadata size in the future.

3PAR 7400 SSD SPC-1 Configured Storage Capacity (one of the new charts from the SPC-1 report)

In the grand scheme of things this problem is pretty trivial. It’s not as if the metadata scales linearly with the system.

Only quad controller system

3PAR is the only SSD solution above that was tested with 4 controllers (totaling 4 Gen4 ASICs, 24 x 1.8GHz Xeon CPU cores, 64GB of data cache, and 32GB of control cache), meaning with their Persistent Cache technology (which is included at no extra cost) you can lose a controller and keep a fully protected and mirrored write cache. I don’t believe any of the other systems are even capable of such a configuration, regardless of cost.

3PAR Persistent Cache mirrors cache from a degraded controller pair to another pair in the cluster automatically.

The 7400 managed to stay below 1 millisecond response times even at maximum utilization which is quite impressive.

Thin provisioning built in

The new licensing model of the 3PAR 7000 series means this is the first SPC-1 result to include thin provisioning, for a 3PAR system at least. I’m sure they did not use thin provisioning (there’s no point when you’re driving toward max utilization), but from a cost perspective it is good to keep in mind. In the past, thin provisioning would add significant cost to a 3PAR system. I believe thin provisioning is still a separate license on the P10000-series (though I would not be surprised if that changes as well).

Low cost model

They managed to do all of this while remaining a lower cost offering than the competition – the economics of this new 7000 series are remarkable.

IBM’s poor latency

IBM’s V7000 latency is really terrible relative to HDS and HP. I guess that is one reason they bought TMS. Though it may take some time for them to integrate the TMS technology (assuming they even try) and reach software/availability capabilities similar to their main enterprise offerings.


With these results I believe 3PAR is showing that they too can easily compete for all-SSD market opportunities, without requiring the excessive rack space or power circuits some of their previous systems did. All of that performance (only 32 of the 48 drive bays are occupied!) in a small 4U package. Previously you’d likely be looking at an absolute minimum of half a rack!

I don’t know whether 3PAR will release performance results for the 7000 series on spinning rust; it’s not too important at this point though. The system architecture is distributed and they have proven time and again they can drive high utilization, so it’s just a matter of knowing the performance capacity of the controllers (which we have here) and throwing as much disk as you want at it. The 7400 tops out at 480 disks at the moment – even if you loaded it up with 15k spindles you wouldn’t come close to the peak performance of the controllers.

It is, of course, nice to see 3PAR trouncing the primary competition in price, performance, and latency. They have some work to do on utilization, as mentioned above.

May 21, 2013

SHOCKER! Power grid vulnerable to cyberattack!

Filed under: Security — Tags: , — Nate @ 9:57 pm

Yeah, it shouldn’t be news.. but I guess I am sort of glad it is making some sort of headline. I have written in the past about why I think the concept of a smart grid is terrible due to security concerns. I just have no confidence in today’s security technology to properly secure such a system. If we can’t properly secure our bank transactions (my main credit card was compromised for at least the 2nd or 3rd time this year, and I am careful), how can anyone expect to secure the grid?

Just saw a new post on Slashdot which points to a new report that covers how vulnerable our grid is to attack.

The report, released ahead of a House hearing on cybersecurity by Congressmen Edward Markey (D-Mass.) and Henry Waxman (D-Calif.), finds that cyberattacks are a daily occurrence, with one power company claiming it fights off 10,000 attempted intrusions each month.


Such attacks could cut power to large sections of the country and take months to repair.

Oh how I miss the days of early cyber security, when the threat was little more than kids poking around and learning. These days there is really little defense against an organized military like China’s, sigh.

If they want to get you, most likely they are going to get you.

I’ve had a discussion or two with a friend who works with industrial control systems, and the security on those is generally even worse than what I had heard about from the various breaches around the world.

I don’t see any real value in the so-called smart grid – nothing remotely resembling gains that would offset the massive growth in the number of network access points connected to the grid.

It’s probably already too late. All security is some form of obscurity at the end of the day, whether it is a password, encryption, or physical isolation. Obscuring the grid by reducing the network connections to it has got to provide some level of benefit…

May 20, 2013

When a server is a converged solution

Filed under: General — Tags: — Nate @ 3:56 pm

Thought this was kind of funny/odd/ironic/etc…

I got an email a few minutes ago about the HP AppSystem for Vertica, which, among other things, HP describes as being able to

This solution delivers system performance and reduces implementation from months to hours.

I imagine they are referring to competing solutions and not comparing to running Vertica on bare metal. In fact it may be kind of misleading, as Vertica is very open – you can run it on physical hardware (any hardware really), virtual hardware, and even some cloud services (it is supported in *shudder* Amazon even..). So you can get a basic Vertica system implemented without buying anything new.

But if you are past the evaluation stage, and perhaps outgrew your initial deployment and want to grow into something more formal/dedicated, then you may need some new hardware.

HP pitches this as a Converged Solution. So I was sort of curious what HP solutions are they converging here?

Basically it’s just a couple of base configurations of HP DL380 G8s with internal storage (these 2U servers support up to 25 2.5″ disks). They don’t even install Vertica for you:

HP Vertica Analytics Platform software installation is well documented and can be installed by customers.

They are kind enough to install the operating system though (no mention of any special tuning, other than calling it “Standard”, so I guess no tuning).

No networking is included (outside of the servers, as far as I can tell), and the only storage is the internal DAS. A minimum of three servers is required, so some sort of 10GbE switching is needed as well, since the servers are 10GbE (you can run Vertica fine on 1GbE too for smaller data sets).

I would have expected the system to come with Vertica pre-installed, or automatically installed as part of setup, with a trial license built into the system.

Vertica is very easy to install and configure at a basic level, so in the grand scheme of things this AppSystem might save the average Vertica customer a few minutes.

Vertica is normally licensed by the amount of data stored in the cluster (pre-compression/encoding). The node count, CPU count, memory, and spindles don’t matter. There is a community edition that goes up to 3 nodes and 1TB (it has some other software limitations – and as far as I know there is no direct migration path from community to enterprise without a data export/import).

Don’t get me wrong, I think this is a great solution – very solid servers, with a lot of memory and plenty of I/O to provide a very powerful Vertica experience. Vertica’s design reduces I/O requirements by up to ~90% in some cases, so you’d probably be shocked at the amount of performance you can get out of just one of these 3-node clusters, even without any tuning at the Vertica level.

Vertica does not require a fancy storage system, it’s really built with DAS in mind. Though I know there are bunches of customers out there that run it on big fancy storage because they like the extra level of reliability/availability.

I just thought some of the marketing behind it was kind of strange – saving months of time, being converged infrastructure, and what not..

It makes me think (if I had not installed Vertica clusters before) that if I want Vertica and don’t get this AppSystem, then I am in a world of hurt when it comes to setting it up. Which is not the right message to send.

Here is this wonderful AppSystem that is in fact — just a server with RHEL installed.

For some reason I expected more.

May 17, 2013

Big pop in Tableau IPO

Filed under: General — Tags: , — Nate @ 9:35 am

I was first introduced to Tableau (and Vertica) a couple of years ago at a local event in Seattle. Both products really blew me away (and still do to this day). Though it’s not an area I spend a lot of time in – my brain struggles with anything analytics related (even when using Tableau; same goes for Splunk, or SQL). I just can’t make the connections. When I come across crazy Splunk queries that people write, I just stare at them for a while in wonder (as in, I can’t possibly imagine how someone could have come up with such a query, even after working with Splunk for the past six years).. then I copy+paste and hope it works.

Sample Tableau reports pulled from Google Images

But that doesn’t stop me from seeing an awesome combination that is truly ground breaking both in performance and ease of use.

I’ve seen people try to use Tableau with MySQL for example and they fairly quickly give up in frustration at how slow it is. I remember being told that Tableau used to get a bunch of complaints from users years ago saying how slow it seemed to be — but it really wasn’t Tableau’s fault it was the slow back end data store.

Vertica unlocks Tableau’s potential by providing a jet engine to run your queries against. Millions of rows? hundreds of millions? No problem.. billions ? It’ll take a bit longer but shouldn’t be an issue either. Try that with most other back ends and well you’ll be waiting there for days if not weeks.

Tableau is a new generation of data visualization technology that is really targeted at the Excel crowd. It can read in data from practically anything(Excel files included), and it provides a seamless way to analyze your data and provide fancy charts and graphs, tables and maps..

It’s not really for the hard core power users who want to write custom queries. Though I still think it is useful for those folks. A great use case for Tableau is for the business users to play around with it, and come up with the reports that they find useful, then the data warehouse people can take those results and optimize the warehouse for those types of queries (if required). It’s a lot simpler and faster than the alternative..

I remember two years ago I was working with a data warehouse guy at a company and we were testing Tableau, with MySQL at the time actually (small tables). Just playing around, he poked around, created some basic graphs, and drilled down into them. In all we spent about 5 minutes on the task, and we found some interesting information. He said that if he had to do it with MySQL queries himself, it would have taken him roughly two days – running query after query and building new queries based on the results. From two days to roughly five minutes — for a very experienced SQL/data warehouse person.

Tableau has a server component as well, with which you can publish your reports for others to see in a web browser or on a mobile device; the server can also, of course, link directly to your data to get updates as frequently as you want them.

You can have profiles and policies, one example Tableau gave me last year was one big customer enforces certain color codes across their organization so no matter what they are looking at they know Blue means X and Orange means Y. This is enforced at the server level, so it’s not something people have to worry about remembering. They can also enforce policies around reporting so that the term “XYZ” is always the result of “this+that”, so people get consistent results every time — not a situation where someone interprets something one way, and another person another way. Again this is enforced at the server level, reducing the need for double checking and additional training.

They also have APIs – and users are able to embed Tableau reports directly into their applications and web sites(through the server component). I know one organization where almost all of their customer reporting is presented with Tableau – I’m sure it saved them a ton of time trying to replicate the behavior in their own code. I’ve seen folks try to write reporting UIs in past companies and usually what comes out is significantly sub par because it’s a complicated thing to get right. Tableau makes it easy, and probably very cost effective relative to full time developers taking months/years to try to do it yourself.

It’s one of the few products out there that I am really excited about, and I’ve seen some amazing stuff done with the software in a very minimal amount of time.

Tableau has a 15-day evaluation period if you want to try it out — it really should be longer, but whatever. Vertica has a community edition which you can use as a sort of long-term evaluation – it’s limited to 1TB of data and 3 cluster nodes. You can get a full-fledged enterprise evaluation from Vertica as well if you want to test all of the features.

I wrote some scripts at my current company to refresh/import about 150GB of data from our MySQL systems into Vertica each night. It is interesting to see MySQL struggle to read the data out while Vertica sits practically idle as it ingests it (I’d otherwise normally expect the writing of the data to be more intensive than the reading). To improve performance I compiled a few custom MySQL binaries that allowed me to run MySQL queries and pipe the results directly into Vertica (instead of writing 10s of GBs to disk only to read them back again). The need for custom binaries is that MySQL by default only supports tab-delimited results, which was not sufficient for this data set (I actually compiled 3-4 different binaries with different delimiters depending on the tables – managed to get ~99.99999999% of the rows in without further effort). I also wrote a quick Perl script to fix some of the invalid data, like invalid timestamps, which MySQL happily allows but Vertica does not.

Sample command:

$MYSQL --raw --batch --quick --skip-column-names -u $DB_USERNAME --password="${DB_PASSWORD}" --host=${DB_SERVER} $SOURCE_DBNAME -e "select * from $MY_TABLE" | $DATA_FIX | vsql -w $VERTICA_PASSWORD -c "COPY ${VERTICA_SCHEMA}.${MY_TABLE} FROM STDIN DELIMITER '|' RECORD TERMINATOR '##' NULL AS 'NULL' DIRECT"
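For the curious, here is a rough Python stand-in for what that $DATA_FIX filter does. The exact rules of my Perl script are simplified/assumed here – this sketch assumes tab-delimited input, a pipe output delimiter, and a ‘##’ record terminator to match the COPY options in the command above:

```python
#!/usr/bin/env python
# Rough stand-in for the $DATA_FIX filter in the pipeline above: rewrite
# each result row for Vertica's COPY (pipe delimiter, '##' record
# terminator) and replace zero timestamps, which MySQL happily allows
# but Vertica rejects. The exact fix-up rules here are assumptions.
import sys

BAD_TS = "0000-00-00 00:00:00"  # MySQL's infamous zero timestamp


def fix_row(line, in_delim="\t"):
    """Rewrite one result row: pipe-delimited, '##'-terminated,
    with zero timestamps replaced by NULL."""
    fields = line.rstrip("\n").split(in_delim)
    fields = ["NULL" if f == BAD_TS else f for f in fields]
    return "|".join(fields) + "##"


if __name__ == "__main__":
    for row in sys.stdin:
        sys.stdout.write(fix_row(row))
```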


Oh, and back to the topic of the post – Tableau IPO’d today (ticker: DATA) – as of last check it is up 55%.

So, congrats Tableau on a great IPO!


May 14, 2013

Some pictures of large scale Microsoft ITPAC Deployments

Filed under: Datacenter — Tags: — Nate @ 11:16 am

Just came across this at Data Center Knowledge. I have written about these in the past; from a high-level perspective they are incredibly innovative – far more so than any other data center container I have seen, at least.

Once you walk outside, you begin to see the evolution of Microsoft’s data center design. The next building you enter isn’t really a building at all, but a steel and aluminum framework. Inside the shell are pre-manufactured ITPAC modules. Microsoft has sought to standardize the design for its ITPAC – short for Information Technology Pre-Assembled Component – but also allows vendors to work with the specifications and play with the design. These ITPACs use air side economization, and there are a few variations.

I heard about some of these being deployed a couple years ago from some friends, though this is the first time I’ve seen pictures of any deployment.

You can see a video on how the ITPAC works here.

May 10, 2013

Activist investors and not wanting to be public

Filed under: Random Thought — Nate @ 11:49 am

Another somewhat off-topic post, but still kinda tech related. There has been somewhat of a rash of companies over the past few years saying they either don’t want to go public, or, for some that already are public, that they want to go private again.

Obviously the leading reason behind this is often cited as companies not wanting to have to deal with the pressure of short term investors. Some are more problematic than others.

Two events caught my eye this morning: the first is Carl Icahn’s attempt to crush Dell; the second is an apparently dual-headed assault against the board of Emulex for not taking the buyout offer from Broadcom a few years ago.

The average amount of time people hold stocks has collapsed over the past decade or two (I saw some charts of this at one point but can’t find them). People are becoming less and less patient, demanding higher returns in shorter amounts of time. The damage is being done everywhere – from companies surrendering their intellectual property in order to be allowed to do business in China, to companies squeezing their staff trying to get everything they can out of them. I saw someone who claimed to be an IBMer (IBM being widely regarded as giving investors what they want) claiming IBM is being gutted from the inside out – all the good people are leaving, it’s only a matter of time. That certainly may not be true, I don’t know. But it wouldn’t surprise me given IBM’s performance.

Corporate profits are at record highs as many like to cite. Though I see severe long term damage being done across the board in order to get to these profits in the short term.

HP is probably a prime example of this, as a result they felt they had to go spend a ton of money acquiring companies to make up for the lack of investment in R&D in previous years.

But of course HP is not alone. This problem is everywhere, and it’s really depressing to see. The latest attempt from Icahn to kill Dell is just sad. Here is Michael Dell, the founder and CEO, trying to save his company for the long term, and in comes Icahn, who just wants to squeeze it dry. Dell may not succeed if they go private – maybe they go bust and investors lose out. Maybe Icahn’s deal is better for investors. For some reason I’d rather view the situation in terms of what’s best for the company. Dell has done some smart acquisitions over the past few years; they need (more) time to sort through them. If investors don’t like what Dell is doing they are free to sell their shares.

Not long ago BMC was acquired by private equity as well (I really have no knowledge of BMC’s products), and it’s quite possible the same thing happens there too.

HP attacked Dell when the plan to go private was announced, saying things like “Dell is distracted now” and that HP is a better fit for customers because they are not distracted. I’ve got news for anyone who believed that – HP has more distractions than Dell. I have absolutely no doubt that if HP could manage a way to go private they would do it in a heartbeat. I think HP will get past their distractions; they have some great products in ProLiant, 3PAR, Vertica, and others I’m sure I can’t cite off the top of my head.

It’s the same sort of short-sightedness I see all the time with folks wanting to embrace the cloud, and the whole concept of shifting things from CAPEX to OPEX (though some accountants and CFOs are wising up to this scam).

The same thing is happening in governments around the world as well. It’s everywhere.

It’s just one of those things that has kept my vision of the future of our economy at such low levels (well if I said what I really thought you might think I’m even more crazy than I already come off as being 🙂 )

anyway, sad to see….

May 9, 2013

Lost almost all respect for Consumer Reports today

Filed under: Random Thought — Tags: — Nate @ 10:30 pm

I was getting my dose of CNBC this morning (I’m not an investor – I watch CNBC purely for entertainment purposes) when news came over the wire that Tesla had gotten a 99/100 rating for their ultra-luxury green car from Consumer Reports.

I watched the interview with the guy at Consumer Reports and I was shocked, and I still am. A bit disgusted too.

Let me start out by saying I have no issues with the car itself. I’m sure it’s a fine automobile. My issue is with the double talk from Consumer Reports in this particular case.

(forgive me the quotes will not be precise, see the video link above for the full interview)

The guy who wrote the report starts off by saying it’s better than the other $90,000 cars in its price range… (he also goes on to say it’s better than pretty much ANY car they have EVER EVER EVER TESTED — not just better than any other electric car — ANY CAR)


It is an electric car; while it has a long range compared to other electric cars, I can take a Toyota Corolla and drive to Cleveland from New York — I can’t do that in this car yet.

You can only go about 200 miles before charging it up, that is a severe limitation. (those last two are his words)

CNBC goes on to quote him as saying that if you leave it unplugged, you experience what they describe as a “parasitic loss of energy” (Consumer Reports’ words) that amounts to 12-15 miles per day, and asks him about that. He responds –

The concern is this is a new vehicle from a new automaker and there’s going to be growing pains. If you’re really looking for something to be trouble free off the bat – look elsewhere (his words too!)

I’m really glad I saw the interview. My disgust here is not with the car. I have no doubt it is a fine car! I think many people should go buy it. But if these sorts of flaws knock only a single point off the score… that just seems wrong. Very wrong. Especially that last bit — if you want something to be trouble free, look elsewhere — for something that got rated 99 out of 100!!!

He goes on to talk about how it takes 30 minutes to charge the battery to half strength at one of Tesla’s charging stations. He thinks people would be happier (duh) if they could fully charge it in four minutes.

CNBC half seriously asked him if you could charge a Tesla from a hotel room power outlet. The Consumer Reports guy said yes, but it would take a VERY long time.

People buying this $90k car obviously are not concerned about the price of gas. It’s really more about the image of showing you’re green than anything else, which is sad in itself (you can be more green by buying carbon offset credits – folks who can afford a $90k car should have no trouble buying some). But that’s fine.

Again, my issue isn’t with the car. It’s with the rating. Maybe it should be a 75, or an 85. I don’t know. If it were me I’d knock at least 20 points off for the lack of range and lack of charging stations alone. Now if gas were, say, fifteen dollars a gallon, I could see giving it some good credit for the savings there.

I think what Tesla is doing is probably a good thing; it seems like decent technology, with good range (relative to other electric vehicles). You likely can’t take it on a road trip between SFO and Seattle any time soon, though.

I have relied at least in part on Consumer Reports for a few different purchases I have made (including my current car – though if I recall right, Consumer Reports had no rating on it at the time; it was too new. One of my friends just bought the 2013 model year of my car a few weeks ago). It was extremely disappointing to see this result today. Maybe I should not have been surprised. I don’t pay too close attention to what Consumer Reports does – I check them usually once every couple of years at most. This may be par for the course for them.

Internet tech review sites have often had a terrible reputation for giving inflated ratings for one reason or another (most often, it seems, because they want to keep getting free stuff from the vendors). I had thought (hoped) Consumer Reports was in a league of its own for independent reporting.

I think at the end of the day the rating doesn’t matter – people who are in the market for a $90k car will do their homework. It just sort of reeks to me of a back room deal, or perhaps a bunch of hippies over at Consumer Reports dreaming of a future where all vehicles are electric, never mind the fact that the power grid can’t handle it. (I can hear the voice of Cartman in the back of my head – Screw you hippies!) Tesla wants all the positive press they can get, after all.

So let’s see. We have a perfect score of 99/100, alongside which we have the words severe limitation, parasitic loss of energy, and look elsewhere if you want a trouble free experience…

I’ll say it one more time – my problem is with the perfect rating of 99/100 not with the car itself.

May 7, 2013

Internet Hippies at it again

Filed under: Networking — Tags: , — Nate @ 8:50 am

I was just reading a discussion on Slashdot about IPv6 again. Apparently BT has announced plans to deploy carrier grade NAT (CGN) for some of their lower tier customers – which is of course just NAT deployed at a much larger scale.

I knew how the conversation would go, but I found it interesting regardless. The die hard IPv6 folks came out crying foul:

Killing IPv4 is the only solution. This is a stopgap measure like carpooling and congestion charges that don’t actually fix the original problem of a diminishing resource.

(disclaimer – I walk to work)

[..]how on earth can you make IPv6 a premium option if you don’t make IPv4 unbearably broken and inconvenient for users?

These same folks often cry out about how NAT will break the internet because they can’t do peer to peer stuff (as easily, in some cases; in others it may not be possible at all). At the same time they advocate a solution (IPv6) that will break FAR more things than NAT could ever hope to break. At least an order of magnitude more.

They feel the only way to make real progress is essentially to tax the usage of IPv4 highly enough that people are discouraged from using it, somehow bringing immediate global change to the internet and getting everyone to switch to IPv6. Which brings me to my next, somewhat related topic.

Maybe they are right – I don’t know. I’m in no hurry to get to IPv6 myself.

Stop! Tangent time.

The environmentalists are of course doing the same thing — not long ago a law took effect here in the county I live in banning plastic bags at grocery stores and the like. You can still get paper bags at a cost of $0.10/bag, but no more plastic. I was having a brief discussion about this with a friend last week and he was blaming the stores for charging folks; he didn’t know it was the law mandating it. I have absolutely not a shred of doubt that if the environmentalists could have their way they would have banned all disposable bags. That is their goal – the tax is only $0.10 now, but it will go up in the future; they will push it as high as they can for the same reason: to discourage use. Obviously customers were already paying for plastic and paper bags before – the cost was built into the margins of the products they buy, just like they were paying for the electricity to keep the dairy products cool.

In Washington state I believe there were one or two places that actually tried to ban ALL disposable bags. I don’t remember if the laws passed or not, but I remember thinking that I wanted to go to one of their grocery stores, load up a cart full of stuff, and go to checkout. Then when they told me I had to buy bags I would just walk out. I wanted to so badly, though I am more polite than that, so I didn’t.

Safeway gave me 3 “free” reusable bags the first time I was there after the law passed, and I have bought one more since. I worry about contamination more than anything else; there have been several reports of the bags being contaminated, mainly by meat and such, because people don’t clean them regularly.

I’ll admit (as much as it pains me) that there is one good reason to use these bags over the disposable ones that didn’t really hit me until I went home that first night – they are a lot stronger, so they hold more. I was able to fit a full night’s shopping into 3 bags, and those were easier to carry than the ~8 or so disposables that would otherwise have been used.

I think it’s terrible to have the tax on paper, since that is relatively much more green than plastic. I read an article one time comparing paper vs plastic across the various regions of our country – which is more green. The answer was that it varied: on the coastlines, like where I live, paper is more green; in the middle parts of the country, plastic was. I forget the reasons given, but they made sense at the time. I haven’t been able to dig up the article and have no idea where I read it.

I remember living in China almost 25 years ago now, and noticing how everyone was using reusable bags, similar to what we have now but, from what I remember, more like knitted plastic. They used them, I believe, mainly because they didn’t have an alternative – they didn’t have the machines and such to cheaply mass produce disposable bags. I believe I remember reading at some point that the usage of disposable bags really went up in the following years, before reversing course again towards reusables.

Myself, I have recycled my plastic bags (at Safeway) for as long as I can remember. Sad to see them go.

I’ll end with a quote from Cartman (probably not a direct quote; I tried checking)

Hippies piss me off

(Hey hippies – go ban bottled water too while you’re at it. I go through about 60 bottles a week myself; I’ve been stocking up recently because it was cheaper than normal – I think I have more than 200 bottles in my apartment now. I like the taste of Arrowhead water. I don’t drink much soda at home these days, having basically replaced it with bottled water, so I think cost wise it’s an improvement 🙂 )

(The same goes for those die hard IPv6 folks – you can go ahead, slap CGNAT on my internet connection at home, I don’t care. I already have CGNAT on my cell phone (it has a 10.x IP) and when it is in hotspot mode I notice nothing is broken. The only thing I do that is peer to peer is Skype (for work – I don’t use it otherwise); everything else is pure client-server.) I have a server (a real server, which this blog is hosted on) in a data center (a real data center, not my basement) with 100Mbps and unlimited bandwidth to do things that I can’t do on my home connection (mainly due to bandwidth constraints and a dynamic IP).
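(Side note: you can usually tell whether you’re behind some form of NAT just by looking at the address your device gets – RFC 1918 private space, like that 10.x on my phone, or the 100.64.0.0/10 “shared address space” that RFC 6598 set aside specifically for carrier grade NAT. A quick sketch using Python’s `ipaddress` module – the addresses below are just made-up examples:)

```python
import ipaddress

# RFC 6598 reserves 100.64.0.0/10 specifically for carrier-grade NAT.
CGN_RANGE = ipaddress.ip_network("100.64.0.0/10")

def likely_behind_nat(addr: str) -> bool:
    """True if the address is RFC 1918 private space or RFC 6598 CGN space."""
    ip = ipaddress.ip_address(addr)
    # is_private covers 10/8, 172.16/12, 192.168/16 (among others);
    # the CGN range is checked explicitly since it isn't RFC 1918 space.
    return ip.is_private or ip in CGN_RANGE

print(likely_behind_nat("10.23.4.5"))    # True  - RFC 1918, like my phone's IP
print(likely_behind_nat("100.72.1.9"))   # True  - RFC 6598 CGN space
print(likely_behind_nat("8.8.8.8"))      # False - plain public address
```

(Of course this only tells you about the first NAT layer your device can see; a traceroute or comparing against a what-is-my-IP service tells you more.)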

I proclaim IPv6 die hards as internet hippies!

My home network has a site to site VPN with the data center, and if I need to access my home network remotely, I just VPN to the data center and access it that way. If you don’t want to host a real server (it’s not cheap), there are cheaper options like a VPS available for pennies a day.
