TechOpsGuys.com Diggin' technology every day

July 11, 2011

The Decline of Mozilla

Filed under: General,Random Thought — Tags: — Nate @ 11:45 am

It’s quite possible that Firefox (and Mozilla in general) may have already peaked.

There’s been a lot of discussion and reporting recently on some pretty big changes either being implemented or being pushed by influential members of the Mozilla organization around their flagship product, Firefox.

 

Ad for Firefox v4 viewed a short time ago

 

Much of the controversy centers on one vocal member of Mozilla saying they should move to a much faster release cycle and not be afraid of breaking things for users in the process, because it’s what’s best for the Internet.

It seems that Mozilla’s shift in policy is because of Google’s Chrome browser, which already does this and has been gaining market share among those who don’t care about their privacy.

Mozilla gets a very large percentage of their revenue from the little Google search bar at the top right of the browser (and just in case you’re wondering, yes, I do block all of Google’s cookies). Apparently this contract with Google expires later this year. Who knows what the new deal may look like (I’d say it’s safe to assume there will be a new deal).

Google obviously wasn’t too happy with the lack of progress in the web browser arena which is why they launched Chrome.

Now Firefox feels more threatened by Chrome and so appears to be trying to stem the losses by adopting a more Chrome-like approach, which has upset a decent part of their user base, who, like myself, just want a stable browser.

The web standards world has clearly been lagging: HTML 4.01 was released in 1999, and we still don’t have a ratified HTML 5.

And despite what some folks say, version numbers are important, at least when used properly. A major version number for the most part implies a high level of compatibility (hopefully 100% compatibility) for all minor versions residing under that major version.

When used improperly, as with the Firefox 4 to Firefox 5 migration, it causes needless confusion (also consider MS Win95 – 98 – XP – Vista – 7). If version numbers really don’t matter then perhaps they should use the release date as the version number, so at least people know roughly how old it is.

Stories like this certainly don’t help either.

It is unfortunate that Mozilla seems to lack the resources for a more traditional model of development: building newer, more feature-full versions of the software while simultaneously providing security and other minor fixes for the more established, stable versions of the product.

Combine what will likely be a less lucrative contract with Google, the rise of Chrome, and the alienation of (what seems to me at least) a pretty large portion of their potential market (whether or not those people are current users), and it really seems like Firefox, and Mozilla, has peaked and will likely face significant declines in the coming few years.

It is sad for me, as someone who has used Firefox since it was Phoenix 0.3, and who has been using Mozilla SeaMonkey for a long time as well (I usually put “work” things in SeaMonkey so that if a browser crashes it only takes a portion of my stuff with it).

There are a few plug-ins I use for Firefox (by no means is it a long list!) that for the most part have kept me on Firefox; otherwise I probably would have jumped ship to Opera or something (I originally stopped using Opera on Linux what seems like almost 10 years ago because of memory leaks with SSL).

I can only hope that the various long-term distributions (Red Hat, Debian, Ubuntu LTS, etc.) can band together to support a stable version of Firefox in the event it’s completely abandoned by Mozilla. Ubuntu has already mentioned they are considering Chrome (?!) for some future LTS release.

The privacy implications of Chrome are just too much for me to even consider using it as a browser.

While there are some bugs, I myself am quite satisfied with Firefox 3.6 on this Ubuntu 10.04 laptop.

At one point it seemed plausible that Gecko, the engine that powers Firefox, was going to take over the world, especially in the mobile/embedded space, but it never caught on there, with most everyone going to WebKit instead. In the mobile space Opera, again, seems to have poured a lot more work into mobile versions of their browser than Mozilla ever did. I was just looking at my Sharp Zaurus units a couple of days ago (before giving them to a friend ahead of my move) and saw they all had a mobile version of Opera going back to the 2003-2004 time frame.

If Firefox simply wanted a bigger version number, they could’ve just pulled a Slackware and skipped a few major version numbers (Slackware was the first distribution I used, until I switched to Debian in 1998).

The big winner in all of this, I think, is Microsoft, who is already not wasting any time in wooing their corporate customers, many of whom were already using Firefox to some extent or at least had it on their radar.

I guess this is just another sign I’m getting older. There was a time when, for no real reason, I would get excited about compiling the latest version of XFree86, the latest Linux kernel, and downloading the latest beta of KDE (yes, that’s what the screenshot to the left shows – from 1998).

Now, for the most part, things are good enough, that the only time I seek newer software is if what I have is not yet compatible with some new hardware.

(Seeing that ad on Yahoo! earlier today is what prompted this post; ironically, when I clicked on the Firefox 4 link it took me to a page to download Firefox 5. Apparently Firefox development moves too fast for advertisers.)

July 8, 2011

Wired or Wireless?

Filed under: Networking,Random Thought,Uncategorized — Tags: — Nate @ 9:58 am

I’ll start out by saying I’ve never been a fan of wifi; it’s always felt like a nice gimmick-like feature to have, but other than that I usually steered clear. Wifi has been deployed at all the companies I have worked at in the past 7-8 years, though in all cases I was never responsible for it (I haven’t done internal IT since 2002, at which time wifi was still in its early stages (assuming it was out at all yet? I don’t remember) and was not deployed widely at all, including at my company). I could probably count on one hand the number of public wifi networks I have used over the years, excluding hotels (of which there were probably ten).

In the early days it was mostly because of paranoia around security/encryption, though over the past several years encryption has really picked up and helped that area a lot. There is still a little bit of fear in me that the encryption is not up to snuff, and I would prefer using a VPN on top of wifi to make it even more secure; only then would I really feel comfortable using wifi from a security standpoint.

From a security standpoint I am less concerned about people intercepting my transmissions over wifi than I am about people breaking into my home network over wifi (which usually happens by intercepting transmissions). My point is that the content of what I’m transferring, if it is important, is always protected by SSL or SSH, and in the case of communicating with my colo or cloud-hosted server there is an OpenVPN SSL layer under that as well.

Many years ago, I want to say the 2005-2006 time frame, there was quite a bit of hype around the Linksys WRT-54G wifi router for how easy it was to replace the firmware with custom stuff and get more functionality out of it. So I ordered one at the time and put dd-wrt on it, a custom firmware that was talked about a lot back then (is there something better out there? I haven’t looked). I never ended up hooking it to my home network, just a crossover cable to my laptop to look at the features.

Then I put it back in its box and put it in storage.

Until earlier this week, when I decided to break it out again to play with in combination with my new HP Touchpad, which can only talk over Wifi.

My first few days with the Touchpad involved having it use my Sprint 3G/4G Mifi access point. As I mentioned earlier, I don’t care about people seeing my wifi transmissions; I care about protecting my home network. Since the Mifi is not even remotely related to my home network I had no problem using it for extended periods.

The problem with the Mifi, from my apartment, is the performance. At best I can get 20% signal strength for 4G, and maybe 80% signal strength for 3G; latency is quite bad in both cases, and throughput isn’t the best either. A lot of times it felt like I was on a 56k modem, other times it was faster. For the most part I used 3G because it was more reliable at my location, however I do have a 5 gig/month data cap on 3G, and since I started using the Touchpad on the 1st of the month I got kind of concerned I might run into that cap playing with the new toy during the first month. I just checked Sprint’s site and I don’t see a way to see intra-month data usage, only usage for a month once it’s completed. The Mifi tracks data usage while it is running, but this data is not persisted across reboots, and I think it’s also reset if the Mifi changes between 3G and 4G service. I have unlimited 4G data, but the signal strength where I’m at just isn’t strong enough.

I looked into the possibility of replacing my Mifi with newer technology, but after reading some customer reviews of the newer stuff it seemed unlikely I would get a significant improvement in performance at my location, at least not enough to justify the cost of the upgrade, so I decided against that for now.

So I broke out the WRT-54G access point and hooked it up. Installed the latest recommended version of firmware, configured the thing and hooked up the touchpad.

I knew there was a pretty high number of personal access points deployed near me; it was not uncommon to see more than 20 SSIDs being broadcast at any given time. So interference was going to be an issue. At one point my laptop showed me that 42 access points were broadcasting SSIDs. And that of course does not even count the ones that are not broadcasting; who knows how many of those there are, I haven’t tried to get that number.

With my laptop and Touchpad located no more than 5 feet away from the AP, I had signal strengths of roughly 65-75%. To me that seemed really low given the proximity, and I suspected significant interference was causing signal loss. Only when I put the Touchpad within say 10 inches of the AP’s antenna did the signal strength go above 90%.

 

Looking into the large number of receive errors suggested that those errors were caused almost entirely by interference.

So then I wanted to see which channels were most heavily used and try to pick one with less congestion; the AP defaulted to channel 6.

The last time I mucked with wifi on Linux there seemed to be an endless stream of wireless scanning, cracking, and hacking tools. Much to my shock and surprise, these days most of those tools haven’t been maintained in 5-8+ years. There aren’t many left. Sadly enough, the default Ubuntu wifi apps do not report channels, they just report SSIDs. So I went on a quest to find a tool I could use, and finally came across something called wifi radar, which did the job more or less.
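If you just want a per-channel count and don’t feel like hunting for a dedicated tool, something like the following quick Python sketch, which shells out to the standard iwlist scanner, would do roughly the same tally. The interface name is an assumption, and scanning generally needs root.

```python
# Tally how many nearby access points are broadcasting on each 2.4 GHz channel,
# by parsing the output of "iwlist <iface> scan". Run as root; "wlan0" is assumed.
import re
import subprocess
from collections import Counter

IFACE = "wlan0"  # adjust to your wireless interface

def channel_counts(iface=IFACE):
    output = subprocess.check_output(["iwlist", iface, "scan"], text=True)
    # iwlist prints a "Channel:N" line for each cell it finds
    return Counter(int(ch) for ch in re.findall(r"Channel:(\d+)", output))

if __name__ == "__main__":
    for channel, count in sorted(channel_counts().items()):
        print("channel %2d: %d access point(s)" % (channel, count))
```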

I counted about 25 broadcasting SSIDs using wifi radar; nearly half of them, if I recall right, were on channel 6, with a bunch more on 11 and 1, the other two major channels. My WRT54G had channels going all the way up to 14. I recall reading several years ago about frequency restrictions in different regions, but in any case I tried channel 14 (which is banned in the US). The wifi router said it was on channel 14, but neither my laptop nor the Touchpad would connect; I suspect it’s because they flat out don’t support it. No big deal.

Then I went to channel 13. The laptop immediately connected, the Touchpad did not. Channel 13 is banned in many areas, but is allowed in the U.S. if the power level is low.

Next I went to channel 12. The laptop immediately connected again, the Touchpad did not. This time I got suspicious of the Touchpad, so I fired up my Palm Pre, which uses an older version of the same operating system. It saw my wifi router on channel 12 no problem, but the Touchpad remained unable to connect even when I manually input the SSID. Channel 12 is also allowed in the U.S. if the power level is low enough.

So I ended up on channel 11. Everything could see everything at that point. I enabled WPA2 encryption and MAC address filtering (yes, I know you can spoof MACs pretty easily on wifi, but at the same time I have only 2 devices I’ll ever connect, so blah). I don’t have a functional VPN yet, mainly because I don’t have a way (yet) to launch a VPN on the Touchpad; it has built-in support for two types of Cisco VPNs but that’s it. I installed OpenVPN on it but I have no way to launch it on demand without being connected to the USB terminal. I suppose I could just leave it running, and in theory it should automatically connect when it finds a network, but I haven’t tried that.

So on to my last point on wifi: interference. As I mentioned earlier, signal quality was not good even a few feet away from the access point. I decided to try out speedtest.net to run a basic throughput test on both the Touchpad and the laptop. All tests used the same Comcast consumer broadband connection.

Device                                               | Connectivity Type  | Latency | Download     | Upload
HP Touchpad                                          | 802.11g wireless   | 18 ms   | 5.32 Mbit/s  | 4.78 Mbit/s
Toshiba dual core laptop (Ubuntu 10.04, Firefox 3.6) | 802.11g wireless   | 13 ms   | 9.46 Mbit/s  | 4.89 Mbit/s
Toshiba dual core laptop (Ubuntu 10.04, Firefox 3.6) | 1 Gigabit Ethernet | 9 ms    | 27.48 Mbit/s | 5.09 Mbit/s

The test runs in Flash, and as you can see the Touchpad’s browser (or its Flash player) is not nearly as fast as the laptop’s, which is not too unexpected.

Comparing LAN transfer speeds was even more lopsided, of course. I didn’t bother involving the Touchpad in this test, just the laptop. I used iperf to test throughput (no special options, just default settings; a rough sketch of what that kind of default test does is below the results).

  • Wireless – 7.02 Megabits/second (3.189 milliseconds latency)
  • Wired – 930 Megabits/second (0.3 milliseconds latency)
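
For the curious, here is a minimal Python sketch of roughly what a default single-stream TCP throughput test like that measures. It is not iperf itself, just an illustration; the port, duration and buffer size below only loosely mirror iperf’s defaults.

```python
# Rough sketch of a single-stream TCP throughput test, similar in spirit to a
# default iperf run: the client blasts data for a fixed time, both ends report Mbit/s.
import socket
import sys
import time

PORT = 5001          # iperf's traditional default port
DURATION = 10        # seconds, like iperf's default test length
CHUNK = 128 * 1024   # bytes per send

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        total += len(data)
    elapsed = time.time() - start
    print("Received %.1f Mbit/s from %s" % (total * 8 / elapsed / 1e6, addr[0]))

def client(host):
    sock = socket.create_connection((host, PORT))
    payload = b"\0" * CHUNK
    total, start = 0, time.time()
    while time.time() - start < DURATION:
        sock.sendall(payload)
        total += len(payload)
    sock.close()
    print("Sent %.1f Mbit/s" % (total * 8 / DURATION / 1e6))

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "server":
        server()
    elif len(sys.argv) > 2 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        print("usage: %s server | client <server-ip>" % sys.argv[0])
```

Run it with "server" on one machine and "client <server-ip>" on the other, much the same way you would pair iperf -s with iperf -c.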

What honestly surprised me, though, was how much slower wifi was than the wired connection over the WAN: barely a third of the performance on the same laptop and browser. I just measured to be sure – my laptop’s screen (where I believe the antenna is) is 52 inches from the WRT54G router.

It’s “fast enough” for the Touchpad’s casual browsing, but I certainly wouldn’t want to run my home network on it; it defeats the purpose of paying for the faster connectivity.

I don’t know how typical these results are. One place I recently worked at was plagued with wireless problems; performance was so terrible and unreliable. They upgraded the network, and I wasn’t able to maintain a connection for more than two minutes, which sucks for SSH. To make matters worse, the vast majority of their LAN was in fact wireless; there was very little cable infrastructure in the office. Smart people hooked up switches and stuff for their own tables, which made things more usable, though still a far cry from optimal.

In a world where populations are getting ever more dense and technology continues to penetrate, driving more deployments of wifi, I suspect interference problems will only get worse.

I’m sure it’s great if the only APs within range are your own, if you live or work at a place that is big enough. But small/medium businesses frequently won’t be so lucky, and if you live in a condo or apartment like me, ouch…

My AP is not capable of operating in the 5 GHz range (802.11a/n), which could very well be significantly less congested. I don’t know if it is accurate or not, but wifi radar claims every AP within range of my laptop (47 at the moment) is 802.11g (same as me). My laptop’s specs say it supports 802.11b/g/n, so I’d expect that if anyone around me was using N then wifi radar would pick it up, assuming the data being reported by wifi radar is accurate.

Since I am moving in about two weeks I’ll wait till I’m at my new apartment before I think more about the possibility of going to an 802.11n-capable device for reduced interference. On that note, do any of my 3-4 readers have AP suggestions?

Hopefully my new place will get better 4G wireless coverage as well; I already checked the coverage maps and there are two towers within one mile of me, so it all depends on the apartment itself and how much interference is caused by the building and stuff around it.

I’m happy I have stuck with ethernet at home for as long as I have, and will continue to use ethernet at home and at work wherever possible.

June 22, 2011

Buy your HP Touchpad at…a Furniture store?

Filed under: Random Thought — Tags: , , — Nate @ 10:53 am

I’ve been a fan of WebOS since I got my Pre. Contrary to what some may believe, I had never owned a Palm product prior to that, although I do have a pair of Handspring Visors, which were pretty much a better Palm than Palm back in the day; eventually Handspring got bought out by Palm.

I’ve been awaiting the release of both the HP TouchPad and the Pre 3 for some time now. I almost pre-ordered the Touchpad, then figured I would just go buy it at a local store when it comes out. Like most people, I am not expecting a line around the block of people waiting to buy it on the first day like your typical Apple product.

For no particular reason I was browsing the Palm site (aka hpwebos.com) and saw a list of the official places you can pre order the Touchpad.

Much to my surprise, one of them was what seems to be a big furniture store out in Nebraska: Nebraska Furniture Mart – America’s Largest Home Furnishings Store.

I’ve never been to that part of the country so maybe it’s not uncommon, maybe it’s the only place people have in Nebraska to buy electronics from?

I mean, of all the places to sell some new piece of technology. I realize now, after looking at their site, that they have an electronics section (which I can’t view because I declined their cookie requests), but still, of all the places to launch a product…

I look at the TouchPad as mainly a toy, something to play with; maybe I’ll find some good uses for it for work, I’m not sure. It would be nice to see support for wide-ranging VPN options, as well as perhaps native versions of various HP management tools (looking at you, 3PAR). To those out there who say you’re better off with a notebook or netbook: I agree. I already have a netbook and a notebook.

June 15, 2011

Farewell Seattle!

Filed under: Random Thought — Nate @ 4:33 pm

Long story short: I’m moving back to California (this time the Bay Area) on or about July 23rd.

I moved to the Seattle area a bit over twelve years ago from Orange County, CA, on a leap of faith for a potential job opportunity. That opportunity fell through, but that leap of faith turned out to be the best life choice I have ever made. It was a big risk moving so far away from family and stuff at that age, with my lack of job experience or higher education. I had some local friends who helped me get started, and the rest is history. I can only hope my move to the Bay Area will bring me an equal number of opportunities (I already have a great start and I’m not even there yet).

Being here has been a fun ride. I learned such a massive amount I’m still trying to absorb it all, met a lot of awesome people, and kept as many of them as I could as friends (I am terrible at keeping in touch; thankfully I have LinkedIn to keep that aspect handled).

Now it’s time for another change in my life, this time returning to near where I grew up (Santa Cruz County in California), which I left in 1989 when I moved to live in China, then Hawaii, then Thailand.

This time I’ll be in the north part of the bay, just south of San Francisco, about a 1.5 hour drive from where I spent most of my childhood.

I will miss Washington, but at the same time am pretty excited for something new. I don’t doubt I’ll be back at some point in the future.

(ok to be honest I hate Seattle, but I do like the east side, having lived in Bellevue for almost all of my 12 years here).

The company I am working for down there generated more revenue in 2010 than all of my previous companies combined (7 companies over the past 12 years) generated in any given year, and may very well be the first profitable company I’ve ever worked for, so I’m pretty excited about that aspect as well as the team I’ll be working with.

Could not stream Netflix HD for months – solved

Filed under: Random Thought — Tags: , — Nate @ 10:58 am

I have been a Netflix subscriber for a couple of years now but really haven’t been using it much; I can’t find much on it that I’m interested in watching.

One issue that cropped up several months ago was that I was no longer able to stream in HD. No matter what various “internet speed tests” reported, Netflix always resorted to SD streams. Most recently speedtest.net reported my pipe as having 27 Mbps of throughput.

Since I don’t use it that much I didn’t care too much, and just stopped streaming stuff for a while (I stream to my Tivo Series 3). Today I decided to dig a little deeper. There wasn’t much help on the Netflix site, and calling them was not too helpful; they just suggested I ask my ISP to perform a longer-running test to see if the connection was stable, and reboot the modem.

Before trying that though (well, I did reboot the modem, to no avail), I decided to run tcpdump on my firewall to see where the Tivo was sending its packets, and then use something like mtr to measure latency to that destination.

Within seconds I noticed my Tivo was sending packets to a Limelight node in Miami, not exactly next door to the Seattle area where I am. Sure enough, the Miami node is 16 hops away and right around 100 milliseconds of latency.

Why was this going there?! Well, it has to be related to DNS: at some point I started forwarding all of my DNS traffic to my personal virtual server (the same one that runs this site), which is run out of Miami. So Limelight must be using BGP anycast for their DNS, which is common among other global DNS providers, but it ended up biting me in the ass.
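An easy way to see this kind of behavior for yourself is to ask two different resolvers for the same CDN hostname and compare the answers; location-aware DNS will usually hand back different nodes depending on where the resolver sits. Here is a minimal Python sketch, assuming the dnspython package is installed; the hostname and resolver addresses are just placeholders, not the actual Limelight or Comcast ones.

```python
# Compare the A records two different resolvers hand back for the same CDN name.
# Geo/anycast-aware DNS will often return different (closer or farther) nodes
# depending on where the resolver itself is located.
import dns.resolver  # pip install dnspython (>= 2.0; older versions use resolver.query())

HOSTNAME = "cdn.example.com"  # placeholder for the CDN hostname seen in tcpdump
RESOLVERS = {
    "local ISP resolver": "192.0.2.1",     # placeholder: the Comcast-assigned resolver
    "remote VPS resolver": "203.0.113.1",  # placeholder: the Miami-hosted server
}

for label, server in RESOLVERS.items():
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    try:
        answer = resolver.resolve(HOSTNAME, "A")
        ips = ", ".join(r.address for r in answer)
    except Exception as exc:
        ips = "lookup failed: %s" % exc
    print("%-22s -> %s" % (label, ips))
```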

I originally was routing all of my DNS traffic over to my personal system (across a VPN, no less) because I don’t know what kind of crap might go on with my consumer broadband connection from Comcast (at one point I remember some ISPs doing funky things with negative DNS responses, for example). Probably nothing, but I thought, what the hell, why not: the VPN is already in place, I’m already running local caching name servers as well as a remote caching name server (not the same name server that hosts my domains externally, those are different), and it’s one line in a config file to forward the traffic.

Well, now I know why not. At some point I may invest the time to figure out how to send Netflix DNS traffic to a local resolver and have the rest go to my server, but for now I’m not going to spend the time.

Once I disabled forwarding of DNS packets to my remote system and restarted my name server to flush the cache, the Tivo started using a Seattle Limelight node, the hops dropped to 10, latency dropped to around 15 milliseconds, and HD streaming was possible once again.

It’s also gotten me wondering how many other services I use may have been impacted by routing my DNS traffic 3,000 miles away. Other than Netflix I have not noticed any ill effects, though the amount of data that traverses my connection is pretty minimal (62GB of data from the beginning of March until June 15th according to Comcast, and that includes a pretty big backup I did of my personal server to my local network a few weeks ago).

April 7, 2011

AT&T+T-Mobile – Spectrum

Filed under: Random Thought — Tags: — Nate @ 5:45 am

One of the bigger things announced recently was that AT&T and T-Mobile are going to try to merge.

I’m watching an interview on CNBC with some bigwig from a company called Evercore Partners, who is apparently an advisor on this merger.

Anyways, one of the main arguments the guy tried to make (along with AT&T) is spectrum: that there isn’t enough spectrum for all of the wireless data out there, and this merger will somehow make more spectrum magically appear.

First off, I have said for a long time that edge data technology (wired or wireless) just doesn’t scale, period. And when I say scale I mean cost-effectively scale; you can make it go faster in many cases, but then it becomes too expensive for almost everyone out there, limiting the market opportunities (of course there are those who expect and demand gigabit speeds to their home for $20/mo).

This guy seems to forget that there are tens of millions of people using this spectrum already; giving it, and the customers, to AT&T really isn’t going to have much of an impact from a spectrum standpoint. They may be able to drive higher capacity utilization, so maybe they get an extra 10-25% out of it by segmenting their network better in some way, but the bandwidth available in that spectrum is going to be eaten up so fast customers won’t even notice it was there to begin with.

Both AT&T and T-Mobile have very large footprints in the Seattle area, and with all of the job cuts expected during the merger I suspect it will have a harmful impact on the local economy here.

The only good thing about this merger is that at least AT&T picked a compatible technology to merge with (GSM to GSM), unlike the Sprint Nextel merger, which of course combined about as polar-opposite technologies as you can get.

I, like many, believe the merger will hurt competition, specifically because T-Mobile has a somewhat unique position in the market from a pricing standpoint, and being a national carrier they have a lot of coverage. AT&T tries to bring up all these small regional companies as evidence of competition, but in the grand scheme of things they are just the scraps on the plate. I can’t help but assume the merger will result in T-Mobile plans turning into AT&T plans at some point.

What we need is something like subspace communications from Star Trek, where data rates are a billion times faster than they are today; that would give us enough buffer to grow into.

My solution to the bandwidth crunch on mobile? Broadcast TV. Want streaming video on your phone? Stream it from the local TV stations in your area via a digital antenna (i.e. don’t use the phone network, don’t use wifi). I’m not aware of phones that have this ability at this point, though. Don’t have the content you’re looking for? Oh well.

March 14, 2011

Right vs Privilege for Broadband

Filed under: Networking,Random Thought — Tags: — Nate @ 8:38 am

I wrote about this a while back; at the time the topic was AT&T imposing caps on mobile data plans, so I won’t go into all the same arguments again.

But this time it is AT&T imposing caps on their various broadband plans. I don’t know whether to laugh at or feel sorry for some of these people (see the comments on the site) who believe they have a right to maximum performance and unlimited bandwidth for a few bucks a month.

**** YOU and your troll crap. DONT BE MAD BECAUSE IM TELLING THE TRUTH YOU AT&T DRONE. YOU WEEP FOR THE COMPANY WHO HAS MORE MONEY THAN THE U.S TREASURY. GET YOUR HEAD EXAMINED.

IF YOUR CRAPPY DSL IS SLOW IT’S BECAUSE YOUR ISP IS TOO CHEAP TO UPGRADE TO THE NEEDS OF THE WORLD IN 2011!!!!

This post really is funny

If only two percent of people are affected, why do you feel the need to screw the rest of the 98%?!

As a Comcast broadband customer I have a 250GB/month cap. I have no doubt, though, that I fall far short of it; I’d be surprised if I do more than 20GB a month (that is with occasional Netflix streaming, though these days I can’t find anything I want to stream on Netflix; I’ve watched one or two things in the past month). UPDATE – I forgot Comcast does have a bandwidth meter you can check, so I got my account info and checked it out. I wonder where I stand as a percentile of their customers – low usage on average? Medium?
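For a bit of perspective on how big 250GB really is, here is a quick back-of-the-envelope calculation, assuming a 30-day month and decimal gigabytes:

```python
# Back-of-the-envelope: what sustained rate would it take to hit a 250 GB monthly cap?
cap_gb = 250
seconds_per_month = 30 * 24 * 3600
avg_mbps = cap_gb * 1000**3 * 8 / seconds_per_month / 1e6
print("Hitting %d GB/month means averaging %.2f Mbit/s, 24x7" % (cap_gb, avg_mbps))
# -> roughly 0.77 Mbit/s around the clock
```

Sustaining roughly three quarters of a megabit per second around the clock is a lot for a residential connection, which is probably why so few customers ever come near these caps.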

I do run a server in the Terremark cloud as well, so I checked out the bandwidth on it too (it hosts this blog along with my email services, other web sites, etc.):

  • February – 6GB data transfer (I assume they charge on inbound and outbound transfers?)
  • January – 2GB data transfer
  • December – 2GB data transfer

The world is built on oversubscription; that’s a big driver in keeping costs low. Whether it’s bandwidth, phone/mobile call capacity, or even your local grocery store.

I for one think AT&T’s plan is very reasonable: they will charge you $10 per 50GB over the limit, so an extra 500GB of data transferred would cost $100. They will also provide notifications when you hit certain levels of the cap.

The big mistake all of these providers made was of course to offer unlimited plans in the first place.

March 3, 2011

No.

Filed under: Random Thought — Tags: — Nate @ 2:01 pm

Here’s some more color to my blog.

 

I saw this on LinkedIn a few minutes ago and couldn’t help but laugh.


 

March 2, 2011

Innovation Unleashed

Filed under: Random Thought — Tags: — Nate @ 11:13 pm

This really has nothing to do with IT, but it does have to do with innovation, and my three readers know I like innovation, whether it is in IT systems or other technology.

So, in a nutshell, I bought a new car this past weekend. I’m very happy and excited about it; it’s really the first new car I have owned, as past vehicles I’ve always bought used.

The tag line for the car is Innovation Unleashed.

My previous vehicle had 113,000 miles on it and was 10 years old. The check engine light seemed to be coming on once every 3-4 months and I was getting tired of it. Bottom line: if I knew how much it was going to cost to maintain for the next two years I would be happy, but for all I knew it might be another $5k in repairs and parts. I don’t know, I’m not a car guy.

So a couple of weeks ago the check engine light came on again and I started thinking about the possibility of a new car. I wanted:

  • Something that I could fit into; I need leg room, I’m not a small person
  • Something that was smallish on the outside so it’s easier to park than my previous SUV
  • An SUV of sorts; I didn’t want to have to seriously climb down into a really low-riding car
  • Something that was more fun to drive (I want more speed for passing)

So after some research, and a few test drives…

I decided on the 2011 Nissan Juke SV. There is a good video-review of it here.

Those are generic pictures, not of my car specifically.

Innovations in the Juke

First off, let me start by saying cars have come a long way since I last really looked at them; I mean, features that I would have expected to be on $50,000+ cars seem standard on cars that cost half as much.

Torque Vectoring All wheel drive

That just sounds cool, doesn’t it? Anyways, I learned something new from this buying experience (again, I’m not a car person so I don’t keep up to date on this stuff). Traditional all wheel drive systems transfer power between the front and back wheels to increase traction. That much I knew, of course.

Torque vectoring all wheel drive goes one step further: in addition to front and back, it can control power side to side as well, so individual wheels can have their power levels adjusted for maximum control. By default the car tries to stay in 2WD mode to improve fuel economy, but of course automatically switches to 4WD/AWD when it feels a disturbance in the force and needs more traction.

Here is a video that shows it in action.

This really does make it pretty fun to drive; you can make some crazy tight turns and it doesn’t seem to lose any grip.

In between the gauges for MPH and RPM is a dynamic LCD, which has many modes, one of which shows real-time information on which wheels are getting power applied to them. You can see that when making tight turns, one wheel typically gets almost all of its power removed, and the others get more.

I-CON System

The I-CON, or Integrated Control, system is just below the stereo/navigation system; it is a really neat way to control the car and is very easy to use.

The same set of controls manages two different modes: either climate controls, or driver controls which change how the car performs. The same buttons and interface are used and the functions change seamlessly at the touch of a button; here is part of a video of it in action.

Climate controls are pretty typical, hot/cold, fan speed etc. The graphics on the LCD are neat to see though.

Driver mode is a bit different though; there are 3 modes: Normal, Sport, and Eco. Changing modes adjusts a few things dynamically in the car to suit either sportier driving or better fuel economy. Me, at the moment I like Sport mode; the only downside is the car defaults to Normal whenever it starts, so I have to manually switch it to Sport each time, as it doesn’t remember the last mode it was in.

Then there are more, how can I say, cosmetic things the LCD displays, such as:

  • Boost level (the car has a turbo boost)
  • Torque level
  • Fuel economy information (MPG over the last X number of starts, or last X number of days etc)
  • G-force information

Performance and Fuel Economy

The car has a 4-cylinder, direct-injected, turbocharged gas engine. To me that says it has a smaller engine for fuel economy, but has the turbocharger for performance, so you get a balance of both. It really works well.

The official specs are 188 horsepower and 177 pound feet of torque. If only they could give me a number that measured performance in IOPS…

Fuel economy for the AWD version is 25 city / 30 highway; the FWD version gets slightly better economy. My previous vehicle got 12 city / 17 highway, so I’m coming out ahead in either case! Not that fuel economy is at the top of my list of priorities.

Other misc features

It has a standard (but to me fancy, since I don’t think I’ve ever used such a system before) keyless entry and operation system with a push-button starter. I put a big fancy audio system in it with multiple amps and a subwoofer (which turned out a lot bigger than I expected, and got a custom fiberglass enclosure), plus a high-end navigation system (which is Windows based – it’s already crashed on me once and I had to turn the car off and on again to reboot it; there might be another way to reboot it, I’m not sure) and an aftermarket backup camera (again, heard of them, never used one before).

It comes standard with a CVT, or Continuously Variable Transmission, which does not have traditional gears; instead it has hundreds (thousands?) of smaller gear ratios or something, which provides for smoother shifting and such. AWD models are automatic only; a manual transmission is not available. But even the automatic version has a manual mode, which emulates a six-speed transmission. The only thing lacking is paddle shifters… some day hopefully someone will come out with some. I do prefer a manual, but if I have to make a choice between AWD and a manual I’ll take AWD. My last vehicle didn’t have the best traction (even with new tires) on slick surfaces.

It comes standard with 17″ wheels.

The transmission comes standard with a 120,000 (or is it 110,000?) mile / 10 year warranty; I opted for the 100,000 mile / 7 year extended warranty as well. Cars are so complicated now, and given this is the first model year for this car and it has a lot of brand new things, who knows what might break in the coming years or how much it’ll cost to fix.

How it drives

It’s a mean little car with some solid power to it. I haven’t pushed it too hard yet; the manual says to keep it under 4,000 RPM for the first 1,200 miles (it has about 400 on it now), so I’m doing my best to stay under 4,000 RPM. Sometimes that’s unavoidable with the turbo, though: since there is some lag before the turbo kicks in, the RPMs tend to spike really high, so I try to slow down quickly so it doesn’t stay above 4k RPM for more than a couple of seconds.

I can’t help but think I’m driving a cross between a Prius and a Porsche.

The sound system is pretty amazing, and the navigation system is nice. I have no sense of direction so navigation is a must; for the past few years I have been using Sprint Navigation on my various phones, which gets the job done but is certainly not as nice as an in-dash unit, especially one that doesn’t rely on a 3G signal (that has screwed me up on Sprint Navigation more than once, since it requires 3G connectivity to get map data).

It has a really tight turning radius, and is significantly shorter than other SUV-type vehicles on the road, which makes it easy to park. Despite its small exterior it has a lot of space in the front seats. The back seats are cramped, as is the trunk, but all I really care about is the front seats.

What’s missing

Nothing is perfect, and Juke is no exception, there are a few minor things I would like to see:

  • A paddle shifter option (mentioned above)
  • Some sort of compartment to put sunglasses in
  • An arm rest for the driver’s right arm

Only Complaint

This is not about the car itself but rather the process of buying it. For the most part it went very smoothly and I was very happy with the service I got. When trading in my previous vehicle, the sales rep came back and said there was an accident reported on my vehicle by Carfax and that it would lower the resale value. I asked him: What? Why is there an accident reported? He said there just is, so I asked to see the report, and there it was.

I bought the vehicle used in late January 2004 in Washington. I have traveled to Washington, Oregon, California and Arizona in it, that’s it.

So you can imagine my surprise when he said there was an accident reported on my vehicle in New York. In 2008. I’ve never been to New York, and I never intend to visit that city in my lifetime (too crowded). So I was kind of confused. I owned the car in 2008, and it never got further east than Arizona.

I of course ran a carfax report when I bought the vehicle in 2004, and it came back clean. So I naturally wasn’t too happy.

It turns out that my vehicle came from New Jersey and was sold at an auction in 2004 in the northwest. So I can only assume, for some really stupid #$%@ reason, that someone decided to wait 4-5 years before reporting it to whatever system Carfax uses to get its data. I mean, I can understand a few months, six months, maybe a year, but practically half a decade? That’s not right. Maybe it was a mistake, I don’t know. It cost me about $1000 in value though.

I’m sure there may be things I could have done to contest it, but I just wanted to be done with the whole situation, so I said screw it, I don’t care, just put it behind me and move on. So I did.

Overall

Overall I am very satisfied with the Juke so far (I’ve only been driving it for 4 days now). It is a good value (the base price of the version I got is roughly $24,000), it’s small enough for easy parking, and it has good space up front (I compared it to much larger SUVs and it has comparable or even better space, for the driver’s seat at least). I can’t wait to take it on some kind of road trip, at least a couple hundred miles; that will be fun.

While I have seen some comments online about how some people hate the way it looks (for whatever reason), I think it looks fine, and so far everyone I have come across really likes it as well, so I wouldn’t be surprised if it became a really successful model for Nissan, especially given its low cost.

The Juke looks even meaner at night, with the various gauges and the illuminated kick plates that have the Juke logo.

At the moment Jukes are made only in Japan and imported to the U.S., and supplies are tight. In fact, there was only one Juke SV without a navigation system (remember, I put in an aftermarket system) in the entire northwest region – the one I bought. It came from somewhere in Oregon; they managed to get it to the dealership here in a matter of hours and I picked it up the next day. My dealership didn’t even have an AWD model to test drive, so my test drives were only in FWD.

February 24, 2011

So easy it could be a toy, but it’s not

Filed under: General,Random Thought — Tags: — Nate @ 8:44 pm

I was at a little event thrown for the Vertica column-based database, as well as Tableau Software, a Seattle-based data visualization company. Vertica was recently acquired by HP for an undisclosed sum. I had not heard of Tableau until today.

I went in not really knowing what to expect; I have heard good things about Vertica from my friend over there, but it’s really not an area I have much expertise in.

I left with my mouth on the floor. I mean, holy crap, that combination looks wicked: combining the world’s fastest column-based data warehouse with a data visualization tool that is so easy some of my past managers could even run it. I really don’t have words to describe it.

I never really considered Vertica for storing IT-related data, but they brought up a case study with one of their bigger customers, Comcast, who sends more than 65,000 events a second into a Vertica database (including logs, SNMP traps and other data). Hundreds of terabytes of data with sub-second query response times. I don’t know if they use Tableau’s products or not, but it was a good use case for storing IT data in Vertica.

(from the Comcast case study)

The test included a snapshot of their application running on a five-node cluster of inexpensive servers with 4 CPU AMD 2.6 GHz core processors with 64-bit 1 MB cache; 8 GB RAM; and ~750 GBs of usable space in a RAID-5 configuration.
To stress-test Vertica, the team pushed the average insert rate to 65K samples per second; Vertica delivered millisecond-level performance for several different query types, including search, resolve and accessing two days’ worth of data. CPU usage was about 9%, with a fluctuation of +/- 3%, and disk utilization was 12% with spikes up to 25%.

That configuration could of course easily fit on a single server. How about a 48-core Opteron with 256GB of memory and some 3PAR storage or something? Or maybe a DL385 G7 with 24 cores, 192GB of memory (24x8GB), and 16x500GB 10k RPM SAS disks in RAID 5, with dual SAS controllers each having 1GB of flash-backed cache (one controller per 8 disks). Maybe throw some Fusion-io in there too?

Now I suspect that there will be additional overhead in feeding IT data into a Vertica database, since you probably have to format it in some way first.
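As far as I know, column stores like Vertica are typically bulk-loaded from delimited files, so the formatting step might look something like this rough Python sketch. The syslog pattern and column layout are just assumptions for illustration, not anything Vertica or Comcast actually prescribes.

```python
# Hypothetical sketch: turn syslog lines into a delimited file suitable for a
# bulk loader (e.g. a COPY statement). The regex and columns are assumptions.
import csv
import re
import sys

# Matches the classic syslog format: "Jun 15 10:58:01 hostname program[pid]: message"
SYSLOG_RE = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d+\s+\d{2}:\d{2}:\d{2})\s+"
    r"(?P<host>\S+)\s+"
    r"(?P<program>[^\[:]+)(?:\[(?P<pid>\d+)\])?:\s+"
    r"(?P<message>.*)$"
)

def syslog_to_rows(lines):
    for line in lines:
        m = SYSLOG_RE.match(line.rstrip("\n"))
        if m:
            yield [m.group("timestamp"), m.group("host"),
                   m.group("program").strip(), m.group("pid") or "", m.group("message")]

if __name__ == "__main__":
    # Usage: python syslog_to_csv.py < /var/log/syslog > events.csv
    writer = csv.writer(sys.stdout)
    writer.writerow(["timestamp", "host", "program", "pid", "message"])
    for row in syslog_to_rows(sys.stdin):
        writer.writerow(row)
```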

Another really cool feature of Vertica: all of its data is mirrored at least once to another server. Nothing special about that, right? Well, they go one step further and give you the ability to store the data pre-sorted in two different ways, so mirror #1 may be sorted by one field and mirror #2 by another, maximizing the use of every copy of the data while maintaining data integrity.

Something Tableau did really well, and that was cool to see, is that you don’t need to know how you want to present your data; you just drag stuff around and it tries to make intelligent decisions on how to represent it. It’s amazingly flexible.

Tableau does something else well: there is no language to learn. You don’t need to know SQL, you don’t need to know custom commands to do things; the guy giving the presentation basically never touched his keyboard. And he published some really kick-ass reports to the web in a matter of seconds, fully interactive, so users could click on something and drill down really easily and quickly.

This is all with the caveat that I don’t know how complicated it might be to get the data into the database in the first place.

Maybe there are other products out there that are as easy to use as Tableau; I don’t know, as it’s not a space I spend much time looking at. But this combination looks incredibly exciting.

Both products have fully functional free evaluation versions available to download on their respective sites.

Vertica licensing is based on the amount of data that is stored (I assume regardless of the number of copies stored, but I haven’t investigated too much); there is no per-user, per-node, or per-CPU licensing. If you want more performance, add more servers or whatever and you don’t pay anything more. Vertica automatically re-balances the cluster as you add more servers.

Tableau is licensed as far as I know on a named-user basis or a per-server basis.

Both products are happily supported in VMware environments.

This blog entry really does not do the presentation justice; I don’t have the words for how cool this stuff was to see in action. There aren’t a lot of products or technologies that I get this excited about, but this pair has shot to near the top of my list.

Time to throw your Hadoop out the window and go with Vertica.

