Diggin' technology every day

November 6, 2010

The cool kids are using it

Filed under: Random Thought — Nate @ 8:24 pm

I just came across this video: an animated short in which a PHP web developer rants to a psychologist about how stupid the entire Ruby movement is. It's really funny.

I remember being in a similar situation a few years ago. The company had a Java application that drove almost all of its revenue (90%+), and a Perl application acquired from a roughly two-person company, which they were busy trying to rewrite in Java.

Enter stage left: Ruby. At that point (sometime in 2006/2007) I honestly don't think I had ever heard of Ruby before. But a bunch of the developers really seemed to like it, specifically the whole Ruby on Rails thing. We ran it on top of Apache with FastCGI. It really didn't scale well at all (for fairly obvious reasons that are documented everywhere online). As time went on the company lost more and more interest in the Java applications and wanted to do everything in Ruby. It was cool (for them). Fortunately scalability was never an issue for this company, since they had no traffic. At their peak they had four web servers, which on average topped out at about 30-35% CPU.

It was a headache for me because of all the modules they wanted installed on the system, and I was not about to use "gem install" for them (that may be the "Ruby way", but I won't install directly from CPAN either, BTW); I wanted proper version-controlled RPMs. So I built them, for the five operating platforms we supported at the time (CentOS 4 32/64-bit, CentOS 5 32/64-bit, and Fedora Core 4 32-bit; we were in transition to CentOS 5 32/64-bit). Looking back at my cfengine configuration file, there were a total of 108 packages I built while I was there to support them, and it was not a quick task.

Then add the fact that they were running on top of Oracle (which is a fine database, IMO), mainly because that was what they already had running with their Java app. But using Oracle wasn't the issue; the issue was that their Oracle database driver didn't support bind variables. If you have spent time with Oracle you know this is a bad thing.

We used a hack that involved setting a per-session variable in the database to force bind variables to be enabled. This was OK most of the time, but it did cause major issues for a few months when a bad query got into the system, threw the execution plans out of whack, and caused massive latch contention. The fastest way to recover the system was to restart Oracle. The developers, and my boss at the time, were convinced it was a bug in Oracle. I was convinced it was not, because I had seen latch contention in action several times in the past. After a lot of debugging of the app and the database, in consultation with our DBA consultants, they figured out what the problem was: bad queries being issued from the app. Oracle was doing exactly what it was told to do, even if that meant causing a big outage. Latch contention is one of the performance limits of Oracle that you cannot solve by adding more hardware. It can seem like you can at first, because the symptoms are that throughput drops to the floor and CPUs go to 100% usage instantly.
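The post doesn't name the exact setting, but a common form of this per-session hack (an assumption on my part, not something the post confirms) is forcing Oracle's CURSOR_SHARING parameter to FORCE for the application's sessions via a logon trigger, so literal-laden SQL gets rewritten to use system-generated bind variables:

```sql
-- Hypothetical sketch: force cursor sharing for every session of the app user.
-- CURSOR_SHARING=FORCE makes Oracle replace literals with system-generated
-- bind variables, approximating what a driver with real bind support would do.
-- APP_USER is a placeholder for the application's schema name.
CREATE OR REPLACE TRIGGER force_bind_vars
  AFTER LOGON ON DATABASE
BEGIN
  IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'APP_USER' THEN
    EXECUTE IMMEDIATE 'ALTER SESSION SET cursor_sharing = FORCE';
  END IF;
END;
/
```

As the story above illustrates, this only approximates real bind variables; a bad query can still wreck the execution plans and trigger latch contention.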

At one point, to try to improve performance and get rid of memory leaks, I migrated the Ruby apps from FastCGI to mod_fcgid, which has a built-in ability to automatically restart its processes after they have served a given number of requests. This worked out great and really helped improve operations. I don't recall if it had any real impact on performance, but with the memory leaks no longer a concern, that was one less thing to worry about.
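A minimal sketch of that process-recycling setup (directive names are from current mod_fcgid; the values here are illustrative, not the ones we actually ran):

```apache
LoadModule fcgid_module modules/mod_fcgid.so

# Recycle each FastCGI process after it has served 1000 requests,
# so slow memory leaks never get a chance to accumulate.
# (Older mod_fcgid releases spelled this directive MaxRequestsPerProcess.)
FcgidMaxRequestsPerProcess 1000
FcgidMaxProcesses          64

AddHandler fcgid-script .fcgi
```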

Then one day we got in some shiny new HP DL380 G5s: dual-processor, quad-core, with either 8 or 16GB of memory. Very powerful, very nice servers for the time. So what was the first thing I tried? I wanted to try out 64-bit, to be able to take better advantage of the larger amount of memory. So I compiled our Ruby modules for 64-bit, installed 64-bit CentOS (5.2, I think it was at the time; other production web servers were running 32-bit CentOS 5.2), installed 64-bit Ruby, etc. I launched the apps, and from a functional perspective they worked fine. But from a practical perspective it was worthless. I enabled the web server in production and it immediately started gagging on its own blood: load shot through the roof and requests were slow as hell. So I disabled it, and things returned to normal. I tried that a few more times and ended up giving up and going back to 32-bit. The 32-bit system could handle 10x the traffic of the 64-bit system. I never found out what the issue was before I left the company.

From an operational perspective, my own personal preference for web apps is to run Java. I'm used to running Tomcat myself, but really the container matters less to me. I like WAR files; they make deployment so simple. And in the WebLogic world I liked EAR files (I suspect they're not WebLogic-specific; it's just the only place I've ever used them). One archive file has everything you need built into it, any extra modules included. I don't have to compile anything; I install a JVM and a container, and then drop a single file to run the application. OK, maybe some applications have a few config files (one I used to manage had literally several hundred XML config files; poor design, of course).
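That deployment model really is a one-file drop. A sketch, with placeholder paths and a stand-in WAR file (a real deployment points CATALINA_HOME at the actual Tomcat root and copies the real build artifact):

```shell
# Stand-in for a real Tomcat install root; a real deployment would
# use the actual CATALINA_HOME instead of a temp directory.
CATALINA_HOME=${CATALINA_HOME:-$(mktemp -d)}
mkdir -p "$CATALINA_HOME/webapps"

# Stand-in for the WAR produced by the build (name is illustrative).
touch myapp.war

# The whole deployment: drop one archive in place. Tomcat notices it
# on its next autodeploy scan and unpacks and starts the application.
cp myapp.war "$CATALINA_HOME/webapps/"
ls "$CATALINA_HOME/webapps"
```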

Maybe it's not cool anymore to run Java, I don't know. But seeing this video reminded me of those days when I did have to support Ruby on production and pre-production systems. It wasn't fun, or cool.


  1. Love to understand more about why you don’t like Ruby Gems, and what benefit you see in compiling. Also, how are you handling this compilation into RPMs?

    Would love to see a blog post about this or pick your brain, as I am supporting a RoR + Mongo implementation… that we're currently in the process of migrating to Chef for deployment.

    Comment by Justin — November 11, 2010 @ 11:37 pm

  2. It's not the gems themselves, just how they are managed (outside the package manager of the OS). I can write an article on how I built the RPMs; it wasn't pretty. The bulk of the RPM-specific work was done using alien, which can convert tarballs into RPMs.
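    A sketch of that tarball-to-RPM flow (the gem name, version, and paths are all illustrative, not taken from the post; the alien step is shown as a comment since it requires the alien package to be installed):

    ```shell
    # Stage the gem's files into a buildroot laid out like the target filesystem.
    BUILDROOT=$(mktemp -d)
    mkdir -p "$BUILDROOT/usr/lib/ruby/gems/1.8/gems/rake-0.8.7"
    # ...copy the gem's files into place under $BUILDROOT here...

    # Roll the staged tree into a tarball whose paths match the final install.
    tar -C "$BUILDROOT" -czf rake-0.8.7.tar.gz .

    # alien then converts the tarball into a version-controlled RPM, e.g.:
    #   alien --to-rpm rake-0.8.7.tar.gz
    # (alien infers the package name and version from the tarball filename)
    ```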

    I did the same for many other types of packages; take ESX, for example. I do not use the VMware RPMs for tools or drivers; I build my own. The driver RPMs are built for a specific kernel version and include only the drivers, so they can be installed at the same time a new kernel RPM is installed. With the tools split out from the drivers, the two can be changed independently of each other (which usually results in the tools not being upgraded nearly as often as the drivers). And I strip the tools down significantly, removing as much as I can that I don't need (all those other prebuilt drivers, X11 support, source code, help files, etc.).

    My last drivers RPM came to 1.5MB, and "toolsonly" came to 26MB, vs. 49MB for the VMware version.

    But the main purpose for this was the drivers, making sure I had compatible drivers installed at the same time the kernel was installed.

    Comment by Nate — November 12, 2010 @ 8:56 am

  3. […] is written in Ruby, and is very Ruby-centric. I guess you could say I am very biased against Ruby given my past experience supporting Ruby (on Rails) […]

    Pingback by Making the easy stuff hard, the hard stuff possible « — February 3, 2012 @ 5:19 am

  4. […] these two configuration settings are on their own lines? Come on, this is stupid. I go back to this post on Ruby, how I am reminded of it almost every time I use Chef. I had to support Ruby+Rails apps back from […]

    Pingback by Opscode Chef folks still have a lot to learn « — April 9, 2013 @ 8:03 pm
