I just came across this video: an animated short in which a PHP web developer rants to a psychologist about how stupid the entire Ruby movement is. It’s really funny.
I remember being in a similar situation a few years ago. The company had a Java application that drove almost all of its revenue (90%+), and a Perl application they had acquired from a roughly two-person company and were busy re-writing in Java.
Enter stage left: Ruby. At that point (sometime in 2006/2007), I honestly don’t think I had ever heard of Ruby before. But a bunch of the developers really seemed to like it, specifically the whole Ruby on Rails thing. We ran it on top of Apache with FastCGI. It really didn’t scale well at all (for fairly obvious reasons that are documented everywhere online). As time went on the company lost more and more interest in the Java applications and wanted to do everything in Ruby. It was cool (for them). Fortunately scalability was never an issue for this company, since they had practically no traffic: at their peak they had four web servers, which on average topped out at about 30-35% CPU.
It was a headache for me because of all of the modules they wanted to install on the system, and I was not about to use “gem install” to install them (that may be the “Ruby way”, but I won’t install directly from CPAN either, BTW); I wanted proper version-controlled RPMs. So I built them, for the five different operating platforms we supported at the time (CentOS 4 32/64-bit, CentOS 5 32/64-bit, and Fedora Core 4 32-bit, as we were mid-transition to CentOS 5). Looking back at my cfengine configuration file, there were a total of 108 packages I built while I was there to support them, and that was not a quick task.
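For anyone who hasn’t packaged gems this way, the spec files looked roughly like the sketch below. This is a from-memory reconstruction in the gem2rpm style of that era, not one of our actual specs; the gem name, version, and paths are illustrative, and gems with native extensions needed an arch-specific build per platform (part of why five platforms hurt).

    # Illustrative spec for packaging a gem as a normal RPM.
    %define gemname rake
    %define gemdir /usr/lib/ruby/gems/1.8

    Name:          rubygem-%{gemname}
    Version:       0.8.1
    Release:       1%{?dist}
    Summary:       Example gem packaged as a version-controlled RPM
    License:       MIT
    Source0:       %{gemname}-%{version}.gem
    Requires:      rubygems
    BuildRequires: rubygems
    BuildArch:     noarch

    %description
    Installs the .gem into the system gem path at build time, so the
    result is an ordinary RPM that cfengine can manage like any other.

    %install
    mkdir -p %{buildroot}%{gemdir}
    gem install --local --install-dir %{buildroot}%{gemdir} --force %{SOURCE0}

    %files
    %{gemdir}/gems/%{gemname}-%{version}/
    %{gemdir}/specifications/%{gemname}-%{version}.gemspec
    %{gemdir}/cache/%{gemname}-%{version}.gem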
Then add the fact that they were running on top of Oracle (which is a fine database, IMO), mainly because that was what they already had running with their Java app. But using Oracle wasn’t the issue; the issue was that their Oracle database driver didn’t support bind variables. If you have spent time with Oracle you know this is a bad thing. We used a hack which involved setting a per-session parameter in the database to force bind variables to be enabled. This was OK most of the time, but it did cause major issues for a few months when a bad query got into the system, threw the execution plans out of whack, and caused massive latch contention. The fastest way to recover the system was to restart Oracle. The developers, and my boss at the time, were convinced it was a bug in Oracle. I was convinced it was not, because I had seen latch contention in action several times in the past. After a lot of debugging of the app and the database, in consultation with our DBA consultants, they figured out what the problem was: bad queries being issued from the app. Oracle was doing exactly what they told it to do, even if that meant causing a big outage. Latch contention is one of the performance limits of Oracle that you cannot solve by adding more hardware. It can seem like you could at first, because the symptoms look like a capacity problem: throughput drops to the floor and CPU usage instantly spikes to 100%.
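For the curious, the hack looked something like the sketch below. This is a minimal from-memory reconstruction, assuming ActiveRecord with the Oracle adapter; CURSOR_SHARING is, as best I recall, the session parameter involved, so treat the specifics as assumptions rather than our exact code.

    # Somewhere in environment.rb / an initializer of that era
    # (hypothetical placement). Tell Oracle to rewrite literal SQL into
    # bind variables for this session, since the driver wasn't sending
    # binds itself.
    ActiveRecord::Base.connection.execute(
      "ALTER SESSION SET cursor_sharing = FORCE"
    )

The downside, and I suspect part of why one bad query could take down the whole system, is that forced cursor sharing makes queries differing only in their literals share an execution plan, so a single pathological query can drag everything else down with it.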
At one point, to try to improve performance and get rid of memory leaks, I migrated the Ruby apps from FastCGI to mod_fcgid, which had a built-in ability to automatically restart its processes after they had served some number of requests. This worked out great and really helped improve operations. I don’t recall whether it had any real impact on performance, but with the memory leaks no longer a concern, that was one less thing to worry about.
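The relevant knob, if memory serves, looked something like this in the Apache config (a sketch: later mod_fcgid releases prefix the directive with Fcgid, older ones used it without the prefix, and the request count here is illustrative, not what we actually ran):

    # Recycle each FastCGI process once it has served 1000 requests, so
    # a slow memory leak never has time to add up.
    FcgidMaxRequestsPerProcess 1000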
Then one day we got in some shiny new HP DL380 G5s, dual-processor quad-core boxes with either 8 or 16GB of memory. Very powerful, very nice servers for the time. So what was the first thing I tried? I wanted to try out 64-bit, to be able to take better advantage of the larger amount of memory. So I compiled our Ruby modules for 64-bit, installed 64-bit CentOS (5.2, I think it was at the time; the other production web servers were running 32-bit CentOS 5.2), installed 64-bit Ruby, etc. I launched the apps, and from a functional perspective they worked fine. But from a practical perspective it was worthless. I enabled the web server in production and it immediately started gagging on its own blood: load shot through the roof and requests were slow as hell. So I disabled it, and things returned to normal. I tried that a few more times, ended up giving up, and went back to 32-bit. The 32-bit system could handle 10x the traffic of the 64-bit system. I never found out what the issue was before I left the company.
From an operational perspective, my own personal preference for web apps is to run Java. I’m used to running Tomcat myself, but really the specific container matters less to me. I like war files; they make deployment so simple. And in the WebLogic world I liked ear files (I suspect they’re not WebLogic-specific; it’s just the only place I’ve ever used them). One archive file that has everything you need built into it, any extra modules and so on all there. I don’t have to go compile anything; I install a JVM, install a container, and drop a single file to run the application. OK, maybe some applications have a few config files on top of that (one I used to manage had literally several hundred XML config files; poor design, of course).
Maybe it’s not cool anymore to run Java, I don’t know. But seeing this video reminded me of those days when I did have to support Ruby on production and pre-production systems. It wasn’t fun, or cool.