About 2.4 billion people live in “water-stressed” countries such as China, according to a 2009 report by the Pacific Institute, an Oakland, California-based nonprofit scientific research group.
China’s 1.33 billion people each have 2,117 cubic meters of water available per year, compared with 1,614 cubic meters in India and as much as 9,943 cubic meters in the U.S., according to the Food and Agriculture Organization of the United Nations.
Nothing new, really, if you have been paying attention for the past few years. I try very hard to keep this blog as technical as possible no matter how strong my urge is to rant against the government and society in general, but in this case I'll venture a bit outside the technical realm, thanks to the above article mentioning Intel's water-intensive business.
Another water-intensive business is data centers. Perhaps one of the more extreme examples is the SuperNap outside Las Vegas, where one person is quoted as saying it will require millions of gallons of water per day:
"They're in the middle of the desert and will need almost 3 million gallons of water per day for blowdown and evaporation for their 30,000 ton evaporative cooling plant."
While I can't vouch for the sources, take a look at where some people think we are headed as far as global population growth is concerned, and notice the similar trend lines from those in the global warming camp, and even more similar trend lines from those reporting on U.S. debt.
I'll end the tangent here, but you can probably get an idea of where our civilization is headed.
I don't know if it's true or not; I certainly hope so. Indications point to them being out of gas as far as their big growth earlier this decade goes. I watched a report on CNBC a week or two ago where a couple of analysts agreed that Google is going nowhere fast. I don't pay that close attention to them or whatever products they launch, but I do feel a bit more at ease if Google has in fact peaked. I know a lot of people believe they pulled out of China to save face because they had trouble competing, which certainly sounds like a more plausible explanation than their official excuse.
It's been a while since I blogged on anything and had this in my head since I saw it, so wanted to mention the report.
There's been a lot of talk (no thanks to Cisco/EMC) about infrastructure blocks recently. I never liked the concept myself, and I still don't. I think it makes sense in the SMB world, where you have very limited IT staff who need a canned, integrated solution. Companies like HP and IBM have been selling these sorts of mini stacks for years. As for Microsoft, I think they have a "Small Business" version of their server platform which bundles a number of things together as well.
I think the concept falls apart at scale, though. I'm a strong believer in best-of-breed technologies, and what is best of breed really depends on the requirements of the organization. I have my own favorites, of course, for the industries I've been working with/in for the past several years, but I know they don't apply to everyone.
In their most dense configuration, 320 square feet of space consuming approximately 1 megawatt of power can hold either:
- More than 45,000 CPU cores
- More than 29 Petabytes of storage
In both cases you can get roughly 45 kW per rack, while today most legacy data centers top out at between 2 and 5 kW per rack.
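A quick back-of-the-envelope check of those figures. The rack count here is my own inference from the quoted numbers (1 MW at ~45 kW per rack), not something from SGI's spec sheet:

```python
# Sanity-check the density figures quoted above.
# The implied rack count is an assumption derived from the numbers,
# not an official SGI specification.
total_power_kw = 1000   # ~1 megawatt for the container
per_rack_kw = 45        # quoted per-rack power
floor_sqft = 320        # quoted footprint
cores = 45000           # quoted CPU core count

racks = total_power_kw / per_rack_kw
print(f"Implied rack count: {racks:.1f}")                            # ~22.2
print(f"Power density: {total_power_kw / floor_sqft:.2f} kW/sqft")   # ~3.13
print(f"Cores per rack: {cores / racks:.0f}")                        # ~2025
```

At roughly 3 kW per square foot, that is an order of magnitude beyond the 2-5 kW *per rack* of a typical legacy facility.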
Stop and think about that for a minute: think about the space, think about the density. 320 square feet is smaller than even a studio apartment, though in Japan it may be big enough to house a family of 10-12 (I hear space is tight over there).
How's that for an infrastructure block? And yes, you can stack one on top of another.
ICE Cube utilizes an ISO standard commercially available 9.5' x 8' x 40' container. SGI intentionally designed the offering such that the roof of the container is clear of obstruction and fully capable of utilizing its stacking container feature. Because of this, SGI is positioned to supply a compelling density multiplier for future expansion of the data center. If installed in a location without overhead height restriction the 9.5' x 8' x 40' containers in our primary product offering can be stacked up to three-high, thus allowing customers to double or triple the per square foot density of the facility over the already industry-leading density of a single ICE Cube.
All of this made me think of a particular scene from a '80s movie.
Really makes those other blocks some vendors are talking about sound like toys by comparison, doesn't it?
Not much to report. I got my first bill for my first "real" month of usage (minus DNS; I haven't gotten around to transferring it yet, but I do have the ports opened).
$122.20 for the month which included:
- 1 VM with 1VPU/1.5GB/40GB - $74.88
- 1 External IP address - $0.00 (which is confusing I thought they charged per IP)
- TCP/UDP ports - $47.15
- 1GB of data transferred - $0.17
Kind of funny that the one thing charged as I use it (the rest being charged as I provision it) is the one I pay less than a quarter for. Obviously I slightly overestimated my bandwidth usage. And I'm sure they round up to the nearest GB, as I don't believe I even transferred 1 GB during the month of April.
I suppose the one positive thing, from a bandwidth and cost standpoint, is that if I ever wanted to route all of my internet traffic from my cable modem at home through my VM (over VPN) for paranoia or security purposes, I could. I believe Comcast caps bandwidth at ~250 GB/mo or so, which would be about $42/mo assuming I tapped it out (but believe me, my home bandwidth usage is trivial as well).
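The worst-case number comes straight from the billed rate above; the 250 GB cap is my recollection of Comcast's limit, not an official figure:

```python
# Worst-case monthly transfer cost if all home traffic were routed
# through the VM. Rate is from the April bill; the Comcast cap is
# an assumption from memory, not an official number.
rate_per_gb = 0.17      # $0.17 billed for 1 GB transferred
comcast_cap_gb = 250    # approximate monthly cap

worst_case = comcast_cap_gb * rate_per_gb
print(f"Worst-case transfer cost: ${worst_case:.2f}/mo")  # $42.50/mo
```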
Hopefully this coming weekend I can get around to assigning a second external IP, mapping it to my same DNS and moving some of my domains over to this cloud instead of keeping them hosted on my co-located server. Just been really busy recently.
OK, probably going further out on a limb here but for some reason the idea came to my head and I thought it would be a funny concept.
With all these new things coming up around container based data centers, there still remains a problem that needs to be solved - where do you get the power, cooling and networking.
So I imagined a trailer park of sorts for data centers, where companies could drive their container data centers (which can hold well over one thousand systems per container), plug them into a network jack, and get power and a water feed.
Data centers of the future may end up being giant parking lots (above or below ground) with some sort of industrial-grade, easy-to-use connectors for plug-and-play containers. Maybe it goes even further and you are billed automatically on just what you use. An Ethernet jack, or perhaps a wireless connection at the site, would let you authenticate to the facility and provision bandwidth and IP addresses, then plug in and turn on. The system would automatically meter the amount of water you draw, and perhaps even monitor the temperature of the return water feed (those that return it cooler get charged less). And of course you'd pay per kWh for power, plus a flat rate fee for parking.
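The billing idea above can be sketched in a few lines. Every rate, and the cooling-discount formula, is made up purely for illustration; nothing here reflects real utility pricing:

```python
# A minimal sketch of the metered "trailer park" billing idea.
# All rates and the discount formula are hypothetical, for illustration only.
def monthly_bill(kwh_used, water_gallons, return_temp_delta_c,
                 parking_flat=500.0, kwh_rate=0.10, water_rate=0.004):
    """Bill a parked container: metered power + metered water + flat parking fee.

    Containers that return their water feed cooler than the supply get a
    discount on the water charge (2% per degree C, capped at 20%).
    """
    power_cost = kwh_used * kwh_rate
    water_cost = water_gallons * water_rate
    discount = min(0.02 * max(return_temp_delta_c, 0), 0.20)
    water_cost *= (1 - discount)
    return power_cost + water_cost + parking_flat

# Example: a container drawing 720 MWh and 1M gallons in a month,
# returning its water 5 degrees C cooler than the supply.
print(f"${monthly_bill(720_000, 1_000_000, 5):.2f}")  # $76100.00
```

The point of the sketch is just that everything the lot provides (power, water, cooling credit, the parking spot itself) is individually meterable, so per-use billing is straightforward.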
Maybe power companies, water treatment facilities (or other common water providers) and carriers team up to provide some sort of common standard or technique for this kind of service.
Then perhaps add in IPv6, which I've read has some good IP mobility features, or maybe you just get some sort of BGP feed where you can advertise your own IPs.
Then say some disaster strikes, like a hurricane or earthquake. The container itself is robust enough to handle it, but if the infrastructure around it is destroyed, you just go pick up your container and take it to another lot.
By the time I got midway through this post, the concept in my mind sounded more feasible than it did when it first came to mind.