TechOpsGuys.com Diggin' technology every day

October 8, 2010

New TechOpsGuy: Scott

Filed under: General — Nate @ 7:46 pm

Well what do you know, I make a passing comment about whether or not someone will join me on this site in the future in response to Robin’s comments, and not long after I get a volunteer. He sounds really good, focusing more on the software end of things than hardware, but from what I have read of him he is similar to me — seeking out best of breed technologies to make his life better.

He seems to be a Linux expert, an automation expert, and knows everything there is to know about open source. He even seems to know networking, having built an Extreme-based network recently (sorry, couldn’t resist).

So welcome, Scott, to the site. I’m sure he will post something introducing himself at some point in the not too distant future.

I hope he knows what he is getting into – the bar is high for publishing content! It does take time to get in the groove; it took me a few months to figure out how to write in this manner.

This site gets far more traffic than I ever really expected. I’m really surprised, impressed, and flattered. I get enormous positive feedback and I really feel this site has done more for my professional career than, well, any job I have had. I’m really glad it’s here and I will do my best to continue posting my thoughts.

Thanks a lot to all of the readers, whether you like what I have to say or not 🙂

I’m getting close to launching my non-technical blog, hopefully this weekend. Can you believe I have so much more to say that I need a second blog? You have probably seen hints of what the new blog will contain in past posts, and those of you who know me well know what I have to say. I will try to keep it as tame and objective as this site (regardless of how I may come across sometimes, I try very hard here to be calm and controlled; those that know me know that in person I am RAW and honest — too raw for some on occasion, I’m sure), but I can’t make promises.

Manually inflating the memory balloon

Filed under: Virtualization — Tags: , — Nate @ 12:10 am

As I’m sure you all know, one of the key technologies that VMware has offered for a long time is memory ballooning to free memory from idle guest OSs in order to return that memory to the pool.

My own real world experience managing hundreds of VMs in VMware has really made me want to do one thing more than anything else:

Manually inflate that damn memory balloon

I don’t want to have to wait until there is real memory pressure on the system to reclaim that memory. I don’t use Windows so I can’t speak for it there, but Linux is very memory greedy: it will use all the memory it can for disk cache and the like.

What I’d love to see is a daemon (maybe even part of vmware-tools) running on the system, monitoring system load as well as how much memory is actually in use. That last number is something many Linux newbies do not know how to calculate; taking the “free” value reported by the “free” or “top” commands at face value is wrong. True memory usage on Linux is best calculated:

  • [Total Memory] – [Free Memory] – [Buffers] – [Cache] = Used memory

I really wish there was an easy way to display that particular stat, because the numbers returned by the stock tools are so misleading. I can’t tell you how many times I’ve had to explain to newbies that just because ‘free’ says there is only 10MB free, it doesn’t mean the box is out of memory; there is PLENTY of RAM when 10 gigs of it are sitting in cache. They say, “oh no, we’re out of memory, we will swap soon!”. Wrong answer.
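
In case it’s useful, here’s a quick sketch of that calculation in Python, reading /proc/meminfo directly. Nothing official, just the formula above turned into a script:

    #!/usr/bin/env python
    # Minimal sketch: compute "true" memory usage as described above, by
    # subtracting buffers and page cache from what the kernel reports as used.
    def meminfo_kb():
        info = {}
        with open('/proc/meminfo') as f:
            for line in f:
                key, rest = line.split(':', 1)
                info[key.strip()] = int(rest.split()[0])  # values are in kB
        return info

    m = meminfo_kb()
    used_kb = m['MemTotal'] - m['MemFree'] - m['Buffers'] - m['Cached']
    print("True used memory: %d MB of %d MB total"
          % (used_kb / 1024, m['MemTotal'] / 1024))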

So back to my request. I want a daemon that runs on the system, watches system load, watches true memory usage, and dynamically inflates that balloon to return the memory to the free pool before the host runs low on memory. So often idle VMs really aren’t doing anything, and when you’re running on high grade enterprise storage, well, you know there is a lot of fancy caching and wide striping going on there; the storage is really fast! Well, it should be. Since the memory is not really being used (it’s sitting in cache that is not being used), inflate that balloon and return it.

There really should be no performance hit. 99% of the time the cache is a read cache, not a write cache, so when you free up the cache the data is just dropped; it doesn’t have to be flushed to disk. (You can use the ‘sync’ command in a lot of cases to force a cache flush and see what I mean; typically the command returns instantaneously.)
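
As far as I know there is no supported way to manually inflate VMware’s balloon from inside the guest, but the guest-side half of the daemon I’m describing is easy enough to sketch: watch the load and the true memory usage, and when the box is idle, sync and drop the page cache so the memory shows up as free for the host to reclaim. A rough sketch with made-up thresholds:

    #!/usr/bin/env python
    # Rough sketch of the guest-side daemon described above: when load is low
    # and a lot of "used" memory is really just page cache, sync and drop the
    # cache. Thresholds are made up; run as root. This only frees the cache;
    # the balloon itself is still driven by the hypervisor.
    import os, time

    def meminfo_kb():
        info = {}
        with open('/proc/meminfo') as f:
            for line in f:
                key, rest = line.split(':', 1)
                info[key.strip()] = int(rest.split()[0])
        return info

    while True:
        load1 = os.getloadavg()[0]
        m = meminfo_kb()
        cache_kb = m['Buffers'] + m['Cached']
        if load1 < 0.5 and cache_kb > 1024 * 1024:   # idle, >1GB sitting in cache
            os.system('sync')                        # flush any dirty pages first
            with open('/proc/sys/vm/drop_caches', 'w') as f:
                f.write('3\n')                       # drop page cache + dentries/inodes
        time.sleep(60)

In practice you would want smarter thresholds and some logging, but that is the basic loop I have in mind.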

What I’d like even more than that though is to be able to better control how the Linux kernel allocates cache, and how frequently it frees it. I haven’t checked in a little while but last I checked there wasn’t much to control here.
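
For the record, the knobs that do exist under /proc/sys/vm are pretty blunt instruments. Here is a sketch of setting a few of them (example values, not recommendations); note that none of them lets you simply cap how much page cache the kernel keeps:

    #!/usr/bin/env python
    # The main cache-related tunables under /proc/sys/vm. Values here are
    # examples only; none of these directly caps the size of the page cache.
    tunables = {
        'swappiness': '10',             # prefer dropping cache over swapping
        'vfs_cache_pressure': '200',    # reclaim dentry/inode caches more aggressively
        'dirty_background_ratio': '5',  # start background writeback sooner
        'dirty_ratio': '10',            # cap dirty pages before writers block
    }
    for name, value in tunables.items():
        with open('/proc/sys/vm/' + name, 'w') as f:   # needs root
            f.write(value + '\n')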

I suppose that may be the next step in the evolution of virtualization – more intelligent operating systems that can be better aware they are operating in a shared environment, and return resources to the pool so others can play with them.

One approach might be to offload all storage I/O caching to the hypervisor. I suppose this would be similar to using raw devices (which bypass several file system layers). Aggregating that caching at the hypervisor level would be more efficient.

 

October 7, 2010

Testing the limits of virtualization

Filed under: Datacenter,Virtualization — Tags: , , , , , , — Nate @ 11:24 pm

You know I’m a big fan of the AMD Opteron 6100 series processors, and also a fan of the HP c-Class blade system, specifically the BL685c G7, which was released on June 21st. I was and am very excited about it.

It is interesting to think that it really wasn’t that long ago that blade systems still weren’t all that viable for virtualization, primarily because they lacked memory density; so many of them offered a paltry 2 or maybe 4 DIMM sockets. That was my biggest complaint with them for the longest time. About a year or a year and a half ago that really started shifting. We all know Cisco bought some small startup a few years ago for its memory extender ASIC, but you know I’m not a Cisco fan so I won’t give them any more real estate in this blog entry; I have better places to spend my mad typing skills.

A little over a year ago HP released their Opteron G6 blades; at the time I was looking at the half height BL485c G6 (guessing here, too lazy to check). It had 16 DIMM sockets, which was just outstanding. The company I was with at the time really liked Dell (you know I hate Dell by now I’m sure), and when I was poking around their site they had no answer to that (they have since introduced answers); the highest capacity half height blade they had at the time was 8 DIMM sockets.

I had always assumed that the more advanced design in the HP blades meant paying a huge premium, but wow, I was surprised at the real world pricing, more so because at the time you of course needed significantly higher density memory modules in the Dell model to compete with the HP model.

Anyways, fast forward to the BL685c G7, powered by the Opteron 6174: a 12-core, 2.2GHz, 80W processor.

Load a chassis up with eight of those:

  • 384 CPU cores (roughly 845GHz of compute)
  • 4TB of memory (512GB per server, 32 x 16GB DIMMs each)
  • 6,750 Watts @ 100% load (feel free to use HP dynamic power capping if you need it)
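
The arithmetic behind those numbers, in case you want to plug in your own blade count or DIMM sizes (my own figures, not an HP spec sheet):

    # Back-of-the-envelope math for a c7000 loaded with eight BL685c G7s
    blades = 8
    sockets_per_blade, cores_per_socket, ghz_per_core = 4, 12, 2.2
    dimms_per_blade, gb_per_dimm = 32, 16

    cores = blades * sockets_per_blade * cores_per_socket
    print("CPU cores: %d (%.0f GHz of compute)" % (cores, cores * ghz_per_core))  # 384, ~845 GHz
    print("Memory:    %d GB" % (blades * dimms_per_blade * gb_per_dimm))          # 4096 GB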

I’ve thought long and hard over the past 6 months about whether to go 8GB or 16GB, and all of my virtualization experience has taught me that in every case I’m memory (capacity) bound, not CPU bound. I mean, it wasn’t long ago we were building servers with only 32GB of memory in them!

There is indeed a massive premium for 16GB DIMMs, but if your capacity utilization is anywhere near the industry average then they are well worth the investment for this system. Going from 2TB to 4TB of memory with 8GB DIMMs in this configuration means buying a second chassis plus the associated rack/power/cooling and hypervisor licensing. You can easily halve your costs by just taking the jump to 16GB DIMMs and keeping it in one chassis (or at least eight blades; maybe you want to split them between two chassis, I’m not going to get into that level of detail here).

Low power memory isn’t available in 16GB DIMMs, so power usage jumps by about 1.2kW per enclosure for 512GB/server vs 256GB/server. A small price to pay, really.

So, on to the point of my post: testing the limits of virtualization. When you’re running 32, 64, 128 or even 256GB of memory in a VM server, that’s great; you really don’t have much to worry about. But step it up to 512GB of memory and you might just find yourself maxing out the capabilities of the hypervisor. In vSphere 4.1, for example, you are limited to 512 vCPUs per server and only 320 powered-on virtual machines. So it really depends on your memory requirements. If you’re able to achieve massive amounts of memory de-duplication (myself, I have not had much luck here with Linux, it doesn’t de-dupe well; Windows seems to dedupe a lot though), you may find yourself unable to fully use the memory on the system because you run out of the ability to fire up more VMs! I’m not going to cover other hypervisor technologies, they aren’t worth my time at this point, but like I mentioned I do have my eye on KVM for future use.

Keep in mind 320 VMs is only about 6.6 VMs per CPU core on a 48-core server. That to me is not a whole lot for workloads I have personally deployed in the past. Now of course everybody is different.
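
Putting some quick numbers to that, using the vSphere 4.1 limits quoted above (hypothetical sizing, of course):

    # What vSphere 4.1's per-host limits mean for one 48-core, 512GB blade
    cores, ram_gb = 48, 512
    max_vms, max_vcpus = 320, 512     # powered-on VMs / vCPUs per host in 4.1

    print("VMs per core at the 320-VM cap:     %.1f" % (max_vms / float(cores)))    # ~6.7
    print("vCPUs per core at the 512-vCPU cap: %.1f" % (max_vcpus / float(cores)))  # ~10.7
    print("Average VM size needed to use all the RAM: %.1f GB"
          % (ram_gb / float(max_vms)))                                              # 1.6 GB

In other words, if your VMs average less than about 1.6GB each, you hit the VM cap before you ever touch all that memory.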

But it got me thinking. The Register has been touting, off and on for the past several months, every time a new Xeon 7500-based system launches: ooh, they can get 1TB of RAM in the box. Or in the case of the big new bad-ass HP 8-way system, 2TB of RAM. Setting aside the fact that vSphere doesn’t go above 1TB, even if you go to 1TB I bet in most cases you will run out of virtual CPUs before you run out of memory.

It is interesting to see: in the “early” years the hypervisor exploited the hardware very well, and now we see the real possibility of hitting a scalability wall, at least as far as a single system is concerned. I have no doubt that VMware will address these scalability issues; it’s only a matter of time.

Are you concerned about running your servers with 512GB of RAM? After all, that is a lot of “eggs” in one basket (as one expert VMware consultant I know & respect put it). For me, at smaller scales, I am really not too concerned. I have been using HP hardware for a long time and on the enterprise end it really is pretty robust. My biggest concern is memory failure, or memory errors. Fortunately HP has had Advanced ECC for a long time now (I think I remember seeing it in the DL360 G2 back in ’03).

HP’s Advanced ECC spreads the error correction over four different ECC chips, and it really does provide quite robust memory protection. When I was dealing with cheap crap white box servers, the #1 problem BY FAR was memory; I can’t tell you how many memory sticks I had to replace, it was sick. The systems just couldn’t handle errors (yes, all the memory was ECC!).

By contrast, I honestly can’t even think of a time an enterprise HP server failed (e.g. crashed) due to a memory problem. I recall many times the little amber status light coming on, logging into the iLO and saying, oh, memory errors on stick #2, and going to replace it. But no crash! There was a firmware bug in the HP DL585 G1s I used to use that would cause them to crash if too many errors were encountered, but that was a bug that was fixed years ago, not a fault with the system design. I’m sure there have been other such bugs here and there; nothing is perfect.

Dell introduced their version of Advanced ECC about a year ago, but it doesn’t (or at least didn’t, maybe it does now) hold a candle to the HP implementation. The biggest issue with the Dell version was that if you enabled it, it disabled a bunch of your memory sockets! I could not get an answer out of Dell support at the time as to why. So I left it disabled, because I needed the memory capacity.

So combine Advanced ECC with ultra dense blades sporting 48 cores and 512GB of memory apiece and you’ve got yourself a serious compute resource pool.

Power/cooling issues aside (maybe if you’re lucky you can get into SuperNAP down in Vegas), you can get up to roughly 1,500 CPU cores and 16TB of memory in a single cabinet. That’s just nuts! WAY beyond what you would expect to be able to support in a single VMware cluster (given that you’re limited to 3,000 powered-on VMs per cluster, the density would be only about 2 VMs/core and 5GB/VM!)
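
A quick sanity check on those cluster numbers, using the same rounded figures:

    # Per-cabinet density vs. the vSphere 4.1 limit of 3,000 powered-on VMs per cluster
    cores_per_rack, ram_gb_per_rack = 1536, 16384   # 4x c7000, eight BL685c G7 each
    cluster_vm_limit = 3000
    print("VMs per core at the cluster cap:  %.1f" % (cluster_vm_limit / float(cores_per_rack)))    # ~2.0
    print("Memory per VM at the cluster cap: %.1f GB" % (ram_gb_per_rack / float(cluster_vm_limit))) # ~5.5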

And if you manage to get a 47U rack, well, you can put one of those c3000 chassis on top of the four c7000s and get another 2TB of memory and 192 cores. We’re talking power kicking up into the 27kW range in a single rack! Like I said, you need SuperNAP or the like!

Think about that for a minute: 1,500 CPU cores and 16TB of memory in a single rack. Multiply that by, say, 10 racks: 15,000 CPU cores and 160TB of memory. How many tens of thousands of physical servers could be consolidated into that? A conservative number may be 7 VMs/core, so you’re talking 105,000 physical servers consolidated into ten racks (excluding storage of course). Think about that! Insane! That’s consolidating multiple data centers into a high density closet, and taking tens of megawatts of power off the grid and consolidating it into a measly 250kW.
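
The consolidation math spelled out, using the same rounded figures as above:

    # Consolidation math for ten racks of the gear described above
    racks = 10
    cores_per_rack, ram_tb_per_rack, kw_per_rack = 1500, 16, 25   # rounded, as in the text
    vms_per_core = 7                                              # conservative consolidation ratio

    cores = racks * cores_per_rack
    print("Cores: %d   Memory: %d TB   Power: ~%d kW"
          % (cores, racks * ram_tb_per_rack, racks * kw_per_rack))
    print("Physical servers consolidated at %d VMs/core: %d"
          % (vms_per_core, cores * vms_per_core))                 # 105,000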

I built out what was, to me, some pretty beefy server infrastructure back in 2005, part of a roughly $7 million project. It included roughly 300 servers in about 28 racks, with 336kW of power provisioned for them.

Think about that for a minute. And re-read the previous paragraph.

Because of this trend I have thought for quite a while that, well, there just won’t be as many traditional network guys or server guys around going forward. When you can consolidate that much crap into that small a space, it’s just astonishing.

One reason I really do like the Opteron 6100 is the CPU cores: just raw cores. And they are pretty fast cores too. The more cores you have, the more things the hypervisor can do at the same time, and there is no possibility of contention like there is with hyperthreading. CPU processing capacity has gotten to a point, I believe, where raw per-core performance matters much less than getting more cores in the boxes. More cores means more consolidation. After all, industry utilization rates for CPUs are typically sub 30%; in my experience it’s typically sub 10%, and a lot of times sub 5%. My own server sits at less than 1% CPU usage.

Now, raw speed is still important in some applications of course. I’m not one to promote a 100-core CPU with each core running at 100MHz (10GHz in aggregate); there is a balance to be achieved, and I really do believe the Opteron 6100 has achieved that balance. I look forward to the 6200 (socket compatible, 16 cores). Ask anyone who has known me this decade: I have not been AMD’s strongest supporter for a very long period of time. But I see the light now.

October 6, 2010

Who’s next

Filed under: Networking,Random Thought — Tags: , , — Nate @ 9:42 pm

I was thinking about this earlier this week, or maybe late last week, I forget.

It wasn’t long ago that IBM acquired Blade Network Technologies, a long time partner of IBM’s; Blade made a lot of switches for the BladeCenter, and for the HP blade system as well I believe.

I don’t think Blade Networks was really well known outside of their niche of being a back-end supplier to HP and IBM (and maybe others, I don’t recall and haven’t checked recently). I certainly never heard of them until the past year or two, and I do keep my eyes out for such companies.

Anyways, that is what started my train of thought. The next step in the process was watching several reports on CNBC about companies pulling their IPOs due to market conditions, which to me is confusing considering how high the “market” has come recently. It apparently just boils down to investors and IPO companies not being able to agree on a “market price” or whatever. I don’t really care what the reason is, but the point is this: earlier this year Force10 Networks filed for an IPO, and, well, we haven’t heard much of a peep since.

Given the recent fight over 3PAR between Dell and HP, and the continuing saga of stack wars, it got me speculating.

What I think should happen is Dell should go buy Force10 before they IPO. Dell obviously has no networking talent in house; last I recall their PowerConnect crap was OEM’d from someone like SMC or one of those really low tier providers. I remember someone else making the decision to use that product last year, and then when we tried to send 5% of our network traffic to the site running those switches they flat out died; we had to get remote hands to reboot them. Then shortly afterwards one of them bricked itself during a firmware upgrade and had to be RMA’d. I just pointed and laughed, since I knew it was a mistake to go with them to begin with; the people making the decisions just didn’t know any better. Several outages later they ended up replacing them, and I taught them the benefits of a true layer 3 network: no more static routes.

Then HP should go buy Extreme Networks, which is my favorite network switching company; I think HP could do well with them. Yes, we all know HP bought 3COM last year, but we also know HP didn’t buy 3COM for the technology (no matter what the official company line is); they bought them for their presence in China. 3COM was practically a Chinese company by the time HP bought them, really! And yes, I did read the news that HP finished kicking Cisco out of their data centers, replacing their gear with a combination of Procurve and 3COM. Juniper tried and failed to buy Extreme a few years ago, shortly after they bought Netscreen.

That would make my day though, a c-Class blade system with an Extreme XOS-powered VirtualConnect Ethernet fabric combined with 3PAR storage on the back end. Hell, that’d make my year 🙂

And after that, given that HP bought Palm earlier in the year (yes, I own a Palm Pre, mainly so I can run older Palm apps; otherwise I’d still be on a feature phone) and clearly likes the consumer space, they should go buy Tivo and break into the set top box market. Did I mention I use Tivo too? I have three of them.

Amazon EC2: Not your father’s enterprise cloud

Filed under: Datacenter — Tags: , — Nate @ 9:00 am

OK, so obviously I am old enough that my father did not have clouds back in his day (well, not the infrastructure clouds that are offered today); I was just trying to think of a somewhat zingy topic. And I understand “enterprise” can have many meanings depending on the situation; it could mean a bank that needs high uptime, for example. In this case I use the term enterprise to signify the need for 24×7 operation.

Here I am, once again working on stuff related to “the cloud”, and it seems like the “cloud” part of everything revolves around EC2.

Even after all the work I have done recently, and over the past year or two, on cloud proposals, I don’t know why it didn’t hit me until probably the past week or so, but it did (sorry if I’m late to the party).

There are a lot of problems with running traditional infrastructure in the Amazon cloud, as I’m sure many have experienced first hand. But that wasn’t the realization that occurred to me.

The realization was that there isn’t a problem with the Amazon cloud itself, but there is a problem with how it is:

  • Marketed
  • Targeted

Which leads to people using the cloud for things it was never intended to be used for. In regards to Amazon, one need look no further than their SLA on EC2 to immediately rule it out for any sort of “traditional” application, which includes:

  • Web servers
  • Database servers
  • Any sort of multi tier application
  • Anything that is latency sensitive
  • Anything that is sensitive to security
  • Really, anything that needs to be available 24×7

Did you know that if they lose power to a rack, or even a row of racks, that is not considered an outage? It’s not as if they provide you with knowledge of where your infrastructure sits in their facilities; they’d rather you just pay them more and put things in different zones and regions.

Their SLA says, in part, that they can in fact lose an entire data center (what Amazon calls an “availability zone”) and that’s not considered an outage. Amazon describes availability zones this way:

Additionally, they are physically separate, such that even extremely uncommon disasters such as fires, tornados or flooding would only affect a single Availability Zone.

And while I can’t find it on their site at the moment, I swear that not too long ago their SLA included a provision saying that even if they lost TWO data centers it’s still not an outage unless you can’t spin up new systems in a THIRD. Think of how many hundreds to thousands of servers are knocked offline when an Amazon data center becomes unavailable. I think they may have removed the two-availability-zone clause because not all of their regions have more than two zones (last I checked only us-east did, but maybe more have them now).

I was talking to someone who worked at Amazon not too long ago and had in fact visited the us-east facilities, and he said all of the availability zones were in the same office park, really quite close to each other. They may have had different power generators and such, but quite likely if a tornado or flooding hit, more than one zone would be impacted; likely the entire region would go out (that is Amazon’s code word for all availability zones being down). While I haven’t experienced it first hand, I know of several incidents that impacted more than one availability zone, indicating that more is shared between them than customers are led to believe.

Then there is the extremely variable performance and availability of the services as a whole. On more than one occasion I have seen Amazon reboot the underlying hardware without any notification (note they can’t migrate the workloads off the machine; anything on the machine at the time is killed!). I also love how unapologetic they are when it comes to things like data loss. Basically they say you didn’t replicate the data enough times, so it’s your fault. Now, I can certainly understand that bad things happen from time to time, that is expected; what is not expected is how they handle it. I keep thinking back to this article I read on The Register a couple of years ago; it’s a good read.

Once you’re past that, there’s the matter of reliability. In my experience with it, EC2 is fairly reliable, but you really need to be on your shit with data replication, because when it fails, it fails hard. My pager once went off in the middle of the night, bringing me out of an awesome dream about motorcycles, machine guns, and general ass-kickery, to tell me that one of the production machines stopped responding to ping. Seven or so hours later, I got an e-mail from Amazon that said something to the effect of:

There was a bad hardware failure. Hope you backed up your shit.

Look at it this way: at least you don’t have a tapeworm.

-The Amazon EC2 Team

I’m sure I have quoted it before in some posting somewhere, but it’s such an awesome and accurate description.

So go beyond the SLAs, go beyond the performance and availability issues.

Their infrastructure is “built to fail”, which is a good concept at very large scale; I’m sure every big web-type company does something similar. The concept really falls apart at small scale though.

Everyone wants to get to the point where they have application level high availability and abstract the underlying hardware from both a performance and reliability standpoint. I know that, you know that. But what a lot of the less technical people don’t understand is that this is HARD TO DO. It takes significant investments in time & money to pull off. And at large scale these investments do pay back big. But at small scale they can really hurt you. You spend more time building your applications and tools to handle unreliable infrastructure when you could be spending time adding the features that will actually make your customers happy.

There is a balance there, as with anything. My point is that with the Amazon cloud those concepts are really forced upon you if you want to use their service in a more “traditional” hosting model. And the overhead associated with that is ENORMOUS.

So back to my point: the problem isn’t with Amazon itself, it’s with whom it is targeted at and the expectations around it. They provide a fine service if you use it for what it was intended. EC2 stands for “elastic compute”; the first things that come to my mind when I hear that term are HPC-type applications, data processing, back end stuff that isn’t latency sensitive and is built to tolerate infrastructure failure.

But even then, that concept falls apart if you need 24×7 operations. The cost model of even Amazon, the low cost “leader” in cloud computing, doesn’t hold water vs doing it yourself.

Case in point: earlier in the year, at another company, I was directed to go on another pointless expedition comparing the Amazon cloud to doing it in house for a data intensive 24×7 application. Not even taking into account the latency introduced by S3, the operational overhead of EC2, or the performance and availability problems, and assuming everything worked PERFECTLY, or at least as well as physical hardware, the ROI for keeping the project in house was less than 7 months (I re-checked the numbers and revised the ROI from the original 10 months to 7; I was in a hurry writing this morning before work). And this was good quality hardware with 3 years of NBD on site support, not scraping the bottom of the barrel. To give you an idea of the savings: after those 7 months, each and every month of savings could more than pay for my yearly salary, benefits, and the other expenses a company has for an employee.
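
The shape of that kind of calculation is simple enough. Here is a sketch with entirely made-up numbers (the real ones obviously depend on your workload and your negotiated pricing), just to show how the break-even falls out:

    # Hypothetical in-house vs. EC2 ROI model: every number below is invented
    # for illustration, not taken from the actual comparison discussed above.
    ec2_monthly = 50000.0        # instances + S3 + bandwidth for the workload
    hw_capex = 300000.0          # servers/storage/network, 3yr NBD support included
    colo_monthly = 6000.0        # power, space, bandwidth in-house

    breakeven_months = hw_capex / (ec2_monthly - colo_monthly)
    print("Break-even after %.1f months" % breakeven_months)              # ~6.8
    print("Savings per month after that: $%.0f" % (ec2_monthly - colo_monthly))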

OK, so we’re past that point now. On to a couple of really cool slides I came up with for a pending presentation, which I really think illustrate the Amazon cloud quite well, another one of those “picture is worth fifty words” kind of things. The key point here is capacity utilization.

What has using virtualization over the past half decade (give or take) taught us? What have the massive increases in server and storage capacity taught us? Well, they taught me that applications no longer have the ability to exploit the capacity of the underlying hardware. There are very rare exceptions, but in general, over what I would say is at least the past 15 years of my experience, applications have really never been able to exploit the underlying capacity of the hardware. How many systems do you see averaging under 5% CPU? Under 3%? Under 2%? How many systems do you see with disk drives that are 75% empty? 80%?

What else has virtualization given us? It’s given us the opportunities to logically isolate workloads into different virtual machines, which can ease operational overhead associated with managing such workloads, both from a configuration standpoint as well as a capacity planning standpoint.

That’s my point. Virtualization has given us the ability to consolidate these workloads onto fewer resources. I know this is a point everyone understands, and I’m not trying to make people look stupid, but my point here with regards to Amazon is that their model doesn’t take us forward — it takes us backward. Here are those two slides that illustrate this:

(Click image for full size)

And the next slide

(Click image for full size)

Not all cloud providers are created equal of course. The Terremark Enterprise cloud (not vCloud Express, mind you), for example, is resource pool based. I have no personal experience with their enterprise cloud (I am a vCloud Express user for my personal stuff: 2 x 1 vCPU servers, including the server powering this blog!), though I did interact with them pretty heavily earlier in the year on a big proposal I was working on at the time. I’m not trying to tell you that Terremark is more or less cost effective, just that they don’t reverse several years of innovation and progress in the infrastructure area.

I’m sure Terremark is not the only provider that can provide resources based on resource pools instead of hard per-VM allocations; I just keep bringing them up because I’m more familiar with their stuff due to several engagements with them at my last company (none of which ever resulted in that company becoming a customer). I originally became interested in Terremark because I was referred to them by 3PAR, and I’m sure by now you know I’m a fan of 3PAR; Terremark is a very heavy 3PAR user. And they are a big VMware user, and you know I like VMware by now, right?

If Amazon were more, what is the right word, honest? Up front? Better at setting expectations? I think their customers would be better off. Mainly, Amazon would have fewer of them, because those customers would realize what that cloud is made for, rather than trying to fit a square peg into a round hole. If you whack it hard enough you can usually get it in, but, well, you know what I mean.

As this blog entry now exceeds 1,900 words I feel I should close it off. If you read this far, hopefully I made some sense to you. I’d love to share more of my presentation, as I feel it’s quite good, but I don’t want to give all of my secrets away 🙂

Thanks for reading.

October 5, 2010

HP Launches new denser SL series

Filed under: Datacenter,News — Nate @ 11:49 am

[domain name transfer still in progress but at least for now I managed to update the name servers to point to mine so the blog is being directed to the right server now]

Getting closer! Not quite there yet though.

Earlier in the year I was looking at the HP SL6000 series of systems for a project that needed high efficiency and density.

The biggest drawback to the system, in my opinion, was that it wasn’t dense enough; it was no denser than 1U servers for the configuration I was looking at (needing 4 x 3.5″ drives per system). It was more power efficient though, and hardware serviceability was better.

The limitation was in the chassis, and HP acknowledged this at the time, saying they were working on a new and improved version that wasn’t yet available. Well, it looks like they launched it today, in the form of the SL6500. It seems to deliver on the statements HP gave me earlier in the year. I don’t see much info on the chassis itself on their site, but it looks significantly denser, the key being that the chassis is a lot deeper than the original 2U design.

But they still have a ways to go; as far as I know the SGI Cloudrack C2 is the density leader in this space, at least from material that is publicly available. Who knows what the likes of IBM/Dell/HP come up with behind the scenes for special customers.

I did what was, to me, a pretty neat comparison earlier this year of the power efficiency of the Cloudrack against the 3PAR T-class storage enclosures. (Granted, the density technology behind the 3PAR is 8 years old at this point and they haven’t felt the need to go denser, though HP may encourage them to, since they waste up to 10U of space in each of their racks; then again, weight and power can become issues in many facilities even at the density 3PAR already reaches.)

Anyways, on to the comparison. This is one place where the picture tells the story; pretty crazy, huh? Yeah, I know the products are aimed at very different markets; I just thought it was a pretty crazy comparison.

You can think of the Cloudrack as one giant chassis: the rack is the chassis (literally). So while HP has gone from a 2U chassis to a 4U chassis, SGI is waiting for them with a 38U chassis. Another nice advantage of the Cloudrack is that you can get true N+1 power (3 diverse power sources); most systems can only support two power sources, while the Cloudrack can go much, MUCH higher. And with the power supplies built into the chassis, the servers benefit from that extra fault tolerance and high efficiency (no fans or power supplies in the servers, same as the HP SL series).

Keeping TechOpsGuys around a bit longer

Filed under: General — Nate @ 7:52 am

Well, before the domain transferred, Robin from StorageMojo sent a good comment my way, and it made sense. He’s a much more experienced blogger than I am, so I decided to take his advice and do a couple of things:

  • Keep the TechOpsGuys name for now – even though it’s just me – until I manage to find something better
  • Keep the original layout – it annoys me but I can live with it using the Firefox zoom feature (zoomed in 150%)

Thanks, Robin, for the good suggestions. (I don’t know enough about MySQL to recover the data since I did the original migration.)

Maybe someone else will join my blogging in the future..

The old TechOpsGuys is officially dead. Well, you may be able to hit it if you have the IP (not that you care!). My former partners in crime are still welcome to contribute to the site if they want.

I’ll bring up www.techopsguys.com again, probably this weekend, to rave about non-technical topics, so I can keep this site technical... since I run the server I can run as many blogs as I want! Well, as many as I have time for...

October 3, 2010

Welcome to the new site

Filed under: General — Nate @ 1:43 pm

Hey there, new blog site... I migrated the data from http://www.techopsguys.com/ (well, my posts at least). Let me know if you see anything that’s really broken. I had to edit a bunch of SQL to change names, paths, etc., and put in symlinks to fix other things, but I think it’s working. New theme too! I like to read things that are easier on the eyes in low(er) light levels; white is very... bright! It hurts my eyes.

