
August 18, 2009

It’s not a bug, it’s a feature!

Filed under: Storage, Uncategorized, Virtualization — Nate @ 5:01 pm

I must be among a tiny minority of people who have automated database snapshots moving between systems on a SAN.

Earlier this year I set up an automated snapshot process to snapshot a production MySQL database and bring it over to QA. This runs every day, and runs fine as-is. There is another on-demand process to copy byte-for-byte the same production MySQL DB to another QA MySQL server (typically run once every month or two, and it runs fine too!).
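
For the curious, the nightly refresh boils down to something like the sketch below. To be clear, this is not the actual script: the san_cli snapshot/export commands, host names, and mount points are hypothetical placeholders for whatever your particular array and environment use; the general flow (quiesce MySQL, snapshot the LUN, swap the snapshot under the QA instance) is the part that matters.

#!/usr/bin/env python
# Sketch of the nightly "refresh QA from a production snapshot" job.
# The san_cli commands, host names, and paths are placeholders, not a real CLI.
import subprocess
import MySQLdb  # MySQL-python client library

PROD_DB  = {"host": "prod-db01", "user": "snapuser", "passwd": "secret"}
PROD_LUN = "prod-mysql-vol"      # production volume on the array (placeholder)
SNAP_LUN = "prod-mysql-vol.qa"   # snapshot LUN presented to QA (placeholder)
QA_HOST  = "qa-db01"

def run(cmd):
    # Run a shell command, echo it, and blow up if it fails.
    print("+ " + cmd)
    subprocess.check_call(cmd, shell=True)

def main():
    # 1. Tear down the QA side and get rid of yesterday's snapshot.
    run("ssh %s 'service mysql stop; umount /qa-data'" % QA_HOST)
    run("san_cli unexport %s %s" % (SNAP_LUN, QA_HOST))
    run("san_cli delete-snapshot %s" % SNAP_LUN)

    # 2. Quiesce production MySQL just long enough to take the snapshot.
    conn = MySQLdb.connect(**PROD_DB)
    cur = conn.cursor()
    cur.execute("FLUSH TABLES WITH READ LOCK")
    try:
        run("san_cli create-snapshot %s %s" % (PROD_LUN, SNAP_LUN))
    finally:
        cur.execute("UNLOCK TABLES")
        conn.close()

    # 3. Present the fresh snapshot to the QA host and bring MySQL back up.
    run("san_cli export %s %s" % (SNAP_LUN, QA_HOST))
    run("ssh %s 'mount /qa-data && service mysql start'" % QA_HOST)

if __name__ == "__main__":
    main()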

I also set up a job to snapshot all of the production MySQL DBs (3 currently) and bring them to a dedicated "backup" VM, which then backs up the data and compresses it onto our NFS cluster. This runs every day, and runs fine as-is.
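
The backup job is the same snapshot-and-present pattern as above; the only new part is what the backup VM does once the snapshot LUNs show up as raw device maps. Roughly, and again with placeholder device paths and an assumed NFS mount point:

#!/usr/bin/env python
# Sketch of the backup VM side: mount each presented snapshot LUN read-only,
# compress it onto the NFS cluster, unmount.  Paths are placeholders.
import subprocess
import time

SNAPSHOTS = {            # snapshot LUNs as seen inside the backup VM
    "db1": "/dev/sdb1",
    "db2": "/dev/sdc1",
    "db3": "/dev/sdd1",
}
MOUNT_POINT = "/mnt/snap"
NFS_DEST    = "/mnt/nfs-backups"   # NFS cluster, already mounted on this VM

def run(cmd):
    print("+ " + cmd)
    subprocess.check_call(cmd, shell=True)

def main():
    stamp = time.strftime("%Y%m%d")
    for name, device in sorted(SNAPSHOTS.items()):
        run("mount -o ro %s %s" % (device, MOUNT_POINT))
        try:
            run("tar -czf %s/%s-%s.tar.gz -C %s ." % (NFS_DEST, name, stamp, MOUNT_POINT))
        finally:
            run("umount %s" % MOUNT_POINT)

if __name__ == "__main__":
    main()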

ENTER VMWARE VSPHERE.

Apparently they introduced new "intelligence" into vSphere's storage stack that tries to be smarter about which storage devices are present. This totally breaks these automated processes. Because the data on the LUN is different after I remove the LUN, delete the snapshot, create a new one, and re-present the LUN, vSphere says HEY, THERE IS DIFFERENT DATA, SO I'LL GIVE IT A UNIQUE UUID (never mind the fact that it is the SAME LUN). During that process the guest VM loses connectivity to the original storage (of course) and does not regain connectivity, because vSphere thinks the LUN is different and so doesn't give the VM access to it. The only fix at that point is to power off the VM, delete all of the raw device maps, re-create all of the raw device maps, and then power on the VM again. @#)!#$ No, you can't gracefully halt the guest OS, because with LUNs missing the guest will hang on shutdown.
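
There's no scripting your way around the UUID behavior itself, but the one defensive thing the guest-side jobs can do is check that the re-presented LUN actually came back before touching it, and abort instead of hanging. A minimal sketch, assuming a Linux guest with udev and a placeholder SCSI identifier:

#!/usr/bin/env python
# Guest-side sanity check: after the snapshot LUN is re-presented, wait for
# the device to reappear before mounting it, and give up cleanly if it never
# does (which is exactly what happens under vSphere 4).  The WWID below is a
# placeholder -- use the real identifier from /dev/disk/by-id on your guest.
import os
import sys
import time

SNAP_WWID = "360002ac0000000000000000000001234"   # placeholder SCSI identifier
TIMEOUT   = 120                                    # seconds to wait

def find_device(wwid):
    # udev creates /dev/disk/by-id/scsi-<wwid> symlinks for SCSI disks.
    link = "/dev/disk/by-id/scsi-" + wwid
    if os.path.exists(link):
        return os.path.realpath(link)
    return None

def main():
    deadline = time.time() + TIMEOUT
    while time.time() < deadline:
        dev = find_device(SNAP_WWID)
        if dev:
            print("snapshot LUN is back as %s, safe to mount" % dev)
            return 0
        time.sleep(5)
    print("snapshot LUN never came back, aborting instead of hanging")
    return 1

if __name__ == "__main__":
    sys.exit(main())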

So I filed a ticket with VMware. The support team worked on it for a couple of weeks, escalating it everywhere, but as far as anyone could tell it's "doing what it's supposed to do". They can't imagine how this process works in ESX 3.5, except for the fact that ESX 3.5 was more "dumb" when it came to this sort of thing.

IT'S RAW FOR A REASON. DON'T TRY TO BE SMART WHEN IT COMES TO A RAW DEVICE MAP; THAT'S WHY IT'S RAW.

http://www.vmware.com/pdf/esx25_rawdevicemapping.pdf

With ESX Server 2.5, VMware is encouraging the use of raw device mapping in the following situations:
• When SAN snapshot or other layered applications are run in the virtual machine. Raw device mapping better enables scalable backup offloading systems using the features inherent to the SAN.

[..]

HELLO! SAN USER HERE TRYING TO OFFLOAD BACKUPS!

Anyways, there are a few workarounds for these processes going forward:
– Migrate these LUNs to use software iSCSI instead of Fiber Channel; there is a performance hit (not sure how much)
– Keep one or more ESX 3.5 systems around for this type of work
– Use physical servers for things that need automated snapshots

The VMware support rep sounded about as frustrated with the situation as I was/am. He did appear to try his best, but this behavior by vSphere is just unacceptable. After all, it works flawlessly in ESX 3.5!

WAIT! This brokenness extends to NFS as well!

I filed another support request on a kinda-sorta-similar issue a couple of weeks ago regarding NFS datastores. Our NFS cluster operates with multiple IP addresses; many (all?) active-active NFS clusters have at least two IPs (one per controller). Once again, vSphere assigns a unique ID based on the IP address rather than the host name to identify the NFS system. As a result, if I mount by host name on multiple ESX servers, it is pretty much guaranteed that I will not be able to migrate a VM on NFS from one host to another, because vSphere identifies the volumes differently when they are accessed via different IPs. And if I try to rename the volume to match what is on the other system, it tells me there is already a volume with that name (when there is not), so I cannot rename it. The only workaround is to hard code the IP on each host, which is not a good solution because you lose multi-node load balancing at that point. Fortunately I have a Fiber Channel SAN as well and have migrated all of my VMs off of NFS onto Fiber Channel, so this particular issue doesn't impact me. But I wanted to illustrate that this same sort of behavior with UUIDs is not unique to SAN; it can easily affect NAS as well.

You may not be impacted by the NFS issue if your NFS system is unable to serve out the same file system from multiple controllers simultaneously. I believe most fall into this category, with a given file system served by only one controller at any given point in time. Our NFS cluster does not have this limitation.
