XenApp Physical to Virtual migration notes
Yesterday we completed the final step of migrating a physical Citrix XenApp farm to a virtualized XenApp farm.
The old environment consisted of 48 physical servers (HP Blades) running Windows Server 2003 (x86) and XenApp 5, with an average of 30 concurrent users (CCU) per server.
The migration goal was to reduce the number of physical servers to 16 (using server virtualization) and to introduce Internet Explorer 8 (old was still IE6 based).
We ran a load test (using the DeNamiK Loadgen) to answer the following questions:
- What is the optimal configuration regarding the number of Virtual XenApp servers per physical machine (on this hardware with our apps and our usage)?
- What is the impact on the CCU per server of introducing IE8?
- Should we take the 300 GB or the 500 GB hard drives (the 500 GB drives are slower)?
The hard drive choice was of importance because the next step will be a POC with Windows 2008 R2 and XenApp 6.
We tested configurations ranging from 4 to 8 VMs per physical machine with either 1 or 2 vCPUs. We also tested the 300 GB and the 500 GB disks, both with and without BBWC.
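As a sketch, the full test matrix can be enumerated like this (my own illustration; the post does not state whether every single combination was actually run):

```python
from itertools import product

# Parameter space from the load test: 4-8 VMs per host, 1 or 2 vCPUs,
# 300 GB or 500 GB disks, BBWC enabled or disabled.
vms_per_host = range(4, 9)
vcpus = (1, 2)
disks_gb = (300, 500)
bbwc = (True, False)

matrix = list(product(vms_per_host, vcpus, disks_gb, bbwc))
print(len(matrix))  # 40 possible combinations
for n_vms, n_vcpu, disk, cache in matrix[:2]:
    print(f"{n_vms} VMs, {n_vcpu} vCPU(s), {disk} GB disks, BBWC={cache}")
```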
Based on the loadtest we made the following decisions:
- The optimal configuration is 4 VMs per physical machine, each VM having 4 GB of memory and 2 vCPUs.
- Introduction of IE8 lowers the CCU per XenApp Server from 30 to 25.
- The Battery-Backed Write Cache (BBWC) module is essential for this hardware (see Extremely slow Virtual Machines on HP Smart Array P410).
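Putting the numbers above together gives a quick back-of-the-envelope capacity check for the new farm (the comparison itself is my own illustration; only the inputs come from the migration notes):

```python
# Capacity of the old physical farm vs. the new virtualized farm,
# using the figures from the load test above.

OLD_PHYSICAL_SERVERS = 48
OLD_CCU_PER_SERVER = 30      # XenApp 5 with IE6

NEW_PHYSICAL_HOSTS = 16
VMS_PER_HOST = 4             # chosen configuration: 4 VMs, 2 vCPUs, 4 GB each
NEW_CCU_PER_VM = 25          # IE8 lowered the CCU per XenApp server from 30 to 25

old_capacity = OLD_PHYSICAL_SERVERS * OLD_CCU_PER_SERVER
new_capacity = NEW_PHYSICAL_HOSTS * VMS_PER_HOST * NEW_CCU_PER_VM

print(f"Old farm capacity: {old_capacity} CCU")  # 1440 CCU
print(f"New farm capacity: {new_capacity} CCU")  # 1600 CCU
```

So even with the lower per-server CCU caused by IE8, a third of the hardware still covers the old farm's load.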
The environment consists of 2 blade enclosures located in 2 datacenters, and the migration steps were basically:
- Prepare VMware ESXi on separate disks in a few spare blades and pre-deploy the XenApp VMs on it.
- Expand the existing blade servers with an extra physical CPU and additional memory.
- Final migration: just swap the disks (with as little impact for the users as possible).
Steps 1 and 2 were already completed, so the most important and final step took place yesterday. As in every migration, we had a few small challenges to fix:
Change of RAID Settings
When we started swapping the hard drives we of course shut down the blade servers first, but 2 servers were configured for automatic power-on. During the drive swap those 2 servers powered on automagically, which caused data loss on the disks.
It wasn’t a big deal because we could rapidly redeploy VMware, the VMs, XenApp and the applications, but my advice is: take the blade out of the enclosure before swapping the drives!
Virtual Machines halted at 95% on Power On
When we powered on the virtual machines, they appeared with a yellow exclamation mark in the vSphere console and the power-on operation halted at 95 percent.
Launching the console showed us the reason: vSphere asked whether the VM had been moved or copied. Selecting “I moved it” made the VM power on immediately.
Remko Weijnen