Working on a project recently, I had a bit of a rude reminder of the realities of sizing storage for VMs on vSphere.
I wrote a few days ago about using RAM to store the Provisioning Services cache. With that config we have the option to create VMs with very little storage... or so I thought.
Turns out there is some important storage you need to account for.
In my case I wanted to create around 50 virtual machines, each with 24GB of memory. I figured they'd be fully non-persistent and keep the cache in RAM. PVS needs a small disk on the VM to allow the SCSI driver to load, so I figured 1GB would be plenty. At 1GB per VM, a 100GB datastore seemed like a nice size, with headroom for additional VMs down the road.
If you read the subject of this post you already know the punch line. I forgot something important.
vSphere creates a swap file (.vswp) equal to the size of each VM's configured memory, and it typically resides alongside the VM's other files.
And it does this when the VM is powered on. As a result, instead of needing 50GB of disk for my VMs, I needed 1250GB: 50 x 1GB disks plus 50 x 24GB swap files. A rude awakening, since I'd thought I didn't need much storage! What's more, this was disk I really hoped never to use.
What to do?
Well, it turns out that memory reservations can alleviate some of this. If you reserve 100% of the VM's memory allocation you get a 0KB vswp file, but you've removed your ability to overcommit memory. We design for no overcommitment, but it's good to have the ability to deal with 'bubble' situations where you need some extra capacity for immediate needs (like HA events). A middle ground is reserving 50% of VM memory, which leaves a swap file half the size of the VM's RAM... in my case that's still 600GB of disk I hadn't planned for.
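To make the arithmetic concrete, here's a minimal sizing sketch in Python (the helper name and example numbers are just my scenario above; the only vSphere fact it encodes is that the vswp file is configured memory minus the memory reservation):

```python
def datastore_gb_needed(vm_count, mem_gb, disk_gb, reservation_pct=0):
    """Rough datastore sizing: per-VM disk plus the .vswp swap file,
    which is (configured memory - memory reservation) per powered-on VM."""
    swap_gb = mem_gb * (1 - reservation_pct / 100)
    return vm_count * (disk_gb + swap_gb)

# My scenario: 50 VMs, 24GB of RAM and a 1GB disk each.
print(datastore_gb_needed(50, 24, 1))        # 1250 GB - no reservation
print(datastore_gb_needed(50, 24, 1, 50))    # 650 GB  - 50% reservation
print(datastore_gb_needed(50, 24, 1, 100))   # 50 GB   - fully reserved, 0KB vswp
```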
Also keep in mind that these files are created at VM power-on and deleted at power-off, so they have implications for your I/O behavior as well.
Moral of the story... don't forget the swap files when planning your disk space allocation!
Citrix Provisioning Services and RAM cache
I've been working on a problem recently where we want to do some desktop virtualization but are having a very hard time addressing the disk I/O requirements as the environment scales up.
Staring at some architecture diagrams, it occurred to me that if these were physical systems I'd be using RAM cache with PVS... so why not do that with my VMs? In this particular instance it's more convenient and cost effective to put all that temp data in RAM than it is to build a SAN infrastructure that's capable of keeping up. Putting the PVS cache in RAM means the SAN will never see the associated disk I/O. We can still attach a small disk to the VM to deal with things like page files, print spoolers, and event logs if needed - but the I/O on these should be light.
This led me to another question - how much RAM can we use for cache? The answer is important because the amount of cache we have is a factor in determining how frequently we have to reboot the systems. It turns out the answer depends a bit on the version of Windows you're using.
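As a rough way to reason about that reboot cadence: the interval is basically how long the workload takes to fill the cache. A back-of-the-envelope sketch (the write rate here is a made-up assumption - measure your own workload):

```python
def hours_until_cache_full(cache_gb, writes_gb_per_hour):
    """Rough estimate of reboot interval: the PVS write cache only grows,
    so you want the target rebooted before the RAM cache fills up."""
    return cache_gb / writes_gb_per_hour

# Example: 16GB of RAM cache with ~1.5GB of writes per hour (assumed).
print(hours_until_cache_full(16, 1.5))   # ~10.7 hours between reboots
```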
To find out how much cache we can actually allocate, I built a couple of VMs and started playing with settings. It turns out:
- RAM cache is definitely still available in PVS 6.0 (someone I talked to suggested it might not be - Citrix, if you're thinking about removing this... DON'T!)
- RAM cache does work with VMware VMs
- You can allocate GOBS of RAM cache on a 64-bit OS (see the Win 2K8 R2 screenshot below). The VM's OS sees the memory but knows it's not available. The screenshot is from a 24GB VM with 16GB of PVS cache.
- You can allocate up to about 3.5GB of RAM cache on a 32-bit OS (see the Win 2K3 R2 screenshot below).
- Allocating more PVS cache than that on a 32-bit OS will 'hide' the RAM from the OS, but the PVS cache won't use more than 3.5GB. I tried this with a 12GB VM and 8192MB of PVS cache: Windows ended up seeing 4096MB of RAM, PVS used 3584MB, and the other ~4GB of RAM vanished. Taking the PVS RAM cache for the vDisk down to 4192MB left the 32-bit OS seeing 8GB, with PVS using an additional 3584MB for the cache (see the sketch after this list).
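A rough model of what I observed, as a sketch (this is purely my empirical result from these two VMs, not documented PVS behavior; the 3584MB figure is just what I measured on the 32-bit guest):

```python
def effective_pvs_ram_cache_mb(requested_mb, os_is_64bit):
    # Empirical observation, not a documented rule: the 64-bit guest used
    # whatever I asked for, while the 32-bit guest topped out around 3584MB
    # no matter how much cache the vDisk was configured with.
    return requested_mb if os_is_64bit else min(requested_mb, 3584)

print(effective_pvs_ram_cache_mb(16384, True))    # 16384 - 64-bit, 16GB cache
print(effective_pvs_ram_cache_mb(8192, False))    # 3584  - 32-bit hits the cap
```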
[Screenshot: Windows 2008 R2 with 24GB RAM and 16GB PVS cache]
[Screenshot: Windows Server 2003 R2 x32 with 6GB of RAM and 4096MB PVS cache]