Thursday, July 05, 2018

A Desktop Virtualization Primer

I find myself answering questions about "how does feature X interact with virtualization solution Y," and generally have to start by assessing how much the person asking knows about feature X, solution Y, and how the various bits and pieces interoperate.

This has led me to think about what a generic Desktop Virtualization solution looks like, and what functions or roles we should plan to include when deploying one.

Note that I say "Desktop Virtualization" and not "VDI" - this is because I really see these solutions today being able to offer desktops from a Windows Desktop OS (Windows 10, 7, XP, etc.) as well as desktops from a Windows Server OS (2008R2, 2016, 2003R2), and in most cases offering a seamless apps path as well.  Certainly this is true of Citrix XenDesktop/XenApp and VMware Horizon View, and even Microsoft's Remote Desktop Services (RDS) offerings today.

For today's discussion I'm going to assume that we want to connect users to desktops that are hosted on virtual machines (and not physical workstations).  Connecting to physical workstations is possible too, though it complicates the generic model a bit and seems to be an infrequently deployed use case.  Go look up Citrix RemotePC (and my earlier blogs on the topic) if you're interested.

A generic model 

Customers deploying VDI or even RDS-based application delivery are going to face a number of challenges in the process.  The overall strategy and components might look something like this.

[Diagram: the generic Desktop Virtualization model, showing the components described below]
Each of the components in the diagram has a role to fulfill that’s necessary for a business to successfully deliver Windows-based applications to users at scale and over time.  Most of these components represent services that require some level of redundancy and availability to ensure that the delivered applications remain available.

Components and Functions

Before going too much deeper let's explore what the components in this diagram are responsible for and how they contribute to desktop and application delivery.
  • Gateway Services – The gateway is responsible for providing secure access to the solution from outside of the corporate network.  This could be a traditional VPN or an application-specific solution.  The key function is allowing an external user access to internally hosted resources.
  • Load Balancing – Load Balancers (or Application Delivery Controllers) are used to manage redundant components, whether active-active or active-passive in nature.  They allow the solution to scale by adding more of these components, and provide availability by directing connections only to services which are currently alive.
  • Virtual Machine Security Services – This is a suite of services consumed by the VMs within the solution for purposes of hardening the environment.  Examples include network firewalls, IDS/IPS, Data Loss Prevention solutions, and anti-virus.  These services are probably managed externally to the VMs themselves, but may be implemented inside the VMs, outside them, or both (hybrid).
  • User Portal – Users need some method of discovering what is available to them within the environment, and an interface to consume it.  This interface is provided by the user portal, which identifies the user, shows them what they can use, and provides a mechanism for launching the desktops and applications offered.
  • Brokering and Provisioning – The environment's current state needs to be tracked: which machines are currently doing what, how many are available, and whether more need to be created.  Brokering refers to connecting users to an available system to which they are entitled; provisioning refers to creating more machines and updating existing machines as required.  (A small sketch of the brokering idea follows this list.)
  • User Profiles and Environment Management – This component is responsible for ensuring that the user gets their customized experience within the environment.  That experience may be based upon corporate policies (everyone gets the corporate wallpaper, and Q: and R: drives) or upon user preferences (arranging icons by date rather than by name).
  • Virtual Desktops and RDS VMs – This is where the user experience comes together.  The applications run in these virtual machines to allow users to do work; the rest of the environment exists to allow these VMs to exist and be useful.
  • Hardware Virtualization Services – To have VMs, you need virtualization services.  These services, provided by a hypervisor in conjunction with storage, allow multiple VMs to run on a collection of physical computers.  The virtualization services are responsible for allowing the VMs to share the physical resources and gain access to their capabilities (such as RAM, CPU, storage, network, and hardware GPU).
  • Hypervisor and Storage Management – Once you have hardware virtualization in place, you need capabilities to manage and report on what it is doing, to add more resources, and to correct issues.
  • Desktop and Environment Management – Like the hypervisor and storage layers, the desktop environment requires an interface to manage what’s going on with the brokering and provisioning services, and to see and correct what’s happening.
  • Application and Image Management – This component is about creating the templates and packages which encapsulate individual Windows applications, collections of applications, and the Windows OS itself.  It allows for version control as well as revision of the packages as necessary.  It’s likely that this component consists not only of technology but also of operational procedures governing how and when updates occur.
  • Enterprise File Services – Users care about applications, and applications operate against and produce data.  Where that data exists as unstructured files, it should live within an enterprise file store.  User documents, data, and profiles (from the Profiles and Environment component) also need a place to live.  Because these services are key to the proper function of the environment and to users being productive, the file services need to be resilient and high-performance.  Some solutions will also utilize the file services as part of the Application and Image Management module, which increases the availability and performance requirements.
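To make the brokering and provisioning idea a bit more concrete, here's a minimal sketch in Python.  Everything in it (the Machine and Broker names, the entitlements table) is hypothetical and just for illustration - it isn't any vendor's API, and a real broker tracks far more state (sessions, power state, maintenance mode, and so on).

    from dataclasses import dataclass, field

    @dataclass
    class Machine:
        name: str
        pool: str             # the desktop pool / delivery group this VM belongs to
        in_use: bool = False  # does a user session currently own this VM?

    @dataclass
    class Broker:
        machines: list = field(default_factory=list)
        entitlements: dict = field(default_factory=dict)  # user -> set of pools

        def assign(self, user):
            """Connect a user to an available machine they are entitled to."""
            allowed = self.entitlements.get(user, set())
            for m in self.machines:
                if m.pool in allowed and not m.in_use:
                    m.in_use = True
                    return m
            return None  # nothing free - a provisioning service could create more here

    broker = Broker(
        machines=[Machine("VDI-01", "Finance"), Machine("VDI-02", "Finance")],
        entitlements={"alice": {"Finance"}},
    )
    print(broker.assign("alice"))  # VDI-01, now marked in use
    print(broker.assign("bob"))    # None - bob isn't entitled to any pool

The point isn't the code itself; it's that "brokering" boils down to tracked state plus an entitlement check, and "provisioning" is what you'd invoke in the branch that returns None.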

The diagram doesn't address standard datacenter core services like DNS, DHCP, Active Directory, NTP, routing, firewalls, power and cooling, and such.  Rest assured that these services are required for a successful solution.

Once you have a handle on the model above, it is pretty easy to map a product or feature to the model and see where it lives, what you'd expect it to be responsible for, and how it might interact with other components.

An example - if you're looking at Liquidware Labs' Stratusphere UX product and wondering where it lives, start with what it does: it provides statistics on the virtual machines, applications, and user sessions.  This feels like a good fit for the "Desktop and Environment Management" block.

Another example - if you're looking at VMware's NSX product and wondering where it fits, we can think in terms of the APIs it provides for anti-virus offload and network segmentation, and map it to the Virtual Machine Security Services function.

A last example - if you're looking at deploying VDI using Microsoft Remote Desktop Services, you can start walking through the model and attaching features or products to the boxes.  Will you use Hyper-V for the virtualization services?  The RDS Gateway for gateways?  SCCM for application and image management?  Will the RDS Broker and SCVMM do what you need for brokering and provisioning?  What other products will you need to cover the remaining functions?  Once you have all the functions covered, you know you have a potentially viable solution.
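If it helps, you can make that walk-through mechanical: list the functions from the model, record your candidate product for each, and see what's left unmapped.  A throwaway Python sketch (the product names are just the RDS example above, not a recommendation):

    # The functional blocks from the model.
    functions = [
        "Gateway Services", "Load Balancing", "Virtual Machine Security Services",
        "User Portal", "Brokering and Provisioning",
        "User Profiles and Environment Management", "Virtual Desktops and RDS VMs",
        "Hardware Virtualization Services", "Hypervisor and Storage Management",
        "Desktop and Environment Management", "Application and Image Management",
        "Enterprise File Services",
    ]

    # The products you're considering for each block so far.
    candidate = {
        "Gateway Services": "RDS Gateway",
        "Brokering and Provisioning": "RDS Broker + SCVMM",
        "Hardware Virtualization Services": "Hyper-V",
        "Application and Image Management": "SCCM",
    }

    # Any function without a product next to it is a gap you still need to fill.
    for f in functions:
        print(f"{f}: {candidate.get(f, '*** UNMAPPED ***')}")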

Hopefully this generic model will help you evaluate where stuff fits, and perhaps what you're missing.

Until next time - Kenneth