Wednesday, October 31, 2012

Citrix XenDesktop and Webroot SecureAnywhere

I've been working for a while (3+ months) on an upgrade for a customer.  It seemed like a pretty straightforward project in concept: upgrade from XenDesktop 5.0 to 5.6FP1.
The first portion of the project (the DDC upgrade) was very straightforward and was completed without complications.  Everything worked as it should.

Part two was to update the gold images with the updated virtual desktop agent software and latest version of the receiver.   This is where things went sideways.

Updated the OS by applying all the latest windows updates.  All was fine.
Updated the client's AV software (Webroot SecureAnywhere).  All was fine.
Updated the Citrix Receiver.  All was fine.
Updated the Virtual Desktop Agent....all looked fine till after the required reboot...

Following the reboot the system booted fine and registered with the DDC.  However, upon login the Windows Explorer shell would crash and restart, crash and restart, crash and restart ... over and over and over.

Uninstall the VDA and everything is fine.  Install the 5.0 VDA and everything is fine.  Install 5.5, 5.6, or 5.6.1 and we're back to the crashing Explorer situation.  A little more digging and we found that things would also stabilize if we removed Webroot.  Really, neither of these was an acceptable "solution", but we were able to use the 5.0 VDA and keep the users working.

I'm going to ignore the three months of troubleshooting, and the time spent on the phone with support collecting crash dumps, and jump right to the conclusion.

In VDA version 5.5 (the next rev up from 5.0) Citrix introduced some new hooking mechanisms, specifically adding createProcessHook.dll.  That .dll apparently isn't actually used in the current (5.6FP1) version of the code, but is loaded to support future enhancements.  If the .dll is renamed, no ill effects are felt by the system.  In this case, renaming the .dll allows the 5.6FP1 VDA to be used with Webroot without causing explorer.exe to crash.

So, hopefully sharing this brings someone else to the conclusion quicker than I did.  Webroot SecureAnywhere + the Citrix XenDesktop VDA will result in an Explorer crash.  This can be corrected by renaming the createProcessHook.dll in the VDA directory to anything else and rebooting the machine.  (A tip: if you're using PVS, just mount the image on the PVS server, rename the file, and unmount.)
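For what it's worth, here's a minimal sketch of that rename as a quick script you could drop into an image-update process.  The VDA install path below is just a guess - check where the agent actually lives on your gold image - and the same rename can obviously be done by hand or with a one-line batch file.

    import os

    # Assumed VDA install folder - this path is a guess; verify where the agent
    # actually lives on your gold image before running anything.
    vda_dir = r"C:\Program Files\Citrix\ICAService"
    dll = os.path.join(vda_dir, "createProcessHook.dll")

    # Rename rather than delete, so the change is easy to undo.
    if os.path.exists(dll):
        os.rename(dll, dll + ".disabled")
        print("Renamed createProcessHook.dll - reboot for the change to take effect.")
    else:
        print("createProcessHook.dll not found in", vda_dir)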

Wednesday, May 09, 2012

Citrix RemotePC feature for XenDesktop

I recently posted an article discussing a scenario where I wanted to connect to a user's physical desktop when they were away from the office. In that article I discussed how to configure that using then available Citrix XenDesktop components, and the issues I'd seen trying to do this in customer environments. Well, today at Citrix's Synergy conference Citrix announced the inclusion of "RemotePC" as an additional FlexCast delivery model. RemotePC is exactly the use case I described, with some extra goodies thrown in to make things even better for enterprise deployments. If this use case is attractive to you, take a close look at RemotePC.

Tuesday, May 01, 2012

Connect to My Physical Desktop PC

One of the big sales pitches for Desktop Virtualization is the ability of users to connect to their computers (or environment) from anywhere and from any device.  I see customers putting in hosted virtual desktop infrastructures just so they can run Outlook on an iPad.  I see XenApp deployed just so users can work from home without using a VPN. 

It works, but it's overkill.  Like gopher hunting with an AK-47.

There are simpler solutions out there.  And really, users just want to connect to their PC anyway.

You can (still) go buy PC Anywhere, or you can enable Remote Desktop Services.  You can even grab a copy of VNC.  The problem is none of these really address things like firewalls, and for users to use them you pretty much have to put every PC on the public internet.

You can use LogMeIn, or GoToMyPC, which are great consumer level solutions that take care of the firewall issue but really don't provide much in terms of corporate control of data and access.

Desktop Virtualization solutions typically address data control and movement, firewalls, and corporate control, but often with a huge infrastructure investment (Servers, SANs, SSD Disk, Hypervisors...).  And hey.. what if I just did a desktop refresh and have lots of powerful modern hardware sitting under users' desks?

What if we could offer a solution with the control of Hosted Desktop Virt, but using the existing desktops people already have?  If you could connect to your own desktop from your iPad, but IT could still ensure governance and control over data movement?

 Well.. you can .. mostly.

With Citrix XenDesktop you can (today) make all this work if your end users are running Windows XP.  You build out your Access Gateway, Web Interface, and Delivery Controller infrastructure just like you would for a typical HVD deployment, but instead of using virtual machines for desktops you install the Virtual Desktop Agent on the user's existing physical desktop.  A little data collection to assist with mapping desktops to users, and pretty quickly a user can launch a session on their desktop from any device that supports a Citrix Receiver.

The reality is that this can be done with as few as two VMs if you really want to (one gateway appliance and one Delivery Controller).  As a benefit you get all the features and mobility provided by XenDesktop, and a killer remote user experience.  And by the way, this is not new ... this has worked since at least XenDesktop 3.

Hold on... Windows XP?

Yes... Windows 7 (and Vista, for that matter) introduces a problem.  It's not that it can't work, but I'm afraid I have to add some caveats which aren't issues for Windows XP.

Windows Vista introduced a new display driver model (WDDM vs. XPDM) which sent our Desktop Virtualization vendors into a bit of a tailspin.  The software used to capture the video display (to send it to the remote system) installs a video driver on Windows, and Windows does not support switching between drivers of different models.  Think of the higher-end laptops which have both a low-power integrated video chipset and a high-performance video chip and dynamically switch between them based on the situation at hand.  The desktop agent does the same kind of thing based on whether a user is connected to the machine remotely or not.

So in order to support switching to and from an XPDM video driver for the desktop agent, an XPDM video driver must be used to support the local console.  Turns out the Windows standard VGA driver is such an entity.  If you are able to live with a single monitor at a display size of 1600x1200 or less, then you can just install the Virtual Desktop Agent on the Windows 7 PC, use the standard Windows VGA driver for the console, and all of this works.  Victory!  And by the way, the Virtual Desktop Agent will switch the video drivers for you, so all you really have to do is install the agent and go.

Oh... but you're like me, have two monitors, and they're wide-screen 24-inchers?  Well... maybe...

Citrix offers the HDX-3D version of the Virtual Desktop Agent.  This desktop agent is designed for use on systems where 3D rendering is a requirement, but users still want the mobility and capability of connecting from remote systems.  Generally we're talking blade PCs here, but the agent doesn't really care.  The important thing is that this agent uses a WDDM video driver, and will co-exist with other WDDM video drivers.

Unfortunately the HDX-3D agent does have a fairly narrow official compatibility list.  That doesn't mean it won't work with other hardware, but it doesn't mean it will, either.  Note that most of the hardware on the list is nVidia, with a handful of ATI chipsets (but none of the ATI chipsets support dual monitors).

So again, if you've got supported hardware you're golden.  Enjoy running 3D Studio Max on that iPad at the golf course!

If it's unsupported ...well your mileage will vary.  Some examples:
  • On my laptop with Intel and FireGL video it works OK as long as I don't do anything strange like try to change the display resolution remotely, or attach a second monitor to the laptop.
  • At a customer recently with a dual-video-card ATI setup it worked OK, but the local (physical) console didn't blank when a session was connected - so someone could physically watch everything that happened in the session - and he couldn't get video on a Xenith client, though Windows and Mac Receivers worked fine.
  • Another customer with dual monitors found that remotely it worked ok, but that if the session was started from a remote device it wouldn't resize properly when it returned to the physical console.
  • For another customer .. well, it just worked perfectly...
So, for users with simpler physical systems this is an easy win for relatively little infrastructure cost and a whole lot of capability.  For higher-end users, some additional thought or effort might be required.  That said, a new video card is a whole lot cheaper than a nice SSD SAN.

If you haven't tried this, you really should.



 

It's Mine, not Yours! (or IT's)

Folks, Bring Your Own Device (BYOD) is really all about users owning the equipment, and by association the base environment on that device.  It means that any other folks with a footprint, apps, or data on that device are guests and not owners.  And only the Owner should make decisions about the device.
That's not to say that IT organizations don't have the right to protect their data and applications - they absolutely do; but they need to do it without imposing on the Owner's right to use the device however they want to.

I'll point out a few failures of being a good guest in this context.  

I was talking to an IT professional not long ago about his company's policy on smartphones and tablets and was told that "Users can connect and use anything they want, as long as we can remotely wipe it."  If IT wipes my device and I lose my apps and personal data as a result, that's much like inviting a friend into my home and having them decide they don't like my furnishings, pack them all up, and haul them to the dump.  I'm left with an empty house because someone else didn't like my photos!

When I left a previous employer, I had an older Windows-based smartphone that had been configured to connect to the corporate Exchange environment (which worked quite well!) and to my personal Gmail account.  It was my personal device, but I wanted to read/reply to my work email and sync my calendar.  After my employer terminated my accounts the phone became very cranky about not being able to connect to Exchange.  When I removed the Exchange account from the phone it promptly deleted all of my contacts off the device - even those that were really part of the Gmail account and the local phone book.  Suddenly I didn't have my father's phone number anymore.  Fail!  My device.  My data.  Why should deciding to detach from corporate email remove my personal phonebook?  That's like my guest emptying my clothes closet when they leave because they brought some clothes with them.

I was working with a very nice client hypervisor which seems like a perfect solution for a consultant.  The idea is that I have a computer, I go onsite with a customer who provides me with a corporate VM for their environment, and I use that to connect to their systems.  I keep my own stuff separate and never touch their net.  The problem here is in the implementation - as soon as I connect my hypervisor to their environment to get the VM, the hypervisor marries itself to their systems.  I can't log in to my own computer anymore without authenticating to the client's servers.  Further, it can only marry to one client system at a time, and when you separate them all the VMs on my computer get deleted.  FAIL!  That's like a guest changing the locks when they arrive and bulldozing the house when they leave!

Many users choose not to connect to corporate resources from their equipment because the cost of that guest is simply too high.  It's easier to have a separate phone or do without than it is to invite IT to come visit.

All of the above are examples of well-meaning folks trying to protect corporate data.  But the implementation and execution are simply wrong.  At least for BYOD.  If this were company hardware this would all be fine, but in all of these cases it was Mine, not Theirs.

So how do we be a better guest on someone else's machine?  How do we protect our data?

Well, first things first - users get to pick their own devices, just like they get to pick their own cars.  If it gets them to work then it's done its job.  That means we don't get to say "as long as we can remotely wipe it" or "as long as it's got Anti-Virus" ... (Yeah, I know that last one hurts).

If we can accept the above premise, then we know we have to treat the device as an untrusted entity - that is, we can't trust the device not to do bad things, nor (really) not to disclose what it knows to someone we'd rather it didn't.  It's a little like having a party line and not knowing who else is listening.

As an untrusted device, we don't want to store data on it.  We don't want to accept data from it.  And we want to control what data it sees.  Hm... if only we had a way of offering corporate applications and data without actually sending or storing them on the PC/tablet/phone ... a way that we could control and filter what the device can see.

Ok, the above sounds fine, but what if I need to work offline?

Well, the need for being disconnected does need to be evaluated; but if you have to, then it's time to think about protected, safe, trusted containers ... endpoint inspection (are you clean right now?), and data encryption not of the whole device, but of the corporate data, with its own access controls.  These containers should have controls in place to prevent their 'leaking' and to facilitate their destruction, but in either case without harming the device or hindering its ability to concurrently entertain other guests.  It's OK if a guest wants to blow up their suitcase, as long as they don't blow up the guest-room too.

Well, we can do that but our solution only works on <insert platform here>.

Sorry to say, that's a little like "I'll come visit, but only if your house is blue." If your solution is restrictive then it's not really a solution.   If the user can't pick the device they want, then it's not their device.  Sooner or later someone important will have a white house and you'll have to figure out how to make it work.  Better to get ahead of the curve on this one.

One last point here - end-user IT technology is a consumer commodity item these days.  Manufacturers market to your users, to your children, your spouse, and your executives.  They do not market to IT.  Consumers want what is sexy and hot today, not what IT tells them they should have.  Let's be clear, IT has already lost this battle.

So what do we do?  Well I suggest you invest in providing access that protects your data and applications.  That doesn't expose your data, or require trust of the end-user device.  And that allows flexibility and end-user choice. But in the end remember that the device is Mine, not Yours!



Friday, March 09, 2012

Don't Forget the Pagefile!

Working on a project recently I had a bit of a rude reminder of the realities of sizing storage for VMs on vSphere.

I wrote a few days ago about using RAM to store Provisioning services cache. With that config we have an option to create VMs with very little storage...or so I thought.

Turns out there is some important storage you need to account for.  In my case I wanted to create around 50 virtual machines, each with 24GB of memory.  I figured they'd be fully non-persistent and keep the cache in RAM.  PVS needs a small disk on the VM to allow the SCSI driver to load, so I figured 1GB would be plenty.  With 1GB per VM, I thought that a 100GB datastore would be a nice size and offer headroom for additional VMs down the road.

If you read the subject of this post you already know the punch line.  I forgot something important.

vSphere creates a swap file (the .vswp) equal to the size of configured memory for each VM, which typically resides with the VM, and it does this when the VM is powered on.  As a result, instead of needing 50GB of disk for my VMs, I needed 1250GB!  A rude awakening since I'd thought I didn't need much storage!  What's more, this was disk that I really hoped never to use.  What to do?

Well, it turns out that using memory reservations can alleviate some of this.  If you reserve 100% of the VM's memory allocation then you get a 0KB vswp file.  But you've removed your ability to overcommit memory.  We design for no overcommitment, but it's good to have the ability to deal with 'bubble' situations where you need some extra capacity for immediate needs (like HA events).  A middle ground here is reserving 50% of VM memory, and then you only end up with a swap file half the size of your RAM ... in my case that's still 600GB of disk I didn't plan for.
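To put numbers on it, here's a quick back-of-the-envelope sketch (plain Python, using the numbers from my scenario) of how the swap files change the datastore requirement at different reservation levels:

    # vSphere creates a .vswp file of (configured RAM - memory reservation)
    # for each VM when it powers on.
    def datastore_gb(vm_count, os_disk_gb, ram_gb, reservation_pct):
        vswp_gb = ram_gb * (1 - reservation_pct / 100.0)
        return vm_count * (os_disk_gb + vswp_gb)

    # My scenario: 50 VMs, each with a 1GB PVS boot disk and 24GB of RAM.
    for pct in (0, 50, 100):
        print(f"{pct:3d}% reservation -> {datastore_gb(50, 1, 24, pct):.0f} GB")
    # prints: 1250 GB at 0% (what bit me), 650 GB at 50%, 50 GB at 100% (0KB vswp)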

Also keep in mind that these files are created at VM power-on and deleted at power-off, so they have implications for your I/O behavior.

Moral of the story ... Don't forget the Page files when planning your disk space allocation!




Saturday, March 03, 2012

Citrix Provisioning Services and RAM cache

I've been working on a problem recently involving a situation where we want to do some desktop virtualization but are having a very hard time addressing the disk I/O requirements as the environment scales up. 

Staring at some architecture diagrams, it occurred to me that if these were physical systems I'd be using RAM cache with PVS ... so why not do that with my VMs?  In this particular instance it's more convenient and cost effective to put all that temp data in RAM than it is to build a SAN infrastructure that's capable of keeping up.  Putting the PVS cache in RAM means that the SAN won't ever see the associated disk I/O.  We can still attach a small disk to the VM to deal with things like page files, print spoolers, and event logs if needed - but the I/O on these should be light.

This led me to another question - how much RAM can we use for cache?  The answer is important as the amount of cache we have will be a factor in determining how frequently we have to reboot the systems.  Turns out the answer depends a bit on the version of Windows you're using.
 
To find out I built a couple of VMs and started playing with settings.  Turns out -
  1. RAM cache is definitely still available in PVS 6.0 (someone I talked to suggested it might not be - Citrix, if you're thinking about removing this ... DON'T!)
  2. RAM cache does work with VMware VMs.
  3. You can allocate GOBS of RAM cache on a 64-bit OS (see the Windows 2008 R2 screenshot below).  The VM's OS sees the memory, but knows it's not available.  The screenshot is from a 24GB VM with 16GB of PVS cache.
  [Screenshot: Windows Server 2008 R2 with 24GB RAM and 16GB PVS cache]
  4. You can allocate up to about 3.5GB of RAM cache on a 32-bit OS (see the Windows 2003 R2 screenshot below).
  [Screenshot: Windows Server 2003 R2 x32 with 6GB of RAM and 4096MB PVS cache]
  5. Allocating more PVS cache than that on a 32-bit OS will 'hide' the RAM from the OS, but the PVS cache won't use more than the ~3.5GB.  I tried this with a 12GB VM with 8192MB for PVS cache and ended up with Windows seeing 4096MB of RAM, PVS using 3584MB of RAM, and the other 4+/-GB of RAM vanishing.  Taking the PVS RAM cache for the vDisk down to 4192MB left the 32-bit OS seeing 8GB, and PVS using an additional 3584MB of RAM for the cache.
So my conclusion - my concept of using RAM cache with VMs is viable, though we'll need to make sure we get things dialed in so that we stay within the RAM cache.  This will be particularly important for the 32-bit VMs.  However, this challenge isn't any different than using PVS RAM cache in the physical world.
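For the 32-bit case, here's a rough sketch of the split I observed (my own model of the behavior, not anything official from Citrix) - handy for sanity-checking a planned configuration:

    # Rough model of the 32-bit behavior I observed: the guest loses sight of
    # whatever RAM is assigned to the PVS cache, but the cache itself tops out
    # at about 3584MB.
    CACHE_CEILING_32BIT_MB = 3584  # observed ceiling, roughly 3.5GB

    def split_32bit(vm_ram_mb, configured_cache_mb):
        visible_to_os_mb = vm_ram_mb - configured_cache_mb
        usable_cache_mb = min(configured_cache_mb, CACHE_CEILING_32BIT_MB)
        lost_mb = configured_cache_mb - usable_cache_mb
        return visible_to_os_mb, usable_cache_mb, lost_mb

    # 12GB VM with 8192MB of cache -> OS sees 4096MB, cache uses 3584MB, ~4.5GB lost
    print(split_32bit(12288, 8192))
    # 12GB VM with 4192MB of cache -> OS sees ~8GB, cache uses 3584MB, little lost
    print(split_32bit(12288, 4192))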



Saturday, January 21, 2012

Bitcasa Invites for Everyone

I got an email this evening from the folks at Bitcasa encouraging me to send invites (infinite!) out to everyone -

"You have been chosen to lead your flock of friends, frenemies, and family into the valley of Infinite Storage. For a short time, you can invite as many people as you want to skip the line and enter the pure bliss we call: INFINITE STORAGE!"

So, on that happy note, please follow this invite and take your place among the Bitcasa brethren.

https://portal.bitcasa.com/invited/dc6708f819b34e9d829bcfaf16a1f17b/

Happy Cloudifying!

Friday, January 20, 2012

When does an ICA session use an SSL-U license?

Had a call from a customer yesterday, complaining that he could only have 5 user connections through his AGEE on his NetScaler.

I figured he was missing his AGEE platform license and once applied he'd have his 10,000 ICA connections and all would be well.

Turns out not to be the case.  First, we did confirm that all the proper licenses were installed and appeared to be active.  With his NS and AGEE licenses applied he did show the 10,000 ICA connections and 5 Access Gateway connections.

After stumbling around a bit, we ended up looking at the following Citrix KB article - http://support.citrix.com/article/CTX125567 – which essentially states that if the AGEE virtual server is configured in SmartAccess mode then it consumes the AGEE/SSL-U licenses.

Since the customer's problem was repeatable (and matched the 5 licenses his MPX5500 said he had) we tested and verified the failure, then moved the little radio button on the AG virtual server to 'Basic' and tested again – voila, now he has more than 5 connections.  He wasn't using SmartAccess for anything, so no harm no foul.

Lesson learned.  If you're not using SmartAccess or the VPN feature, set the VS to 'Basic' - unless you've got SSL-U licenses (XD/XA Platinum).

Also of interest, this was an MPX5500 and the user had 5 AG connections provided by the licenses for the device.  A VPX200 that I put in a month ago had 10 AG connections … enough in either case that the 'problem' probably won't surface till the customer tries to go to production.

Tuesday, January 17, 2012

Virtual Desktop Optimization Guides

Virtual Desktop Optimization Resources
Over time I've been collecting resources to optimize virtual desktop images.  Not all optimizations work for all environments, so it's very important to understand what each tweak is actually doing, why you care about it, and probably most of all what's going to break when you turn it on.

The challenge is understanding what all the tweaks are doing – some may be helpful, some may be detrimental to your particular install (for instance, most recommend turning off UAC and other security measures).

I've included references to optimizing for VMware View as well as XenDesktop because 1.) both need the same types of changes, 2.) if you're living on the VMware hypervisor the related tweaks are very relevant, and 3.) I work with both solutions.

Links follow roughly grouped by source but otherwise in no particular order.

Citrix XenDesktop Optimization Guides

VMware View Desktop Optimization Guides

Additional References

Friday, January 13, 2012

The Blog Lives!

This is version 2.0 of my personal blog presence.  My plan (and we'll see how it pans out) is to keep most of my work-related stuff over on the company blog at http://blog.lewan.com, which is a great resource for information both from myself and my co-workers.  This page will be more of my own stuff, along the lines of what's up in my life and the random thoughts (and outbursts) that occur to me to publish.

You'll find stuff about VMware and Citrix virtualization; EqualLogic, LeftHand, Compellent and NetApp storage; HP, Dell and Cisco hardware; and even CommVault Backup, Archive, and Storage Management.  That's my job and it occupies my mind quite a bit.

You'll also find stuff from other aspects of my life - my Siberian huskies, Woodworking, Home Renovation, Motorcycling, Camping, Scuba Diving, Skiing, 4x4 Off Road, and of course the obligatory home network and general geekery.

With that, happy reading; hope you enjoy it.

You'll also find me on Twitter (@KFingerlos), Linked-In, and Flicker (http://www.flicker.com/kafingerlos) but most editorial content will live here.