I authored the e-book Windows Virtual Desktop Migration Guide for Remote Desktop Services. This book takes you through a 7-step process of migrating your RDS workloads to WVD. 

The book focuses on the migration process, and with only 40 pages it's easy to digest! 

The e-book is published by Microsoft as a free download: https://azure.microsoft.com/en-gb/resources/windows-virtual-desktop-migration-guide-for-remote-desktop-services/

(2020-Dec-21) While working with Azure Functions, which provide a serverless environment to run my code, I’m still struggling to understand how it all actually works. Yes, I admit, there is no bravado in my conversation about Function Apps: I really don’t understand what happens behind the scenes when a front-end application submits a request to execute my function code in a cloud environment, and how this request is processed via the durable function framework (starter => orchestrator => activity). 

Azure Data Factory provides an interface to execute your Azure Function, and if you wish, the output of your function code can be further processed in your Data Factory workflow. The more I work with this pair, the more I learn to trust how a function app behaves differently under the various Azure Service Plans available to me. The more parallel Azure Function requests I submit from my Data Factory, the more trust I put into my Azure Function App to properly and gracefully scale out from “Always Ready instances” to “Pre-warmed instances” to the “Maximum instances” available for my Function App. The supported runtime version for PowerShell durable functions, along with the data exchange possibilities between the orchestrator function and the activity function, requires a lot of trust too, because the latter is still not well documented.

My current journey of using Azure Functions in Data Factory has been marked with two milestones so far:

  1. Initial overview of what is possible - https://server.hoit.asia/2020/04/using-azure-functions-in-azure-data.html
  2. Further advancement to enable long-running function processes and keep data factory from failing - https://server.hoit.asia/2020/10/using-durable-functions-in-azure-data.html


Recently I realized that the HTTP polling of a long-running function process that I initially proposed for a data factory can be simplified even further.

An early version (please check the 2nd blog post listed above) suggested that I could execute a durable function orchestrator, which would eventually execute a function activity. Then I would check the status of my function app execution by polling the statusQueryGetUri URI from my data factory pipeline; if its status was not Completed, I would poll it again. 

In reality, the combination of an Until loop container along with Wait and Web call activities can be replaced by a single Web call activity. The reason for this is simple: when you initially execute your durable Azure Function (even if it takes minutes, hours, or days to finish), it almost instantly responds with HTTP status code 202 (Accepted). The Azure Data Factory Web activity will then poll the statusQueryGetUri URI of your Azure Function on its own until the HTTP status code becomes 200 (OK). The Web activity will run this step as long as necessary, unless the Azure Function timeout is reached; this varies between pricing tiers - https://docs.microsoft.com/en-us/azure/azure-functions/functions-scale#timeout

The structure of the statusQueryGetUri URI is simple: it has a reference to your Azure Function App along with the execution instance GUID (something like `https://<function-app>.azurewebsites.net/runtime/webhooks/durabletask/instances/<instanceId>?...`). Exactly how Azure Data Factory polls this URI is unknown to me; it's all about trust, please see the beginning of this blog post :-)
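To make this pattern concrete, here is a minimal Python sketch of the polling loop that the Web activity performs on our behalf. The endpoint is simulated with a stub; in reality each poll would be an HTTP GET against the statusQueryGetUri:

```python
import time

def poll_until_complete(get_status, interval_seconds=0, max_polls=100):
    """Poll a durable-function status endpoint until it reports HTTP 200."""
    for _ in range(max_polls):
        http_code, runtime_status = get_status()
        if http_code == 200:
            # Orchestration finished (Completed, Failed, or Terminated).
            return runtime_status
        # 202 (Accepted) means the orchestration is still running; wait and retry.
        time.sleep(interval_seconds)
    raise TimeoutError("orchestration did not finish within max_polls")

# Simulated statusQueryGetUri endpoint: still running for the first two polls.
responses = iter([(202, "Running"), (202, "Running"), (200, "Completed")])
print(poll_until_complete(lambda: next(responses)))  # prints: Completed
```

The number of polls and the interval are up to the caller here; the real Web activity decides those on its own, which is exactly the "trust" part.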


This has been an introduction; now the real blog post begins. Naturally, you can execute multiple instances of your Azure Function at the same time (event-driven processes or front-end parallel execution steps) and the Azure Function App will handle them. My recent work project had a requirement that when parallel execution happens, a certain operation still needs to be throttled and artificially sequenced. Again, it was a special use case, and it may not happen in your projects.

I tried to put such throttling logic inside my durable Azure Function activity; however, with many concurrent requests to execute this one particular operation, my function app used up all of the available instances, and while those instances were active and running, my function became unavailable to the existing data factory workflows.

There is a good wiki page about Writing Tasks Orchestrators that states, “Code should be non-blocking i.e. no thread sleep or Task.WaitXXX() methods.” That was my aha moment: move the throttling logic out of my Azure Function activity and into the data factory.

Now, when an instance of my Azure Function finds that it can’t proceed further because another operation is running, it completes with HTTP status code 200 (OK), releases the Azure Function instance, and also provides an additional execution output status indicating that it’s not really “OK” and needs to be re-executed.

The Until loop container now handles two scenarios:

  1. HTTP Status Code 200 (OK) and custom output Status "OK": exit the loop container and proceed further with the "Get Function App Output" activity.
  2. HTTP Status Code 200 (OK) and custom output Status other than "OK" (you can provide more descriptive info about what your not-OK scenario might be): execution continues with another round of "Call Durable Azure Function" & "Get Current Function Status".
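As an illustration only (the activity names and expression below are placeholders standing in for my own pipeline, not copy-paste-ready JSON), the Until container's exit condition could look roughly like this in the pipeline definition:

```json
{
  "name": "Until Function Status OK",
  "type": "Until",
  "typeProperties": {
    "expression": {
      "value": "@equals(activity('Get Current Function Status').output.Status, 'OK')",
      "type": "Expression"
    },
    "activities": [
      { "name": "Call Durable Azure Function", "type": "WebActivity" },
      { "name": "Get Current Function Status", "type": "WebActivity" }
    ]
  }
}
```

The loop keeps re-running the two inner activities until the custom Status output equals "OK".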

This new approach to gracefully handling conflicts in functions required some changes: (1) the Azure Function activity either runs the regular operation and completes with the custom "OK" status, or identifies another running instance, completes the current function instance, and provides a custom "Conflict" status; (2) the Data Factory checks the custom Status output and decides what to do next.

The Azure Function HTTP long-polling mission was accomplished; however, it now has two layers of HTTP polling: the natural webhook status collection, and custom data factory logic to check whether the "OK" status my webhook received was really OK.

Recently Parallels announced that version 18 of their Remote Application Server (RAS) product is coming soon! The list of added features and improvements for this version is huge! The highlights are:

  • Windows Virtual Desktop Integration
  • FSLogix Profile Containers Integration
  • UX Evaluator & Advanced Session Metrics
  • Automated Image Optimizations
  • RDSH & VDI Local Storage Distribution
  • Management Portal

For the full list of what's in this release, see the article Coming Soon in Parallels RAS 18. In this article I want to focus on the Windows Virtual Desktop integration. I remember conversations with many people on the Parallels RAS team discussing Windows Virtual Desktop integration approaches based on their early mock-up diagrams. I had the privilege of testing the version 18 release during the technical preview, and it’s great to see the ideas and discussions come to life as an integration in their product. They did a great job! The Parallels RAS product is known for its feature richness without the complexity, and they managed to pull that off once again with the WVD integration!

If you’re familiar with installing Parallels RAS, the installation process of version 18 will look very similar. You do of course have to have a couple of additional requirements in place specifically for the WVD integration. Because of the integration, a couple of permissions on the Azure AD and Azure Resources side need to be in place, and Parallels RAS uses an App registration for all of this. I will not cover these steps for now, as they will be shared in great detail when version 18 hits general availability.

Below is the architecture of the integration with WVD. It shows a hybrid deployment with an on-premises setup; however, RAS can of course also be deployed entirely in Azure.

After the installation you will immediately notice the new Wizard in the admin console that guides you through the process of configuring WVD integration.

The first step is to provide a location where the WVD agent and bootloader will be placed; Parallels RAS will perform the installation of these components for you. Next, you provide the credentials that Parallels RAS will use to communicate with Azure; this results in a Provider object.

The next step in the wizard is creating the WVD Workspace; you’ll obviously see the same parameters as when creating the Workspace directly in Azure.

The next step is the creation of the WVD Host pool, including properties like load balancing, etc. Notice that Parallels RAS already provides the option to power on hosts on demand, which includes pooled configurations! This means that if all hosts are powered off due to auto-scaling, the first user to connect to a host pool causes a WVD host to power on.

Next we can define the Template to be used for the provisioning of WVD Hosts. We have 2 options here.

  • Custom Host means we can point to any running VM in Azure that we want to use as a template source. Upon selecting a VM, Parallels will take a snapshot of that VM and use that to provision WVD hosts.
  • Azure Gallery means we can select any existing Template Image from the Azure Marketplace or our own Shared Image Gallery (SIG).

Note that if you already have existing hosts and do not need autoscaling, you can also use the Standalone (unmanaged) option instead.

The next step allows us to configure the naming convention for WVD hosts, the number of VMs, buffers, and whether the newly deployed hosts should remain powered on after deployment or be turned off for later use.

The next step is to provide sizing and networking details. The wizard presents easy-to-use dropdown boxes with information taken directly from your Azure subscription.

And finally, the wizard allows us to configure the image optimization settings that Parallels RAS provides out of the box, which is a great feature. It also allows us to use either Sysprep or Parallels RASprep to prepare the images.

Upon completion, Parallels RAS will create our Workspace, Host pool and App group, prepare the image in Azure, deploy the requested number of WVD hosts, join them to the domain, and add them to the WVD host pool. Below is a quick summary of what the result looks like in the console.

Similar to how we are used to publishing applications and desktops in the Parallels console, we can also do this for WVD resources. The great thing is that we can now mix and match resources from RDS on-premises and WVD using a single console, and provide these to the user through a single Parallels RAS client, which means hybrid scenarios!

When opening the Parallels Client for Windows and logging on, we are presented with the Desktop hosted in WVD.

Upon connection we leverage the WVD Client (default option) which means we get all the capabilities of the WVD Windows client.

And this release of Parallels RAS also contains a fully integrated way of configuring FSLogix including all of the advanced settings, very cool!

The RAS Console also provides great session details.

And we can also interact with the session directly, including shadowing.

And finally, below you see a Hybrid scenario with Applications and Desktops coming from an RDSH farm as well as a Desktop coming from WVD, all in a single client with a single identity!

This article focused on the WVD integration in this release of Parallels RAS; the team once again did an amazing job! As said before, they provide great additional features on top of native WVD without the penalty of overcomplicating things. Special thanks to Christian Aquilina, Director of Program Management at Parallels, for providing me with tech preview access and reviewing this article. Stay tuned for Parallels RAS 18 to become generally available and try it out for yourself!

I had the privilege to test drive the integration of MSIX app attach in the Azure portal at an early stage. In this article I’ll share my early test results!

On October 16, 2019 I wrote the article MSIX app attach will fundamentally change working with application landscapes on Windows Virtual Desktop! This was based on a pre-private preview providing a sneak peek on what’s coming. Since that time I wrote several other articles covering the evolution of the technology. I covered publishing a heavy design application using MSIX app attach, I shared some scripts to transform MSIX applications into packages, and recently I published a video showing the staging and registering of seven applications inside a single MSIX app attach container. Until now, MSIX app attach was all based on PowerShell to stage and register MSIX app attach applications.

Recently Microsoft shared more details on MSIX app attach, and Stefan Georgiev, Senior Program Manager on the WVD team, published a great video on the Azure Portal integration for MSIX app attach.

Before we get started, we obviously need MSIX app attach packages ready on a share and accessible from the WVD hosts. This process is identical to the way we performed it when using the PowerShell method of staging and registering, so I won’t repeat it here. Instead, let’s jump right in and take a look at the integration in the Azure Portal! Inside the Azure portal, on the Host Pool blade, we now have a new option: MSIX packages.


This allows us to perform the first step, staging applications to all WVD Hosts that are part of this host pool. By clicking Add we can specify the UNC path towards the MSIX app attach package (IaaS file share, Azure Files, et cetera). After doing that, the list of MSIX applications (entry points) inside that package is retrieved and presented in a dropdown box. In the example below I have seven applications inside a single package. For this step to work, the host pool obviously needs to be in a healthy state, and at least one WVD Host needs to be running. Adding MSIX packages to a host pool will trigger the RD Agent on a randomly selected healthy WVD Host inside the pool. That WVD host will then load, parse, and validate the MSIX image and the MSIX packages stored inside it.


After selecting an application, we are able to complete the application details by providing the application name, display name, description, etc.


After completing this for a couple of example applications, the result looks like below.


At a five-minute interval, the RD Agent on each WVD Host that is part of the host pool contacts the WVD management service to check for any updates. During that time the staging step takes place and the WVD Host mounts the MSIX app attach package. This can be confirmed by taking a look at Disk Manager.


In order to prevent users from using the MSIX package before it is staged on all WVD Hosts, you can set its state to “Inactive” in the Azure Portal.

Now that we have covered the staging step, let’s take a look at the registering step. For that we open the WVD App group we want to use and select Add. This allows us to specify the MSIX source package and select an application.


After completing this step for, in this case, four sample applications, it looks like below.


We can now log on with a test user who is assigned to this application group and see the end result. Upon logon, the registering step takes place and the MSIX app attach applications now appear in the session!


And here are all four applications running in my user session coming from, in this case, a single MSIX app attach container!


We covered working with MSIX app attach in the Azure Portal, but the PowerShell equivalents are of course also available to help automate the configuration. For that to work, make sure you update your Az.DesktopVirtualization module to version 2.0.1. One example is the Get-AzWvdMsixPackage cmdlet.


In case you want to change the 5-minute interval that WVD hosts use to check for MSIX app attach updates, here is the corresponding registry location.




The Event log also provides information about the various staging and registering steps.


Below is a nice visual representation of the MSIX staging and registering steps, as shared by Tom Hickling.


And finally, special thanks to Stefan Georgiev for providing the preview information and the ability to test drive this early! It’s great to see the huge steps MSIX app attach is taking. The integration in the Azure Portal works really well! I’m looking forward to adding more applications and test-driving other scenarios!

A couple of weeks ago I posted an article on my first experiences with Project ‘Bicep’. Back then, Project ‘Bicep’ had just been released as a 0.1 alpha version. In case you missed that article, follow this link.


Recently, version 0.2 was released, which contains some great new features. I was especially looking forward to the option of using modules and scopes, and to having IntelliSense and code formatting to improve the overall authoring experience.

To get started, install the latest Bicep 0.2 release using the URL below.


To use the Bicep VS Code extension, simply install the latest extension from within VS Code. If you were using a previous version of the extension, you have to uninstall that version first.


We are now ready to start testing some of the new features in this 0.2 release. First of all: modules! Modules can be used to separate Bicep code that creates a specific (set of) resources into different files. Not only does this allow you to split a single bicep file into multiple files, it also allows you to reuse code by simply calling the modules.

Creating a new Bicep module is no different from creating a regular Bicep file. Modules also have the .bicep extension and use the same way of declaring parameters, variables and resources.

Let’s start by creating a super simple module that deploys a new Vnet and Subnet. Below is the Bicep code. If you have played with Bicep 0.1 before, you’ll notice that this is no different from creating a regular bicep file. I still love how extremely simple the Bicep syntax is!
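For readers who don’t want to open the repo right away, here is a rough sketch of what such a module could look like; the names, API version, and address ranges are illustrative placeholders, not the exact code from my repository:

```bicep
param vnetName string = 'vnet-wvd'
param location string = resourceGroup().location
param addressPrefix string = '10.0.0.0/16'
param subnetPrefix string = '10.0.1.0/24'

// A module is just a regular .bicep file: parameters plus resource declarations.
resource vnet 'Microsoft.Network/virtualNetworks@2020-06-01' = {
  name: vnetName
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: [
        addressPrefix
      ]
    }
    subnets: [
      {
        name: 'subnet-wvd'
        properties: {
          addressPrefix: subnetPrefix
        }
      }
    ]
  }
}
```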

Before we start: yes, the code snippets throughout this article are screenshots and therefore not easy to copy-paste and reuse, but I shared the entire set of Bicep files, Bicep modules and transpiled JSON code on GitHub here: Multi-module Bicep project for WVD. I’m using screenshots as this allows for easy reading and annotations.


Before the Bicep 0.2 release, we would build (transpile) a corresponding JSON file from a Bicep file by running the bicep build command, passing the path to the .bicep file.


And since a Bicep module file is no different from a regular Bicep file, we still can. We are, however, not going to do that. Instead, we are going to call the module from another Bicep file. To call a Bicep module we use the code below: we specify the keyword ‘module’, followed by an identifier, and then the path to the module file; in this case the module that creates the Vnet and Subnet. Note that we can pass parameters to overwrite any defaults specified in the module itself. Also, the ‘name’ that we specify here is not the name of the vnet; it is the name of the nested deployment that gets created in JSON. More on that later.


The second thing you’ll notice is that we specified a scope. This is great because it allows us to specify, in this case, the resourceGroup where the resources of this module are going to be deployed. This also allows us to call several different modules that deploy different resources into various resource groups, which comes in very handy for larger deployments that span more than a single resource group. Do make sure that you specify targetScope as ‘subscription’ inside the Bicep file from which you call your modules.
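Putting both points together, a hedged sketch of calling the module from a subscription-scoped file could look like this (the file path, deployment name, and parameter names are illustrative placeholders):

```bicep
targetScope = 'subscription'

param prefix string = 'wvd'

module network './network.bicep' = {
  // 'name' becomes the name of the nested deployment in the generated JSON,
  // not the name of any resource inside the module.
  name: 'networkDeployment'
  // deploy the module's resources into a specific resource group
  scope: resourceGroup('${prefix}-network-rg')
  params: {
    vnetName: '${prefix}-vnet'
  }
}
```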


We are now only calling a single Bicep module from another Bicep file; that of course does not really add value. Let’s step up our game and create another module, this time one that creates a Storage Account including a file share.

Let’s start with the Storage Account and deliberately leave out a property inside the resource declaration that defines the storage account. Note that VS Code with the extension notifies us of the exact property that is missing. A great authoring experience! Not only when mistakenly leaving out a property, but throughout the creation of the resource, IntelliSense provides great feedback on the properties to create.


Let’s now add the creation of a basic file share inside this module. Note that VS Code with the Bicep extension allows us to easily browse through all resources, including, in this case, all available API versions of the storage account resource.
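As a rough sketch of this module (again, names and API versions here are placeholders rather than my exact repo code):

```bicep
param storageAccountName string
param fileShareName string = 'profiles'
param location string = resourceGroup().location

resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: storageAccountName
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
}

// Referencing stg.name makes Bicep emit the dependsOn on the storage account for us.
resource share 'Microsoft.Storage/storageAccounts/fileServices/shares@2019-06-01' = {
  name: '${stg.name}/default/${fileShareName}'
}
```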


Let’s now move on to the test case of this blog post: creating a multi-module Bicep project that deploys a WVD environment containing the following:

  • 4 new Resource Groups
  • Vnet and Subnet
  • Storage account including a File Share
  • Log Analytics Workspace
  • Windows Virtual Desktop Hostpool, AppGroup and Workspace
  • Configuring the diagnostic information for all WVD backplane components

Let’s start by creating the main bicep file from which we will be calling the various Bicep modules. We start by defining the parameters; since we will be calling various Bicep modules from within the main file, we define the parameters we need for all modules. Again, we also set the targetScope to ‘subscription’ to allow us to create Azure objects in various resource groups.


Next, we create 4 new Resource Groups and generate the names based on a defined prefix. We could of course also have created these 4 Resource Groups using a module, but I chose not to, in order to show that you can combine the creation of resources with calling modules from a single main bicep file.
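Declaring the resource groups directly in the main file could look roughly like this (two of the four shown; Bicep 0.2 has no loops yet, so each group is declared explicitly, and the names and location are placeholders):

```bicep
targetScope = 'subscription'

param prefix string = 'wvd'
param location string = 'westeurope'

// Resource groups are subscription-scoped resources, hence targetScope above.
resource networkRg 'Microsoft.Resources/resourceGroups@2020-06-01' = {
  name: '${prefix}-network-rg'
  location: location
}

resource storageRg 'Microsoft.Resources/resourceGroups@2020-06-01' = {
  name: '${prefix}-storage-rg'
  location: location
}
```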


Let’s now call our first module. This module will create a WVD Hostpool, AppGroup and Workspace, create relationships between all objects and also configure the diagnostic information for all of the WVD objects.

In a previous article I already focused on the creation of the WVD objects themselves, so I will not repeat that here. In order to create the diagnostic information for the WVD objects, the WVD objects themselves obviously need to be created first, and the Log Analytics Workspace needs to exist too. To achieve this, I’m actually calling another module from within this module :)


Note that for the parameter logAnalyticsWorkspaceID, I’m referring to the ID of the log analytics workspace. This causes Bicep to auto-create the necessary dependency in JSON.


In fact, Bicep took care of a lot more dependsOn sections for us throughout the generated JSON file; these are called implicit dependencies. When authoring JSON directly, these would all have to be created explicitly.


To finish off the main bicep file, we call the two other modules that create the network and file services components respectively.


The end result is a main bicep file with 5 Bicep module files as shown below.


When Bicep modules are transpiled into ARM template JSON, they are automatically turned into nested inline deployments. Basically, each module equals one nested deployment, regardless of the number of resources defined inside it. This means we only have to transpile the main bicep file. The build process, which takes only a second to complete, transpiles all bicep files automatically, including all the module files that are used. Below is the build command I used. I did get a warning: this is a known issue with version 0.2, because extension resources such as diagnosticSettings and locks are not detected in this release yet. This functionality is going to be added soon. For now we can safely ignore the warning, and the JSON is successfully created.


From within VSCode we can directly deploy the generated JSON file to Azure Resource Manager.


And here is the end result in Azure: we have 4 new Resource Groups created.


A WVD Hostpool, AppGroup and Workspace are created and the objects are connected to each other.


A Storage account with a File Share is created.


A Log Analytics Workspace is created including the diagnostic configuration of all WVD Objects.


And lastly, a Virtual Network with a Subnet is created.


This concludes my first test drive with Bicep 0.2. I must say I’m impressed by the added features and can’t wait for the new features planned for the 0.3 release!

As mentioned before, I made the Bicep files, including all the Bicep modules and the transpiled JSON code, available on GitHub. Feel free to reuse and contribute!

