Posts Tagged ‘ClickOnce’

Azure Blob Storage, Click Once deployment, and recursive file uploads

July 17, 2014

In this blog post, I am going to show you how to upload a folder and all of its contents from your local computer to Azure blob storage, including subdirectories, retaining the directory structure. This can have multiple uses, but I want to call out one use that people still using Click Once deployment might appreciate.

I used to be the (only) Click Once MVP, and still blog about it once in a while. Click Once is a Microsoft technology that allows you to host the deployment of your client application, console application, or VSTO add-in on a file share or web site. When updates are published, the user picks them up automatically. This can be very handy for those people still dealing with these technologies, especially since Microsoft removed the Setup & Deployment package feature from Visual Studio after VS2010 and replaced it with a lame version of InstallShield (example of lameness: it wouldn’t let you deploy 64-bit applications). But I digress.

I wrote a blog article showing how you can host your Click Once deployment in Azure blob storage very inexpensively. (It’s even cheaper now.) The problem is you have to get your deployment up to Blob Storage, and for that, you need to write something to upload it, use something like Cerebrata’s Azure Management Studio, or talk the Visual Studio team and ClickOnce support into adding an option to the Publish page for you. I tried the latter — what a sales job I did! “Look! You can work Azure into Click Once and get a bunch of new Azure customers!” “Oh, that’s a great idea. But we have higher priorities.” (At least I tried.)

Having been unsuccessful with my rah! rah! campaign, I thought it would be useful if I just provided you with the code to upload a folder and everything in it. You can create a new Windows Forms app (or WPF or Console, whatever makes you happy) and ask the user for two pieces of information:

  • Local folder name where the deployment is published. (For those of you who don’t care about ClickOnce, this is the local folder that you want uploaded.)
  • The name of the Blob Container you want the files uploaded to.

Outside of a Click Once deployment, there are all kinds of uses for this code. You can store some of your files in Blob Storage as a backup, and use this code to update the files periodically. Of course, if you have an excessive number of files, you are going to want to run the code in a background worker and have it send progress back to the UI and tell it what’s going on.
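
If you do go the background-worker route, here is a rough sketch of what that could look like in a Windows Forms app. This is my own illustration, not part of the upload code below; the ProgressBar and Label names are made up, and it reuses the GetListOfFilesToUpload and UploadFiles methods shown later in this post.

//Minimal sketch: run the upload off the UI thread and report progress back to the form.
//Requires using System.Collections.Generic and using System.ComponentModel.
private void StartUpload(string folderPath)
{
    BackgroundWorker worker = new BackgroundWorker();
    worker.WorkerReportsProgress = true;

    worker.DoWork += (s, e) =>
    {
        List<string> files = GetListOfFilesToUpload(folderPath);
        for (int i = 0; i < files.Count; i++)
        {
            //upload one file at a time so we can report progress after each one
            UploadFiles(new List<string> { files[i] }, folderPath);
            worker.ReportProgress((i + 1) * 100 / files.Count, files[i]);
        }
    };

    worker.ProgressChanged += (s, e) =>
    {
        //progressBar1 and lblStatus are hypothetical controls on the form
        progressBar1.Value = e.ProgressPercentage;
        lblStatus.Text = (string)e.UserState;
    };

    worker.RunWorkerAsync();
}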

Show me the code

I’m going to assume you understand recursion. If not, check out the very instructive wikipedia article. Or put the code in and just step through it. I think recursion is really cool; I once wrote a program in COBOL that would simulate recursion that went up to 10 levels deep. (For you youngsters, COBOL is not a recursive programming language.)

In your main method, you need to add all of the following code (up until the next section).

First, you need to set up your connection to the Azure Storage Account and to the blob container that you want to upload your files to. Assuming you have the connection string to your storage account, here’s what you need to do.

First you’re going to get an instance of the CloudStorageAccount you’re going to use. Next, you get a reference to the CloudBlobClient for that storage account. This is what you use to access the actual blob storage. And lastly, you will get a reference to the container itself.

The next thing I always do is call CreateIfNotExists() on the container. This does nothing if the container exists, but it does save you the trouble of creating the container out in Blob Storage in whatever account you’re using if you haven’t already created it. Or if you have forgotten to create it. If you add this, it makes sure that the container exists and the rest of your code will run.

//get a reference to the container where you want to put the files, 
//  create the container if it doesn't exist
CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(connectionString);
CloudBlobClient cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient();
CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference(containerName);
cloudBlobContainer.CreateIfNotExists();

I also set the permissions on the container. If this code actually creates the container, the default access is private, and nobody will be able to get to the blobs without either using a security token with the URL or having the storage account credentials.

//set access level to "blob", which means user can access the blob 
//  but not look through the whole container
//  this means the user must have a URL to the blob to access it
BlobContainerPermissions permissions = new BlobContainerPermissions();
permissions.PublicAccess = BlobContainerPublicAccessType.Blob;
cloudBlobContainer.SetPermissions(permissions);

So now we have our container reference (cloudBlobContainer) set up and the container is ready for use.

Next we need to get a list of files to upload, and then we need to upload them. I’ve factored this into multiple methods. Here are the top commands:

List<string> listOfFiles = GetListOfFilesToUpload(folderPath);
string status = UploadFiles(listOfFiles, folderPath);

After this finishes, all of your files are uploaded. Let’s look at the methods called.

GetListOfFilesToUpload(string folderName)

This is the hard part – getting the list of files. This method calls the recursive routine. It starts by instantiating the list of files that will be the final result. Then it retrieves the list of files in the requested directory and adds them to the list. This is “the root”. Any files or directories in this root will be placed in the requested container in blob storage. [folderName] is the path to the local directory you want to upload.

//this is going to end up having a list of the files to be uploaded
//  with the file names being in the format needed for blob storage
List<string> listOfFiles = new List<string>();

//get the list of files in the top directory and add them to our overall list
//these will have no path in blob storage because they go in the root, so they will be like "mypicture.jpg";
string[] baseFiles = Directory.GetFiles(folderName);
for (int i = 0; i < baseFiles.Length; i++)
{
    listOfFiles.Add(Path.GetFileName(baseFiles[i]));
}

Files will be placed in the same relative path in blob storage as they are on the local computer. For example, if D:\zAzureFiles\Images is our “root” upload directory, and there is a file with the full path of “D:\zAzureFiles\Images\Animals\Wolverine.jpg”, the path to the blob will be “Animals/Wolverine.jpg”.

Next we need to get the directories under our “root” upload directory, and process each one of them. For each directory, we will call GetFolderContents to get the files and folders in each directory. GetFolderContents is our recursive routine. So here is the rest of GetListOfFilesToUpload:

//call GetFolderContents (the recursive routine) for each folder to retrieve everything under the top directory
string[] directories = Directory.GetDirectories(folderName);
for (int i = 0; i < directories.Length; i++)
{
    // an example of a directory : D:\zAzureFiles\Images\NatGeo (root is D:\zAzureFiles\Images)
    // topDir gives you just the directory name, which is NatGeo in this example
    string topDir = GetTopDirectory(directories[i]);
    //GetFolderContents is recursive
    List<String> oneList = GetFolderContents(directories[i], topDir);
    //you have a list of files with blob storage paths for everything under topDir (incl subfolders)
    //  (like topDir/nextfolder/nextfolder/filename.whatever)
    //add the list of files under this folder to the list going in this iteration
    //eventually when it works its way back up to the top, 
    //  it will end up with a complete list of files
    //  under the top folder, with relative paths
    foreach (string fileName in oneList)
    {
        listOfFiles.Add(fileName);
    }
}

And finally, return the list of files.

return listOfFiles;

GetTopDirectory(string fullPath)

This is a helper method that just pulls off the last directory. For example, it reduces “D:\zAzureFiles\Images\Animals” to “Animals”. This is used to pass the folder name to the next recursion.

private string GetTopDirectory(string fullPath)
{
    int lastSlash = fullPath.LastIndexOf(@"\");
    string topDir = fullPath.Substring(lastSlash + 1, fullPath.Length - lastSlash - 1);
    return topDir;
}

GetFolderContents(string folderName, string blobFolder)

This is the recursive routine. It returns a List<string> of all the files in and below the folder passed in; folderName is the full path to the local directory being processed, like D:\zAzureFiles\Images\Animals\.

This is similar to GetListOfFilesToUpload; it gets a list of files in the folder passed in and adds them to the return object with the appropriate blob storage path. Then it gets a list of subfolders to the folder passed in, and calls GetFolderContents for each one, adding the items returned from the recursion in to the return object before returning up a level of recursion.

This sets the file names to what they will be in blob storage, i.e. the relative path to the root. So a file on the local computer called D:\zAzureFiles\Images\Animals\Tiger.jpg would have a blob storage path of Animals/Tiger.jpg.

returnList is the List<String> returned to the caller.

List<String> returnList = new List<String>();
            
//process the files in folderName, set the blob path
string[] fileLst = Directory.GetFiles(folderName);
for (int i = 0; i < fileLst.Length; i++)
{
    string fileToAdd = string.Empty;
    if (blobFolder.Length > 0)
    {
        fileToAdd = blobFolder + @"\" + Path.GetFileName(fileLst[i]);
    }
    else
    {
        fileToAdd = Path.GetFileName(fileLst[i]);
    }
    returnList.Add(fileToAdd);
}

//for each subdirectory in folderName, call this routine to get the files under each folder
//  and then get the files under each folder, etc., until you get to the bottom of the tree(s) 
//  and have a complete list of files with paths
string[] directoryLst = Directory.GetDirectories(folderName);
for (int i = 0; i < directoryLst.Length; i++)
{
    List<String> oneLevelDownList = new List<string>();
    string topDir = blobFolder + @"\" + GetTopDirectory(directoryLst[i]);
    oneLevelDownList = GetFolderContents(directoryLst[i], topDir);
    foreach (string oneItem in oneLevelDownList)
    {
        returnList.Add(oneItem);
    }
}
return returnList;

UploadFiles(List<string> listOfFiles, string folderPath)

This is the method that actually uploads the files to Blob Storage. This assumes you have a reference to the cloudBlobContainer instance that we created at the top.

[listOfFiles] contains the files with relative paths to the root. For example “Animals/Giraffe.jpg”. [folderPath] is the folder on the local drive that is being uploaded. In our examples, this is D:\zAzureFiles\Images. Combining these gives us the path to the file on the local drive. All we have to do is set the reference to the location of the file in Blob Storage, and upload the file. Note – the FileMode.Open refers to the file on the local disk, not to the mode of the file in Blob Storage.

internal string UploadFiles(List<string> listOfFiles, string folderPath)
{
    string status = string.Empty;
    //now, listOfFiles has the list of files you want to upload
    foreach (string oneFile in listOfFiles)
    {
        CloudBlockBlob blob = cloudBlobContainer.GetBlockBlobReference(oneFile);
        string localFileName = Path.Combine(folderPath, oneFile);
        blob.UploadFromFile(localFileName, FileMode.Open);
    }
    status = "Files uploaded.";
    return status;
}

Summary

So you have the following:

  • The code for the calling routine that sets the reference to the cloudBlobContainer and makes sure the container exists. This calls GetListOfFilesToUpload and UploadFiles to, well, get the list of files to upload and then upload them.
  • GetListOfFilesToUpload calls GetFolderContents (which is recursive), and ultimately returns a list of the files as they will be named in Blob Storage.
  • GetFolderContents – the recursive routine that gets the list of files in the specified directory, and then calls itself with each directory found to get the files in the directory.
  • UploadFiles is called with the list of files to upload; it uploads them to the specified container.

If the files already exist in Blob Storage, they will be overwritten. For those of you doing ClickOnce, this means it will overlay the appname.application file (the deployment manifest) and the publish.htm if you are generating it.

One other note to those doing ClickOnce deployment – if you publish your application to the same local folder repeatedly, it will keep creating versions under Application Files. This code uploads everything from the top folder down, so if you have multiple versions under Application Files, it will upload them all over and over. You might want to move them or delete them before running the upload.
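
If you would rather automate that cleanup, here is a minimal sketch (my own addition, with a made-up method name) that deletes all but the most recently written versioned folder under Application Files before you build the upload list. Test it against a copy of your publish folder first; it assumes the folder layout ClickOnce publishing normally creates, and it needs using System.IO and using System.Linq.

//Sketch: keep only the newest versioned folder under "Application Files".
private void KeepOnlyLatestVersion(string publishFolder)
{
    string appFilesFolder = Path.Combine(publishFolder, "Application Files");
    if (!Directory.Exists(appFilesFolder))
        return;

    //order the versioned folders by last write time; the newest one is the current version
    string[] versionFolders = Directory.GetDirectories(appFilesFolder)
        .OrderByDescending(d => Directory.GetLastWriteTimeUtc(d))
        .ToArray();

    //skip the first (newest) folder and delete the rest
    foreach (string oldVersion in versionFolders.Skip(1))
    {
        Directory.Delete(oldVersion, true);
    }
}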

This post provided and explained the code for uploading a folder and all of its sub-items to Azure Blob Storage, retaining the folder structure. This can be very helpful for people using ClickOnce deployment and hosting their files in Blob Storage, and for anyone else wanting to upload a whole directory of files with minimal effort.

Windows 8 and ClickOnce : the definitive answer

February 24, 2013

There have been a lot of copies of Windows 8 sold since it came out a few months ago, and the Surface Pro was just released. (In fact, I’m writing this on my brand new Surface Pro, which I really like, but that’s a subject for another time.)

If you’re using ClickOnce deployment, you’re probably wondering how (or if) it’s going to work with Windows 8. I’ve worked with Saurabh Bhatia at Microsoft to ensure that this article will cover what you need to know. We use ClickOnce at GoldMail (whose product is now called Point Across) for our desktop product and VSTO applications, as well as several internal utility applications, so I’ve also tested this on our products to make sure it’s accurate.

If you are hosting your deployment on a file share or on an intranet, you won’t have to make any changes. You can go get ice cream now while the rest of us soldier on.

If you are hosting your deployment on the internet, you will eventually get calls from your customers who have upgraded to Windows 8 or purchased a Windows 8 machine. So let’s talk about that.

I’m not going to talk about the bootstrapper right now; that’s going to come up later. For now, let’s concentrate on the ClickOnce application itself. When a user installs a ClickOnce application on Windows 8, here’s what happens:

  • ClickOnce gets the manifest, checks the certificate, and shows the ClickOnce prompt with “trusted publisher” or “unknown publisher” (depending on your signing certificate).
  • The user clicks the Install button.
  • It checks the certificate on the executable. If it’s not signed, the Smart Screen Filter is triggered.

So here’s what the user experience looks like when you install a ClickOnce application on Windows 8:

You get the standard install prompt:

The publisher is known because I am signing the deployment with a signing certificate purchased from a Certificate Authority – in this case, Verisign.

If you click Install, it shows the standard install dialog and actually installs the application. But then it shows a blue band across your screen saying, “Windows SmartScreen prevented an unrecognized app from starting. Running this app might put your PC at risk.”

There is a small “More Info” link under the warning, and a big “OK” button on the bottom of the dialog. Which one would you click? Which one would your customers click? Most people will click the OK button.

If the user clicks OK, the dialog closes, and nothing else happens. Now let’s say the user goes to TileWorld (I’m borrowing David Pogue’s name for the new Windows 8 interface formerly known as Metro). The user can see the application there in the list of apps because it actually got installed. If he clicks on it to run it, nothing happens. So congratulations! The user has installed your application, but he can’t run it.

What happens if the user clicks “More Info” instead of “OK”? He sees the following screen, and he can choose “Run Anyway” or “Don’t run”.

For “Publisher”, it says “Unknown publisher” – this is referring to the executable, which is not specifically signed. Only the manifests are signed. This has never been a requirement for ClickOnce deployments. Until now.

If the user chooses “Run Anyway”, it will run the application. Yay! And when he goes back to TileWorld and tries to run it from there the next time, it will work and will not prompt him again. Yay!

So let’s say he clicks “Run Anyway”, and now he has no problem running your application. What happens when an update is published and he installs it? Uh-oh. The smart screen filter interrupts again, and he has to select “More Info” and “Run Anyway” again.

Is there a way to circumvent your ClickOnce application being captured and stopped by the Smart Screen Filter? Yes. Otherwise, this would be a much shorter (and depressing) article. All you have to do is sign the application executable after building it and before deploying it. For this, you need your signing certificate and signtool.exe, which ships with the Windows SDK. There are three points in the build/publish process at which you can do this:

1. Post-publish

2. Post-build

3. Pre-publish

#1: Signing the application executable post-publish

To do it post-publish, you have to do the following:

  • a. Publish the files to a local directory.
  • b. Use signtool to sign the exe for the application.
  • c. Use mage or mageUI to re-sign the application manifest (.exe.manifest).
  • d. Use mage or mageUI to re-sign the deployment manifest (.application).
  • e. Copy the files to the deployment location.

If you’ve already automated your deployment with a script and msbuild, this may be the choice you make. If you publish directly from Visual Studio, the other two options are easier.

#2: Signing the application executable post-build

To do this, you define a post-build command in your project. Assuming your certificate (pfx file) is in the top level of your project, you can use something like this:

"C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin\signtool.exe" sign /f "$(ProjectDir)TestWin8CO_TemporaryKey.pfx" /p nightbird /v "$(ProjectDir)obj\x86\$(ConfigurationName)\$(TargetFileName)"

  • The double quotes are required.
  • “C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin\signtool.exe” is the path to the signtool application, used to sign the executable.
  • $(ProjectDir) points to the top directory of the project. The subfolder “\obj\x86” will vary depending on your build output path. The above was created and tested on VS2010. On VS2012, my subfolder is just \obj.
  • $(ConfigurationName) is the build configuration name, such as Debug or Release – this is required because it signs it in the obj directory and has to know which folder to use.
  • $(TargetFileName) is the name of the application executable.
  • TestWin8CO_TemporaryKey.pfx is the name of my certificate file, which is in the top folder of my project.
  • /p nightbird – this is the password for my temporary certificate

I have specified the full path to signtool.exe. I tried to do this with one of the msbuild variables that point to the location of the .NET Framework files, but it doesn’t work – the variable isn’t translated until after the statement executes. If you print it out in the post-build command, it shows the right location in the Visual Studio output window, but you get an error that it can’t find signtool when the statement actually runs. I’m saving you some time here, because I messed around with that for quite a while trying to get it to work, and when I asked Saurabh at Microsoft, he couldn’t get it to work without specifying the whole path, either. So if you get it to work with an msbuild variable, let me know how.

After you’ve created your version of the post-build command, you need to put it in the project properties. Double-click on Properties and click on the Build Events tab. Put your command in the Post-build event command line box.

Now build the project, and the output window will show the results.

If you now publish the application and put the files in the deployment directory, the user can install it and will not see the Smart Screen Filter. Yay!

What if you have multiple programmers working on the application, and they all build and run the application? Every programmer must have signtool.exe in the exact same location for this post-build command to work for everybody. If you have a 32-bit machine, the folder for the “Microsoft SDKs” is under “C:\Program Files”, without the “(x86)” on the end. And someone might actually install Windows to a drive other than C. If their signtool.exe file is not in the same location, they can’t build and run the application, which means they can’t put in changes and test them.

Only the person publishing the application really needs this build command to work. So how do you execute this only for the person publishing the application? You can set up a pre-publish command.

#3: Signing the application executable pre-publish (recommended solution)

The pre-publish command is executed after building the application and right before publishing it. There is no box for this under Build Events, so you have to add it to the project yourself. (Be sure to clear out the post-build event command line before doing this.)

To add a pre-publish command, right-click on the project in Visual Studio and select “Unload Project”.

Now right-click on the project again and select “Edit yourprojectname.csproj”.

It will open the csproj file in Visual Studio so you can edit it. Go down to the bottom and add a new section before the </Project> line. You’re going to put your pre-publish command line in this section.

<Target Name="BeforePublish">

</Target>

So what do you put in this section? You are going to specify a command to execute, so you have to use Exec Command, and put the command to execute in double quotes. Since you can’t put double-quotes inside of double-quotes (at least, not if you want it to work), you need to change the double-quotes in your command to &quot; instead. So my build command from above now looks like this:

<Exec Command="&quot;C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin\signtool.exe&quot; sign /f &quot;$(ProjectDir)TestWin8CO_TemporaryKey.pfx&quot; /p nightbird /v &quot;$(ProjectDir)obj\x86\$(ConfigurationName)\$(TargetFileName)&quot;" />

After making this match your parameters, save the csproj file and then close it. Then right-click on the project and reload it:

Now if you build your project, you won’t see anything about signing the application executable in the output window. It will only do it if you publish, and there won’t be logging letting you know it signed it. How do you know if it worked? Go to the folder you published to, and look in the Application Files folder. Locate the application executable in the folder for the new version. Right-click on it, choose properties. Look for a tab called “Digital Signatures”. If it’s not found, it’s not signed. If you do see it, go to that tab; it will show the signature list and the signer of the certificate. You can double-click on the signer and then view the signing certificate.
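
If you would rather check from code than by clicking through Explorer, here is a small sketch (my own, not part of the original steps) that uses X509Certificate.CreateFromSignedFile to see whether the executable carries an Authenticode signature. Note that this only tells you a signature is present, not that the certificate chain is trusted; for a full check you can run signtool with the verify option.

using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

static class SignatureCheck
{
    //Returns true if the file has an Authenticode signature, and prints the signer's subject.
    public static bool IsSigned(string exePath)
    {
        try
        {
            X509Certificate cert = X509Certificate.CreateFromSignedFile(exePath);
            Console.WriteLine("Signed by: " + cert.Subject);
            return true;
        }
        catch (CryptographicException)
        {
            //CreateFromSignedFile throws if the file has no signature
            return false;
        }
    }
}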

How will the application work after publishing it with a signed executable?

If you sign your executable and your deployment with a valid certificate from a Certificate Authority like Verisign using one of the methods above, when the user clicks install, it will install without stopping and showing the SmartScreen filter, and updates will do the same. Yay!

Do I have to use a certificate from a Certificate Authority to circumvent the Smart Screen Filter?

Yes.

Is there any workaround?

No.

If you try the old tried and true “install the certificate in the trusted publishers store on the client computer”, you will find that this does not circumvent the Smart Screen Filter. You must have a certificate from a valid Certificate Authority. Without one, your customer will get the Smart Screen filter when he installs the application, and every time he installs an update.

What about the bootstrapper (setup.exe)?

The bootstrapper (setup.exe) is signed the same way as the ClickOnce deployment; this happens when you publish. When run, this installs the prerequisites and then calls the ClickOnce application installation. If your certificate is not from a valid CA, the Smart Screen Filter will catch it. This isn’t as critical a problem as the ClickOnce deployment itself because in most cases, your users will only run this the first time.

What about VSTO applications?

If your VSTO application is deployed via a file share or the intranet zone, you will not be impacted. If your VSTO application is deployed via the Internet zone, you may be impacted.

There is no executable for a VSTO application, just an assembly, so you don’t have to do any extra signing. However, the following is true:

If you sign your deployment with a certificate from a CA, everything will work fine, and the Smart Screen filter will not interrupt either the setup.exe or the vsto file from installing the app or keep the app from running.

If you are using a test certificate, setup.exe will be caught by the Smart Screen filter. If you click ‘Run Anyway’, it will install the prerequisites, but it will not let you install the VSTO application.

If you install the test certificate in the Trusted Publishers store, setup.exe will still be caught by the Smart Screen filter, but the VSTO application can be installed and run. This is strongly advised against, as installing the certificate on the user’s machine introduces a significant security risk.

Which method do you recommend?

The advantage of the post-build command is that it is transparent. You can easily go into the Build properties and see there is a post-build command. A pre-publish command is kind of hidden in the project file. However, everybody has to have signtool.exe in the same place, and for us that’s a non-starter. Also, if I did leave the post-build command in there, someone might change it to match their path and check in the change, causing a problem when we actually try to build the application for production.

I used the post-build methods to test my build command until I got it to work, and then ported it to a pre-publish command. 

In summary, here’s a flowchart to help you easily see whether your users will get the Smart Screen filter when they install your application on Windows 8.

One last note: The first version of VS2012 had a bug where the bootstrapper created when publishing a ClickOnce application would not work on a Windows XP machine. This problem was fixed in the first update.

[edit: Fixed build paths, some \’s were missing. Added recommendation. –Robin 2.26.2013]

[edit: After publishing this article, I heard from a couple of people who were still having problems. Please check out the next blog article about this if you are still having problems with the Smart Screen filter, or getting the dreaded “exe has a different computed hash than the manifest” error. –Robin 4.14.2013]

Host your ClickOnce deployment in Azure for pennies per month

July 18, 2011

A while back, I wrote an article that shows you how to host your ClickOnce deployment in Windows Azure Blob Storage. The article assumed that you already had a Windows Azure account.

Since prequels are so popular in Hollywood (Star Wars I-III, anyone?), I thought I would write a prequel to explain how much it costs to host your deployment in Azure, and how to sign up for an Azure account and create the storage account. Hopefully, this article will be more popular than Jar Jar Binks.

Show me the money

How much does it cost to host your ClickOnce deployment in Windows Azure Storage? Well, for a pay-as-you-go account, here are the costs as of today, which I found by going here and clicking on “Pay-As-You-Go”.

Windows Azure Storage

  • $0.15 per GB stored per month
  • $0.01 per 10,000 storage transactions

Data Transfers

  • North America and Europe regions: $0.15 per GB out
  • Asia Pacific Region: $0.20 per GB out
  • All inbound data transfers are at no charge.

Let’s take an example. Let’s say we have a deployment consisting of 30 files and a total size of 30MB. We have 100 customers, and we are going to publish a new version every month, starting in January, and all 100 customers are going to update to every version. At the end of the year, how much will this have cost us?

Put your mathlete hats on and get out your calculators. Ready? Here we go…

The storage cost for one month = $0.15 / GB * 30MB * 1GB/1000MB = $.0045. So January will be (1*value), February will be (2*value) because we’ll have two versions. March will be (3*value), and so on until December when it hits (12*value) because we have 12 versions stored. Adding that up for the whole year (78 * $.0045), the total cost of storing the deployment files for the year comes to $0.351. This is affordable for most people.

Let’s talk about the storage transactions. If you have a file bigger than 32MB, it is one transaction per 4MB and one at the end of the list of blocks. If the file is smaller than 32MB, it’s 1 transaction for that file. All of the files in our case are less than 32MB. So when we upload a new version of the deployment, here are the costs:

Storage Transaction cost when uploading once = 30 files * $.01/10000 = $0.00003.

Data Transfer costs are free going up, so nothing to calculate there. How about coming back down to your customer?

Transaction cost when downloading once = 30 files * $.01/10000 = $0.00003.

Data transfer cost when downloading once = 30 MB * 1GB/1000MB * $0.15/GB = $0.0045

Now you’re wishing you’d paid attention in all of those math classes, aren’t you? And we’re not done yet. Let’s calculate our total for the entire year.

  • $0.00036 = Storage Transaction cost for uploading 12 versions throughout the year.
  • $0.00 = Data Transfer cost for uploading 12 versions.
  • $0.351 = Storage for 12 versions uploaded once per month and retained throughout the year.
  • $0.036 = Storage Transaction cost for downloading 12 versions for 100 customers.
  • $5.40 = Data Transfer cost when downloading 12 versions for 100 customers.

So our grand total is $5.78736, which is an average of about 48 cents per month.
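
If you want to plug in your own numbers, here is a little sketch of the same arithmetic in C#. The rates are the pay-as-you-go rates quoted above and will change over time; the file count, size, customer count, and version count are the ones from this example.

//Rough cost calculator for the example: 30 files and 30 MB per version,
//12 monthly versions, 100 customers who all download every version.
double storagePerGbMonth = 0.15, transferPerGbOut = 0.15, perTransaction = 0.01 / 10000;
int filesPerVersion = 30, versions = 12, customers = 100;
double versionSizeGb = 30.0 / 1000.0;   //30 MB, using 1 GB = 1000 MB as above

double storageYear = 0.0;
for (int month = 1; month <= versions; month++)
    storageYear += month * versionSizeGb * storagePerGbMonth;   //month n is storing n versions

double uploadTransactions = filesPerVersion * perTransaction * versions;
double downloadTransactions = filesPerVersion * perTransaction * versions * customers;
double downloadData = versionSizeGb * transferPerGbOut * versions * customers;

double total = storageYear + uploadTransactions + downloadTransactions + downloadData;
Console.WriteLine("Total for the year: {0:F5}, or about {1:F2} per month", total, total / 12);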

For more detailed information on Windows Azure storage costs, check out this blog entry from the Windows Azure Storage Team; it was written before they eliminated the Data Transfer cost of uploading to blob storage, so ignore that particular cost when you read it. Thanks to Neil McKenzie for clarification, and for providing the link to the Windows Azure Storage Team blog.

Hook me up with an Azure account

You have three basic options.

  1. If you have an MSDN subscription either through your company or because you are a BizSpark customer, you probably get an MSDN benefit that more than covers your ClickOnce deployment costs. The sign-up process will be a little different, but the way you set up your storage account will be the same, so the information below should work for you as well as for those who have no MSDN account. You will have to give your credit card to cover any charges over the free usage benefit.
  2. If you want to try this out for free without giving your credit card, you can sign up for a free 30-day Azure pass. At the end of 30 days, you will have to delete the storage account and set it up on a real account if you want to continue using it. (If you use the same storage account name on the new account, the URL will be the same and your users will be able to pick up updates even though you changed accounts.)
  3. If you sign up for a pay-as-you-go account, you have to give your credit card, but you get a free benefit which would make my deployment example free for the first 3 months. Then at the end of 3 months, it will start charging your credit card, and you will not have to move your storage account. Let’s take a look at how to sign up for this type of account.

Go to http://www.microsoft.com/windowsazure/offers/ This should take you to the Windows Azure Platform Offers shown in Figure 1.


Figure 1: Windows Azure Platform Offers

Click on the Pay-As-You-Go tab and then click the Buy button on the right. Next, you will be given a choice to sign up for a new Windows Live account, or use one you already have (Figure 2).


Figure 2: Sign up or sign in.

They are going to send you e-mail on this account, so be sure it’s an account you actually check periodically. After logging in with your Windows Live account, you will be prompted for your profile information (Figure 3).


Figure 3: Profile information.

Fill in your address and phone number and click the Next button. You will be prompted for company information (Figure 4). I think you’ll find that a lot of people work for “n/a”. I doubt Microsoft looks at that information, but you can amuse yourself by putting in the name of the most popular fruit in America, just in case someone IS looking at the company names — give them a surprise. Although, it is widely reported that Apple uses Windows Azure Storage for their new iCloud service, so it might not surprise them at all. (Google would definitely surprise them!)


Figure 4: Company information

Now they will ask for your Service Usage Address. (You can check the box to use the information you filled in on the profile page.) This is displayed in Figure 5.


Figure 5: Service Usage Address.

Fill in the information and click Finish. Next you will get directions to close this page and go to the Services page. You will find yourself at the Customer Portal for the Microsoft Online Services (Figure 6).


Figure 6: Customer Portal for Microsoft Online Services

Now you get to pick a plan. If you pick the Windows Azure Platform Introductory Special, they provide some benefit for free for the first 90 days. This benefit covers our ClickOnce deployment example above, so it would be free for the first three months, and then would cost you as noted above. If you’re nuts and you don’t like free stuff and just want to pay now, you can select the Windows Azure Platform Consumption plan. Click the Buy Now button on your selection; you will be prompted to log in again and then taken to the Pricing and Online Subscription Agreement screen (Figure 7).


Figure 7: Pricing and Online Subscription Agreement.

Fill in your subscription name. Pick something that you like and can remember. Then read the Online Subscription agreement as carefully as you read all of these things, check the box and hit the Next button. If you don’t read it carefully, and Microsoft comes to your house to pick up your firstborn child, don’t say I didn’t warn you.

Next comes the hard part. Fill in your credit card information and click the Submit button. If your credit card information is correct, you will be sent to the Azure portal (Figure 8).

I now have an Azure account! How do I set up my new storage account?

This is the Windows Azure Portal, which you can reach through this URL: http://windows.azure.com


Figure 8: Windows Azure Portal

This screen is where you manage all of your Azure services. You can define services, set up databases, and set up storage accounts, which is what we’re here to do. Click on the ‘New Storage Account’ icon at the top of the screen as shown in Figure 9.


Figure 9:Create a new storage account

Next you will be prompted for your new storage account name (Figure 10). This will be used in the URLs for accessing your deployment, so you should probably think twice before making it something like “myapplicationsux” or “mypornpix”. The name must have only lowercase letters and numbers. After you fill it in, it will tell you if it’s already used. If it doesn’t give you any errors, it’s available.

Regarding the region, you will be asked to either choose a region, choose an affinity group, or create a new affinity group. This is not something you can change later, so choose wisely. (Unlike Walter Donovan in Indiana Jones and the Last Crusade, if you choose poorly, you will not instantly grow ancient and disintegrate.)


Figure 10: Create a new storage account

An affinity group is basically specifying a location and naming it. You can then choose the affinity group when setting up other services to ensure that your compute instances and your data are in the same region, which will make them as performant as possible.

Just in case you ever want to use this account for something other than Blob Storage, I recommend setting up an affinity group. Select the radio button for “Create or choose an affinity group”, and then use the dropdown to select the location. Mine defaulted to “anywhere in the US”, but it’s better to select a specific region, such as North Central or South Central, or whatever region is closest to you. Then click OK to go ahead and create the storage account. You should now see your storage account in the Windows Azure Portal (Figure 11).


Figure 11: Storage Account

You can assign a custom DNS entry to your storage account by clicking the Add Domain button on the top of the screen and following the instructions.

The URL for accessing your blob storage is on the right side of the screen. Mine is robindotnet.blob.core.windows.net. On the right are also the View buttons for retrieving the primary access key that you will need to set up a client application to access your blob storage. With these two pieces of information, you should be able to view your data.
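
Once you have the storage account name and the primary access key, the connection string you would hand to code (for example, the uploader code in the more recent post at the top of this page) or to a storage tool that asks for one looks roughly like this. The account name and key below are placeholders, and this assumes you have the Windows Azure storage client library referenced.

//Sketch: build the storage connection string from the account name and primary access key.
//"youraccountname" and "yourprimaryaccesskey" are placeholders for your own values.
string connectionString =
    "DefaultEndpointsProtocol=https;" +
    "AccountName=youraccountname;" +
    "AccountKey=yourprimaryaccesskey";

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
Console.WriteLine(account.BlobEndpoint);  //e.g. https://youraccountname.blob.core.windows.net/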

For uploading and maintaining your files in blob storage, I use Cloud Storage Studio from Cerebrata which is excellent, but not free. There are free storage explorers available, such as the Azure Storage Explorer from CodePlex and the Cloudberry Explorer for Azure Blob Storage.

You should be good to go. Now go read the article on how to actually put your ClickOnce deployment in your new storage account, and start racking up those pennies.

How to host a ClickOnce deployment in Azure Blob Storage

February 13, 2011

Now that Microsoft Azure is becoming more widely used, I’m going to do some blogging about it, since I’ve had an opportunity to work with it quite a bit. What better place to start than to do a crossover blog entry on both ClickOnce deployment and Microsoft Azure? So I’m going to show you how to host your ClickOnce deployment in your Azure Blob Storage.

To do this, you need an application that you can use to manage blob storage. I use the Cloud Storage Studio from Cerebrata in my example. A free application recommended by Cory Fowler (Microsoft Azure MVP) is the Azure Storage Explorer from CodePlex.

Here  is a video that explains this process in detail, complete with screenshots. There is a summary below.

To summarize:

Create a container in blob storage for your ClickOnce deployment. You’ll need the container name when setting your URL. I selected ‘clickoncetest’. The only characters allowed are lowercase letters, numbers, and the hyphen (-).

In your project properties, set your Publishing Folder Location to somewhere on your local drive. Set the Installation Folder URL to the URL that will point to the container in blob storage that is going to host your deployment.

For example, I set the first one to E:\__Test\clickoncetest. My account is goldmailrobin, so my installation URL will be http://goldmailrobin.blob.core.windows.net/clickoncetest/

Publish your application. Then go to the local folder and copy the files and folders up to the container in blob storage. When you are finished, in the root of that container you should have the deployment manifest (yourapp.application file) and the bootstrapper (setup.exe) (and publish.htm if you included it). You should also have a folder called “Application Files”.

In “Application Files”, you should see the ‘versioned folders’ that contain the assemblies for each version of your application that you have published.

When doing updates, you need to upload the versioned folder for the new update, and replace the files in the root folder (yourapp.application, setup.exe, and publish.htm).

If you have any problems, you can check the MIME types on the published files and make sure they are right. These can be changed for each file if needed. With ClickOnce deployments, you should definitely be using the option that appends .deploy to all of your assemblies, so you should not have any files with unknown extensions. If you want to double-check, the MIME types for a ClickOnce deployment are explained here.

Remember that with Blob Storage, storing the files is not going to be the biggest cost factor; the transfer of the files to and from the client is.

How do I programmatically find the deployed files for a VSTO Add-In?

July 11, 2010

You can use ClickOnce deployment to install Office Add-ins for Office 2007 and Office 2010. It is very similar to deploying a desktop application, but not identical. With all types of ClickOnce deployments, you may include resources that you need to access programmatically. Files with a file extension of .xml, .mdf, and .mdb are assumed to be data and are deployed by default to the ApplicationDeployment.CurrentDeployment.DataDirectory, but non-data files will be found in the folder with the deployed assemblies.

With a non-VSTO application, you can programmatically find the location of the deployment by either accessing System.Windows.Forms.Application.StartupPath or by checking the Location of the executing assembly. With a VSTO application, the location of the executing assembly does not match the location of the deployment files.

The dll for a VSTO add-in is copied from the deployment directory into a separate location, and it is loaded by the host application from there. I would guess that this happens whenever you run the host application (such as Outlook), and this is why the Office add-in can be uninstalled, reinstalled, updated, etc., without impacting the host application. The changes don’t take effect until the host application is closed and reopened. So when you retrieve the executing assembly’s location, it points to the dll in the run location, and you can’t use that to track down the deployment files.

So how do you find the location of the deployment? You have to examine the CodeBase property of the executing assembly. This comes across as a URI, so you have to retrieve the LocalPath from the URI. It also includes the file name of the main assembly, so you have to retrieve just the directory name specifically.

Here’s the code in VB:

'Get the assembly information
Dim assemblyInfo As System.Reflection.Assembly = System.Reflection.Assembly.GetExecutingAssembly()

'Location is where the assembly is run from 
Dim assemblyLocation As String = assemblyInfo.Location

'CodeBase is the location of the ClickOnce deployment files
Dim uriCodeBase As Uri = New Uri(assemblyInfo.CodeBase)
Dim ClickOnceLocation As String = Path.GetDirectoryName(uriCodeBase.LocalPath.ToString())

Here’s the code in C#:

//Get the assembly information
System.Reflection.Assembly assemblyInfo = System.Reflection.Assembly.GetExecutingAssembly();

//Location is where the assembly is run from 
string assemblyLocation = assemblyInfo.Location;

//CodeBase is the location of the ClickOnce deployment files
Uri uriCodeBase = new Uri(assemblyInfo.CodeBase);
string ClickOnceLocation = Path.GetDirectoryName(uriCodeBase.LocalPath.ToString());

When you compare these values for a non-VSTO ClickOnce application, the directories are the same. When I run this code for my Outlook Add-In, I get these values:

assemblyLocation =

C:\Users\Robin\AppData\Local\assembly\dl3\VBJZ5WH8.6NJ\HRZA3JXN.LVG\b0520efe\3a5b99ef_2e21cb01\GoldMail Outlook Add-In.DLL

ClickOnceLocation =

C:\Users\Robin\AppData\Local\Apps\2.0\ZMBZ82EH.TDG\WHXVWE4L.GZ7\gold..vsto_7a251ffffc558391_0002.0000_10ff9c34a357cc30

If you’ve ever looked at the ClickOnce cache, the ClickOnceLocation will be familiar to you, being in the same format and location as other types of ClickOnce applications.

So the assemblyLocation is where the dll is actually being run from by the hosting application (Outlook in this case), and the ClickOnceLocation is the location of the other deployment files. Any files that you deploy with your VSTO add-in and want to access programmatically can be found there.
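
For example, if you deploy an XML file alongside your add-in, you could load it like this (the file name here is just an example):

//Build the full path to a file that was included in the ClickOnce deployment.
string settingsPath = Path.Combine(ClickOnceLocation, "AddInSettings.xml");
if (File.Exists(settingsPath))
{
    string settingsXml = File.ReadAllText(settingsPath);
    //...use the file contents...
}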

MIME Types for ClickOnce deployment

June 12, 2010

When you are hosting a ClickOnce deployment on a webserver, you need to have certain MIME types defined so the server knows how to handle the files. You can host a ClickOnce deployment on any server regardless of the operating system. So you can host a ClickOnce deployment on an Apache server just as easily as on a server running Microsoft Windows Server. You just need to set up the right MIME types.

When you install the .NET Framework, it registers the MIME types automatically. This is why you don’t have to set up MIME types if you install IIS on your desktop machine and test your deployments by deploying to localhost. Carrying that forward, if you have the .NET Framework installed on your server, the MIME types should already be registered.

This is generally one of the first things to check when you’re having problems downloading the deployment. A definite sign that your MIME types are not set up correctly is if your customers try to install the application and it shows the XML of the deployment manifest (the .application file) in Internet Explorer rather than installing the application.

Here are the basic MIME types you need for every ClickOnce deployment:

.application –> application/x-ms-application
.manifest –> application/x-ms-manifest
.deploy –> application/octet-stream

If you are targeting .NET 3.5 or .NET 4.0, you need these as well:

.msp –> application/octet-stream
.msu –> application/octet-stream

If you are deploying an Office application (VSTO add-in), you need this one:

.vsto –> application/x-ms-vsto

If you are deploying a WPF application, you need these:

.xaml –> application/xaml+xml
.xbap –> application/x-ms-xbap

Click one of these links to see how to set MIME types in IIS 6 or IIS 7.
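
If you are on IIS 7 or above and would rather not click through IIS Manager, you can also add the MIME types in a web.config file in the root of your deployment folder. Here is a sketch with the first three types; add the others from the lists above the same way. (If a type is already defined at the server level, IIS will complain about a duplicate, so only add the ones that are missing.)

<configuration>
  <system.webServer>
    <staticContent>
      <mimeMap fileExtension=".application" mimeType="application/x-ms-application" />
      <mimeMap fileExtension=".manifest" mimeType="application/x-ms-manifest" />
      <mimeMap fileExtension=".deploy" mimeType="application/octet-stream" />
    </staticContent>
  </system.webServer>
</configuration>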

If your application is hosted on an Apache webserver, you can set up your own MIME types by putting entries in the .htaccess file in the root folder of your deployment. The syntax for adding the MIME types is this:

AddType Mime-type file-extension

For example, for the first three MIME types above, you would add these lines to your .htaccess file:

AddType application/x-ms-application application
AddType application/x-ms-manifest manifest
AddType application/octet-stream deploy

You can create the .htaccess file simply by opening notepad or some other text editor and adding the lines above to it, and saving it with the file name of .htaccess. Then copy it to the root of your deployment folders, and those MIME types will work for that folder and all of its subfolders.

For more information than you ever wanted to know about .htaccess files, check out this article.

Enhanced Logging in ClickOnce Deployment

May 31, 2010

With the .NET 4.0 Framework, Microsoft has added the ability to turn on verbose logging for the install, update, and uninstall of a ClickOnce application. Of course, it doesn’t do much good to modify the amount of logging without providing access to the log file, so they have also added the ability to let you specify the location and name of the output file for the logging. When you define the log file path, it creates or appends to the log file every time a ClickOnce application is installed, updated, or uninstalled. It even works if you are doing programmatic updates.

What’s particularly keen about these features is that they work for any ClickOnce application as long as .NET 4.0 is installed. In other words, if your ClickOnce application targets the .NET 2.0, 3.0, or 3.5 Framework, you can still install the .NET 4.0 Framework on the computer and take advantage of the new logging features. (Installing .NET 4.0 also installs the updated ClickOnce engine, which is more stable.)

Now what you really want to know is what the log files look like, so you’ll know whether this is worth the trouble. I created a small application that has one Windows Form, and I deployed it with ClickOnce. I installed and uninstalled it with verbose logging turned on and with verbose logging turned off. Then I added programmatic updates and let it update asynchronously and restart the application. The logging for automatic updates is the same as an install, but it’s less detailed if you are using programmatic updates. Here are the log files:

ClickOnceLog_Verbose_Install
ClickOnceLog_Verbose_Uninstall
ClickOnceLog_Verbose_Update_Programmatic
ClickOnceLog_NotVerbose_Install
ClickOnceLog_NotVerbose_Uninstall
Zip file of all five log files

Since you’re still reading this, you probably want to know how to turn on the verbose logging and set up the log file. Or at least, how to set up the log file (you may think the verbose logging is too chatty). It’s very simple. You need to add some entries to the Windows registry.

The log file path goes here:
HKEY_Current_User\Software\Classes\Software\Microsoft\Windows\CurrentVersion\Deployment\LogFilePath

The registry key for verbose logging goes here:
HKEY_Current_User\Software\Classes\Software\Microsoft\Windows\CurrentVersion\Deployment\LogVerbosityLevel

To add these, open the registry editor and drill down to the Deployment sub-key. Right-click on Deployment and select “New” and then “String Value” as displayed here:

Type in the key name and press Enter. Then press Enter again or double-click on the entry to set the value.

Type in the value and press Enter.

If you want to stop outputting the logging, delete the registry key for the LogFilePath.

If you change or set the LogFilePath, it takes effect immediately. If you change or set the LogVerbosityLevel, it does not take effect until the user logs off and logs back in.

Another fun piece of information: If you set the LogFilePath to an empty string instead of deleting the registry entry, every time you run the ClickOnce application, you will get an error that dfsvc.exe (the ClickOnce service) is no longer working. (I’m thinking that’s not a feature.)

One word of caution: there is no automatic cleanup of the log file, no rollover when it gets to a certain size or anything like that. ClickOnce continues to append to the log file with every install, update, and uninstall. So if you start up the logging on someone’s computer, you must remember to come back around and turn it back off.

If you want to leave the logging turned on, I strongly recommend that you have your application manage the log file. For example, you could rename the log file every time a new one is created, and make sure you only keep the most recent one. You can tell whether the application (or an update) was just installed by checking System.Deployment.Application.ApplicationDeployment: if IsNetworkDeployed is true, the CurrentDeployment.IsFirstRun property tells you whether this is the first run of a newly installed version, and that is a good time to manage the log file.
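
Here is a minimal sketch of that check. ManageLogFile is a made-up method standing in for whatever renaming or trimming you decide to do, and this needs a reference to System.Deployment.

using System.Deployment.Application;

//Run this at startup: IsFirstRun is true the first time a newly installed
//(or newly updated) version of the application runs on this machine.
if (ApplicationDeployment.IsNetworkDeployed &&
    ApplicationDeployment.CurrentDeployment.IsFirstRun)
{
    ManageLogFile();   //hypothetical - rename or trim the ClickOnce log file here
}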

Programmatically modifying the registry

Editing the registry is fraught with peril. I remember the old days when you used to be able to call Microsoft for help with Windows (the REALLY old days). One of the first things they would ask you was, “Did you edit the Windows registry?” It only took one phone call to learn that if you said “Yes”, they would not help you. This taught many developers to, well, lie.

So just for grins, I wrote a small .NET application that provides a user interface you can use to change the registry entries for the ClickOnce logging. Basically it displays a screen where you can browse to the folder where you want your log files, and you can turn verbose logging on and off. Here’s what the UI looks like:

You could take this project and set the target Framework to the same as your application. If your user has the .NET Framework installed on their computer, you can just build this application and then copy the executable to the user’s machine and run it to set the values.  Or better yet, put it on a thumb drive and run the executable from there.

Since the registry entries are in HKCU, you could also have your ClickOnce application update these values, but that’s not helpful for the first install!

Let’s take a look at the LogSettings class that does all the work. This is in C#, but the whole solution is available in both C# and VB at the end of this article.

First, here are the namespaces you need to include. Microsoft.Win32 is for the registry changes.

using System.IO;
using Microsoft.Win32;

Next are the constants and properties. This includes the registry sub-key name. When you retrieve the registry sub-key, you specify which hive, such as HKCU, so it is not part of the sub-key name.

//This is where the values go for the ClickOnce logging, in HKEY_Current_User.
private static string ClickOnceLoggingSubKeyName = 
  @"Software\Classes\Software\Microsoft\Windows\CurrentVersion\Deployment";

/// <summary>
///This is the registry value name for the log file path. 
///The value should be the fully-qualified path and file name of the log file.
/// </summary>
private static string rkName_LogFilePath = "LogFilePath";
      
/// <summary>
/// This is the registry value name for the logging level. 
/// The value should be = 1 if you want verbose logging. 
/// To turn off verbose logging, you can delete the entry or set it to 0.
/// </summary>
private static string rkName_LogVerbosityLevel = "LogVerbosityLevel";

/// <summary>
/// Fully-qualified path and name of the log file.
/// </summary>
public string LogFileLocation { get; set; }

/// <summary>
/// Set this to 1 for verbose logging.
/// </summary>
public int LogVerbosityLevel { get; set; }

/// <summary>
/// Set to true if doing verbose logging.
/// </summary>
public bool VerboseLogging { get; set; }

Here is the method called to create an instance of the class; this reads the current entries from the registry and stores them in the class properties. First you open the sub-key, and then you retrieve the two values that you need.

/// <summary>
/// Create a new instance of this class and get the value for the registry entries (if found).
/// </summary>
/// <returns>An instance of this class.</returns>
public static LogSettings Create()
{
  LogSettings ls = new LogSettings();

  //open the Deployment sub-key.
  RegistryKey rk = Registry.CurrentUser.OpenSubKey(ClickOnceLoggingSubKeyName);

  //get the values currently saved (if they exist) and set the fields on the screen accordingly
  string logLevel = rk.GetValue(rkName_LogVerbosityLevel, string.Empty).ToString();
  if (logLevel == "1")
  {
    ls.VerboseLogging = true;
    ls.LogVerbosityLevel = 1;
  }
  else
  {
    ls.VerboseLogging = false;
    ls.LogVerbosityLevel = 0;
  }
  ls.LogFileLocation = rk.GetValue(rkName_LogFilePath, string.Empty).ToString();           
  return ls;
}

And last, but not least, here is the method to save the entries back to the registry. You open the sub-key in write mode, so you can modify the associated values. If the log level is not set to verbose, this deletes the value for the logging level from the registry. If no file name is specified for the log file, this deletes that registry entry. Otherwise, the entries are updated with the values set on the screen.

/// <summary>
/// Save the values to the registry.
/// </summary>
/// <returns></returns>
public bool Save()
{
  bool success = false;
  try
  {
    //Open the Deployment sub-key for writing, creating it if it doesn't already exist.
    RegistryKey rk = Registry.CurrentUser.CreateSubKey(ClickOnceLoggingSubKeyName);
    //Set the values associated with that sub-key.
    if (this.VerboseLogging)
      rk.SetValue(rkName_LogVerbosityLevel, "1");
    else
    {
      //Check to make sure the value exists before trying to delete it.
      object chkVal = rk.GetValue(rkName_LogVerbosityLevel);
      if (chkVal != null)
      {
        rk.DeleteValue(rkName_LogVerbosityLevel);
      }
    }

    if (string.IsNullOrEmpty(this.LogFileLocation))
    {
      //Check to make sure the value exists before trying to delete it.
      //Note: If you set the value to string.Empty instead of deleting it,
      //  it will crash the dfsvc.exe service.
      object chkPath = rk.GetValue(rkName_LogFilePath);
      if (chkPath != null)
        rk.DeleteValue(rkName_LogFilePath);
    }
    else
    {
      rk.SetValue(rkName_LogFilePath, this.LogFileLocation);
      //Make sure the folder for the log file exists.
      string logFolder = Path.GetDirectoryName(this.LogFileLocation);
      if (!Directory.Exists(logFolder))
        Directory.CreateDirectory(logFolder);
    }
    }
    success = true;
  }
  catch
  {
    throw;
  }
  return success;
}
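
Before moving on, here's a minimal usage sketch of the LogSettings class defined above; the log file path is just an example I picked. It reads the current settings, turns on verbose logging to a file, and writes the values back to the registry.

//Minimal usage sketch; the log file path here is just an example.
LogSettings settings = LogSettings.Create();             //read the current values from HKCU
settings.VerboseLogging = true;
settings.LogVerbosityLevel = 1;
settings.LogFileLocation = @"C:\Temp\ClickOnceLog.txt";  //any folder you can write to
bool saved = settings.Save();                            //write the values back to the registry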

The Visual Studio 2010 solutions can be downloaded in either C# or VB.

Download the C# version

Download the VB version

Acknowledgements

Thanks very much to Jason Salameh at Microsoft, who I believe was largely responsible for adding this feature. I believe it will provide some clarity and help with troubleshooting when people have problems installing ClickOnce applications.

[Edit 7/7/2011 Move downloads to Azure blob storage]

[Edit 3/8/2014 Move to different Azure blob storage]

What’s New in ClickOnce Deployment in .NET 4.0

May 23, 2010

There are some cool new features in .NET 4.0 and VS2010 for ClickOnce deployment. I’m going to summarize the new features here and provide some links to the details.

The ClickOnce engine has been updated and strengthened, and the team that supports the runtime says that the errors that seem to have no solution, such as "dfsvc.exe has stopped working", should be fixed. They have also added enhanced logging and the option to set the location (and name) of the ClickOnce log file — no more searching through your Temp folder! What’s cool about these changes is that they apply if you install .NET 4.0 on the computer, regardless of the version your application targets. So if you have a Windows Forms application that targets .NET 2.0 and you are having problems installing it, you can install .NET 4.0 on the computer and turn on the verbose logging. This has already been useful to me, so I will blog about it next.

They have fixed “the certificate problem” in all cases. The problem is discussed in detail here. The basic problem was that when you changed the certificate used to sign the ClickOnce deployment, the customers would have to uninstall and reinstall the application. They fixed this in .NET 3.5 SP-1 if you used automatic updates, but not for programmatic updates, and not for VSTO projects. Now they have fixed it in all cases when targeting the .NET 4.0 Framework.

With VS2010, you can configure your application to target multiple versions of the .NET Framework. Of course, this doesn’t mean it will run on multiple versions – you will have to verify that yourself. To use this feature, you have to manually edit the manifest files and app.config file and re-sign the manifests. For details, click here.

XBAP applications (browser-hosted WPF applications) can now elevate to Full Trust just like any other ClickOnce application. For more info, check out this article.

For VSTO projects, you can now deploy multiple Office solutions together with one ClickOnce installation. Like the Three Musketeers, it’s all for one and one for all. All of them are installed together, and if you uninstall through Programs, it uninstalls all of them. 

For VSTO projects, you also now have the ability to do some post-deployment actions. For example, you can create registry keys, modify a config file, or copy a document to the end user computer.

Another interesting change for VSTO projects: if you are targeting the .NET 4.0 Framework, you no longer have to include the Primary Interop Assemblies as a prerequisite. The type information for the PIAs that you use in your add-in is embedded into the solution assembly and used at runtime instead of the PIAs.

And last, but not least — in .NET 3.5 SP-1, they quietly introduced the ability to create your own custom UI for your ClickOnce deployment, while still using the ClickOnce engine to install and update your application. They have improved this feature for .NET 4.0; check out the walkthrough example here.
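
If you're curious what that looks like in code, here is a heavily trimmed sketch of the InPlaceHostingManager flow the custom-UI walkthrough is built around. The deployment URL is hypothetical, it needs a reference to System.Deployment.dll, and a real installer would add proper error handling and progress UI.

using System;
using System.Deployment.Application;

public class CustomUiInstaller
{
  private InPlaceHostingManager _iphm;

  public void InstallApplication()
  {
    //Point at the deployment manifest (hypothetical URL).
    Uri deploymentUri = new Uri("http://myserver/MyApp/MyApp.application");
    _iphm = new InPlaceHostingManager(deploymentUri, false);
    _iphm.GetManifestCompleted += OnGetManifestCompleted;
    _iphm.GetManifestAsync();
  }

  private void OnGetManifestCompleted(object sender, GetManifestCompletedEventArgs e)
  {
    if (e.Error != null) { Console.WriteLine(e.Error.Message); return; }

    //Grant the trust the application manifest asks for, then download and install.
    _iphm.AssertApplicationRequirements(true);
    _iphm.DownloadProgressChanged += (s, args) =>
      Console.WriteLine(args.ProgressPercentage + "% downloaded");
    _iphm.DownloadApplicationCompleted += (s, args) =>
      Console.WriteLine(args.Error == null ? "Installed." : args.Error.Message);
    _iphm.DownloadApplicationAsync();
  }
}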

I think there’s something useful in the new features for everyone. If there are other features you would like to see – other than “install for all users”, which is a complete paradigm shift – please leave a comment. I will summarize the requests and pass them on to the ClickOnce team at Microsoft.

The Future of ClickOnce Deployment

April 18, 2010

People frequently ask about the future of ClickOnce deployment. I hear and read things like “Microsoft hasn’t updated their ClickOnce blog since 2006.” “They never change anything in ClickOnce.” “You never hear anything about ClickOnce deployment updates.” “Are they going to keep supporting it?” “Why doesn’t Microsoft use ClickOnce themselves?”

The answers to those questions are: 1. It’s not sexy so nobody talks about it. 2. Yes they do. .NET 3.5 included ClickOnce deployment for VSTO applications, which is awesome. And SP-1 included optional signing and hashing, file associations, and other fun stuff. 3. You do if you know where to listen. 4. Yes. 5. They use it for many of their apps used internally. I don’t think they can use it for Visual Studio or Office. Can you imagine what the prerequisite list would look like?

Silverlight is sexy. Windows Phone 7 is sexy. WPF is sexy. Deployment? Not so much. The release of Silverlight 4 was even noted on one of the Apple News sites. How does “I created this really cool component that you can embed in a WPF application and it cleans your computer screen and tidies up your desk” compare with “I figured out how to install this really cool component on your computer.” See what I mean?

Deployment is like Amazon.com delivery. You don’t ever think about how your books get from that cool page on the web to your Kindle in two minutes (or, if you’re a traditionalist, to your front porch in two days), but aren’t you excited when they show up?

Even though Microsoft doesn’t go on Oprah to discuss their feelings about ClickOnce deployment, I have discovered over the past few months that they really do care about it.

Saurabh Bhatia, the ClickOnce expert at Microsoft, has been helping me over the past year to respond to some of the more difficult questions in the forums. When I attended the MVP Summit in February, I met with Saurabh and some of the other people who work in and around ClickOnce. The 1-hour meeting stretched into 3-1/2 hours as we discussed feedback and information I had collected from the MSDN Forums, StackOverflow, blog articles, and from individuals who e-mailed me or talked to me after my presentations. I passed on complaints, common problems, and most frequently requested new features. They really wanted to know, and were glad to get the information.

In return, they provided me with a look at what’s coming in .NET 4.0 (that’s the next blog post). Since I’ve returned, they have followed up with answers to my questions. (They were sending me e-mails with answers before I’d even left Washington!) Saurabh and one of his cohorts, Jason Salameh, continue to provide resources to help me support the ClickOnce Deployment community, and Saurabh set up regular meetings just to touch base and help me with any difficult issues or questions that come up that I can’t answer. I think of it as a “Stump Saurabh!” session, but so far I’ve only managed to stump him once (proxy authentication). I learn something new with every conversation.

Also coming soon is an update of the Patterns and Practices Smart Client Software Factory and the ClickOnce documentation for it. (I know that because they asked me to do the update to the docs. I was so flattered!)

It’s safe to say that Microsoft will continue to support and enhance ClickOnce deployment. My next blog post will be a summary of the new features available in .NET 4.0. If there are features you want, post a comment and I’ll pass it along. If you have questions about problems you’re having with ClickOnce deployment, please post a question in the MSDN ClickOnce and Setup & Deployment Forum. I’ll see you there.

How to pass arguments to an offline ClickOnce application

March 21, 2010

In ClickOnce Deployment, it is a common belief that you cannot pass arguments to an application unless:

  1. The application is deployed to a web server, and
  2. The application is online-only.

If you are interested in passing query parameters to an online-only application deployed to a web server, check out the article on MSDN. About offline applications, or applications deployed via a file share, that page says you can’t pass arguments to them at all.

However, this is no longer the case. I suspect it was changed when they added the ability to do file associations in .NET 3.5 SP-1. It turns out that you can now pass parameters to:

  • an offline ClickOnce application,
  • a ClickOnce application deployed to a file share, and even to
  • an offline ClickOnce application deployed to a file share.

And of course you can pass parameters to an online-only application using query parameters, but we already knew that (see article referenced above).

Here’s how you call the application and pass the arguments:

System.Diagnostics.Process.Start(shortcutPath, argsToPass);

This is how you read the argument string:

//Get the ActivationArguments from the SetupInformation property of the domain.
string[] activationData = 
  AppDomain.CurrentDomain.SetupInformation.ActivationArguments.ActivationData;
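
One caveat worth knowing: ActivationArguments can be null if the application is launched outside of ClickOnce (for example, straight from the exe), and ActivationData is null when nothing was passed in, so it's safer to guard the read, along these lines:

//Guard against running outside of ClickOnce (ActivationArguments is null)
//  and against being launched with no arguments (ActivationData is null).
var activationArgs = AppDomain.CurrentDomain.SetupInformation.ActivationArguments;
string[] activationData =
  (activationArgs == null) ? null : activationArgs.ActivationData;
if (activationData == null || activationData.Length == 0)
{
  //No arguments were passed in; carry on with your defaults.
}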

Here are three different ways to pass and receive arguments:

  1. Pass a file path and name as a URI. This mimics what happens when you set up a file association and double-click on a file with that extension.  The argument will start with “file:”.
  2. Pass a query string with key-value pairs in it. This is the same way you pass and parse query strings for an online-only application. The argument will start with a question mark.
  3. Pass one value or a list of comma-delimited values.

Locate the shortcut for the application.

The first thing you need to do is locate the shortcut in the start menu for the application you want to run — this is made up of the Publisher Name and Product Name. Here is how to find the location of the application’s shortcut:

StringBuilder sb = new StringBuilder();
sb.Append(Environment.GetFolderPath(Environment.SpecialFolder.Programs));
sb.Append("\\");
//publisher name is Nightbird
sb.Append("Nightbird");  
sb.Append("\\");
//product name is TestRunningWithArgs
sb.Append("TestRunningWithArgs.appref-ms ");  
string shortcutPath = sb.ToString();

Call the application with an argument string:

System.Diagnostics.Process.Start(shortcutPath, argsToPass);

The argument string that you pass cannot have spaces or double quotes in it. If you pass [a b c], you will only get [a] on the receiving side. If you pass [“a”,”b”,”c”], you will still only get [a]. If you need to pass values with spaces, pass your arguments as a query string and encode the spaces (as shown in Case 2 below).

How to create the argument string to be passed, and how to parse it on the receiving side.

Case 1: Sending file path and name. This mimics what happens when you use the ClickOnce properties to set up a file association, and the user double-clicks on an associated file.

Build the argument string:

//If you have file associations, 
//and you double-click on an associated file,
//it is passed in to the application like this: 
//file:///c:/temp/my%20test%20doc.blah
//So format it as a URI and send it on its way.
string fileName = @"D:\MyPictures\thePicture.jpg";
Uri uriFile = new Uri(fileName);
string argsToPass = uriFile.ToString();

On the receiving side, retrieve the file path from the URI:

//This is what you get when you set up a file association 
//  and the user double-clicks on an associated file. 
Uri uri = new Uri(activationData[0]);
string fileNamePassedIn = uri.LocalPath;

Case 2: Sending a querystring

Define the argument string:

//querystring 
string argsToPass = "?state=California&city=San%20Francisco";
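
Since spaces aren't allowed in the raw argument string, it may be easier to let the framework do the escaping when you build the query string; something like this (the values are just examples):

//Escape each value so spaces and other special characters survive the trip.
string argsToPass = string.Format("?state={0}&city={1}",
  Uri.EscapeDataString("California"),
  Uri.EscapeDataString("San Francisco"));  //yields ?state=California&city=San%20Francisco

HttpUtility.ParseQueryString decodes the values again on the receiving side, so you get the spaces back.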

Parse the string on the other side:

//NameValueCollection works like a dictionary; it lives in System.Collections.Specialized.
//HttpUtility.ParseQueryString needs a reference to the System.Web assembly
//  (use the full .NET Framework, not the Client Profile).
NameValueCollection nvc = 
  System.Web.HttpUtility.ParseQueryString(activationData[0]);
//Get all the keys in the collection, 
//  then pull the values for each of them.
//I know I'm only passing each key once, with one value, 
//  in the querystring.
string[] theKeys = nvc.AllKeys;
foreach (string theKey in theKeys)
{
  string[] theValue = nvc.GetValues(theKey);
  //key is theKey
  //value is theValue[0]
  System.Diagnostics.Debug.Print("Key = {0}, Value = {1}", 
    theKey, theValue[0]);
}

Case 3: Pass a list of comma-delimited values

Define the argument string:

//pass a comma-delimited list of values
//don't use any spaces
string argsToPass = "arg1,arg2,arg3,arg4";

Parse the string on the other side:

//I've only ever seen activationData have one entry,
//  but I'm checking for multiples just in case. 
//This takes each entry, splits it by comma, 
//  and separates it into its individual values.            
char[] myComma = { ',' };
foreach (string arg in activationData)
{
  string[] myList = arg.Split(myComma, 
    StringSplitOptions.RemoveEmptyEntries);
  foreach (string oneItem in myList)
    System.Diagnostics.Debug.Print("Item = {0}", oneItem);
}

If you only want to send one value, you can skip the splitting and just read activationData[0] directly.

Summary and Code Samples

This showed three ways to pass arguments to an offline ClickOnce application, and how to parse them on the receiving side.

Code samples can be found here. They are available in both C# and VB. They were built with VS2008 and they target .NET 3.5 SP-1, which is the minimum version for which this will work.

The first solution (RunningWithArgs) receives the arguments, parses them, and displays them in a listbox. You need to deploy this one to a webserver or a file share and then install it. To determine which kind of argument it is receiving, it checks the first characters of the argument string.
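
If you're wondering what that check looks like, here's roughly how it can be done (a sketch; the downloadable sample may differ in the details):

string arg = activationData[0];
if (arg.StartsWith("file:", StringComparison.OrdinalIgnoreCase))
{
  //Case 1: a file passed as a URI, e.g. via a file association.
  string fileName = new Uri(arg).LocalPath;
}
else if (arg.StartsWith("?"))
{
  //Case 2: a query string of key/value pairs.
  NameValueCollection nvc = System.Web.HttpUtility.ParseQueryString(arg);
}
else
{
  //Case 3: one value or a comma-delimited list of values.
  string[] values = arg.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
}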

The second solution (CallRunningWithArgs) calls the first one and passes arguments to it. It uses the conventions mentioned previously to determine the kind of arguments being passed. If the first application is installed, you can just run this one out of Visual Studio.