Archive for 2013

Windows Azure at San Diego Code Camp – 27th and 28th of July 2013

July 15, 2013

There is a code camp in San Diego on July 27th and 28th that has a lot of really interesting sessions available. If you live in the area, you should check it out – there’s something for everyone. It also gives you an opportunity to talk with other developers, see what they’re doing, and make some connections.

I’m going to be traveling from the San Francisco Bay Area to San Diego to speak, as are some of my friends — Mathias Brandewinder, Theo Jungeblut, and Steve Evans.

For those of you who don’t know me, I’m VP of Technology for GoldMail (soon to be renamed PointAcross), and a Microsoft Windows Azure MVP. I co-run the Windows Azure Meetups in San Francisco and the Bay.NET meetups in Berkeley. I’m speaking about Windows Azure. One session is about Windows Azure Web Sites and Web Roles; the other one is about my experience migrating my company’s infrastructure to Windows Azure, some things I learned, and some keen things we’re using Azure for. Here are my sessions:

Theo works for AppDynamics, a company whose software has some very impressive capabilities to monitor and troubleshoot problems in production applications. He is an entertaining speaker and has a lot of valuable expertise. Here are his sessions:

Mathias is a Microsoft MVP in F# and a consultant. He is crazy smart, and runs the San Francisco Bay.NET Meetups. Here are his sessions:

Steve (also a Microsoft MVP) is giving an interactive talk on IIS for Developers – you get to vote on the content of the talk! As developers, we need to have a better understanding of some of the system components even though we may not support them directly.

Also of interest are David McCarter’s sessions on .NET coding standards and how to handle technical interviews. He’s a Microsoft MVP with a ton of experience, and his sessions are always really popular, so get there early! And yet another Microsoft MVP speaking who is always informative and helpful is Jeremy Clark, who has sessions on Generics, Clean Code, and Design patterns.

In addition to these sessions, there are dozens of other interesting topics, like Estimating Projects, Requirements Gathering, Building cross-platform apps, Ruby on Rails, and MongoDB (the name of which always makes me think of mangos), and that’s just to name a few.

So register now for the San Diego Code Camp and come check it out. It’s a great opportunity to increase your knowledge of what’s available and some interesting ways to get things done. Hope to see you there!

Moving our Windows Azure SQL Databases for the Rebranding/AzureSDK2.0 project

July 14, 2013

As detailed in my last couple of posts, we have rebranded our company as PointAcross, which means we rebranded all of the products except the desktop Composer, which is still known as GoldMail. I created all new services and storage accounts with pointacross in the name in a new region, and everything is published in production and working fine.

Only one thing is left – the Windows Azure SQL Databases are still in US North Central. This is a disaster waiting to happen. It is well known that WASDs have issues with connections, and you always have to put in connection management code. Running your services in one data center with your database in another is just asking for trouble. So now I have to move the primary database as well as the tiny session state database used in a couple of our web applications.
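(To give an idea of what I mean by connection management code, here is a minimal retry sketch – the helper name, retry count, and delays are illustrative, not our production code.)

private SqlConnection OpenWithRetries(string connectionString, int maxAttempts)
{
  for (int attempt = 1; ; attempt++)
  {
    try
    {
      //SqlConnection lives in System.Data.SqlClient
      SqlConnection connection = new SqlConnection(connectionString);
      connection.Open();
      return connection;
    }
    catch (SqlException)
    {
      //transient WASD failures (throttling, failover) often succeed on retry
      if (attempt >= maxAttempts)
      {
        throw;
      }
      System.Threading.Thread.Sleep(TimeSpan.FromSeconds(2 * attempt));
    }
  }
}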

Although we have somewhere around 15 services running in Azure, only four of them access the primary database. These are the services for our primary application (PointAcross, used to make messages), and the Players used to play those messages.

Some investigation revealed several ways to migrate the data.

1. Copy the database from one place to another.
2. Use the SQL Azure Migration Wizard on CodePlex.
3. Use the bacpac feature to back up a WASD to blob storage, then to restore it to the new database server.
4. Use the wrapper that Redgate has written around the bacpac capability that makes it very easy to backup/restore a WASD from/to blob storage.

1. Copy the database

This is pretty simple. You can make a copy of your database using T-SQL. It’s as simple as logging into SQL Server Management Studio, connecting to the destination server, and running this command against the master database.

create database MyDatabaseBkp as copy of myserver.MyDatabase

After you do that, you will probably get back a message telling you it’s finished. What this means is that the query is finished, not the copy! It’s doing the copy in the background. There are two queries you can run against the system database to check the status of the database copy.

select * from sys.databases

This one shows you all of the databases and their status. Look at the [state_desc] column. It will say “ONLINE” if the copy is finished, or “COPYING” if it’s not.

The second command only shows running jobs, and will show a % completion in the [percent_complete] column. I read somewhere that this will update every 5 minutes, but don’t bet the farm on it. Mine went to 24%, stayed a long time, then jumped to 65%, and then finished only a few minutes later. So take it as an indication that something is happening, but don’t assume it’s definitive. Here’s the second command:

select * from sys.dm_database_copies

Copying a database is fairly easy, and works great. Our database is about 8 GB, and it took 10-15 minutes to run. Don’t forget that you have to pay for the second database!

There is a major gotcha with this option: it only works if the source and destination are within the same data center. Since we’re moving our database from US North Central to US West, this won’t work for us.

For more information about this feature, check out this article.

2. Use the SQL Azure Migration Wizard

This application scripts all of the objects in the database and extracts the data to your local hard drive using BCP. Then it runs the script on the destination server and uploads the data to the new database. This means you end up downloading all of the data to your local machine, and then uploading it to the new server. I didn’t really want to download 8 GB of data and then wait for it to upload again. If you have all the time in the world, this will probably work for you. We used it when we initially migrated our infrastructure to Windows Azure, and we still use it occasionally to migrate new tables and other objects developed locally, but I didn’t want to have our systems down for as long as it would take to run this in production. For more information, check it out on CodePlex.

3. Use the bacpac feature

This is a good choice. Back up the database in US North Central to blob storage, then restore it to the new server in the US West region. For more information, check out this article. I would have stopped and figured out how to do this, but I found something easier.

4. Use the Redgate Cloud Services tool

It turns out that Redgate has written an application to let you backup your Windows Azure SQL Database to blob storage (or Amazon storage), and to restore it from storage back to Windows Azure. This looks a lot like a wrapper around the bacpac function to me. To check it out, click here.  This is what I used to move our primary database.

After creating an account, you can set up a job. The screen looks like this:

Choose to back up a SQL Azure database to Azure and you will see this screen:

On the left, fill in the Azure server, User name, and Password, and click the Test button attached to the Password field. If you got the information right, this will populate the list of databases, and you can select the one you want to back up.

Note the checkbox about creating a temporary copy. When I used this application, I created my own copy of the database using the Copy Database command explained in option 1, and then used this to back up the copy. This ensured that I didn’t have any conflicts or problems from the database being accessed while it was being backed up. We actually shut down all of the client applications while I did this, so that shouldn’t have been a problem, but I figured better safe than sorry. If you don’t want to create your own copy, you can check the box and this application will do it for you. Note that you still have to pay for that extra database just as if you had done it yourself.

On the right side, fill in the storage account name and access key and click Test. If you got the information right, it will populate the list of containers. Select the one you want to put your database backup in, and specify the file name of the backup. This can be anything you want to call it; it is not the name that will be used for the database when you restore it – it’s just a file name.

After getting everything the way you want it, click Continue and you will see this screen:

Now you can schedule the job to be run later, or you can just run it now. I chose to run it now, but it’s cool that Redgate offers this job scheduling feature – this means you can use this tool to make regular backups of your database to your blob storage account. It would be nice if they would let you put a date marker in the file name so you can keep multiple copies, but I don’t see any way to do that right now.

First it shows the job getting ready to start.

It will show progress as it’s running.

When it’s finished, it sends out an e-mail (unless you have turned the notifications off), and the FINISH TIME will show the date/time that it finished.

For my sample run, the text is showing up in red. That’s because this one failed. It wouldn’t back up our ASPState database. The error said, “Microsoft Backup Service returned an error: Error encountered during the service operation. Validation of the schema model for data package failed.” (It said a lot more, too, but I’ll spare you.)

I didn’t know how to fix it or what the problem was, so for this tiny database, I used the SQL Azure Migration Wizard to move it into production.

The Redgate Cloud Services tool worked great for our production database, which is around 8 GB. I did a dry run before running this in production: the backup dry run, around 9 p.m. PDT on a Monday night, took about an hour. The production backup, on a Friday night around 10:30 p.m. PDT, took 20 minutes.

So now my database is backed up.

After you back it up, you can restore it to any of your WASD servers in any data center. Here’s what the screen looks like in the Redgate Cloud Services.

This works just like the backup – fill in the information (don’t forget to click the Test buttons to make sure the other boxes are populated) and schedule a job or run it now.

Note that you specify the database name on the SQL Azure side. The database does not have to exist; it will be created for you. If it does exist, and you want to replace it, there’s a checkbox for that.

We did a dry run and a production run on the restore. The dry run, on a Monday night at 11:35 p.m. PDT, took 50 minutes. The production run, on a Friday night at 11 p.m. PDT, took 44 minutes. Much, much faster than using the SQL Azure Migration Wizard would have been.

Now what?

After I moved the database, I had to update the Azure Service configurations for the four services that access the database, and change the connection strings to point to the new database. Awesome Jack (our Sr. Systems Engineer who also does our builds) published them and we tried them out. After ensuring everything worked, Jack re-enabled access to all of the client applications, and we declared the Rebranding release completed.

One issue

We did have one issue. We have an account in our Windows Azure SQL Database, used by the services, that has only EXEC rights. This ensures that nobody puts dynamic SQL in a service call, and that if someone gets a copy of our Azure configuration, they can only execute stored procedures – they can’t log in and query our database. We had to set this up again because the login is defined in the master database. So we had to set it up on the target server, then run the setup queries against the restored database in order for the account to work. Not a big deal, but it’s a good thing we remembered to test this.
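In case it helps anyone else, re-creating that account boils down to a pattern like this (the names and password are placeholders, not our actual account):

-- run against the master database on the new server (names/password are placeholders)
CREATE LOGIN ServicesLogin WITH PASSWORD = 'SomeStrongPasswordHere';

-- run against the restored database
CREATE USER ServicesUser FOR LOGIN ServicesLogin;
GRANT EXECUTE TO ServicesUser;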

Summary

There are several ways to copy a Windows Azure SQL Database from one server to another. I used the Redgate Cloud Services tools, which I think are a wrapper around the bacpac feature. They were very easy to use, and worked perfectly for our production database. For our tiny session state database, we used the SQL Azure Migration Wizard. Our database is now moved, and the rebranding/upgrade project has been finished successfully.

Moving Azure blob storage for the Rebranding/AzureSDK2.0 Project

July 13, 2013

In my last post, I talked about the work required to upgrade all of the cloud services to SDK 2.0 and rebrand “goldmail” as “pointacross” in the code. After all these changes were completed and turned over to QA, the real difficulties began – moving the data. Let’s talk about moving blob storage.

Storage accounts and data involved

We set up new storage accounts in US West with “pointacross” in the name, to replace the “goldmail” storage in US North Central. We decided to leave the huge amounts of diagnostics data behind, because we can always retrieve it later if we need to, but it’s not worth the trouble of moving it just to have it in the same Windows Azure storage tables and blob storage as the new diagnostics. So here’s what we had to move:

GoldMail main storage (includes the content libraries and diagnostics)
GoldMail Cloud Composer storage
GoldMail Saved Projects

When you use the PointAcross application (previously known as the Cloud Composer), you upload images and record audio. We store those assets, along with some other application information, in blob storage. This yields many benefits, but primarily it means you can work on a PointAcross project logged in from one computer, then go somewhere completely different and log in, and all of your assets are still there. You can also save your project after you publish your message, or while working on it – these go into the Saved Projects storage. Those two storage accounts have a container for each customer who has used that application or feature. Fortunately, we’re just in the beginning phases of releasing the PointAcross project widely, so there are only about a thousand containers in each storage account.
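To give a rough idea of the layout – the container naming and blob names below are hypothetical, not our actual scheme – each customer’s assets end up in their own container, something like this:

//hedged sketch: the naming convention and blob names are made up for illustration
private void UploadProjectAsset(string projectsConnectionString, string customerId,
  string blobName, Stream assetStream)
{
  CloudStorageAccount account = CloudStorageAccount.Parse(projectsConnectionString);
  CloudBlobClient blobClient = account.CreateCloudBlobClient();

  //one container per customer (container names must be lowercase)
  CloudBlobContainer container = blobClient.GetContainerReference("customer-" + customerId.ToLower());
  container.CreateIfNotExists();

  //images, audio, and project information go in as block blobs
  CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
  blob.UploadFromStream(assetStream);
}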

The main storage includes a bunch of data we use here and there, and the content library assets. There are a lot of files, but only about 20 containers.

Option 1: Using AzCopy

So how do we move this data? The first thing we looked at is AZCopy, from the Windows Azure Storage team. This is a handy-dandy utility, and we used it to test migrating the data from one storage account to the other. You run it from the command window. Here’s the format of the command:

AzCopy [1]/[2] [3]/[4] /sourcekey:[5] /destkey:[6] /S

    [1] is the URL to the source storage account, like http://mystorage.blob.core.windows.net
    [2] is the container name to copy from
    [3] is the URL to the target storage account, like http://mynewstorage.blob.core.windows.net
    [4] is the container name to copy to
    [5] is the key to the source storage account, either primary or secondary
    [6] is the key to the destination storage account, either primary or secondary
    /S means to do a recursive copy, i.e. include all folders in the container.

I set up a BAT file called DoTheCopy and substituted %1 for the container name in both places, so I could pass it in as a parameter. (For those of you who are too young to know what a BAT file is, it’s just a text file with a file extension of .bat that you can run from the command line.) This BAT file had two lines and looked like this (I’ve chopped off the keys in the interest of space):

ECHO ON
AzCopy http://mystorage.blob.core.windows.net/%1 http://mynewstorage.blob.core.windows.net/%1

I called this for each container:

E:\AzCopy> DoTheCopy containerName

I got tired of this after doing it twice (I hate repetition), so I set up another BAT file to call the first one repeatedly; its contents looked like this:

DoTheCopy container1
DoTheCopy container2
DoTheCopy container3

The AZCopy application worked really well, but there are a couple of gotchas. First, when it sets up the container in the target account, it makes it private. So if you want the container to be private but the blobs inside to be public, you have to change that manually yourself. You can change it programmatically or using an excellent tool such as Cerebrata’s Azure Management Studio.

The second problem is that those other two storage accounts have over a thousand containers each. So now I either have to type all of those container names in (not bloody likely!), or figure out a way to get a list of them. So I wrote a program to get the list of container names and generate the BAT file. This creates a generic list full of the command lines, converts it to an array, and writes it to a BAT file.

//sourceConnectionString is the connection string pointing to the source storage account
CloudStorageAccount sourceCloudStorageAccount = 
    CloudStorageAccount.Parse(sourceConnectionString);
CloudBlobClient sourceCloudBlobClient = sourceCloudStorageAccount.CreateCloudBlobClient();
List<string> outputLines = new List<string>();
IEnumerable<CloudBlobContainer> containers = sourceCloudBlobClient.ListContainers();
foreach (CloudBlobContainer oneContainer in containers)
{
    string outputLine = string.Format("DoTheCopy {0}", oneContainer.Name);
    outputLines.Add(outputLine);
}
string[] outputText = outputLines.ToArray();
File.WriteAllLines(@"E:\AzCopy\MoveUserCache.bat", outputText);

That’s all fine and dandy, but what about my container permissions? So I wrote a program to run after the data was moved. This iterates through the containers and sets the permissions on every one of them. If you want any of them to be private, you have to hardcode the exceptions, or fix them after running this.

private string SetPermissionsOnContainers(string dataConnectionString)
{
  string errorMessage = string.Empty;
  string containerName = string.Empty;
  try
  {
    CloudStorageAccount dataCloudStorageAccount = CloudStorageAccount.Parse(dataConnectionString);
    CloudBlobClient dataCloudBlobClient = dataCloudStorageAccount.CreateCloudBlobClient();

    int i = 0;

    IEnumerable<CloudBlobContainer> containers = dataCloudBlobClient.ListContainers();
    foreach (CloudBlobContainer oneContainer in containers)
    {
      i++;
      containerName = oneContainer.Name;
      System.Diagnostics.Debug.Print("Processing container #{0} called {1}", i, containerName);

      CloudBlobContainer dataContainer = dataCloudBlobClient.GetContainerReference(containerName);
      //set permissions
      BlobContainerPermissions permissions = new BlobContainerPermissions();
      permissions.PublicAccess = BlobContainerPublicAccessType.Blob;
      dataContainer.SetPermissions(permissions);
    }
  }
  catch (Exception ex)
  {
    errorMessage = string.Format("Exception thrown trying to change permission on container '{0}' "
        + "= {1}, inner exception = {2}",
      containerName, ex.ToString(), ex.InnerException.ToString());
  }
  return errorMessage;
}

Option 2: Write my own solution

Ultimately, I decided not to use AzCopy. By the time I’d written this much code, I realized it was just as easy to write my own code to move all of the containers from one storage account to another, set the permissions as it iterated through the containers, and add trace logging so I could see the progress. I could also hardcode exclusions if I wanted to. Here’s the code for iterating through the containers. When getting the list of containers, if it is the main account, I only want to move specific containers, because I moved some static ones ahead of time. So for that case, I just set up a list of the container names I want to process. For the other accounts, it retrieves a list of all containers and processes all of them.

private string CopyContainers(string sourceConnectionString, string targetConnectionString, 
  string accountName)
{
  string errorMessage = string.Empty;
  string containerName = string.Empty;
  try 
  {
    CloudStorageAccount sourceCloudStorageAccount = CloudStorageAccount.Parse(sourceConnectionString);
    CloudBlobClient sourceCloudBlobClient = sourceCloudStorageAccount.CreateCloudBlobClient();
    CloudStorageAccount targetCloudStorageAccount = CloudStorageAccount.Parse(targetConnectionString);
    CloudBlobClient targetCloudBlobClient = targetCloudStorageAccount.CreateCloudBlobClient();

    int i = 0;
    List<string> containersToDo = new List<string>();
    if (accountName == "mainaccount")
    {
      containersToDo.Add("container1");
      containersToDo.Add("container2");
      containersToDo.Add("container3");

      foreach (string oneContainer in containersToDo)
      {
        i++;
        containerName = oneContainer;
        System.Diagnostics.Debug.Print("Processing container #{0} called {1}", i, containerName);
        MoveBlobsInContainer(containerName, accountName, targetCloudBlobClient, sourceCloudBlobClient);
      }
    }
    else
    {
      IEnumerable<CloudBlobContainer> containers = sourceCloudBlobClient.ListContainers();                    
      foreach (CloudBlobContainer oneContainer in containers)
      {
        i++;
        containerName = oneContainer.Name;
        System.Diagnostics.Debug.Print("Processing container #{0} called {1}", i, containerName);
        MoveBlobsInContainer(containerName, accountName, targetCloudBlobClient, sourceCloudBlobClient);
      }
    }
  }
  catch (Exception ex)
  {
    errorMessage = string.Format("Exception thrown trying to move files for account '{0}', " +
      "container '{1}' = {2}, inner exception = {3}",
      accountName, containerName, ex.ToString(), ex.InnerException.ToString());
  }
  return errorMessage;
}

And here’s the code that actually moves the blobs from the source container to the destination container.

private string MoveBlobsInContainer(string containerName, string accountName, 
  CloudBlobClient targetCloudBlobClient, CloudBlobClient sourceCloudBlobClient)
{
  string errorMessage = string.Empty;
  try
  {
    long totCount = 0;
    //first, get a reference to the container in the target account, 
    //  create it if needed, and set the permissions on it 
    CloudBlobContainer targetContainer = 
      targetCloudBlobClient.GetContainerReference(containerName);
    targetContainer.CreateIfNotExists();
    //set permissions
    BlobContainerPermissions permissions = new BlobContainerPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Blob;
    targetContainer.SetPermissions(permissions);

    //get list of files in sourceContainer, flat list
    CloudBlobContainer sourceContainer = 
      sourceCloudBlobClient.GetContainerReference(containerName);
    foreach (IListBlobItem item in sourceContainer.ListBlobs(null, 
      true, BlobListingDetails.All))
    {
      totCount++;
      System.Diagnostics.Debug.Print("Copying container {0}/blob #{1} with url {2}", 
        containerName, totCount, item.Uri.AbsoluteUri);
      CloudBlockBlob sourceBlob = sourceContainer.GetBlockBlobReference(item.Uri.AbsoluteUri);
      CloudBlockBlob targetBlob = targetContainer.GetBlockBlobReference(sourceBlob.Name);
      targetBlob.StartCopyFromBlob(sourceBlob);
    }
  }
  catch (Exception ex)
  {
    errorMessage = string.Format("Exception thrown trying to move files for account '{0}', "
      + "container '{1}' = {2}, inner exception = {3}",
        accountName, containerName, ex.ToString(), ex.InnerException.ToString());
  }
  return errorMessage;
}

You can hook this up to a fancy UI and run it in a background worker and pass progress back to the UI, but I didn’t want to spend that much time on it. I created a Windows Forms app with one button. When I clicked the button, it ran some code that set the connection strings and called CopyContainers for each storage account.
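For what it’s worth, the click handler amounted to something like this (the account names and keys below are placeholders, not our real ones):

private void btnCopyAll_Click(object sender, EventArgs e)
{
  //placeholder connection strings
  string mainSource = "DefaultEndpointsProtocol=https;AccountName=goldmailmain;AccountKey=[key]";
  string mainTarget = "DefaultEndpointsProtocol=https;AccountName=pointacrossmain;AccountKey=[key]";

  string errorMessage = CopyContainers(mainSource, mainTarget, "mainaccount");
  if (!string.IsNullOrEmpty(errorMessage))
  {
    MessageBox.Show(errorMessage);
    return;
  }

  //...and the same call again for the Composer and Saved Projects storage accounts
}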

Did it work?

When we put all of the services in production – as our Sr. Systems Engineer, Jack Chen, published the services to PointAcross production – I ran this to move the data from the goldmail storage accounts to the pointacross storage accounts. It worked perfectly. The only thing left at this point is moving the Windows Azure SQL Databases (the database previously known as SQL Azure).

Rebranding and Upgrading to Azure SDK 2.0 — Details, details

July 13, 2013

As I discussed in my last post, we at GoldMail are rebranding our company and services to PointAcross, and updating everything to SDK/Tools 2.0 at the same time. (No reason to do full regression testing twice. Plus, no guts, no glory!)

Setting up the Azure bits

For the rebranding, I decided to create all new services and storage accounts with “pointacross” in the name instead of “goldmail”. Yes, we could have CNAMEd the goldmail*.cloudapp.net URLs as pointacross, but there are several benefits to creating new services. For one thing, this removes any confusion about the services on the part of us in the tech department. Also, we can run two production systems in parallel until the DNS entries for the goldmail services redirect appropriately to the pointacross URLs.

Another issue we have is our services are currently located in the US North Central data center, which is very full. I can’t create VMs, and new Azure subscribers can’t set up services there. US North and South Central were the first two data centers in the US, so the hardware is older as well. At the rate my CEO is going, it seems likely that he will generate enough business that we will need to scale out in the next few months, and I was concerned about being able to do that with our services based in North Central. I don’t know if that’s a valid concern, but I figured better safe than sorry.

So I set up a new affinity group in US West, and created all of the new services and storage accounts there. I also took advantage of this opportunity to create a storage account just for diagnostics. We don’t use our main storage account for a lot of other things, but separating diagnostics into its own storage account is always advised, and this was a good time to take care of it.

Our Sr. systems engineer, Jack Chen, set up all the DNS entries for everything, and I set to work on updating the SDK and doing the rebranding.

Updating the SDK version

The next order of business was to update everything to Azure SDK 2.0. I downloaded and installed all of the updates, and installed the tools for Visual Studio 2010. 

Azure SDK/Tools 2.0 runs side-by-side with 1.8 just fine. You can open solutions that have cloud projects targeting 1.8 and have no problem. However, here’s something you need to know: Once you install SDK/Tools 2.0, you can no longer create a NEW cloud project targeting 1.8. I installed this SDK just to check out the impact of the breaking changes in the Storage Client Library, and when I needed to add a cloud project to an existing (SDK 1.8) solution, there was no way to tell it to target anything except SDK 2.0. So if you need to add a new cloud project and the rest of the projects in that solution are 1.8 or before, you have to uninstall SDK 2.0 in order to create your cloud project.

In the screenshots below, I am using VS2010. We haven’t upgraded to VS2012 because we are always working like wildfire on the next release, and the TFS Pending Changes window was just a pain in the butt we didn’t want to deal with yet. Procrastination worked in my favor this time (that’s a first!) – they have changed the Pending Changes window in VS2013, but we can’t use that because they haven’t provided Azure Tools for the VS 2013 Preview yet. Argh!

So how do you update a current solution? Right-click on each cloud project in the solution and select Properties. You should see this:

Click the button to upgrade to SDK 2.0. You will be led through a wizard to do the upgrade – it asks if you want to back up the current project first, and offers to show you the conversion log.

We have multiple cloud projects in each solution – one for staging, one for production, and one for developers. (Click here to read why.) So we had to convert each project.

The next thing to do is update the NuGet packages. You can right-click on the Solution and select “Manage NuGet packages for solution”, or do it one project at a time. I did mine one project at a time for no particular reason other than wanting to be methodical about it (and being a little anal-retentive). You will be prompted with a list of packages that can be updated.

For this exercise, you need to update Windows Azure Storage and the Windows Azure Configuration Manager. When you do this, it updates the references in the project(s), but doesn’t change any code or using statements you may have. Try doing a build and see what’s broken. (F5 – most people’s definition of “unit test”.)

Handling breaking changes

For us, since we were still using Storage Client Library 1.7, there were a number of things I had to fix.

1. I configure our diagnostics programmatically. To do this in 1.7 and before, I grab an instance of the storage account in order to get an instance of the RoleInstanceDiagnosticManager. Here is the old code.

string wadConnectionString = "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString";   
CloudStorageAccount storageAccount = 
    CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(wadConnectionString));
RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = 
    storageAccount.CreateRoleInstanceDiagnosticManager(
    RoleEnvironment.DeploymentId, 
    RoleEnvironment.CurrentRoleInstance.Role.Name, 
    RoleEnvironment.CurrentRoleInstance.Id);

They have removed this dependency, so I had to change this code to instantiate a new instance of the diagnostic manager and pass in the connection string to the storage account used for diagnostics. Here is the new code.

string wadConnectionString = RoleEnvironment.GetConfigurationSettingValue
    ("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");
RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = 
    new RoleInstanceDiagnosticManager(
    wadConnectionString,
    RoleEnvironment.DeploymentId, 
    RoleEnvironment.CurrentRoleInstance.Role.Name, 
    RoleEnvironment.CurrentRoleInstance.Id);
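For context, once you have the RoleInstanceDiagnosticManager (in either version), you use it to read and write the diagnostic configuration. A rough sketch – the transfer period and log level below are example values, not our actual settings:

//example values only
DiagnosticMonitorConfiguration diagConfig =
    roleInstanceDiagnosticManager.GetCurrentConfiguration()
    ?? DiagnosticMonitor.GetDefaultInitialConfiguration();

diagConfig.Logs.ScheduledTransferLogLevelFilter = LogLevel.Information;
diagConfig.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

roleInstanceDiagnosticManager.SetCurrentConfiguration(diagConfig);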

2. In the startup of our web roles, we have an exception handler that writes any startup problems to blob storage (because it can’t write to diagnostics at that point). This code looks like this:

CloudStorageAccount storageAccount = 
    CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
var container = blobStorage.GetContainerReference("errors");
container.CreateIfNotExist();
container.GetBlobReference(string.Format("error-{0}-{1}",
    RoleEnvironment.CurrentRoleInstance.Id, DateTime.UtcNow.Ticks)).
    UploadText("Worker Role Startup Exception = " + ex.ToString());

They changed CreateIfNotExist() to CreateIfNotExists(), and you now have to specify the type of blob used, so when I get the reference to the blob, I have to call GetBlockBlobReference. Also, UploadText has been removed. More on that in a minute. This code becomes the following:

CloudStorageAccount storageAccount = 
    CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
var container = blobStorage.GetContainerReference("errors");
container.CreateIfNotExists();
container.GetBlockBlobReference(string.Format("error-{0}-{1}",
    RoleEnvironment.CurrentRoleInstance.Id, DateTime.UtcNow.Ticks)).
    UploadText("Worker Role Startup Exception = " + ex.ToString());

3. As noted above, you have to change the more generic CloudBlob, etc., to specify the type of blob. So I changed all occurrences of CloudBlob to CloudBlockBlob and GetBlobReference to GetBlockBlobReference.

4. I had a method that checked to see if a blob existed by fetching the attributes and checking the exception thrown. They added Exists() as a method for the blobs, so I’ve replaced all uses of my method with blob.Exists() and removed the method entirely.
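For the record, the old helper looked roughly like this – the helper name and the exact 1.7 exception handling are from memory, so treat it as a sketch:

//roughly the old 1.7-era check; with 2.0 this whole method is replaced by blob.Exists()
private bool BlobExists(CloudBlob blob)
{
    try
    {
        blob.FetchAttributes();
        return true;
    }
    catch (StorageClientException ex)
    {
        if (ex.ErrorCode == StorageErrorCode.ResourceNotFound)
        {
            return false;
        }
        throw;
    }
}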

5. Now let’s talk about Upload/Download Text, Upload/Download File, and Upload/Download ByteArray. They removed these methods from the CloudBlockBlob class, and now only support Upload/Download Stream. So you can rewrite all your code, or you can get the CloudBlobExtensions written by Maarten Balliauw. I can’t find my link to his, so I’ve posted a copy of them here. Just change the namespace to match yours, and voila!
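If you’d rather roll your own than grab a copy, extensions along these lines do the job – this is my own minimal sketch using the supported stream methods, not Maarten’s actual code (it needs System.IO, System.Text, and Microsoft.WindowsAzure.Storage.Blob):

public static class CloudBlobExtensions
{
    public static void UploadText(this CloudBlockBlob blob, string content)
    {
        //wrap the string in a stream and use the supported UploadFromStream
        using (MemoryStream stream = new MemoryStream(Encoding.UTF8.GetBytes(content)))
        {
            blob.UploadFromStream(stream);
        }
    }

    public static string DownloadText(this CloudBlockBlob blob)
    {
        using (MemoryStream stream = new MemoryStream())
        {
            blob.DownloadToStream(stream);
            return Encoding.UTF8.GetString(stream.ToArray());
        }
    }
}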

6. I had to add a using statement for Microsoft.WindowsAzure.Storage.Blob everywhere I use blobs, and the corresponding one for queues where I use queues. I had to add a using statement for Microsoft.WindowsAzure.Storage anywhere I was accessing the CloudStorageAccount. Basically, I had to make sure I had using clauses for the new namespaces wherever I was using them.

7. I also used the “Organize Usings/Remove and Sort” context menu option to clean up the using clauses in every class I changed. This removed the old WindowsAzure.StorageClient.

That was the bulk of the work for updating the Storage Client Library. Once I got the list from going through the first application, doing the others was fairly simple, as I knew what I was looking for.

Gaurav Mantri (an amazing Windows Azure MVP who is always very helpful) has a very good blog series about updating the Storage Client Library, discussing blobs, queues, and table storage.

After I fixed all the breaking changes, I made the rebranding changes. In some cases, this was as easy as just changing “goldmail.com” to “pointacross.com”, but I also had to search the entire code base for “goldmail” and decide how to change each one of them, especially in the case of error messages returned to the client applications.

Every occurrence of “goldmail” had to be assessed, and I had to make sure any “secondary references” were updated. For example, the desktop application (still called GoldMail) has some content hosted in web controls that is located on our website, so I had to be sure those bits were updated in the website. And finally, I updated the storage keys and storage account names in the Azure configurations, and any URLs or occurrences of “goldmail” that I found there.

RDP, SSL, and HTTPS

We purchased a new SSL certificate for pointacross.com, which I uploaded to all of the services for RDP access, and for the HTTPS services. Then I went through and updated the certificate thumbprints specified in the cloud projects. (I have never managed to use the “browse” option for certificates in the Visual Studio UI – it can never seem to find the certificate in my certificate store on my machine, so I update the thumbprints in the Azure configuration, which works just fine.)

After doing this, I right-clicked on each cloud project and selected Manage RDP Connections, then put the password in again. I didn’t do this with the first service, and we couldn’t RDP into it. Oops! I suspect it uses the certificate to encrypt the RDP information and store it in the Azure configuration, and putting it in again after changing the certificate forces it to re-encrypt with the new certificate.

And finally, we set up new publishing profiles and published everything to the new PointAcross staging services, and turned the whole thing over to QA.

Once more, unto the breach. –Shakespeare, Henry V

After everything was tested and checked, we had a release meeting – at night, to minimize customer disruption. We shut down access to our client applications, and then published the new cloud services to production. We also moved the data in the storage accounts. After we tried everything out, we redirected the goldmail DNS entries that were likely to be “out in the wild” to the pointacross services, and deleted the rest of them. After a few days went by, we shut down and deleted the goldmail services, and then removed them from our Azure subscription. We are now rebranded and updated.

In my next post, I’ll talk about how I moved the cloud storage from the goldmail storage accounts to the pointacross storage accounts.

Rebranding GoldMail and keeping up with Azure SDK updates

June 30, 2013

I am VP of Technology for a small ISV called GoldMail; all of our applications run on Windows Azure. We are rebranding – changing from “GoldMail” to “PointAcross” – and we will change the company name at some point as well. People think “GoldMail” is an e-mail product (it isn’t), and PointAcross (as in, we help you get your point across) is a more focused brand. We are changing everything except our desktop product, which will remain GoldMail, and will eventually be deprecated. We have been working on a web version of our desktop product for quite some time now, and we’re calling it PointAcross. With our web application, we can accommodate both Windows and Mac users. We’ve also simplified the usability.

Our infrastructure – over a dozen cloud services – runs in Windows Azure. With each update, we have to update the SDK in every project, do regression testing, and then release them to production. Our product roadmap is packed; we are pushing out releases as fast as we can, with some big product features coming in the next few months. The trick is to minimize disruption by combining the SDK updates with new releases. This works pretty well, although some of our products have faster release cycles than others, so not everything gets updated at the same time.

As you know if you’ve read my post about our implementation of Windows Azure, I am a glutton for punishment, so I’ve committed to rebranding everything and upgrading everything to SDK 2.0 in the same release. We updated to SDK 1.8 a few weeks ago, but we are still targeting Storage Client Library 1.7, so we have breaking changes to contend with. Did I mention I committed to completing all of those changes in two weeks?

Now, there are two ways to do a rebranding project of this size. The easy way is to change all the public-facing URLs by just adding DNS entries that point to the goldmail services, storage accounts, etc., using pointacross in the names. This leads to major confusion later, and makes it difficult to get rid of any goldmail artifacts down the line. Plus, if we add more services, do we brand them as goldmail internally (to be consistent), or switch to pointacross?

The harder way is to set up all new services and storage accounts in Windows Azure, and add new DNS entries for them. If we do it this way, we have to change all of the URLs in the configurations (and code) for all of the services, and all of the storage account names and keys we have in the Azure service configurations. But in the end, we end up with an infrastructure that is fully branded with the new name, and there won’t be any slip-ups where the old name will be displayed. Also, we can set up the new services, publish everything, and run the two production systems in parallel, then cut over when we’re satisfied that everything is working correctly. After cutting over, we change any goldmail DNS entries that might be used “out in the wild” (like in HTML pointing to our embedded player) to point to the PointAcross addresses.

Of course, I’ve chosen the more difficult way of rebranding. I think of it as “the right way” – I hate to do anything halfway, and we’ll never have time to go back and rebrand internally, so if we don’t do it now, we’ll end up with goldmail services forever. I’m also taking advantage of this opportunity to move all of our services and storage accounts from the North Central data center to the US West data center. I have several reasons for doing this.

First and foremost, the North Central data center is full. If you don’t already have services running in North Central, you can’t select it as an option. This also means that we can’t create VMs and use IAAS in the same location as our other services, because they don’t have that feature enabled for North Central. My second reason is that the newer data centers have newer, faster hard drives, so our applications will gain performance with no effort on our part. Well, no effort except for the huge effort of moving everything, but that’s a one-time hit, versus the performance gain for our customers.

The other thing I will do is add a new storage account for diagnostics. When we started, we only used one storage account, because our use of Azure Blob Storage was almost non-existent. We have added some usage for blob storage, so it makes sense to use our main storage account for that, and create a new storage account for the diagnostics. Also, we have 2-1/2 years of diagnostics in the storage account that we don’t need to be retaining and paying for, and we can let it go when we delete the old storage account some time down the line.

In my next post, I’ll talk more about this project and how we’re going to move data from one storage account to another, as well as any issues I hit upgrading to SDK 2.0. In the meantime, if you have any advice or comments, please leave them in the comments section!

3/8/2014 GoldMail is now doing business as PointAcross.

Problem debugging WCF service in Azure compute emulator

June 30, 2013

I’m using VS2010 SP-1 with all updates installed, Azure SDK 1.8, and Storage Client Library 1.7. Storage Client Library 2.0 had breaking changes in it, and it’s going to take a few days to update and test all of our services, so we’ve been waiting for SDK 2.0 (which came out not too long ago).

I want to talk about the problems I’m having running my WCF services in the compute emulator. I’m curious to see if anybody else is having these kinds of problems, and wanted to see if upgrading to SDK 2.0 fixes the problems.

First, I have a startup task in my service that registers some DLLs. To run my WCF services in the Compute Emulator, I have to comment out that startup task in the csdef file. I also change the number of instances of the web and worker roles to 1, to make it run faster. I change the web.config to have debug=”true” so I can debug the code. I hit F5.

I am running the staging Azure cloud project in my solution, so this runs against blob storage and the SQL database in Azure rather than on the local machine – that complication is removed. When I hit F5 to run the application in the compute emulator, I get this:

This is fairly new. I’ve been having problems for several months, but this is a recent addition. I also see this message when running any of our web applications in the compute emulator. I do have “Launch browser for endpoint” checked, so it does open IE. I think I started seeing this message when I updated to IE10, but I couldn’t swear to it.

I click Okay. Now my service is running in the local compute emulator, and I can run my client that calls it and test the calls and stop it at the breakpoints and look at what’s going on, etc. I hit Stop in Visual Studio to stop the service so I can make some changes.

I make changes and hit F5 to run the service in the compute emulator again.  At some point (sometimes it’s the first time, sometimes it’s after a couple of iterations of changes), I get this error:

It worked just fine a minute ago, so I know the file is right. And once I get this error, the service won’t run right, and I can’t debug into it. If I hit Continue (just for grins), the dialog goes away and then pops up again. If I hit OK, it closes the dialog, but doesn’t stop the service – I have to do that in VS. If I stop it and try to run it again, I get that error again. To get it to run again in the compute emulator, I have to clean the solution, wait a few seconds, and then I can run it.

At this point, I can no longer make changes, do a build to make sure it compiles, and then run it. From this point onward, I have to clean the solution and just run it. I have this happening on both of my development machines, so if it’s something about my setup, it’s on both of them.

In case you’re wondering, I did consider upgrading to VS2012, but the “updates” to the Pending Changes window just add one more layer of difficulty to our work, so my team has decided to wait for VS2013, where they have responded to that feedback from me (and many, many others).

Next: I’m upgrading to SDK and Tools 2.0, and am interested to see if the problem is going to be fixed. If you’ve had this problem and have managed to fix it, please post your solution in the comments.

Fun stuff to do if you’re in San Francisco for the BUILD conference

June 21, 2013

Is there any tourist-y stuff to do in San Francisco?

I have to start by saying what’s fun for some people will not be fun for everyone. I’m not going to repeat all the San Francisco treats (such as Rice-a-Roni) for you; that’s what guidebooks are for. Everyone knows about Fisherman’s Wharf, Pier 39, Alcatraz, the Golden Gate Park, the California Academy of Sciences at the park, and of course the famous Golden Gate Bridge (the best view of which is from the north side of the bridge, from the battlements in the Marin Headlands). For people who like to shop, the Westfield Center is on Market and Powell St, and Union Square is two blocks up Powell.

There’s also the Letterman Center for the Digital Arts in the Presidio, home to Lucasfilm and ILM. (Take your picture with the Yoda fountain!) If you have a car, you can drive north on 101, take the Lucas Valley Drive exit, and go west to Nicasio to drive by the entrance to Skywalker Ranch. (I’m not posting the street number here – I don’t want the double-L’s coming after me (the Lucasfilm Lawyers).) (Did you notice I closed all of my parentheses correctly? Good to know that learning LISP was relevant to my life.) Oh, by the way, you can’t see anything from the entrance, and they have a lot of security cameras, so don’t try climbing over the fence and running for the main house. (Don’t ask.)

Any tech events going on around \\build?

So let’s talk about fun things to do if you’re a tech person coming to \\build – you know, the parties, the events happening at the same time, where you can see people you haven’t seen since the last conference or MVP summit? Here’s a list I managed to cobble together. If you know of any I’ve missed, please add them in the comments so I can go to them, too. Be sure to check the event pages themselves in case there are any changes after I post this.

Monday, 21 June

  • Microsoft .NET client-side development futures / panel discussion. Microsoft offices, SF, 6:30 p.m.
    Discuss Microsoft .NET client-side development and the future thereof with Laurent Bugnion and Ward Bell (both are Silverlight MVPs). There will also be 1 or 2 guys from Xamarin joining in. More info here.
  • Preparing applications for production environments. Microsoft offices, Mountain View, 6:30 p.m.
    You need a car to get from SF to this meetup in Silicon Valley about preparing applications for production environments. More info here.

Tuesday, 22 June

  • Vittorio Bertocci speaking about Identity/AD/ACS and Windows Azure. Microsoft offices, SF, 6:30 p.m. 
    Come see Vittorio Bertocci, a superstar from Microsoft who is the expert in Identity/AD/ACS in Windows Azure! He’s always entertaining and informative, and great at answering questions. This event is kindly being sponsored by AppDynamics, so there will be pizza and drinks; please sign up ahead of time so we get enough pizza! More info here.
  • Bay Area F# User Group meetup. Microsoft offices, SF, 6:30 p.m.
    Meetup of the Bay Area F# user group. More info here.
  • Xamarin Welcome Party 7:00-midnight
    This is conveniently about a block from the Microsoft offices, and I suspect their numbers will coincidentally increase at about the time the two meetups end. More info here.

Wednesday, 23 June

  • Glenn Block speaking about scriptcs. Github HQ, SF, 7:00 p.m.
    Come see Glenn Block from Microsoft talk about scriptcs at the second SF .NET User Group (yes, there’s two, don’t ask). Github is on 4th St; I hear they are letting in the first 55 people who have RSVP’d to the meetup. More info here.
  • \\Build Welcome Reception
    If the pattern of the past few \\build conferences holds, the \\build welcome reception will be Wednesday night. I’ll post more information when I find out if I’m right or not.

Thursday, 24 June

  • Deep Fried Bytes party. Thirsty Bear Brewing Co., South of Market 8:00-10:00 p.m.
    To get tickets to this, you have to track down Chris Woodruff or Keith Elder at the \\build conference. More info here.
  • \\Build Attendee Party
    This is another educated guess. If the pattern holds, there will be an Attendee party on Thursday night. I’ll post details here when/if I get them!

How do I find the Microsoft office in San Francisco?

Several of these events are at the Microsoft offices in San Francisco. They are very generous with their space, and we who run the meetups and user groups really appreciate their support, especially that of Bruno Terkaly with DPE for hosting all of our SF meetups.

The office is about two blocks from the Moscone Center, where the \\build conference is being held, on Market Street where Powell runs into Market. Of course, they don’t have a big sign on the street that says Microsoft; you have to be “in the know” to find it. Fortunately, Microsoft Research (apparently in the same location) has a very nice page here that shows you where it is.

Is Rice-a-Roni really the San Francisco Treat they claim it is?

Yes it is. Do you have the Rice-a-Roni song in your head yet?

Windows 8 and ClickOnce : the definitive answer revisited

April 14, 2013

Since I posted the article titled Windows 8 and ClickOnce: the definitive answer, it became clear that it was not actually the definitive answer.

I got a ping on Twitter from Phil Haack of GitHub telling me that this did not fix their Smart Screen filter problem.

After talking to him, and seeing his build and signing commands, I discovered they recently changed their signing certificate. For those of you who remember the early days of ClickOnce (2005) when you changed the signing certificate and everybody had to uninstall and reinstall the application, this seemed too likely an indicator to ignore.

Reputation

I didn’t talk in my article about “reputation” (and I should have, so I duly apologize here for that). In my first conversations with Microsoft, they mentioned that an application has a reputation, that this reputation has some bearing on the appearance of the Smart Screen Filter, and that it is built based on how many people have installed your application.

When I asked how many people had to install your application before the Smart Screen filter stopped interrupting the running of the application, I could not get a clear answer. Of course, it makes sense that they wouldn’t want to make their algorithm public, because you could publish your app, install it X number of times yourself, and make it reputable. (I’m not suggesting you do that, but if you do, please post back and tell us your results. Inquiring minds want to know.)

Since we’ve been in business for a few years, and have well over a thousand customers (probably thousands) who have installed the desktop application, this didn’t impact us. The reason I didn’t mention it in the blog post is because I created a new Windows Forms test application and deployed it solely for the purpose of testing the information in the article, and had no problem with the Smart Screen Filter. I installed the application maybe a dozen times while messing with the build commands, so I figured, “Wow, the number of installs required is pretty small.” Haha!

So on behalf of Phil, I pinged my contact at Microsoft, and he went off to investigate. After a bit of research, he found some information internal to Microsoft. I won’t quote it directly in case I shouldn’t, but the gist of it was this: The digital certificate information may be taken into account when determining the reputation of the application. A-HA! I thought to myself (and immediately started humming that song, “Take On Me”.)

So the problem at GitHub is probably due to the certificate being updated right about the same time they started signing their assembly for customers using Windows 8. I expect that fairly soon, as people install or get updates (if they are using automatic updates), their reputation will be sterling and nobody will ever see the Smart Screen Filter again when installing GitHub.

Knowing this, it makes sense that my test application didn’t get stopped even though it was a brand new application. I signed it with my company’s signing certificate, which has been in use for several months.

Which leads me to another issue that came up when talking to Phil. I noticed that rather than using PostBuild or BeforePublish commands, he was using AfterCompile commands to sign his executable. I asked him about it.

PostBuild, BeforePublish, and AfterCompile, oh my!

Apparently, when Phil signed his executable using PostBuild or BeforePublish commands, users installing it got the dreaded “exe has a different computed hash than specified in the manifest” error. He found that using AfterCompile instead fixed the problem.

I went back to Microsoft, and they soon verified the problem and said it is due to WPF applications having a different set of targets and execution order, so the standard AfterBuild/BeforePublish commands don’t quite work. So the bottom line is this: the signing of the exe doesn’t work right with BeforePublish or PostBuild if you are using VS2012 and you have a WPF application. In that case, you must use AfterCompile. So follow use case #3 in the original post, but put in AfterCompile instead of BeforePublish.

If you are using VS2010, OR you have a Windows Forms or Console application, you can use PostBuild or BeforePublish with no problem.

Hopefully we now have the definitive answer to handling the Smart Screen filter and signing a ClickOnce application that will be run on a Windows 8 machine.

Thanks to Zarko Hristovski and Paul Keister, who also reported the problem with the BeforePublish command, and who verified that AfterCompile worked for them. Thanks to Phil Haack for the answer to a problem I didn’t know existed yet. And thanks to Saurabh Bhatia at Microsoft for his help with Windows 8 and ClickOnce.

Tech Days San Francisco, 2-3 May 2013, through Azure-colored glasses

April 9, 2013

Living in the San Francisco Bay Area is awesome if you work in tech. There are so many companies springing up all the time and so many interesting places to work. The hard part of working in tech is keeping up with the current technologies and learning the new skills that can help you advance your career. A great way to do that is to keep your eyes open for conferences, dev days, tech days, etc., in your area, sign up and go. There are so many great opportunities being offered by the community leaders in your area.

A really interesting opportunity is coming up in the San Francisco Bay Area in early May – Tech Days SF. While primarily for IT Pros, there are also sessions that will be interesting to developers. What developer couldn’t benefit from knowing more on the IT Pro side? I was recently talking to another Azure MVP, and we agreed that now with all of the features in Windows Azure, it would behoove us to learn about virtual networks and some of the other IT-type features we never had to know when just doing software development.

There are some great speakers coming, which I doubly appreciate, because I managed to poach Glenn Block from Microsoft to speak at the Azure Meetup in San Francisco the night before (5/1) about mobile services (official announcement coming soon). And there is going to be a wide variety of topics; here is a random selection that just coincidentally seem Azure-related or Azure-useful:

  • Windows Azure
  • Managing the Cloud from the CmdLine
  • Microsoft IT – Adopted O365 and Azure
  • Windows Azure Virtual Machines (IAAS)
  • PowerShell Tips and Tricks (You can use PowerShell scripts with Windows Azure)
  • Manage Server 2012 Like a Pro or Better, Like an Evil Overlord (I like the title)

This is just one of many opportunities available to keep your skills up-to-date. So check it out, sign up, and go expand your knowledge!

(Reminder – There’s also a Global Windows Azure Bootcamp in San Francisco on 4/27!)

Global Windows Azure Bootcamp SF April 27th 2013

April 6, 2013

What is it?

On April 27th, the Windows Azure community is going to have a Global Windows Azure Bootcamp. This will be a one-day hands-on deep dive class for developers in locations all over the world. Last I heard, the count was up to around 80 locations.

The local Windows Azure experts will be in attendance to run the bootcamp in each location, provide training, answer questions, and help with the labs. I heard a rumor that there is also going to be a huge rendering project that each site can run, which will test the power and capability of Windows Azure. It should be a lot of fun, so please sign up and attend the one closest to you.

What about the San Francisco Bay Area?

I will be organizing and running the event in San Francisco; registration is here. The event location is the Microsoft office in San Francisco (835 Market Street, Suite 700). This is adjacent to and above the Westfield Shopping Mall, so after the event, you can go to the Microsoft Specialty Store and get a new Surface Pro tablet, because you’ll love mine so much you’ll want your own.

Each bootcamp’s agenda and material are up to the organizer, so you will be at my mercy. I mean, I will be deciding what we’re going to do. Since it’s three weeks off, and everybody knows that developers usually don’t write talks until the night before the event, I haven’t decided on the agenda yet. (Don’t worry, this time I’m not going to wait until the night before.)

I’ve attended these in the past, and always think the leaders talk too much and the developers develop too little, so I am going to try to focus on the development rather than the talking. I’ll do introductory talks with overview information, and then provide a corresponding lab that we can do to understand the topic. Since I live in the San Francisco Bay Area, and there are a lot of non-Microsoft developers, my current thinking is that I will focus on Windows Azure Web Sites, Infrastructure as a Service (IAAS), and Mobile services, which can be used by everybody.

What do I need to bring?

You need to install the prerequisites BEFORE the class. It can take a couple of hours to get set up, so if you don’t do it ahead of time, you won’t get nearly as much out of the class and will probably be concentrating so hard, you will miss some of my crackling jokes and dry witty comments. Also note that your Commodore 64 will probably not work with Windows Azure’s SDK.

Here’s what you need to have on your system to make the most out of the day:

This should be a lot of fun, and will be a great introduction to some of the cool things you can do with Windows Azure. (For more information about how much fun I’ve had with Windows Azure, check out this blog post.) Sign up for the event closest to you and have a great time!