Archive for July, 2013

Windows Azure at San Diego Code Camp – 27th and 28th of July 2013

July 15, 2013

There is a code camp in San Diego on July 27th and 28th that has a lot of really interesting sessions available. If you live in the area, you should check it out – there’s something for everyone. It also gives you an opportunity to talk with other developers, see what they’re doing, and make some connections.

I’m going to be traveling from the San Francisco Bay Area to San Diego to speak, as are some of my friends — Mathias Brandewinder, Theo Jungeblut, and Steve Evans.

For those of you who don’t know me, I’m VP of Technology for GoldMail (soon to be renamed PointAcross), and a Microsoft Windows Azure MVP. I co-run the Windows Azure Meetups in San Francisco and the Bay.NET meetups in Berkeley. I’m speaking about Windows Azure. One session is about Windows Azure Web Sites and Web Roles; the other one is about my experience migrating my company’s infrastructure to Windows Azure, some things I learned, and some keen things we’re using Azure for. Here are my sessions:

Theo works for AppDynamics, a company whose software has some very impressive capabilities to monitor and troubleshoot problems in production applications. He is an entertaining speaker and has a lot of valuable expertise. Here are his sessions:

Mathias is a Microsoft MVP in F# and a consultant. He is crazy smart, and runs the San Francisco Bay.NET Meetups. Here are his sessions:

Steve (also a Microsoft MVP) is giving an interactive talk on IIS for Developers – you get to vote on the content of the talk! As developers, we need to have a better understanding of some of the system components even though we may not support them directly.

Also of interest are David McCarter’s sessions on .NET coding standards and how to handle technical interviews. He’s a Microsoft MVP with a ton of experience, and his sessions are always really popular, so get there early! And yet another Microsoft MVP speaking who is always informative and helpful is Jeremy Clark, who has sessions on Generics, Clean Code, and Design patterns.

In addition to these sessions, there are dozens of other interesting topics, like Estimating Projects, Requirements Gathering, Building cross-platform apps, Ruby on Rails, and Mongo DB (the name of which always makes me think of mangos), and that’s just to name a few.

So register now for the San Diego Code Camp and come check it out. It’s a great opportunity to increase your knowledge of what’s available and some interesting ways to get things done. Hope to see you there!

Moving our Windows Azure SQL Databases for the Rebranding/AzureSDK2.0 project

July 14, 2013

As detailed in my last couple of posts, we have rebranded our company as PointAcross, which means we rebranded all of the products except the desktop Composer, which is still known as GoldMail. I created all new services and storage accounts with pointacross in the name in a new region, and everything is published in production and working fine.

Only one thing is left – the Windows Azure SQL Databases are still in US North Central. This is a disaster waiting to happen. It is a well-known fact that WASDs have issues with connections, and you always have to put in connection management code. Running your services in one data center with your database in another is just asking for trouble. So now I have to move the primary database as well as the tiny session state database used in a couple of our web applications.
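
In case you're not familiar with it, the "connection management code" I'm referring to is retry logic around transient failures. Here's a minimal sketch of the idea in C# – the error numbers, back-off, and method here are just illustrative (this is not our production code), and you'd tune them for your own system:

//requires System, System.Data, System.Data.SqlClient, and System.Threading
private static readonly int[] TransientErrorNumbers = { 40197, 40501, 40613, 10053, 10054, 10060 };

private void ExecuteWithRetry(string connectionString, string storedProcName)
{
    int maxAttempts = 3;
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(storedProcName, connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                connection.Open();
                command.ExecuteNonQuery();
                return;
            }
        }
        catch (SqlException ex)
        {
            //only retry error numbers that look transient, and only a few times
            bool isTransient = Array.IndexOf(TransientErrorNumbers, ex.Number) >= 0;
            if (!isTransient || attempt >= maxAttempts)
            {
                throw;
            }
            //back off a little longer before each retry
            Thread.Sleep(TimeSpan.FromSeconds(attempt * 2));
        }
    }
}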

Although we have somewhere around 15 services running in Azure, only four of them access the primary database. These are the services for our primary application (PointAcross, used to make messages), and the Players used to play those messages.

Some investigation revealed several ways to migrate the data.

1. Copy the database from one place to another.
2. Use the SQL Azure Migration Wizard on CodePlex.
3. Use the bacpac feature to back up a WASD to blob storage, then to restore it to the new database server.
4. Use the wrapper that Redgate has written around the bacpac capability that makes it very easy to backup/restore a WASD from/to blob storage.

1. Copy the database

This is pretty simple. You can make a copy of your database using T-SQL. It’s as simple as logging into SQL Server Management Studio, connecting to the destination server, and running this command against the master database.

create database MyDatabaseBkp as copy of myserver.MyDatabase

After you do that, you will probably get back a message telling you it’s finished. What this means is that the query is finished, not the copy! It’s doing the copy in the background. There are two queries you can run against the system database to check the status of the database copy.

select * from sys.databases

This one shows you all the databases and the status of them. Look at the [state_desc] column. It will say “ONLINE” if it’s finished, or “COPYING” if it’s not.

The second command only shows running jobs, and will show a % completion in the [percent_complete] column. I read somewhere that this will update every 5 minutes, but don’t bet the farm on it. Mine went to 24%, stayed a long time, then jumped to 65%, and then finished only a few minutes later. So take it as an indication that something is happening, but don’t assume it’s definitive. Here’s the second command:

select * from sys.dm_database_copies

Copying a database is fairly easy, and works great. Our database is about 8 GB, and it took 10-15 minutes to run. Don’t forget that you have to pay for the second database!

There is a major gotcha with this option: it only works if the source and destination are within the same data center. Since we’re moving our database from US North Central to US West, this won’t work for us.

For more information about this feature, check out this article.

2. Use the SQL Azure Migration Wizard

This application scripts all of the objects in the database, and extracts the data to your local hard drive using BCP. Then it runs the script on the destination server and uploads the data to the new database. This means you end up downloading all of the data to your local machine, and then uploading it to the new server. I didn’t really want to download 8 GB of data and then wait for it to upload again. If you have all the time in the world, this will probably work for you. We used it when we initially migrated our infrastructure to Windows Azure, and still use it occasionally to migrate new tables and other objects developed locally, but I didn’t want to have our systems down for as long as it would take to run this in production. For more information, check it out on CodePlex.

3. Use the bacpac feature

This is a good choice. Back up the database in US North Central to blob storage, then restore it to the new server in the US West region. For more information, check out this article. I would have stopped and figured out how to do this, but I found something easier.

4. Use the Redgate Cloud Services tool

It turns out that Redgate has written an application to let you back up your Windows Azure SQL Database to blob storage (or Amazon storage), and to restore it from storage back to Windows Azure. This looks a lot like a wrapper around the bacpac function to me. To check it out, click here. This is what I used to move our primary database.

After creating an account, you can set up a job. The screen looks like this:

Choose to back up a SQL Azure database to Azure and you will see this screen:

On the left, fill in the Azure server, User name, and Password, and click the Test button attached to the Password field. If you got the information right, this will populate the list of databases, and you can select the one you want to back up.

Note the checkbox about creating a temporary copy. When I used this application, I created my own copy of the database using the Copy Database command explained in option 1, and then used this to back up the copy. This ensured that I didn’t have any conflicts or problems from the database being accessed while it was being backed up. We actually shut down all of the client applications while I did this, so that shouldn’t have been a problem, but I figured better safe than sorry. If you don’t want to create your own copy, you can check the box and this application will do it for you. Note that you still have to pay for that extra database just as if you had done it yourself.

On the right side, fill in the storage account name and access key and click Test. If you got the information right, it will populate the list of containers. Select the one you want to put your database backup in, and specify the file name of the backup. This can be anything you want to call it; it is not the name that will be used for the database when you restore it – it’s just a file name.

After getting everything the way you want it, click Continue and you will see this screen:

Now you can schedule the job to be run later, or you can just run it now. I chose to run it now, but it’s cool that Redgate offers this job scheduling feature – this means you can use this tool to make regular backups of your database to your blob storage account. It would be nice if they would let you put a date marker in the file name so you can keep multiple copies, but I don’t see any way to do that right now.

First it shows the job getting ready to start.

It will show progress as it’s running.

When it’s finished, it sends out an e-mail (unless you have turned the notifications off), and the FINISH TIME will show the date/time that it finished.

For my sample run, the text is showing up in red. That’s because this one failed. It wouldn’t back up our ASPState database. The error said, “Microsoft Backup Service returned an error: Error encountered during the service operation. Validation of the schema model for data package failed.” (It said a lot more, too, but I’ll spare you.)

I didn’t know how to fix it or what the problem was, so for this tiny database, I used the SQL Azure Migration Wizard to move it into production.

The Redgate Cloud Services tool worked great for our production database, which is around 8 GB. I did a dry run before running this in production: the dry run of the backup, around 9 p.m. PDT on a Monday night, took about an hour. The production backup, on a Friday night around 10:30 p.m. PDT, took 20 minutes.

So now my database is backed up.

After you back it up, you can restore it to any of your WASD servers in any data center. Here’s what the screen looks like in the Redgate Cloud Services.

This works just like the backup – fill in the information (don’t forget to click the Test buttons to make sure the other boxes are populated) and schedule a job or run it now.

Note that you specify the database name on the SQL Azure side. The database does not have to exist; it will be created for you. If it does exist, and you want to replace it, there’s a checkbox for that.

We did a dry run and a production run of the restore as well. The dry run, on a Monday night at 11:35 p.m. PDT, took 50 minutes. The production run, on a Friday night at 11 p.m., took 44 minutes. Much, much faster than using the SQL Azure Migration Wizard would have been.

Now what?

After I moved the database, I had to update the Azure Service configurations for the four services that access the database, and change the connection strings to point to the new database. Awesome Jack (our Sr. Systems Engineer who also does our builds) published them and we tried them out. After ensuring everything worked, Jack re-enabled access to all of the client applications, and we declared the Rebranding release completed.
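
Since the connection strings live in the cloud service configuration, only the setting in each .cscfg had to change to point at the new server – the service code itself stays the same. Here's a rough sketch of how a service picks it up (the setting name is made up for illustration, not our actual one):

//RoleEnvironment is in Microsoft.WindowsAzure.ServiceRuntime
string connectionString =
    RoleEnvironment.GetConfigurationSettingValue("PrimaryDatabaseConnectionString");
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    //call stored procedures as usual; nothing in the code cares which server this is
}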

One issue

We did have one issue. We have an account in our Windows Azure SQL Database, used by the services, that only has EXEC rights. This ensures that nobody puts dynamic SQL in a service call, and that if someone gets a copy of our Azure configuration, they can only execute stored procedures; they can’t log in and query our database. We had to set this up again because the login is defined in the master database. So we had to create the login on the target server, then run the setup queries against the restored database in order for the account to work. Not a big deal, but it’s a good thing we remembered to test this.
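
For anyone who hasn't set one of these up before, the gist is that the login lives in master on the new server, while the user and the EXECUTE grant live in the restored database itself. Here's a rough sketch of the kind of setup involved, with made-up names and run from C# purely for illustration (you can just as easily run the same statements from SQL Server Management Studio):

//names, password, and connection string variables are placeholders
using (SqlConnection master = new SqlConnection(masterConnectionString))
using (SqlCommand createLogin = new SqlCommand(
    "CREATE LOGIN ServiceExecLogin WITH PASSWORD = 'use-a-strong-password-here'", master))
{
    master.Open();
    //CREATE LOGIN has to run against the master database, in its own batch
    createLogin.ExecuteNonQuery();
}

using (SqlConnection db = new SqlConnection(restoredDatabaseConnectionString))
{
    db.Open();
    //the user and the EXECUTE grant live in the restored database itself
    using (SqlCommand createUser = new SqlCommand(
        "CREATE USER ServiceExecUser FOR LOGIN ServiceExecLogin", db))
    {
        createUser.ExecuteNonQuery();
    }
    using (SqlCommand grantExec = new SqlCommand("GRANT EXECUTE TO ServiceExecUser", db))
    {
        grantExec.ExecuteNonQuery();
    }
}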

Summary

There are several ways to copy a Windows Azure SQL Database from one server to another. I used the Redgate Cloud Services tools, which I think are a wrapper around the bacpac feature. They were very easy to use, and worked perfectly for our production database. For our tiny session state database, we used the SQL Azure Migration Wizard. Our database is now moved, and the rebranding/upgrade project has been finished successfully.

Moving Azure blob storage for the Rebranding/AzureSDK2.0 Project

July 13, 2013

In my last post, I talked about the work required to upgrade all of the cloud services to SDK 2.0 and rebrand “goldmail” as “pointacross” in the code. After all these changes were completed and turned over to QA, the real difficulties began – moving the data. Let’s talk about moving blob storage.

Storage accounts and data involved

We set up new storage accounts in US West with “pointacross” in the name, to replace the “goldmail” storage in US North Central. We decided to leave the huge amounts of diagnostics data behind, because we can always retrieve it later if we need to, but it’s not worth the trouble of moving it just to have it in the same Windows Azure storage tables and blob storage as the new diagnostics. So here’s what we had to move:

GoldMail main storage (includes the content libraries and diagnostics)
GoldMail Cloud Composer storage
GoldMail Saved Projects

When you use the PointAcross application (previously known as the Cloud Composer), you upload images and record audio. We store those assets, along with some other application information, in blob storage. This yields many benefits, but primarily it means you can work on a PointAcross project logged in from one computer, then go somewhere completely different and log in, and all of your assets are still there. You can also save your project after you publish your message, or while working on it – these go into the Saved Projects storage. Those two storage accounts have a container for each customer who has used that application or feature. Fortunately, we’re just in the beginning phases of releasing the PointAcross project widely, so there are only about a thousand containers in each storage account.

The main storage includes a bunch of data we use here and there, and the content library assets. There are a lot of files, but only about 20 containers.

Option 1: Using AzCopy

So how do we move this data? The first thing we looked at was AzCopy, from the Windows Azure Storage team. This is a handy-dandy utility, and we used it to test migrating the data from one storage account to the other. You run it from the command window. Here’s the format of the command:

AzCopy [1]/[2] [3]/[4] /sourcekey:[5] /destkey:[6] /S

    [1] is the URL to the source storage account, like http://mystorage.blob.core.windows.net
    [2] is the container name to copy from
    [3] is the URL to the target storage account, like http://mynewstorage.blob.core.windows.net
    [4] is the container name to copy to
    [5] is the key to the source storage account, either primary or secondary
    [6] is the key to the destination storage account, either primary or secondary
    /S means to do a recursive copy, i.e. include all folders in the container.

I set up a BAT file called DoTheCopy and substituted %1% for the container name in both places, so I could pass it in as a parameter. (For those of you who are too young to know what a BAT file is, it’s just a text file with a file extension of .bat that you can run from the command line.) This BAT file had two lines and looked like this (I’ve chopped off the keys in the interest of space):

ECHO ON
AzCopy http://mystorage.blob.core.windows.net/%1% http://mynewstorage.blob.core.windows.net/%1%

I called this for each container:

E:\AzCopy> DoTheCopy containerName

I got tired of this after doing it twice (I hate repetition), so I set up another BAT file to call the first one repeatedly; its contents looked like this:

DoTheCopy container1
DoTheCopy container2
DoTheCopy container3

The AzCopy application worked really well, but there are a couple of gotchas. First, when it sets up the container in the target account, it makes it private. So if you want the container to be private but the blobs inside to be public, you have to change that yourself afterwards. You can change it programmatically or with an excellent tool such as Cerebrata’s Azure Management Studio.

The second problem is that those other two storage accounts have over a thousand containers each. So now I either have to type all of those container names in (not bloody likely!), or figure out a way to get a list of them. So I wrote a program to get the list of container names and generate the BAT file. This creates a generic list full of the command lines, converts it to an array, and writes it to a BAT file.

//sourceConnectionString is the connection string pointing to the source storage account
CloudStorageAccount sourceCloudStorageAccount = 
    CloudStorageAccount.Parse(sourceConnectionString);
CloudBlobClient sourceCloudBlobClient = sourceCloudStorageAccount.CreateCloudBlobClient();
List<string> outputLines = new List<string>();
IEnumerable<CloudBlobContainer> containers = sourceCloudBlobClient.ListContainers();
foreach (CloudBlobContainer oneContainer in containers)
{
    string outputLine = string.Format("DoTheCopy {0}", oneContainer.Name);
    outputLines.Add(outputLine);
}
string[] outputText = outputLines.ToArray();
File.WriteAllLines(@"E:\AzCopy\MoveUserCache.bat", outputText);

That’s all fine and dandy, but what about my container permissions? So I wrote a program to run after the data was moved. This iterates through the containers and sets the permissions on every one of them. If you want any of them to be private, you have to hardcode the exceptions, or fix them after running this.

private string SetPermissionsOnContainers(string dataConnectionString)
{
  string errorMessage = string.Empty;
  string containerName = string.Empty;
  try
  {
    CloudStorageAccount dataCloudStorageAccount = CloudStorageAccount.Parse(dataConnectionString);
    CloudBlobClient dataCloudBlobClient = dataCloudStorageAccount.CreateCloudBlobClient();

    int i = 0;

    IEnumerable<CloudBlobContainer> containers = dataCloudBlobClient.ListContainers();
    foreach (CloudBlobContainer oneContainer in containers)
    {
      i++;
      containerName = oneContainer.Name;
      System.Diagnostics.Debug.Print("Processing container #{0} called {1}", i, containerName);

      CloudBlobContainer dataContainer = dataCloudBlobClient.GetContainerReference(containerName);
      //set permissions
      BlobContainerPermissions permissions = new BlobContainerPermissions();
      permissions.PublicAccess = BlobContainerPublicAccessType.Blob;
      dataContainer.SetPermissions(permissions);
    }
  }
  catch (Exception ex)
  {
    errorMessage = string.Format("Exception thrown trying to change permission on container '{0}' "
        + "= {1}, inner exception = {2}",
      containerName, ex.ToString(), ex.InnerException);
  }
  return errorMessage;
}

Option 2: Write my own solution

Ultimately, I decided not to use AzCopy. By the time I’d written this much code, I realized it was just as easy to write my own code to move all of the containers from one storage account to another, set the permissions as it iterated through the containers, and add trace logging so I could see the progress. I could also hardcode exclusions if I wanted to. Here’s the code for iterating through the containers. When getting the list of containers, if it is the main account, I only want to move specific containers, because I moved some static ones ahead of time. So for that case, I just set up a list of the container names I want to process. For the other accounts, it retrieves a list of all containers and processes all of them.

private string CopyContainers(string sourceConnectionString, string targetConnectionString, 
  string accountName)
{
  string errorMessage = string.Empty;
  string containerName = string.Empty;
  try 
  {
    CloudStorageAccount sourceCloudStorageAccount = CloudStorageAccount.Parse(sourceConnectionString);
    CloudBlobClient sourceCloudBlobClient = sourceCloudStorageAccount.CreateCloudBlobClient();
    CloudStorageAccount targetCloudStorageAccount = CloudStorageAccount.Parse(targetConnectionString);
    CloudBlobClient targetCloudBlobClient = targetCloudStorageAccount.CreateCloudBlobClient();

    int i = 0;
    List<string> containersToDo = new List<string>();
    if (accountName == "mainaccount")
    {
      containersToDo.Add("container1");
      containersToDo.Add("container2");
      containersToDo.Add("container3");

      foreach (string oneContainer in containersToDo)
      {
        i++;
        containerName = oneContainer;
        System.Diagnostics.Debug.Print("Processing container #{0} called {1}", i, containerName);
        MoveBlobsInContainer(containerName, accountName, targetCloudBlobClient, sourceCloudBlobClient);
      }
    }
    else
    {
      IEnumerable<CloudBlobContainer> containers = sourceCloudBlobClient.ListContainers();                    
      foreach (CloudBlobContainer oneContainer in containers)
      {
        i++;
        containerName = oneContainer.Name;
        System.Diagnostics.Debug.Print("Processing container #{0} called {1}", i, containerName);
        MoveBlobsInContainer(containerName, accountName, targetCloudBlobClient, sourceCloudBlobClient);
      }
    }
  }
  catch (Exception ex)
  {
    errorMessage = string.Format("Exception thrown trying to move files for account '{0}', " +
      "container '{1}' = {2}, inner exception = {3}",
      accountName, containerName, ex.ToString(), ex.InnerException);
  }
  return errorMessage;
}

And here’s the code that actually moves the blobs from the source container to the destination container.

private string MoveBlobsInContainer(string containerName, string accountName, 
  CloudBlobClient targetCloudBlobClient, CloudBlobClient sourceCloudBlobClient)
{
  string errorMessage = string.Empty;
  try
  {
    long totCount = 0;
    //first, get a reference to the container in the target account, 
    //  create it if needed, and set the permissions on it 
    CloudBlobContainer targetContainer = 
      targetCloudBlobClient.GetContainerReference(containerName);
    targetContainer.CreateIfNotExists();
    //set permissions
    BlobContainerPermissions permissions = new BlobContainerPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Blob;
    targetContainer.SetPermissions(permissions);

    //get list of files in sourceContainer, flat list
    CloudBlobContainer sourceContainer = 
      sourceCloudBlobClient.GetContainerReference(containerName);
    foreach (IListBlobItem item in sourceContainer.ListBlobs(null, 
      true, BlobListingDetails.All))
    {
      totCount++;
      System.Diagnostics.Debug.Print("Copying container {0}/blob #{1} with url {2}", 
        containerName, totCount, item.Uri.AbsoluteUri);
      CloudBlockBlob sourceBlob = sourceContainer.GetBlockBlobReference(item.Uri.AbsoluteUri);
      CloudBlockBlob targetBlob = targetContainer.GetBlockBlobReference(sourceBlob.Name);
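      //note: StartCopyFromBlob kicks off a server-side copy and returns right away;
      //  the copy itself may still be in progress after this call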
      targetBlob.StartCopyFromBlob(sourceBlob);
    }
  }
  catch (Exception ex)
  {
    errorMessage = string.Format("Exception thrown trying to move files for account '{0}', "
      + "container '{1}' = {2}, inner exception = {3}",
        accountName, containerName, ex.ToString(), ex.InnerException);
  }
  return errorMessage;
}

You can hook this up to a fancy UI, run it in a background worker, and pass progress back to the UI, but I didn’t want to spend that much time on it. I created a Windows Forms app with one button. When I clicked the button, it ran some code that set the connection strings and called CopyContainers for each storage account, something like the sketch below.
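
This is roughly what that button click handler looked like – the account names and connection strings here are placeholders, not our real ones:

private void btnCopyEverything_Click(object sender, EventArgs e)
{
  //placeholder connection strings – fill in your own account names and keys
  var accounts = new[]
  {
    new { Name = "mainaccount",
          Source = "DefaultEndpointsProtocol=https;AccountName=goldmailmain;AccountKey=...",
          Target = "DefaultEndpointsProtocol=https;AccountName=pointacrossmain;AccountKey=..." },
    new { Name = "composer",
          Source = "DefaultEndpointsProtocol=https;AccountName=goldmailcomposer;AccountKey=...",
          Target = "DefaultEndpointsProtocol=https;AccountName=pointacrosscomposer;AccountKey=..." },
    new { Name = "savedprojects",
          Source = "DefaultEndpointsProtocol=https;AccountName=goldmailsaved;AccountKey=...",
          Target = "DefaultEndpointsProtocol=https;AccountName=pointacrosssaved;AccountKey=..." }
  };

  foreach (var account in accounts)
  {
    string errorMessage = CopyContainers(account.Source, account.Target, account.Name);
    if (!string.IsNullOrEmpty(errorMessage))
    {
      MessageBox.Show(errorMessage);
      return;
    }
  }
  MessageBox.Show("Done copying all storage accounts.");
}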

Did it work?

When we put all of the services into production and our Sr. Systems Engineer, Jack Chen, published them to PointAcross production, I ran this to move the data from the goldmail storage accounts to the pointacross storage accounts. It worked perfectly. The only thing left at this point was moving the Windows Azure SQL Databases (the database previously known as SQL Azure ;) ).

Rebranding and Upgrading to Azure SDK 2.0 — Details, details

July 13, 2013

As I discussed in my last post, we at GoldMail are rebranding our company and services to PointAcross, and updating everything to SDK/Tools 2.0 at the same time. (No reason to do full regression testing twice. Plus, no guts, no glory!)

Setting up the Azure bits

For the rebranding, I decided to create all new services and storage accounts with “pointacross” in the name instead of “goldmail”. Yes, we could have just pointed CNAMEs for the pointacross host names at the existing goldmail*.cloudapp.net URLs, but there are several benefits to creating new services. For one thing, it removes any confusion in the tech department about which services are which. Also, we can run two production systems in parallel until the DNS entries for the goldmail services redirect appropriately to the pointacross URLs.

Another issue is that our services are currently located in the US North Central data center, which is very full. I can’t create VMs there, and new Azure subscribers can’t set up services there. US North and South Central were the first two data centers in the US, so the hardware is older as well. At the rate my CEO is going, it seems likely that he will generate enough business that we will need to scale out in the next few months, and I was concerned about being able to do that with our services based in North Central. I don’t know if that’s a valid concern, but I figured better safe than sorry.

So I set up a new affinity group in US West, and created all of the new services and storage accounts there. I also took advantage of this opportunity to create a storage account just for diagnostics. We don’t use our main storage account for a lot of other things, but separating out the diagnostics is always advised, and this was a good time to take care of that.

Our Sr. Systems Engineer, Jack Chen, set up all the DNS entries for everything, and I set to work on updating the SDK and doing the rebranding.

Updating the SDK version

The next order of business was to update everything to Azure SDK 2.0. I downloaded and installed all of the updates, and installed the tools for Visual Studio 2010. 

Azure SDK/Tools 2.0 runs side-by-side with 1.8 just fine. You can open solutions that have cloud projects targeting 1.8 and have no problem. However, here’s something you need to know: Once you install SDK/Tools 2.0, you can no longer create a NEW cloud project targeting 1.8. I installed this SDK just to check out the impact of the breaking changes in the Storage Client Library, and when I needed to add a cloud project to an existing (SDK 1.8) solution, there was no way to tell it to target anything except SDK 2.0. So if you need to add a new cloud project and the rest of the projects in that solution are 1.8 or before, you have to uninstall SDK 2.0 in order to create your cloud project.

In the screenshots below, I am using VS2010. We haven’t upgraded to VS2012 because we are always working like wildfire on the next release, and the TFS Pending Changes window was just a pain in the butt we didn’t want to deal with yet. Procrastination worked in my favor this time (that’s a first!) – they have changed the Pending Changes window in VS2013, but we can’t use that because they haven’t provided Azure Tools for the VS 2013 Preview yet. Argh!

So how do you update a current solution? Right-click on each cloud project in the solution and select Properties. You should see this:

Click the button to upgrade to SDK 2.0. You will be led through a wizard to do the upgrade – it asks if you want to back up the current project first, and offers to show you the conversion log.

We have multiple cloud projects in each solution – one for staging, one for production, and one for developers. (Click here to read why.) So we had to convert each project.

The next thing to do is update the NuGet packages. You can right-click on the Solution and select “Manage NuGet packages for solution”, or do it one project at a time. I did mine one project at a time for no particular reason other than wanting to be methodical about it (and being a little anal-retentive). You will be prompted with a list of packages that can be updated.

For this exercise, you need to update Windows Azure Storage and the Windows Azure Configuration Manager. When you do this, it updates the references in the project(s), but doesn’t change any code or using statements you may have. Try doing a build and see what’s broken. (F5 – most people’s definition of “unit test”. ;) )

Handling breaking changes

Since we were still using Storage Client Library 1.7, there were a number of things I had to fix.

1. I configure our diagnostics programmatically. To do this in 1.7 and before, I grab an instance of the storage account in order to get an instance of the RoleInstanceDiagnosticManager. Here is the old code.

string wadConnectionString = "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString";   
CloudStorageAccount storageAccount = 
    CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(wadConnectionString));
RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = 
    storageAccount.CreateRoleInstanceDiagnosticManager(
    RoleEnvironment.DeploymentId, 
    RoleEnvironment.CurrentRoleInstance.Role.Name, 
    RoleEnvironment.CurrentRoleInstance.Id);

They have removed this dependency, so I had to change this code to instantiate a new instance of the diagnostic manager and pass in the connection string to the storage account used for diagnostics. Here is the new code.

string wadConnectionString = RoleEnvironment.GetConfigurationSettingValue
    ("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");
RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = 
    new RoleInstanceDiagnosticManager(
    wadConnectionString,
    RoleEnvironment.DeploymentId, 
    RoleEnvironment.CurrentRoleInstance.Role.Name, 
    RoleEnvironment.CurrentRoleInstance.Id);

2. In the startup of our web roles, we have an exception handler that writes any startup problems to blob storage (because it can’t write to diagnostics at that point). This code looks like this:

CloudStorageAccount storageAccount = 
    CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
var container = blobStorage.GetContainerReference("errors");
container.CreateIfNotExist();
container.GetBlobReference(string.Format("error-{0}-{1}",
    RoleEnvironment.CurrentRoleInstance.Id, DateTime.UtcNow.Ticks)).
    UploadText("Worker Role Startup Exception = " + ex.ToString());

They changed CreateIfNotExist() to CreateIfNotExists(), and you now have to specify the type of blob used, so when I get the reference to the blob, I have to call GetBlockBlobReference. Also, UploadText has been removed. More on that in a minute. This code becomes the following:

CloudStorageAccount storageAccount = 
    CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
var container = blobStorage.GetContainerReference("errors");
container.CreateIfNotExists();
container.GetBlockBlobReference(string.Format("error-{0}-{1}",
    RoleEnvironment.CurrentRoleInstance.Id, DateTime.UtcNow.Ticks)).
    UploadText("Worker Role Startup Exception = " + ex.ToString());

3. As noted above, you have to change the more generic CloudBlob, etc., to specify the type of blob. So I changed all occurrences of CloudBlob to CloudBlockBlob and GetBlobReference to GetBlockBlobReference.

4. I had a method that checked to see if a blob existed by fetching the attributes and checking the exception thrown. They added Exists() as a method for the blobs, so I’ve replaced all uses of my method with blob.Exists() and removed the method entirely.
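
The before and after look roughly like this – the old helper uses types from the old Microsoft.WindowsAzure.StorageClient namespace, and this is a reconstruction of the pattern rather than my exact code:

//old 1.7-style helper: fetch the attributes and check the error code
private bool BlobExists(CloudBlob blob)
{
    try
    {
        blob.FetchAttributes();
        return true;
    }
    catch (StorageClientException ex)
    {
        if (ex.ErrorCode == StorageErrorCode.ResourceNotFound)
        {
            return false;
        }
        throw;
    }
}

//2.0 style: no helper needed at all (container is a CloudBlobContainer from the new library)
CloudBlockBlob blob = container.GetBlockBlobReference("somefile.xml");
if (blob.Exists())
{
    //do something with the blob
}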

5. Now let’s talk about Upload/Download Text, Upload/Download FIle, and Upload/Download ByteArray. They removed these methods from the CloudBlockBlob class, and now only support Upload/Download Stream. So you can rewrite all your code, or you can get the CloudBlobExtensions written by Maarten Balliauw. I can’t find my link to his, so I’ve posted a copy of them here. Just change the namespace to match yours, and voila!

6. I had to add a using statement for Microsoft.WindowsAzure.Storage.Blob everywhere I use blobs, and the corresponding one for queues where I use queues. I had to add a using statement for Microsoft.WindowsAzure.Storage anywhere I was accessing the CloudStorageAccount. Basically, I had to make sure I had using clauses for the new namespaces wherever I was using them.
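
In other words, the classes I was using now come from namespaces like these (replacing the old Microsoft.WindowsAzure.StorageClient):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Queue;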

7. I also used the “Organize Usings/Remove and Sort” context menu option to clean up the using clauses in every class I changed. This removed the old WindowsAzure.StorageClient.

That was the bulk of the work for updating the Storage Client Library. Once I got the list from going through the first application, doing the others was fairly simple, as I knew what I was looking for.

Gaurav Mantri (an amazing Windows Azure MVP who is always very helpful) has a very good blog series about updating the Storage Client Library, discussing blobs, queues, and table storage.

After I fixed all the breaking changes, I made the rebranding changes. In some cases, this was as easy as just changing “goldmail.com” to “pointacross.com”, but I also had to search the entire code base for “goldmail” and decide how to change each one of them, especially in the case of error messages returned to the client applications.

Every occurrence of “goldmail” had to be assessed, and I had to make sure any “secondary references” were updated. For example, the desktop application (now called GoldMail) has some content hosted in web controls that is located on our website, so I had to be sure those bits were updated in the website. And finally, I updated the storage keys and storage account names in the Azure configurations, and any URLs or occurrences of “goldmail” that I found there.

RDP, SSL, and HTTPS

We purchased a new SSL certificate for pointacross.com, which I uploaded to all of the services for RDP access, and for the HTTPS services. Then I went through and updated the certificate thumbprints specified in the cloud projects. (I have never managed to use the “browse” option for certificates in the Visual Studio UI – it can never seem to find the certificate in my certificate store on my machine, so I update the thumbprints in the Azure configuration, which works just fine.)

After doing this, I right-clicked on each cloud project and selected Manage RDP Connections, then put the password in again. I didn’t do this with the first service, and we couldn’t RDP into it. Oops! I suspect it uses the certificate to encrypt the RDP information and store it in the Azure configuration, and putting it in again after changing the certificate forces it to re-encrypt with the new certificate.

And finally, we set up new publishing profiles and published everything to the new PointAcross staging services, and turned the whole thing over to QA.

Once more, unto the breach. –Shakespeare, Henry V

After everything was tested and checked, we had a release meeting – at night, to minimize customer disruption. We shut down access to our client applications, and then published the new cloud services to production. We also moved the data in the storage accounts. After we tried everything out, we redirected the goldmail DNS entries that were likely to be “out in the wild” to the pointacross services, and deleted the rest of them. After a few days went by, we shut down and deleted the goldmail services, and then removed them from our Azure subscription. We are now rebranded and updated.

In my next post, I’ll talk about how I moved the cloud storage from the goldmail storage accounts to the pointacross storage accounts.