Rebranding and Upgrading to Azure SDK 2.0 — Details, details

As I discussed in my last post, we at GoldMail are rebranding our company and services to PointAcross, and updating everything to SDK/Tools 2.0 at the same time. (No reason to do full regression testing twice. Plus, no guts, no glory!)

Setting up the Azure bits

For the rebranding, I decided to create all new services and storage accounts with “pointacross” in the name instead of “goldmail”. Yes, we could have CNAMEd the goldmail*.cloudapp.net URLs as pointacross, but there are several benefits to creating new services. For one thing, this removes any confusion about the services on the part of us in the tech department. Also, we can run two production systems in parallel until the DNS entries for the goldmail services redirect appropriately to the pointacross URLs.

Another issue we have is our services are currently located in the US North Central data center, which is very full. I can’t create VMs, and new Azure subscribers can’t set up services there. US North and South Central were the first two data centers in the US, so the hardware is older as well. At the rate my CEO is going, it seems likely that he will generate enough business that we will need to scale out in the next few months, and I was concerned about being able to do that with our services based in North Central. I don’t know if that’s a valid concern, but I figured better safe than sorry.

So I set up a new affinity group for USWest, and created all of the new services and storage accounts there. I also took advantage of this opportunity to create a storage account just for diagnostics. We don’t use our main storage account for a lot of other things, but keeping diagnostics in a separate account is always advised, and this was a good time to take care of that.

Our Sr. systems engineer, Jack Chen, set up all the DNS entries for everything, and I set to work on updating the SDK and doing the rebranding.

Updating the SDK version

The next order of business was to update everything to Azure SDK 2.0. I downloaded and installed all of the updates, and installed the tools for Visual Studio 2010. 

Azure SDK/Tools 2.0 runs side-by-side with 1.8 just fine. You can open solutions that have cloud projects targeting 1.8 and have no problem. However, here’s something you need to know: Once you install SDK/Tools 2.0, you can no longer create a NEW cloud project targeting 1.8. I installed this SDK just to check out the impact of the breaking changes in the Storage Client Library, and when I needed to add a cloud project to an existing (SDK 1.8) solution, there was no way to tell it to target anything except SDK 2.0. So if you need to add a new cloud project and the rest of the projects in that solution are 1.8 or before, you have to uninstall SDK 2.0 in order to create your cloud project.

In the screenshots below, I am using VS2010. We haven’t upgraded to VS2012 because we are always working like wildfire on the next release, and the TFS Pending Changes window was just a pain in the butt we didn’t want to deal with yet. Procrastination worked in my favor this time (that’s a first!) – they have changed the Pending Changes window in VS2013, but we can’t use that because they haven’t provided Azure Tools for the VS 2013 Preview yet. Argh!

So how do you update an existing solution? Right-click on each cloud project in the solution and select Properties. You should see this:

Click the button to upgrade to SDK 2.0. You will be led through a wizard to do the upgrade – it asks if you want to back up the current project first, and offers to show you the conversion log.

We have multiple cloud projects in each solution – one for staging, one for production, and one for developers. (Click here to read why.) So we had to convert each project.

The next thing to do is update the NuGet packages. You can right-click on the Solution and select “Manage NuGet packages for solution”, or do it one project at a time. I did mine one project at a time for no particular reason other than wanting to be methodical about it (and being a little anal-retentive). You will be prompted with a list of packages that can be updated.

For this exercise, you need to update Windows Azure Storage and the Windows Azure Configuration Manager. When you do this, it updates the references in the project(s), but doesn’t change any code or using statements you may have. Try doing a build and see what’s broken. (F5 – most people’s definition of “unit test”!)

Handling breaking changes

For us, since we were still using Storage Client Library 1.7, there were a number of things I had to fix.

1. I configure our diagnostics programmatically. To do this in 1.7 and before, I grab an instance of the storage account in order to get an instance of the RoleInstanceDiagnosticManager. Here is the old code.

// Note: despite its name, this variable holds the *name* of the configuration
// setting; the connection string itself is looked up from it on the next line.
string wadConnectionString = "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString";   
CloudStorageAccount storageAccount = 
    CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(wadConnectionString));
RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = 
    storageAccount.CreateRoleInstanceDiagnosticManager(
    RoleEnvironment.DeploymentId, 
    RoleEnvironment.CurrentRoleInstance.Role.Name, 
    RoleEnvironment.CurrentRoleInstance.Id);

They have removed this dependency, so I had to change this code to instantiate a new instance of the diagnostic manager and pass in the connection string to the storage account used for diagnostics. Here is the new code.

string wadConnectionString = RoleEnvironment.GetConfigurationSettingValue
    ("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");
RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = 
    new RoleInstanceDiagnosticManager(
    wadConnectionString,
    RoleEnvironment.DeploymentId, 
    RoleEnvironment.CurrentRoleInstance.Role.Name, 
    RoleEnvironment.CurrentRoleInstance.Id);
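
For context, here is a minimal sketch of how you might apply a configuration once you have the manager. The transfer period and log level shown are illustrative values, not necessarily what we use:

// Sketch: apply a diagnostics configuration through the new manager.
// GetCurrentConfiguration() can return null if nothing has been set yet.
DiagnosticMonitorConfiguration config =
    roleInstanceDiagnosticManager.GetCurrentConfiguration()
    ?? DiagnosticMonitor.GetDefaultInitialConfiguration();

// Example settings: transfer trace logs (Information and up) every minute.
config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Information;
config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

roleInstanceDiagnosticManager.SetCurrentConfiguration(config);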

2. In the startup of our web roles, we have an exception handler that writes any startup problems to blob storage (because it can’t write to diagnostics at that point). This code looks like this:

// This runs inside our startup exception handler, so "ex" is the caught exception.
CloudStorageAccount storageAccount = 
    CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
var container = blobStorage.GetContainerReference("errors");
container.CreateIfNotExist();
container.GetBlobReference(string.Format("error-{0}-{1}",
    RoleEnvironment.CurrentRoleInstance.Id, DateTime.UtcNow.Ticks)).
    UploadText("Worker Role Startup Exception = " + ex.ToString());

They changed CreateIfNotExist() to CreateIfNotExists(), and you now have to specify the type of blob used, so when I get the reference to the blob, I have to call GetBlockBlobReference. Also, UploadText has been removed. More on that in a minute. This code becomes the following:

CloudStorageAccount storageAccount = 
    CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
var container = blobStorage.GetContainerReference("errors");
container.CreateIfNotExists();
container.GetBlockBlobReference(string.Format("error-{0}-{1}",
    RoleEnvironment.CurrentRoleInstance.Id, DateTime.UtcNow.Ticks)).
    UploadText("Worker Role Startup Exception = " + ex.ToString());
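// Note: UploadText is no longer on CloudBlockBlob in SDK 2.0; here it comes
// from the extension methods described in item 5 below.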

3. As noted above, you have to change the more generic CloudBlob, etc., to specify the type of blob. So I changed all occurrences of CloudBlob to CloudBlockBlob and GetBlobReference to GetBlockBlobReference.
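
For instance (the blob name here is made up), the mechanical change looks like this:

// Before (SDK 1.7), with the generic blob type:
// CloudBlob blob = container.GetBlobReference("somefile.txt");
// After (SDK 2.0), saying explicitly what kind of blob it is:
CloudBlockBlob blob = container.GetBlockBlobReference("somefile.txt");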

4. I had a method that checked to see if a blob existed by fetching the attributes and checking the exception thrown. They added Exists() as a method for the blobs, so I’ve replaced all uses of my method with blob.Exists() and removed the method entirely.
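
As a sketch, the pattern went from something like this (the helper name is made up, but this is the gist) to a single library call:

// Old approach (SDK 1.7): probe for the blob and catch the 404.
private static bool BlobExists(CloudBlob blob)
{
    try
    {
        blob.FetchAttributes();
        return true;
    }
    catch (StorageClientException ex)
    {
        if (ex.ErrorCode == StorageErrorCode.ResourceNotFound)
            return false;
        throw;
    }
}

// New approach (SDK 2.0): the library does it for you.
bool blobIsThere = blob.Exists();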

5. Now let’s talk about Upload/Download Text, Upload/Download File, and Upload/Download ByteArray. They removed these methods from the CloudBlockBlob class, and now only support Upload/Download Stream. So you can rewrite all your code, or you can get the CloudBlobExtensions written by Maarten Balliauw. I can’t find my link to his, so I’ve posted a copy of them here. Just change the namespace to match yours, and voila!
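
If you’d rather roll your own, here is a minimal sketch of what such extensions can look like, built on the stream methods SDK 2.0 does support. (This is my illustration, not Maarten’s actual code.)

using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage.Blob;

public static class CloudBlockBlobTextExtensions
{
    // Re-create the old UploadText convenience on top of UploadFromStream.
    public static void UploadText(this CloudBlockBlob blob, string content)
    {
        using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(content)))
        {
            blob.UploadFromStream(stream);
        }
    }

    // Re-create the old DownloadText convenience on top of DownloadToStream.
    public static string DownloadText(this CloudBlockBlob blob)
    {
        using (var stream = new MemoryStream())
        {
            blob.DownloadToStream(stream);
            return Encoding.UTF8.GetString(stream.ToArray());
        }
    }
}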

6. I had to add a using statement for Microsoft.WindowsAzure.Storage.Blob everywhere I use blobs, and the corresponding one for queues where I use queues. I had to add a using statement for Microsoft.WindowsAzure.Storage anywhere I was accessing the CloudStorageAccount. Basically, I had to make sure I had using clauses for the new namespaces wherever I was using them.
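
In other words, the tops of those files now look something like this:

using Microsoft.WindowsAzure.Storage;        // CloudStorageAccount
using Microsoft.WindowsAzure.Storage.Blob;   // CloudBlobClient, CloudBlockBlob, etc.
using Microsoft.WindowsAzure.Storage.Queue;  // CloudQueueClient, CloudQueue, etc.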

7. I also used the “Organize Usings/Remove and Sort” context menu option to clean up the using clauses in every class I changed. This removed the old Microsoft.WindowsAzure.StorageClient references.

That was the bulk of the work for updating the Storage Client Library. Once I got the list from going through the first application, doing the others was fairly simple, as I knew what I was looking for.

Gaurav Mantri (an amazing Windows Azure MVP who is always very helpful) has a very good blog series about updating the Storage Client Library, discussing blobs, queues, and table storage.

After I fixed all the breaking changes, I made the rebranding changes. In some cases, this was as easy as just changing “goldmail.com” to “pointacross.com”, but I also had to search the entire code base for “goldmail” and decide how to change each one of them, especially in the case of error messages returned to the client applications.

Every occurrence of “goldmail” had to be assessed, and I had to make sure any “secondary references” were updated. For example, the desktop application (now called GoldMail) has some content hosted in web controls that is located on our website, so I had to be sure those bits were updated in the website. And finally, I updated the storage keys and storage account names in the Azure configurations, and any URLs or occurrences of “goldmail” that I found there.

RDP, SSL, and HTTPS

We purchased a new SSL certificate for pointacross.com, which I uploaded to all of the services for RDP access and for the HTTPS services. Then I went through and updated the certificate thumbprints specified in the cloud projects. (I have never managed to use the “browse” option for certificates in the Visual Studio UI – it never seems to find the certificate in my machine’s certificate store – so I update the thumbprints in the Azure configuration directly, which works just fine.)

After doing this, I right-clicked on each cloud project and selected Manage RDP Connections, then put the password in again. I didn’t do this with the first service, and we couldn’t RDP into it. Oops! I suspect it uses the certificate to encrypt the RDP information and store it in the Azure configuration, and putting it in again after changing the certificate forces it to re-encrypt with the new certificate.

And finally, we set up new publishing profiles and published everything to the new PointAcross staging services, and turned the whole thing over to QA.

Once more, unto the breach. –Shakespeare, Henry V

After everything was tested and checked, we had a release meeting – at night, to minimize customer disruption. We shut down access to our client applications, and then published the new cloud services to production. We also moved the data in the storage accounts. After we tried everything out, we redirected the goldmail DNS entries that were likely to be “out in the wild” to the pointacross services, and deleted the rest of them. After a few days went by, we shut down and deleted the goldmail services, and then removed them from our Azure subscription. We are now rebranded and updated.

In my next post, I’ll talk about how I moved the cloud storage from the goldmail storage accounts to the pointacross storage accounts.

