Diagnostics and Logging in Azure Web Sites

April 14, 2014

A couple of months ago, Microsoft Press asked me to write a blog article about something Azure-related. I chose to write about diagnostics and logging in Azure Web Sites. If you are deploying anything to run in the cloud, you should be including logging in all of your applications. Because the software is running in the cloud, you have less visibility into the machine it’s running on, and trace diagnostics and logging can be a lifesaver when something goes wrong.
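Just to make it concrete, here's a minimal sketch (class and method names are hypothetical) of the kind of tracing the article covers. Once you enable Application Logging for the web site, output written through System.Diagnostics.Trace ends up in the log files or storage location you configure.

using System;
using System.Diagnostics;

public class OrderProcessor
{
    public void Process(int orderId)
    {
        Trace.TraceInformation("Processing order {0}.", orderId);
        try
        {
            // ... the real work goes here ...
        }
        catch (Exception ex)
        {
            Trace.TraceError("Order {0} failed: {1}", orderId, ex);
            throw;
        }
    }
}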

To check out the article, click here. I hope it’s helpful!

Bay Area Azure events in March and April 2014

March 14, 2014

There are several upcoming Windows Azure events in the SF Bay Area. All of these events are free and open to everyone. Food and drinks will be provided, so please register if you’re coming so we can make sure we have enough food!

March 18: A Real Story of Azure Migration

On March 18, Giscard Biamby is coming to speak about his company’s experience migrating one of their larger legacy applications to Windows Azure and how they implemented continuous delivery. It’s always interesting to hear these stories from real customers rather than from Microsoft marketing. For more details or to register, click here.

March 29: Global Windows Azure Bootcamp

On March 29, I will be running a Global Windows Azure Bootcamp at Foothill College in Los Altos Hills with the help of several of my friends. This is a free event run by community leaders worldwide on the same day. So far, there are over a hundred locations confirmed. Everyone is welcome. If you know nothing about Azure and would like to learn a bit, with people on hand to help you through some hands-on labs, this is a great opportunity. Also, if you’re already doing Azure and have questions about your project, feel free to attend and ask the experts for advice.

I’ll be presenting an overview of Windows Azure. Neil Mackenzie will be speaking on IAAS and Networking. Eugene Chuvyrov and Fabien Lavocat will be showing how to use Mobile Services from an iOS device and a Windows device. The rest of the day will be hands-on labs. For more details or to register, click here.

April 2: Vittorio Bertocci on Identity and Active Directory

On April 2nd, the Windows Azure meetup in San Francisco will be hosting Vittorio Bertocci from Microsoft. Vittorio will be in SF for the Microsoft \\build conference (April 2nd through April 4th). Vittorio is brilliant, and is a vibrant, entertaining speaker, focusing on Identity and Active Directory in Windows Azure. He spoke last year, and we had a huge turnout, lots of conversation and audience participation, and it was a great event. This should be another great meetup. For more details or to register, click here.

April 22nd: Deep Dive on Windows Azure Web Sites

On April 22nd, Aidan Ryan will be speaking at the Windows Azure meetup in San Francisco, doing a deep dive on Windows Azure Web Sites. This feature becomes more robust every day, and Aidan will cover the basics as well as the recent releases, such as web jobs. He’s also promised to incorporate anything new announced at \\build. For more details or to register, click here.

I feel very fortunate to live in the San Francisco Bay Area, where so many opportunities for keeping up with technology are available. I hope I’ll see you at one or more of these events!

Big Data Hackathon with Big Data Names, February 8-9, 2014

February 5, 2014

There’s a great opportunity coming up this weekend (2/8/14-2/9/14) for those who have an interest in Big Data. Microsoft is hosting a (free) hackathon at the Microsoft Silicon Valley Campus in Mountain View (California). This is a Future Cities Hackathon; the focus will be on how Big Data can be used to solve problems in San Francisco. There are prizes on offer for the winning teams. (At the time of this writing, I was unable to pry the list of prizes out of Microsoft. I’ll update this post if I get the list before the hackathon.)

There will be three categories, each judged by a panel of experts.

  1. Data modelling: Can you find trends in data on the movements of SF citizens? Can you make accurate predictions on the behaviour of pedestrians and traffic? Can you alleviate traffic chaos and aid in the ergonomic redistribution of parking? Can you set up your own pre-crime division!? Any tools or languages can be used in this category: iPython Notebook, Hadoop, R, C++, Spark, MATLAB, or any other data analysis tool. The most innovative entry will win, and the winning solution will be published.
  2. Data Visualisation: Can you find the best way to visualise the movements of pedestrians around the City? Is it mapping, D3 charts, or a more interactive web-driven approach? How can you show the relative behaviours of groups of people, traffic, or hotspots? For example, could you write a training application for traffic officers, or build a dashboard for City planners? The best and most innovative idea will win!
  3. Mobile application: Do you want to combine modelling and visualisation so that mobile users can find things out about the City when they’re on the move? Will you deliver this through a combination of maps and text? Which audience will you target? The best entry wins. This is a Windows Phone competition and will have support from Nokia, who will help get the winning app marketed and published in the app store.

To help with the hackathon, they are bringing in some of the world’s foremost experts in Cloud and Data Science, including Richard Conway and Andy Cross, two Windows Azure MVPs based in London whose company, Elastacloud, specializes in Big Data consulting. They are both brilliant, and will be speaking and helping people throughout the hackathon, alongside experts from Hortonworks and Microsoft. This is a rare opportunity to talk to several Big Data experts in person, and to see how anyone can get up and running in hours solving large-scale data problems.

If you know nothing about Cloud and/or Big Data, this is your chance. Microsoft will be supplying cloud time for all attendees. Interested in “machine learning” or advanced analytics? Then come to the Microsoft Future Cities Big Data Hackathon!

This hackathon is hosted by Microsoft, Hortonworks, and Elastacloud. To sign up, click here.

Windows Azure at San Diego Code Camp – 27th and 28th of July 2013

July 15, 2013

There is a code camp in San Diego on July 27th and 28th that has a lot of really interesting sessions available. If you live in the area, you should check it out – there’s something for everyone. It also gives you an opportunity to talk with other developers, see what they’re doing, and make some connections.

I’m going to be traveling from the San Francisco Bay Area to San Diego to speak, as are some of my friends — Mathias Brandewinder, Theo Jungeblut, and Steve Evans.

For those of you who don’t know me, I’m VP of Technology for GoldMail (soon to be renamed PointAcross), and a Microsoft Windows Azure MVP. I co-run the Windows Azure Meetups in San Francisco and the Bay.NET meetups in Berkeley. I’m speaking about Windows Azure. One session is about Windows Azure Web Sites and Web Roles; the other one is about my experience migrating my company’s infrastructure to Windows Azure, some things I learned, and some keen things we’re using Azure for. Here are my sessions:

Theo works for AppDynamics, a company whose software has some very impressive capabilities to monitor and troubleshoot problems in production applications. He is an entertaining speaker and has a lot of valuable expertise. Here are his sessions:

Mathias is a Microsoft MVP in F# and a consultant. He is crazy smart, and runs the San Francisco Bay.NET Meetups. Here are his sessions:

Steve (also a Microsoft MVP) is giving an interactive talk on IIS for Developers – you get to vote on the content of the talk! As developers, we need to have a better understanding of some of the system components even though we may not support them directly.

Also of interest are David McCarter’s sessions on .NET coding standards and how to handle technical interviews. He’s a Microsoft MVP with a ton of experience, and his sessions are always really popular, so get there early! Yet another Microsoft MVP who is always informative and helpful, Jeremy Clark, has sessions on Generics, Clean Code, and Design Patterns.

In addition to these sessions, there are dozens of other interesting topics, like Estimating Projects, Requirements Gathering, Building cross-platform apps, Ruby on Rails, and MongoDB (the name of which always makes me think of mangos), just to name a few.

So register now for the San Diego Code Camp and come check it out. It’s a great opportunity to increase your knowledge of what’s available and some interesting ways to get things done. Hope to see you there!

Moving our Windows Azure SQL Databases for the Rebranding/AzureSDK2.0 project

July 14, 2013

As detailed in my last couple of posts, we have rebranded our company as PointAcross, which means we rebranded all of the products except the desktop Composer, which is still known as GoldMail. I created all new services and storage accounts with pointacross in the name in a new region, and everything is published in production and working fine.

Only one thing is left – the Windows Azure SQL Databases are still in US North Central. This is a disaster waiting to happen. It is a well-known fact that WASDs have issues with connections, and you always have to put in connection management code. Running your services in one data center with your database in another is just asking for trouble. So now I have to move the primary database as well as the tiny session state database used in a couple of our web applications.
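(By “connection management code,” I mean something along these lines – a bare-bones sketch with hypothetical names; a real retry policy such as the Transient Fault Handling Application Block is the grown-up choice.)

using System;
using System.Data.SqlClient;
using System.Threading;

public static class SqlRetry
{
    // Retry transient connection failures with a simple backoff before giving up.
    public static T Execute<T>(string connectionString, Func<SqlConnection, T> work, int maxAttempts = 4)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                using (SqlConnection conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    return work(conn);
                }
            }
            catch (SqlException)
            {
                if (attempt >= maxAttempts) throw;
                Thread.Sleep(TimeSpan.FromSeconds(2 * attempt));
            }
        }
    }
}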

Although we have somewhere around 15 services running in Azure, only four of them access the primary database. These are the services for our primary application (PointAcross, used to make messages), and the Players used to play those messages.

Some investigation revealed several ways to migrate the data.

1. Copy the database from one place to another.
2. Use the SQL Azure Migration Wizard on codeplex.
3. Use the bacpac feature to back up a WASD to blob storage, then to restore it to the new database server.
4. Use the wrapper that Redgate has written around the bacpac capability that makes it very easy to backup/restore a WASD from/to blob storage.

1. Copy the database

This is pretty simple. You can make a copy of your database using T-SQL. It’s as simple as logging into SQL Server Management Studio, connecting to the destination server, and running this command against the master database.

create database MyDatabaseBkp as copy of myserver.MyDatabase

After you do that, you will probably get back a message telling you it’s finished. What this means is that the query is finished, not the copy! It’s doing the copy in the background. There are two queries you can run against the system database to check the status of the database copy.

select * from sys.databases

This one shows you all of the databases and their status. Look at the [state_desc] column. It will say “ONLINE” if the copy is finished, or “COPYING” if it’s not.

The second command only shows running jobs, and will show a % completion in the [percent_complete] column. I read somewhere that this will update every 5 minutes, but don’t bet the farm on it. Mine went to 24%, stayed a long time, then jumped to 65%, and then finished only a few minutes later. So take it as an indication that something is happening, but don’t assume it’s definitive. Here’s the second command:

select * from sys.dm_database_copies
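If you’d rather poll from code than keep re-running that query by hand, here’s a minimal C# sketch (the connection string – which should point at the destination server’s master database – and the database name are yours to supply):

using System;
using System.Data.SqlClient;
using System.Threading;

static void WaitForDatabaseCopy(string masterConnectionString, string databaseName)
{
    while (true)
    {
        using (SqlConnection conn = new SqlConnection(masterConnectionString))
        {
            conn.Open();
            // If the copy no longer shows up in the DMV, it's finished (or failed);
            // check sys.databases for ONLINE to be sure.
            SqlCommand cmd = new SqlCommand(
                "SELECT percent_complete FROM sys.dm_database_copies " +
                "WHERE DB_NAME(database_id) = @name", conn);
            cmd.Parameters.AddWithValue("@name", databaseName);
            object pct = cmd.ExecuteScalar();
            if (pct == null || pct == DBNull.Value)
            {
                Console.WriteLine("No copy in progress.");
                return;
            }
            Console.WriteLine("Copy is {0}% complete.", pct);
        }
        Thread.Sleep(TimeSpan.FromMinutes(1));
    }
}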

Copying a database is fairly easy, and works great. Our database is about 8 GB, and it took 10-15 minutes to run. Don’t forget that you have to pay for the second database!

There is a major gotcha with this option: it only works if the source and destination are within the same data center. Since we’re moving our database from US North Central to US West, this won’t work for us.

For more information about this feature, check out this article.

2. Use the SQL Azure Migration Wizard

This application scripts all of the objects in the database and extracts the data to your local hard drive using BCP. Then it runs the script on the destination server and uploads the data to the new database. This means you end up downloading all of the data to your local machine, then uploading it to the new server. I didn’t really want to download 8 GB of data and then wait for it to upload again. If you have all the time in the world, this will probably work for you. We used it when we initially migrated our infrastructure to Windows Azure, and still use it occasionally to migrate new tables and other objects that are developed locally, but I didn’t want to have our systems down for as long as it would take to run this in production. For more information, check it out on CodePlex.

3. Use the bacpac feature

This is a good choice. Back up the database in US North Central to blob storage, then restore it to the new server in the US West region. For more information, check out this article. I would have stopped and figured out how to do this, but I found something easier.

4. Use the Redgate Cloud Services tool

It turns out that Redgate has written an application to let you backup your Windows Azure SQL Database to blob storage (or Amazon storage), and to restore it from storage back to Windows Azure. This looks a lot like a wrapper around the bacpac function to me. To check it out, click here.  This is what I used to move our primary database.

After creating an account, you can set up a job. The screen looks like this:

Choose to back up a SQL Azure database to Azure and you will see this screen:

On the left, fill in the Azure server, User name, and Password, and click the Test button attached to the Password field. If you got the information right, this will populate the list of databases, and you can select the one you want to back up.

Note the checkbox about creating a temporary copy. When I used this application, I created my own copy of the database using the Copy Database command explained in option 1, and then used this to back up the copy. This ensured that I didn’t have any conflicts or problems from the database being accessed while it was being backed up. We actually shut down all of the client applications while I did this, so that shouldn’t have been a problem, but I figured better safe than sorry. If you don’t want to create your own copy, you can check the box and this application will do it for you. Note that you still have to pay for that extra database just as if you had done it yourself.

On the right side, fill in the storage account name and access key and click Test. If you got the information right, it will populate the list of containers. Select the one you want to put your database backup in, and specify the file name of the backup. This can be anything you want to call it; it is not the name that will be used for the database when you restore it – it’s just a file name.

After getting everything the way you want it, click Continue and you will see this screen:

Now you can schedule the job to be run later, or you can just run it now. I chose to run it now, but it’s cool that Redgate offers this job scheduling feature – this means you can use this tool to make regular backups of your database to your blob storage account. It would be nice if they would let you put a date marker in the file name so you can keep multiple copies, but I don’t see any way to do that right now.

First it shows the job getting ready to start.

It will show progress as it’s running.

When it’s finished, it sends out an e-mail (unless you have turned the notifications off), and the FINISH TIME will show the date/time that it finished.

For my sample run, the text is showing up in red. That’s because this one failed. It wouldn’t back up our ASPState database. The error said, “Microsoft Backup Service returned an error: Error encountered during the service operation. Validation of the schema model for data package failed.” (It said a lot more, too, but I’ll spare you.)

I didn’t know how to fix it or what the problem was, so for this tiny database, I used the SQL Azure Migration Wizard to move it into production.

The Redgate Cloud Services tool worked great for our production database, which is around 8 GB. I did a dry run before running this in production. The backup dry run, around 9 p.m. PDT on a Monday night, took about an hour; the production backup, on a Friday night around 10:30 p.m. PDT, took 20 minutes.

So now my database is backed up.

After you back it up, you can restore it to any of your WASD servers in any data center. Here’s what the screen looks like in the Redgate Cloud Services.

This works just like the backup – fill in the information (don’t forget to click the Test buttons to make sure the other boxes are populated) and schedule a job or run it now.

Note that you specify the database name on the SQL Azure side. The database does not have to exist; it will be created for you. If it does exist, and you want to replace it, there’s a checkbox for that.

We did a dry run and a production run on the restore as well. The dry run, on a Monday night at 11:35 p.m. PDT, took 50 minutes. The production run, on a Friday night at 11 p.m., took 44 minutes. Much, much faster than using the SQL Azure Migration Wizard would have been.

Now what?

After I moved the database, I had to update the Azure Service configurations for the four services that access the database, and change the connection strings to point to the new database. Awesome Jack (our Sr. Systems Engineer who also does our builds) published them and we tried them out. After ensuring everything worked, Jack re-enabled access to all of the client applications, and we declared the Rebranding release completed.

One issue

We did have one issue. We have an account in our Windows Azure SQL Database, used by the services, that only has EXEC rights. This ensures that nobody puts dynamic SQL in a service call, and that if someone gets a copy of our Azure configuration, they can only execute stored procedures; they can’t log in and query our database. We had to set this account up again, because the login is defined in the master database. So we had to recreate the login on the target server, then run the setup queries against the restored database in order for the account to work. Not a big deal, but it’s a good thing we remembered to test this.
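Here’s a rough sketch of the kind of thing we had to re-run (the account name, password, and connection strings are hypothetical; the login goes in master on the new server, and the user plus the EXECUTE grant go in the restored database itself):

using System.Data.SqlClient;

private void RecreateExecOnlyAccount(string masterConnectionString, string databaseConnectionString)
{
  // CREATE LOGIN has to run against the master database of the new server.
  using (SqlConnection master = new SqlConnection(masterConnectionString))
  {
    master.Open();
    new SqlCommand("CREATE LOGIN svc_exec WITH PASSWORD = 'use-a-strong-password'", master)
      .ExecuteNonQuery();
  }

  // The user and its EXECUTE permission live in the restored database.
  using (SqlConnection db = new SqlConnection(databaseConnectionString))
  {
    db.Open();
    new SqlCommand("CREATE USER svc_exec FROM LOGIN svc_exec", db).ExecuteNonQuery();
    new SqlCommand("GRANT EXECUTE TO svc_exec", db).ExecuteNonQuery();
  }
}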

Summary

There are several ways to copy a Windows Azure SQL Database from one server to another. I used the Redgate Cloud Services tools, which I think are a wrapper around the bacpac feature. They were very easy to use, and worked perfectly for our production database. For our tiny session state database, we used the SQL Azure Migration Wizard. Our database is now moved, and the rebranding/upgrade project has been finished successfully.

Moving Azure blob storage for the Rebranding/AzureSDK2.0 Project

July 13, 2013

In my last post, I talked about the work required to upgrade all of the cloud services to SDK 2.0 and rebrand “goldmail” as “pointacross” in the code. After all these changes were completed and turned over to QA, the real difficulties began – moving the data. Let’s talk about moving blob storage.

Storage accounts and data involved

We set up new storage accounts in US West with “pointacross” in the name, to replace the “goldmail” storage in US North Central. We decided to leave the huge amounts of diagnostics data behind, because we can always retrieve it later if we need to, but it’s not worth the trouble of moving it just to have it in the same Windows Azure storage tables and blob storage as the new diagnostics. So here’s what we had to move:

GoldMail main storage (includes the content libraries and diagnostics)
GoldMail Cloud Composer storage
GoldMail Saved Projects

When you use the PointAcross application (previously known as the Cloud Composer), you upload images and record audio. We store those assets, along with some other application information, in blob storage. This yields many benefits, but primarily it means you can work on a PointAcross project logged in from one computer, then go somewhere completely different and log in, and all of your assets are still there. You can also save your project after you publish your message, or while working on it – these go into the Saved Projects storage. Those two storage accounts have a container for each customer who has used that application or feature. Fortunately, we’re just in the beginning phases of releasing the PointAcross project widely, so there are only about a thousand containers in each storage account.

The main storage includes a bunch of data we use here and there, and the content library assets. There are a lot of files, but only about 20 containers.

Option 1 : Using AZCopy

So how do we move this data? The first thing we looked at is AZCopy, from the Windows Azure Storage team. This is a handy-dandy utility, and we used it to test migrating the data from one storage account to the other. You run it from the command window. Here’s the format of the command:

AzCopy [1]/[2] [3]/[4] /sourcekey:[5] /destkey:[6] /S

    [1] is the URL to the source storage account, like http://mystorage.blob.core.windows.net
    [2] is the container name to copy from
    [3] is the URL to the target storage account, like http://mynewstorage.blob.core.windows.net
    [4] is the container name to copy to
    [5] is the key to the source storage account, either primary or secondary
    [6] is the key to the destination storage account, either primary or secondary
    /S means to do a recursive copy, i.e. include all folders in the container.

I set up a BAT file called DoTheCopy and substituted %1% for the container name in both places, so I could pass it in as a parameter. (For those of you who are too young to know what a BAT file is, it’s just a text file with a file extension of .bat that you can run from the command line.) This BAT file had two lines and looked like this (I’ve chopped off the keys in the interest of space):

ECHO ON
AzCopy http://mystorage.blob.core.windows.net/%1% http://mynewstorage.blob.core.windows.net/%1%

I called this for each container:

E:\AzCopy> DoTheCopy containerName

I got tired of this after doing it twice (I hate repetition), so I set up another BAT file to call the first one repeatedly; its contents looked like this:

DoTheCopy container1
DoTheCopy container2
DoTheCopy container3

The AZCopy application worked really well, but there are a couple of gotchas. First, when it sets up the container in the target account, it makes it private. So if you want the container to be private but the blobs inside to be public, you have to change that manually yourself. You can change it programmatically or using an excellent tool such as Cerebrata’s Azure Management Studio.

The second problem is that those other two storage accounts have over a thousand containers each. So now I either have to type all of those container names in (not bloody likely!), or figure out a way to get a list of them. So I wrote a program to get the list of container names and generate the BAT file. This creates a generic list full of the command lines, converts it to an array, and writes it to a BAT file.

//sourceConnectionString is the connection string pointing to the source storage account
CloudStorageAccount sourceCloudStorageAccount = 
    CloudStorageAccount.Parse(sourceConnectionString);
CloudBlobClient sourceCloudBlobClient = sourceCloudStorageAccount.CreateCloudBlobClient();
List<string> outputLines = new List<string>();
IEnumerable<CloudBlobContainer> containers = sourceCloudBlobClient.ListContainers();
foreach (CloudBlobContainer oneContainer in containers)
{
    string outputLine = string.Format("DoTheCopy {0}", oneContainer.Name);
    outputLines.Add(outputLine);
}
string[] outputText = outputLines.ToArray();
File.WriteAllLines(@"E:\AzCopy\MoveUserCache.bat", outputText);

That’s all fine and dandy, but what about my container permissions? So I wrote a program to run after the data was moved. This iterates through the containers and sets the permissions on every one of them. If you want any of them to be private, you have to hardcode the exceptions, or fix them after running this.

private string SetPermissionsOnContainers(string dataConnectionString)
{
  string errorMessage = string.Empty;
  string containerName = string.Empty;
  try
  {
    CloudStorageAccount dataCloudStorageAccount = CloudStorageAccount.Parse(dataConnectionString);
    CloudBlobClient dataCloudBlobClient = dataCloudStorageAccount.CreateCloudBlobClient();

    int i = 0;

    IEnumerable<CloudBlobContainer> containers = dataCloudBlobClient.ListContainers();
    foreach (CloudBlobContainer oneContainer in containers)
    {
      i++;
      containerName = oneContainer.Name;
      System.Diagnostics.Debug.Print("Processing container #{0} called {1}", i, containerName);

      CloudBlobContainer dataContainer = dataCloudBlobClient.GetContainerReference(containerName);
      //set permissions: keep the container private, make the blobs in it public
      BlobContainerPermissions permissions = new BlobContainerPermissions();
      permissions.PublicAccess = BlobContainerPublicAccessType.Blob;
      dataContainer.SetPermissions(permissions);
    }
  }
  catch (Exception ex)
  {
    errorMessage = string.Format("Exception thrown trying to change permission on container '{0}' "
        + "= {1}, inner exception = {2}",
      containerName, ex.ToString(), ex.InnerException.ToString());
  }
  return errorMessage;
}

Option 2: Write my own solution

Ultimately, I decided not to use AZCopy. By the time I’d written this much code, I realized it was just as easy to write my own code to move all of the containers from one storage account to another, and set the permissions as it iterated through the containers, and I could add trace logging so I could see the progress. I could also hardcode exclusions if I wanted to. Here’s the code for iterating through the containers. When getting the list of containers, if it is the main account, I only want to move specific containers. This is because I moved some that were static ahead of time. So for this condition, I just set up an array of container names that I want to process. For the other accounts, it retrieves a list of all containers and processes all of them.

private string CopyContainers(string sourceConnectionString, string targetConnectionString, 
  string accountName)
{
  string errorMessage = string.Empty;
  string containerName = string.Empty;
  try 
  {
    CloudStorageAccount sourceCloudStorageAccount = CloudStorageAccount.Parse(sourceConnectionString);
    CloudBlobClient sourceCloudBlobClient = sourceCloudStorageAccount.CreateCloudBlobClient();
    CloudStorageAccount targetCloudStorageAccount = CloudStorageAccount.Parse(targetConnectionString);
    CloudBlobClient targetCloudBlobClient = targetCloudStorageAccount.CreateCloudBlobClient();

    int i = 0;
    List<string> containersToDo = new List<string>();
    if (accountName == "mainaccount")
    {
      containersToDo.Add("container1");
      containersToDo.Add("container2");
      containersToDo.Add("container3");

      foreach (string oneContainer in containersToDo)
      {
        i++;
        containerName = oneContainer;
        System.Diagnostics.Debug.Print("Processing container #{0} called {1}", i, containerName);
        MoveBlobsInContainer(containerName, accountName, targetCloudBlobClient, sourceCloudBlobClient);
      }
    }
    else
    {
      IEnumerable<CloudBlobContainer> containers = sourceCloudBlobClient.ListContainers();                    
      foreach (CloudBlobContainer oneContainer in containers)
      {
        i++;
        containerName = oneContainer.Name;
        System.Diagnostics.Debug.Print("Processing container #{0} called {1}", i, containerName);
        MoveBlobsInContainer(containerName, accountName, targetCloudBlobClient, sourceCloudBlobClient);
      }
    }
  }
  catch (Exception ex)
  {
    errorMessage = string.Format("Exception thrown trying to move files for account '{0}', " +
      "container '{1}' = {2}, inner exception = {3}",
      accountName, containerName, ex.ToString(), ex.InnerException.ToString());
  }
  return errorMessage;
}

And here’s the code that actually moves the blobs from the source container to the destination container.

private string MoveBlobsInContainer(string containerName, string accountName, 
  CloudBlobClient targetCloudBlobClient, CloudBlobClient sourceCloudBlobClient)
{
  string errorMessage = string.Empty;
  try
  {
    long totCount = 0;
    //first, get a reference to the container in the target account, 
    //  create it if needed, and set the permissions on it 
    CloudBlobContainer targetContainer = 
      targetCloudBlobClient.GetContainerReference(containerName);
    targetContainer.CreateIfNotExists();
    //set permissions
    BlobContainerPermissions permissions = new BlobContainerPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Blob;
    targetContainer.SetPermissions(permissions);

    //get list of files in sourceContainer, flat list
    CloudBlobContainer sourceContainer = 
      sourceCloudBlobClient.GetContainerReference(containerName);
    foreach (IListBlobItem item in sourceContainer.ListBlobs(null, 
      true, BlobListingDetails.All))
    {
      totCount++;
      System.Diagnostics.Debug.Print("Copying container {0}/blob #{1} with url {2}", 
        containerName, totCount, item.Uri.AbsoluteUri);
      CloudBlockBlob sourceBlob = sourceContainer.GetBlockBlobReference(item.Uri.AbsoluteUri);
      CloudBlockBlob targetBlob = targetContainer.GetBlockBlobReference(sourceBlob.Name);
      targetBlob.StartCopyFromBlob(sourceBlob);
    }
  }
  catch (Exception ex)
  {
    errorMessage = string.Format("Exception thrown trying to move files for account '{0}', "
      + "container '{1}' = {2}, inner exception = {3}",
        accountName, containerName, ex.ToString(), ex.InnerException.ToString());
  }
  return errorMessage;
}

You could hook this up to a fancy UI, run it in a background worker, and pass progress back to the UI, but I didn’t want to spend that much time on it. I created a Windows Forms app with one button. When I clicked the button, it ran some code that set the connection strings and called CopyContainers for each storage account.
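The handler inside the form looked something along these lines (a sketch only; the account names and keys are placeholders):

private void btnCopy_Click(object sender, EventArgs e)
{
    // One source/target pair per storage account; repeat for the Composer
    // and Saved Projects accounts.
    string sourceConnectionString =
        "DefaultEndpointsProtocol=https;AccountName=goldmailmain;AccountKey=<source key>";
    string targetConnectionString =
        "DefaultEndpointsProtocol=https;AccountName=pointacrossmain;AccountKey=<target key>";

    string errorMessage = CopyContainers(sourceConnectionString, targetConnectionString, "mainaccount");
    MessageBox.Show(string.IsNullOrEmpty(errorMessage) ? "Done." : errorMessage);
}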

Did it work?

When we put everything in production – as our Sr. Systems Engineer, Jack Chen, published all of the services to PointAcross production – I ran this to move the data from the goldmail storage accounts to the pointacross storage accounts. It worked perfectly. The only thing left at this point was moving the Windows Azure SQL Databases (the database previously known as SQL Azure ;-) ).

Rebranding and Upgrading to Azure SDK 2.0 — Details, details

July 13, 2013

As I discussed in my last post, we at GoldMail are rebranding our company and services to PointAcross, and updating everything to SDK/Tools 2.0 at the same time. (No reason to do full regression testing twice. Plus, no guts, no glory!)

Setting up the Azure bits

For the rebranding, I decided to create all new services and storage accounts with “pointacross” in the name instead of “goldmail”. Yes, we could have CNAMEd the goldmail*.cloudapp.net URLs as pointacross, but there are several benefits to creating new services. For one thing, this removes any confusion about the services on the part of us in the tech department. Also, we can run two production systems in parallel until the DNS entries for the goldmail services redirect appropriately to the pointacross URLs.

Another issue is that our services are currently located in the US North Central data center, which is very full. I can’t create VMs there, and new Azure subscribers can’t set up services there. US North and South Central were the first two data centers in the US, so the hardware is older as well. At the rate my CEO is going, it seems likely that he will generate enough business that we will need to scale out in the next few months, and I was concerned about being able to do that with our services based in North Central. I don’t know if that’s a valid concern, but I figured better safe than sorry.

So I set up a new affinity group for US West, and created all of the new services and storage accounts there. I also took advantage of this opportunity to create a storage account just for diagnostics. We don’t use our main storage account for a lot of other things, but a separate diagnostics account is always advised, and this was a good time to take care of that.

Our Sr. systems engineer, Jack Chen, set up all the DNS entries for everything, and I set to work on updating the SDK and doing the rebranding.

Updating the SDK version

The next order of business was to update everything to Azure SDK 2.0. I downloaded and installed all of the updates, and installed the tools for Visual Studio 2010. 

Azure SDK/Tools 2.0 runs side-by-side with 1.8 just fine. You can open solutions that have cloud projects targeting 1.8 and have no problem. However, here’s something you need to know: Once you install SDK/Tools 2.0, you can no longer create a NEW cloud project targeting 1.8. I installed this SDK just to check out the impact of the breaking changes in the Storage Client Library, and when I needed to add a cloud project to an existing (SDK 1.8) solution, there was no way to tell it to target anything except SDK 2.0. So if you need to add a new cloud project and the rest of the projects in that solution are 1.8 or before, you have to uninstall SDK 2.0 in order to create your cloud project.

In the screenshots below, I am using VS2010. We haven’t upgraded to VS2012 because we are always working like wildfire on the next release, and the TFS Pending Changes window was just a pain in the butt we didn’t want to deal with yet. Procrastination worked in my favor this time (that’s a first!) – they have changed the Pending Changes window in VS2013, but we can’t use that because they haven’t provided Azure Tools for the VS 2013 Preview yet. Argh!

So how do you update a current solution? Right-click on each cloud project in the solution and select Properties. You should see this:

Click the button to upgrade to SDK 2.0. You will be led through a wizard to do the upgrade – it asks if you want to back up the current project first, and offers to show you the conversion log.

We have multiple cloud projects in each solution – one for staging, one for production, and one for developers. (Click here to read why.) So we had to convert each project.

The next thing to do is update the NuGet packages. You can right-click on the Solution and select “Manage NuGet packages for solution”, or do it one project at a time. I did mine one project at a time for no particular reason other than wanting to be methodical about it (and being a little anal-retentive). You will be prompted with a list of packages that can be updated.

For this exercise, you need to update Windows Azure Storage and the Windows Azure Configuration Manager. When you do this, it updates the references in the project(s), but doesn’t change any code or using statements you may have. Try doing a build and see what’s broken. (F5 – most people’s definition of “unit test”. ;-) )

Handling breaking changes

For us, since we were still using Storage Client Library 1.7, there were a number of things I had to fix.

1. I configure our diagnostics programmatically. To do this in 1.7 and before, I grab an instance of the storage account in order to get an instance of the RoleInstanceDiagnosticManager. Here is the old code.

string wadConnectionString = "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString";   
CloudStorageAccount storageAccount = 
    CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(wadConnectionString));
RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = 
    storageAccount.CreateRoleInstanceDiagnosticManager(
    RoleEnvironment.DeploymentId, 
    RoleEnvironment.CurrentRoleInstance.Role.Name, 
    RoleEnvironment.CurrentRoleInstance.Id);

They have removed this dependency, so I had to change this code to instantiate a new instance of the diagnostic manager and pass in the connection string to the storage account used for diagnostics. Here is the new code.

string wadConnectionString = RoleEnvironment.GetConfigurationSettingValue
    ("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");
RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = 
    new RoleInstanceDiagnosticManager(
    wadConnectionString,
    RoleEnvironment.DeploymentId, 
    RoleEnvironment.CurrentRoleInstance.Role.Name, 
    RoleEnvironment.CurrentRoleInstance.Id);

2. In the startup of our web roles, we have an exception handler that writes any startup problems to blob storage (because it can’t write to diagnostics at that point). This code looks like this:

CloudStorageAccount storageAccount = 
    CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
var container = blobStorage.GetContainerReference("errors");
container.CreateIfNotExist();
container.GetBlobReference(string.Format("error-{0}-{1}",
    RoleEnvironment.CurrentRoleInstance.Id, DateTime.UtcNow.Ticks)).
    UploadText("Worker Role Startup Exception = " + ex.ToString());

They changed CreateIfNotExist() to CreateIfNotExists(), and you now have to specify the type of blob used, so when I get the reference to the blob, I have to call GetBlockBlobReference. Also, UploadText has been removed. More on that in a minute. This code becomes the following:

CloudStorageAccount storageAccount = 
    CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
var container = blobStorage.GetContainerReference("errors");
container.CreateIfNotExists();
container.GetBlockBlobReference(string.Format("error-{0}-{1}",
    RoleEnvironment.CurrentRoleInstance.Id, DateTime.UtcNow.Ticks)).
    UploadText("Worker Role Startup Exception = " + ex.ToString());

3. As noted above, you have to change the more generic CloudBlob, etc., to specify the type of blob. So I changed all occurrences of CloudBlob to CloudBlockBlob and GetBlobReference to GetBlockBlobReference.

4. I had a method that checked whether a blob existed by fetching its attributes and catching the exception thrown. They added Exists() as a method on the blobs, so I’ve replaced all uses of my method with blob.Exists() and removed the method entirely.
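For what it’s worth, the new check is a one-liner. A quick sketch (the container and blob names are made up, and connectionString is assumed):

CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
CloudBlockBlob blob = container.GetBlockBlobReference("somefile.txt");

if (blob.Exists())   // no more FetchAttributes-and-catch-the-404 workaround
{
    // safe to download, delete, etc.
}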

5. Now let’s talk about Upload/Download Text, Upload/Download File, and Upload/Download ByteArray. They removed these methods from the CloudBlockBlob class, and now only support Upload/Download Stream. So you can rewrite all your code, or you can get the CloudBlobExtensions written by Maarten Balliauw. I can’t find my link to his, so I’ve posted a copy of them here. Just change the namespace to match yours, and voila!
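If you’d rather not take a dependency on someone else’s extensions, this is the general shape of what they do – a sketch, not Maarten’s actual code – wrapping the stream methods that survived the change:

using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage.Blob;

public static class CloudBlockBlobExtensions
{
    public static void UploadText(this CloudBlockBlob blob, string text)
    {
        using (MemoryStream stream = new MemoryStream(Encoding.UTF8.GetBytes(text)))
        {
            blob.UploadFromStream(stream);
        }
    }

    public static string DownloadText(this CloudBlockBlob blob)
    {
        using (MemoryStream stream = new MemoryStream())
        {
            blob.DownloadToStream(stream);
            return Encoding.UTF8.GetString(stream.ToArray());
        }
    }
}

With something like that in scope, the startup exception handler shown above compiles unchanged.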

6. I had to add a using statement for Microsoft.WindowsAzure.Storage.Blob everywhere I use blobs, and the corresponding one for queues where I use queues. I had to add a using statement for Microsoft.WindowsAzure.Storage anywhere I was accessing the CloudStorageAccount. Basically, I had to make sure I had using clauses for the new namespaces wherever I was using them.
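In practice that boiled down to having using directives like these at the top of the affected classes:

using Microsoft.WindowsAzure.Storage;        // CloudStorageAccount
using Microsoft.WindowsAzure.Storage.Blob;   // CloudBlobClient, CloudBlockBlob, containers
using Microsoft.WindowsAzure.Storage.Queue;  // CloudQueueClient, CloudQueue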

7. I also used the “Organize Usings/Remove and Sort” context menu option to clean up the using clauses in every class I changed. This removed the old Microsoft.WindowsAzure.StorageClient references.

That was the bulk of the work for updating the Storage Client Library. Once I got the list from going through the first application, doing the others was fairly simple, as I knew what I was looking for.

Gaurav Mantri (an amazing Windows Azure MVP who is always very helpful) has a very good blog series about updating the Storage Client Library, discussing blobs, queues, and table storage.

After I fixed all the breaking changes, I made the rebranding changes. In some cases, this was as easy as just changing “goldmail.com” to “pointacross.com”, but I also had to search the entire code base for “goldmail” and decide how to change each one of them, especially in the case of error messages returned to the client applications.

Every occurrence of “goldmail” had to be assessed, and I had to make sure any “secondary references” were updated. For example, the desktop application (now called GoldMail) has some content hosted in web controls that is located on our website, so I had to be sure those bits were updated in the website. And finally, I updated the storage keys and storage account names in the Azure configurations, and any URLs or occurrences of “goldmail” that I found there.

RDP, SSL, and HTTPS

We purchased a new SSL certificate for pointacross.com, which I uploaded to all of the services for RDP access, and for the HTTPS services. Then I went through and updated the certificate thumbprints specified in the cloud projects. (I have never managed to use the “browse” option for certificates in the Visual Studio UI – it can never seem to find the certificate in my certificate store on my machine, so I update the thumbprints in the Azure configuration, which works just fine.)

After doing this, I right-clicked on each cloud project and selected Manage RDP Connections, then put the password in again. I didn’t do this with the first service, and we couldn’t RDP into it. Oops! I suspect it uses the certificate to encrypt the RDP information and store it in the Azure configuration, and putting it in again after changing the certificate forces it to re-encrypt with the new certificate.

And finally, we set up new publishing profiles and published everything to the new PointAcross staging services, and turned the whole thing over to QA.

Once more, unto the breach. –Shakespeare, Henry V

After everything was tested and checked, we had a release meeting – at night, to minimize customer disruption. We shut down access to our client applications, and then published the new cloud services to production. We also moved the data in the storage accounts. After we tried everything out, we redirected the goldmail DNS entries that were likely to be “out in the wild” to the pointacross services, and deleted the rest of them. After a few days went by, we shut down and deleted the goldmail services, and then removed them from our Azure subscription. We are now rebranded and updated.

In my next post, I’ll talk about how I moved the cloud storage from the goldmail storage accounts to the pointacross storage accounts.

Rebranding GoldMail and keeping up with Azure SDK updates

June 30, 2013

I am VP of Technology for a small ISV called GoldMail; all of our applications run on Windows Azure. We are rebranding – changing from “GoldMail” to “PointAcross” – and we will change the company name at some point as well. People think “GoldMail” is an e-mail product (it isn’t), and PointAcross (as in, we help you get your point across) is a more focused brand. We are changing everything except our desktop product, which will remain GoldMail, and will eventually be deprecated. We have been working on a web version of our desktop product for quite some time now, and we’re calling it PointAcross. With our web application, we can accommodate both Windows and Mac users. We’ve also simplified the usability.

Our infrastructure – over a dozen cloud services – runs in Windows Azure. With each update, we have to update the SDK in every project, do regression testing, and then release them to production. Our product roadmap is packed; we are pushing out releases as fast as we can, with some big product features coming in the next few months. The trick is to minimize disruption by combining the SDK updates with new releases. This works pretty well, although some of our products have faster release cycles than others, so not everything gets updated at the same time.

As you know if you’ve read my post about our implementation of Windows Azure, I am a glutton for punishment, so I’ve committed to rebranding everything and upgrading everything to SDK 2.0 in the same release. We updated to SDK 1.8 a few weeks ago, but we are still targeting Storage Client Library 1.7, so we have breaking changes to contend with. Did I mention I committed to completing all of those changes in two weeks?

Now, there are two ways to do a rebranding project of this size. The easy way is to change all the public-facing URLs by just adding DNS entries that point to the goldmail services, storage accounts, etc., using pointacross in the names. This leads to major confusion later, and makes it difficult to get rid of any goldmail artifacts down the line. Plus, if we add more services, do we brand them as goldmail internally (to be consistent), or switch to pointacross?

The harder way is to set up all new services and storage accounts in Windows Azure, and add new DNS entries for them. If we do it this way, we have to change all of the URLs in the configurations (and code) for all of the services, and all of the storage account names and keys we have in the Azure service configurations. But in the end, we end up with an infrastructure that is fully branded with the new name, and there won’t be any slip-ups where the old name will be displayed. Also, we can set up the new services, publish everything, and run the two production systems in parallel, then cut over when we’re satisfied that everything is working correctly. After cutting over, we change any goldmail DNS entries that might be used “out in the wild” (like in HTML pointing to our embedded player) to point to the PointAcross addresses.

Of course, I’ve chosen the more difficult way of rebranding. I think of it as “the right way” – I hate to do anything halfway, and we’ll never have time to go back and rebrand internally, so if we don’t do it now, we’ll end up with goldmail services forever. I’m also taking advantage of this opportunity to move all of our services and storage accounts from the North Central data center to the US West data center. I have several reasons for doing this.

First and foremost, the North Central data center is full. If you don’t already have services running in North Central, you can’t select it as an option. This also means that we can’t create VMs and use IaaS in the same location as our other services, because they don’t have that feature enabled for North Central. My second reason is that the newer data centers have newer, faster hardware, and our applications will gain performance with no effort on our part. Well, no effort except for the huge effort of moving everything, but that’s a one-time hit, versus the performance gain for our customers.

The other thing I will do is add a new storage account for diagnostics. When we started, we only used one storage account, because our use of Azure Blob Storage was almost non-existent. We have added some usage for blob storage, so it makes sense to use our main storage account for that, and create a new storage account for the diagnostics. Also, we have 2-1/2 years of diagnostics in the storage account that we don’t need to be retaining and paying for, and we can let it go when we delete the old storage account some time down the line.

In my next post, I’ll talk more about this project and how we’re going to move data from one storage account to another, as well as any issues I hit upgrading to SDK 2.0. In the meantime, if you have any advice or comments, please leave them in the comments section!

Update, 3/8/2014: GoldMail is now doing business as PointAcross.

Problem debugging WCF service in Azure compute emulator

June 30, 2013

I’m using VS2010 SP1 with all updates installed, Azure SDK 1.8, and Storage Client Library 1.7. Storage Client Library 2.0 (which shipped with SDK 1.8) has breaking changes in it, and it’s going to take a few days to update and test all of our services, so we’ve been waiting for SDK 2.0 (which came out not too long ago).

I want to talk about the problems I’m having running my WCF services in the compute emulator. I’m curious to see if anybody else is having these kinds of problems, and wanted to see if upgrading to SDK 2.0 fixes the problems.

First, I have a startup task in my service that registers some dll’s. To run my WCF services in the Compute Emulator, I have to comment out that startup task in the csdef file. I also change the number of instances of the web and worker roles to 1, to make it run faster. I change the web.config to have debug=”true” so I can debug the code. I hit F5.

I am running the staging Azure cloud project in my solution, so this runs against blob storage and the SQL database in Azure, not on the local machine, so that complication is removed. When I hit F5 to run the application in the compute emulator, I get this:

This is fairly new. I’ve been having problems for several months, but this message is a recent addition. I also see it when running any of our web applications in the compute emulator. I do have “Launch browser for endpoint” checked, so it does open IE. I think I started seeing this message when I updated to IE10, but I couldn’t swear to it.

I click Okay. Now my service is running in the local compute emulator, and I can run my client that calls it and test the calls and stop it at the breakpoints and look at what’s going on, etc. I hit Stop in Visual Studio to stop the service so I can make some changes.

I make changes and hit F5 to run the service in the compute emulator again.  At some point (sometimes it’s the first time, sometimes it’s after a couple of iterations of changes), I get this error:

It worked just fine a minute ago, so I know the file is right. And once I get this error, the service won’t run right, and I can’t debug into it. If I hit Continue (just for grins), the dialog goes away and then pops up again. If I hit OK, it closes the dialog, but doesn’t stop the service – I have to do that in VS. If I stop it and try to run it again, I get the error again. To get it to run again in the compute emulator, I have to clean the solution, wait a few seconds, and then I can run it.

At this point, I can no longer make changes, do a build to make sure it compiles, and then run it. From this point onward, I have to clean the solution and just run it. I have this happening on both of my development machines, so if it’s something about my setup, it’s on both of them.

In case you’re wondering, I did consider upgrading to VS2012, but the “updates” to the Pending Changes window just add one more layer of difficulty to our work, so my team has decided to wait for VS2013, where they have responded to that feedback from me (and many, many others).

Next: I’m upgrading to SDK and Tools 2.0, and am interested to see if the problem is going to be fixed. If you’ve had this problem and have managed to fix it, please post your solution in the comments.

Fun stuff to do if you’re in San Francisco for the BUILD conference

June 21, 2013

Is there any tourist-y stuff to do in San Francisco?

I have to start by saying what’s fun for some people will not be fun for everyone. I’m not going to repeat all the San Francisco treats (such as Rice-a-Roni) for you; that’s what guidebooks are for. Everyone knows about Fisherman’s Wharf, Pier 39, Alcatraz, the Golden Gate Park, the California Academy of Sciences at the park, and of course the famous Golden Gate Bridge (the best view of which is from the north side of the bridge, from the battlements in the Marin Headlands). For people who like to shop, the Westfield Center is on Market and Powell St, and Union Square is two blocks up Powell.

There’s also the Letterman Center for the Digital Arts in the Presidio, home to Lucasfilm and ILM. (Take your picture with the Yoda fountain!). If you have a car, you can drive north on 101, and take the Lucas Valley Drive exit and go west to Nicasio, and drive by the entrance to Skywalker Ranch. (I’m not posting the # here, I don’t want the double-L’s coming after me (the Lucasfilm Lawyers)). (Did you notice I closed all of my parentheses correctly? Good to know that learning LISP was relevant to my life.) Oh, by the way, you can’t see anything from the entrance, and they have a lot of security cameras, so don’t try climbing over the fence and running for the main house. (Don’t ask.)

Any tech events going on around \\build?

So let’s talk about fun things to do if you’re a tech person coming to \\build – you know, the parties, the events happening at the same time, where you can see people you haven’t seen since the last conference or MVP summit? Here’s a list I managed to cobble together. If you know of any I’ve missed, please add them in the comments so I can go to them, too. Be sure to check the event pages themselves in case there are any changes after I post this.

Monday, 24 June

  • Microsoft .NET client-side development futures / panel discussion. Microsoft offices, SF, 6:30 p.m.
    Discuss Microsoft .NET client-side development and the future thereof with Laurent Bugnion and Ward Bell (both are Silverlight MVPs). There will also be one or two guys from Xamarin joining in. More info here.
  • Preparing applications for production environments. Microsoft offices, Mountain View, 6:30 p.m.
    You need a car to get from SF to this meetup in Silicon Valley about preparing applications for production environments. More info here.

Tuesday, 25 June

  • Vittorio Bertocci speaking about Identity/AD/ACS and Windows Azure. Microsoft offices, SF, 6:30 p.m. 
    Come see Vittorio Bertocci, a superstar from Microsoft who is the expert in Identity/AD/ACS in Windows Azure! He’s always entertaining and informative, and great at answering questions. This event is kindly being sponsored by AppDynamics, so there will be pizza and drinks; please sign up ahead of time so we get enough pizza! More info here.
  • Bay Area F# User Group meetup. Microsoft offices, SF, 6:30 p.m.
    Meetup of the Bay Area F# user group. More info here.
  • Xamarin Welcome Party 7:00-midnight
    This is conveniently about a block from the Microsoft offices, and I suspect their numbers will coincidentally increase at about the time the two meetups end. More info here.

Wednesday, 26 June

  • Glenn Block speaking about scriptcs. GitHub HQ, SF, 7:00 p.m.
    Come see Glenn Block from Microsoft talk about scriptcs at the second SF .NET User Group (yes, there are two, don’t ask). GitHub is on 4th St; I hear they are letting in the first 55 people who have RSVP’d to the meetup. More info here.
  • \\Build Welcome Reception
    If the pattern of the past few \\build conferences holds, the \\build welcome reception will be Wednesday night. I’ll post more information when I find out if I’m right or not.

Thursday, 27 June

  • Deep Fried Bytes party. Thirsty Bear Brewing Co., South of Market, 8:00-10:00 p.m.
    To get tickets to this, you have to track down Chris Woodruff or Keith Elder at the \\build conference. More info here.
  • \\Build Attendee Party
    This is another educated guess. If the pattern holds, there will be an Attendee party on Thursday night. I’ll post details here when/if I get them!

How do I find the Microsoft office in San Francisco?

Several of these events are at the Microsoft offices in San Francisco. They are very generous with their space, and we who run the meetups and user groups really appreciate their support, especially that of Bruno Terkaly with DPE for hosting all of our SF meetups.

The office is about two blocks from the Moscone Center, where the \\build conference is being held, on Market Street where Powell runs into Market. Of course, they don’t have a big sign on the street that says Microsoft; you have to be “in the know” to find it. Fortunately, Microsoft Research (apparently in the same location) has a very nice page here that shows you where it is.

Is Rice-a-Roni really the San Francisco Treat they claim it is?

Yes it is. Do you have the Rice-a-Roni song in your head yet?

