Archive for July, 2012

Azure for Developers Tutorial Step 7: Use Table Storage instead of a SQL Database

July 8, 2012

This is the seventh and final step of the Azure for Developers tutorial, in which we set up a WCF service running in Azure to provide CRUD operations to a client application. For more information, please check out the Introduction.

We have a WCF service running in a web role that reads from and writes to a SQL Database. It submits messages to an Azure queue, and there is a worker role that retrieves the entries from the queue and writes them to blob storage. We have the diagnostics working, and we have a client that calls the service.

Why do I care?

If you have a lot of data, it’s much less expensive to store it in Windows Azure Tables than in a SQL Database. But table storage is like the indexed sequential flat files of days of yore – there are no secondary indices. You get to define a partition key for your table; Microsoft tries to keep all of the data in a partition together. You don’t want one partition holding all of your millions of records – that is not efficient. But you might split the data by what country your customer is in, or by a range of customer IDs, or something like that. You can also define a row key, which, when combined with the partition key, makes up the primary key for the table. So if country were your partition key, the row key might be the customer ID, for example.
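
For example, here’s a minimal sketch (not part of this tutorial’s code) of an entity keyed that way, with the country as the partition key and the customer ID as the row key. The class and property names are just for illustration; the DataServiceKey attribute is explained further down.

[DataServiceKey("PartitionKey", "RowKey")]
public class CustomerByCountry
{
  public string PartitionKey { get; set; }   //e.g. "USA" -- keeps one country's customers together
  public string RowKey { get; set; }         //e.g. the customer ID, unique within the partition
  public string FirstName { get; set; }
  public string LastName { get; set; }
}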

You can store different kinds of data in the same table, but this is not a good design idea, as it will confuse the people filling in for you when you’re on vacation.

Let’s see the code…

Let’s add a class and write some code to replicate the same calls we make to the SQL Database, but use table storage instead. For GetCustomerList, we are returning a dataset, so we’ll create the dataset programmatically so we don’t have to make any changes to our client for it to run against Table Storage instead of a SQL Database.

To access table storage, we will associate table entities with a model class called Customer and use a context to track instances of that class, which represent the entities to be inserted into the table or retrieved from it.

Right-click on References in the CustomerServicesWebRole and select “Add Reference”. Go to the .NET tab, find System.Data.Services.Client, select it, and click OK.

Now right-click on the CustomerServicesWebRole and select Add Class. Call the class Customer. This is going to be our data model class definition. First, let’s add the basic properties:

public string FirstName { get; set; }
public string LastName { get; set; }
public string FavoriteMovie { get; set; }
public string FavoriteLanguage { get; set; }

Now let’s add the properties required for table storage. You need to have properties for the PartitionKey and RowKey. These two combined make up the primary key. I’m going to make my partition key “customer”, which might lead one to believe that I’m going to put customers and something else in the same table. Let’s assume that I have accounts for customers and accounts for employees, and the fields are the same for both types of data. I’m only going to use this model for customers, though, so I’m going to set the partition key when instantiating a new object.

private readonly string partitionKey = "customer";
public string PartitionKey { get; set; }
public string RowKey { get; set; }

Now we need a default constructor, and a constructor that accepts parameters. In the parameterized constructor, I set the partition key and the row key. I’m using the first name plus the last name as the row key. I realize this is innately stupid, but I just want a simple example. When you write something you’re actually going to use in production, pick your partition key and row key carefully.

public Customer() { }

public Customer(string firstName, string lastName, string favoriteMovie, string favoriteLanguage)
{
  PartitionKey = partitionKey;
  RowKey = firstName + " " + lastName;

  FirstName = firstName;
  LastName = lastName;
  FavoriteMovie = favoriteMovie;
  FavoriteLanguage = favoriteLanguage;
}

Now our class needs an attribute to specify the primary key for our entities:

  [DataServiceKey("PartitionKey", "RowKey")]
  public class Customer

And we need a using statement:

using System.Data.Services.Common;

That takes care of the Customer class. Now we need a class that will replicate the classes that access the SQL database. These are relatively short, so let’s put all of them in one class. Right-click on the web role project and select Add Class. Call the class TableStorageMethods.

Add these using statements:

using Microsoft.WindowsAzure.StorageClient;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using System.Diagnostics;
using System.Data.Services.Client;
using System.Data;

Next, add some private variables and a constructor. We need a variable for the table client we’ll use to access table storage, and I’m putting the table name in a private variable rather than hardcoding it.

private CloudTableClient cloudTableClient;
string tableName = "customer";

public TableStorageMethods()
{
  //get a reference to the cloud storage account, and then make sure the table exists
  CloudStorageAccount cloudStorageAccount = 
    CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
  cloudTableClient = cloudStorageAccount.CreateCloudTableClient();
  cloudTableClient.CreateTableIfNotExist(tableName);
}

We need four methods. First, let’s look at AddCustomer. You have to get the data service context object, and then add the record and save the changes.

internal string ST_AddCustomer(string firstName, string lastName,
  string favoriteMovie, string favoriteLanguage)
{
  Trace.TraceInformation("[AddCustomer] called. FirstName = {0}, LastName = {1}, Movie = {2}, "
    + "Language = {3}", firstName, lastName, favoriteMovie, favoriteLanguage);

  Customer cust = new Customer(firstName, lastName, favoriteMovie, favoriteLanguage);

  string errorMessage = string.Empty;

  try
  {
    //add the record to the table
    TableServiceContext tableServiceContext = cloudTableClient.GetDataServiceContext();
    tableServiceContext.AddObject(tableName, cust);
    tableServiceContext.SaveChanges();
  }
  //you might want to handle these two exceptions differently
  catch (DataServiceRequestException ex)
  {
    errorMessage = "Error adding entry.";
    Trace.TraceError("[ST_AddCustomer] firstName = {0}, lastName = {1}, exception = {2}", 
      firstName, lastName, ex);
  }
  //this exception could be caused by a problem with the storage account
  catch (StorageClientException ex)
  {
    errorMessage = "Error adding entry.";
    Trace.TraceError("[ST_AddCustomer] firstName = {0}, lastName = {1}, exception = {2}", 
      firstName, lastName, ex);
  }
  //general catch
  catch (Exception ex)
  {
    errorMessage = "Error adding entry.";
    Trace.TraceError("[ST_AddCustomer] firstName = {0}, lastName = {1}, exception = {2}", 
      firstName, lastName, ex);
  }
  return errorMessage;
}

We need a method to retrieve the favorites for a specific customer. We’re using a LINQ query to retrieve the record with a matching partition key and row key.

internal string ST_GetCustomerFavorites(out string favoriteMovie, out string favoriteLanguage,
  string firstName, string lastName)
{
  Trace.TraceInformation("[GetCustomerFavorites] called. FirstName = {0}, LastName = {1}", 
    firstName, lastName);
  string errorMessage = string.Empty;
  favoriteMovie = string.Empty;
  favoriteLanguage = string.Empty;

  Customer cust = new Customer(firstName, lastName, string.Empty, string.Empty);

  try
  {
    TableServiceContext tableServiceContext = cloudTableClient.GetDataServiceContext();
    IQueryable<Customer> entities = (from e in tableServiceContext.CreateQuery<Customer>(tableName)
                                     where e.PartitionKey == cust.PartitionKey && e.RowKey == cust.RowKey
                                     select e);

    Customer getCust = entities.FirstOrDefault();
    if (getCust != null)
    {
      favoriteMovie = getCust.FavoriteMovie;
      favoriteLanguage = getCust.FavoriteLanguage;
    }
    else
    {
      errorMessage = string.Format("Record not found for '{0}' '{1}'.", firstName, lastName);
    }
  }
  catch (Exception ex)
  {
    Trace.TraceError("[ST_GetCustomerFavorites] firstName = {0}, lastName = {1}, exception = {2}", 
      firstName, lastName, ex);
    errorMessage = "Error retrieving data.";
  }
  return errorMessage;
}

We need a method to update the favorite movie and favorite language for a specific person:

internal string ST_SetCustomerFavorites(string firstName, string lastName,
  string favoriteMovie, string favoriteLanguage)
{
  Trace.TraceInformation("[SetCustomerFavorites] FirstName = {0}, LastName = {1}, Movie = {2}, "
    + "Language = {3}", firstName, lastName, favoriteMovie, favoriteLanguage);

  string errorMessage = string.Empty;

  Customer cust = new Customer(firstName, lastName, favoriteMovie, favoriteLanguage);

  try
  {
    TableServiceContext tableServiceContext = cloudTableClient.GetDataServiceContext();
    IQueryable<Customer> entities = 
      (from e in tableServiceContext.CreateQuery<Customer>(tableName)
       where e.PartitionKey == cust.PartitionKey && e.RowKey == cust.RowKey
       select e);

    Customer entity = entities.FirstOrDefault();
    if (entity == null)
    {
      errorMessage = string.Format("Record not found for '{0}' '{1}'.", firstName, lastName);
    }
    else
    {
      entity.FavoriteLanguage = favoriteLanguage;
      entity.FavoriteMovie = favoriteMovie;

      tableServiceContext.UpdateObject(entity);
      tableServiceContext.SaveChanges();
    }

  }
  catch (Exception ex)
  {
    Trace.TraceError("[ST_SetCustomerFavorites] FirstName = {0}, LastName = {1}, ex = {2}", 
      firstName, lastName, ex);
    errorMessage = "Error setting customer favorites.";
  }
  return errorMessage;
}

And lastly, we need a method to get the list of customers. I don’t want to change my client application based on the data source, and the SQL Database method returns a dataset, so I’ve written this one to also return a dataset.

internal string ST_GetListOfCustomers(out DataSet customers)
{
  Trace.TraceInformation("[GetListOfCustomers] called.");
  string errorMessage = string.Empty;

  //since the SQL Azure version returns a dataset, create a dataset and return it.
  //this way you don't have to change the client code
  customers = new DataSet();
  DataTable dt = new DataTable();
  DataColumn wc = new DataColumn("ID", typeof(Int32));
  wc.AutoIncrement = true;
  wc.AutoIncrementSeed = 1;
  wc.AutoIncrementStep = 1;
  dt.Columns.Add(wc);

  dt.Columns.Add("FirstName", typeof(String));
  dt.Columns.Add("LastName", typeof(String));
  dt.Columns.Add("FavoriteMovie", typeof(String));
  dt.Columns.Add("FavoriteLanguage", typeof(String));

  try
  {
    //retrieve the list of customers
    TableServiceContext tableServiceContext = cloudTableClient.GetDataServiceContext();
    DataServiceQuery<Customer> dataServiceQuery = 
      tableServiceContext.CreateQuery<Customer>(tableName);
    IEnumerable<Customer> entities = 
      dataServiceQuery.Where(e => e.PartitionKey == "customer").AsTableServiceQuery<Customer>();
    if (entities != null)
    {
      //add the entries to the DataTable
      foreach (Customer cust in entities)
      {
        DataRow newRow = dt.NewRow();
        newRow["FirstName"] = cust.FirstName;
        newRow["LastName"] = cust.LastName;
        newRow["FavoriteMovie"] = cust.FavoriteMovie;
        newRow["FavoriteLanguage"] = cust.FavoriteLanguage;
        dt.Rows.Add(newRow);
      }
    }
    else
    {
      Trace.TraceError("[ST_GetListOfCustomers] No rows found in table.");
      errorMessage = "No rows found in table.";
    }
  }
  catch (Exception ex)
  {
    Trace.TraceError("[ST_GetListOfCustomers] ex = {0}", ex);
    errorMessage = "Error getting list of customers.";
  }
  //add the data table to the dataset
  customers.Tables.Add(dt);

  return errorMessage;
}

Now we need to change our service to call the TableStorageMethods instead of the SQL Database methods. Let’s put in a toggle that we can change back and forth.

Open CustomerServices.svc in the web role and add an enumeration under the private variables for the queue.

public enum DataBaseType { sqlazure, tablestorage }
private DataBaseType currentDataBase = DataBaseType.tablestorage;

Now let’s change each method to check the value of currentDataBase and call the appropriate routine. When I defined the names of the methods for table storage, I used the same names as the SQL Database methods but with “ST_” prefixed to them so I can easily change these.

In GetFavorites, change this:

CustomerFavorites cf = new CustomerFavorites();
errorMessage = cf.GetCustomerFavorites(out favoriteMovie, out favoriteLanguage, 
  firstName, lastName);

to this:

if (currentDataBase == DataBaseType.sqlazure)
{
  CustomerFavorites cf = new CustomerFavorites();
  errorMessage = cf.GetCustomerFavorites(out favoriteMovie, out favoriteLanguage,
    firstName, lastName);
}
else
{
  TableStorageMethods tsm = new TableStorageMethods();
  errorMessage = tsm.ST_GetCustomerFavorites(out favoriteMovie, out favoriteLanguage,
    firstName, lastName);
}

We’ll follow the same pattern for the rest. In UpdateFavoritesByName, change this:

CustomerFavoritesUpdate cfu = new CustomerFavoritesUpdate();
errorMessage = cfu.SetCustomerFavorites(firstName, lastName, favoriteMovie, favoriteLanguage);

to this:

if (currentDataBase == DataBaseType.sqlazure)
{
  CustomerFavoritesUpdate cfu = new CustomerFavoritesUpdate();
  errorMessage = cfu.SetCustomerFavorites(firstName, lastName, favoriteMovie, favoriteLanguage);
}
else
{
  TableStorageMethods tsm = new TableStorageMethods();
  errorMessage = tsm.ST_SetCustomerFavorites(firstName, lastName, favoriteMovie, favoriteLanguage);
}

In AddACustomer, change this:

CustomerFavoritesAdd cfa = new CustomerFavoritesAdd();
errorMessage = cfa.AddCustomer(firstName, lastName, favoriteMovie, favoriteLanguage);

to this:

if (currentDataBase == DataBaseType.sqlazure)
{
  CustomerFavoritesAdd cfa = new CustomerFavoritesAdd();
  errorMessage = cfa.AddCustomer(firstName, lastName, favoriteMovie, favoriteLanguage);
}
else
{
  TableStorageMethods tsm = new TableStorageMethods();
  errorMessage = tsm.ST_AddCustomer(firstName, lastName, favoriteMovie, favoriteLanguage);
}

And lastly, in GetCustomerList, change this:

CustomerList cl = new CustomerList();
errorMessage = cl.GetListOfCustomers(out customers);

to this:

if (currentDataBase == DataBaseType.sqlazure)
{
  CustomerList cl = new CustomerList();
  errorMessage = cl.GetListOfCustomers(out customers);
}
else
{
  TableStorageMethods tsm = new TableStorageMethods();
  errorMessage = tsm.ST_GetListOfCustomers(out customers);
}

Now let’s run our service. We don’t need to update our service reference, because we didn’t make any changes to the service contract. Run the client application, and click Get Customer List. We will see nothing, because we haven’t added any records to the version running against Table Storage yet.

Add a couple of records, and then retrieve the customer list. Now everything is running against Azure table storage. If I open Cerebrata Cloud Storage Studio and look in my development storage, I can see the customer table with the entries I just added.

So now we have a WCF service running in a web role that performs CRUD operations against SQL Azure or Windows Azure Table Storage, and writes diagnostic information. Our WCF service has a method that lets us add an entry to the queue. Then we have a worker role that retrieves the entry from the queue and writes it to blob storage. We have a client that calls the WCF service.

If you set the connection strings correctly in the ServiceConfiguration.Cloud.cscfg file, you can publish your service to the cloud. Then just change the URL at the top of the DAC class in the TestClient, and it will point to that service. Then you can run your client application against the service running in Azure.

That wraps up the 7-part series called “Azure for Developers”, showing the features of Windows Azure and talking about how I’ve used them in my production environment at GoldMail. For a completed version of the code, check out the version from the June 2012 San Diego Code Camp talk, which you can download from here.

Azure for Developers Tutorial Step 6: Processing the queue

July 8, 2012

This is the sixth step of the Azure for Developers tutorial, in which we set up a WCF service running in Azure to provide CRUD operations to a client application. For more information, please check out the Introduction.

In the last post, we changed our service to write messages to a queue. In this post, we will set up a worker role to pull the messages off of the queue and process them. But first, a few words about queues.

How would you use a queue?

GoldMail is an application that lets you add “slides” (pictures, screenshots, PowerPoint slides, etc.), record audio (voice or a sound file) over them, and then “share” the GoldMail, which sends your assets up to the cloud. When someone plays your GoldMail, the player retrieves those assets. When it’s played on a mobile device, we don’t want to return the same large images we use for our desktop player.

We could resize the images on the customer’s computer and send them up with the originals, but why should he have to wait for that to happen? So we use a queue and a worker role to handle this. When the service gets the “finished” message from the desktop application, it puts a message on the queue. A worker role picks the message off of the queue, downloads the large images and resizes them to small images, and then uploads the small images to the same folder where the original images reside and flips a boolean in the database.

Another use of queues is to offload database updates that don’t need to be written immediately. If someone sends a GoldMail to a thousand people, and half of them view it, we would get 500 updates to our database at about the same time. To meter these updates, we write the update requests to a queue, and we have a worker role that pulls the entries off of the queue and updates the database.

What is Invisibility?

I have a lot of trace logging, and when I put in the processing of the mobile slides, I noticed a problem. It looked like the first instance of the worker role was picking the entry off the queue, and before it could finish, the second instance was picking the same entry off the queue and trying to process it, too.

When you read a message from the queue, it isn’t actually removed from the queue; it is marked as invisible. After a certain amount of time, if the message has not been deleted from the queue, it reappears to be processed again. The default time is 30 seconds. It takes about 1 minute and 30 seconds to process a hundred slides. So the message was becoming visible again, and the second instance of the worker role was picking it up and trying to process it, even though the first instance was still working on it. Oops.

When you take the message off of the queue, you can set the amount of time you want it to be invisible. I ran hundreds of GoldMails through the worker role and determined that the longest conversion time was a minute and a half, so I set the invisibility time to 2 minutes. (Better safe than sorry.)
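
For reference, the visibility timeout is just an argument to GetMessage. We use this exact call in the worker role later in this series; here is the idea in isolation, as a minimal sketch assuming queue is a CloudQueue you’ve already initialized:

//read a message and keep it invisible for 2 minutes instead of the default 30 seconds
CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromMinutes(2));
if (msg != null)
{
  //process the message, then delete it before the 2 minutes are up
  queue.DeleteMessage(msg);
}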

How can I mess up my production queue processing?

Queues are storage, and can be accessed by any instance of any service. We had another case where we published a new version of our service to the staging instance in preparation for going into production, and it started processing entries from the production queue, and since we had changed the format of the messages, the processing failed. Oops.

Since we publish our worker roles and web roles in the same Azure service, we can’t stop one without stopping the other, and we don’t want to stop production. So when we’re publishing new versions, we change the queue name (which is in the Azure role configuration) for the worker role to an unused queue and publish it to staging. After doing the VIP swap, we stop the old production service now in staging, then change the queue name in the production configuration, at which point the worker role will start processing any accumulated messages.
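
As a sketch of what that looks like: the worker role’s queue name lives in the role configuration (we add a ProcessQueueName setting below), so the staging package just carries a different value. The queue name here is a made-up example:

      <!-- staging configuration: point the worker role at an unused queue until after the VIP swap -->
      <Setting name="ProcessQueueName" value="codecampqueue-staging" />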

What if a queue message just won’t process?

If we have a problem processing an entry from the queue, we don’t delete it from the queue; we let it come back around and try processing it again. You obviously don’t want to do this infinitely, or you could end up with one message stopping up your queue. You should check the dequeue count for the message, and if it’s over some threshold, add the message to an error queue and delete it from the regular queue. Then check the error queue every now and then to see if you’ve had any problems.
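
Here’s a hedged sketch of that check; the threshold of 5 and the error queue are just examples, and errorQueue is assumed to be a CloudQueue you’ve created the same way as the main queue:

CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromMinutes(2));
if (msg != null)
{
  if (msg.DequeueCount > 5)
  {
    //this message has failed too many times -- park it in an error queue and remove it
    errorQueue.AddMessage(new CloudQueueMessage(msg.AsString));
    queue.DeleteMessage(msg);
  }
  else
  {
    //process the message as usual, deleting it only if processing succeeds
  }
}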

Can we get back to the code now?

With all that information, let’s now add processing for our queue. We’ll add a worker role. Open our AzureCustomerServices solution. In the cloud project, right-click on Roles, and select “Add New Worker Role Project”.

You’ll be prompted for the role to add.

Pick the Worker Role, and name it CustomerWorker, then click Add.

First, let’s set up the configuration. Double-click on the ServiceConfiguration.Local.cscfg in the cloud project. If you go down to the bottom, you will find that it has added a new section for the worker role. Add these configuration settings:

      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" 
               value="UseDevelopmentStorage=true" />
      <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />
      <!-- frequency, in seconds, to retrieve the perf counters -->
      <Setting name="PerfMonSampleRate" value="60" />
      <!-- frequency, in seconds, to transfer the perf counters to the logs from the system-->
      <Setting name="PerfMonScheduledTransferPeriod" value="120" />
      <Setting name="ProcessQueueName" value="codecampqueue" />
      <Setting name="QueueMessageVisibilityTime" value="120" />

Then open ServiceConfiguration.Cloud.cscfg and find the worker role section at the bottom, then make the same changes, or put in your real values if you want to test the service in the cloud:

      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" 
           value="DefaultEndpointsProtocol=https;AccountName=PUTYOURACCOUNTNAMEHERE;
           AccountKey=PUTYOURACCOUNTKEYHERE" />
      <Setting name="DataConnectionString" value="DefaultEndpointsProtocol=https;
               AccountName=PUTYOURACCOUNTNAMEHERE;AccountKey=PUTYOURACCOUNTKEYHERE" />
      <!-- frequency, in seconds, to retrieve the perf counters -->
      <Setting name="PerfMonSampleRate" value="60" />
      <!-- frequency, in seconds, to transfer the perf counters to the logs from the system-->
      <Setting name="PerfMonScheduledTransferPeriod" value="120" />
      <Setting name="ProcessQueueName" value="codecampqueue" />
      <Setting name="QueueMessageVisibilityTime" value="120" />

Now open ServiceDefinition.csdef and find the worker role section at the bottom, and add a section for the configuration settings.

    <ConfigurationSettings>
      <Setting name="DataConnectionString" />
      <Setting name="PerfMonSampleRate" />
      <Setting name="PerfMonScheduledTransferPeriod" />
      <Setting name="ProcessQueueName" />
      <Setting name="QueueMessageVisibilityTime" />
    </ConfigurationSettings>

I’ll explain the settings as we use them. Now let’s add a class to our worker role project and call it GlobalStaticProperties. Rather than retrieving the configuration values from the role repeatedly, I retrieve them from this class, which only reads them from the configuration the first time they are requested. So right-click on the CustomerWorker project and choose Add Class. Name it GlobalStaticProperties and click OK. Change the class from public to internal static.

Add these using statements at the top:

using Microsoft.WindowsAzure.ServiceRuntime;
using System.Diagnostics;

And here is the code you need to put in the class. I’ve put comments inline to explain what each variable is.

internal static class GlobalStaticProperties
{

  private static string _ProcessQueueName;
  /// <summary>
  /// name of the queue
  /// </summary>
  internal static string ProcessQueueName
  {
    get
    {
      if (string.IsNullOrEmpty(_ProcessQueueName))
      {
        _ProcessQueueName = RoleEnvironment.GetConfigurationSettingValue("ProcessQueueName");
        Trace.TraceInformation("[CustomerWorker.GlobalStaticProperties] "
          + "ProcessQueueName to {0}", _ProcessQueueName);
      }
      return _ProcessQueueName;
    }
  }

  private static int _QueueMessageVisibilityTime;
  /// <summary>
  /// This is the amount of time the message remains invisible after being
  /// read from the queue, before it becomes visible again (unless it is deleted)
  /// </summary>
  internal static int QueueMessageVisibilityTime
  {
    get
    {
      if (_QueueMessageVisibilityTime <= 0)
      {
        //hasn't been loaded yet, so load it 
        string VisTime = 
          RoleEnvironment.GetConfigurationSettingValue("QueueMessageVisibilityTime");
        int intTest = 0;
        bool success = int.TryParse(VisTime, out intTest);
        if (!success || intTest <= 0)
        {
          _QueueMessageVisibilityTime = 120;
        }
        else
        {
          _QueueMessageVisibilityTime = intTest;
        }
        Trace.TraceInformation("[CustomerWorker.GlobalStaticProperties] " 
          + "Setting QueueMessageVisibilityTime to {0}", _QueueMessageVisibilityTime);
      }
      return _QueueMessageVisibilityTime;
    }
  }
}

Now that we have the configuration set up and handled, we want to add code to set up the diagnostics configuration. We need the diagnostics configuration code to execute when the worker role starts up, so open WorkerRole.cs. Add this using statement at the top:

using Microsoft.WindowsAzure.Diagnostics.Management;

In the OnStart() method, right after this code:

// Set the maximum number of concurrent connections 
ServicePointManager.DefaultConnectionLimit = 12;

add the diagnostics configuration code. I’ve included comments to explain the code. This is identical to the code we put in the web role back in part 1.

// The name of the configuration setting that holds the diagnostics connection string.
string wadConnectionString = "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString";

// First, get a reference to the storage account where the diagnostics will be written. 
// It is recommended that you use a separate account for diagnostics and data, so the 
//   performance of your data access is not impacted by the diagnostics.
CloudStorageAccount storageAccount =
    CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue(wadConnectionString));

// Get a reference to the diagnostic manager for the role instance,
//   and then get the default initial configuration, which we will then change.
RoleInstanceDiagnosticManager roleInstanceDiagnosticManager =
    storageAccount.CreateRoleInstanceDiagnosticManager(RoleEnvironment.DeploymentId,
    RoleEnvironment.CurrentRoleInstance.Role.Name, RoleEnvironment.CurrentRoleInstance.Id);
DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();

// Change the polling interval for checking for configuration changes
//   and the buffer quota for the logs. 
config.ConfigurationChangePollInterval = TimeSpan.FromSeconds(30.0);
config.DiagnosticInfrastructureLogs.BufferQuotaInMB = 256;

// The diagnostics data is written locally and then transferred to Azure Storage. 
// These are the transfer intervals for doing that operation.
config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0); //for trace logs
config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0); //for iis logs

// Configure the monitoring of one Windows performance counter
// and add it to the configuration.
int sampleRate = 0;
int scheduledTransferPeriod = 0;
bool success = false;
//this is sample rate, in seconds, for the performance monitoring in %CPU.
//By making this configurable, you can change the azure config rather than republish the role.
success = int.TryParse(RoleEnvironment.GetConfigurationSettingValue("PerfMonSampleRate"),
  out sampleRate);
if (!success || sampleRate <= 0)
  sampleRate = 60;  //default is 60 seconds
success =
  int.TryParse(RoleEnvironment.GetConfigurationSettingValue("PerfMonScheduledTransferPeriod"),
  out scheduledTransferPeriod);
if (!success || scheduledTransferPeriod <= 0)
  scheduledTransferPeriod = 120;  //default is 120 seconds

PerformanceCounterConfiguration perfConfig
    = new PerformanceCounterConfiguration();
perfConfig.CounterSpecifier = @"\Processor(*)\% Processor Time";
perfConfig.SampleRate = TimeSpan.FromSeconds((double)sampleRate);
config.PerformanceCounters.DataSources.Add(perfConfig);
config.PerformanceCounters.ScheduledTransferPeriod =
  TimeSpan.FromSeconds((double)scheduledTransferPeriod);

// Configure monitoring of Windows Application and System Event logs,
// including the quota and scheduled transfer interval, and add them 
// to the configuration.
WindowsEventLogsBufferConfiguration eventsConfig
    = new WindowsEventLogsBufferConfiguration();
eventsConfig.BufferQuotaInMB = 256;
eventsConfig.ScheduledTransferLogLevelFilter = LogLevel.Undefined; //was warning
eventsConfig.ScheduledTransferPeriod = TimeSpan.FromMinutes(2.0); //was 10
eventsConfig.DataSources.Add("Application!*");
eventsConfig.DataSources.Add("System!*");
config.WindowsEventLog = eventsConfig;

//set the configuration to be used by the current role instance
roleInstanceDiagnosticManager.SetCurrentConfiguration(config);

//add an event handler for the configuration being changed while the role is running
RoleEnvironment.Changing += 
  new EventHandler<RoleEnvironmentChangingEventArgs>(RoleEnvironment_Changing);
return base.OnStart();

Add the event handler for the RoleEnvironment Changing event:

void RoleEnvironment_Changing(object sender, RoleEnvironmentChangingEventArgs e)
{
  // If a configuration setting is changing
  if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
  {
    // Set e.Cancel to true to restart this role instance
    e.Cancel = true;
  }
}

That takes care of the diagnostics. Now let’s put in the code for handling the queue. We need to define our queue, so add this at the top of the WorkerRole class:

CloudQueue queue;

Now we need to add a method that runs once after the worker role starts up; it makes sure the queue exists and creates it if it doesn’t. Let’s call it StartUpQueue(). I’ve added comments to explain what the code is doing.

private void StartUpQueue()
{
  //get a reference to the storage account
  CloudStorageAccount storageAccount =
      CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(
      "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"));

  // initialize the queue client that will be used to access the queue 
  CloudQueueClient queueStorage = storageAccount.CreateCloudQueueClient();
  string queueName = GlobalStaticProperties.ProcessQueueName;
  //get a reference to the queue
  queue = queueStorage.GetQueueReference(queueName);

  //only initialize this once after the role starts up
  //so check this boolean and loop until it manages to make sure the queue is present 
  //because this role can't run without the queue
  bool storageInitialized = false;
  while (!storageInitialized)
  {
    try
    {
      // create the message queue if it doesn't already exist
      queue.CreateIfNotExist();
      // set this to true, because at this point, we know it's there
      storageInitialized = true;
    }
    catch (StorageClientException ex)
    {
      // for this error, give a reminder about the dev storage service being started
      if (ex.ErrorCode == StorageErrorCode.TransportError)
      {
        Trace.TraceError("[CustomerWorker.StartUpQueue] Storage services initialization failure."
          + " Check your storage account configuration settings. If running locally,"
          + " ensure that the Development Storage service is running. Message: '{0}'", 
          ex.Message);
        //sleep 5 seconds and then loop back around and try again
        System.Threading.Thread.Sleep(5000);
      }
      else
      {
        Trace.TraceError("[CustomerWorker.StartUpQueue] StorageClientException thrown. "
          + "Ex = {0}", ex.ToString());
        throw;
      }
    }
    catch (Exception ex)
    {
      Trace.TraceError("[CustomerWorker.StartUpQueue] Exception thrown "
        + "trying to initialize the queue. Ex = {0}", ex.ToString());
      throw;
    }
  }
}

Now we need to add the processing. At the top of the worker role, you have a Run() method that looks like this:

public override void Run()
{
  // This is a sample worker implementation. Replace with your logic.
  Trace.WriteLine("$projectname$ entry point called", "Information");

  while (true)
  {
    Thread.Sleep(10000);
    Trace.WriteLine("Working", "Information");
  }
}

After all these years of programming, it totally goes against my grain to put in an infinite loop, but that’s how the worker role works. It will run until it breaks out of the loop or the service goes down. We’re going to replace this code. Our code will call our StartupQueue method, and then loop infinitely looking for a message on the queue, and processing the message when it finds one.

public override void Run()
{
  //start up the queue
  StartUpQueue();

  //loop infinitely until the service shuts down
  while (true)
  {
    try
    {
      // retrieve a new message from the queue, setting the visibility timeout
      // the TimeSpan is hours, minutes, seconds; the global static property is in seconds
      TimeSpan visTimeout = 
        new TimeSpan(0, 0, GlobalStaticProperties.QueueMessageVisibilityTime);
      CloudQueueMessage msg = queue.GetMessage(visTimeout);
      if (msg != null)
      {
        Trace.TraceInformation("[CustomerWorker.Run] message = {0}, time = {1}, "
          + "next visible time = {2}",
          msg.AsString, DateTime.UtcNow, msg.NextVisibleTime.Value.ToString());
        string errorMessage = string.Empty;
        //process the message 
        //assume comma-delimited, first is command. check it and handle the message accordingly
        string[] msgFields = msg.AsString.Split(new char[] { ',' });
        string command = msgFields[0];
        switch (command)
        {
          case "process":
            string firstName = msgFields[1];
            string lastName = msgFields[2];
            string favoriteMovie = msgFields[3];
            string favoriteLanguage = msgFields[4];
            ProcessQueue pq = new ProcessQueue();
            pq.ProcessQueueEntry(firstName, lastName, favoriteMovie, favoriteLanguage,
              container);
            break;
        }

        // remove message from queue
        //http://blog.smarx.com/posts/deleting-windows-azure-queue-messages-handling-exceptions            
        try
        {
          queue.DeleteMessage(msg);
        }
        catch (StorageClientException ex)
        {
          if (ex.ExtendedErrorInformation.ErrorCode == "MessageNotFound")
          {
            // pop receipt must be invalid
            // ignore or log (so we can tune the visibility timeout)
          }
          else
          {
            // not the error we were expecting
            throw;
          }
        }
      }
      else
      {
        //no message found, sleep for 5 seconds
        Thread.Sleep(5000);
      }
    }
    catch (Exception ex)
    {
      Trace.TraceError("[CustomerWorker.Run] Exception thrown "
        + "trying to read from the queue = {0}", ex.ToString());
      Thread.Sleep(5000);
    }
  }    
}

This reads the message off of the queue. The message will be null if there is no message available. After retrieving it, we know it is a comma-delimited string, so just split it into an array to get the different values. I always send a command as the first variable. Your worker role may only process one command, but if you set it up this way, you can send other commands to your worker role to be handled as well.
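
For example, here’s a hedged sketch of how the switch might grow; the “delete” command is purely hypothetical, and the default case just makes sure an unknown command doesn’t fail silently:

switch (command)
{
  case "process":
    //the processing shown above
    break;
  case "delete":  //hypothetical second command, just to show the pattern
    //handle the delete request here
    break;
  default:
    Trace.TraceError("[CustomerWorker.Run] Unknown command '{0}' in message '{1}'",
      command, msg.AsString);
    break;
}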

We want our ProcessQueue class to write the data to blob storage, so we need a reference to the blob container to pass in to it. Define the container at the top of the WorkerRole class, under the definition for the queue:

CloudBlobContainer container;

In the OnStart() method, right before calling base.OnStart(), let’s add some code to set up our container.

//****************Container****************
//get a reference to the blob client
CloudBlobClient cbc = storageAccount.CreateCloudBlobClient();
//get a reference to the container, and create it if it doesn't exist
container = cbc.GetContainerReference("codecamp");
container.CreateIfNotExist();
//now set the permissions so the container is private, 
//  but the blobs are public, so they can be accessed with a specific URL
BlobContainerPermissions permissions = new BlobContainerPermissions();
permissions.PublicAccess = BlobContainerPublicAccessType.Blob;
container.SetPermissions(permissions);

This basically makes sure the container exists and sets the reference to it. If you don’t set the permissions on the container after it’s created, the container and the blobs will both be private, and you won’t be able to access the blobs through a URL. This code sets the permissions so the container is private but the blobs are public. This means nobody can iterate through the blobs in the container, but they can access a blob if they have the URL for it.

Now let’s set up a method to process our queue entry. Right-click on the CustomerWorker project and select Add Class. Call the class ProcessQueue. You need a method that will take the four attributes, format them, and write the result to the container in blob storage. Here’s the ProcessQueueEntry method:

public string ProcessQueueEntry(string firstName, string lastName, string favoriteMovie, 
  string favoriteLanguage, CloudBlobContainer container)
{
  string errorMessage = string.Empty;

  try
  {
    Trace.TraceInformation("[ProcessQueueEntry] for command [process], " +
        "firstName = {0}, lastName = {1}, favoriteMovie = {2}, favoriteLanguage = {3}",
        firstName, lastName, favoriteMovie, favoriteLanguage);

    //let's write the information to blob storage. First, create the message.
    string messageToWrite = string.Format("FirstName = {0}{1}LastName={2}{1}" +
      "FavoriteMovie = {3}{1}FavoriteLanguage={4}",
      firstName, Environment.NewLine, lastName,
      favoriteMovie, favoriteLanguage);

    //now create the file name -- I'm putting the date/time stamp in the name.
    string fileName = "test_" + DateTime.Now.ToUniversalTime().ToString("yyyyMMdd-hh-mm-ss",
      new System.Globalization.CultureInfo("en-US")) + ".txt";

    //get a reference to the blob 
    var blob = container.GetBlobReference(fileName);

    //upload the text to the blob 
    blob.UploadText(messageToWrite);

  }
  catch (Exception ex)
  {
    errorMessage = "Error processing entry.";
    Trace.TraceError("[ProcessQueueEntry] Exception thrown = {0}", ex);
  }
  return errorMessage;
}

Now run the service. The entries that were in the queue before this step will now be processed. Look in the blob storage of your development storage account for a container called “codecamp”, and you should see your blobs in there. Here’s one of mine, opened in Notepad.
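
If you don’t have a storage tool handy, here’s a quick sketch of listing the container’s contents from code, using the same container reference we set up in OnStart:

foreach (IListBlobItem item in container.ListBlobs())
{
  Trace.TraceInformation("[CustomerWorker] blob found at {0}", item.Uri);
}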

Now run the client, fill in a first and last name, and click AddToQueue. The message will be added to the queue; then the worker role will pick it up, process it, and write it to blob storage. If it doesn’t work right, put some breakpoints in the service and debug it while it’s running.

So we now have a WCF service running in a web role that reads from and writes to a Windows Azure SQL Database. It submits messages to an Azure queue, and there is a worker role that retrieves the entries from the queue and writes them to blob storage. We have a client that we can use to test the service.

We also have the diagnostics working. If I check the WADLogsTable in development storage, I can see the messages that are being logged:

I’m just showing the message field; it also stores the deploymentID, role instance, etc. There are also tables for the other diagnostics, and in blob storage, I can see my IIS logs.

What if you want to use Windows Azure table storage instead of a SQL database? In the next part, we’ll change our service to do exactly that, and be able to toggle back and forth between the two methods.

Azure for Developers Tutorial Step 5: Adding messages to a queue

July 8, 2012

This is the fifth step of the Azure for Developers tutorial, in which we set up a WCF service running in Azure to provide CRUD operations to a client application. For more information, please check out the Introduction.

In this step, we will add a method to our service to allow the user to put messages on the queue. We will let him pass in the first and last name, and we’ll retrieve the Favorites and then put a message on the queue with the four fields in it.

Open CustomerServices.svc in our project. Add these using statements at the top:

using Microsoft.WindowsAzure.StorageClient;
using System.Diagnostics;
using Microsoft.WindowsAzure;
using System.Net;
using Microsoft.WindowsAzure.ServiceRuntime;

Then in the class itself, add these private variables.

private static bool storageInitialized = false;
private static object gate = new Object();
private static CloudQueueClient queueStorage;
private static CloudQueue queue;

We will add a method for adding a message to the queue. In that method, we will call InitializeStorage(). The first time the storage is initialized, it sets a boolean called storageInitialized to true, so subsequent calls know it is already initialized. The object gate is used for locking, to make sure multiple threads don’t initialize the queue at the same time.

CloudQueueClient is the client for accessing the queue, and CloudQueue is the queue itself.

So let’s add InitializeStorage(). I’ve added comments in-line to explain what it’s doing.

//initialize the queue, but only the first time 
private void InitializeStorage()
{
  //if it's already initialized, return
  if (storageInitialized)
  {
    return;
  }
  //lock the object
  lock (gate)
  {
    //if someone else initialized the queue while we were waiting for the lock,
    //  return
    if (storageInitialized)
    {
      return;
    }
    //try initializing the queue
    try
    {
      Trace.TraceInformation("[CustomerServices.InitializeStorage] Initializing storage queue");
      // read account configuration settings and get a reference 
      //  to the storage account
      CloudStorageAccount storageAccount =
          CloudStorageAccount.Parse(
          RoleEnvironment.GetConfigurationSettingValue(
          "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"));

      // get a reference to the queue client
      queueStorage = storageAccount.CreateCloudQueueClient();
      // this uses the entry in GlobalStaticProperties that is set in the role config
      queue = queueStorage.GetQueueReference(GlobalStaticProperties.ProcessQueueName);
      //create the queue if it doesn't already exist
      queue.CreateIfNotExist();
    }
    catch (WebException ex)
    {
      //try to give some help
      Trace.TraceError("[CustomerServices.InitializeStorage] WebException thrown trying " 
        + " to initialize the storage services. " 
        + " Check the storage account config settings. If running locally, "
        + "be sure Dev Storage svc is running. Exception = {0}",
          ex.ToString());
      return;
    }
    catch (Exception ex)
    {
      Trace.TraceError("[CustomerServices.InitializeStorage] Exception thrown trying "
        + "to initialize the storage. Exception = {0}", ex.ToString());
      return;
    }
    //this is only set to true if it doesn't throw an exception
    storageInitialized = true;
  }
}

Now we’re going to add the method that formats the message and adds it to the queue. (In my production services, I put this code in a separate class.) Our message is going to be a comma-delimited string. I always pass a ‘command’ as the first entry, in case I want to use the queue to process different kinds of requests.

If any of the four input fields have commas in them, the message won’t parse right. At GoldMail, we hit this in a case where we pass the user agent string to the backend in a message, and the user agent string had commas in it. We had to put in special handling for that case.
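
One simple way to handle that (an illustration, not necessarily how we solved it at GoldMail) is to escape each field before joining and unescape after splitting:

//in the service, when building the message:
string msgString = String.Format("process,{0},{1},{2},{3}",
  Uri.EscapeDataString(firstName), Uri.EscapeDataString(lastName),
  Uri.EscapeDataString(favoriteMovie), Uri.EscapeDataString(favoriteLanguage));

//in the worker role, after splitting on the commas:
string firstName = Uri.UnescapeDataString(msgFields[1]);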

If you have more information than you want to put in a string, you can write it to blob storage and put a URL to the file in blob storage in the message.
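
Here’s a hedged sketch of that approach; payloadContainer is assumed to be a CloudBlobContainer you’ve already created, and the blob name is just an example.

//write the large content to a blob and put only its URL on the queue
var payloadBlob = payloadContainer.GetBlobReference("payload-" + Guid.NewGuid() + ".txt");
payloadBlob.UploadText(largePayload);
queue.AddMessage(new CloudQueueMessage("process," + payloadBlob.Uri));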

Here’s our new method; add this to CustomerServices.svc.cs.

public string SubmitToQueue(string firstName, string lastName)
{
  //call to make sure the queue exists
  InitializeStorage();
  string errorMessage = string.Empty;
  string favoriteMovie = string.Empty;
  string favoriteLanguage = string.Empty;
  //get the favorites info for the name passed in
  errorMessage = GetFavorites(out favoriteMovie, out favoriteLanguage,
      firstName, lastName);     
  if (errorMessage.Length == 0)
  {
    //I'm passing the message as a comma-delimited string. Format the string.
    string msgString = String.Format("process,{0},{1},{2},{3}", 
      firstName, lastName, favoriteMovie, favoriteLanguage);
    //set the message
    CloudQueueMessage message = new CloudQueueMessage(msgString);
    Trace.TraceInformation("[SubmitToQueue] Message passed to queue = {0}", msgString);
    //add the message to the queue
    queue.AddMessage(message);
  }
  else
  {
    errorMessage = "Entry not submitted to queue. "
      + "Error when retrieving favorites = '" + errorMessage + "'.";
    Trace.TraceError("[SubmitToQueue] firstName = {0}, lastName = {1}, {2}",
        firstName, lastName, errorMessage);
  }
  return errorMessage;
}

Now let’s add the corresponding operation contract to the interface. If you don’t do this, the method won’t be exposed to the client. Open ICustomerServices.cs and add the operation contract.

[OperationContract]
string SubmitToQueue(string firstName, string lastName);

Run the service locally. Now let’s change the client to add entries to the queue. Open the test client project, right-click on the service reference to our service, and select Update Service Reference. This should expose the SubmitToQueue method. Add this to the DAC.cs class in the test client:

  internal static string AddToQueue(string firstName, string lastName)
  {
    string errorMessage = string.Empty;
    CustomerSvc.CustomerServicesClient prx = getClient();
    errorMessage = prx.SubmitToQueue(firstName, lastName);
    return errorMessage;
  }

Now open the code-behind for the form. Look for the btnAddToQueue_Click event handler. Change this:

string errorMessage = string.Empty; // DAC.AddToQueue(txtFirstName.Text, txtLastName.Text);

to this:

string errorMessage = DAC.AddToQueue(txtFirstName.Text, txtLastName.Text);

Run the client application. Click on GetCustomerList to see your list of customers. Now fill in the first and last name of one of your customers and click the AddToQueue button. Try a couple of them.

We haven’t written anything to process the messages, so they will sit on the queue for a week before they are automatically removed. If I open Cerebrata Cloud Storage Studio and look at the queue in my development storage, I can see the messages:

In my next post, I’ll show you how to set up the worker role and read the messages and write the information to blob storage.

Azure for Developers Tutorial Step 4: Calling the WCF service from the client

July 8, 2012

This is the fourth step of the Azure for Developers tutorial, in which we set up a WCF service running in Azure to provide CRUD operations to a client application. For more information, please check out the Introduction.

In the previous step, we finished the WCF service and ran it in the compute emulator, so if you haven’t got it running, you’ll want to run it now. Put a breakpoint in AddCustomer in CustomerServices.svc.cs. Now let’s set up the client application. If you haven’t already downloaded the client application we’ll be changing, please do so now; it is here.

The first thing you want to do is add a service reference to your service running in the compute emulator. This will retrieve the service definition. So right-click on Service References and select “Add Service Reference”. Find the open browser showing your service information and copy the link. It will be something like http://127.0.0.1:81/CustomerServices.svc.

Paste the link into the Add Service Reference dialog. Click Go to find the service. If it finds it, it will show it in the Services window. If you click on the service’s interface, it will show the exposed Operation Contracts on the right. When adding new methods to your service, you have to be sure to also add them to the interface, or they won’t be exposed to the client. (Please don’t ask how many times I’ve forgotten this!)

At the bottom, change the Namespace to CustomerSvc and click OK.

We need to set up the Data Access class in the client. Open DAC.cs. First, let’s add a method to get the proxy and set up the endpoint. This enables us to easily programmatically change the address of the service we’re connecting to. We’ll call this from the proxy methods that call the WCF service.

//put these here rather than relying on the app.config being right 
//so you can set up a service reference running the service locally,
//and then just change this to point to the instance in the cloud
private static string m_endpointAddress = @"http://127.0.0.1:81/CustomerServices.svc";
//private static string m_endpointAddress = 
//  @"http://yourservicename.cloudapp.net/CustomerServices.svc";

private static CustomerSvc.CustomerServicesClient getClient()
{
  CustomerSvc.CustomerServicesClient prx = new CustomerSvc.CustomerServicesClient();
  prx.Endpoint.Address = new System.ServiceModel.EndpointAddress(m_endpointAddress);
  //this sets the timeout of the service call, which should give you enough time
  //  to finish debugging your service call
  prx.InnerChannel.OperationTimeout = new TimeSpan(0, 5, 0);
  return prx;
}

Now let’s add the rest of the methods:

internal static string GetFavoritesForCustomer(out string favoriteMovie, 
  out string favoriteLanguage, string firstName, string lastName)
{
  favoriteMovie = string.Empty;
  favoriteLanguage = string.Empty;
  CustomerSvc.CustomerServicesClient prx = getClient();
  return prx.GetFavorites(out favoriteMovie, out favoriteLanguage, firstName, lastName);
}

internal static string UpdateFavoritesByName(string firstName, string lastName, 
  string favoriteMovie, string favoriteLanguage)
{
  CustomerSvc.CustomerServicesClient prx = getClient();
  return prx.UpdateFavoritesByName(firstName, lastName, favoriteMovie, favoriteLanguage);
}

internal static string GetCustomerList(out DataSet customers)
{
  customers = new DataSet();
  CustomerSvc.CustomerServicesClient prx = getClient();
  return prx.GetCustomerList(out customers);
}

internal static string AddACustomer(string firstName, string lastName, 
  string favoriteMovie, string favoriteLanguage)
{
  CustomerSvc.CustomerServicesClient prx = getClient();
  return prx.AddACustomer(firstName, lastName, favoriteMovie, favoriteLanguage);
}

These are already hooked up in the code-behind in the form. So you can just click F5 to run the application.

So now you should have the service running and the client application running, and you should have a breakpoint in AddCustomer in the service. Fill in the four fields and click on the Add button.

It should call into the service and stop at your breakpoint. Then you can step through your service code and debug it if you have any problems.

Add a few records, then click on GetCustomerList to retrieve them. Fill in a First Name and Last Name from the list and blank out the two Favorite fields, then click GetFavorites, and it should fill them in. Change the favorites and click UpdateFavorites, then retrieve the Customer list again and see if they have been changed.

If everything works, you’re golden. You now have a working WCF service and a client to test it with.

In the next post, we’ll add a queue to the service and add a service call to add a message to the queue. Then we’ll add a worker role to the service that will retrieve messages from the queue and write them to blob storage.

Azure for Developers Tutorial Step 3: Creating the WCF service, part 2

July 8, 2012

This is the third step of the Azure for Developers tutorial, in which we set up a WCF service running in Azure to provide CRUD operations to a client application. For more information, please check out the Introduction.

In this step, we will finish creating the WCF service and run it in the local compute emulator. In the next step, we’ll set up the client application and call the service.

First, a word about Windows Azure SQL Databases (WASD). When we were testing everything in Azure, I had done load testing, and everything was fine. But when we actually moved our customers to the new production environment running on Azure, I started seeing lots of connection problems. These are detailed in my article about our migration to Azure. To summarize, Microsoft hadn’t done a lot of testing for the case of a client hitting the database periodically rather than frequently, and having a small database. We were squeezed out by the other large and active databases on the server. They have made great headway in fixing this problem, but you really have to put in exponential retry code.

“Exponential retry code” means you have to try to connect or execute your data request, and if it fails, you have to wait a couple of seconds and try again, and if that fails, wait longer and try again, until you finally give up. Now, Microsoft recommends that you use their connection management framework, but I didn’t find out about that until I’d already done this the hard way, and I haven’t had time to go back and take a close look at it.

The Microsoft framework only does retries for specific SQL Server errors (which is good), and it almost completely wraps the process for you (which is good). I like a lot of logging, and I like to know when we’re having problems, so I log the retries and where they’re happening. This way I can tell how frequent they are, and where we might want to increase the retry count specifically. When I first looked at the framework, that capability wasn’t in there. I’ll leave it to you to check out the recommended solution. In the meantime, I’ll share how we did it.
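
One note before the code: the class below references GlobalStaticProperties.MaxTryCount, retrySleepTime, and dbConnectionString, which aren’t shown in this excerpt. Here is a minimal sketch of what they might look like; the specific values are assumptions, not the exact GoldMail implementation.

//maximum number of attempts before giving up on a SQL call
internal static int MaxTryCount = 4;
//milliseconds to sleep between attempts, growing roughly exponentially;
//indexed by (tryCount - 1), and no sleep is needed after the final attempt
internal static int[] retrySleepTime = new int[] { 500, 1000, 2000, 0 };
//the SQL connection string, read once from the role configuration (setting name not shown here)
internal static string dbConnectionString;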

We need to add the four classes to update and retrieve data to/from the Azure database. I’m going to post the first one here with a bunch of comments in the code to explain what it’s doing, and then I’ll post a link to download the other three, which use the same retry methodology.

Right-click on the web role and add a class. Call it CustomerFavorites.cs. First add the using clauses:

using System.Diagnostics;
using System.Data.SqlClient;
using Microsoft.WindowsAzure.ServiceRuntime;
using System.Data;
using System.Threading;

And here’s the code for the class itself:

internal class CustomerFavorites
{

  /// <summary>
  /// Given the first and last name, return the favorite movie and language.
  /// </summary>
  internal string GetCustomerFavorites(out string favoriteMovie, out string favoriteLanguage,
    string firstName, string lastName)
  {
    //write to diagnostics that this routine was called, along with the calling parameters.
    Trace.TraceInformation("[GetCustomerFavorites] called. FirstName = {0}, LastName = {1}", 
      firstName, lastName);
    string errorMessage = string.Empty;
    favoriteMovie = string.Empty;
    favoriteLanguage = string.Empty;

    //tryCount is the number of times to retry if the SQL execution or connection fails.
    //This is compared against tryMax, which is in the configuration 
    //   and set in GlobalStaticProperties.
    int tryCount = 0;

    //success is set to true when the SQL Execution succeeds.
    //Any subsequent errors are caused by your own code, and shouldn't cause a SQL retry.
    bool success = false;

    //This is the overall try/catch block to handle non-SQL exceptions and trace them.
    try
    {
      //This is the top of the retry loop. 
      do
      {
        //blank this out in case it loops back around and works the next time
        errorMessage = string.Empty;
        //increment the number of tries
        tryCount++;

        //this is the try block for the SQL code 
        try
        {
          //put all SQL code in using statements, to make sure you are disposing of 
          //  connections, commands, datareaders, etc.
          //note that this gets the connection string from GlobalStaticProperties,
          //  which retrieves it the first time from the Role Configuration.
          using (SqlConnection cnx 
            = new SqlConnection(GlobalStaticProperties.dbConnectionString))
          {
            //This can fail due to a bug in ADO.Net. They are not removing dead connections
            //  from the connection pool, so you can get a dead connection, and when you 
            //  try to execute this, it will fail. An immediate retry almost always succeeds.
            cnx.Open();

            //Execute the stored procedure and get the data.
            using (SqlCommand cmd = new SqlCommand("Customer_GetByName", cnx))
            {
              cmd.CommandType = CommandType.StoredProcedure;

              SqlParameter prm = new SqlParameter("@FirstName", SqlDbType.NVarChar, 50);
              prm.Direction = ParameterDirection.Input;
              prm.Value = firstName;
              cmd.Parameters.Add(prm);

              prm = new SqlParameter("@LastName", SqlDbType.NVarChar, 50);
              prm.Direction = ParameterDirection.Input;
              prm.Value = lastName;
              cmd.Parameters.Add(prm);

              SqlDataAdapter da = new SqlDataAdapter(cmd);
              DataTable dt = new DataTable();
              da.Fill(dt);
              //the call to get the data was successful
              //any error after this is not caused by connection problems, so no retry is needed
              success = true;

              if (dt == null || dt.Rows.Count <= 0)
              {
                errorMessage = string.Format("Error retrieving favorites; "
                    + "record not found for '{0}' '{1}'.",
                    firstName, lastName);
              }
              else
              {
                DataRow dr = dt.Rows[0];
                favoriteMovie = dr["FavoriteMovie"].ToString();
                favoriteLanguage = dr["FavoriteLanguage"].ToString();

                Trace.TraceInformation("[GetCustomerFavorites] FirstName = {0}, LastName = {1}, " 
                  + "FavoriteMovie = {2}, FavoriteLanguage = {3}",
                    firstName, lastName, favoriteMovie, favoriteLanguage);
              }
            }//using SqlCommand
          } //using SqlConnection
        }
        catch (SqlException ex)
        {
          //This is handling the SQL Exception. It traces the method and parameters, the retry #, 
          //  how long it's going to sleep, and the exception that occurred.
          //Note that it is using the array retrySleepTime set up in GlobalStaticProperties.
          errorMessage = "Error retrieving customer favorites.";
          Trace.TraceError("[GetCustomerFavorites] firatName = {0}, lastName = {1}, Try #{2}, "
            + "will sleep {3}ms. SQL Exception = {4}",
              firstName, lastName, tryCount, 
              GlobalStaticProperties.retrySleepTime[tryCount - 1], ex.ToString());

          //if it is not the last try, sleep before looping back around and trying again
          if (tryCount < GlobalStaticProperties.MaxTryCount 
            && GlobalStaticProperties.retrySleepTime[tryCount - 1] > 0)
            Thread.Sleep(GlobalStaticProperties.retrySleepTime[tryCount - 1]);
        }
        //it loops until it has tried more times than specified, or the SQL Execution succeeds
      } while (tryCount < GlobalStaticProperties.MaxTryCount && !success);
    }
    //catch any general exception that occurs and send back an error message
    catch (Exception ex)
    {
      Trace.TraceError("[GetCustomerFavorites] firstName = {0}, lastName = {1}, "
        + "Overall Exception thrown = {2}", firstName, lastName, ex.ToString());  
      errorMessage = "Error getting customer favorites.";
    }
    return errorMessage;
  }
}

Rather than post the other three modules, I’ve zipped them up. Download them from this link, then unzip the file and copy the files to your project folder. Right-click on the web role, select “Add Existing”, and browse for the three classes.

Now that you’ve added the classes, uncomment the method calls in CustomerServices.svc.cs.
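With the classes added, the first method in CustomerServices.svc.cs should end up looking something like this once the calls are uncommented (the other three methods follow the same pattern):

public string GetFavorites(out string favoriteMovie, out string favoriteLanguage,
  string firstName, string lastName)
{
  string errorMessage = string.Empty;
  favoriteMovie = string.Empty;
  favoriteLanguage = string.Empty;
  CustomerFavorites cf = new CustomerFavorites();
  errorMessage = cf.GetCustomerFavorites(out favoriteMovie, out favoriteLanguage,
    firstName, lastName);
  return errorMessage;
}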

Now your service should build successfully, and you can run it by pressing F5. If it’s successful, you should see this:

Don’t panic!

This is because you don’t have access to the root of the webserver. Append “CustomerServices.svc” to the end of the URL, and you should see this:

So now your service is running. In the next step, I’ll show you how to hook up the client application to the service.

Azure for Developers Tutorial Step 2: Creating the WCF service, part 1

July 8, 2012

This is the second step of the Azure for Developers tutorial, in which we set up a WCF service running in Azure to provide CRUD operations to a client application. For more information, please check out the Introduction.

In this step, we will create the basic structure of the WCF service, including the service contracts. In the next step, we will add the code to actually access the database.

Set up the project and configuration settings

Open Visual Studio in administrative mode (Right-click on Visual Studio, select Run as Administrator). Click File/New Project.

Select Cloud project, .NET 4 Framework, and specify the name as AzureCustomerServices. Check the box that says “Create directory for solution”. Browse to the location you want the code, and click OK.

Now you should see the New Windows Azure Cloud Service dialog.

Pick your version of the tools in the top left-hand corner. Then select WCF Service Web Role from the list of available roles. Click the > button to add the role to the solution. Then hover over the right side of that entry and a pencil will appear. Click the pencil to change the name, and set it to CustomerServicesWebRole. Then click OK to create the project. Visual Studio will create the web role project for you, and the cloud project with its configuration files.

The difference between a web role and a worker role is that a web role uses IIS. So if you have a web application or service, you should use a web role. Worker roles are generally used for processing that you want to be done asynchronously. Our web role will host our service, and later in the tutorial our worker role will process messages from the queue.
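To make the distinction concrete, here is a rough sketch of what a worker role’s Run method typically looks like: an infinite loop that polls a queue and processes whatever it finds. This is not the tutorial’s worker role (we’ll build that in a later step), and it assumes the worker role’s own configuration has the same DataConnectionString and ProcessQueueName settings we define below.

//Hypothetical sketch only; the real worker role comes later in the series.
//Assumes using Microsoft.WindowsAzure, Microsoft.WindowsAzure.StorageClient,
//  Microsoft.WindowsAzure.ServiceRuntime, System.Diagnostics, and System.Threading.
public override void Run()
{
  CloudStorageAccount account = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
  CloudQueueClient queueClient = account.CreateCloudQueueClient();
  CloudQueue queue = queueClient.GetQueueReference(
    RoleEnvironment.GetConfigurationSettingValue("ProcessQueueName"));
  queue.CreateIfNotExist();

  while (true)
  {
    CloudQueueMessage msg = queue.GetMessage();
    if (msg != null)
    {
      Trace.TraceInformation("Processing message: {0}", msg.AsString);
      //do the actual work here, then delete the message so it isn't picked up again
      queue.DeleteMessage(msg);
    }
    else
    {
      Thread.Sleep(1000); //queue is empty; wait a bit before polling again
    }
  }
}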

ServiceConfiguration.Cloud.cscfg is the configuration that will be used when you publish to the cloud. ServiceConfiguration.Local.cscfg is the configuration that will be used when you run the application in the compute emulator. ServiceDefinition.csdef has the definition and the list of configuration items, which you will see in a minute.

Double-click on each of the cscfg files and scroll to the end of the ServiceConfiguration element. You will see this:

osFamily="1" osVersion="*">

By default, the osFamily will be set to 1. This means your Azure instance will be installed on Windows Server 2008. Generally, you will want to change this to 2, so your Azure instances will run on Windows Server 2008 R2 VMs.

If you double-click on the CustomerServicesWebRole under Roles in the cloud project, you will see where to set the instance count and VM Size, as well as the endpoints. You can also set up LocalStorage, which will allocate a disk you can access locally in the Azure instance. At GoldMail, we use this in one of our roles as a scratch disk.

There are two ServiceConfiguration.*.cscfg files – these contain configuration information that you can access programmatically as well as through the Windows Azure portal. You can change these values while your role is running without re-publishing your role. If you are migrating a current website to Azure, be sure to check the settings in your web.config file and see if any should be moved to the role configuration so you can change it without re-publishing.
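As an aside, if you share code between an Azure role and something that runs outside of Azure (unit tests, or the existing website you’re migrating), it can help to wrap the lookup so it falls back to web.config when there is no role environment. This helper is not part of the tutorial code, just a sketch of one way to do it:

using System.Configuration;
using Microsoft.WindowsAzure.ServiceRuntime;

//Hypothetical helper: read a setting from the role configuration when running
//  under Azure, otherwise fall back to appSettings in web.config or app.config.
internal static class ConfigReader
{
  internal static string GetSetting(string name)
  {
    if (RoleEnvironment.IsAvailable)
      return RoleEnvironment.GetConfigurationSettingValue(name);
    return ConfigurationManager.AppSettings[name];
  }
}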

Let’s add the configuration settings that we will need in the web role. Double-click on the ServiceConfiguration.Local.cscfg file. Remove any configuration settings that you have, and add these:

      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" 
               value="UseDevelopmentStorage=true" />
      <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="dbConnString" value="Data Source=YOURLOCALSQLSERVER;Initial Catalog=CodeCamp;
               User ID=YOURUSERNAME;Password=YOURPASSWORD;" />
      <!-- This is the max number of times to try executing the database commands before giving up. -->
      <Setting name="MaxTryCount" value="4" />
      <!-- This is the time interval to sleep between SQL retries in case there's a problem. It's in ms. -->
      <Setting name="RetrySleepInterval" value="2500" />
      <!-- frequency, in seconds, to retrieve the perf counters -->
      <Setting name="PerfMonSampleRate" value="60" />
      <!-- frequency, in seconds, to transfer the perf counters to the logs from the system-->
      <Setting name="PerfMonScheduledTransferPeriod" value="120" />
      <Setting name="ProcessQueueName" value="codecampqueue" />

For dbConnString, change the value to the connection string for your local SQL Server database.

Now double-click on ServiceConfiguration.Cloud.cscfg. If you have a storage account and SQL Azure database, you can change the connection strings accordingly. Otherwise, you can use the same values you used in the Local configuration. You need to have the same XML elements in each configuration. Just be sure you don’t publish to the cloud with the local development connection strings – that is one of the most frequent causes of roles not spinning up during deployment.
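If you want a safety net for that, one option (not part of the tutorial code) is a quick sanity check in the role’s OnStart() method, which we’ll be adding code to shortly for the diagnostics. Something like this would log an error if the role is running in the cloud but a connection string still points at development storage:

//Hypothetical check, for illustration only.
string dataCnx = RoleEnvironment.GetConfigurationSettingValue("DataConnectionString");
if (!RoleEnvironment.IsEmulated && dataCnx.Contains("UseDevelopmentStorage"))
{
  Trace.TraceError("DataConnectionString still points at development storage; "
    + "this role will not be able to reach Windows Azure storage.");
}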

For publishing to the cloud, your ServiceConfiguration.cloud.cscfg configuration settings element should look like this, but with the variables filled in:

      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" 
               value="DefaultEndpointsProtocol=https;AccountName=PUTYOURACCOUNTNAMEHERE;
               AccountKey=PUTYOURACCOUNTKEYHERE" />
      <Setting name="DataConnectionString" value="DefaultEndpointsProtocol=https;
               AccountName=PUTYOURACCOUNTNAMEHERE;AccountKey=PUTYOURACCOUNTKEYHERE" />
      <Setting name="dbConnString" value="Server=tcp:YOURAZUREDATABASE.database.windows.net;
               Database=CodeCamp;User ID=YOURDBACCOUNT@YOURDATABASE;Password=YOURPASSWORD;
               Trusted_Connection=False;Encrypt=True;" />
      <!-- This is the max number of times to try executing the database commands before giving up. -->
      <Setting name="MaxTryCount" value="4" />
      <!-- This is the time interval to sleep between SQL retries in case there's a problem. It's in ms. -->
      <Setting name="RetrySleepInterval" value="2500" />
      <!-- frequency, in seconds, to retrieve the perf counters -->
      <Setting name="PerfMonSampleRate" value="60" />
      <!-- frequency, in seconds, to transfer the perf counters to the logs from the system-->
      <Setting name="PerfMonScheduledTransferPeriod" value="120" />
      <Setting name="ProcessQueueName" value="codecampqueue" />

Now open the csdef file; you need to add the definitions of the elements here as well. You probably don’t even have a ConfigurationSettings section (I don’t). Just add it after the Endpoints element.

    <ConfigurationSettings>
      <Setting name="dbConnString" />
      <Setting name="DataConnectionString" />
      <Setting name="MaxTryCount" />
      <Setting name="RetrySleepInterval" />
      <Setting name="PerfMonSampleRate" />
      <Setting name="PerfMonScheduledTransferPeriod" />
      <Setting name="ProcessQueueName" />
    </ConfigurationSettings>

I’ll explain the settings as we use them. Now let’s add a class to our web role project (CustomerServicesWebRole) and call it GlobalStaticProperties. Rather than retrieving the configuration values from the role configuration repeatedly, I retrieve them through this class, which only reads each one the first time it is requested. So right-click on CustomerServicesWebRole and choose Add Class. Name it GlobalStaticProperties and click OK. Change the class from public to internal static.

You will need these using clauses at the top of the class:

using System.Collections.Generic;
using System.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

Here is the actual class code itself. I’ve put comments inline to explain what each variable is.

internal static class GlobalStaticProperties
{

  private static string _dbConnectionString;
  /// <summary>
  /// connection string to the database; only retrieve it the first time
  /// </summary>
  internal static string dbConnectionString
  {
    get
    {
      if (string.IsNullOrEmpty(_dbConnectionString))
      {
        _dbConnectionString = RoleEnvironment.GetConfigurationSettingValue("dbConnString");
        Trace.TraceInformation("[CustomerServicesWebRole.GlobalStaticProperties] " +
            " Setting dbConnectionString to {0}", _dbConnectionString);
      }
      return _dbConnectionString;
    }
  }

  private static int _MaxTryCount;
  /// <summary>
  /// max number of times to try reading the SQL database before giving up
  /// </summary>
  internal static int MaxTryCount
  {
    get
    {
      if (_MaxTryCount <= 0)
      {
        //hasn't been loaded yet, so load it 
        string maxTryCount = RoleEnvironment.GetConfigurationSettingValue("MaxTryCount");
        int intTest = 0;
        bool success = int.TryParse(maxTryCount, out intTest);
        //if it's <= 0, set it to 1.
        if (!success || intTest <= 0)
          _MaxTryCount = 1;
        else
          _MaxTryCount = intTest;
        Trace.TraceInformation("[CustomerServicesWebRole.GlobalStaticProperties] "
          + "Setting MaxTryCount to {0}", MaxTryCount);
      }
      return _MaxTryCount;
    }
  }

  private static List<int> _retrySleepTime;
  /// <summary>
  /// amount of time to wait between retries when reading the SQL database
  /// This loads a list, which is then referenced in code. 
  /// This means my intervals are the same, just multiplied by the index.
  /// First retry waits 0 seconds, second waits 2.5, third waits 5, and last is irrelevant.
  /// (It stops if it retries 4 times.)
  /// </summary>
  internal static List<int> retrySleepTime
  {
    get
    {
      if (_retrySleepTime == null || _retrySleepTime.Count <= 0)
      {
        //hasn't been loaded yet, so load it 
        string interval = RoleEnvironment.GetConfigurationSettingValue("RetrySleepInterval");
        int intTest = 0;
        int intInterval = 0;
        bool success = int.TryParse(interval, out intTest);
        if (!success || intTest <= 0)
          intInterval = 2500; //2.5 seconds
        else
          intInterval = intTest;
        Trace.TraceInformation("[CustomerServicesWebRole.GlobalStaticProperties] " 
          + "Setting Sleep Interval to {0}", intInterval);

        //put these in an array so they are completely dynamic rather than having
        //  variables for each one. You can change the interval and number of times
        //  to retry simply by changing the configuration settings.
        _retrySleepTime = new List<int>();
        //set the sleep times to 0, interval, 2*interval, etc. (0, 2500, 5000, ... with the default interval)
        intTest = 0;
        _retrySleepTime.Add(0);
        for (int i = 1; i < MaxTryCount; i++)
        {
          intTest += intInterval;
          _retrySleepTime.Add(intTest);
        }

        for (int i = 0; i < MaxTryCount; i++)
        {
          Trace.TraceInformation("[CustomerServicesWebRole.GlobalStaticProperties] "
            + "Setting retrySleepTime({0}) to {1}", i, _retrySleepTime[i]);
        }
      }
      return _retrySleepTime;
    }
  }

  private static string _ProcessQueueName;
  /// <summary>
  /// name of the queue. You should never hard code this. If you do, you will have to 
  /// re-publish your application in order to change it.
  /// </summary>
  internal static string ProcessQueueName
  {
    get
    {
      if (string.IsNullOrEmpty(_ProcessQueueName))
      {
        _ProcessQueueName = RoleEnvironment.GetConfigurationSettingValue("ProcessQueueName");
        Trace.TraceInformation("[CustomerServicesWebRole.GlobalStaticProperties] "
          + "Setting ProcessQueueName to {0}", _ProcessQueueName);
      }
      return _ProcessQueueName;
    }
  }
}
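With this in place, any code in the web role can just reference the properties, and the configuration is only read the first time. For example, the data access classes we write in the next step open their connections like this:

using (SqlConnection cnx = new SqlConnection(GlobalStaticProperties.dbConnectionString))
{
  //execute your commands here
}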

Configure the diagnostics

Now that we have the configuration set up and handled, we want to add code to set up the diagnostics configuration. At the very least, you should set up and use trace diagnostics. You cannot debug into your instance running in the cloud, so if you at least trace-log your exceptions, it will help you figure out what’s happening.
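For example, wrapping anything risky in a try/catch and tracing the exception is often the only way to find out what went wrong once the role is deployed. A minimal sketch (the method and parameter here are made up for illustration; the messages end up in the WADLogsTable in table storage once the diagnostics below are configured):

//Hypothetical method, for illustration only.
private void DoSomethingRisky(int customerId)
{
  try
  {
    //work that might fail only when running in the cloud
  }
  catch (Exception ex)
  {
    Trace.TraceError("[DoSomethingRisky] customerId = {0}, exception = {1}",
      customerId, ex.ToString());
    throw; //rethrow so the caller still knows the operation failed
  }
}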

We want the diagnostics code to execute when the web role starts up, so open WebRole.cs. Add this using statement at the top:

using Microsoft.WindowsAzure.Diagnostics.Management;

Remove the code in the OnStart() method and replace it with the following. I’ve included comments in-line to explain the code.  This basically sets the configuration for the diagnostics, for trace logging, IIS logs, Windows Event logs, and one performance statistic (%CPU). The log information is written locally and then transferred to Windows Azure storage. IIS logs are transferred to blob storage; the other logs are transferred to table storage.

public override bool OnStart()
{
  // Get a reference to the initial default configuration.
  string wadConnectionString = "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString";

  // First, get a reference to the storage account where the diagnostics will be written. 
  // It is recommended that you use a separate account for diagnostics and data, so the 
  //   performance of your data access is not impacted by the diagnostics.
  CloudStorageAccount storageAccount =
      CloudStorageAccount.Parse(
      RoleEnvironment.GetConfigurationSettingValue(wadConnectionString));

  // Get a reference to the diagnostic manager for the role instance, 
  //   and then get the default initial configuration, which we will then change.
  RoleInstanceDiagnosticManager roleInstanceDiagnosticManager =
      storageAccount.CreateRoleInstanceDiagnosticManager(RoleEnvironment.DeploymentId, 
      RoleEnvironment.CurrentRoleInstance.Role.Name, RoleEnvironment.CurrentRoleInstance.Id);
  DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();

  // Change the polling interval for checking for configuration changes
  //   and the buffer quota for the logs. 
  config.ConfigurationChangePollInterval = TimeSpan.FromSeconds(30.0);
  config.DiagnosticInfrastructureLogs.BufferQuotaInMB = 256;

  // The diagnostics data is written locally and then transferred to Azure Storage. 
  // These are the transfer intervals for doing that operation.
  config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0); //for trace logs
  config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0); //for iis logs

  // Configure the monitoring of one Windows performance counter
  // and add it to the configuration.
  int sampleRate = 0;
  int scheduledTransferPeriod = 0;
  bool success = false;
  //this is sample rate, in seconds, for the performance monitoring in %CPU.
  //By making this configurable, you can change the azure config rather than republish the role.
  success = int.TryParse(RoleEnvironment.GetConfigurationSettingValue("PerfMonSampleRate"), 
    out sampleRate);
  if (!success || sampleRate <= 0)
    sampleRate = 60;  //default is 60 seconds
  success =
    int.TryParse(RoleEnvironment.GetConfigurationSettingValue("PerfMonScheduledTransferPeriod"), 
    out scheduledTransferPeriod);
  if (!success || scheduledTransferPeriod <= 0)
    scheduledTransferPeriod = 120;  //default is 120 seconds

  PerformanceCounterConfiguration perfConfig
      = new PerformanceCounterConfiguration();
  perfConfig.CounterSpecifier = @"\Processor(*)\% Processor Time";
  perfConfig.SampleRate = TimeSpan.FromSeconds((double)sampleRate);
  config.PerformanceCounters.DataSources.Add(perfConfig);
  config.PerformanceCounters.ScheduledTransferPeriod = 
    TimeSpan.FromSeconds((double)scheduledTransferPeriod);

  // Configure monitoring of Windows Application and System Event logs,
  // including the quota and scheduled transfer interval, and add them 
  // to the configuration.
  WindowsEventLogsBufferConfiguration eventsConfig
      = new WindowsEventLogsBufferConfiguration();
  eventsConfig.BufferQuotaInMB = 256;
  eventsConfig.ScheduledTransferLogLevelFilter = LogLevel.Undefined; //was warning
  eventsConfig.ScheduledTransferPeriod = TimeSpan.FromMinutes(2.0); //was 10
  eventsConfig.DataSources.Add("Application!*");
  eventsConfig.DataSources.Add("System!*");
  config.WindowsEventLog = eventsConfig;

  //set the configuration to be used by the current role instance
  roleInstanceDiagnosticManager.SetCurrentConfiguration(config);

  //add an event handler for the configuration being changed while the role is running
  RoleEnvironment.Changing += 
    new EventHandler<RoleEnvironmentChangingEventArgs>(RoleEnvironment_Changing);
  return base.OnStart();
}

Add an event handler for the Role Environment changing:

/// <summary>
/// If they change any of the configuration values while the role is running,
/// recycle it. You can also have this check which setting got changed and 
/// handle it rather than recycling the role. 
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
void RoleEnvironment_Changing(object sender, RoleEnvironmentChangingEventArgs e)
{

  // If a configuration setting is changing
  if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
  {
    // Set e.Cancel to true to restart this role instance
    e.Cancel = true;
  }
}
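The handler above recycles the role whenever any configuration setting changes. As the comment says, you can instead look at which setting changed and only recycle when you have to. Here is a sketch of that variation; it assumes (just as an example) that a changed PerfMonSampleRate can be applied on the fly, which you would then handle in the RoleEnvironment.Changed event. It uses OfType from LINQ, which the handler above already needs for Any().

//Hypothetical variation, not the code used in this tutorial.
void RoleEnvironment_Changing(object sender, RoleEnvironmentChangingEventArgs e)
{
  foreach (RoleEnvironmentConfigurationSettingChange change in
    e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>())
  {
    if (change.ConfigurationSettingName != "PerfMonSampleRate")
    {
      e.Cancel = true; //this setting requires a restart, so recycle the role
      return;
    }
  }
  //only PerfMonSampleRate changed; pick up the new value in the Changed event
  //  instead of recycling the role instance
}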

That takes care of the diagnostics. They will be collected and transferred to Windows Azure storage periodically while the role is running.

Set up the service contract

In the Solution Explorer, rename IService1 to ICustomerServices, then double-click on it to open it. Remove the sample code that Microsoft provides. Add this to the using statements:

using System.Data;

And here is your service contract:

[ServiceContract]
public interface ICustomerServices
{
  [OperationContract]
  string GetFavorites(out string favoriteMovie, out string favoriteLanguage,
      string firstName, string lastName);

  [OperationContract]
  string UpdateFavoritesByName(string firstName, string lastName, string favoriteMovie, 
    string favoriteLanguage);

  [OperationContract]
  string GetCustomerList(out DataSet customers);

  [OperationContract]
  string AddACustomer(string firstName, string lastName, string favoriteMovie, 
    string favoriteLanguage);
}

As we discussed back in the introduction article, we are exposing four methods for retrieving, adding, and updating the data in the database. Now we need to implement our interface.

Rename Service1.svc to CustomerServices.svc, then double-click on it to open the C# code. Remove the sample code provided by Microsoft. Right-click on Service1 and select Refactor/Rename, and rename it to CustomerServices. (Am I the only one who finds it annoying that they don’t change this when you change the file name, but they do when you change the interface’s file name?)

So now we need some code for the service itself. First, add this using statement to the top:

using System.Data;

Add this attribute to the class itself. It tells WCF to accept messages addressed to any address; you need this because requests come in through the Azure load balancer, so the address the client calls is not the address the role instance sees.

  [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
  public class CustomerServices : ICustomerServices

And here is the code for CustomerServices.svc.cs:

public string GetFavorites(out string favoriteMovie, out string favoriteLanguage,
  string firstName, string lastName)
{
  string errorMessage = string.Empty;
  favoriteMovie = string.Empty;
  favoriteLanguage = string.Empty;
  //CustomerFavorites cf = new CustomerFavorites();
  //errorMessage = cf.GetCustomerFavorites(out favoriteMovie, out favoriteLanguage, firstName, lastName);
  return errorMessage;
}

public string UpdateFavoritesByName(string firstName, string lastName, string favoriteMovie, 
  string favoriteLanguage)
{
  string errorMessage = string.Empty;
  //CustomerFavoritesUpdate cfu = new CustomerFavoritesUpdate();
  //errorMessage = cfu.SetCustomerFavorites(firstName, lastName, favoriteMovie, favoriteLanguage);
  return errorMessage;
}

public string AddACustomer(string firstName, string lastName,
  string favoriteMovie, string favoriteLanguage)
{
  string errorMessage = string.Empty;
  //CustomerFavoritesAdd cfa = new CustomerFavoritesAdd();
  //errorMessage = cfa.AddCustomer(firstName, lastName, favoriteMovie, favoriteLanguage);
  return errorMessage;
}


public string GetCustomerList(out DataSet customers)
{
  string errorMessage = string.Empty;
  customers = new DataSet();
  //CustomerList cl = new CustomerList();
  //errorMessage = cl.GetListOfCustomers(out customers);
  return errorMessage;
}

This is basically a proxy layer. Next, we need to set up the classes for modifying or retrieving the data. I’ll show you how to do that in the next step, and also discuss the SQL Azure connection management.

Azure for Developers Tutorial Step 1: Migrating the database to Windows Azure

July 8, 2012

This is the first part of the Azure for Developers tutorial in which we set up a WCF service running in Azure to provide CRUD operations to a client application. For more information, please check out the Introduction.

In this step, we will attach the database to our local SQL Server instance, and then migrate it to a Windows Azure SQL Database (previously known as SQL Azure, and referred to from here on out as WASD) using the SQL Azure Migration Wizard from CodePlex.

Let’s start by attaching the CodeCamp database to our local SQL Server instance.

Unzip the provided database and copy it to the folder with the rest of your SQL Server databases. If you took the default installation, this is under Program Files/SQLServer something or other. (Is it just me, or is it a little vexing that Microsoft put out guidelines against storing data in Program Files when Windows Vista came out, and made it difficult to get around that restriction without turning off UAC, but then they go and store the SQLServer databases there?)

Open the SQL Server Management Studio (SSMS) and connect to your local database server. Right-click on Databases and select Attach.

In the Attach Databases window, click the Add button in the middle of the screen and browse to where you put the CodeCamp database. Select it, click OK. You should see something similar to this:

Click OK to attach the database. If you look at the database, you will see there is a table called Customer, and a handful of stored procedures that we are going to use in the WCF service.

We want to migrate this database to a WASD so we can run our service against the cloud. If you are only going to run this service locally, you can skip this step and go directly to step 2.

To migrate the database, we’re going to use the SQL Azure Migration Wizard from CodePlex. You can use this handy dandy tool to analyze your database and see if you can migrate it to Azure or not. When I ran this on GoldMail’s database, I found a CLR routine that I didn’t know was in there. CLR routines aren’t allowed in WASD’s, so I replaced it with some T-SQL code in the stored procedure where it was being called. I also use this tool when I create new tables and procedures locally and want to move them into the staging database for official testing, and to copy data from the production database to the staging database.

The other thing you may see when you migrate your own database is that all tables in a WASD require a clustered index. If your table does not have one, the migration wizard will add one for you.

Download the version of the wizard that you need (depending on your version of SQL Server) and unzip it. Then double-click on the SQLAzureMW.exe file to run it. You should see this:

Select SQL Database under Analyze/Migrate and click Next. Specify your local SQL Server instance and authentication, and click Connect. You can try the dropdown on ServerName, but this doesn’t always work for me, so I usually end up typing in the name of my computer and SQL Server instance. After clicking Connect, you should see the list of databases, and you should see [CodeCamp] in the list.

Select CodeCamp and click Next.

You can now select how much of your database you want to migrate. We had two huge tables with millions of records, and the rest of the tables had fewer than 10,000 records. So the day we actually moved everything into production on Azure, we migrated everything except those two tables, and while we were putting all of the services into place and testing the client applications, we migrated the last two tables.

In our tutorial, we want to migrate the whole database, so leave “all database objects” selected, and click Next to review your selections, then click Next to generate the SQL Script. This wizard uses BCP to copy the data, so the amount of time it takes will depend on how much data you have. We have none, so it should be pretty fast!

After reviewing the results summary, click Next. Now you’re going to specify the WASD you’re going to import the database into.

The SERVER name is in the format of SERVER.database.windows.net, with SERVER being your WASD server, which is assigned by Microsoft after you set up your server in the Windows Azure Portal.

Since WASD does not support Windows Integrated Security, you have to specify a username and password to the database. I’m going to use my administrative account (the name of my WASD server is blurred out).

After connecting, you should see the list of databases on the server. Click on the [Create Database] button at the bottom and create a database called CodeCamp.

After creating the database, be sure it’s selected, and then click Next. It will ask if you want to execute your script against the CodeCamp database; click Yes. It will now show you the results of running the script.

If there were any errors, they would be displayed in red. Now you can just exit the wizard. If you now connect to your server in Azure (with SQL Server Management Studio, for example), you will be able to see the CodeCamp database.

So at this point, you have the database set up in Azure and are ready to start creating the WCF service to access it. We’ll cover that in step 2.

Azure for Developers: Introduction to the Tutorial

July 8, 2012

I’ve been travelling around giving a two-part talk called “Azure for Developers”, in which I show how and why to use each of the features of Windows Azure. I do this by building a WCF service that performs CRUD operations against a SQL Azure database or Windows Azure Table Storage, and setting up a client application to call the service. The point is to show the features of Windows Azure and talk about how I’ve used them in production, and focus on the programming.

I’m turning the talk into a series of seven blog posts in case other people find it helpful, especially if you’re just starting out with Windows Azure development. This series will give you all of the code to create a working service that you can run in the cloud or in the compute emulator, and a client that you can run to test the service. I will also provide the database structure and stored  procedures. So if you follow all of the instructions, then by the end of step 7, you should have your own version of the working service, and you will know how to create a web role and a worker role, and how to programmatically talk to SQL Azure, Windows Azure queues, Windows Azure table storage, and Windows Azure blob storage.

Let’s start with the objective of the service – what is it going to do? Here is what the client application looks like. We’re basically going to write the backend for this application and then hook up the application to call the backend.

We will have four fields that the user can add, update, and retrieve. They can also add a message to a queue, and we will retrieve the message from the queue and write their information out to blob storage.

Let’s talk about the architecture of the whole application. Here’s what we will be creating:

  • A database with a table and the stored procedures to retrieve the data (provided)
  • A web role hosting a WCF service that exposes methods corresponding to the features of the client application, reads from and writes to the SQL Azure database, and puts messages on a Windows Azure queue. 
  • A worker role that will check the queue for messages and retrieve them, then write them to blob storage.
  • A client application to call the WCF service and show the results. (provided)

To follow along and create your own working version of the client and the service, you need the following:

  • SQLServer Express or SQLServer installed on your local machine (SQLCE won’t work)
  • If you want to run the service in the cloud, you  need a Windows Azure subscription with at least one service, one storage account, and one SQL Azure server. You can do this entire series in the compute emulator without publishing to the cloud.
  • Visual Studio 2010 or 2012 with the Tools/SDK 1.6 or 1.7 installed. The finished projects I post at the end will use VS2010 and target 1.6 for the widest audience.
  • An application you can use to view Blob Storage, Table Storage, Queues, and Diagnostics. I use (and will show) the Cerebrata tools, or you can use the (free) Windows Azure Platform Management Tool (MMC) on Codeplex, or any other tool you may prefer.

You need the starting version of the desktop application (without the service code). You can download that here.

You need the CodeCamp database, which you can download here.

Here is the list of steps in the tutorial, in case you want to look ahead.

Upgrading to Windows Azure Tools 1.7

July 4, 2012

At GoldMail, we like to be using the most recent version of the Azure Tools and SDK for two reasons. One, we can take advantage of the newest features, and two, we don’t get too far behind, which may cause problems down the line. This can be problematic, as we have a lot of services, a lot of other work to do, and the Azure team is pushing out iterations pretty quickly. They were coming out every 3-4 months until this last one (June 2012), which was almost 7 months.

All of our infrastructure runs on Windows Azure. The exceptions are the desktop client (WinForms) and the Flash player. So when we update our tools version, it impacts everybody in Engineering, and our build engineer in Operations (we call him Awesome Jack).

The new version of the Tools/SDK (1.7) will run side-by-side with the previous version (1.6) (thank you Microsoft!). From information provided at the MVP Summit, I knew this was going to be the case, so we updated to 1.6 in March, which was a small update that didn’t impact us. This would allow us more flexibility when updating to 1.7.

Upgrading from Tools 1.6 to Tools 1.7

First, I installed the new tools/SDK on one of my personal computers rather than my work computer. This way, if I have any problems with the service or application running with 1.7, I can work on resolving the issue, and I can still get my work done.

I compared the manual installation instructions for Tools/SDK 1.6 against the manual installation instructions for Tools/SDK 1.7, and determined that I only had to install the following:

  • Windows Azure Authoring Tools
  • Windows Azure Emulator
  • Windows Azure Libs for .NET
  • Windows Azure Tools.VS100

If you’re running VS 2012, you can install Windows Azure Tools VS.110.exe instead of VS100 – we haven’t upgraded to VS2012 yet. (Since that changes the development UI, that’s a whole different upgrade project!) The instructions for 1.7 also talk about IIS Express and SQL Server Express LocalDB, but I have full IIS and the full SQL Server installed, so those are unnecessary.

The rest of the instructions and other installs, such as  the URL Rewrite module, are the same as those for 1.6. So I only installed the Tools/SDK bits, carefully reading each License Agreement, just like everybody else. (Haha!)

So let’s see how to update the Visual Studio project. I’ve opened the Customer Services WCF service that is created during my Azure for Developers talk. Because I still have Tools 1.6 installed, I don’t get prompted to upgrade my project. But I want to do the upgrade, so how do I force the solution to upgrade? Right-click on the Cloud project and select Properties.

Now the properties of the cloud project are displayed. It recognizes that I have a newer version of the Tools installed and offers me an upgrade.

I click the Upgrade button and get the Visual Studio Conversion Wizard.

I click Next, and get the screen asking if I want to create a backup before the conversion is performed. (For solutions bound to source control, I always say no, because isn’t that the point of source control??)

I click Next and I get the Ready To Convert dialog, where it warns me about source control.

I click Finish, and I get the Conversion Complete dialog.

I check the box to see the conversion log because I’m nosy and I want to know what got changed, then click on Close. The log is displayed:

Among the changes, it looks like it added a schema version to the csdef and cscfg files. You can now see this at the end of the ServiceDefinition element in the csdef file, and at the end of the ServiceConfiguration element in the cscfg files:

schemaVersion="2012-05.1.7"

Now that it’s upgraded, I just need to publish it to one of my staging services and test it out.

How do we schedule upgrading our services at GoldMail?

It would impact the development schedule too much to publish all of the services to staging, test all of them, and move them into production. So this is how we handle these upgrades.

  • I put a work item into TFS for the upgrade of each service or application. (I want to be sure I can track each one separately, because we’re not going to release them simultaneously.)
  • For each service or application: I open the source code in Visual Studio and update the cloud projects. (Note: if you have more than one project, you need to update each of them.) Then I publish the project to staging and (assuming that worked) shelve the changes from the upgrade.
  • If the deployments work, I’ll do some cursory checking to make sure there aren’t any immediately obvious showstoppers. Then I will check in all of the changes. (No guts, no glory).
  • At this point, the other developers and the build engineer have to install the new tools/SDK. Until they do, they won’t be able to open any of the projects. (It’s best to check and make sure they’re not in the middle of something before checking in the changes!)
  • For each service, if we are currently working on it, I assign the work item for upgrading that service to that release. If we have an upcoming release that we haven’t started working on yet, we wait and include the Tools update with that release.

We are constantly working on the different products, so usually this will end up with most services being upgraded in 2-3 months. At that point, we will test whatever’s left and then release it.

If there is a feature we really need to use, we will accelerate the release schedule. But otherwise, this kind of lazy upgrade schedule works really well for us, even though it takes a little longer. Tools/SDK 1.6 came out before we’d finished upgrading everything to 1.5.

Of course, since 1.6 and 1.7 run side-by-side, we could wait and upgrade each product before we are going to make other changes, but I’m always afraid we’ll forget to upgrade something, and this way ensures that we get everything updated eventually.

3/8/2014 GoldMail is now doing business as PointAcross.

San Diego Code Camp 2012

July 4, 2012

I gave my “Azure for Developers” talk at the San Diego Code Camp a couple of weeks ago. In this talk, I show how to set up a WCF service that performs CRUD operations and call it from a Windows Forms client. This shows all of the features of Windows Azure, including web roles, worker roles, diagnostics, blob storage, table storage, and queues.

The code from the talk includes the WCF service (CustomerServices), the Windows Forms application (TestService), and the SQLServer database (which is uploaded to SQL Azure using the SQL Azure Migration Wizard from codeplex).

You can download the code packet here.

You can download the SQL Azure Migration Wizard here.

Thanks to everyone who attended the talk; I hope you enjoyed it!