Managing snapshots of Azure blobs with PowerShell

August 19, 2015

In the previous article in this series, we talked about accessing and modifying the properties and metadata of a blob using PowerShell. In this module, we’ll look at how to manage snapshots – adding them, removing them, and viewing them.

Using snapshots lets you keep history for a blob, allowing you to revert to a previous version. In this blog entry, we’ll upload a blob, set the metadata, then take a snapshot. Then we’ll do that a couple more times. Then we’ll examine the snapshot, see how to promote it, and how to delete it.

If you don’t know anything about snapshots, or you need a refresher course, please read about snapshots in the Just Azure series about blob storage, where I explain everything you ever wanted to know about blob snapshots. That article shows how to do snapshots with C# code. We’re going to use PowerShell instead.

Setup

As we’ve seen before, first you need a storage account context; this is the same as in the previous posts. To define a storage account context, you need the storage account name and key, so set those variables first. You can get these from the Storage Account in the Azure Portal.

$StorageAccountName = "yourStorageAccountName"
$StorageAccountKey = "yourStorageAccountKey"

Now let’s set up the storage account context.

$ctx = New-AzureStorageContext -StorageAccountName $StorageAccountName `
         -StorageAccountKey $StorageAccountKey

Upload a file and get a reference to the blob

First, let’s set a container name. You can use an existing container if you like; just plug in the container name. I’m going to create a new container called “snapshots”, and set the access level to “Blob”, which means the container will provide public access to the blobs.

$ContainerName = "snapshots"

New-AzureStorageContainer -Name $ContainerName -Context $ctx -Permission Blob

Now I’m going to upload files from a local directory. I selected pictures, but you can use any kind of files you want. Set a variable to the local folder so you don’t have to type it in repeatedly. I’ve set this one to match the folder where I have pictures I want to upload.

$localFileDirectory = "D:\_Temp\TestImages\"

Now I’ll set the blob name. I’ll upload each picture to the same blob, so this won’t change.

$BlobName = "RandomPicture.jpg"

Now let’s set the name of the local file and upload the picture.

$localFileName = "DogInCatTree.png"

$localFile = $localFileDirectory + $localFileName

Set-AzureStorageBlobContent -File $localFile -Container $ContainerName `
     -Blob $BlobName -Context $ctx

Now we’ll get a reference to the blob. This will point to the base blob, so we only have to do this the first time, because we’re going to keep uploading pictures to the same blob.

$Blob = Get-AzureStorageBlob -Context $ctx -Container $ContainerName -Blob $BlobName

$CloudBlockBlob = [Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob] $Blob.ICloudBlob

Now that we have a reference to the blob, we can modify the metadata and create some snapshots.

Create Snapshots

Let’s set the metadata on the blob with one key-value pair. The key will be “filename” and the value will be the file name of the blob that was uploaded. This way, we can tell what the original file was that was uploaded. Then we’ll save the change (SetMetadata) and create a snapshot.

$CloudBlockBlob.Metadata["filename"] = $localFileName

$CloudBlockBlob.SetMetadata()

$CloudBlockBlob.CreateSnapshot()

Let’s upload another picture, set the metadata, and create another snapshot.

$localFileName = "GuyEyeingOreos.png"

$localFile = $localFileDirectory + $localFileName

Upload the file, using the -Force argument, which tells the cmdlet to overwrite the blob without prompting if it already exists. If you don’t use this, it will ask if it’s okay to replace the blob.

Set-AzureStorageBlobContent -File $localFile -Container $ContainerName -Blob $BlobName -Context $ctx -Force

Set the metadata to hold the name of the original file before taking the snapshot, then take the snapshot.

$CloudBlockBlob.Metadata["filename"] = $localFileName

$CloudBlockBlob.SetMetadata()

$CloudBlockBlob.CreateSnapshot()

Let’s do the whole sequence one more time so we have more snapshots to look at.

$localFileName = "gizmodo_groundhog_texting.jpg"

$localFile = $localFileDirectory + $localFileName

Set-AzureStorageBlobContent -File $localFile -Container $ContainerName `
    -Blob $BlobName -Context $ctx -Force

$CloudBlockBlob.Metadata["filename"] = $localFileName

$CloudBlockBlob.SetMetadata()

$CloudBlockBlob.CreateSnapshot()

Now we have one base blob and three snapshots. Each snapshot carries metadata recording the name of the file that was uploaded for that version of the blob.
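Incidentally, the upload/set-metadata/snapshot sequence above is the same three steps each time, so if you have many files to version into the same blob, you could collapse it into a loop. Here’s a sketch that assumes the setup variables ($ctx, $ContainerName, $BlobName, $localFileDirectory) and the $CloudBlockBlob reference are already defined; the file names are just the examples from this post:

```powershell
$fileNames = @("DogInCatTree.png", "GuyEyeingOreos.png", "gizmodo_groundhog_texting.jpg")

foreach ($fileName in $fileNames) {
    # Overwrite the same base blob with the next file, without prompting
    Set-AzureStorageBlobContent -File ($localFileDirectory + $fileName) `
        -Container $ContainerName -Blob $BlobName -Context $ctx -Force

    # Record the original file name in the metadata, then take the snapshot
    $CloudBlockBlob.Metadata["filename"] = $fileName
    $CloudBlockBlob.SetMetadata()
    $CloudBlockBlob.CreateSnapshot()
}
```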

Examine the snapshots

Let’s get a reference to the container, then list the blob and its snapshots, showing just the name and snapshot time.

$Container = Get-AzureStorageContainer -Name $ContainerName -Context $ctx

$Container.CloudBlobContainer.ListBlobs($BlobName, $true, "Snapshots") `
     | select Name, SnapshotTime

image-01

The entry without the snapshot time is the base blob.

Let’s save the list to a variable, then loop through the list and show the snapshot date/time and the filename from the metadata for each entry. I’m not going to show blob name, because they’re all the same.

$ListOfBlobs = $Container.CloudBlobContainer.ListBlobs($BlobName, $true, "Snapshots")

foreach ($CloudBlockBlob in $ListOfBlobs) {
    $CloudBlockBlob.FetchAttributes()
    write-host " snapshot time = " $CloudBlockBlob.SnapshotTime `
        " filename = " $CloudBlockBlob.Metadata["filename"]
}

image-02

Notice the base blob has the same metadata as the last snapshot. This makes sense, as we uploaded 3 files, set the metadata, and took snapshots after each one. The last snapshot should match the base blob, and it does.

You can copy a snapshot to a regular blob. Let’s loop through the snapshots and use the filename in the metadata as the new blob name.

foreach ($CloudBlockBlob in $ListOfBlobs) {
    if ($CloudBlockBlob.IsSnapshot)
    {
        $CloudBlockBlob.FetchAttributes()
        $newBlobName = $CloudBlockBlob.Metadata["filename"]
        Start-AzureStorageBlobCopy -ICloudBlob $CloudBlockBlob `
            -DestContainer $ContainerName -DestBlob $newBlobName -Context $ctx
    }
}

Now let’s look at the list of blobs and snapshots in the container.

Get-AzureStorageBlob -Container $ContainerName -Context $ctx `
    | select Name, SnapshotTime

image

Each snapshot was copied back to its original name; you can see they now appear in the container with the original base blob and its snapshots.

Promote a snapshot

Let’s pick a specific snapshot and promote it. This will make that picture the base blob. First, save the list of snapshots for the base blob. Then set the file name you want to look for.

$ListOfBlobs = $Container.CloudBlobContainer.ListBlobs($BlobName, $true, "Snapshots")

$fileNameTarget = "GuyEyeingOreos.png"

Now let’s loop through the snapshots until we find one with a matching file name. We’ll save that specific instance in the variable $CloudBlockBlobSnapshot.

foreach ($CloudBlockBlob in $ListOfBlobs)
{
    $CloudBlockBlob.FetchAttributes()
    write-host "filename = " $CloudBlockBlob.Metadata["filename"]
    if ($CloudBlockBlob.Metadata["filename"] -eq $fileNameTarget)
    {
        write-host "match found"
        $CloudBlockBlobSnapshot = $CloudBlockBlob
    }
}

Now $CloudBlockBlobSnapshot points to that one snapshot. To promote the snapshot, get a reference to the base blob, and then copy the snapshot over the base blob.

$OriginalBlob = Get-AzureStorageBlob -Context $ctx `
    -Container $ContainerName -Blob $BlobName

Start-AzureStorageBlobCopy -ICloudBlob $CloudBlockBlobSnapshot `
     -DestICloudBlob $OriginalBlob.ICloudBlob -Context $ctx

Print out the URI of the base blob so you can copy it and paste it into the browser. You’ll see the base blob now matches the snapshot you selected.

$OriginalBlob.ICloudBlob.Uri.AbsoluteUri

image-04

Delete a specific snapshot

Let’s delete the snapshot. First I’ll show $CloudBlockBlobSnapshot.SnapshotTime to get the date/time stamp so we can verify that’s the one removed. Then I’ll delete the snapshot we are referencing and get a list of blobs and snapshots for that one base blob.

$CloudBlockBlobSnapshot.SnapshotTime

$CloudBlockBlobSnapshot.Delete()

$Container.CloudBlobContainer.ListBlobs($BlobName, $true, "Snapshots")

image-05

Now we can see there are only two snapshots, and the one we deleted is gone.
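A related operation worth mentioning: you can’t delete a base blob while it still has snapshots. If you want to remove the base blob and all of its snapshots in one shot, I believe Remove-AzureStorageBlob with -Force does it (a sketch, using the variables from the setup; check your cmdlet version’s help to be sure):

```powershell
# Remove the base blob together with its snapshots, without a confirmation prompt
Remove-AzureStorageBlob -Blob $BlobName -Container $ContainerName -Context $ctx -Force
```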

Summary

In this article, you’ve seen how to create and view snapshots for a blob. This allows you to retain the history of changes, and retrieve a prior version if needed. You’ve also learned how to promote a snapshot to a regular blob, and how to delete a snapshot. In the next article, I’ll show you how to manage blob leases using PowerShell.

Accessing properties of Azure blobs with PowerShell

July 13, 2015

In the previous articles in this series talking about Azure blob storage and PowerShell, I’ve covered uploading and downloading blobs, copying blobs, and deleting blobs. Now let’s talk about how to read the properties of a blob, and how to update them.

I’m not going to discuss all of the properties of a blob, or the properties of a blob’s Properties. (Yes, a blob has a property called Properties, and that property has properties of its own; it’s a little confusing.) If you want to know all of the properties (and Properties properties) of a blob, please check out this article that I wrote for RedGate, and then come back here to see how to read and modify them using PowerShell.

Setup

As we’ve seen before, first you need a storage account context; this is the same as in the previous posts. To define a storage account context, you need the storage account name and key, so set those variables first. You can get these from the Storage Account in the Azure Portal.

$StorageAccountName = "yourStorageAccountName"
$StorageAccountKey = "yourStorageAccountKey"

Now let’s set up the storage account context.

$ctx = New-AzureStorageContext -StorageAccountName $StorageAccountName `
         -StorageAccountKey $StorageAccountKey

Now we need to set the name of the container that holds the blob that we want to use. Set this to your container name, and specify the name of the blob. I’m going to use the same values I was using in the previous post.

$ContainerName = "images"
$BlobName = "gizmodo_groundhog_texting.jpg"

Get a reference and check the properties

To examine or modify a blob’s properties, first you have to get a reference to the blob through the Azure Blob Service. You can use the cmdlet Get-AzureStorageBlob to access the blob; this returns an object whose ICloudBlob property is typed as the generic interface. You then have to convert it to a CloudBlockBlob object to access the properties and Properties of the actual blob.

$Blob = Get-AzureStorageBlob -Context $ctx -Container $ContainerName -Blob $BlobName

$CloudBlockBlob = [Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob] $Blob.ICloudBlob

Now that we have a reference to the CloudBlockBlob object that represents the actual blob, we can look at the properties using the IntelliSense in PowerShell. In the command window, type in $CloudBlockBlob and a period (.), and it will show all the properties you can access. You can type in the variable with the property to see the value ($CloudBlockBlob.BlobType), or use the Write-Host command as displayed here.

Write-Host "blob type = " $CloudBlockBlob.BlobType
Write-Host "blob name = " $CloudBlockBlob.Name
Write-Host "blob uri = " $CloudBlockBlob.Uri

PSBlobProperties-image-01

To see the Properties properties, fetch the attributes of the blob first. This will populate these properties.

$CloudBlockBlob.FetchAttributes()
Write-Host "content type = " $CloudBlockBlob.Properties.ContentType
Write-Host "size = " $CloudBlockBlob.Properties.Length

PSBlobProperties-image-02

The files got uploaded with a default content type of "application/octet-stream". Let’s change the content type to "image/jpg". To do this, we can just assign a new value to the ContentType property, then call SetProperties to save the change. In the following example, I’m changing the content type, then using the Write-Host statements from above to see the new value.

$ContentType = "image/jpg"
$CloudBlockBlob.Properties.ContentType = $ContentType
$CloudBlockBlob.SetProperties()

PSBlobProperties-image-03

Another important property is the metadata. This is a set of key/value pairs that you can attach to the blob. Here’s an example of how to set the metadata: it assigns both a key and a value for each pair, saves the changes by calling SetMetadata, and then displays the results.

$CloudBlockBlob.Metadata["filename"] = "original file name"
#author is the key, "RobinS" is the value
$CloudBlockBlob.Metadata["author"] = "RobinS"
$CloudBlockBlob.SetMetadata()
$CloudBlockBlob.Metadata

PSBlobProperties-image-04

To clear the metadata, you can call the Clear method on the Metadata, then save the changes.

#blank out the metadata
$CloudBlockBlob.Metadata.Clear()
#save the changes
$CloudBlockBlob.SetMetadata()
#view the saved data
$CloudBlockBlob.Metadata

PSBlobProperties-image-05
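If you only want to drop one key/value pair rather than clearing everything, the Metadata property is a dictionary, so its Remove method should work; here’s a sketch, reusing the "author" key from above:

```powershell
# Remove just the "author" entry, leave the rest, then save and display
$CloudBlockBlob.Metadata.Remove("author")
$CloudBlockBlob.SetMetadata()
$CloudBlockBlob.Metadata
```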

Summary

In this post, I showed you how to access the properties of a blob in Azure Storage, and how to set the properties and the metadata. In the next post, I’ll show how to take a snapshot of a blob and then how to view the available snapshots for a blob.

Deleting and copying files in Azure Blob Storage with PowerShell

July 12, 2015

In my previous post, I showed you how to upload and download files to and from Azure blob storage using the Azure PowerShell cmdlets. In this post, I’ll show you how to delete blobs, copy blobs, and start a long-term asynchronous copy of a large blob and then check the operation’s status until it’s finished. I’m going to continue with the container and blobs that I used in the previous post.

Setup

First, you need a storage account context; this is the same as in the previous post. To define a storage account context, you need the storage account name and key, so set those variables first. You can get these from the Storage Account in the Azure Portal.

$StorageAccountName = "yourStorageAccountName"
$StorageAccountKey = "yourStorageAccountKey"

Now let’s set up the storage account context.

$ctx = New-AzureStorageContext -StorageAccountName $StorageAccountName `
         -StorageAccountKey $StorageAccountKey

Now we need to set the name of the container that holds the blob(s) we want to delete. Set this to your container name.

$ContainerName = "images"

Delete a blob

To delete a blob, you need the storage context, container name, and blob name. So let’s set the blob name, and then use the PowerShell cmdlet to delete a blob.

$BlobName = "GuyEyeingOreos.png"

Remove-AzureStorageBlob -Blob $BlobName -Container $ContainerName -Context $ctx

Now get the list of blobs and you’ll see that the one you deleted is no longer there.

Get-AzureStorageBlob -Container $ContainerName -Context $ctx | Select Name

PSBlobDelete_image-01

Copy a blob

Now let’s copy a blob to another blob with a different name. We’ll use the Start-AzureStorageBlobCopy cmdlet to do this. The copy is done asynchronously. If it’s a huge file, we can check the progress of the copy. I’ll show you how to do that after this. Here, I’m going to copy one of my blobs to another blob, then get a list so I can see if it worked.

Set the blob name of the source blob, then set the blob name of the target blob. Then do the copy.

$BlobName = "gizmodo_groundhog_texting.jpg"
$NewBlobName = "CopyOf_" + $BlobName

Start-AzureStorageBlobCopy -SrcBlob $BlobName -SrcContainer $ContainerName `
        -DestContainer $ContainerName -DestBlob $newBlobName -Context $ctx

Get-AzureStorageBlob -Container $ContainerName -Context $ctx | Select Name

PSBlobCopy_image-01

You can see the new blob in the listing now.

Copy a blob asynchronously and check its progress

What if you have a really large blob, like a VHD file, and you want to copy it from one storage account to another? Maybe you even want to copy it to a storage account in another region, so you can recreate the VM in that region?

You may have noticed that there’s no CopyBlob cmdlet per se; the cmdlet is Start-AzureStorageBlobCopy. This is because the blob copy runs asynchronously. For small blobs, you don’t generally need to check the status of a copy operation; it will probably finish before you have a chance to check, depending on the size. For large blobs, though, like a VHD file that is 127 GB, the copy will take a while. So you want to be able to kick it off and then check the status later. Let’s see how to do that.

Setup the variables for the operation

I have a VHD file in the container “vhds” in one storage account and I’m going to copy it to a different storage account. Let’s start by setting the variables for the container name and blob name for the source. We will use the same blob name for the source and the destination.

$sourceContainer = "vhds"
$BlobName = "testasynccopysvc-testasynccopy-2015-06-16.vhd"

Next I’ll set up the name and key for the destination storage account. Fill these in with your own information.

$destStorageAccountName = "yourStorageAccountName"
$destStorageAccountKey = "yourStorageAccountKey"

We need a storage account context for the destination storage account.

$destctx = New-AzureStorageContext -StorageAccountName $destStorageAccountName `
        -StorageAccountKey $destStorageAccountKey

Let’s set up the destination container. I’ll set the name and then create the container.

$destContainer = "copiedvhds"
New-AzureStorageContainer -name $destContainer -Context $destctx

Asynchronous copy and status check

Now let’s kick off the copy operation, then check the status by calling Get-AzureStorageBlobCopyState. You have to assign this to a variable representing the destination blob so you can check the status of the resulting blob. Notice that this copy command is a little bit different than the previous one – this one specifically provides the destination context because it’s different from the source context.

$BlobResult = Start-AzureStorageBlobCopy -SrcBlob $BlobName -SrcContainer $sourceContainer `
        -DestContainer $destContainer -DestBlob $BlobName -Context $ctx `
        -DestContext $destctx

$status = $BlobResult | Get-AzureStorageBlobCopyState
# show the status
$status

PSBlobCopyAsync_image-01

This shows that it is just starting the copy operation – the bytes copied is 0. Wait a few seconds and check the status again…

PSBlobCopyAsync_image-02

Now it shows the number of bytes copied is greater than 0.

This PowerShell code will loop until the status is no longer "Pending". It retrieves the status, displays it, sleeps 10 seconds, then loops around and checks the status again.

While ($status.Status -eq "Pending") {
    $status = $BlobResult | Get-AzureStorageBlobCopyState
    $status
    Start-Sleep 10
}

When it’s completed, the final status will look like the following:

PSBlobCopyAsync_image-03

You can see that it has copied all the bytes, and the copy operation was successful.
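One last note on asynchronous copies: if you need to abandon a long-running copy (say you picked the wrong destination for that 127 GB VHD), there is a Stop-AzureStorageBlobCopy cmdlet that aborts a pending copy on the destination blob. A hedged sketch, reusing the destination variables from above:

```powershell
# Abort the pending copy operation on the destination blob
Stop-AzureStorageBlobCopy -Blob $BlobName -Container $destContainer -Context $destctx -Force
```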

Summary

In this post, we went over how to delete a blob and how to copy a blob to another blob. We also learned how to do an asynchronous copy of a large blob from one storage account to another, and check the status repeatedly until it was finished. In my next post, we will learn how to take a snapshot of a blob using PowerShell.

Uploading and downloading files to Azure Blob Storage with PowerShell

July 8, 2015

In my previous post, I showed you how to set up PowerShell so you can use it to perform commands against blob storage. In this post, I’ll show you how to create a container in blob storage and then upload files from the local machine to blob storage, and how to download files from blob storage to the local machine.

Setup for transferring files

When using hardcoded values (such as storage account name), I tend to put them in variables, and then use the variable in the script. This makes it easier to change the hardcoded values, especially if they are used in multiple places, which makes your script more flexible for different storage accounts, containers, etc.

First, set up a folder with some pictures or files in it that you can upload. Then set up a folder to which you can download blobs. I’m going to use “D:\_Temp\_AzureFilesDownloaded” for my target directory for downloads, and “D:\_Temp\_AzureFilesToUpload” to hold pictures that I can upload.

Now run PowerShell ISE as an administrator like you did in the previous post. If PowerShell is on your task bar, you can right-click on it and select “Run ISE as administrator.” We’re going to write a script in the script window and run the commands as we need to. When we’re done, you’ll have a script you can use to upload and download files. I tend to do this because it’s good to create your own library of samples that you can reuse.

Next, let’s set up variable names for the storage account name and key. I’m going to use $StorageAccountName and $StorageAccountKey. (If you didn’t know it, the $ prefix indicates a variable name). Fill your storage account name and key in. You can get these from the Storage Account in the Azure Portal.

$StorageAccountName = "yourStorageAccountName"
$StorageAccountKey = "yourStorageAccountKey"

When you access a storage account from PowerShell, you need to use what’s called a context. You create this using the storage account name and key, and pass it in with each PowerShell command so it knows which account to use when running the command. Here’s the command:

$ctx = New-AzureStorageContext -StorageAccountName $StorageAccountName `
         -StorageAccountKey $StorageAccountKey

Note the backwards tick mark (`) after $StorageAccountName. This is a continuation character in PowerShell.

When accessing blobs, you have to specify which container the blob is in. So let’s specify a variable for container name. This can be a container that already exists or a new container (I’ll show you how to create a new container in a minute.)

$ContainerName = "images"

This is how you create a new container with public access to the blobs. If you try this and the container already exists, it will give you an error.

New-AzureStorageContainer -Name $ContainerName -Context $ctx -Permission Blob

So now we have the storage context and the container name set up. Set a variable for the local file directory:

$localFileDirectory = "D:\_Temp\_AzureFilesToUpload\"

Upload blobs from local folder

One of the files in my folder is called "SnowyCabin.jpg". I’m going to set a variable for $BlobName (the actual name of the blob) and then append it to $localFileDirectory to create the path to the local file. Then we’ll use the Set-AzureStorageBlobContent cmdlet to upload the file. To use that, you specify the path to the local file, the name of the container, the name of the blob, and the storage context.

$BlobName = "SnowyCabin.jpg"
$localFile = $localFileDirectory + $BlobName
Set-AzureStorageBlobContent -File $localFile -Container $ContainerName `
        -Blob $BlobName -Context $ctx

So now I’ve uploaded that one file. I can check blob storage and I’ll see the file there in the container called "images". (To check blob storage, you can use one of the Azure Portals, Visual Studio Azure Explorer, or a storage explorer product like Azure Management Studio from Cerebrata.) I can also query the storage account for the blobs in that container using the cmdlet Get-AzureStorageBlob.

Get-AzureStorageBlob -Container $ContainerName -Context $ctx

PSBlobUpDown_image-01

Let’s upload some more blobs and do another list. I’m going to upload 3 more pictures and then get a list of blobs in the container. This is the same PowerShell as above, repeated 3 times with different files. I’ve added "| Select Name" to the end of Get-AzureStorageBlob. The first character is a pipe symbol; this takes the output from Get-AzureStorageBlob and displays just the name.

$BlobName = "BluebellsAndBeechTrees.jpg"
$localFile = $localFileDirectory + $BlobName
Set-AzureStorageBlobContent -File $localFile -Container $ContainerName `
        -Blob $BlobName -Context $ctx

$BlobName = "gizmodo_groundhog_texting.jpg"
$localFile = $localFileDirectory + $BlobName
Set-AzureStorageBlobContent -File $localFile -Container $ContainerName `
        -Blob $BlobName -Context $ctx

$BlobName = "GuyEyeingOreos.png"
$localFile = $localFileDirectory + $BlobName
Set-AzureStorageBlobContent -File $localFile -Container $ContainerName `
        -Blob $BlobName -Context $ctx

Get-AzureStorageBlob -Container $ContainerName -Context $ctx | Select Name

image-02

Download blobs to local disk

First, we need to set a variable for the target directory on the local machine.

$localTargetDirectory = "D:\_Temp\_AzureFilesDownloaded"

To download a blob, use the cmdlet Get-AzureStorageBlobContent. I’m going to download 3 of the files I uploaded.

$BlobName = "BluebellsAndBeechTrees.jpg"
Get-AzureStorageBlobContent -Blob $BlobName -Container $ContainerName `
        -Destination $localTargetDirectory -Context $ctx

$BlobName = "gizmodo_groundhog_texting.jpg"
Get-AzureStorageBlobContent -Blob $BlobName -Container $ContainerName `
        -Destination $localTargetDirectory -Context $ctx

$BlobName = "GuyEyeingOreos.png"
Get-AzureStorageBlobContent -Blob $BlobName -Container $ContainerName `
        -Destination $localTargetDirectory -Context $ctx

Now if I look in that local directory, I see those files.
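If you want to pull down everything in the container instead of naming each blob, you can probably pipe the blob listing straight into the download cmdlet. A sketch; parameter binding can vary between versions of the cmdlets, so treat this as an assumption to verify:

```powershell
# Download every blob in the container to the local target directory
Get-AzureStorageBlob -Container $ContainerName -Context $ctx |
    Get-AzureStorageBlobContent -Destination $localTargetDirectory -Context $ctx
```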

Summary

In this post, I showed you how to upload files to Azure blob storage from the local disk, and how to download files from Azure blob storage to the local disk. In my next post, I’ll show you how to delete a blob and how to copy a blob.

Azure Storage and PowerShell 101

July 7, 2015

I’ve been working for a while on a deep-dive course about Azure Storage for Opsgility. This will include video training, and Opsgility will offer live training classes using the material I’ve written. One of the things that I’ve found really difficult is doing anything remotely complicated with blob storage and PowerShell. For example, how do you take snapshots of a blob in PowerShell? You go ahead and bing that and let me know what you find. (whistling) Yeah, I didn’t find anything either. So I did a bit of trial and error, and thought I would blog some of the results in a series of posts, of which this is the first.

In this article, we’ll see how to install the PowerShell cmdlets and set up a default Azure subscription and Azure storage account. You must already have an Azure account. If you don’t have one, you can sign up for a free trial at http://azure.microsoft.com.

Let’s begin by installing the PowerShell cmdlets.

  1. Open your browser and navigate to http://azure.microsoft.com.
  2. Click Downloads and then click the Install link under Windows PowerShell.
  3. Click Run when prompted and follow the instructions to install the cmdlets.

You can also install the cmdlets using the Web Platform Installer, or by downloading the msi from GitHub.

There are two ways to run PowerShell. There’s the command line, and then there’s the ISE, which is basically a window that allows you to write a script and execute it one line at a time. You can also select a range of lines to run, or even run the entire script. I use this a lot to test my scripts.

I like to run PowerShell ISE from my task bar. If you want to try that, here’s how to set that up in Windows 8.1 or Windows 10.

  1. Run Microsoft Azure PowerShell. To do this in Windows 8, you can go to the Start screen and type “powershell” and select it. In Windows 10, you can go to the Start menu and look for it. This runs the command-line version.
  2. After it’s running, right-click on the icon in the task bar and select the option to pin this program to the taskbar.
  3. Close the PowerShell command line window.
  4. Right-click on the icon in the task bar, and you’ll see a bunch of options. Choose “Run ISE as Administrator”.

This should look like the following – you should have a white window on the top for the script and a blue window at the bottom. The window at the bottom will show the output from any parts of the script that you run. You can also type commands into the bottom window directly if you want to.

image

Before you can run anything, you need to set up the Azure account you’re going to use. To do this, you’re going to use the command "Add-AzureAccount". You can type it into the script window and highlight it, then click the icon for "Run Selection" (F8 also works). This is the icon with the red square around it in this image:

image

If you click the green play button, it runs the whole script.  We’re going to use this a lot in future articles.

For now, I’m going to put my command in the command window and run it – just type it in and press Enter. You will be prompted to enter your Azure subscription account information. After authentication, it will show the properties of the Azure account in the output window. My subscription name is “Azure Pass”. Write down your subscription name and set it as the current subscription and the default subscription by executing this command:

> Select-AzureSubscription -SubscriptionName "Azure Pass"

If you want to see a list of the subscriptions you have set up, you can retrieve them and show just the name of each subscription like this:

> Get-AzureSubscription | Select SubscriptionName

You can see just the subscription set as current by using this command:

> Get-AzureSubscription -Current

If you don’t have a storage account, you’ll need to add one. You can go to the portal to do this, or you can do it right here in PowerShell. Here’s the command:

> New-AzureStorageAccount -StorageAccountName "nameofnewaccount" -Location "West US"

If the storage account already exists, you will get an error message. If the account was added successfully, the new storage account will be displayed in the output window.

Now set that storage account as the default account for this subscription to use. These defaults will be used the next time you run PowerShell. My new storage account is called “robinsnewstorage”.

> Set-AzureSubscription -SubscriptionName "Azure Pass" -CurrentStorageAccountName "robinsnewstorage"

Now view the current Azure Subscription again and it will show the current storage account name.

image

In the next few articles, I’ll show you how to access the storage account and perform some actions against blob storage, such as uploading and downloading files, deleting blobs, and copying blobs (including async copy). I’ll also show you how to access the properties of a blob. Then we’ll move on to the hard topics of snapshots, leases, and shared access signatures.

Date/Time stamps and Azure file shares

February 27, 2015

My good friend Bill Wilder pinged me a while back and asked what date/time stamp is placed on the files in an Azure File share, and where it comes from. I thought that was a good question, and figured the only way to find out is to do a bit of trial & error. So this blog entry is for Bill, who can read it as soon as he digs out from under the 7 feet of snow he’s gotten in the past month.

There are multiple cases I want to look at.

  • Use PowerShell to transfer a file.
  • Log into a VM, attach the file share, and transfer the files between the VM and the desktop using the Windows Clipboard.
  • Write an application that uses the storage client library to transfer files from the desktop to the file share.
  • Use AzCopy to transfer files between the desktop and the file share.

While writing this, I found that if I copy a file from one folder to another in Windows 8, it changes the Created Date to the desktop’s current date/time and leaves the Modified Date as is (File Explorer displays the Modified Date). This seems odd to me, but I guess if you think of Created Date as the date the file was first put in its current location, it could make sense. What’s weird about this is that it makes it really easy to have a file with a Created Date that comes long after the Modified Date. I mention this because there is similar behavior in some of the following cases.
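This Created/Modified split is easy to demonstrate outside of Windows, too. Here’s a quick Python sketch (illustrative only, not part of the original tests) showing that a metadata-preserving copy keeps the original modified date even when the copy itself happens much later:

```python
import os
import shutil
import tempfile
import time

src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()

# Create a file and backdate its modified time by about a year.
src = os.path.join(src_dir, "Chihuly.jpg")
with open(src, "wb") as f:
    f.write(b"fake image bytes")
a_year_ago = time.time() - 365 * 24 * 60 * 60
os.utime(src, (a_year_ago, a_year_ago))

# shutil.copy2 preserves the modified time, like a Windows copy/paste,
# even though the copy itself happens "now".
dst = os.path.join(dst_dir, "Chihuly.jpg")
shutil.copy2(src, dst)

preserved = abs(os.path.getmtime(dst) - a_year_ago) < 2
print(preserved)  # True
```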

For the purposes of these tests, I set the time on my desktop machine to be 5 minutes behind so I could discern between the desktop machine’s date/time and the VM’s date/time (which I am referring to as actual date/time). The tests were run on 2/9/2015 and all of the times are in Pacific Standard Time.

Note: I tried to set the time difference to an hour, but when I did that, I got 403 Forbidden errors with every call to Azure Storage, so I guess they have some kind of check on the time difference of your computer against the real date/time.

I’ve included the PowerShell commands, C# code for using the storage client library, and AzCopy commands, so you can replicate these tests if you want to and aren’t familiar with one of the methods.

Using PowerShell

Note: You may need to install the most recent version of the Azure PowerShell cmdlets from GitHub. At the time of this writing, it is 0.8.11. You can also use the Web Platform Installer, but I’m not a fan, because there’s no telling what it will install along with whatever you asked for. Last time I used it to install the Azure cmdlets, it installed the newest version of the Azure Tools/SDK, which I wasn’t ready to install (2.5 has breaking changes). If you go to the Azure PowerShell releases page on GitHub, you can download the Windows Standalone version (it’s an msi) and install it.

To find which version you have, you can run PowerShell and type the following:

> Get-Module Azure

I’m going to use the PowerShell ISE so I can save my commands as a script in case I want to use them later. First I’m going to set up a couple of variables for storage account name and key. This makes it easy to just modify those fields and run the script for a different storage account. Then I’m going to create the context for the storage account. This tells which storage account to use when this variable is used in subsequent commands.

$storageAcctName = "your storage account name"
$storageAcctKey = "your storage account key ending with =="

$ctx = New-AzureStorageContext $storageAcctName $storageAcctKey

Next, I’m going to create a new file share and retain a reference to it ($s). It knows which storage account to use because I’m supplying the context I just defined. I’m naming my file share “contosofileshare”.

$s = New-AzureStorageShare contosofileshare -Context $ctx

To check and see the files shares set up in a storage account, you can use this command:

Get-AzureStorageShare -Context $ctx

So now I have a file share. I’m going to create a VM in Azure and RDP into it. If you don’t know how to create a VM in Azure, download the Fundamentals of Azure book from Microsoft Press (it’s free!) and go to the chapter on VMs; it has follow-along instructions that show you how to create one. (Disclaimer: I’m one of the authors of the book. I get no royalties, so please don’t think I’m recommending it for financial reasons!)

After logging into the VM, I’m going to mount my file share. First I need a command prompt window. (If your VM is running Windows Server 2012, it’s like Windows 8. Click the windows button to go to the start screen, then just start typing “Command Window” to search for it, and click on it when it shows up on the right side.) I can use a NET USE command to attach the file share. The NET USE command looks like this:

NET USE z: \\storageaccountname.file.core.windows.net\contosofileshare /u:storageaccountname storageaccountkey

After this is done, you can type NET USE and it will show you all of the file shares available, and what drive letter they are mapped to. I mapped mine to z:. Now I can open file explorer and go see what’s on the Z drive.

Back to PowerShell, I’m going to upload a file called Chihuly.jpg. On the desktop, if I right-click on this file and look at its properties (Details):

  • Created Date: 11/18/2014 12:21:38 PM.
  • Modified Date: 7/5/2007 7:52:04 AM.
  • Actual time: 2/9/2015 4:01 PM.
  • Desktop time: 2/9/2015 3:56 PM.

I’m going to upload the file and see what it looks like on the other side. The following command in PowerShell uploads the file.

Set-AzureStorageFileContent -Share $s -Source D:\_temp\_AzureFilesToUpload\Chihuly.jpg

Now if I go to the File Explorer on the VM and navigate to the Z: share and look at that file, the properties displayed are these:

  • Created Date: 2/9/2015 4:01 PM
  • Modified Date: 2/9/2015 4:01 PM

So using PowerShell to upload files to an Azure file share changes both Created Date and Modified Date to the current date/time. The date/time on the desktop machine is not used.

Where does Azure get the Created Date and Modified Date – from a time server somewhere? From the server holding the storage? That’s the question.

Now I’m going to download the file using PowerShell and see what the date/time stamps are on the downloaded file.  The following command in PowerShell downloads the file.

Get-AzureStorageFileContent -Share $s -Path Chihuly.jpg -Destination d:\_temp\_AzureFilesDownloaded\

Desktop time: 6:15 PM
Actual time: 6:20 PM

After the download:

Created Date: 2/9/2015 6:15 PM
Modified Date: 2/9/2015 6:15 PM

From this, we can see that using PowerShell to download files from an Azure file share changes the Modified Date to the date/time on the local machine. In this case, it sets the Created Date to the time on the local machine, but in some test cases, it set it to a few minutes before that. Where does it get the value for created date?

Copy and Paste

Now I’m going to try the same thing with copy (desktop) and paste (vm). I copy the file on the desktop into my Windows Clipboard (Ctrl-C), then click on the VM and go to the share and paste the file (Ctrl-V). This is interesting.

Desktop time: 3:59 PM
Actual time: 4:04 PM
Desktop Created Date: 11/18/2014 12:21 PM PST
Desktop Modified Date: 7/5/2007 7:52 AM

Copy/Paste results:

File Share Created Date: 2/9/2015 4:04 PM 
File Share Modified Date: 7/5/2007 7:52 AM

From this, I’m going to surmise that using copy and paste to upload files to an Azure file share changes the created date to the current date/time, but leaves the modified date as the original value. This is the same behavior as Windows.

Now I’m going to try copying the file from the Azure file share (via the VM) to the local desktop and check the dates.

Desktop time: 6:13 PM
Actual time: 6:18 PM

Created date: 6:13 PM
Modified date: 7/5/2007 7:52 AM

Using copy and paste to download files from an Azure file share changes the Created Date to the current date/time on the desktop, but leaves the Modified Date unchanged – it is the same value as that of the file on the file share. This is the same behavior as Windows.

Storage Client Library

What if I write some code to upload and download files to/from the file share? Let’s look at what that looks like.

First, I need a method to get a reference to the directory on the file share that I’m going to use. This will be used for both the upload and the download. Comments inline tell how this works. It’s very similar to blob storage.

public CloudFileDirectory SetupQuickCommands(string storageAccountName, string storageAccountKey,
    string shareName, string subFolderPath)
{
    //set up file share
    //get reference to the storage account 
    string connectionString = @"DefaultEndpointsProtocol=https;AccountName=" + storageAccountName +
        ";AccountKey=" + storageAccountKey;
    CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(connectionString);

    //create the cloudfileclient using the storage account object
    CloudFileClient cloudFileClient = cloudStorageAccount.CreateCloudFileClient();

    //get a reference to the share
    CloudFileShare cloudFileShare = cloudFileClient.GetShareReference(shareName);

    //does it go in the root directory? need to set CloudFileDirectory instance accordingly
    //this points to the directory on the file share 

    CloudFileDirectory rootDirectory = cloudFileShare.GetRootDirectoryReference();
    CloudFileDirectory cloudFileDirectory = null;

    if (string.IsNullOrWhiteSpace(subFolderPath)) //write it to the root directory
    {
        cloudFileDirectory = rootDirectory;
    }
    else
    {
        //write it to the requested directory
        cloudFileDirectory = rootDirectory.GetDirectoryReference(subFolderPath);
    }
    return cloudFileDirectory;
}

Next, we need code for doing the upload and download. Each of these will call the code above to get the reference to the directory on the file share.

Here is the code to upload a file. After getting the reference to the directory on the file share, this gets a reference to the file and uploads to that file. SubFolderPath is the path on the share to be used. If this is blank, the file goes in the root of the share.

public string QuickUpload(string storageAccountName, string storageAccountKey,
    string shareName, string subFolderPath, string fileName)
{
    CloudFileDirectory cloudFileDirectory = SetupQuickCommands(storageAccountName, 
        storageAccountKey, shareName, subFolderPath);
    //upload the file 
    string fileNameOnly = Path.GetFileName(fileName);
    CloudFile cloudFile2 = cloudFileDirectory.GetFileReference(fileNameOnly);
    cloudFile2.UploadFromFile(fileName, System.IO.FileMode.Open);
    return string.Empty;
}

And here is the code to download a file. The localFolderPath is the destination folder on the local machine; the downloaded files will be placed in a Downloads subfolder underneath it.

public string QuickDownload(string fileName, string storageAccountName, string storageAccountKey,
    string shareName, string subFolderPath, string localFolderPath)
{
    CloudFileDirectory cloudFileDirectory = SetupQuickCommands(storageAccountName,
        storageAccountKey, shareName, subFolderPath);

    //put this in the folder they specify, in a subfolder called Downloads
    //if Downloads doesn't exist, create it 
    string localPath = Path.Combine(localFolderPath, @"Downloads");
    if (!Directory.Exists(localPath))
    {
        Directory.CreateDirectory(localPath);
    }

    //download the file
    CloudFile cloudFile = cloudFileDirectory.GetFileReference(fileName);
    string downloadTo = Path.Combine(localPath, fileName);
    cloudFile.DownloadToFile(downloadTo, FileMode.Create);

    return string.Empty;
}

So now let’s run our tests using the code above.

Desktop time: 5:50 PM

Actual time: 5:55 PM

After uploading the file, the results are as follows:

Created Date: 2/9/2015 5:55 PM

Modified Date: 2/9/2015 5:55 PM

From this, I’m going to surmise that when using the storage client library to upload a file, the Created Date and Modified Date are both set to the actual current date/time.

This makes sense. The PowerShell upload calls the REST API, and so does the storage client library.

When I use the storage client library to download the file to the desktop, what do I get?

Desktop time: 6:21 PM

Actual time: 6:26 PM

After downloading the file, the results are as follows:

Created Date: 2/5/2015 5:27 PM

Modified Date: 6:21 PM

It is setting the Modified Date to the date/time on the desktop machine. Where’s that created date coming from?

What about AzCopy?

This should be the same as the storage client library and PowerShell, but I’m going to try it to be thorough. (If you don’t already have AzCopy, you can download it and learn more about it from Microsoft.) I’m going to list the commands that I used to upload and download files. One thing to note: there is a switch you can add (/mt) that will retain the modified date/time on the blob when you download it. I’ve done this without the /mt switch because I want to see what it gets set to by default.

Desktop time: 4:07 PM

Actual time: 4:12 PM

The command looks like this. Take the space out before storageaccountname – I just put it in so it wouldn’t make it a live hyperlink.

AzCopy d:\_temp\_AzureFilesToUpload\ https:// storageaccountname.file.core.windows.net/sharename/ Chihuly.jpg /DestKey:storageaccountkey

After uploading the file, the results are as follows:

Created Date 4:12 PM

Modified Date: 4:12 PM

When uploading a file using AzCopy, the Created Date and Modified Date are both set to the current date/time.

Desktop time: 6:20 PM

Actual time: 6:25 PM

The download command looks like this. d:\_temp\_AzureFilesDownloaded is the local directory I’m downloading the files to. Again, take the space out before the storage account name.

AzCopy https:// storageaccountname.file.core.windows.net/sharename/ d:\_temp\_AzureFilesDownloaded Chihuly.jpg /SourceKey:storageaccountkey

Created Date: 6:20 PM

Modified Date: 6:20 PM

When downloading a file using AzCopy, the Created Date and Modified Date are both set to the date/time of the local desktop. (It could be that this Created Date is sometimes earlier than Modified Date; I only ran this test a couple of times.)

RESULTS

Uploading files from the desktop to the file share:

PowerShell, Storage Client Library, AzCopy: Uploading files to an Azure file share changes the Created Date and Modified Date to the current date/time.

Copy/Paste: Uploading files changes Created Date to the current date/time, but retains the original Modified Date. This is the same behavior as Windows.

Downloading files:

PowerShell, Storage Client Library, AzCopy: Sets Created Date and Modified Date to the date/time on the desktop. In some cases, the Created Date was actually set earlier than the modified date, so I’m not sure it’s actually using the desktop date/time for that value.

Copy/Paste: Changes the Created Date to the desktop date/time, leaves the Modified Date unchanged.

How does this compare to blob storage?

The deal with the Created Date is just weird, so I wanted to compare this behavior with that of uploading and downloading files using the storage client library and AzCopy to or from blob storage.

When you upload a blob using either method, it sets the Last Modified Date to the current date/time. (There is no created date property on blobs.) This is consistent with uploading files to the file share using PowerShell, the Storage Client Library, and AzCopy.

When you download a blob from blob storage using AzCopy, it sets the Created Date and Modified Date to the current date/time.

When you download a blob from blob storage using the storage client library, I saw some very strange results:

Desktop time: 6:40 PM

Actual time: 6:45 PM

Last Modified Date on the blob is 6:44 PM

Download blob.

Created Date: 2/5/2015 5:27 PM

Modified Date: 6:22 PM 

Remember that unless specified, all of the dates are 2/9/2015. I have no idea where it’s getting that value for Created Date. Note that the Created Date is the same one given to the file when downloading it from the file share using the storage client library. Coincidence? Probably not.

Overall results

In most cases, the date/time stamps assigned to the files as they are transferred between a desktop and an Azure file share make sense. The one case I have no explanation for is where the Created Date comes from when downloading a file to the desktop. While it is frequently equal to the modified date, it is not always the case. Does Azure get this value from a time server somewhere, or from a machine hosting storage in Azure, or what? I don’t know. I’ve submitted this question to the Azure MVP distribution list, and if I get an answer, I’ll update this blog entry.

In the meantime, if you decide to write your own syncing software to sync file shares or blobs, I would consider using a hash of the file or blob to determine whether the files actually differ, and rely on the Modified Date rather than the Created Date.
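As an example of that hash-based approach, here is a minimal Python sketch (not tied to any particular storage SDK; the `needs_sync` routine and its `remote_md5` parameter are hypothetical):

```python
import hashlib

def file_md5(path):
    """Hash a file in chunks so large files don't have to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(64 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def needs_sync(local_path, remote_md5):
    # Compare content hashes instead of trusting Created/Modified dates.
    return file_md5(local_path) != remote_md5
```

Blob storage exposes a Content-MD5 property you could compare against; for files you may have to compute and store the hash yourself (in metadata, for example).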

ClickOnce deployment and VS2013 Update 3

September 5, 2014

As many of you know, I still answer the occasional question about ClickOnce deployment. This is a deployment feature that lets you host a client or console application deployment on a web server, a file share, or even in Azure blob storage, and perform automatic updates when there are changes.

For the benefit of all those who use this technology, I try to keep up with what Microsoft’s doing with it. That’s not as hard as it might seem, as they rarely do anything with it. (I can sense you all nodding your heads.) However, there are a couple of changes in VS2013 Update 3 that are pertinent specifically to ClickOnce deployment. I’ve known about this for a couple of months, so I’ve already gotten over the shock. If you need a few minutes to recover, please go ahead. I’ll be here when you come back.

Problem using WinRT APIs in a WPF application

The first issue involves calling WinRT APIs from a WPF application. Scott Hanselman had a blog entry showing how to call WinRT APIs in Windows 8 from C# desktop applications (WPF), but nobody could get it to work. The error was “Resolve Manifest Files task failed” when trying to build the WPF application. To eliminate this problem, go to your project’s Properties, go to the Publish tab, and then select Application Files. Set the publish status of all “*.winmd” files to “Exclude” as displayed in the following image:

image

Signing a .NET 4.0 or lower app with a SHA256 code-signing certificate

The second issue shows up when you use one of the new SHA256 code-signing certificates to sign an application that targets .NET 4.0 or earlier. When you do this, the hash for the deployment manifest (.application file) is generated using the SHA256 algorithm, which cannot be understood by .NET 4.0 and below. Running the application on a machine that only has .NET 4.0 or below would result in exceptions like “The application is improperly formatted”, “The manifest may not be valid”, “Manifest XML signature is not valid”, or “SignatureDescription could not be created for the signature algorithm supplied”.

With VS2013 Update 3, Microsoft has updated the build tasks to generate the hash using the SHA1 algorithm if the application targets a .NET version below 4.5, but still use a SHA256 hash for .NET 4.5 and above. So you no longer have to install .NET 4.5 just because you’re using a SHA256 certificate.
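To see why the two algorithms are not interchangeable, note that they produce digests of different sizes (and exercise different verification code paths in the runtime). A quick Python illustration, with stand-in manifest bytes:

```python
import hashlib

# Stand-in bytes; a real deployment manifest is an XML (.application) file.
manifest_bytes = b"<asmv1:assembly>...</asmv1:assembly>"

sha1_digest = hashlib.sha1(manifest_bytes).hexdigest()
sha256_digest = hashlib.sha256(manifest_bytes).hexdigest()

print(len(sha1_digest))    # 40 hex chars (160-bit digest)
print(len(sha256_digest))  # 64 hex chars (256-bit digest)
```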

Summary

In this article, I discussed a couple of issues fixed in ClickOnce deployment with VS2013 Update 3. If you have seen any of the issues mentioned above, please install the update and let me know if you still have any problems.

Implementing SLAB in an Azure service

July 19, 2014

In my previous post,  I talked about design considerations when using SLAB, and presented a class to be changed from regular trace logging to semantic logging using SLAB. In this post, I’m going to show you how to set up the event logging with SLAB, how to write the entries to Windows Azure Table Storage, and how to implement the design to change the class to use SLAB instead of regular trace logging.

To implement SLAB, first you need to download and install the NuGet package. In Visual Studio, click Tools / NuGet Package Manager / Manage Packages for Solution and search for “semantic logging application block”. You will get several entries – one for the EventSource analyzer, one for each sink, one for the out-of-process service, and one that is just the semantic logging application block. Select the last one, plus the Azure Sink, which will allow us to write our results to Table Storage.

image

Now we need to add a class that inherits from EventSource. I’m going to call it CustomerSvcEventSource. In the bare bones of the class, I will add a static Log property used to raise events, a static Instance field that provides access to an instance of the class, and a private constructor. Here’s the bare bones of the class.

public class CustomerSvcEventSource : EventSource
{        
  //create a static field instance that provides access to an instance of this class
  private static readonly Lazy<CustomerSvcEventSource> Instance
    = new Lazy<CustomerSvcEventSource>(() => new CustomerSvcEventSource());

  //create a static property called Log that returns the current value 
  //of the Instance field of the event source. This value is 
  //determined by the custom event methods called in the Customerservices application.
  private CustomerSvcEventSource() { }

  public static CustomerSvcEventSource Log
  {
    get { return Instance.Value; }
  }
}

The next thing we need to do is to set up the listener. This needs to start up when the application starts up, and close when the application closes. Since this is a WCF service, I’m going to put this in the Application_Start method and the Application_End method in Global.asax.cs.

I need these two using clauses:

using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;
using Microsoft.WindowsAzure.ServiceRuntime;

I define the listener in the Global.asax.cs class:

public ObservableEventListener listener;

Now put the code to start up the listener in Application_Start. “CustomerSvc” is the name of the listener. The RoleEnvironment call gets the connection string of the storage account I want to write the logs to. In EnableEvents, I provide the event source (via its Log property), the logging level, and keywords. You can have your listener write only errors by changing the EventLevel, or only specific Keywords. You can also set up multiple listeners and send the results of each to different tables. (You can specify the table name when instantiating the listener.)

//slab listener
listener = new ObservableEventListener();
listener.LogToWindowsAzureTable("CustomerSvc", 
  RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
listener.EnableEvents(CustomerSvcEventSource.Log, EventLevel.LogAlways, Keywords.All);

Put the code to stop the listener in Application_End:

if (listener != null)
{
  listener.DisableEvents(CustomerSvcEventSource.Log);
  listener.Dispose();
}

Now we can go back and update our Event Source with keywords, tasks, opcodes, and then finally the actual events. Here are the results of our analysis in the previous post on what events we’d like to log, including the keywords, tasks, and logging levels:

  • Database retry errors
    • Event id 101, Keyword=Database, Task=DatabaseRetry, Level = Error
  • Database messages that are errors/exceptions
    • Event id 102, Keyword=Database, Task=[DatabaseRead or DatabaseWrite], Level = Error
  • Database messages that are informational
    • Event id 103, Keyword=Database, Level = Informational
  • Errors with the input parameters
    • Event id 104, Keyword = ParameterError, Level = Warning
  • GetCustomerFavorites Entry Point (with params in)
    • Event id 105, Keyword = EntryPoint, Level = Informational
  • Output parameters (i.e. results)
    • Event id 106, Level = Informational
  • General exception
    • Event id 107, Level = Error

Let’s set up the Keywords. You can have up to 64, and they have to be in a nested class in the EventSource class. Each keyword value is a 64-bit integer, which is treated as a bit array, so they all have to be powers of 2 – 1, 2, 4, 8, 16, 32, etc.

From the information above, we have 3 keywords. I’m adding this in to our EventSource class:

public class Keywords
{
  public const EventKeywords Database = (EventKeywords)1;
  public const EventKeywords ParameterError = (EventKeywords)2;
  public const EventKeywords EntryPoint = (EventKeywords)4;
}
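The power-of-2 requirement is what lets an event carry several keywords at once and lets a listener filter them with bitwise operations. A quick Python sketch of the same masking logic (the values mirror the Keywords class above):

```python
# The same three flags as the Keywords class above, expressed in Python.
DATABASE        = 1  # 2**0
PARAMETER_ERROR = 2  # 2**1
ENTRY_POINT     = 4  # 2**2

# An event can carry several keywords at once by OR-ing the bits together.
event_keywords = DATABASE | ENTRY_POINT

def matches(event_bits, listener_mask):
    # A listener filter just ANDs its mask against the event's keyword bits.
    return (event_bits & listener_mask) != 0

print(matches(event_keywords, DATABASE))         # True
print(matches(event_keywords, PARAMETER_ERROR))  # False
```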

Now let’s add the Tasks. We have three. These do not have to be powers of 2.

public class Tasks
{
  public const EventTask DatabaseRead = (EventTask)1;
  public const EventTask DatabaseWrite = (EventTask)2;
  public const EventTask DatabaseRetry = (EventTask)3;
}

What’s left? We didn’t use any OpCodes, but they are set up the same way as the tasks. We assign the event id’s and levels when we add the events to the EventSource class. So let’s go back to our bare bones Event Source class and start adding the methods for writing the events.

The first event I’m going to write is the EntryPoint. You put the event id in the attribute of the method, as well as the Message, Level, and Keywords. You call WriteEvent inside the method, passing along the parameters that were passed in. WriteEvent will write a line to table storage containing the formatted message and the parameters in JSON format.

[Event(105, Message = "[GetCustomerFavorites] Called/entry point. FirstName = {0}, LastName = {1}", 
  Level = EventLevel.Informational, Keywords = Keywords.EntryPoint)]
internal void GetCustomerFavoritesCalled(string firstName, string lastName)
{
  if (this.IsEnabled()) this.WriteEvent(105, firstName, lastName);
}

The event id is 105. I’ve put if (this.IsEnabled()) in front of my WriteEvent so it won’t throw an exception if for some reason the Event Source is not enabled.

So now we need to call this from the entry point in our class, the code for which is posted in the previous blog post. It will look like this:

CustomerSvcEventSource.Log.GetCustomerFavoritesCalled(firstName, lastName);

The next one I’m going to add is for the parameter errors. I’m going to make this one generic, and pass in the method name to be written as part of the payload. Then I can call this from anywhere.

[Event(104, Message = "Parameter error in method {0}, msg = {1}", Level = EventLevel.Error, 
  Keywords = Keywords.ParameterError)]
internal void ParameterError(string methodName, string errorMessage)
{
  if (this.IsEnabled()) this.WriteEvent(104, methodName, errorMessage);
}

Here is an example of calling this:

errorMessage = "First and last name are blank.";
CustomerSvcEventSource.Log.ParameterError("GetCustomerFavorites", errorMessage);

I’m calling this in the same places I’m calling the trace logging for the invalid parameters. (I’m going to post the entire class when I’m finished.)

One thing to note here – this shows the advantage of using this method of logging. You don’t have a whole bunch of trace statements that essentially do the same thing, which open you up to typos. You’re making the message once in the event definition and using it repeatedly. Of course, this means if you have a typo in the event definition, it will be displayed in every event message. But as a friend of mine once said about programming rules: Right or wrong, be consistent.

Here is a method for the case where I read the database and should find data, but I don’t. Note that this one has a Task attached to it, and the keyword is Database. (You could argue whether or not this should have the Database keyword, but we’ll leave it “as is” for an example.) 

[Event(102, Message = "Method {0}; no data found, but should be. Input Info = {1}.",
  Level = EventLevel.Warning, Keywords = Keywords.Database, Task = Tasks.DatabaseRead)]
internal void DatabaseRecordsNotFound(string methodName, string extraInfo, string errorMessage)
{
  if (this.IsEnabled()) this.WriteEvent(102, methodName, extraInfo, errorMessage);
}

And here’s what the call looks like:

CustomerSvcEventSource.Log.DatabaseRecordsNotFound("GetCustomerFavorites", 
  string.Format("FirstName = {0}, LastName = {1}", firstName, lastName),
  errorMessage);

Note – I’m consolidating a bit here. If you wanted to truly make this 100% SLAB-ified, you could make this less generic, call it something like GetCustomersDataRNF, pass the first and last name in as parameters, and only use it for this one case. The same goes for some of the other methods you’re going to see below. You get to decide how much granularity you want to have.

Here is a method for database data conversion errors. Note that this one has a Task attached to it, and the keyword is Database. (Again, you could argue whether or not this should have the Database keyword, since it’s a coding error.)  This is called when trying to convert the parameters to local variables. This is an addition to the original design from the previous blog post. When working on this, flexibility is important. When you decide to implement your event source, you may find you want different information than originally decided.

[Event(108, Message = "Error converting the database fields to output parameters in method {0}. "
  + "ExtraInfo = {1}, ex = {2}",
  Level = EventLevel.Error, Keywords = Keywords.Database, Task = Tasks.DatabaseRead)]
internal void DatabaseFieldConversionError(string methodName, string extraInfo, string errorMessage)
{
  if (this.IsEnabled()) this.WriteEvent(108, methodName, extraInfo, errorMessage);
}

And here’s what the caller looks like: 

CustomerSvcEventSource.Log.DatabaseFieldConversionError("GetCustomerFavorites",
  string.Format("firstName = {0}, lastName = {1}", firstName, lastName),
  CustomerSvcEventSource.FormatException(ex));

I realized I am going to need to be able to log exceptions. I want all of them to have the same format, so I have added a FormatException method to my Event Source class that I can call from anywhere. You might want to add the InnerException to it; this is just a basic example.

public static string FormatException(Exception exception)
{
  return exception.GetType().ToString() + Environment.NewLine + exception.Message;
}

I’m actually calling this when sending the information in to write the event. This way, if I want to format exceptions differently depending on where they are, I can just set up another FormatException type method and call that instead.

Here is the SQL DB Retry error event method. By including the TryCount as a parameter, I can search my logs for entries where TryCount > 3, and find the places that didn’t work at all because it could not connect to the database within 3 retries.

[Event(101, Message = "DB Retry in method {0}, MiscInfo = {1}, TryCount = {2}, "
  + "RetrySleepTime = {3}, ex = {4}",
  Level = EventLevel.Error, Keywords = Keywords.Database, Task = Tasks.DatabaseRead)]
internal void DatabaseRetry(string methodName, string miscInfo, int tryCount, int sleepTime, 
  string exceptionInfo)
{
  if (this.IsEnabled()) this.WriteEvent(101, methodName, miscInfo, tryCount, sleepTime, 
    exceptionInfo);
}

And this is the caller, which is in the catch for the SQL Exception:

CustomerSvcEventSource.Log.DatabaseRetry("GetCustomerFavorites",
  string.Format("FirstName = {0}, LastName = {1}", firstName, lastName),
  tryCount, GlobalStaticProperties.retrySleepTime[tryCount - 1], 
  CustomerSvcEventSource.FormatException(ex));

Again, I’ve genericized this so I can use it repeatedly, but you can make it specific to this method.
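Once the rows are out of table storage, that TryCount query is trivial. Here is a hypothetical Python sketch (the row shape is assumed for illustration, not the Azure sink’s actual schema):

```python
# Hypothetical rows, as if pulled from the SLAB Azure table sink.
rows = [
    {"EventId": 101, "TryCount": 1},   # succeeded on a retry
    {"EventId": 101, "TryCount": 4},   # exhausted all 3 retries
    {"EventId": 105},                  # unrelated entry-point event
]

# Find the retry events that never managed to connect within 3 tries.
exhausted = [r for r in rows
             if r.get("EventId") == 101 and r.get("TryCount", 0) > 3]
print(len(exhausted))  # 1
```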

You should have the general idea by now. After finishing all the code in the class, here is the final EventSource class:

public class CustomerSvcEventSource : EventSource
{        
  //create a static field instance that provides access to an instance of this class
  private static readonly Lazy<CustomerSvcEventSource> Instance
    = new Lazy<CustomerSvcEventSource>(() => new CustomerSvcEventSource());

  //create a static property called Log that returns the current value 
  //of the Instance field of the event source. This value is 
  //determined by the custom event methods called in the Customerservices application.
  private CustomerSvcEventSource() { }

  public static CustomerSvcEventSource Log
  {
    get { return Instance.Value; }
  }

  public class Keywords
  {
    public const EventKeywords Database = (EventKeywords)1;
    public const EventKeywords ParameterError = (EventKeywords)2;
    public const EventKeywords EntryPoint = (EventKeywords)4;
  }

  public class Tasks
  {
    public const EventTask DatabaseRead = (EventTask)1;
    public const EventTask DatabaseWrite = (EventTask)2;
    public const EventTask DatabaseRetry = (EventTask)3;
  }

  [Event(101, Message = "DB Retry in method {0}, MiscInfo = {1}, TryCount = {2}, "
    + "RetrySleepTime = {3}, ex = {4}",
    Level = EventLevel.Error, Keywords = Keywords.Database, Task = Tasks.DatabaseRead)]
  internal void DatabaseRetry(string methodName, string miscInfo, int tryCount, int sleepTime, 
    string exceptionInfo)
  {
    if (this.IsEnabled()) this.WriteEvent(101, methodName, miscInfo, tryCount, sleepTime, 
      exceptionInfo);
  }

  [Event(102, Message = "Method {0}; no data found, but should be. Input Info = {1}.",
    Level = EventLevel.Warning, Keywords = Keywords.Database, Task = Tasks.DatabaseRead)]
  internal void DatabaseRecordsNotFound(string methodName, string extraInfo, string errorMessage)
  {
    if (this.IsEnabled()) this.WriteEvent(102, methodName, extraInfo, errorMessage);
  }

  [Event(103, Message = "Message from method {0} = {1}", Level = EventLevel.Informational,
    Keywords = Keywords.Database, Task = Tasks.DatabaseRead)]
  internal void WriteDBInfoMsg(string methodName, string infoMessage)
  {
    if (this.IsEnabled()) this.WriteEvent(103, methodName, infoMessage);
  }

  [Event(104, Message = "Parameter error in method {0}, msg = {1}", Level = EventLevel.Error,
    Keywords = Keywords.ParameterError)]
  internal void ParameterError(string methodName, string errorMessage)
  {
    if (this.IsEnabled()) this.WriteEvent(104, methodName, errorMessage);
  }

  [Event(105, Message = "[GetCustomerFavorites] Called/entry point. FirstName = {0}, LastName = {1}",
    Level = EventLevel.Informational, Keywords = Keywords.EntryPoint)]
  internal void GetCustomerFavoritesCalled(string firstName, string lastName)
  {
    if (this.IsEnabled()) this.WriteEvent(105, firstName, lastName);
  }

  [Event(106, Message = "Method {0}, output params = {1}", Level = EventLevel.Informational, 
    Task = Tasks.DatabaseRead)]
  internal void OutputParams(string methodName, string paramString)
  {
    if (this.IsEnabled()) this.WriteEvent(106, methodName, paramString);
  }

  [Event(107, Message = "Unexpected exception in method {0}. Extra info = {1}. Ex = {2}", 
    Level = EventLevel.Error)]
  internal void UnexpectedException(string methodName, string miscInfo, string exceptionInfo)
  {
    if (this.IsEnabled()) this.WriteEvent(107, methodName, miscInfo, exceptionInfo);
  }

  [Event(108, Message = "Error converting the database fields to output parameters in method {0}. "
    + "ExtraInfo = {1}, ex = {2}",
    Level = EventLevel.Error, Keywords = Keywords.Database, Task = Tasks.DatabaseRead)]
  internal void DatabaseFieldConversionError(string methodName, string extraInfo, string errorMessage)
  {
    if (this.IsEnabled()) this.WriteEvent(108, methodName, extraInfo, errorMessage);
  }

  public static string FormatException(Exception exception)
  {
    return exception.GetType().ToString() + Environment.NewLine + exception.Message;
  }
}

And here is the final GetCustomerFavorites. I’ve left in the original trace logging, but commented out, so you can see how much cleaner the new calls are.

internal class CustomerFavorites
{
  int maxNameLength = 50;

  internal string GetCustomerFavorites(out string favoriteMovie, out string favoriteLanguage,
    string firstName, string lastName)
  {
    //Trace.TraceInformation("[GetCustomerFavorites] called. FirstName = {0}, LastName = {1}",
    //  firstName, lastName);

    CustomerSvcEventSource.Log.GetCustomerFavoritesCalled(firstName, lastName);

    string errorMessage = string.Empty;
    favoriteMovie = string.Empty;
    favoriteLanguage = string.Empty;

    if (string.IsNullOrWhiteSpace(firstName) && string.IsNullOrWhiteSpace(lastName))
    {
      //Trace.TraceError("[GetCustomerFavorites] called. First and last name are blank, "
      //    + "no data returned.");
      errorMessage = "First and last name are blank.";

      CustomerSvcEventSource.Log.ParameterError("GetCustomerFavorites", errorMessage);
    }
    else if (string.IsNullOrWhiteSpace(firstName))
    {
      //Trace.TraceError("[GetCustomerFavorites] called. First name is blank, no data returned."
      //  + " LastName = {0}", lastName);
      errorMessage = "First name is blank; both fields must be provided.";

      CustomerSvcEventSource.Log.ParameterError("GetCustomerFavorites", errorMessage);
    }
    else if (string.IsNullOrWhiteSpace(lastName))
    {
      //Trace.TraceError("[GetCustomerFavorites] called. Last name is blank, no data returned."
      //  + " FirstName = {0}", firstName);
      errorMessage = "Last name is blank; both fields must be provided.";

      CustomerSvcEventSource.Log.ParameterError("GetCustomerFavorites", errorMessage);
    }
    else if (firstName.Length > maxNameLength)
    {
      //Trace.TraceError("[GetCustomerFavorites] called. First name is too long, no data " 
      //  + "returned. FirstName = {0}, LastName = {1}", firstName, lastName);
      errorMessage = string.Format("First name is too long. Cannot be more than {0} "
        + "characters long.", maxNameLength);

      CustomerSvcEventSource.Log.ParameterError("GetCustomerFavorites", errorMessage);
    }
    else if (lastName.Length > maxNameLength)
    {
      //Trace.TraceError("[GetCustomerFavorites] called. Last name is too long, no data "
      //  + "returned. FirstName = {0}, LastName = {1}", firstName, lastName);
      errorMessage = string.Format("Last name is too long. Cannot be more than {0} "
        + "characters long.", maxNameLength);

      CustomerSvcEventSource.Log.ParameterError("GetCustomerFavorites", errorMessage);
    }
    if (!string.IsNullOrWhiteSpace(errorMessage))
      return errorMessage;

    int tryCount = 0;
    bool success = false;

    try
    {
      do
      {
        errorMessage = string.Empty; 
        tryCount++;
        try
        {
          using (SqlConnection cnx = 
            new SqlConnection(GlobalStaticProperties.dbConnectionString))
          {
            cnx.Open();
            using (SqlCommand cmd = new SqlCommand("Customer_GetByName", cnx))
            {
              cmd.CommandType = CommandType.StoredProcedure;

              SqlParameter prm = new SqlParameter("@FirstName", SqlDbType.NVarChar, 50);
              prm.Direction = ParameterDirection.Input;
              prm.Value = firstName;
              cmd.Parameters.Add(prm);

              prm = new SqlParameter("@LastName", SqlDbType.NVarChar, 50);
              prm.Direction = ParameterDirection.Input;
              prm.Value = lastName;
              cmd.Parameters.Add(prm);

              SqlDataAdapter da = new SqlDataAdapter(cmd);
              DataTable dt = new DataTable();
              da.Fill(dt);
              success = true;

              if (dt == null || dt.Rows.Count <= 0)
              {
                errorMessage = string.Format("Error retrieving favorites; record not found "
                  + "for first name '{0}', last name '{1}'.", firstName, lastName);

                CustomerSvcEventSource.Log.DatabaseRecordsNotFound("GetCustomerFavorites", 
                  string.Format("FirstName = {0}, LastName = {1}", firstName, lastName),
                  errorMessage);
              }
              else
              {
                try
                {
                  DataRow dr = dt.Rows[0];
                  favoriteMovie = dr["FavoriteMovie"].ToString();
                  favoriteLanguage = dr["FavoriteLanguage"].ToString();
                  //Trace.TraceInformation("[GetCustomerFavorites] FirstName = {0}, "
                  //  + "LastName = {1}, FavoriteMovie = {2}, FavoriteLanguage = {3}",
                  //    firstName, lastName, favoriteMovie, favoriteLanguage);

                  string outputs = string.Format("favoriteMovie: {0}, favoriteLanguage: {1}", 
                    favoriteMovie, favoriteLanguage);
                  CustomerSvcEventSource.Log.OutputParams("GetCustomerFavorites", outputs);
                }
                catch (Exception ex)
                {
                  //Trace.TraceError("[GetCustomerFavorites] FirstName = {0}, LastName = {1}, "
                  //  + "Exception thrown trying to get favorites from SQL record = {2}", 
                  //  firstName, lastName, ex.ToString());
                  errorMessage = "Error retrieving customer favorites.";

                  CustomerSvcEventSource.Log.DatabaseFieldConversionError("GetCustomerFavorites",
                    string.Format("firstName = {0}, lastName = {1}", firstName, lastName),
                    CustomerSvcEventSource.FormatException(ex));
                }
              }
            }//using SqlCommand
          } //using SqlConnection
        }
        catch (SqlException ex)
        {
          errorMessage = "Error retrieving customer favorites.";
          //Trace.TraceError("[GetCustomerFavorites] firstName = {0}, lastName = {1}, "
          //  + "Try #{2}, will sleep {3}ms. SQL Exception = {4}",
          //    firstName, lastName, tryCount, 
          //    GlobalStaticProperties.retrySleepTime[tryCount - 1], ex.ToString());

          CustomerSvcEventSource.Log.DatabaseRetry("GetCustomerFavorites",
            string.Format("FirstName = {0}, LastName = {1}", firstName, lastName),
            tryCount, GlobalStaticProperties.retrySleepTime[tryCount - 1], 
            CustomerSvcEventSource.FormatException(ex));

          if (tryCount < GlobalStaticProperties.MaxTryCount && 
            GlobalStaticProperties.retrySleepTime[tryCount - 1] > 0)
              Thread.Sleep(GlobalStaticProperties.retrySleepTime[tryCount - 1]);
        }
      } while (tryCount < GlobalStaticProperties.MaxTryCount && !success);
    }
    catch (Exception ex)
    {
      //Trace.TraceError("[GetCustomerFavorites] firstName = {0}, lastName = {1}, "
      //  + "Overall Exception thrown = {2}", firstName, lastName, ex.ToString());
        errorMessage = "Error getting customer favorites.";

        CustomerSvcEventSource.Log.UnexpectedException("GetCustomerFavorites", 
          String.Format("FirstName = {0}, LastName {1}", firstName, lastName), 
          CustomerSvcEventSource.FormatException(ex));
    }
    return errorMessage;
  }
}

I’ve run the service, and set it up to have a bunch of errors, which were all logged to the default table for SLAB logging – SLABLogsTable. The table has columns for EventId, Keywords, Level, Message, Opcode, Task, Version, Payload, and then a column for each parameter in the payload across all of the error messages: Payload_firstName, Payload_lastName, Payload_methodName, etc. I’ve exported the results to CSV and Excel if you want to take a look at them.
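
With the payload split into columns like this, you can query directly against the individual parameters. Here is a sketch of pulling back the retry entries that exhausted all attempts (TryCount > 3), using the Azure Storage table client of that era. The table name, column names, and connection string placeholder are assumptions based on the defaults described above.

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Connect to the storage account that holds the SLAB log table.
string storageConnectionString = "yourStorageConnectionString";
CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
CloudTable table = account.CreateCloudTableClient().GetTableReference("SLABLogsTable");

// Event id 101 is the DatabaseRetry event; Payload_tryCount is the column
// SLAB generates for the tryCount parameter.
TableQuery<DynamicTableEntity> query = new TableQuery<DynamicTableEntity>().Where(
    TableQuery.CombineFilters(
        TableQuery.GenerateFilterConditionForInt("EventId", QueryComparisons.Equal, 101),
        TableOperators.And,
        TableQuery.GenerateFilterConditionForInt("Payload_tryCount",
            QueryComparisons.GreaterThan, 3)));

foreach (DynamicTableEntity entity in table.ExecuteQuery(query))
{
    Console.WriteLine("{0}: {1}",
        entity.Properties["Payload_methodName"].StringValue,
        entity.Properties["Payload_miscInfo"].StringValue);
}
```

This is exactly the kind of question that is painful to answer with unstructured trace logging and trivial once the payload fields are real columns.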

If you use the same field names across events, the values will be put in the same column. So if I went back and changed this, I would rename MiscInfo, ExtraInfo, and paramString to one common name so they all show up in the same column. This is not something you’re likely to search on; it’s just some extra information to identify the records being handled.

Finally, there are two things you want to be sure of:

  • The event id in the attribute of the method used to write the event is the same as the id in the WriteEvent call.
  • Don’t use the same event id more than once.

You can include the EventSourceAnalyzer in your project and write a unit test that will confirm your EventSource doesn’t have any configuration errors.
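
A minimal unit test for this (MSTest shown here; the analyzer lives in the Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Utility namespace) might look like this:

```csharp
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Utility;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerSvcEventSourceTests
{
    [TestMethod]
    public void EventSourceShouldBeValid()
    {
        // Throws if the event source has configuration errors, such as an
        // event id in the attribute that doesn't match the WriteEvent call,
        // or payload parameters out of order.
        EventSourceAnalyzer.InspectAll(CustomerSvcEventSource.Log);
    }
}
```

Run this once per EventSource class and both of the mistakes above get caught at build time instead of showing up as silently broken log entries.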

Summary

In this post, I showed how to set up an Event Source and a listener for it, and then take a class with trace logging and “SLAB-ify” it. This provided us with structured output and multiple fields we can query against directly, making it easier to get our data in a consistent format that we can later use for data mining.

SLAB design considerations

July 18, 2014

In my previous post, I talked about the features available in the Semantic Logging Application Block. In this post, I will show you a class with regular trace logging, and talk about things to consider when converting it to implement the Semantic Application Logging Block (SLAB) to create structured logging. You can just do a straight conversion and not think about your trace logging methodology, but this kind of goes against the whole point of using SLAB. You should put some thought into it so that what you end up with isn’t just trace logging, but is structured in a way that will allow you to search for patterns or do data mining.

Show me some code

Here is a class called CustomerFavorites that is similar to one you might use in a data access layer or a WCF service. Input parameters are first name and last name; output parameters are favorite movie and favorite language. I’ve written this in a way that allows me to put in a lot of trace logging, like validating that the input fields aren’t blank, and rolling my own SQL retry code. This displays regular unstructured trace logging.

internal class CustomerFavorites
{
  int maxNameLength = 50;

  internal string GetCustomerFavorites(out string favoriteMovie, out string favoriteLanguage,
    string firstName, string lastName)
  {
    Trace.TraceInformation("[GetCustomerFavorites] called. FirstName = {0}, LastName = {1}", 
      firstName, lastName);

    string errorMessage = string.Empty;
    favoriteMovie = string.Empty;
    favoriteLanguage = string.Empty;

    if (string.IsNullOrWhiteSpace(firstName) && string.IsNullOrWhiteSpace(lastName))
    {
      Trace.TraceError("[GetCustomerFavorites] called. First and last name are blank, "
          + "no data returned.");
        errorMessage = "Must provide name to get favorites.";
    }
    else if (string.IsNullOrWhiteSpace(firstName))
    {
      Trace.TraceError("[GetCustomerFavorites] called. First name is blank, no data returned."
        + " LastName = {0}", lastName);
      errorMessage = "First name is blank; both fields must be provided.";
    }
    else if (string.IsNullOrWhiteSpace(lastName))
    {
      Trace.TraceError("[GetCustomerFavorites] called. Last name is blank, no data returned."
        + " FirstName = {0}", firstName);
      errorMessage = "Last name is blank; both fields must be provided.";
    }
    else if (firstName.Length > maxNameLength)
    {

      Trace.TraceError("[GetCustomerFavorites] called. First name is too long, no data " 
        + "returned. FirstName = {0}, LastName = {1}", firstName, lastName);
      errorMessage = string.Format("First name is too long. Cannot be more than {0} "
        + "characters long.", maxNameLength);
    }
    else if (lastName.Length > maxNameLength)
    {
      Trace.TraceError("[GetCustomerFavorites] called. Last name is too long, no data "
        + "returned. FirstName = {0}, LastName = {1}", firstName, lastName);
      errorMessage = string.Format("Last name is too long. Cannot be more than {0} "
        + "characters long.", maxNameLength);
    }

    int tryCount = 0;
    bool success = false;

    try
    {
      do
      {
        errorMessage = string.Empty; 
        tryCount++;
        try
        {
          using (SqlConnection cnx = 
            new SqlConnection(GlobalStaticProperties.dbConnectionString))
          {
            cnx.Open();
            using (SqlCommand cmd = new SqlCommand("Customer_GetByName", cnx))
            {
              cmd.CommandType = CommandType.StoredProcedure;

              SqlParameter prm = new SqlParameter("@FirstName", SqlDbType.NVarChar, 50);
              prm.Direction = ParameterDirection.Input;
              prm.Value = firstName;
              cmd.Parameters.Add(prm);

              prm = new SqlParameter("@LastName", SqlDbType.NVarChar, 50);
              prm.Direction = ParameterDirection.Input;
              prm.Value = lastName;
              cmd.Parameters.Add(prm);

              SqlDataAdapter da = new SqlDataAdapter(cmd);
              DataTable dt = new DataTable();
              da.Fill(dt);
              success = true;

              if (dt == null || dt.Rows.Count <= 0)
              {
                errorMessage = string.Format("Error retrieving favorites; record not found "
                  + "for first name '{0}', last name '{1}'.", firstName, lastName);
              }
              else
              {
                try
                {
                  DataRow dr = dt.Rows[0];
                  favoriteMovie = dr["FavoriteMovie"].ToString();
                  favoriteLanguage = dr["FavoriteLanguage"].ToString();
                  Trace.TraceInformation("[GetCustomerFavorites] FirstName = {0}, "
                    + "LastName = {1}, FavoriteMovie = {2}, FavoriteLanguage = {3}",
                      firstName, lastName, favoriteMovie, favoriteLanguage);
                }
                catch (Exception ex)
                {
                  Trace.TraceError("[GetCustomerFavorites] FirstName = {0}, LastName = {1}, "
                    + "Exception thrown trying to get favorites from SQL record = {2}", 
                    firstName, lastName, ex.ToString());
                  errorMessage = "Error retrieving customer favorites.";
                }
              }
            }//using SqlCommand
          } //using SqlConnection
        }
        catch (SqlException ex)
        {
          errorMessage = "Error retrieving customer favorites.";
          Trace.TraceError("[GetCustomerFavorites] firstName = {0}, lastName = {1}, "
            + "Try #{2}, will sleep {3}ms. SQL Exception = {4}",
              firstName, lastName, tryCount, 
              GlobalStaticProperties.retrySleepTime[tryCount - 1], ex.ToString());
          if (tryCount < GlobalStaticProperties.MaxTryCount && 
            GlobalStaticProperties.retrySleepTime[tryCount - 1] > 0)
              Thread.Sleep(GlobalStaticProperties.retrySleepTime[tryCount - 1]);
        }
      } while (tryCount < GlobalStaticProperties.MaxTryCount && !success);
    }
    catch (Exception ex)
    {
      Trace.TraceError("[GetCustomerFavorites] firstName = {0}, lastName = {1}, "
        + "Overall Exception thrown = {2}", firstName, lastName, ex.ToString());
        errorMessage = "Error getting customer favorites.";
    }
    return errorMessage;
  }
}

One word about the SQL retry code: an array loaded in a global static variable called retrySleepTime provides the retry information – how long to wait before retrying. This is usually an increasing series; I think mine are 0, 2.5, 5, and 7.5 seconds. When it fails the first time, it retries immediately. The second time, it waits 2.5 seconds and then retries, etc. This kind of retry is necessary because Azure SQL Database occasionally has connection problems, depending on a number of things. The best practice is not to roll your own retry code, but to use the Transient Fault Handling Application Block from the Azure CAT team and the Microsoft Patterns and Practices team. This is part of Enterprise Library 6, which also contains the Semantic Logging Application Block.
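
The GlobalStaticProperties class itself isn’t shown in this post, so here is a minimal sketch of what the code above assumes it contains. The values and connection string are my guesses, and the sleep times are expressed in milliseconds, since they are passed to Thread.Sleep.

```csharp
// Sketch of the global static class the retry code relies on. The array is
// indexed by (tryCount - 1): retry #1 happens immediately, then the waits
// increase, matching the 0 / 2.5s / 5s / 7.5s schedule described above.
internal static class GlobalStaticProperties
{
    internal const int MaxTryCount = 4;

    // Milliseconds to sleep before each retry.
    internal static readonly int[] retrySleepTime = { 0, 2500, 5000, 7500 };

    // Placeholder; in real code this would come from configuration.
    internal static string dbConnectionString = "yourDatabaseConnectionString";
}
```

Keeping MaxTryCount and the array together in one place makes it easy to tune the retry schedule without touching any of the data access methods.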

Design considerations

Looking at the code above, we have the following kinds of messages we want to log:

  • Entry point, which also traces the input and output variables.
  • Input errors (variables are blank or too long)
  • No matching records found in the database and there should be
  • Exception converting the DataRow fields to the output parameters
  • SQL Exception with retry code
  • Unexpected exception

If you have a big WCF service, you might have lots of methods that look very similar to this, but with different database actions and different variables.

How would you gather this data? How are you going to look at the resulting logs? When you set up your event source class, you can assign values to the following:

  • Event ID
    • This is an arbitrary number provided by you.
  • Keyword (defined by you) (limited to 64)
    • When you set up your listener, you can filter the logging by the keywords. For example, you can turn off logging for a specific keyword. If you have something you’re going to be logging 1000 times per second, add a keyword for it, so you can turn it off if needed.
  • EventOpcode (1-10 are taken up by .NET, opcodes 11 and above are defined by you)
    • You can filter the results by EventOpcode.
  • Task (defined by you)
    • You can filter the results by task.
  • Message
    • This is your message that uses the parameters sent in to be used when writing the event log; it’s the “pretty” format. So you might send in method name, and your message might be “Entry point for {0}.” The parameters make up the payload, and if writing to Azure Table Storage, you can query on the parameters individually.
  • Message Level (Informational, Critical, Error, etc)
    • You can filter the results by the message level; you can also turn the logging on/off. For example, you could turn off all of the information-only logging.

So here’s some of the kinds of events you might want to be able to query from this class and others like it:

  • Database retry errors
  • Database messages that are errors/exceptions
  • Database messages that are informational
  • Errors with the input parameters
  • GetCustomerFavorites Entry Point (with params in)
  • Output parameters (i.e. results)
  • General exception

Looking at this list, how would you assign event id’s, keywords, opcodes, and tasks?

  • You can only have up to 64 keywords, so you want to be careful about using them. You could set up keywords for Database, ParameterError, and EntryPoint.
  • You could set up tasks for DatabaseRead, DatabaseUpdate, DatabaseRetry.
  • You can tell error/exception from information by the event level.

So let’s look at that list again and categorize our events for this class. Each event has a different event id that you assign arbitrarily.

  • Database retry errors
    • Event id 101, Keyword=Database, Task=DatabaseRetry, Level = Error
  • Database messages that are errors/exceptions
    • Event id 102, Keyword=Database, Task=[DatabaseRead or DatabaseWrite], Level = Error
  • Database messages that are informational
    • Event id 103, Keyword=Database, Level = Informational
  • Errors with the input parameters
    • Event id 104, Keyword = ParameterError, Level = Warning
  • GetCustomerFavorites Entry Point (with params in)
    • Event id 105, Keyword = EntryPoint, Level = Informational
  • Output parameters (i.e. results)
    • Event id 106, Level = Informational
  • General exception
    • Event id 107, Level = Error

You get the general idea. Note that the parameters passed in when writing the event get written as separate key/value pairs, and can be queried on if you’re writing to Azure Table Storage. So if you pass in the method name, you will be able to query for all events having that method name.

Notice how quickly the event id’s add up. You might want to consider putting them in partial classes and grouping them that way so they are easier to maintain. Of course, you could always use regions, which people either love or hate. If you do this, find out which way your boss feels before deciding whether or not to use them. ;-)  (I had a lead once who removed all comments I put in my code. She said “it makes it look untidy.” So you never know what’s going to make someone happy.) (Yes, this drove me nuts.)

Summary

In this post, I showed an example of a class with trace logging, and discussed design considerations for converting this to semantic logging. In my next post, I will show you how to implement our design.

Introduction to the Semantic Logging Application Block (SLAB)

July 18, 2014

In my previous post, I discussed diagnostics logging and why it’s important, and how structured logging can help you troubleshoot problems and do some data mining. Now I want to talk about SLAB and what features it has, and provide some resources for you to use to follow up with. In my next post, I’ll offer up an example and talk about the things you have to consider when designing your structured logging.

Technologies for semantic logging

Here are the technologies involved:

Event Tracing for Windows (ETW)

  • Is native to Windows.
  • Has great performance and OK diagnostic tooling.
  • It has historically been difficult to publish events.

EventSource class

  • New in .NET 4.5
  • Helps make it easier to create structured logging
  • It’s extensible, but only supports ETW out of the box

Semantic Logging Application Block (SLAB)

  • This is a wrapper around ETW; it allows you to use ETW without having any specific knowledge of ETW.
  • It provides several destinations (sinks) for events published with EventSource.
  • It has additional tooling support for writing events.

Basically, SLAB enables you to use the EventSource class to write log messages from your application. You set up a listener, and SLAB receives notifications whenever the application writes a message using an EventSource class. SLAB then writes the message to whatever sink you have set up.
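
For example, an in-process setup might look like this. This is a sketch using SLAB’s ObservableEventListener with the console sink; you would swap in LogToWindowsAzureTable or another sink extension for real use.

```csharp
using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

// Create the listener once at application startup and keep it alive
// for the life of the process.
ObservableEventListener listener = new ObservableEventListener();

// Subscribe to the event source; the level and keywords let you filter
// what gets logged.
listener.EnableEvents(CustomerSvcEventSource.Log, EventLevel.LogAlways,
    EventKeywords.All);

// Route the event stream to a sink; the console is the simplest one.
var subscription = listener.LogToConsole();

// At application shutdown:
// subscription.Dispose();
// listener.Dispose();
```

From this point on, every CustomerSvcEventSource.Log.SomeEvent(...) call in the application flows through the listener to the sink with no further wiring.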

SLAB can act as a stepping stone. When you use EventSource, there is no commitment to how you consume events. You can later decide to replace it with your own tooling or use something else.

SLAB Features – Sinks

SLAB has the following sinks/destinations built-in:

  • Microsoft Azure Storage Table
  • SQL Database
  • Flat file
  • Rolling flat file
  • Console
  • Elasticsearch

And the following formatters for text-based sinks:

  • JSON
  • XML
  • plain text

SLAB Out-of-process service

SLAB can be used in-process – you start up the listener when your application starts up and let it run until the application ends. It also has an out-of-process service that you can host as a Windows Service (it needs to run in an account with elevated permissions). This service will get events out of the originating process using ETW and persist the events to the selected sink.

All sinks are supported, and the service is configuration-driven with support for reconfiguration. Also, the monitored application does not reference SLAB. It writes to EventSource, which is caught by the out-of-process service.

Benefits:

  • Increased fault tolerance in case of application crash.
  • You can monitor multiple processes from a single service.
  • This moves the logging overhead from the application to a separate process. The overhead is still there, it’s just not part of your application.

Drawbacks:

  • There are more moving pieces.
  • The internal buffers may overflow without notification.
  • You have to make sure the process starts before the logging starts, and doesn’t end until after the logging ends.

SLAB Features – Event Source Analyzer

This helps you author and validate the Event Source derived class, and flags hard-to-detect errors relating to plumbing. You can run this inside a unit test to validate your classes.

SLAB Features – Observable-based

The event listener is IObservable.

The event sinks are IObservers.

This means you can use Reactive Extensions (Rx) to filter, pre-process, or transform the event stream before it’s persisted.

For example, you could use Rx to watch the errors coming in, and if you get the same error 5 times in a row, send yourself an e-mail and suppress that error for a few minutes. This is helpful when your application is calling SQL Server and SQL Server is down: hundreds (or thousands) of identical error messages are about to come through the system, and a single notification that you might have a serious problem is much more useful. I had this problem once at the startup I worked at. We had a problem with SQL Azure, and I got over a thousand e-mails within a few minutes before we managed to shut down the service. Yikes! :-O
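
Here is a sketch of that idea, assuming the Rx (System.Reactive) package is referenced. It uses a simpler policy than the e-mail scenario: suppress consecutive repeats of the same event id, and pass through at most one event per minute.

```csharp
using System;
using System.Diagnostics.Tracing;
using System.Reactive.Linq;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

ObservableEventListener listener = new ObservableEventListener();
listener.EnableEvents(CustomerSvcEventSource.Log, EventLevel.Error);

// The listener is IObservable<EventEntry>, so standard Rx operators apply:
// drop an event when it has the same event id as the one before it, then
// sample the stream so at most one event per minute reaches the sink.
listener
    .DistinctUntilChanged(entry => entry.EventId)
    .Sample(TimeSpan.FromMinutes(1))
    .LogToConsole();
```

The important point is that the filtering happens before the sink, so a flood of identical SQL errors costs you one log entry per minute instead of thousands.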

Resources

Here are some resources I found really helpful when learning about this topic.

Developer’s Guide

Enterprise Library 6

Enterprise Library 6 code + downloads + documentation

The last one has the Enterprise Library 6 Reference Docs, Quick Starts (this includes the reactive extension examples), and the Developer’s Guide and samples. They have a really good example of adding semantic logging to a Windows Forms app that calculates pi.

Summary

This post talked about the features of SLAB and ETW. In my next post, I will show you a class with a bunch of trace logging in it, and talk about how to “SLAB-ify” it.

