Diagnostics logging and structured logging

July 18, 2014

This post is going to be about one of my favorite things: diagnostics logging. When I worked for a startup, I was responsible for the entire backend for all of the products. This consisted of a lot of WCF services running in Web Roles, an Azure SQL Database, and some Worker Roles. By habit, I always include a lot of trace logging. I thought maybe I had overdone it in this case, since I was tracing the entry point of every method in the WCF services called by their clients, along with all of the variables and some additional information, but then I went to a talk by the Halo group, where they described how much logging they do, and I realized I'm an amateur in comparison!

Diagnostics logging in action

Included in all of my tracing was the user GUID, which allowed us to pull together a bunch of information about what a particular customer was doing, to help troubleshoot when he had a problem: you could see when he logged in, when he called the backend, and what for. Additionally, most of the back end was developed when we migrated from a traditional hosting center to Azure, and it consisted of .NET 2.0 ASMX web services. This was back in 2010, before Azure was mainstream, and we didn't have the capability we have today to attach the Visual Studio debugger and debug a service running in the cloud. If you wanted to know what was going on, you added trace logging.

For example, we had a worker role that would pull requests from a queue to resize the customer's photos into mobile-sized pictures. This role would read the queue, pull messages off of it, deconstruct them into whatever structure was needed, download the pictures involved, resize them, then upload them again. Every step was a failure point. If you have a hundred pictures, that's 100 downloads, 100 resizes, and 100 uploads, plus deconstructing the message – what if it was in an invalid format and unrecognizable? I put trace logging throughout the code, so if something happened, I would know exactly where it went wrong and why.
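
To give a flavor of what that looked like, here is a rough sketch of the kind of per-step tracing I mean. The message structure and the DeconstructMessage helper are made up for illustration; with the Windows Azure Diagnostics trace listener configured, Trace calls like these end up as rows in the WADLogsTable.

//rough sketch -- names are hypothetical; uses System.Diagnostics.Trace
private void ProcessResizeMessage(string messageText)
{
    Trace.TraceInformation("ProcessResizeMessage start, message = {0}", messageText);
    try
    {
        //DeconstructMessage and ResizeRequest are stand-ins for whatever parsing you do
        ResizeRequest request = DeconstructMessage(messageText);
        Trace.TraceInformation("Message parsed, user = {0}, pictureCount = {1}",
            request.UserGuid, request.PictureUrls.Count);

        foreach (string pictureUrl in request.PictureUrls)
        {
            Trace.TraceInformation("Resizing {0} for user {1}", pictureUrl, request.UserGuid);
            //download, resize, upload -- each step traced the same way
        }
    }
    catch (Exception ex)
    {
        Trace.TraceError("ProcessResizeMessage failed for message {0}: {1}", messageText, ex);
        throw;
    }
}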

All of my trace logging was done straight into the Windows Azure Diagnostics Logs table. A couple of years later, I discovered ETW (Event Tracing for Windows) and SLAB – the Semantic Logging Application Block. I would have rewritten all of our logging, but at a startup, that’s not a luxury you have! It’s definitely something I would recommend and use going forward.

What’s wrong with plain ordinary trace logging

The problem with ordinary trace logging is that you end up with millions of rows of unstructured data: identifying information about each row (like the timestamp and partition key), plus a single field for whatever message you want to convey. This made it very time-consuming and difficult to look through the logs and find what I needed. I used Cerebrata's Azure Management Studio, which has some built-in filtering by service; that helped reduce the amount of information I had to wade through, but I still had to search millions of records for the user GUID to find out what was going on.

Some services generated so much logging that you had to be very determined to find what you wanted – like the one that continuously polled our CRM system for changes and, when it found any, migrated them to the SQL Server database. Aside from troubleshooting, it would have been really helpful to have some kind of structure to that logging so we could gather the information, consolidate it, and learn something from it. But all you have with WADiagnostics is a single string field.

Rather than a single field that says something like “user [GUID] reloaded project abc”, how much more useful would it be to have something with fields you can query directly, like { user:”guid”, project:”abc”, action:”reload” }? This would be possible if all the logging for the services used structured logging and was stored in Azure Table Storage. It would have allowed us to query the table for specific values of user GUID, project, and action. Think of the metrics you could gather – how many customers are accessing project “abc”, how many are reloading the project versus abandoning it and creating a new one, and so on.
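
To make that concrete, here is a minimal sketch of what such an event could look like as an ETW EventSource (using the EventSource class from System.Diagnostics.Tracing in .NET 4.5, or the NuGet EventSource package that SLAB supports). The event name and payload fields are just illustrative; SLAB can route events like this to Azure Table Storage with each payload field stored as its own column.

//hypothetical event source; each parameter becomes its own queryable field in the sink
public sealed class CustomerEventSource : EventSource
{
    public static readonly CustomerEventSource Log = new CustomerEventSource();

    [Event(1, Message = "User {0} performed {2} on project {1}", Level = EventLevel.Informational)]
    public void ProjectAction(Guid userGuid, string project, string action)
    {
        if (IsEnabled()) WriteEvent(1, userGuid, project, action);
    }
}

//usage -- instead of a free-form trace string:
//CustomerEventSource.Log.ProjectAction(userGuid, "abc", "reload");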

This is what structured logging is all about – making it easier to consume logs.

The problem is that structured logging is a completely different paradigm, and you don’t just put in random trace statements whenever you want to. You have to think about what you want to log, where you want to log it, and how. How do you want to query the data? Do you want to be able to retrieve all database retry errors together, all method entry points together, all data for a specific user GUID? These are some of the things you have to think about.

Summary

This post talked about the practice of diagnostics logging, why structured logging is a good idea, and how it can help you do troubleshooting and data mining. In my next post, I will talk specifically about the features of ETW and SLAB.

Azure Blob Storage, Click Once deployment, and recursive file uploads

July 17, 2014

In this blog post, I am going to show you how to upload a folder and all of its contents from your local computer to Azure blob storage, including subdirectories, retaining the directory structure. This can have multiple uses, but I want to call out one use that people still using Click Once deployment might appreciate.

I used to be the (only) Click Once MVP, and still blog about it once in a while. Click Once is a Microsoft technology that allows you to host the deployment of your client application, console application, or VSTO add-in on a file share or web site. When updates are published, the user picks them up automatically. This can be very handy for those people still dealing with these technologies, especially since Microsoft removed the Setup & Deployment package feature from Visual Studio after VS2010 and replaced it with a lame version of InstallShield (example of lameness: it wouldn't let you deploy 64-bit applications). But I digress.

I wrote a blog article showing how you can host your Click Once deployment in Azure blob storage very inexpensively. (It’s even cheaper now.) The problem is you have to get your deployment up to Blob Storage, and for that, you need to write something to upload it, use something like Cerebrata’s Azure Management Studio, or talk the Visual Studio team and ClickOnce support into adding an option to the Publish page for you. I tried the latter — what a sales job I did! “Look! You can work Azure into Click Once and get a bunch of new Azure customers!” “Oh, that’s a great idea. But we have higher priorities.” (At least I tried.)

Having been unsuccessful with my rah! rah! campaign, I thought it would be useful if I just provided you with the code to upload a folder and everything in it. You can create a new Windows Forms app (or WPF or Console, whatever makes you happy) and ask the user for two pieces of information:

  • Local folder name where the deployment is published. (For those of you who don’t care about ClickOnce, this is the local folder that you want uploaded.)
  • The name of the Blob Container you want the files uploaded to.

Outside of a Click Once deployment, there are all kinds of uses for this code. You can store some of your files in Blob Storage as a backup, and use this code to update the files periodically. Of course, if you have an excessive number of files, you are going to want to run the code in a background worker and have it send progress back to the UI and tell it what’s going on.
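
Here is one way that might look, just as a sketch: the control names and the UploadFolder wrapper are hypothetical, and it assumes the GetListOfFilesToUpload method shown later in this post.

//run the upload off the UI thread and report progress per file (control names are made up)
private async void btnUpload_Click(object sender, EventArgs e)
{
    var progress = new Progress<string>(message => lblStatus.Text = message);
    await Task.Run(() => UploadFolder(txtFolderPath.Text, progress));
}

private void UploadFolder(string folderPath, IProgress<string> progress)
{
    List<string> listOfFiles = GetListOfFilesToUpload(folderPath);
    for (int i = 0; i < listOfFiles.Count; i++)
    {
        //upload one file here (see UploadFiles later in this post), then report progress
        progress.Report(string.Format("Uploaded {0} of {1}: {2}",
            i + 1, listOfFiles.Count, listOfFiles[i]));
    }
}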

Show me the code

I'm going to assume you understand recursion. If not, check out the very instructive Wikipedia article, or put the code in and just step through it. I think recursion is really cool; I once wrote a program in COBOL that simulated recursion up to 10 levels deep. (For you youngsters, COBOL is not a recursive programming language.)

In your main method, you need to add all of the following code (up until the next section).

First, you need to set up your connection to the Azure Storage Account and to the blob container that you want to upload your files to. Assuming you have the connection string to your storage account, here’s what you need to do.

First you're going to get an instance of the CloudStorageAccount you're going to use. Next, you get a reference to the CloudBlobClient for that storage account. This is what you use to access the actual blob storage. And lastly, you will get a reference to the container itself.

The next thing I always do is call CreateIfNotExists() on the container. This does nothing if the container already exists, but it saves you the trouble of creating the container out in Blob Storage in whatever account you're using if you haven't already created it (or have forgotten to). Adding this ensures that the container exists and the rest of your code will run.

//get a reference to the container where you want to put the files, 
//  create the container if it doesn't exist
CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(connectionString);
CloudBlobClient cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient();
CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference(containerName);
cloudBlobContainer.CreateIfNotExists();

I also set the permissions on the container. If this code actually creates the container, the default access is private, and nobody will be able to get to the blobs without either using a security token with the URL or having the storage account credentials.

//set access level to "blob", which means user can access the blob 
//  but not look through the whole container
//  this means the user must have a URL to the blob to access it
BlobContainerPermissions permissions = new BlobContainerPermissions();
permissions.PublicAccess = BlobContainerPublicAccessType.Blob;
cloudBlobContainer.SetPermissions(permissions);

So now we have our container reference (cloudBlobContainer) set up and the container is ready for use.

Next we need to get a list of files to upload, and then we need to upload them. I’ve factored this into multiple methods. Here are the top commands:

List<String> listOfFiles = GetListOfFilesToUpload(folderPath);
string status = UploadFiles(listOfFiles, folderPath);

After this finishes, all of your files are uploaded. Let’s look at the methods called.

GetListOfFilesToUpload(string folderName)

This is the hard part – getting the list of files. This method calls the recursive routine. It starts by instantiating the list of files that will be the final result. Then it retrieves the list of files in the requested directory and adds them to the list. This is “the root”. Any files or directories in this root will be placed in the requested container in blob storage. [folderName] is the path to the local directory you want to upload.

//this is going to end up holding the list of files to be uploaded,
//  with the file names in the format needed for blob storage
List<string> listOfFiles = new List<string>();

//get the list of files in the top directory and add them to our overall list
//these will have no path in blob storage because they go in the root, so they will be like "mypicture.jpg"
string[] baseFiles = Directory.GetFiles(folderName);
for (int i = 0; i < baseFiles.Length; i++)
{
    listOfFiles.Add(Path.GetFileName(baseFiles[i]));
}

Files will be placed in the same relative path in blob storage as they are on the local computer. For example, if D:\zAzureFiles\Images is our “root” upload directory, and there is a file with the full path of “D:\zAzureFiles\Images\Animals\Wolverine.jpg”, the path to the blob will be “Animals/Wolverine.jpg”.

Next we need to get the directories under our “root” upload directory, and process each one of them. For each directory, we will call GetFolderContents to get the files and folders in each directory. GetFolderContents is our recursive routine. So here is the rest of GetListOfFilesToUpload:

//call GetFolderContents (the recursive routine) for each folder to retrieve everything under the top directory
string[] directories = Directory.GetDirectories(folderName);
for (int i = 0; i < directories.Length; i++)
{
    // an example of a directory : D:\zAzureFiles\Images\NatGeo (root is D:\zAzureFiles\Images)
    // topDir gives you just the directory name, which is NatGeo in this example
    string topDir = GetTopDirectory(directories[i]);
    //GetFolderContents is recursive
    List<String> oneList = GetFolderContents(directories[i], topDir);
    //you have a list of files with blob storage paths for everything under topDir (incl subfolders)
    //  (like topDir/nextfolder/nextfolder/filename.whatever)
    //add the list of files under this folder to the overall list;
    //  eventually, when it works its way back up to the top,
    //  it will end up with a complete list of files
    //  under the top folder, with relative paths
    foreach (string fileName in oneList)
    {
        listOfFiles.Add(fileName);
    }
}

And finally, return the list of files.

return listOfFiles;

GetTopDirectory(string fullPath)

This is a helper method that just pulls off the last directory. For example, it reduces “D:\zAzureFiles\Images\Animals” to “Animals”. This is used to pass the folder name to the next recursion.

private string GetTopDirectory(string fullPath)
{
    int lastSlash = fullPath.LastIndexOf(@"\");
    string topDir = fullPath.Substring(lastSlash + 1, fullPath.Length - lastSlash - 1);
    return topDir;
}
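
As an aside, Path.GetFileName gives you the same result for a directory path, as long as the path doesn't end with a backslash (which is true of the paths returned by Directory.GetDirectories):

//returns "Animals" for @"D:\zAzureFiles\Images\Animals"
string topDir = Path.GetFileName(fullPath);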

GetFolderContents(string folderName, string blobFolder)

This is the recursive routine. It returns a List<string> of all the files in and below the folder passed in; folderName is the full path to the local directory being processed, like D:\zAzureFiles\Images\Animals.

This is similar to GetListOfFilesToUpload; it gets a list of files in the folder passed in and adds them to the return object with the appropriate blob storage path. Then it gets a list of subfolders of the folder passed in, and calls GetFolderContents for each one, adding the items returned from the recursion into the return object before returning up a level of recursion.

This sets the file names to what they will be in blob storage, i.e. the relative path to the root. So a file on the local computer called D:\zAzureFiles\Images\Animals\Tiger.jpg would have a blob storage path of Animals/Tiger.jpg.

returnList is the List<String> returned to the caller.

List<String> returnList = new List<String>();
            
//process the files in folderName, set the blob path
string[] fileLst = Directory.GetFiles(folderName);
for (int i = 0; i < fileLst.Length; i++)
{
    string fileToAdd = string.Empty;
    if (blobFolder.Length > 0)
    {
        fileToAdd = blobFolder + "/" + Path.GetFileName(fileLst[i]);
    }
    else
    {
        fileToAdd = Path.GetFileName(fileLst[i]);
    }
    returnList.Add(fileToAdd);
}

//for each subdirectory in folderName, call this routine to get the files under each folder
//  and then get the files under each folder, etc., until you get to the bottom of the tree(s) 
//  and have a complete list of files with paths
string[] directoryLst = Directory.GetDirectories(folderName);
for (int i = 0; i < directoryLst.Length; i++)
{
    string topDir = blobFolder + "/" + GetTopDirectory(directoryLst[i]);
    List<String> oneLevelDownList = GetFolderContents(directoryLst[i], topDir);
    foreach (string oneItem in oneLevelDownList)
    {
        returnList.Add(oneItem);
    }
}
return returnList;

UploadFiles(List<string> listOfFiles, string folderPath)

This is the method that actually uploads the files to Blob Storage. This assumes you have a reference to the cloudBlobContainer instance that we created at the top.

[listOfFiles] contains the files with relative paths to the root. For example “Animals/Giraffe.jpg”. [folderPath] is the folder on the local drive that is being uploaded. In our examples, this is D:\zAzureFiles\Images. Combining these gives us the path to the file on the local drive. All we have to do is set the reference to the location of the file in Blob Storage, and upload the file. Note – the FileMode.Open refers to the file on the local disk, not to the mode of the file in Blob Storage.

internal string UploadFiles(List<string> listOfFiles, string folderPath)
{
    string status = string.Empty;
    //now, listOfFiles has the list of files you want to upload
    foreach (string oneFile in listOfFiles)
    {
        CloudBlockBlob blob = cloudBlobContainer.GetBlockBlobReference(oneFile);
        string localFileName = Path.Combine(folderPath, oneFile);
        blob.UploadFromFile(localFileName, FileMode.Open);
    }
    status = "Files uploaded.";
    return status;
}

Summary

So you have the following:

  • The code for the calling routine that sets the reference to the cloudBlobContainer and makes sure the container exists. This calls GetListOfFilesToUpload and UploadFiles to, well, get the list of files to upload and then upload them.
  • GetListOfFilesToUpload calls GetFolderContents (which is recursive), and ultimately returns a list of the files as they will be named in Blob Storage.
  • GetFolderContents – the recursive routine that gets the list of files in the specified directory, and then calls itself with each directory found to get the files in the directory.
  • UploadFiles is called with the list of files to upload; it uploads them to the specified container.

If the files already exist in Blob Storage, they will be overwritten. For those of you doing ClickOnce, this means it will overlay the appname.application file (the deployment manifest) and the publish.htm if you are generating it.

One other note to those doing ClickOnce deployment – if you publish your application to the same local folder repeatedly, it will keep creating versions under Application Files. This code uploads everything from the top folder down, so if you have multiple versions under Application Files, it will upload them all over and over. You might want to move them or delete them before running the upload.

This post provided and explained the code for uploading a folder and all of its sub-items to Azure Blob Storage, retaining the folder structure. This can be very helpful for people using ClickOnce deployment and hosting their files in Blob Storage, and for anyone else wanting to upload a whole directory of files with minimal effort.

Using the Azure Files preview with the Storage Client Library

July 12, 2014

In the first post of this series on the Azure Files preview, I discussed what the feature entailed, and what you can do with it, as well as how to sign up for the preview. In the second post, I showed how to get the PowerShell cmdlets for the preview, and how to use them to manage your share and transfer files to/from your share. In the third post, I showed you how to create a VM in Azure, map the share, and update the share via the VM. In this post, I’m going to show you how to programmatically access your shares, get a list of the files on them, and upload/download files using the Storage Client Library.

To write code to manage your share(s), you need to download the newest Storage Client Library (4.1.0 at the time of this writing). This supports the new Azure Files preview, and has all the APIs you need. At this time, the only documentation available for the Storage Client Library is the object model. Intellisense and trial & error are my friends.

I’ve created a Windows Forms project that will create shares, create folders, upload files, download files, list files, and delete files. The application looks like this:

image

I’ll make the project available at the bottom of this blog post.

Setup for accessing the file share

To access the file share, you need three pieces of information: the Storage Account Name, Storage Account Key, and Share Name.

Here’s how you create the connectionString using the storage account name and key:

private string connectionString
{
    get
    {
        return @"DefaultEndpointsProtocol=https;AccountName=" + StorageAccountName +
            ";AccountKey=" + StorageAccountKey;
    }
}

So now we’re ready to write the code block to set everything up we need to execute all of our commands. The first thing you need to do is get an instance of the CloudStorageAccount.

CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(connectionString);

Next, we need to get an instance of the CloudFileClient that will be used to execute requests against the File service.

CloudFileClient cloudFileClient = cloudStorageAccount.CreateCloudFileClient();

Now get a reference to the share using the CloudFileClient and the share name. I’m also going to call CreateIfNotExists() so if the share isn’t already there, it will create it. This will only create it the first time, but keeps you from having to remember to create a share every time you need a new one that you’re going to access programmatically.

CloudFileShare cloudFileShare = cloudFileClient.GetShareReference(ShareName);
cloudFileShare.CreateIfNotExists();

I want to use the same code to process files in the root directory and in the subfolders, so I have some variables that I’m setting, such as writeToRoot and fileDirectory to make this simpler. Depending on writeToRoot, fileDirectory will be null or will point to the subfolder.

So let's set up the reference to the root directory of the share first. Then I'll set writeToRoot, which tells me whether I'm going to be using the root directory or a subfolder.

CloudFileDirectory rootDirectory = cloudFileShare.GetRootDirectoryReference();
writeToRoot = string.IsNullOrWhiteSpace(FolderPath);

Now I’m going to set up the fileDirectory that will point to the subfolder if applicable.

CloudFileDirectory fileDirectory;
if (writeToRoot)
{
    fileDirectory = null;
}
else
{
    fileDirectory = rootDirectory.GetDirectoryReference(FolderPath);
    fileDirectory.CreateIfNotExists();
}

And last but not least, I’ll set up directoryToUse, which is a CloudFileDirectory object, to point at either the root or the subfolder, depending on which one we’re targeting.

directoryToUse = writeToRoot ? rootDirectory : fileDirectory;

I’ve put this in a method called SetUpFileShare() that I will execute each time I want to execute a command against the share. I’m doing this because the user of the application can change the criteria such as storage account at any time, and I’m providing those textbox values when I instantiate the AzureFiles object.
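
Put together, SetUpFileShare() comes out looking roughly like this. This is a sketch assembled from the snippets above; cloudFileShare, writeToRoot, and directoryToUse are fields on the class, and StorageAccountName, StorageAccountKey, ShareName, and FolderPath come from those textbox values.

private void SetUpFileShare()
{
    //parse the connection string and get a client for the File service
    CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(connectionString);
    CloudFileClient cloudFileClient = cloudStorageAccount.CreateCloudFileClient();

    //get the share, creating it if it's not already there
    cloudFileShare = cloudFileClient.GetShareReference(ShareName);
    cloudFileShare.CreateIfNotExists();

    //figure out whether we're working in the root or in a subfolder
    CloudFileDirectory rootDirectory = cloudFileShare.GetRootDirectoryReference();
    writeToRoot = string.IsNullOrWhiteSpace(FolderPath);

    CloudFileDirectory fileDirectory = null;
    if (!writeToRoot)
    {
        fileDirectory = rootDirectory.GetDirectoryReference(FolderPath);
        fileDirectory.CreateIfNotExists();
    }

    directoryToUse = writeToRoot ? rootDirectory : fileDirectory;
}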

Upload files to the share

On my screen, I have a place for the local directory (and a browse button). When you click Upload, it uploads all the files in that directory. It is not recursive – it won't pick up subfolders. (Maybe later.) If the folder path field is blank, it will upload the files to the root of the share; if it is not blank, it will upload them to that folder.

I discovered there are a couple of ways to upload files to a share. Here’s one in which you create the URL to the file and execute the upload request with the storage account credentials.

string stringUri = @"https://" + StorageAccountName + ".file.core.windows.net/" + ShareName + "/";
if (!writeToRoot)
    stringUri += FolderPath + @"/";
stringUri += fileNameOnly;

Uri theUri = new Uri(stringUri);
CloudFile cloudFile = new CloudFile(theUri, Credentials);
FileInfo fi = new FileInfo(fileList[i]);
long fileSize = fi.Length;
cloudFile.Create(fileSize);

The Credentials object is created this way:

public Microsoft.WindowsAzure.Storage.Auth.StorageCredentials Credentials
{
  get
  {
    return 
      new Microsoft.WindowsAzure.Storage.Auth.StorageCredentials(StorageAccountName, StorageAccountKey);
  }
}

This works okay, but you have to call the FileInfo class to get the file size, which is one more step than you need. However, if this is being executed by some code that doesn’t know how to format the connection string, or if you have a Uri from another source and you’re not creating it yourself, this could be the way to go.

Here’s the second way:

CloudFile cloudFile2 = directoryToUse.GetFileReference(fileNameOnly);
cloudFile2.UploadFromFile(fileList[i], System.IO.FileMode.Open);

Note that it has run through SetUpFileShare(), which sets directoryToUse and has created the CloudFileClient with the connection string. This is (obviously) much shorter and to the point.

One note: in the method call for UploadFromFile, the FileMode specified is for the local file, not the file in the cloud.

Download files from a folder on the share

Downloading files is very similar to uploading them. In the test application, it downloads the files that are selected in the ListBox. In the interest of simplicity, I'm downloading them to a folder called "Downloads" that I create in the local folder selected for uploads. For each file, this is the command that I'm executing. Note the use of the CloudFileDirectory "directoryToUse", and remember that SetUpFileShare() has been run before this is called to set all of the references accordingly.

CloudFile cloudFile2 = directoryToUse.GetFileReference(fileNameOnly);
cloudFile2.DownloadToFile(localPath, FileMode.Create);

As with the uploads, the FileMode specified is for the local file. FileMode.Create means it will write the file if it doesn't exist, and overwrite it if it does.

Get a list of files in a folder on the share

First, I've set up a class for the items in the listbox on my form. The ListBox contains a list of ListBoxFileItems.

public enum ListBoxFileItemType { File, Directory, Other };

public class ListBoxFileItem
{
    public string FileName { get; set; }

    public ListBoxFileItemType FileItemType { get; set; }

    public override string ToString()
    {
        return FileName;
    }
}

Note the field “FileItemType”. To get a list of files, you get an IEnumerable<IListFileItem> from the method call to directory.ListFilesAndDirectories(). This returns both files and directories in the requested path, and there is no property that tells you which is which. To tell the difference, I cast the item as a CloudFile, and if the result is null, I cast it as a CloudFileDirectory and check that for null. It's probably always going to be one or the other, but I never assume, so I allowed a type of “Other” in case they later expand the possibilities.

So this is how I’m retrieving the list of files or directories, and putting them into my List<ListBoxFileItem> to be returned to the screen and loaded.

public bool LoadListOfFiles(out List<ListBoxFileItem> listOfFiles)
{
    bool success = true;
    listOfFiles = new List<ListBoxFileItem>();
    foreach (IListFileItem fileItem in directoryToUse.ListFilesAndDirectories())
    {
        try
        {
            ListBoxFileItem fi = new ListBoxFileItem();
            fi.FileName = Path.GetFileName(fileItem.Uri.LocalPath);
            fi.FileItemType = GetFileItemType(fileItem);
            listOfFiles.Add(fi);
        }
        catch (Exception ex)
        {
            success = false;
            System.Diagnostics.Debug.Print("Exception thrown loading list of files = {0}", 
                ex.ToString());
        }
        //if any item fails to load, stop looping
        if (!success)
            break;
    }
    return success;
}

And this is how I determine ListBoxFileItemType:

private ListBoxFileItemType GetFileItemType(IListFileItem fileItem)
{
    ListBoxFileItemType thisType;
    CloudFile fileCheck = fileItem as CloudFile;
    //cast it to a file and see if it's null. If it is, check to see if it's a directory. 
    if (fileCheck == null)
    {
        //check and see if it's a directory
        //you can probably assume, but since I haven't found it documented, 
        //  I'm going to check it anyway
        CloudFileDirectory dirCheck = fileItem as CloudFileDirectory;
        if (dirCheck == null)
        {
            thisType = ListBoxFileItemType.Other;
        }
        else
        {
            thisType = ListBoxFileItemType.Directory;
        }
    }
    else
    {
        thisType = ListBoxFileItemType.File;
    }
    return thisType;
}

So this sends back a list to the UI, and it has the types, so you can change ToString() to show the type if you want to.
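
For example, if you do want the type to show in the listbox, a small change to ToString() would do it (this particular format is just an example):

//shows e.g. "shark.jpg (File)" or "Images (Directory)" in the listbox
public override string ToString()
{
    return string.Format("{0} ({1})", FileName, FileItemType);
}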

Delete files from the share

It’s relatively easy to delete files from the share. In my case, the UI is sending a list of ListBoxFileItem, and I’ve already called SetUpFileShare() to set the appropriate references.

Because I have a list of ListBoxFileItems, I have the type (file or directory) for each object to be deleted. This is how I delete one entry (oneItem, which is a ListBoxFileItem). Note that I am using the CloudFileDirectory object "directoryToUse", which I set in SetUpFileShare() to point to either the root or a subfolder, depending on the user's selection.

Note that you cannot delete a directory that has files in it; it will throw a StorageException. I'm catching that and sending back the appropriate error message.

if (oneItem.FileItemType == ListBoxFileItemType.Directory)
{
    try
    {
        CloudFileDirectory cloudDir = directoryToUse.GetDirectoryReference(oneItem.FileName);
        cloudDir.Delete();
    }
    catch (Microsoft.WindowsAzure.Storage.StorageException ex)
    {
        errorMessage =
            string.Format("Could not delete directory {0}; it has files in it.", oneItem.FileName);
    }
    catch (Exception ex)
    {
        errorMessage = string.Format("Could not delete directory {0}. Ex = {1}", 
          oneItem.FileName, ex.ToString());
    }
}
else if (oneItem.FileItemType == ListBoxFileItemType.File)
{
    CloudFile cloudFile = directoryToUse.GetFileReference(oneItem.FileName);
    cloudFile.Delete();
}
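
If you really do need to delete a directory that isn't empty, the library won't do it for you, but you can recurse through it yourself. Here's a rough sketch of a hypothetical helper (not part of my test application) that uses the same ListFilesAndDirectories call shown earlier to delete the files and subdirectories from the bottom up:

//delete everything under cloudDir, then the directory itself
private void DeleteDirectoryRecursive(CloudFileDirectory cloudDir)
{
    foreach (IListFileItem item in cloudDir.ListFilesAndDirectories())
    {
        CloudFile file = item as CloudFile;
        if (file != null)
        {
            file.Delete();
        }
        else
        {
            CloudFileDirectory subDir = item as CloudFileDirectory;
            if (subDir != null)
            {
                DeleteDirectoryRecursive(subDir);
            }
        }
    }
    cloudDir.Delete();
}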

Download the completed project

The completed project is here. I did not include the packages, so you need to have Visual Studio set to retrieve the packages when you build. You can change that in Tools/Options/NuGet package manager/General. The project was created using VS2013/SP2.

Summary

In this article, I've shown you how to use the Storage Client Library to get a reference to a share and create it if it doesn't exist, upload and download files between the local computer and your share, list the files in a folder on the share, and delete files in a folder on the share. Much of the Storage Client Library code is similar to that used when accessing Blob Storage, so if there's something else you need to do, Intellisense and trial & error can be your best friends, too!

Accessing your Azure Files from inside a VM

July 11, 2014

In the first post of this series on the Azure Files preview, I discussed what the feature entailed, and what you can do with it, as well as how to sign up for the preview. In the second post, I showed how to get the PowerShell cmdlets for the preview, and how to use them to manage your share and transfer files to/from your share. In this post, I’ll show you how to create a VM in Azure, show you two ways to map the share, and how to copy files to it.

How to create a VM in Azure

I realize many who read this post will already know this; I’m including it for those who are new to Microsoft Azure and want to try out the Azure Files feature.

First, log into your Azure account. Select Virtual Machines from the menu on the left.

image

Next, at the bottom, select + New, Compute, Virtual Machine, From Gallery.

image

On the next screen, select Windows Server 2012 R2 Datacenter and click the right arrow in the bottom right hand side of the screen.

image

On the next screen, fill in a name for the VM. Then specify a username and password. These credentials will be used when you RDP into your VM, so don’t forget them! When you’re finished, click on the arrow on the bottom right-hand side of the screen.

image

On the next screen, specify “Create a new cloud service” and provide a DNS name for the cloud service. This is kind of a wrapper for the VM. (You can deploy multiple VM’s into the same cloud service, and they will be load balanced under the same IP address.) In my case, I already have a storage account set up. If you don’t, or want to use a different one, you can ask it to generate one for you (it’s an option in the Storage Account dropdown list). For optimal performance, pick a region that is close to your location. When you’re done, click the arrow at the bottom right-hand side of the screen.

image

The next screen is used to select the VM Extensions to be installed. Leave the checkbox checked for “Install the VM agent”; for the purpose of this exercise, you don’t need any of the other extensions, so just leave them unchecked and click the checkmark in the circle at the bottom right-hand side of the screen to continue.

image

Now Azure will provision and start up your VM. At this point, you just wait until it says “Running” (like the first one in my list displayed below), and then you’re ready to continue. This seems to take a long time, but you’ll find the wait much more enjoyable if you take a quick run to the closest Starbucks. (I’ll be right back…)

image

Now click on your VM and it will bring up the Dashboard. (It might bring up the QuickStart screen; if it does, just continue to the Dashboard.)

At the bottom of the screen, you will see the Connect icon is lit up and waiting for you.

Access your Share from inside your VM

Click Connect at the bottom of the portal screen to RDP into your VM. When prompted, specify the username and password that you provided when creating the VM. Click through the security prompts. If it’s the first time you’ve logged into your VM, click Yes when prompted with this screen:

image

Now let’s attach our share. There are a couple of ways to do this. One is to use the NET USE command.

Open a command window. The easiest way to do this is to click the Windows start button and on the Modern interface, just start typing “command”. This will bring up the search box on the right, and you should see what you’re looking for; click on it to open the command window.

image

Here is the command to use to connect your share:

C:\> net use [drive letter]: \\[storage account name].file.core.windows.net\[share name]
        /u:[storage account name] [storage account key]

I’m going to mount the share I created in my previous post. It was called ‘nightbird’, and was on ‘nightbirdstorage3’. I’ve blurred out my storage account key in the following example:

image

Now open Windows Explorer, and you’ll see the share listed.

image

Now you can double-click on it and see what’s on it. On mine, I can see the images I uploaded and the folder I created in the previous post.

image

If you double-click on the folder Images, you can see the files in that folder. At this point, this is just like using any network share you’ve ever used on-premises.

Any changes you make from within the VM will of course appear if you go back and use the PowerShell commands to list the files on the share, whether you add, change, or delete files and/or directories.

At this point, if you go back to the command window, you can use the NET USE command to see what shares you have attached.

image

Another way to access the share

Instead of using the NET USE command, you can actually map the network drive from within Windows Explorer.

Bring up Windows Explorer, right-click on “This PC” and select “Map network drive:”.

image

Type in what would be the network UNC path to your share, which will be in this format:

\\[storage account name].file.core.windows.net\[share name]

Be sure that “Reconnect at sign-in” is checked. Click Finish to complete the drive mapping.

image

You will be prompted for username and password. This is for the share, so the username is the storage account name (nightbirdstorage3 in my case) and the password is the account key.

After doing this, it will open Windows Explorer, and show the share and the files and directories in the root.

image

So that’s the second way to map your network drive.

How do I put files on my share?

This is pretty simple; you can use the RDP session to do that. Just bring up Windows Explorer on your local computer. Select the directory and/or files that you want to copy and click Ctrl-C.

image

Switch to the RDP session. Using Windows Explorer, select where you want the files to go, then click Ctrl-V.

image

You can also copy the files on the share and paste them into Windows Explorer on the desktop to download them.

Another way to do this: when you click Connect in the portal, it will prompt you to save or open the RDP file. If you save it, you can then go find the RDP file, right-click on it, and choose Edit. On the Local Resources tab, you can select More… under Local devices, and then expand Drives and select a local drive. This will map the drive when you log into the VM, and you can access it as if it were local in the VM. I'm going to attach my D drive and then click OK, select the General tab on the main RDP window, and select Connect to connect to the VM.

image

Now after I log into my VM and bring up Windows Explorer, I can access that drive from inside the VM:

image

Now I can copy the files directly from my local computer to the file share accessed by my VM (and vice versa).

Regions and subscriptions and access, oh my!

An important thing to note is that your file share does not need to be in the same account as the VMs you are going to attach it to; it just needs to be in the same region.

I have multiple Microsoft accounts that have an Azure subscription. If I create a storage account in US West in one of my Azure accounts and set up a file share, I can access that file share from any VM in US West in any of my other Azure subscriptions. This could bring up some interesting use cases.

Something to try

When you attach a file share to multiple VM’s, and one of the VM’s changes one of the files, a notification is sent to the other VMs that the file has changed and their view of the file share is updated. To see this work in action, you can follow these steps:

1. Create another VM in the same region as your file share.

2. RDP into the VM and attach the share.

3. RDP into the first VM and bring up the share folder.

4. Change one of the files on the share in one of the VMs.

5. Change over to the other RDP session and look at the files on the share, and you will see the update there as well.

Making the file share sticky

When I attach the share using NET USE, log out, restart the VM, and come back in, I have to provide credentials again to use the file share.

If I map the network drive using Windows Explorer, and check the box that says “Reconnect at sign-in”, then when I log out, restart the VM, and come back in, the share is still mapped and available without providing credentials.

This is tied to the user account used to map the drive. So be aware that you might need to map the drive under other user accounts. For more information, check out this article by the Microsoft Azure Storage team that addresses the stickiness of Azure File share mappings.

If you want to access that share from a web site, you would create a virtual directory on the VM hosting the web site. For example, if I wanted \\nightbirdstorage3.file.core.windows.net\nightbird\Images\ to be accessible as http://contoso.com/images, I would create a virtual directory that points to that folder on the share using that UNC path of the share. When I do that, it requests the credentials for accessing the share, and I can provide the storage account information at that time and it will be sticky if the role is rebooted.  (Thanks to Steve Evans for the tip.)

Summary

In this post, I showed you how to create a VM in Azure and attach your share using two different methods. I also showed how to copy files to the share from your local computer. In my next post, I’ll show you how to programmatically create a share, create directories, and upload/download files using the Storage Client Library.

Azure Files: how to manage shares, directories, and files

July 9, 2014

In my previous post, I discussed the new Azure Files preview, what it entailed, and what you can do with it, as well as how to sign up for the preview. Don’t forget that you also need to create a new storage account in order to get the Azure Files endpoint.

Unfortunately, there is no UI available yet to let you manage and look at your shares. I use Cerebrata’s Azure Management Studio to access my storage tables, queues, and blobs, and I expect them to have a new version supporting Azure Files before too long. In the meantime, you have multiple options: write some code and use the REST APIs, use PowerShell, RDP into your VMs and attach the share, or write a program using the Storage Client Library. In this post, I will show you how to use the PowerShell cmdlets to create your file share, upload files to it, list the files on it, etc.

Setting up the PowerShell cmdlets for the Azure Files preview

First, you need to download the PowerShell cmdlets for this preview. At the time of this writing, they can be found here. Download the zip file and save it to your local computer. Depending on your OS and security settings, you might need to unblock the zip file before you can access it. To do this, right-click on the file and check the properties; if there is an Unblock button, just click on it to unblock the file.

These Azure File cmdlets use Storage Client Library 4.0, so you need to run them in a different PowerShell session than the regular cmdlets. You need to unzip them, run PowerShell, and then set your default directory to the unzipped files. So you might want to think about that before unzipping the files, and put them in an easy-to-type location. For example, I’m going to put mine in a folder on the root of the data drive in my computer instead of MyDocuments. So I’ve created a folder called D:\zAzureFiles\ (so it shows up at the bottom of the directory listing), and unzipped the zip file into that folder. (If you don’t have a D drive, just create a folder on your C drive and use that).

When you have the files unzipped, run either Windows PowerShell or Windows Azure PowerShell. Change directories to the AzureStorageFile directory created when you unzipped the zip file.

image

If you do a DIR at this point, it will show you the files in the directory. What you want to do is load the Azure Files cmdlets. Here is what the command looks like:

> import-module .\AzureStorageFile.psd1

When I type this into the Windows Azure PowerShell window, I get the list of cmdlets that were loaded.

image

Now that you have PowerShell up and running with the new cmdlets, you can fully manage your share(s).

Using the PowerShell cmdlets

The first thing you need to do is create a context that specifies the credentials for the storage account you want to use for your share. This is what the command looks like; you’ll want to make the appropriate substitutions.

> $ctx = New-AzureStorageContext [storageaccountname] [storageaccountkey]

Nobody in their right mind wants to type in their storage account key, so you'll want to paste it. In case you're a newbie at PowerShell, I will tell you that the PowerShell window is similar to a Command Window: after copying the key into the Windows clipboard, you can right-click on the line in the PowerShell window where you want it to go, and it will be pasted automatically (I didn't even have to right-click and select Paste; I just right-clicked and it pasted).

I have a Surface Pro tablet, and when trying this, I had all kinds of trouble. I could not right-click and get it to paste with the Pen, the Touchpad, or using my finger (not even the middle one). So here's a free tip for those of you trying the same thing: Copy the string into the Windows clipboard, then select the PowerShell window, click Alt+Space. This will bring up a menu where you can type E for edit, and P for paste.

image

(This doesn’t seem like a big deal unless you create a demo on your desktop computer for a talk that you’re giving using your Surface Pro, and you don’t actually test the demo on the Surface Pro until midnight the night before the talk, and then you find that you can’t get it to do something simple like paste a big-ass string into a window. Not that I would ever do anything like that. *cough*. But I digress.)

Here’s my command creating the context in PowerShell, using my storage account called nightbirdstorage3 with my storage account key truncated so you can’t access my storage account and store your vacation pictures for free:

image

Now that you have the context set, you can create a file share in that storage account like this:

> $s = New-AzureStorageShare [sharename] –Context $ctx

I’m going to create a share called nightbird in my storage account:

image

Ta-dah! I now have a file share called nightbird in my nightbirdstorage3 storage account.

So how do you upload files onto your share? Using this command, you can upload one file to the share you’ve set as $s.

> Set-AzureStorageFileContent –Share $s –Source [local path to the file]

I’ll upload several images from the D:\zAzureFiles folder. Notice that on the last one, I’ve already uploaded that file, so it gives me an error message. You can use the –Force switch to tell it to overwrite the file if it already exists – just add it to the end of the command.

image

What if you don’t want them in the root folder of the share, but in a subfolder? You can create a directory on the share, and then specify that directory when uploading the files. To create a directory (where $s is the share):

> New-AzureStorageDirectory –Share $s –Path [foldername]

Then when you upload the file, you specify the path ($s is the share):

> Set-AzureStorageFileContent –Share $s –Source [path to local file] –Path [foldername]

I’ll create a directory called Images, and upload some files into it:

image

How do I see what is on my share? Here are the commands for that, followed by an example.

This gets the files in the root.

> Get-AzureStorageFile $s

This gets the files in the specified folder.

> Get-AzureStorageFile $s –Path [foldername]

Here is it in action:

image

Note in the first listing, which is the root, it shows both the Images folder and the files in the root directory. For the second command, it shows what is in the Images folder. There are no subfolders in the Images folder; if there were, it would show them in the listing.

Deleting a file from the share

These are very similar to the commands above. To delete a file, use Remove-AzureStorageFile instead of Set-AzureStorageFileContent (where $s is the share).

> Remove-AzureStorageFile –Share $s –Path [foldername]/[filename]

To remove a directory, use Remove-AzureStorageDirectory:

> Remove-AzureStorageDirectory –Share $s –Path Images

Note that the directory must be empty before you can delete it. Right now, they don’t have a way to delete a directory and all the files in it recursively like Directory.Delete(path, true) in .NET.

Downloading a file from the share to your local computer

To download a file from the share to the local computer, use Get-AzureStorageFileContent.

> Get-AzureStorageFileContent –Share $s –Path [path to file on the share] [path on local computer]

Here’s an example of downloading shark.jpg from the Images folder on the share into my local directory:

image

If the file already exists locally, it will give you an error. You can use the –Force switch to overwrite it if it already exists – just add it to the end of the command.

Uploading/downloading multiple files with one command

What if you want to upload a bunch of files? You have to either specify them separately, or use AzCopy version 2.4. Download AzCopy, then run the install. Then go search the C:\ drive to find out where it put the dang files. I copied the whole folder over to my D:\zAzureFiles directory for easier use. Use a Command Window and set the directory to the AzCopy directory. Then you can run AzCopy by typing in a command. Because of the likelihood that I’m not going to get it right the first time, I create a text file with the command in it, and save it as a .cmd file and execute it. Or copy the command from the text file and paste it into the command window and execute it. This makes it easier to edit, copy, and paste repeatedly when all you have to change is a directory, or to fix it if you have a problem.

Here are some examples of what you can do with this.

Upload all the files in a directory (designated by [localpath]) to the specified storage account and share name. The /s switch tells it to upload the directory recursively, which means it picks up the files in that directory, and all subdirectories and files under that original directory, retaining the directory structure.

AzCopy [localpath] https://[storageaccount].file.core.windows.net/[sharename]         
/DestKey:[StorageAccountKey] /s

Here’s my example:

image

To make sure this worked, I’ll go back to PowerShell to pull a list of files in the directory and see if my new files are there.

image

You can see that there are two new files on the share – greatwhiteshark.jpg and macworld_bing.jpg – those came from my test directory.

Here’s how to upload files using a wild card. For example, you could upload all the jpg files.

AzCopy [localpath] https://[storageaccount].file.core.windows.net/[sharename]  *.jpg          
/DestKey:[StorageAccountKey] /s

Here’s how to download the files from the file share to your local computer. This will download all of the files on the file share, including all subdirectories and the files in them.

AzCopy https://[storageaccount].file.core.windows.net/[sharename] [localpath]
/SourceKey:[StorageAccountKey] /s

You can also copy just one file. This does not look recursively through the directories; it only looks in the one you specify.

AzCopy https://[storageaccount].file.core.windows.net/[sharename]/foldername [localpath]
[name of the file you want to download] /SourceKey:[StorageAccountKey]

Note: It will not retain empty folders. So if you upload a directory structure that has an empty folder in it somewhere, that folder will not appear on the target. The same applies when downloading a directory structure.

Summary

In this post, I showed you how to obtain the PowerShell cmdlets for the Azure Files preview, and how to use them to manage your share and transfer files to/from your share. In my next post, I’ll show you how to create a VM, RDP into it, attach the file share, and access the files.

Overview of the new Azure Files Preview

July 9, 2014

One of the recent announcements from the Azure Storage team was about a new feature called Azure Files. This basically allows you to set up an equivalent to network shares in your Azure environment, to make files available to multiple VM’s simultaneously.

Some use cases

If you have a web application that outputs data that you need to input into a different web application, you could set up a file share and give access to both applications; one could write the data, the other could read it.

If you have a set of tools that you frequently use, you might put them on a shared drive. Then you can attach the drive and use the tools from multiple VMs. As a developer, I would put the installation packages for tools that I use every day – Beyond Compare, FileZilla, etc. – on the share, and when I spin up a VM with Visual Studio in it, I could then attach the share and install the tools in my VM.

If you have a team of developers using Visual Studio in VM’s, this makes it easy for them to share tools, and to enable all of the team members to use the same version. You could even put a OneNote notebook on the share and have the developers share notes about the project in that notebook.

Wasn’t there already a way to do this?

There was a convoluted way to do this with the current IaaS offering – hosting a file share backed by an IaaS data disk, writing code to find that file share from the rest of the VMs in your service, trying to handle high availability, etc. With the new feature, you just set up the share, then attach it in your VMs. It's that simple.

Since Azure Files is built on the same underlying layers as Azure Storage, it provides the same availability, durability, and scalability that Azure Storage already provides for blobs, queues, and tables. An Azure Files share can be locally redundant (3 replicas of the data within a single region) or geo-redundant (3 replicas of the data in the primary region, and 3 replicas in another region).

How do I access the files on the share?

Azure Files provides two interfaces: SMB 2.1 and REST. The SMB protocol allows you to access the file share from VMs. The REST interface provides something you don't have with on-premises file shares – the ability to access the files in the share over HTTP. For example, you might store files on the share that are used in multiple web applications, then use the network path to the share in your web applications to access those files. Those files can then be updated externally using REST.

The SMB 2.1 protocol is natively supported by OS APIs, libraries, and tools. This includes Windows (CreateFile, ReadFile, WriteFile…), .NET (FileStream.Read, FileStream.Write, etc.), and many others. It also supports standard file system commands for moving and renaming files and directories, as well as change notifications.
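
For example, from a VM that has attached the share (with NET USE or a mapped drive), plain System.IO code works against the UNC path; the account, share, and file names below are just placeholders.

//read a file straight off the share over SMB -- no storage SDK involved
string uncPath = @"\\mystorageaccount.file.core.windows.net\myshare\Images\readme.txt";
string contents = File.ReadAllText(uncPath);
Console.WriteLine(contents);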

There is an SMB 3.0 protocol available, and I suspect that eventually Microsoft will support it. I think they chose 2.1 to start with because it is supported by the most tools and OSes. It is definitely supported by Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. While it is not completely supported on every version of Linux, it does work on some distributions, such as Ubuntu 13.10 and Ubuntu 14.04 LTS.

Note that the file and directory objects on the share are true file and directory objects. This is different from Blob storage, which has a flat structure. In Blob storage, you only have containers (the equivalent of a top-level directory). Anything that looks like it is in a folder within a container is actually part of the Blob file name, which is frequently parsed and visualized like a directory structure.
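
In code, the difference looks like this (a sketch using the storage client library, where blobContainer and share are assumed to be existing references): with blobs, the "folder" is just part of the blob name, while with Azure Files you navigate real directory objects.

//Blob storage: "Images/Animals" is only a naming convention inside the blob name
CloudBlockBlob blob = blobContainer.GetBlockBlobReference("Images/Animals/Tiger.jpg");

//Azure Files: Images and Animals are real directories you can reference and manage
CloudFile file = share.GetRootDirectoryReference()
                      .GetDirectoryReference("Images")
                      .GetDirectoryReference("Animals")
                      .GetFileReference("Tiger.jpg");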

What are the specs?

These are the preview scalability targets for Azure Files:

  • Up to 5TB per share
  • A file can be up to 1 TB
  • Up to 1000 IOPS (of size 8KB) per share
  • Up to 60 MBps per share of data transfer for large IOs

Using the SMB protocol, an Azure Files share is only accessible to VMs in the same region as the share. However, you can access the files from anywhere in the world using REST. (This may be because SMB 2.1 doesn't have built-in encryption, so there's no way to protect the files going out of the data center.)

One question I’ve seen asked a lot so far is “Can I attach the share to my desktop?” The answer is no, because the SMB protocol is not designed for the latency involved. This is going to be a huge feature request, so I’d be surprised if Microsoft didn’t come up with a solution at some point in the future.

How can I get access to this preview feature?

Go to the Azure Preview Portal and sign up for the Azure Files service using one (or more) of your existing Azure subscriptions. As subscriptions are approved, you will get an e-mail notification.

After being approved, log into your account and create a new storage account. It will not work with storage accounts created prior to signing up for the preview. When you create the new account, you will see the new endpoint displayed in the portal (https://storageaccountname.file.core.windows.net).

From there, you can create the share, put files into it, and connect to it from a VM.

Other resources

Check out this blog article by the storage team for more details. It also has a link to the video from the Tech Ed session.

Summary

The new Azure Files preview is something people have been asking for for a long time. To be able to set up a file share and access it from multiple VMs simultaneously brings a lot of new possibilities to Microsoft Azure, and should help people migrating from on-premises to the cloud, as well as those running hybrid implementations. In my next post, I'll show you how to use PowerShell to maintain a file share and upload/download files.

Diagnostics and Logging in Azure Web Sites

April 14, 2014

A couple of months ago, Microsoft Press asked me to write a blog article about something Azure-related. I chose to write about diagnostics and logging in Azure Web Sites. If you are deploying anything to run in the cloud, you should be including logging in all of your applications. Because the software is running in the cloud, you have less visibility into the machine it’s running on, and trace diagnostics and logging can be a lifesaver when there’s a problem.

To check out the article, click here. I hope it’s helpful!

Bay Area Azure events in March and April 2014

March 14, 2014

There are several upcoming Windows Azure events in the SF Bay Area. All of these events are free and open to everyone. Food and drinks will be provided, so please register if you’re coming so we can make sure we have enough food!

March 18: A Real Story of Azure Migration

On March 18, Giscard Biamby is coming to speak about his company’s experience migrating one of their larger legacy applications to Windows Azure and how they implemented continuous delivery. It’s always interesting to hear these stories from real customers rather than from Microsoft marketing. For more details or to register, click here.

March 29: Global Windows Azure Bootcamp

On March 29, I will be running a Global Windows Azure Bootcamp at Foothill College in Los Altos Hills with the help of several of my friends. This is a free event run by community leaders worldwide on the same day. So far, there are over a hundred locations confirmed. Everyone is welcome. If you know nothing about Azure and would like to have an opportunity to learn a bit and have people available to help you with some hands-on labs, this is a great opportunity. Also, if you’re already doing Azure and have questions about your project, feel free to attend this bootcamp and take advantage of the opportunity to ask some experts for advice.

I’ll be presenting an overview of Windows Azure. Neil Mackenzie will be speaking on IAAS and Networking. Eugene Chuvyrov and Fabien Lavocat will be showing how to use Mobile Services from an iOS device and a Windows device. The rest of the day will be hands-on labs. For more details or to register, click here.

April 2: Vittorio Bertocci on Identity and Active Directory

On April 2nd, the Windows Azure meetup in San Francisco will be hosting Vittorio Bertocci from Microsoft. Vittorio will be in SF for the Microsoft //build conference (April 2nd through April 4th). Vittorio is brilliant and a vibrant, entertaining speaker, focusing on Identity and Active Directory in Windows Azure. He spoke last year, and we had a huge turnout, with lots of conversation and audience participation; it was a great event. This should be another great meetup. For more details or to register, click here.

April 22nd: Deep Dive on Windows Azure Web Sites

On April 22nd, Aidan Ryan will be speaking at the Windows Azure meetup in San Francisco, doing a deep dive on Windows Azure Web Sites. This feature becomes more robust every day, and Aidan will cover the basics as well as the recent releases, such as WebJobs. He’s also promised to incorporate anything new announced at //build. For more details or to register, click here.

I feel very fortunate to live in the San Francisco Bay Area, where there are so many opportunities for keeping up with technology. I hope I’ll see you at one or more of these events!

Big Data Hackathon with Big Data Names, February 8-9, 2014

February 5, 2014

There’s a great opportunity coming up this weekend (2/8/14-2/9/14) for those who have an interest in Big Data. Microsoft is hosting a (free) hackathon at the Microsoft Silicon Valley Campus in Mountain View (California). This is a Future Cities Hackathon; the focus will be looking at how Big Data can be used to solve problems in San Francisco. There are prizes on offer for the winning teams. (At the time of this writing, I was unable to pry the list of prizes out of Microsoft. I’ll update this post if I get the list before the hackathon.)

There will be three categories, which will be judged by a panel of experts.

  1. Data modelling: Can you find trends in data on the movements of SF citizens? Can you make accurate predictions about the behaviour of pedestrians and traffic? Can you alleviate traffic chaos and aid in the ergonomic redistribution of parking? Can you set up your own pre-crime division!? Any tools or languages can be used in this category, from iPython Notebook, Hadoop, R, C++, Spark, and Matlab to any other data analysis tool. The most innovative entry will win and have its solution published.
  2. Data Visualisation: Can you find the best way to visualise the movements of pedestrians around the City? Is it mapping, D3 charts, or a more interactive web-driven approach? How can you show the relative behaviours of groups of people, traffic, or hotspots? For example, could you write a training application for traffic officers, or build a dashboard for City planners? The best and most innovative idea will win!
  3. Mobile application: Do you want to combine modelling and visualisation so that mobile users can find things out about the City when they’re on the move? Will you deliver this through a combination of maps and text? Which audience will you target? The best entry wins. This is a Windows Phone competition and will get support from Nokia, who will help get the app marketed and published in the app store.

To help with the hackathon, they are bringing in some of the world’s foremost experts in Cloud and Data Science, including Richard Conway and Andy Cross, two Windows Azure MVPs based in London whose company, Elastacloud, specializes in Big Data consulting. They are both brilliant, and they will be speaking and helping people throughout the hackathon, alongside experts from Hortonworks and Microsoft. This is a rare opportunity to talk to several Big Data experts in person and see how anyone can get up and running in hours solving large-scale data problems.

If you know nothing about Cloud and/or Big Data, this is your chance. Microsoft will be supplying cloud time for all attendees. Interested in “machine learning” or advanced analytics? Then come to the Microsoft Future Cities Big Data Hackathon!

This hackathon is hosted by Microsoft, Hortonworks, and Elastacloud. To sign up, click here.

Windows Azure at San Diego Code Camp – 27th and 28th of July 2013

July 15, 2013

There is a code camp in San Diego on July 27th and 28th that has a lot of really interesting sessions available. If you live in the area, you should check it out – there’s something for everyone. It also gives you an opportunity to talk with other developers, see what they’re doing, and make some connections.

I’m going to be traveling from the San Francisco Bay Area to San Diego to speak, as are some of my friends — Mathias Brandewinder, Theo Jungeblut, and Steve Evans.

For those of you who don’t know me, I’m VP of Technology for GoldMail (soon to be renamed PointAcross), and a Microsoft Windows Azure MVP. I co-run the Windows Azure Meetups in San Francisco and the Bay.NET meetups in Berkeley. I’m speaking about Windows Azure. One session is about Windows Azure Web Sites and Web Roles; the other one is about my experience migrating my company’s infrastructure to Windows Azure, some things I learned, and some keen things we’re using Azure for. Here are my sessions:

Theo works for AppDynamics, a company whose software has some very impressive capabilities to monitor and troubleshoot problems in production applications. He is an entertaining speaker and has a lot of valuable expertise. Here are his sessions:

Mathias is a Microsoft MVP in F# and a consultant. He is crazy smart, and runs the San Francisco Bay.NET Meetups. Here are his sessions:

Steve (also a Microsoft MVP) is giving an interactive talk on IIS for Developers – you get to vote on the content of the talk! As developers, we need to have a better understanding of some of the system components even though we may not support them directly.

Also of interest are David McCarter’s sessions on .NET coding standards and how to handle technical interviews. He’s a Microsoft MVP with a ton of experience, and his sessions are always really popular, so get there early! And yet another Microsoft MVP speaking who is always informative and helpful is Jeremy Clark, who has sessions on Generics, Clean Code, and Design patterns.

In addition to these sessions, there are dozens of other interesting topics, like Estimating Projects, Requirements Gathering, Building cross-platform apps, Ruby on Rails, and MongoDB (the name of which always makes me think of mangos), just to name a few.

So register now for the San Diego Code Camp and come check it out. It’s a great opportunity to increase your knowledge of what’s available and some interesting ways to get things done. Hope to see you there!