Running in Azure with Tools/SDK version 1.2, I had working code that ensured my IIS logs and IIS Failed Request logs were transferred automatically to blob storage, where I could easily view them. After upgrading to Azure Tools/SDK 1.3, my IIS logs no longer showed up in blob storage. After looking around, I found this MSDN article, which describes this as a known problem. It happens because the processes involved do not have the right permissions to access the folders where the logs are located.
The article also says this: To read the files yourself, log on to the instance with a remote desktop connection. I thought, “Great, at least I can get to them and view them.” Well, not so much. You can RDP into the instance and track down the IIS logs, but the IIS Failed Request logs are not created.
The article blithely throws this solution your way: “To access the files programmatically, create a startup task with elevated privileges that manually copies the logs to a location that the diagnostic monitor can read.” Doesn’t that sound easy? Not so much.
I started a thread in the MSDN Forum and my good friend Peter Kellner opened up a problem ticket with Azure support. So I finally have a solution, with input from and my thanks to Steve Marx (Microsoft Azure Team), Andy Cross, Christian Weyer (MVP), Ruidong Li (Microsoft Azure support), Neil Mackenzie (Azure MVP), and Cory Fowler (Azure MVP). Sometimes it takes a village to fix a problem. I have to give most of the credit to Ruidong Li, who took the information from Steve Marx and Christian Weyer on startup tasks and PowerShell and ran with it.
I’m going to give all the info for getting the IIS logs and IIS Failed Request logs working. The basic information is available from many sources, including this article by Andy Cross.
For the IIS Failed Request logs, you have to put this in your web.config (inside the <system.webServer> element):
<!-- This is so the azure web role will write to the iis failed request logs -->
<tracing>
  <traceFailedRequests>
    <add path="*">
      <traceAreas>
        <add provider="ASP" verbosity="Verbose" />
        <add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" />
        <add provider="ISAPI Extension" verbosity="Verbose" />
        <add provider="WWW Server" areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module" verbosity="Verbose" />
      </traceAreas>
      <failureDefinitions timeTaken="00:00:15" statusCodes="400-599" />
    </add>
  </traceFailedRequests>
</tracing>
In the OnStart method of your WebRole, you need this:
// from http://blog.bareweb.eu/2011/01/implementing-azure-diagnostics-with-sdk-v1-3/
// Obtain a reference to the initial default configuration.
string wadConnectionString = "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString";
CloudStorageAccount storageAccount =
    CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(wadConnectionString));
RoleInstanceDiagnosticManager roleInstanceDiagnosticManager =
    storageAccount.CreateRoleInstanceDiagnosticManager(
        RoleEnvironment.DeploymentId,
        RoleEnvironment.CurrentRoleInstance.Role.Name,
        RoleEnvironment.CurrentRoleInstance.Id);

DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();
config.ConfigurationChangePollInterval = TimeSpan.FromSeconds(30.0);

// Transfer the IIS and IIS Failed Request logs.
config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0);

// Set the configuration for use.
roleInstanceDiagnosticManager.SetCurrentConfiguration(config);
If you publish it at this point, you get no logs in blob storage, and if you RDP into the instance there will be no IIS Failed Request logs. So let’s add a startup task (per Christian Weyer and Steve Marx) to “fix” the permissions.
First, create a small file called FixDiag.cmd with Notepad, and put this line of code in it. This is going to be the command executed when the role instance starts up. Add this file to your project and set the Build Action to “Content” and “Copy to Output Directory” to “copy always” so it will include the file in the deployment when you publish your application to Azure. Here are the contents of the file.
powershell -ExecutionPolicy Unrestricted .\FixDiagFolderAccess.ps1 >> C:\output.txt
This is going to run a script called FixDiagFolderAccess.ps1 and append the output to C:\output.txt. I found the output file to be really helpful when trying to figure out whether my script was actually working. So what does the PowerShell script look like?
Here’s the first bit. This loads the Microsoft.WindowsAzure.ServiceRuntime assembly. If it’s not available, it waits a few seconds and loops around and tries again. Then it gets the folder where the Diagnostics information is stored.
echo "Output from Powershell script to set permissions for IIS logging."

Add-PSSnapin Microsoft.WindowsAzure.ServiceRuntime
# wait until the azure assembly is available
while (!$?)
{
    echo "Failed, retrying after five seconds..."
    sleep 5
    Add-PSSnapin Microsoft.WindowsAzure.ServiceRuntime
}
echo "Added WA snapin."

# get the DiagnosticStore folder and the root path for it
$localresource = Get-LocalResource "DiagnosticStore"
$folder = $localresource.RootPath
echo "DiagnosticStore path"
$folder
This is the second part; it handles the Failed Request log files. Following Christian’s lead, I’m just setting this to give full access to the folders. What’s new is that I’m creating a placeholder file in the FailedReqLogFiles\Web folder. If you don’t do that, MonAgentHost.exe will come around and delete the empty Web folder that was created during app startup. If the folder isn’t there when IIS tries to write the failed request log, it gives a “directory not found” error.
# Set the ACLs on the FailedReqLogFiles folder to allow full access by anybody.
# You can do a little trial & error to change this if you want to.
$acl = Get-Acl $folder

$rule1 = New-Object System.Security.AccessControl.FileSystemAccessRule("Administrators", "FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
$rule2 = New-Object System.Security.AccessControl.FileSystemAccessRule("Everyone", "FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")

$acl.AddAccessRule($rule1)
$acl.AddAccessRule($rule2)
Set-Acl $folder $acl

mkdir $folder\FailedReqLogFiles\Web
"placeholder" > $folder\FailedReqLogFiles\Web\placeholder.txt
Next, let’s handle the IIS logs. The credit for this goes to Ruidong Li (Rudy) with Microsoft Azure Support. This creates a placeholder file, and then it retrieves the folders under Logfiles\Web, looking for one whose name starts with W3SVC – this is the folder the IIS logs will end up in. There won’t be any the first time through, because the folder is not created until the web application is loaded for the first time.
In the first incarnation of this code, it just waited a specific amount of time and then set the folder ACLs. The problem was that there seems to be some kind of race condition, and it appeared that if it didn’t start up before the Azure process that copies the logs started up, then you could set the permissions on the folders all day long and it wouldn’t ever transfer the log files. So this code forces it to download the page, which causes the IIS logging to start, and it beats the race condition (or whatever the problem is). At any rate, this works every time.
mkdir $folder\Logfiles\Web
"placeholder" > $folder\Logfiles\Web\placeholder.txt

# Get a list of the directories for the regular IIS logs.
# You have to wait until they are actually created,
# which is why there's a loop here.
# Just keep looking until you find the folder(s).
$dirs = [System.IO.Directory]::GetDirectories($folder + "\Logfiles\Web\", "W3SVC*")
$ip = (gwmi Win32_NetworkAdapterConfiguration | ? { $_.IPAddress -ne $null }).ipaddress
$ip
echo "dirs.count"
$dirs.count

while ($dirs.Count -le 0)
{
    Sleep 10
    # Request the default page to force IIS to start logging.
    $bs = (new-object System.Net.WebClient).DownloadData("http://" + $ip[0])
    echo "in the loop"
    $dirs = [System.IO.Directory]::GetDirectories($folder + "\Logfiles\Web\", "W3SVC*")
    echo "dirs"
    $dirs
    echo "dirs[0]"
    $dirs[0]
    echo "dirs.count"
    $dirs.count
}
echo "after while loop"
echo "dirs[0]"
$dirs[0]
Now that there’s a folder and you know where it is, set the permissions on it.
# Now set the ACLs on the "first" directory you find. (There's only ever one.)
$acl = Get-Acl $dirs[0]
$acl.AddAccessRule($rule1)
$acl.AddAccessRule($rule2)
Set-Acl $dirs[0] $acl
This PowerShell script should be called FixDiagFolderAccess.ps1 (it needs to match the name specified in FixDiag.cmd). Add it to your project and, as before, set the Build Action to “Content” and “Copy to Output Directory” to “Copy Always”.
In your Service Configuration file, you need to add these attributes to the <ServiceConfiguration> element. In order for the –ExecutionPolicy flag to work, you must be running on Windows Server 2008 R2 (osFamily 2), which has PowerShell 2.
osFamily="2" osVersion="*"
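For context, here is a minimal sketch of where those attributes land (the service name and role name are placeholders; the xmlns is the standard ServiceConfiguration namespace):

```xml
<!-- osFamily/osVersion go on the root element of ServiceConfiguration.cscfg -->
<ServiceConfiguration serviceName="MyAzureService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
    osFamily="2" osVersion="*">
  <Role name="WebRole1">
    <!-- instance count, settings, etc. -->
  </Role>
</ServiceConfiguration>
```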
In your Service Definition file, you will need to specify the Startup Task. This goes right under the opening <WebRole> element. As recommended by Steve Marx, I’m running this as a background task; that way, if there is a problem and it loops forever or won’t finish for some reason, I can still RDP into the machine. Also, in order to set the ACLs, I need to run this with elevated permissions.
<Startup>
  <Task commandLine="FixDiag.cmd" executionContext="elevated" taskType="background" />
</Startup>
So that should set you up for a web application. You can get your IIS logs and IIS Failed Request logs transferred automatically to blob storage where you can view them easily. And if you RDP into your instances, you can look at both sets of logs that way as well.
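Once the transfers are running, the diagnostics agent puts the IIS logs in the wad-iis-logfiles blob container and the Failed Request logs in wad-iis-failedreqlogfiles. As a quick sketch (using the SDK 1.3 StorageClient library; the connection string is a placeholder for your own storage account), you can list what has been transferred like this:

```
// Sketch: list the transferred IIS logs sitting in blob storage.
// "wad-iis-logfiles" is the container name the diagnostics agent uses;
// swap in "wad-iis-failedreqlogfiles" for the Failed Request logs.
CloudStorageAccount account = CloudStorageAccount.Parse("<your storage connection string>");
CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("wad-iis-logfiles");

// Flat listing so the blobs show up regardless of their virtual folder depth.
foreach (IListBlobItem item in container.ListBlobs(new BlobRequestOptions { UseFlatBlobListing = true }))
{
    Console.WriteLine(item.Uri);
}
```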
What if you have a WCF service and no default web page? In the line that does the WebClient.DownloadData, just append the name of your service, so it looks like this:
$bs = (new-object System.Net.WebClient).DownloadData("http://" + $ip[0] + "/MyService.svc")
What if your WCF service or web application only exposes https endpoints? I don’t know. I’m still searching for an answer to that question. I tried using https instead of http, and I get some error about it being unable to create the trust relationship. At this point, I’ve spent so much time on this, I have to just enable RDP on the services with https endpoints and access the logging by RDP’ing into the instance. If you have any brilliant ideas, please leave a comment.
[Edit 3/8/2011] I figured out for the https endpoints that rather than call DownloadData using the IP address, use the DNS name of the service, along with the service name. For example, say you are Contoso.com, and you have an SSL certificate. Your services will likely have DNS entries with the domain name contoso.com, like bingservice.contoso.com. If your service is BingService.svc, the URL would be https://bingservice.contoso.com/BingService.svc. If I put that into the DownloadData statement, it works. Since it’s https, of course it has to access the service with the same domain as the SSL certificate.
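So for an https-only service, the download line in FixDiagFolderAccess.ps1 ends up looking something like this (bingservice.contoso.com and BingService.svc are placeholders for your own DNS entry and service):

```
# For https endpoints, use the DNS name matching the SSL certificate
# instead of the raw IP address. (Hypothetical domain and service name.)
$bs = (new-object System.Net.WebClient).DownloadData("https://bingservice.contoso.com/BingService.svc")
```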