Azure cmdlets timeout workaround/hack

I know that fixing a timeout issue by increasing the timeout is probably the worst idea but hey, needs must…. 🙂

When you install the Azure cmdlets, the installer compiles the source code, which gives you a chance to amend the source before this happens:

Change the CreateServiceManagementChannel methods to include these lines:

factory.Endpoint.Binding.SendTimeout = TimeSpan.FromMinutes(2);
factory.Endpoint.Binding.ReceiveTimeout = TimeSpan.FromMinutes(2);

…just before the call to factory.CreateChannel();
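For context, the amended method ends up looking roughly like this. This is a sketch, not the actual cmdlets source: the factory construction and certificate handling are illustrative, and only the two timeout lines are the change described above.

```csharp
// Sketch of an amended CreateServiceManagementChannel method.
// Everything except the two timeout lines is illustrative.
public static IServiceManagement CreateServiceManagementChannel(
    Binding binding, Uri remoteUri, X509Certificate2 cert)
{
    var factory = new WebChannelFactory<IServiceManagement>(binding, remoteUri);
    factory.Credentials.ClientCertificate.Certificate = cert;

    // Raise the client-side WCF timeouts from the defaults.
    factory.Endpoint.Binding.SendTimeout = TimeSpan.FromMinutes(2);
    factory.Endpoint.Binding.ReceiveTimeout = TimeSpan.FromMinutes(2);

    return factory.CreateChannel();
}
```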

Then run startHere.cmd again, otherwise you’ll get a strong-name exception.

However, I still get timeouts from GetDeployment. This is no longer a client-side WCF timeout issue: I also tried HttpWebRequest against the Service Management REST API, setting both Timeout and ReadWriteTimeout. It appears to be a server-side timeout of around two minutes, resulting in:

“The underlying connection was closed: The connection was closed unexpectedly.”

I’ve not found a way around this yet, other than trying another approach that doesn’t need this method.
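For reference, the HttpWebRequest attempt looked something along these lines. This is a sketch: the subscription ID, service name, the `x-ms-version` header value and the `managementCertificate` variable are all placeholders, not the exact code used.

```csharp
// Sketch: calling the Get Deployment operation directly over REST
// with generous client-side timeouts. Identifiers are placeholders.
var uri = new Uri(
    "https://management.core.windows.net/<subscription-id>" +
    "/services/hostedservices/<service-name>/deploymentslots/Production");

var request = (HttpWebRequest)WebRequest.Create(uri);
request.Headers.Add("x-ms-version", "2010-10-28"); // illustrative version
request.ClientCertificates.Add(managementCertificate); // assumed in scope

// Both raised well beyond the observed ~2 minute cutoff, yet the
// connection was still closed from the server side.
request.Timeout = (int)TimeSpan.FromMinutes(10).TotalMilliseconds;
request.ReadWriteTimeout = (int)TimeSpan.FromMinutes(10).TotalMilliseconds;

using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string deploymentXml = reader.ReadToEnd();
}
```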

UPDATE 8/4/2011: About 5pm last night GMT this issue went away and the GetDeployment call that was taking over two minutes now runs in 8 seconds – we didn’t change anything so…. 😐

Azure Logging, Tracing and Diagnostics Viewers

Three common options seem to be available (without rolling your own via the Service Management API):

  1. Cerebrata Diagnostics Manager nice but $79.99 – this is the one I’m using at the moment.
  2. Windows Azure MMC – free, but you may need (as I did) this workaround if you get “MMC launch – Could not load file or assembly ‘Microsoft.Samples.WindowsAzureMmc.ServiceManagement”
  3. Azure Cmdlets – allows you to initiate log transfers, but you’d need something like VS Server Explorer, Cloudberry or similar to view them. If you’re going down this route, check out

This is worth a read too: Take Control of Logging and Tracing in Windows Azure

If you’re looking directly at Table Storage -> WADLogsTable, the following WCF Data Services filter syntax is useful:

Timestamp gt datetime'2011-08-12T00:00:00'
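The same filter can also be issued as a raw REST query against the table endpoint (the account name is a placeholder):

```
https://<account>.table.core.windows.net/WADLogsTable?$filter=Timestamp%20gt%20datetime'2011-08-12T00:00:00'
```

Bear in mind that only PartitionKey and RowKey are indexed in table storage, so a Timestamp filter results in a scan and can be slow on a large log table.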

If your application is failing during start-up, this tracing may not help you, because logs are only transferred at most once a minute. For these kinds of problems you might want to consider writing directly to table storage. Steve Marx has implemented a TraceListener to do just this at
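The idea can be sketched as a custom TraceListener that writes each message to a table as it arrives, rather than waiting for a scheduled transfer. This is an illustration, not Steve’s implementation; the types shown assume the v1 Microsoft.WindowsAzure.StorageClient library, and the table/entity names are made up.

```csharp
// Illustrative entity: one row per trace message, keyed for
// roughly chronological ordering within an hourly partition.
public class LogEntry : TableServiceEntity
{
    public string Message { get; set; }

    public LogEntry() { }

    public LogEntry(string message)
    {
        PartitionKey = DateTime.UtcNow.ToString("yyyyMMddHH");
        RowKey = DateTime.UtcNow.Ticks.ToString("d19") + "_" + Guid.NewGuid();
        Message = message;
    }
}

// Illustrative listener: writes synchronously to table storage so
// messages survive even if the role dies during start-up.
public class TableStorageTraceListener : TraceListener
{
    private readonly CloudTableClient _client;

    public TableStorageTraceListener(CloudStorageAccount account)
    {
        _client = account.CreateCloudTableClient();
        _client.CreateTableIfNotExist("StartupLogs");
    }

    public override void Write(string message) { WriteLine(message); }

    public override void WriteLine(string message)
    {
        var context = _client.GetDataServiceContext();
        context.AddObject("StartupLogs", new LogEntry(message));
        context.SaveChangesWithRetries();
    }
}
```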

FIX: WCF Streamed webservice message exceeding 65536 bytes despite “correct” maxReceivedMessageSize setting.

“The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element.”

The config file already had a bindingConfiguration specified for the endpoint, with a maxReceivedMessageSize=”67108864″ attribute, but it still wasn’t making a difference.

Fix: In the end it turned out that the <service name=”MyService”> element was incorrect; it should have been fully qualified with the class name of the service, i.e. <service name=”MyNamespace.MyService”>. Without that, WCF ignores the configuration and falls back to the default maximum message size of 65536.
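For reference, the working shape of the configuration looked along these lines (service, contract and binding names are placeholders, not the actual project’s names):

```xml
<system.serviceModel>
  <services>
    <!-- name must be the fully-qualified class name, not just "MyService" -->
    <service name="MyNamespace.MyService">
      <endpoint binding="basicHttpBinding"
                bindingConfiguration="LargeMessageBinding"
                contract="MyNamespace.IMyService" />
    </service>
  </services>
  <bindings>
    <basicHttpBinding>
      <binding name="LargeMessageBinding"
               transferMode="Streamed"
               maxReceivedMessageSize="67108864" />
    </basicHttpBinding>
  </bindings>
</system.serviceModel>
```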