
Windows Azure: ‘Roles instances are taking longer than expected to start’

New caching doesn’t seem to be a productivity improvement

I’ve experienced the above plenty of times – hit F5, wait for your projects to build, then sit back and enjoy anywhere from one to five minutes of peaceful reflection while the Azure emulator gets up to speed.

After some playing with procmon I discovered that the vast majority of activity in those interminable minutes could be put into three broad buckets:

  • Logging to DFAgent.log, sometimes tens of times a second (something I’m seriously considering symlinking to NUL, or at the very least a ramdisk)
  • Various things to do with installing the Windows Azure Cache Emulator
  • Lots of communications with ‘something cloudy’ at 168.63.0.*

Troubles seemed to start with the install of the 1.8 SDK, but in fact appear now to be linked to whether or not ‘Enable Caching’ is turned on in the cloud role properties:

j’accuse!

Some timings from my machine on a particularly large solution, from ‘Starting emulator’ to a usable deployment:

  • Enable Caching ‘On’: ~3 minutes average
  • Enable Caching ‘Off’: ~15 seconds average

This is a bugger though, as you can’t have cloud-configuration-specific settings for your caching – it’s either globally on or globally off. In addition, without caching enabled anything that attempts to use it will fail with an exception (which would be fine if those classes fell back to a null cache implementation). So – for my purposes, my local development environment setup changes from:

  • Caching enabled
  • Use Azure caching for session state management

to a new cloud project specifically for development with:

  • Caching disabled
  • Use the Session State service for session state management (configured in web.config) and web.config transforms to convert to Azure Caching session state management for deployment

And my F5 experience goes from three to five minutes down to 15-20 seconds. Not ideal, as I’ve taken my development environment another step further away from my deployment environment, but from a productivity point of view it’s a no-brainer.
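The deployment-time switch can be done with a short Web.Release.config transform. A sketch, assuming the Azure Caching session-state provider package – the provider name and cacheName here are illustrative placeholders, though the provider type is the one the package ships:

```xml
<!-- Web.Release.config (sketch): on deployment, swap the local session-state
     configuration for the Azure Caching provider. Provider name and
     cacheName are placeholders - adjust to match your cache configuration. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.web>
    <sessionState mode="Custom" customProvider="AzureCacheSessionStateProvider"
                  xdt:Transform="Replace">
      <providers>
        <add name="AzureCacheSessionStateProvider"
             type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
             cacheName="default" />
      </providers>
    </sessionState>
  </system.web>
</configuration>
```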

I’ve yet to diagnose why installing the Windows Azure Cache Emulator components, as happens during a development run, is so costly – more investigation required…

Proxying Microsoft.WindowsAzure.Storage.Table.CloudTable

The Windows Azure SDK’s pretty good, but implementation decisions in sections of it can make extending or monitoring what it’s up to tricky. Recently I wanted just to track the number of reads and writes that were taking place on a set of tables per request to identify opportunities for caching.

Given I was using a repository pattern, I was able to proxy the CloudTable object upon which queries are executed. While the proxy doesn’t inherit from CloudTable (not least because it’s sealed), with only a couple of lines replaced all code that called methods upon an instance of CloudTable was quickly calling the equivalent methods on my proxy. It’s not perfect, but now that it’s in place I can do a number of fun things like track table storage operation counts, or implement access control per table.

To save anyone else having to write it, it’s presented below:
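A minimal sketch of the shape of such a proxy – class and member names here are illustrative rather than the original listing, and `TableOperation.OperationType` is assumed to be accessible in the SDK version in use (where it isn’t, classify reads and writes at the call sites instead):

```csharp
using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage.Table;

// Sketch only: a wrapper exposing the same method shapes as CloudTable,
// counting operations before delegating to the wrapped (sealed) instance.
// Repository code constructs this instead of handing out CloudTable directly.
public class CloudTableProxy
{
    private readonly CloudTable _inner;

    public int ReadCount { get; private set; }
    public int WriteCount { get; private set; }

    public CloudTableProxy(CloudTable inner)
    {
        _inner = inner;
    }

    public TableResult Execute(TableOperation operation)
    {
        // Assumption: Retrieve is the only read-type TableOperation seen here
        if (operation.OperationType == TableOperationType.Retrieve)
            ReadCount++;
        else
            WriteCount++;

        return _inner.Execute(operation);
    }

    public IEnumerable<T> ExecuteQuery<T>(TableQuery<T> query)
        where T : ITableEntity, new()
    {
        ReadCount++;
        return _inner.ExecuteQuery(query);
    }
}
```

Access control per table then becomes a matter of adding a check at the top of each forwarding method before delegating.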

Windows Azure cache client crashing when developing locally? Check your system time

While trying to diagnose a time-zone issue in an Azure application, I lazily changed my system clock to be an hour earlier instead of actually changing my system timezone. I then found I couldn’t start my app in the emulator:

CacheInstaller.exe has stopped working? Check your system time

If your system clock is out by more than about 10 minutes and you’re trying to use the Azure cache for development purposes then you’re going to have some trouble.

Remember – having your system time right doesn’t just mean setting it up to look right on the clock in the taskbar, you also need to be in the right timezone so that calculations to and from UTC are performed correctly.

Visual Studio assembly reference details in Properties pane don’t match what gets used for the build

Having lost a day to this, I post in the hope someone else gets their projects fixed sooner. Caveat: No idea which step fixed the issue.

Scenario

  • Large solution (~40 projects) containing multiple Azure services, all targeting Azure SDK 1.7
  • Another team member upgrades to Azure SDK 1.8
  • Now can’t deploy a working build to Azure as once up, site 500s with Could not load file or assembly ‘Microsoft.WindowsAzure.ServiceRuntime’ or one of its dependencies
    • Everything still works just peachy locally, though
  • Right-click… Properties on a Microsoft.WindowsAzure.ServiceRuntime reference in any project, find that the version mentioned is 1.7.0.0 when we’re expecting 1.8.0.0
    • Even though the .csproj file specifically says the reference is 1.8.0.0 and with a hint-path pointing the right way
    • Where is VS2012 pulling this 1.7.0.0 from?
  • Run build with diagnostic-level logging to find 1.7.0.0 assembly being used in build process even though it’s not part of the .csproj file

Solution for me

  • Close Visual Studio
  • Find C:\Users\USERNAME\AppData\Local\Microsoft\VisualStudio\11.0\Designer\ShadowCache folder, delete all contents
  • Uninstall all Azure SDKs lower than 1.8, and any other associated libraries etc using Add/Remove Programs
  • Remove registry value at key HKLM\SOFTWARE\Classes\Installer\Assemblies\Global\Microsoft.WindowsAzure.ServiceRuntime,version="1.7.0.0"…
  • Use gacutil /u to uninstall the 1.7 assembly from the GAC
  • Restart machine
  • Delete entire source tree from local disk and pull down again from source control
  • Clean solution
  • Rebuild all

Failing to generate WSDL from an Azure-hosted WCF service behind SSL

Error: Cannot obtain Metadata from http://example.com/api/service.svc?wsdl

Setup

  • WCF Service with SOAP, JSON and XML endpoints where the SOAP endpoint is a basicHttpBinding rather than a wsHttpBinding
  • All service endpoints behind SSL
  • SecuritySwitch installed and rewrite rules enabled to force all requests through to SSL where available
  • AspNetCompatibilityEnabled = true, MultipleSiteBindingsEnabled = true
  • <useRequestHeadersForMetadataAddress> element added to service behaviour with default ports http -> 80 and https -> 443
  • Trying to add a service reference in Visual Studio yields a number of errors:
HTTP GET Error
    URI: http://example.com/api/Service.svc?wsdl

    The document at the url https://example.com/api/Service.svc?wsdl was not recognized as a known document type.
    The error message from each known type may help you fix the problem:
    - Report from 'XML Schema' is 'The document format is not recognized (the content type is 'text/html; charset=UTF-8').'.
    - Report from 'https://example.com/api/Service.svc?wsdl' is 'The document format is not recognized (the content type is 'text/html; charset=UTF-8').'.
    - Report from 'DISCO Document' is 'Discovery document at the URL https://example.com/api/Service.svc?disco could not be found.'.
      - The document format is not recognized.
    - Report from 'WSDL Document' is 'The document format is not recognized (the content type is 'text/html; charset=UTF-8').'.
Bugger. However, notice that the initial request URL is http, and the WSDL endpoint is https. Trying to navigate manually to the ?wsdl URL failed and just re-rendered the ‘You have created a service’ welcome page.

Solution

In my case the solution was to set httpsGetEnabled="true" (in addition to httpGetEnabled="true") on the serviceMetadata tag for the service behaviour in question.
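In config terms, the behaviour ends up looking something like this sketch (the useRequestHeadersForMetadataAddress element is the one from the setup list above):

```xml
<behaviors>
  <serviceBehaviors>
    <behavior>
      <!-- httpsGetEnabled publishes ?wsdl over SSL too, so metadata requests
           rewritten to https stop falling through to the welcome page -->
      <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true" />
      <useRequestHeadersForMetadataAddress>
        <defaultPorts>
          <add scheme="http" port="80" />
          <add scheme="https" port="443" />
        </defaultPorts>
      </useRequestHeadersForMetadataAddress>
    </behavior>
  </serviceBehaviors>
</behaviors>
```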

Hopefully Google’ll find this for me the next time I do it!

SkyDrive isn’t really a replacement for Live Mesh

Windows Live Mesh is a remote desktop and file sync affair that works through firewalls – you install the client on the machines you want to communicate with and can then connect to them so long as they’re turned on; so far, so PCAnywhere. That it’s free and works well has made it pretty useful in my day-to-day life, so I was surprised that on rebuilding my current PC and downloading the latest Windows Live Essentials (2012 version) the installer made no mention of Live Mesh – it’d disappeared.

Microsoft says that most of what you can do with Live Mesh you can now do with SkyDrive, but that’s pretty disingenuous. The examples they give of ‘how to do this on SkyDrive’ omit the biggest reason a lot of people used it in the first place – remote desktop. There’s just no equivalent in 2012.

Luckily you can still get the Live Essentials 2011 installer, which does still contain Live Mesh, from the Microsoft website – but you’ll never be able to upgrade to anything in the 2012 package without it being automatically removed.

Thing is – if they just dropped support for Live Mesh (to the point of actively removing it from your machine when you upgrade), how long will Microsoft realistically keep that 2011 installer going?

DotNetOpenAuth Channel.Send not working on Cassini

A note to self more than anything – next time DotNetOpenAuth’s Channel.Send method doesn’t work (by which I mean cause the current thread to stop, and cause the client browser to redirect to the RP authorisation URL) make sure you’re running against IIS Express or better, as the ASP.NET Development Server seems to swallow the ThreadAbortException and you’ll get redirected nowhere.

Get Travis-CI to do your Python packaging tests for you

I’ve been writing some API wrapper libraries recently – dead simple stuff in Ruby and Python that make calling my company’s OAuth-protected API a bit easier.

With the code on GitHub, and some basic unit tests in place, I wanted to test out Travis CI for continuous integration. Getting things set up on Travis is dead simple, but what I really wanted to do was test the packaged-and-installed library rather than testing it straight from source.

To do this there are a few script lines in the .travis.yml file:

install:
  - python setup.py sdist --formats=zip -k
  - find ./dist -iname "*.zip" -print0 | xargs -0 pip install
script:
  - python PACKAGENAME_GOES_HERE/test/__init__.py

  1. The first packages the library as a .ZIP file, as would be uploaded to PyPI
  2. The second looks for all .ZIP files in the build (here, just the package created by the first line) and pipes the filenames through to pip install, installing the package as an end-user would
  3. The third runs a small unit and system test suite

This way, every time we check code in to GitHub we can ensure that:

  • The packaging configuration is correct and generates a sensible package
  • The packaging configuration has all the prerequisites listed correctly and can be pulled down from PyPI
  • The packaged and installed library works

32GB RAM slower than 16GB?

My PC was recently having some difficulties which were narrowed down to a failing SSD – after 4 years I’d be pretty tired too, so I wasn’t too aggrieved that I’d need to replace it. While I was at it, I upgraded a few other components too:

  • CPU from an E8400 Core2 Duo @ 3GHz to a Core i5-2400 @ 3.1GHz (with two extra cores)
  • 8GB Ballistix DDR2 800MHz (they make memory that slow?!) to 32GB Vengeance 1600MHz
  • 1x 64GB Kingston V+ SSD (220MB/s out, 140MB/s in) to 2x 120GB Agility 3 SSDs (525MB/s out, 500MB/s in)

Problem is – the new system was substantially slower than the old one. Initially I thought that was down to having had the two SSDs RAID-0’d and the chipset not liking it, so I broke the array and reinstalled to find things marginally improved but still slow. In fact, the wheels were coming off the wagon with the Anytime Upgrade to W7 Ultimate, with long pauses on startup and sluggishness in games.

Some Googling later and I found a Stack Overflow post detailing a suggested fix – bizarrely, to turn on the integrated graphics on the motherboard (even though I’ve a PCI-E graphics card) and allocate to it as much RAM as the BIOS will allow (480MB ish).

Proof’s in the pictures:

What the shit?

Whisky Fringe 2012

The RMW Whisky Fringe 2012 starts tomorrow, and the electronic version of the programme’s finally gone out with details of the Tasting Tracker app in it, which is all a bit exciting – now to hope that Azure’ll keep it ticking over nicely for the next three days.

The site went live at http://www.wf2012.co.uk to make it a bit quicker to type on a mobile, and already has a few tens of people signed up and making wishlists.