TypeLite 0.9.6 out with multidimensional arrays fixed

I use Lukas Kabrt’s TypeLITE package (NuGet link) a lot to automatically generate TypeScript interfaces from .NET classes and interfaces, so occasionally I’ll drop a pull request to fix the odd issue in its backlog when I’ve got a spare hour (though generally they’re not issues I encounter myself).

0.9.6 includes a patch I submitted to fix jagged array output. Previously, the following properties wouldn’t be processed correctly – they’d all be output as a single-dimensional array:

public string[][] MyJaggedArray { get; set; }
public string[][][] MyVeryJaggedArray { get; set; }
public IEnumerable<string> MyIEnumerableOfString { get; set; }
public List<string> MyListOfString { get; set; }
public List<string[]> MyListOfStringArrays { get; set; }
public List<IEnumerable<string>> MyListOfIEnumerableOfString { get; set; }
public List<List<string[]>> MyListOfListOfStringArray { get; set; }

Processing used to just stop once it detected an array-ish type (which oddly didn’t include IEnumerable), without diving deeper. For jagged arrays the logic’s pretty straightforward: you walk down through the element types, outputting a pair of square brackets for each level of nesting.

For enumerable types things are a bit more interesting as we have to read into the generic type parameter recursively. Now the above gets formatted as you’d expect, and another issue’s closed off.

MyJaggedArray: string[][];
MyVeryJaggedArray: string[][][];
MyIEnumerableOfString: string[];
MyListOfString: string[];
MyListOfStringArrays: string[][];
MyListOfIEnumerableOfString: string[][];
MyListOfListOfStringArray: string[][][];
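The general approach can be sketched like this – a simplified illustration of the recursive unwrapping, not TypeLite’s actual code:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

static class TsTypeFormatter
{
    // Simplified illustration only: resolves nested arrays and generic
    // enumerables into TypeScript-style array notation by recursing into
    // element types / type arguments until a non-collection type is reached.
    public static string FormatTsType(Type type)
    {
        if (type.IsArray)
        {
            // e.g. string[][]: format the element type (string[]), append []
            return FormatTsType(type.GetElementType()) + "[]";
        }

        // Treat a generic IEnumerable<T> (but not string) as an array of T
        if (type != typeof(string)
            && type.IsGenericType
            && typeof(IEnumerable).IsAssignableFrom(type))
        {
            return FormatTsType(type.GetGenericArguments()[0]) + "[]";
        }

        return type == typeof(string) ? "string" : type.Name;
    }
}
```

Running the properties above through something shaped like this yields the expected nesting – List&lt;List&lt;string[]&gt;&gt; unwraps three times to give string[][][].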

gh-ticker – a simple ticker for your public GitHub activity

With a spare weekend I put together the ticker widget you can see at the top of the screen just now – iterating through my most recent GitHub activity items every few seconds.

It is, fittingly, available on GitHub for forking and customisation, licensed under the BSD 3-Clause licence.

How it works

The GitHub API is very straightforward, and data that’s already public (such as what appears on your Public Activity tab) can be accessed without authentication and with JSONP – ideal for client-side hackery.

The widget’s architected as a couple of JS files (taking a dependency on jQuery and Handlebars for now): one contains the precompiled Handlebars templates, and the other makes the API call and renders the partial appropriate to each activity item’s type.

Setting it up’s pretty simple – reference the JS and CSS, make sure Handlebars and jQuery are in there too and then whack a DIV somewhere on your page with id ‘gh-ticker’.

<div id="gh-ticker" data-user="pablissimo" data-interval-ms="5000"></div>

The user whose data is pulled and the interval between ticker item flips are configurable as data attributes.

The GitHub Events API

The Events API knows about a fixed set of event types – for each event type, there’s a Handlebars partial. When we’re deciding how to render an item, we look up the relevant partial and whack it into the page.

Since that’s a fair few partials (neat for development in isolation, bad for request-count overhead), they’re precompiled using the Handlebars CLI and bundled into a single gh-templates.js file.
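Precompilation is just a case of pointing the Handlebars CLI at the templates – the directory and file names here are illustrative:

```shell
# Install the Handlebars CLI, then compile every template in
# templates/ into a single precompiled JS file
npm install -g handlebars
handlebars templates/ -f gh-templates.js
```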

Improvements

The ticker’s very basic – it just hides or shows items as required, without any pretty transitions. It also takes a dependency on jQuery that it needn’t, since it only uses jQuery for the AJAX call and element manipulation, both of which are easily covered by existing browser functionality.

Still – it can be easily styled to be fairly unobtrusive and has at least taught me a little about Handlebars.

NRConfig 1.4.0.0 for New Relic released

I’ve spent a little time working on NRConfig, the tool that generates custom instrumentation files for .NET projects using New Relic, after a bug report pointed out that the tool couldn’t run against an assembly whose dependencies weren’t available. That’s unlikely with your own production code, as you’d need the dependencies present to run at all, but it can happen when you want to do an offline run of instrumentation generation against a third-party library.

To this end, NRConfig’s been changed pretty substantially under the hood to support alternatives to .NET reflection for discovering instrumentable types, with Microsoft’s Common Compiler Infrastructure (CCI) library drafted in as the default discovery provider.

CCI’s slower than reflection by quite a margin – it can now take several seconds to produce instrumentation configuration for large or complex assemblies – but I’m hoping to improve that if it becomes a problem.

Also introduced is support for MSBuild in a new NuGet package, NRConfig.MSBuild. This should make generating instrumentation files for your own code a lot less work – simply add the NRConfig.MSBuild package to any project containing code you want to instrument and mark up the assembly, types or methods with [Instrument] attributes to control the output. On build, a custom instrumentation file is generated in your output directory for you to deploy wherever.
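Marking up a class ends up looking something like this – the class here is invented for illustration, and only the [Instrument] attribute comes from the NRConfig package:

```csharp
using NRConfig;

// Hypothetical example: instrument this class so its methods appear
// in the generated custom instrumentation file at build time.
[Instrument]
public class OrderProcessor
{
    public void ProcessOrder(int orderId)
    {
        // ... work that'll show up in New Relic transaction traces ...
    }
}
```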

Enabling CORS on your ASP.NET output-cached webservice? Don’t forget to change your varyByHeaders…

If you’re enabling CORS on your ASP.NET web service, you’ll be receiving an ‘Origin’ header and outputting an Access-Control-Allow-Origin header if you’re happy to receive the request. If you’re being strict about your access control policy, you’ll be returning the same origin you got rather than * so that the user agent knows to let the call continue.

This poses a bit of an obstacle when combined with ASP.NET Output Caching, as unless you either tell it to vary its output by all headers or explicitly call out the Origin header you may find that accessing your service from two URLs within your cache lifetime period will see one call succeed and the other fail.

The failing call happens because the Access-Control-Allow-Origin header is being served from the cache: for the second site it won’t match the Origin header that site sent, and since we’ve not configured output caching to vary by the Origin header, requests from the two different origins are treated as identical.

So we just need to tack the Origin header into our cache configuration’s varyByHeader attribute (separated from other headers with a semicolon, if any exist) and bingo: the two sites get correct responses.
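In web.config that’s a one-attribute change – the profile name and duration here are just examples:

```xml
<!-- In system.web/caching: vary cached output by Origin as well -->
<caching>
  <outputCacheSettings>
    <outputCacheProfiles>
      <add name="ApiCacheProfile" duration="300"
           varyByHeader="Accept-Encoding;Origin" />
    </outputCacheProfiles>
  </outputCacheSettings>
</caching>
```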


LinqPad, Azure Table Storage Driver and continuation tokens – or ‘how to get more than 1000 results/run a query that runs for more than 5 seconds’

Mauricio Diaz Orlich made a LinqPad storage driver that lets you query Azure Table Storage as easily as any other data-source within LinqPad – this is invaluable when working with Azure as there aren’t any real alternatives when you just want to make an ad-hoc query or make multi-table queries.

However, there’s an issue – the driver doesn’t use continuation tokens internally so your query needs to both finish executing within the Azure Table Storage limit of 5 seconds and return fewer than 1000 results, otherwise you’ll be missing data and won’t necessarily notice.

I forked the source to see if I could patch in a way around it but in doing so found a much simpler solution – do exactly what you’d do in your .NET code and call the AsTableServiceQuery() extension method on your query before you materialise it.

For example:


Users
    .Where(x => x.EmailAddress == "user@example.com")
    .AsTableServiceQuery()
    .ToList()

This query will now return as many results as exist (up to LinqPad’s seemingly unavoidable 10,000 record limit) and will execute for as long as it takes to actually return all results by way of continuation tokens.

Visual Studio Azure Deployment Error: No deployments were found. Http Status Code: NotFound

Had this recently while deploying to Azure, to the blank staging slot of a cloud service that already had a production instance running.

Looking at the log, immediately before the error Visual Studio claimed that it was stopping a role – how can that be, when the staging slot’s already free? This is another situation where you shouldn’t believe Visual Studio’s lies – upload the package directly to the portal in the event of failure and you’ll generally get better and more accurate errors.

The issue in this case? We’d run out of spare cores in our subscription to deploy two medium instances of the application to staging. Why does the error talk about missing deployments (and try to stop non-existent deployments in this instance)? Unknown.

Whisky Fringe Tasting Tracker gets a new name – dramtracker.com

The Whisky Fringe Tasting Tracker will be running again this year – hopefully working much better on the mobile devices where it was sluggish and tricky to use last year.

And to celebrate, it’s moving from its old location to http://dramtracker.com. Your old user accounts should continue to work, and all your old tasting scores and notes are still accessible – just pull down the ‘year’ list and choose 2012 to see your scores from last year.

The set of whiskies is still identical to last year for testing purposes until the full programme is released. Sláinte!

Getting Newtonsoft Json.NET to respect the DataMember Name property

Json.NET is brilliant – not least because in a number of situations it produces much more sensible JSON than the DataContractJsonSerializer does, but also because there aren’t many problems that can’t be solved by adding appropriate attributes or formatters.

Sometimes you need to output specific names for enumeration members – in most cases the enumeration member name itself. Json.NET comes with the StringEnumConverter for exactly this purpose, and it works well: just decorate the enumeration itself with the JsonConverter attribute and you’re done.

In some cases, though, you need to output values that wouldn’t otherwise be valid C# identifiers – for example, the string ‘A128CBC-HS256’ for an enumeration member ‘A128CBC_HS256’ (where we want that underscore turned into a dash). Adding DataMember attributes to the enumeration members doesn’t work, so I threw the following together quickly to respect them where present.

Presented as-is and without much testing – in particular there’ll be a performance hit for this as we’re using reflection to render and parse the values.

Example enumeration:

[DataContract]
[JsonConverter(typeof(DataMemberAwareEnumJsonConverter))]
public enum EncryptionMethod
{
  [DataMember(Name = "A128CBC-HS256")]
  A128CBC_HS256,
  [DataMember(Name = "A256CBC-HS512")]
  A256CBC_HS512
}
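The converter itself boils down to a bit of reflection over DataMemberAttribute. This is a sketch of the shape of it rather than the exact code from the post:

```csharp
using System;
using System.Reflection;
using System.Runtime.Serialization;
using Newtonsoft.Json;

// Sketch: a JsonConverter that serialises enum members using the Name from
// any [DataMember] attribute, falling back to the member name itself.
public class DataMemberAwareEnumJsonConverter : JsonConverter
{
    public override bool CanConvert(Type objectType)
    {
        return (Nullable.GetUnderlyingType(objectType) ?? objectType).IsEnum;
    }

    public override void WriteJson(
        JsonWriter writer, object value, JsonSerializer serializer)
    {
        var field = value.GetType().GetField(value.ToString());
        var dataMember = field == null
            ? null
            : field.GetCustomAttribute<DataMemberAttribute>();

        writer.WriteValue(dataMember != null && dataMember.Name != null
            ? dataMember.Name
            : value.ToString());
    }

    public override object ReadJson(
        JsonReader reader, Type objectType,
        object existingValue, JsonSerializer serializer)
    {
        var enumType = Nullable.GetUnderlyingType(objectType) ?? objectType;
        var text = (string)reader.Value;

        // Match on the DataMember name where present, else the member name
        foreach (var field in
            enumType.GetFields(BindingFlags.Public | BindingFlags.Static))
        {
            var dataMember = field.GetCustomAttribute<DataMemberAttribute>();
            var name = dataMember != null && dataMember.Name != null
                ? dataMember.Name
                : field.Name;

            if (name == text)
            {
                return field.GetValue(null);
            }
        }

        return Enum.Parse(enumType, text);
    }
}
```

With that in place, serialising EncryptionMethod.A128CBC_HS256 should give “A128CBC-HS256” as desired, and parsing that string should round-trip back to the enum member.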


Improving the Whisky Fringe Tasting Tracker UI

I’ve got a few modest goals for this year’s Whisky Fringe Tasting Tracker that almost all stem from lessons learned last year and feedback received.

Reducing busyness of the UI

The Tasting Tracker is a single, long list of all of the drams available for sampling – tapping a dram line-item expands out your tasting notes and scoring information for the dram. Finding your dram in the list can be tricky, though, as we show both dram name and distillery/bottler with the same level of UI precedence.

The old dram list (left) showed distillery name at the same scale as the dram name


This has been changed so that the distillery name is a small, grey subtitle under the dram name making the dram names stand out much more and lightening the UI.

Page size and rendering speed

Expanding the scoring and note controls is a slow operation for a number of reasons:

  • We’re generating HTML for editing controls for each of the 250+ drams to be sampled, even though you’ll only ever see one at a time
  • Tapping expands out the section, causing a reflow of the document – slow anyway, but even slower on mobile devices
  • The sheer volume of HTML being sent to the device makes the DOM very large, slow to manipulate and style

To improve this, I’ve moved the expanding scoring section to instead be a modal popup:

Scoring and tasting notes now appear as a modal dialog, reducing the amount of reflow and simplifying the DOM


Not only does this let us show the scoring and note sections without them being quite so cramped, it also means the editing controls are output in the markup only once, simplifying the DOM and reducing the volume of HTML sent to the browser. In fact, this change alone reduces the volume of markup on the main dram list for logged-in users to just 35% of what it was before.

  • Before: 386KB, 19.7KB compressed
  • After: 134KB, 12.2KB compressed (34.7% of previous uncompressed volume)

While this has only modest impact on the actual number of bytes sent to the device (as compression is employed), it has a noticeable on-device impact on responsiveness, which I hope to quantify shortly.

Next steps

There are a few big-ticket items left on my list:

  • Simplifying account creation and sign-in
  • Supporting some manner of offline mode + sync in case data drops-out during the event
  • Introducing live sampling heatmaps

There’s also more work to be done on making the site much quicker to navigate and manipulate – hopefully making it as quick to use as pen-and-paper.

Use the Windows Azure Event Log to diagnose cycling instances

I recently had a deployment issue where a push of new code caused a worker role to continually restart – everything worked locally, but the thing just wouldn’t stay up in the cloud.

A cleaner event log for diagnosing role-start issues


The ability to remote into an instance is invaluable for diagnosing this sort of thing, especially when your instance is falling down before it even runs your start-up code. In my case, the Application event log was filling up with error entries at a rate of four a second, all tied back to the Windows Azure Caching Client installer. That didn’t make any sense – the thing hadn’t changed for months – and with so many log entries it was hard to tell what was happening.

However, the Windows Azure event log under Applications and Services Logs was much more helpful. It showed that the role was restarting due to a version conflict in the New Relic monitoring agent – nothing to do with the Caching Client installer. Perhaps the caching client installer was being kicked off by the role starting up, and the role’s repeated termination was killing that child process, producing the flood of Application log entries?

Regardless, it set me down the right path: fixing the dodgy reference and redeploying brought the instance back to stability.