Category Archives: C#

Parsing source maps in .NET

When we minify JavaScript source, or write code in TypeScript or CoffeeScript and compile it down to JavaScript, debugging would be difficult without tools that support source maps.

I’m currently modifying Chutzpah to address a small gap in its handling of code coverage for generated source files, like those output by the TypeScript compiler, and needed exactly that – a way for .NET code to parse a source map file, then query it to find out which original source line numbers map to a generated source line that has or hasn’t been covered by a unit test.

SourceMapDotNet is my initial, bare-bones attempt at a partial port of the excellent Mozilla source-map library, but intended only to handle that one type of query – not full parsing, and definitely not generation.

It’s also up on NuGet.
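
The query surface is deliberately tiny: load the map file, then ask which original source lines a given generated line came from. The sketch below shows the shape of that; the type and member names here are illustrative guesses rather than the library’s confirmed API, so check the project README for the real details.

using System;
using System.IO;
using SourceMapDotNet; // namespace assumed from the package name

class Example
{
    static void Main()
    {
        // Hypothetical API: names are placeholders for illustration only
        var consumer = new SourceMapConsumer(File.ReadAllText("app.js.map"));

        // Which original source lines produced generated line 42?
        foreach (var mapping in consumer.OriginalPositionsFor(42))
        {
            Console.WriteLine("{0}: line {1}", mapping.File, mapping.LineNumber);
        }
    }
}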

SonarQube TypeScript plugin

I use SonarQube (live demo) a fair bit to monitor code quality metrics, but there’s no in-built support nor published community plugins for TypeScript analysis – so I’m writing one.

I intend two core features:

  • Measure code quality by running against TsLint
  • Measure unit test coverage by processing an LCOV file

Running an alpha version of SonarTsPlugin against a random TypeScript project from GitHub shows code issues but no code coverage – yet

The first of those two goals isn’t that far away at all – above is a screenshot from the alpha version running locally. If you’re interested in helping, drop me an email!

Enabling CORS on your ASP.NET output-cached webservice? Don’t forget to change your varyByHeaders…

If you’re enabling CORS on your ASP.NET web service, you’ll be receiving an ‘Origin’ header and outputting an Access-Control-Allow-Origin header if you’re happy to receive the request. If you’re being strict about your access control policy, you’ll be returning the same origin you got rather than * so that the user agent knows to let the call continue.

This poses a bit of an obstacle when combined with ASP.NET Output Caching: unless you either tell it to vary its output by all headers or explicitly call out the Origin header, you may find that accessing your service from two different origins within your cache lifetime sees one call succeed and the other fail.

The failing call happens because the Access-Control-Allow-Origin header is being served from the cache, but for the broken site it won’t match the Origin that was sent – and since we’ve not configured output caching to vary by the Origin header, it assumes the requests from the two different origins are the same and responds accordingly.

So, we just need to tack the Origin header onto our cache configuration’s varyByHeader attribute (separated from other headers with a semicolon, if any others exist) and bingo! Both sites get correct responses.
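
As a minimal sketch, assuming an ASP.NET MVC action cached via the OutputCache attribute (the same idea applies to the varyByHeader attribute of an output cache profile in web.config):

using System.Web.Mvc;

public class DataController : Controller
{
    // The key part is including "Origin" in VaryByHeader so the cache keys
    // responses per origin; keep any headers you already vary by, semicolon-separated.
    [OutputCache(Duration = 300, VaryByParam = "*", VaryByHeader = "Origin")]
    public ActionResult GetData()
    {
        // ... produce the CORS-enabled response as before
        return Json(new { ok = true }, JsonRequestBehavior.AllowGet);
    }
}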


Fun with sometimes.rb – in .NET…

Sometimes.rb is a fun set of helpers that give you the ability to express a degree of fuzziness in your Ruby logic. A couple of examples from the docs:

15.percent_of_the_time do
  puts "Howdy, Don't forget to register!"  # be annoying, but only 15% of the time
end
(4..10).times do
  pick_nose  # between 4 and 10 boogers made, it's unpredictable!
end

Given ten minutes and a small Aberlour I thought I’d have a bash at emulating some of it in .NET just for fun:
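
Something along these lines gets you most of the way there – a throwaway sketch, with names mirroring the Ruby originals:

using System;

public static class SometimesExtensions
{
    private static readonly Random Rng = new Random();

    // 15.PercentOfTheTime(() => ...) runs the action with the given probability
    public static void PercentOfTheTime(this int percent, Action action)
    {
        if (Rng.Next(100) < percent)
        {
            action();
        }
    }

    // A stand-in for Ruby's (4..10).times: runs the action a random number
    // of times within the inclusive range
    public static void Times(this Tuple<int, int> range, Action action)
    {
        int count = Rng.Next(range.Item1, range.Item2 + 1);
        for (int i = 0; i < count; i++)
        {
            action();
        }
    }
}

// Usage:
// 15.PercentOfTheTime(() => Console.WriteLine("Howdy, don't forget to register!"));
// Tuple.Create(4, 10).Times(() => Console.WriteLine("*picks nose*"));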

“Object reference not set to an instance of an object” exception when deploying Azure project

Scenario:

  • Created a new cloud project into which I wanted to deploy an existing bit of code (a new staging service for an existing production system for testing)
  • Right-clicked Publish… and after a few seconds of thinking the deploy fails with a NullReferenceException (‘Object reference not set to an instance of an object’) and a report of a fatal error, but no other diagnostic information

Problem was that the existing service had an HTTPS endpoint defined using a certificate that I’d not uploaded to my brand-new staging service. Deleting the endpoint (or uploading the certificate) does the trick.

Instrument specific types using wildcards with nrconfig

I just pushed version 1.3.0.0 of the NRConfig.Tool NuGet package and the associated project site – the binary is also available as a direct download.

Only two changes:

  • A fix for nested types showing duplicate method signatures in the output XML file
  • Introduction of the /w flag for wildcard matching of type names to be included in the New Relic custom instrumentation file

The /w switch is pretty straightforward – specify one or more wildcard filter strings that identify types to be included in the output file. So, if we had a project using the Repository pattern we could instrument the public methods of all of our concrete repositories:

nrconfig /i MyAssy.dll /f methods+ /w *Repository

which would match any type whose full name ends with Repository. Or we could instrument types in a specific namespace:

nrconfig /i MyAssy.dll /f methods+ /w MyAssy.Utils.* MyAssy.Controller.*

or limit ourselves to specific types:

nrconfig /i MyAssy.dll /f methods+ /w MyAssy.Controller.HomeController

Determine element visibility in WatiN with jQuery

This Stack Overflow post by MrBlueSky describes how to determine element visibility in a WatiN test case. As a convenience, here’s that code refactored as an extension method on Browser:

public static class BrowserExtensions
{
    // Relies on jQuery being present in the page under test, since we evaluate
    // jQuery's :visible selector against the element's id.
    public static bool IsElementVisible(this Browser browser, Element element)
    {
        var command = string.Format("$('#{0}').is(':visible');", element.Id);
        return browser.Eval(command) == "true";
    }
}

Usage:

using (var browser = new IE("http://example.com/login.aspx"))
{
    var page = browser.Page<Login>();

    page.LoginButton.Click();

    Assert.IsTrue(browser.IsElementVisible(page.UsernameRequiredMessage));
    Assert.IsTrue(browser.IsElementVisible(page.PasswordRequiredMessage));
}

nrconfig – Automatically generating custom instrumentation configuration files for NewRelic with .NET

New Relic’s a pretty good monitoring service and its .NET support is solid – both integrated instrumentation and support for custom runtime metrics.

However, configuring custom instrumentation gets frustrating if you have to instrument more than a handful of methods:

  • You need to specify each method to be instrumented down to the parameter list (if you have method overloads) – this is something of a time vampire
  • There’s no way of saying ‘just instrument everything in this class for me’
  • There’s no link in Visual Studio between the instrumentation config file and your code – refactoring breaks your instrumentation silently

To help, I’ve written a build-time tool that post-processes .NET assemblies and generates custom instrumentation files for NewRelic very quickly. It’s available on NuGet as two packages – NRConfig.Tool and NRConfig.Library (see Download and Documentation below).

It’s got a GitHub project page that contains documentation and use-cases.

It’s got two modes:

Coarse-grained filtering on the command-line – no code changes required

Here we’re assuming that we don’t want to (or can’t) change the source of the projects we want to appear in the configuration file. The tool has a mode that does a more freestyle reflection over assemblies and generates instrumentation entries for every method (or constructor or property) that matches some very coarse criteria – it’s not perfect, but it’s intended to quickly bootstrap the instrumentation process with a file that can be manually tailored.

The workflow’s very simple:

  • Run the tool, specifying which code elements should be instrumented
    • One or more of {all, constructors, properties, methods}
    • Append + or - to the end of each class of code element to specify inclusion of public (+) or non-public (-) members
    • Use them in combination – for example, methods+- properties+ constructors+ will generate a configuration file to instrument all methods (public or otherwise), and public constructors and public properties
  • Either use the configuration file straight-off, or adjust it manually
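
For instance, an invocation covering all methods plus public properties and constructors might look something like this, using the same /i and /f switches as the wildcard examples earlier on this page:

nrconfig /i MyAssy.dll /f methods+- properties+ constructors+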

Marking up your code with attributes

Need more control? The most fine-grained way to use the tool is to mark up assemblies, classes or methods (with properties and constructors also supported) with an [Instrument] attribute.

  • If you mark up an assembly with the attribute, it’s assumed you want to instrument every method in every class in that assembly – heavy handed but brutally straightforward
  • If you mark up a class with the attribute, it’s assumed you want to instrument every method in that class, and every method in any nested classes
  • You can limit the scope of the attribute to constructors, methods or properties (or combinations thereof), and to public and non-public members in each case

The workflow is still pretty straightforward:

  • Mark up your assemblies, classes or methods with [Instrument] attributes as required
  • Run the tool as a post-build step – it reflects over the assemblies and generates an appropriate custom instrumentation file
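
As a rough illustration of the markup, where the attribute ships in the NRConfig.Library package (the namespace below is assumed from the package name, and the scope-limiting options are covered in the documentation):

using NRConfig; // namespace assumed from the package name

// Instrument every method in this class and in any nested classes
[Instrument]
public class OrderProcessor
{
    public void ProcessAll()
    {
        // instrumented
    }

    class Validator
    {
        public bool IsValid()
        {
            // also instrumented, as Validator is nested inside OrderProcessor
            return true;
        }
    }
}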

Download and Documentation

Binaries are available on NuGet – first, the tool itself (which can then be invoked as ‘nrconfig.exe’):

PM> Install-Package NRConfig.Tool

Then, optionally, the library that contains the [Instrument] attribute that can mark up assemblies, classes and methods:

PM> Install-Package NRConfig.Library

You can also download the tool and, optionally, the library directly.

Documentation is available on the nrconfig GitHub page. Source is available at https://github.com/Pablissimo/nrconfig.

Limitations

New Relic’s instrumenting profiler has some limitations with respect to generics and the tool can’t do much about them:

  • It doesn’t appear possible to specify that a particular method overload should be instrumented if that method overload takes an instance of a generic class as a parameter and the generic class is itself parameterised by more than one type parameter. For example, IEnumerable<string> is fine, but IDictionary<string, string> isn’t as it has two type parameters.
    • The tool handles this by generating an instrumentation entry for all methods with the same name in the same class when a complex (i.e. >= 2 generic type parameters) generic type is detected – it means some methods get instrumented when you don’t want them to, but appears at present to be the only way to get instrumentation to work
  • It’s not possible to instrument generic classes at all.

Disclaimer

This project has nothing to do with New Relic the company – it’s neither supported nor sanctioned by them, so if it breaks come crying to me and not the nice folks on their support desk.

CodeKicker.BBCode

We recently used CodeKicker.BBCode in a work project, and part of the licence agreement is a post on an employee’s personal blog – voila!

Except that’s not much fun, so let’s detail some of the modifications we made to it and released to GitHub.

First things first

There’re loads of BBCode parsers out there that suffer from a variety of problems, and the CodeKicker implementation seemed to be the closest to a ‘working out-of-the-box’ solution, but we still needed to fix a few bugs and behaviours we weren’t happy with.

[code] tags keep parsing their contents

Say you want to post a code snippet of a for-loop to a forum:

[code]
for (int i = 0; i < arr.Length; i++)
{
arr_copy[i] = arr[i];
}
[/code]

We’d expect the for-loop to render maybe in pre tags:

for (int i = 0; i < arr.Length; i++)
{
   arr_copy[i] = arr[i];
}

Instead the parser sees those two array accesses and interprets them as italic tags:

for (int i = 0; i < arr.Length; i++)
{
   arr_copy = arr[i]; }

So – we want [code] tags to cause the parser to stop processing until it finds a [/code] tag.

Can’t have spaces in tag attributes

If you’re quoting a member of a forum where everyone has a username, the following syntax would do the trick:

[quote=pablissimo]Hi there![/quote]

If however your username has a space in it, or your forum uses real names then you’re in trouble:

[quote=Paul O’Neill]Hi there![/quote]

In this instance, the CodeKicker parser sees the quote tag as having a single default attribute with value ‘Paul’, and disregards the ‘O’Neill’ part. Bummer.

Whitespace handling requires the user to know too many implementation details to format a post

When you’re mixing tags in a longer forum post, you might expect whitespace to be largely ignored if it’s just for the purposes of laying out your BBCode in an understandable fashion. For example, given the following:

[list]
[*]Here's the first item
[*]Here's the second item
[/list]
And here's the code snippet:
[code]
var frob = MakeFrob();
[/code]
It's that easy!

We might expect that:

  • The first list item isn’t preceded by a newline just because it’s on a different line to the opening [list] declaration
  • There’s no extra new-line after the last item in the list
  • There’s no extra new-line after the code tag

Out of the box this isn’t the case, and getting sensibly-formatted output can mean writing some pretty horrible-looking BBCode.

The fixes

Stopping [code] tags from parsing BBCode contained within

First we added a property to the BBTag class that lets us specify ‘StopProcessing’ behaviour, defaulting to off – we can turn this on for [code] tags and it gives us the flexibility to introduce a [noparse] tag if we desire.

Now the parser’s stack-based, so to implement the change all we need to do is never match starting tags if the node on the top of the stack is marked ‘StopProcessing’. This’ll make us parse everything between the [code] and [/code] tags as plain text, ignoring anything that happens to look like an [i] or a [u] or anything else.
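
In isolation the rule is as simple as it sounds – something like the following captures it (illustrative stand-in types rather than the parser’s actual internals):

using System.Collections.Generic;

// Illustrative stand-in for the parser's tag definition type
class TagDef
{
    public string Name;
    public bool StopProcessing;
}

static class NoParseRule
{
    // Only try to match new opening tags while the innermost open tag
    // doesn't suppress parsing of its contents
    public static bool ShouldMatchStartTags(Stack<TagDef> openTags)
    {
        return openTags.Count == 0 || !openTags.Peek().StopProcessing;
    }
}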

Allowing spaces within tag attributes

We added another new property to BBTag, ‘GreedyAttributeProcessing’, that again defaults to false. When true, we’ll have the parser assume that there’ll only ever be zero or one attributes against the tag, and that the entire text up until the closing square bracket of the opening tag is the value of the attribute. In our quote example above, we go from the old behaviour, where only ‘Paul’ is parsed as the attribute value:

[quote=Paul O’Neill]Hi there![/quote]

to something more sensible in an opt-in fashion, where the whole of ‘Paul O’Neill’ is taken as the attribute value:

[quote=Paul O’Neill]Hi there![/quote]

We extend the signature of the BBCodeParser.ParseAttributeValue method to accept a boolean parameter signifying that it should consume all other text in the opening tag as the value of the attribute – we’ll give it a default value of false. Then in ParseTagStart we pass the current tag’s GreedyAttributeProcessing value into ParseAttributeValue.

Finally, we modify ParseAttributeValue to change the set of characters that it considers ends an attribute value, from {<space>, opening square bracket, closing square bracket} to just {opening square bracket, closing square bracket}. Spaces are now up for grabs as attribute values!
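
The character test at the heart of that change boils down to something like this (again an illustrative sketch rather than the method’s real shape):

static class AttributeValueRules
{
    // In greedy mode spaces stay part of the attribute value, so only the
    // square brackets terminate it; otherwise a space ends the value as before
    public static bool EndsAttributeValue(char c, bool greedyAttributeProcessing)
    {
        if (c == '[' || c == ']')
        {
            return true;
        }

        return !greedyAttributeProcessing && c == ' ';
    }
}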

Fixing whitespace handling

We’ll tackle this in two parts. First, we want to suppress the first newline that follows an opening tag – like when the [list] tag and its first [*] item appear on different lines, we probably don’t want an extra newline in there.

We’ll add an extra call to ParseWhitespace near the end of ParseTagStart – this will consume all whitespace characters before we return to the parsing loop so that they’re not interpreted as text nodes.

Next, we tackle the problem that some tags naturally demand a newline after them when we’re composing our message (again, [/list] but also [/code]) to keep it readable while editing, but that don’t need that newline to show in rendered output.

We’ll add another new property to BBTag, ‘SuppressFirstNewlineAfter’ that does what it says on the tin. We’ll set this to false by default, and true on tags that are ‘block-type’ – [list] and [code] being two clear examples.

We’ll then modify the parser again – in ParseTagEnd, we’ll see if the tag we’re closing has the SuppressFirstNewlineAfter attribute set and then consume all whitespace up to and including the first newline after the tag closes so that it doesn’t end up in the output. We’ll do that by adding a new method similar to ParseWhitespace called ParseLimitedWhitespace that takes a parameter for the maximum number of newlines to consume:

static bool ParseLimitedWhitespace(string input, ref int pos, int maxNewlinesToConsume)
{
    int end = pos;
    int consumedNewlines = 0;

    while (end < input.Length && consumedNewlines < maxNewlinesToConsume)
    {
        char thisChar = input[end];
        if (thisChar == '\r')
        {
            end++;
            consumedNewlines++;

            if (end < input.Length && input[end] == '\n')
            {
                // Windows newline - just consume it
                end++;
            }
        }
        else if (thisChar == '\n')
        {
            // Unix newline
            end++;
            consumedNewlines++;
        }
        else if (char.IsWhiteSpace(thisChar))
        {
            // Consume the whitespace
            end++;
        }
        else
        {
            break;
        }
    }

    var found = pos != end;
    pos = end;
    return found;
}

Perfect. One final thing to do – make sure that all newlines that escape the above treatment get converted into <br /> tags as appropriate. We can do this in TextNode’s ToHtml method by tacking a simple Replace(“\n”, “<br />”) onto the end.

Ninject + NewRelic + Windows Azure Worker Role = horrific crash?

We recently encountered a weird issue – using Ninject for our DI needs, in a Windows Azure Worker Role with NewRelic instrumentation. All good-practice stuff, using proven technology.

And it didn’t work.

The role wouldn’t start up, and RDPing in to look at the Event Log showed horrific errors in the .NET runtime itself:

Application: WaWorkerHost.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an internal error in 
the .NET Runtime at IP 000007FEF6076B81 (000007FEF6070000) with exit 
code 80131506.

Application popup: WaWorkerHost.exe - System Error : Exception 
Processing Message 0xc0000005 Parameters 0x000007FEFD08718C 
0x000007FEFD08718C 0x000007FEFD08718C 0x000007FEFD08718C

Faulting application name: WaHostBootstrapper.exe, version: 
6.0.6002.18488, time stamp: 0x4fcaabe9
Faulting module name: ntdll.dll, version: 6.1.7601.17696, 
time stamp: 0x4e8147f0
Exception code: 0xc0000374
Fault offset: 0x00000000000a0d6f
Faulting process id: 0x4a8
Faulting application start time: 0x01cd9fb18d252932
Faulting application path: E:\base\x64\WaHostBootstrapper.exe
Faulting module path: D:\Windows\SYSTEM32\ntdll.dll

Using the Ninject source code and a bit of detective work of which House would be proud, we found that creating a new AppDomain on an Azure Worker Role with NewRelic installed causes a pretty horrible crash. In fact, the issue had nothing to do with Ninject at all – simply having a Worker Role whose code created a new AppDomain caused the crash.

The solution we found was to:

  1. Construct Ninject kernels using either a list of INinjectModule instances or a list of assemblies where the modules live – importantly, do not use the Load method with a set of string filenames, as Ninject will create a new AppDomain to host the assemblies while it reflects over them, which causes the above crash
  2. In the INinjectSettings object passed to the kernel constructor, turn off LoadExtensions as this causes the same code path as above to be run through, irrespective of whether any extension assemblies exist on disk

Both steps aim to avoid the creation of a new AppDomain and thus avoid the crashing behaviour.
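
In code, constructing the kernel that way looks something like this (module and binding names are just placeholders):

using Ninject;
using Ninject.Modules;

public class WorkerModule : NinjectModule
{
    public override void Load()
    {
        // Bindings for the worker role's services go here, e.g.
        // Bind<IMessageProcessor>().To<MessageProcessor>();
    }
}

public static class KernelFactory
{
    public static IKernel Create()
    {
        // Pass module instances directly and switch off extension loading, so
        // Ninject never creates the assembly-scanning AppDomain that triggers the crash
        var settings = new NinjectSettings { LoadExtensions = false };
        return new StandardKernel(settings, new WorkerModule());
    }
}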

While we hope it’s an issue NewRelic will resolve in due course, hopefully the above’ll keep you going in the meantime.