Category Archives: Uncategorized

SonarTsPlugin 1.0.0 released

In something of a milestone for the project, SonarTsPlugin 1.0.0 has been released. While the last blog post that mentioned the plugin had it at v0.3, there have been a great many changes since then – to the point that I might as well outline the total feature set:

  • Analyses TypeScript code using tslint, or consumes existing tslint output and reports issues to the SonarQube interface
  • Analyses code coverage information in LCOV format
    • Also supports Angular-CLI output
  • Derives lines-of-code in your TypeScript project
  • Supports user-defined rule breach reporting
  • Supports custom tslint rule specification
  • Compatible with Windows and Linux, supports various CI environments including VSTS
  • Compatible with SonarQube 5.6 LTS and above
  • A demo site exists
  • Sample projects demonstrating setup of the plugin are available

The project readme has fairly detailed information on how to configure the plugin, which I'll shortly turn into a GitHub wiki with a little more structure.

The plugin has been downloaded over a thousand times now, and appears to be getting increasing use given the recent trend of issues and activity on the project. Hopefully it’s now in a good place to build upon, with the core functionality done.

The next big milestone is to get the plugin listed on the SonarQube Update Centre, which will require fixing a few code issues before going through a review process and addressing anything that comes out of that. Being on the Update Centre is the easiest way for a developer to consume the plugin and receive updates, so it's a real priority for the next few months.

SonarQube TypeScript plugin 0.3 released and demo site available

I’ve recently made some changes to my SonarQube TypeScript plugin pithily named ‘SonarTsPlugin’ that:

  • Make it easier to keep up to date with changes to TsLint
  • Fix minor bugs
  • Support custom TsLint rules


Breaking change

In a breaking change, the plugin no longer generates a configuration file for TsLint based on your configured project settings, but instead requires that you specify the location of a tslint.json file to use for analysis via the sonar.ts.tslintconfigpath project-level setting.

There were several reasons for the change as detailed on the initial GitHub issue:

  • The options for any given TsLint rule are somewhat fluid and change over time as the language evolves – either we model that with constant plugin changes, or we push the onus onto the developer
  • Decouples the TsLint version from the plugin somewhat – so long as rules with the same names remain supported, a TsLint upgrade shouldn’t break anything
  • Means your local build process and SonarQube analysis can use literally the same tslint.json configuration
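As a concrete illustration, pointing the plugin at the same tslint.json your build uses is a one-line project setting (the path shown is a placeholder for wherever your tslint.json actually lives):

```
# sonar-project.properties (path is illustrative)
sonar.ts.tslintconfigpath=tslint.json
```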

Custom rule support

New to 0.3 is support for specifying a custom rule directory. TsLint supports user-created rules, and several large open-source projects have good examples – in fact, there’s a whole repository of them. You can now specify a path to find your custom rules via the sonar.ts.tslintrulesdir project property.
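Like the tslint.json location, the rules directory is just another analysis property; a hypothetical sonar-project.properties entry (the directory name is a placeholder) would look like:

```
# sonar-project.properties (directory name is illustrative)
sonar.ts.tslintrulesdir=tslint-rules
```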

NCLOC accuracy improvements

A minor NCLOC defect was fixed, where the interior lines of block comments longer than three lines were being counted as code.
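To make the defect concrete, here's a minimal sketch of block-comment-aware line counting. This is a hypothetical illustration, not the plugin's actual implementation – the point is that lines inside a `/* ... */` block must not be counted as code:

```typescript
// Hypothetical sketch of NCLOC counting that is aware of block comments.
// Interior lines of a /* ... */ block are skipped, not counted as code.
function countNcloc(source: string): number {
  let inBlockComment = false;
  let ncloc = 0;
  for (const rawLine of source.split("\n")) {
    const line = rawLine.trim();
    if (inBlockComment) {
      if (line.includes("*/")) {
        inBlockComment = false;
        // Anything after the closing */ on this line counts as code.
        const after = line.slice(line.indexOf("*/") + 2).trim();
        if (after.length > 0) ncloc++;
      }
      continue;
    }
    if (line.length === 0 || line.startsWith("//")) continue;
    if (line.startsWith("/*")) {
      // A block comment opens; it may close on the same line.
      if (!line.includes("*/")) inBlockComment = true;
      continue;
    }
    ncloc++;
  }
  return ncloc;
}
```

A real implementation also needs to handle comment markers appearing mid-line and inside string literals, which this sketch deliberately ignores.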

Demo site

To test the plugin against some larger and more interesting code-bases, there's a demo SonarQube 5.4 installation with the plugin installed, available for viewing. Sadly, so far none of the projects I've analysed have any major issues versus their custom rule setup…

Future

There remains minor work to do on the plugin, and I’ll keep it up to date with TsLint changes where possible.

Failure to start Xamarin Android Player simulator

On Windows 10, you might find that after installing the Xamarin tools and the Xamarin Android Player you can't launch the simulator from Visual Studio or the Xamarin Device Manager, with the error ‘VBoxManage command failed. See log for further details’.

[Screenshot: Xamarin Android Player failing to launch]

XAP is automating VirtualBox in the background, and you'll probably find that you can't manually start the VM image from there either – but with a more helpful error: ‘Failed to open/create the internal network’ and ‘Failed to attach the network LUN (VERR_INTNET_FLT_IF_NOT_FOUND)’:

[Screenshot: VirtualBox error when starting the VM manually]

The fix is to edit the connection properties of the VirtualBox Host-Only network adapter (as named in the error message) and make sure that VirtualBox NDIS6 Bridged Networking Driver is ticked. In the example below, even though the installation appeared to go swimmingly, it didn't enable the bridge driver.

[Screenshot: VirtualBox Host-Only adapter properties with the NDIS6 driver ticked]

Tick the box and off you go!

Chutzpah and source maps – more complete TypeScript/CoffeeScript coverage

I spent a lot of time over Christmas contributing to open-source JavaScript unit test runner Chutzpah, and the recent Chutzpah 3.3.0 release includes source-map support as a result.

The new UseSourceMaps setting causes Chutzpah to translate generated source (i.e. JavaScript) code coverage data into original source (i.e. TypeScript/CoffeeScript/whatever) coverage data for more accurate metrics. It also plays well with LCOV support, which I added a while back but only got released as part of 3.3.0.

Chutzpah before sourcemaps

Chutzpah handles recording code coverage using Blanket.js. However, code coverage was always expressed in terms of covered lines of generated JavaScript, and not covered lines of the original language.

This makes code coverage stats inaccurate:

  • There’re likely to be more generated JavaScript lines than source TypeScript/CoffeeScript (skewing percentages for some constructs)
  • The original language might output boilerplate for things like inheritance in each file, which if not used is essentially uncoverable in the generated JavaScript – TypeScript suffers especially from this
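TypeScript's inheritance boilerplate is a good example of the second point. When targeting pre-ES2015 JavaScript, a small hierarchy like the illustrative one below causes the compiler to emit an `__extends` helper function into the generated file; those helper lines exist only in the output, so they drag coverage percentages down even though no original source line corresponds to them:

```typescript
// Compiling this to ES5 emits an __extends helper in the generated
// JavaScript; the helper's lines have no counterpart in this source.
class Animal {
  constructor(public name: string) {}
}

class Dog extends Animal {
  bark(): string {
    return `${this.name} says woof`;
  }
}
```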

UseSourceMaps setting

The new UseSourceMaps setting tells Chutzpah, when faced with a file called File.js, to look for a source-map file called File.js.map containing mapping information between File.js and its original source code – likely a TypeScript or CoffeeScript file.

{
  "Compile": {
    "Extensions": [".ts"],
    "ExtensionsWithNoOutput": [".d.ts"],
    "Mode": "External",
    "UseSourceMaps": true
  },
  "References": [
    { "Include": "**/src/**.ts", "Exclude": "**/src/**.d.ts" }
  ],
  "Tests": [
    { "Include": "**/test/**.ts", "Exclude": "**/test/**.d.ts" }
  ]
}

This only takes effect when Chutzpah has been told about the original source files via the Compile setting, code coverage has been requested, and the source-map files actually exist.
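Conceptually, the translation is a lookup from generated-line coverage to original-line coverage. The toy sketch below illustrates the idea only – it is not Chutzpah's actual code, and real source maps use VLQ-encoded segment mappings rather than a plain line-to-line table:

```typescript
// Toy sketch of generated-to-original coverage translation.
// Real source maps encode column-level segments; here the map is a
// simple generated-line -> original-line lookup.
type LineMap = { [generatedLine: number]: number };

function translateCoverage(
  generatedCoverage: { [line: number]: number }, // line -> hit count
  map: LineMap
): { [line: number]: number } {
  const original: { [line: number]: number } = {};
  for (const lineStr of Object.keys(generatedCoverage)) {
    const genLine = Number(lineStr);
    const origLine = map[genLine];
    // Generated lines with no mapping are boilerplate (e.g. __extends)
    // and are dropped rather than skewing the original-source metrics.
    if (origLine === undefined) continue;
    original[origLine] = (original[origLine] || 0) + generatedCoverage[genLine];
  }
  return original;
}
```

Note how unmapped generated lines simply disappear from the result – which is exactly why source-map-aware coverage is more accurate than raw generated-JavaScript coverage.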

TypeLite 0.9.6 out with multidimensional arrays fixed

I use Lukas Kabrt’s TypeLITE package (NuGet link) a lot to automatically generate TypeScript interfaces from .NET classes and interfaces, so occasionally I’ll drop a pull request to fix the odd issue in its backlog when I’ve got a spare hour (though generally they’re not issues I encounter myself).

0.9.6 includes a patch I submitted to fix jagged array output. Before, none of the following would be processed correctly – they'd all be output as a single-dimensional array:

public string[][] MyJaggedArray { get; set; }
public string[][][] MyVeryJaggedArray { get; set; }
public IEnumerable<string> MyIEnumerableOfString { get; set; }
public List<string> MyListOfString { get; set; }
public List<string[]> MyListOfStringArrays { get; set; }
public List<IEnumerable<string>> MyListOfIEnumerableOfString { get; set; }
public List<List<string[]>> MyListOfListOfStringArray { get; set; }

Processing used to just stop once it detected an array-ish type (which oddly didn’t include IEnumerable), without diving deeper. For jagged arrays the logic’s pretty straightforward, as you can just look at the rank of the array type and output that many pairs of square brackets.

For enumerable types things are a bit more interesting as we have to read into the generic type parameter recursively. Now the above gets formatted as you’d expect, and another issue’s closed off.

MyJaggedArray: string[][];
MyVeryJaggedArray: string[][][];
MyIEnumerableOfString: string[];
MyListOfString: string[];
MyListOfStringArrays: string[][];
MyListOfIEnumerableOfString: string[][];
MyListOfListOfStringArray: string[][][];
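The recursive idea can be sketched over a tiny, hypothetical type-descriptor model (TypeLITE itself does this in C# over System.Type via reflection – the descriptor shape and function below are illustrative only):

```typescript
// Hypothetical descriptor model for the recursive formatting idea.
type TypeDesc =
  | { kind: "primitive"; name: string }
  | { kind: "array"; rank: number; element: TypeDesc } // jagged arrays
  | { kind: "enumerable"; element: TypeDesc };         // List<T>, IEnumerable<T>

function formatTsType(t: TypeDesc): string {
  switch (t.kind) {
    case "primitive":
      return t.name;
    case "array":
      // An array of rank N contributes N pairs of square brackets.
      return formatTsType(t.element) + "[]".repeat(t.rank);
    case "enumerable":
      // Recurse into the generic type parameter, then add one bracket pair.
      return formatTsType(t.element) + "[]";
  }
}
```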

LinqPad, Azure Table Storage Driver and continuation tokens – or ‘how to get more than 1000 results/run a query that runs for more than 5 seconds’

Mauricio Diaz Orlich made a LinqPad storage driver that lets you query Azure Table Storage as easily as any other data-source within LinqPad – this is invaluable when working with Azure as there aren’t any real alternatives when you just want to make an ad-hoc query or make multi-table queries.

However, there’s an issue – the driver doesn’t use continuation tokens internally so your query needs to both finish executing within the Azure Table Storage limit of 5 seconds and return fewer than 1000 results, otherwise you’ll be missing data and won’t necessarily notice.

I forked the source to see if I could patch in a way around it but in doing so found a much simpler solution – do exactly what you’d do in your .NET code and call the AsTableServiceQuery() extension method on your query before you materialise it.

For example:


Users
.Where(x => x.EmailAddress == "user@example.com")
.AsTableServiceQuery()
.ToList()

This query will now return as many results as exist (up to LinqPad’s seemingly unavoidable 10,000 record limit) and will execute for as long as it takes to actually return all results by way of continuation tokens.
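The general pattern behind continuation tokens is just a fetch loop: request a page, and if the response carries a token, request the next page with it until no token comes back. Sketched here in TypeScript against a hypothetical paged API (the `fetchPage` shape is an assumption, standing in for what AsTableServiceQuery() does internally):

```typescript
// Generic continuation-token paging loop over a hypothetical paged API.
interface Page<T> {
  items: T[];
  continuationToken?: string; // absent on the final page
}

async function fetchAll<T>(
  fetchPage: (token?: string) => Promise<Page<T>>
): Promise<T[]> {
  const all: T[] = [];
  let token: string | undefined = undefined;
  do {
    // Pass the previous page's token (undefined for the first request).
    const page = await fetchPage(token);
    all.push(...page.items);
    token = page.continuationToken;
  } while (token !== undefined);
  return all;
}
```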

Visual Studio Azure Deployment Error: No deployments were found. Http Status Code: NotFound

Had this recently while deploying to Azure, to the blank staging slot of a cloud service that already had a production instance running.

Looking at the log, immediately before the error Visual Studio claimed that it was stopping a role – how can that be, when the staging slot’s already free? This is another situation where you shouldn’t believe Visual Studio’s lies – upload the package directly to the portal in the event of failure and you’ll generally get better and more accurate errors.

The issue in this case? We’d run out of spare cores in our subscription to deploy two medium instances of the application to staging. Why does the error talk about missing deployments (and try to stop non-existent deployments in this instance)? Unknown.

Whisky Fringe Tasting Tracker gets a new name – dramtracker.com

The Whisky Fringe Tasting Tracker will be running again this year, hopefully much better on some mobile devices where it was sluggish and tricky to use last year.

And to celebrate, it’s moving from its old location to http://dramtracker.com. Your old user accounts should continue to work, and all your old tasting scores and notes are still accessible – just pull down the ‘year’ list and choose 2012 to see your scores from last year.

The set of whiskies is still identical to last year for testing purposes until the full programme is released. Sláinte!

Getting Newtonsoft Json.NET to respect the DataMember Name property

Json.NET is brilliant, not least because in a number of situations it produces much more sensible JSON than the DataContractJsonSerializer, but also because there aren't many problems that can't be solved by adding appropriate attributes or formatters.

Sometimes you need to output specific names for enumeration members – in most cases the enumeration member name itself – and Json.NET comes with the StringEnumConverter for this purpose. It works well: just decorate the enumeration itself with the JsonConverter attribute and you're done.

In some cases, though, you need to output values that wouldn't otherwise be valid C# identifiers – for example, the string ‘A128CBC-HS256’ for an enumeration member ‘A128CBC_HS256’ (where we want that underscore turned into a dash). Adding DataMember attributes to the enumeration members doesn't work out of the box, so I threw the following together quickly to respect them where present.

Presented as-is and without much testing – in particular there’ll be a performance hit for this as we’re using reflection to render and parse the values.

Example enumeration:

[DataContract]
[JsonConverter(typeof(DataMemberAwareEnumJsonConverter))]
public enum EncryptionMethod
{
  [DataMember(Name = "A128CBC-HS256")]
  A128CBC_HS256,
  [DataMember(Name = "A256CBC-HS512")]
  A256CBC_HS512
}
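The converter's core idea is simply a two-way mapping between member identifiers and their serialized names, consulting the attribute where present and falling back to the identifier otherwise. Here's that mapping idea sketched in TypeScript (the real converter reads the DataMember Name via reflection in C#; the map below is hand-written purely for illustration):

```typescript
// Sketch of the name-mapping idea behind the converter: a two-way
// lookup between code identifiers and wire names. In C# the wire names
// come from [DataMember(Name = ...)] read via reflection.
const wireNames: { [member: string]: string } = {
  A128CBC_HS256: "A128CBC-HS256",
  A256CBC_HS512: "A256CBC-HS512",
};

function toWireName(member: string): string {
  // No attribute? Fall back to the identifier itself, as
  // StringEnumConverter would.
  return wireNames[member] ?? member;
}

function fromWireName(wire: string): string {
  for (const [member, name] of Object.entries(wireNames)) {
    if (name === wire) return member;
  }
  return wire; // no mapped name matched; assume the identifier was used
}
```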


Improving the Whisky Fringe Tasting Tracker UI

I’ve got a few modest goals for this year’s Whisky Fringe Tasting Tracker that almost all stem from lessons learned last year and feedback received.

Reducing busyness of the UI

The Tasting Tracker is a single, long list of all of the drams available for sampling – tapping a dram line-item expands out your tasting notes and scoring information for the dram. Finding your dram in the list can be tricky, though, as we show both dram name and distillery/bottler with the same level of UI precedence.

The old dram list (left) showed distillery name at the same scale as the dram name


This has been changed so that the distillery name is a small, grey subtitle under the dram name making the dram names stand out much more and lightening the UI.

Page size and rendering speed

Expanding the scoring and note controls is a slow operation for a number of reasons:

  • We’re generating HTML for editing controls for each of the 250+ drams to be sampled, even though you’ll only ever see one at a time
  • Tapping expands out the section, causing a reflow of the document – slow anyway, but even slower on mobile devices
  • The sheer volume of HTML being sent to the device makes the DOM very large, slow to manipulate and style

To improve this, I’ve moved the expanding scoring section to instead be a modal popup:

Scoring and tasting notes now appear as a modal dialog, reducing the amount of reflow and simplifying the DOM


Not only does this let us show the scoring and note sections without being quite so cramped, it also means that the editing controls are only output in the markup once, simplifying the DOM and reducing the volume of HTML sent to the browser. In fact, this change alone reduces the volume of markup on the main dram-list for logged-in users to just 35% of what it was before:

  • Before: 386KB, 19.7KB compressed
  • After: 134KB, 12.2KB compressed (34.7% of previous uncompressed volume)

While this has a negligible impact on the actual number of bytes sent to the device (as compression is employed), it has a noticeable impact on on-device responsiveness, which I hope to quantify shortly.

Next steps

There’re a few big-ticket items left on my list:

  • Simplifying account creation and sign-in
  • Supporting some manner of offline mode + sync in case connectivity drops out during the event
  • Introducing live sampling heatmaps

There’s also more work to be done on making the site much quicker to navigate and manipulate – hopefully making it as quick to use as pen-and-paper.