Running with Vi, the AI personal trainer: First impressions

I helped kickstart Vi: The AI Personal Trainer about nine months ago, and this week it arrived – compelling me, much against my better judgement, to go running for only the second time in months to make sure I get some value out of my impulse purchase.

A neat unfolding box lets you meet Vi for the first time

First, an introduction – Vi is both a bit of hardware and a bit of software (split between the device and your smartphone) that acts as a virtual personal trainer for runners. In principle, it collects data about your running and then, using some machine learning, feeds back on your activity in a way that's tailored to whatever goal you've set – de-stressing, running faster, running further, losing weight and so on.

The hardware of Vi is a wearable sensor, wireless headphone and microphone combo that sits about the neck, tracking heart rate and step count as you're out on a run/excruciatingly slow jog (delete as applicable). The software of Vi – arguably where most of the value lies – runs both on the device and as a smartphone app synced via Bluetooth. While there's enough brains on the device itself for it to welcome you and guide you through initial setup on its own, it's the smartphone app that's likely doing the heavy lifting in actual use.

Meeting Vi

Clearly a lot of thought’s gone into the out-of-box experience for new Vi owners, with a high quality unfolding box hiding the device itself, quick-start guide, charging cable, carry case and a set of replacement in-ear gels and supports of different sizes.

Lifting Vi out of the packaging caused it to (inadvertently?) turn on, and a tiny voice asking if I could hear her could just be made out from the earphones. The device is a collar, of sorts: two tapered cylinders where the sensor set lives, connected by a very flexible band, designed to sit around the neck and rest along the collarbone. The majority of the device is rubberised but for the two ends, which terminate in metallic caps available in several colours – mine an elegant bronze. The earphones connect individually to the two collarbone pieces with short adjustable leads, meaning no dangling cables while worn – the phones themselves sharing the same metallic accent as the main body.

One earphone contains a flashing green heart-rate sensor, the other a touch sensor for prompting Vi to take a voice command. Both have a soft rubber appendage, curved to sit in the lobe of the ear to secure the earphone. The metallic outsides of the earphones are magnetic – one concave, one convex – so they can attach to each other around the neck when not in use.

On first start-up, after a quick check that you can hear her, Vi directs you to download the smartphone app and pair the device. At present that app's in beta, as is the whole platform, but pairing was automatic in a way that pairing my Fitbit with my phone never was.

Once done, the app asks a series of questions – age, height, weight – then asks you to define and prioritise your fitness goals, all of which is meant to tailor Vi's coaching of your runs.

Let’s go running

To kick off a workout, I started the Vi app and picked what I wanted to do – run a specific distance, run for an amount of time or a free run. Let's start nice and easy and go for a 3-mile jaunt.

Oh, OK then

After a couple of false starts (the app wants location enabled for fine-grained tracking of your route and distance travelled, and asks permission to access your media so you can have a playlist going as you're running – you also need to either attach a Facebook account or create an account with LifeBEAM to track your stats), you're off. By default you get a configurable and skippable two-minute warm-up, during which Vi directed me to start a slow jog to get things moving.

At this point Vi is talking to you through the earphones – the phone is in your pocket or mounted on your arm, and in theory all the information you need is relayed by voice. The two-minute warm-up was also the start of what felt like a neat tutorial mode, which lasted for the rest of the half hour I was out.

Vi checks she's pronouncing your name correctly, introducing you to yes/no questions and what amounts to a mic check. Over the course of your outing you're taught how to trigger a voice command, given some of the basic voice commands that are available, and receive periodic notifications of your distance travelled and pace.

Hardware performance and comfort during running

First off, it was blowing a gale while I was out, and to begin with I had a real fear that the collar would blow off my neck entirely – but that was unfounded, and it stuck with me without issue. It was also surprisingly comfortable while worn – I'm not sure I noticed it was there for the majority of the time I was out, and it sits quite stably against your collarbone even as you're bobbing along or turning your head to check crossings. This is probably down to a combination of the rubberised covering and the weight distribution, but regardless it's a success.

The earphones stayed pretty solidly in my ears for the duration and were as comfortable as any others I've worn. As it went, sound quality for my playlist was actually really good, and with such short cables I didn't get the same wind and friction noise transmitted through the earphones as I do with my in-ear Sennheisers.

The physical controls on the collar are big enough to find and press unambiguously while running – a big plus – letting you alter the volume of your music or Vi's coaching as you go.

If there's one problem with the earphones it's that you need to remember that the right one is a touch sensor – which means that if you do feel the phone becoming a bit loose, pushing it back into your ear either triggers a voice command prompt or (as happened at least twice on my run) cuts off whatever Vi was telling you at the time. I'm not sure what the solution to that is beyond remembering to push the phone in by its barrel rather than just pushing, but that's some tens of years of muscle memory to overcome.

Vi’s coaching

Vi introduced her plan for the run during the warmup as a training exercise in picking and keeping a steady pace – no specific times in mind and no circuit work to start with, which may have been a feature of the specific goals I set in the app or may simply be a way to get a good baseline and provide opportunities for Vi to teach you how to operate the device with your voice.

I got periodic information about how far I’d travelled and my pace, and after the second mile when my pace was within a fistful of seconds of my first mile I was cheerily congratulated which, even if from a machine, was actually oddly reassuring. I was also encouraged to up my cadence (and told why I should) about a mile and a half in, which was presumably triggered after monitoring my step rate and seeing that my technique needed work.

When Vi wasn't talking to me, in theory she could be prompted for feedback by touching the right earphone and speaking after the beep, and there's a range of commands that I'll be trying out on my next voyage. There were also subtle chimes in the last hundred yards or so of each mile, building up towards the mile marker itself.

Voice interface, meet Scotland

Now – this all sounds splendid on paper, but living as I do in Edinburgh, and given it was a pretty windy day, I only managed to get Vi to pick up what I was saying twice out of probably ten attempts during the outing. That was obviously disappointing, though understandable given how hard it can be to hold a phone conversation in the Scottish outdoors. There's also a workaround – the volume + and – buttons on the collar can be used to answer yes or no as required – but that nugget of information wasn't revealed to me until I was back at the house doing a warm-down. I'm hoping that on a nicer day Vi and I will understand each other a bit better.

The other thing I noted was a distinct lag between a successful request for your heart rate and Vi giving you the reading – six or seven seconds for the one attempt I made. This was also the case within the app itself, with a little beating heart icon taking around ten seconds before an actual heart rate was displayed after the run. I'm not sure if this is an issue with the fit of the earphone or a software issue due to the pre-release nature of the app.

Post-run analysis and the app

I had to end the run using the app, rather than by answering Vi's questions, due to the previously-mentioned mic issues, but once done I got detailed information:

  • Visual indication of the route I took
  • Standard high-level stats like distance travelled, average pace, run duration, average heart rate, calories burned, average speed…
  • Speed vs heart rate chart
  • Pace vs heart rate chart
  • Step rate over time chart

There're also in-app achievements giving you some personal records to beat, and presumably all of the stats collected will filter into Vi's coaching programme for me over time – we'll see on Run #2.

Closing thoughts

First off – this is one short run with a newly-released piece of hardware and beta software, and I'm pretty happy with what I've seen so far, even with the odd issue.

The hardware of Vi feels premium and works very well given that it's such an unusual form factor for a wearable of this type – obviously a lot of thought and design work has gone into making the collar piece sit just so, and it totally shows in use. The quality of the sound was a pleasant surprise, too.

The only disappointment with the experience so far was the mic not working well enough for Vi and me to communicate most of the time. It could have been the strong wind, could have been my accent, could have been my laboured breathing – to be honest, short of the mic having a wind muff I'm not sure anything would have managed tonight, so I'm unfazed; we'll see how the setup performs next time before I start paying for diction lessons.

On the software front, the simple friendliness of the scripting and voice acting gives Vi a definite personality that was plenty encouraging, and having useful info delivered passively, without my having to take my eyes off the road, is a big plus over a running watch. I'm curious to see how Vi approaches the next few runs given the goals I've set the system.

So when my knee stops complaining after this run I’ll be back out again and seeing what Vi’s got in store.

SonarTsPlugin 1.0.0 released

In something of a milestone for the project, SonarTsPlugin 1.0.0 has been released. While the last blog post that mentioned the plugin had it at v0.3, there have been a great many changes since then – to the point that I might as well outline the total feature set:

  • Analyses TypeScript code using tslint, or consumes existing tslint output, reporting issues to the SonarQube interface
  • Analyses code coverage information in LCOV format (a sample record follows this list)
    • Also supports Angular-CLI output
  • Derives lines-of-code in your TypeScript project
  • Supports user-defined rule breach reporting
  • Supports custom tslint rule specification
  • Compatible with Windows and Linux, supports various CI environments including VSTS
  • Compatible with SonarQube 5.6 LTS and above
  • A demo site exists
  • Sample projects demonstrating setup of the plugin are available
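
For reference, a minimal LCOV record of the sort the plugin consumes looks like this (file path and counts purely illustrative) – SF names the source file, each DA line pairs a line number with its execution count, and LF/LH give lines found and lines hit:

SF:src/app.ts
DA:1,1
DA:2,0
LF:2
LH:1
end_of_record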

The project readme has fairly detailed information on how to configure the plugin, which I’m shortly to turn into a wiki on GitHub with a little more structure.
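
By way of illustration, a typical sonar-project.properties might include entries like these – the two tslint settings are the ones documented for the plugin, while the LCOV path property name here is a guess rather than gospel, so check the readme:

# Point the plugin at the shared tslint configuration and any custom rules
sonar.ts.tslintconfigpath=tslint.json
sonar.ts.tslintrulesdir=custom-rules/
# Illustrative only - the real LCOV property name is in the readme
sonar.ts.lcov.reportpath=coverage/lcov.info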

The plugin has been downloaded over a thousand times now, and appears to be getting increasing use given the recent trend of issues and activity on the project. Hopefully it’s now in a good place to build upon, with the core functionality done.

The next big milestone is to get the plugin listed on the SonarQube Update Centre, which will require fixing a few code issues before going through a review process and addressing anything that comes out of that. Being on the Update Centre is the easiest way for developers to consume the plugin and to receive updates, so it's a real priority for the next few months.

SonarQube TypeScript plugin 0.3 released and demo site available

I've recently made some changes to my SonarQube TypeScript plugin, pithily named 'SonarTsPlugin', that:

  • Make it easier to keep up to date with changes to TsLint
  • Fix minor bugs
  • Support custom TsLint rules


Breaking change

In a breaking change, the plugin no longer generates a configuration file for TsLint based on your configured project settings; instead, it requires that you specify the location of a tslint.json file to use for analysis via the sonar.ts.tslintconfigpath project-level setting.

There were several reasons for the change as detailed on the initial GitHub issue:

  • The options for any given TsLint rule are somewhat fluid and change over time as the language evolves – either we model that with constant plugin changes, or we push the onus onto the developer
  • Decouples the TsLint version from the plugin somewhat – so long as rules with the same names remain supported, a TsLint upgrade shouldn’t break anything
  • Means your local build process and SonarQube analysis can use literally the same tslint.json configuration (an example follows this list)
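
For instance, a single tslint.json along these lines – the rules here are stock TsLint rules, chosen purely for illustration – can now drive both your local lint step and the SonarQube analysis:

{
  "rules": {
    "no-any": true,
    "no-unused-variable": true,
    "max-line-length": [true, 140]
  }
}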

Custom rule support

New to 0.3 is support for specifying a custom rule directory. TsLint supports user-created rules, and several large open-source projects have good examples – in fact, there’s a whole repository of them. You can now specify a path to find your custom rules via the sonar.ts.tslintrulesdir project property.
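
As a sketch of what such a rule looks like – written against the TsLint 3.x-era API current at the time, so the import path and walker classes may differ in later versions – a file named blockedNameRule.ts in your rules directory would enable a rule called 'blocked-name':

import * as ts from "typescript";
import * as Lint from "tslint/lib/lint"; // 3.x-era import path

export class Rule extends Lint.Rules.AbstractRule {
    public static FAILURE_STRING = "use of a blocked identifier";

    public apply(sourceFile: ts.SourceFile): Lint.RuleFailure[] {
        return this.applyWithWalker(new BlockedNameWalker(sourceFile, this.getOptions()));
    }
}

// Walks the AST and flags any identifier literally named "foo"
class BlockedNameWalker extends Lint.RuleWalker {
    public visitIdentifier(node: ts.Identifier) {
        if (node.text === "foo") {
            this.addFailure(this.createFailure(node.getStart(), node.getWidth(), Rule.FAILURE_STRING));
        }
        super.visitIdentifier(node);
    }
}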

NCLOC accuracy improvements

A minor NCLOC defect was fixed, where the interior lines of block comments longer than three lines were counted as code.
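
For illustration, in a snippet like the following the comment body lines were previously counted towards lines of code:

/*
 * This block comment spans more than three lines.
 * Before the fix, these interior lines
 * inflated the NCLOC measure
 * as though they were code.
 */
const answer = 42; // the only genuine line of code here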

Demo site

To test the plugin against some larger and more interesting code-bases, there's a SonarQube 5.4 demo installation with the plugin installed, available for viewing. Sadly, so far none of the projects I've analysed have any major issues versus their custom rule setup…

Future

There remains minor work to do on the plugin, and I’ll keep it up to date with TsLint changes where possible.

Failure to start Xamarin Android Player simulator

On Windows 10, you might find that after installing the Xamarin tools and the Xamarin Android Player you can't launch the simulator from either Visual Studio or the Xamarin Device Manager, with the error 'VBoxManage command failed. See log for further details'.

[Screenshot: the 'VBoxManage command failed' error]

XAP automates VirtualBox in the background, but you'll probably find that you can't manually start the VM image from there either – though you get a more helpful error: 'Failed to open/create the internal network' and 'Failed to attach the network LUN (VERR_INTNET_FLT_IF_NOT_FOUND)':

[Screenshot: the VirtualBox 'Failed to open/create the internal network' error]

The fix is to edit the connection properties of the VirtualBox Host-Only network adapter (as named in the error message) and make sure that 'VirtualBox NDIS6 Bridged Networking Driver' is ticked. In the example below, even though the installation appeared to go swimmingly, it didn't enable the bridge driver.

[Screenshot: adapter properties with the VirtualBox NDIS6 Bridged Networking Driver ticked]

Tick the box and off you go!

Chutzpah and source maps – more complete TypeScript/CoffeeScript coverage

I spent a lot of time over Christmas contributing to open-source JavaScript unit test runner Chutzpah, and the recent Chutzpah 3.3.0 release includes source-map support as a result.

The new UseSourceMaps setting causes Chutzpah to translate generated-source (i.e. JavaScript) code coverage data into original-source (i.e. TypeScript/CoffeeScript/whatever) coverage data, for more accurate metrics. It also plays well with LCOV support, which I added a while back but which only got released as part of 3.3.0.

Chutzpah before sourcemaps

Chutzpah records code coverage using Blanket.js. However, coverage was always expressed in terms of covered lines of the generated JavaScript, not covered lines of the original language.

This makes code coverage stats inaccurate:

  • There are likely to be more lines of generated JavaScript than of source TypeScript/CoffeeScript (skewing percentages for some constructs)
  • The original language might output boilerplate for things like inheritance in each file, which if not used is essentially uncoverable in the generated JavaScript – TypeScript suffers especially from this (see the sketch after this list)
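
As an example of the second point: compiled for an ES5 target, even a minimal TypeScript class hierarchy like the one below causes the compiler to emit an __extends helper function in the generated JavaScript – lines no test can meaningfully cover if inheritance isn't exercised:

class Animal {
    constructor(public name: string) { }
}

// The "extends" here is what triggers the per-file __extends boilerplate
class Dog extends Animal {
    bark(): string {
        return this.name + " says woof";
    }
}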

UseSourceMaps setting

The new UseSourceMaps setting tells Chutzpah, when faced with a file called File.js, to look for a source map file called File.js.map containing mapping information between File.js and its original source code – likely a TypeScript or CoffeeScript file.

{
  "Compile": {
    "Extensions": [".ts"],
    "ExtensionsWithNoOutput": [".d.ts"],
    "Mode": "External",
    "UseSourceMaps": true
  },
  "References": [
    { "Include": "**/src/**.ts", "Exclude": "**/src/**.d.ts" }
  ],
  "Tests": [
    { "Include": "**/test/**.ts", "Exclude": "**/test/**.d.ts" }
  ]
}

This will only be of use when Chutzpah has been told about the original source files via the Compile setting, has been asked to perform code coverage, and source maps actually exist.

Parsing source maps in .NET

When we minify JavaScript source, or write code in TypeScript or CoffeeScript and compile it down to JavaScript, our debugging experience would be difficult without tools that support source maps.

I'm currently modifying Chutzpah to address a tiny gap in its handling of code coverage for generated source files, like those output by the TypeScript compiler, and needed exactly that: a way for .NET code to parse a source map file, then query it to find out which original source line numbers map to a generated source line that's been covered (or not) by a unit test.

SourceMapDotNet is my initial, bare-bones attempt at a partial port of the excellent Mozilla source-map library, but intended only to handle that one type of query – not full parsing, and definitely not generation.
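
For a flavour of that query type, here's the equivalent call made against the original Mozilla library that SourceMapDotNet mirrors – a sketch using its synchronous, pre-0.7 API:

import { readFileSync } from "fs";
import { SourceMapConsumer } from "source-map";

// Load the generated file's map (File.js -> File.js.map)
const rawMap = JSON.parse(readFileSync("File.js.map", "utf8"));
const consumer = new SourceMapConsumer(rawMap);

// Which original (TypeScript) line produced covered line 10 of File.js?
const original = consumer.originalPositionFor({ line: 10, column: 0 });
console.log(original.source + ":" + original.line);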

It’s also up on NuGet.

SonarQube TypeScript plugin

I use SonarQube (live demo) a fair bit to monitor code quality metrics, but there's no in-built support or published community plugin for TypeScript analysis – so I'm writing one.

I intend two core features:

  • Measure code quality by running against TsLint
  • Measure unit test coverage by processing an LCOV file

Running an alpha version of SonarTsPlugin against a random TypeScript project from GitHub shows code issues but no code coverage – yet

The first of those two goals isn’t that far away at all – above is a screenshot from the alpha version running locally. If you’re interested in helping, drop me an email!

TypeLite 0.9.6 out with multidimensional arrays fixed

I use Lukas Kabrt’s TypeLITE package (NuGet link) a lot to automatically generate TypeScript interfaces from .NET classes and interfaces, so occasionally I’ll drop a pull request to fix the odd issue in its backlog when I’ve got a spare hour (though generally they’re not issues I encounter myself).

0.9.6 includes a patch I submitted to fix jagged array output. Previously, the following declarations wouldn't be processed correctly – they'd all output a single-dimensional array:

public string[][] MyJaggedArray { get; set; }
public string[][][] MyVeryJaggedArray { get; set; }
public IEnumerable<string> MyIEnumerableOfString { get; set; }
public List<string> MyListOfString { get; set; }
public List<string[]> MyListOfStringArrays { get; set; }
public List<IEnumerable<string>> MyListOfIEnumerableOfString { get; set; }
public List<List<string[]>> MyListOfListOfStringArray { get; set; }

Processing used to just stop once it detected an array-ish type (which, oddly, didn't include IEnumerable), without diving deeper. For jagged arrays the logic's pretty straightforward, as you can just count the levels of array nesting and output that many pairs of square brackets.

For enumerable types things are a bit more interesting, as we have to read into the generic type parameter recursively. Now the above gets formatted as you'd expect, and another issue's closed off.

MyJaggedArray: string[][];
MyVeryJaggedArray: string[][][];
MyIEnumerableOfString: string[];
MyListOfString: string[];
MyListOfStringArrays: string[][];
MyListOfIEnumerableOfString: string[][];
MyListOfListOfStringArray: string[][][];
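
The recursion boils down to something like this – sketched in TypeScript rather than the plugin's actual C#, with an invented TypeInfo shape, purely to illustrate the logic:

interface TypeInfo {
    name: string;
    elementType?: TypeInfo; // present for arrays, IEnumerable<T>, List<T>, etc.
}

// One pair of brackets per level of array/enumerable nesting
function formatTsType(t: TypeInfo): string {
    return t.elementType ? formatTsType(t.elementType) + "[]" : t.name;
}

// List<string[]> modelled as nested TypeInfo
const listOfStringArrays: TypeInfo = {
    name: "List",
    elementType: { name: "Array", elementType: { name: "string" } }
};
console.log(formatTsType(listOfStringArrays)); // string[][]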

gh-ticker – a simple ticker for your public GitHub activity

With a spare weekend I put together the ticker widget you can see at the top of the screen just now – iterating through my most recent GitHub activity items every few seconds.

It is, fittingly, available on GitHub for forking and customisation, licensed under the BSD 3-Clause licence.

How it works

The GitHub API is very straightforward, and data that’s already public (such as what appears on your Public Activity tab) can be accessed without authentication and with JSONP – ideal for client-side hackery.

The widget's architected as a couple of JS files (taking a dependency on jQuery and Handlebars for now): one containing precompiled Handlebars templates, the other making the API call and rendering partials befitting the type of each activity item.

Setting it up's pretty simple – reference the JS and CSS, make sure Handlebars and jQuery are in there too, and then whack a DIV somewhere on your page with the id 'gh-ticker'.

<div id="gh-ticker" data-user="pablissimo" data-interval-ms="5000"></div>

The user whose data is pulled and the interval between ticker item flips are configurable as data attributes.

The GitHub Events API

The Events API knows about a set number of event types – for each event type there's a Handlebars partial. When we're wondering how to render an item, we look up the relevant partial and whack it into the page.
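
In sketch form – assuming, as the widget does, that each precompiled partial is registered under the event type's exact name:

// Handlebars is provided globally by the runtime plus gh-templates.js
declare const Handlebars: {
    partials: { [name: string]: (context: unknown) => string };
};

function renderActivityItem(item: { type: string }): string {
    const partial = Handlebars.partials[item.type]; // e.g. "PushEvent"
    return partial ? partial(item) : ""; // unrecognised events render nothing
}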

Since that's a fair few partials (neat for development in isolation, bad for request-count overhead), they're precompiled using the Handlebars CLI and put into a single gh-templates.js file.

Improvements

The ticker's very basic – it just hides or shows items as required, without any pretty transitions. It also takes a dependency on jQuery which it needn't, since it only uses it for the AJAX call and element manipulation, both of which are easily covered by existing browser functionality.
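
For instance, the data fetch could go jQuery-free along these lines – sketched with the modern fetch API against GitHub's documented public-events endpoint (XMLHttpRequest would have been the equivalent at the time):

// Pull a user's public activity without jQuery
async function loadPublicEvents(user: string): Promise<void> {
    const response = await fetch("https://api.github.com/users/" + user + "/events/public");
    const events: Array<{ type: string; created_at: string }> = await response.json();
    for (const event of events) {
        console.log(event.type, event.created_at); // e.g. "PushEvent"
    }
}

loadPublicEvents("pablissimo");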

Still – it can be easily styled to be fairly unobtrusive and has at least taught me a little about Handlebars.

NRConfig 1.4.0.0 for New Relic released

I've spent a little time working on NRConfig, the tool that generates custom instrumentation files for .NET projects using New Relic, after a bug report pointed out that the tool couldn't run against an assembly whose dependencies weren't available. That's unlikely for production code, since you'd need the dependencies present to run at all, but it can happen when you want to do an offline run of instrumentation generation against a third-party library.

To this end, NRConfig's been changed pretty substantially under the hood to support alternatives to .NET reflection for discovering instrumentable types, with Microsoft's Common Compiler Infrastructure (CCI) library drafted in as the default discovery provider.

CCI's slower than reflection by quite a margin – it can now take several seconds to produce instrumentation configuration for large or complex assemblies – but I'm hoping to improve that if it becomes a problem.

Also introduced is MSBuild support in a new NuGet package, NRConfig.MSBuild. This should make generating instrumentation files for your own code a lot less work: simply add the NRConfig.MSBuild package to any project containing code you want to instrument, and mark up the assembly, types or methods with [Instrument] attributes to control the output. On build, a custom instrumentation file is generated in your output directory for you to deploy wherever it's needed.