January TechMeetup Glasgow – and where have all the job applicants gone?

This Wednesday I was through to Glasgow for the first west-coast TechMeetup of the year. Free beer and pizza aside, the two talks were both excellent.

Jamie Boyd (@jamboid) gave a talk on ‘Responsive design’ illustrated with examples from the Macdonald Hotels website showing how combining media queries with specifically-designed CSS can yield dynamic layouts tailored to different device profiles without the use of user-agent sniffing or JavaScript. Sadly he left the visual demo of this until right at the end, whereupon the rest of his presentation clicked into place. Regardless, with me doing some web work at the minute it’s definitely an idea I’ll be returning to in the next few weeks.

Sebastien Lambla (@serialseb), the most energetic and characterful presenter I’ve seen in a long time, presented a deep-dive on HTTP caching and its knock-on effects, and how to combat them (and leverage them) when providing web services.

Besides the talks an interesting aspect for me was the initial welcome section. At the start of each meet-up attendees go round the room introducing themselves, saying what they do or who they work for and what they might be looking to gain from the event. There were perhaps 100 people in the room so the process took maybe half an hour, but what struck me was how many people attending worked for firms actively trying to recruit developers – and how few people attending were actually looking for work.

Technical events like TechMeetup and DDD Scotland are ideal hunting grounds for companies looking to recruit. By attending you gain visibility, you can get to know potential hires in a much more relaxed setting than an interview and you can get a feel for your competition’s hiring needs. You also get broader access to the rumour-mill – who’s planning on setting up shop in your town, who’s about to expand an existing office, who’s about to embark on a new project and who’s in trouble.

What interested me was the strength of the recruitment story for those seeking software development work in Edinburgh and Glasgow. At the Glasgow event there must have been at least 5 firms offering .NET developer opportunities, probably as many again doing Ruby on Rails projects, people after Android and iOS developers… The situation was much the same at the Edinburgh event two weeks previously – of those introducing themselves probably half were working for firms desperately trying to get software resource in the door. In both cases it was clear – developers were in extraordinarily high demand and very short supply.

The broader economy might be in trouble, but in Scotland at least the future seems bright for the developer community.

Improving the C# Ants AI Challenge starter kit for debugging support

The C# starter kit for the current AI Challenge works well out-of-the-box, but doesn’t easily let you debug your bot live. There are a number of ways to support live debugging, so I chose one of the more interesting ones.

The aim is to host the bot in a Visual Studio debugging session, then write a proxy that the Python game script runs which relays commands and responses between the Python script (which expects IO through stdin and stdout) and the bot being debugged.

To start this, we need to do a little refactoring to the starter kit classes. Our aim is to decouple the starter kit from talking directly to Console.In and Console.Out. This only happens in a couple of places so is reasonably straightforward, and the methods called are few.

So, let’s define an interface representing a bot’s connection to the game:

public interface IGameDataPipe
{
   string ReadLine();
   void WriteLine(string s);
}

We’ll also implement a console-based version of this:

public class ConsoleGameDataPipe : IGameDataPipe
{
   public string ReadLine()
   {
      return Console.In.ReadLine();
   }    

   public void WriteLine(string s)
   {
      Console.Out.WriteLine(s);
   }
}

We then want to modify the Ants class to use our interface instead of talking directly to the Console class. First off, we add a member variable of type IGameDataPipe and default it to our console version. Then we find everywhere that calls Console.* and replace it with an equivalent call on the IGameDataPipe member.

There’s one other place in the starter kit that talks to the console directly – the base class for our bot implementation in Bot.cs. Here, IssueOrder is directly calling Console.WriteLine. It’d be great if we could avoid having to expose the Bot class to the concept of how to present its data to the game layer, so I’d like to avoid pushing an IGameDataPipe into the Bot. Instead we’ll expose this through a delegate and event – when the bot wants to issue an order to mutate the game state it raises the event with appropriate arguments and the Ants class figures out what to do with it.

That takes our definition of Bot to something like this:

public abstract class Bot
{
   public delegate void OrderIssueHandler(Location loc, Direction direction);
   public event OrderIssueHandler OrderIssued = delegate { };

   public abstract void DoTurn(IGameState state);

   protected virtual void IssueOrder(Location loc, Direction direction)
   {
      this.OrderIssued(loc, direction);
   }
}

We then modify the Ants class to hook the event upon entering PlayGame, with the handler delegating to the IGameDataPipe.
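The hookup itself is only a couple of lines – something along these lines at the top of PlayGame (a sketch: the ‘o row col direction’ format is the contest protocol, but exactly how the starter kit renders a Direction may differ):

```csharp
// In Ants.PlayGame: relay any order the bot raises down whichever
// IGameDataPipe this instance was constructed with.
bot.OrderIssued += (loc, direction) =>
   this.dataPipe.WriteLine(
      string.Format("o {0} {1} {2}", loc.Row, loc.Col, direction));
```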

So after all that we’ve achieved nothing functionally different but we have managed to decouple the bot implementation from the actual mechanism by which it talks to the Python game script.

This lets us replace our console in-out implementation with something else. We could use IPC here, WCF remoting, anything. I’m going with TCP for this example. Our server and client implementations can be very simple because the way our IGameDataPipe is consumed is also very simple. The original Console-based client worked just fine using the blocking IO provided by the Console class, so we can replicate the same. First things first – let’s modify Ants.cs with a new constructor that lets us override the IGameDataPipe to be used for comms with any other implementation.

// Default to the console, allow caller to override as required.
IGameDataPipe dataPipe = new ConsoleGameDataPipe();

public Ants()
{
}

public Ants(IGameDataPipe pipe)
   : this()
{
   this.dataPipe = pipe;
}

Now on the server side (that is, the side that’s hosting the bot) we simply wrap a TcpClient with a StreamReader and StreamWriter and expose ReadLine and WriteLine methods delegating their work to the two Stream* classes.

public class TcpIPGameDataPipe : IGameDataPipe
{
   StreamReader _reader;
   StreamWriter _writer;
   TcpClient _client;    

   public TcpIPGameDataPipe(TcpClient client)
   {
      _client = client;
      NetworkStream stream = _client.GetStream();
      _reader = new StreamReader(stream);
      _writer = new StreamWriter(stream);
   }

   public string ReadLine()
   {
      return _reader.ReadLine();
   }

   public void WriteLine(string s)
   {
      _writer.WriteLine(s);
      _writer.Flush();
   }
}

On the client side (that is, the shim that the Python script talks to) we need to relay stdin over a TCP connection, and relay incoming data from the TCP connection to stdout. This is marginally more involved than the server implementation as we need to thread these operations, since they’re being driven by the Python script on one side and the TCP connection on the other. Still, the code’s straightforward:

public class ConsoleTcpIPForwarder
{
   TcpClient _client;
   StreamWriter _writer;
   StreamReader _reader;    

   Thread _readThread;
   Thread _writeThread;

   public ConsoleTcpIPForwarder(int port)
   {
      _client = new TcpClient(new IPEndPoint(IPAddress.Loopback, 0));
      _client.Connect(new IPEndPoint(IPAddress.Loopback, port));

      NetworkStream stream = _client.GetStream();
      _reader = new StreamReader(stream);
      _writer = new StreamWriter(stream);
   }

   public void Start()
   {
      _readThread = new Thread(ReadForwarder);
      _writeThread = new Thread(WriteForwarder);

      _readThread.Name = string.Format("{0} read thread", this.GetType().Name);
      _writeThread.Name = string.Format("{0} write thread", this.GetType().Name);

      _readThread.Start();
      _writeThread.Start();
   }

   private void ReadForwarder()
   {
      // Relay stdin to the network; stop cleanly when stdin closes.
      string line;
      while ((line = Console.ReadLine()) != null)
      {
         _writer.WriteLine(line);
         _writer.Flush();
      }
   }

   private void WriteForwarder()
   {
      // Relay network traffic back to stdout; stop when the socket closes.
      string line;
      while ((line = _reader.ReadLine()) != null)
      {
         Console.WriteLine(line);
      }
   }
}

Now we’re almost home. Our shim implementation simply needs to kick off a ConsoleTcpIPForwarder and leave it running, which will in turn forward all console input to the network and network traffic back to the console. Our server implementation that hosts the bot just needs to listen for incoming TCP connections made by the shim and, when one arrives, kick off a new Ants game using the TcpIPGameDataPipe. We’ll keep this in a busy-loop that runs a game then waits for more connections, to avoid having to keep kicking off the Visual Studio side of things.
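Concretely, the host side can be sketched like this (the port number is arbitrary, MyBot is your bot implementation, and I’m assuming the starter kit’s usual PlayGame entry point):

```csharp
// Listen on loopback; for each connection from the shim, play a full
// game over that socket, then go back to waiting for the next one.
TcpListener listener = new TcpListener(IPAddress.Loopback, 5000);
listener.Start();
while (true)
{
   using (TcpClient client = listener.AcceptTcpClient())
   {
      new Ants(new TcpIPGameDataPipe(client)).PlayGame(new MyBot());
   }
}
```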

The only final comment here is that we want to keep our TcpIPGameDataPipe in a separate assembly to avoid breaking compilation of our main bot assembly when it’s uploaded to the contest server (which is unlikely to have the required System.Net* assemblies available to compile against).

I’ve combined both the forwarding client and the game-host server in the same executable for simplicity, deciding whether to spin up the server or forward from the client based on whether any command-line parameters are supplied (none = client; anything, say ‘-server’, = server).
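The entry point then reduces to something like the following (RunServer standing in for the listen-and-play loop described above – a sketch rather than gospel):

```csharp
public static void Main(string[] args)
{
   if (args.Length > 0)
   {
      // Any parameter (I use '-server') hosts the bot and waits for the shim.
      RunServer();
   }
   else
   {
      // No parameters: we're the shim that the Python script launched.
      // The forwarder's foreground threads keep the process alive.
      new ConsoleTcpIPForwarder(5000).Start();
   }
}
```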

To run, we start the shim process in Visual Studio (with a command-line parameter of ‘-server’ to force it to go into server mode) and then run the Python game script pointing it at our shim exe instead of the actual bot exe. We can now put breakpoints into our bot code and have them fire just as we want.

The modified package is available here.

Improvements to Telerik’s input controls

Telerik’s RadControls for ASP.NET are reasonably good from an ‘out of the box pretty’ point of view and expose a fair bit of functionality with minimal configuration (hence the ‘RAD’ part of the control name, one imagines). However, such quick-to-throw-together functionality comes at a cost, as the controls can render down to some quite clumsy HTML.

For example a RadTextBox called ‘RadTextBox1’ with a prompt message within it renders to three input tags and a span tag:

<span id="RadTextBox1_wrapper" style="white-space: nowrap;">
  <input value="type here..." size="20" id="RadTextBox1_text" name="RadTextBox1_text" style="width: 290px;" type="text">
  <input id="RadTextBox1" name="RadTextBox1" title="" style="visibility: hidden; margin: -18px 0pt 0pt -1px; width: 1px; height: 1px; overflow: hidden; border: 0pt none; padding: 0pt;" value="" type="text">
  <input autocomplete="off" value="{&quot;enabled&quot;:true,&quot;emptyMessage&quot;:&quot;type here...&quot;}" id="RadTextBox1_ClientState" name="RadTextBox1_ClientState" type="hidden">
</span>

The element that actually shows on-screen isn’t the one with ID ‘RadTextBox1’, which is annoying when trying to tie jQuery and Telerik controls together. For example, to attach a jQuery Autocomplete to the above Telerik RadTextBox you’d have to either do it by class (fine, unless you’re using this in a user control, in which case you’d have to class each one differently just to get different autocompletion lists working) or assume that Telerik stick with their current convention of appending ‘_text’ to the ID of the element that actually shows.

So it’s fairly good news for those faced with combining Telerik controls and jQuery (or anything else javascripty) that they’ve introduced ‘single input rendering mode’ that turns the above into something marginally clearer:

<span id="RadTextBox1_wrapper" style="width: 290px;">
   <span id="RadTextBox1_display" style="display: inline; color: rgb(138, 138, 138); font-family: &quot;segoe ui&quot;,arial,sans-serif; font-size: 12px; line-height: 17px;">type here</span>
   <input style="color: transparent;" id="RadTextBox1" name="RadTextBox1" size="20" type="text">
   <input autocomplete="off" value="{&quot;enabled&quot;:true,&quot;emptyMessage&quot;:&quot;type here&quot;}" id="RadTextBox1_ClientState" name="RadTextBox1_ClientState" type="hidden">
</span>

It’s still generating a bunch of tags for something fairly straightforward, but at least now the visible input control is the one with the ID you assign in the page making interactions with jQuery a little easier.

The curious case of the Ebuyer £1 promo

Ebuyer kicked off its £1 clearance sale at 11am this morning and, as was surely inevitable, its infrastructure imploded almost immediately. At 1.30pm the site is still essentially inaccessible; those who do manage to get anything into their baskets find items showing out-of-stock by the time they check out, or, worse, the process bails out mid-way through and their order becomes a quantum concern.

While the site’s down, they’re losing money through lost sales – if we take last year’s £260m turnover (roughly £712k per day) and assume people shop mostly in an 18-hour window, my back-of-envelope maths suggests that they’re down £40k of revenue per hour (so heading towards £100k for the day so far as I write).

The situation was foreseeable because it happens time and again with these sorts of promotions. Most recently, the Touchpad firesale was just another way of saying ‘rolling DDoS attacks’ as punters hammered sites the second a whiff of knock-down pricing was detected. Worse, once people find that they can’t get on the site they tweet about it, prompting others who may not even know who Ebuyer are to check out the car-crash live and adding to the problem.

In a world where resources can be virtualised and extra capacity spun-up on demand one might wonder why large retailers aren’t buying into the idea of cloud computing, CDNs and the like to buy themselves breathing space – even temporarily – around big promotional events.

In the case of Ebuyer, losing a couple of hundred thousand in revenue as a result of self-inflicted ‘technical difficulties’ probably doesn’t come close to the effort and cost of rebuilding their infrastructure to cope with what are exceptionally rare traffic spikes. And in the end their image isn’t likely to be too badly tarnished either – people have a short memory for this sort of thing and it seems unlikely that Staples, Dabs, Carphone Warehouse, Misco or anyone else caught in the torrent of traffic looking for Touchpads has suffered a loss of custom as a direct result.

Still, you’ve got to imagine that Ebuyer’s technical, sales and warehouse staff would probably be having a much better day had the promotion not been run quite so optimistically.

AI Challenge 2011

I’ve not taken part in an AI Challenge before but with the next few weekends looking pretty quiet I decided to sign up. The latest challenge, run as a collaboration between Google and the University of Waterloo Computer Science Club, simulates a competitive environment in which two or more ant colonies fight for survival on a selection of maps.

The contest is distinctive for the broad range of languages supported. You code up your entry, zip it and upload it; it’s then compiled (where necessary) on the server and put into a queue to fight against other players’ creations. There are starter packages provided so that you don’t have to parse all of the incoming data and wrap up the output.

You control an ant colony located somewhere on a map. By commanding your ants to move to and collect food you spawn new ants from your anthills. Survival is key, but a scoring system also encourages attacking behaviour – get close enough to another ant and you’ll attack it, and if you manage to land on an undefended ant-hill that hill is lost and no longer produces ants for the opposing player.

Your code has a limited amount of time to perform its processing per time-step, and if it exceeds that time then your bot is eliminated. Equally, if your bot crashes during execution then you’re also out. Winning or losing changes your rank based upon the TrueSkill algorithm, and your bot will keep being queued for more battles in a round-robin fashion for a number of days after you upload it.

I’ve decided to try a diffusion approach, which I suspect many others will be trying too. The idea is reasonably simple – goal items in the world exude a digital pheromone that makes them attractive to your ants, and this pheromone is diffused throughout the map over time by simple averaging into adjacent cells. For example, a food item might have a score of 1000 scent units in time step 0; in time step 1 its neighbours will have attributed themselves some proportion of that scent, and in time step 2 their neighbours will also be imbued. If you consider the scent of food to be a surface then we iteratively smooth out the sharp peaks caused by goal items throughout the rest of the surface.

That means that we can find food or unexplored squares (if we associate a suitable attraction score to cells we’ve not yet visited) by simple hill-walking per-ant. When an ant is deciding where to move it examines the squares to the north, west, east and south of it and picks the direction with the highest scent value. It also means that we get pathfinding for free by clamping the ability of wall-cells to propagate scent to zero – any other cell adjacent to a wall cell will have a higher attractiveness, so the ant will not consider the move.
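A single diffusion pass can be sketched as follows (my own throwaway code rather than anything from the starter kit; the wrap-around indexing assumes a toroidal map, which is how the contest maps behave):

```csharp
using System;

class Diffusion
{
    // One diffusion pass over the scent grid: each passable cell takes on
    // some proportion of the average scent of its four neighbours, while
    // wall cells are clamped to zero so scent never leaks through them.
    // Goal cells (food, unexplored squares) are re-seeded before each pass.
    public static void Diffuse(double[,] scent, bool[,] isWall, double rate)
    {
        int rows = scent.GetLength(0), cols = scent.GetLength(1);
        double[,] next = new double[rows, cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
            {
                if (isWall[r, c]) continue; // walls stay at zero scent
                // The map wraps at its edges, hence the modular indexing.
                double sum = scent[(r + rows - 1) % rows, c]
                           + scent[(r + 1) % rows, c]
                           + scent[r, (c + cols - 1) % cols]
                           + scent[r, (c + 1) % cols];
                next[r, c] = rate * (sum / 4.0);
            }
        Array.Copy(next, scent, scent.Length);
    }
}
```

Each ant then just compares the scent in its four adjacent cells and steps towards the highest.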

There are substantial kinks to work out in my implementation but I’ve only just started. For now, my bot (not yet uploaded) seeks only to explore the map without bothering to look for food (though there is enough food that it’ll stumble upon some accidentally fairly often). This video shows the pheromone trails over time; the green on the left is the ‘unexplored’ pheromone, while the red on the right is the ‘food’ pheromone. Just now the ants are configured to consume the unexplored pheromone when they are within a cell, such that any ants following behind will seek different routes.

The ants are starting from near the top-left of the map. The green pheromone quickly ramps up across the map even in regions the ants cannot yet see because we do not yet know where the walls are in those sections – ants have a limited field of view. Once we start exploring these regions the wall cells start to clamp correctly and the map topology becomes a lot clearer.

On the red side the food pheromone pulses as new food items are added randomly. When an item is consumed its red signal falls off over a few time units.

The contest closes for new entries on 18 December, after which the tournament-proper starts – hopefully I’ve enough time to get something based on this diffusion approach uploaded by then!

Breaking out is hard to do

It’s easy to get stuck in a way of thinking and be unable to break out of it, especially when you get so drawn into a problem as to be emotional rather than impassive. You can be drawn in too close to an issue and get so focused on a narrow set of possible but flawed solutions that you miss the real solution that’s just in your peripheral vision, and all that it takes to see that real solution is a step back.

I use a memory of one such occasion as a reminder when a big problem arises, in the hope that the lesson I learned all those years ago will be employed by my modern self before it’s too late.

I was nearly a year out of university. I was working in a production facility with automation equipment – heavy-ish machinery making and packaging things that was struggling with yield issues. We had a radical idea on how to improve some of those issues but no office-hours to actually make the change – this being a start-up we did most of it in the evenings and tested the system in our spare time when the production line was shut down for the night (or weekend). The change was invasive – it involved splicing a PC in between an OEM vision system and a PLC in charge of making things move. We designed our modification so that we could unhook everything within a couple of hours, which made doing comparative studies easy and gave us a safety net in case the line needed to be started back up at short notice while we were testing.

Our first version was a bit of a kludge but tested well and was stable enough for production runs so we implemented it one weekend and ran one day’s equivalent production volume to make sure it held up. It did – yield was 100%. I went home happy, if exhausted as the previous two months of night-time and weekend development had done nothing for my health.

I woke up on Monday morning at 4am having had nightmares about the production line. The process statistics were monitored at board level, and as it was a small firm we’d essentially taken ownership of the project (and therefore responsibility for its outcome). Even though I knew that the new system worked, the previous runs were freebies – test runs whose statistics weren’t reflected at board level. Now that the statistics were real it took on a different feel, a new pressure that hadn’t existed before.

I went to work at 6am and sat by my phone waiting nervously for a call signalling production disaster. I knew that my fear was irrational – I kept looking back over test logs, statistics, charts and meeting minutes and was re-assured that all signs pointed to an anticlimactic morning. By 9am it hadn’t rung and our engineering team meeting started. At 10am I returned to my desk and a new email.

“Can you give production a call urgently please?”

My heart sank – I phoned production for an update.

“The process isn’t working – almost every part we put in is coming out as a reject”.

Worse still they’d had this problem since 8am and had now performed two hours of running with a yield in the low-teens, and no parts passing the process for the past hour. This was madness – even if I fixed the issue right now the yield figures for the day, week and even month were shot all to hell with everyone watching.

I went over and watched the process running and failing, time after time. We couldn’t understand it – I couldn’t think of a single failure mode of the new system that would cause this output. I was sleep deprived, emotionally drained from weeks of late nights and early mornings and suddenly very self-conscious with a room of production staff looking at me for answers. My head ran through ever more unlikely scenarios and eliminated them one by one, each time getting a step closer to having to shut down the line and remove the new system. I was running out of ideas. Then I heard one of the operators say:

“What’s this washer doing over here?”

A washer had been knocked off during morning cleaning of the equipment, and without it the system was doomed to keep producing reject parts. It all clicked into place, and it should have been obvious to an impassive, rational observer from the outset – as I was neither of these kinds of observer and running on misfiring instinct I was clutching at all the wrong straws. The washer was replaced, and the system turned in a respectably high yield from that point on.

My troubleshooting had become myopic and I’d not noticed. Having seen a change to their environment, the production staff experienced the same. Our minds had taken a mental short-cut: post hoc ergo propter hoc – after it, therefore because of it. We hadn’t considered that the problems on day 2 might be unrelated to the actions on day 1. Everyone had skipped the first page of the story and unconsciously incorporated the premise ‘yesterday’s change caused this’ into our problem model – the set of solutions that didn’t rely on yesterday’s exploits was shrouded from us, and none of us had noticed. We were asking the question:

“How could the move from old to new system cause this sort of output?”

We should have been asking the question:

“What might cause this sort of output?”

From that point on, whenever a seemingly-large issue comes up I find myself thinking back to that episode, where a room full of people were flummoxed by a missing washer, and remember two things: that all problems should be approached impassively, and that sometimes the best route forward starts by taking a step back.

An homage to Pages from Ceefax

I always loved Ceefax as a kid, and in particular (for reasons that probably indicate some manner of mental deficiency) Pages From Ceefax, the small non-navigable subset that was broadcast after-hours accompanied by a variety of big-band and easy listening music. And with a spare bank holiday weekend at my disposal and a girlfriend who is more accepting of my eccentricities than is necessarily prudent, I put together an HTML5 canvas + Javascript version that pulls in RSS feeds from the BBC news website and renders them in a hand-made sprite font:


Click image for project page – requires HTML5 canvas support

Two sprite fonts (one for character data and one for the graphics blocks from the header section) are combined with feeds from the BBC periodically downloaded by a cron job. Three feeds are in use in the above sample – the Top Stories, Sport and Business feeds. Each of these maps to a single page (signified by the Pxxx number at the top of the view).

These are then split down into subpages so that both headline and byline are rendered together. For a given page there will be one or more subpages that contain the actual content – these are cycled between on a 15 second interval, and when all of the subpages of a given page have been shown the next page is slotted into place.

Graphing the Whisky Fringe with Visiblox

The Whisky Fringe Tasting Tracker generated a couple of hundred datapoints over the course of two days (after Neil gave it a crack on the Sunday, and a few of us had played with it on the Saturday). The data we have available is pretty simple, and associates opinions about whiskies against the times that those opinions were recorded.

This lets us look at a couple of things from the off:

  • What was the sampling rate like over the course of the day? Did it get quicker near the end when time was constrained, did we start off quickly and wind down early?
  • Did our sentiment towards the drams being tasted change over the course of the afternoon?

I charted some of the data using Scott Logic’s (my employer’s) Visiblox charting component in a pretty rough-and-ready fashion. By way of a quick summary:

  • A PHP file generates a JSON object containing the various metrics
  • A custom Silverlight control hosting a Visiblox chart is coded to expose some ScriptableMember methods that directly manipulate the axes and data-series of the hosted chart, allowing Javascript on the HTML page hosting the Silverlight control to programmatically add data and configure axes
    • Note that this isn’t the way the Visiblox component’s intended to be used – I did it in this way as a fun experiment, to be written up in a subsequent post
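For what it’s worth, the ScriptableMember plumbing looks roughly like this (a sketch – the class and method names are mine, and the actual Visiblox chart manipulation is elided since that’s the subject of the follow-up post):

```csharp
using System.Windows.Browser;
using System.Windows.Controls;

// The hosting page can call any method marked [ScriptableMember]
// once the control has registered itself with the HTML bridge.
public partial class ChartHost : UserControl
{
    public ChartHost()
    {
        InitializeComponent();
        // Expose this instance to script as <plugin>.Content.chartHost
        HtmlPage.RegisterScriptableObject("chartHost", this);
    }

    [ScriptableMember]
    public void AddPoint(string seriesName, double x, double y)
    {
        // ...find or create the named series on the hosted Visiblox
        // chart and append the (x, y) point...
    }
}
```

From the page’s JavaScript this then surfaces as something like slControl.Content.chartHost.AddPoint('rate', 14.25, 3), where slControl is the Silverlight plugin element.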

For now here’s some static images:

Sampling rate

Sampling rate over the Saturday

First off we see the aggregate sampling rate. Each bar represents the total number of drams sampled (across all people using the system) in a given 15 minute time period. While the event started at 2pm, we didn’t actually get started on sampling until about 2.20pm (with the rest of the time spent actually getting into the venue and waiting for a few stragglers). Further, the tables started getting cleared at 5.45pm so there’s a corresponding lack of data there.

  • Around 4pm there’s a sharp drop-off as everyone camped out at their chosen ‘half-time orange’ stand – these are the rarer whiskies of which you can try one. We also seemed to use the subsequent wee while taking a breather.
  • Once the half-time orange period had ended, things picked up substantially, possibly as people realised that time was no longer on our side.
  • There was another minor panic right near the end of the day as the time when stands packed up approached, though it’s clear that by about 5.30 everyone was pretty much done.

Likes and dislikes

Likes and dislikes over the Saturday

Here we’re charting the same data as above, but split out into those drams that were marked as ‘liked’ vs those marked as ‘disliked’. Dislikes are given a negative score to push them below the axis. It seems like there was a period about 45 minutes after the half-time orange where we were more critical of what we were trying, perhaps having recalibrated after really liking the half-time whiskies. However, it’s a little difficult to tell (and in fairness, we don’t have enough data points with so few people using the system).

Sentiment

Sentiment over the Saturday

Sentiment’s a tricky thing to gauge. My initial attempt subtracts the number of whiskies marked as ‘disliked’ from the number marked as ‘liked’ in each 15 minute bucket, giving a net number of liked whiskies in the period. This is then divided by the total number of whiskies sampled during that 15 minute window, giving a normalised value between −1 and 1:

  • Values below 0 suggest that more whiskies were disliked than liked in the period (that is to say that sentiment was negative overall). Zero values also occur when no tasting took place (such as the 4pm half-time orange lull).
  • Values close to 1 suggest that more whiskies were liked than disliked in the period (or that sentiment was positive overall).
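For a single bucket the arithmetic is trivial – as a worked example with made-up numbers:

```csharp
// Net likes over total samplings for one 15-minute bucket.
// Example numbers: 4 liked, 1 disliked, 7 sampled in total
// (the remaining 2 were marked as merely 'tasted').
int liked = 4, disliked = 1, total = 7;
double sentiment = total == 0 ? 0.0 : (liked - disliked) / (double)total;
// (4 - 1) / 7 ≈ 0.43 – mildly positive overall
```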

With some of the buckets having so few samplings within them (for example, the 5.30ish bucket has only three samplings), the measure is very sensitive to noise but it’s an interesting diversion nonetheless.

Whisky Fringe Tasting Tracker

Each year the Royal Mile Whiskies Whisky Fringe showcases the wares of the whisky industry in the picturesque setting of Mansfield Traquair. For £20, you have four hours in which to wander round 30 or so stands sampling any of the ~250 whiskies and ~30 rums on offer.

Upon entry you get three things – a printed programme booklet, a biro and a tasting glass. I’ve tended in the past to try to take notes in the booklet, but it requires an unusual manoeuvre of trying to hold the tasting glass and booklet in one hand while scrawling (with less success the later in the day you are) with the other.

So on Thursday night I threw together a quick mobile-friendly website for myself and a few friends to use to track which whiskies we taste on the day and which we particularly like or dislike.

Screenshot of the Whisky Fringe Tasting Tracker UI

The interface is as simple as possible to allow one-handed operation on a phone – phone in one hand, sampling glass in the other. The list of whiskies is presented without much fanfare, with an alphabetical jump-list at the top for navigation. Users can say that they either liked, disliked or simply tasted a given whisky, which is recorded in a MySQL database along with a timestamp such that we can piece together the events of the day after the fact.

It remains to be seen whether it’ll hold up for the day (having effectively had to teach myself PHP in the space of a night for this one, it’s probably not the most robust implementation I’ve ever been involved in)…

Graphing the “Keep F1 On The BBC Petition”

After the BBC announced that they were going to enter into a timeshare of sorts for the live TV rights to Formula 1 from 2012 onwards there’s been a fairly vocal crowd on Twitter, Facebook, the BBC message boards and petition sites making their views known. And those views are, understandably, almost universally negative. At time of writing, the ‘Keep F1 Coverage On The BBC For All Races From 2012’ petition on PetitionBuzz has over 28,000 signatures and nearly 20,000 Facebook likes.

Since I’d intended this to be a technical blog, I figured that there wasn’t much better a way of starting off than hammering out some code over a single malt to track the progress of the petition as a chart:

F1 petition chart showing number of signatures over time, starting 03/08/2011 where over 28,000 signatures were already registered

A cron job running every 15 minutes first scrapes the petition site for the number of signatories, which is stored in a MySQL database against a timestamp. The above graph is then refreshed and saved to disk, such that it’s always in line with the latest data but not forever being generated on-the-fly. The graph is generated using the jpGraph library (the free version) with the actual incantations effectively copied-and-pasted from a sample on the website – I was going for quick and dirty with this one, and good Lord is it that.