2014 Season Review

So with footy finished for the season, it's time to review the performance of our model. Tipping-wise we achieved an accuracy of 70.0%, which is pretty much the benchmark; doing much better than that requires a bit of luck. Betting-wise we managed a profit of 14%, which isn't too shabby, but less than what we expected (or wanted!). The last few rounds of the season were not kind to us.

The ratings of all the teams as they finished the 2014 season can be seen below.

[Figure: postGF — final 2014 team ratings]


I’ve got a few cool ideas to try out that should improve performance for next year. Can’t wait for round 1!

Brownlow Results

Here we present comparisons between our predictions and the results of the 2014 Brownlow.

Recall we made two sets of predictions: in the first we allocated 3, 2 and 1 votes to the three best players in each match, and in the second we allocated to each player the number of votes they were 'expected' to receive on average. The two figures below give the predictions and how they compared to the results, for the first and second sets respectively.

Allocating 3,2,1.

[Figure: compare1 — 3-2-1 predictions vs results]

Allocating expected votes

[Figure: compare2 — expected-vote predictions vs results]

The figure below gives the Kendall rank correlation coefficients between the results and both sets of predictions considering the first 5 to 30 positions.

[Figure: rankcor — Kendall rank correlation coefficients]

However, I believe the Kendall rank correlation is not ideal for measuring the performance of our predictions for two reasons:

1. If we look at, say, the top 5 ranks of our predictions and results, it is possible to get a very low correlation between the two even if both lists contain the same set of players.

2. We care far more about the top positions being correct than the lower ones.
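To make the first point concrete: Kendall's tau only looks at pairwise orderings, so two top-5 lists containing exactly the same players can still score the minimum possible correlation. A minimal pure-Python sketch (the player names are just for illustration):

```python
from itertools import combinations

def kendall_tau(pred, actual):
    """Kendall rank correlation for two rankings of the same items."""
    rank = {player: i for i, player in enumerate(actual)}
    concordant = discordant = 0
    for x, y in combinations(pred, 2):
        # pred ranks x ahead of y; check whether actual agrees
        if rank[x] < rank[y]:
            concordant += 1
        else:
            discordant += 1
    n = len(pred)
    return (concordant - discordant) / (n * (n - 1) / 2)

# Same five players, order exactly reversed:
pred   = ["Ablett", "Selwood", "Dangerfield", "Pendlebury", "Fyfe"]
actual = ["Fyfe", "Pendlebury", "Dangerfield", "Selwood", "Ablett"]
print(kendall_tau(pred, actual))  # -1.0, despite identical top-5 sets
```

So a tau of -1 here says the ordering is maximally wrong, even though the prediction nailed which five players fill the top spots.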

This last figure gives the NDCG (Normalized Discounted Cumulative Gain), which aims to measure the quality of ranked predictions. There are some assumptions made here that may not be reasonable, such as the relative importance weights for each position, but it's something I thought was worth experimenting with.

[Figure: NDCG comparison]
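For reference, NDCG discounts each position's relevance logarithmically and normalises against the ideal ordering, which addresses both objections above. The relevance weights in this sketch are an assumption (I use the player's actual vote tally as the relevance of the player predicted at each position); the figure may use something different:

```python
from math import log2

def dcg(relevances):
    """Discounted cumulative gain: relevance discounted by log2 of position."""
    return sum(rel / log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    """Normalise by the DCG of the ideal (descending) ordering."""
    ideal = sorted(relevances, reverse=True)
    return dcg(relevances) / dcg(ideal)

# Actual season vote tallies of the players we predicted at
# positions 1..5 (made-up numbers for illustration):
votes_in_predicted_order = [22, 28, 17, 25, 14]
print(ndcg(votes_in_predicted_order))  # 1.0 only for a perfect ordering
```

Because the discount shrinks with position, a mistake at position 1 costs far more than the same mistake at position 20, which matches how we actually care about Brownlow predictions.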

Grand Final Preview


Hawthorn (0.7) v Sydney (0.3)


Hawthorn 4.96%  @ 2.35

Looks like the model considers Sydney’s travel disadvantage a pretty big factor. Go Hawks!
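The post doesn't say how the 4.96% stake was derived, but as a rough illustration, here is the expected value and full-Kelly fraction implied by taking the model's 0.7 probability at odds of 2.35. These formulas are standard, not the model's actual staking rule, which is presumably far more conservative (e.g. a fractional Kelly or an edge-shrinking adjustment):

```python
def expected_value(p, odds):
    """Expected profit per unit staked, at decimal odds, win probability p."""
    return p * odds - 1.0

def kelly_fraction(p, odds):
    """Full-Kelly bankroll fraction for a back bet at decimal odds."""
    b = odds - 1.0  # net winnings per unit staked
    return (p * b - (1.0 - p)) / b

p, odds = 0.7, 2.35
print(expected_value(p, odds))   # 0.7 * 2.35 - 1
print(kelly_fraction(p, odds))   # (0.7 * 1.35 - 0.3) / 1.35
```

Taken at face value the model sees a large edge here; a 4.96% stake is a small fraction of full Kelly, which is sensible given how noisy a single-game probability estimate is.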


2014 Brownlow Votes Predictions

I’ve had a bit of a bash at predicting Brownlow votes for the 2014 season. I present two different sets of predictions here. Next year I’ll endeavour to get some better-quality data, which should yield results I’d be far more confident in.

The first set of predictions is based on allocating 3, 2 and 1 votes to what a model suggests were the three best players in each game.
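The allocation itself is straightforward: rank the players in each game by the model's score and hand out 3, 2 and 1. A minimal sketch, where the structure of the per-game scores is my assumption, not the model's actual output format:

```python
from collections import Counter

def allocate_321(games):
    """games: a list of {player: model_score} dicts, one per match."""
    tally = Counter()
    for scores in games:
        # Top three players by model score receive 3, 2 and 1 votes.
        top3 = sorted(scores, key=scores.get, reverse=True)[:3]
        for votes, player in zip((3, 2, 1), top3):
            tally[player] += votes
    return tally

# Two toy matches with made-up scores:
season = [
    {"Ablett": 0.9, "Selwood": 0.7, "Bartel": 0.5, "Hawkins": 0.2},
    {"Fyfe": 0.9, "Ablett": 0.8, "Pavlich": 0.4, "Sandilands": 0.3},
]
print(allocate_321(season).most_common())  # Ablett tops the toy count with 5
```

The weakness of this scheme is that it is all-or-nothing: a player who was narrowly second-best in twenty games polls far fewer votes than the model's confidence would suggest, which motivates the second scheme below.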


And the breakdown for each game:


The second set of predictions is based on allocating to each player the ‘expected’ number of votes, given their performance in each match.


And the breakdown for each game:


Both sets of predictions are in ‘pretty good’ agreement with the bookmakers. I think the models may be too generous towards Ablett, but we’ll see what happens next Monday night!