With the Premier League finished, the result of the pre-season predictions competition will be announced here later today, but yesterday’s interlude of a Eurovision Song Contest forecast also needs to be assessed and compared with others. We therefore scoured the internet and found three other forecasts to compare ours with. Microsoft Bing also appears to have forecast the competition, but only a top five, including the correct winner, was published, which is not enough to do anything with. If anyone can provide me with their full pre-competition ranking, I’ll happily add it here. When you see all sorts of claims about Microsoft Bing’s success in predicting this year’s competition, remember that an assessment should be made of the whole of a prediction, not just of getting the winner right. Cherry-picking the good parts of a forecast is a trick that many pull to promote themselves, but it is not something that we will do. Here it is, full cards on the table, for everyone.
Runners and riders
The Guardian’s data team produced a data-driven forecast and kindly provided me with the full rank order. Australia were not placed due to a lack of data on the first-time entrants, so the Australians are ignored in assessing The Guardian’s success, or lack of it. ESC Chat, a forum dedicated to the Eurovision Song Contest, asked its users to vote, whilst ESC Stats, a site whose name is self-explanatory, used a data-driven approach with five equally weighted factors. The fourth model is, of course, the one produced by @one minute coach and published here on the morning of the contest.
Best and worst
What were the best and worst forecasts from our quartet of prognosticators?
ESC Chat
Best: 4 exactly right (1. Sweden, 9. Israel, 17. Albania, 18. Lithuania)
Worst: Spain were ranked 8th but finished 21st, a difference of 13 places.
One Minute Coach
Best: 6 exactly right (1. Sweden, 2. Russia, 3. Italy, 8. Norway, 15. Romania, 25. France)
Worst: Montenegro were ranked 26th but finished 13th, a difference of 13 places.
ESC Stats
Best: 4 exactly right (1. Sweden, 5. Australia, 7. Estonia, 16. Armenia)
Worst: Serbia were ranked 24th but finished 10th, a difference of 14 places.
The Guardian
Best: Albania, Cyprus and France within one place of their actual ranks.
Worst: Estonia were ranked 24th but finished 7th, a difference of 17 places.
This is too simplistic a basis on which to judge the quartet, but interesting all the same. What about success in the top-3 and top-10?
Top-3 and top-10 accuracy
Looking at the top end, The Guardian was the only one of the quartet not to pick the winner and the only one not to have the correct countries in the top three. Our forecast, published here on the morning of the final, was the only prediction of the quartet to place the top three in the correct order. Moving on to the top-10, The Guardian was the only prediction not to correctly name at least seven of the eventual top-10 finishers, scoring only three. ESC Chat was the best here with eight.
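The top-N check used above is simply a count of how many of a forecast’s predicted top-N countries actually finished in the top N, regardless of order. A minimal sketch, using illustrative country lists rather than the real contest data:

```python
# Sketch of the top-N accuracy check: how many of the predicted top-N
# countries actually finished in the top N (order within the N ignored).
# The example lists below are illustrative, not the real 27-country data.

def top_n_hits(predicted_order, actual_order, n):
    """Both arguments are lists of countries, best first."""
    return len(set(predicted_order[:n]) & set(actual_order[:n]))

pred = ["Sweden", "Russia", "Italy", "Belgium", "Australia"]
act  = ["Sweden", "Russia", "Italy", "Australia", "Belgium"]

top_n_hits(pred, act, 3)  # the top three named correctly -> 3
top_n_hits(pred, act, 5)  # all five named, two out of order -> 5
```

Note that this measure gives no credit for ordering within the top N, which is why placing the top three in the correct order is reported separately above.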
Accuracy of full ranking
A fair assessment of any prediction should take account of the full ranking, though, and not just the top end, so I am very grateful to Cath Levett and her team at The Guardian for giving me their full prediction prior to the contest, as it was not published in full online.
So, how do we do this? A quick-and-dirty method would be to count the number of correct placings and the number within a certain number of placings, say two. There are better methods, though, and the one that James Grayson and I have used for the football predictions, the standard deviation of the differences between actual and predicted ranks, will also be applied here. An explanation of why this is a good method can be found here on my blog.
ESC Chat: 16 of the 27 within two placings of actual rank, 4 exactly right, SD = 4.7
One Minute Coach: 14 within two placings, 6 exactly right, SD = 4.9
ESC Stats: 10 within two placings, 4 exactly right, SD = 5.5
The Guardian: 6 within two placings, 0 exactly right, SD = 8.5
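The two metrics in the list above can be sketched in a few lines. This assumes the population standard deviation of the rank differences; the post does not specify which variant was used, and the five-country data below is a made-up example, not the real 27-entrant rankings:

```python
# Sketch of the full-ranking accuracy metrics: the standard deviation of
# (predicted rank - actual rank), plus the count within n placings.
# Population SD is assumed; the example data is illustrative only.
import math

def rank_sd(predicted, actual):
    """predicted/actual: dicts mapping country -> rank (same keys)."""
    diffs = [predicted[c] - actual[c] for c in predicted]
    mean = sum(diffs) / len(diffs)
    return math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs))

def within_n(predicted, actual, n=2):
    """Count of countries predicted within n places of their actual rank."""
    return sum(abs(predicted[c] - actual[c]) <= n for c in predicted)

# Hypothetical five-country example:
pred = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}
act  = {"A": 1, "B": 4, "C": 2, "D": 5, "E": 3}

rank_sd(pred, act)        # differences 0, -2, 1, -1, 2 -> SD ~ 1.41
within_n(pred, act, n=2)  # all five within two placings -> 5
```

A lower SD means the whole ranking sat closer to reality, which is why it separates the four forecasts more fairly than winner-picking alone.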
ESC Chat the best on the night
So, despite being the only forecast to get the top three in the correct order, ours was actually outperformed by the ESC Chat user vote, which had a lower standard deviation of prediction differences as well as more forecast placings within two ranks of what actually happened. We did get the most exactly correct ranks, though, and the difference in standard deviations is not that big. Amongst the three truly data-driven forecasts, ours was the best. As I have mentioned before, though, a good forecast can just be luck, so little can be concluded definitively here. The Guardian perhaps needs to have a look at its methodology, given how far it was behind the rest of the forecasts.
Bookmakers the best of all though
One group who have to make as good a forecast as possible, as their income depends on it, are the bookmakers, whose forecast can be found here. Last night they had the top five all correct in the correct order, named all of the top-10 finishers in the top-10, got seven nations spot on, and placed 15 of the 27 entrants within two places of their actual rank. Their standard deviation of the difference between predicted and actual rank was 4.2. Their worst call? Montenegro in 26th position when they finished 13th, the same as our poorest prediction here. So, their bar is unsurprisingly somewhat higher than ours, but this was a good start for us, and beating a major newspaper’s data team is a satisfying beginning. Roll on next year.