Over the past few weeks we’ve talked a lot about the brand new algorithm we have designed for Wide Ideas. The story behind Score, which is the name of the new functionality, is a bit interesting. In fact, Score is a result of my hobby of predicting football matches…
Two years ago I asked myself whether it would be possible, in any way, to use Machine Learning techniques to predict the outcome of football matches.
To describe the process briefly: I started by collecting as much data as I could get hold of. I mined data about old games from every source and API I could find. Some of the more important ones were Football-data, Everysport and Betfair. I then took all the data from the old matches, with its corresponding results, quantified it and put it in a database. Finally I used the data to train a Machine Learning model, using it to predict upcoming games.
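The quantification step above can be sketched roughly like this. This is a minimal illustration, not my actual code: the field names (home_shots, travel_distance_km, and so on) and the outcome encoding are assumptions I’m making here for the example.

```python
# Hypothetical sketch of turning one raw match record into a numeric
# feature vector plus a class label, ready for a Machine Learning model.
# The record schema below is illustrative, not the real database schema.

def quantify_match(match: dict) -> list[float]:
    """Turn one raw match record into a fixed-length numeric vector."""
    return [
        float(match["home_shots"]),
        float(match["away_shots"]),
        float(match["home_corners"]),
        float(match["away_corners"]),
        float(match["travel_distance_km"]),  # away team's travel distance
    ]

# Outcome encoded as a class label: 0 = home win, 1 = draw, 2 = away win
OUTCOME_LABELS = {"H": 0, "D": 1, "A": 2}

match = {
    "home_shots": 14, "away_shots": 9,
    "home_corners": 7, "away_corners": 3,
    "travel_distance_km": 412,
    "result": "H",
}
features = quantify_match(match)
label = OUTCOME_LABELS[match["result"]]
```

Every historical match becomes one such (features, label) pair; the collection of pairs is what the model trains on.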
How to measure how well a model performs
Now, the nature of a football game is, of course, that it is unpredictable. I guess that is why we love the game. But I was still a bit obsessed by the naive idea that, with a Machine Learning approach, I would be able to predict games better than I could using my own mind. I knew from experience that I, like most humans, base predictions on emotions rather than facts, and that I am somewhat biased. I know that in the past I quite often placed bets based on a “gut feeling”.
The first question I had to answer was how to measure whether my Machine Learning model was successful or not. I quite quickly came to realize that measuring the percentage of correctly guessed games didn’t say much unless I put it in relation to something else. And the best thing I could come up with to relate the model to was what other people were thinking. The easiest way to assess that was to look at market-regulated odds. So I started comparing how my model would perform if betting on Betfair, because their odds are set by people betting against each other, making the odds a reflection of what the “market” predicts.
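Reading the market’s prediction out of the odds is straightforward. A sketch of the standard conversion, with made-up example odds: the inverse of a decimal odd is an implied probability, and since the inverses sum to slightly more than 1 (the margin, or commission on an exchange), they are normalized.

```python
# Sketch: convert decimal odds into the market's implied probabilities.

def implied_probabilities(odds: list[float]) -> list[float]:
    """Inverse each decimal odd, then normalize away the margin."""
    raw = [1.0 / o for o in odds]
    total = sum(raw)
    return [p / total for p in raw]

# Example decimal odds for home / draw / away (illustrative numbers):
market = implied_probabilities([2.10, 3.40, 3.80])
```

A model “disagrees with the market” when its probability for an outcome is clearly higher than the market’s implied probability for that same outcome.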
So now, two years have passed. Has the model made me rich? No, not at all. Quite soon I realized that the predictions my model made were, for the most part, aligned with the market. Since I use a regression-based model I’m able to grade the strength of the probability of a certain outcome of a game. And at the strongest grades of probability my model gives, it predicts roughly 70% of the games correctly. The problem is that the market performs more or less just as well, making it hard to actually make money out of my model. But, to be honest, I never really thought I would create a money machine. Instead I have come to several insights about the possibilities and limitations of Big Data and Machine Learning.
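The grading idea can be sketched like this: bucket each prediction by its highest predicted probability, then compute accuracy per bucket. The thresholds and the sample data below are made up for illustration.

```python
# Sketch: accuracy at different confidence grades. A prediction counts
# toward a grade when its strongest outcome probability clears the
# grade's threshold.

def accuracy_by_grade(predictions, thresholds=(0.5, 0.6, 0.7)):
    """predictions: list of (outcome_probabilities, actual_outcome_index)."""
    buckets = {t: [0, 0] for t in thresholds}  # threshold -> [correct, total]
    for probs, actual in predictions:
        confidence = max(probs)
        predicted = probs.index(confidence)
        for t in thresholds:
            if confidence >= t:
                buckets[t][1] += 1
                if predicted == actual:
                    buckets[t][0] += 1
    return {t: (c / n if n else None) for t, (c, n) in buckets.items()}

sample = [
    ([0.72, 0.18, 0.10], 0),  # confident, correct
    ([0.65, 0.20, 0.15], 1),  # confident, wrong
    ([0.40, 0.35, 0.25], 0),  # low confidence, correct
]
grades = accuracy_by_grade(sample)
```

With real data, the interesting question is whether the highest grade outperforms the market’s implied probabilities at the same confidence level, not the raw accuracy figure itself.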
How much does a model learn over time?
One of the first things I looked at was whether the model improved over time. In theory, Machine Learning gets better as the amount of data the model can learn from grows, so the predictions should improve. This is something I haven’t seen at all. Two years ago I started with about 2000 games in my database, with quite a limited dataset attached to them. Now I have almost 30000 games, complete with lots of data covering everything from weather and the distances between the teams’ home grounds to shots and corners for and against. Yet, given all this data and the fact that the model has been able to “learn” over time, the predictions still haven’t improved. This has taught me that machine learning only takes you so far in trying to predict the unpredictable.
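The kind of check I mean is a learning curve: train on the first N matches, in chronological order so the model never sees the future, and measure accuracy on the matches that follow. The sketch below uses a deliberately trivial stand-in “model” that just learns the most common outcome, and simulated outcome data; a real model and real matches would take its place.

```python
import random

# Sketch of a chronological learning-curve check. The "model" here is a
# stand-in (majority-class guesser); the outcomes are simulated.

def majority_class(labels):
    """The stand-in training step: learn the most common outcome."""
    return max(set(labels), key=labels.count)

def learning_curve(labels, train_sizes):
    points = []
    for n in train_sizes:
        prediction = majority_class(labels[:n])          # "train" on the past
        test = labels[n:]                                # evaluate on the future
        accuracy = sum(y == prediction for y in test) / len(test)
        points.append((n, accuracy))
    return points

random.seed(0)
# Simulated outcomes: home wins ("H") slightly more common, as in real football.
outcomes = random.choices(["H", "D", "A"], weights=[45, 27, 28], k=2000)
curve = learning_curve(outcomes, [200, 500, 1000])
```

If more data were helping, accuracy would climb with the training size; in my case the curve stayed flat.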
Another important lesson I have learned is that the power of machine learning still, in many ways, lies in its ability to make unbiased generalizations. Over the past two years I was very curious to see if my model could predict when winning or losing streaks were about to be broken. If it could, for instance, predict when Barcelona would finally lose after winning 10 straight games. If the model could find small signs that would indicate some kind of anomaly. Well, it has turned out not to be very good at that.
What I found instead was that it was really good at betting against overvalued teams over time. Last season, for instance, I saw how my model quite often predicted against Borussia Dortmund while the market made a different prediction. Dortmund ended up having a bad season, making my model really successful here in relation to the market. This season I have seen the same with teams like Liverpool and Chelsea. So the lesson learned is that people tend to make decisions based on emotions. Liverpool and Dortmund are teams liked by lots of people, and at times you make predictions with your heart instead of your brain. My Machine Learning model does not.
Last but not least, I guess I learned that making better predictions than the market is hard. Still, when I started looking at what I actually had achieved, instead of at what I hadn’t, I realized some quite amazing things. With a simple Python program of less than 10000 lines of code, I had still made something that performed just as well as the market. How many man-hours must lie behind the bookmakers’ odds models and predictions? My model is also able to pick out interesting bets on a weekly basis, just as any newspaper or expert does, but without all the manual labour behind theirs. So the main insight is that by making generalizations you might not be able to find the one bet that makes you rich, but placed in the correct context it may save lots of time.
With these insights I started to look at another project I’ve been involved in for the last 5 years: the idea platform Wide Ideas. What I wanted to do was to look at the ideas companies gather from their employees and try to predict whether an idea would be implemented or not.
We started by looking at the ideas just as if they were football matches. We quantified the data, but instead of shots and weather we looked at how many people had interacted with an idea, and in what way. For reasons of discretion I won’t go into details, but the outcome was quite similar to that of the football model described above. We can now make quite a good prediction of whether an idea will be implemented or not, given the data the idea contains. This is a way of generalizing the ideas, answering the question: in general, what are the factors behind a good idea?
However – can we find a good idea that doesn’t follow the general patterns of a successful idea? No, not really – not yet at least. Still, for the product, and given an organisation that creates, say, 10000 ideas per year, finding any good idea is really hard and time-consuming. So just going from 10000 ideas to 100 probably good ones and visualizing the result saves a lot of time. And this is where Machine Learning has given us the most gain.
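That 10000-to-100 step is just a top-k filter over the model’s scores. A minimal sketch, where the idea identifiers and predicted probabilities are invented for the example:

```python
import heapq

# Sketch of the time-saving step: from thousands of scored ideas, keep
# only the ones the model considers most likely to be implemented.

def shortlist(scored_ideas, k=100):
    """scored_ideas: list of (idea_id, predicted_probability) pairs."""
    return heapq.nlargest(k, scored_ideas, key=lambda item: item[1])

# 10000 illustrative ideas with synthetic scores:
ideas = [(f"idea-{i}", (i * 37 % 100) / 100) for i in range(10000)]
top = shortlist(ideas, k=100)
```

The value isn’t in the ranking algorithm, which is trivial, but in the scores feeding it: the generalization the model learned is what lets 10000 ideas shrink to 100 worth a human’s attention.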
Predicting the unpredictable
To sum up my thoughts: we see companies gathering lots of data, promising that they might be able to predict anything from finding cancer to making self-driving cars. And they might. Especially where generalization saves time. Medical applications are, I think, a good example of this. Looking at pictures of birthmarks, a Machine Learning model can pick out the ones most likely to be cancerous from a large set of pictures, saving doctors important time and money.
But a lot of the things companies may try to predict have an unpredictable nature. Human behaviour is one. In what way is human behaviour predictable? How far can we get in predicting human behaviour if it essentially is unpredictable? We will be able to generalize, placing people into different categories based on what they like to eat, watch or do, but honestly, who likes to be generalized?
What the past two years have taught me is that we may, in some way, be seeing a Big-Data bubble right now. Will Big Data really find the anomalies, or will it just be really good at making generalizations? I believe that many of the promises companies make tend to be that they will find the needle in the haystack, while the results most often are generalizations. I guess one of the reasons they do this is that their value as companies is right now often based on the amount of data they possess, not on what they do with it. And if they were honest about the fact that they make generalizations – good ones, but still generalizations – their value would decrease. I hope we can see a future where companies’ value is based on what they do with the data rather than on how much data they have. This will require transparency and honesty, just as I’ve been with my football model.
So, until someone proves me wrong, I’m not convinced of the power of Big Data in general. I only believe in it where the use cases are clear, and some of the most obvious and best ones are within healthcare. The risk otherwise is that you end up with so much data that the sheer amount suffocates every possibility of making sense of it in any other way than vast generalizations.