The Count of Monte Carlo

I've never been to Monte Carlo. But I kinda like the music.

Monte Carlo is the stuff of James Bond and European royalty. I don't want to be presumptuous, but most of you reading probably will never go to Monte Carlo. But I'm here to show you how you can bring all the intrigue and prestige of Monte Carlo to your mother's basement or wherever it is you spend your free time playing around with sports statistics on your computer.

Monte Carlo simulations have wide applicability in science and engineering. They're used to model the spread of diseases, radiation, traffic, and many other natural phenomena. At a really basic level, the Monte Carlo technique assumes that while the outcome of any one particular event is essentially random, the collection of a large number of those random events results in behavior that is quite predictable. Roll a (non-loaded) die once and you have no idea which of the six faces will come up. The individual event is random. But if you track the results of the individual rolls, you will see that as you accumulate more and more of them, the behavior of the die becomes very predictable. Given a large number of rolls, each face, 1 through 6, will come up 16.67% of the time.
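You can watch that convergence happen with a few lines of code. Here's a minimal sketch in Python (my illustration, not part of the original spreadsheet work):

    import random
    from collections import Counter

    # Roll a fair six-sided die a large number of times.
    rolls = 1_000_000
    counts = Counter(random.randint(1, 6) for _ in range(rolls))

    # Each face settles in around 1/6, i.e. about 16.67%.
    for face in range(1, 7):
        print(f"Face {face}: {counts[face] / rolls:.2%}")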

To apply this to college football: we've discussed in the past the randomness of a team's performance on a given play or even in a given game. TCU beats Oklahoma in Norman, then loses the next week to SMU. But over the course of a season, those random fluctuations average out to some mean level of performance, somewhere between the outliers. If the 2005 TCU team played Oklahoma 100 times, it might have lost 75 of them. And had they played SMU 100 times, they might have won 75. If those games were decided by coin flips at those odds, TCU beating Oklahoma was the equivalent of flipping two coins and getting heads twice, while any other result (heads-tails, tails-heads, tails-tails) would have meant a loss to Oklahoma.

That pair of games produced two consecutive improbable results. Improbable, but not impossible. And fundamentally random. But using information from the 10-12 other rolls of the dice for those two teams that year, we can actually begin to estimate just how unlikely those results were.

So how do we determine the odds of victory in a particular game?

Let's assume that a team's performance is "normal." That is, a team is just as likely to play a touchdown worse than its average performance as it is to play a touchdown better. Likewise, it is just as likely to be two touchdowns worse than average as two touchdowns better, though either is less likely than being only one touchdown off. In other words, its performance fits the famous "bell curve."

Take your favorite performance metric. Obviously, I'm partial to DUSHEE, but you can use Sagarin, the SRS, or whatever metric ESPN is using these days to assess their win probabilities.

At the end of the 2005 season, using the DUSHEE metric and ignoring the effect of yardage, TCU was roughly 15 points better against its opponents than an average team would have been. Oklahoma was roughly 10 points better than average. So based on mean performance alone, TCU beating Oklahoma, even in Norman with the customary 3-4 points of home-field advantage for the Sooners, was not much of an upset. In reality, the game was pretty close to a coin flip.

Also that year, TCU's standard deviation about that average +15 performance was about 12.5 points. Worded another way, roughly two-thirds (about 68%) of their performances that year fell between 2.5 and 27.5 points (+15 - 12.5 and +15 + 12.5) better than the average team's performance. Go out another standard deviation on either side, and you would capture about 95% of the team's performances.

So that TCU team was good. Ranked 17th in the country based on DUSHEE. But it wasn't so good that a below-average performance wasn't possible. In fact, that TCU team had roughly a 10% chance of producing a below-average performance in any given game.
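If you want to check that figure yourself, the normal distribution's cumulative function gives it directly. A quick sketch in Python using the numbers above (+15 mean, 12.5 standard deviation); the exact value comes out closer to 11-12%:

    from statistics import NormalDist

    # Probability that a team averaging +15 with SD 12.5 turns in a below-average game,
    # i.e. the chance its performance draw lands below zero.
    tcu_2005 = NormalDist(mu=15, sigma=12.5)
    print(f"{tcu_2005.cdf(0):.1%}")  # roughly 11.5%, call it about 10%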

Oklahoma, on the other hand, had a standard deviation of 10.6 points. They were a little more consistent than TCU.

With three things, we can generate a Monte Carlo simulation of TCU-vs.-OU outcomes: a random number generator (the die in our earlier analogy), each team's average performance, and each team's standard deviation about that average. Want the 2005 TCU team to play the 2005 OU team 100 times? 1,000 times? A billion times?

The equation that governs the likelihood of a random event distributed about a mean is an ugly one:

f(x) = (1 / (σ√(2π))) · e^(-(x - μ)² / (2σ²))

but Bill Gates gives us a handy little function in Excel that takes care of the math for us (the NORMINV function, for those of you playing along at home). It works off the cumulative version of that curve: hand it F (the roll of the dice, a random number between 0 and 1), mu (the average performance, +15 for the 2005 TCU team), and sigma (the standard deviation), and it hands back x, a single simulated performance.
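If you'd rather not fire up Excel, Python's statistics module has the same inverse built in as NormalDist.inv_cdf. A minimal sketch, using TCU's 2005 numbers from above:

    import random
    from statistics import NormalDist

    # One "roll of the dice": a uniform random F between 0 and 1 ...
    F = random.random()

    # ... mapped through the inverse normal, the same job NORMINV(F, mu, sigma) does in Excel.
    simulated_tcu = NormalDist(mu=15, sigma=12.5).inv_cdf(F)
    print(simulated_tcu)  # one simulated TCU performance, in points relative to average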

On a spreadsheet, I've got 1030 games "simulated" between those OU and TCU teams. A screen shot of the first 31 of those games appears below:

[Screenshot: the first 31 simulated TCU-vs.-OU games from the spreadsheet]

Column O is the randomly generated F for TCU and Column P is the simulated performance result for TCU. Columns Q and R are the corresponding values for OU. Column S is the "outcome" of the simulated game, taking TCU's performance and subtracting OU's performance. The value can be viewed as the margin of victory for that simulated game. A positive value means TCU won; a negative value means OU won. Cell P5 counts the number of times TCU won and divides by the total number of simulated games; Cell R5 computes the same number for OU.

So you can see, our Monte Carlo simulation predicts that TCU had a 53% probability of beating OU that year. Slightly better than a coin flip. But looking down through the first 31 simulated "games" out of the 1030, you can see that OU "won" 20 of them, including 7 in a row at one stretch. At another point, TCU "beat" OU in 6 of 7. In the top right of the spreadsheet are the maximum, minimum, and average margin of victory (MOV) for the 1000+ game series. So in one of those games TCU beat OU by 65; in another, OU beat TCU by 80. On average, TCU beats OU by 2.
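For anyone who would rather skip the spreadsheet, here is a rough Python equivalent of the whole series. The function name and the 3.5-point home-field bump for OU are my own assumptions (the post mentions 3-4 points for Norman, but the spreadsheet formulas aren't shown), and the winning percentage will wobble a point or two around the low 50s from run to run:

    import random
    from statistics import NormalDist

    def simulate_series(mu_a, sigma_a, mu_b, sigma_b, edge_b=0.0, games=1030):
        """Simulate a series between team A and team B.

        Each team's performance is a draw from its own bell curve; edge_b is any
        bonus handed to team B (home field, say). The margin is A minus B,
        like Column S in the screenshot."""
        a = NormalDist(mu=mu_a, sigma=sigma_a)
        b = NormalDist(mu=mu_b, sigma=sigma_b)
        margins = [a.inv_cdf(random.random()) - (b.inv_cdf(random.random()) + edge_b)
                   for _ in range(games)]
        wins_a = sum(m > 0 for m in margins)
        return wins_a / games, max(margins), min(margins), sum(margins) / games

    # 2005 TCU (+15, SD 12.5) at 2005 OU (+10, SD 10.6), spotting OU 3.5 points at home
    win_pct, best, worst, avg = simulate_series(15, 12.5, 10, 10.6, edge_b=3.5)
    print(f"TCU wins {win_pct:.0%} of the series")
    print(f"MOV max/min/avg: {best:.0f} / {worst:.0f} / {avg:+.1f}")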

If we repeat this process for the 2005 TCU-SMU matchup: SMU's mean performance that year was about 7.5 points below average, with a standard deviation of almost 26 points (extremely inconsistent). Plug those numbers into our Monte Carlo simulator and SMU had about a 23% chance of winning that game.
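Feeding SMU's numbers into the same hypothetical simulate_series sketch from above lands in the same neighborhood (low twenties; handing SMU a home-field bump nudges it toward the 23% quoted here):

    # 2005 TCU (+15, SD 12.5) vs. 2005 SMU (-7.5, SD 26)
    smu_win_pct = 1 - simulate_series(15, 12.5, -7.5, 26)[0]
    print(f"SMU pulls the upset {smu_win_pct:.0%} of the time")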

I don't know if that makes anybody feel any better or worse about losing to SMU that year. Probably not. Maybe Kenny Rogers can help soothe the pain ...

https://www.youtube.com/watch?v=SVlRx2U-xUA


