Thinking About the 2017-18 Model(s)

I can distill the differences between the 2015-16 and 2016-17 models pretty simply:

  1. 2015-16 factored overtime and tie games into both the K factors and the actual values.  Tie games were assigned actual values of 0.50 apiece, and overtime games were assigned 0.75 for a win and 0.25 for a loss.
  2. 2016-17 factored overtime games into the K factors only.
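As a quick sketch of how those values would plug into an Elo-style update — the logistic expected-value curve and the 400-point scale below are my assumptions, since the post doesn't spell out BELOW's internals:

```python
# Sketch of the 2015-16 value assignments.  The expected-value curve and
# 400-point scale are assumed, not stated in the post.

def expected_value(rating_a: float, rating_b: float) -> float:
    """Expected score for team A against team B (standard Elo curve)."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# 2015-16 treatment: overtime results are discounted, ties split evenly.
ACTUAL_2015_16 = {
    "win": 1.0,
    "ot_win": 0.75,
    "tie": 0.50,
    "ot_loss": 0.25,
    "loss": 0.0,
}
```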

Let’s take some examples from last season.

  • Week 5: LSSU (EV of 62.54%) lost in 5×5 overtime.  They lost 19 points (out of a possible 30).  The previous season’s model would’ve had them lose 15.  ABOVE 2016 was harsher to LSSU.
  • Week 8: UAA (10.95%) hosted MTU.  UAA got a shootout win, and it garnered them 9 points (out of a possible 10).  The 2016-17 model assigned a maximum of only 10 points to such games.  In the 2015-16 model, that game would’ve been a tie, and UAA would’ve gotten 16 points (out of a maximum of 40).  ABOVE 2016 was nicer to MTU.
  • Week 15: LSSU (EV of 60.35%) won in 3×3 overtime, garnering them 8 points (out of a possible 20).  The previous season’s model would’ve had them lose four points.  ABOVE 2016 was nicer to LSSU.
  • Week 16: BGSU (EV of 57.99%) hosted UAF in a game that was decided in a standard 5×5 overtime.  BGSU garnered 13 points (out of 30).  The previous season’s model would’ve seen BGSU gain just seven points (out of 40).  ABOVE 2016 was nicer to BGSU.
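The numbers in these examples are consistent with the usual Elo update, points = K × (actual − expected), with the K values inferred from the maxima above.  A quick check:

```python
def point_change(k: float, actual: float, expected: float) -> float:
    """Elo-style point swing: K * (actual - expected)."""
    return k * (actual - expected)

# Week 5: LSSU (EV 62.54%) loses in 5x5 overtime.
print(round(point_change(30, 0.0, 0.6254)))   # -19 under 2016-17 (K=30, loss=0.0)
print(round(point_change(40, 0.25, 0.6254)))  # -15 under 2015-16 (K=40, OT loss=0.25)

# Week 8: UAA (EV 10.95%) wins in a shootout.
print(round(point_change(10, 1.0, 0.1095)))   # 9 under 2016-17 (K=10, win=1.0)
print(round(point_change(40, 0.50, 0.1095)))  # 16 under 2015-16 (K=40, tie=0.50)
```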

These comparisons are a little difficult, I admit.  For one thing, 2015-16 had no goal-differential bonus: every game had a maximum of 40 points to assign, and the teams that collected most of them were the ones that pulled off big upsets as scored by BELOW differentials.  But the net result is that the goal differentials and the assumption that a win was a win was a win made the two systems diverge, and I’m not sure it was for the better.

What I think I’m going to try is something like the following:

  1. The 2015-16 overtime win/loss/tie expected-value treatment comes back in.  I think the model should reflect the positive value of a team BELOW says is weaker taking the game to the 61st minute, or even to a shootout.
  2. Remove the three-goal bonus, which I’ve only ever half-heartedly defended.  There’s value in tracking multi-goal wins, but an upset by three or more goals happened just four times last season in the 34 games decided by that margin.

Before I dive back into writing the model, I’ll generate a canonical BELOW calculation from 2013 forward using this model:

  1. Teams get an initial BELOW based on their total winning percentage in the 2012-13 season.
  2. After each season, a team’s BELOW reverts to the mean by 50%.  If you finished 2012-13 with a BELOW of 1400, you start 2013-14 with a BELOW of 1450.
  3. For every game, take their current BELOW rating to develop an expected value.
  4. Use the ABOVE model for 2017-18 to assign points as follows:
    1. Multi-goal wins have a maximum of 50 BELOW points.
    2. One-goal wins in regulation are assigned 40 BELOW points.
    3. Wins in a standard overtime are assigned 30 BELOW points.
    4. If the game’s start date is 2016-10-01 or later, assign 20 BELOW points to the winner in the bogus overtime sessions.
    5. Wins for recalculating BELOW in regulation time are from a 1.0 actual value.  Losses receive a 0.0 actual value.
    6. Wins for recalculating BELOW in standard overtime are given a 0.75 actual value, while losses receive a 0.25 actual value.
    7. Any game still tied after 65:00 has each team assigned an actual value of 0.50.
  5. Repeat through the 2017 Broadmoor title game.
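The point assignments and actual values above can be sketched roughly as follows.  The 1500 league mean and the Elo-style update are my assumptions (the mean is implied by the 1400 → 1450 reversion example), and so is treating a bogus-overtime game as a 0.50-0.50 tie for recalculation, since any such game is still tied after 65:00:

```python
from datetime import date

MEAN = 1500  # assumed league mean, implied by the 1400 -> 1450 example

def k_factor(margin: int, finish: str, start: date) -> int:
    """Maximum BELOW points under the proposed 2017-18 rules.

    finish is "regulation", "standard" (5x5 OT), or "bogus" (3x3/shootout).
    """
    if finish == "bogus" and start >= date(2016, 10, 1):
        return 20  # bogus OT sessions; earlier seasons didn't have them
    if finish == "standard":
        return 30
    return 50 if margin >= 2 else 40  # multi-goal vs one-goal regulation wins

def actual_values(finish: str) -> tuple:
    """(winner, loser) actual values for recalculating BELOW."""
    return {
        "regulation": (1.0, 0.0),
        "standard": (0.75, 0.25),
        "bogus": (0.50, 0.50),  # still tied after 65:00, so each side gets 0.50
    }[finish]

def update(rating: float, k: int, actual: float, expected: float) -> float:
    """Elo-style update (assumed): new rating = old + K * (actual - expected)."""
    return rating + k * (actual - expected)

def season_reversion(rating: float) -> float:
    """Revert 50% toward the mean between seasons: 1400 -> 1450."""
    return rating + (MEAN - rating) * 0.5
```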

Then I can apply an eye test to how well BELOW correlates with teams making the playoffs, potentially adjusting the BELOW point assignments along the way.  Then I can take that model and write a script to run a Monte Carlo simulation.
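For the Monte Carlo step, something like this minimal sketch would do — the team names, ratings, and winner-take-all framing in the usage note are hypothetical placeholders, and the Elo curve is again an assumption:

```python
import random

def expected(ra: float, rb: float) -> float:
    """Standard Elo expected value (an assumption about BELOW's curve)."""
    return 1 / (1 + 10 ** ((rb - ra) / 400))

def simulate(ratings: dict, schedule: list, trials: int = 10_000) -> dict:
    """Count how often each team tops the simulated standings.

    Each game is decided by a coin flip weighted by the home team's
    expected value against the visitor.
    """
    titles = {team: 0 for team in ratings}
    for _ in range(trials):
        wins = {team: 0 for team in ratings}
        for home, away in schedule:
            if random.random() < expected(ratings[home], ratings[away]):
                wins[home] += 1
            else:
                wins[away] += 1
        titles[max(wins, key=wins.get)] += 1
    return titles
```

With a hypothetical `ratings` dict like `{"MTU": 1560, "BGSU": 1540}` and the remaining schedule, `titles[team] / trials` gives each team's share of simulated first-place finishes.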

Off to do some data entry.