View Full Version : ding dong the wisconsin effect is dead...at least to kenpom



uh_no
10-21-2013, 08:06 PM
If you've been browsing historical kenpom rankings lately, you might have noticed they have changed....the man himself has rewritten history!

according to his blog

http://kenpom.com/blog/index.php/weblog/entry/pomeroy_ratings_version_2.0

he has started to diminish the returns for beating up on overmatched teams....

so as I understand it, the further ahead of your opponent you're ranked, the less an impending blowout will factor into your overall ranking....which, as he himself points out, means Wisconsin won't shoot to the top of the rankings by beating up on cupcakes.

Sounds good to me....IMO the best just got better
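
just to illustrate the shape of the idea (this is purely my own toy sketch in Python -- the game_weight function and the scale are made up, not anything from his blog), the weight a game gets could fall off as the rating gap between the two teams grows:

# my own toy illustration, NOT Pomeroy's actual formula: the farther apart
# two teams are rated, the less a blowout can move the better team's rating
import math

def game_weight(rating_gap, scale=15.0):
    """~1.0 for evenly matched teams, decaying toward 0 as the gap grows."""
    return math.exp(-abs(rating_gap) / scale)

for gap in (0, 5, 15, 30, 45):
    print(f"rating gap {gap:>2}: weight {game_weight(gap):.2f}")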

Kedsy
10-21-2013, 09:42 PM
If you've been browsing historical kenpom rankings lately, you might have noticed they have changed....the man himself has rewritten history!

according to his blog

http://kenpom.com/blog/index.php/weblog/entry/pomeroy_ratings_version_2.0

he has started to diminish the returns for beating up on overmatched teams....

so as I understand it, the further ahead of your opponent you're ranked, the less an impending blowout will factor into your overall ranking....which, as he himself points out, means Wisconsin won't shoot to the top of the rankings by beating up on cupcakes.

Sounds good to me....IMO the best just got better

He's also using the new algorithm retroactively, so the (public) historical numbers on his website reflect the new system and are different from his numbers at the time. Apparently somewhere on the pay portion of his site the old numbers remain.

The good news is Duke still finished #1 in KenPom in 2010 (also 2004, but we dropped from #1 to #2 in 2006). The bad news is retroactive data fitting doesn't necessarily show the predictive power of a system, e.g., no way can he reasonably take credit for "predicting" Florida as national champ in 2006, despite the Gators holding the #1 spot in the "current" 2006 ratings.

uh_no
10-21-2013, 10:19 PM
He's also using the new algorithm retroactively, so the (public) historical numbers on his website reflect the new system and are different from his numbers at the time. Apparently somewhere on the pay portion of his site the old numbers remain.

The good news is Duke still finished #1 in KenPom in 2010 (also 2004, but we dropped from #1 to #2 in 2006). The bad news is retroactive data fitting doesn't necessarily show the predictive power of a system, e.g., no way can he reasonably take credit for "predicting" Florida as national champ in 2006, despite the Gators holding the #1 spot in the "current" 2006 ratings.

I don't think he's trying to take credit for that. I would imagine the predictive metric he uses depends only on how well his system predicted results based on information known up to that point.

BD80
10-21-2013, 10:58 PM
He's also using the new algorithm retroactively, so the (public) historical numbers on his website reflect the new system and are different from his numbers at the time. Apparently somewhere on the pay portion of his site the old numbers remain.

... .

Hold on one cotton picking minute. Are you saying this guy is manipulating numbers to make them say what he wants them to say?

Inconceivable

ForkFondler
10-21-2013, 11:07 PM
Hold on one cotton picking minute. Are you saying this guy is manipulating numbers to make them say what he wants them to say?

Inconceivable

Nah, he's manipulating his mathematical model to fit the data. Not a bad idea, really.

azzefkram
10-21-2013, 11:12 PM
Inconceivable

You keep using that word. I do not think it means what you think it means.

Kedsy
10-21-2013, 11:32 PM
Nah, he's manipulating his mathematical model to fit the data. Not a bad idea, really.

Actually, in many predictive models data fitting is a very bad idea.


You keep using that word. I do not think it means what you think it means.

You beat me to it.

uh_no
10-21-2013, 11:36 PM
Actually, in many predictive models data fitting is a very bad idea.

not when you can demonstrate that the new model would have had stronger predictive power over a decade of samples than the current one....or should we go back to predicting the weather based on the movement of the stars as well?

I would agree with you if there were no reasoning behind the change in methodology, but the change is clearly defensible for both intuitive and numerical reasons

Kedsy
10-21-2013, 11:46 PM
I would agree with you if there were no reasoning behind the change in methodology, but the change is clearly defensible for both intuitive and numerical reasons

I haven't studied exactly what he did, so I can't say how defensible his methodology is. I do know that when you create predictive models for stock prices, for example, you of course backtest after you create the model. If you then change the model to fit the data, even if you have a reason, most of the time you more or less scuttle whatever predictive power the model may have had. I learned that the hard way.

I'm not saying his changes weren't good ones. I'm suggesting that publishing the new model's retroactively refit numbers for past seasons compromises our ability to assess how predictive the model may be in the future.
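
What I mean by backtesting properly is something like this (a toy walk-forward sketch with made-up margins, nothing to do with his actual system): the model only gets scored on games it wasn't tuned on.

# toy walk-forward check, purely illustrative (made-up numbers, not real data):
# tune on earlier seasons, then measure error only on seasons the fit never saw
from statistics import mean

games = [(2010, 8), (2010, 12), (2011, 6), (2011, 10), (2012, 9), (2013, 4)]  # (season, margin)

def fit(train):
    """stand-in 'model': just the average margin seen so far"""
    return mean(m for _, m in train)

def out_of_sample_error(games, first_test_season=2012):
    errors = []
    for season, margin in games:
        if season < first_test_season:
            continue
        train = [g for g in games if g[0] < season]   # only earlier seasons allowed
        errors.append(abs(fit(train) - margin))       # error on a game the fit never saw
    return mean(errors)

print(out_of_sample_error(games))   # this, not a retroactive refit, estimates predictive power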

ForkFondler
10-22-2013, 07:18 AM
I haven't studied exactly what he did, so I can't say how defensible his methodology is. I do know that when you create predictive models for stock prices, for example, you of course backtest after you create the model. If you then change the model to fit the data, even if you have a reason, most of the time you more or less scuttle whatever predictive power the model may have had. I learned that the hard way.

I'm not saying his changes weren't good ones. I'm suggesting that publishing the new model's retroactively refit numbers for past seasons compromises our ability to assess how predictive the model may be in the future.

I think it just means the old model was wrong. Whether the new model is really any better remains to be seen.

CameronBlue
10-22-2013, 07:40 AM
Hold on one cotton picking minute. Are you saying this guy is manipulating numbers to make them say what he wants them to say?

Inconceivable

Kenpom and Mendel, two peas in a pod. (It's considered by some that Mendel cooked the results of his experiments just a tad.)

tbyers11
10-22-2013, 09:21 AM
Kenpom and Mendel, two peas in a pod. (It's considered by some that Mendel cooked the results of his experiments just a tad.)

He was hungry and he needed a vegetable side dish for his schnitzel. ;)

Back to the topic at hand. I think this is a great tweak to Pomeroy's model. I feel this is the most relevant paragraph from the blog post on the free part of his site:

The result is that games perceived by the system as big upsets get the most weight, while the influence of expected lopsided wins is minimized. For instance, last season’s non-conference games involving Grambling would be largely ignored. Whether a team beat the Tigers by 30 or 60 would make little difference in its rating.

Basically, if you were expected to beat a team by 30 and you beat them by 50, it doesn't improve your rating much. However, if you were supposed to beat a team by 2 and you beat them by 30, it still helps your rating. Large margins of victory are still taken into account, but not (or much less so) if they were expected. We'll see if it improves the model's predictive ability.
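
A rough way to picture it (my own toy numbers and formula -- the margin_credit function, the scale, and the cap below are all made up, not Pomeroy's math): the margin credit gets discounted by how lopsided the game was expected to be.

# toy illustration of the two scenarios above (made-up weighting, NOT Pomeroy's formula):
# expected blowouts earn almost no extra credit, surprising margins still count
import math

def margin_credit(expected_margin, actual_margin, scale=10.0, cap=25):
    weight = math.exp(-abs(expected_margin) / scale)   # ~1 for toss-ups, ~0 for expected routs
    capped = max(-cap, min(cap, actual_margin))        # huge margins are capped regardless
    return weight * capped

print(margin_credit(expected_margin=30, actual_margin=50))  # ~1.2: barely moves the rating
print(margin_credit(expected_margin=30, actual_margin=30))  # ~1.2: win by 30 or by 50, same story
print(margin_credit(expected_margin=2,  actual_margin=30))  # ~20.5: the unexpected rout counts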