
View Full Version : KenPom versus the RPI



Olympic Fan
02-17-2012, 06:36 PM
It's become fashionable to bash RPI and praise Ken Pom. And to be clear, I admire Ken Pom's tempo-based formulas for the illumination they provide.

But are Pomeroy's rankings better than the RPI rankings? Really?

Compare the two rankings with the two voter polls, which, right or wrong, seem to offer the best gauge for the top 10 teams.

The RPI top 10 is (with AP/coaches rankings):

1. Syracuse (2/2)
2. Duke (5/4)
3. Kentucky (1/1)
4. Michigan State (7/8)
5. UNC (8/7)
6. Baylor (9/10)
7. Kansas (4/5)
8. Ohio State (6/6)
9. Missouri (3/3)
10. Southern Miss (unr/unr)

Georgetown (10/9) is the only top 20 team that misses the list ... they are 13 in the RPI. The only real outlier in the RPI top 10 is No. 10 Southern Miss. That's not outrageous -- they are 22-4 and tied with Memphis for the lead in Conference USA.

Compare that with Ken Pom's top 10:
1. Kentucky (1/1)
2. Ohio State (6/6)
3. Michigan State (7/8)
4. Kansas (4/5)
5. Syracuse (2/2)
6. Wisconsin (unr/unr)
7. North Carolina (8/7)
8. Missouri (3/3)
9. Wichita State (unr/unr)
10. St. Louis (unr/unr)

New Mexico (unr/unr) is No. 11. Duke (5/4) is No. 14 and Baylor (9/10) is No. 15.

His list looks okay in the top five (as does the RPI's), but his second five is out there. Pomeroy wrote a blog entry last month just shrugging his shoulders over the bizarrely high ranking for a so-so Wisconsin, essentially admitting that it's a glitch in the system. Wichita State, St. Louis and New Mexico are (like Southern Miss) strong mid-majors, but nobody in either poll has any of them in the top 25 ... much less the top 10.

I'm not suggesting that Pomeroy's rankings are without value or that the RPI is this great predictor, or even a better predictor. I just think it's suddenly become fashionable in journalistic circles to trash the RPI ... and to laud Pomeroy -- as Eamonn Brennan of ESPN did on Twitter last night. But as Brennan admitted after an e-mail exchange with an NCAA official, he uses the RPI in his bubble report.

I guess the point of my post is not to get carried away with any predictive poll ... either good or bad. The RPI is a big deal because the committee relies on it heavily -- not in the sense that they say, oh, N.C. State is 49 and Murray State is 50, therefore N.C. State should get the last at-large spot. What they do use it for is to look and say, well, N.C. State has four top 100 wins and Murray State has just three top 100 wins. They look at top 50 wins ... road wins ... non-top 100 losses -- all based on the RPI.

In the past, the human polls -- the AP and the coaches' -- have been better predictors of the top seeds than either the RPI or Pomeroy. Both computer polls (along with Sagarin's) are useful to a degree.

superdave
02-17-2012, 06:56 PM
I think Kedsy or someone demonstrated how a team could game the RPI by playing a ton of tough teams to pad its strength of schedule, even while winning none of those games.

I'd say KenPom can tell you a lot more about a team than RPI because it uses more data to make its rankings and produces a clear picture of the team. But that's not to say the rankings are better or worse, just that it tells you more.

Another issue is that in college basketball anyone can beat anyone, so neither is a great predictor of future tournament success. They can both tell you where you've been and what you might see, but as Butler and others prove every year, anyone can get hot.

Chris Randolph
02-17-2012, 07:51 PM
I'm curious as to the RPI and KenPom rankings from the last 3 seasons of Final Four teams and National Champions of that season going into the NCAA Tourney. Anyone have this data?

Bob Green
02-17-2012, 08:01 PM
Anyone have this data?

Google.

Reilly
02-17-2012, 08:58 PM
It's become fashionable to bash RPI ... I just think it's suddenly become fashionable in journalistic circles to trash the RPI ... ....

Scott Van Pelt had a long-ish anti-RPI rant on his radio show this week.

Reilly
02-17-2012, 09:02 PM
1 Kentucky
2 Ohio State
3 Syracuse
4 Kansas
5 Michigan State
6 Missouri
7 North Carolina
8 Wisconsin
9 Indiana
10 Duke

Reilly
02-17-2012, 09:06 PM
1 Kentucky
2 Ohio State
3 Kansas
4 Michigan State
5 Syracuse
6 North Carolina
7 Duke
8 Indiana
9 Wisconsin
10 Wichita State

Reilly
02-17-2012, 09:11 PM
Duke, in the various systems:

2 - RPI
4 - coaches
5 - AP
7 - sports-reference
10 - Sagarin
14 - Kenpom

Wander
02-17-2012, 09:32 PM
I think you're missing part of the point. The RPI may better match our intuition about which teams are good, and it may even better match the reality of which teams are better - but it's still arbitrary. Here's my theoretical ranking system: take the AP poll every week, switch the #8 and #13 teams around, and replace #24 with Wake Forest. Such a ranking system would probably have better overall predictive value than either the RPI or kenpom... but that doesn't mean it makes sense.

The RPI is a fundamentally arbitrary system. Why does your record count for 25% instead of 20% or 30%? Kenpom isn't perfect, but all the factors have some sort of basketball or mathematical justification to them. That's the point. Even if the predictive power isn't perfect - or even if it happens to be worse than the RPI - it's still a tool that makes sense.
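For illustration, the conventional RPI is just a fixed weighted sum, small enough to sketch in a few lines of Python (this ignores the home/road game weightings the NCAA later bolted on; the 25/50/25 split is exactly the arbitrary part I mean):

def rpi(wp, owp, oowp, weights=(0.25, 0.50, 0.25)):
    # Conventional RPI: a weighted sum of your winning percentage (WP), your
    # opponents' winning percentage (OWP), and your opponents' opponents'
    # winning percentage (OOWP). The weights are the arbitrary part.
    w_own, w_opp, w_opp_opp = weights
    return w_own * wp + w_opp * owp + w_opp_opp * oowp

print(rpi(0.80, 0.60, 0.55))                              # 0.6375
print(rpi(0.80, 0.60, 0.55, weights=(0.30, 0.45, 0.25)))  # 0.6475 -- and who's to say?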

SCMatt33
02-17-2012, 09:36 PM
This is one area where I really agree with Jay Bilas, who, while preferring tempo-free and margin-based metrics over pure win/loss metrics, also acknowledges that no one formula can be perfect. I think that a greater sample size is necessary to remove the biases in individual formulas. You can pick through any formula and find things that are just plain wrong. I know that Colorado State is a top 30 team in the RPI, which is pretty bad. Worse still is KenPom having St. Louis at 10. He'd probably be the first to admit that.

I'd like to see the RPI replaced with a consensus rating system that combines ratings like the RPI, Sagarin, Pomeroy, Massey, LRMC, and the new ESPN computer rankings. I'd like a mix of win/loss based systems and margin based systems, because each tends to ignore the value of the other. There's a much bigger difference between a 1 point loss and a 1 point win than margin based systems will tell you, but there's also a difference between a 1 point win and a 20 point win, which win/loss systems ignore completely. We could easily combine these ratings by dropping the outliers for each team and averaging the rest. We could call it the Basketball Consensus Standings, or BC- oh wait...that acronym has already been taken.

Seriously, though, I think basketball can find some value in the way that the BCS compiles its computer data in football. The problem with the way football uses it is that they take those numbers pretty much as is, with 1 being better than 2, 2 being better than 3, etc. If basketball could take the objective consensus data but apply it in a subjective way (just like the committee does with the RPI right now), we'd have a much better system with less to complain about.
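The dropping-and-averaging part is easy to sketch (a minimal version in Python; the five ranks below are hypothetical, just to show the mechanics):

def consensus_rank(ranks, trim=1):
    # BCS-style trimmed mean: sort a team's ranks across systems, drop the
    # `trim` best and worst (the outliers), and average what's left.
    ordered = sorted(ranks)
    kept = ordered[trim:len(ordered) - trim] if len(ordered) > 2 * trim else ordered
    return sum(kept) / len(kept)

# One team's rank in five hypothetical systems:
print(consensus_rank([30, 10, 15, 20, 9]))  # drops 9 and 30, averages the rest: 15.0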

The NCAA already gives all of this information to the committee, but still uses the RPI as the basis of its data sheets. If it could just switch that data with less biased consensus data, the committee would have a much clearer picture.

While I'm on the subject, I would be remiss not to include one valid concern that the NCAA would have with incorporating margin based systems, and that is creating an incentive to run up the score to improve computer numbers. This was a very big concern for the BCS: for years, when the computers decided most of the rankings and the polls were just a small part, coaches and conferences were concerned that teams would bury other teams for four quarters, even when it wasn't necessary, to improve computer ratings. Contenders were worried about players getting hurt and non-contenders were worried about embarrassment. At first the computer ratings were prevented from using margin of victory, but that restriction has since been lifted without much detriment to the game. It's certainly a concern that the NCAA should have, but I don't think it should be big enough to prevent a move. If enough win/loss systems are included to balance out the ratings, and the ratings are only used indirectly, I can't possibly see it being too big of a detriment.

Reilly
02-17-2012, 10:02 PM
Why is Kenpom having St. Louis at 10 so crazy? Sagarin has them at 15. Sports-reference has them at 20.

There are 344 teams. These computers are good at placing teams in broad swaths correctly.
Kenpom ranks 2.9% of teams higher than St. Louis ... Sagarin says no, 4.4% are ranked higher ... Sports-ref 5.8% ... no matter what, St. Louis is good (per the computers) ... 97th percentile ... 95th percentile ... 94th percentile ...
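If anyone wants to check the arithmetic (344 teams, rank over 344):

n_teams = 344
for system, rank in [("Kenpom", 10), ("Sagarin", 15), ("Sports-ref", 20)]:
    share = 100 * rank / n_teams  # share of the field at or above that rank
    print(f"{system}: top {share:.1f}%, {int(100 - share)}th percentile")
# Kenpom: top 2.9%, 97th ... Sagarin: top 4.4%, 95th ... Sports-ref: top 5.8%, 94th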

SCMatt33
02-17-2012, 10:10 PM
Why is Kenpom having St. Louis at 10 so crazy? Sagarin has them at 15. Sports-reference has them at 20.

There are 344 teams. These computers are good at placing teams in broad swaths correctly.
Kenpom ranks 2.9% of teams higher than St. Louis ... Sagarin says no, 4.4% are ranked higher ... Sports-ref 5.8% ... no matter what, St. Louis is good (per the computers) ... 97th percentile ... 95th percentile ... 94th percentile ...

There's no way that St. Louis is anywhere near a top 10 team. The margin-based computers give them way too much credit for beating up on bad competition. I'm not saying that St. Louis is bad, but they have a pretty poor SOS; their best win is a pretty good one at Xavier, and their next best win is probably either Washington or Dayton. They don't have any terrible losses, with @LMU and @UMass being the worst, but I think you'd be hard pressed to call it a top 10 resume. The win/loss systems (RPI, Sagarin's ELO Chess) have them more in the 20's or 30's. The point wasn't just to pick out examples, but to show that there are outliers in every system, because of the individual biases of each, that can be removed by considering them all on a consensus basis.

Reilly
02-17-2012, 10:34 PM
The sports-reference SRS says it is based on points *and* strength of schedule. So, not a points-only basis. It has St. Louis at #20. The distance from #10 to #20 is not that much of an outlier. One is 97th percentile and one is 95th percentile. I'm prone to agree w/ you that a consensus may take off the rough edges; but I don't see St. Louis at 10 as that much of an outlier, when others have them at 15 and at 20.

SCMatt33
02-17-2012, 11:23 PM
The sports-reference SRS says it is based on points *and* strength of schedule. So, not a points-only basis. It has St. Louis at #20. The distance from #10 to #20 is not that much of an outlier. One is 97th percentile and one is 95th percentile. I'm prone to agree w/ you that a consensus may take off the rough edges; but I don't see St. Louis at 10 as that much of an outlier, when others have them at 15 and at 20.

I'm not looking to make a big deal about St. Louis. I was just scrolling for the first example that felt wrong to list. After looking at all the ratings, St. Louis at 10 probably isn't an outlier among computers, but it still doesn't mean that St. Louis is a top 10 team. There are plenty of computers that fall all over the spectrum from pure win/loss, to hybrid, to pure margin-based. I don't know much about the SRS, but using SOS is very different from using win/loss. KenPom uses SOS as well; it's just a way to adjust the margin for the strength of your opponent, but it doesn't create a wide gap between a win and a loss in the way a non-margin system does. Other computers might have them rated high - in fact, the BPI thing that ESPN just came out with has them at 9 - but that still doesn't feel right to me.

This is why you still apply the computers subjectively, with the committee. Even if St. Louis' computer average came out somewhere in the teens or low 20's, they probably shouldn't be a 5 or 6 seed. Lunardi has them at a 9 seed right now (around low 30's on the S-curve), which feels right. Even a consensus computer won't get every team right, but it will probably be closer to an accurate measure than any individual poll. I'd love to have a discussion on the merits of computer rating systems here, but we should probably take the individual team talk over to a bracketology thread. I'd certainly like to have that discussion, but we're getting close to hijacking a thread about a different topic.

papa whiskey
02-18-2012, 01:58 AM
I'm curious as to the RPI and KenPom rankings from the last 3 seasons of Final Four teams and National Champions of that season going into the NCAA Tourney. Anyone have this data?

I don't have all the data, but I seem to recall that going into the tournament in 2010, KenPom's numbers predicted we would win. I can honestly say I was not so confident.

Reilly
02-18-2012, 06:52 AM
I'm not looking to make a big deal about St. Louis. I was just scrolling for the first example that felt wrong to list. .... Even if St. Louis' computer average came out somewhere in the teens or low 20's, they probably shouldn't be a 5 or 6 seed. ... Even a consensus computer won't get every team right, but it will probably be closer to an accurate measure than any individual poll. ... we're getting close to hijacking a thread about a different topic.

I don't believe we're close to hijacking a thread at all. We're using St. Louis as an example of how different polls treat different teams, and if one peels that treatment back, it gets at what those ratings systems may be valuing.

Just as we'll use Duke as an example, and many other teams as an example, as the thread continues. The OP started with exactly that sort of feeling: "look at what this poll has -- that seems whack ..."

A theoretical computer poll discussion that mentions *zero* teams? I guess such a thing is possible, but I doubt it has ever happened. I suppose it would entail a discussion of the things one believes go into making a good basketball team (and what the definition of "good" is), and of what should be valued: wins, points, certain efficiencies, how to measure those things, and in what proportions to weight the different numbers.

On the St. Louis example, I'm not seeing much for or against them. You argue that it *feels* wrong. On some level, the computers are supposed to get past such feelings. I don't know much of anything about St. Louis - other than Majerus is their coach, and Spoonhour died here recently. I certainly don't know their players or pace or strong suits or weaknesses. I do know a computer valuing certain things thinks they are 10, another 15, another 20 .... out of 344. So, they are doing the things that those who think about these things (and who set up the computers) value.

I don't know what you value: why does St. Louis not "feel" right? Why a 9 seed and not a 6 or a 5 seed? I don't know what I value, either, necessarily. I also don't know the underlying math and equations for any of these rating systems. I -- like you -- judge on feel. I follow college football pretty closely, and the SRS at the sports-reference CFB site usually feels pretty accurate to me. I'm coming at this via back-tracking and feeling: look at a system, then ask, does it jibe overall with what all I know? If yes, then it's a good system! Sort of how those who agree with me, or find me funny or smart or to be a stand-up guy, seem to have exceptional taste in humans ...

Reilly
02-18-2012, 07:13 AM
I'm curious as to the RPI and KenPom rankings from the last 3 seasons of Final Four teams and National Champions of that season going into the NCAA Tourney. Anyone have this data?

I don't know that this would prove much. (I also don't know that you were even insinuating that it would prove anything; it would be neat to look at such numbers.) Wasn't VCU #85 in maybe the RPI or Kenpom? That doesn't prove that that particular system is right, or wrong, or that that system is better, or worse, than some other system that had VCU higher, or lower. All it proves is that a team that that system -- valuing what it values -- ranks at #85 can get to the Final Four, or, has gotten to the Final Four.

In other words, a team that a system values as being in the top 25% (85/344) of all of college basketball can knock off 4 other teams also valued in the top 25% of all of college basketball. Which doesn't tell us much at the end of the day.

At Duke, I had a literature class with a future Rhodes Scholar. My insights were consistently better than his; at one point, he even mentioned not understanding a certain text at all, while the professor kept going back to my input to explain it to everyone. He had a higher RPI than me, but I was good enough to be in the game, and had a nice run.

MCFinARL
02-18-2012, 09:12 AM
It's become fashionable to bash RPI and praise Ken Pom. And to be clear, I admire Ken Pom's tempo-based formulas for the illumination they provide.

But are Pomeroy's rankings better than the RPI rankings? Really?

Compare the two rankings with the two voter polls, which, right or wrong, seem to offer the best gauge for the top 10 teams.

The RPI top 10 is (with AP/coaches rankings):

1. Syracuse (2/2)
2. Duke (5/4)
3. Kentucky (1/1)
4. Michigan State (7/8)
5. UNC (8/7)
6. Baylor (9/10)
7. Kansas (4/5)
8. Ohio State (6/6)
9. Missouri (3/3)
10. Southern Miss (unr/unr)

Georgetown (10/9) is the only top 20 team that misses the list ... they are 13 in the RPI. The only real outlier in the RPI top 10 is No. 10 Southern Miss. That's not outrageous -- they are 22-4 and tied with Memphis for the lead in Conference USA.

Compare that with Ken Pom's top 10:
1. Kentucky (1/1)
2. Ohio State (6/6)
3. Michigan State (7/8)
4. Kansas (4/5)
5. Syracuse (2/2)
6. Wisconsin (unr/unr)
7. North Carolina (8/7)
8. Missouri (3/3)
9. Wichita State (unr/unr)
10. St. Louis (unr/unr)

New Mexico (unr/unr) is No. 11. Duke (5/4) is No. 14 and Baylor (9/10) is No. 15.

His list looks okay in the top five (as does the RPI's), but his second five is out there. Pomeroy wrote a blog entry last month just shrugging his shoulders over the bizarrely high ranking for a so-so Wisconsin, essentially admitting that it's a glitch in the system. Wichita State, St. Louis and New Mexico are (like Southern Miss) strong mid-majors, but nobody in either poll has any of them in the top 25 ... much less the top 10.



Small technical correction (hope I'm not repeating something already said; didn't see it in a quick skim of the thread) -- Wichita State is actually ranked 24th in the AP poll; they are the unofficial "26th" team in the coaches' poll.

Kedsy
02-18-2012, 11:51 AM
I'm curious as to the RPI and KenPom rankings from the last 3 seasons of Final Four teams and National Champions of that season going into the NCAA Tourney. Anyone have this data?

I have this data, but can't give it to you until Monday. And you probably can't google it, because you'd get post-tourney data.

loran16
02-18-2012, 01:58 PM
There's no way that St. Louis is anywhere near a top 10 team. The margin-based computers give them way too much credit for beating up on bad competition. I'm not saying that St. Louis is bad, but they have a pretty poor SOS; their best win is a pretty good one at Xavier, and their next best win is probably either Washington or Dayton. They don't have any terrible losses, with @LMU and @UMass being the worst, but I think you'd be hard pressed to call it a top 10 resume. The win/loss systems (RPI, Sagarin's ELO Chess) have them more in the 20's or 30's. The point wasn't just to pick out examples, but to show that there are outliers in every system, because of the individual biases of each, that can be removed by considering them all on a consensus basis.

I'd argue against this. They also lost to a really good New Mexico team, on the road, by only 4 (a close game all the way, apparently), and the A10 is a far better conference than people give it credit for this year ... and St. Louis has mainly ripped through it. The only truly bad teams (think Wake or BC) in the conference are Fordham and Rhode Island, and St. Louis hasn't played either team yet.

La Salle and St. Joes on the road are tough opponents, yet St. Louis just beat both and neither game was close...and they only have one truly bad loss, to UMass.

Do I think they're better than Duke? Maybe not, but I think it's certainly close and I'd be really worried if we were to face them.

EDIT: Put it this way: If they were named "Temple" instead of St. Louis, I think they'd get a lot more credibility.

throatybeard
02-18-2012, 02:13 PM
KenPom is OK, but I prefer to get my analysis from JimSum.

Wander
02-18-2012, 02:43 PM
I'd argue against this. They also lost to a really good New Mexico team, on the road, by only 4 (a close game all the way, apparently), and the A10 is a far better conference than people give it credit for this year ... and St. Louis has mainly ripped through it. The only truly bad teams (think Wake or BC) in the conference are Fordham and Rhode Island, and St. Louis hasn't played either team yet.

La Salle and St. Joes on the road are tough opponents, yet St. Louis just beat both and neither game was close...and they only have one truly bad loss, to UMass.

Do I think they're better than Duke? Maybe not, but I think it's certainly close and I'd be really worried if we were to face them.

EDIT: Put it this way: If they were named "Temple" instead of St. Louis, I think they'd get a lot more credibility.

Good point, and I'd make a similar case for Wichita State. I don't know if they're really the #9 team in the country... but I'm not completely convinced that Duke is better than them, either.

TexHawk
02-18-2012, 02:52 PM
I have this data, but can't give it to you until Monday. And you probably can't google it, because you'd get post-tourney data.

KU was #1 in Kenpom leading up to the conference tournaments, with Duke close behind. In the last pre-tourney rankings, Duke did indeed jump to #1.

nmduke2001
02-18-2012, 02:52 PM
Btw, I’d hate to see New Mexico in the second round. I’ve seen them play a lot and they are legit. Sure, they struggled a bit early in the season, but they have pieces that would give us fits. They have quick point guards who like to penetrate. Among them is Hugh Greenwood from Australia, who dominated the U.S. in the U18 World Championships. We don’t contain penetration well. They have 3 or 4 guys, 6’-4” through 6’-7”, who play the 2 and the 3 and can shoot the three. That group includes Tony Snell, who reminds me a lot of Barnes. It is well documented here that we don’t have anyone to defend these types of players. Lastly, they have a legit pro in the middle. Drew Gordon is a 6’-9” transfer from UCLA who has games in which he looks like a lottery pick. Unfortunately for UNM, he also has games where he looks Greece bound.

In addition to personnel, UNM does a few things very well as a team. UNM is exceptional at guarding the three. Overall, they are a solid defensive team. They have several guys that can shoot the three.

As I write this, UNM is up 15 on the UNLV team that took it to UNC earlier this year. UNM will likely be top 20 next week if they hold on for this win.

COYS
02-18-2012, 03:03 PM
KU was #1 in Kenpom leading up to the conference tournaments, with Duke close behind. In the last pre-tourney rankings, Duke did indeed jump to #1.

If I'm not mistaken, Duke leapfrogged KU prior to the final pre-tourney rankings. It may have been after the 82-50 smackdown of UNC; even though I'm not sure when it occurred, I am almost certain that Duke was number 1 before the final polls, and possibly well before the end of the regular ACC season. I wish I had the ability to verify this, and I might be wrong, but I remember a discussion on the boards in which Jumbo argued that even though Duke was rated ahead of KU in KenPom, he still thought KU was a more complete team, though Duke certainly had a chance to establish itself as the best. I searched for this post but couldn't find it, offhand.

toooskies
02-18-2012, 06:13 PM
I don't believe any coach in the top 25 would have caught a St. Louis game unless they were playing them.

But keep in mind that there are ways to "game" a possession-based system just like there are ways to game the RPI, and that fundamentally changes the goal of basketball. If the metric upon which teams are judged for the NCAAs is possession-based, no one plays stall-ball anymore. No one puts in subs at the end of the game. Teams that coast during blowout wins get undervalued. Winning games no longer matters.

Possession-based rankings aren't 100% accurate, and they'll get less accurate as teams figure out how to take advantage of its biases, just like they did the RPI.

snowdenscold
02-19-2012, 01:52 AM
I don't believe any coach in the top 25 would have caught a St. Louis game unless they were playing them.

But keep in mind that there are ways to "game" a possession-based system just like there are ways to game the RPI, and that fundamentally changes the goal of basketball. If the metric upon which teams are judged for the NCAAs is possession-based, no one plays stall-ball anymore. No one puts in subs at the end of the game. Teams that coast during blowout wins get undervalued. Winning games no longer matters.

Possession-based rankings aren't 100% accurate, and they'll get less accurate as teams figure out how to take advantage of its biases, just like they did the RPI.

So it measures teams best as long as it's not officially used to measure the best teams. Sort of a Catch-22.

throatybeard
02-19-2012, 02:31 AM
SLU is having an amazing season. I've heard. I really ought to catch at least one of their games. I'm a lazy, lazy man. Majerus probably has them back in the NCAAT for the first time since 2000.

Dev11
02-19-2012, 10:18 AM
I think Kedsy or someone demonstrated how a team could game the RPI by playing a ton of tough teams to pad its strength of schedule, even while winning none of those games.

Michigan State at #4 says hello!

(Not to put them down, because that team is always good, but we all know they are the notorious SOS builders. Izzo is no dummy)

patentgeek
02-19-2012, 10:35 AM
Basketball Prospectus published this article recently - it essentially says that adding a team's RPI and its Pomeroy rating can give great insight into predicting the NCAA tournament field. The author doesn't suggest that either system (or his combined system) is better at assessing how good a team is - just that whatever the tournament committee looks at seems to be captured in the combination of these two rating systems.

http://www.basketballprospectus.com/article.php?articleid=2053
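If I'm reading the method right, the combination itself is trivial -- assuming it's the two ranks being summed, it's something like this (the ranks are made up):

# Made-up (RPI rank, Pomeroy rank) pairs:
teams = {"Team A": (30, 55), "Team B": (45, 38), "Team C": (60, 70)}
for name in sorted(teams, key=lambda t: sum(teams[t])):
    print(name, sum(teams[name]))  # Team B (83), then Team A (85), then Team C (130)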

weezie
02-19-2012, 11:05 AM
SLU is having an amazing season. I've heard. I really ought to catch at least one of their games. I'm a lazy, lazy man. Majerus probably has them back in the NCAAT for the first time since 2000.

Oh, throaty! Come on Mr. Sports Fan Extraordinaire! Get yourself over there. I am expecting a report!

Kedsy
02-20-2012, 02:52 PM
I'm curious as to the RPI and KenPom rankings from the last 3 seasons of Final Four teams and National Champions of that season going into the NCAA Tourney. Anyone have this data?

Here are the Pomeroy and RPI rankings on Selection Sunday for the past three seasons:



Year Team Pom Overall Pom O Pom D RPI
---- ----- ------------ ----- ----- ----
2009 UNC 2 1 35 3
2009 Mich St 13 33 10 6
2009 V'Nova 19 25 25 13
2009 UConn 3 20 3 8
2010 Duke 1 1 4 3
2010 Butler 26 55 15 12
2010 West Va 8 11 24 4
2010 Mich St 24 38 27 28
2011 UConn 17 21 31 14
2011 Butler 54 39 77 33
2011 Kentucky 7 7 22 7
2011 VCU 84 59 143 49


Neither Pomeroy nor the RPI appears to be a particularly good predictor, although I suppose the RPI appears to be slightly more accurate. How much of that is a self-fulfilling prophecy is hard to say, though, since the RPI is used to help determine seeding, and better seeds play worse opponents and thus have a better chance to advance, etc.

gam7
02-20-2012, 03:05 PM
Speaking of Kenpom, a couple of interesting tidbits:

1. As jk mentioned in another thread, our defensive efficiency ranking, which had been bordering on triple digits 10 days ago, is now up to 65th, largely on the strength of our Maryland and BC wins. This is a nice development. We're seeing the team shore up its biggest weaknesses, and now, as Coach K said after the BC game, it's a matter of figuring out a way for the guys to be sharp consistently. There is still time left for them to put it all together, and if/when they do, I think Duke will be very tough to beat.

2. Kenpom ran an analysis on his blog a couple of days ago (http://kenpom.com/blog/), the results of which I found very interesting and counterintuitive. His analysis suggested that, like opponents' free throw percentage, teams have very little control over their opponents' three-point accuracy. They do, however, have significantly more control over the percentage of opponents' total shots that are 3-pointers. I would have thought that teams with taller/longer perimeter players, or with defenses intent on guarding the perimeter more closely, would correlate with lower opponent three-point percentages. This appears not to be true. I also would have thought that a team that is successful at keeping down the percentage of opponents' shot attempts that are 3s is doing so because it is doing a better job of challenging the perimeter and cutting down open looks (which appears to be the case), but that this would also lead to a lower 3-point shooting percentage, because more of the 3s that were taken would be challenged shots. This does not appear to be supported by the numbers.

I'm not sure that this has a significant practical impact on the way we play defense. The suggestion is that if we play a team that is a strong three-point shooting team, our goal should be to limit the total number of 3s they take (as a percentage of their total shots). Whether we are trying to make a team shoot a low percentage from 3 or take a lower percentage of its shots from 3, we'd be trying to do the same thing - challenge three-point shooters on the perimeter to encourage them to either take a bad shot or work the ball inside. Kenpom's analysis was interesting nonetheless.

hurleyfor3
02-20-2012, 03:27 PM
If I'm not mistaken, Duke leapfrogged KU prior to the final pre-tourney rankings. It may have been after the 82-50 smackdown of UNC; even though I'm not sure when it occurred, I am almost certain that Duke was number 1 before the final polls, and possibly well before the end of the regular ACC season. I wish I had the ability to verify this, and I might be wrong, but I remember a discussion on the boards in which Jumbo argued that even though Duke was rated ahead of KU in KenPom, he still thought KU was a more complete team, though Duke certainly had a chance to establish itself as the best. I searched for this post but couldn't find it, offhand.

If you're referring to 2010, we passed Kansas for #1 Pomeroy with a couple weeks remaining in the regular season. IOW, right around now.

Personally I've been bashing RPI since the early 1990s, back in the era when the ncaa kept people from finding out what it really was. (Maybe they were embarrassed it was so craptacular.) One problem is you can't do anything with the actual data, as you can with Sagarin, Pom, SRS and every other well-constructed rating system out there. So Duke's RPI is .7343 and unc's is .7219 or whatever. What does this tell me? How can I use it to predict the outcome of a Duke-unc game?
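That's the practical difference. A points-scale rating like Sagarin's is directly usable: the gap between two teams' ratings, plus a home-court bump, is an expected margin. A sketch, with the numbers invented:

duke, unc, home_edge = 91.5, 88.0, 3.5  # hypothetical points-scale ratings
print(duke - unc + home_edge)           # Duke by 7 at home
# Try the same arithmetic with .7343 and .7219 -- the RPI scale tells you nothing.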

loran16
02-20-2012, 04:00 PM
Michigan State at #4 says hello!

(Not to put them down, because that team is always good, but we all know they are the notorious SOS builders. Izzo is no dummy)

They're 3rd in Pomeroy.

darthur
02-20-2012, 04:12 PM
Here are the Pomeroy and RPI rankings on Selection Sunday for the past three seasons:



Year Team Pom Overall Pom O Pom D RPI
---- ----- ------------ ----- ----- ----
2009 UNC 2 1 35 3
2009 Mich St 13 33 10 6
2009 V'Nova 19 25 25 13
2009 UConn 3 20 3 8
2010 Duke 1 1 4 3
2010 Butler 26 55 15 12
2010 West Va 8 11 24 4
2010 Mich St 24 38 27 28
2011 UConn 17 21 31 14
2011 Butler 54 39 77 33
2011 Kentucky 7 7 22 7
2011 VCU 84 59 143 49


Neither Pomeroy nor the RPI appears to be a particularly good predictor, although I suppose the RPI appears to be slightly more accurate. How much of that is a self-fulfilling prophecy is hard to say, though, since the RPI is used to help determine seeding, and better seeds play worse opponents and thus have a better chance to advance, etc.

This is too small a sample size. A better judge would be just to look at winners/losers across all games.

I don't have data on kenpom, but this article (http://espn.go.com/mens-college-basketball/story/_/id/7561413/bpi-college-basketball-power-index-explained) gives some data on Sagarin:

"Between the 2007 and 2011 NCAA tournaments, it picked 74.4 percent of the matchups correctly, whereas Sagarin picked 73.2 percent and RPI picked 71.9 percent. (Kenpom is more difficult to evaluate because its pre-tournament rankings are not available.)"

Here's some stats on how the seeding committee does:

http://insider.espn.go.com/ncb/ncaatourney06/insider/news/story?id=2353126

There's no simple figure, but it doesn't really look any better.

PS: Count me in the camp that thinks the RPI is idiotic. It averages together a few stats in a completely meaningless way. The only thing it has over Sagarin + KenPom is that it's simple to understand even if you don't know much math, but as someone who does know math, I think that's a horrible way to choose a rating system.

PPS: How many tournament games were there in the range of 2007-2011? Not counting the play-in games, there were 63*5 = 315. Counting the play-in games, there were 323, I think. But there is no number of correct guesses over either 315 or 323 that gives you a winning percentage of 74.4...
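A quick brute-force check of that claim:

for total in (315, 323):
    matches = [k for k in range(total + 1) if round(100 * k / total, 1) == 74.4]
    print(total, matches)  # an empty list for both -- no win count rounds to 74.4%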

Kedsy
02-20-2012, 04:38 PM
This is too small a sample size. A better judge would be just to look at winners/losers across all games.

Someone asked for exactly this data, so I provided it. I think even winners/losers across 5 years of tournament games is too small a sample from which to draw any definitive conclusions.


I don't have data on kenpom, but this article (http://espn.go.com/mens-college-basketball/story/_/id/7561413/bpi-college-basketball-power-index-explained) gives some data on Sagarin:

"Between the 2007 and 2011 NCAA tournaments, it picked 74.4 percent of the matchups correctly, whereas Sagarin picked 73.2 percent and RPI picked 71.9 percent. (Kenpom is more difficult to evaluate because its pre-tournament rankings are not available.)"

I agree the RPI is idiotic, but if the difference between the best system and the worst system is only a couple of percent, then the RPI as a predictive model is only one or two games worse than the best system, over the course of a single tournament. Which means if all you want is results, RPI is only a little more idiotic than all the other systems. Also, I wonder what the winning percentage would be if you took the higher seed in every game over the past five years?

darthur
02-20-2012, 05:04 PM
I agree the RPI is idiotic, but if the difference between the best system and the worst system is only a couple of percent, then the RPI as a predictive model is only one or two games worse than the best system, over the course of a single tournament. Which means if all you want is results, RPI is only a little more idiotic than all the other systems. Also, I wonder what the winning percentage would be if you took the higher seed in every game over the past five years?

Well, if you average together some useful stats, the thing you get out is probably okay :). But yes, you're right. The difference between the various ranking methods seems pretty small.

If you look at the other link I provided, it includes some stats on how seeding works as a predictor, but nothing aggregated in a way to be directly comparable. My impression is the committee does not do better than the computer rankings.

Kedsy
02-20-2012, 05:53 PM
If you look at the other link I provided, it includes some stats on how seeding works as a predictor, but nothing aggregated in a way to be directly comparable. My impression is the committee does not do better than the computer rankings.

I did look at the link, and I have studied those numbers closely for years. I agree the seeds don't perform better than the computer rankings, but my guess is on average they don't do outrageously worse, either. If I get some free time, maybe I'll crunch the numbers for the past five years.

HokieEngineer
02-20-2012, 07:02 PM
Basketball Prospectus published this article recently - it essentially says that adding a team's RPI and its Pomeroy rating can give great insight into predicting the NCAA tournament field. The author doesn't suggest that either system (or his combined system) is better at assessing how good a team is - just that whatever the tournament committee looks at seems to be captured in the combination of these two rating systems.

http://www.basketballprospectus.com/article.php?articleid=2053

Despite being a good (but not great) basketball team, Virginia Tech has consistently gotten left out because the tournament selection committee chooses to use the RPI rather than a more rational system. Even worse is the way that it (ab)uses it: the absurd practice of saying that a win versus the 50th-ranked RPI team is significantly different from a win versus the 51st-ranked team.

Of course, in the end, it comes down to the fact that I don't think Greenberg understands the RPI well enough to game it like other teams/conferences do. I think it would be worth the ACC's while to do as other conferences have done and schedule with the RPI in mind. It won't make a difference for teams like Duke and UNC, who are consistently successful against strong schedules, but it will make a difference for the bubble teams. (I think that a failure to do this is one of the reasons the ACC has slipped in comparison to other conferences in getting tournament bids.)

Kedsy
02-20-2012, 10:35 PM
Despite being a good (but not great) basketball team, Virginia Tech has consistently gotten left out because the tournament selection committee chooses to use the RPI rather than a more rational system. Even worse is the way that it (ab)uses it: the absurd practice of saying that a win versus the 50th-ranked RPI team is significantly different from a win versus the 51st-ranked team.

It's actually worse than that. Because "schedule strength" is the major component of the RPI (50%), the fact that they look at RPI and schedule strength and record against teams in the top n (whether that's 50 or 25 or 100) is, in essence, counting the same thing three times. And, yes, teams that don't realize this and thus fail to game the system are at a significant disadvantage.

toooskies
02-21-2012, 12:04 AM
You must remember that a predictor of the results of past games-- especially when some of those games may have been used to design the system-- is simply an unfair comparison. (The claims of BPI's predictiveness are only valid if the BPI was finalized before actually looking at the tournament record.)

Also, accuracy in prediction doesn't mean that upsets don't happen. Presumably, four situations occur when an imperfect system predicts games:

- The system picks the better team correctly and the better team wins. (Result 1)
- The system picks the better team correctly and the better team loses. (Result 2)
- The system picks the better team incorrectly and the better team wins. (Result 3)
- The system picks the better team incorrectly and the better team loses. (Result 4)

And so a system that has 74% of games "picked correctly" isn't actually a useful number, because that makes the incorrect assumption that the better team always wins on a neutral court. But the reality is that the better team only wins slightly more often than a slightly worse team, due to randomness. In fact, possession-based metrics are based on the very premise that results are random. So how many games are truly upsets? The best metric may be 1% away or 10% away, or nothing close to it.
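To put a number on that: even an oracle that always identifies the truly better team can only score as well as the better team's actual win rate. A toy simulation (the 75% win rate is an assumption, purely for illustration):

import random

random.seed(0)
games = 100_000
p_better_wins = 0.75  # assumed win rate for the genuinely better team
# The oracle always picks the better team, so it is "right" exactly when the
# better team wins (Result 1 above) and "wrong" otherwise (Result 2).
correct = sum(random.random() < p_better_wins for _ in range(games))
print(correct / games)  # ~0.75: perfect knowledge, 75% accuracy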

But I think reality is, as much as we want to fit teams into a system where team A is better than team B is better than team C, there's at least some degree of rock/paper/scissors going on where teams can't be sorted into an absolute order in any meaningful way on a given day. And even if they could, that order would probably change the next day.

darthur
02-21-2012, 02:31 AM
You must remember that a predictor of the results of past games-- especially when some of those games may have been used to design the system-- is simply an unfair comparison. (The claims of BPI's predictiveness are only valid if the BPI was finalized before actually looking at the tournament record.)

Yes, ESPN's trumpeting of BPI based on these numbers is absolutely suspect. But we know Sagarin, Kenpom, and RPI did not cheat, so the numbers for them are interesting.


And so a system that has 74% of games "picked correctly" isn't actually a useful number, because that makes the incorrect assumption that the better team always wins on a neutral court. But the reality is that the better team only wins slightly more often than a slightly worse team, due to randomness. In fact, possession-based metrics are based on the very premise that results are random. So how many games are truly upsets? The best metric may be 1% away or 10% away, or nothing close to it.

A 100% prediction rate is impossible, but ultimately any system that ranks teams from top to bottom is giving predictions about who will win games, and it can be fairly evaluated based on the correctness of those predictions. Yes, because of randomness, you cannot read much into just one prediction or perhaps even fifty predictions. But eventually you will get something statistically significant. And I would argue that, difficult as it is to get sufficient data for a metric like this, you are just arguing over aesthetics if you don't do it. There is only one scientific way to evaluate a ranking formula, and that is to see how good of a predictor it is in practice.

PumpkinFunk
02-21-2012, 07:41 AM
Here's the thing: every rating system is flawed in some way. For the RPI, it's been shown how easy it can be to inflate a nonconference RPI by smart scheduling, namely by playing a lot of road/neutral games against semi-weak teams. For KenPom, it doesn't put a cap on final score/scoring margin in a game, so a team like Wisconsin that beats up on weak teams but lost quite a few games will end up with an inflated number. The question is not which system is best; it's about knowing how to use them all in a way that gives you the best sense of the overall picture of a team's value. If you listen to the CBSSports.com podcast from yesterday, there's a great discussion of exactly that with Matt Norlander, John Gasaway (Basketball Prospectus/ESPN Insider), and Nate Silver (FiveThirtyEight), covering the flaws in the RPI and in some of the other systems. Nothing is perfect, but you can get very different ideas of a team if you use the right systems in unison.

sagegrouse
02-21-2012, 08:32 AM
You must remember that a predictor of the results of past games-- especially when some of those games may have been used to design the system-- is simply an unfair comparison. (The claims of BPI's predictiveness are only valid if the BPI was finalized before actually looking at the tournament record.)




Normally, when using statistical models to develop predictors, you do exactly that -- estimate the parameters of the model using all past data. So "is simply unfair" is a little strong, don't you think? I mean, it suggests that there is no value to statistical inference.

Now it is perfectly appropriate, once you've developed a model, to test it on future data as independent verification. In that sense, the jury is still out on the results. But the sentence in parentheses is a little confusing. For example, it ain't statistics if you don't use available data, so how do you develop the model?

sage

Kedsy
02-21-2012, 09:51 AM
Normally, when using statistical models to develop predictors, you do exactly that -- estimate the parameters of the model using all past data. So "is simply unfair" is a little strong, don't you think? I mean, it suggests that there is no value to statistical inference.

Now it is perfectly appropriate, once you've developed a model, to test it on future data as independent verification. In that sense, the jury is still out on the results. But the sentence in parentheses is a little confusing. For example, it ain't statistics if you don't use available data, so how do you develop the model?

sage

I think what he's objecting to is "data fitting" a system to explain past results. For example, if you are creating a simple system based on seed, you will get better historical results if you take all higher seeds except pick 9s over 8s (because historically, 9 seeds have beaten 8 seeds 55% of the time). Does that make it a better system than picking all higher seeds? I'd say not, unless you can come up with a logical reason why 9s have beaten 8s so often.

In other words, if you take all the past games and all the ratings going into the NCAAT, and throw them in a computer and ask the computer to give you the best fit, it is creating a formula that takes past randomness into account, so the very elements that give it a better performance looking backwards will not help it (and might even hurt it) going forwards. So you really have no idea if it's better or not. I used to create computer algorithms to pick stocks, and I ran into this phenomenon all the time.
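Here's a toy version of that trap, under the assumption that 8-vs-9 games are true coin flips: a rule fitted to past results looks good in-sample by construction and does nothing going forward.

import random

random.seed(1)

def coin_flip_games(n):
    # True means the 9 seed won; by construction these are pure 50/50 games.
    return [random.random() < 0.5 for _ in range(n)]

past, future = coin_flip_games(100), coin_flip_games(100_000)
pick_nine = sum(past) >= 50  # the rule "learned" from history
in_sample = sum(g == pick_nine for g in past) / len(past)
out_of_sample = sum(g == pick_nine for g in future) / len(future)
print(in_sample, out_of_sample)  # in-sample >= .500 by construction; out-of-sample ~ .500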

sagegrouse
02-21-2012, 10:42 AM
I think what he's objecting to is "data fitting" a system to explain past results. For example, if you are creating a simple system based on seed, you will get better historical results if you take all higher seeds except pick 9s over 8s (because historically, 9 seeds have beaten 8 seeds 55% of the time). Does that make it a better system than picking all higher seeds? I'd say not, unless you can come up with a logical reason why 9s have beaten 8s so often.

In other words, if you take all the past games and all the ratings going into the NCAAT, and throw them in a computer and ask the computer to give you the best fit, it is creating a formula that takes past randomness into account, so the very elements that give it a better performance looking backwards will not help it (and might even hurt it) going forwards. So you really have no idea if it's better or not. I used to create computer algorithms to pick stocks, and I ran into this phenomenon all the time.

Sure, "garbage can" multivariate regressions are worth just that and are scorned by social scientists and statisticians. But that doesn't mean that all attempts at statistical inference are of dubious value, which is what toooskies seems to be saying. He is also implying that predictive models can't use past data, which begs the question of how you develop models (crystal balls, manna from heaven?).

Now I think we can all agree that testing a model's predictive power on future results is all to the good.

sagegrouse

hughgs
02-21-2012, 11:07 AM
Yes, ESPN's trumpeting of BPI based on these numbers is absolutely suspect. But we know Sagarin, Kenpom, and RPI did not cheat, so the numbers for them are interesting.



A 100% prediction rate is impossible, but ultimately any system that ranks teams from top to bottom is giving predictions about who will win games, and it can be fairly evaluated based on the correctness of those predictions. Yes, because of randomness, you cannot read much into just one prediction or perhaps even fifty predictions. But eventually you will get something statistically significant. And I would argue that, difficult as it is to get sufficient data for a metric like this, you are just arguing over aesthetics if you don't do it. There is only one scientific way to evaluate a ranking formula, and that is to see how good of a predictor it is in practice.

I disagree that the ranking of teams is giving predictions about who will win games. The rankings can give you an idea of how often teams should win against particular opponents, but they certainly cannot predict who will win games.

It seems to me that this is a big fallacy of ranking systems. The rankings are simply ordered statistics. Someone comes up with a way to assign a number to a team (the statistic) and then ranks them. The problem comes about when you try to use the statistics to predict outcomes of events, in this case the winners of games. Statistics cannot predict the winner of games; statistics can only tell you the probability that one team will beat another team. So teams with small differences in their ratings will have close to a 50/50 split in games, while teams with large differences in their ratings will be further from a 50/50 split.

So how can you evaluate how well a ranking system performs? I would argue that you can't just look at how well a ranking system predicts winners. Predicting winners doesn't tell you how well the statistics describe the teams. Just because teams are separated by large distances in the rankings doesn't mean that there is a large difference in their ratings. What you need to do is look at how teams with small differences in their statistics perform against each other, and how teams with large differences in their statistics perform against each other. Since you would expect small rating differences to occur across the range of teams, that should give you a good indication of how well the rating system performs. It's a tougher metric to measure, but you avoid the trap of using rankings rather than ratings.
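In code, the evaluation I'm describing looks like bucketing games by the ratings gap and checking the favorite's actual win rate within each bucket (a sketch; the (rating_a, rating_b, a_won) tuples are assumed inputs):

from collections import defaultdict

def calibration(games, bucket_width=0.02):
    # Bucket games by the gap between the two teams' ratings, then compute
    # how often the higher-rated team actually won within each bucket.
    buckets = defaultdict(lambda: [0, 0])  # bucket index -> [favorite wins, games]
    for rating_a, rating_b, a_won in games:
        favorite_won = a_won if rating_a >= rating_b else not a_won
        idx = int(abs(rating_a - rating_b) / bucket_width)
        buckets[idx][0] += favorite_won
        buckets[idx][1] += 1
    return {idx * bucket_width: wins / n for idx, (wins, n) in sorted(buckets.items())}

# A well-calibrated system shows ~0.50 in the smallest-gap bucket and
# steadily higher favorite win rates as the ratings gap grows.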

terminalwriter
02-21-2012, 11:31 AM
I'm a little late to the conversation, but if you're looking at the historical accuracy of the Pomeroy ratings, it took a little digging; here are the pre-tourney predictions for the last few years:


2011 predictions
http://www.basketballprospectus.com/unfiltered/?p=673

2010
South
http://www.basketballprospectus.com/article.php?articleid=998
West
http://basketballprospectus.com/article.php?articleid=1000
Midwest
http://basketballprospectus.com/article.php?articleid=999
East
http://basketballprospectus.com/article.php?articleid=994


2009
West & South
http://basketballprospectus.com/article.php?articleid=607
East & Midwest
http://basketballprospectus.com/article.php?articleid=601



toooskies
02-21-2012, 07:00 PM
Sure, "garbage can" multivariate regressions are worth just that and are scorned by social scientists and statisticians. But that doesn't mean that all attempts at statistical inference are of dubious value, which is what toooskies seems to be saying. He is also implying that predictive models can't use past data, which begs the question of how you develop models (crystal balls, manna from heaven?).

Now I think we can all agree that testing a model's predictive power on future results is all to the good.

sagegrouse

I'm not saying that the predictive model cannot be evaluated based on data. I'm saying that it hasn't actually made any predictions until it gets games that weren't used to design it. You obviously must use some basis to design it, but other decisions are as arbitrary as the RPI's percentage breakdowns. For instance, do they have data for what constitutes a blowout margin? Did they pick their blowout margins based on past data or logic? How did they decide how much to weight a win as a future predictor of success? If it's data, then we need fresh data to test against; it's not predicting past wins, it's deducing them.

Now, it's possible to use only a subset of data for design/calibration, and then as confirmation, "predict" the rest of the data as a test. But I'd guess it's unlikely that this has occurred, as there is already a shortage of data available; and we certainly wouldn't be able to double-check.

I didn't mean to imply that there isn't value to statistical models of this sort; I paid my $20 to use KenPom's site, for instance. All I am saying is that the results aren't necessarily truth, they're simply samples. And in March, it doesn't matter; all that matters once you get in the tournament is whether you come out with a win every single game. And especially, picking the most winners doesn't make your system better; just luckier. For instance, no one predicted a Final Four match-up of Butler-VCU.

darthur
02-21-2012, 10:58 PM
It seems to me that this is a big fallacy of ranking systems. The rankings are simply ordered statistics. Someone comes up with a way to assign a number to a team (the statistic) and then ranks them. The problem comes about when you try to use the statistics to predict outcomes of events, in this case the winners of games. Statistics cannot predict the winner of games; statistics can only tell you the probability that one team will beat another team. So teams with small differences in their ratings will have close to a 50/50 split in games, while teams with large differences in their ratings will be further from a 50/50 split.

Of course statistics can predict the winner of games. Take the team with the higher ranking: I predict it wins. The fact that I may be more or less confident in my prediction, or that the outcome has randomness involved, doesn't change anything.

This is not too different from what humans do. When you fill out NCAA brackets, you choose the team you think is better to win each game (or perhaps you consider matchups and other more tricky factors, but most people aren't that sophisticated). It doesn't matter how confident you are in your prediction. You get points if you're right, and you lose points if you're wrong. Is there a lot of randomness? Absolutely. But over the long haul (i.e., multiple years), people who are knowledgeable are going to be able to prove themselves by getting more points than people who aren't.

In the end, 50/50 splits don't matter. After enough games, it doesn't matter who you predict wins, because you'll get it right 50% of the time either way. The bigger the difference in strength between the two teams, the more important it is that you consistently know which is best. As it should be. This is a rigorous and very reasonable way of objectively comparing two ranking systems without having to know anything about their inner workings. Yes, it fails to measure whether Pomeroy's exact win probabilities are correct, but since we're comparing with RPI which doesn't even pretend to do that, who cares?

hughgs
02-22-2012, 12:59 AM
Of course statistics can predict the winner of games. Take the team with the higher ranking: I predict it wins. The fact that I may be more or less confident in my prediction, or that the outcome has randomness involved, doesn't change anything.

This is not too different from what humans do. When you fill out NCAA brackets, you choose the team you think is better to win each game (or perhaps you consider matchups and other more tricky factors, but most people aren't that sophisticated). It doesn't matter how confident you are in your prediction. You get points if you're right, and you lose points if you're wrong. Is there a lot of randomness? Absolutely. But over the long haul (i.e., multiple years), people who are knowledgeable are going to be able to prove themselves by getting more points than people who aren't.

In the end, 50/50 splits don't matter. After enough games, it doesn't matter who you predict wins, because you'll get it right 50% of the time either way. The bigger the difference in strength between the two teams, the more important it is that you consistently know which is best. As it should be. This is a rigorous and very reasonable way of objectively comparing two ranking systems without having to know anything about their inner workings. Yes, it fails to measure whether Pomeroy's exact win probabilities are correct, but since we're comparing with RPI which doesn't even pretend to do that, who cares?

While I tend to agree with your assessment, you're arguing against something that I never said. I was very careful to use your words: that rankings are "... giving predictions about who will win games ...". That statement and your words above are two completely different things. Of course we can use the statistics to make predictions. But that's very different from stating that the statistics themselves make the predictions. The statistics themselves only give you probabilities of winning.

If you're simply going to compare two ranking systems, then I would argue that you still need to look at the number that was used to produce the ranking. If you only use the difference in ranks, then you have no idea whether a large difference in rankings is due to a large difference in ratings. And hence you have no idea whether an "upset" is a high probability event or a low probability event. For example, take any two ranking systems. Both systems rank Duke as number 1 and Delaware as number 9. Delaware ends up winning the game. Is it really an upset? From a ranking standpoint I would think it is. What if I then said that the first ranking system had a rating difference of 0.001 out of 1.0, implying a 50/50 split, while the second had 0.1 out of 1.0, implying a 70/30 split (I'm making numbers up for illustration)? With this information I would say that it was an upset under only one system and not the other. What if both systems consistently produced those outcomes (number 9 beating number 1 with the same difference in ratings)? Without looking at the ratings you can't say whether either ranking system is actually doing a good job.

The point is, the randomness in the games picks the winner. The ability of any particular ranking system to give you an idea of who should win a game cannot be evaluated by simply looking at the rankings; it must be based on the ratings used to produce those rankings. Without using the ratings you have no idea about the difference in strength between two teams, and without that information you don't know whether you have an "upset".
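For what it's worth, one common way to turn a rating difference into a win probability is a logistic curve; with a scale picked so the outputs match my made-up numbers above:

import math

def win_prob(rating_diff, scale=8.5):
    # Logistic map from a rating gap to the favorite's win probability;
    # the scale here is chosen purely to reproduce the 50/50 and 70/30 examples.
    return 1 / (1 + math.exp(-scale * rating_diff))

print(round(win_prob(0.001), 2))  # 0.5 -- the "1 vs 9" game that is really a coin flip
print(round(win_prob(0.1), 2))    # 0.7 -- same seeds, but a genuine favorite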

rsvman
02-22-2012, 09:35 AM
I'm curious as to the RPI and KenPom rankings from the last 3 seasons of Final Four teams and National Champions of that season going into the NCAA Tourney. Anyone have this data?
I don't have hard numbers, but for the past 3 or 4 years I have been filling out one bracket using RPI only, one using Kenpom only, one using Sagarin, and one using my own picks. To the best of my memory, Kenpom has beaten the RPI in every one of those years.

Having said that, I also need to point out that my picks beat Kenpom every year except one.

But there's no doubt in my mind that Kenpom is a better metric than the RPI.

rsvman
02-22-2012, 10:05 AM
I don't have hard numbers, but for the past 3 or 4 years I have been filling out one bracket using RPI only, one using Kenpom only, one using Sagarin, and one using my own picks. To the best of my memory, Kenpom has beaten the RPI in every one of those years.

Having said that, I also need to point out that my picks beat Kenpom every year except one.

But there's no doubt in my mind that Kenpom is a better metric than the RPI.

Note that I'm not talking about predicting the Final Four. I scored the entire tournament the way most office pools do: one point for a first-round win, two points for a second-round win, etc.

And I guess "no doubt in my mind" is definitely an overstatement based on a very small amount of data. The truth is that I think Kenpom is better, but I don't know that it's better. In all likelihood a bit of it is based on my biases that the RPI is not very good.

Asyeop
05-15-2012, 10:34 PM
KenPom is OK, but I prefer to get my analysis from JimSum.

Me too. I'm more confident in JimSum's analysis but I do check KenPom's as well.