View Full Version : Dork Poll Tracking 2012-13



mr. synellinden
12-12-2012, 01:20 PM
Not to be confused with the Bilas Power Index or the Vitale Bald Dome Index (VBDI) ...

We are ranked number one in ESPN's dork poll (http://espn.go.com/mens-college-basketball/story/_/id/8739426/bpi-dec-12-college-basketball-power-index-rankings):

Also worth noting that Virginia Tech (27), Maryland (35), and Virginia (47) are all ranked ahead of NC State (48) and UNC (53).

Turtleboy
12-12-2012, 04:24 PM
If SoS means strength of schedule, how in the heck is this (http://espn.go.com/mens-college-basketball/team/_/id/2400/mississippi-valley-state-delta-devils) #1? Am I misreading something?

DukeBlue666s
12-12-2012, 04:50 PM
I believe I'm misreading the same thing ...

Aurelius
12-12-2012, 04:52 PM
If SoS means strength of schedule, how in the heck is this (http://espn.go.com/mens-college-basketball/team/_/id/2400/mississippi-valley-state-delta-devils) #1? Am I misreading something?

Their opponents are a combined 44-7. One of the benefits, if you want to look at it that way, of going 0-6 is that your opponents each have an extra win.

Duke's opponents, for comparison, are a combined 57-25, good for an SoS of #10. Nine of those losses, of course, came against Duke, so that does hurt us a little bit in terms of strength of schedule.

Turtleboy
12-12-2012, 04:55 PM
Thanks.

toooskies
12-12-2012, 05:27 PM
That's not how BPI works-- it's similar to KenPom, but with corrective factors for injury, blowouts, and a few other aspects. The numbers end up pretty different, but the SOS calculations should be similar.

MVST has played teams with ranks of 10, 14, 65, 19(!), 27, 47. According to ESPN's rankings, that's reasonable.

Not sure how LSU got such a high rating, although they do have two games missed by top-5 rotation guys.

The value-add here relative to KenPom is that these ratings don't factor in preseason weighting. That's probably why we aren't ranked higher there.

hurleyfor3
12-13-2012, 11:35 AM
Y'all are welcome for establishing the term, "dork polls".

MChambers
12-20-2012, 10:01 AM
Y'all are welcome for establishing the term, "dork polls".
Don't look now, but Duke is #1 by a huge margin in the blended Sagarin dork poll, 3.87 points better than #2 UF. That 3.87 gap is as big as the gap between #2 and #12.

Duke's #3 in Pomeroy, but moving up, and the defense is up to #13, which is a nice thing to see. (Although I was disappointed in the defense in the first half last night.)

loran16
12-20-2012, 10:47 AM
Don't look now, but Duke is #1 by a huge margin in the blended Sagarin dork poll, 3.87 points better than #2 UF. That 3.87 gap is as big as the gap between #2 and #12.

Duke's #3 in Pomeroy, but moving up, and the defense is up to #13, which is a nice thing to see. (Although I was disappointed in the defense in the first half last night.)

Let's be fair - if you're going to use the term "Dork" polls, you need to use the better polls. In other words, you need to use Sagarin's Predictor, which he considers his better rating because it includes margin of victory (the Blended poll folds in the less useful ELO rating). Duke has indeed jumped to #1 in the Predictor, but by a tiny margin.

MChambers
12-20-2012, 10:56 AM
Let's be fair - if you're going to use the term "Dork" polls - you need to use the better polls. In other words, you need to use Sagarin's predictor, which he considers his better poll because it includes margin of victory (The Blended poll essentially merges the more useless ELO poll). Duke has indeed jumped to #1 in the predictor, but by a tiny margin.
Sagarin has modified his ELO poll to include margin of victory. From the poll itself:

"ELO_SCORE applies ELO principles to the actual SCORES of the games and so it is now SCORE BASED
and thus should be a good match for the PURE POINTS in terms of predictive accuracy for upcoming games."

By the way, I can't believe I just outdid you on a Dork poll matter.

hurleyfor3
12-20-2012, 05:18 PM
Unc's negative-18-point victory against Texas drops them to #58 in Sagarin. Some teams that are ahead of them:

North Dakota State, Lehigh, Stephen F. Austin, Canisius, Middle Tennessee, Boise State, Belmont and Wyoming. Sagarin says Wyoming should be a 7-point favorite over unc on a neutral floor.

Indoor66
12-20-2012, 06:28 PM
Sagarin says Wyoming should be a 7-point favorite over unc on a neutral floor.

Go Cowboys. :cool:

uh_no
12-21-2012, 02:08 AM
well, our defense shot up from 18 to 8 after these past two games (confirming the "wisconsin effect"), and after a somewhat slow day offensively (that FT shooting surely hurt....) our offense dropped to just a shade short of pitt...putting us at #3 in O

we stay at 3 overall

(kenpom)

as the other dogs start getting into more of the meat of their schedules, we should see some of their efficiency numbers drop off....I think duke will end up #1 at some point in late january

loran16
12-21-2012, 09:15 AM
well, our defense shot up from 18 to 8 after these past two games (confirming the "wisconsin effect"), and after a somewhat slow day offensively (that FT shooting surely hurt....) our offense dropped to just a shade short of pitt...putting us at #3 in O

we stay at 3 overall

(kenpom)

as the other dogs start getting into more of the meat of their schedules, we should see some of their efficiency numbers drop off....I think duke will end up #1 at some point in late january

I don't think a win by fewer points than Pomeroy expected is even close to the "Wisconsin Effect" - Cornell and Elon aren't bad enough for the computers to overrate.

uh_no
12-21-2012, 11:16 AM
I don't think a win by less points than Pomeroy expected is even close to the "Wisconsin Effect" - Cornell and Elon aren't bad enough for the computers to overrate.

that's not the point....the point was our defense was underrated relative to the other top teams because we had played a much tougher schedule....thus when we beat up on a couple patsies, our ranking shot up.

COYS
12-21-2012, 11:20 AM
that's not the point....the point was our defense was underrated relative to the other top teams because we had played a much tougher schedule....thus when we beat up on a couple patsies, our ranking shot up.

It was also weighed down by the preseason projections, which were influenced by the large number of returning personnel we had from our poor (for Duke) defensive team last season. The preseason projections have all but disappeared at this point.

Kedsy
12-21-2012, 11:20 AM
I don't think a win by less points than Pomeroy expected is even close to the "Wisconsin Effect" - Cornell and Elon aren't bad enough for the computers to overrate.

Call it what you want. The point is that good teams are "more better" than teams like Cornell and Elon than the computers (especially Pomeroy's) think they are. Our defense didn't show enough to justify a jump from 18th to 8th based on our beating up on two sub-150 teams. Clearly our tough early schedule negatively impacted our efficiency ratings.

MChambers
12-21-2012, 12:17 PM
Call it what you want. The point is that good teams are "more better" than teams like Cornell and Elon than the computers (especially Pomeroy's) think they are. Our defense didn't show so much to justify a jump from 18th to 8th based on our beating up on two sub-150 teams. Clearly our tough early schedule negatively impacted our efficiency ratings.
What do you draw from this? That Duke's defense is stronger than rated? Or just that Pomeroy's system has some drawbacks, especially in the weight it gives to performance against weaker teams?

Not trying to argue with you, but am genuinely curious.

uh_no
12-21-2012, 12:48 PM
What do you draw from this? That Duke's defense is stronger than rated? Or just that Pomeroy's system has some drawbacks, especially in the weight it gives to performance against weaker teams?

Not trying to argue with you, but am genuinely curious.

I think it's just that it's tough to make a valid comparison of teams' efficiencies when they have played wildly different strengths of schedule....that is a drawback in the early season, and he admits as much. By the end of the season, that has mostly evened out...and while one schedule might be tougher than another, most teams have played a good smattering of pretty good teams (conference play), so the deviations are much less than they are now...thus getting more valid comparisons.

I think duke's defense is probably better than 18th. I think that other teams' ratings were inflated since they played few very good teams, and as duke is playing some patsies, and they might be playing some toughies, it evens out.

Either way, I think the 8 is much more indicative of the team's defense than 18

Kedsy
12-21-2012, 01:04 PM
What do you draw from this? That Duke's defense is stronger than rated? Or just that Pomeroy's system has some drawbacks, especially in the weight it gives to performance against weaker teams?

Not trying to argue with you, but am genuinely curious.

I think this is definitely a drawback to Pomeroy's system, and you can see it every year. Hopefully toward the end of the season most of this bias is gone, but every year there are a few teams Pomeroy rates very highly that really aren't that good (and for whatever reason, quite often Wisconsin is one of those teams), and I believe scheduling is a primary reason for that anomaly.

I don't think this is a drawback distinctive to Pomeroy. I believe pretty much all computer systems contain a similar bias. Why? My guess is that most good computer rating systems use some set of simultaneous equations and, while the overall computations may be reasonably sophisticated, the individual equations aren't. It seems to me that even after you take out the "noise" (minor injuries and illnesses, the day-to-day emotional highs and lows of teenagers, good days and bad days, etc.), for which no computer system can fully compensate, it remains that the differences between teams are probably more of a step function than a smooth continuum. And if you're playing against a team several steps down, the results will be less predictive than the computer expects.

Having said all that, to answer your first question, after the Ohio State game our defensive efficiency in Pomeroy was ranked in the low 20s. I think our defense is stronger than that. Since we've now played Delaware, Temple, Cornell, and Elon, we've jumped all the way up to 8 (and most of that jump was due to the last two games). I don't think that's a coincidence and I don't think it's because our defense was significantly better in those last two games. Is our defense even stronger than #8? I have no idea. We'll probably be ranked a bit higher after playing Santa Clara, but then we might settle a bit after we play the stronger ACC teams.

toooskies
12-21-2012, 03:13 PM
I think this is definitely a drawback to Pomeroy's system, and you can see it every year. Hopefully toward the end of the season most of this bias is gone, but every year there are a few teams Pomeroy rates very highly that really aren't that good (and for whatever reason, quite often Wisconsin is one of those teams), and I believe scheduling is a primary reason for that anomaly.

I don't think this is a drawback distinctive to Pomeroy. I believe pretty much all computer systems contain a similar bias. Why? My guess is that most good computer rating systems use some set of simultaneous equations and, while the overall computations may be reasonably sophisticated the individual equations aren't. It seems to me that even after you take out the "noise" (minor injuries and illnesses, the day-to-day emotional highs and lows of teenagers, good days and bad days, etc.), for which no computer system can fully compensate, it remains that the differences between teams is probably more of a step function than a smooth continuum. And if you're playing against a team several steps down, the results will be less predictive than the computer expects.

Having said all that, to answer your first question, after the Ohio State game our defensive efficiency in Pomeroy was ranked in the low 20s. I think our defense is stronger than that. Since we've now played Delaware, Temple, Cornell, and Elon, we've jumped all the way up to 8 (and most of that jump was due to the last two games). I don't think that's a coincidence and I don't think it's because our defense was significantly better in those last two games. Is our defense even stronger than #8? I have no idea. We'll probably be ranked a bit higher after playing Santa Clara, but then we might settle a bit after we play the stronger ACC teams.

Also keep in mind that preseason biases aren't worked out of the KenPom system yet, and our preseason defensive rating was probably extrapolated out from last year's team (and therefore, not up to Duke's usual standards). As time goes on, those become less and less relevant.

New note: the toughest-rated team left on our schedule? Miami. I won't enjoy playing them twice this year...

loran16
12-21-2012, 04:18 PM
I think this is definitely a drawback to Pomeroy's system, and you can see it every year. Hopefully toward the end of the season most of this bias is gone, but every year there are a few teams Pomeroy rates very highly that really aren't that good (and for whatever reason, quite often Wisconsin is one of those teams), and I believe scheduling is a primary reason for that anomaly.

I don't think this is a drawback distinctive to Pomeroy. I believe pretty much all computer systems contain a similar bias. Why? My guess is that most good computer rating systems use some set of simultaneous equations and, while the overall computations may be reasonably sophisticated the individual equations aren't. It seems to me that even after you take out the "noise" (minor injuries and illnesses, the day-to-day emotional highs and lows of teenagers, good days and bad days, etc.), for which no computer system can fully compensate, it remains that the differences between teams is probably more of a step function than a smooth continuum. And if you're playing against a team several steps down, the results will be less predictive than the computer expects.

Having said all that, to answer your first question, after the Ohio State game our defensive efficiency in Pomeroy was ranked in the low 20s. I think our defense is stronger than that. Since we've now played Delaware, Temple, Cornell, and Elon, we've jumped all the way up to 8 (and most of that jump was due to the last two games). I don't think that's a coincidence and I don't think it's because our defense was significantly better in those last two games. Is our defense even stronger than #8? I have no idea. We'll probably be ranked a bit higher after playing Santa Clara, but then we might settle a bit after we play the stronger ACC teams.

Kedsy you're misinterpreting here. Duke's ratings actually got WORSE after Elon, though our D got better.

This isn't because we were playing poor teams - POMEROY FACTORS THAT IN (I'm not sure why this keeps getting lost). Duke was expected to hold both opponents to pretty inefficient scoring - Duke did even better than that.

However Duke dropped after the Elon game in the overall ratings, because unlike in the Cornell game, Duke's O wasn't as efficient as Pomeroy would've thought. We scored 1.09 points per possession against Elon - and this was basically true even before we took out starters in the last few minutes - when Duke was expected to put up around 1.2 points per possession.

It's possible that the Elon drop is an overreaction caused by the team not being used to starting two games in a row. But that's beyond the computer's capabilities.

Kedsy
12-21-2012, 04:42 PM
Kedsy you're misinterpreting here. Duke's ratings actually got WORSE after Elon, though our D got better.

This isn't because we were playing poor teams - POMEROY FACTORS THAT IN (I'm not sure why this keeps getting lost). Duke was expected to hold both opponents to pretty inefficient scoring - Duke did even better than that.

However Duke dropped after the Elon game in the overall ratings, because unlike in the Cornell game, Duke's O wasn't as efficient as Pomeroy would've thought. We scored 1.09 points per possession against Elon - and this was basically true even before we took out starters in the last few minutes - when Duke was expected to put up around 1.2 points per possession.

It's possible that the Elon drop is an overreaction caused by the team not being used to starting two games in a row. But that's beyond the computer's capabilities.

I didn't misinterpret. I know Duke's offensive efficiency went down a little because we played relatively poorly on offense, and because of that our overall rating went down a little.

My point is, capital letters or no, when we play vastly inferior opponents Pomeroy doesn't factor it in enough. Two games against Cornell and Elon shouldn't make our defensive efficiency jump from 18th to 8th. But it did because those two teams (especially Cornell) found it so much more difficult to score against us than Pomeroy predicted. Was part of that just that we played good defense? Sure. But I believe a larger component of it was our team is so much better that Pomeroy's system breaks down in these sorts of games. I notice this every year. Teams tend to make huge jumps in adjusted efficiency by beating up inferior teams. Sure, occasionally they don't make big jumps, like Duke's offense against Elon, but in those cases I'd suggest that Duke's offensive efficiency should have gone down even more -- our offense was pretty poor by our own standards so far this season.

After we got done our really hard stretch of games in November, some posters were discussing Duke's defensive ranking (which IIRC was in the low 20s). I suggested at that time that our defensive efficiency numbers would improve drastically after we played our weak December schedule (despite Pomeroy factoring in the weakness of our opponents). So far, I was right, and like I said before I don't think it's luck or coincidence. It's because good teams' Pomeroy efficiencies tend to jump disproportionally when they feast on inferior teams.

robed deity
12-21-2012, 05:16 PM
To any KenPom subscribers, how is the defensive rebounding percentage looking after the last couple games? Just curious to see if this also improves while playing some lesser teams.

Wander
12-21-2012, 05:51 PM
I think this is definitely a drawback to Pomeroy's system, and you can see it every year. Hopefully toward the end of the season most of this bias is gone, but every year there are a few teams Pomeroy rates very highly that really aren't that good (and for whatever reason, quite often Wisconsin is one of those teams), and I believe scheduling is a primary reason for that anomaly.

Agreed that Pomeroy's system isn't this sacred/infallible/perfect thing as some make it out to be (Pomeroy would certainly agree to this), but I'll offer a different reason. Not every single stat deserves to be considered on purely a per-possession basis; I think, on average, kenpom overrates slow teams. That explains Wisconsin, and Duke's vastly overrated 2007 team was also unusually slow. That's only a couple data points - I guess the way to find out would be to see if there's a correlation between the Pomeroy "tempo" and "luck" ratings of all ~350 teams (ie, are the slowest teams unluckier on average?).

loran16
12-21-2012, 05:56 PM
I didn't misinterpret. I know Duke's offensive efficiency went down a little because we played relatively poorly on offense, and because of that our overall rating went down a little.

My point is, capital letters or no, when we play vastly inferior opponents Pomeroy doesn't factor it in enough. Two games against Cornell and Elon shouldn't make our defensive efficiency jump from 18th to 8th. But it did because those two teams (especially Cornell) found it so much more difficult to score against us than Pomeroy predicted. Was part of that just that we played good defense? Sure. But I believe a larger component of it was our team is so much better that Pomeroy's system breaks down in these sorts of games. I notice this every year. Teams tend to make huge jumps in adjusted efficiency by beating up inferior teams. Sure, occasionally they don't make big jumps, like Duke's offense against Elon, but in those cases I'd suggest that Duke's offensive efficiency should have gone down even more -- our offense was pretty poor by our own standards so far this season.

After we got done our really hard stretch of games in November, some posters were discussing Duke's defensive ranking (which IIRC was in the low 20s). I suggested at that time that our defensive efficiency numbers would improve drastically after we played our weak December schedule (despite Pomeroy factoring in the weakness of our opponents). So far, I was right, and like I said before I don't think it's luck or coincidence. It's because good teams' Pomeroy efficiencies tend to jump disproportionally when they feast on inferior teams.

You miss my point. Pomeroy does overestimate the abilities of the worst of the worst in D1. But Elon and Cornell ARE NOT THESE BAD TEAMS. Elon is an average team pretty much (Technically slightly above average). They are not bad. They are not good. They are not anything like the opponents Indiana has been facing and thus Pomeroy has no problems with teams playing them.

Cornell is a bad team - but it is not a horrible team. By comparison, Cornell lost by only 11 to Vandy - so our 41-point victory over them is extremely impressive and worth noting - thus Pomeroy is correct in adjusting our defensive abilities so much after that game.

In other words Kedsy, yes you see a "Wisconsin effect" in other teams, but those are TRULY AWFUL TEAMS, not Elon and Cornell.
-------------------

Robed Deity - our D Rebounding rate was still horrible up until Elon where we finally had a dominant rebounding game (84.8% DRebound %, 35.7% ORebound %). This pushed our D Rebounding to....202nd in the Country. Still pretty bad.

Kedsy
12-21-2012, 06:02 PM
In other words Kedsy, yes you see a "Wisconsin effect" in other teams, but those are TRULY AWFUL TEAMS, not Elon and Cornell.

According to Pomeroy, Cornell has the 304th-ranked offense in the land. How much more awful could they be?

No way a good defensive performance against that offense should raise our efficiency number so much.

Kedsy
12-21-2012, 06:30 PM
That's only a couple data points - I guess the way to find out would be to see if there's a correlation between the Pomeroy "tempo" and "luck" ratings of all ~350 teams (ie, are the slowest teams unluckier on average?).

I took a look at this a couple years ago. I don't remember the exact details (and I admit statistical calculations aren't my greatest skill), but there did not appear to be a correlation.

COYS
12-21-2012, 11:23 PM
According to Pomeroy, Cornell has the 304th worst offense in the land. How much more awful could they be?

No way a good defensive performance against that offense should raise our efficiency number so much.

If I'm not mistaken, our defensive efficiency number only improved by about 1.5 points after Cornell. That's a decent swing but hardly earth-shattering. However, because of how tightly clustered the defensive efficiency numbers are for the teams closest to us in the rankings, it didn't take much for us to leapfrog a few teams and move up to 8th. Michigan State is currently ranked 12th behind us with a dEf of 86.4 while Duke is 8th at 86.3. KenPom must go into the hundredths or even thousandths to separate us in ranking when, effectively, we are tied. One more made three by Cornell or Elon and we might be ranked behind MSU at 13th or even lower, even though our efficiency numbers barely moved at all. And while Cornell might be bad at offense, a 54-8 run is impressive, so I'm comfortable with the numbers moving our way a bit, especially since we presumably put the clamps on Cornell like no one else has this year.

I'm not saying I disagree with your overall argument. In fact, I wish KenPom allowed us to restrict the data to eliminate the games in which good teams beat up on vastly inferior teams and coaching decisions about when to throw in the scrubs mess up the value of the data. That being said, I think we should look at the actual adjusted defensive efficiency numbers, rather than the rankings, to determine if Duke is being rated too highly for games against bad teams. The teams around Duke are so closely clustered that a change in ranking can be misleading when gauging how much KenPom's system actually changed Duke's efficiency numbers.
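A toy sketch of the clustering point: only the Duke (86.3) and Michigan State (86.4) figures come from the post above; the teams in between are invented to fill out the illustration.

```python
# Defensive efficiencies packed within a tenth of a point mean a tiny
# per-game change can leapfrog several teams at once.
teams = {"Duke": 86.3, "Team A": 86.32, "Team B": 86.35,
         "Team C": 86.38, "Michigan State": 86.4}

ranked = sorted(teams, key=teams.get)  # lower defensive efficiency is better
print(ranked[0])   # Duke leads, by two-hundredths of a point

teams["Duke"] += 0.12  # roughly one extra made three by Cornell or Elon
ranked = sorted(teams, key=teams.get)
print(ranked[-1])  # Duke now trails all four, despite a ~0.1 shift
```

The ordering flips completely on a change smaller than the noise in any single game, which is exactly why the raw numbers tell you more than the rank.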

uh_no
12-22-2012, 03:27 AM
If I'm not mistaken, our defensive efficiency number only improved by about 1.5 points after Cornell. That's a decent swing but hardly earth shattering.

1.5 points after 11 games into the season is a pretty darn big effect for a single game to have.....that would imply the defense in that single game was something like 15 points better than it was predicted to have been.....that's pretty darn significant. well....now that i think about it, I think he uses a decaying moving average, but i don't know what his weighting scheme is relative to how long ago a game was....either way....you can't view it as 1.5 points just for the game, but with a single game we improved our average for the entire season.
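The back-of-the-envelope above can be checked directly, assuming a plain unweighted 11-game average (Pomeroy's actual decay weighting, whatever it is, would change the exact figure):

```python
# If the season-long figure is a simple mean over 11 games, shifting
# that mean by 1.5 points requires the newest game to deviate from
# expectation by 11 x 1.5 points.
games = 11
average_shift = 1.5

single_game_deviation = games * average_shift
print(single_game_deviation)  # 16.5 -- in the ballpark of the "15 points" cited
```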

COYS
12-22-2012, 09:04 AM
1.5 points after 11 games into the season is a pretty darn big effect for a single game to have.....that would imply the defense in that single game was something like 15 points better than it was predicted to have been.....that's pretty darn significant. well....now that i think about it, I think he uses a decaying moving average, but i don't know what his weighting scheme is relative to how long ago a game was....either way....you can't view it as 1.5 points just for the game, but with a single game we improved our average for the entire season.

I understand that. But, for comparison's sake, after Arizona's regrettably incredible performance against us in the 2011 tourney, our D plummeted from 4th to 11th and moved more than 3 points in adjusted efficiency. This is after 37 games. A 1.5-point move 11 games into the season, aided by our bad preseason numbers steadily losing relevance and coming after we put together a 54-8 (!) run, seems perfectly reasonable even if Cornell is a bad offensive team.

cptnflash
12-22-2012, 09:46 AM
To any KenPom subscribers, how is the defensive rebounding percentage looking after the last couple games? Just curious to see if this also improves while playing some lesser teams.

Cornell was meh, we "held" them to a 29.7 oReb percentage (i.e., our dReb% was 70.3). The D1 median is currently 31.9/68.1, so I guess you could say we were a tiny bit better than average, but our size advantage was so overwhelming that I personally considered it disappointing.

Elon was the first game all year where you can honestly say we dominated the defensive glass, with a dReb% of 84.8 (Cornell oReb% was 15.2).

I thought it was great to see how Coach K, Ryan, Mason, Rasheed, and Quinn all mentioned defensive rebounding as an area of focus for the team in their interviews from earlier this week. In particular, I loved how Coach K used Jason Kidd as an example, and pointed out that the opposing team's point guard typically will not try for an offensive rebound because he's supposed to be the first man back in transition. So in theory, our point guard should have no one to box out, and should be free to pick up long rebounds in the open area near the foul line, or other loose balls that aren't rebounded directly under the basket. Quinn seems to have taken that example to heart, posting 7 defensive rebounds against Elon. He won't have that many every game, obviously, but if he can pick up an extra ball or two every game, it'll make a difference.
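The two percentages quoted above are complements of the same split of available misses. A minimal sketch of the bookkeeping (the raw rebound counts below are invented, chosen only to land near the quoted Cornell figure):

```python
def rebound_split(opp_off_rebounds, our_def_rebounds):
    """Opponent oReb% and our dReb%, computed over the same missed shots."""
    chances = opp_off_rebounds + our_def_rebounds
    oreb_pct = 100.0 * opp_off_rebounds / chances
    return round(oreb_pct, 1), round(100.0 - oreb_pct, 1)

# Hypothetical counts: the opponent grabs 11 of their 37 available misses.
print(rebound_split(11, 26))  # (29.7, 70.3) -- matching the quoted split
```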

-jk
12-22-2012, 11:01 AM
Apparently KenPom saw we didn't have practice yesterday - we dropped from 8th to 9th in Adj D. :eek:

-jk

uh_no
12-22-2012, 11:48 AM
Apparently KenPom saw we didn't have practice yesterday - we dropped from 8th to 9th in Adj D. :eek:

-jk

justifiably so!

if you don't use it, you lose it

cptnflash
12-23-2012, 01:12 AM
Apparently KenPom saw we didn't have practice yesterday - we dropped from 8th to 9th in Adj D. :eek:

-jk

OK, I'll take the bait, jump in, and state the obvious. We may not have played (or practiced), but other teams did play. All KenPom rankings are relative.

I feel like I've just been trolled!

ice-9
12-23-2012, 01:50 PM
The Wisconsin Effect is most easily explained in terms of two scenarios (assuming equal pace):
1) KenPom model predicts we beat Team A by 1. We beat them by 11 instead for a plus 10 margin.
2) KenPom model predicts we beat Team B by 30. We beat them by 40 instead for a plus 10 margin.

In both scenarios, it's the same margin and counts the same in the KenPom model.

But in the real world, beating a strong team by 11 (e.g. Louisville) is much more impressive than beating a weak team by 40 (e.g. Cornell). Duke could've probably beaten Cornell by more than 40 had our starters played more minutes or had we experimented less; margin at that level of disparity simply isn't a good indicator of team strength.

It's not linear; it's diminishing returns. The 10-point margin in Scenario 1 should count more than the one in Scenario 2, but in KenPom's model it doesn't.

That's where you get the Wisconsin effect that Kedsy is referring to, because Wisconsin is really, really good at beating up crappy teams and so usually looks good in KenPom. Because Duke has played a relatively tough schedule, we haven't had that opportunity. Getting that extra margin is tough, especially on the defensive side (relatively speaking).

While margin in points per possession shouldn't be accounted for on a linear basis, I do acknowledge that determining a better alternative could be arbitrary.
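The two scenarios can be sketched in code. The `diminishing_credit` function below is purely a hypothetical alternative: the log shrink is an arbitrary choice of mine, exactly the kind of arbitrariness conceded in the last paragraph.

```python
import math

def linear_credit(expected_margin, actual_margin):
    # How the model, as described above, treats margin: only the raw
    # surplus over expectation matters, regardless of the opponent.
    return actual_margin - expected_margin

def diminishing_credit(expected_margin, actual_margin):
    # Hypothetical alternative: shrink the surplus as the expected
    # margin grows, so piling on against a patsy counts for less.
    surplus = actual_margin - expected_margin
    return surplus / math.log(expected_margin + math.e)

# Scenario 1: expected to win by 1, won by 11.
# Scenario 2: expected to win by 30, won by 40.
print(linear_credit(1, 11), linear_credit(30, 40))             # 10 10 -- identical
print(diminishing_credit(1, 11) > diminishing_credit(30, 40))  # True
```

Under the linear treatment both games move the rating equally; under any diminishing scheme the surprise margin against the strong team counts for more.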

Des Esseintes
12-23-2012, 05:04 PM
The Wisconsin Effect is most easily explained in terms of two scenarios (assuming equal pace):
1) KenPom model predicts we beat Team A by 1. We beat them by 11 instead for a plus 10 margin.
2) KenPom model predicts we beat Team B by 30. We beat them by 40 instead for a plus 10 margin.

In both scenarios, it's the same margin and counts the same in the KenPom model.

But in the real world, beating a strong team by 11 (e.g. Louisville) is much more impressive than beating a weak team by 40 (e.g. Cornell). Duke could've probably beaten Cornell by more than 40 had our starters played more minutes or had we experimented less; margin at that level of disparity simply isn't a good indicator of team strength.

It's not linear, it's diminishing returns. The 10 point margin in Scenario 1 should count more than it does in Scenario 2, but in KenPom's model it doesn't.

That's where you get the Wisconsin effect that Kedsy is referring to, because Wisconsin is really, really good in beating up crappy teams and so usually looks good in KenPom. Because Duke has played a relatively tough schedule, we haven't had that opportunity. Getting that extra margin is tough, and specifically on the defensive side (relatively speaking).

While margin in points per possession shouldn't be accounted for on a linear basis, I do acknowledge that determining a better alternative could be arbitrary.

I think you nailed it. Massive blowouts of weaker teams yield information, no doubt about that, but the quality of information is not as high as against top-shelf competition. After all, as you point out, when winning is basically assured coaches behave differently in the one case than they do in the other. The final score has to be somewhat less meaningful.

That said, most of this stuff would come out in the wash of a long enough season. College basketball has too few games--and the competition is too asymmetric--to yield completely satisfying results. At the NBA level, you can be confident at the end of 82 games who the elite teams are (though even there, veteran "playoff switch-flipping" teams can distort the numbers), but at this level I just always wonder how robust the numbers are. They're the best we have, but that's not the same as saying they're great.

cptnflash
12-23-2012, 10:11 PM
The Wisconsin Effect is most easily explained in terms of two scenarios (assuming equal pace):
1) KenPom model predicts we beat Team A by 1. We beat them by 11 instead for a plus 10 margin.
2) KenPom model predicts we beat Team B by 30. We beat them by 40 instead for a plus 10 margin.

In both scenarios, it's the same margin and counts the same in the KenPom model.

But in the real world, beating a strong team by 11 (e.g. Louisville) is much more impressive than beating a weak team by 40 (e.g. Cornell). Duke could've probably beaten Cornell by more than 40 had our starters played more minutes or had we experimented less; margin at that level of disparity simply isn't a good indicator of team strength.

It's not linear; it's diminishing returns. The 10 point margin in Scenario 1 should count more than it does in Scenario 2, but in KenPom's model it doesn't.

That's where you get the Wisconsin effect that Kedsy is referring to, because Wisconsin is really, really good at beating up crappy teams and so usually looks good in KenPom. Because Duke has played a relatively tough schedule, we haven't had that opportunity. Getting that extra margin is tough, especially on the defensive side (relatively speaking).

While margin in points per possession shouldn't be accounted for on a linear basis, I do acknowledge that determining a better alternative could be arbitrary.

Wisconsin also plays at an extremely slow pace, so their lopsided wins look even more impressive in efficiency terms. That's what really makes them shine in his system, since it's all based on per possession data.

For example:

Let's say UNC beats some crappy team by 30 points, in a game that has 75 possessions (many of which start with Roy yelling at his players to push the ball up the court faster).

Let's also say that Wisconsin beats some similarly crappy team by 30 points, in a game that has only 60 possessions (many of which end with a Badger bucket coming with less than 5 seconds left on the shot clock).

Wisconsin has outplayed its opponent by 0.5 points per possession, while UNC has outplayed its opponent by 0.4 points per possession. Wisconsin will get more "credit" for this win than UNC, even if the final score in both games was exactly the same!
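The arithmetic in that example, as a trivial sketch:

```python
# Identical 30-point wins at different tempos give different
# per-possession margins, which is the unit KenPom's efficiency
# ratings work in.
def efficiency_margin(point_margin, possessions):
    return point_margin / possessions

unc = efficiency_margin(30, 75)   # UNC: 30 points over 75 possessions
wisc = efficiency_margin(30, 60)  # Wisconsin: 30 points over 60 possessions
print(f"UNC +{unc:.2f} ppp, Wisconsin +{wisc:.2f} ppp")
# UNC +0.40 ppp, Wisconsin +0.50 ppp
```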

Acymetric
12-23-2012, 11:01 PM
For those complaining about the seemingly unreasonable fluctuations in KenPom's rankings...bear in mind that his ratings aren't as meaningful until later in the season when they are "fully connected" (or at least that is what I'm recalling). Correct me if I'm wrong, but that most likely has something to do with it.

loran16
12-23-2012, 11:07 PM
The Wisconsin Effect is most easily explained in terms of two scenarios (assuming equal pace):
1) KenPom model predicts we beat Team A by 1. We beat them by 11 instead for a plus 10 margin.
2) KenPom model predicts we beat Team B by 30. We beat them by 40 instead for a plus 10 margin.

In both scenarios, it's the same margin and counts the same in the KenPom model.

But in the real world, beating a strong team by 11 (e.g. Louisville) is much more impressive than beating a weak team by 40 (e.g. Cornell). Duke could've probably beaten Cornell by more than 40 had our starters played more minutes or had we experimented less; margin at that level of disparity simply isn't a good indicator of team strength.

It's not linear; it's diminishing returns. The 10 point margin in Scenario 1 should count more than it does in Scenario 2, but in KenPom's model it doesn't.

That's where you get the Wisconsin effect that Kedsy is referring to, because Wisconsin is really, really good at beating up crappy teams and so usually looks good in KenPom. Because Duke has played a relatively tough schedule, we haven't had that opportunity. Getting that extra margin is tough, especially on the defensive side (relatively speaking).

While margin in points per possession shouldn't be accounted for on a linear basis, I do acknowledge that determining a better alternative could be arbitrary.

This is not true. Again, Pomeroy doesn't care about the absolute margin of victory; it cares about efficiency. Take Duke over Temple - we were supposed to win by 11 points, we won by 23. Pomeroy essentially notes that the efficiency gap between Duke and Temple was twice as much as expected, and adjusts accordingly (note: the system is Bayesian, so it doesn't throw out the data that led it to believe that Duke was x points per possession better). If Indiana beats a #300 team by 50 instead of 40, it notes that Indiana did NOT outperform its expected efficiency advantage by that much, and adjusts them marginally.

Again, the Wisconsin effect isn't caused by falsely equating margins of victory. It's caused by the system's consistent overrating of extremely extremely poor teams, none of whom Duke has played.

EDIT: Acymetric: At this point according to Pomeroy, preseason data has less weight than 2 games - so it's a very very small part of the data - and there is a large amount of data per team. Pomeroy is fine to use right now.
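A toy sketch of that Bayesian flavor (all weights are invented for illustration; per the EDIT, preseason information currently counts for less than 2 games' worth of data):

```python
def update_rating(prior_edge, games_played, observed_edge, preseason_weight=2.0):
    # Treat the prior as (games_played + preseason_weight) pseudo-games
    # and blend in one new observation; nothing is thrown out.
    n = games_played + preseason_weight
    return (prior_edge * n + observed_edge) / (n + 1)

# Expected to be +0.11 ppp better, actually +0.23 in the new game:
print(round(update_rating(0.11, 10, 0.23), 3))  # nudged up, not doubled
```

The point of the sketch is just that one surprising result shifts the estimate a little, while the accumulated prior evidence still dominates.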

Kedsy
12-23-2012, 11:15 PM
For those complaining about the seemingly unreasonable fluctuations in KenPom's rankings...bear in mind that his ratings aren't as meaningful until later in the season when they are "fully connected" (or at least that is what I'm recalling). Correct me if I'm wrong, but that most likely has something to do with it.

Actually, KenPom's ratings fluctuate wildly all the way to the end. In 2010, going into their conference tournament, Butler's defense was 24th in the country. After the NCAAT was over, their defense was 5th. In 2011, Kentucky's D was 38th going into their conference tournament and they finished 15th. Also in 2011, UConn's offense and defense were both 32nd going into their conference tournament. After the NCAAT, they were 16th (O) and 14th (D). VCU in 2011 was rated 68th on O and 156th on D coming into their conference tourney and ended the season rated 32nd (O) and 86th (D). They'd all played at least 28 games before those huge jumps. (And these are just the first four examples I looked at; I'm sure there are plenty of others.)

Granted, these teams all won at least 7 games in a row to cause those improvements, so maybe they deserved them. But either way, his ratings fluctuate all the way to the end.

Kedsy
12-23-2012, 11:18 PM
Again, the Wisconsin effect isn't caused by falsely equating margins of victory. It's caused by the system's consistent overrating of extremely extremely poor teams, none of whom Duke has played.

You didn't respond to me before, so I'll ask you again. Cornell's offense is currently rated 302nd in the nation (and this is the key rating since it's our defense that jumped up after we played them). Overall, they're #258. They're not an "extremely extremely poor team"? Compared to whom?

ice-9
12-24-2012, 12:30 AM
This is not true. Again, Pomeroy doesn't care about the absolute margin of victory; it cares about efficiency. Take Duke over Temple - we were supposed to win by 11 points, we won by 23. Pomeroy essentially notes that the efficiency gap between Duke and Temple was twice as much as expected, and adjusts accordingly (note: the system is Bayesian, so it doesn't throw out the data that led it to believe that Duke was x points per possession better). If Indiana beats a #300 team by 50 instead of 40, it notes that Indiana did NOT outperform its expected efficiency advantage by that much, and adjusts them marginally.

Again, the Wisconsin effect isn't caused by falsely equating margins of victory. It's caused by the system's consistent overrating of extremely extremely poor teams, none of whom Duke has played.

EDIT: Acymetric: At this point according to Pomeroy, preseason data has less weight than 2 games - so it's a very very small part of the data - and there is a large amount of data per team. Pomeroy is fine to use right now.


Hmm, I wish KenPom had better documentation on how he defines things, but I do think you're misunderstanding at least part of my previous post. It's not absolute margin, yes, but it is about margin per possession -- which is why in the two scenarios I outlined above pace is explicitly assumed to be equal. (I'm not sure how you're defining "absolute margin" vs. "efficiency," btw.)

What I'm surprised about is your assertion that a 10 point "more than expected" win over a lousy team is weighted less than one over a good team...assuming an equal number of possessions, of course! If the pace is different then yes, the impact would be different (higher pace teams would have a lower impact, lower pace teams would have a higher impact). But if the pace is the same, how would KenPom's math work to make the impact relative, as per your post?

Here's a passage from KenPom's explanation of his ratings (http://kenpom.com/blog/index.php/weblog/entry/ratings_explanation):


How do you cap margin of victory?

This is the most obvious problem with the system - there is no cap on margin of victory. It’s not that I’m particularly comfortable with it, but I’ve looked at quite a few ways to limit the impact of MOV, and I haven’t found one that I like, yet. I’ll find something someday, but until then we have to deal with things like Georgia being ranked 11th and Oklahoma being ranked 17th at this point (12/10/06) in the season. More games will push these teams to their rightful location.

This would imply that margin per possession is counted on an absolute (linear) basis, not relative (diminishing returns).

All that said, I am no statistician, just a struggling tech entrepreneur, so I claim no expertise on the matter. :p

Des Esseintes
12-24-2012, 05:05 PM
This is not true. Again, Pomeroy doesn't care about the absolute margin of victory; it cares about efficiency. Take Duke over Temple - we were supposed to win by 11 points, we won by 23. Pomeroy essentially notes that the efficiency gap between Duke and Temple was twice as much as expected, and adjusts accordingly (note: the system is Bayesian, so it doesn't throw out the data that led it to believe that Duke was x points per possession better). If Indiana beats a #300 team by 50 instead of 40, it notes that Indiana did NOT outperform its expected efficiency advantage by that much, and adjusts them marginally.

Again, the Wisconsin effect isn't caused by falsely equating margins of victory. It's caused by the system's consistent overrating of extremely extremely poor teams, none of whom Duke has played.

EDIT: Acymetric: At this point according to Pomeroy, preseason data has less weight than 2 games - so it's a very very small part of the data - and there is a large amount of data per team. Pomeroy is fine to use right now.

You're missing the guy's point. Yes, KenPom's stats are tempo-free, which means the final score is immaterial; the points-per-possession scored and allowed are how he ranks teams. And I agree that tempo-free margins offer a sharper image of what happened than absolute score margins. But Ice-9's argument does not live or die on tempo-free vs. absolute score. He stated that beating a bad team by 30 when the expected margin is 20 is less meaningful than beating an elite team by 11 when the expected margin is 1. If we say that beating a bad team by a ppp of .3 when the expected margin is .2 is less meaningful than beating an elite team by .15 when the expected ppp margin is .05, we are making a very similar statement. Different, sure, but pretty similar when the central idea is that high-level competition inherently yields more illumination than low-level competition. As stated upthread, we don't toss the info gained from weak team blowouts; blowouts do have value and help complete the picture. But it's mistaken, I would submit, to think that that data is of the same quality as games against the best competition.

ChillinDuke
12-26-2012, 11:31 AM
You're missing the guy's point. Yes, KenPom's stats are tempo-free, which means the final score is immaterial; the points-per-possession scored and allowed are how he ranks teams. And I agree that tempo-free margins offer a sharper image of what happened than absolute score margins. But Ice-9's argument does not live or die on tempo-free vs. absolute score. He stated that beating a bad team by 30 when the expected margin is 20 is less meaningful than beating an elite team by 11 when the expected margin is 1. If we say that beating a bad team by a ppp of .3 when the expected margin is .2 is less meaningful than beating an elite team by .15 when the expected ppp margin is .05, we are making a very similar statement. Different, sure, but pretty similar when the central idea is that high-level competition inherently yields more illumination than low-level competition. As stated upthread, we don't toss the info gained from weak team blowouts; blowouts do have value and help complete the picture. But it's mistaken, I would submit, to think that that data is of the same quality as games against the best competition.

I want to agree with you, and I think I probably do. But to further this discussion, I'm not completely sure that "high-level" competition definitively offers more than "low-level" competition. The way I see it is there is a spectrum (perhaps consider it a normal bell curve for discussion/illustrative purposes - might not be factual, but that's not my point for now) in which the mean of the bell curve is set on an expected margin of victory. With this in mind, it is easy to see that no matter the level of competition a team is equally likely to miss the expected margin either by more or by less.

Think of it like betting on the spread of a game. I'm not really a sports gambler so don't know a ton about it, but I have to imagine that betting on spreads does not yield any long-term profit potential, as results will land half above and half below the spread. My point is, either way you prefer to look at it, teams will under-perform and over-perform an expected margin of victory with approximately equal likelihood regardless of the level of competition. Junky team? Expected margin may be 25. We win by 10 or by 40 - same likelihood. Great team? Expected margin may be 3. We win by 20 or lose by 14 - same likelihood.

Again, my point is that I'm not sure "high-level" competition is a definitively better barometer for weighing a team, assuming the expected margins are set in such a way that they are accurate. Especially when you consider "high-level" is a different definition for every team. In this way the games against "low-level" competition say just as much as the games against "high-level". They may not be water cooler talk material, but they offer the same value in an accurate statistical model.
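That symmetric-miss picture can be checked with a quick simulation (the 11-point sigma below is just an assumed value for game-to-game noise, not a measured figure):

```python
import random

random.seed(42)

# If results are symmetric noise around an accurate expected margin,
# overshooting and undershooting by the same amount are equally
# likely at any level of competition.
def tail_probs(expected_margin, deviation, sigma=11.0, n=200_000):
    over = under = 0
    for _ in range(n):
        result = random.gauss(expected_margin, sigma)
        if result >= expected_margin + deviation:
            over += 1
        elif result <= expected_margin - deviation:
            under += 1
    return over / n, under / n

# Junky team, expected margin 25: win by 40 vs. win by only 10.
print(tail_probs(25, 15))
# Good team, expected margin 3: win by 20 vs. lose by 14.
print(tail_probs(3, 17))
```

Both pairs come out essentially equal, which is the "half above, half below" intuition; whether real results actually behave this symmetrically is the open question.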

Would love to hear more views on this.

- Chillin

ChillinDuke
12-26-2012, 11:37 AM
I have an honest question for all the Dork Trackers out there. And I'm sure that this has been discussed in previous years. But I forget, does anyone know the main difference between KenPom and Sagarin?

I ask because I generally have leaned toward KenPom in the past, but I find Sagarin's rankings just "feel" more accurate. [Please note, that I am obviously biased in my "feel"]

Duke at #1, UK at #35, and UNC at #56 (although also pleasing) appears to be a reasonable interpretation of those teams' rankings. The same three teams at #3, #13, and #26, respectively, seems off. Indiana is another example: at #8 in Sagarin and #1 in KenPom, the Sagarin placement also seems more reasonable.

Is it b/c KenPom does that preseason weighting thing (although shouldn't it be almost gone)? Any other explanations why Sagarin passes the eye test better?

- Chillin

uh_no
12-26-2012, 12:09 PM
I have an honest question for all the Dork Trackers out there. And I'm sure that this has been discussed in previous years. But I forget, does anyone know the main difference between KenPom and Sagarin?

I ask because I generally have leaned toward KenPom in the past, but I find Sagarin's rankings just "feel" more accurate. [Please note, that I am obviously biased in my "feel"]

Duke at #1, UK at #35, and UNC at #56 (although also pleasing) appears to be a reasonable interpretation of those teams' rankings. The same three teams at #3, #13, and #26, respectively, seems off. Indiana is another example: at #8 in Sagarin and #1 in KenPom, the Sagarin placement also seems more reasonable.

Is it b/c KenPom does that preseason weighting thing (although shouldn't it be almost gone)? Any other explanations why Sagarin passes the eye test better?

- Chillin

KenPom's ratings are very iffy until most of the season is up and teams have played more similar schedules in terms of average difficulty, which isn't the case now.

Also curious, why do you think a team that has one loss, in overtime, to a very good Butler team is a better fit at #8 than #1? 2-4 seems most appropriate, but I'm not sure I can find 7 teams that are anywhere as good as Indiana.

Kedsy
12-26-2012, 12:29 PM
I have an honest question for all the Dork Trackers out there. And I'm sure that this has been discussed in previous years. But I forget, does anyone know the main difference between KenPom and Sagarin?

I ask because I generally have leaned toward KenPom in the past, but I find Sagarin's rankings just "feel" more accurate. [Please note, that I am obviously biased in my "feel"]

Duke at #1, UK at #35, and UNC at #56 (although also pleasing) appears to be a reasonable interpretation of those teams' rankings. The same three teams at #3, #13, and #26, respectively, seems off. Indiana is another example: at #8 in Sagarin and #1 in KenPom, the Sagarin placement also seems more reasonable.

Is it b/c KenPom does that preseason weighting thing (although shouldn't it be almost gone)? Any other explanations why Sagarin passes the eye test better?

- Chillin

Sagarin does a pre-season weighting too, that also dissipates over time (until all teams are "connected"). To answer your first question, Sagarin's ratings are based on the scores of the games while Pomeroy's ratings are based on points per possession. Pomeroy's calculations are "tempo free" -- because he calculates per possession his numbers necessarily take pace into account. Sagarin has a couple different points-based ratings and I'm not sure what his formulas are, but both appear to be scoring margin-based, which wouldn't really take pace into account. I believe the difference between his two systems is one of them also takes wins and losses into account as well as margin, so that would be even further from Pomeroy. I believe Sagarin also uses a diminishing returns principle to deal with blowouts while Pomeroy I don't think does.


I want to agree with you, and I think I probably do. But to further this discussion, I'm not completely sure that "high-level" competition definitively offers more than "low-level" competition. The way I see it is there is a spectrum (perhaps consider it a normal bell curve for discussion/illustrative purposes - might not be factual, but that's not my point for now) in which the mean of the bell curve is set on an expected margin of victory. With this in mind, it is easy to see that no matter the level of competition a team is equally likely to miss the expected margin either by more or by less.

Think of it like betting on the spread of a game. I'm not really a sports gambler so don't know a ton about it, but I have to imagine that betting on spreads does not yield any long-term profit potential, as results will land half above and half below the spread. My point is, either way you prefer to look at it, teams will under-perform and over-perform an expected margin of victory with approximately equal likelihood regardless of the level of competition. Junky team? Expected margin may be 25. We win by 10 or by 40 - same likelihood. Great team? Expected margin may be 3. We win by 20 or lose by 14 - same likelihood.

Again, my point is that I'm not sure "high-level" competition is a definitively better barometer for weighing a team, assuming the expected margins are set in such a way that they are accurate. Especially when you consider "high-level" is a different definition for every team. In this way the games against "low-level" competition say just as much as the games against "high-level". They may not be water cooler talk material, but they offer the same value in an accurate statistical model.

Would love to hear more views on this.

- Chillin

Well, I think in theory you are close to right in that Pomeroy adjusts for the quality of the offense or defense against which a team's efficiency is measured. The possible issue (in my mind, at least -- some disagree) is that the adjustment doesn't work sufficiently well so that good teams often perform much better against poor teams than Pomeroy's ratings predict. My theory is that the "true" difference between the teams would be some sort of step function, rather than a smooth curve as you've suggested. I'm not sure how to prove or disprove that theory, however.

ChillinDuke
12-26-2012, 01:40 PM
Sagarin does a pre-season weighting too, that also dissipates over time (until all teams are "connected"). To answer your first question, Sagarin's ratings are based on the scores of the games while Pomeroy's ratings are based on points per possession. Pomeroy's calculations are "tempo free" -- because he calculates per possession his numbers necessarily take pace into account. Sagarin has a couple different points-based ratings and I'm not sure what his formulas are, but both appear to be scoring margin-based, which wouldn't really take pace into account. I believe the difference between his two systems is one of them also takes wins and losses into account as well as margin, so that would be even further from Pomeroy. I believe Sagarin also uses a diminishing returns principle to deal with blowouts while Pomeroy I don't think does.

Thanks, Keds.


Well, I think in theory you are close to right in that Pomeroy adjusts for the quality of the offense or defense against which a team's efficiency is measured. The possible issue (in my mind, at least -- some disagree) is that the adjustment doesn't work sufficiently well so that good teams often perform much better against poor teams than Pomeroy's ratings predict. My theory is that the "true" difference between the teams would be some sort of step function, rather than a smooth curve as you've suggested. I'm not sure how to prove or disprove that theory, however.

I think you are probably right to some degree. My point was only that low-level competition offers comparable value to high-level competition in a good statistical model. I just used a bell curve as a simplified example - a step function is an interesting idea, and my gut says it would better lump teams into levels of competition (pun? irony? something?).


KenPom's ratings are very iffy until most of the season is up and teams have played more similar schedules in terms of average difficulty, which isn't the case now.

Also curious, why do you think a team that has one loss, in overtime, to a very good Butler team is a better fit at #8 than #1? 2-4 seems most appropriate, but I'm not sure I can find 7 teams that are anywhere as good as Indiana.

I don't think they are #8, but I also don't think they are #1, let alone #1 by a margin larger than the spread between #2 and #4. Regardless, maybe not the best example.

- Chillin

uh_no
12-26-2012, 02:06 PM
Thanks, Keds.



I think you are probably right to some degree. My point was only that low-level competition offers comparable value to high-level competition in a good statistical model. I just used a bell curve as a simplified example - a step function is an interesting idea, and my gut says it would better lump teams into levels of competition (pun? irony? something?).



I don't think they are #8, but I also don't think they are #1, let alone #1 by a margin larger than the spread between #2 and #4. Regardless, maybe not the best example.

- Chillin

they're .03 above Louisville....you can't even pretend that that's a large margin....2-4 happen to be very closely bunched, yes, but for comparison, the 4-5 gap is .08.....

Are they #1? probably not....are they in the bunch of 4 teams tightly packed at the top? yes....and that's what the rankings show.

loran16
12-26-2012, 03:54 PM
Sagarin does a pre-season weighting too, that also dissipates over time (until all teams are "connected"). To answer your first question, Sagarin's ratings are based on the scores of the games while Pomeroy's ratings are based on points per possession. Pomeroy's calculations are "tempo free" -- because he calculates per possession his numbers necessarily take pace into account. Sagarin has a couple different points-based ratings and I'm not sure what his formulas are, but both appear to be scoring margin-based, which wouldn't really take pace into account. I believe the difference between his two systems is one of them also takes wins and losses into account as well as margin, so that would be even further from Pomeroy. I believe Sagarin also uses a diminishing returns principle to deal with blowouts while Pomeroy I don't think does.


More importantly - Sagarin's system is much more proprietary - the results are public, but the inputs are not clear. Pomeroy's is more transparent - anyone can calculate efficiencies on their own, he just adjusts for difficulty, which I'm pretty sure he's explained on the site before.



You didn't respond to me before, so I'll ask you again. Cornell's offense is currently rated 302nd in the nation (and this is the key rating since it's our defense that jumped up after we played them). Overall, they're #258. They're not an "extremely extremely poor team"? Compared to whom?


I missed this, sorry Kedsy. Here's my answer - let's look at some comparable teams' schedules:

Wisconsin:
324 SE Louisiana
340 Presbyterian
338 Nebraska-Omaha
304 UW Milwaukee

Those are truly awful teams (Wisconsin amusingly has also played Cornell). The best of those teams has a Pomeroy expected win % (Pythag - against average teams only) of 18.64%. And that's the best of those teams (Presbyterian would have a win percentage under 8%).

Cornell's not very good - but Cornell has a 29% win %. In essence they'd win 3 of 10 games against average teams, for a 3-7 record.

This is the Wisconsin effect source - those are beyond horrific teams.

So Cornell is close, but it's borderline. Elon is not even close - their pythag (Expected Win %) is over 50%...(which makes sense as they're slightly above the NCAA average).

So of the teams you mention where Duke's D has made the most improvement in the system (and the improvement is not very large in absolute terms; these teams are close together, by the way), only one is close to being truly awful.
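For reference, the Pythag numbers quoted above come from KenPom's expected-winning-percentage formula, built on adjusted offensive and defensive efficiency. The 11.5 exponent below is the commonly cited value for his college formula, so treat both it and the sample efficiencies as approximate:

```python
def pythag(adj_o, adj_d, exponent=11.5):
    # Expected winning percentage against an average team, from
    # adjusted points scored/allowed per 100 possessions.
    return adj_o**exponent / (adj_o**exponent + adj_d**exponent)

# Hypothetical efficiencies: a team outscored by 10 per 100 possessions
# projects well under .500 against average opposition.
print(round(pythag(95, 105), 3))
```

The steep exponent is what makes truly awful teams bottom out near single-digit win percentages, as in the Presbyterian example.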

ChillinDuke
12-26-2012, 04:06 PM
Kenpoms ratings are very iffy until most of the season is up and teams have played more similar schedules in terms of average difficulty, which isn't the case now.

Also curious, why do you think a team who has one loss, in overtime, to a very good butler team is a better #8 than #1? 2-4 seems most appropriate, but i'm not sure I can find 7 teams that are anywhere as good as indiana.

You're right, and we're splitting hairs. Indiana is very good. No statistics needed.

- Chillin

Kedsy
12-26-2012, 11:21 PM
I missed this, sorry Kedsy. Here's my answer - let's look at some comparable teams' schedules:

Wisconsin:
324 SE Louisiana
340 Presbyterian
338 Nebraska-Omaha
304 UW Milwaukee

Those are truly awful teams (Wisconsin amusingly has also played Cornell). The best of those teams has a Pomeroy expected win % (Pythag - against average teams only) of 18.64%. And that's the best of those teams (Presbyterian would have a win percentage under 8%).

Cornell's not very good - but Cornell has a 29% win %. In essence they'd win 3 of 10 games against average teams, for a 3-7 record.

This is the Wisconsin effect source - those are beyond horrific teams.

So Cornell is close, but it's borderline. Elon is not even close - their pythag (Expected Win %) is over 50%...(which makes sense as they're slightly above the NCAA average).

So of the teams you mention where Duke's D has made the most improvement in the system (and the improvement is not very large in absolute terms; these teams are close together, by the way), only one is close to being truly awful.

OK, I understand your point now. Thanks.

But isn't it just a matter of degree? Meaning Wisconsin is helped more than Duke because they played 4 or 5 awful teams and we only played 1? The phenomenon is still present even if you only play one awful team, isn't it? It's just not as pronounced?

darthur
12-26-2012, 11:39 PM
More importantly - Sagarin's system is much more proprietary - the results are public, but the inputs are not clear. Pomeroy's is more transparent - anyone can calculate efficiencies on their own, he just adjusts for difficulty, which I'm pretty sure he's explained on the site before.

The specifics of Sagarin's system may be less transparent, but it's Bayesian, which is a pretty well known approach:

http://en.wikipedia.org/wiki/Bayesian_network

At a very high level, the idea is to model the probability of various game outcomes in terms of unknown team strengths, and then report the team strengths that maximize the probability of what actually happened. Unlike Pomeroy's, this system isn't really specific to basketball, but it is a general and effective machine-learning technique.
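A minimal version of that idea (essentially Massey's least-squares method, sketched here as an illustration of the general approach rather than Sagarin's actual, proprietary system): model each game's margin as the difference in team strengths plus noise, and pick the strengths that minimize squared error, which is the maximum-likelihood answer under Gaussian noise.

```python
teams = ["A", "B", "C"]
games = [("A", "B", 10), ("B", "C", 7), ("A", "C", 20)]  # (winner, loser, margin)

# Batch gradient descent on the squared-error objective; since only
# rating differences matter, starting at zero keeps the mean at zero.
ratings = {t: 0.0 for t in teams}
lr = 0.05
for _ in range(5000):
    grad = {t: 0.0 for t in teams}
    for w, l, m in games:
        err = (ratings[w] - ratings[l]) - m  # predicted minus observed
        grad[w] += err
        grad[l] -= err
    for t in teams:
        ratings[t] -= lr * grad[t]

print({t: round(r, 2) for t, r in ratings.items()})
# {'A': 10.0, 'B': -1.0, 'C': -9.0}
```

With real data you would weight recent games, discount blowouts, and add home-court terms; the proprietary part of a system like Sagarin's lives largely in those choices.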

ice-9
12-26-2012, 11:52 PM
I think you are probably right to some degree. My point was only that low-level competition offers comparable value to high-level competition in a good statistical model. I just used a bell curve as a simplified example - a step function is an interesting idea, and my gut says it would better lump teams into levels of competition (pun? irony? something?).

I think it can definitely be of value, especially when a team isn't able to blow out a low-level competitor. Only beat Cornell by 2 points? That probably says something about the winning team. But I don't think a 40 point laugher over Cornell is any worse than a 42 point win.

The Wisconsin effect isn't that they play crappy competition. It's that they play crappy competition -- and then destroy them (on a points per possession basis).

loran16
12-29-2012, 01:22 AM
OK, I understand your point now. Thanks.

But isn't it just a matter of degree? Meaning Wisconsin is helped more than Duke because they played 4 or 5 awful teams and we only played 1? The phenomenon is still present even if you only play one awful team, isn't it? It's just not as pronounced?

The latter. Nearly every top team plays one or two super-cupcakes. So they all get helped by the effect to a minor extent, but it evens out. The Wisconsin effect occurs when a team faces a ton of super-cupcakes. Indiana's another such team, playing 7 teams outside the top 250 (although only their most recent opponent was sub-300).

@Darthur - every computer ranking system is Bayesian. Only humans forget Bayesian principles at times.

toooskies
12-29-2012, 01:14 PM
The latter. Nearly every top team plays one or two super-cupcakes. So they all get helped by the effect to a minor extent, but it evens out. The Wisconsin effect occurs when a team faces a ton of super-cupcakes. Indiana's another such team, playing 7 teams outside the top 250 (although only their most recent opponent was sub-300).

@Darthur - every computer ranking system is Bayesian. Only humans forget Bayesian principles at times.

The Wisconsin effect is also enhanced because of their home-court advantage. They use a different brand of basketball than everyone else, which inflates their advantage by a possession or two.

Newton_14
12-29-2012, 02:46 PM
The Wisconsin effect is also enhanced because of their home-court advantage. They use a different brand of basketball than everyone else, which inflates their advantage by a possession or two.

If you are being serious here, I thought the NCAA made the same ball mandatory for all teams several years ago? Duke used to use a different ball that was a very dark tan in color, but that went out with the rule change, or so I thought..

Am I misremembering?

TexHawk
12-29-2012, 03:00 PM
If you are being serious here, I thought the NCAA made the same ball mandatory for all teams several years ago? Duke used to use a different ball that was a very dark tan in color, but that went out with the rule change, or so I thought..

Am I misremembering?

Not according to this (http://www.nytimes.com/2012/03/02/sports/ncaabasketball/college-home-teams-can-pick-their-brands-of-basketballs.html?pagewanted=all&_r=0). Some notes in there about the Wisconsin ball, which nobody else uses in the NCAA.

Newton_14
12-29-2012, 04:19 PM
Not according to this (http://www.nytimes.com/2012/03/02/sports/ncaabasketball/college-home-teams-can-pick-their-brands-of-basketballs.html?pagewanted=all&_r=0). Some notes in there about the Wisconsin ball, which nobody else uses in the NCAA.

Very interesting, thanks for the link. I searched to see if the ACC mandates a specific ball in league play but could not even find the ACC Basketball Rulebook anywhere. Evidently Duke switched when they went to the Nike contract, and not due to any rule change.

loran16
12-29-2012, 04:59 PM
The Wisconsin effect is also enhanced because of their home-court advantage. They use a different brand of basketball than everyone else, which inflates their advantage by a possession or two.

Again, I'd disagree with this, given that Wisconsin's home-court edge doesn't seem huge against decent teams. The ball is an edge, but the crappy opponents are a bigger edge.

uh_no
12-29-2012, 05:08 PM
Very interesting, thanks for the link. I searched to see if the ACC mandates a specific ball in league play but could not even find the ACC Basketball Rulebook anywhere. Evidently Duke switched when they went to the Nike contract, and not due to any rule change.

I don't believe there are any league-specific rules in NCAA Bball. Teams generally use the ball of the company that sponsors them

We use a Nike ball. Maryland uses an Under Armour ball.

Indoor66
12-29-2012, 05:16 PM
Maryland uses an Under Armour ball.

Gee, that sounds almost pornographic. On second thought, that may be appropriate for the turtles.

uh_no
12-29-2012, 05:20 PM
Gee, that sounds almost pornographic. On second thought, that may be appropriate for the turtles.

then this might not be safe for work

http://gamedayr.com/wp-content/slideshow/2012/10/new-maryland-under-armour-basketball-uniforms-2012/full/new-2012-maryland-basketball-uniforms-570x380.jpeg

uh_no
01-01-2013, 02:44 AM
after indiana's game their offensive efficiency dropped significantly....over a full point....putting them in a virtual tie (less than a tenth of a point) with duke for the best offense in the land.

Listen to Quants
01-01-2013, 05:33 PM
The specifics of Sagarin's system may be less transparent, but it's Bayesian, which is a pretty well known approach:

http://en.wikipedia.org/wiki/Bayesian_network

At a very high level, the idea is to model the probability of various game outcomes in terms of unknown team strengths, and then report the team strengths that maximize the probability of what actually happened. Unlike Pomeroy, this system is not specific to basketball really, but it is a general and effective technique for machine learning.
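A minimal sketch of the "find the team strengths that best explain the results" idea (not Sagarin's actual method, which is unpublished): model each game's margin as the difference in unknown ratings plus noise, and solve by least squares, which is the maximum-likelihood fit if the noise is Gaussian. The games and teams below are hypothetical.

```python
import numpy as np

# Hypothetical results: (winner_index, loser_index, margin_of_victory).
games = [(0, 1, 10), (1, 2, 5), (0, 2, 12), (2, 1, 3)]
n_teams = 3

# Model: margin ~ rating[winner] - rating[loser] + noise. Build the
# design matrix and solve the least-squares problem; this is the MLE
# when margins are Gaussian around the rating difference.
A = np.zeros((len(games), n_teams))
b = np.zeros(len(games))
for row, (w, l, m) in enumerate(games):
    A[row, w], A[row, l], b[row] = 1.0, -1.0, m

# Ratings are only identified up to an additive constant, so append a
# row pinning the mean rating to zero.
A = np.vstack([A, np.ones(n_teams)])
b = np.append(b, 0.0)
ratings, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Team 0 (two comfortable wins) comes out on top; teams 1 and 2 split head-to-head, and the fit balances those contradictory results.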

That lack of transparency, in Sagarin's system, makes it far less interesting to me. If I only want to know which team is likely to win and by how much, I look to Vegas rather than either system. Pomeroy's site tells more than that: it measures defensive and offensive strengths and pace. If you pay for it, it also provides a lot of nice player numbers. Both Sagarin and Pomeroy seem to get very close to Vegas numbers by the end of the year (absent important injuries).

I'd like to see a reverse-engineered Vegas power rating (that is, create the power rating that underlies the Vegas lines), but am too lazy to do one (easy for the NFL, hard for college hoops).

loran16
01-01-2013, 07:11 PM
Pomeroy has just released a post showing the results of 10000 simulations of each conference's play:
http://kenpom.com/blog/index.php/weblog/entry/conference_title_predictions

The ACC goes as follows:
Conference Champion %:
Duke 87.97%
UVA 4.76%
NC State 2.35%
Miami 2.26%
UNC 1.43%
Maryland .59%
GTech .43%
FSU .14%
Clemson .08%

UVA is probably overrated by Pomeroy's computer - as admitted by the man himself - but UVA only plays Duke, NC State, & Miami once - meaning their schedule isn't that bad. By contrast, UNC's single game opponents are Clemson, VaTech, BC, and Wake - the worst teams in the conference. Bad time to have a not great year.

Of course this system doesn't know that Miami is without Reggie Johnson.

----------------
Side Note: This might be interesting as well to Duke fans:
MEAC Champion odds:
1. NC Central: 45.89%

Yep, NC Central may very well reach the tournament this year.
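For the curious, numbers like these come from simulating the remaining schedule many times. A toy version with made-up ratings and log5 win probabilities (Pomeroy's actual model differs) looks like:

```python
import random

# Hypothetical Pythagorean-style win percentages for a 3-team league.
pyth = {"Duke": 0.95, "UVA": 0.80, "UNC": 0.70}

def log5(a, b):
    """Probability the team rated a beats the team rated b (log5 rule)."""
    return (a - a * b) / (a + b - 2 * a * b)

def simulate_season(trials=10000, seed=1):
    """Simulate a double round-robin many times; return title shares."""
    random.seed(seed)
    titles = {t: 0.0 for t in pyth}
    teams = list(pyth)
    for _ in range(trials):
        wins = {t: 0 for t in pyth}
        for i, a in enumerate(teams):          # every pair plays twice
            for b in teams[i + 1:]:
                for _ in range(2):
                    winner = a if random.random() < log5(pyth[a], pyth[b]) else b
                    wins[winner] += 1
        best = max(wins.values())              # split ties evenly
        leaders = [t for t, w in wins.items() if w == best]
        for t in leaders:
            titles[t] += 1 / len(leaders)
    return {t: titles[t] / trials for t in titles}
```

Run 10,000 trials and the best team wins the lion's share of titles but nowhere near all of them, which is exactly the shape of the ACC table above.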

uh_no
01-02-2013, 11:00 AM
and as of today, duke has the top offense in the country!

COYS
01-02-2013, 12:28 PM
and as of today, duke has the top offense in the country!

For all the (deserved) attention Coach K gets for his defense-first approach, it is truly incredible how good Duke is on offense every single year no matter how much offensive production is lost from the previous season. In fact (per KenPom), Duke is arguably a more consistent team on the offensive end than on the defensive end (going back to 2003), with only the 2007 season showing a Duke offense out of the top 20. Most of the other years, Duke boasted top 10 or even top 5 offenses. Even last year, though Duke finished 11th after losing to Lehigh, we were as high as number one and still remained close to the top spot before Ryan was injured.

We are obviously fond of marveling at K's ability to get the most out of his team, but when you think about the different types of teams Duke has had since 2003, you just have to marvel a little bit more. These teams have scored their buckets with dominant post scorers (2004-2006 with Shelden and this year with Mason), with killer jump shooters (the JJ years), with a team of guards and one small forward (2008), with combo guards playing the point position (2009 and 2010), with pure point guards (Duhon years, the Kyrie games, and now with Quinn), and with three-pointers and offensive boards (2010). Duke has scored whether playing fast or slow. Duke has scored by forcing turnovers with aggressive man-to-man defense and Duke has scored with a more conservative man-to-man defense (2010). Bottom line: A Coach K team is virtually guaranteed to be able to put points on the board no matter the style of play.

toooskies
01-02-2013, 07:02 PM
Some meaningful updates: Sagarin (http://usatoday30.usatoday.com/sports/sagarin/bkt1213.htm) has us #1 in both individual components of his ratings system, and #1 overall by a significant margin (3.22 points, which is about the difference between 2 and 9 or 10).

ESPN's BPI (http://espn.go.com/mens-college-basketball/story/_/id/8802726/bpi-jan-2-college-basketball-power-index-rankings) was also released today, which we also lead by a significant margin (3.7 points, which is again the difference between 2 and 9 or 10).

MChambers
01-03-2013, 08:18 AM
Don't look now, but Duke is up to #2 in Pomeroy, just a shade behind #1 Indiana. Perhaps more impressively, Duke's defense is now ranked #7. Our offense fell to #3 after last night's game.

uh_no
01-03-2013, 10:11 AM
Don't look now, but Duke is up to #2 in Pomeroy, just a shade behind #1 Indiana. Perhaps more impressively, Duke's defense is now ranked #7. Our offense fell to #3 after last night's game.

yeah, offense dove down about a point :/ credit davidson for a really good game....and some real sloppiness by us at times...

3 ten-thousandths of a point behind indiana.

Olympic Fan
01-03-2013, 01:57 PM
I haven't noticed it before, but as of today, Pomeroy has issued his player of the year ratings. Even after last night's sub-par game in Charlotte, Mason ranks No. 1 -- ahead of Louisville's Russ Smith.

Those ratings will obviously be updated daily during the season.

Interesting that Duke's numbers should jump so much after the Davidson win (from 11 to 7 on defense; from 4 to 2 overall). Pom's computer REALLY liked the Davidson win.

loran16
01-03-2013, 02:16 PM
I haven't noticed it before, but as of today, Pomeroy has issued his player of the year ratings. Even after last night's sub-par game in Charlotte, Mason ranks No. 1 -- ahead of Louisville's Russ Smith.

Those ratings will obviously be updated daily during the season.

Interesting that Duke's numbers should jump so much after the Davidson win (from 11 to 7 on defense; from 4 to 2 overall). Pom's computer REALLY liked the Davidson win.

Actually, it's a function of how the Pythagorean formula for sports works: points allowed (or in this case, defensive efficiency) is in both the numerator and denominator, so if you have a good defensive performance but a meh offense, you'll improve in Pythag.

Edit: I am incorrect actually. Ignore this post

Bluedog
01-03-2013, 02:22 PM
I haven't noticed it before, but as of today, Pomeroy has issued his player of the year ratings. Even after last night's sub-par game in Charlotte, Mason ranks No. 1 -- ahead of Louisville's Russ Smith.

Those ratings will obviously be updated daily during the season.

Interesting that Duke's numbers should jump so much after the Davidson win (from 11 to 7 on defense; from 4 to 2 overall). Pom's computer REALLY liked the Davidson win.

Which is a bit funny because Pomeroy's system predicted a 17-point Duke victory, which is exactly what occurred, so you wouldn't think it would move the needle much. I guess Loran's explanation makes sense, though.

MChambers
01-03-2013, 02:28 PM
Which is a bit funny because Pomeroy's system predicted a 17-point Duke victory, which is exactly what occurred, so you wouldn't think it would move the needle much. I guess Loran's explanation makes sense, though.

I'm guessing that last night's 17-point spread differed from the 17-point win predicted by the system, in that the actual score was much lower than the system predicted. (I don't have access to the predictions.)

A 17 point win in a game with 117 total points is much more impressive than a 17 point win in a 150 point game. Pomeroy captures this; Sagarin does not.

COYS
01-03-2013, 02:38 PM
Which is a bit funny because Pomeroy's system predicted a 17-point Duke victory, which is exactly what occurred, so you wouldn't think it would move the needle much. I guess Loran's explanation makes sense, though.

If I'm not mistaken, Pomeroy predicted slightly more total possessions for the game, so a 17-point win on fewer possessions is worth slightly more than a 17-point win at the predicted pace. Davidson's offense is now ranked 36th in the land, which is well above average, while their defense was ranked almost perfectly average at 165th. Since KenPom's offensive and defensive rankings are intended to predict a team's efficiency against an average team, our offensive efficiency of 105 was about 15 points less than predicted. Meanwhile, our defense held a well-above-average offensive team to a mere 80 points per 100 possessions. I would imagine that KenPom's adjustment algorithms rate such an impressive defensive performance very highly, while the mediocre showing on offense was judged less harshly given the many other examples of strong Duke offense against good defensive teams.

loran16
01-03-2013, 02:47 PM
Which is a bit funny because Pomeroy's system predicted a 17-point Duke victory, which is exactly what occurred, so you wouldn't think it would move the needle much. I guess Loran's explanation makes sense, though.

Well, I screwed up above - Pythag weights offense more - but let me try two better explanations:

1. The gap between the Pomeroy top 4 right now is very, very small, and these four are likely to change positions a bunch.

2. Pomeroy cares not about scoring margin but about efficiency margin. Duke was expected to win by 17 in a faster game - one with more possessions. Thus Duke outperformed expectations even though the margin of victory was the same.
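For reference, the Pythagorean rating being discussed has roughly this shape; the ~11.5 exponent is an assumption on my part (Pomeroy has tweaked it over the years), and the sample efficiencies are made up.

```python
def pythag(adj_o, adj_d, exponent=11.5):
    """Pythagorean expected winning percentage, KenPom-style.

    adj_o / adj_d are adjusted points scored / allowed per 100
    possessions. The exponent here is an assumption; Pomeroy has
    used values in this neighborhood.
    """
    return adj_o ** exponent / (adj_o ** exponent + adj_d ** exponent)

# A team scoring 118 and allowing 90 per 100 possessions rates near-elite,
# while a dead-average team sits at exactly .500:
elite = pythag(118.0, 90.0)      # ~0.96
average = pythag(100.0, 100.0)   # 0.5
```

Note that offense and defense enter the formula symmetrically: improving either the numerator (scoring) or shrinking the denominator's defensive term moves the rating up.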

CDu
01-03-2013, 02:54 PM
Which is a bit funny because Pomeroy's system predicted a 17-point Duke victory, which is exactly what occurred, so you wouldn't think it would move the needle much. I guess Loran's explanation makes sense, though.

It's because Pomeroy's weighting system is based on efficiency - not point differential. Efficiency takes into account differences in tempo, so a 50-33 win is better (according to Pomeroy) than a 100-83 win. The rationale being that the relative efficiency difference is greater in the 50-33 game than in the 100-83 game. I assume that Pomeroy expected a higher-tempo game. As such, the 17-point spread would have suggested a closer (relatively speaking) game in terms of efficiency. Because the game was a bit slower-paced, the 17 point win is more impressive than Pomeroy had expected.

Edit: basically, the same thing that MChambers, Loran16, and COYS said. I should have read the rest of the thread before responding. Oops!
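CDu's 50-33 vs. 100-83 comparison, with the arithmetic written out (the possession counts are illustrative guesses, not real box-score numbers):

```python
def eff_margin(pts_for, pts_against, possessions):
    """Point differential per 100 possessions (what Pomeroy weighs)."""
    return (pts_for - pts_against) / possessions * 100

# Same 17-point final margin, very different tempos:
slow_game = eff_margin(50, 33, possessions=50)    # 34.0 per 100 possessions
fast_game = eff_margin(100, 83, possessions=83)   # ~20.5 per 100 possessions
```

Per possession, the slow 17-point win is much more dominant than the fast one, which is why a slower-than-expected pace makes the same scoreboard margin look better to the model.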

COYS
01-03-2013, 03:10 PM
It's because Pomeroy's weighting system is based on efficiency - not point differential. Efficiency takes into account differences in tempo, so a 50-33 win is better (according to Pomeroy) than a 100-83 win. The rationale being that the relative efficiency difference is greater in the 50-33 game than in the 100-83 game. I assume that Pomeroy expected a higher-tempo game. As such, the 17-point spread would have suggested a closer (relatively speaking) game in terms of efficiency. Because the game was a bit slower-paced, the 17 point win is more impressive than Pomeroy had expected.

Edit: basically, the same thing that MChambers, Loran16, and COYS said. I should have read the rest of the thread before responding. Oops!

You said it more succinctly and eloquently than I did, so no apology necessary.

Incidentally, this point is the primary reason why I think people failed to see how good our 2010 team was. We played at such a slow pace that our raw point differentials weren't all that convincing. However, when it came to a possession-by-possession count, Duke's 2010 team was truly dominant. I don't think KenPom is the end-all, be-all of basketball stats, but I would be lying if I said I am not encouraged by how much improved this year's team appears to be.

striker219
01-03-2013, 10:37 PM
Fun fact. Excluding Duke, the other 11 teams in Pomeroy's top 12 have a combined record of 128-18. Duke is responsible for 5 of those 18 losses. I found that pretty entertaining.

hurleyfor3
01-08-2013, 06:48 PM
Montana is now the only state in the Mountain time zone not to have a team ranked above unc-ch in Sagarin.

As of 1/8/13:

Arizona #10
Wyoming #16
Colorado #25
Boise State #28
Colorado State #32
New Mexico #40
Byu #53

vs.

Unc #57

Montana is #166. C'mon, Grizz, get your act together!

Still, you can form an eight-team "conference" of seven Mountain Time Zone schools and unc, in which unc is statistically last.

ChillinDuke
01-08-2013, 09:48 PM
Montana is now the only state in the Mountain time zone not to have a team ranked above unc-ch in Sagarin.

As of 1/8/13:

Arizona #10
Wyoming #16
Colorado #25
Boise State #28
Colorado State #32
New Mexico #40
Byu #53

vs.

Unc #57

Montana is #166. C'mon, Grizz, get your act together!

Still, you can form an eight-team "conference" of seven Mountain Time Zone schools and unc, in which unc is statistically last.

I wholeheartedly endorse any and all posts that describe the inferiority of UNC, no matter how contrived.

More, please.

- Chillin

El_Diablo
01-08-2013, 10:22 PM
I wholeheartedly endorse any and all posts that describe the inferiority of UNC, no matter how contrived.

More, please.

- Chillin

States with a team rated higher than UNC by Sagarin:

[map attachment]

And states with a team rated higher than UNC by Pomeroy:

[map attachment]

D.C. should also be shaded for both maps. And please note that these maps are a little deceiving because there are several states with multiple teams rated higher than UNC. :)

uh_no
01-08-2013, 10:39 PM
fair chance we hit top 3 in defense tomorrow morning and drop out of the top 10 offense

pfrduke
01-08-2013, 10:55 PM
fair chance we hit top 3 in defense tomorrow morning and drop out of the top 10 offense

Not sure we'll be out of the top 10. We only barely underperformed what was predicted - Pomeroy had us scoring 73 in 64 possessions, we had 68 in 63 possessions. It wasn't a great offensive performance, but it's not going to drop our season-long adjusted offensive efficiency by 3.5 points per 100 possessions (which is the drop we would need to have to fall out of the top 10). Creighton will pass us, and maybe Pitt (they're having a pretty good night against a pretty good Georgetown defense), but the worst we end up tomorrow is 8th.
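pfrduke's back-of-the-envelope logic is easy to verify: fold one below-average game into a season-long average and see how little it moves. The 13-game count and equal per-game weighting here are simplifications of Pomeroy's actual adjustment, used only to illustrate the scale.

```python
def updated_average(season_avg, games_played, new_game_eff):
    """Season efficiency after folding in one more game, equally weighted."""
    return (season_avg * games_played + new_game_eff) / (games_played + 1)

# A 118.8 season-long adjusted offense through 13 games (game count is a
# guess), then a 68-points-in-63-possessions night:
game_eff = 68 / 63 * 100                        # ~107.9 raw efficiency
after = updated_average(118.8, 13, game_eff)    # ~118.0
drop = 118.8 - after                            # well under the 3.5 needed
```

One mediocre game knocks less than a point off the season number, nowhere near the 3.5 points it would take to fall out of the top 10.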

uh_no
01-08-2013, 11:02 PM
Not sure we'll be out of the top 10. We only barely underperformed what was predicted - Pomeroy had us scoring 73 in 64 possessions, we had 68 in 63 possessions. It wasn't a great offensive performance, but it's not going to drop our season-long adjusted offensive efficiency by 3.5 points per 100 possessions (which is the drop we would need to have to fall out of the top 10). Creighton will pass us, and maybe Pitt (they're having a pretty good night against a pretty good Georgetown defense), but the worst we end up tomorrow is 8th.

gotcha. I misread creighton's efficiency as our own (was just looking at the 7 line for our defense)

Olympic Fan
01-09-2013, 01:45 AM
Well, Duke's performance couldn't have been too bad -- the Devils jumped from No. 3 overall to No. 1 in the latest Pomeroy ratings (posted a few minutes ago). Duke did drop to No. 8 offensively, but climbed to No. 3 defensively.

Duke is now No. 1 in all three major dork polls -- Pomeroy, Sagarin and RPI.

Ggallagher
01-09-2013, 07:59 AM
Well, Duke's performance couldn't have been too bad -- the Devils jumped from No. 3 overall to No. 1 in the latest Pomeroy ratings (posted a few minutes ago). Duke did drop to No. 8 offensively, but climbed to No. 3 defensively.

Duke is now No. 1 in all three major dork polls -- Pomeroy, Sagarin and RPI.

We way out-performed in two of our not so strong areas last night - so we should have jumped up significantly. Our Offensive Rebounding % Average prior to Clemson was 29%. Against Clemson we were 40%.
Our defensive rebounding average was 32%, but in last night's game we were 70%.
That's a pretty dominating performance on the boards.

pfrduke
01-09-2013, 08:18 AM
We way out-performed in two of our not so strong areas last night - so we should have jumped up significantly. Our Offensive Rebounding % Average prior to Clemson was 29%. Against Clemson we were 40%.
Our defensive rebounding average was 32%, but in last night's game we were 70%.
That's a pretty dominating performance on the boards.

Just to clarify that, our defensive rebounding average was 68% (i.e., we get 68% of our opponents' misses). It's just expressed as the rate at which you allow offensive rebounds (32%). If we really only rebounded 32% of our opponents' misses, we would be a legendarily bad rebounding team.

Also, the only thing that matters for Pomeroy purposes is efficiency - how many points per possession you score and allow. While rebounding helps those numbers (by reducing scoring opportunities for the opponent and increasing them for you), it doesn't move the needle in one direction or the other by itself - you can have a better-than-average rebounding game but allow more points per possession (or score fewer) and your rating will likely go down.
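pfrduke's definitions, written out as the standard four-factors formulas (the rebound totals below are hypothetical):

```python
def oreb_pct(own_oreb, opp_dreb):
    """Offensive rebounding %: share of your own misses you recover."""
    return own_oreb / (own_oreb + opp_dreb)

def dreb_pct(own_dreb, opp_oreb):
    """Defensive rebounding %: share of opponent misses you recover.

    Equals 1 minus the opponent's offensive rebounding percentage, which
    is why "allowing a 32% OReb%" is the same stat as "rebounding 68% on
    defense".
    """
    return own_dreb / (own_dreb + opp_oreb)

# Hypothetical totals: opponents grab 32 offensive boards against our 68
# defensive boards, i.e. we rebound 68% of their misses.
d = dreb_pct(own_dreb=68, opp_oreb=32)   # 0.68
```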

pfrduke
01-09-2013, 08:20 AM
gotcha. I misread creighton's efficiency as our own (was just looking at the 7 line for our defense)

As you predicted, we hit top 3 in defense. Our offensive efficiency went down slightly (from 118.8 to 118.6) but big nights from Creighton and Pitt propelled them past us (to 7th/118.9 and 5th/119.4, respectively).

uh_no
01-09-2013, 09:13 AM
As you predicted, we hit top 3 in defense. Our offensive efficiency went down slightly (from 118.8 to 118.6) but big nights from Creighton and Pitt propelled them past us (to 7th/118.9 and 5th/119.4, respectively).

i would imagine our offensive efficiency will likely creep back up. I'd be again more worried about the slow starts than anything....i mean we had a 15 point lead at halftime, but we sleepwalked on offense for 10 minutes before picking it up. it was 7-6 12 minutes into the game...ouch

pfrduke
01-09-2013, 09:30 AM
i would imagine our offensive efficiency will likely creep back up. I'd be again more worried about the slow starts than anything....i mean we had a 15 point lead at halftime, but we sleepwalked on offense for 10 minutes before picking it up. it was 7-6 12 minutes into the game...ouch

I thought we had some pretty good looks that just didn't fall. From a process point of view, I didn't think we executed worse over that initial stretch, we just happened to hit a cold streak. I also didn't think our execution (again from a process point of view) was dramatically better in the second half, when we shot 72% from the floor. If we take a couple of makes away from the second half and stick them into the first 5 minutes, I think people would be viewing this game differently.

Over the last 35 minutes of the game, we shot 57.4%. If we were up 68-38 with 5 minutes to play and then got outscored over the last 5 minutes 2-0, no one says a word about the 5 minutes with no points. I don't think it's that much more meaningful when the 5 minute stretch happens at the very start of the game.

I know the examples above are counterfactual, but I think people get a little too bogged down in viewing very small segments of game play as indicative of larger issues, rather than taking in the overall result.

(also, minor point but it was 7-6 eight minutes into the game; by 12 minutes in, we were up 16-6. 16 points in 12 minutes is still not great, but much better than 7)

uh_no
01-09-2013, 09:52 AM
I thought we had some pretty good looks that just didn't fall. From a process point of view, I didn't think we executed worse over that initial stretch, we just happened to hit a cold streak. I also didn't think our execution (again from a process point of view) was dramatically better in the second half, when we shot 72% from the floor. If we take a couple of makes away from the second half and stick them into the first 5 minutes, I think people would be viewing this game differently.

Over the last 35 minutes of the game, we shot 57.4%. If we were up 68-38 with 5 minutes to play and then got outscored over the last 5 minutes 2-0, no one says a word about the 5 minutes with no points. I don't think it's that much more meaningful when the 5 minute stretch happens at the very start of the game.

I know the examples above are counterfactual, but I think people get a little too bogged down in viewing very small segments of game play as indicative of larger issues, rather than taking in the overall result.

(also, minor point but it was 7-6 eight minutes into the game; by 12 minutes in, we were up 16-6. 16 points in 12 minutes is still not great, but much better than 7)

While I'd generally agree with you, I'd argue that our "slow starts" (and I haven't done any actual research into offensive or defensive efficiency in, say, the first 10 minutes of the game) are likely statistically significant. One game is just random variance; when it happens EVERY game (or, it would seem, most games), I think we have an issue.

Now, I'll admit, there is a good bit of selective perception going on here. First, the game will always be "close" early on (unless we get an early run in), so what is actually an average start (say, both teams within 1 std dev of the mean efficiency) may appear to be a bad start, especially for a really good team like Duke. Second, as you point out, bad segments further on in the game (once we're already up "big") are much less noticeable.

COYS
01-09-2013, 11:41 AM
While I'd generally agree with you, I'd argue that our "slow starts" (and I haven't done any actual research into offensive or defensive efficiency in, say, the first 10 minutes of the game) are likely statistically significant. One game is just random variance; when it happens EVERY game (or, it would seem, most games), I think we have an issue.

Now, I'll admit, there is a good bit of selective perception going on here. First, the game will always be "close" early on (unless we get an early run in), so what is actually an average start (say, both teams within 1 std dev of the mean efficiency) may appear to be a bad start, especially for a really good team like Duke. Second, as you point out, bad segments further on in the game (once we're already up "big") are much less noticeable.

I think one of the biggest questions to ask is what causes our slow starts. To a certain extent, I don't mind a slow start on offense as long as our defense starts strong. While our defense hasn't always been perfect from the start, we've generally come out pretty strong on that end in most of the games.

Our recent slow starts seem to be more closely correlated to the offensive end. I think there are two reasons for that. One is that establishing our defense is a priority, and the team buys into this idea. I'm ok with that as I think our D wears teams down over time, which contributes to our offense picking up as the game goes on. Also, after last season, I LOVE seeing suffocating Duke D return to Cameron. The second reason we've encountered some slow starts recently is the increased physicality of the games. The teams we've been playing have been trying to "ugly" it up from the start. The refs have more or less allowed this. While I personally dislike watching B1G-style wrestling matches, I have been encouraged by the way our team has responded. In each case, we push back on defense first and then make our adjustments on offense. More importantly, we aren't letting the physical nature of the game take us out of our game plan. Most importantly, we've been winning these types of games pretty comfortably. Now we have a plan B for tough games like this and have proven that we can not only win them, but win dominantly, like last night. This bodes well for the team's long-term chances, because if we happen to enter a knock-down, drag-out affair (such as the 2006 tourney matchup with LSU), we will have a better plan B than hoping that the refs call holding and pushing off the ball.