I think that we should not lose sight of one very important facet of this discussion. What we all are seeking to discover is how to level the playing field so that it is fair to all teams. If all teams played a true interlocking schedule, then of course win% would be the only thing that counted. If all teams played perhaps not the *same* schedule, but schedules of equal strength, then win% would still be the only thing that counted. And, of course, when the schedule strengths are unequal, then somehow we must find a method to correct the win% so that meaningful comparisons can be made.

How to do this? I've been giving this some thought, and have not yet got my arms around it -- but I know what we shouldn't be doing. We should not be drawing any conclusions from the amount of variation produced by win% or by strength of schedule. We simply don't have an analysis yet which quantifies the effect of SOS on win%. Consider Team A, which has a win% of 0.600 against an opp% of 0.450. Is that better or worse than Team B, which has a win% of 0.400 against an opp% of 0.550? We simply don't know, and so far the discussion has boiled down to intuition.

In proposing YAM2, for instance, I call it an intuitively based method. I would be quite happy to dump YAM2 for something more rigorously derived if we had an analysis which justified it.

What we need is an eager MS student to grab this problem and do a thesis on it. I can think of no better win-win situation than to be able to tell my professor that I was going to the hockey game to work on my research!!!! In any event, the data are just sitting there waiting for someone to try. :-) :-)

-- Dick Tuthill
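To make the Team A vs. Team B puzzle concrete, here is a minimal sketch of the kind of correction being discussed: an RPI-style linear blend of a team's win% with its opponents' win%. The weights are arbitrary assumptions for illustration, not a derived answer -- which is exactly the problem: a different (equally unjustified) weighting can flip the ordering.

```python
def adjusted_rating(win_pct, opp_pct, w_win=0.75, w_opp=0.25):
    """Hypothetical RPI-style blend of a team's win% and its
    opponents' win% (SOS). The weights are arbitrary, not derived."""
    return w_win * win_pct + w_opp * opp_pct

# Team A: win% 0.600 vs. opp% 0.450; Team B: win% 0.400 vs. opp% 0.550.
team_a = adjusted_rating(0.600, 0.450)  # 0.5625 -> A rated higher
team_b = adjusted_rating(0.400, 0.550)  # 0.4375

# But shift the weights heavily toward SOS and the gap narrows --
# with no analysis quantifying SOS's effect, any choice is intuition.
team_a_sos = adjusted_rating(0.600, 0.450, w_win=0.25, w_opp=0.75)  # 0.4875
team_b_sos = adjusted_rating(0.400, 0.550, w_win=0.25, w_opp=0.75)  # 0.5125
```

Under the first weighting A beats B; under the second, B beats A. Until someone quantifies how SOS actually affects win%, the weights (and hence the ranking) remain a matter of taste.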