HOCKEY-L Archives

Hockey-L - The College Hockey Discussion List

Hockey-L@LISTS.MAINE.EDU

Sender: College Hockey discussion list <[log in to unmask]>
Subject:
From: Ken Butler <[log in to unmask]>
Date: Tue, 29 Mar 1994 22:06:11 PST
Reply-To:
Just thought I'd throw in an opinion or two here....
 
First of all, I think it is a mistake to criticize "computer rating systems"
generically on the basis of perceived or (IMO) real flaws in the RPI.
There are other ways to do it (which is why CHODR, TCHCR and KRACH hang
out around these parts), and those might be quite acceptable for
such things as tournament selection and seeding. Certainly, I would much
rather have these things settled by an impartial (even if flawed)
rating system than by a human committee. The workings of a rating system
can be set out for all to see, and the reasons for a team making it, or
failing to do so, are right there in the numbers. Of course, you can still
disagree with *the rating system as a whole*, but once you accept that
the system is doing (or trying to do) the right thing, you have to
accept its findings.
 
As to the RPI: my biggest problem with it is that it is not really a
measure *of* anything -- it's just three numbers thrown together for
each team, with weights that may (or may not) be the most appropriate.
On the other hand, CHODR, TCHCR and KRACH are all trying to rate the
teams so as to correspond as closely as possible to the results that
have actually happened -- for these three, there's a notion that the
ratings produced are "best" according to the systems' criteria.
(Specifically, CHODR tries to match observed and predicted goals scored,
TCHCR does the same with a "point score" that describes the game
outcomes, and KRACH tries to match observed and expected probabilities.)
No doubt this is partly my training as a statistician, but I feel
much happier about rating systems that try to estimate something.
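
(For the terminally curious, here is a toy version of that matching idea,
sketched in Python on an invented handful of results. This isn't any of
the real systems, just the flavour: choose ratings r so that each team's
expected wins, with P(i beats j) = r_i/(r_i + r_j), come out equal to the
wins it actually got.)

# Toy "match observed and expected" ratings.  Teams and results invented.
games = [("BU", "BC"), ("BU", "BC"), ("BU", "Maine"),
         ("Maine", "BU"), ("Maine", "BC"), ("BC", "Maine")]
# each pair is (winner, loser); ties left out to keep the toy simple

teams = {t for g in games for t in g}
r = {t: 1.0 for t in teams}

for _ in range(500):
    wins = {t: 0.0 for t in teams}
    denom = {t: 0.0 for t in teams}
    for winner, loser in games:
        wins[winner] += 1.0
        # every game a team plays adds 1/(r_i + r_j) to its denominator
        denom[winner] += 1.0 / (r[winner] + r[loser])
        denom[loser] += 1.0 / (r[winner] + r[loser])
    r = {t: wins[t] / denom[t] for t in teams}
    # rescale each pass; only the ratios of the ratings matter
    avg = sum(r.values()) / len(r)
    r = {t: v / avg for t, v in r.items()}

for t in sorted(r, key=r.get, reverse=True):
    print(t, round(r[t], 3))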
 
There's been some talk about the weights for the RPI recently.
My thought is, if you do want to use it, the "best" weights could
be estimated somehow. This ought not to be too difficult: pick some
weights, update the ratings week by week and use them to pick the winners
for the following week, then pick some different weights and repeat.
The best weights would then be the ones that pick the most winners --
this would be a nice argument-ending way of deciding whether 50-25-25,
25-50-25, 40-30-30 or whatever is the best. (If we were really in the
mood for competition, presumably we could have a mass prediction
contest with CHODR, TCHCR and KRACH involved too. Or maybe it's
better to let the arguments rage... :-) )
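
(If anyone feels like actually trying this, here is roughly what I mean,
sketched in Python on an invented mini-schedule. The RPI below is a
simplified stand-in -- the real formula has extra wrinkles, such as
leaving a team's own games out of its opponents' records -- so treat the
output as illustration only.)

# Weight search: for each candidate set of RPI weights, rate teams on
# earlier weeks' games only, call the higher-rated team the winner, and
# count correct picks.  Schedule and simplified RPI are for illustration.
schedule = [
    # (week, winner, loser)
    (1, "BU", "BC"), (1, "Maine", "UNH"),
    (2, "Maine", "BC"), (2, "BU", "UNH"),
    (3, "BU", "Maine"), (3, "BC", "UNH"),
    (4, "BU", "BC"), (4, "Maine", "UNH"),
]

def win_pct(team, games):
    wins = sum(1 for _, w, _ in games if w == team)
    losses = sum(1 for _, _, l in games if l == team)
    return wins / (wins + losses) if wins + losses else 0.5

def opponents(team, games):
    return [w if l == team else l for _, w, l in games if team in (w, l)]

def rpi(team, games, weights):
    """weights = (own win%, opponents' win%, opponents' opponents' win%)."""
    opps = opponents(team, games)
    if not opps:
        return 0.5
    owp = sum(win_pct(o, games) for o in opps) / len(opps)
    oowp = sum(sum(win_pct(oo, games) for oo in opponents(o, games))
               / len(opponents(o, games)) for o in opps) / len(opps)
    w1, w2, w3 = weights
    return w1 * win_pct(team, games) + w2 * owp + w3 * oowp

def winners_picked(weights):
    correct = 0
    for week, winner, loser in schedule:
        past = [g for g in schedule if g[0] < week]  # earlier weeks only
        if past and rpi(winner, past, weights) > rpi(loser, past, weights):
            correct += 1
    return correct

for w in [(0.50, 0.25, 0.25), (0.25, 0.50, 0.25), (0.40, 0.30, 0.30)]:
    print(w, winners_picked(w))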
 
All that aside, a problem faced by all rating systems is comparing
the various conferences. There simply aren't very many interconference
matchups, and using what there is to estimate the relative
strengths of the conferences is rather like making bricks without straw.
Or, to look at it another way, the interconference games are all very
influential in determining the ratings, much more so than one of
the plentiful conference games would be. So, add my voice to the call
for more interconference games!
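
(To put a rough number on the bricks-and-straw problem: treat each
interconference game as a coin toss that the stronger conference wins
with some probability p, and see how well a handful of games pins p
down. Back-of-envelope only -- it has nothing to do with any particular
rating system, and the p below is pulled out of the air.)

# Standard error of an observed winning fraction over n games is
# sqrt(p*(1-p)/n); with few interconference games it swamps any
# plausible edge one conference has over another.
from math import sqrt

p = 0.6   # assumed true chance the stronger conference wins a game
for n in (6, 20, 100):
    print(n, "games: winning fraction known to about +/-",
          round(sqrt(p * (1 - p) / n), 2))

With half a dozen games you can barely tell which conference is the
stronger one, never mind by how much.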
 
Sorry if I sounded too scholarly tonight. It's been that kind of a day.
I could do with some good Phinal Phour action to shout at, but I'm
out of TV range :-( I'll just have to put on my Boston College
sweatshirt and see if that's enough to kill off BU.....
 
 
--
Ken Butler
[log in to unmask]
