# No More Gimmicks ... It's Time for KRACH

*by Adam Wodon/Managing Editor*

The news came out Wednesday that the NCAA Division I men's ice hockey committee will have a new criterion in its arsenal when it selects and seeds the 2003 NCAA tournament. Word of an award for "good wins" sent the college hockey masses — used to the tidy, if esoteric, PairWise Rankings system — into a tizzy.

First things first.

The NCAA isn't some vast shadowy conspiracy, out to "screw" fans and teams.

In fact, the nebulous term "the NCAA" isn't really what most people think it is at all. The people making the decisions that affect hockey are basically six athletic directors and coaches ... i.e. members of the men's ice hockey committee ... i.e. real people. One from each conference. Good guys. Fun to have a beer with.

These are people who care about the sport. Their intentions are noble.

Of course, that doesn't mean I have to agree with them.

The philosophy behind the recent decision to reward teams during the NCAA tournament selection/seeding process for "good wins" is sound. ECAC teams, for example, with smaller buildings are forced to play most nonleague games on the road. This can somewhat skew the comparison system, which doesn't account for home/road differences.

The implementation, however, is ugly.

There is a litany of problems with this, most of which I'll leave to the math PhDs, but here are a couple.

- Why is a "good win" one against a Top 15 RPI team? Why not Top 16? Top 10? Team A defeats the No. 15 team, and Team B defeats the No. 16 team, and Team A gets all sorts of bonus points while Team B gets nothing? (In fact, this is also a problem with the Teams Under Consideration criterion.)
- The amount of bonus points awarded is left purposefully vague, but no matter what it is, it's arbitrary. Who came up with the number? How was it derived? Is it grounded in any sort of statistics principle?
- The Record vs. Teams Under Consideration criterion already factors in wins against "good teams." The only thing it doesn't do is create a home/road factor, but, then again, neither does any of the PairWise criteria.
- Teams that play fewer nonconference games than others have fewer chances to earn bonus points.
- It may actually discourage top teams from scheduling weaker teams, which would be counterproductive.

Of course, let's be honest. The real issue here is not this new criterion in particular; the entire system of PairWise Comparisons is flawed in a multitude of ways. It works pretty well, all things considered, but there are some gaping holes that, year to year, could drastically affect seeding, or bubble teams.

So long as you're going to have a 100 percent objective system, you might as well make it one that actually does what it's intended to do.

The PairWise system right now reads like a mish-mosh of concepts all designed to compensate for flaws in each other. Take any one of the components, and you could blow holes right through it. Most especially the RPI, which is about as flawed a mathematical system as was ever invented.

All right, at this point, half the audience is recoiling in horror. "Mathematical system? Uh-oh, you're not going to come at me with all sorts of complicated, arcane, geek-filled math, are you?"

Well, sorta. But bear with me.

If you have a severely flawed mathematical system for coming up with the teams, then it's no better than being subjective.

Hockey is the only sport that selects its tournament field in a 100 percent objective manner, and it remains that way. So it's clear that's what hockey wants, and what hockey gets.

(Contrary to earlier reports, the "good win" idea is not "subjective." The formula is defined, it's just not public. The idea of adding subjectivity is a separate issue — see sidebar — and, regardless, even a subjective system needs a base).

Therefore, the real key is to just come up with the "right" system. No more fitting square pegs into round holes by adding a "good win" criterion. No more tweaking the RPI every year, or going from a Last 20 criterion to Last 16 to nothing at all.

There are infinitely better systems out there, and it's time to use them.

Recently we began running what we believe is the "right" system. It's called KRACH ... bad name, great system. It's a highly sophisticated mathematical algorithm, created by statistics PhDs, based upon a lot of mumbo jumbo I can't really explain. All I know is, when it was explained to me, in detail, it made a lot of sense. It also makes intuitive sense in practice, just from looking at it.

Is it geeky? Absolutely. But it's high time to embrace our inner geek.

I realize it's difficult to convince everyone this is the right system. People don't want to hear "trust me." But let's try to summarize:

KRACH is far superior to RPI in many respects.

First, it eliminates all the problems with "insular schedules." Remember the awkward subjective criterion added by the committee, allowing it to remove teams from conferences with poor RPIs (i.e. MAAC and CHA schools) from consideration at its discretion? This kind of witchcraft works when the schedule differences are obvious, but what about the more subtle ones?

KRACH can also easily take into account home/road differences — the very thing this "good win" criterion is supposed to be addressing. In the process, it renders all other criteria virtually obsolete.

Taken from the explanation on the KRACH ratings page: "KRACH is based on a statistical technique called *logistic regression*, in essence meaning that teams' ratings are determined directly from their won-loss records against one another. A key feature of KRACH is that strength of schedule is calculated directly from the ratings themselves, meaning that KRACH, unlike many ratings (including RPI) cannot easily be distorted by teams with strong records against weak opposition."

See? Easy.
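For the curious, here is what "ratings determined directly from records against one another" can look like in practice. This is a hypothetical sketch, not KRACH's actual code (the team names and the details are my own assumptions): a fixed-point iteration over Bradley-Terry-style ratings, the statistical family KRACH belongs to. Each team's rating is its wins divided by a strength-of-schedule sum that itself depends on everyone's ratings, so you just keep recomputing until the numbers stop moving.

```python
from collections import defaultdict

def krach_ratings(results, iters=500):
    """results: list of (winner, loser) game outcomes.
    Returns Bradley-Terry-style ratings where the model says
    P(i beats j) = r[i] / (r[i] + r[j])."""
    wins = defaultdict(int)
    games = defaultdict(int)   # games played between each ordered pair
    teams = set()
    for w, l in results:
        teams.update((w, l))
        wins[w] += 1
        games[(w, l)] += 1
        games[(l, w)] += 1
    r = {t: 1.0 for t in teams}
    for _ in range(iters):
        # New rating = wins / strength-of-schedule sum. The sum reads the
        # *current* ratings of every opponent, which is why the definition
        # sounds circular: iterate until it settles at a fixed point.
        r = {t: wins[t] / sum(n / (r[t] + r[opp])
                              for (a, opp), n in games.items() if a == t)
             for t in teams}
    mean = sum(r.values()) / len(r)   # normalize so runs are comparable
    return {t: v / mean for t, v in r.items()}
```

That self-reference, where every update depends on the opponents' current ratings, is exactly the "recursion" that makes KRACH harder to explain than to compute.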

In fact, I uncovered a conversation that occurred in 2000. It seems that coaches, at their annual meetings that year, were very willing to listen to the idea that KRACH was a superior system. Apparently, they even went so far as to recommend that some of the PairWise criteria be KRACH-modified, and were willing to have the NCAA stats people study the merits of KRACH.

I had no idea it ever went this far.

Apparently, the biggest hurdle was explaining how exactly KRACH calculates the Strength of Schedule. RPI — as flawed as it is — is simple to explain (though, as the math geeks remind me, not as simple to calculate as it sounds). Explaining how KRACH is figured requires you to do some pretty tricky mental gymnastics.
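To make the contrast concrete, here is RPI in sketch form. The weights below are the classic 25/50/25 split of winning percentage, opponents' winning percentage, and opponents' opponents' winning percentage; that split is an assumption on my part, since the exact weights have varied by sport and by year. The exclusion of a team's own games from its opponents' records is the wrinkle the math geeks mean when they say it's not as simple to calculate as it sounds.

```python
def rpi(results, weights=(0.25, 0.50, 0.25)):
    """results: list of (winner, loser) games. Weights are the classic
    WP/OWP/OOWP split; actual values have varied by sport and year."""
    teams = {t for game in results for t in game}
    opps = {t: [] for t in teams}
    for w, l in results:
        opps[w].append(l)
        opps[l].append(w)

    def wp(t, exclude=None):
        # Winning pct, ignoring games against `exclude`: the detail
        # that makes RPI trickier to compute than to state.
        won = sum(1 for a, b in results if a == t and b != exclude)
        played = sum(1 for a, b in results
                     if t in (a, b) and exclude not in (a, b))
        return won / played if played else 0.0

    def owp(t):   # opponents' winning pct, each excluding games vs. t
        return sum(wp(o, exclude=t) for o in opps[t]) / len(opps[t])

    def oowp(t):  # opponents' opponents' winning pct
        return sum(owp(o) for o in opps[t]) / len(opps[t])

    a, b, c = weights
    return {t: a * wp(t) + b * owp(t) + c * oowp(t) for t in teams}
```

Note that the WP term counts a win over a weak team at full value no matter who the opponent was, which is precisely the "strong records against weak opposition" distortion the KRACH page's quote calls out.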

Hey, I'm pretty good at math — I got a 700 on my math SAT — and I can intuitively understand what KRACH is doing, but I can't really explain it too well. Even though it's easy to give a definition, it's not easy to grasp *recursion*. Believe me, I know. I remember my one Advanced Computing class in college, where I was rendered muttering to myself aimlessly all semester trying to grasp recursion — until the threat of failing released some sort of endorphins that made the light bulb go on.

So, the best I can say is ... well ... trust me. Or at least trust the statisticians out there who know this inside and out. KRACH is derived from a proven, 50-year-old method taught in statistics classes. On the other hand, RPI was pulled out of thin air somewhere in the late '70s.

Let's go hockey world. Embrace your inner geek, trash the current jury-rigged system, and let's all boldly go where no sport has gone before.