Revised July 21, 2001
Several years ago I was asked by some loyal followers of my rating system to put into words
an exact description of my views regarding the National Championship in College Football.
It was not an easy task, as the process itself is very complex. Yet I feel it is important for
followers of any point of view, whether religious, political, or philosophical as it relates
to sports, to be well informed about exactly what it is they are following. As a
pollster for over 30 years, I certainly recognize that in this new age of "instant
information" the sports public is becoming more and more aware of the options available
to them in college football polls and rating systems. Thus, I felt it important for people to
understand my views on this subject. I hope you find it informative,
but even more importantly, thought provoking. Happy reading!
The popularity of college football spread widely in the early 1900's. What began in 1869 with two teams grew to almost 90 major teams by 1920. The NCAA was founded in 1906 to organize and regulate the sport, and points for scores, the size of the field, penalties, etc., were all standardized by 1912. But the NCAA failed to address the one issue that burned in the hearts and minds of players, alumni, and fans of all ages: the question of "Who is No. 1?" Perhaps if they had addressed it 100 years ago we would not have the controversy that we have today! Americans thrive on competition. There is only one Grand Champion bull at the County Fair, one Best of Show at the AKC Dog Show, and one Blue Ribbon Apple Pie, so there has always been a need for a college football poll. The problem lies in the fact that there has always been more than one poll, and they don't always agree.
The first widely recognized College Football Poll did not originate until 1926. It was a mathematical rating system developed by Frank Dickinson, a professor of economics at the University of Illinois. Later, an onslaught of pollsters came onto the scene, all prepared to crown college football's best. The list was staggering: 1927, Dick Houlgate; 1929, Dick Dunkel; 1930, William Boand; 1932, Paul Williamson; 1934, Edward Litkenhous; and in 1935, Richard Poling. All of those gentlemen had various mathematical formulas for determining a national champion. It's obvious that from the beginning, and continuing through the 1920's and 30's, mathematical formulas were the norm for determining who should be declared the Nation's No. 1 team.
All of that changed in 1936 when the Associated Press (AP) began publishing a poll voted on by a national board of sportswriters and broadcasters, and because of its national distribution, their word instantly became gospel. The United Press International (UPI) joined the hoopla in 1950 by soliciting votes from a board of coaches. Their theory, I suppose, was that coaches knew more about football than writers and broadcasters.
It was bound to happen sooner or later, but it wasn't until 1954 that the AP and UPI disagreed on who the No. 1 team in the land should be. The AP chose Ohio State, and UPI favored UCLA. Both were undefeated, as was Oklahoma. Ever since that fateful day in 1954 when the two "biggies" couldn't agree, the controversy of "Who's No. 1?" has raged on from the Golden Dome to the Tiger Den, from the Coliseum to the Swamp, from Happy Valley to Death Valley and everywhere in between.
Eventually, everyone and his dog got in on the action: The New York Times, Sporting News, Football News, Sports Illustrated, Sears, McDonald's. Heck fire, there are more polls than there are bowls, and God knows we've got more than we need of both. Over the years there have been many fine rating systems developed, and with the advent of the Internet you may examine all of them by simply clicking a button. Check out David Wilson's Web Library Of College Football Polls at www.cae.wisc.edu/~dwilson/rsfc/rate/index.shtml. Among those listed you will find Hermann Matthews, who began his poll in 1966, and Jeff Sagarin, who began in 1978. Those gentlemen, along with myself and Kenneth Massey, David Rothman, Dr. Peter Wolfe, and Wes Colley, are the current recognized leaders in the mathematical poll process. Although the Dunkel Index is no longer part of the BCS, it continues to be one of the most respected polls in America.
A pure mathematical poll is power-based and revolves around a point-spread projection for the upcoming week's games. This is the kind of system familiar to us through computer rankings. This type of system does take emotion out of the decision-making process, and at the end of the season its No. 1 team will have a very high percentage chance of beating any other Division 1-A team. Impressive, but not always fair in head-to-head competition, which is one of my main concerns with any rating system.
A personal choice poll is just that, it is based solely on someone's personal opinion. In the 1940's and 50's individual personal choice polls were somewhat popular. Sports editors of large newspapers would sometimes announce a Top 10 college football poll at the end of the regular season.
The AP, UPI, USA Today Coaches Poll and most sports-related magazine polls today are all a form of personal choice. The choices are just grouped together to form a larger whole, but the source is still an individual vote, and it boils down to being a personal choice. These types of polls are very familiar to us all, as their impact on the sport of College Football over the years has been tremendous. The AP and the USA Today Coaches Poll are perhaps the most widely used polls in our society today, and rightly so. They both have a long, respected history with the sport. The problem here, if there is one, is that a personal choice poll can be too emotionally based and motive-oriented. We need enthusiasm in college football, but at times emotion can override objectivity.
Personal choice polls are fun and exciting. I can assure you that on more than one occasion, while still in school, I raced to get a Tuesday paper to read the polls. They can really get the blood boiling at a rival institution, and remember, everyone is entitled to an opinion. Yet personal choice polls, like mathematical polls, are not always logical. Many times, I have witnessed a team play a great game against a Top Five opponent, lose by a slim margin, and then drop drastically in the polls. If #10 Clemson loses to #1 Florida St. 20-17, I don't think Clemson should be dropped out of the Top 10. Over the years I've seen it happen numerous times.
You can see the dilemma that was created. I wanted to be fair, but I wanted to be logical as well. What's a guy to do? I solved it by uniquely combining the two. My system is a mathematically based power rating that is, I believe, through a series of checks and balances, as logical and fair as it can be within the boundaries that must be in place to assure objectivity. The Billingsley Report, where power meets logic!
This system is not designed for gambling use. If a person tried to gamble with this
information without understanding its functions, they would fail miserably, because you
cannot look at these figures and determine a point spread. For instance, a #1 ranked
Georgia with a rating of 300, playing a #10 ranked Florida with a rating of 270, looks
on the surface as if Georgia would be favored by 30 points, since that is the way most
systems are designed. Not so in my system. Georgia would not be favored by 30. A point
spread can be determined through another step in math, but I use it only as a
"performance projection" to determine the strength of the opponent. I never have and I
never will support gambling in College Athletics.
The first thing I want to say is the same thing I have always said about my rating system: I'm not here to prove to anyone that my work is better than anyone else's. I have a very healthy respect for a lot of rating systems. This formula is just an extension of my point of view, and those come a dime a dozen. I will say this: I take my work very seriously. I have a passion for College Football and I have done a tremendous amount of research, more than anyone I know. All that hard work, experience, passion, and dedication has gone into the creation of this formula. I am not a mathematician, and I am not a computer geek. I am a devout College Football Fan, and have been since I was 7 years old. My formula is 100% computer generated and it treats all teams equally. I wrote the program myself, and it's not written using fancy math equations, just simple addition, subtraction, multiplication and division. It's the RULES that make the system unique, and the rules are MY RULES. Rules that make sense from a fan's perspective. Rules that come from 32 years of experience in which I researched the ENTIRE 132 years of College Football.
I'm a pretty strongly opinionated guy, and if you ruffle my feathers I can certainly take you toe to toe on any of these opinions... but the one thing you will ALWAYS find about me is that I'm willing to listen, and if I'm proven wrong, I'm always willing to admit it and change. You may not always agree with where I place your favorite team, but after looking over your team's history for a decade or two, I hope you can at least say "this guy knows a thing or two about football."
OK, let's make this short and sweet in the beginning for those of you who don't care about details. The main components in the formula are Strength of Schedule and Won-Lost Record, with a strong emphasis on the most recent performance. Very minor consideration is also given to the site of the game, the opponent's record, and scoring defensive performance. Now... for those of you who appreciate details and like to hear me ramble, read on.
Believe it or not, the system is designed after our own United States Constitution. But don't hold that against it! Although at times I feel this system is just about as complicated as our Federal Government, there is one huge difference..... this one works!
The design is one of a series of checks and balances. Just as our Constitution designates Executive, Legislative, and Judicial branches that provide the basis for our Democracy, my formula provides a similar series of checks and balances to ensure accuracy (higher rated teams winning games against lower rated opponents) without sacrificing fairness in head-to-head competition. The checks and balances revolve around three basic components: the Strength of the Opponent, the Won-Lost Record, and Season Progression. After 32 years my formula no longer uses margin of victory. It accounted for only 5% of the total for several years, and after careful consideration this off season I decided to remove it completely. For a detailed explanation please read "BCS Approves Billingsley No Margin Formula" from the Home Page.
The SOS will fast become the "hot" topic of discussion in College Football, as this component is now the main ingredient in the BCS formula, and for that matter all 8 computer polls. Especially now that four of them (Billingsley, Colley, Seattle Times, and Massey) do not use margin of victory. Why will it be so hotly discussed? Because ALL OF US HAVE DIFFERENT MODES OF CALCULATING SOS. To say "oh, the most important part of my formula is SOS" means nothing. The important questions to ask are "How is it calculated?" and "Is that SOS calculated fairly?"
For many years I struggled with whether a team's SOS should be calculated by using a team's rating and rank on the day the game was played, or by using an opponent's most recent rating and rank. There are excellent arguments for both sides. Early on I used ONLY GAME DAY stats. I felt very strongly that if Georgia was ranked #1 when they played #5 Florida, the Gators should get credit for playing a #1 team, even if Georgia later fell to #10. THE MIND SET OF THE GAME, THE INTENSITY OF THE GAME, REVOLVED AROUND PLAYING A #1 TEAM. How can the mind set and intensity of a game be overlooked 4 weeks later? But critics will say, "What if Georgia fell to #50? Do the Gators still get credit for playing a #1 team?" Very good point. It does happen. Rankings can fluctuate dramatically during the course of a season. Look at Alabama in 2000.
Several years ago I made a compromise that I think has worked exceptionally well. I use a combination of both, with percentages tilted slightly towards the current rating and rank. This way both are taken into account. The early rankings are not totally discounted, and therefore credit is given to the "mind set and intensity" of EVERY GAME, yet the emphasis is placed on a team's current status.
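The compromise described above, a weighted blend of the opponent's game-day rating and current rating, can be sketched in a few lines. The essay does not publish the actual percentages, only that they tilt "slightly towards the current rating and rank," so the 0.6/0.4 split and the sample ratings below are hypothetical illustrations, not the real figures.

```python
def blended_sos_value(game_day_rating, current_rating, current_weight=0.6):
    """Blend an opponent's game-day rating with their current rating.

    current_weight=0.6 is a hypothetical value chosen only to show a
    blend "tilted slightly towards the current rating"; the formula's
    true weighting is not published.
    """
    return current_weight * current_rating + (1 - current_weight) * game_day_rating

# Hypothetical example: Florida played Georgia when Georgia was rated 260,
# but Georgia's current rating has slipped to 230. Florida's SOS credit
# lands between the two, closer to the current figure (about 242).
print(blended_sos_value(game_day_rating=260, current_rating=230))
```

This way a team still gets some credit for the "mind set and intensity" of the game it actually played, while the larger share of credit tracks what the opponent turned out to be.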
A team's Won-Lost Record is pretty self explanatory. Winning takes care of EVERYTHING, as long as it's against quality opposition.
The Season Progression may need a little explanation. It is really a very simple yet powerful set of rules. I want my poll to "look logical". In the first week of the season, if Florida St. beats #107 No. Illinois and Ball St. beats #58 Memphis, I don't want Ball St. ranked ahead of Florida St. just because they both have 1-0 records. That's not logical. We ALL KNOW Ball St. is not in the same league with Florida St., at least not at this juncture. Let them EARN IT first. Let them prove it in due course of time, and then my poll will respond accordingly. That's what I mean by Season Progression. All of my teams start out with a rating and a rank, #1-#117, because they ARE NOT ALL EQUAL. We KNOW THAT from past experience, so why not use that experience to begin with? Some would say starting all teams equal, or all at 0, is the only FAIR thing to do. I say it's the most UNFAIR thing you can do, and besides, it's just plain illogical.
Now, let's go one step further. I don't want a team jumping 60 places from #70 to #10 in November either. You just simply can't turn your season around in one game, even if you beat a #1 team. I want people to be able to look at my poll, look at the previous week's contests, and say, "oh, I can see how he did that". So there are specific rules in place that PREVENT those things from occurring. I guess you could say it "forces a team to progress through the season in a logical fashion". You can't be #50 one week and #1 the next in the 7th week of the season. I wanted to create as much STABILITY as possible in the poll, especially in the Top 10. If a team moves up, I want a person to be able to see WHY, through looking AT THE MOST RECENT PERFORMANCE FIRST, then taking the other factors into account.
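The idea of rules that prevent a #70 team from vaulting to #10 in one week can be sketched as a simple cap on upward movement. The formula's actual movement limits are not published, so the `max_jump=12` figure below is purely a hypothetical number used to illustrate forced season progression.

```python
def capped_new_rank(old_rank, proposed_rank, max_jump=12):
    """Limit how far a team may rise in the poll in a single week.

    max_jump=12 is a hypothetical cap for illustration only; the real
    formula's limits (and whether drops are also capped) are not stated.
    """
    if old_rank - proposed_rank > max_jump:  # rising more places than allowed
        return old_rank - max_jump           # hold the team to the cap
    return proposed_rank

# A #70 team whose raw weekly earnings would place it #10 is instead
# held to #58, and must keep winning to climb the rest of the way.
print(capped_new_rank(old_rank=70, proposed_rank=10))  # 58
```

A reader comparing consecutive weeks of the poll can then always reconstruct why a team sits where it does: the most recent performance moves it, but only within a bounded step.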
The "checks and balances" are played out through a series of four "phases" in the formula. Each phase has a different purpose and a different mathematical function in the application of the checks and balances. I will give as many practical examples as possible, as I feel that is the best way for people to understand the point I'm trying to make. The checks and balances provide what I call "the fairness factor". Under these guidelines an undefeated team playing a hard schedule is ALWAYS going to be ranked close to the top. A team with one loss, but playing a very hard schedule, can still be in contention for the National Championship, as evidenced by Nebraska's 11-1 record pushing Virginia Tech to the wire for the #2 spot in the 1999 season Sugar Bowl. Additionally, an undefeated team playing a moderate schedule may also be in contention, as witnessed by Virginia Tech in 1999. Let's take a look at the "FOUR PHASES".
I am convinced that carrying a team's RANK over from one season to the next, and then making the rules for the first few weeks of the season "more relaxed," is the best method to use. To accomplish this I created a different set of rules for the first 4 weeks of the season. Normally, as the season progresses, a team's "earnings" are drastically reduced as they go through the various phases in the formula. This creates a more stable poll week to week, not allowing drastic movements up or down, and therefore preventing any one team from changing the whole outlook of their season in one game. However, in the first few weeks, since everyone is more equal in terms of won-lost records, everyone receives a very high percentage of their earnings, double what they do during the balance of the season. This allows a team to be ranked ahead of any team they beat in the first few weeks of play unless the computer detects that it was a "major upset". Believe me, those types of upsets do occur, and if allowed to stand, a "major upset" in the first few weeks can create pure havoc in the correct balance of a poll, so there had to be some boundary in place, albeit a lenient one.
Granted, it does put a lot of emphasis on the first few games of the season, but why not? If everyone is aware of their importance, steps can be taken to prepare accordingly. Under the rules written into the program, a predetermined figure is used to distinguish between a "minor upset" and a "major upset". The figure comes from my research, which found that 92% of the time, teams that won games exceeding this figure were unable to sustain that level of performance. In other words, it was a fluke. I do not believe the stability of a poll should be compromised for something that has happened only 8% of the time over the last 132 years. Because of the flexible rules in the early stages of the season, a team is easily able to re-position itself in the poll simply by performing well. It's not uncommon for teams to shift 15 or 20 places in their first game, but it's because they've earned it, not because it was handed to them.
Another change you will notice from the previous formula is that a team's RATING IS NOT
CARRIED OVER, only the RANK. A new rating is assigned. The new rating was created
from the "average rating of the last 50 years at middle ground" (#58), and then one point
up for each rank above and one point down for each rank below. In other words, #58 gets
207 points, #57 gets 208, and #59 gets 206. Using this method #1 gets 265 points and #117
gets 148 points. A projected point spread can still be achieved by taking the ratings of both
teams, subtracting, and dividing by 3. Moving to this method of assigning a rating to begin
the season prevents a team from receiving an undue advantage from having an excessive
rating the previous year. I've toyed with this for years, but just decided to implement it
with the rest of the changes. I feel that by doing this I will also be able to get a more
accurate read of the strength of teams from one decade to the next, which will be important
to me as I run the new formula through all 132 years of football. To begin each season, a #1
team will be favored over the #117 team by 39 points. Keep in mind, however, that this figure
has no bearing on the future ratings at all; it is purely for the fun of it, for the fans' sake!
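The season-opening rating scheme and the for-fun point-spread projection above can be sketched as follows. One caveat: a strictly linear one-point-per-rank rule anchored at #58 = 207 yields 264 for the #1 team rather than the 265 quoted, so the published figures may include a small adjustment at the very top; the sketch below implements the literal rule and only checks the values the essay states exactly.

```python
MIDDLE_RANK = 58      # "middle ground" of a 117-team field
MIDDLE_RATING = 207   # average rating of the last 50 years at that rank

def initial_rating(rank):
    """Season-opening rating: one point up per rank above the middle
    ground, one point down per rank below (#58=207, #57=208, #59=206)."""
    return MIDDLE_RATING + (MIDDLE_RANK - rank)

def projected_spread(rating_a, rating_b):
    """Projected point spread: difference of the two ratings divided
    by 3. Used only as a performance projection, never for gambling."""
    return (rating_a - rating_b) / 3

print(initial_rating(58))            # 207
print(initial_rating(117))           # 148
print(projected_spread(265, 148))    # 39.0 -- the #1 vs. #117 opener
```

Because only the rank carries over, a team that piled up an unusually high rating the year before starts the new season on the same scale as everyone else at its rank.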
The initial "point value" assigned to an opponent is based on THEIR RATING
AND RANK. An opponent's strength is determined not by their won-lost record, which
alone reflects only a portion of their strength, but rather by their rating and
rank, which is more reflective of their true strength. This is a HUGE bone of contention
between myself and the BCS, one which I have tried, to no avail, to have addressed over the
last two years. Currently the BCS SOS is determined solely by opponents' and opponents'
opponents' won-lost records. In other words, at the end of last season, a team initially
received the same value for playing Ball St. as they did for playing Colorado. Both teams
finished 3-8. Tell me, what's wrong with this picture? Is there ANYONE out there who can
honestly tell me Ball St. was as good a team as Colorado? I don't think so. I'm convinced
the BCS strength of schedule formula is flawed. My calculation of strength of schedule,
which is a combination of a team's rating and rank on GAME DAY AND CURRENT DAY,
is, I believe, a much more accurate SOS.
Next, a team's position in the poll is compared to its own record each time the team acquires a loss on the season. The reason is to prevent teams with multiple losses from remaining high in the poll unless they are playing far superior opposition. An additional deduction is attached to the adjusted accrued value each time a team loses a game. The more losses acquired, the higher the deduction. If a team loses a game and it's their first loss, the penalty is one percent; if it's their second loss, the percentage is greater; and if it's their third, the penalty is even more severe. This process filters down the ratings so that it is still possible to be ranked in the Top 25 with 3 losses, but only if a team has played well consistently and played a difficult schedule.
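The escalating loss deduction can be sketched as a small lookup. The essay states only that the first loss costs one percent and that later losses cost progressively more; the 2%, 4%, and 6% figures below are hypothetical placeholders, not the formula's real percentages.

```python
def loss_penalty(accrued_value, losses):
    """Deduct an escalating percentage from a team's adjusted accrued
    value based on how many losses it has.

    Only the 1% first-loss figure comes from the essay; the later
    percentages are hypothetical illustrations of "even more severe."
    """
    penalty_pct = {1: 0.01, 2: 0.02, 3: 0.04}.get(losses, 0.06)
    return accrued_value * (1 - penalty_pct)

# A 1% deduction after a first loss trims a 300-point value to about 297;
# each additional loss takes a progressively bigger percentage.
print(loss_penalty(300.0, 1))
print(loss_penalty(300.0, 3))
```

The effect is the filtering described above: a three-loss team can only stay in the Top 25 if its schedule strength keeps replenishing what the deductions take away.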
After phase four is completed, the result is added to a team's previous week's rating. That result becomes a new rating which is reflective of the team's overall performance to that point in the season, with a slight emphasis on the most recent performance. This formula has proven to reward teams who, through consistency, create a solid winning record against quality opposition.
I hope we all have a very exciting and rewarding 2001 college football season. I know my participation with the BCS has certainly compounded my passion for football, and I hope in some small way it contributes to the sport overall. The BCS has done a tremendous service for college football by bringing the poll process to the forefront. Remember, it's not important that you "believe" one poll is better than another. Explore the various options, understand their dynamics, and follow who you will, whether it be me or someone else. What's really important is that you trust the BCS process as a whole and celebrate the fact that for the first time in our great sport's tradition-laden history (which I believe is the greatest on earth), we have an opportunity to match the #1 and #2 teams every year. That's quite a statement in itself!
College Football Research Center