MoCoRunning XC Ranking Formula

Portions of the ranking rules were borrowed from runwv.com, the creators of this ranking system. The formula has been modified over the years to better suit Montgomery County, Maryland.

This ranking was started as an alternative to opinion-based rankings and ranked 5k lists. Third parties may cite Mocorunning's individual runner rankings, but it is important to understand how the scoring system works.

In one sentence: this is a system of point-stealing. Runners steal points from each other by running better than expected relative to one another.

OK, so how does this thing work?

Now, let's dig into the weeds. All ranked runners have a base score. The base score, expressed in points greater than zero, represents how fast a runner is expected to race relative to the ranking cutoff and relative to every other runner in the ranking. 1 second = 2 points.

Runners take points from each other based on how well they run relative to one another at a given meet. If Runner A has a base score of 160 and Runner B has a base score of 150, Runner A should theoretically beat Runner B by 5 seconds (160 points minus 150 points is 10 points. 10 points divided by 2 points per second = 5 seconds). In the event that Runner A does indeed beat Runner B by 5 seconds in a race, neither runner's base score will change. The actual margin of victory minus expected margin of victory equals initial point gain (10 points - 10 points = 0 points). They did what they were supposed to do according to their base scores so no points are exchanged, but it rarely works out that cleanly.

Now, say that Runner A beats Runner B by 10 seconds, or 20 points. Runner A was supposed to beat Runner B by 10 points but he beat him by 20 points. In the initial calculation, Runner A will gain 10 points off of Runner B. Actual margin of victory minus expected margin of victory equals initial point gain (20 points - 10 points = 10 points).

There is one added level of complexity in the ranking formula: a buffer divisor. To calculate the final addition/subtraction to each runner's base score, the formula divides the initial point difference by 3 (this divisor was changed from 4 to 3 in 2015). 10/3 = 3.33. 3.33 points would be subtracted from Runner B's score and 3.33 points added to Runner A's score. Runner A's new score is 163.33. Runner B's new score is 146.67.

Returning to the original example, say that Runner B beat Runner A by 5 seconds even though he was supposed to lose by 5 seconds. The differential between what happened and what was supposed to happen is 10 seconds, or 20 points. Actual margin of victory minus expected margin of victory equals initial point gain (-10 points - 10 points = -20 points). Runner B will initially gain 20 points off of Runner A, while Runner A loses 20 points. Once again, the formula divides by 3, so the final point exchange is plus or minus 6.67. Runner B's new score is 156.67, and Runner A's new score is 153.33. So in this example, Runner B surpassed Runner A in the ranking.
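The two-runner exchange above boils down to a few lines of arithmetic. Here is a minimal sketch in Python; the function and variable names are mine, not Mocorunning's:

```python
POINTS_PER_SECOND = 2   # 1 second = 2 points
BUFFER_DIVISOR = 3      # changed from 4 to 3 in 2015

def pairwise_exchange(score_a, score_b, margin_seconds):
    """Point change for Runner A from one head-to-head comparison.

    margin_seconds is positive if Runner A finished ahead of Runner B,
    negative if behind. Runner B's change is the negative of the result.
    """
    expected_margin = score_a - score_b                  # in points
    actual_margin = margin_seconds * POINTS_PER_SECOND   # in points
    initial_gain = actual_margin - expected_margin
    return initial_gain / BUFFER_DIVISOR
```

Running the examples from the text: `pairwise_exchange(160, 150, 5)` returns 0 (the expected outcome, no exchange), `pairwise_exchange(160, 150, 10)` returns 10/3 ≈ 3.33, and `pairwise_exchange(160, 150, -5)` returns -20/3 ≈ -6.67.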

Please note that one head-to-head victory does not guarantee that a runner will move past another. The base score intentionally has inertia. The buffer divisor ensures that. One way to think about it is that the entire history of data put into the ranking formula has a lot of weight. The newest piece of added information also has a lot of weight, but it does not replace a runner's entire history in the ranking.

The most important factor to ranking mobility is the separation between two runners in a given race, measured in seconds at the finish line. Returning to the original example where Runner A and Runner B are ranked 10 points (5 seconds) apart, if Runner B beats Runner A by just one second, it may take several iterations of the ranking formula calculation (several meets) for Runner B to move ahead of Runner A with such a slim margin of victory. Conversely, a wide margin of victory will cause much greater movement within the ranking system. If Runner B beats Runner A by 45 seconds, Runner B would have surged past Runner A and created separation in the rankings.

Of course, there are often more than two ranked runners in a race, so what is done with three or more ranked runners? First, the ranking compares Runner A with Runner B, as described above. Then it compares Runner A to Runner C. Then it compares Runner B with Runner C. And so on, and so on. A large matrix is used to perform all the necessary runner comparisons. The sum of all the runner comparisons is divided by the total number of ranked runners run against, aka the total number of ranked runners minus one. If 50 ranked runners were compared, the formula would calculate 49 comparisons for Runner A, 48 additional comparisons for Runner B, 47 additional comparisons for Runner C, and so on, and so on. Finally, all the comparisons are added for each runner and then divided by 49 to determine the final point gain for each runner for that meet.
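The all-pairs comparison above can be sketched as a double loop (names and data shapes are my own assumptions, not Mocorunning's implementation):

```python
POINTS_PER_SECOND = 2   # 1 second = 2 points
BUFFER_DIVISOR = 3      # changed from 4 to 3 in 2015

def score_meet(scores, finish_times):
    """Point change for every ranked runner in one meet, before any caps.

    scores: {runner: base score}; finish_times: {runner: seconds}.
    Each pair of runners is compared once; the sum of a runner's
    comparisons is divided by the number of ranked opponents faced
    (total ranked runners minus one).
    """
    names = list(scores)
    n = len(names)
    changes = dict.fromkeys(names, 0.0)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            expected = scores[a] - scores[b]                           # points
            actual = (finish_times[b] - finish_times[a]) * POINTS_PER_SECOND
            gain_a = (actual - expected) / BUFFER_DIVISOR
            changes[a] += gain_a
            changes[b] -= gain_a   # zero-sum: B loses what A gains
    return {name: c / (n - 1) for name, c in changes.items()}
```

Because every comparison is zero-sum, the point changes across all runners in a meet sum to zero before the averaging step, which is the "point-stealing" property described at the top of this page.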

There are caps! The ranking does not let one single race ruin someone's season-long ranking. The maximum amount of points that a runner may lose in a race is 25 points. The only exception to that rule is described in the Breakdown Provision below. The maximum amount of points that a runner may gain in a race is 25 points with a few exceptions. If there are 15-24 ranked runners in a race, a runner's score can increase by 30 points. If there are 25+ ranked runners in a race, a runner's score can increase by 40 points. There is no maximum point gain at the Montgomery County Championship Meet. This rewards runners for racing against strong, ranked competition.
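The caps can be expressed as a small post-processing step. This is a sketch under the rules just stated; the Breakdown Provision and the county-meet special cases described later are assumed to be handled elsewhere:

```python
def cap_change(raw_change, num_ranked, county_championship=False):
    """Clamp a runner's per-meet point change to the published caps."""
    if raw_change < 0:
        return max(raw_change, -25.0)   # maximum loss is always 25 points
    if county_championship:
        return raw_change               # no cap on gains at counties
    if num_ranked >= 25:
        return min(raw_change, 40.0)    # 25+ ranked runners in the race
    if num_ranked >= 15:
        return min(raw_change, 30.0)    # 15-24 ranked runners
    return min(raw_change, 25.0)        # default cap
```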

Getting Added and Removed From Rankings

  • An unranked athlete gets added to the ranking if they run well enough compared to the ranked runners in the race, but getting added is not as simple as beating any ranked runner. It is actually possible and common for a runner to be added to the ranking without beating any ranked runner! The number of seconds that the unranked runner finishes ahead of or behind the most representative ranked runner determines whether the unranked runner is worthy of being on the ranking and how many points they will be added with.
  • The most representative ranked runner is determined after scoring a meet. The ranked runner whose point total changes the least after scoring a meet is considered to be the most stable ranked runner and the most representative of the system.
  • To determine the point total of a newly ranked runner, count the number of seconds the runner finished ahead of or behind the most representative ranked runner. Multiply that number by two. Add that number to the point total of the most representative runner if the unranked runner beat the most representative runner or subtract that number if the unranked runner finished behind the most representative runner.
  • Unranked runners will not be added to the system with less than 15 points. Under 15 points is considered the danger zone because runners with less than 15 points are in danger of dropping off the ranking.
  • Unranked runners will not be added to the ranking with a point total higher than any ranked runner who beat them in the race.
  • The ranking administrator reserves the right to use discretion when adding runners to the ranking early in the season. The administrator may choose not to add runners based on performances in small early season meets if it appears that ranked runner(s) do not race to their past potential to begin the season. For example, if Runner Z builds her base score as a consistent 20:00 5k runner but begins the new season with a 24:00 5k, the administrator will not add new runners to the ranking as if they raced against a 20:00 5k runner. If there were no other ranked runners in the race, the challengers may need to wait for their next shot to be added to the ranking.
  • Athletes are removed from the system if they graduate, if they are inactive for 4 consecutive weeks, or if their base score drops below 0.
  • When seniors graduate and are removed from the ranking system, all non-senior boys are awarded an automatic 15 bonus points to begin the following season. The purpose is to replenish the points lost to graduation. The original formula called for all girls to be awarded an automatic 15 point bonus just like the boys, but I found that it inflated girls' point totals too much. The girls ranking has a much higher percentage of freshmen who burst onto the ranking early in their freshman seasons and remain in the ranking for a long time. Replenishing girls' points with an automatic bonus hurt the ranking system more than it helped due to inflation.
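The entry-score rule from the bullets above reduces to one calculation. This sketch uses my own names, and it does not model the rule that a new runner can never be added above a ranked runner who beat them:

```python
def entry_score(rep_score, seconds_ahead_of_rep):
    """Point total for a newly added runner.

    rep_score: base score of the most representative ranked runner
    (the ranked runner whose points changed least when the meet was scored).
    seconds_ahead_of_rep: positive if the new runner beat that runner,
    negative if they finished behind. Returns None when the runner falls
    below the 15-point floor and is not added.
    """
    score = rep_score + 2 * seconds_ahead_of_rep   # 1 second = 2 points
    return score if score >= 15 else None
```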


    Provisions

    Provisions are in place to help get the athletes where they belong in the rankings more quickly.

  • Long/Short Course Conversions - The difficulty and length of a course generally do not require any changes to the formula calculations unless the course is significantly longer than 3.1 miles or significantly shorter than 3 miles. Finish times on courses longer than 3.2 miles or shorter than 2.9 miles will be converted to 3.1 mile times using a direct distance-to-time extrapolation before performing the ranking formula calculations. This is typically only used for the Manhattan Invitational, which is a prestigious 2.5 mile high school varsity race.
  • Breakthrough and Breakdown Provision - When a runner gains the maximum amount of points for two consecutive races, that runner will be awarded 25 bonus points. When a runner loses the maximum amount of points for two consecutive races, that runner will be penalized 25 points. The breakthrough bonus and breakdown penalty may be awarded continuously until the runner no longer maxes out the possible points scored in a meet. Remember that runners can run multiple races in between published rankings so the Breakthrough and Breakdown Provision can cause a runner to gain or lose over 100 points in a week.
  • Inactivity Rule - An athlete will lose 15 points if they go three weeks without running in a meet. Without the inactivity clause, someone not racing may inappropriately move ahead of those athletes who are racing. After a fourth week inactive, athletes are removed from the ranking and may earn their way back onto the ranking through racing. Athletes will not lose points due to inactivity in the final week of the season if the inactivity is most likely due to not cracking the varsity squad, e.g., a runner is borderline 7th on their team at the county championship and does not compete at regionals and states. If an athlete is clearly in the top 5 on their team, the inactivity rule will apply in the final week.
  • Underrated Rule - This allows me to remove an athlete from the meet comparison if that athlete hurts more than 50% of his opponents by the maximum amount. That athlete will be given the maximum point gain, but will not hurt other runners.
  • Overrated Rule - This allows me to remove an athlete from the meet comparisons if that athlete's performance is substantially below the 25 point loss limit. That athlete will be given the maximum point loss, but will not help other runners.
  • Safe Victory Rule - Any runner who wins a race by 15 seconds or more will not lose points for that race. This rule protects highly ranked runners who may be asked to race against low ranked competition in a low stakes race.
  • Private School Ghosts - The Crilly Clause - *New in 2015* Public school runners have always made up the majority of the ranking system, but it is perfectly valid to include Montgomery County private school runners as long as there is good mixing. Mixing between private and public schools has not always been sufficient. If a team isolates itself and dodges strong county competition all season (many small private schools do this out of tradition or necessity), then the ranking will not work for that school and its athletes. In 2015, Mocorunning began incorporating Washington D.C. area private school runners into the ranking system but not publishing those names. Those "ghosts" in the ranking system will never be known to fans reading the rankings and admittedly make it impossible to follow along at home. Mocorunning did this to improve the "mixing" of the ranking system between public and private school athletes. Increasing the pool of ranked runners to Washington D.C. area private schools increased the amount of scoring opportunities between public and private schools prior to the private school championship meets.
    Why the 'Crilly Clause?' Besides alliteration? In 2013, Good Counsel High School traveled to large meets all over the area, and its top runner Collin Crilly finished in the top 3 in every meet on his way to his second individual WCAC title. Good Counsel raced tough competition and did everything to prove its standing in the Washington D.C. area all season. Unfortunately, Good Counsel barely saw ranked public school runners that entire season. Where the ranking was concerned, there was almost no opportunity for Crilly to steal points from MCPS runners. In fact, his own teammates stole points from him which caused him to fall down the ranking when it was evident that he should have been climbing the ranking based on his performances against the strongest Washington D.C. and Northern Virginia runners. That was the pinnacle of a trend of the Montgomery County public and private schools not mixing well enough during the regular season for the ranking formula to work for the private schools. Since implementing the Crilly Clause in 2015, the relative ranking of public and private school athletes has been sufficient.
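The long/short course conversion from this list amounts to a direct distance-to-time extrapolation, roughly (a sketch; the function name and any rounding behavior are my own assumptions):

```python
def convert_to_standard(time_seconds, course_miles):
    """Extrapolate a finish time to a 3.1-mile time.

    Applied only when the course falls outside the 2.9- to 3.2-mile
    window; otherwise the raw time is used as-is.
    """
    if 2.9 <= course_miles <= 3.2:
        return time_seconds
    return time_seconds * 3.1 / course_miles
```

For the 2.5-mile Manhattan Invitational, for example, a 13:00 (780-second) finish extrapolates to 780 × 3.1 / 2.5 = 967.2 seconds, about 16:07.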

    Montgomery County Championship Provisions

    It is important for the county championship meet to be weighed heavily because it is the only race where all MCPS runners are combined. The following special cases are made to ensure that the ranking is set up for maximum accuracy in the final few weeks of the season.
  • Two Iterations - The ranking formula will be applied to the county championship meet results twice before it is published. The maximum point loss will still be -25.
  • Unlimited Gain - There will be no cap to the number of points a runner can gain at the Montgomery County Championship Meet.
  • County Champ Protection - In the event that the individual county champion loses points upon scoring the county championship meet, the individual champion shall not lose points such that he or she falls below lower ranked runners who did not record a performance at the county championship meet.
  • Zarate-Woods Clauses - A. The individual Montgomery County champion will be awarded a bonus if the formula does not automatically push the county champion to the top. The bonus will be such that the individual champion's new base score will be equal to the score of the highest ranked MCPS runner plus a point bonus equal to the number of seconds by which the individual champion won the race. The individual Montgomery County champion will not be automatically ranked ahead of any runner that did not compete in the Montgomery County championship meet including any highly ranked private school runners.
    B. An individual state champion who places first at the state championship meet may be awarded a bonus no greater than 25 points and equal to the minimum amount of points necessary to move ahead of ranked runners defeated head-to-head at the state championship meet. The bonus will not allow a state champion to move ahead of any runner not entered in the same race as him or her.
    Why the 'Zarate-Woods Clauses?' The 'Zarate-Woods Clauses' were inspired by Diego Zarate and Evan Woods, for whom the ranking formula may have missed the mark. In 2014, Whitman High School's Evan Woods was the #1 ranked runner entering the county meet. Northwest High School's Diego Zarate won the county championship meet over Woods by a fraction of a second. The victory was awarded based on a judge's call at the finish line, but because the margin of victory was so small, the ranking formula still output Woods as the #1 ranked runner the day after that meet. Zarate had the best day at the regional meet and took over the #1 spot on the ranking, but then Woods won the state title and Zarate retained the #1 ranking according to the formula. For two evenly matched rivals, it was wrong not to have the county champion ranked #1 after the county championship meet and not to have the state champion ranked #1 after the state championship meet. It may have been a little different if the state champion came out of nowhere, which is why there is a limit on the bonus awarded to a state champion, but this rule is written for future evenly matched rivals who may exchange victories down the stretch.

    Where Did the Original 2006 Base Scores Come From?

    The 2006 pre-season base scores were developed starting with the top athletes from the 2004 cross country season. The boys were given 2 points for every second that they ran under 19:30 at the 2004 county championship and the girls were given 2 points for every second under 23:00. Many of the known top athletes from the 2005 season were added to the ranking and given the lowest possible base score, 15. I then scored the entire 2005 cross country season using the scoring system. The final scores from the 2005 season are where the 2006 season started.

    Note that the current point totals are no longer based on scales of 19:30 and 23:00. The point totals also cannot be compared across seasons. The point scales shift as points are added and removed from the system.

    Critiques

    When there is a deficiency with the ranking system, I am willing to add or modify a rule to make it better. Here are some common critiques and my responses.

  • "I don't understand what the points mean." The points are like seconds on a stopwatch with a multiplier. If you want to do no math (who would blame you?), just think of the points like the number of seconds apart each runner is projected to finish a race (technically, you need to divide the points in half to get seconds, but no formula could predict human performance of two or more runners in a 5k race down to the second).
  • "How is it that Runner B can beat Runner A, yet Runner A remains ranked ahead of Runner B?" This is covered a few times on this page, but it still comes up often. What would you do when, over the span of a week, Runner A beats Runner H, Runner H beats Runner B, and Runner B beats Runner A? Circular conflicts like the 'A > H > B > A' situation happen all the time. What would your tiebreakers be? What would you do about runners C-through-G who may have their own conflicting results against Runners A and B? In a math-based ranking, the math decides. In an opinion-based ranking, injecting your own biases to break the circular conflicts is inevitable. The ranking would become biased and personal. It shouldn't be personal. Mocorunning chooses the math method.
  • "This ranking is too slow. It took six weeks for Runner X to climb the ranking when we all knew he should have been there from day one." I agree with this criticism. There are ten months between the last ranking in early November and the next update in early September. I pay attention to track results year-round just like you. Runners make drastic improvements (or drop-offs) during the winter, spring, and summer while this cross country ranking remains stuck in the past. The problem is exacerbated when Runner X does not race much in September, but I do not think there is a realistic solution. I am not entirely opposed to incorporating track results somehow, but the formula was designed for cross country only. Not every XC runner runs track and not every XC runner excels on the track. I am open to suggestions. Ultimately, more frequent XC races improve the accuracy of the ranking and the speed at which it aligns with people's opinions, but what is good for the ranking (more races) is not necessarily what is good for the athletes. Coaches know what is best for the athletes so we have to be patient with the runners who do not race much in September and patient with a ranking system that has inertia by design.
  • "Teams train through dual meets. Dual meets should not be included in the rankings." Dual meets are crucial for the success of this ranking. They force teams to "mix" frequently early in the season. More data and more mixing are always better. I assert that the vast majority of runners do try their best in dual meets. For upper tier runners who are asked to run a tempo run during a race (tempo pace is typically ~85% race pace), tempo runs are still fast enough to create separation among teammates of differing abilities. Upper tier runners practically never let opponents of similar ability level run away from them for the sake of a workout, and if they do, well, tough luck. Their Mocorunning ranking may drop for turning a race into a workout. They will have to earn the points back in the next race. As for the most elite runners, just win by fifteen seconds and the safe victory rule will be triggered to protect their ranking.
  • "This ranking compares performances from multiple races in a given meet. That is not true head-to-head comparison." Mocorunning combines all performances from a given meet. Others who use a similar ranking formula never compare runners across different races due to the potential for different paces, race strategies, and even course conditions. I acknowledge those important differences, but the most important thing for a healthy ranking system is always more mixing and more comparisons. Additionally, I like to include junior varsity runners if they are fast enough to make the ranking, and that requires a results merge when only seven runners are allowed in a varsity race.
  • "Returning unranked runners can be added near the top of the ranking while returning ranked runners may have to claw their way from the bottom after a year of improvement." This was a valid point brought to my attention when a returning runner who did not make the cutoff for the ranking for three years got a clean slate in year #4. Not having a history in the ranking system is akin to having some unknown negative base score. This is a tough one for me, and I don't think there is a solution that will satisfy everyone. Ultimately, it is most important to add new runners to the ranking as closely as possible to their ability level. In the first year of the ranking, I misunderstood the mechanism for adding new runners. I assigned 15 points to all runners when they were added to the ranking regardless of where they probably should have been added. This unfairly pulled down the top ranked runners for the rest of the season, and it took way too long for the new runners to climb from the bottom. That was an unintentional experiment on what it would be like to add all new runners to the bottom of the ranking as if they were on-the-rise. It was not good. Today's method for adding new runners may be unfair to some returning ranked runners at first, but with the help of the provisions, breakout runners can move up the ranking very quickly after their second race.
  • "This ranking is not even close to athletic.net." Good day to you.

    Final Comments

  • Coaches: please, please, please always correct bib/name errors. I get it. Bib/name errors happen on meet day, but please try to get errors corrected before results are published. Alternatively, anyone may kindly reach out to mocorunning to alert me of bib/name errors in published results. Nothing wreaks havoc on this ranking system more than a 30:00 runner wearing the bib of a 16:00 runner or a 16:00 runner wearing the bib of a 30:00 runner. Realistically, when that happens, I usually have the common sense to reach out to someone and verify anomalous performances. But bib/name errors may very well go undetected and lead to erroneous movement in the rankings.
  • Published rankings are rarely revised/re-issued. If a meet must be re-scored after a ranking is published, the revisions will be reflected in the next published rankings. This can cause unexpected changes to a runner's points even if they did not race for the week.
  • This ranking is all about runners running well relative to one another. Differences of opinion are welcome, but keep those 5k time rankings out of here. Running "fast" times does not influence this ranking. Cross country courses that take longer to complete offer slightly more opportunity for greater time spreads and thus greater ranking mobility.
  • It is possible that errors can be made in the hundreds of weekly calculations and the application of all the provisions. The ranking should pass your "smell test," so if your gut tells you something is wrong, please point it out with an email. I will be happy to know that you are taking an interest.
  • Runners cannot be added to the ranking if they do not race ranked runners. A runner may run the race of their life, but if no ranked runners were competing in that race, that runner will not be added and points will not change.
  • Results must be received in a timely manner. Typically, rankings are published on Sundays or Mondays. If results are received after rankings are published, results can still be applied to rankings the following week. If results are received after a runner's next race, those results likely will not count in the rankings.
  • Performances labeled as "Time Trial" or "Scrimmage" will not be scored.
  • Meets that are posted on other websites and are labeled "pending," "under review," "unofficial," or only publish results from one team will not be scored.
  • Performances missing information like first names or school names will not be scored. If T. Barney runs 17:44, I would rather not guess what Barney's first name is or which school he attends.
  • Postseason meets beyond state and conference championship meets are not scored. The rankings are finalized after state championship meets each year.
  • Homeschool athletes will be considered for this ranking system if they are considered to be in grades 9 through 12, live in Montgomery County, Maryland and intend to race consistently against Montgomery County high school competition. The athlete must intend to race a minimum of four high school races during the fall season including the Maryland Private Schools State Championship at the end of the season. Email webmaster@mocorunning.com if you are interested in having a homeschool athlete ranked.
  • You are encouraged to ask polite questions. I will be pleased that you are taking an interest. I will consider your questions and comments thoughtfully, but if your aim is to tell me how it is, consider my non-response your mission accomplished.


    Thank you for reading the ranking rules. Please email webmaster@mocorunning.com with questions and concerns.

    This Ranking Formula vs. Speed Ratings

    If everything you just read seems pretty geeky, try Google searching "cross country speed ratings." Speed ratings were started in New York in the 1990s and adopted in a few other places across the country. Mocorunning's XC Rankings are not speed ratings, but the formula is a close cousin to the reference runner method of determining speed ratings. If you read everything on this page and everything at that link, you should see many commonalities. The root of both systems assesses runners almost exclusively based on the number of seconds apart they finish the race. There are at least a few major differences.
  • The speed rating system outputs independent ratings for each scored race whereas Mocorunning's ranking system outputs base scores weekly or biweekly. Mocorunning's base scores are tethered to all previous races and are only used to compare runners within the ranking system during a given week. Independent speed ratings published for each scored race can provide a good, clear indication of standout/outlier races and/or race-by-race improvement, and can (with some level of error) be compared over the years.
  • The speed rating system determines a composite ranking and overall speed rating that is based on all of the independently assessed speed ratings. Certain races are selectively weighted more heavily in determining the overall speed rating whereas Mocorunning's ranking formula treats all meets equally with the most recent meet always having the greatest impact on the base score.
  • A very important distinction is that the speed rating methodology changes from race-to-race based on the availability of data. Mocorunning's ranking formula never changes no matter how big or small the race or how much historical data is available. I said in the beginning that Mocorunning's formula was specifically similar to the "reference runner method" of determining speed ratings, but this method can be very inaccurate early in the season. Mocorunning pushes through the inaccuracies in the early weeks of the season whereas the speed ratings methodology utilizes course conversion tables or statistical sampling as alternative methods when athlete fitness may be questionable.
  • Mocorunning's ranking formula is particularly well-suited for a close grouping of schools with a lot of consistent "mixing." It would become increasingly difficult to execute Mocorunning's ranking formula with a broader geographic territory which would have less guaranteed mixing. The multiple methods used in the speed rating doctrine allow it to better adapt to a broader region that has less guaranteed mixing.
  • As lengthy as this page is, Mocorunning's ranking system uses very simple arithmetic. The multiple methods used in the speed rating system are more data intensive and sometimes require more data collection and interpretation. Mocorunning's ranking formula never requires familiarity with or historical database records of cross country courses. I send all my respect to the few brave souls who have embraced, computed, and published speed ratings for so many years.

    This Ranking Formula vs. 5k Season Bests

    Mocorunning has offered ranked 5k lists since the inception of this website. There is some value in ranked 5k lists, for example quickly gleaning which boys are capable of running under 17:00 or how many teams have five girls under 21:00.

    5k times help us form assessments of runners especially when we know the history of the courses that those 5k times match up with, but even a novice cross country fan can see that ranked 5k lists are insufficient as a ranking system due to differences in course difficulty, weather, and other factors. If a team runs on the fastest 5k course (or a questionably short course advertised as 5k), those performances will rank as the fastest times for the entire year with no further fluctuation in the rankings later in the season as fitness levels evolve. Mocorunning's ranking formula, adopted from runwv.com years ago, made relative head-to-head competition the only influencing factor in ranking athletes throughout the season. The formula nullifies the influence of course speed while remaining data driven and objective.
