The following is a continuation of the Scorecard Survey results and analysis. Part I, Survey Results, presents the results themselves; Part II below discusses the thought process behind the survey and ideas for future improvement.
In August 2019, Run Washington leveraged its connections to push out the survey to DC Area coaches and the accomplished athletes who were selected to the All-Run Washington Teams. It was a great sampling of athletes and coaches across the area. 108 people completed 230 scorecards with at least 10 reviews per course. It would have been great to have 50 surveys per course, but what the survey lacked in quantity, I believe it made up for in quality. Nobody seemed to attempt to manipulate the survey, and if they had scored all 0's or all 100's, their survey would have been tossed.
I see two insurmountable challenges with rating XC courses: 1. It is highly subjective, and 2. Nobody has experienced all of the courses, and even if they had, courses change from meet to meet and year to year, so one's experience with a course three years ago may not reflect what it is now.
1. The best solution to the subjectivity problem is twofold. Seek a large number of opinions to cancel out any individual's skewed experience. Additionally, break the rating down into categories that force athletes and coaches to think critically about the physical features of the course. I still included the category of OVERALL ENJOYMENT because that is important: do you like it or not? But the other eight, more objective categories outweigh the subjectivity of "like-or-not." We see this play out with Burke Lake Park, which has the fifth-highest ENJOYMENT score but falls to eighth in OVERALL score when athletes and coaches are asked to consider individual features. Conversely, Kenilworth Park has a relatively low ENJOYMENT score, but it scores a little higher OVERALL when athletes and coaches consider that specific features of the course are perfectly suitable for a good XC race.
2. Because the survey spans XC courses used by public and private schools across Virginia, Maryland, and DC, nobody has experienced all of them (and even if they had, that experience would have to span many years). Therefore, no single person could be judge and jury. Even a committee of "experts" would not serve the purpose any better, because nobody knows what they have not experienced. The best solution is to ask a large number of people to rate what they know and skip what they don't know. There are absolutely no direct comparisons of the different courses until after the final results are compiled, which is the beauty of the scorecard approach; a rough sketch of how that compilation could work follows below.
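To make the scorecard mechanics concrete, here is a minimal sketch in Python (my own illustration, not the actual survey tooling): each scorecard's OVERALL is an equal-weight average of the nine categories, so ENJOYMENT counts for just one-ninth, voters simply omit courses they don't know, and courses are only placed side by side after everything is averaged. The course names and numbers are made up, and the equal weighting is my assumption based on the one-ninth remark later in this piece.

```python
from collections import defaultdict
from statistics import mean

# The nine scored categories; TERRAIN/HILLINESS is intentionally excluded from the composite.
CATEGORIES = [
    "START", "PASSIBILITY", "SURFACES", "HAZARDS", "SETTING",
    "SHADE", "STIMULATION", "GROOMING", "ENJOYMENT",
]

def overall(card: dict[str, float]) -> float:
    """Equal-weight average of the nine category scores (each 0-100)."""
    return mean(card[c] for c in CATEGORIES)

# Each voter rates only the courses they know; unknown courses are simply absent.
# All ratings below are invented for illustration.
scorecards = [
    {"Burke Lake Park": {c: 80 for c in CATEGORIES} | {"ENJOYMENT": 95}},
    {"Kenilworth Park": {c: 70 for c in CATEGORIES} | {"ENJOYMENT": 55},
     "Burke Lake Park": {c: 75 for c in CATEGORIES}},
]

by_course = defaultdict(list)
for voter in scorecards:
    for course, card in voter.items():
        by_course[course].append(overall(card))

# Only now, after compilation, are the courses compared directly.
for composite, course in sorted(((mean(s), c) for c, s in by_course.items()), reverse=True):
    print(f"{course}: {composite:.1f} (n={len(by_course[course])})")
```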
A third issue needed to be addressed: which courses to include. It was decided that we would stay within Run Washington's coverage area and select the most utilized courses with the broadest appeal. It would be a mix of Virginia, DC, and Maryland XC courses that the most athletes and coaches would be familiar with. This was not meant to disrespect some of the other great XC courses in the area. I have not once said that these are the nine best courses in the DC area. If you pull me to the side, I will name a dozen lesser-known courses that could contend with those selected for the survey. This was a numbers game.
After I designed and launched a survey that I thought reasonably overcame the major issues, a flaw was pointed out: it is not really valid to name a park and ask a large audience to rate "the cross-country course" at that park. As touched on earlier, this led to multiple people rating different versions of an XC course from different years. That was particularly true of Lake Fairfax Park and Bull Run Regional Park, which each host different courses at the same park, even within the same season. And because I placed no time limits on the scorecards, voters did not hesitate to reach way back in their memories and assess a venue based on their experiences 10+ years ago. If this survey is repeated in the future, we will have to limit it to a specific meet within the past year, which will in turn limit participation. But hey, if we keep the voting pool fresh, that would certainly allow for shake-ups from year to year.
Back to the topic of ENJOYMENT, I really wanted to separate the XC course from the XC meet, but that was not entirely possible. There are dozens of factors not related to the course that influence enjoyment, including timing, music, food trucks, awards, porta-potties, spectators, parking, and on and on. I never wanted to rate the meets. Meet directors never asked for their meets to be critiqued either. Let's examine the categories that were included and excluded in this survey.
START
I think we all can agree that the start is a pretty important factor in the race, but is it really a feature of the course? Yes and no. The quality of a start comes down to how quickly the course narrows relative to how many runners are jammed onto the starting line. Take Oatlands, for example. If there were only 50 runners on the starting line, you would say that the start is ridiculously oversized. How could you not rate it 100 out of 100? But because some races at Oatlands put 1,000+ runners on the starting line, some athletes bumped it down a little bit.
It is interesting to note that out of all the categories included in the survey, the START is where athletes and coaches diverged the most in their assessments. In most cases, athletes rated START significantly lower than coaches.
Some may argue that a start that bottlenecks rapidly is just part of the sport. It adds strategy for those aspiring to lead, but I think we can all agree that giving athletes adequate room to space out before they hit a sharp turn or a tunnel is important for fairness and safety.
PASSIBILITY
Spell check tells me that PASSIBILITY is a fake word, but I don't care. You know what I mean, right? Do you have room to pass opponents when you want to? There is nothing worse than being forced to run single file or else risk stepping onto an undesirable path to get around someone.
Even more so than a narrow start, a narrow portion of a cross-country course most definitely adds strategy and is not altogether bad. A narrow path may allow one runner to completely manipulate his or her opponents, but is that really what you want at a major championship meet, where fairness and safety matter most? Good PASSIBILITY is almost always more desirable in an XC course.
SURFACES
I defined SURFACES in the survey as a scale from 100% impenetrable to 100% spike country. In other words, how much pavement versus dirt or grass? I've got nothing against road racing, but pavement racing is not XC racing. XC is digging into the earth with your spikes, so the more you can eliminate concrete, pavement, and stony, impenetrable surfaces, the better.
HAZARDS
Out of all the categories, I was really surprised by how low most of these courses scored in the HAZARDS category. I defined the scale as "Country Club Smooth" versus "Ankle-Twisting Roots + Bears." I have never seen a bear at a cross-country meet, but I have seen mass bee stingings. Generally, the injury risk seems very low at the nine courses included in this survey, so why did none of the courses get a HAZARDS score above 76? Why did we get composite scores in the 40's and 50's out of 100? To be in line with the rest of the survey, the most hazardous end of the scale was zero and the least hazardous end was 100, but did some voters read the scale in reverse? I think maybe yes, but there was no opportunity to ask. I will consider making the survey clearer in the future.
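If the survey is run again with access to the raw responses, one way to probe the reversed-scale theory (purely hypothetical on my part, not something that was done here) would be to check whether mirroring a suspicious HAZARDS rating across the 0-100 scale brings it back in line with the course consensus. A minimal sketch:

```python
from statistics import mean

def flip(score: float) -> float:
    """Mirror a 0-100 rating, i.e. what a voter who read the scale backwards produced."""
    return 100 - score

def looks_reversed(score: float, consensus: float, tolerance: float = 25) -> bool:
    """Hypothetical heuristic: the flipped score matches the consensus far better than the raw one."""
    return abs(flip(score) - consensus) + tolerance < abs(score - consensus)

# Made-up example: most voters rate a smooth course in the 70s for HAZARDS,
# but one voter gives it a 15; flipping that to 85 fits the consensus much better.
ratings = [72, 75, 68, 80, 15]
consensus = mean(ratings[:-1])
print(looks_reversed(ratings[-1], consensus))  # True
```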
As for the category of HAZARDS, my intention was to assess the risk of injury based on the footing, animal or insect attacks, or any other feature of the course like a tree in the middle of the downhill. I did not want HAZARDS to be confused with obstacles. Minor obstacles like a hay bale or a creek crossing are perfectly acceptable in cross-country in my opinion, just so long as it does not turn into one of those mudder/obstacle runs.
SETTING
SETTING is not at all important when it comes to safety or fairness, assuming the meet director has otherwise set up an acceptable XC course. But it is a perfectly valid category when it comes to the overall positive or negative experience. Most of us, I assume, would prefer to smell fresh air in a quiet setting rather than run urban sidewalks with cars honking and blowing exhaust fumes in our direction. Most of us, I assume, appreciate the color of leaves turning in the fall and the simple beauty of rolling, grassy hills. Those factors, maybe more than anything else, influence your memory of a course and your desire to return.
SHADE
SHADE is a perk. It is not critical. DC is hot, but most conditioned runners can handle the sun for 5 kilometers. The race has a start and a finish, and you can usually find shade, even if only in a tent, before and after the race. Still, wouldn't it be nice if you could get out of the sun in August and September? Isn't that what we are after, after all? Are we not trying to define the absolute ideal XC course, even if it may not exist in nature? Shade usually opposes the growth of grass, which means more of a trail race than a grassy meadow race. You could have elements of both, but usually not both at the same time.
In most cases, voters knocked courses down appropriately for SHADE. The highly regarded courses scored in the 50's and 60's, which is fine as long as the standard is equally applied, but I do believe that Kenilworth Park and Oatlands were unfairly critiqued here. Why? Those venues host major meets in September and are historically hot. If we moved those meets to October, I bet we would see SHADE ratings more in line with the other courses; then again, if all the meets were in October, we probably would not consider SHADE an important category anymore.
STIMULATION
This is my favorite category. I think you all knew exactly how to assess STIMULATION. Think treadmill in a closed room with no people, no music, no TV. Then think about navigating rolling terrain with unique landmarks and a gorgeous backdrop. Are you running on a hamster wheel just to complete a 5k distance, or are you voyaging through an unexplored wilderness on a passage some adventurer blazed for you to follow? Too much? Come on! You have to let your imagination run wild sometimes. We all do it.
GROOMING
GROOMING is where it becomes impossible to separate a physical XC course from the meet and the meet management, as some coaches pointed out. We all agree that grooming is critically important. A course may receive all 100's in the preceding categories, but if runners get lost or hurt over something preventable, it spoils everything. Grass must be cut to a reasonable level, tree limbs trimmed, and major hazards averted. Runners must be able to find their way via course markings. Rarely do those things take care of themselves over a 5k course without a huge effort on the part of many volunteers, so this category, more so than any of the others, reflects human effort. Therefore, GROOMING must be tied to a specific meet on a specific date, and that will be corrected if the survey is conducted in the future.
OVERALL ENJOYMENT
Finally, I thought it was important to include the most basic question: do you enjoy it? It is completely subjective, yet not really captured by any of the other categories. It allows voters a moment of ease after answering difficult questions to finally give their opinion, even if it only counts for one-ninth of the overall score. It is your chance to say, "maybe everything about this course stinks, but I love it, and that matters, damn it!"
TERRAIN (hilliness)
First of all, I will change the name of TERRAIN to HILLINESS, because terrain does not imply a change in elevation; I picked the wrong word in the beginning. The TERRAIN (hilliness) score in this survey was NOT used in the overall score. It is tricky. Flat is not bad and hilly is not bad, but is there such a thing as too flat or too hilly? Are there ways we could incorporate hilliness into the overall score in the future? We could make "too flat" or "too hilly" a score of zero and "appropriate hilliness" a score of 100. Or we could put "Moderately Hilly" in the middle of the scale and make that the maximum score of 100, which assumes that "Moderate" is the gold standard. With "Too Flat" at the left end and "Too Hilly" at the right, each point to the left or right of "Moderate" would take a point off a perfect 100, so the minimum possible score would be 50. Or we could just continue to leave it out of the composite scoring.
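To make those two options concrete, here is a small sketch of the arithmetic (my own illustration, not anything used in the survey), assuming the raw slider runs from 0 for "Too Flat" to 100 for "Too Hilly" with 50 as "Moderate":

```python
def option_extremes_zero(raw: float) -> float:
    """'Too flat' or 'too hilly' scores 0 and 'Moderate' (raw = 50) scores 100.
    The linear shape in between is my assumption; only the endpoints come from the text."""
    return 100 - 2 * abs(raw - 50)

def option_moderate_anchor(raw: float) -> float:
    """'Moderate' scores 100 and each point away from it costs one point,
    so the worst possible score is 50 at either extreme."""
    return 100 - abs(raw - 50)

for raw in (0, 25, 50, 75, 100):
    print(raw, option_extremes_zero(raw), option_moderate_anchor(raw))
# raw=0   ->   0, 50
# raw=50  -> 100, 100
# raw=100 ->   0, 50
```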
Your Suggestions
The survey collected a lot of feedback. Most of it was really great.
There were multiple requests to include more courses in the survey, to allow comments on each course while taking the survey rather than only at the end, and a very valid suggestion to offer a smaller point scale, e.g., a 10-point scale instead of a 100-point scale.
I agree 100% with the point-scale comment, but the scale was a non-adjustable feature of Survey Monkey, so that probably will not change. A comment box for each course is definitely doable.
Including more courses poses certain problems. We chose the most utilized courses in the DC area with the highest participation levels. If we were to expand to XC courses outside of the DC area or include lesser-used, lesser-known courses within it, the number of surveys collected per course would go down. We need a large enough voting base to get a reasonable sample size for each course, and with some of the courses, we pushed that limit as it was.
Here are the categories that survey respondents suggested including in the future:
Spectator friendliness / accessibility could be incorporated into the survey, but it is complex. Is there such a thing as spectator accessibility that is good for everyone and what does that look like? Roping off spectator zones and restricting access is undesirable for spectators but often necessary for the safety and clear passage of the runners. Spectator-friendly also implies that spectators can see the athletes for a great deal of the race, but that often requires smaller loops and a repetitive course which opposes the STIMULATION category for the runner. A perfect harmony of spectator access and runner safety and enjoyment may be nearly impossible to achieve, but like with the SHADE category, we could always include it even if it means no course could possibly score 100's across the board.
Weather resilience / drainage is probably too "inside" for athletes and coaches to vote on. Either a meet is canceled or it's not. Athletes lose their shoes in mud or they don't. It takes inside knowledge and years of experience to know how a course handles weather so that does not make for a great general category.
Parking. You don't have to tell me. Parking is extremely important, but it blurs the line between course and meet, and it is not at all important after you've secured parking (or gotten off the bus).
Course length. Do we really want to go there? If so, what is a bad course length? Is the 2.5-mile Manhattan Invitational course wrong and bad? I think the only bad course length is a wrong course length, and even then, does it really matter? And should athletes be voting on how wrong a course length is?
Toughness. I thought about this before publishing the survey, but what is toughness? Toughness would probably correlate very closely with hilliness, but toughness is also overcoming many of the categories that I listed. If you have to overcome narrow trails, hazards, poor footing, monotony, etc., that is toughness. But by my earlier definitions those things are all bad, so is toughness good or bad? I don't know. Toughness would be like hilliness: we all probably agree some level is good, but it's impossible to say what level is perfect.
Usefulness is an interesting category that I did not consider. Coaches will choose courses with features A, B, and C early in the season and courses with features X, Y, and Z later in the season to work on different things. In those terms, USEFULNESS really is a category unto itself, not already covered by the others. Very good suggestion, but also one that may be difficult for athletes to answer.
Tradition. Tradition is great, but how would it be scored? Would it simply be the number of years that the course has gone unchanged? In that case, the longstanding courses would just have a built-in boost. Or would it be a judgment of how great the tradition is, in which case it would probably correlate closely with overall enjoyment? And isn't tradition also in opposition to ingenuity and improvement?
Porta-potties. This was a crappy suggestion. I am not saying that I wouldn't include it.