As part of my preparation for the new winter season I attended the Snowsport Scotland (SSS) Summit, which included tutor training, and the Irish Association of Snowsports Instructors (IASI) Educator training. This prompted me to reflect on my role as an educator/tutor and examiner of snowsport instructors, roles I have held since the early 90s! One of the primary goals for all examiners who assess coaches and instructors for their certification levels, within any accrediting body, is to apply the assessment criteria in a consistent and fair manner. Discussions during the recent training turned to the words 'objective' and 'subjective' in relation to the examining role, and I felt it would be useful to look more closely at the role of the examiner in 'measuring' performance in sport: what exactly those measurements are, and whether they are objective or subjective.
In sport, objective measurement includes things like time over distance (e.g. the 100m sprint in athletics, or the 200m front crawl in swimming); time over distance and around obstacles (e.g. a slalom race in alpine skiing); height (e.g. the high jump in athletics); distance (e.g. the javelin or hammer in athletics); and points or goals scored (e.g. soccer, tennis, badminton). The key element here is that the 'measurement' is made by something external to those observing, e.g. a stopwatch, tape measure or scoreboard.
On the other hand, subjective measurement in sport involves 'judgement' by an observer or observers, e.g. a panel of judges in ice skating, or an examiner or examiners for a level of instructor/coach certification in many sports. The key element here is that these judgements are made by highly trained individuals who rate the performance against a set of assessment criteria (a more objective tool).
The objective to subjective continuum
However, despite the examples above, it would be wrong to conclude that 'measurement' is clearly either objective or subjective. It is probably better to think of it as a continuum, with objectivity at one end and subjectivity at the other. Sports already mentioned, such as soccer and tennis, while objective in terms of goals and points scored, also involve subjective judgement via referees, umpires and line judges. Nonetheless, it is fair to say that the goal of those judging performance is not to be purely subjective but, where possible, to move towards the objective end of the continuum, thus striving for the consistency and fairness across all judges mentioned in the introduction of this article.
Making subjective measurement fair and consistent
So, how can sports that involve subjective measurement be made as fair as possible? And how can the outcomes of these judgements, e.g. results, be made consistent across a number of judges/examiners and from one competition/exam to another? Looking at my own sport of alpine skiing, and snowsport in general, the method for assessing instructors varies considerably from one country to another. For example, some of the alpine nations require candidates to wear numbered bibs and gather at the top of a run before performing various demonstrations, one by one, observed and assessed by several examiners whose performance ratings are then averaged. This pure assessment method has both advantages and disadvantages: the advantages are that the examiners do not know the individual candidates and that more than one person makes the judgement. The disadvantages include the limited number of runs the candidate has to perform at the level required, the length of time waiting for each run, and the potential pressure associated with that type of exam environment.
Other nations favour either continual assessment or combined training and assessment.
Continual assessment can be done over a number of days, e.g. three, with one examiner allocated to each group. The obvious advantage here is that candidates get the opportunity to perform multiple times and in different conditions (weather, snow etc.), so if their performance is at, or above, the required level they can demonstrate consistency. The pressure associated with this type of exam is different from the one-run assessment with bibs mentioned earlier, but it can also be difficult for some people to manage. And depending on the climate within the group, that pressure may even be accentuated!
Combined training and assessment
Courses that combine training and ongoing assessment tend to be one to two weeks in duration, but the biggest challenge here is that managing the two roles of trainer and examiner can be difficult and requires a great deal of skill on the part of the deliverer. There is sufficient time for candidates to develop their performance and make positive changes in terms of skill acquisition. However, if candidates are starting from a weaker position, e.g. below the standard, it is still unlikely that they will have sufficient practice time to move from the motor learning phase to the acquired performance phase, which in itself creates pressure that then has to be endured over a longer period!
But perhaps the biggest challenge with this type of delivery is dealing with the cognitive bias of the "Horns and Halo Effect". Essentially this means that, as human beings, we can easily allow our overall perception of a person (either good - halo; or bad - horns) to overshadow their other traits, behaviours, beliefs or actions (performances), and this can lead to poor or unfair judgements. The reason this is so important with this method of assessment (measurement) is that the examiner gets to know the candidates and develops a relationship with them. Hence, the following strategies for mitigating these downsides are crucial.
Mitigating the downsides of continual and combined training and assessment
There are a number of strategies that can be used to make these methods of assessment less subjective, and these include:
Allowing other examiners to view and comment on video of candidates' performances during the course.
Having another examiner attend part of the course to provide quality control and ensure the course is being delivered and assessed in a manner consistent with other courses.
Having more than one group/course taking place at the same time, in the same venue, so that examiners can view candidates from other groups while the course is actually happening.
Providing regular training events for examiners, which include rating scale exercises where the examiners view previous candidate performances and agree why a performance is below, at, or above the level, hence deepening the collective understanding and interpretation of the assessment criteria.
In conclusion, the organisations that I work for, and have previously worked for, as an examiner of instructors all use either continual assessment or combined training and assessment for their snowsport instructor exams. However, all of these organisations use, to a greater or lesser extent, a variety of the aforementioned strategies for mitigating the downsides of these subjective assessment methods, and they therefore sit somewhere along the objective to subjective continuum, albeit closer to the subjective end. There is no perfect way to measure performance in sports that are judged subjectively, and no perfect method for examining snowsport instructors being assessed for their qualifications, but, from my own perspective, I am confident that the organisations I am involved with strive to make these judgements as fair as possible.
For snowsport instructors progressing through their certification levels who want to improve their mental skills for dealing with exam pressure, the IASI Coaching Theory course includes useful content on Psychology in Snowsports. In addition, learning more about flow and mindfulness in sport can help as a coping strategy.