Tuesday 8 April 2014

Reliability of public examinations

What makes for a reliable exam series? And who actually knows or cares?


A community effort?

Baird, Greatorex and Bell (2004) set out to study the effect that a community of practice could have on marking reliability. Their study concluded that discussion between examiners at a standardisation meeting had no significant effect on marking reliability, and the value of a community of practice was therefore called into question. I have to challenge this conclusion, because the artificial, synchronous conditions of a standardisation meeting hardly represent the potential of an asynchronous community of practice. The authors didn't take this into account - which is interesting, given that they actually cite Wenger in their references. Later in the paper they do seem to acknowledge that tacit knowledge already acquired in a community of assessment practice might explain why the format of standardisation meetings had no significant effect - and that is much more in line with the effect I would expect a community of practice to have.



What do these people know anyway?

Later research by Taylor (2007) sought to investigate public perceptions of the examining process. The study involved students, parents and teachers, used a variety of interview techniques, and looked at issues such as how papers are marked, the reliability and quality of marking, and the procedure for re-marks. The level of awareness about these issues varied somewhat, but it seemed very few people (even teachers) had full knowledge of the entire process.

There seemed to be a perception among students and parents that several examiners would mark each script (p.6) and arrive at a final mark through consensus, while teachers generally felt that a single examiner marked each paper, but this was based on perception rather than firm knowledge. Students and parents did not seem to have any real knowledge of how examiners might arrive at a mark, and even teachers were not aware of the hierarchical method, although they agreed that it seemed sensible when it was explained to them. All groups believed that the system would work better with multiple examiners, although they did acknowledge that this might be unrealistic given time and financial constraints. When questioned about the possible merits of having multiple examiners mark a single paper, examiners commented that any gains would be minimal in comparison to the present hierarchical system, and the cost would be prohibitive.

Members of the public seemed to have a better awareness of the concept of reliability (p.7), with teachers having a similar perception to examiners about the potential for marks between examiners to differ within a band of ability. Students and parents seemed to understand the potential for marks to differ, although their expectations were more optimistic than the real situation. There was much less understanding of how quality of marking was assured (p.8), with perceptions varying from an assumption that there was no checking at all, to a belief that far more scripts were checked than is the case. All parties agreed that quality checking was desirable, and examiners appeared to support the current system. Re-marking was more common knowledge among all the participants (p.10), although there was little knowledge of the precise system used.
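To make the idea of marks differing "within a band" concrete: one simple way to quantify agreement between two examiners is the proportion of scripts where their marks fall within a given tolerance of each other. This is my own illustrative sketch with invented marks, not data or a method from Taylor's study:

```python
# Illustrative only: the marks below are invented for demonstration,
# not taken from any of the studies discussed.

def agreement_within(marks_a, marks_b, tolerance):
    """Fraction of scripts where the two examiners' marks differ
    by at most `tolerance` marks."""
    assert len(marks_a) == len(marks_b)
    close = sum(1 for a, b in zip(marks_a, marks_b) if abs(a - b) <= tolerance)
    return close / len(marks_a)

# Hypothetical marks from two examiners for the same eight scripts.
examiner_1 = [14, 18, 11, 20, 16, 9, 13, 17]
examiner_2 = [15, 18, 13, 19, 14, 9, 12, 18]

exact = agreement_within(examiner_1, examiner_2, 0)
within_two = agreement_within(examiner_1, examiner_2, 2)
print(f"Exact agreement: {exact:.0%}, within 2 marks: {within_two:.0%}")
# → Exact agreement: 25%, within 2 marks: 100%
```

The gap between exact agreement and within-tolerance agreement is exactly the point the teachers and examiners were making: identical marks are rare, but marks within a narrow band are the realistic expectation.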

The report considered whether attempts to increase public understanding of the exam system would promote public trust. Although some believe that greater transparency might actually invite criticism, other literature suggests that revealing the workings of the system would lead to more realistic expectations and improved engagement, rather than a focus on failings. Establishing a clearer link between understanding of the exams process and public confidence was suggested as a direction for future research.

More on reliability

Chamberlain (2010) set out to build on the research into public awareness of assessment reliability - which also ties in well with some of the points raised by Billington (2007) about whether the general public are acting on good information or misinformation.
The research into public awareness used focus groups as a means of drawing out understanding, since these bypass some possible sources of researcher bias (the researchers were AQA employees) and can help foster a collaborative environment that triggers the sharing of opinions and ideas (p.6). One remaining source of bias was that the groups were small and composed primarily of people with a particular interest in the exams.

Research with several groups of people, including a number of teachers and trainee teachers, suggested little general understanding of the concept of marking reliability. Secondary teachers had more understanding, often due to experience of requesting re-marks for their pupils. Promisingly, most of the participants cared enough to indicate they would like to be better informed about the overall process of examinations, although there was no support for any quantification of reliability - most people already seemed to accept that there would inevitably be some variance in how marks were awarded, but placed trust in examiners to act as professionals. The exception is when a very public failure occurs, but even then attention rapidly fades once the problem seems to be 'fixed', with little real gain in understanding. The video below is one attempt to get some understanding out into the public domain:


Making The Grades - A Guide To Awarding


