- Abstract: There is a growing body of research literature that considers how the mode of assessment, whether computer-based or paper-based, might affect candidates’ performances. Despite this, only a fairly narrow literature shifts the focus of attention to those making assessment judgements and considers issues of assessor consistency when dealing with extended textual answers in different modes. This research project explored whether the mode in which a set of extended essay texts was accessed and read systematically influenced the assessment judgements made about them. During the project, 12 experienced English literature assessors marked two matched samples of 90 essay exam scripts on screen and on paper. A variety of statistical methods were used to compare the reliability of the essay marks given by the assessors across modes (an illustrative sketch of one such comparison appears after the tags below). It was found that mode did not exert a systematic influence on marking reliability. The analyses also compared examiners’ marks with a gold-standard mark for each essay and found no shift in the location of the standard of recognised attainment across modes.
- Category: Uncategorized
Tags:
- learning content
- need
- hint
- reliability
- investigation
- hyperlink
- upload
- content
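
The abstract does not detail which statistical methods were used. As a rough, hypothetical illustration only, the Python sketch below shows one way marking reliability might be compared across modes against a gold standard: the data are synthetic, and the assessor/script counts are taken from the abstract, but the noise model, agreement measures, and paired test are assumptions rather than the study's actual analysis.

```python
# Hypothetical sketch of a cross-mode marking-reliability comparison.
# All marks here are simulated; only the 12-assessor / 90-script design
# comes from the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_assessors, n_scripts = 12, 90
# Gold-standard mark per script (illustrative mark range)
gold = rng.integers(10, 50, size=n_scripts).astype(float)

# Simulated assessor marks: gold standard plus assessor noise, per mode
marks = {
    mode: gold + rng.normal(0.0, 3.0, size=(n_assessors, n_scripts))
    for mode in ("paper", "screen")
}

for mode, m in marks.items():
    # Agreement with the gold standard, averaged over assessors
    r = np.mean([stats.pearsonr(m[a], gold)[0] for a in range(n_assessors)])
    mae = np.mean(np.abs(m - gold))
    print(f"{mode:>6}: mean r vs gold = {r:.3f}, mean |error| = {mae:.2f}")

# Paired comparison of per-assessor absolute error across the two modes:
# a non-significant result would be consistent with "no systematic
# influence of mode" as reported in the abstract.
err_paper = np.abs(marks["paper"] - gold).mean(axis=1)
err_screen = np.abs(marks["screen"] - gold).mean(axis=1)
t, p = stats.ttest_rel(err_paper, err_screen)
print(f"paired t-test on per-assessor error: t = {t:.2f}, p = {p:.3f}")
```

In a real analysis one would load the actual marks rather than simulate them, and would likely use an inter-rater agreement statistic such as an intraclass correlation alongside the simple correlation and error measures shown here.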