Marking across vocational courses in Further Education (FE) is a substantial contributor to lecturer workload. With lecturers potentially marking hundreds of assignments per academic year, any strategy to reduce marking time could contribute towards the development of internal policy to manage workload and improve marking processes. One process widely cited as positively impacting marking time is peer review of work before an assignment is submitted. The aim of this practice-based action research was to answer the question, “Will implementing a peer review policy for learner assignments alleviate time required for marking and improve the marking process?” A focused sample of four active practitioners/lecturers in sports qualifications, with varying degrees of experience, was selected for the project. The findings from the sample unanimously indicated that the implementation of a peer marking policy reduced lecturers’ marking time by a perceived average of 15%. This was attributed to (a) less time spent correcting spelling, punctuation and grammar (SPAG) issues, (b) less annotation of scripts being required, (c) tasks being answered more completely, and (d) fewer scripts needing to be returned for resubmission and remarking. The results may support the inclusion of peer review in a broader marking policy, or provide a foundation for further investigation aimed at a more manageable lecturer workload. The enquiry may also support practitioners in their own review and offer a perspective for potential policy and practice changes in establishments throughout the sector.
This project took place within an Independent Training Provider operating in the FE sector, delivering study programmes to 16–18-year-old learners. Participants were delivering vocational Sport qualifications at Level 3 across different sites in the south east of England. The project used action research to provide initial results toward the development of a marking and workload policy for FE lecturers. The present professional paradigm in FE is one of high workload and teacher attrition (Department for Education [DfE], 2018). A major component of this workload within the FE sector is the marking of summative assessments. By sampling practitioners of varying experience, the project examined the impact of a peer review process on the time taken for lecturers to mark learner work. The working hypothesis was that assessment work screened during the drafting process should contain fewer mistakes in content, spelling, punctuation, grammar and structure, in turn reducing the need for resubmission and remarking.
RESEARCH GOAL, METHOD, AND OUTCOME
Research Rationale and Goal
In November 2018, the DfE (2018) published an open letter signed by the Secretary of State for Education, Her Majesty’s Chief Inspector and the General Secretary of the Association of School and College Leaders. The letter outlined the urgent need to address teacher workload in order to reduce the number of teachers leaving the profession.
Workload, a UK Government Agenda
In 2014 the DfE undertook a landmark consultation via an online survey entitled The Workload Challenge, and published its findings the following year (Gibson, Oliver, & Dennison 2015). Pertinent to this project, 53% of respondents identified marking as a primary burdensome task, with only data inputting and analysis scoring higher. The authors’ suggested solution to alleviate this aspect of workload was for schools to modify marking arrangements. Additionally, in response to the consultation the DfE established three independent workload review groups, one of which produced the Eliminating Unnecessary Workload Around Marking report (DfE 2016). One key finding was that a contributing factor to marking load is that learners’ work is often submitted without sufficient review, requiring extensive marking and feedback.
Peer Marking and Its Relation with Workload
A broad body of research supports the idea that a peer review process can save time spent on marking and increase the efficiency of staff time (Boud 1989; Boud and Holmes 1995; Hanrahan and Isaacs 2001; Harris 2011; Hughes 2001; Kelmar 1993; Macpherson 1999). These studies provide a range of justifications, but a particular study by Haswell (1983) offers insight into why this may be the case. That study compared the total number of mistakes noted by a teacher with those noted by students reviewing the same work: learners were able to identify 61.1% of all errors the teacher highlighted. The author did not attach any measurement of time to this percentage, but it may be seen as significant. Despite Haswell’s results, evidence also exists to the contrary. Black, Harrison, Lee, Marshall, and Wiliam (2004) write that for the process to be efficient, it takes considerable investment of time to familiarise learners with the peer review process. Falchikov (2001) and Carter (1997) add that, from both a teacher and a learner perspective, the process can be time consuming and create extra components for teachers to consider. However, the latter argument suggests that the awarding of grades was the main factor increasing the time required. Awarding grades was not within the scope of the peer review in the present project, which would likely negate the heavy time component.
It should be noted that the available literature rarely references FE directly and generally focuses on academic rather than vocational subjects, as per the scope of this enquiry. This is significant: nuances in the research, such as types of feedback, level, sector and purpose, do not conclusively support or contradict its implementation as a means of saving lecturer time in FE. This absence of sector-specific research therefore provided justification for the focus of this enquiry. To develop a robust and time-efficient marking policy, the inclusion of peer review would require an understanding of its direct impact within FE specifically. The current research can provide a foundation for the continued research necessary in the FE field.
As a result of this gap in the research, as well as the government suggestions, this study aimed to address the question, “Will implementing a peer review policy for learner assignments alleviate time required for marking and improve the marking process?”
By providing initial insight into the adoption of this practice within a particular FE setting, this research aims to provide colleagues with some practice-based research which they may review and find helpful in efforts to create a marking policy within their establishments. Furthermore, it intends to provide a contribution to further academic study of workload management. Personally, the current research formed the first phase of a larger workload/marking policy development project.
From the inception of this research project, the “Research Pathways” presented by Hitchcock and Hughes (1995, p. 78) were used as a framework to develop the study in line with academic processes (PowerPoint Figure 1, Slide 3). Within this framework, the methodology also adopted the underpinning “central features” of action research provided by Kemmis and McTaggart (1988, p. 5), to ensure that its design was: (1) concerned with practices in a social situation, (2) cyclical, and (3) a fusion of action and research. These features were applied to the action research process of McNiff (1988), which can be seen in Figure 2 (Slide 4). To start the cycle, learners peer-reviewed class assignments prior to submission, following the process outlined in Figure 3 (Slide 5). In the session before submitting the assignment, learners paired up and reviewed each other’s work. No upskilling of students took place, as they were able to draw on the assignment briefs and their pre-existing knowledge of spelling, punctuation and grammar. As a prerequisite of the programme, all learners had already achieved Grade 4 or above in English, which supported the process. Learners provided feedback to their peer with suggestions to improve the work before final submission. The lecturers followed their usual marking process and were subsequently presented with a survey on their perceptions of the intervention’s impact.
To keep the study tight and manageable, four lecturers were selected as participants. This small sample made it possible to select appropriate participants who could reliably complete the research as requested. I had to ensure participants were not already using peer review for assessment, as that would not constitute an intervention and any impact of the research would likely not be observable. As lecturing is a demanding occupation, only staff with the capacity to manage an additional activity (i.e. the use of peer review) were selected. Participants were working without a substantial backlog of marking, which would likely have acted as an additional stressor. Their teaching experience ranged from 1 to 9 years.
Research Instrument and Data Collection
A questionnaire was selected as the primary research instrument and was provided to the participating lecturers to assess the perceived impact of the peer marking intervention. Anderson (2002) provides a six-step model for creating high-quality questionnaires, which was adopted to ensure the instrument was designed effectively (Figure 4, Slide 6). To aid anonymity and increase the likelihood of candid responses, no personal data was collected. This helped address the potential issue of professional relationships between the participants and myself. By anonymising feedback, it was hoped that responses would be honest and avoid any effort to appease the researcher by choosing perceived ‘correct’ answers.
In addition to providing anonymity to protect participants from any concern about impact on their professional role, a number of other steps were taken to ensure the research was ethical. Before the study commenced, I completed the University ethics approval process, which rated the research as low risk, and an ethics form that was approved by the module tutor. I familiarised myself with the BERA (2018) Ethical Guidelines for Educational Research and other relevant legislation such as the GDPR (2018), which were considered throughout the planning and implementation of the project. Letters were given to participants explaining the aim and structure of the research and the requirements of their participation, followed up with an explanatory phone call. Consent forms were also completed and collated. A similar process was conducted with my employer, although I had already been commissioned to conduct the review.
Research Findings and Considerations for Further Development
Initial findings suggested that peer-reviewing assignments before submitting them to the course lecturer does reduce the time required to mark learners’ work, thus improving the marking process with regard to lecturer workload. All four participants agreed that the process had saved noticeable time when marking assignments. This was found both for participants who had prior experience of marking the chosen unit of study and for those who were new to it. See detailed results in the graphics (Slides 8–9).
Interestingly, despite all participants agreeing that the process had saved marking time, only three of the four participants stated that they would recommend that colleagues implement this practice. The reason given by one participant who would not recommend to colleagues was because the participant felt the learners would, “…struggle to engage and apply themselves to the practice with the frequency required. Getting them to complete it this time was tough and they did not seem to enjoy the experience. I envisage that I would spend too much time having to manage behaviour of pupils who are bored and disengaged with the process.”
The number of scripts marked varied by participant, with the lowest being 14 and the highest 20 (Slide 8). It is worth noting that, due to a standardised delivery model, all participants marked the same assignment, allowing for uniformity across the sample. The participant with the fewest scripts perceived the least time saved, whereas the participant who marked 19 perceived the most. This may suggest the time saving is cumulative, building with the number of scripts marked.
The range of perceived time saved was between 10% and 20%, with an average of 15%. The improvements recognised in the learners’ scripts were: fewer spelling mistakes (all 4 participants); improved spelling, punctuation and structure of work (3); fewer overall mistakes for correction (4); and scripts requiring less annotation and tasks tending to be answered more completely, which in turn resulted in fewer scripts being returned to learners for resubmission (3).
As a takeaway from this project, the results suggest that peer-reviewing work can save time for lecturers as it minimises mistakes in learner work. These were predominantly highlighted as improvements in spelling, punctuation, grammar, or surface mistakes (Slide 11), as suggested in the earlier research by Haswell (1983). The findings also generally agree with the prior research (Boud 1989; Hanrahan & Isaacs 2001; Harris 2011; Hughes 2001; Kelmar 1993; Macpherson 1999) suggesting that the process of peer review can decrease time spent by teachers on marking.
No feedback was found to support the claims of authors such as Black et al. (2004) and Bostock (2000), who suggested that the process takes excessive planning or classroom time to conduct. However, valuable feedback was gathered relating to the reality of regularly embedding this practice, given potential issues with learner engagement. Despite the initial findings appearing positive, I approach the results with relative caution for a variety of reasons discussed below. The critique and reflection on the current findings will assist in further action planning before progressing to the next stage of research.
Considerations for Subsequent Research
The above findings can provide a foundation for me and other practitioners to build upon. Despite being able to provide a relatively conclusive answer to the question posed, this action research was designed to form a foundation and justify further academic research. For transparency and to support ongoing research, a critical reflection highlights a number of potential areas to consider regarding further research and action.
Although the conclusions were positive, there were only four lecturers who participated in this research, thus limiting any suggestion of a causal relation between the intervention and saving lecturers’ time. Continued research is required to further accumulate evidence in the FE area.
Lecturers without a substantial marking backlog were selected in order not to add to their workload. This may mean that those who might find the peer review approach most beneficial were not part of the enquiry. Arguably, the impact on lecturers who struggle to manage workload would be the most important to observe. Greater diversification within the sample could address this dilemma and improve generalisability, although the ethical implications of potentially adding stressors would need to be weighed.
Another aspect that would further support the findings would be the collection of specific, quantifiable measurements of any time saved. This enquiry relied on the participants’ perception of the intervention’s impact. The fact that participants were acutely aware of the process, and were applying a deeper level of conscious consideration, may have affected their perception. A comparative study, or the addition of a control group with timings taken, may lend greater credence to the time-saving claim.
The nature of my research instrument did not allow for further probing or enquiry. At times I was unable to find out why an answer was given, or how an answer was reached. Such information would have helped inform practice by providing more qualitative data from which to draw conclusions. This indicates that interviews may have been a better instrument for data collection, or could have complemented the survey. This would have been difficult on this occasion given time constraints and the geographical location of staff.
Within the results, no feedback suggested an increase in classroom time invested in completing the process. This was raised in the literature review but not accounted for or explored within the survey. As it may play a significant role in any potential adoption of the practice, it will need to be explored more explicitly, with its broader impact reviewed.
As a result of this critique and reflection, it is apparent that the research conducted had areas for development. However, it has indicated that there may be a line of enquiry worth following on this topic, and that by refining the focus and reframing the plan I will be able to provide more conclusive findings.
The question in focus was, “Will implementing a peer review policy for learner assignments alleviate time required for marking and improve the marking process?” The literature provided an insight into the current educational landscape and key research findings in relation to the two elements making up the enquiry: workload and peer marking. The project validated the need for further investigation into the topic area.
Despite the initially positive findings, the process of reflection highlighted a number of issues with the research, design and implementation. It was concluded that the evidence gathered should be viewed cautiously. However, it did provide a foundation for further exploration of the subject with greater research rigour and an enhanced research process.
Anderson, G. (2002) Fundamentals of Educational Leadership. London, Routledge-Falmer
Black, P. Harrison, C. Lee, C. Marshall, B. and Wiliam, D. (2004) ‘Working inside the Black Box: Assessment for Learning in the Classroom’, Phi Delta Kappan, Vol. 86, No. 1, pp. 8–21.
Bostock, S. (2000) Student peer assessment, Learning Technology, Vol. 5 No. 1, pp. 8-16
Boud, D. (1989) The role of self-assessment in student grading. Assessment and Evaluation in Higher Education, Vol. 14, No. 1, pp. 20-30.
Boud, D. and Holmes, H. (1995) Self and peer marking in a large technical subject. In: Boud, D. (ed.) Enhancing Learning Through Self-Assessment. London: Kogan Page.
British Educational Research Association (BERA) (2018) Ethical guidelines for educational research. 4th ed. Available at: https://www.bera.ac.uk/researchers-resources/publications/ethical-guidelines-for-educational-research-2018 (Accessed: 30 April 2019)
Carter, C.R. (1997) Assessment: Shifting the responsibility. Journal of Secondary Gifted Education, Vol 9, No. 2, pp.68-75.
Department for Education (2016) Eliminating Unnecessary Workload Around Marking. [online] Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/511256/Eliminating-unnecessary-workload-around-marking.pdf (Accessed 3 June 2019)
Department for Education (2018) Joint Letter to All School Leaders Including Head teachers, Leaders of Academy Trusts and Governing Boards. [online] Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/753668/Reducing_Teacher_Workload_-_Letter_to_School_Leaders.pdf (Accessed 10 May 2019)
Falchikov, N. (2001) Learning together: Peer tutoring in higher education. Brighton: Psychology Press.
General Data Protection Regulation. (2018). General Data Protection Regulation (GDPR) – Final Text Neatly Arranged. [online] Available at: https://gdpr-info.eu (Accessed 9 May 2018)
Gibson, S., Oliver, L. and Dennison, M. (2015) Workload Challenge: Analysis of Teacher Consultation Responses. Research report. [online] Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/401406/RR445_-_Workload_Challenge_-_Analysis_of_teacher_consultation_responses_FINAL.pdf (Accessed 3 June 2019)
Hanrahan, S.J. and Isaacs, G. (2001) Assessing self- and peer-assessment: The students’ views. Higher Education Research & Development, Vol. 20, No. 1, pp. 53-70.
Harris, J.R. (2011) Peer assessment in large undergraduate classes: an evaluation of a procedure for marking laboratory reports and a review of related practices. Advances in physiology education, Vol. 35, No. 2, pp.178-187.
Haswell, R.H. (1983) Minimal marking, College English, Vol. 45, No. 6, pp. 600-604.
Hitchcock, G. and Hughes, D. (1995) Research and the Teacher: A Qualitative Introduction to School-Based Research. London: Psychology Press.
Hughes, I. (2001) But isn’t this what you’re paid for? The pros and cons of peer and self-assessment, Planet, Vol. 3, No. 1, pp. 20-23.
Kelmar, J.H. (1993) Peer assessment in graduate management education. International Journal of Educational Management, Vol. 7, No. 2. pp 2-8.
Kemmis, S. and McTaggart, R. (1988) The Action Research Planner. 3rd ed. Victoria: Deakin University.
Macpherson, K. (1999) The Development of Critical Thinking Skills in Undergraduate Supervisory Management Units: efficacy of student peer assessment, Assessment & Evaluation in Higher Education, Vol. 24, No. 3, pp. 273-284.
McNiff, J. (1988) Action Research: Practice and Principles. London: Routledge.
To cite this work, please use the following reference:
Peters, A. (2020, April 26). The impact of learners’ peer reviewing on lecturers’ assessment marking time. Social Publishers Foundation. https://www.socialpublishersfoundation.org/knowledge_base/the-impact-of-learners-peer-reviewing-on-lecturers-assessment-marking-time/