Author
Greg Robinson, Education Insight and Evaluation analyst, FoE Edtech Lab.
In the summer term of 2022/23, FeedbackFruits (GME) was piloted in the Faculty of Engineering across six departments and thirty modules, involving around two thousand students, replacing WebPA as a peer assessment and feedback tool.
There were three main use cases to evaluate:
- a general approach of using the tool for feedback and/or individualised group grades/scores;
- a consistent template/rubric approach used across ten modules in one department;
- a novel approach for student groups to assess group poster work.
Planning the evaluation:
Success measures were agreed ahead of the pilot for each of these use cases, against the strategic objectives of active learning pedagogy, curriculum and assessment, and inclusivity. They were defined as per the example below:
Objective: “the tool should support group dynamics within the criteria set to deliver a satisfactory piece of coursework”
Action: “students to give feedback to one another and communicate effectively to help each other understand their contributions and how to improve”
Success Indicators: “groupwork delivered collaboratively and effectively; both teacher and students satisfied with the groupwork process; students encounter minimal system challenges”
Metrics: “staff and student survey questions around the ease and effectiveness of the tool”
Using these success measures, staff and student survey questions were generated, following best practice in questionnaire design (Best practice in questionnaire design | Research and Innovation | Imperial College London). The survey results, together with interviews with key staff stakeholders, were used to gauge the success of the pilot.
Analysis:
Once staff interviews had been completed and survey results received, analysis could begin. Survey response rates are generally low, especially where there is no ‘reward’ or where surveys are not conducted in person. In this case there were 54 student responses, a fair number in absolute terms but a low percentage of the cohort, so this had to be considered when judging the strength of any findings and recommendations.
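As a rough illustration of why this matters, a back-of-the-envelope calculation (a minimal sketch in Python; the ~2,000-student cohort figure comes from the pilot description, and the assumption that all of them were invited to the survey is just that, an assumption) shows the uncertainty attached to any percentage reported from 54 responses:

```python
import math

# Assumed figures: ~2,000 students in the pilot and 54 survey responses;
# the survey reach is assumed here, not confirmed in the evaluation.
cohort = 2000
responses = 54

response_rate = responses / cohort
print(f"Response rate: {response_rate:.1%}")  # ~2.7%

# Approximate 95% margin of error for a reported proportion
# (worst case p = 0.5), ignoring any finite-population correction.
p = 0.5
margin = 1.96 * math.sqrt(p * (1 - p) / responses)
print(f"Approximate 95% margin of error: +/-{margin:.0%}")  # roughly +/-13%
```

A margin of error in the region of ±13 percentage points is one way of expressing why findings from 54 responses were treated as indicative rather than conclusive.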
Likert responses, for example on the ease of giving honest feedback to peers, were displayed as pie charts to show the proportions of responses clearly. Alongside these categorised responses, there were many free-text comments that could be assessed for positive or negative sentiment before key findings and recommendations were drawn out.
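For illustration, the kind of chart and comment triage described above could be produced with a short script such as the following (a minimal sketch, not the actual analysis pipeline; the response counts, question wording, and keyword lists are hypothetical placeholders):

```python
import matplotlib.pyplot as plt

# Hypothetical Likert counts for one survey question (totals 54 responses).
likert_counts = {
    "Strongly agree": 14,
    "Agree": 22,
    "Neutral": 9,
    "Disagree": 6,
    "Strongly disagree": 3,
}

# Pie chart showing the proportion of each response category.
plt.figure(figsize=(5, 5))
plt.pie(likert_counts.values(), labels=likert_counts.keys(), autopct="%1.0f%%")
plt.title("Ease of giving honest feedback to peers (n=54)")
plt.savefig("likert_pie.png", dpi=150)

# Very simple keyword-based triage of free-text comments into broad sentiment
# buckets; a real analysis might use manual coding or a dedicated sentiment tool.
positive_words = {"easy", "useful", "helpful", "clear"}
negative_words = {"confusing", "difficult", "unclear", "hard"}

def triage(comment: str) -> str:
    words = set(comment.lower().split())
    pos, neg = len(words & positive_words), len(words & negative_words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "mixed/neutral"

comments = [
    "The tool was easy to use and the feedback was helpful",
    "Scores were confusing and hard to interpret",
]
for c in comments:
    print(triage(c), "-", c)
```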
Findings and recommendations:
There were recurring themes; for example, both staff and students raised concerns about the anonymity of feedback. These themes could be used to determine how to apply the tool in future, and to which year groups and group sizes, since anonymity was harder to maintain in smaller groups.
Other recurring themes were used to give feedback to the vendor, for example on the ease of understanding the different scores presented in the tool. It was also possible to identify where EdTech Lab support could be improved in future.
The findings were communicated to all relevant stakeholders in various ways, from written reports to presentations and talks.