Artificial intelligence (AI) could be used to mark the work of trainee teachers who are trying to identify pupils with potential learning difficulties, a study suggests.
Researchers said it could be an “effective substitute” when personal feedback is not readily available.
In a trial, 178 German trainee teachers were asked to assess six fictionalised pupils to decide whether they had learning difficulties such as dyslexia or Attention Deficit Hyperactivity Disorder (ADHD), and to explain their reasoning.
They were given examples of their schoolwork, as well as other information such as behaviour records and transcriptions of conversations with parents.
Immediately after submitting their answers, half of the trainees received a prototype ‘expert solution’, written in advance by a qualified professional, to compare with their own.
This is typical of the practice material that German trainee teachers usually receive outside taught classes.
The others received AI-generated feedback, which highlighted the correct parts of their solution and flagged aspects they might have improved.
The tests were scored by researchers, who assessed both diagnostic accuracy – whether the trainees had correctly identified cases of dyslexia or ADHD – and diagnostic reasoning: how well the trainees had used the available evidence to reach that judgement.
The average score for diagnostic reasoning among trainees who had received AI feedback during the six preliminary exercises was an estimated 10 percentage points higher than that of trainees who had worked with the pre-written expert solutions.
The reason for this may be the ‘adaptive’ nature of the AI, according to the study, led by academics at Cambridge University and Ludwig Maximilian University in Munich.
Because it analysed the trainee teachers’ own work, rather than asking them to compare it with an expert version, the researchers believe the feedback was clearer.
There is no evidence, therefore, that AI of this type would improve on one-to-one feedback from a human tutor or high-quality mentor, but where such close support is not readily available, it could have benefits, particularly for trainees on larger courses.
Dr Michael Sailer, from LMU Munich, said: “Obviously we are not arguing that AI should replace teacher-educators: new teachers still need expert guidance on how to recognise learning difficulties in the first place.
“It does seem, however, that AI-generated feedback helped these trainees to focus on what they really needed to learn.
“Where personal feedback is not readily available, it could be an effective substitute.”
The study used a system capable of analysing human language and spotting certain phrases, ideas, hypotheses or evaluations in the trainees’ text.
It was created using the responses of an earlier cohort of pre-service teachers to a similar exercise.
By segmenting and coding these responses, the team ‘trained’ the AI system to recognise the presence or absence of key points in the solutions provided by trainees during the trial.
The system then selected pre-written blocks of text to give the participants appropriate feedback.
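The pipeline described above – detecting key points in a trainee's written reasoning, then assembling matching pre-written feedback – can be sketched in miniature. This is an illustrative toy only: the study's actual system was trained on segmented, hand-coded responses from an earlier cohort, whereas the key points, phrase lists and feedback blocks below are hypothetical stand-ins using simple phrase matching.

```python
# Hypothetical key points a trainee's answer might contain, each detected here
# by naive substring matching (the real system used a trained language model).
KEY_POINTS = {
    "cites_schoolwork": ["spelling", "handwriting", "written work"],
    "cites_behaviour": ["attention", "restless", "distract"],
    "weighs_evidence": ["rule out", "however", "on the other hand"],
}

# Pre-written feedback blocks, selected by (key point, present-or-absent).
FEEDBACK_BLOCKS = {
    ("cites_schoolwork", True): "Good: you grounded your judgement in the pupil's schoolwork.",
    ("cites_schoolwork", False): "Look again at the work samples for systematic errors.",
    ("cites_behaviour", True): "Good: you drew on the behaviour records.",
    ("cites_behaviour", False): "Consider what the behaviour records add to the picture.",
    ("weighs_evidence", True): "Good: you weighed competing explanations.",
    ("weighs_evidence", False): "Try to explicitly rule out alternative explanations.",
}

def detect_key_points(answer: str) -> dict:
    """Flag which key points appear in a trainee's written reasoning."""
    text = answer.lower()
    return {point: any(phrase in text for phrase in phrases)
            for point, phrases in KEY_POINTS.items()}

def generate_feedback(answer: str) -> list:
    """Assemble pre-written feedback blocks from the detected key points."""
    flags = detect_key_points(answer)
    return [FEEDBACK_BLOCKS[(point, present)] for point, present in flags.items()]
```

Because the feedback is keyed to the trainee's own text rather than to a generic model answer, each participant receives a different combination of blocks – the ‘adaptive’ quality the researchers credit for the improved scores.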
Riikka Hofmann, associate professor at Cambridge University’s Faculty of Education, said: “Teachers play a critical role in recognising the signs of disorders and learning difficulties in pupils and referring them to specialists.
“Our findings suggest that AI could provide an extra level of individualised feedback to help them develop these essential competencies.”
The research is published in the journal Learning and Instruction.