
Researchers have established that computer models can closely predict the scores human raters would have given a piece of writing.

However, Joshua Wilson of the School of Education took his research a step further, examining how the software might be used in conjunction with instruction rather than as a standalone scoring and feedback machine.

In earlier research, Wilson and his collaborators showed that teachers using the automated system spent more time giving feedback on higher-level writing skills – ideas, organization, word choice. Those who used standard feedback methods without automated scoring said they spent more time discussing spelling, punctuation, capitalization and grammar.

If computer models provide acceptable evaluations and speedy feedback, they reduce the training human scorers need and, of course, the time required to do the scoring. But Wilson wanted to know whether automated scoring and feedback could produce benefits throughout the school year, shaping instruction and providing incentives and feedback for struggling writers, beyond simply delivering speedy scores.

He introduced software called PEGWriting (which stands for Project Essay Grade Writing) to teachers of third-, fourth- and fifth-graders at Mote and Heritage Elementary Schools in Delaware and asked them to try it during the 2014-15 school year.

The software uses algorithms that measure more than 500 text-level variables, yielding scores and feedback on the following characteristics of writing quality: idea development, organization, style, word choice, sentence structure, and writing conventions such as spelling and grammar.
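To make that idea concrete, here is a minimal sketch of how an automated scorer can turn text-level features into per-trait scores. It is purely illustrative: the features, weights, and trait names below are hypothetical placeholders, not PEGWriting's actual model, which is trained on large corpora of human-scored essays.

import re
from dataclasses import dataclass

@dataclass
class TraitScores:
    """Illustrative trait scores on a 1-5 scale (hypothetical traits)."""
    development: float
    organization: float
    conventions: float

def extract_features(essay: str) -> dict:
    """Compute a handful of simple text-level variables from the essay."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "word_count": len(words),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "unique_word_ratio": len({w.lower() for w in words}) / max(len(words), 1),
        "paragraph_count": essay.count("\n\n") + 1,
    }

def score_traits(essay: str) -> TraitScores:
    """Combine features into toy trait scores; real systems learn these weights."""
    f = extract_features(essay)
    development = min(5.0, 1.0 + f["word_count"] / 100)
    organization = min(5.0, 1.0 + f["paragraph_count"])
    conventions = min(5.0, 1.0 + 4 * f["unique_word_ratio"])
    return TraitScores(development, organization, conventions)

if __name__ == "__main__":
    sample = "I added details to my story. My score went up.\n\nThe end."
    print(score_traits(sample))

Even in this toy version, adding more detail raises the development score, which mirrors the behavior students noticed when revising their work.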

“If we use the system throughout the year, can we start to improve the learning?” Wilson said. “Can we change the trajectory of kids who would otherwise fail, drop out or give up?”

Teachers said students liked the "game" aspects of the automated writing environment, and that this noticeably increased their motivation to write. Because they got immediate scores on their writing, many worked to raise their scores by correcting errors and revising their work over and over.

“There was an ‘aha!’ moment,” one teacher said. “Students said, ‘I added details and my score went up.’ They figured that out.”

And they wanted to keep going, shooting for higher scores.

For other students, though, that same quick score produced discouragement, teachers said, when they received low scores and could not figure out how to raise them no matter how hard they worked. That demonstrates the importance of the teacher's role, Wilson said. The teacher helps the student interpret and apply the feedback.

Teachers agreed that the software showed students the writing and editing process in ways they hadn’t grasped before, but some weren’t convinced that the computer-based evaluation would save them much time. They still needed to have individual conversations with each student – some more than others.

How teachers can use such tools effectively to demonstrate and reinforce the principles and rules of writing is the focus of Wilson’s research. He wants to know what kind of training teachers and students need to make the most of the software and what kind of efficiencies it offers teachers to help them do more of what they do best: teach.

For more details, read the UDaily article "The algorithm of writing."