Although research from the University of Akron found that automated grading of essays can be as accurate as human grading, the conclusions are debatable. Think of those students who are terrific at creative writing but don't follow conventional patterns. If their work doesn't contain what the robot grader is expecting, too bad. Akron's lead researcher, Mark D. Shermis, admits that the programs aren't good at evaluating creativity and should be used only as supplementary resources for teachers covering the basics of writing. English teacher Renee Moore says that it isn't the grade on an essay that matters; it's the learning process that takes place through student-teacher interaction. Automated graders can't sit down next to students and go over their essays with them. They can't tell fact from fiction, and according to Les Perelman, Director of Writing at MIT, they can be fooled.