Deeper Learning, Not Lighter Journalism
One thing that really disappoints me is newspaper reporters who try so hard to be cute that they miss, or botch, a story. Stephanie Simon from Reuters is a case in point. After a number of my colleagues invested a lot of time with her discussing the Hewlett Foundation-sponsored Automated Student Assessment Prizes (ASAP), she wrote Robo-readers: the new teachers’ helper in the U.S. As you can tell from the headline, it makes light of a serious subject. Worse, it misleads and enlists fear rather than informing.
The Hewlett Foundation is concerned about the unintended consequences of standardized testing. While they fully appreciate the potential benefits of online assessment, they worry that cost pressures on states could result in cheap online tests that actually make things worse. The foundation is sponsoring ASAP as part of their commitment to deeper learning. They want to see kids writing and thinking more every day, in classrooms and on state tests.
Real journalists don’t pander to base instincts; they create context and build understanding. They treat serious subjects seriously and don’t let humor or a headline get in the way of the work.
Hundreds of people have invested thousands of hours in the ASAP project that my organization manages. Their work will result in better state tests, and that will result in better American schools. The subject deserves better treatment than it received from Reuters. John Fallon from Vantage, one of the participants in the private vendor demonstration, was also troubled by the sloppy treatment. Below is his open letter on automated scoring of student writing; it’s a great response to a sloppy piece. The other participating vendors offer similar products (e.g., Pearson’s WriteToLearn and CTB’s Writing Roadmap) and have expressed similar sentiments.
—
Instead of focusing on the positive nature of such software, which is used extensively in the education market for good reason, the writer, it seemed to me, used intimidating imagery and half-truths to belittle the intelligence behind the technology.
“American high school students are terrible writers, and one education reform group thinks it has an answer: robots.”
My company, Vantage Learning, uses its expertise to build powerful tools that help students, teachers and schools. One of the platforms we are best known for is MY Access!®, an instructional writing and assessment tool powered by IntelliMetric®, the “gold standard” in automated essay scoring. It helps create better student writers by automatically scoring student essays and providing prescriptive, developmentally appropriate feedback.
Yes, it’s a computer system that scores student writing. No, it is not a robo-reader.
Let me make one thing perfectly clear: we do not intend, have never intended, and will never intend to replace teachers. All of our learning platforms, including MY Access!®, were created as tools for teachers.
There is a large market for instructional writing technology because America’s students are struggling with writing. And, as many experts will tell you, writing is a skill that is imperative for every child’s future.
“The theory is that teachers would assign more writing if they didn’t have to read it.”
That is NOT the “theory” behind our technology. That is not what MY Access!® is about. It is about helping teachers identify and remediate specific student deficiencies in writing. It is about helping students not get bogged down in minor errors in their writing. It is about creating a process for writing and supporting students as they make the journey to become better writers.
MY Access!® is a tool for teachers to create a dialogue with their students. We do NOT recommend that a student write to a prompt, receive a score, and consider the conversation over. That should be just the beginning! From the benchmark set by the platform, teachers can launch their students into learning and continue the writing process.
“But machines do not grade essays on either the SAT or the ACT, the two primary college entrance exams.”
Yes, this is absolutely true. However, a quick Internet search will turn up at least three other high-stakes exams that are scored by artificial intelligence, and those exams are no less weighty in assessing students’ skill levels.
“And American teachers by and large have been reluctant to turn their students’ homework assignments over to robo-graders.”
There are millions of students around the world who use our technology to become better writers. From elementary school to graduate school, students rely on artificial intelligence technology for instructional support and assessment insight.
“He argues that the best way to teach good writing is to help students wrestle with ideas; misspellings and syntax errors in early drafts should be ignored in favor of talking through the thesis.”
We wholeheartedly agree with Mr. Jehn. That’s the idea behind MY Access!®. Students have to use critical thinking skills to reflect on how to edit their writing to improve their score. And by getting immediate feedback, instead of waiting a week or two for comments from their teacher, they are empowered to do so. Also, as mentioned previously, MY Access!® helps students avoid getting bogged down in minor errors.
Also, you might notice that I keep using the word “score.” We don’t grade essays. Again, we’re not replacing teachers with a grade-spouting robot like the one in the “artist’s impression” from the article. We score essays and encourage teachers to change scores if they believe that is the best decision.
“Even supporters of robo-graders acknowledge its limitations.”
Yes, our technology isn’t perfect, and no form of technology is. But, again, we’re creating environments where teachers and students can have a common ground for learning and understanding. We fully expect teachers to be engaged with their students’ writing and review their essays for understanding as well as deficiencies.
“‘The reality is, humans are not very good at doing this… It’s inevitable’ that robo-graders will soon take over.”
As many teachers will tell you, and as our past studies have shown and a new study coming out in April will attest, our scoring is pretty darn accurate. We’ve found that our technology tends to be more reliable than a human scorer. But you must also remember that the technology itself was taught by human scorers: it read hundreds of essays and learned.
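For the technically curious, here is a minimal sketch of what “taught by human scorers” can look like in practice. To be clear, this is a generic supervised-learning illustration, not IntelliMetric® (whose methods are proprietary); the essays, the 1-6 rubric, and the model choice below are all invented for the example.

```python
# Illustrative only: a toy essay scorer trained on human-assigned scores.
# This is NOT IntelliMetric; it is a generic supervised-learning sketch
# using scikit-learn. Essays, scores, and the rubric are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Training data: essays already scored by human readers (hypothetical).
essays = [
    "The author develops a clear thesis and supports it with evidence.",
    "this essay good it have many mistake and no structure",
    "The argument begins promisingly but drifts off topic midway.",
]
human_scores = [6, 2, 4]  # e.g., on a 1-6 rubric

# Turn word and word-pair frequencies into numeric features, then fit
# a regression that maps those features onto the human scores.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge())
model.fit(essays, human_scores)

# A new essay receives a predicted score; a teacher can always override it.
predicted = model.predict(["A new student essay to be scored."])
print(round(float(predicted[0]), 1))
```

A real engine would, of course, train on hundreds of human-scored essays per prompt and use far richer features; that is the point of “it read hundreds of essays and learned.”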
I don’t want the public getting the wrong idea about automated essay scoring. Technology like ours is making a substantial and measurable difference in the lives of our users. Learners are writing more often, becoming better writers and, in turn, becoming better students. Isn’t that what we want for all of our children?
-John Fallon
Vice President of Marketing, Vantage Learning
Joy Pullmann
Tom,
Just a minor note: usually reporters don't write their own headlines. Copyeditors write those, and reporters have no control over them. I realize the letter you've reprinted here outlines more problems with nuance, but that's an important thing for non-reporters to understand.
Also, on the nuance: I agree some of it reflects poor understanding, but that's a function of the reporter not having spent years developing the product and talking about it every day, so her language isn't tailored to how the vendors prefer to describe it.
Joy