MENLO PARK, Calif. – The William and Flora Hewlett Foundation announced today that, through its new ASAP competition, it will award a $100,000 prize to the designers of software that can reliably automate essay grading for state tests, with the goal of driving testing of deeper learning.
Today, experts believe that critical reasoning and writing are among the suite of skills that students need in the 21st century. The automated scoring competition aims to solve the longstanding problems of high cost and slow turnaround in current testing. The goal is to shift testing away from standardized bubble tests toward tests that evaluate critical thinking, problem solving, and other 21st century skills.
“Better tests support better learning,” says Barbara Chow, Education Program Director at the Hewlett Foundation. “Rapid and accurate automated essay scoring will encourage states to include more writing in their state assessments. And the more we can use essays to assess what students have learned, the greater the likelihood they’ll master important academic content, critical thinking, and effective communication.”
The Hewlett Foundation makes grants to educators and nonprofit organizations in support of what it calls “deeper learning,” which embraces the mastery of core academic content, critical reasoning and problem solving, working collaboratively, communicating effectively, and learning how to learn independently.
The competition will determine whether current software scoring programs are as effective as expert human scoring and seeks to accelerate innovation toward faster and more accurate scoring of student work. If the programs can be shown to be as reliable as human scorers, their acceptance will increase, reducing the need to rely exclusively on costly and time-consuming human scoring.
The competition will be conducted with the support of the two state testing consortia, the Partnership for Assessment of Readiness for College and Careers and the Smarter Balanced Assessment Consortium, which together work with forty-four state departments of education. The two consortia recently received $365 million from the U.S. Department of Education to develop new assessments.
The competition will be conducted in two phases. The first phase will demonstrate the capabilities of existing vendors who create and market software for grading essays. The second phase will be open to the public and will award prize money to competitors who demonstrate software that can score essays as well as human graders do.
The competition was designed and will be managed by Open Education Solutions, a blended learning service provider that helps educators combine the best of online and classroom work, and The Common Pool, a consulting business that specializes in developing effective incentive models for solving problems. “Prizes are a proven strategy for mobilizing talent and resources to solve problems,” says Tom Vander Ark, CEO of Open Education Solutions. “We’re excited about the potential of emerging assessment capabilities,” says Jaison Morgan of The Common Pool, “and confident that focused incentives will accelerate innovation.”
Dr. Mark Shermis, dean of the University of Akron College of Education, author of Classroom Assessment in Action, and noted expert on automated scoring, will chair the Academic Advisory Board.
The competition will be hosted on Kaggle, a platform for predictive modeling competitions that helps companies, governments, and researchers identify solutions to some of the world’s hardest problems by posting them as competitions to a community of more than 25,000 PhD-level data scientists around the world. “Kaggle has solved problems for NASA, insurance industry leaders, and HIV researchers,” says Anthony Goldbloom, founder and chief executive officer of Kaggle. “The ASAP competition is our most ambitious yet, with the potential to touch more Americans than any other project we’ve run so far.”
The vendor demonstration will be completed in January, in time for the results to be incorporated into spring test development. The open competition will run through April to allow competitors time to develop new scoring algorithms. A public leaderboard will track progress.
For more, view the following blogs on Getting Smart: