Does Online Assessment Increase or Reduce Bias?


Dr. Mark Shermis is Dean of Education at The University of Akron. He's the author of Classroom Assessment in Action and an expert on automated essay scoring. Shermis is the academic advisor to the Hewlett Foundation-funded Automated Student Assessment Prize (ASAP), a project managed by OpenEd. ASAP started with a February demonstration of current vendor capabilities. Dr. Shermis reported that the nine vendors scored thousands of student essays from eight data sets with high levels of agreement with expert graders.

This morning Dr. Shermis addressed a question from a reader of a recent NYTimes story (covered here) who expressed concern that online assessment programs may be “geared towards students who are middle-class, white, and less attention is paid to the interests, attitudes, and communication/language styles of people who might comfortably describe themselves using any of the following labels: Low-income, Black, Homosexual, Feminist, etc.”

Thank you for your letter of concern. First of all, you should know that automated essay scoring software will ultimately make it easier to administer more writing assignments, raising the literacy bar for all school children, not just the privileged students you describe in your opening paragraph. The technology is relatively inexpensive (about $15 per student per year when purchased through a school district), though it does require access to a computer. We are sensitive to the costs associated with computer access, but most schools that take advantage of the technology administer writing assignments in a school media lab.
Typically the scoring systems are based on models derived from human raters. This can be a benefit or a bane. If the human rater scoring is biased against any of the groups you mention, that bias will be reliably reflected in the models that are developed. However, it is possible to identify the factors that may contribute to human rater bias and adjust for them in the machine scoring models. For example, Michigan’s high-stakes assessment, the MEAP, instructs human raters to ignore expressions of non-standard English (e.g., Black English), but no matter how well the raters are trained, they will inevitably assign lower scores to essays that use non-standard English. It is theoretically possible to adjust the machine scoring models to be more consistent with the test’s instructions (to ignore the dialect).
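To make the idea concrete, here is a minimal sketch of how a scoring model trained on human rater scores could be adjusted to ignore a dialect-related factor. This is not ASAP’s or any vendor’s actual pipeline; the feature names and the use of a simple ridge regression are hypothetical illustrations of the general approach.

```python
# Sketch only: an automated essay scorer is often a regression model fit to
# human rater scores. Feature names here are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical per-essay features extracted by a scoring engine.
FEATURES = ["word_count", "avg_sentence_len", "vocab_diversity", "dialect_markers"]
X = rng.normal(size=(500, len(FEATURES)))      # stand-in feature matrix
human_scores = rng.integers(1, 7, size=500)    # stand-in 1-6 rubric scores

# A model trained on all features reproduces whatever the human raters did,
# including any penalty correlated with non-standard dialect.
biased_model = Ridge().fit(X, human_scores)

# One way to "adjust the machine scoring model": drop the dialect-related
# feature so the engine cannot use it, keeping machine scores consistent
# with instructions to ignore dialect.
keep = [i for i, name in enumerate(FEATURES) if name != "dialect_markers"]
adjusted_model = Ridge().fit(X[:, keep], human_scores)

print("features used (adjusted):", [FEATURES[i] for i in keep])
```

The point of the sketch is simply that, unlike a human rater, a machine model’s inputs can be audited and constrained, which is what makes the kind of adjustment described above possible.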
Finally, the writing instructional systems that use automated essay scoring typically have features designed to serve a heterogeneous student population. For example, Hispanic students can receive feedback in Spanish even though they may be writing their essays in English.

For more, see these GettingSmart posts:

Measurement is Friend Not Foe to Creativity

Setting the Story Straight on Essay Scoring

How Formative Assessment Supports Writing to Learn

Auto Essay Scoring Headlines NCME, Addresses Critics

Automated Essay Scoring Demonstrated Effective in Big Trial

Less Grading, More Teaching, Deeper Learning

Deeper Learning Not Lighter Journalism

Getting Ready for Online Assessment

Hewlett Sponsored Assessment Prize Draws Amazing Talent

Hewlett Foundation Sponsors Prize to Improve Automated Scoring

How Intelligent Scoring Will Help Create an Intelligent System

Tom is CEO of Open Education Solutions, the manager of ASAP.
