As artificial intelligence develops rapidly, fascinating new tools appear almost every week. Education also gets its fair share of high-tech capabilities that teachers could only dream of just a few years ago.

One of the most sophisticated and somewhat unexpected areas of technological advancement in education, and the subject of this article, is essay grading, or, to be exact, automated computer grading of written essays.

According to A-Writer, an essay writing and coaching service, educational institutions and research facilities around the world are finding ways for automated software to give students reliable feedback on their writing when educators lack the time to grade their papers.

This could be an excellent way to grade hundreds or even thousands of essays and research papers within remarkably short timeframes.

Sound interesting? Read on, then, to learn how technology affects the way educators grade essay writing.

The University of Michigan’s M-Write Program

One of the most impressive technological tools already making an impact is M-Write, a project of the Gayle Morris Sweetland Center for Writing at the University of Michigan that implements writing-to-learn strategies in large-enrollment courses to grade student essays and multiple-choice assignments.

The remarkable thing about M-Write is that it uses natural language processing, automated peer review with rubrics, conceptual writing prompts, and personalized feedback to grade essays and even increase student comprehension of key course concepts.

Last year, M-Write was extended with even more impressive technology, such as automated text analysis (ATA), to identify students who may need extra help with writing essays. Grading is, after all, as complicated as writing a research paper or an essay, so more technology needs to be added to ensure continuous progress.

ATA works as follows:

  • A student completes and submits a writing assignment
  • M-Write analyzes and grades the assignment
  • A writing fellow, a former student who excelled in the class, reviews the grading
  • A built-in eCoach system delivers the score along with assignment analysis and feedback to the student.
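The flow above can be sketched as a simple pipeline in code. The sketch below is purely illustrative: the function names, the crude word-count scoring rule, and the review-flag threshold are all invented for this example and do not reflect M-Write's actual implementation, which relies on natural language processing rather than surface counts.

```python
from dataclasses import dataclass


@dataclass
class Submission:
    student_id: str
    text: str


def auto_grade(sub: Submission) -> int:
    """Hypothetical automated scoring step: word count serves as a crude
    stand-in for the real NLP-based analysis."""
    return min(100, len(sub.text.split()) // 5)


def fellow_review(score: int, sub: Submission) -> dict:
    """A writing fellow reviews the machine score; here an invented rule
    flags very short submissions for a closer manual look."""
    flagged = len(sub.text.split()) < 50
    return {"score": score, "needs_review": flagged}


def deliver_feedback(student_id: str, result: dict) -> str:
    """Stand-in for the eCoach delivery step."""
    note = " (flagged for fellow review)" if result["needs_review"] else ""
    return f"Student {student_id}: score {result['score']}{note}"


# Pipeline: submit -> auto-grade -> fellow review -> deliver
sub = Submission("u001", "word " * 300)
result = fellow_review(auto_grade(sub), sub)
print(deliver_feedback(sub.student_id, result))
```

The point of the structure, mirroring the steps above, is that the machine score is never delivered raw: a human reviewer sits between the automated analysis and the feedback the student receives.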

M-Write is constantly updated with tools like ATA to enhance its grading ability, a necessity given how many elements of a research paper or essay there are to grade, but it is already a special tool. At this point, it combines automation with human oversight, a smart strategy because the technology isn't yet sophisticated enough to deliver human-level content analysis.

As a result, it speeds up the grading process, giving faculty and students at the University of Michigan the kind of personalized insight into essay writing that they would otherwise get only from face-to-face interaction. This ability also makes the technology attractive to businesses such as college paper writing services, which require sophisticated tools to assist students with essay writing.

Project Essay Grade (PEG®)

Another impressive technology is Project Essay Grade (PEG), an automated essay scoring system developed by Measurement Incorporated, a provider of customized assessment services for state governments and educational institutions. The company has been working on automated essay scoring solutions since the 1980s, and PEG is one of its most sophisticated tools.

The heart of PEG is an automated essay scoring engine that uses natural language processing, classification methods, and semantic and syntactic analysis to deliver reliable scores.

Educators looking for effective ways to grade student research papers and essays will be glad to know that the engine can calculate more than 500 features reflecting both basic and advanced characteristics of essay writing, including:

  • Construction
  • Grammar
  • Diction
  • Fluency
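To make feature-based scoring concrete, here is a toy sketch in Python. It computes a few invented surface features loosely corresponding to the categories above and combines them with hand-picked linear weights; the real PEG engine computes over 500 far richer features and fits its model against human-scored training essays.

```python
import re


def extract_features(essay: str) -> dict:
    """Toy surface features in the spirit of the categories above;
    these are invented stand-ins, not PEG's actual measures."""
    words = essay.split()
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        # construction: average sentence length
        "avg_sentence_len": len(words) / max(1, len(sentences)),
        # diction: share of longer words
        "long_word_ratio": sum(len(w) > 6 for w in words) / max(1, len(words)),
        # fluency proxy: sheer volume of writing
        "word_count": len(words),
    }


def score(features: dict) -> float:
    """Illustrative linear combination with made-up weights; a real
    system would learn these from human-graded essays."""
    return round(
        0.5 * min(features["avg_sentence_len"], 25)
        + 20 * features["long_word_ratio"]
        + 0.05 * min(features["word_count"], 600),
        1,
    )


print(score(extract_features("This is a simple test essay. It has two sentences.")))
```

Capping each feature's contribution, as in the `min(...)` terms, keeps a single inflated feature (say, one enormously long essay) from dominating the score.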

PEG uses machine scoring and artificial intelligence (AI) scoring to grade student essays in both summative and formative assessments. According to the PEG Current Usage and Research report prepared by Measurement Incorporated, the tool's ability to grade written work has been tested by numerous educational organizations throughout the U.S.

The list of PEG users includes the Utah State Office of Education, the Connecticut SBAC Aligned Practice Assessment (APA), the Smarter Balanced Assessment Consortium, the Hewlett Foundation, and many universities. In 2013, for instance, PEG was deployed by the Smarter Balanced Assessment Consortium to score 213,000 essay and short-answer responses for a pilot test.

Independent scholars have also tested PEG's effectiveness at grading university student essays. For example, a recent study published in Assessment & Evaluation in Higher Education found that the tool was a cost-effective means of grading student work.

In fact, according to the researchers,

“The PEG software was an efficient means for grading the essays with a capacity for approximately three documents graded every second.”

Clearly, human graders cannot match this speed, which is why they are best used as reviewers of PEG's analysis. As with M-Write, the combination of human and machine grading proved an effective way to grade hundreds of essays in just a couple of hours.

Final Thoughts

With the number of students and assignments increasing every year, getting a helping hand from technology makes perfect sense. Grading software like the tools described above allows educators to grade more writing in a shorter timeframe, giving them more time to focus on helping students. Beyond that, the technology helps educators understand how students are learning. That could really transform education!


