Universities are increasingly using computer programs to supervise students sitting their exams. Is this the future of testing?

Due to the pandemic, institutions worldwide have rapidly adopted exam software like Examplify, ExamSoft and ProctorU.

Proctoring technology allows exam-takers to be monitored off-campus. They can sit exams in their homes, instead of a person having to observe them in a conventional exam room. Some programs simply enable a person to supervise students remotely.

More sophisticated, automated proctoring software hijacks the student's computer to block and monitor suspicious activity. These programs often use artificial intelligence (AI) to scrutinise exam conduct.

Our recent research paper explored the ethics of automated proctoring. We found the promise of the software alluring, but it carries substantial risks.



Some educational institutions claim proctoring technologies are needed to prevent cheating. Other institutions and students are concerned about hidden dangers.

Indeed, students have launched protests, petitions and lawsuits. They condemn online proctoring as discriminatory and intrusive, with overtones of Big Brother. Some proctoring firms have responded with attempts to stifle protest, including suing their critics.

A student’s criticism that test proctoring AI wrongly flagged her as cheating attracted tens of millions of views on TikTok.

What does the software do?

Automated proctoring programs offer tools for examiners to prevent cheating. The programs can capture system information, block web access and analyse keyboard strokes. They can even commandeer computer cameras and microphones to record exam-takers and their surroundings.

Some programs use AI to "flag" suspicious behaviour. Facial recognition algorithms check to make sure the student is still seated and that no one else has entered the room. The programs also detect whispering, atypical typing, unusual movements and other behaviours that could suggest cheating.

After the program "flags" an incident, examiners can investigate further by viewing stored video and audio and questioning the student.

Why use proctoring software?

Automated proctoring software purports to reduce cheating in remotely administered exams, a necessity during the pandemic. Fair exams protect the value of qualifications and signal that academic honesty matters. They are a key part of certification requirements for professional fields like medicine and law.

Cheating is unfair to honest students. If left unchecked, it also increases the incentive for those students to cheat.

The firms selling proctoring software claim their tools prevent cheating and improve exam fairness for everyone, but our work calls that into question.

So what are the issues?

Security

We evaluated the software and found that simple technical tricks can bypass many of the anti-cheating protections. This finding suggests the tools may provide only limited benefits.

Requiring students to install software with such powerful control over a computer is a security risk. In some cases the software surreptitiously remains even after students uninstall it.

Access

Some students may lack access to the right devices and the fast internet connections the software requires. This leads to technical issues that cause stress and disadvantage. In one incident, 41% of students experienced technical problems.

Privacy

Online proctoring creates privacy issues. Video capture means examiners can see into students' homes and scrutinise their faces without being noticed. Such intimate monitoring, which is recorded for potential repeat viewing, sets it apart from traditional in-person exam supervision.

Fairness and bias

Proctoring software raises significant fairness concerns. Facial recognition algorithms in the software we evaluated are not always accurate.

A forthcoming paper by one of us found the algorithms used by the major US-based manufacturers do not identify darker-skinned faces as accurately as lighter-skinned faces. The resulting hidden discrimination may add to societal biases. Others have reported similar concerns about proctoring software and about facial recognition technology generally.

Proctoring software uses facial recognition technology, which has well-documented problems of ethnic bias. (Image: Shutterstock)


Also of concern, the proctoring algorithms may falsely flag atypical eye or head movements in exam-takers. This could lead to unwarranted suspicions about students who are not neurotypical or who have idiosyncratic exam-sitting styles. Even without automated proctoring, exams are already stressful events that affect our behaviour.

Investigating baseless suspicions

Educational institutions can often choose which automated functions to use or reject. Proctoring firms may insist AI-generated "flags" are not proof of academic dishonesty but only reasons to investigate possible cheating, at the institution's discretion.

However, merely investigating and questioning a student can itself be unfair and traumatic when based on spurious machine-generated suspicions.

Surveillance culture

Finally, automated exam monitoring may set a broader precedent. Public concerns about surveillance and automated decision-making are growing. We must be cautious when introducing potentially harmful technologies, especially when they are imposed without our genuine consent.



Where to from here?

It's important to find ways to administer exams fairly at a distance. We will not always be able to replace exams with other forms of assessment.

Nonetheless, institutions using automated proctoring software must be accountable. This means being transparent with students about how the technology works and what can happen to their data.

Examiners could also offer meaningful alternatives, such as in-person exam-sitting options. Offering alternatives is key to informed consent.

While proctoring tools may seem like a panacea, institutions must carefully weigh the risks inherent in the technology.

This article was originally published at theconversation.com