Since the beginning of the pandemic, more public school students are using laptops, tablets or similar devices issued by their schools.

The percentage of teachers who reported that their schools had provided their students with such devices doubled from 43% before the pandemic to 86% during the pandemic, a September 2021 report shows.

In one sense, it may be tempting to applaud how schools are doing more to keep their students digitally connected during the pandemic. The problem is, schools are usually not just providing kids with computers to keep up with their schoolwork. Instead – in a trend that might easily be described as Orwellian – the overwhelming majority of schools are also using those devices to keep tabs on what students are doing in their personal lives.

Indeed, 80% of teachers and 77% of high school students reported that their schools had installed artificial intelligence-based surveillance software on these devices to monitor students’ online activities and what is stored on the computers.

This student surveillance is happening – at taxpayer expense – in cities and school communities throughout the United States.

For instance, in the Minneapolis school district, school officials paid over $355,000 to use tools provided by student surveillance company Gaggle until 2023. Three-quarters of incidents reported – that is, cases where the system flagged students’ online activity – took place outside school hours.

In Baltimore, where the public school system uses the GoGuardian surveillance app, police officers are sent to children’s homes when the system detects students typing keywords related to self-harm.

Safety versus privacy

Vendors claim these tools keep students safe from self-harm or online activities that could lead to trouble. However, privacy groups and news outlets have raised questions about those claims.

Vendors often refuse to disclose how their artificial intelligence programs were trained and what kind of data was used to train them.

Privacy advocates fear these tools may harm students by criminalizing mental health problems and deterring free expression.

As a researcher who studies privacy and security issues in various settings, I know that intrusive surveillance techniques cause emotional and psychological harm to students, disproportionately penalize minority students and weaken online security.

Artificial intelligence not intelligent enough

Even the most advanced artificial intelligence lacks the ability to understand human language and context. This is why student surveillance systems pick up large numbers of false positives instead of real problems.

In some cases, these surveillance programs have flagged students discussing music deemed suspicious and even students talking about the novel “To Kill a Mockingbird.”
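To see why such systems flag innocuous conversations, consider a toy keyword matcher (a deliberately simplified sketch, not any vendor’s actual software). Because it matches words without context, the title of a classic novel triggers the same alert as a genuine threat:

```python
import re

# Hypothetical watchlist for illustration only.
FLAGGED = {"kill", "shoot", "die"}

def flag(text: str) -> set[str]:
    """Return any watchlist words found in the text, ignoring context."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return words & FLAGGED

print(flag("Our class is reading To Kill a Mockingbird."))  # {'kill'}
print(flag("I could die laughing at this playlist."))       # {'die'}
```

Real monitoring products use more sophisticated models, but the underlying limitation is the same: without understanding context, word-level signals produce false positives.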

Harm to students

When students know they are being monitored, they are less likely to share their true thoughts online and are more careful about what they search for. This can discourage vulnerable groups, such as students with mental health problems, from getting needed services.

When students know that their every move and everything they read and write is watched, they are also less likely to develop into adults with a high level of self-confidence. In general, surveillance has a negative impact on students’ ability to act and use analytical reasoning. It also hinders the development of the skills and mindset needed to exercise their rights.

More adverse impact on minorities

U.S. schools disproportionately discipline minority students. African American students’ chances of being suspended are more than three times higher than those of their white peers.

After evaluating flagged content, vendors report any concerns to school officials, who take disciplinary action on a case-by-case basis. The lack of oversight in schools’ use of these tools can lead to further harm for minority students.

The situation is worsened by the fact that Black and Hispanic students rely more on school devices than their white peers do. This in turn makes minority students more likely to be monitored and exposes them to greater risk of some form of intervention.

Students of color are more likely to rely on school-issued laptops than their white peers are.
Igor Alecsander/E+ via Getty Images

When both minority students and their white peers are monitored, the former group is more likely to be penalized because the training data used in developing artificial intelligence programs often fails to include enough minorities. These programs are more likely to flag language written and spoken by such groups. This is due to the underrepresentation of language written and spoken by minorities in the datasets used to train such programs and the lack of diversity among people working in this field.

Leading AI models are 50% more likely to flag tweets written by African Americans as “offensive” than those written by others. They are 2.2 times more likely to flag tweets written in African American slang.

These tools also affect sexual and gender minorities more adversely. Gaggle has reportedly flagged “gay,” “lesbian” and other LGBTQ-related terms because they are associated with pornography, even though the terms are often used to describe one’s identity. Gaggle says it monitors this language to prevent cyberbullying.

Increased security risk

These surveillance systems also increase students’ cybersecurity risks. First, to comprehensively monitor students’ activities, surveillance vendors require students to install a set of certificates known as root certificates. As the highest-level security certificate installed on a device, a root certificate functions as a “master certificate” that determines the entire system’s security. One drawback is that these certificates compromise the cybersecurity checks that are built into these devices.
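The mechanism can be illustrated with a toy model of certificate-chain validation (a simplified sketch, not real PKI code; all names here are hypothetical). A device trusts a website only if the site’s certificate chains up to a root certificate in the device’s trust store. Once a vendor’s root is forced into that store, certificates the vendor forges for intercepted traffic validate just like genuine ones:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    subject: str
    issuer: str  # subject of the certificate that signed this one

def chains_to_trusted_root(cert: Cert, issued: dict[str, Cert],
                           trust_store: set[str]) -> bool:
    """Walk issuer links upward until we reach a root in the trust store."""
    seen: set[str] = set()
    current = cert
    while current.subject not in seen:  # guard against issuer cycles
        seen.add(current.subject)
        if current.issuer in trust_store:
            return True
        parent = issued.get(current.issuer)
        if parent is None:
            return False
        current = parent
    return False

# The device ships trusting only an established public root.
trust_store = {"PublicRootCA"}
issued: dict[str, Cert] = {}  # no intermediate certificates in this toy model

# A real site's certificate, signed by the public root, validates.
site_cert = Cert(subject="bank.example", issuer="PublicRootCA")
print(chains_to_trusted_root(site_cert, issued, trust_store))    # True

# A monitoring vendor re-signs intercepted traffic with its own root.
forged_cert = Cert(subject="bank.example", issuer="VendorRootCA")
print(chains_to_trusted_root(forged_cert, issued, trust_store))  # False

# After the vendor's root is installed in the trust store,
# the forged certificate validates and interception goes unnoticed.
trust_store.add("VendorRootCA")
print(chains_to_trusted_root(forged_cert, issued, trust_store))  # True
```

This is why installing a vendor root certificate is such a consequential step: it silently disables the device’s built-in ability to detect that traffic is being intercepted.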

Gaggle scans the digital files of more than 5 million students each year.

In addition to working with Gaggle, some schools also contract with a vendor such as ContentKeeper to install a root certificate on students’ computers. This tactic of installing certificates is similar to the approach that authoritarian regimes, such as the Kazakhstani government, use to monitor and control their citizens, and that cybercriminals use to lure victims to infected websites.

Second, surveillance system vendors use insecure systems that hackers can exploit. In March 2021, computer security software company McAfee found several vulnerabilities in student monitoring system vendor Netop’s Vision Pro Education software. For instance, Netop did not encrypt communications between teachers and students to block unauthorized access.

The software was used by over 9,000 schools worldwide to monitor millions of students. The vulnerability allowed hackers to gain control over webcams and microphones in students’ computers.

Finally, students’ personal information that is stored by the vendors is vulnerable to breaches. In July 2020, criminals stole 444,000 students’ personal data – including names, email addresses, home addresses, phone numbers and passwords – by hacking online proctoring service ProctorU. This data was then leaked online.

Schools would do well to look more closely at the harm being caused by their surveillance of students and to question whether it actually makes students more secure – or less.
