New technologies deployed at borders for migration management and border security, under the umbrella of smart border solutions, are ignoring the fundamental human rights of migrants.

Unmanned aerial vehicles (drones), for instance, are deployed in the surveillance of refugees in the US and the EU, and big data analytics are being used to monitor migrants approaching the border. Though methods of border security and management vary, a great many are increasingly used to prevent migratory movements.

Artificial intelligence (AI) is a crucial component of migration management. For instance, the EU, the US and Canada invest in AI algorithms to automate decisions on asylum and visa applications and refugee resettlement. Meanwhile, the real-time data collected from migrants by various smart border and virtual wall solutions such as satellites, drones and sensors is assessed by AI algorithms at the border.

At the US-Mexico border, for instance, the US Customs and Border Protection (CBP) agency is using artificial intelligence, military drones with facial recognition technologies, thermal imaging and fake cellphone towers to monitor migrants before they even reach the border. These tools can be used to listen in on conversations between migrants, identify people from their faces, check their social media accounts and locate those attempting to cross borders.

A new UN report has warned about the risks of so-called “smart” border technology for refugees specifically. These technologies help border agencies to stop and control the movement of migrants, securitise migration governance by treating migrants as criminals, and ignore the fundamental right of individuals to seek asylum. Furthermore, they collect data without the consent of migrants – practices that in other circumstances would likely be illegal if deployed against citizens.

As researcher Roxana Akhmetova has written: “the automated decision-making processes can exacerbate pre-existing vulnerabilities by adding on risks such as bias, error, system failure and theft of data. All of which can lead to greater harm to migrants and their families. A rejected claim formed on an erroneous basis can result in persecution.”

This is a prime example of how algorithmic technology more generally can be shaped by the biases of its creators to discriminate against the lower classes of society and serve the privileged ones. In the case of refugees, people who have had to flee their homes because of war are now being subjected to experiments with advanced technology that will increase the risks carried by this already vulnerable population.

Data and consent

Another issue at stake here is the informed consent of refugees. This refers to the idea that refugees should understand the systems they are subjected to and should have the chance to opt out of them. While voluntary informed consent is a legal requirement, many academics and humanitarian NGOs focus on “meaningful informed consent”, which goes beyond signing a paper and means helping refugees to fully understand what they are subject to. Secret surveillance gives them no such chance. And the technologies involved are so complex that even the staff operating them have been said to lack the expertise to evaluate their moral and practical implications.




Despite the recent UN report’s warning about smart border solutions, many governments and various UN agencies dealing with refugees increasingly prefer to employ tech-based solutions, for instance to assess people’s claims for aid, transfer money and establish identification. But what happens to people who are not willing to share their data, for any reason, be it political, religious or personal?

Use of these technologies requires public-private partnerships and long periods of technical preparation before refugees encounter them on the ground. And by the end of all the processes to establish, fund and develop the algorithms, recognition of the right of “beneficiaries” to reject these technologies is neither realistic nor practical. Therefore, most of these tech-based investments categorically undermine refugees’ informed consent, because the nature of the work of those behind these decisions is to deny them that right.

Refugees can benefit from the increasing use of digital technology, as smartphones and social media can help them connect with humanitarian organisations and stay in touch with families back home. But ignoring the power imbalance created by their loss of rights as a result of using such technology leads to a romanticisation of the relationship between refugees and their smartphones.

It is not too late to change this course of technological development. But refugees do not have the same political agency as domestic citizens to organise and oppose government actions. If you want to see what a dystopian tech-dominated future in which people lose their political autonomy looks like, the daily experiences of refugees will provide sufficient clues.

This article was originally published at theconversation.com