When people think about artificial intelligence (AI), they may have visions of the future. But AI is already here. At base, it is the recreation of aspects of human intelligence in computerised form. Like human intelligence, it has wide application.

Voice-operated personal assistants like Siri, self-driving cars, and text and image generators all use AI. It also curates our social media feeds. It helps firms to detect fraud and hire employees. It is used to manage livestock, enhance crop yields and aid medical diagnoses.

Alongside its growing power and potential, AI raises ethical and moral questions. The technology has already been at the centre of multiple scandals: the infringement of laws and rights, as well as racial and gender discrimination. In short, it comes with a litany of ethical risks and dilemmas.

But what exactly are these risks? And how do they differ between countries? To find out, I undertook a thematic review of the literature from wealthier countries, identifying six high-level, universal ethical risk themes. I then interviewed experts involved in or related to the AI industry in South Africa and assessed how their perceptions of AI risk differed from or resonated with those themes.

The findings reflect marked similarities in AI risks between the global north and South Africa, as an example of a global south nation. But there were some important differences. These reflect South Africa's unequal society and the fact that it is on the periphery of AI development, utilisation and regulation.

Other developing countries that share similar features – a large digital divide, high inequality and unemployment, and low-quality education – likely have a similar risk profile to South Africa.

Knowing which ethical risks may play out at a country level is important because it can help policymakers and organisations to adjust their risk management policies and practices accordingly.

Universal themes

The six universal ethical risk themes I drew from reviewing global north literature were:

  • Accountability: It is unclear who is accountable for the outputs of AI models and systems.

  • Bias: Shortcomings of algorithms, data or both entrench bias.

  • Transparency: AI systems operate as a “black box”. Developers and end users have a limited ability to understand or verify the output.

  • Autonomy: Humans lose the ability to make their own decisions.

  • Socio-economic risks: AI may result in job losses and worsen inequality.

  • Maleficence: It can be used by criminals, terrorists and repressive state machinery.

Then I interviewed 16 experts involved in or related to South Africa’s AI industry. They included academics, researchers, designers of AI-related products, and people who straddled these categories. For the most part, the six themes I’d already identified resonated with them.

South African concerns

But the participants also identified five ethical risks that reflected South Africa’s country-level features. These were:

  • Foreign data and models: Parachuting data and AI models in from elsewhere.

  • Data limitations: Scarcity of data sets that represent and reflect local conditions.

  • Exacerbating inequality: AI could deepen and entrench existing socio-economic inequalities.

  • Uninformed stakeholders: Most of the public and policymakers have only a crude understanding of AI.

  • Absence of policy and regulation: There are currently no specific legal requirements or overarching government positions on AI in South Africa.

What it all means

So, what do these findings tell us?

Firstly, the universal risks are mostly technical. They are linked to the features of AI and have technical solutions. For instance, bias can be mitigated by more accurate models and more comprehensive data sets.

Most of the South African-specific risks are more socio-technical, reflecting the country’s environment. An absence of policy and regulation, for instance, is not an inherent feature of AI. It is a symptom of the country being on the periphery of technology development and related policy formulation.

South African organisations and policymakers should therefore not only focus on technical solutions but also closely consider AI’s socio-economic dimensions.

Secondly, the low levels of awareness among the population suggest there is little pressure on South African organisations to demonstrate a commitment to ethical AI. In contrast, organisations in the global north need to show cognisance of AI ethics, because their stakeholders are more attuned to their rights vis-à-vis digital products and services.

Finally, whereas the EU, UK and US have nascent rules and regulations around AI, South Africa has no regulation and limited laws relevant to AI.

The South African government has also failed to give much recognition to AI’s broader impact and ethical implications. This differs even from other emerging markets such as Brazil, Egypt, India and Mauritius, which have national policies and strategies that encourage the responsible use of AI.

Moving forward

AI may, for now, seem far removed from South Africa’s prevailing socio-economic challenges. But it will become pervasive in the coming years. South African organisations and policymakers should proactively govern AI ethics risks.

This starts with acknowledging that AI presents threats that are distinct from those in the global north, and that these must be managed. Governing boards should add AI ethics to their agendas, and policymakers and members of governing boards should become educated on the technology.

Additionally, AI ethics risks should be added to corporate and government risk management strategies – much like climate change, which received scant attention 15 or 20 years ago but now features prominently.

Perhaps most importantly, the government should build on the recent launch of the Artificial Intelligence Institute of South Africa, and introduce a tailored national strategy and appropriate regulation to ensure the ethical use of AI.

This article was originally published at theconversation.com