We have accepted the use of artificial intelligence (AI) in complex processes — from health care to our everyday use of social media — often without critical investigation, until it is too late. The use of AI is inescapable in our modern society, and it may perpetuate discrimination without its users being aware of any prejudice. When health-care providers rely on biased technology, there are real and harmful impacts.

This became clear recently when a study showed that pulse oximeters — which measure the amount of oxygen in the blood and have been an essential tool for clinical management of COVID-19 — are less accurate on people with darker skin than lighter skin. The findings prompted a sweeping racial bias review, now underway, in an attempt to create international standards for testing medical devices.

There are examples in health care, business, government and everyday life where biased algorithms have led to problems, such as sexist searches and racist predictions of an offender's likelihood of re-offending.

AI is often assumed to be more objective than humans. In reality, however, AI algorithms make decisions based on human-annotated data, which can be biased and exclusionary. Current research on bias in AI focuses mainly on gender and race. But what about age-related bias — can AI be ageist?

Ageist technologies?

In 2021, the World Health Organization released a global report on ageism, which called for urgent action to combat ageism because of its widespread impacts on health and well-being.

Ageism is defined as "a process of systematic stereotyping of and discrimination against people because they are old." It can be explicit or implicit, and can take the form of negative attitudes, discriminatory activities, or institutional practices.

The pervasiveness of ageism has been brought to the forefront during the COVID-19 pandemic. Older adults have been labelled as "burdens to societies," and in some jurisdictions, age has been used as the sole criterion for lifesaving treatments.

The WHO's campaign to address ageism.

Digital ageism exists when age-based bias and discrimination are created or supported by technology. A recent report indicates that a "digital world" of more than 2.5 quintillion bytes of data is produced every day. Yet although older adults are using technology in greater numbers — and benefiting from that use — they continue to be the age cohort least likely to have access to a computer and the internet.

Digital ageism can arise when ageist attitudes influence technology design, or when ageism makes it harder for older adults to access and enjoy the full benefits of digital technologies.

Cycles of injustice

There are several intertwined cycles of injustice in which technological, individual and social biases interact to produce, reinforce and contribute to digital ageism.

Barriers to technological access can exclude older adults from the research, design and development process of digital technologies. Their absence from technology design and development may also be rationalized with the ageist belief that older adults are incapable of using technology. As such, older adults and their perspectives are rarely involved in the development of AI and related policies, funding and support services.

The unique experiences and needs of older adults are ignored, despite age being a more powerful predictor of technology use than other demographic characteristics, including race and gender.

AI is trained on data, and the absence of older adults could reproduce or even amplify the above ageist assumptions in its output. Many AI technologies are focused on a stereotypical image of an older adult in ill health — a narrow segment of the population that ignores healthy aging. This creates a negative feedback loop that not only discourages older adults from using AI, but also results in the further loss of data from these demographics that could improve AI accuracy.

Developers need to consider how older adults use technology in order to design for them.

Even when older adults are included in large datasets, they are often grouped according to arbitrary divisions by developers. For example, older adults may be defined as everyone aged 50 and older, despite younger age cohorts being divided into narrower age ranges. As a result, older adults and their needs can become invisible to AI systems.

In this way, AI systems reinforce inequality and magnify societal exclusion for sections of the population, creating a "digital underclass" made up primarily of older, poor, racialized and marginalized groups.

Addressing digital ageism

We must understand the risks and harms associated with age-related biases as more older adults turn to technology.

The first step is for researchers and developers to acknowledge the existence of digital ageism alongside other forms of algorithmic bias, such as racism and sexism. They must direct efforts towards identifying and measuring it. The next step is to develop safeguards for AI systems to mitigate ageist outcomes.

There is currently very little training, auditing or oversight of AI-driven activities from a regulatory or legal perspective. For instance, Canada's current AI regulatory regime is sorely lacking.

This presents a challenge, but also an opportunity to include ageism alongside other forms of bias and discrimination in need of excision. To combat digital ageism, older adults must be included in a meaningful and collaborative way in designing new technologies.

With bias in AI now recognized as a critical problem in need of urgent action, it is time to consider the experience of digital ageism for older adults, and to understand how growing old in an increasingly digital world may reinforce social inequalities, exclusion and marginalization.

This article was originally published at theconversation.com