ChatGPT has cast long shadows over the media as the latest form of disruptive technology. For some, ChatGPT is a harbinger of the end of academic and scientific integrity, and a threat to white-collar jobs and our democratic institutions.

How concerned should we be about generative artificial intelligence (AI)? The developers of ChatGPT describe it as “a model… which interacts in a conversational way” while also calling it a “horrible product” for its inconsistent results.

It can write emails, summarize documents, review code and provide comments, translate documents, create content, play games and, of course, chat. This is hardly the stuff of a dystopian future.



We shouldn’t fear the introduction of new technologies, but neither should we assume they serve our interests. Societies are in a constant process of cultural evolution, defined by inertia from the past, temporary consensus and disruptive technologies that introduce new ideas and approaches.

We must understand and embrace the co-evolution of humans and technology by considering what a technology is designed to do, how it relates to us and how our lives will change because of it.

Are ChatGPT and DALL-E really creators?

Along with intelligence, creativity is commonly considered a uniquely human ability. But creativity is not exclusive to humans — it’s a property that has emerged across species as a product of convergent evolution.

Species as diverse as crows, octopuses, dolphins and chimpanzees can improvise and use tools.

Despite the liberal use of the term, creativity is notoriously hard to capture. Its features include the quantity of output, identifying connections between seemingly unrelated things (distant associations) and providing atypical solutions to problems.

Creativity doesn’t simply reside in the person; our social networks and values are also essential. As the number of cultural variants increases, we have a larger pool of ideas, products and processes to draw from.

Visitors view artist Refik Anadol’s exhibit at the Museum of Modern Art in January 2023 in New York. The art installation is AI-generated and meant to be a thought-provoking interpretation of the New York City museum’s prestigious collection.
(AP Photo/John Minchillo)

Our cultural experiences are resources for creativity. The more diverse the ideas we’re exposed to, the more novel connections we can make. Studies have suggested that multicultural experience is positively related to creativity. The greater the distance between cultures, the more creative the products we observe.

Creativity can also result in convergence. Different individuals can create similar ideas independently of each other, a process known as scientific co-discovery. The invention of calculus and the theory of natural selection are among the most famous examples of this.

Artificial intelligence is defined by its ability to learn, discover patterns and use decision-making rules.

If linguistic and artistic products are patterns, then AI systems like ChatGPT and DALL-E should be capable of creativity, assimilating and combining divergent patterns from different artists. Microsoft’s Bing chatbot claims this as one of its core values.

AI needs people

There is a fundamental problem with such programs: art is reduced to data. By scooping up these products through a process of analysis and synthesis, these systems can ignore the contributions and cultural traditions of human creators. Without citing and crediting these sources, they could be seen as high-tech plagiarism, appropriating artistic products that have taken generations to develop. Concerns about cultural appropriation may also apply to AI.

AI might someday evolve in unpredictable ways, but for the moment, it still depends on humans for its data, design and operations, and for addressing the social and ethical challenges it presents.

Humans are still needed for quality control. These efforts often reside within the impenetrable black box of AI, with the work frequently outsourced to markets where labour is cheaper.

The recent high-profile story of CNET’s “AI journalist” provides another example of why expert human intervention is needed.

CNET quietly began using an AI bot to write articles in November 2022. After other news sites identified significant errors, the website published lengthy corrections for the AI-written content and conducted a full audit of the tool.

AI might someday evolve in unpredictable ways, but for the moment, it still relies on humans.
(Shutterstock)

At present, there are no rules to determine whether AI products are creative, coherent or meaningful. These decisions must be made by people.

As industries adopt AI, old roles occupied by humans will be lost. Research tells us these losses will be felt most by those already in vulnerable positions. This pattern follows a general trend of adopting technologies before we understand — or care about — their social and ethical implications.

Industries rarely consider how a displaced workforce will be retrained, leaving those individuals and their communities to cope with these disruptions.

Systemic issues transcend AI

DALL-E has been portrayed as a threat to artistic integrity because of its ability to automatically generate images of people, exotic worlds and fantastical imagery. Others claim ChatGPT has killed the essay.

Rather than seeing AI as the cause of new problems, we would do better to understand AI ethics as bringing attention to old ones. Academic misconduct is a common problem caused by underlying issues including peer influence, perceived consensus and the perceived risk of penalties.

Programs like ChatGPT and DALL-E will merely facilitate such behaviour. Institutions must acknowledge these vulnerabilities and develop new policies, procedures and ethical norms to address them.



Questionable research practices are also not uncommon. Concerns over AI-authored research papers are simply an extension of inappropriate authorship practices, such as ghost and gift authorship in the biomedical sciences. These practices stem from disciplinary conventions, outdated academic reward systems and a lack of personal integrity.

As publishers reckon with questions of AI authorship, they must confront deeper issues, like why the mass production of academic papers is still incentivized.

New solutions to new problems

Before we shift responsibility to institutions, we need to consider whether we’re providing them with sufficient resources to meet these challenges. Teachers are already burned out and the peer-review system is overtaxed.

One solution is to fight AI with AI, using plagiarism-detection tools. Other tools could be developed to attribute artwork to its creators, or to detect the use of AI in written papers.

The solutions to AI are hardly easy, but they can be stated simply: the fault is not in our AI, but in ourselves. To paraphrase Nietzsche, if you stare into the AI abyss, it will stare back at you.

This article was originally published at theconversation.com