Creativity for all – but loss of skills?
Potential inaccuracies, biases and plagiarism
With humans surpassed, niche and ‘handmade’ jobs will remain
Old jobs will go, new jobs will emerge
Leaps in technology lead to new skills


Creativity for all – but loss of skills?

Lynne Parker, Associate Vice Chancellor, University of Tennessee

Large language models are making creativity and knowledge work accessible to all. Everyone with an internet connection can now use tools like ChatGPT or DALL-E 2 to express themselves and make sense of large stores of information by, for instance, producing text summaries.

Especially notable is the depth of humanlike expertise large language models display. In just minutes, novices can create illustrations for their business presentations, generate marketing pitches, get ideas to overcome writer’s block, or generate new computer code to perform specified functions, all at a level of quality typically attributed to human experts.

These new AI tools can’t read minds, of course. A new, yet simpler, mode of human creativity is needed in the form of text prompts to get the results the human user is looking for. Through iterative prompting – an example of human-AI collaboration – the AI system generates successive rounds of outputs until the human writing the prompts is satisfied with the results. For example, the (human) winner of the recent Colorado State Fair competition in the digital artist category, who used an AI-powered tool, demonstrated creativity, but not of the kind that requires brushes and an eye for color and texture.
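
To make this iterative loop concrete, here is a minimal Python sketch. It is an illustration only: the generate function is a hypothetical stand-in for whatever text- or image-generation API is being used, and the loop simply folds the user’s feedback back into the prompt until an output is accepted.

    # Minimal sketch of iterative prompting. `generate` is a hypothetical
    # stand-in for a real model API; replace it with a call of your choice.
    def generate(prompt: str) -> str:
        return f"<model output for: {prompt}>"

    prompt = "a watercolor landscape at dusk"
    while True:
        print(generate(prompt))
        feedback = input("Refine the prompt (press Enter to accept): ")
        if not feedback:
            break
        prompt = f"{prompt}, {feedback}"  # fold human feedback into the prompt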

While there are significant benefits to opening the world of creativity and knowledge work to everyone, these new AI tools also have downsides. First, they could accelerate the loss of important human skills that will remain vital in the coming years, especially writing skills. Educational institutions need to craft and implement policies on allowable uses of large language models to ensure fair play and desirable learning outcomes.

Educators are preparing for a world where students have ready access to AI-powered text generators.

Second, these AI tools raise questions around intellectual property protections. While human creators are often inspired by existing artifacts in the world, including architecture and the writings, music and paintings of others, there are unanswered questions about the proper and fair use by large language models of copyrighted or open-source training examples. Ongoing lawsuits are now debating this issue, which may have implications for the future design and use of large language models.

As society navigates the implications of these new AI tools, the public seems ready to embrace them. The chatbot ChatGPT went viral quickly, as did the image generator Dall-E mini and others. This suggests a huge untapped potential for creativity, and the importance of making creative and knowledge work accessible to all.


Potential inaccuracies, biases and plagiarism

Daniel Acuña, Associate Professor of Computer Science, University of Colorado Boulder

I’m a regular user of GitHub Copilot, a tool for helping people write computer code, and I’ve spent countless hours playing with ChatGPT and similar tools for AI-generated text. In my experience, these tools are good at exploring ideas that I haven’t considered before.

I’ve been impressed by the models’ ability to translate my instructions into coherent text or code. They are useful for finding new ways to improve the flow of my ideas, or creating solutions with software packages that I didn’t know existed. Once I see what these tools generate, I can evaluate their quality and edit heavily. Overall, I think they raise the bar on what is considered creative.

But I have several reservations.

One set of problems is their inaccuracies – small and big. With Copilot and ChatGPT, I’m constantly checking whether ideas are too shallow – for example, text without much substance or inefficient code, or output that’s just plain wrong, such as faulty analogies or conclusions, or code that doesn’t run. If users are not critical of what these tools produce, the tools are potentially harmful.

Recently, Meta shut down its Galactica large language model for scientific text because it made up “facts” but sounded very confident. The concern was that it could pollute the internet with confident-sounding falsehoods.

Another problem is biases. Language models can learn the biases present in their training data and replicate them. These biases are hard to see in text generation but very clear in image generation models. Researchers at OpenAI, creators of ChatGPT, have been relatively careful about what the model will respond to, but users routinely find ways around these guardrails.

Another problem is plagiarism. Recent research has shown that image generation tools often plagiarize the work of others. Does the same occur with ChatGPT? I believe that we don’t know. The tool might be paraphrasing its training data – an advanced form of plagiarism. Work in my lab shows that text plagiarism detection tools are far behind when it comes to detecting paraphrasing.

Plagiarism is easier to see in images than in text. Is ChatGPT paraphrasing as well?
Somepalli, G., et al., CC BY

These tools are in their infancy, given their potential. For now, I believe there are solutions to their current limitations. For example, tools could fact-check generated text against knowledge bases, use updated methods to detect and remove biases from large language models, and run results through more sophisticated plagiarism detection tools.
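
As a toy illustration of the last idea, a crude screen for verbatim reuse can be built from shared word n-grams. This sketch is a simplification, not a production detector, and it would miss exactly the paraphrasing discussed above:

    # Toy screen for verbatim reuse via shared 5-grams. This catches only
    # exact word-for-word overlap, not paraphrasing, which is much harder.
    def ngrams(text: str, n: int = 5) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_score(generated: str, source: str, n: int = 5) -> float:
        gen = ngrams(generated, n)
        return len(gen & ngrams(source, n)) / len(gen) if gen else 0.0

    src = "the quick brown fox jumps over the lazy dog near the river bank"
    gen = "we saw the quick brown fox jumps over the lazy dog yesterday"
    print(f"5-gram overlap: {overlap_score(gen, src):.2f}")  # prints 0.62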


With humans surpassed, niche and ‘handmade’ jobs will remain

Kentaro Toyama, Professor of Community Information, University of Michigan

We human beings like to believe in our specialness, but science and technology have repeatedly proved this conviction wrong. People once thought that humans were the only animals to use tools, to form teams or to propagate culture, but science has shown that other animals do each of these things.

Meanwhile, technology has quashed, one by one, claims that cognitive tasks require a human brain. The first adding machine was invented in 1623. This past year, a computer-generated work won an art contest. I believe that the singularity – the moment when computers meet and exceed human intelligence – is on the horizon.

How will human intelligence and creativity be valued when machines become smarter and more creative than the brightest people? There will likely be a continuum. In some domains, people still value humans doing things, even if a computer can do it better. It’s been a quarter of a century since IBM’s Deep Blue beat world champion Garry Kasparov, but human chess – with all its drama – hasn’t gone away.

Cosmopolitan magazine used DALL-E 2 to produce this cover, an illustration of an astronaut striding toward the viewer on a desert-like planet.
©Hearst Magazine Media, Inc.

In other domains, human skill will seem costly and extraneous. Take illustration, for instance. For the most part, readers don’t care whether the graphic accompanying a magazine article was drawn by a person or a computer – they just want it to be relevant, new and perhaps entertaining. If a computer can draw well, do readers care whether the credit line says Mary Chen or System X? Illustrators would, but readers won’t even notice.

And, of course, this question isn’t black or white. Many fields will be a hybrid, where some people find a lucky niche, but most of the work is done by computers. Think of manufacturing – much of it today is done by robots, but some people oversee the machines, and there remains a market for handmade products.

If history is any guide, it’s almost certain that advances in AI will cause more jobs to vanish, that creative-class individuals with human-only skills will become richer but fewer in number, and that those who own creative technology will become the new mega-rich. If there’s a silver lining, it might be that when even more people are without a decent livelihood, they might muster the political will to contain runaway inequality.


Old jobs will go, new jobs will emerge

Mark Finlayson, Associate Professor of Computer Science, Florida International University

Large language models are sophisticated sequence completion machines: Give one a sequence of words (“I would like to eat an …”) and it will return likely completions (“… apple.”). Large language models like ChatGPT that have been trained on record-breaking numbers of words (trillions) have surprised many, including many AI researchers, with how realistic, extensive, flexible and context-sensitive their completions are.
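
This completion behavior can be observed directly with an open model. The sketch below assumes the Hugging Face transformers library and the small GPT-2 model – not ChatGPT itself, which is far larger and accessible only through OpenAI’s interface:

    # Sampling a few likely completions from GPT-2, assuming the Hugging Face
    # `transformers` library is installed (pip install transformers torch).
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    completions = generator("I would like to eat an",
                            max_new_tokens=5, do_sample=True,
                            num_return_sequences=3)
    for c in completions:
        print(c["generated_text"])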

Like any powerful new technology that automates a skill – in this case, the generation of coherent, albeit somewhat generic, text – it will affect those who offer that skill in the marketplace. To conceive of what might happen, it is helpful to recall the impact of the introduction of word processing programs in the early 1980s. Certain jobs like typist almost completely disappeared. But, on the upside, anyone with a computer was able to generate well-typeset documents with ease, broadly increasing productivity.

Further, new jobs and skills appeared that were previously unimagined, like the oft-included résumé item MS Office. And the market for high-end document production remained, becoming much more capable, sophisticated and specialized.

I think this same pattern will almost certainly hold for large language models: There will no longer be a need to ask other people to draft coherent, generic text. On the other hand, large language models will enable new ways of working, and also lead to new and as yet unimagined jobs.

To see this, consider just three aspects where large language models fall short. First, it can take quite a bit of (human) cleverness to craft a prompt that gets the desired output. Minor changes in the prompt can result in a major change in the output.

Second, large language models can generate inappropriate or nonsensical output without warning.

Third, as far as AI researchers can tell, large language models have no abstract, general understanding of what’s true or false, whether something is right or wrong, and what’s just common sense. Notably, they can’t reliably do even relatively simple arithmetic. This means that their output can unexpectedly be misleading, biased, logically faulty or just plain false.
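
Because such errors are easy to miss, simple programmatic checks are one place where human oversight can be partly automated. The sketch below is illustrative only – it scans generated text for “a op b = c” claims and verifies them in code:

    # Illustrative guard: find arithmetic claims in generated text and verify
    # them. A real system would need far broader parsing than this regex.
    import re

    CLAIM = re.compile(r"(\d+)\s*([+\-*])\s*(\d+)\s*=\s*(-?\d+)")
    OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

    def check_arithmetic(text: str) -> list:
        return [(f"{a} {op} {b} = {c}", OPS[op](int(a), int(b)) == int(c))
                for a, op, b, c in CLAIM.findall(text)]

    sample = "The total is 127 + 394 = 521, and 17 * 23 = 361."
    print(check_arithmetic(sample))
    # [('127 + 394 = 521', True), ('17 * 23 = 361', False)]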

These failings are opportunities for creative and knowledge workers. For much content creation, even for general audiences, people will still need the judgment of human creative and knowledge workers to prompt, guide, collate, curate, edit and especially augment machines’ output. Many types of specialized and highly technical language will remain out of reach of machines for the foreseeable future. And there will be new types of work – for example, for those who will make a business out of fine-tuning in-house large language models to generate certain specialized types of text to serve particular markets.

In sum, although large language models certainly portend disruption for creative and knowledge workers, there are still many valuable opportunities in the offing for those willing to adapt to and integrate these powerful new tools.


Leaps in technology lead to new skills

Casey Greene, Professor of Biomedical Informatics, University of Colorado Anschutz Medical Campus

Technology changes the nature of work, and knowledge work is no different. The past two decades have seen biology and medicine transformed by rapidly advancing molecular characterization, such as fast, inexpensive DNA sequencing, and by the digitization of medicine in the form of apps, telemedicine and data analysis.

Some steps in technology feel larger than others. Yahoo deployed human curators to index emerging content during the dawn of the World Wide Web. The advent of algorithms that used information embedded in the linking patterns of the web to prioritize results radically altered the landscape of search, transforming how people gather information today.

The release of OpenAI’s ChatGPT represents another leap. ChatGPT wraps a state-of-the-art large language model tuned for chat into a highly usable interface. It puts a decade of rapid progress in artificial intelligence at people’s fingertips. This tool can write passable cover letters and instruct users on addressing common problems in user-selected language styles.

Just as the skills for finding information on the internet changed with the advent of Google, the skills necessary to draw the best output from language models will center on creating prompts and prompt templates that produce desired outputs.

For the cover letter example, multiple prompts are possible. “Write a cover letter for a job” would produce a more generic output than “Write a cover letter for a position as a data entry specialist.” The user could craft even more specific prompts by pasting portions of the job description, résumé and specific instructions – for example, “highlight attention to detail.”
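
Prompts like these generalize naturally into reusable templates. Here is a minimal sketch in Python, with illustrative field names and wording rather than any standard format:

    # A reusable prompt template for the cover letter example; field names
    # are illustrative. The resulting string is passed to a language model.
    TEMPLATE = (
        "Write a cover letter for a position as a {role}.\n"
        "Job description excerpt: {job_excerpt}\n"
        "Relevant resume items: {resume_items}\n"
        "Specific instructions: {instructions}\n"
    )

    prompt = TEMPLATE.format(
        role="data entry specialist",
        job_excerpt="Maintain accurate records across several databases...",
        resume_items="3 years of records management; 99.9% accuracy rating",
        instructions="highlight attention to detail",
    )
    print(prompt)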

As with many technological advances, how people interact with the world will change in the era of widely accessible AI models. The question is whether society will use this moment to advance equity or exacerbate disparities.


This article was originally published at theconversation.com