The release of OpenAI’s ChatGPT chatbot has given us a glimpse into the future of teaching and learning alongside artificial intelligence.

Educators immediately identified the chatbot’s ability to generate meaningful responses to questions from assessments and exams. And it’s often impossible to attribute these responses to a particular source – making it difficult to detect plagiarism.

These concerns didn’t go unnoticed. Shortly after ChatGPT’s release, OpenAI announced it was developing a “digital watermark” to embed into the chatbot’s responses. This kind of watermark is a digital signal that can identify content as AI-generated, and which (in theory) should be difficult to remove.

But as experts have pointed out, people will likely find workarounds. And it won’t be long before other AI models just as capable as ChatGPT pop up. What will this mean for our academic spaces?

It’s safe to say we can expect some challenging years ahead.



But first, what’s ChatGPT?

ChatGPT is powered by GPT-3, a large language model (LLM) trained on an enormous amount of text data – about 500 billion words, to be precise. Large language models work by predicting the next most-likely word when given a list of input words (often phrased as a question).

The model is trained using many billions of parameters that can represent the complexity of the world. By inputting real-world data, the algorithm readjusts its parameters until it can produce human-like responses to a range of questions. For example:

How many eggs can fit into a grain of rice?

It is not possible for eggs to fit into a grain of rice, as a grain of rice is much smaller in size than an egg. A grain of rice is a small, round seed that is typically less than a millimetre in diameter, whereas an egg is a much larger object that typically measures several centimetres in length and width […]
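The idea of predicting the next most-likely word can be illustrated with a toy sketch. The snippet below uses a simple bigram frequency model over a made-up corpus – a drastic simplification of the transformer networks behind GPT-3, intended only to show the “predict the next word from the words so far” principle described above.

```python
from collections import Counter, defaultdict

# Tiny toy corpus, standing in for the ~500 billion words a real LLM sees.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word observed after `word` in the corpus."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
```

A real LLM replaces the frequency table with a neural network whose billions of parameters are adjusted during training, but the prediction loop – pick the most likely continuation, append it, repeat – is the same in spirit.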

Although it’s not foolproof, ChatGPT’s capabilities both shock and inspire. It can write songs, produce programming code, and simulate entire job interview sessions. It even passed the Amazon Web Services Certified Cloud Practitioner exam, which typically takes 2–6 months to prepare for.

Perhaps what’s most alarming is that the technology is still in its early stages. The millions of users exploring ChatGPT’s uses are simultaneously providing more data for OpenAI to improve the chatbot.

The next version of the model, GPT-4, is rumoured to have about 100 trillion parameters – about 500 times more than GPT-3. This approaches the number of neural connections in the human brain.



How will AI affect education?

The power of AI systems is placing a huge question mark over our education and assessment practices.

Assessment in schools and universities is generally based on students providing some product of their learning to be marked, often an essay or written task. With AI models, these “products” can be produced to a higher standard, in less time, and with little to no effort from the student.

In other words, the product a student provides may no longer offer genuine evidence of their achievement of the course outcomes.

And it’s not just a problem for written assessments. A study published in February showed OpenAI’s GPT-3 language model significantly outperformed most students in introductory programming courses. According to the authors, this raises “an emergent existential threat to the teaching and learning of introductory programming”.

The model can also generate screenplays and theatre scripts, while AI image generators such as DALL-E can produce high-quality art.



How should we respond?

Moving forward, we’ll need to think about ways AI can be used to support teaching and learning, rather than disrupt it. Here are three ways to do that.

1. Integrate AI into classrooms and lecture halls

History has shown time and again that educational institutions can adapt to new technologies. In the 1970s the rise of portable calculators had maths educators concerned about the future of their subject – but it’s safe to say maths survived.

Just as Wikipedia and Google didn’t spell the end of assessments, neither will AI. In fact, new technologies lead to novel and innovative ways of doing work. The same will apply to learning and teaching with AI.

Rather than being a tool to ban, AI models should be meaningfully integrated into teaching and learning.

2. Judge students on critical thought

One thing an AI model can’t emulate is the process of learning, and the mental aerobics this involves.

The design of assessments could shift from assessing just the final product to assessing the entire process that led a student to it. The focus is then placed squarely on a student’s critical thinking, creativity and problem-solving skills.

Students could freely use AI to complete the task and still be marked on their own merit.

3. Assess things that matter

Instead of switching to in-class exams to ban the use of AI (which some may be tempted to do), educators can design assessments that focus on what students need to know to be successful in the future. AI, it seems, will be one of these things.

AI models will increasingly have uses across many sectors as the technology is scaled up. If students will use AI in their future workplaces, why not test them on it now?

The dawn of AI

Vladimir Lenin, leader of Russia’s 1917 Bolshevik Revolution, supposedly said:

There are many years where nothing happens, and there are weeks where many years occur.

This statement rings true in the field of artificial intelligence. AI is forcing us to rethink education. But if we embrace it, it could empower students and teachers alike.

This article was originally published at theconversation.com