Speaking at the “Generative AI: Shaping the Future” symposium on November 28, the kickoff event of MIT’s inaugural Generative AI Week, keynote speaker and iRobot co-founder Rodney Brooks warned attendees against uncritically overestimating the capabilities of this emerging technology, which underlies increasingly powerful tools like OpenAI’s ChatGPT and Google’s Bard.

“Hype leads to hubris, and hubris leads to failure,” cautioned Brooks, who is also a professor emeritus at MIT, a former director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and the founder of Robust.AI.

“No one technology has ever surpassed everything else,” he added.

The symposium, which drew hundreds of attendees from academia and industry to the Institute’s Kresge Auditorium, was filled with messages of hope about the opportunities generative AI offers to make the world a better place, including through art and creativity, interspersed with cautionary tales about what could go wrong if these AI tools are not developed responsibly.

“Generative AI” is a term that describes machine learning models that learn to generate new material resembling the data they were trained on. These models have demonstrated some incredible capabilities, such as the ability to produce human-like creative writing, translate between languages, generate functional computer code, and create realistic images from text prompts.

In her opening remarks to launch the symposium, MIT President Sally Kornbluth highlighted several projects faculty and students have undertaken to use generative AI for positive impact in the world. For example, the work of the Axim Collaborative, an online education initiative launched by MIT and Harvard, includes exploring the educational aspects of generative AI to help underserved students.

The Institute also recently announced seed grants for 27 interdisciplinary faculty research projects exploring how AI will transform people’s lives throughout society.

By hosting Generative AI Week, MIT hopes not only to showcase this type of innovation, but also to generate “collaborative collisions” among participants, Kornbluth said.

Collaboration among academics, policymakers, and industry will be critical if we are to safely integrate a rapidly evolving technology like generative AI in ways that are humane and help people solve problems, she told the audience.

“I honestly can’t think of a challenge better suited to MIT’s mission. It is a profound responsibility, but I’m confident that if we face it head-on, and as a community, we can rise to it,” she said.

While generative AI holds the potential to help solve some of the planet’s most pressing problems, the emergence of these powerful machine learning models has blurred the line between science fiction and reality, CSAIL Director Daniela Rus said in her opening remarks. It is no longer a question of whether we can make machines that produce new content, she said, but of how we can use these tools to enhance businesses and ensure sustainability.

“Today we will discuss the possibility of a future in which generative AI not only exists as a technological marvel, but stands as a source of hope and a force for good,” said Rus, who is also the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science.

But before the conversation turned to the possibilities of generative AI, participants were first asked to reflect on their humanity, as MIT Professor Joshua Bennett read an original poem.

Bennett, a professor in MIT’s literature department and Distinguished Chair of the Humanities, was asked to write a poem about what it means to be human, and drew inspiration from his daughter, who was born three weeks ago.

The poem recounted his experiences as a boy watching alongside his father, and spoke of the importance of passing traditions on to the next generation.

In his keynote, Brooks set out to unpack some of the deep scientific questions surrounding generative AI and explore what the technology can tell us about ourselves.

To start, he sought to dispel some of the mystery surrounding generative AI tools like ChatGPT by explaining the basics of how these large language models work. ChatGPT, for instance, generates text word by word by determining what the next word should be in the context of what it has already written. While a human might write a story by thinking about entire sentences, ChatGPT focuses only on the next word, Brooks explained.

ChatGPT 3.5 is based on a machine learning model with 175 billion parameters and was exposed to billions of pages of text on the web during training. (The newest version, ChatGPT 4, is even larger.) It learns correlations between words in this huge corpus of text and uses that knowledge to propose which word might come next when given a prompt.
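The word-by-word generation Brooks described can be illustrated with a deliberately tiny sketch. The toy bigram model below (my own example, not ChatGPT’s actual architecture, which uses a deep neural network trained on vastly more text) simply counts which word follows which in a small corpus, then repeatedly emits the most likely next word:

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count, for each word in a tiny corpus,
# which words follow it and how often. Real large language models
# condition on far longer contexts with billions of parameters, but
# the generation loop is the same idea: pick a likely next word,
# append it, repeat.

corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n_words):
    """Greedily extend `start` by the most frequent next word."""
    words = [start]
    for _ in range(n_words):
        candidates = follows[words[-1]].most_common(1)
        if not candidates:
            break  # no observed continuation
        words.append(candidates[0][0])
    return " ".join(words)

print(generate("the", 4))
```

Note how the model never plans a whole sentence; each word is chosen only from what was just written, which is the point Brooks was making about ChatGPT.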

The model has demonstrated some incredible abilities, such as the ability to write a sonnet about robots in the style of Shakespeare’s famous Sonnet 18. During his talk, Brooks displayed the sonnet he had asked ChatGPT to write side by side with his own sonnet.

While researchers still don’t understand exactly how these models work, Brooks assured the audience that the seemingly incredible capabilities of generative AI are not magic, and that doesn’t mean these models can do anything.

His biggest fears about generative AI don’t revolve around models that could someday surpass human intelligence. Rather, he is most worried about researchers who may throw away decades of excellent work that was nearing a breakthrough, just to jump on shiny new advances in generative AI; venture capital firms that blindly swarm toward technologies that can yield the highest margins; or the possibility that a whole generation of engineers will forget about other forms of software and AI.

In the end, those who believe generative AI can solve the world’s problems and those who believe it will only create new ones have at least one thing in common: Both groups tend to overestimate the technology, he said.

“What is the conceit with generative AI? The conceit is that it is somehow going to lead to artificial general intelligence. By itself, it is not,” Brooks said.

Following Brooks’ presentation, a group of MIT faculty spoke about their work using generative AI and participated in a panel discussion about future advances, important but underexplored research topics, and the challenges of AI regulation and policy.

The panel consisted of Jacob Andreas, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a member of CSAIL; Antonio Torralba, the Delta Electronics Professor of EECS and a member of CSAIL; Ev Fedorenko, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research at MIT; and Armando Solar-Lezama, a distinguished professor of computer science and associate director of CSAIL. It was moderated by William T. Freeman, the Thomas and Gerd Perkins Professor of EECS and a member of CSAIL.

Panelists discussed several possible future research directions around generative AI, including the possibility of integrating perceptual systems that draw on human senses such as touch and smell, rather than focusing primarily on language and images. The researchers also spoke about the importance of engaging with policymakers and the public to ensure generative AI tools are produced and deployed responsibly.

“One of the biggest risks with generative AI today is the risk of digital snake oil. There is a big risk of many products coming to market that claim to do wonders but could be very harmful in the long run,” Solar-Lezama said.

The morning session concluded with an excerpt from the 1925 science fiction novel “Metropolis,” read by senior Joy Ma, a physics and theater major, followed by a roundtable discussion on the future of generative AI. The discussion included Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL; Dina Katabi, the Thuan and Nicole Pham Professor of EECS and a principal investigator at CSAIL and the MIT Jameel Clinic; and Max Tegmark, a professor of physics; and was moderated by Daniela Rus.

A highlight of that discussion was the possibility of developing generative AI models that can go beyond what we can do as humans, such as tools that can sense someone’s emotions by using electromagnetic signals to understand how a person’s breathing and heart rate change.

But one key to integrating such AI safely into the real world is ensuring that we can trust it, Tegmark said. If we know an AI tool meets the specifications we insist on, then “we no longer have to be afraid of building really powerful systems that go out and do things for us in the world,” he said.

This article was originally published at news.mit.edu