Few technologies have shown as much potential for shaping our future as artificial intelligence. Specialists in fields from medicine to microfinance to the military are evaluating AI tools and exploring how they may change their work and their world. For creative professionals, AI presents unique challenges and opportunities – particularly generative AI, which uses algorithms to convert massive amounts of information into new content.

The future of generative AI and its impact on art and design was the subject of a sold-out panel discussion on October 26 at MIT's Bartos Theater. It was part of the annual meeting of the Council for the Arts at MIT (CAMIT), a group of alumni and other supporters of the arts at MIT, and was presented jointly with the MIT Center for Art, Science and Technology (CAST), a cross-school initiative for artist residencies and interdisciplinary projects.

Introduced by Andrea Volpe, director of CAMIT, and moderated by Onur Yüce Gün SM ’06, PhD ’16, the panel featured multimedia artist and social science researcher Ziv Epstein SM ’19, PhD ’23; MIT professor of architecture and director of the SMArchS and SMArchS AD programs Ana Miljački; and artist and roboticist Alex Reben MAS ’10.


Panel discussion: How is generative AI changing art and design?
Thumbnail image created using Google DeepMind AI image generator.
Video: Art at MIT

The discussion revolved around three themes: emergence, embodiment, and expectations.

Emergence

Moderator Onur Yüce Gün: In many of your works, a question often arises – an ambiguity – and this ambiguity is inherent to the creative process in art and design. Does generative AI help you resolve these ambiguities?

Ana Miljački: In the summer of 2022, the memorial cemetery in Mostar (in Bosnia and Herzegovina) was destroyed. It was a post-World War II Yugoslavian monument, and we wanted to find a way to uphold the values that the monument stood for. We compiled video footage from six different monuments and used AI to create a non-linear documentary, a triptych running on three video screens and accompanied by a soundscape. With this project we created an artificial memory, a way to embed these memories and values in the minds of people who never lived them. This is the kind of ambiguity that can be problematic in science and fascinating for artists, designers, and architects. It’s also a bit scary.

Ziv Epstein: There is some debate about whether generative AI is a tool or an agent. But even if we call it a tool, we must remember that tools are not neutral. Think about photography. When photography emerged, many painters feared that it would mean the end of art. But it turned out that photography gave painters the freedom to pursue other things. Generative AI is, of course, a different kind of tool, because it draws on a vast amount of other people’s work. An artistic and creative agency is already embedded in these systems. There are already ambiguities about how these existing works can be represented, and about which cycles and ambiguities we will perpetuate.

Alex Reben: I’m often asked whether these systems are actually as creative as we are. From my own experience, I have often been surprised by the results I create using AI. I find that it can take things in a direction consistent with what I would have done alone, yet different enough that what I could have done is reinforced or modified. So there are ambiguities. However, we must remember that the term AI is itself ambiguous. It actually covers many different things.

Embodiment

Moderator: Most of us use computers every day, but we experience the world through our senses, through our bodies. Art and design create tangible experiences. We hear them, see them, touch them. Have we achieved the same sensory interaction with AI systems?

Miljački: As long as we work in images, we work in two dimensions. But at least in the project we did around the Mostar monument, we were able to create impact on different levels, levels that together produce something larger than a two-dimensional image moving in time. Through images and a soundscape we created a spatial and temporal experience, a rich sensory experience that goes beyond the two dimensions of the screen.

Reben: I think embodiment, for me, means being able to touch, interact with, and change the world. In one of my projects, we used AI to create a “Dalí-like” image, then converted it into a three-dimensional object using 3D printing and then cast it in bronze at a foundry. There was even a patina artist who refined the surface. I cite this example to show how many people were ultimately involved in the creation of this artwork. Human fingerprints could be seen at every step.

Epstein: The question is how we can embed meaningful human control into these systems so that they more closely resemble, say, a violin. A violinist has all kinds of causal inputs – physical gestures with which to transform artistic intention into outputs, into notes and sounds. We are still a long way from that with generative AI. Our interaction is essentially typing a little text and getting something back. We’re basically screaming at a black box.

Expectations

Moderator: These new technologies are spreading so quickly, almost like an explosion. And there are huge expectations about what they will do. Instead of hitting the accelerator here, I would rather test the brakes and ask what these technologies don’t do. Are there promises they cannot keep?

Miljački: I hope we don’t end up in “Westworld.” I understand that we need AI to solve complex computing problems. But I hope it is not used to replace thinking. Because as a tool, AI is inherently nostalgic. It can only work with what already exists and then produce probable results. And that means it reproduces all the biases and gaps in the archive that fed it. In architecture, for example, this archive consists of works by white male European architects. We have to figure out how not to perpetuate this kind of bias but to challenge it.

Epstein: In some ways, using AI now is like putting on a jetpack while wearing a blindfold. You’re moving really fast, but you don’t really know where you’re going. Now that this technology appears to be able to do human-like things, I think this is a great opportunity for us to think about what it means to be human. My hope is that generative AI can be a kind of ontological wrecking ball, that it can shake things up in very interesting ways.

Reben: I know from history that it is quite difficult to predict the future of technology. So with this new technology it is nearly impossible to predict the negative – what will not happen. If you look back at the predictions that were made about what we would have by now, it is very different from what we actually have. I don’t think anyone can say with certainty today what AI will not be able to do at some point, just as we cannot say what science or humans can achieve. The best thing we can do right now is try to advance these technologies in ways that are helpful.

This article was originally published at news.mit.edu