
Art or code? A game show at the interface of AI and creativity

AI-generated image with musicians and instruments
(Image: AI-generated, Interfinity/zvg)

The interactive game show “AI vs. Human Composers” explores the boundaries between human creativity and artificial intelligence in music at the Voltahalle Basel on March 20, 2024. Computer science professor Heiko Schuldt on an experiment that invites us to reflect on the evolution of technology and art.

26 February 2024 | Reto Caluori


Dr. Schuldt, you are working with the music festival Interfinity to pit AI-generated compositions against human-made music in a game show, with the audience then able to vote on who created the works. What result do you expect?

Ideally, the result will be completely undecided. My assumption is that the quality of AI compositions is so good that, at least to the untrained ear, it will not be possible to distinguish them from human compositions. I therefore do not expect there to be a clear bias toward one side or the other.

Portrait of Heiko Schuldt
Professor Heiko Schuldt. (Photo: University of Basel, DMI)

Does it make a difference to you if we can tell which music is which?

In many applications, it is very important to distinguish human-generated content from AI-generated content. In this case, however, I do not consider it essential. We are talking about classical pieces here, and it is no trivial matter to compose a piece for a complete orchestra in a way that sounds harmonious. Of course, this is something you can learn in composition education, but I think that AI can learn it just as well from data.

AI is already supporting medical diagnostics and automating processes in industry. With the game show, you will be challenging our ideas about creativity and the value of art. Are they the last refuge of the human mind?

Perhaps not anymore, and we can see this in other areas as well. AI can generate music and create images. As a result, more and more creative professions are being called into question. For a long time, there was a belief that it was mainly simple, manual, repetitive activities that would be replaced. Now, however, creative professions such as copywriting and graphic design are also at risk on a very large scale, because these creative services can now be provided by an AI.

Creativity and emotional resonance play a central role in art and music. Aren’t we less willing to accept an artificial composition than AI-generated software for process optimization?

I think this is a process. And once people see how good the quality of AI creations is, uptake will also increase, much as we have already seen with images. AI-generated images are now being used on a large scale because we have realized that the quality is really not so bad.

That sounds very optimistic. Do you also see any disadvantages to this development?

It definitely has drawbacks. For example, in the case of visual information, it soon becomes impossible to know what really exists and what has been generated by AI. What has really happened? What has been visually faked? AI text generation has already demonstrated that these systems can hallucinate. And this will be a major problem, which we can probably only solve with a very high degree of transparency. This means that we always have to make clear what is AI-generated content and what is real content that corresponds to actual events.

Following the game show, researchers from different disciplines will invite the audience to take part in a discussion. What expertise does society need to address the impact of AI?

We need expertise from all areas, because all areas are affected by it. The University of Basel, with its expertise across the full range of disciplines, is therefore perfectly positioned to advance precisely this debate on AI. It is not just a technical discussion, but also involves much broader questions: what are the legal consequences of using AI in a given field? What ethical guidelines do we need to define? What are the consequences for society? These are all things that I believe need to be discussed in a very broad and holistic way.

As a computer scientist, you understand better than many others what AI systems are based on and how they work. Where do you notice the biggest misunderstandings in how AI is perceived?

Often, AI systems or large language models are seen as sources of facts and reliable information – which they are not. The models produce results that are statistically plausible, but not necessarily true. That's why it's important to understand what you actually get from a language model: text that looks as if it communicates facts, but in which not every fact is actually true. Or a picture that suggests something happened when it didn't. Here, it is important to know where the boundaries lie.
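This statistical character can be made concrete with a toy sketch. The hand-written word-probability table below is purely illustrative (real language models learn billions of learned parameters over huge vocabularies, not a small lookup table), but it shows the key point: the model samples what is statistically likely, and nothing in that process checks whether the resulting sentence is true.

```python
import random

# Toy "language model": hand-written next-word probabilities.
# Real LLMs learn such statistics from vast text corpora; none of
# the statistics encode whether a statement is actually true.
next_word = {
    "Basel": [("lies", 0.6), ("hosts", 0.4)],
    "lies":  [("on", 1.0)],
    "on":    [("the", 1.0)],
    "the":   [("Rhine.", 0.5), ("Danube.", 0.5)],  # both fluent, one false
    "hosts": [("festivals.", 1.0)],
}

def generate(word, rng):
    """Sample each next word according to its probability until we
    reach a word with no continuation."""
    out = [word]
    while word in next_word:
        words, weights = zip(*next_word[word])
        word = rng.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

rng = random.Random(0)
for _ in range(3):
    print(generate("Basel", rng))
```

Run it a few times and it will eventually produce "Basel lies on the Danube." – perfectly fluent, statistically well-formed, and factually wrong. That, in miniature, is a hallucination.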

And an understanding of the technology can help?

Today, for example, people often query ChatGPT instead of Google, but of course it makes a huge difference whether the technology is searching through a lot of documents or generating something new to meet the needs of the searcher. Naturally, there is no guarantee that everything you find on Google is actually correct. But a result generated specifically for a query is something different.
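The contrast between searching and generating can be sketched in a few lines. This is a deliberately simplified illustration with made-up example documents, not how any real search engine or chatbot is implemented:

```python
# Retrieval returns documents that actually exist and can be traced
# back to a source; generation composes new text that merely
# resembles its training data. Simplified, illustrative sketch.

documents = [
    "The Voltahalle is a concert venue in Basel.",
    "Interfinity is a music festival in Basel.",
]

def search(query):
    """Search-engine style: return only pre-existing documents
    matching the query. Every result has a verifiable source."""
    return [d for d in documents if any(w in d for w in query.split())]

def generate_answer(query):
    """Language-model style: compose a brand-new sentence tailored to
    the query. It is not tied to any source document, so there is no
    built-in way to check it against reality."""
    return f"The answer to '{query}' is that Basel hosts many events."

print(search("Voltahalle"))            # real documents only
print(generate_answer("Voltahalle"))   # fluent, but unverifiable
```

A search result may still be wrong, as Schuldt notes, but it at least points to something that exists; a generated answer is synthesized on the spot for this one query.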

Today’s AI applications are based on decades of publicly funded research. But now big companies like Microsoft and Alphabet are edging ahead. What role do you see for universities in the future?

Developing these very large AI models takes huge amounts of data and enormous computing power, and universities simply don’t have that combination. However, I see two very important tasks for them: firstly, defining ethical guidelines which then also need to be accepted by society. This is a discussion that should be led by universities in the form of an interdisciplinary, holistic dialog.

And secondly, the large models are extremely non-transparent. You don’t know what data they’ve been trained with or what biases they have, and it can often be impossible to know how a result was generated. We therefore need new approaches to make AI decisions more transparent and traceable. And it should be up to the universities to define the relevant methods. Finally, one more point: we need models that are open and transparent rather than large and powerful, making it possible to understand the whole process from start to finish. Here, too, universities have a role to play.

Artificial Art – a three-part event series

Under the direction of Lukas Loss, the Interfinity music festival and the University of Basel will present an extraordinary concert together at the Voltahalle Basel on March 20, 2024. In the game show AI vs. Human Composers, an ensemble will play works composed by both humans and AI. The audience will then be challenged to guess the origin of the individual pieces. The interactive event will be enhanced by contributions from researchers, followed by a drinks reception where scientists will mingle with the audience to discuss AI.

The game show is part of the “Artificial Art” event series, which will begin in the Novartis Pavilion on March 18, 2024 with a panel discussion featuring top-class speakers including Professors Gerd Folkers, Frank Petersen, Bianca Prietl, Heiko Schuldt and Martin Vetterli, as well as Damir Bogdan, CEO of QuantumBasel. This will be followed by a lecture recital on the topic of “Artificial Paradises” with author Alain Claude Sulzer and pianist Denis Linnik on March 19.

Artificial Art is organized by Interfinity in collaboration with the Responsible Digital Society RDS research network at the University of Basel. The Swiss National Science Foundation is supporting the game show through its Agora funding instrument.
