Pablo Picasso ‘paints’ a posthumous self-portrait with the help of AI (via StarryAI)

Cyborg art: AI is reshaping creativity, and maybe that’s a good thing

As text-to-image generators stoke fears of job replacement in the creative industries, they’re also opening doors to exciting new forms of artistic expression

After seeing a collection of early photographs in 1839, the French painter Paul Delaroche is said to have proclaimed: “From today, painting is dead.” Today, of course, that feels like an exaggeration – painting may not be what it was before photography came along, but the new medium didn’t kill it off, either. In fact, the new technology expanded the horizons of the ancient form, helping breed new genres such as Cubism that pushed back against photography’s scientific realism. Nevertheless, Delaroche’s pessimistic statement gives some indication of the cultural panic that surrounded the new technology in the mid-19th century. It’s also a handy reminder of our own short-sightedness when it comes to emerging forms of expression.

Needless to say, we’ve failed to leave this panic in the past. But today, our fears have little to do with physical technology: they have moved into a realm that is even harder to map and define – the realm of software, and algorithms, and endless data streams. Just yesterday, artists aired their fears on the timeline, commenting on a viral post by the digital artist RJ Palmer, which showed an impressive array of images from the recently launched text-to-image model Stable Diffusion. “A new AI image generator appears to be capable of making art that looks 100 per cent human made,” Palmer wrote. “As an artist I am extremely concerned.”

Responses to this post are wide-ranging, arguing for and against the new art generator, which is trained on over five billion images scraped from the internet (apparently including the work of living, working artists) and turns prompts typed by users into very convincing artworks. Critics suggest that it will allow corporations to replace the labour of hard-working artists with virtual knock-offs, benefitting from their stylistic signatures for a fraction of the cost, and none of the credit. Defenders suggest that this new, laissez-faire system will simply encourage artists to innovate, and to find new styles that exist outside of current trends in commercial art.

Technically speaking, art made by algorithms isn’t a brand-new field. For decades, the late artist and sci-fi writer Herbert W Franke used generative software to make art such as 1979’s MONDRIAN (a programme that creates visuals in the style of the Dutch painter – Franke minted one of its sequences as an NFT earlier this year, just weeks before he died). In the last few years, however, AI art has crossed decisively into the mainstream. First, it was used to churn out poor imitations of more famous artists’ work. By 2021, its overlords were generating artworks that most people couldn’t distinguish from art created by real, human hands.

By far the most prolific form of AI art in 2022, though, is that created by text-to-image generators like DALL-E 2, Imagen, or – as of last week – Stable Diffusion. Even if you’re not familiar with their names, you’ll likely be familiar with the images these programmes create: distorted, eerie approximations of whatever bizarre scenarios their users can come up with, in whatever styles they dictate. Thomas the Tank Engine trundling through surreal and spooky landscapes. Intricate illustrations of otherworldly cities. Dreamlike visions of Frank Ocean walking the Prada runway.
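
For the curious, the barrier to entry is surprisingly low. The sketch below is illustrative rather than a recipe – it assumes access to a GPU, the open-source diffusers library, and acceptance of the Stable Diffusion model licence, and the prompt and settings are stand-ins – but it shows roughly what it takes to turn one of those bizarre scenarios into an image:

```python
# A minimal, illustrative sketch of text-to-image generation with Stable Diffusion,
# using Hugging Face's open-source diffusers library. The model ID, prompt and
# settings are examples only; a CUDA-capable GPU and the model licence are assumed.
import torch
from diffusers import StableDiffusionPipeline

# Download the pretrained Stable Diffusion weights (v1.4 was the 2022 public release)
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # move the model to the GPU

# The prompt describes both the scenario and the style, just as users do online
prompt = "Thomas the Tank Engine trundling through a foggy, surreal landscape, oil painting"

# guidance_scale controls how closely the image follows the prompt;
# num_inference_steps trades speed for detail
image = pipe(prompt, guidance_scale=7.5, num_inference_steps=50).images[0]
image.save("tank_engine.png")
```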

Since DALL-E 2 launched in April 2022 (also bringing attention to DALL-E Mini, a publicly accessible, unofficial recreation created by AI enthusiast Boris Dayma), these uncanny and sometimes beautiful images have haunted our timelines, bringing with them Delarochian claims about the seismic cultural shifts lurking just over the horizon.

This scepticism isn’t completely unwarranted, of course. Between replacement by robot painters like Ai-Da, arbitrary censorship on platforms that were supposed to open up the art world, and the pandemic of ugliness that plagues NFT markets, the future often seems pretty dire for today’s emerging artists. Add to that the fact that several of the most prominent text-to-image programmes are developed by tech behemoths such as Google (Imagen), Meta (Make-A-Scene), the Elon Musk-co-founded OpenAI (DALL-E 2) or, most recently, TikTok – whose founders’ Silicon Valley “move fast and break things” ethos doesn’t typically make allowances for small, struggling artists – and it’s understandable that these artists think there’s cause for concern.

However, no amount of scepticism is going to stop the rise of AI art. Yes, copyright law and government regulation may catch up with the new artistic form one day, and make ripping off other artists’ styles via AI more difficult, in the same way that laws are being proposed to crack down on deepfake porn. But staying on top of these new technologies seems near-impossible – it’s no secret that culture and technology move much faster than the forces that police them. Then there’s the question of whether governments or courts should even be allowed to stifle new forms of creative expression, given that none of us truly understand their implications.

So what are artists supposed to do, when faced with their replacement by machines that can conjure entire landscapes, paint immaculate portraits, and whip up tailor-made concept art in seconds, based on a short phrase? If you can’t beat ’em, join ’em, Ai-Da creator Aidan Meller told Dazed back in May. “Those who can embrace the new digital realm coming, I think they’ll do very, very well,” he said. “In actual fact, the future of art will be embracing the change rather than resisting it.”

Already, there have been some notable attempts at this cyborg embrace, spawning truly unusual artworks that may go down as the first steps in a movement characterised by the collaboration of human artists and “thinking” machines. (It’s unclear, here, where to draw the line between throwaway memes and “original” images with their own inherent value and artistic intent.)

With the help of AI, Amalia Ulman has crafted images of cigarette soup – infinitely more sickening than the photo of real cigarette butts in a box of rainwater that presumably served as inspiration, thanks to the programme’s literal, gastronomical interpretation of the word ‘soup’. Honor Levy has curated warped images of schoolgirls and bug-eyed angels. The initial experiments with Meta’s technology are slightly tamer – like Sofia Crespo’s psychedelic lifeforms – because the company has only tested it with a handful of selected artists, but no doubt the best results will be just as bizarre once it rolls the tool out to the general public.

In the mid-2000s, an art movement known as cyborgism emerged in Britain. It saw artists enhance their senses, or add “new” senses, with cybernetic implants, creating “a system composed of both organic and inorganic parts”, and then use these augmented senses to make their artworks. Art created with contemporary AI programmes may not involve implanting actual machinery in the body, but you could say it takes this transhuman approach even further: today, we invite machines into our minds via vast data streams, and through this relationship they manifest our very thoughts. Or, if we’re feeling less generous, we can cast AI’s role in these systems as something more servile, like the studios full of highly skilled workers who realise the artistic ambitions of today’s biggest artists, such as Damien Hirst, Jeff Koons, or Takashi Murakami.

As Aidan Meller suggests, it’s up to artists themselves to decide whether they want to create art in “collaboration” with AI, stand their ground against the technology, or simply down tools in the face of its seemingly inevitable ascent.

If they want to do the former, it may mean learning a whole new skillset that revolves around concepts such as “prompt engineering” – in other words, describing the images you want an AI to create in language it can best understand. Dean Kissick wrote about this novel skill in Spike last month, noting that text-to-image programmes require “a uniquely literary approach to image-making”. Back in June, the digital artist Beeple also suggested that it will take significant time and effort to learn such skills – “how [to] use these brand new tools in ways that have all the earmarks of great art – craft, intent, nuance, deep meaning about the human condition, etc…”
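
To give a flavour of what prompt engineering looks like in practice, here is an illustrative sketch – the prompts, model ID and settings are assumptions, not a formula – that feeds the same scene into Stable Diffusion twice: once described plainly, and once dressed up with the kind of stylistic modifiers prompt engineers lean on:

```python
# Illustrative only: the same scene described twice, once plainly and once with
# added stylistic modifiers. Model ID and settings are assumptions, and a GPU
# plus acceptance of the Stable Diffusion licence are required.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

plain_prompt = "a city at night"
engineered_prompt = (
    "an intricate illustration of an otherworldly city at night, "
    "glowing neon, volumetric fog, dramatic lighting, highly detailed, "
    "in the style of a vintage sci-fi book cover"
)

for name, prompt in [("plain", plain_prompt), ("engineered", engineered_prompt)]:
    # Re-seeding before each prompt keeps the starting noise identical, so any
    # difference in the output comes from the wording alone
    generator = torch.Generator("cuda").manual_seed(42)
    image = pipe(prompt, guidance_scale=7.5, generator=generator).images[0]
    image.save(f"city_{name}.png")
```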

What some may see as a brick wall, others will see as an exciting opportunity to expand the horizons of contemporary art, and reach out toward new forms and techniques that we can’t even imagine yet. Just look back to the dawn of photography. If Delaroche’s doomsaying teaches us anything, in retrospect, it’s that the birth of a new form of expression isn’t necessarily synonymous with the death of the forms that came before it. If anything, creativity benefits from a diverse and disruptive set of tools, although we should be careful not to confuse those tools with the artists who wield them (at least for now). If we do claim that human art is dead now that AIs have been trained to do it better, then we might just end up with egg on our face a couple of hundred years down the line.