IN THE ORIGIN STORY FOR SURREALISM that André Breton provides in his 1924 manifesto, he claims that “a rather strange phrase” came to him in the hypnagogic state before sleep: “There is a man cut in two by the window.” Easy as it is to link the phrase with Surrealism’s preoccupation with transgressing binaries and seeking passages between life’s apparently divided aspects (what if dreams are real and reality a dream?), Breton seems to ignore the phrase’s substance to fixate on the means of its arrival:
I realized I was dealing with an image of a fairly rare sort, and all I could think of was to incorporate it into my material for poetic construction. No sooner had I granted it this capacity than it was in fact succeeded by a whole series of phrases, with only brief pauses between them, which surprised me only slightly less and left me with the impression of their being so gratuitous that the control I had then exercised upon myself seemed to me illusory and all I could think of was putting an end to the interminable quarrel raging within me.
What mattered was not the content but the loss of conscious control over the language that was coming to him. Breton concludes that “poetic construction” should be a matter not of will but of surrender: You can assume an aesthetic distance from yourself and become a spectator to your own thought process, ideally a detached connoisseur of it. “Let yourself be carried along,” Breton declares, “events will not tolerate your interference.”
In Breton’s view, this approach to creation was democratizing; he proclaimed in the 1933 essay “The Automatic Message” that “it is to the credit of Surrealism that it has proclaimed the total equality of all ordinary human beings before the subliminal message, that it has constantly insisted that this message is the heritage of all.” It was also (somewhat absurdly, given Breton’s manifest egomania) a means of escaping individualist egotism: Michel Carrouges, the author of an early sympathetic study of Surrealism, goes so far as to say that Breton discovered the “natural link” between “personal unconscious, collective unconscious, and even cosmic unconscious.”
Breton’s flash of inspiration would coalesce into the inaugural Surrealist method of automatic writing, the attempt to outflank the conscious mind by scribbling words or doodles down faster than the speed of thought. For Breton, this radical abdication technique was a breakthrough; it allowed writers to seemingly repudiate intentionality and ambition and become “simple receptacles” and “modest recording instruments.” Not only did this thwart any debasing, approval-seeking tactics on the part of the artist, it also offered access to the “superior reality of certain forms of previously neglected associations, in the omnipotence of dream, in the disinterested play of thought.”
Uninhibited by a rationalistic need to make sense or link causes to effects, automatic writing can surprise us with the intentions and connections we discover retroactively in what might otherwise seem like random gibberish or the product of sheer coincidence. Breton claimed that automatic writing, for one thing, “tends to ruin once and for all all other psychic mechanisms and to substitute itself for them in solving all the principal problems of life.”
As hyperbolic as that sounds, it anticipates the hype frequently deployed today to justify a similar sort of passivity. Only now, rather than turn over our agency to “objective chance” and the unfathomable power of the collective unconscious as the Surrealists preached, we are invited to give way to machine-learning algorithms, often touted as artificial intelligence. Where Breton imagined that significant truths were somehow mystically imprinted in our unconscious depths, proponents of AI can point to the billions of actual data points (i.e., “mechanical recordings” of reality) that aggregate the observable effects of countless human decisions and can posit “previously neglected associations” within the data on demand, investing these correlations with the air of oracular truth. Predictive systems have been widely introduced to solve the “principal problems of life,” to foster efficient processing everywhere. They are used to sort social media feeds and tailor retailing sites to individual users, and have been implemented to automate decisions in banking, government services, and the judicial system. They are often said to be “disinterested” (though they have repeatedly been shown to be riddled with bias).
In this respect, AI represents a realization of what for Breton was merely a speculative faith in decision-making procedures that could surmount human calculation. The rapid cultural rise of the “algorithm”—in its vernacular sense of connoting an oracular technological deity—testifies to the success of the Surrealist revolution that Breton never tired of promising. AI cultivates and caters to our passivity, seeming to offer the fruits of creativity and self-examination without the effort and self-doubt. Algorithms always find us interesting and always testify to our insatiable desires by showing us all the things we should still want. We can routinely experience the fascination of being surprised by our own depths, revealed to us for our delectation by personalized feeds. The experience of AI in everyday life renders us default Surrealists, deferring to opaque automatic processes that no longer need be arduously evoked with Ouija-esque analog rituals.
Natural language processing models, like those developed by OpenAI—a research lab that launched with $1 billion of funding from the likes of Elon Musk and Peter Thiel with a mission of developing “artificial general intelligence”—make the link between Surrealism and AI seem especially clear. When fed a textual prompt, GPT-3, a model that OpenAI launched in 2020, predicts what sentences should follow based on its statistical analysis of billions of words of text pulled from the internet. How it completes whatever prompt it’s fed can be interpreted as a “social average” response, making it a kind of oblique search engine of the collective consciousness, liberated from any of the contextual social relations that would discipline what it produces. It doesn’t experience inhibition or self-satisfaction. Thus it seems to fulfill Breton’s wildest dreams for automatic writing, producing text that is estranged from human agency yet nonetheless has some perceivable sense that a reader can extract from it, or project onto it, “reason’s role being limited to taking note of, and appreciating, the luminous phenomenon,” as Breton put it, of unpremeditated language that can at the same time be parsed.
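To make that prediction mechanism concrete, consider a deliberately minimal sketch in Python. It is nothing like OpenAI’s actual code or GPT-3’s neural architecture; the tiny corpus (riffing on Breton’s phrase) and the complete function are invented for illustration. But it runs, and it shows the underlying principle: the model “writes” only by consulting statistics about which words have been observed to follow which in the text it was fed.

```python
import random
from collections import defaultdict

# A drastically simplified stand-in for statistical next-word prediction:
# a bigram table built from a tiny, invented corpus. GPT-3 rests on the
# same predict-what-follows principle, scaled up to billions of words and
# a neural network instead of a count table.
corpus = """there is a man cut in two by the window
the window opens onto a dream and the dream opens onto the street
the street is cut in two by the night""".split()

# Record every word observed to follow each word (repeats preserve frequency).
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def complete(prompt_word: str, length: int = 12) -> str:
    """Extend a one-word prompt by repeatedly sampling an observed next word."""
    words = [prompt_word]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(complete("the"))  # e.g., "the window opens onto the street is cut in two by the night"
```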
GPT-3 has its visual equivalent in AI-generated images, which over the past few years have arrived in a variety of flavors, produced by a range of approaches. Some systems, like Google’s DeepDream, were developed to try to document how image-recognition algorithms work, producing videos, often described as “surreal,” in which eyeballs and dog heads and the like (prominent shapes in the training data) emerge grotesquely from images the system is fed. Others, like Nvidia’s StyleGAN, can generate realistic human faces. Some algorithms have been trained on images from the fine art corpus to produce “paintings” in the style of particular eras or artists, as with Microsoft’s Next Rembrandt project. OpenAI’s Dall-E (get it?) produces images in response to text prompts by using a GPT-3-like model trained on pixel sequences rather than words.
Artists have adopted these kinds of tools to produce generative works that are sometimes described as instances of machine creativity. Leonel Moura, who had works included in the 2018 “Artists & Robots” show of AI art at the Grand Palais in Paris, has linked his practice with “artbots” directly to Surrealism and its attempts to “take human consciousness out of the loop.” In a review of the 2021 generative art exhibition “LUX: New Wave of Contemporary Art” in London for the New Left Review, art historian Julian Stallabrass argues that “what is new here, and undeniably impressive, is the scale and speed of this processing, the vast datasets on which it draws, and the hypnotic vision of an inhuman intelligence playing with human cultural techniques and material.” Of work by onetime Google “artist-in-residence” Refik Anadol, in which machines trained on Italian Renaissance paintings project morphing images that approximate and deform faces and landscapes, Stallabrass writes that “the viewer is held in the sublime of a vision of a superior generator of painterly form. . . . The work opens up a glimpse of a future in which the traces or indeed ruins of human creation are reworked forever by inhuman intelligences.”
Many journalists have also been warily impressed with GPT-3. While some worry about its capacity to produce well-tailored disinformation on demand or eliminate journalistic careers, most commentators have responded with guarded wonder, balancing gee-whiz enthusiasm with vague concern about the future of humanity. At the same time that GPT-3 can perform stunts like writing explanatory articles about itself or reviewing books on AI, it can be an exploratory tool for writers to expand their creative potential. Last April, in the New Yorker, writer Stephen Marche likened using GPT-3 to the ancients invoking the muses. Novelist Vauhini Vara used it to help her write a requiem about her dead sister for the Believer. GPT-3’s language, she notes, “was weird, off-kilter—but often poetically so, almost truer than writing any human would produce.” In an essay for n+1, critic Meghan O’Gieblyn notes the parallels between GPT-3 and automatic writing, pointing out the similarities between an automatic writing text like Breton and Philippe Soupault’s The Magnetic Fields and one written using GPT-3.
The text outputs of generative models are at once mechanistic and unpredictable; they are based entirely on calculations and old data, but they can come across as original, ingenious—combinations humans would likely never think of. For years, researcher Janelle Shane has been playing with generative models like OpenAI’s to explore their limits and extract funny and surprising output from them for her blog, “AI Weirdness”: AI names your pet, AI bakes some cakes, AI makes New Year’s resolutions. In these experiments, Shane tweaks the models, adjusting their settings and her inputs until they cohere as a system of well-managed distortions, producing phrases or images that home in on uncanniness. Their odd juxtapositions garble received ideas just enough to create a frisson; they seem funny or clever in a way that cannot be anticipated, only appreciated after the fact. From Shane’s recent list of New Year’s resolutions: “Record every adjective I hear on the radio.” “Act like a cabbage for a month.” “At 4 o’clock every day I will climb a tree.” “Speak only to apples for 24 hours.” In the process, Shane trains herself and her readers to enjoy these occurrences, learning in a sense to be alive to the results of unpredictable creativity—what Breton called “convulsive beauty.” By interacting with AI, one can refine a sense of one’s own unique human capacity: the ability to see what the machine cannot about its own production. The flight from agency appears to redeem itself in such acts of aesthetic recognition.
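The “settings” being adjusted in such experiments typically include the sampling temperature, which controls how willing the model is to pick an unlikely next word. A minimal sketch, using an invented four-word vocabulary and made-up scores rather than any real model’s output, shows how turning that one dial shifts completions from predictable to “weird”:

```python
import math
import random

# Invented scores for a handful of candidate next words; a real model
# assigns scores like these to tens of thousands of possible tokens.
next_word_scores = {"resolutions": 4.0, "cake": 2.0, "cabbage": 0.5, "apples": 0.1}

def sample(scores: dict, temperature: float) -> str:
    """Convert raw scores to probabilities (a softmax) and draw one word."""
    weights = {word: math.exp(score / temperature) for word, score in scores.items()}
    total = sum(weights.values())
    threshold = random.uniform(0, total)
    for word, weight in weights.items():
        threshold -= weight
        if threshold <= 0:
            return word
    return word  # fallback for floating-point edge cases

# Low temperature: the likeliest word dominates and the output stays safe.
print([sample(next_word_scores, 0.2) for _ in range(5)])
# High temperature: improbable words surface, and the phrases get stranger.
print([sample(next_word_scores, 2.0) for _ in range(5)])
```

Shane’s published experiments involve far more than a temperature knob, of course; the point is only that the “weirdness” is a tunable statistical artifact rather than a spontaneous eruption of machine imagination.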
But the fact that AI and Surrealism so readily fit together is reason to be suspicious of both. Surrealism was supposed to free minds from bourgeois rationality by subtracting intentionality and tapping into deep levels of mythic experience through randomness, dreams, and the celebration of “primitivism” and “subjectlessness.” Yet this experience is readily elicited by AI, the supremely rational application of reductive ideas about how minds work.
In Compulsive Beauty (1993), Hal Foster argues that the Surrealists seized upon the “irrational residue” left behind by the most mechanistic, capitalistic production processes to undermine the “modernisms that value industrialist objectivity.” Their fixation on uncanny hybrids of animate and inanimate, human and machine, of robotized laborers of various types, could be understood as critique, Foster argues, and such approaches as automatic writing were “a form of autonomism that parodies the world of automatization.” But the Surrealist critique proved readily susceptible to capitalist co-optation: “In the postmodern world of advanced capitalism, the real has become the surreal,” Foster acknowledges, and he wonders whether “our forest of symbols is less disruptive in its uncanniness than disciplinary in its delirium.” Surrealism’s dreamscapes no longer posit an escape from the bourgeois life of convention but form a commonplace expression and experience of it.
AI systems make generating surreal images or conducting surrealistic experiments trivially easy. They don’t require rigor but invite us to let go, to see AI’s efforts to predict us as a form of play or a kind of dream state. Their immense processing power and capacity for digesting troves of data on our preferences and predilections allow them to construct and exhaust the field of imaginative possibility for us. In Anadol’s November 2021 discussion with MoMA curators Michelle Kuo and Paola Antonelli, the artist describes training AI on data sets of the museum’s collection as tracing a vast multidimensional “latent space” of “other creations and imaginations and outcomes.” Kuo sees this as yielding “totally fantastical images, almost automatist, or like automatic writing or drawing.”
But another way of describing AI systems is that they systematically work through countless concatenations of ideas on their own terms and overwhelm us with them, as if that is all there could possibly be. As artist and media theorist Joanna Zylinska writes in AI Art: Machine Visions and Warped Dreams (2020), paraphrasing the ideas of Polish writer Jacek Dukaj, “AI exponentially amplifies the knowledge shared by marketing experts with regard to our desires and fantasies” and is “much quicker and much more efficient” in putting that knowledge to use. Under such conditions, AI art can become “an outpouring of seemingly different outcomes whose structure has been predicted by the algorithmic logic that underpins them, even if not yet visualized or conceptualized by the carbon-based human.”
Recuperated as AI, Surrealism provides the basis not for liberation but for further entrapment in existing cultural patterns: reshuffled in novel ways, they are not fundamentally changed but further ingrained. The idea of escaping from the control exercised by reason ends up being a way of fully submitting to a different form of programming, to what a machine-learning model can produce and what algorithmic forms of control can induce.
OpenAI hopes that GPT-3 will be integrated across a range of applications where it’s necessary to generate spontaneous text. It could produce more dynamic nonplayer characters for games, or make automated small talk in customer service settings. It could turn search engines into a kind of conversation between human and machine. Such interactions turn Surrealism into a business model, following in the footsteps of artists like Dalí who long ago discovered and exploited its commercial potential.
Another business model for the Surrealism-AI symbiosis is evident in generative art NFTs (CryptoKitties, Bored Apes, and countless other copycat projects), where the content associated with any token is basically a pretext for financial speculation. The absence of human intention in the generated image for any given NFT ensures that ideas don’t interfere with trading, but the Surrealist pedigree for “automatic” creation helps substantiate the alibi that works of art are still involved. In the other direction, the whimsicality of artists’ experiments with machine learning—whether Shane’s quirky lists or Moura’s toylike action-painting robots—helps domesticate AI, providing a framework for making sense or making light of its glitches and anomalies, rendering it more acceptable, perhaps, when it attempts to dictate options to us, framing our sense of the range of choices. We can experience the way technology is being deployed to inhibit and control us as wonder and surprise. We can imagine that our effort to direct it toward outputs that amuse us doesn’t at the same time function as a form of surveillance, of data collection that will be used to refine its more ominous capabilities.
Predictive text functionality (e.g., autocompletion of your words or sentences) already lives in email and texting apps and will ultimately move into all sorts of consumer products, a form of control implemented as a mercy of convenience. Its point is not to estrange us from the familiar channels of our thoughts in the classic Surrealist manner but to more efficiently conduct us through them. Algorithmic text completion intervenes in how we think, making us absent where we are expected to be present, at the moment we are ostensibly speaking. It assures us that we don’t need to be the speaking subject behind our words, just as Surrealism promised.
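At its simplest, that functionality can be sketched as a lookup over previously observed phrases; the table and function below are invented for illustration, standing in for the much larger statistical models that email and texting apps actually consult.

```python
# A toy autocomplete: rank previously observed phrases that extend what the
# user has typed. (Phrases and counts are invented; real predictive-text
# systems score continuations with large language models trained on
# aggregate user behavior.)
observed_phrases = {
    "thanks so much": 42,
    "thanks for reaching out": 31,
    "thanks for the update": 17,
    "let me know if that works": 12,
}

def suggest(typed: str, limit: int = 3) -> list[str]:
    """Return the most frequent observed phrases beginning with the typed text."""
    matches = [(count, phrase) for phrase, count in observed_phrases.items()
               if phrase.startswith(typed) and phrase != typed]
    return [phrase for _, phrase in sorted(matches, reverse=True)[:limit]]

print(suggest("thanks"))  # the system proposes the words before we think them
```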