Chat’s Entertainment

Sean Michaels, August 24, 2023

ChatGPT and the coming blandification of AI
Image © Wesley Fryer
Word Factory

The worst kinds of duels are the ones where everybody loses. Thus far, the debate over generative AI can be split into two camps. There are those who believe that tech like GPT-4 and Stable Diffusion is overrated, empty, a “blurry JPEG of the web” rich in simulated insight and low on substance. And there are those who claim it could (possibly literally) take charge of the planet—an all-knowing expert ready to rearrange every aspect of our lives. Every computer-generated Balenciaga promo, each machine-mastered SAT, is another bullet whistling through the air, glittering with algorithmic gunpowder.

Both views presuppose the same future: a world where “artificial intelligence” is a relatively singular thing. Perhaps it will be an Oracle, shaping wise sausages out of the lips and assholes of human experience. More likely we’ll have constructed a mechanical shill, a mindless parrot, or a “firehose of discourse,” as Andrew Dean put it in the Sydney Review of Books, spraying “university mission statements until the end of time.” We know some things for sure: the AI will be controlled by large companies. It will rely on closed datasets, dubious labor practices, and grievous energy consumption. And these hallowed, expurgated, optimized creations will be homogenous and interchangeable, like microwaves or search engines or limping late-capitalist democracies.

The people making AI say they want to make something dependable. Critics claim the coders won’t succeed: that you can’t make a dumb script smart, a hollow thing whole. But the debate over whether AI can become a trustworthy or untrustworthy authority elides the question of whether authority is what we really want. Authority is what allows Google or Instagram to dictate the ways we see the internet; it’s what permits Big Tech’s algorithms to capture our attention, drawing our gaze like a magician with his silks. History has taught us that the more centralized an authority, the fewer guardrails there are on its behavior. Conformity goes up; negotiating power goes down; the scope of imagination narrows.

Much has gone wrong in the story of the internet, from the ascendancy of advertisers to the fascist expropriation of Pepe the Frog. But one of the starkest mistakes of recent years was the way the web’s wild chaos gave way to a series of private playgrounds run by billionaires of questionable taste. Quick as it may be changing, AI’s still in its infancy. The furniture’s not nailed down. It’s not yet one thing, or another—which means there’s a chance it could be both.


In 2019, I followed a link to a site called Talk to Transformer. It’s a bad name for a website—meaningless to the uninitiated, hard to remember, weirdly evocative of the Swedish poet Tomas Tranströmer. Created by the Toronto computer scientist Adam Daniel King, talktotransformer.com allowed visitors to “talk” to an early version of OpenAI’s Generative Pre-trained Transformer (GPT) model. You would enter a sentence, wait a few seconds, and the software would continue where you left off. When I fed it the beginnings of a story, I often found the results surprising. “He was looking at me like I was one of those dumb little creatures for having my body parts in the wrong place,” Talk to Transformer once wrote. “If I did not answer his questions he would use that thing he called ‘a teeth.’” The system’s output unsettled me: it wasn’t lifeless, it didn’t hew to cliché. Its literary voice was at least as compelling as an unhinged seventh-grader’s. At the same time, I felt slightly compromised whenever I used the technology—as if I were betraying my fellow writers.

I wanted to explore these ideas some more, and I began working on a novel about an aging writer who accepts a commission from an AI company. She goes to California and spends a week writing poetry with their next-generation bot, creating what the company hopes will be a “historic partnership between human and machine.” I thought it would be interesting if my book in turn represented a “collaboration”—if it was contaminated in the same way—and so over the next four years I experimented with large language models (LLMs) to generate certain passages of the text. GPT-2 and GPT-3 are both good mimics of prose style, assuming you give them large samples, but they find poetry difficult—as does OpenAI’s current model, GPT-4. The biggest problem is rhyme: as others have since uncovered, the big LLMs gravitate toward jangling rhymed couplets and struggle with free verse. But these tools also flounder when tasked with distilling or imitating a particular poetic style, even when given examples. In my case, after trying and failing to get GPT-3 to generate poetry in a consistent and specific voice, I used a government arts grant to hire an engineer; we built something extremely rickety ourselves.

I have no doubt that OpenAI and Google will eventually figure out poetry, and that subsequent versions of their products will be as good at aping Ashbery poems as they are Seinfeld screenplays. But the experience of writing this book underlined the oligopolistic nature of what pretends to be a diverse, open-handed field. High-end generative AI requires enormous resources. Concerns over copyright, regulation, and what’s known as “safety” (which centers on the risk of killer robots, not AI’s social impact) reduce access even further. As much as generative AI companies may claim their tools can create anything, in any style, they are trying to build a technology that will attract the broadest audience and the least controversy.

In other words, free verse is a low priority. The market will always pick consistency over creativity, the obvious over the avant-garde. Wishing for a tool that diverges from the priorities of what I call “monumental AI” is a familiar feeling: it’s like waiting for Apple to give the iPhone a headphone jack, or for the Democratic Party to defund the police.


It’s tempting to pretend there’s an easy off-ramp. To insist that instead of wrestling with the impacts this technology will have, and how to mitigate the harms, we can just opt out and refuse, relegating generative AI to the same shabby dustbin as Segways and 3D printers. So long as we don’t use this stuff, the argument goes, we can remain authentic, uncorrupted, not numbers, but free men.

Unfortunately, the puritans are destined to lose. It really won’t take much: a better text-to-image app on our phones; a better LLM auto-integrated into Microsoft Word. Give Apple and Microsoft five years (or six months); the tech’ll be everywhere and all of us will use it, with the same grudging surrender as when we allow the “Magic Wand” tool to touch up photos of our kids. This is how AI spreads—small annoying task by small annoying task—until we no longer pay heed to which library the algorithm was trained on, or what conditions were like in the factory, or whether it’s been catastrophic for the professional photo touch-up industry.

Already, and with astonishing speed, amateurs using AI tools have begun displacing pros in industries like copywriting, commercial illustration, and stock photography. This sucks, for two different reasons. The main one—obviously—is that when people lose their jobs, they are not OK. Our country cares so little about its people that when someone loses their shitty press-release-writing gig, they can literally go hungry. It’s a disgraceful failure on the part of society, but AI is neither the cause nor the solution.

The second reason it sucks when AI jockeys supplant, say, stock photographers, is that stock photography is a craft (maybe even an art), and the loss of that craft (or art) matters. It is right to feel a kind of grief when a vocation disappears—whether elevator operators, icemen, or calligraphers—but it’s also important to distinguish between the scale of these losses. The decline of handpainted signs is a much greater tragedy than the advent of mechanical dishwashers; many of these jobs fall somewhere in between.

For the purposes of this essay, we must set aside the need to overthrow capitalism. We must set aside the nostalgia we feel for almost anything that evanesces. The grim verity is that generative AI is coming, or it is here, and human beings will need to be ready to confront it—in the courts, in the boardroom, and also in the imaginary, as its prospective collaborators.

If algorithms are increasingly to be our partners, or our paints, then we must pay more attention. Not just to whether AI companies will succeed at building their authorities, but whether the goals of monumental AI are even in line with our own. Nowhere is this more evident than when it comes to art.


As everyone knows, ChatGPT lies. It guesses, it makes facts up, it “hallucinates”—and these traits are all perceived as errors. AI creators’ greatest ambition is for their product to be reliable; this is hardly the top priority when making art. Bots make headlines by getting just a single fact wrong; so-called “hallucinations” are cited as one of this technology’s biggest risks. Researchers hoping to create centralized, automated authorities aren’t aiming for the same outcomes as makers of diverse, provocative fictions: ChatGPT is already more reputable than many people I know—and that doesn’t make it any better at writing poetry. 

On the image side, AI is being pointed towards a similar, frictionless future. Tools like Dall-E, Midjourney, and Stable Diffusion can conjure up Muppet versions of John Hughes films, or the fantasy of Alejandro Jodorowsky’s TRON, but each successive version generates graphics that are more tasteful, professional, and polite. When early models struggled to understand hands or teeth, the creators were quick to train the glitches out. After someone discovered that Dall-E Mini had an oddly specific conception of something called a “Crungus,” the developers fixed the “bug” for their next release. The surprising homogeneity of these images recalls a blandification that’s already plaguing the Anthropocene. It makes image-generators useful for Cosmo covers or political campaigns, but what about everything else?

The developers of monumental AI must always choose the compromise: the feature set that serves the widest possible audience in the mildest possible way. Each neural network is descended from a prior one, like the next in a line of pharaohs, and each is more sophisticated, like an ever-more-tasteful DJ. Never mind that the model is not alive; that it has no sense organs, nor the ability to assign new information to memory. Researchers must always pray there’s “one weird trick” that will allow their software to synthesize the quintessence of art, and manifest it.

For now, developers have focused on the arbitrary logic at the heart of great art: it’s this, they hope, that distinguishes Twin Peaks and My Neighbor Totoro from Young Sheldon or Paw Patrol. The works of Lynch and Miyazaki inhabit a mysterious place between coherence and randomness, where an intuitive sense links unlike things. That notion—that “mysterious place”—isn’t so dissimilar from a setting that machine learning experts call “temperature.” When set to a lower temperature, an AI will make safer and more statistically probable choices. When turned up high, an AI’s responses are more unpredictable and “imaginative.” Platforms like Bard, Bing, and ChatGPT have their default temperature set relatively low: it’s more important to be “normal” and accurate than interesting. But other AI tools allow users to specify the temperature, increasing the likelihood that they will make the wrong—or inventive—choice. Years ago, I asked an LLM to finish a sentence that began “The rose’s color was . . .” With the temperature turned down, it only ever answered “red”—or, perhaps, “a vibrant shade of crimson.” When I dialed up the temperature, the system’s tenor changed. “The rose’s color,” it wrote, “was affected by the blade.”
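
Mechanically, temperature is just a dial on the model’s dice roll: the scores the network assigns to each candidate word are divided by the temperature before being converted into probabilities, so a low setting exaggerates the favorite and a high setting flattens the field. Here is a minimal sketch in Python, with invented scores standing in for a real model’s output:

```python
import math
import random

def sample_with_temperature(scores, temperature):
    """Divide each candidate's score by the temperature, softmax the
    results into relative weights, then draw one word at random."""
    scaled = [s / temperature for s in scores.values()]
    top = max(scaled)  # subtract the max so exp() can't overflow
    weights = [math.exp(s - top) for s in scaled]
    return random.choices(list(scores), weights=weights, k=1)[0]

# Invented scores for words that might follow "The rose's color was..."
# -- illustrative numbers, not real model output.
scores = {"red": 5.0, "crimson": 3.5, "pale": 1.5, "affected": 0.5}

print(sample_with_temperature(scores, 0.2))  # almost always "red"
print(sample_with_temperature(scores, 2.0))  # the odd choices get a chance
```

Run it a few times at each setting: the low temperature repeats itself; the high one occasionally hands you the blade.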

But let’s be clear: when I say that temperature “isn’t so dissimilar” from the ineffable logic of art, I don’t mean they’re actually the same thing. The unique writing voice of Thomas Pynchon or Ottessa Moshfegh isn’t just Nora Roberts with the randomness cranked. Temperature’s not enough: human creativity is born out of biology, biography, and dream. We are a glitchy, weak, unreliable people; we are diverse, unpredictable, and strange. Whenever Charles Darwin discovered a new species, he liked to eat it; during Terrence Malick’s editing sessions for The Thin Red Line, he was rocking out to Green Day. The problem with monumental AI isn’t that the temperature’s too low: it’s that the technology’s ever narrowing, converging, as if Silicon Valley could finally design the perfect, all-capable multi-tool. The fantasy of AGI, “artificial general intelligence,” becomes a vision of a synthetic Bo Jackson—one who doesn’t merely play sports, but paints portraits, flies planes, and sings the high parts in La Traviata.

Instead of an AI that’s singular, optimized, and versatile, we should be calling for technology that’s eccentric, customized, and plural. Not one or two or five AIs, like drugged and tranquil genies—but a thousand different familiars, heterodox and helpful and odd. This is the landscape of Star Wars, where disparate droids quip, grieve, and spin records; it’s the universe of author Iain M. Banks, where spaceship-sized Minds save planets and build sculptures; but it’s a galaxy far, far away from the product suite of OpenAI, where the only model is generic.

A few years ago, engineers at the Australian Institute for Machine Learning approached the artist Laurie Anderson about creating an AI based on her work. The team ended up building three different familiars: one model trained on Anderson’s work, another trained on that of her late husband, Lou Reed, and a wild combination of the couple. “I’m addicted to this process [of using the AI],” Anderson later told the East Hampton Star. “You put the old story in and it grinds out another thing . . . It comes up with all kinds of crazy lines.”

So far, we have been bamboozled by big AI companies into assuming that we want big AI. Products like ChatGPT prioritize authority over personality, predictability over innovation; the same is true even of tools like Notion and Sudowrite, which are directly aimed at writers. All the major stakeholders seem focused on one-size-fits-all solutions, rather than developing tools that allow ordinary people to imagine original, artificial minds—trying out different architectures, applying different training data, embracing glitches, hunches, and obsession. It shouldn’t just be hackers at institutes who get to raise up a sprite trained on “O Superman” and Metal Machine Music, nor should coders in the wild constrain their imaginations to the brushed-metal monotony of Bard and Midjourney. Eloquent and accurate responses aren’t the only benchmarks for worthwhile tech. We need the AI equivalent of A New Session, a contemporary art journal drawing on glitchy, decades-old tech; we need more open-source platforms like Hugging Face; we need mischievous collectives and a thousand regional scenes. Some are nervous about AI developed in China or India or the UAE—in fact we should encourage even more of these, in Glasgow and Hawaii and Kerala, wherever communities have unique priorities and different ways of thinking.
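
None of this requires a research lab. As a rough sketch of what the open-source route already looks like, here is how one might pull down a small open model with Hugging Face’s transformers library and turn its temperature up; the checkpoint and settings are illustrative choices, not recommendations:

```python
# pip install transformers torch
from transformers import pipeline

# GPT-2 is small, open, and happily weird; any open checkpoint would do.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "The rose's color was",
    max_new_tokens=30,
    do_sample=True,    # sample instead of always taking the likeliest word
    temperature=1.4,   # turned up, for the wrong (or inventive) choice
)
print(result[0]["generated_text"])
```

A few hundred megabytes, an afternoon, and the sprite is yours to misuse.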

Developers may argue that this change can be made post facto—that once the models are smart and safe enough, they can be individualized. But personality isn’t cosplay. If what we seek is true variety—computerized Ariel and Caliban, Frog and Toad, Sherlock Holmes and Björk and Lisa Simpson—then what we crave are different ways of seeing, different ways of thinking, not just a gifted cybernetic impersonator.

Those who are invested in monumental AI plead safety. They claim their algorithms can only be trusted if they are controlled as tightly as drug manufacturing, or nuclear power. But it’s not clear to me that trust should be the priority, nor that a single strain of technology is more resistant to blight. Mightn’t it be better if we were to distrust these systems? Or at least make them earn it? A panoply of diverse AI, some of them trusted and some of them not, whose qualities are heterogeneous, seems better for humanity, and maybe even for existential risk. People will be suitably skeptical if they deal with crazy AI every day; they’re more likely to remain creative if they assemble their own weird array of tools. And if an evil artificial super-brain ever does wake up, we’ll be glad to have a host of various and free-thinking pals.

Whatever the objectivists and narcissists might say, the history of civilization is not a saga of individual excellence. It’s a tale of collaboration. Two thousand centuries of messy, mortal human beings: banding together and falling apart; working with (and against) the natural world; finding new ways to interact with technology. We invent new tools and eventually they change us: alphabets, lenses, gyroscopes, movable type. Writing gave us literature and also lawyers; electricity gave us atom bombs and reggae. No, it doesn’t always turn out well. But we are at our best, I think, when we listen not to one voice but to many. The future of artificial intelligence is too often portrayed as a competition: to build the fastest, cleverest mind, while society scrambles to keep up. It needn’t be that way—not a competition, or even a struggle.

It could be a chorus.
