11 Comments
Sep 2 · Liked by Five Good Hours

'In art, we don’t begin from choice, but from cliché, from a position of determination and dependency. And every proper choice we make in the production of an artwork is achieved by consciously cultivated tactics of resistance to this cliché.'

Nice. A bit like how your values as a person can only really be determined (/proven) by the actions you take that aren't the default for someone in your position.

I came down to the comments section to call out this exact quote, but someone beat me to it. :D

I think my best writing happens after I realize I'm copying someone else, and opt to make a different choice. Not the original trope, or even the 180 subversion of it, but something fresh and original.

loved this post overall—agree that this is THE standout quote & insight from it!!!

Nice piece. I went and read the Chiang article afterwards--never read him before--and found his entire argument pretty weak and simplistic. If we're gonna defend educational practices, reading, writing, teaching, learning, art, etc. from AI, it's gonna have to be better than what he's offering. Art is about "making a lot of choices"? When we write a hundred-word prompt we've made a hundred choices? What does this even mean? He sounds like SBF trying to explain why Shakespeare's no good. Thanks for responding and showing that there are better ways to defend the arts against AI.

author

I wanted to be as generous to him as possible, but I think this is a classic instance of a very smart person thinking that public writing means dumbing down your arguments because the Average Reader isn’t capable of understanding the Real Story. Many such cases, especially in the New Yorker. It fails because it never rises above the level of the individual at the desk, alone, making choices. But it’s progress against the usual boosterism.

If you say he's a very smart person, I'll give him a chance (have seen his story collection around), but yeah, going off this piece he seems to think he has to write at something like an 8th grade reading level. On the other hand, who knows, maybe he's not wrong!

Sep 6 · Liked by Five Good Hours

In Chiang's defense—I really think he's one of the most amazing sci-fi writers working today. His short stories are a masterclass in thoroughly understanding some bit of historical or contemporary science/tech, and then using it to construct a really emotionally moving story.

My read of his New Yorker essay, though, is that he might not be very immersed in the different ways people have justified the social function of art, the role of the artist, what defines artmaking, and whether an artist can really be a solitary, totally autonomous genius making independent choices…or if they are a product of their context, tradition, and so on…or he IS immersed in that, but didn't want to make the essay too complicated?

But I think the question of "what is art?" and "how will art be affected by AI?" is a genuinely complex one—and Chiang's argument could have benefited from a more nuanced definition of what art means to people.

Good point, it’s true that to talk about what makes art “art” you have to learn an entire vocabulary, really a whole lexicon. Maybe Chiang purposely avoided that in order to reach a larger audience.

Sep 3 · edited Sep 3

Thank you for this. I found Chiang's essay weak for similar reasons. Distinguishing between AI and human cognition on the basis of how many choices are made in the process of expressing or presenting creativity seems quite reductive. Looking closely at the place of choice in creative work, what becomes immediately obvious is that it is as much a matter of why certain words are chosen over others as of how many of these choices are made. The choice of one word over another, especially in compelling writing, is often inflected, as you point out, with a range of impressions & allusions, of valences of feeling, such that even if the exact same words are written next to each other in the exact same order they would evoke different layers of meaning. Take, for example, the repetition of 'what did I know' in Hayden's Those Winter Sundays or the repetition of 'I grow old' in Eliot's Love Song of J. Alfred Prufrock. A key reason why AI cannot reproduce these poems is not just that it would make fewer choices; it's also that it cannot understand why two phrases composed of the same semantic choices would read differently individually and in relation to each other.

Excellent stuff! Reminded me a lot of an episode of this one animist-anthropology podcast called “The Emerald” where they talked about how discussions on A.I. bring into question arguments about what counts as “alive” — and therefore — what shapes people (and objects) in a post-humanist sense, outside of the voluntarist mindset. I think the episode was called “So You Want To Be a Sorcerer In The Age of Mythic Powers.” I highly recommend!

I think both this argument and Chiang's are weak, although this one is better formulated, even if I also think it has deeper conceptual problems. I get the impression that no critic investigates much how people use the demon technology beyond maybe taking a look at ChatGPT, Dall-E, or something like that. Chiang's essay is more representative of that problem, as in this passage:

"If an A.I. generates a ten-thousand-word story based on your prompt, it has to fill in for all of the choices that you are not making. There are various ways it can do this. One is to take an average of the choices that other writers have made, as represented by text found on the Internet; that average is equivalent to the least interesting choices possible, which is why A.I.-generated text is often really bland. Another is to instruct the program to engage in style mimicry, emulating the choices made by a specific writer, which produces a highly derivative story. "

Most people who have used LLMs know that you can adjust the output in both cases, either by having less likely tokens chosen or by instructing the model in terms of multiple authors and getting an "averaged" mimicry that isn't any particular author or even by giving it your own text and making it "derivative" of your own style (or, of course, just by not providing an author).
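The "less likely tokens" adjustment mentioned above is typically done through temperature scaling at sampling time. A minimal, self-contained sketch (toy logits and a toy sampler, not any real model's API) showing how raising the temperature makes rarer tokens more frequent:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Sample an index from logits after temperature scaling.

    Higher temperature flattens the distribution, so less likely
    tokens are chosen more often; lower temperature sharpens it.
    """
    rng = rng or random.Random(0)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Toy logits: token 0 is strongly preferred by the model.
logits = [5.0, 1.0, 0.5]
rng = random.Random(42)
low_t = [sample_token(logits, temperature=0.2, rng=rng) for _ in range(1000)]
high_t = [sample_token(logits, temperature=2.0, rng=rng) for _ in range(1000)]
# At low temperature nearly every sample is token 0; at high
# temperature the rarer tokens appear far more often.
print(low_t.count(0), high_t.count(0))
```

The same knob (often alongside top-p/top-k cutoffs) is what lets a user steer generations away from the "average of the choices that other writers have made."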

While you're right not to focus on a number of choices, autonomy has a similar problem in disguised form, since the "room to maneuver" amounts to an ability to choose between distinct (or novel) alternatives. Although this isn't equivalent to Chiang's criterion, which seems to suggest that some raw number of choices qualifies a text or image as art, it introduces separate problems: how is relative autonomy in relation to a tradition to be determined? Since apparently not everything regarded as art in some sense is included as part of a tradition, what authority decides which works exist within a tradition and which outside it? There have been similar criticisms of Gadamer's idea of tradition (as well as Heidegger's), which this seems related to.

Another question would be how abjectly dependent users of these AI models are in practice. Typically, there is some process of editing a text or image after its generation, at least at the level of having a choice in using or discarding the image, and there are normally choices made during the generation as well, so AI output is rarely just that. I'm not sure why the choices made during and after generating a text/image are neglected in these discussions in favor of a vision of a very passive "AI output consumer" that doesn't hold much in practice.

To give an example of why this might be important: if someone created a dozen AI generated images and put them into a collage resembling something like Yeats's beast in "The Second Coming," maybe as a statement against "AI art," would this person not be an artist because of the generated images? Or does this person become an artist at some point during the editing process? If so, at which point does this person make the requisite number of choices or else achieve a sufficient level of autonomy within the work for the work to be considered art? Or is it only the relative autonomy of the artist that's important, in which case "AI art" could just as well be "real art" as long as it's sanctified by its passage through a "real" artist's power of choice? Regardless, my sense is that any attempt to impose a limit will result in a number of edge cases, hence a proliferation of new limits that try to distinguish between these cases, with each creating new yet different edge cases, and the eventual collapse of such limits altogether.

But, if collaging and editing those images don't qualify the end result as art, whether due to dependence or the number of choices, would "non-art" also apply to the bulk of Henry Darger's work? Maybe only sections or parts? How about Max Ernst's "Une semaine de bonté"? What about poets who used the cut-up technique?

It should also be asked: what "art" are we even talking about in the first place? These discussions tend to be ambiguous as to whether we mean "art" as a label (i.e. descriptively, even though it hardly makes sense when "AI art" already implies an "art" identification in some way) or "art" as some special designation or status. This distinction between non-art things called "art" and a "real art" functions like the old distinction between an unskillful and derivative work of "low" art versus a skillful and innovative work of "high" art, even among people who would otherwise deny such a distinction. You could relate this further to linguistic descriptivism/prescriptivism and nominalism/realism, all of which similarly place into opposition what is to what should (or must) be, is/ought or (analogously) appearance/essence, depending on whether you consider the problem in terms of subject or substance.

Despite saying all of this, I am sympathetic to the "anti-AI art" position, if only because many people associated with AI (including the "existential threat" types) are despicable. I just think these arguments don't work or are at best self-defeating.
