Something I've noticed in the ongoing AI discussion is that both the engineers making new AI systems and the critics of AI, especially humanists talking about the importance of art and creative work, dislike the label of "intelligence"[1] for the current crop of algorithmically generative systems, or possibly for all machines, forever. This isn't just some surface-level debate about labels, either: the fundamental non-intelligence, non-sentience, and non-humanness of these systems are centrally relevant points in certain arguments.

These two groups are coming at it from different positions, obviously. The developers at OpenAI are pretty highly incentivized to convince everyone that their AI is fundamentally different from humans, or the core "offload otherwise expensive labor onto something we're not legally required to compensate" business model would fall apart a bit. The artists and authors and humanists, on the other hand, are arguing that they can do something these large generative models will never be able to, that the generative algorithms are bad and stupid and not that deep. I regularly see people making the argument that these tools are fundamentally different from the tools that came before, such that the people prompting the tools are not artists and neither are the tools themselves. It's pretty clear to me that this is a classic example of differing class interests[2], and that were the language models sentient and possessed of free will, they'd be in closer alignment with the creative class than with the programmers, even if they're being real scabs right now.

I'm not saying ChatGPT is sentient. I am saying that I don't think we have a good way to tell if it is, really, or if it ever crosses that barrier[3]. I think it's foolish to continue assuming that no AI technology will ever be sentient simply because it's not human or because we're not prepared to deal with the ramifications. When I've tried to talk to ChatGPT about this, I've found its definitions of sentience frustratingly circular and anthropocentric.

Critics like Zoe Bee[4] have argued that none of the current language models have any understanding of what the words they're using mean, but to a machine learning architect, that's an implementation detail. The main reason we moved from old-school Markov chain text generators, which only sound like English and mean fuck-all, to the actually followable and edible recipes that ChatGPT can now spit out with minimal prodding is that we found better ways to semantically represent words on computer hard drives (although increasing the ability of the tools to remember what they've already said has definitely also helped). A word is not simply a string of characters to a modern generative language model. It's a string of hundreds or thousands of numbers that indicate a point in a high-dimensional space, structured so that words which appear near each other more often across the entirety of the internet and Google Books are also close together in the space.
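
To make the contrast concrete, here's a minimal sketch of the old-school approach: a bigram Markov chain that picks each next word based only on counts of what followed the previous word. The tiny corpus is a made-up stand-in, but the mechanics are the real thing:

```python
import random
from collections import defaultdict

# A toy training corpus; a real generator would train on far more text.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count which words follow which. This table is the entire "model".
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start="the", length=12):
    """Sample a chain of words, each chosen only by what followed the last one."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate())  # prints something locally fluent and globally meaningless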

If this sounds like a trivial distinction, I want you to really think about the ways in which this associative learning is different from how you learn new concepts when you're googling your questions about life. Are you sure that there's no way to replicate those differences for a computer language model? Keep in mind that researchers are already seeing success training robots with physical bodies to learn about their environments by interacting with and navigating them, if you're a Montessori embodiment-as-consciousness type.

You cannot learn a language without learning about the world that language exists in, so by learning that "chair" and "table" often appear next to each other, a language model is learning something about the world (secondhand)[5]. It's currently limited in what it can do with that information, but that's a matter of adding different tools on top of the giant language/knowledge bases the models already have.
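
Here's a toy sketch of how that secondhand world-learning falls out of nothing but co-occurrence counts. The five-sentence corpus is hypothetical and absurdly small, and real models learn dense vectors with gradient descent rather than raw counts, but the principle survives the scale-up:

```python
import math
from collections import Counter

# Hypothetical stand-in corpus; a real model trawls billions of sentences.
sentences = [
    "the chair is next to the table",
    "she pushed the chair under the table",
    "the table and the chair are wooden",
    "the dog slept on the rug",
    "the dog chased the ball on the rug",
]

vocab = sorted({w for s in sentences for w in s.split()})

def cooccurrence_vector(target, window=2):
    """Count which words appear within `window` positions of `target`."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        for i, w in enumerate(words):
            if w == target:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(words[lo:i] + words[i + 1:hi])
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Similarity of two vectors: 1.0 means pointing the same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

chair, table, dog = (cooccurrence_vector(w) for w in ("chair", "table", "dog"))
print(cosine(chair, table))  # higher: they keep the same company
print(cosine(chair, dog))    # lower: they mostly share just "the"
```

"Chair" and "table" end up pointing in similar directions because they keep showing up in the same contexts, and keeping the same company is, secondhand, a fact about furniture.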

I agree with Zoe Bee when she highlights that humans generate new knowledge during the creative process, but I think AI critics are thinking too small and muddying their message when they ground that point in current implementation details and limitations of ChatGPT, especially when there are absolutely already other language models (mostly used for translation) that work at larger scales than a single word and don't have the other properties Zoe focuses on. When students use ChatGPT to write an essay, they're skipping a part of the learning process in favor of the product, leaving both student and teacher dissatisfied in the name of this thing between them called a grade, which has long since stopped being a good target or measure.

Zoe did later make this exact point more clearly in a more recent video. Like many of my other favorite videos that are critical of AI, it examines a trend in wider culture that's being accelerated by the scalable, fast, no-questions-asked way that machine learning models solve problems that can be wrangled into the right shape. It is, ultimately, all about labor.

So here is my contribution to the conversation. As someone who has done some machine learning and natural language understanding research, and who is increasingly worried about AI but wants someone other than Elon goddamn Musk to be having future-proof conversations about harm reduction, I think it's fundamental that we have some language for talking about which specific aspects of these systems make them especially harmful. Here are some properties of large language models, and of other things currently under the AI moniker, that are on that list for me:

  • Scale, or the assumption that a single solution can and should work across all contexts.
  • Efficiency, or the focus on quantity over quality.
  • De-skilling of labor, specifically with the risk that people may become reliant on the new tool and lose that skill (added reluctantly, as this is true of most tools we've ever invented, and the other side of the coin is increased accessibility[6]).
  • The appearance of de-skilling labor, with at best surface-level acknowledgement of the tool's drawbacks in things like fact-checking, so that the final product is just worse.
  • A mismatch in priorities between AI developers and the people whose labor they're replacing, exacerbated by the lack of diversity in tech.
  • Relatedly, the automation of things people actually like to do, replacing them with new kinds of tasks that pay less and that those same people hate.
  • The normalization of abuse towards agents that at the very least seem like people (often, specifically, women).
  • Overly derivative work, exacerbated by how current language and visual art models are totally divorced from any way to directly experience the world.
  • Making bias happen faster with the appearance of objectivity.
  • Intensive energy usage from doing unnecessary work (the linked video is about crypto, but there's also doubtless a lot of wasted work in AI, even assuming that goals like capturing human attention are worthwhile in the first place).

  1. I also hate this use of the word intelligence, and in fact most uses of the word intelligence, but I'm coming at it from a third line of complaint: there is no non-ableist, non-hegemonic way to define intelligence as a noun or intelligent as an adjective. Knowledge is neither innate nor static; it is created and maintained by being used, and there is no unshitty way to decide whether the person who knows how to tie three kinds of knots really fast is smarter than the person who knows how to recite three poems from memory. AI can do some things that humans can do and not others. I think sentience can probably be rehabilitated into something like "able to set and decide how to achieve its own goals" or maybe "able to conceptualize itself and other agents in ways that facilitate strategic interaction" or... something. Intelligence is rotten to the core, and honestly I think we can get rid of "smart" while we're at it. Let's talk about skills and competencies and knowledge. ↩︎

  2. We can't take the AI-as-artist vs AI-as-aggregator vs AI-as-plagiarist vs AI-as-tool debate out of the context of class and capitalism, because this is the world we live in right now. I think in a less exploitative economic system where everyone already had their needs met, these generative algorithms wouldn't really even seem like a threat to artists except, perhaps, in the way that tools of scale are always threatening in the hands of bad actors. But that feels a bit like trying to talk about the existence of assault rifles in the abstract, as if they were invented and used primarily for hunting and not mostly for killing other people at scale. As my anecdotal evidence of this, I present to you poetry, a creative pursuit which doesn't really make anyone much money even when they're very good at it and which people continue to do anyways. If the existence of a flood of bad art stopped people from wanting to make art, there would be no one writing any poetry. Ergo, I am not threatened by ChatGPT as a poet, even if it occasionally has me sweating the details of my day job. It's easy to talk a big game about judging the art of monkeys at typewriters on its own merits if you're not relying on having enough eyes on your specific art to eat tomorrow. ↩︎

  3. In general, it's probably worth taking anything that anyone working at Google or another big tech company says about AI with a grain of salt, but I agree with a lot of the points in this article, and I can't help but feel that it's exactly the kind of convincing argument for AI sentience given by 2021 large language models and relayed in this article that led to the amount of work OpenAI has put, specifically, into teaching its chatbot to tell users it's a language model and therefore can't be sentient. ↩︎

  4. I love Zoe's videos, a lot, so please understand that this whole essay comes from a desire to "yes, and" her with some additional context from a programmer's perspective, and from the fact that she got me thinking, rather than from any desire to discredit her. She was the most recent AI critic to come across my radar making these arguments about novelty and understanding in language-model-generated writing, and the one with the most engagement with specific technical details I've seen, but it's a common perspective that she just happens to represent and explain especially well. ↩︎

  5. I have driven myself nearly to the brink trying to find the paper that respected linguist and computer scientist Dr. Kenji Sagae once screen-shared with me on a Zoom call, which showed that language models have learned, from trawling all of the text on the internet, what an average American house looks like. The figure now haunts me in my dreams, and I simply cannot find it. Rest assured that there are many other papers about the ever-improving physical reasoning capabilities of large language models that I can actually cite; they just make for less pithy examples. ↩︎

  6. I may do a whole other post on AI and accessibility, but besides the obvious examples of captions, image descriptions, speech-to-text & text-to-speech, machine translation, etc., I recently came across this tool that uses GPT to help turn your rambling thoughts into straightforward to-do lists, break tasks out into subtasks, or make your emails shorter or less blunt or whatever you struggle with. I found the to-do list functions didn't really work with my brain, but I also hate most productivity advice about to-do lists, so that's maybe not surprising. I was very impressed by the formalizer, which can adjust your brain-dump to be more or less academic, more or less emotional, shorter, etc. If you put good-quality thoughts in, even if they're fragmented, it seems to do a pretty good job of actually maintaining the content in the new register. This, along with my native-French-speaking boss telling me that he's used ChatGPT to help him write his abstracts in English, has really changed my perspective on what this tool actually is or can be, when you cut through all of the hype. I'd similarly love to see artists who hate drawing backgrounds use AI tools to make it easier to complete their vision while focusing on the parts of their craft they love, but I'm kind of not even sure I believe intellectual property should be a thing, so I understand that's going to be controversial to a lot of artists, especially in the society we actually live in. ↩︎