Taking the “I” Out of “AI”

Should LLMs be allowed to use first-person references? And why there is no "I" in "AI"

There have been some interesting developments in the world of what is conveniently if incorrectly called AI, including a recent study in the medical-surgical literature which finds ChatGPT pretty uniformly useful for creating reading-level-appropriate patient education materials. This is important, as physicians — and basically anyone with an advanced degree who is not trained to do so — cannot write to a lower grade level. As a result, many patient brochures and materials are written at higher grade levels than most patients can use. Even Wikipedia is written at too high a reading level for most users.

Used like this — with the AI taking existing materials and simply downshifting the reading level, and the results confirmed for accuracy by an expert in the medical field in question — things like ChatGPT function as a useful tool behind the scenes. Its role in the final product may not even be known or apparent.

But what if the pamphlets were to include sentences like, “After my procedure, I always like to get as much rest as possible and eat only healthy snacks”? Or an interface designed for patients were to respond, “We here at Mountain Medical believe in holistic health approaches”?

Is writing like that appropriate for something like ChatGPT? Should large language models (LLMs) be allowed to use words like “I” and “we” in responses? Do we need to do more to prevent LLMs from appearing to have agency and individuality?

Kevin Munger, a professor at Penn State, gave a keynote speech at the European Association for Computational Linguistics in Dubrovnik earlier this month. He proposed banning LLMs from referring to themselves in the first person. As he writes:

They should not call themselves “I” and they should not refer to themselves and humans as “we.” . . . As an immediate intervention, this will limit the risk of people being scammed by LLMs, either financially or emotionally. The latter point bears emphasizing: when people interact with an LLM and are lulled into experiencing it as another person, they are being emotionally defrauded by overestimating the amount of human intentionality encoded in that text.

Additional benefits of such a proscription would include easier detection of LLM-generated texts and the potential for clear and enforceable legal boundaries on what LLMs can pretend to represent. For instance, an LLM might not be able to write a love song, or a rousing anthem, as first-person words would be prohibited.
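
Munger’s proposal is a norm rather than a piece of software, but it is easy to imagine publishers or interface builders enforcing it mechanically. Below is a minimal, purely illustrative sketch in Python; the pronoun list, regex approach, and function name are assumptions made for the sake of the example, not anything specified in the keynote.

import re

# Illustrative only: a crude post-processing check that flags first-person
# pronouns in generated text. A real enforcement or detection system would
# need to handle quotation, dialogue, and multilingual text, among much else.
FIRST_PERSON = re.compile(
    r"\b(I|me|my|mine|myself|we|us|our|ours|ourselves)\b",
    re.IGNORECASE,
)

def flag_first_person(text: str) -> list[str]:
    """Return any first-person pronouns found in a piece of generated text."""
    return FIRST_PERSON.findall(text)

sample = "After my procedure, I always like to get as much rest as possible."
print(flag_first_person(sample))  # ['my', 'I']

Even this toy check would catch the hypothetical pamphlet sentence above, which is exactly the kind of faux-personal language the proposal is meant to strip out.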
