An early and swift warning: yes, I am yet another among the many people—probably too many—writing about large language models (LLMs). If that alone is enough to weary you, you have my complete sympathy and understanding should you decide to stop reading. If it's not enough to extinguish your curiosity, as I hope, please read on.
A second warning: the author of these words admits to being surprised and impressed by the capabilities and the quality of what some of these "machines" produce. This sentiment comes not only from reading about the subject but, above all, from direct experience with several of these systems: LLMs that generate text, and image generators that create visuals from text prompts. In short, I understand the enchantment felt when reading texts or seeing images produced by these systems. I understand because I feel it too. But that enchantment quickly mutates into disillusionment, and even disbelief or secondhand embarrassment, when I encounter comments that reveal a dangerous fascination.
the seductive promise of technological salvation
In the same way we're attracted to the human face—which gives us extraordinary abilities to infer feelings, emotions, and ideas from representations as simple as emojis or rudimentary drawings—there's something moving, albeit in a sadly patronizing and simplistic way, about the enthusiasm that arises when we encounter such intelligence in an "entity" without eyes, nose, or mouth. It's akin to the excitement (and fear) of discovering intelligent extraterrestrial beings. Science isn't necessary; folk wisdom suffices to know that too much enthusiasm can render our attention myopic and our reflections anemic. Passion, whatever its object, as wonderful as it may be to experience, doesn't always contribute to making the best decisions, though it does make deciding and acting easier.
The computer scientist and critic Joseph Weizenbaum, back in 1983, warned of the "magical" effect technology has on us—magical in the sense that we perceive the results it presents as magic[1]. Apple under Steve Jobs, for example, grew precisely by building on this idea: "it works like magic" is a recurring expression in its product launches. Nor is this tactic exclusive to that brand. Advertisements for new technologies, and for new features of existing ones—phones, watches, productivity programs and systems, cars—typically generate enthusiasm through the promise of smooth functioning that doesn't always materialize. As with other forms of advertising, the goal is to enchant us and convince us to buy, literally and/or figuratively, without regard for truth, just like the nonsense a con artist tries to sell us.
We shouldn't underestimate the importance of aesthetics either. I remember using SAP in 2007, and how painful it was to do so. Not only because of the system's slowness and complexity but, especially in my case, because it was ugly. Purists will say that adding "glitter," fluid animations, and a more pleasant appearance won't make a system function better. I concede that aesthetics aren't fundamental to capability. But it's certain that the more pleasant an experience is, the more we'll want to repeat it.
Weizenbaum, at the beginning of our history with computers, warned that "the computer has been a solution in search of problems—the ultimate technological fix that insulates us from having to address problems." Machines have already freed us from many heavy tasks and make our lives easier in many others. But how much better is our life? What have we done with this (supposed) evolution? We've created new problems while leaving unsolved other important ones whose persistence was avoidable. What problems are we neglecting as our fascination-induced myopia worsens?
from outsourcing fundamental capabilities to the (more than probable) degradation of our intelligence
A fellow critic of technology, Lewis Mumford, coined the concept of the "megatechnic bribe": our tendency to overlook the disadvantages of technologies when we're promised a share of their benefits[2]. Individual memory seems like a good example: who still memorizes phone numbers or email addresses? Almost all of us carry mini-computers in our pockets and bags that do this for us. These devices also interfere with collective memory and change how we converse. Who waits for associations, for the "train of ideas and thoughts," when trying to remember which actress starred in that film we liked and want to tell our friends about? Any doubt is now cleared up by a quick and easy phone search.
We can associate this phenomenon with trust, or the lack thereof, as when we doubt the answers others give us using "just" their memory. It's a small step from legitimate doubt to fundamentalist skepticism, where people are no longer believed and only "data" is trusted. Curiously, these dynamics make us more prone to the infiltration of nonsense when we blindly trust data instead of people. Just as we've delegated memory to machines, critical thinking has also been entrusted to them, ceasing to be part of conversational dynamics and relationships. We sacrifice memory and associative capacity for speed. We kill questions, doubt, and uncertainty in favor of commands that guarantee answers. Yet to get better answers, one must learn to ask better questions[3].
Like us, LLMs are lazy, and contrary to what's announced, they don't come to help us deal with laziness. On the contrary, they carry the real risk of increasing it. I'm referring to the type of laziness that leads to the deterioration of our capabilities, granting that there is another type that can help us evolve (consider, for example, the well-known creative effects of leisure and boredom).
It's easy to find a pattern in the many writings available on the subject, particularly in education, where many people are concerned about the harmful effects such tools can have on critical thinking and on the related ability to translate it into written essays. There are also those who use this phenomenon as a pretext to criticize the sector, arguing that we're in an optimal phase to break with antiquated mentalities and outdated methods[4].
"If you don't write better than a machine, why are you even writing?" "We've entered a new world. Goodbye homework!" These two statements were written by two heavyweights (at least in terms of visibility and perceived importance) in today's world, Marc Andreessen and Elon Musk, respectively. I consider these types of comments dangerous as they disregard crucially important dimensions. These supposedly progressive thoughts commonly seem to forget that progress should not be made at the expense of important or even essential losses. What should be lost with progress is what's wrong, what's excessive, and what causes us suffering. Not what allows us to evaluate what's right, what allows us to be more and better.
Thinking about homework, for example, or essay writing, we cannot forget that language (written and/or oral) and thought are intimately connected: improving one dimension improves the other, and vice versa. Manuel Monteiro, in his books "Por Amor à Língua" ("For Love of Language") and "O Mundo pelos Olhos da Língua" ("The World Through the Eyes of Language"),[5] suggests that the liberties we take with language, the way we express ourselves, whether in writing or in speech, reveal some contemporary aspects of our interiority and social functioning: the impoverishment of our capacity for critical thinking; a growing inability to substantiate our opinions; the ease with which we adhere to opinions formulated by others; the difficulty in distinguishing truth, lies, and nonsense.
The existence of systems capable of impressing with the quality and "originality" of their productions should not make the learning process unnecessary. Therefore, it's not just about knowing how to ask good questions to get good answers; the process of finding and formulating questions and the path to conceiving answers are fundamental and irreplaceable. These are activities we should not outsource or delegate to a machine, nor even to another person. It is, must be, an individual pursuit. Nobody can learn or acquire our capabilities for us.
The capability of these language models to sell us nonsense is also notorious. The creators of these systems themselves warn about this and also extend their warnings to the biases, errors, and misinformation their creations can offer users. The internet is full of examples, some hilarious, of LLMs' errors. It's not the errors that concern me. It's our increasing inability to detect nonsense.
By better understanding how large language models work, we can comprehend the reasons behind such errors and biases[6]. At the risk of oversimplifying: these models are very competent at mimicking the semantic competence of humans; and since we "know" they use information available on the internet—"the data"—with a learning capacity far superior to ours, it's easy to accept their answers without questioning them. This is precisely one of the many problems Weizenbaum had already warned us about, in the form of questions: who or what are the sources of the information? What systems and criteria do their creators use to ensure the ethics, justice, and truth of their responses?
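To make that oversimplified explanation slightly more concrete, here is a deliberately naive sketch of the principle involved: a toy bigram model in Python, my own illustration and nothing like a production LLM (which uses neural networks trained on vast corpora). What the toy shares with the real thing is the essential point: text is continued by sampling statistically plausible next words, and nothing in the process checks the output against reality.

```python
# Toy bigram "language model": predict the next word purely from
# co-occurrence statistics in the training text. Illustrative only.
import random
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(word, length=8):
    """Continue a text by sampling statistically likely next words."""
    out = [word]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        # Sample in proportion to observed frequency. The result sounds
        # fluent, but no step here verifies whether it is true.
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat"
```

Scale the same principle up by billions of parameters and terabytes of text and you get fluency that is easy to mistake for knowledge, which is exactly why the questions about sources and criteria matter.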
Outsourcing our essential capabilities, those that make us who we are, is always risky. Taken to the limit, without gaining more and better awareness of ourselves and of how we use the tools at our disposal, we'll be contributing to a form of eugenics of thought and feeling: an artificial "brain" indirectly commanding all the others.
the dystopian fear of human extinction caused by machines
Beyond the fears directed at education, other evident ones translate into concern for the eventual (predictable, according to some opinions) obsolescence of certain professions and activities: journalism, law, consulting, creative writing. Fundamentally, for some, the long-feared scenario of machines taking our place has now begun to materialize. It's the fear of our own obsolescence. We would finally have the terrible answer to the dreaded question "what do we/I do here?": nothing, because a machine now does it in our place. Some detect in these artificial intelligences the potential to be better than us at the demands we consider exclusive to our species, those that make us special, in our eyes, of course.
I have no doubt that AI, in general, can free us from many tasks that gain nothing from being performed by people. In fact, we should already be using some of these systems' existing capabilities to free people from tasks known to cause physical and mental illness. For people, we should maintain the conditions, or create them in the many cases where they don't exist, to add real value, for themselves and for others.
a great opportunity for improvement
If it hasn't been clear until now: this text is not a criticism of AI or of large language models. It's a self-criticism, of us, human beings. We frighten and excite ourselves on ever flimsier grounds, which leads us to adopt or reject novelties at a speed that doesn't match the time we need to really learn.
But this text is not just critical. It's also a manifesto of hope, an alert to the opportunity to improve. By detecting these failures, fears, and longings of ours, we can, for that very reason, devise solutions. It's moments like this that allow us to ask good and important questions; to find quick, though not definitive, answers to problems found or anticipated[7]; and to return our attention to what truly deserves it.
Language is a way of connecting minds and people, in real time and asynchronously. We're still far from connecting with machines in relationships that go beyond utility and transaction, and perhaps we never will, no matter how many of us believe we have deep relationships with some of them. As Tim Leberecht tells us[8], these artificial intelligences "are not capable of establishing relationships [except between the data they collect, I add]: with themselves, with others, with truth, with the future. We, humans, define ourselves through relationships." Perhaps the democratized access to this type of technology, and its predictably rapid evolution, can finally free us to dedicate ourselves to what is really important: improving as humans.
May tools like LLMs continue to exist, and may even more of them emerge, that show us the importance of embarking on a path that allows us to be more critically intelligent, and openly skeptical, and rigorously romantic, and serious without taking ourselves too seriously.
Footnotes
"Computers enable fantasies" - On the continued relevance of Weizenbaum's warnings ↩︎
Manuel Monteiro is a Portuguese author and linguist. His works on language and its relationship with thought and society have been influential in Portuguese-speaking countries. The original titles in European Portuguese are "Por Amor à Língua" and "O Mundo pelos Olhos da Língua." ↩︎
Additional references
- LLMs Are a Mirror of Our Times
- Welcome to the Next Level of Bullshit
- Lies, BS and AI
- AI-Generated Bullshit Is A Challenge To Our "Vigilance"
- AI: Automatic Expensive BS at Scale
- The Undergraduate Essay Is About to Die
- The College Essay Is Dead
- LLMs Are Everywhere. Here's Where They Came From
- The End of Writing
- Economics of AI: A Labor Market View on the Occupational Impact of Artificial Intelligence
- Could AI Do My Job?