Opinion - Hurting AI’s Feelings
There's a famous scene in the 1997 film "Good Will Hunting" where Will (Matt Damon) and his future therapist Sean (Robin Williams) are sitting on a park bench, and Sean is explaining to Will how he can help him.
To paraphrase, he tells him:
You can probably give me the skinny on everything there is to know about Michelangelo, but you can't tell me what it smells like in the Sistine Chapel.
If I ask you about love, you could probably quote me a sonnet by Shakespeare, you’ve probably even had girlfriends, but you can’t tell me what it’s like to wake up next to a woman and feel truly happy.
It might seem odd to open a conversation about Artificial Intelligence (AI) with this quote, but I believe it applies in more ways than are immediately apparent.
Apple recently released a study comparing how AI models handle complex computations and analysis. While we can all agree that Apple have not proven themselves to be at the forefront of AI (with much of their promised Apple Intelligence still yet to see the light of day), the study is interesting not because it purports to show that one AI model is better than the others, but because ALL of the models failed at creative and subjective thinking.
When we think about AI in the base context of computing, everything – when reduced to its atomic level – is just 1s and 0s [1]. Everything is logic-based: black or white, with no allowance for grey. We, by contrast, live in a world where a multitude of factors come into play whenever we plan. When we go to a coffee shop and place an order, what we desire is based on our feelings. We might have a regular order, the same order we place every day; the barista may even get to know us and say, "The usual?" as we walk in the door. But one day, for no reason other than how we feel, we might change that order. AI will never be able to do this. One could argue that AI can randomise, but even that randomisation is based on algorithmic logic, not on any sense of self-awareness. And the core facts of computing dictate that this will never be possible (not in our lifetimes anyway; I can't say what quantum computing may do in the future).

The clue is in the name "Artificial Intelligence": we are given the illusion of intelligence, but all of AI is algorithmic. As Sean says in the scene above, can we really believe that AI will understand the experience of being an orphan just because it has consumed the text of Oliver Twist?
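To make the earlier point about randomisation concrete, here is a minimal sketch in Python (the coffee orders are invented stand-ins, purely for illustration): a computer's "randomness" comes from a pseudo-random number generator, and seeding that generator with the same value produces the same "choices" every single time.

```python
import random

# A pseudo-random number generator is fully deterministic: the same
# seed always produces the same "random" sequence of choices.
COFFEE_ORDERS = ["latte", "espresso", "flat white"]

random.seed(42)
first_run = [random.choice(COFFEE_ORDERS) for _ in range(5)]

random.seed(42)
second_run = [random.choice(COFFEE_ORDERS) for _ in range(5)]

print(first_run == second_run)  # True: no whim, no mood, just arithmetic
```

There is no moment of feeling in there; change the output and you have changed the algorithm, not the "mood".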
If you're wondering what the point is, it's to understand that AI is a tool and, for the rest of our lives, it will remain one. AI will not replace human creativity; it can emulate it, it can replicate it, but it cannot produce it on its own. Human ingenuity and analysis, along with creative, critical and discerning thinking, will become the currency of the future job market. These are the skills we need to be teaching our kids today, for a post-AI world. And we need to understand this ourselves if we are to use AI as an effective tool in aiding our daily lives.
Make no mistake, Large Language Models (LLMs) are here to stay, and they will continue to be called AI. How we leverage those tools and use them to our benefit in our jobs is down to us. If we let them do our jobs for us, then we write our own exit interview in the process. We need to learn to challenge AI models in the way they are learning to emulate us and our "thinking process" (an oxymoron for AI if ever there was one), and recognise that they only replace us if we let them.

It's a risky trap to fall into: the idea that AI can do everything we want, quicker and more easily. It's true that there are many, many tasks that AI can speed up and make more efficient (in my personal experience: writing code more efficiently, better understanding how to apply accessibility, and better understanding the nuances of French grammar, vocabulary and slang!), but these are the only tasks it should be used for. For AI to perform these tasks, it must have learned from previous examples or datasets, from which it makes a "best guess" as to what an output should be. This has the unfortunate effect of plagiarising or (at worst) stealing existing content (the AI "art" conversation is a whole other article). AI should not be used (nor should it be expected) to replace our critical thinking or our creativity. These are human traits and, for now, they will remain so.
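To give a loose sense of what that "best guess" means in practice, here is a deliberately toy sketch (real LLMs predict the next token using neural networks trained on vast datasets; the word counts below are invented purely for illustration):

```python
import random

# Toy illustration of a "best guess": given how often each word followed
# a phrase in some (invented) training text, pick the next word in
# proportion to that frequency.
next_word_counts = {"order": 7, "suspects": 2, "way": 1}

def best_guess(counts: dict[str, int]) -> str:
    words = list(counts)
    weights = list(counts.values())
    # Sample one word, weighted by how often it followed the phrase
    return random.choices(words, weights=weights, k=1)[0]

print(best_guess(next_word_counts))  # usually "order"; never a word it hasn't seen
```

However sophisticated the real thing is, the principle is the same: the output is drawn from what the model has already seen, which is exactly why the plagiarism concern arises.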
Tady Walsh
(P.S. No AIs or LLMs were harmed in the writing of this article.)
[1] The irony of my learning how to use em dashes because of my experience using LLMs is not lost on me…