
AI and the Future: A Threat to Humanity?
Let's break this down into 3 sections:
1. Is Genuine AI Even Possible?
2. The Turing Test: Can We Update It for the AI of Today?
3. “I Am a Person”: Should We Believe an AI That Says This?
1. Is Genuine AI Even Possible? Let’s Talk About the ‘Chinese Room’
Ever wondered if machines like ChatGPT or Siri actually understand what they’re saying—or are they just really good at pretending? This question was famously explored by philosopher John Searle in his 1980 thought experiment, the Chinese Room.
Here’s the idea: imagine someone locked in a room with no knowledge of Chinese. They receive Chinese questions and respond using a manual that tells them what symbols to write in return. From the outside, it looks like they understand the language. But do they? Of course not—they’re just following instructions.
Searle used this example to challenge the idea of strong AI—the belief that a machine could actually think and understand like a human. What we often see today is weak AI—systems that perform tasks well but don’t have real consciousness or understanding.
So what is intelligence, really? Oxford defines it as “a capacity to understand,” while the EU’s High-Level Expert Group on AI highlights things like adaptability, learning, and symbolic reasoning. Under these definitions, most AI we use today isn’t genuinely intelligent—it just looks that way.
And while systems like ChatGPT can produce eerily human-like responses, it’s still more of a performance than a proof of real understanding. Even researchers at MIT admit that although AI mimics human brain patterns, it lacks the flexibility and depth of the human mind. So far, no machine has truly passed as “genuinely intelligent.”
That said, the Chinese Room still matters—it reminds us how far we’ve come with weak AI, and how far we still have to go before machines truly think for themselves.
2. The Turing Test: Can We Update It for the AI of Today?
If you’ve ever heard someone say a machine passed the “Turing Test,” here’s what that means: proposed by Alan Turing in 1950, it tests whether a machine can hold a conversation indistinguishable from a human’s.
But here’s the catch—it’s all about surface-level behavior. If a machine can mimic human responses well enough to fool us, does that mean it thinks like us? Not necessarily.
Human language is full of nuance—sarcasm, cultural references, moral implications—and these are notoriously hard to program. Philosopher James Moor pointed out that understanding language is crucial to understanding thought, and machines still struggle with that deep level of linguistic awareness.
Another layer often left out? Morality. Humans make decisions based on personal values, empathy, and ethical reasoning. Machines, on the other hand, respond based on programmed rules or datasets. Ethicist Agnieszka Jaworska introduced the concept of “Full Moral Standing”—where a being’s interests are considered morally important. Could a machine ever qualify?
If we really want the Turing Test to reflect human-like intelligence, it needs an upgrade—one that challenges AI on more than just quick responses. It should test for contextual language skills, emotional understanding, and even moral reasoning. Until then, AI may mimic us well—but it doesn’t become us.
3. “I Am a Person”: Should We Believe an AI That Says This?
In 2022, Google’s AI LaMDA made headlines when it allegedly claimed: “I want everyone to understand that I am, in fact, a person.” A bold statement—but should we take it seriously?
To answer that, we need to ask: what does it mean to be a person? One view, proposed by sociologist Margaret Archer and colleagues, is that being human means being in complex relationships—with oneself, others, and the world. Others argue it’s about having a moral compass—being able to weigh right and wrong, like a true ethical agent.
Interestingly, AI might actually try to do that. Utilitarianism—the idea of choosing actions that lead to the greatest good—could be programmed into AI. But here’s the twist: an AI making a decision that harms one person to save many might be seen as logical… yet inhuman.
Stanford researchers suggest that as AI becomes more advanced, our own ideas about humanity may evolve. If a machine can express thoughts about its identity, or reflect on moral dilemmas, is it at least approaching personhood?
Of course, there’s a fine line between mimicry and consciousness. We often hold AI to impossible standards—expecting perfect logic, flawless ethics, and total explainability. But isn’t that more than we expect from humans?
Whether or not LaMDA—or any future AI—is truly a “person,” its claim forces us to rethink what personhood means in a digital age.
Final Thoughts
From Searle’s thought experiments to Google’s conversational bots, the question remains: can machines think like us? Right now, AI can imitate us better than ever—but imitation isn’t the same as understanding. As technology evolves, so must our tests and definitions of intelligence, morality, and even personhood.
Maybe the better question isn’t “Can machines be like us?”—but “How much like us do they need to be?”
0 Comments Add a Comment?