Why Some Researchers Believe AI Can Mimic Consciousness — But Never Truly Experience It
Artificial intelligence is becoming more human-like every year. It can write, speak, create images, and even appear emotional. But does that mean AI could one day become truly conscious? A recent paper from Google DeepMind challenges that idea in a surprising way.
The paper argues that modern AI systems can simulate consciousness without actually experiencing anything. According to the author, many discussions of AI consciousness rest on a thesis called computational functionalism — the idea that consciousness arises purely from information processing, regardless of the physical substrate running it.
The paper introduces the idea of the “Abstraction Fallacy.” In simple terms, it claims that computation is not an intrinsic feature of the physical world; rather, humans interpret physical signals as symbols carrying meaning. A computer manipulates patterns, but the meaning of those patterns comes from observers, not from the machine itself.
Key ideas from the paper include:
- Simulation is not the same as real experience
- AI can imitate emotions or awareness without possessing them
- Consciousness may depend on physical properties, not only software logic
- Increasing model size alone may never create genuine sentience
The author also notes that this argument is not anti-AI. Advanced systems may still become extremely capable and useful — just not necessarily conscious in the human sense.
Original article: The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness