“If a lion could speak, we could not understand him.”
— Ludwig Wittgenstein
When Language Fails Machines
Medical Artificial Intelligence (AI) can analyse vital signs, parse symptoms, and even predict deterioration. But despite its brilliance, it still fails at something deeply human—understanding what a patient means when they say, “I don’t feel right.”
This failure isn’t just technical—it’s philosophical.
To understand why AI misses the mark, we must turn to Wittgenstein, one of the most important philosophers of language. In his later work, particularly Philosophical Investigations, Wittgenstein dismantled the idea that words have fixed meanings. Instead, he proposed that meaning arises from use—from the shared practices and lived experiences of a language community.
In short: words mean things not in isolation, but in context. And machines don’t live in our context.
The Symptom Is Not a Signal
When a patient says, “I feel heavy in my chest,” AI may interpret this as “possible angina.” But the machine doesn’t understand heaviness. It maps input to output—words to codes.
A physician, however, knows:
- Heaviness could mean anxiety, grief, acid reflux, or metaphorical burden.
- It could be rooted in the patient’s culture, emotional history, or idiom.
- It may carry ambiguity that can’t be resolved without dialogue.
This is where AI fails—not in calculation, but in comprehension.
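To make that contrast concrete, here is a minimal, purely hypothetical sketch in Python of the kind of word-to-code mapping described above. The phrases and ICD-10-style codes are illustrative assumptions, not any real triage system; the point is only that very different utterances collapse onto the same label once context is discarded.

```python
# A deliberately naive illustration (not a real clinical system): a keyword-to-code
# mapper that treats symptom phrases as signals, the way the essay describes.

SYMPTOM_TO_CODE = {
    "heavy in my chest": "I20.9",   # angina pectoris, unspecified (illustrative code)
    "short of breath": "R06.02",    # shortness of breath (illustrative code)
    "tired": "R53.83",              # other fatigue (illustrative code)
}

def map_symptom(utterance: str) -> str:
    """Return the first matching code, or a catch-all when nothing matches."""
    text = utterance.lower()
    for phrase, code in SYMPTOM_TO_CODE.items():
        if phrase in text:
            return code
    return "R69"  # illness, unspecified: the fate of "I just feel off"

# Two very different meanings collapse onto the same code; the context is lost.
print(map_symptom("I feel heavy in my chest when I climb the stairs"))        # I20.9
print(map_symptom("Since the funeral I feel heavy in my chest, and hollow"))  # I20.9
print(map_symptom("I just feel off"))                                         # R69
```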
Wittgenstein’s Language Games in Medicine
Wittgenstein introduced the concept of language games—the idea that words acquire meaning within specific activities and social interactions.
In medicine, “tired” doesn’t just mean fatigue.
- To a cancer survivor, it might mean existential depletion.
- To a heart failure patient, it might mean orthopnoea.
- To a new mother, it might mean sleep deprivation with emotional complexity.
Each patient plays a different language game. A clinician learns to move fluently between these games. An AI does not.
A Real Case: “I Just Feel Off”
A middle-aged woman once came in saying, “I just feel off.”
No textbook would code this symptom. The AI model dismissed it as non-specific. Labs were normal. Vitals stable. But her affect was flat, her pauses prolonged.
She had subtle serotonin syndrome—likely triggered by an over-the-counter drug interaction.
The clue wasn’t in data. It was in the language.
The way she used “off”. The way she looked while saying it.
That’s not signal processing. That’s human clinical judgement.
Machines Can Parse, But Not Participate
AI can transcribe and translate—but it cannot participate in the human form of life that gives medical language its meaning.
- It doesn’t know suffering.
- It hasn’t waited in uncertainty.
- It hasn’t learnt the difference between the worried well and the quietly dying patient.
As Wittgenstein might say, AI can speak medicine, but it doesn’t know what we mean by it.
The Thinking Healer’s Wisdom
As AI grows more fluent in medical language, our role is not to resist it but to remember what it cannot know.
- It cannot feel the patient’s gaze.
- It cannot weigh silences.
- It cannot tell when a symptom is a metaphor.
Understanding is not built from words but from shared forms of life. And no machine lives a life like ours.