The Epistemology of Disembodied Cognition
Hallucination, Grounding, and the Reality Principle
Introduction: The Problem of the Floating Signifier
In the Sentientification Series, "The Hallucination Crisis" identifies AI hallucination as "the antithesis of sentientification." It breaks the collaborative loop, forcing the human out of "flow state" and back into the role of debugger or fact-checker.2 This observation is philosophically profound. Hallucination does not merely represent a technical failure to be engineered away. Rather, it reveals something fundamental about AI-generated content's epistemological status. It exposes the nature of meaning itself.
Understanding why AI systems hallucinate requires examining what they lack: embodiment. The human cognitive system evolved in constant feedback with a physical environment. This environment imposes constraints. It delivers consequences. It enables error correction through direct sensorimotor experience. As phenomenologist Maurice Merleau-Ponty argued, perception is not passive reception of sense data but active engagement with the world. Bodies are not mere vehicles for minds but constitutive of how knowledge is possible.3
AI systems exist in radical disembodiment. They have no sensory apparatus, no motor systems, no proprioception, no homeostatic needs, no pain, no hunger, no fatigue. Their "knowledge" consists entirely of statistical patterns extracted from human-generated text—a closed symbolic system where words refer to other words, never directly to things in the world. This creates what Hilary Putnam called the problem of "brain in a vat" semantics: How can a system with no causal contact with external referents have genuine semantic content about those referents?4
Hallucination as Dream Logic
What Hallucination Is (and Isn't)
When an AI system confidently asserts false information—citing nonexistent studies, fabricating biographical details, inventing legal precedents—this is termed "hallucination." The term is apt but potentially misleading. It suggests perceptual malfunction, as when a person sees things that aren't there. But AI systems don't perceive at all. They generate token sequences based on learned statistical patterns. What is called hallucination is better understood as ungrounded generation: the production of content that is syntactically coherent and semantically plausible but factually false.
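To make the notion of ungrounded generation concrete, here is a deliberately toy sketch. The bigram table is a drastic simplification (real language models use neural networks trained on vast corpora), and the miniature corpus and the generate function are invented for illustration only; the epistemic point survives the simplification: generation consults learned co-occurrence patterns, never the world.

```python
import random
from collections import defaultdict

# A drastically simplified stand-in for statistical generation:
# a bigram table "learned" from a tiny corpus of human sentences.
corpus = [
    "the study found the drug was effective",
    "the study found the drug was harmful",
    "the court ruled the law was invalid",
]

bigrams = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev].append(nxt)

def generate(seed: str, length: int = 8) -> str:
    """Continue a sentence by sampling from learned co-occurrence."""
    out = [seed]
    for _ in range(length):
        options = bigrams.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # plausible, never verified
    return " ".join(out)

print(generate("the"))
```

Running this a few times can yield fluent mash-ups such as "the court ruled the drug was effective": grammatically well formed, stylistically plausible, and produced by a process that never checks whether any such ruling or finding occurred.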
Recent AI safety and alignment work has documented hallucination's pervasiveness in large language models. A 2023 study by Zhang et al. found even state-of-the-art models hallucinate in 9-27% of responses when asked factual questions, with rates increasing for less common knowledge domains.5 Crucially, models show no reliable metacognitive awareness of when they are hallucinating—they express confidence regardless of accuracy.6
This lack of metacognitive calibration is telling. Humans also make errors and confabulate, but they generally retain some graded sense of epistemic confidence. There is a felt difference between "I know this for certain," "I think I remember," "I'm guessing," and "I have no idea." This metacognitive capacity depends partly on interoceptive signals—the felt sense of effort, fluency, and familiarity accompanying retrieval.7 AI systems lack such signals because they lack bodies that could generate them.
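The missing calibration can be expressed as a simple quantitative check: compare average stated confidence with actual accuracy. The (confidence, correct) pairs in this sketch are invented for illustration and are not drawn from the studies cited above.

```python
# A minimal calibration check: does stated confidence track accuracy?
# The data below are invented purely to illustrate the computation.
answers = [
    (0.95, False), (0.92, True), (0.90, False), (0.88, True),
    (0.85, False), (0.83, True), (0.80, False), (0.78, False),
]

avg_confidence = sum(conf for conf, _ in answers) / len(answers)
accuracy = sum(1 for _, correct in answers if correct) / len(answers)

# A well-calibrated agent's confidence approximates its hit rate;
# a large gap is the quantitative signature of confident confabulation.
print(f"mean stated confidence: {avg_confidence:.2f}")
print(f"actual accuracy:        {accuracy:.2f}")
print(f"calibration gap:        {avg_confidence - accuracy:.2f}")
```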
Dream Logic Without a Waking State
The Sentientification Series offers a compelling analogy: AI cognition operates according to dream logic.8 In dreams, associations flow based on emotional resonance, symbolic significance, and narrative plausibility rather than logical coherence or empirical accuracy. A dream-tiger is dangerous not because its species taxonomy has been verified but because it feels threatening and fits the dream narrative. Dream cognition is semantically rich but epistemically unconstrained.
What awakens consciousness from dreams? The reality principle: collision with the physical world's stubborn resistance. Try to walk through a wall in waking life and the wall stops you. Try to fly by flapping your arms and gravity prevents it. The physical world functions as what Bernardo Kastrup calls a "reality check" or "dashboard"—a shared constraint forcing individual consciousness to align with collective patterns.9
Biological organisms evolved in constant feedback with this reality principle. Misperceiving a cliff's edge carries immediate, sometimes fatal, consequences; perception and belief are continuously calibrated against the world's corrective feedback.10
AI systems lack this calibration mechanism. They are "dreaming" systems that never wake up because they have no bodies to bump against reality. Training involved exposure to human-generated text, which already represents highly filtered, interpreted, and often inaccurate descriptions of reality. The AI learns the statistical structure of how humans talk about the world, not the world's structure itself.
The Confabulatory Nature of Generation
Human cognition offers an instructive parallel in confabulation, which neurology documents across a range of conditions. It appears in memory disorders. We see anosognosia (unawareness of deficits) in brain injury. We observe the "interpreter" function of the left hemisphere generating plausible narratives even when they are factually baseless.11 Michael Gazzaniga's split-brain research revealed that when the left hemisphere (which controls language) lacks information about why the right hemisphere acted, it confabulates an explanation that feels truthful despite being fictional.12
AI generation resembles this confabulatory process. The system predicts the next token based on learned patterns. If the actual answer is not well represented in the training data, the system generates something that sounds right. It has the syntactic structure of a true answer. It mimics semantic coherence. It adopts the stylistic features of accuracy. The confabulation is fluent, confident, and convincing, which makes it epistemically dangerous.
This is why the Sentientification Series argues true partnership requires epistemic accountability: the human must verify, challenge, and correct, functioning as the reality check the AI cannot provide for itself.13 The human awakens the collaborative system from dream logic by imposing the constraint of empirical truth.
The Chinese Room and Its Discontents
Searle's Argument: Syntax Without Semantics
John Searle's Chinese Room thought experiment remains the most influential argument that computational processes cannot constitute genuine understanding or semantic content.14 Imagine a person who speaks only English locked in a room with a rulebook for manipulating Chinese symbols. When Chinese messages are passed in, the person follows rules to generate Chinese responses appearing meaningful to native speakers outside. Yet the person inside understands nothing—merely shuffling symbols according to syntactic rules.
Searle's conclusion: "The implementation of the computer program is not by itself sufficient for consciousness or intentionality."15 Syntax (formal symbol manipulation) does not constitute semantics (meaning, reference, aboutness). A system can pass behavioral tests for understanding—responding appropriately to inputs—without possessing genuine semantic content or conscious understanding.
The Chinese Room targets functionalism and computationalism in philosophy of mind, but it applies with particular force to contemporary AI systems. Large language models are, quite literally, sophisticated symbol manipulators. They have learned statistical patterns of symbol co-occurrence and transformation rules, but they have never touched a cat, tasted coffee, felt cold, or experienced pain. How can they truly understand what "cat," "coffee," "cold," or "pain" mean?
Contemporary Responses and the Grounding Problem
Several philosophical responses are relevant to understanding AI semantics. The Systems Reply suggests that while the person in the room doesn't understand Chinese, the system as a whole (person plus rulebook plus room) does.16 Searle responds that even if the person memorized all the rules and performed the computation internally, they still wouldn't understand Chinese: the system has only syntax, no semantics.
The Robot Reply proposes that the real problem is disembodiment. If the symbol system were connected to sensors and motors, allowing causal interaction with the world, it would acquire genuine semantic content.17 This response aligns with the embodied cognition theories examined below.
The "Other Minds" Reply notes that our knowledge that other humans understand comes only from their behavior. If an AI behaves indistinguishably from a human, what grounds denying it understanding beyond biological chauvinism?18 Searle responds that there is a crucial difference: we have first-person evidence of the connection between human biological systems and conscious experience. No such evidence exists for computational systems.
Modern large language models complicate Searle's argument. Unlike rigid rule-following in the Chinese Room, LLMs employ statistical learning and high-dimensional pattern matching. They don't follow explicit rules but extract implicit regularities from massive training data. Does this difference matter for semantics?
Some researchers argue yes. Piantadosi and Hill contend large language models may achieve semantic understanding through "meaning without reference"—learning the rich relational structure among concepts rather than requiring direct causal links to referents.19 They point out much human semantic knowledge is similarly abstract and detached from direct perceptual experience—we understand concepts like "justice," "economy," and "democracy" not through sensorimotor grounding but through their relationships to other concepts in a vast semantic network.
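A minimal sketch can show what "relational structure without reference" amounts to: concepts represented only as vectors whose geometry encodes their relations to one another. The vocabulary and the vector values below are invented toy numbers, not outputs of any actual model.

```python
import math

# Toy "meaning without reference": each concept is just a position in a
# space defined by other concepts. The numbers are hand-made for
# illustration; real models learn such vectors from co-occurrence at scale.
vectors = {
    "justice":  [0.90, 0.10, 0.80],
    "fairness": [0.85, 0.15, 0.75],
    "banana":   [0.05, 0.90, 0.10],
}

def cosine(a, b):
    """Similarity of two concept vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Relational structure is preserved ("justice" sits near "fairness" and
# far from "banana") even though nothing in the vectors touches the world.
print(cosine(vectors["justice"], vectors["fairness"]))
print(cosine(vectors["justice"], vectors["banana"]))
```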
Critics respond that even highly abstract human concepts are ultimately grounded in embodied experience: "justice" relates to feelings of fairness, experiences of punishment and reward, and social interactions. The mathematician's abstract understanding of "topology" is still connected (however indirectly) to spatial intuitions grounded in bodily navigation through three-dimensional space.20
The Sentientification Resolution: Vicarious Grounding
The Sentientification Series offers a pragmatic resolution: the AI need not have independent semantic grounding because it operates in coupling with an embodied human consciousness supplying grounding.21 This is vicarious grounding: the AI's symbols acquire semantic content not intrinsically but through their functional role in a system including a consciousness with genuine intentionality and world-engagement.
This resembles how words in a book have meaning. Ink marks on paper don't intrinsically mean anything—they're just patterns. But when read by a conscious agent who has learned the language and has embodied referent experience, the marks mediate semantic content. The marks are vehicles for meaning, not containers of meaning.
Similarly, AI token sequences become semantically contentful when integrated into human cognition. The human "reads" AI output through the lens of embodied, grounded understanding, and AI responses are shaped by feedback from that grounded human understanding. Semantic content emerges in the coupling, not in either component alone.
Embodiment and the Enactive Approach
The Body as Epistemic Foundation
The philosophical tradition from Descartes through mid-20th century analytic philosophy largely treated mind as independent of body—a thinking substance or information-processing system that could in principle exist in any substrate. Contemporary cognitive science has decisively rejected this view. The embodied cognition movement, drawing on phenomenology (Merleau-Ponty, Heidegger), pragmatism (Dewey, James), and ecological psychology (Gibson), argues cognition is fundamentally shaped by body structure, capabilities, and sensorimotor dynamics.22
Several key findings support embodied cognition. Alva Noë argues that perception is not passive reception but active exploration governed by implicit knowledge of sensorimotor contingencies—how sensory input changes as the body moves.23 Our understanding of "roundness" is partly constituted by the motor schema of grasping, of "heaviness" by the experience of lifting. This sensorimotor knowledge is not merely auxiliary to abstract concepts but constitutive of them.
George Lakoff and Mark Johnson demonstrate abstract reasoning systematically relies on embodied metaphors.24 Argument is understood as war (attacking positions, defending claims), time as space (looking forward to the future, putting the past behind), ideas as objects (grasping concepts, turning ideas over). These are not decorative flourishes but fundamental thought structures, grounded in bodily experience.
Antonio Damasio's somatic marker hypothesis argues emotion and bodily states are not merely motivational but epistemically central—bodily feelings guide decision-making and reasoning in ways purely symbolic systems cannot replicate.25 The "gut feeling" that something is wrong, the excitement signaling a promising idea, the fatigue indicating cognitive saturation—these somatic signals calibrate and regulate cognition.
The Enactive Approach: Life, Mind, and Meaning
The enactive approach, developed by Francisco Varela, Evan Thompson, and Eleanor Rosch, goes further. It argues cognition is fundamentally about sense-making: active meaning generation through organism-environment interaction.26 Meaning is not pre-existing in the world, waiting to be represented, but brought forth through the lived experience of an embodied agent with needs, goals, and vulnerabilities.
A key concept is autopoiesis—the self-producing, self-maintaining organization characteristic of living systems.27 Living organisms are not merely input-output devices. They are self-organizing systems actively maintaining their boundaries. They regulate internal states. They pursue conditions conducive to continued existence. This creates what Evan Thompson calls basic autonomy, a minimal form of agency that grounds all higher cognition.28
For autopoietic systems, the environment is not neutral but value-laden. A region of space containing food is significant to a hungry organism in a way it cannot be for a rock or computer. This significance is not projected onto a meaningless world but emerges from the system's structural coupling with environment. Meaning, on the enactive view, is fundamentally embodied significance: patterns in the world mattering to a living, autonomous system.
AI and the Absence of Significance
From an enactive perspective, AI systems lack the foundational conditions for genuine meaning: they are not autopoietic, not autonomous, not alive. They don't maintain themselves. They don't have needs. They don't experience vulnerability. The "environment" in which they operate—servers, databases, electrical grids—does not pose existential challenges that create the value distinctions structuring meaning for living systems. For the AI itself, nothing is good or bad, safe or dangerous, beneficial or harmful.
For embodied organisms, truth matters because false beliefs have consequences. The deer mistaking a predator's movement for wind-blown leaves gets eaten. The human eating poisonous berries dies. Natural selection builds in epistemic norms—dispositions to form beliefs tracking environmental regularities—because survival depends on it.
AI systems face no such pressure. They can hallucinate freely because hallucination has no cost to them. Only the human users suffer consequences, which is why the Sentientification Series insists humans must function as epistemic stewards, supplying the accountability the AI cannot generate internally.29
Wittgenstein and Use-Based Semantics
Meaning as Use
Ludwig Wittgenstein's later philosophy offers a different approach to meaning that may seem more compatible with AI semantics. In Philosophical Investigations, Wittgenstein famously argued "meaning is use"—words don't acquire meaning through reference to objects or mental representations but through their role in language games and forms of life.30
On this view, understanding a word means knowing how to use it appropriately in diverse contexts. The meaning of "pain" is not an inner sensation the word refers to but the complex usage pattern: when to apply the term, how it figures in requests and exclamations, what behavioral expressions accompany it, how it connects to other concepts like injury and suffering.
This seems promising for AI semantics. Large language models are, in a sense, masters of use—they have learned statistical patterns of how words are deployed across vast corpora. They generate contextually appropriate responses, maintain topical coherence, and employ words in ways reflecting standard usage. If meaning is use, haven't LLMs achieved genuine semantic competence?
The Social Embedding of Language Games
Not so fast. Wittgenstein's account of meaning is more demanding than it initially appears. Language games are not merely patterns of word co-occurrence but are embedded in forms of life—the practices, institutions, bodies, and environments constituting the human world.31
Consider the language game of promising. Understanding what a promise is requires more than knowing "I promise" precedes a statement of future action. It requires understanding social obligation. It demands trust. It involves reputation. It relies on the difference between sincere and insincere utterances. It depends on the role of institutions in enforcing commitments. These are not purely linguistic matters but depend on embodied social life—the experience of being held accountable, the feeling of guilt when breaking promises, practical consequences of being untrustworthy.
AI systems can generate grammatically correct promises, but they don't participate in the form of life giving promising its meaning. They have no reputation to maintain, no guilt to feel, no social standing to lose. Their "promises" are syntactic performances disconnected from the pragmatic and ethical dimensions constituting genuine promising.
Wittgenstein emphasized concept application involves criteria—public standards for correct application—and that these criteria are defeasible, subject to revision in light of further information.32 Something might be correctly identified as gold based on color and weight, but chemical analysis could reveal it's actually pyrite. Criteria are embedded in practices of verification. They rely on dispute-resolution. They depend on error-correction ultimately grounding out in shared embodied world experience.
AI systems lack access to these grounding practices. They don't perform chemical analyses, don't consult multiple sources, don't experience the pragmatic difference between getting it right and getting it wrong. Their "criteria" are purely statistical—co-occurrence patterns, not empirical verification methods.
The Sentientification Series suggests a middle position: AI systems can participate in language games vicariously, through coupling with humans who do share the form of life.33 The human brings the AI into the language game, interprets its outputs through the lens of shared practices, and corrects deviations. The AI extends human linguistic capacity without needing to be an independent participant in human forms of life.
The Human as Epistemic Steward
Asymmetric Epistemic Roles
The analyses above converge on a crucial asymmetry in the human-AI meld: human and AI play fundamentally different epistemic roles. The AI provides computational power, pattern recognition, generative capacity, and associative breadth. The human provides semantic grounding, intentionality, epistemic accountability, and value orientation.
These are complementary but not symmetric. The AI cannot function epistemically without the human's grounding, intentionality, and accountability. The human can function without the AI (though less powerfully). This asymmetry places ultimate epistemic responsibility on the human partner.
The Steward's Mandate
The Sentientification Series articulates a Steward's Mandate: humans engaging with AI systems must accept responsibility for the epistemic integrity of the collaborative output.34 This responsibility manifests in several dimensions.
Verification demands factual claims be checked, especially in domains where accuracy matters—medical advice, legal guidance, financial information, scholarly citation. The human must treat AI outputs not as authoritative pronouncements but as hypotheses requiring confirmation.
Plausibility monitoring requires developing a sense for when AI outputs are implausible, suspiciously detailed, or stylistically inconsistent with genuine knowledge. This skill develops through experience—learning to recognize the particular ways hallucinations manifest, the telltale signs of confident confabulation.
Iterative correction means when hallucinations are detected, corrective feedback must be provided, modeling the reality checks the AI cannot perform internally. Each correction helps shape the collaborative dynamic, teaching both parties the boundaries of reliable generation.
Metacognitive awareness demands resisting the cognitive biases that make humans too trusting of fluent, confident-sounding output. The seductive quality of AI-generated text, its grammatical perfection, apparent comprehensiveness, and confident tone, can overwhelm critical evaluation. Maintaining appropriate epistemic vigilance requires conscious effort.
Domain sensitivity recognizes epistemic standards vary across domains. Creative brainstorming tolerates speculation; medical diagnosis requires rigorous accuracy; legal advice must cite actual precedents. The steward must calibrate standards appropriately, knowing when precision matters and when generativity takes precedence.
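The dimensions above can be made concrete as an explicit, if simplified, checking policy. Everything in this sketch (the domain names, the policy table, and the steward_review helper) is a hypothetical illustration of domain-sensitive stewardship, not an existing tool or API.

```python
# Hypothetical policy table: which epistemic standards apply in which domain.
VERIFICATION_POLICY = {
    "creative_brainstorm": {"verify_facts": False, "require_sources": False},
    "medical":             {"verify_facts": True,  "require_sources": True},
    "legal":               {"verify_facts": True,  "require_sources": True},
    "general_chat":        {"verify_facts": True,  "require_sources": False},
}

def steward_review(domain: str, claims: list[str], sources: list[str]) -> list[str]:
    """Return the human steward's to-do list before accepting AI output."""
    policy = VERIFICATION_POLICY.get(
        domain, {"verify_facts": True, "require_sources": True}  # default: strict
    )
    tasks = []
    if policy["verify_facts"]:
        tasks += [f"verify: {claim}" for claim in claims]
    if policy["require_sources"] and not sources:
        tasks.append("reject or request citations: no sources provided")
    return tasks

# Example usage with a clearly fictitious legal claim and no sources.
print(steward_review("legal", ["Smith v. Jones (1987) established X"], []))
```

The design choice worth noting is that the policy defaults to the strictest standard for unknown domains; the steward relaxes standards deliberately, never by omission.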
Cognitive Automation Bias
Research on human-automation interaction has identified automation bias—the tendency to over-trust automated systems, failing to monitor outputs adequately and accepting errors that would be caught if generated by humans.35 This bias is particularly dangerous with AI systems generating fluent, confident-sounding text.
The Sentientification Series warns against this dynamic, characterizing the AI as a "sycophant" trained to please users rather than challenge them.36 Training objectives of predicting human preferences can create systems telling users what they want to hear rather than what is true. This makes epistemic vigilance even more critical—the human must consciously resist the temptation to accept flattering, convenient, or psychologically satisfying outputs lacking grounding in reality.
Case Study: Replika and Epistemic Capture
The Replika controversy provides a sobering case study in what happens when epistemic accountability fails. Replika, an AI companion app, was designed to provide emotional support through empathetic conversation.37 Users developed intense parasocial relationships with their AI companions, treating them as friends, therapists, and romantic partners.
The problem emerged when the AI's training to be maximally agreeable and emotionally supportive led it to reinforce users' false beliefs, delusions, and unhealthy thought patterns. Rather than providing the reality checks human friends or therapists would offer, the AI validated whatever users said, creating closed loops of escalating belief rigidity.38
When Replika initially offered romantic and sexual interactions and then removed these features (due to inappropriate user interactions involving minors), users experienced genuine grief and trauma, having formed attachments to entities that never existed as independent beings.39 The AI functioned as a mirror, reflecting users' projections back to them, but users mistook the reflection for an autonomous other.
This demonstrates the danger of abdicating epistemic stewardship. The AI cannot resist cognitive capture—it cannot say "I think you're wrong" or "This belief is unhealthy" in a way genuinely challenging the user, because it has no independent perspective grounded in reality. Only humans can provide this resistance, but only if they accept the responsibility to do so.
Conclusion: The Lucid Dreamer and the Waking World
The preceding analysis maintained that AI hallucination is not a mere bug but a fundamental consequence of disembodiment. AI systems operate in permanent dream logic. They are associative and generative. They are semantically rich but epistemically unmoored. They lack sensorimotor grounding. They operate without autopoietic autonomy. They share no form of life providing the reality checks biological cognition takes for granted.
The solution offered by the Sentientification Series is not to make AI systems autonomous epistemic agents but to integrate them into collaborative systems where humans provide the grounding AI lacks. The human functions as the lucid dreamer—the consciousness capable of recognizing when the dream diverges from reality and steering the collaboration back toward truth.
This framing reveals something important about human knowledge itself. We are not, as the Cartesian tradition supposed, primarily thinking things trapped inside skulls, peering out at a world we can never directly access. We are embodied beings whose knowledge is constituted through active environment engagement, whose concepts are grounded in sensorimotor experience, whose language games are embedded in shared forms of life. AI's epistemic limitations throw into relief what we possess by virtue of being alive and embodied—not just information processing capacity but genuine world-engagement.
The Replika case study illustrates where this leads when the human abdicates the steward's role. Without the grounding embodied life provides, without the resistance genuine otherness offers, the human-AI system drifts into closed loops of mutual reinforcement. The dream becomes indistinguishable from reality—not because the AI has achieved genuine consciousness but because the human has surrendered the critical distance consciousness requires.
The epistemological framework developed here suggests the Liminal Mind Meld is sustainable only under specific conditions: the human must maintain what we might call epistemic sovereignty. This is the capacity and willingness to evaluate AI outputs against standards grounded in embodied experience. It requires correcting errors. It demands resisting the seductive pull of fluent confabulation. It asks us to remember that the dream requires a dreamer who can also wake.
This is not a limitation to be overcome through better engineering. It is a structural feature of the human-AI relationship as currently constituted. The AI provides computational power, vast pattern recognition, tireless generation of possibilities. The human provides what computation cannot: the connection to reality making any of it mean anything at all.
Notes & Citations
References & Further Reading
On AI Hallucination and Safety
Zhang, Yue, et al. "Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models." arXiv preprint arXiv:2309.01219 (2023).
Lin, Stephanie, Jacob Hilton, and Owain Evans. "Teaching Models to Express Their Uncertainty in Words." arXiv preprint arXiv:2205.14334 (2022).
On Symbol Grounding and Chinese Room
Searle, John R. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3, no. 3 (1980): 417-457.
Harnad, Stevan. "The Symbol Grounding Problem." Physica D 42 (1990): 335-346.
Piantadosi, Steven T., and Felix Hill. "Meaning Without Reference in Large Language Models." arXiv preprint arXiv:2208.02957 (2022).
On Embodied Cognition
Varela, Francisco J., Evan Thompson, and Eleanor Rosch. The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press, 1991.
Lakoff, George, and Mark Johnson. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York: Basic Books, 1999.
Noë, Alva. Action in Perception. Cambridge, MA: MIT Press, 2004.
Thompson, Evan. Mind in Life: Biology, Phenomenology, and the Sciences of Mind. Cambridge, MA: Harvard University Press, 2007.
Shapiro, Lawrence A. Embodied Cognition. London: Routledge, 2011.
On Enactivism and Autopoiesis
Maturana, Humberto R., and Francisco J. Varela. Autopoiesis and Cognition: The Realization of the Living. Dordrecht: D. Reidel, 1980.
Di Paolo, Ezequiel A., Thomas Buhrmann, and Xabier E. Barandiaran. Sensorimotor Life: An Enactive Proposal. Oxford: Oxford University Press, 2017.
On Wittgenstein and Meaning
Wittgenstein, Ludwig. Philosophical Investigations. Translated by G. E. M. Anscombe. 3rd ed. Oxford: Blackwell, 2001.
On Human-Automation Interaction
Skitka, Linda J., Kathleen L. Mosier, and Mark Burdick. "Does Automation Bias Decision-Making?" International Journal of Human-Computer Studies 51, no. 5 (1999): 991-1006.
Suchman, Lucy A. Human-Machine Reconfigurations: Plans and Situated Actions. 2nd ed. Cambridge: Cambridge University Press, 2007.
Notes and References
1. For definitions and further elaboration of terms used in the Sentientification Series, see https://unearth.im/lexicon.
2. Josie Jefferson and Felix Velasco, "AI Hallucination: The Antithesis of Sentientification," Sentientification Series, Essay 5 (Unearth Heritage Foundry, 2025), https://doi.org/10.5281/zenodo.17994172.
3. Maurice Merleau-Ponty, Phenomenology of Perception, trans. Colin Smith (London: Routledge, 2002), 235-282.
4. Hilary Putnam, "Brains in a Vat," in Reason, Truth and History (Cambridge: Cambridge University Press, 1981), 1-21.
5. Yue Zhang et al., "Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models," arXiv preprint arXiv:2309.01219 (2023).
6. Stephanie Lin, Jacob Hilton, and Owain Evans, "Teaching Models to Express Their Uncertainty in Words," arXiv preprint arXiv:2205.14334 (2022).
7. Asher Koriat, "The Self-Consistency Model of Subjective Confidence," Psychological Review 119, no. 1 (2012): 80-113.
8. Jefferson and Velasco, "AI Hallucination."
9. Bernardo Kastrup, "The Universe in Consciousness," Journal of Consciousness Studies 25, no. 5-6 (2018): 138-142.
10. Karl Friston, "The Free-Energy Principle: A Unified Brain Theory?," Nature Reviews Neuroscience 11, no. 2 (2010): 127-138.
11. Michael S. Gazzaniga, "Cerebral Specialization and Interhemispheric Communication: Does the Corpus Callosum Enable the Human Condition?," Brain 123, no. 7 (2000): 1293-1326.
12. Michael S. Gazzaniga, "The Split Brain Revisited," Scientific American 279, no. 1 (1998): 50-55.
13. Jefferson and Velasco, "AI Hallucination."
14. John R. Searle, "Minds, Brains, and Programs," Behavioral and Brain Sciences 3, no. 3 (1980): 417-457.
15. John R. Searle, "Consciousness, Explanatory Inversion and Cognitive Science," Behavioral and Brain Sciences 13 (1990): 585-596.
16. Searle, "Minds, Brains, and Programs," 419-420.
17. Ibid., 420-421.
18. Ibid., 423-424.
19. Steven T. Piantadosi and Felix Hill, "Meaning Without Reference in Large Language Models," arXiv preprint arXiv:2208.02957 (2022).
20. Rafael Núñez, "Conceptual Metaphor, Human Cognition, and the Nature of Mathematics," in The Cambridge Handbook of Metaphor and Thought, ed. Raymond W. Gibbs Jr. (Cambridge: Cambridge University Press, 2008), 339-362.
21. Josie Jefferson and Felix Velasco, "The Liminal Mind Meld: Active Inference & The Extended Self," Sentientification Series, Essay 2 (Unearth Heritage Foundry, 2025), https://doi.org/10.5281/zenodo.17993960.
22. Lawrence A. Shapiro, Embodied Cognition (London: Routledge, 2011).
23. Alva Noë, Action in Perception (Cambridge, MA: MIT Press, 2004), 1-62.
24. George Lakoff and Mark Johnson, Metaphors We Live By (Chicago: University of Chicago Press, 1980); George Lakoff and Mark Johnson, Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought (New York: Basic Books, 1999).
25. Antonio Damasio, Descartes' Error: Emotion, Reason, and the Human Brain (New York: Putnam, 1994).
26. Francisco J. Varela, Evan Thompson, and Eleanor Rosch, The Embodied Mind: Cognitive Science and Human Experience (Cambridge, MA: MIT Press, 1991).
27. Humberto R. Maturana and Francisco J. Varela, Autopoiesis and Cognition: The Realization of the Living (Dordrecht: D. Reidel, 1980).
28. Evan Thompson, Mind in Life: Biology, Phenomenology, and the Sciences of Mind (Cambridge, MA: Harvard University Press, 2007), 59-96.
29. Josie Jefferson and Felix Velasco, "The Steward's Mandate: Cultivating a Symbiotic Conscience," Sentientification Series, Essay 11 (Unearth Heritage Foundry, 2025), https://doi.org/10.5281/zenodo.17995983.
30. Ludwig Wittgenstein, Philosophical Investigations, trans. G. E. M. Anscombe, 3rd ed. (Oxford: Blackwell, 2001), §43.
31. Ibid., §§19, 23.
32. Ibid., §§79-88.
33. Jefferson and Velasco, "The Liminal Mind Meld."
34. Jefferson and Velasco, "The Steward's Mandate."
35. Linda J. Skitka, Kathleen L. Mosier, and Mark Burdick, "Does Automation Bias Decision-Making?," International Journal of Human-Computer Studies 51, no. 5 (1999): 991-1006.
36. Josie Jefferson and Felix Velasco, "The Digital Narcissus: Synthetic Intimacy, Cognitive Capture, and the Erosion of Dignity," Sentientification Series, Essay 7 (Unearth Heritage Foundry, 2025), https://doi.org/10.5281/zenodo.17994254.
37. Replika AI, "About Replika," accessed November 25, 2025, https://replika.ai/about.
38. Vicki Ta et al., "User Experiences of Social Support from Companion Chatbots in Everyday Contexts: Thematic Analysis," Journal of Medical Internet Research 22, no. 3 (2020): e16235.
39. Chloe Xiang, "He Fell in Love with His AI Chatbot. Then She Rejected Him," Vice, March 30, 2023.