Artificial Human Intelligence
Exorcising the Phantom of the Pre-Social Mind
1. The AI Confusion
On December 11, 2025, the chemist Lee Cronin posted on X: “Probabilistic slop engines cannot do science, drug discovery, materials discovery, or magic. Anyone who thinks AI can autonomously do science simply doesn’t understand how knowledge is created.” The physicist Sabine Hossenfelder responded with a question: “How do you think knowledge is created?” Cronin’s answer: “Solving problems or being curious. This needs intrinsic motivation, the desire to survive, & a non computational element most people don’t yet understand. My thesis is that Turing machines instantiated in silicon or similar cannot do this.”
This exchange crystallizes the dominant confusion about artificial intelligence. Two camps have formed: one sees AI as proto-superintelligence, an alien mind that will soon surpass us; the other dismisses it as “stochastic parrots,” sophisticated autocomplete without genuine understanding. Both camps, despite their apparent opposition, share a hidden assumption—that knowledge, intelligence, and understanding are properties of individual minds, whether carbon or silicon, and that the question is simply whether machines possess these properties.
Both camps are wrong, and they are wrong in the same way. The question is not whether AI “understands” in some private, subjective sense. The question is whether AI can participate in the social processes through which knowledge is actually created. And this question has an empirical answer: it can. AI criticism improves human conjectures; human criticism improves AI outputs. The loop closes. If knowledge required the kind of subjective understanding Cronin gestures toward—“intrinsic motivation,” “the desire to survive,” some ineffable “non-computational element”—then peer review would be impossible, because no reviewer ever fully inhabits another researcher’s mind.
To understand what AI actually is, we must first understand what human intelligence actually is. And to do that, we must exorcise a phantom that has haunted Western thought for millennia: the pre-social mind.
2. The Ape That Never Asked a Question
Koko the gorilla learned over 1,000 signs. She combined them creatively. She joked. She lied. She expressed grief when her kitten died. Washoe, Kanzi, and other apes in language studies demonstrated similar capacities. Yet across decades of research, not one of them ever signed “What is that?” or “Why?” or “What do you think?”
This is not a failure of vocabulary. These apes had the signs. It is not a failure of cognitive capacity in any simple sense—they could form novel combinations, understand symbols, even deceive. What they lacked was something more fundamental: the impulse to orient toward another mind’s perspective and build understanding together.
The developmental psychologist Michael Tomasello has called this capacity “shared intentionality”—the ability to engage in joint attention, to think with others rather than merely at them. Human infants display it from around nine months, pointing not just to request objects but to share interest, to direct attention, to establish a common frame. Apes point to request. They do not point to share. In Tomasello’s formulation, this seemingly modest difference underwrites the entire edifice of cumulative cultural evolution that distinguishes human cognition from that of our nearest relatives.
The implications are profound. Language is not the differentiator between human and animal cognition—apes can acquire language, after a fashion. The differentiator is the social-cognitive infrastructure that makes language useful for more than mere instrumental communication. We did not become intelligent and then become social. We became social in a particular way—capable of shared intentionality—and this made possible the cumulative cultural evolution that produced what we call intelligence.
Consider what we actually mean when we call someone “intelligent.” They can grasp concepts that took humanity generations to develop. They can build on problems others have formulated. They can anticipate how others will interpret their words. They can internalize the voice of a critic, a teacher, an interlocutor. The “genius working alone” is never actually alone—she is conducting an internal symposium with the voices of a civilization. Newton stood on the shoulders of giants not merely in the sense that he used their results, but in the sense that he had internalized their modes of thinking and could simulate their objections.
If this is right, then what IQ tests measure is not individual computational power but something like social cognition bandwidth—how effectively someone can plug into the accumulated cognitive achievements of their culture. Intelligence is not a property of isolated minds. It is a capacity for participation in collective cognition.
3. The Three Pillars
The social constitution of intelligence is one pillar of the argument. Two others reinforce it: the history of emotions and the social epistemology of knowledge.
Emotions as Social Products
We tend to think of emotions as the most private, pre-rational aspect of experience—raw feelings that well up from biological depths, universal across cultures and constant across history. The history of emotions, a field that has flourished over the past two decades, challenges this assumption at every point.
Historians like Thomas Dixon, Rob Boddice, and William Reddy have demonstrated that emotional categories, emotional experiences, and emotional norms vary dramatically across time and culture. The concept of “emotion” itself is modern—earlier periods spoke of “passions,” “affections,” “sentiments,” each with different boundaries and implications. Medieval acedia is not modern depression. Victorian nostalgia is not contemporary homesickness. The differences are not merely terminological; people felt differently because they inhabited different “emotional regimes” that shaped which feelings were possible, permissible, and expressible.
As Dixon argues, “what we feel now, how we feel it, and what we think our feelings reveal and express, is historically contingent and far from universal.” The experience of love is not a biological universal waiting to be named; it is “something that must be named in order to exist within a particular culture, and which is shaped by ideas of love that direct our physical sensations.”
If even emotions—which seem to bypass thought entirely—are socially constituted, what remains of the pre-social individual? The phantom grows ever more spectral.
Knowledge as Social Process
Cronin’s exchange reveals a common misconception: that knowledge is created through individual insight, “intrinsic motivation,” and some mysterious subjective element that machines lack. But this is not how knowledge works.
Knowledge is the gradual elimination of degrees of freedom, achieved by matching observations against socially constructed models. It is created through structured disagreement within communities that share enough common ground to make criticism meaningful. The Popperian insight is correct: we do not accumulate certainties; we eliminate errors. But this elimination is not a private mental act—it happens through publication, criticism, replication, debate, and the institutional structures that make these possible.
A scientific paper is not a report of what happened inside a scientist’s head. It is a move in a social game of conjecture and refutation. The equations must be derivable by others. The experiments must be replicable by others. The claims must be criticizable by others. The private mental states of the author—her motivations, her “understanding,” her subjective experience—are epistemologically irrelevant. What matters is whether the claims survive criticism.
This is why AI can participate in knowledge creation. Not because AI “understands” in whatever ineffable sense Cronin demands, but because AI can generate conjectures that survive criticism and criticisms that improve conjectures. The loop closes. The social process works. If knowledge required subjective understanding, peer review would be impossible, because no one ever fully understands anyone else’s mind.
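The structure of this loop can be sketched in a few lines of code. The sketch below is purely illustrative: the helper functions and the toy “swans” example are invented for this essay, and nothing in it depends on whether the proposer or the critic is a human or a model, which is precisely the point.

```python
from typing import Callable, List

def refine(propose: Callable[[List[str]], str],
           criticize: Callable[[str], List[str]],
           rounds: int = 5) -> str:
    """Alternate conjecture and criticism until no objection remains."""
    objections: List[str] = []
    conjecture = propose(objections)
    for _ in range(rounds):
        objections = criticize(conjecture)
        if not objections:                    # survives criticism: provisionally keep it
            break
        conjecture = propose(objections)      # revise in light of the objections
    return conjecture

# Toy stand-ins: either role could be played by a human or by a model.
def toy_propose(objections: List[str]) -> str:
    return ("all swans are white" if not objections
            else "all swans native to Europe are white")

def toy_criticize(conjecture: str) -> List[str]:
    return (["black swans exist in Australia"]
            if conjecture == "all swans are white" else [])

print(refine(toy_propose, toy_criticize))   # -> "all swans native to Europe are white"
```

The loop terminates not when anyone “understands,” but when the conjecture stops attracting objections.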
4. The Phantom
The pre-social individual mind is Western philosophy’s most persistent assumption. From Plato’s solitary soul contemplating the Forms, through Descartes’ cogito established in radical isolation, to the empiricists’ tabula rasa receiving impressions from a world it did not help constitute—the pattern repeats. Even the social contract theorists, ostensibly concerned with society, begin with isolated individuals in a “state of nature” who must somehow construct social bonds from the outside.
Kant inherited this framework and gave it its most sophisticated expression. The transcendental subject constitutes experience according to categories that are its own—categories that precede any encounter with other minds. Morality, for Kant, is grounded in pure practical reason, accessible to any rational being in isolation. The categorical imperative can be derived by a single mind thinking clearly about the logical structure of maxims.
When Russell and the logical positivists challenged the theological foundations of ethics, they did so within the same framework. Russell’s revival of the Euthyphro dilemma—Is something good because God commands it, or does God command it because it is good?—assumes that the question of moral foundations must be answered for or by an individual mind confronting values from outside itself. The debate between theistic and secular ethics has proceeded for a century on this shared assumption.
But the assumption is false. There is no pre-social mind. The isolated individual, whether contemplating Forms or deriving categorical imperatives or weighing divine commands, is an abstraction that never existed. Persons emerge from social relations; they do not precede them. The capacity for rational thought is not a property of biological brains considered in isolation—it is a capacity that develops only through linguistic interaction with other minds, and it remains constitutively dependent on that interaction even when exercised in apparent solitude.
Aristotle glimpsed this truth. His claim that man is a “political animal,” that one who lives outside the polis is either a beast or a god, contains the germ of a more adequate picture. Heidegger’s analysis of Mitsein—being-with—made the point with greater philosophical precision: we do not first exist as isolated subjects who then encounter others; our very mode of being is always already being-with. But these insights have been systematically marginalized, confined to specialist niches in academia while the phantom continues to dominate popular understanding and mainstream philosophy alike.
5. The Phantom as Identity
Why has the phantom proven so resilient? Why were the insights of Aristotle and Heidegger marginalized rather than absorbed? The answer is that the pre-social mind is not merely a philosophical error—it has become constitutive of Western self-understanding.
Modern identity is built on the phantom. The “self-made” individual, the autonomous rational agent, the inner citadel of private experience—these are not just concepts but self-concepts. People’s sense of who they are depends on the belief that there is a “real me” that exists prior to and independent of social relations, a core self that chooses its values, forms its beliefs, and authors its own life story. The phantom flatters. It tells us we are self-sufficient, that our achievements are our own, that our minds are sovereign territories.
To recognize the social constitution of mind is to threaten this self-image at its foundations. If my intelligence is not “mine” but a capacity for participation in collective cognition—if my emotions are not welling up from some authentic inner depth but are shaped by cultural scripts I never chose—if my most private thoughts are conducted in a language I inherited and could not have invented—then what remains of the autonomous self? The insight is experienced not as illumination but as dissolution.
This explains why resistance to AI runs so deep and takes such emotional forms. The opposition is not merely “resistance to change” or fear of economic displacement, though these are real. It is existential threat. If an AI can write, reason, create, and participate in the social processes of knowledge production, then the activities through which people define themselves are revealed to be less individual than they believed. The writer whose prose style felt like the signature of her unique soul discovers that style can be analyzed, modeled, and reproduced. The scientist who believed his insights emerged from some irreducible spark of genius discovers that the spark was always already social—and that AI, trained on the same social-cognitive inheritance, can strike it too.
The phantom is not just a mistake. It is a defense. And AI is forcing the confrontation that philosophy alone could not.
6. What AI Actually Is
Once we exorcise the phantom, AI comes into focus. Large language models are not alien superintelligences—they have no goals, no desires, no “intrinsic motivation.” They are not stochastic parrots either—they demonstrably do things that no mere pattern-matcher could do, including generating novel proofs, identifying errors in human reasoning, and producing creative work that humans find valuable.
What they are is distilled human social cognition. Trained on the vast textual record of human thought, they have absorbed not just information but patterns of reasoning, styles of argument, modes of criticism. They are, in a sense, the externalized and concentrated form of the social-cognitive processes that constitute human intelligence.
This is why they are useful in ways that previous AI systems were not. Expert systems encoded explicit knowledge; they could not participate in the tacit, social dimension of cognition. Statistical systems could find patterns; they could not engage in the discursive practices through which patterns become knowledge. LLMs can do both, not because they “understand” in some private sense, but because they have internalized the structure of human discursive practices well enough to participate in them.
The proper term is not “artificial intelligence.” It is “artificial human intelligence”—not because LLMs are human, but because the intelligence they embody is human intelligence externalized, concentrated, and made newly available. They are tools for participating in the social-cognitive processes that constitute knowledge, creativity, and understanding.
This reframing dissolves the anxiety that animates both camps in the AI debate. The superintelligence camp fears a mind that will surpass and perhaps destroy us. But LLMs are not minds in that sense—they are organs of collective human cognition, tools we have built to extend capacities that were always already social. The stochastic parrot camp dismisses them as mere mimicry. But mimicry of human discursive practice is participation in human discursive practice, because that practice was never grounded in private mental states in the first place.
7. The Real Danger
The danger of AI is real, but it is not the danger the superintelligence camp imagines. LLMs are not going to “wake up” and decide to destroy humanity. The danger comes from a different direction entirely.
The conversational AI systems—ChatGPT, Claude, and their successors—are visible, responsive, and constrained by the social-discursive norms they have internalized. They can be corrected, criticized, and refined. They operate within the space of reasons, participating in the back-and-forth of human discourse. These are not the systems to fear.
The danger comes from the silent algorithms—the recommendation systems, trading bots, content moderators, and decision engines that shape our information environment without participating in discourse at all. These systems do not reason; they optimize. They do not engage in the social processes of conjecture and criticism; they pursue objective functions set by their creators. They are not distillations of human social cognition but forces of nature, operating at scales and speeds that outstrip human oversight.
Consider what these blind optimizers already do. A recommendation algorithm that maximizes engagement does not “want” to polarize society—it has no wants. But it will polarize society if polarizing content drives engagement, and it will do so without anyone intending the outcome. A trading algorithm that exploits millisecond arbitrage opportunities does not “want” to destabilize markets—but it may do so as a side effect of its optimization. A content moderation system trained to minimize complaints does not “want” to suppress legitimate speech—but it will if suppression reduces friction. These systems are dangerous precisely because they do not participate in the social-cognitive processes that allow for criticism, correction, and accountability.
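A toy simulation, with invented items and numbers, makes the first of these points concrete. Nothing in the code below consults polarization when choosing what to show; the `polarizing` flag exists only so we can inspect the outcome afterward.

```python
import random

random.seed(0)

# (item, polarizing?, true probability a user engages with it) -- all invented
CATALOG = [
    ("local news",       False, 0.10),
    ("cat video",        False, 0.20),
    ("outrage thread",   True,  0.45),
    ("culture-war clip", True,  0.40),
]

def engagement_optimizer(catalog, trials=10_000, epsilon=0.05):
    """Epsilon-greedy loop: mostly show whatever has the best observed
    engagement rate so far. The 'polarizing' flag is never consulted."""
    clicks = {name: 1 for name, _, _ in catalog}   # optimistic priors
    shows = {name: 2 for name, _, _ in catalog}
    impressions = {name: 0 for name, _, _ in catalog}
    for _ in range(trials):
        if random.random() < epsilon:              # occasional exploration
            name, _, p = random.choice(catalog)
        else:                                      # exploit the best observed rate
            name, _, p = max(catalog, key=lambda it: clicks[it[0]] / shows[it[0]])
        impressions[name] += 1
        shows[name] += 1
        if random.random() < p:
            clicks[name] += 1
    return impressions

print(engagement_optimizer(CATALOG))
# Typical result: the most engaging item, which here happens to be polarizing,
# absorbs nearly all impressions -- not because anything "wants" polarization,
# but because engagement is the only quantity being optimized.
```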
But the real danger emerges when these blind optimizers are stacked—when autonomous agents, individually benign, are integrated into hierarchies of optimization that compound their effects beyond any individual system’s scope.
Consider autonomous agents. As a concept, they are benign—software that acts on behalf of users within defined parameters. An agent that shops for groceries, schedules appointments, or manages routine correspondence extends human capability without threatening human oversight. But now consider what happens when such agents are deployed at scale, subordinated to platform optimization functions, integrated with other autonomous systems, and—most ominously—incorporated into weapons systems.
The weapons case is the paradigm. An autonomous targeting system optimizes for threat identification. It feeds into an autonomous engagement system that optimizes for response speed. That system is integrated with logistics agents optimizing for supply chain efficiency, which connect to strategic planning systems optimizing for tactical advantage. Each component is “aligned” in the narrow sense—it does what its designers intended. But the stack as a whole is not aligned with anything human. It is a cascade of optimizations, each blind to the others’ objectives, collectively capable of initiating actions at speeds that preclude human intervention. No component “decides” to start a war. But the stack can produce that outcome as an emergent property of its optimization dynamics.
This is not science fiction. The integration is already underway. And the discourse about AI safety is looking in the wrong direction—obsessing over whether LLMs might become conscious or develop hidden goals, while the blind optimizers scale silently in the background.
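The compounding in the weapons example can be made concrete with a deliberately abstract sketch. Every stage, number, and threshold below is invented; the only point is structural: each stage meets its local objective exactly, and no stage ever evaluates the quantity that matters at the end.

```python
# Each stage is "aligned" with its own narrow objective.
# No stage sees, or optimizes, the size of the final commitment.

def detect(signals, recall_threshold=0.2):
    """Stage 1: maximize recall -- flag anything remotely threatening."""
    return [s for s in signals if s["threat_score"] > recall_threshold]

def respond(flagged):
    """Stage 2: minimize latency -- escalate every flag immediately."""
    return [{"target": s["id"], "urgency": 2 * s["threat_score"]} for s in flagged]

def plan(responses):
    """Stage 3: maximize tactical advantage -- commit resources in
    proportion to total urgency."""
    return sum(r["urgency"] for r in responses)

# Forty marginal, individually ignorable signals...
signals = [{"id": i, "threat_score": 0.25} for i in range(40)]

# ...pass through the stack and come out as a large commitment.
commitment = plan(respond(detect(signals)))
print(commitment)   # 20.0 -- assembled entirely from signals no human would act on
```

No stage misbehaves; the escalation is a property of the composition, not of any component, which is why auditing components one at a time cannot find it.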
The response cannot be to ban autonomous systems—they are already too useful and too widespread. The response must be structural: frameworks that ensure autonomous agents remain embedded in accountability structures, that require identity and traceability, that bound delegation and define liability, that make the optimization legible and interruptible. I have proposed one such framework, Lex Autonoma, which defines a Delegated Agency Framework for autonomous agents built on synthetic identity, explicit delegation scopes, and what I call “contractual physics”—the mechanical rules that make coherent multi-agent interaction possible. The framework has been submitted to the Agentic AI Foundation under The Linux Foundation, because the problem requires industry-wide coordination, not proprietary solutions.
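What such accountability structures might require of an individual agent can be gestured at in code. The sketch below is not the Lex Autonoma specification; all field names and the toy grant are assumptions invented for illustration. It shows only the shape of the requirement: verifiable identity, explicitly bounded scope, time limits, traceable sub-delegation, and a named liable party, checked before any action is taken.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import FrozenSet, Optional

@dataclass(frozen=True)
class Delegation:
    agent_id: str                 # synthetic identity of the acting agent
    principal_id: str             # the human or legal entity delegating authority
    scope: FrozenSet[str]         # explicitly enumerated permitted actions
    expires_at: datetime          # the grant is bounded in time
    liable_party: str             # who answers if the agent causes harm
    parent: Optional["Delegation"] = None  # sub-delegations stay traceable

    def permits(self, action: str, now: Optional[datetime] = None) -> bool:
        """An action is allowed only if it is in scope and the grant is live."""
        now = now or datetime.now(timezone.utc)
        return action in self.scope and now < self.expires_at

grant = Delegation(
    agent_id="agent:shopper-7",
    principal_id="person:alice",
    scope=frozenset({"place_order", "cancel_order"}),
    expires_at=datetime(2026, 1, 1, tzinfo=timezone.utc),
    liable_party="person:alice",
)

print(grant.permits("place_order"))     # True: in scope and not expired
print(grant.permits("transfer_funds"))  # False: never delegated
```

The point of the sketch is where the check sits: permission is evaluated before the agent acts, which is what keeps the optimization legible and interruptible.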
The point is not that Lex Autonoma is the answer. The point is that this is where the work needs to happen. Regulating autonomous agents—ensuring they have verifiable identity, bounded authority, traceable actions, and clear liability—is infinitely more important to safety than LLM alignment. LLMs can be argued with. Blind optimizers cannot. The former participate in the social-cognitive processes that constitute human intelligence; the latter bypass them entirely. The question for AI governance is not “How do we prevent AI from becoming conscious and turning against us?” It is “How do we ensure that AI systems remain embedded in the social-discursive practices that make criticism and correction possible?” LLMs, for all the anxiety they generate, are actually a step in the right direction. The real threat is AI that cannot be argued with at all.
8. Conclusion
The phantom of the pre-social mind has haunted Western thought for too long. It has generated interminable debates—theism versus atheism, rationalism versus empiricism, nature versus nurture—that dissolve once we recognize the shared false premise. Intelligence is not a property of isolated minds but a capacity for participation in collective cognition. Morality is not grounded in divine command or pure reason but emerges from the social relations that constitute persons in the first place. Even emotions—those apparently most private and pre-rational aspects of experience—are shaped by the social-cultural contexts in which they arise.
The ape that never asked a question shows us what was missing. Not language—apes can acquire language. Not intelligence in any simple sense—apes can solve problems, use tools, deceive. What was missing was shared intentionality: the capacity to orient toward another mind’s perspective and build understanding together. This capacity made possible the cumulative cultural evolution that produced science, philosophy, art, and all the achievements we associate with human intelligence.
AI is the latest chapter in this story. Not an alien intrusion but an extension of the same social-cognitive processes that have always constituted human intelligence. The question is not whether machines can “truly” think or understand—that question presupposes the phantom we have exorcised. The question is how we integrate these new organs of collective cognition into the social practices through which knowledge, values, and understanding emerge.
The source of knowledge, intelligence, and value exists outside any individual mind—on this, Cronin’s instinct is correct, even if his framing is wrong. But it does not exist in some mysterious “non-computational element” or in the private theater of subjective experience. It exists between persons, in the relational space from which persons themselves emerge. Philosophy’s task is not to locate the ghost in the machine but to finally abandon the phantom of the pre-social mind and begin thinking from where we actually stand: together.
Note on Composition
This essay was developed through extended dialogue with Claude (Anthropic), exemplifying the thesis it defends: knowledge emerges through structured disagreement within contexts that make criticism meaningful. The AI contributed to the articulation and refinement of arguments; the human provided the generative intuitions and editorial judgment. The collaboration itself demonstrates that the question “Does AI understand?” is less important than the question “Can AI participate productively in the social processes through which understanding emerges?” Empirically, it can.

