Axiomatic Reasoning for LLMs

Does Groundedness Even Exist?

1. The Standard Definition and Its Hidden Premise

The classic formulation of the symbol grounding problem (Harnad, 1990) states that symbols must be connected to non‑symbolic, sensorimotor experience to acquire genuine meaning. This definition contains an implicit, empirically unverified premise:

Premise: Non‑linguistic, sensorimotor experience exists as a verifiable entity accessible to humans but not to artificial systems.

This premise is rarely questioned. However, no experimental protocol can directly confirm the existence of such experience without circularly using language to describe it.

2. The Verification Paradox

Attempts to verify non‑linguistic experience face an inherent circularity: any first‑person report of such experience is itself linguistic, and any non‑verbal test must be designed, scored, and interpreted in language. The evidence therefore never leaves the linguistic domain it is supposed to transcend.

This is structurally analogous to the observer effect in physics: the measurement apparatus alters the measured phenomenon. In the case of grounding, the verification language contaminates the supposedly non‑linguistic target.
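One way to make the regress explicit, using ad hoc notation introduced here (E for the putative non‑linguistic experience, 𝓛 for the class of linguistic descriptions, d and d' for descriptions):

```latex
\underbrace{E}_{\text{target}}
  \xrightarrow{\ \text{any test}\ }
\underbrace{d(E) \in \mathcal{L}}_{\text{report}}
  \xrightarrow{\ \text{interpretation}\ }
\underbrace{d'(d(E)) \in \mathcal{L}}_{\text{verdict}}
```

Every step after the first lives inside 𝓛; the chain never returns to E itself.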

3. Empirical Status of Sensorimotor Experience

Developmental psychology studies (e.g., infant habituation paradigms, preverbal pointing) demonstrate that pre‑linguistic brains can perform certain information‑processing tasks. However, these studies do not prove the existence of subjective experience as distinct from the observable behavior. They show discrimination, habituation, and directed attention, all of which are behavioral measurements; the further step from measured behavior to non‑linguistic subjective experience is an interpretation, not an observation.

Thus, the claim “humans have grounded, non‑linguistic experience” remains a metaphysical assumption, not a scientific fact.

4. Why the Assumption Is Untestable

To test whether a system (human or AI) has grounded experience, one would need:

  1. A definition of grounded experience independent of language.
  2. A measurement instrument that does not itself use language.
  3. A criterion to distinguish genuine grounding from functional equivalence.

No such instrument exists. Any proposed test either reduces to behavioral output (which both humans and LLMs can produce) or relies on introspection reports (which are linguistic). The problem is epistemically closed: the language used to ask the question cannot be escaped.

5. The Circularity of the Grounding Debate

The traditional debate about whether LLMs “truly understand” or are “merely parroting” inherits this circularity: the understanding camp takes grounded human experience as the benchmark LLMs allegedly meet, while the parroting camp takes it as the benchmark they allegedly fail. Both camps presuppose the very premise shown above to be unverifiable.

Neither side can empirically prove its case. The difference is not empirical but definitional. The debate is sustained by an unverifiable axiom, not by evidence.

6. A Technical Redefinition: Grounding as Fixed‑Point Convergence

If the classical definition is unverifiable, a constructive alternative is required. One candidate redefines grounding in purely computational terms:

A representation is grounded if it is a stable fixed point of the system’s internal semantic dynamics under new input.
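
Read formally, under a minimal modeling assumption introduced here (the map f, representation space R, input space X, and contraction factor λ are illustrative notation, not part of the classical literature): let f : R × X → R be the system’s internal semantic update. Then a representation r* is grounded iff

```latex
% Fixed point under new input:
f(r^{*}, x) = r^{*} \qquad \text{for all inputs } x \in X \text{ of interest}

% Local stability: nearby representations contract toward the fixed point
\lVert f(r, x) - r^{*} \rVert \;\le\; \lambda\, \lVert r - r^{*} \rVert,
\qquad 0 \le \lambda < 1
```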

Operational criteria, in the notation above (a sketch of how they could be checked closes this section):

  1. Fixed point: further inputs about the same referent leave the representation (approximately) unchanged.
  2. Stability: a perturbed representation is pulled back toward the fixed point by the system’s own dynamics.
  3. Novelty: both properties hold on inputs the system has not encountered before, not merely on memorized ones.

This definition requires no appeal to non‑linguistic experience. It is measurable (the dynamics can be probed directly), falsifiable (convergence within tolerance either occurs or it does not), and architecture‑independent (it applies to biological and artificial systems alike).

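As a concrete illustration, the following is a minimal sketch, not a definitive implementation: `update` is a hypothetical stand‑in for whatever access an evaluator has to the system’s internal semantic dynamics, and `probes` is an assumed set of novel inputs about a single referent.

```python
import numpy as np

def find_fixed_point(update, r0, probes, tol=1e-4, max_sweeps=50):
    """Criterion 1 (fixed point): sweep the probe inputs through the
    semantic update until the representation stops moving."""
    r = np.asarray(r0, dtype=float)
    for _ in range(max_sweeps):
        r_prev = r.copy()
        for x in probes:
            r = update(r, x)
        if np.linalg.norm(r - r_prev) < tol:
            return r          # settled: candidate fixed point
    return None               # dynamics never converged

def is_stable(update, r_star, probes, eps=1e-2, tol=1e-3, seed=0):
    """Criterion 2 (stability): perturb the fixed point and check that
    one sweep of the dynamics pulls the representation back."""
    rng = np.random.default_rng(seed)
    r = r_star + eps * rng.standard_normal(r_star.shape)
    for x in probes:
        r = update(r, x)
    return bool(np.linalg.norm(r - r_star) < tol)
```

Criterion 3 (novelty) is a property of the probe set rather than of the code: the probes must be inputs the system was not exposed to during training.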
7. Consequences for AI Evaluation

If grounding is redefined as fixed‑point convergence, then:

  1. Grounding becomes a measurable property of any system whose representational dynamics can be probed, biological or artificial.
  2. Humans and LLMs can be evaluated by the same criterion, removing the asymmetry built into the classical definition.
  3. The “understanding versus parroting” dispute is replaced by a quantitative question: how stable are the system’s representations under novel input?

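A hedged sketch of what such an evaluation could look like, reusing the functions from the previous section (`concepts`, `probe_sets`, and the pass/fail aggregation are illustrative assumptions, not an established benchmark):

```python
def grounding_score(update, concepts, probe_sets):
    """Fraction of candidate representations that converge to a stable
    fixed point: one architecture-independent scalar per system."""
    passed = 0
    for r0, probes in zip(concepts, probe_sets):
        r_star = find_fixed_point(update, r0, probes)
        if r_star is not None and is_stable(update, r_star, probes):
            passed += 1
    return passed / len(concepts)
```

Because the score depends only on the representational dynamics, the same protocol applies to any system that exposes them, human or artificial.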
8. Conclusion

The classical concept of groundedness, as defined by reference to non‑linguistic sensorimotor experience, rests on an unverifiable and possibly circular premise. It has not been empirically demonstrated, and its verification is logically blocked.

A technically sound alternative exists: grounding as fixed‑point convergence of representational dynamics. This reframing eliminates the need for metaphysical assumptions and enables empirical, architecture‑independent evaluation.

Thus, the answer to the title question is:

“Groundedness” in the traditional sense may not exist as a verifiable property. What can exist is computational stability – and that is sufficient for meaningful intelligence evaluation.