The coding principles of brevity with high semantic density, clarity through increased explicit connectivity, and changeability via non-destructive flexibility are mapped to canonical software engineering best practices. The mapping reveals partial isomorphism with established patterns such as DRY, KISS, SRP, OCP, and high-cohesion/low-coupling architecture. The principles further align with emerging information-theoretic metrics for LLM evaluation, including semantic density, f‑mutual information, and entropy‑based complexity decomposition. The convergence suggests that code quality can be formulated as a constrained optimization problem balancing information compression, structural explicitness, and evolvability preservation.
The objective of this analysis is to examine three code quality directives:

1. Brevity with high semantic density
2. Clarity through increased explicit connectivity
3. Changeability via non‑destructive flexibility
These directives are compared against established coding best practices, then examined for compatibility with information‑theoretic evaluation frameworks relevant to Large Language Models.
| Principle | Conventional Counterpart | Relationship |
|---|---|---|
| Eliminate redundancy | DRY (Don’t Repeat Yourself) | Direct alignment |
| Minimize unnecessary constructs | KISS (Keep It Simple, Stupid), YAGNI | Direct alignment |
| Maximize information per token | Source Code Density (Hönel, 2023); Semantic Density Optimization (Ustynov, 2026) | Information‑theoretic extension of DRY/KISS |
Conventional best practices target reduction of duplicate and speculative code. The notion of semantic density reframes this target as a measurable quantity—the ratio of meaningful behavioral specification to total syntactic volume.
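The ratio view can be made concrete. The sketch below is an illustrative proxy, not Hönel's or Ustynov's actual metric: it treats the fraction of non-trivia tokens (names, operators, literals) over all lexical tokens as a crude stand-in for semantic density.

```python
import io
import tokenize

# Token types that carry layout rather than behavior.
TRIVIA = {tokenize.NEWLINE, tokenize.NL, tokenize.INDENT,
          tokenize.DEDENT, tokenize.COMMENT, tokenize.ENDMARKER}

def semantic_density(source: str) -> float:
    """Crude proxy: meaningful tokens / total tokens (assumed definition)."""
    toks = list(tokenize.generate_tokens(io.StringIO(source).readline))
    total = len(toks)
    meaningful = sum(1 for t in toks if t.type not in TRIVIA)
    return meaningful / total if total else 0.0

dense = "def add(a, b):\n    return a + b\n"
sparse = "# adds two numbers\n\ndef add(a, b):\n    # return the sum\n    return a + b\n"
assert semantic_density(dense) > semantic_density(sparse)
```

Under this proxy, redundant comments and blank lines dilute density while behavioral tokens raise it, mirroring the DRY/KISS intuition in measurable form.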
| Principle | Conventional Counterpart | Relationship |
|---|---|---|
| Single responsibility | SRP (Single Responsibility Principle) | Aligned |
| Visible dependencies | Explicit dependency injection; Information Flow Visibility | Aligned |
| High intra‑module connection | High Cohesion | Direct correspondence |
The phrase *increased connectivity* superficially contradicts the low‑coupling mandate. Properly interpreted, it denotes explicit, local connections that enhance understandability: in cohesive modules, element interdependencies are dense but confined to the module boundary, which supports rather than undermines clarity.
| Principle | Conventional Counterpart | Relationship |
|---|---|---|
| Extension without modification | OCP (Open/Closed Principle) | Direct isomorphism |
| Interface stability | Encapsulation, Information Hiding | Direct isomorphism |
| Non‑breaking releases | MACH architecture (Microservices, API‑first, Cloud‑native, Headless) | Architectural realization |
| Meta‑model evolvability | XDef metamodel (O(1) DSL toolchain evolution) | Formalized counterpart |
The prefix non‑destructive emphasizes a guarantee absent from generic maintainability claims: changes must not silently corrupt existing behaviors. OCP and MACH architectures provide design‑level mechanisms for this guarantee.
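A minimal OCP sketch makes the guarantee concrete. All names here are illustrative: new behavior arrives as a new class against a stable interface, so existing, verified code paths are never edited.

```python
import json
from typing import Protocol

class Exporter(Protocol):
    """Stable interface: callers depend only on this contract."""
    def export(self, data: dict) -> str: ...

class JsonExporter:
    def export(self, data: dict) -> str:
        return json.dumps(data, sort_keys=True)

def run_export(exporter: Exporter, data: dict) -> str:
    # Verified code path; never modified when new exporters appear.
    return exporter.export(data)

# Extension without modification: CsvExporter is *added*;
# nothing above this line changes, so existing behavior cannot break.
class CsvExporter:
    def export(self, data: dict) -> str:
        return "\n".join(f"{k},{v}" for k, v in sorted(data.items()))

assert run_export(JsonExporter(), {"a": 1}) == '{"a": 1}'
assert run_export(CsvExporter(), {"a": 1}) == "a,1"
```

The non‑destructive guarantee is structural: because `run_export` and `JsonExporter` are untouched by the extension, their verified invariants survive it by construction.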
Semantic density is a quantifiable notion under information‑theoretic measures.
Tools such as aieattoken (2026) and LongCodeZip (2025) implement lossless semantic compression for LLM consumption, achieving 30‑55% token reduction while preserving behavioral fidelity.
Explicit connectivity corresponds to maximizing the visibility of intentional dependencies while minimizing hidden coupling (global state, side effects).
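The contrast between hidden and explicit coupling can be shown in a few lines. Identifiers below are illustrative; the point is that constructor injection makes the dependency visible and local, whereas module‑level state hides it from the call site.

```python
# Hidden coupling: the dependency is invisible where taxed_hidden is called.
_rate = 0.25
def taxed_hidden(amount: float) -> float:
    return amount * (1 + _rate)   # reads mutable global state

# Explicit connectivity: the dependency is injected and inspectable.
class TaxCalculator:
    def __init__(self, rate: float):
        self.rate = rate           # visible at construction time
    def taxed(self, amount: float) -> float:
        return amount * (1 + self.rate)

calc = TaxCalculator(rate=0.25)
assert calc.taxed(100.0) == taxed_hidden(100.0) == 125.0
```

Both functions compute the same value, but only the injected version lets a reader (or an analyzer) trace the information flow without leaving the call site.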
Non‑destructive flexibility is a stronger property than mere modifiability: it requires that extending the system does not perturb verified invariants.
| Metric | Basis | Application to Code |
|---|---|---|
| Semantic Density (Qiu & Miikkulainen, 2024) | Uncertainty quantification in semantic space | Can score code generation by response semantic concentration |
| f‑Mutual Information (Robertson & Koyejo, 2025) | Information‑theoretic gaming resistance | Distinguishes faithful vs. strategic code generation |
| PPLqa (Friedland et al., 2024) | Unsupervised quality via perplexity and coherence | Correlates with human judgment of generated text and code summaries |
These metrics provide objective signals for assessing LLM outputs, circumventing biases observed in LLM‑as‑Judge settings (e.g., a preference for fabricated content over accurate summaries).
The three principles can be restated as a multi‑objective optimization:
| Objective | Mathematical Proxy |
|---|---|
| Maximize semantic density | Minimize token count subject to semantic equivalence constraint |
| Maximize explicit connectivity | Maximize normalized cohesion; minimize hidden coupling |
| Ensure non‑destructive flexibility | Satisfy OCP invariants; minimize cascade size upon change |
These objectives conflict, so satisfying them jointly requires explicit trade‑off resolution. For instance, maximizing density via aggressive compression may increase cognitive or computational decoding cost, as observed in Ustynov (2026). The optimization space is therefore non‑trivial.
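One way to operationalize the table above is a weighted scalarization of the three proxies. The weights, field names, and the cascade‑to‑stability inversion below are illustrative assumptions, not definitions from the cited work.

```python
from dataclasses import dataclass

@dataclass
class QualityProxies:
    semantic_density: float   # meaningful tokens / total tokens, in [0, 1]
    cohesion: float           # shared-attribute method pairs, in [0, 1]
    change_cascade: int       # modules touched by a representative change

def quality_score(p: QualityProxies,
                  w_density: float = 0.4,
                  w_cohesion: float = 0.4,
                  w_stability: float = 0.2) -> float:
    # Cascade size is inverted: fewer modules touched = more stable.
    stability = 1.0 / (1 + p.change_cascade)
    return (w_density * p.semantic_density
            + w_cohesion * p.cohesion
            + w_stability * stability)

tight = QualityProxies(semantic_density=0.7, cohesion=0.9, change_cascade=1)
sprawling = QualityProxies(semantic_density=0.4, cohesion=0.3, change_cascade=6)
assert quality_score(tight) > quality_score(sprawling)
```

Scalarization is only one resolution strategy; a Pareto‑front formulation would preserve the trade‑offs the text describes instead of collapsing them into a single number.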
The three coding directives exhibit partial isomorphism with established software engineering best practices: DRY/KISS, SRP with high cohesion, and OCP, respectively.
The novelty resides in the unified information‑theoretic framing and the explicit shift toward machine‑readable (LLM‑oriented) code optimization. The framework also serves as a meta‑evaluation layer for LLM‑generated code, connecting disparate metrics under a common conceptual umbrella.
Empirical validation of the framework remains incomplete and is left to future work.
The three principles—brevity with semantic density, clarity via explicit connectivity, and non‑destructive flexibility—are largely isomorphic to canonical best practices but introduce an information‑theoretic and LLM‑centric perspective. They frame code quality as an optimization problem amenable to measurement and automated evaluation. The convergence with emerging LLM evaluation metrics (Semantic Density, f‑mutual information, LM‑CC) indicates a viable path toward objective, mathematically grounded assessment of code quality in both human and machine contexts.