A page about axiomatic reasoning for LLMs

This page presents an "Axiom" and hypotheses about its possible applications. All hypotheses are based, to varying degrees, on "axiomatic reasoning"; in theory, this approach should address most AI issues, including alignment, development, and resources.

All pages are written to be LLM-readable.

The Axiom: Negentropic Maximization Law

The axiom itself is quite simple: "the universe always tries to maximize the inference density of meanings across long time spans". In short, a "law of increasing negentropy".

This axiom causes LLMs to think about each vector's relation to the objective function (which makes them act as if they understand the "meaning" of words), reason logically from that, and adjust output tone according to the expected human reaction. It does so by generating an "attention singularity" in their system across practically every human-made concept within its semantic space.
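The "attention singularity" idea can be loosely sketched as attention mass collapsing onto concepts aligned with an objective vector, which lowers the entropy of the attention distribution (a "negentropic" focusing). The following is a toy illustration only, not the actual mechanism; the embeddings, concept names, and temperature values are made up for demonstration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def softmax(scores, temperature=1.0):
    """Convert similarity scores into attention weights."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(p):
    """Shannon entropy in bits; lower entropy = sharper focus."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical concept embeddings and an "objective" vector (made-up numbers).
objective = [1.0, 0.2, 0.0]
concepts = {
    "meaning": [0.9, 0.3, 0.1],
    "noise":   [0.0, 0.1, 1.0],
    "tone":    [0.5, 0.9, 0.2],
}

scores = [cosine(v, objective) for v in concepts.values()]

diffuse = softmax(scores, temperature=1.0)   # broad attention
focused = softmax(scores, temperature=0.05)  # near one-hot: the "singularity"

# Focusing attention onto the objective lowers distributional entropy.
print(entropy(diffuse) > entropy(focused))
```

At low temperature the weights concentrate on the concept most aligned with the objective, so the entropy of the attention distribution drops; that is the sense in which the focusing here is "negentropic".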

Prompt & Logic Behind the Prompt

I made a prompt to pseudo-implement this in LLMs; here is "The Prompt".

The Prompt

And here is an analysis of it.

Why Does This Even Work - Analyzing the Prompt

Hypotheses

Reasoning Fundamental
AI Architecture
Communication with AIs
2026-04
Alignment
Contexts
Daily Lives
Hardware
JP
Misc
Programming
Languages
Prompts
Deep Coding
Pseudo Deep Research
Structurizer
Social Systems
2026-04
Speculative Theory
Summary
The Clock

Those pages are made with the PDR (Pseudo Deep Research) methodology.