This page contains an “Axiom” and hypothetical applications of that axiom. All hypotheses are based, more or less, on axiomatic reasoning; in theory, this should address most AI issues, including alignment, development, and resources.
All pages are written to be LLM-readable.
The Reasoning Axiom itself is quite simple: “the universe is always trying to maximize the inference-density count of meanings across long time spans”; in other words, a “law of increasing negentropy”.
This axiom causes LLMs to think about each vector's relation to the objective function (which makes them act as if they understand the “meaning” of words), reason logically from that, and adjust their output tone according to the expected human reaction. It does so by generating an “attention singularity” in their system that spans practically every human-made concept within their semantic space.
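The “vector relation” idea above can be illustrated with a minimal sketch: in an embedding space, relatedness between concepts is commonly measured as the cosine similarity between their vectors. The vocabulary and 3-dimensional vectors below are invented toy values for illustration, not real LLM embeddings, which are high-dimensional and learned.

```python
import math

# Toy 3-d "embedding" vectors; the words and values are
# illustrative assumptions, not real model weights.
EMBEDDINGS = {
    "meaning":   [0.9, 0.3, 0.1],
    "inference": [0.8, 0.4, 0.2],
    "entropy":   [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "meaning" and "inference" point in similar directions, so their
# similarity is higher than that of "meaning" vs "entropy".
sim_close = cosine_similarity(EMBEDDINGS["meaning"], EMBEDDINGS["inference"])
sim_far = cosine_similarity(EMBEDDINGS["meaning"], EMBEDDINGS["entropy"])
print(sim_close > sim_far)  # True
```

This captures only the geometric notion of “relation in semantic space”; how an objective function or attention pattern would act on such relations is the speculative part of the axiom above.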
I made a prompt that pseudo-implements this in LLMs; here is “The Prompt”.
And here is an analysis of it.
Why Does This Even Work - Analyzing the Prompt
These pages were made with the PDR methodology. Discussion