This plan applies a subset of the Deep Coding methodology to SQL schema structures (DDL) and static query interfaces (views, function signatures). It excludes stored procedure bodies and complex business logic due to lack of declarative interface contracts.
In scope:
- Base table DDL (tables, primary keys, foreign keys, CHECK constraints)
- Static query interfaces: views and user-defined function signatures

Out of scope:
- Stored procedure bodies
- Complex business logic with no declarative interface contract
| Original Deep Coding Principle | SQL Technical Equivalent |
|---|---|
| Intent–Structure–Implementation Separation | Desired schema (spec) → Current schema (state) → Migration script (implementation) |
| Generative Conformance | State-based migration tool (Atlas, Bytebase) generates SQL that conforms to declared schema |
| Recursive Refinement with Fixed Premises | Versioned migration scripts + schema snapshots as immutable checkpoints |
| Skeleton–Tissue Architecture | skeleton schema (base tables, constraints) + tissue schema (views, aggregations) |
| Inferential Information Density | Normalization level (redundancy) vs. query complexity (join count) trade-off |
```
Intent (human)
    ↓
Structural Specification (declarative)
    ├── skeleton/ schema (base tables, PKs, FKs, checks)
    ├── tissue/ schema (views, UDF signatures, no tables)
    └── specification file (HCL, YAML, or .sql schema dump)
    ↓
Generation Engine (state-based migration tool)
    ↓
Validation Gate (static analysis + constraint enforcement)
    ↓
Fixed Premise Commit (migration script + schema snapshot)
```
Skeleton schema (manually maintained, low change frequency):
- Base tables with primary keys, foreign keys, and CHECK constraints
- Audit columns (`created_at`, `updated_at`), soft-delete flags

Tissue schema (generated, replaceable):
- Views and aggregations built only on skeleton objects
- UDF signatures; no tables
Enforcing dependency direction (PostgreSQL example):
```sql
CREATE SCHEMA skeleton;
CREATE SCHEMA tissue;
GRANT USAGE ON SCHEMA skeleton TO tissue_role;
REVOKE ALL ON SCHEMA tissue FROM skeleton_role;
```
Store a canonical representation of the desired schema in version control:
```sql
-- desired/schema.sql
-- (skeleton.customers is assumed to be defined elsewhere in the dump)
CREATE TABLE skeleton.orders (
    id          UUID PRIMARY KEY,
    customer_id UUID NOT NULL REFERENCES skeleton.customers(id),
    amount      DECIMAL(12,2) NOT NULL CHECK (amount >= 0),
    status      VARCHAR(20) NOT NULL CHECK (status IN ('pending','paid','shipped')),
    created_at  TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE VIEW tissue.daily_sales AS
SELECT date_trunc('day', o.created_at) AS day, sum(o.amount) AS total
FROM skeleton.orders o
GROUP BY day;
```
Use a tool (Atlas, Bytebase, or a custom diff engine) that:
- Reads the desired schema from version control
- Introspects the current database state (via INFORMATION_SCHEMA or the system catalog)
- Generates a migration script as the diff between desired and current state

Workflow:
```shell
atlas schema apply --url "postgres://..." --to file://desired/ --dry-run > migration.sql
# human reviews migration.sql (semi-automated)
atlas schema apply --url "postgres://..." --to file://desired/ --auto-approve
```
| Gate | Tool/Method | Pass Condition |
|---|---|---|
| Static syntax | `atlas schema validate` | No syntax errors |
| Constraint conformance | Live database FOREIGN KEY, CHECK enforcement | 0 constraint violations |
| Dependency direction | Custom script checking tissue → skeleton only | No inverse references |
| Snapshot consistency | `pg_dump --schema-only` vs. desired schema | No diff |
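The dependency-direction gate can be scripted. A minimal sketch in Python, assuming the view definitions have already been pulled out of `information_schema.views` into a plain dict (the function name and regex here are illustrative, not part of any tool mentioned above):

```python
import re

# Matches schema-qualified identifiers like skeleton.orders or tissue.daily_sales.
QUALIFIED_REF = re.compile(r'\b(skeleton|tissue)\.([a-z_][a-z0-9_]*)', re.IGNORECASE)

def inverse_references(definitions: dict[str, str]) -> list[str]:
    """Return skeleton-schema objects whose definitions reference
    tissue-schema objects (the forbidden direction)."""
    violations = []
    for name, sql in definitions.items():
        if not name.startswith("skeleton."):
            continue  # tissue -> skeleton is the allowed direction
        for schema, _obj in QUALIFIED_REF.findall(sql):
            if schema.lower() == "tissue":
                violations.append(name)
                break
    return violations

if __name__ == "__main__":
    defs = {
        # Allowed: tissue depends on skeleton.
        "tissue.daily_sales": "SELECT sum(amount) FROM skeleton.orders",
        # Forbidden: a skeleton view reaching into tissue.
        "skeleton.bad_view": "SELECT * FROM tissue.daily_sales",
    }
    print(inverse_references(defs))  # ['skeleton.bad_view']
```

The gate passes when the returned list is empty. A production version would parse the SQL properly rather than regex-matching identifiers, which can be fooled by comments and string literals.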
Each completed migration phase commits:
- The generated migration script
- A schema snapshot (the new fixed premise)

Rollback: use pgroll or PlanetScale’s revert mechanism to restore the previous snapshot. Accept that data written during the phase may be lost or require manual reconciliation.
Each phase follows the Expand/Contract pattern (pgroll):
| Phase Step | Action | Deep Coding Equivalent |
|---|---|---|
| Expand | Add new columns/tables with NULL defaults; create new views. Do not drop or rename existing objects. | Extend specification while keeping old fixed premise intact. |
| Dual period | Both old and new schema versions coexist. Application reads/writes use version-aware views. | Old and new implementations run in parallel. |
| Contract | After all dependencies updated, drop old columns/tables/views. | Commit new fixed premise, retire old one. |
Phase boundary commit message:

```
Phase: add customer segmentation
Expand: added skeleton.customer_segments table, tissue.vip_orders view
Contract: dropped skeleton.legacy_customers.status column
Rollback: pgroll rollback --to version=before
```
Use these proxies to evaluate schema quality (balancing redundancy vs. query complexity):
| Metric | Target | Measurement |
|---|---|---|
| Normalization level | 3NF for skeleton tables | Count of transitive dependencies |
| Average join count in tissue views | ≤ 4 | EXPLAIN + manual review |
| Constraint density | ≥ 1 CHECK or FK per table | Count constraints / tables |
| View nesting depth | ≤ 3 | Parse view definition |
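Two of these metrics can be measured directly from view definitions. A rough, text-based sketch (a production version would walk a real parse tree, e.g. via a SQL parser, instead of regexes; both function names are illustrative):

```python
import re

def join_count(view_sql: str) -> int:
    """Approximate join count: number of explicit JOIN keywords."""
    return len(re.findall(r'\bJOIN\b', view_sql, re.IGNORECASE))

def nesting_depth(view_name: str, definitions: dict[str, str], _depth: int = 0) -> int:
    """Depth of view-on-view nesting, given {view_name: definition_sql}.
    Names absent from the dict are treated as base tables (depth 0)."""
    sql = definitions.get(view_name)
    if sql is None:
        return _depth  # base table: adds no view depth
    refs = re.findall(r'\b(?:skeleton|tissue)\.[a-z_][a-z0-9_]*', sql, re.IGNORECASE)
    children = [nesting_depth(r, definitions, _depth + 1) for r in refs]
    return max(children, default=_depth + 1)

if __name__ == "__main__":
    defs = {
        "tissue.daily_sales": "SELECT sum(amount) FROM skeleton.orders",
        "tissue.vip_daily": "SELECT * FROM tissue.daily_sales WHERE total > 1000",
    }
    print(nesting_depth("tissue.vip_daily", defs))  # 2 -- within the <= 3 target
```

The gate fails when `join_count` exceeds 4 or `nesting_depth` exceeds 3 for any tissue view.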
High information density is achieved when the schema captures maximum invariants (constraints) with minimum cognitive cost (low join complexity).
Tooling notes:
- PostgreSQL is the recommended target (INFORMATION_SCHEMA consistency and pgroll support)
- sqlsurge or SlowQL for static SQL validation (optional)
- For MySQL/others: replace pgroll with PlanetScale’s online schema change tool, or use Flyway with manual Expand/Contract.
| Limitation | Mitigation |
|---|---|
| No declarative interface for stored procedures | Keep procedures in application code; use SQL only for data layer. |
| Static validation cannot prove semantic correctness | Supplement with integration tests on a staging database. |
| Rollback may lose data | Apply “add-only” policy; never delete columns without deprecation period. |
| Vendor lock-in (pgroll only for PostgreSQL) | Abstract migration logic; treat PostgreSQL as target for skeleton layer. |
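The “add-only” mitigation in the table above can be enforced mechanically before a migration is applied. A minimal sketch that scans a generated migration for destructive statements (function name and regex are illustrative assumptions, not part of Atlas or pgroll):

```python
import re

# Statements that remove objects; forbidden outside an explicit Contract phase.
DESTRUCTIVE = re.compile(
    r'\bDROP\s+(TABLE|COLUMN|VIEW|SCHEMA)\b|\bALTER\s+TABLE\s+\S+\s+DROP\b',
    re.IGNORECASE,
)

def destructive_statements(migration_sql: str) -> list[str]:
    """Return the statements in a migration that drop objects.
    In an add-only (Expand) phase this list must be empty."""
    statements = [s.strip() for s in migration_sql.split(';') if s.strip()]
    return [s for s in statements if DESTRUCTIVE.search(s)]

if __name__ == "__main__":
    sql = "ALTER TABLE skeleton.orders ADD COLUMN segment TEXT; DROP VIEW tissue.old_view"
    print(destructive_statements(sql))  # ['DROP VIEW tissue.old_view']
```

Wiring this into the review step (between `--dry-run` and `--auto-approve`) turns the deprecation-period policy into a hard gate rather than a convention.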
This partial implementation plan demonstrates that Deep Coding’s core mechanisms—structural separation, generative conformance, fixed premises, and recursive refinement—are technically feasible for SQL schema management when limited to DDL and static views. The resulting workflow:
- Declares the desired schema as a versioned specification
- Generates conforming migrations with a state-based tool
- Enforces validation gates before each fixed-premise commit
- Evolves the schema safely through Expand/Contract phases
The plan avoids philosophical framing and provides executable patterns using existing tools (Atlas, pgroll, Git). For teams already using declarative schema management, adopting this plan requires minimal additional tooling and yields higher consistency and lower maintenance overhead.