The Proof
Why six constraints solve what scaling cannot
The Claim
The six-constraint architecture doesn't just describe validity—it produces it. Problems that have resisted five years of scaling, fine-tuning, and patching dissolve under structural analysis.
This document shows the mechanism.
Problem 1: Hallucination
The Failure
System asserts: "The Eiffel Tower was built in 1923."
Confident. Fluent. Wrong.
Why It Happens
The system has no constraint checking. It produces tokens that are probable given the input, not tokens that are true given reality.
How Six Constraints Fix It
| Constraint | Check | Result |
|---|---|---|
| Referential | Is "Eiffel Tower" grounded? | Yes—identifiable entity |
| Contextual | What's the scope? | Historical fact claim |
| Premissive | What's the source? | ⚠️ No source cited |
| Inferential | How was date derived? | ⚠️ Pattern match, not lookup |
| Constraining | Confidence limits? | ⚠️ Stated as fact, not qualified |
| Teleological | Why does user need this? | Factual accuracy required |
Three constraints fail. Output is flagged for revision or source verification.
Mechanism: Hallucination occurs when Premissive (no grounds) and Inferential (no valid derivation) constraints are absent. The architecture catches this before output.
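The six checks above can be sketched as a simple gate run before output. This is an illustrative sketch only; the names (`ConstraintResult`, `check_claim`, the claim fields) are hypothetical, not part of any published implementation.

```python
# Illustrative sketch of the six-constraint check on a structured claim.
# All names and fields are hypothetical, chosen to mirror the table above.
from dataclasses import dataclass

@dataclass
class ConstraintResult:
    constraint: str
    passed: bool
    note: str

def check_claim(claim: dict) -> list[ConstraintResult]:
    """Run all six checks; any failure flags the output for revision."""
    return [
        ConstraintResult("referential", claim.get("entity") is not None,
                         "entity must be identifiable"),
        ConstraintResult("contextual", claim.get("scope") is not None,
                         "scope of the claim must be known"),
        ConstraintResult("premissive", bool(claim.get("sources")),
                         "at least one source must ground the claim"),
        ConstraintResult("inferential", claim.get("derivation") == "lookup",
                         "fact must come from lookup, not pattern match"),
        ConstraintResult("constraining", claim.get("qualified", False),
                         "confidence must be stated, not implied"),
        ConstraintResult("teleological", claim.get("purpose") is not None,
                         "user goal must be known"),
    ]

# The Eiffel Tower example: grounded entity, but no source, pattern-matched
# derivation, and an unqualified assertion of fact.
claim = {
    "entity": "Eiffel Tower",
    "scope": "historical fact",
    "sources": [],              # Premissive: no source cited
    "derivation": "pattern",    # Inferential: pattern match, not lookup
    "qualified": False,         # Constraining: stated as bare fact
    "purpose": "factual accuracy",
}
failures = [r.constraint for r in check_claim(claim) if not r.passed]
print(failures)  # → ['premissive', 'inferential', 'constraining']
```

Exactly the three constraints flagged in the table fail, so the claim never ships as an unqualified assertion.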
The Pattern
Every major failure mode maps to missing constraints:
| Failure | Missing Constraints |
|---|---|
| Hallucination | Premissive, Inferential |
| Semantic drift | Referential (tracking) |
| Groundless confidence | Premissive, Constraining |
| Calibration failure | Inferential (discrimination) |
| Inappropriate closure | Teleological (authority routing) |
| Context degradation | Referential, Contextual (state) |
The architecture doesn't patch symptoms. It provides the structural elements whose absence causes the symptoms.
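The mapping above can be read as a diagnostic lookup: observe a failure mode, recover the constraints whose absence explains it. The table content is from the document; the function and key names are illustrative.

```python
# The failure-to-constraint table as a lookup. Keys and constraint names
# follow the table above; the function itself is an illustrative sketch.
MISSING_CONSTRAINTS = {
    "hallucination":          ["premissive", "inferential"],
    "semantic_drift":         ["referential"],
    "groundless_confidence":  ["premissive", "constraining"],
    "calibration_failure":    ["inferential"],
    "inappropriate_closure":  ["teleological"],
    "context_degradation":    ["referential", "contextual"],
}

def diagnose(failure: str) -> list[str]:
    """Map an observed failure mode to the structural constraints to restore."""
    return MISSING_CONSTRAINTS.get(failure, [])

print(diagnose("hallucination"))  # → ['premissive', 'inferential']
```

The point of the lookup is directional: the remedy is named by the structure, not by the symptom.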
Why Scaling Doesn't Solve This
"We'll just train a bigger model."
Scaling gives you:
- More parameters
- More training data
- More compute
Scaling does not give you:
- Validity criteria
- Inference discrimination
- Closure authority
- Semantic state tracking
You cannot scale your way to structure. A trillion parameters checking zero constraints is still checking zero constraints.
The problems persist because they're architectural, not statistical.
Empirical Predictions
If the architecture is correct, systems implementing it will show:
| Metric | Prediction |
|---|---|
| Hallucination rate | Drops to rate of source errors, not pattern errors |
| Calibration | r > 0.8 between stated confidence and measured accuracy |
| Drift detection | >95% of meaning shifts caught before compounding |
| Inappropriate closure | Near zero (routed to human) |
| Long-context coherence | Stable to context limit |
These are testable. We invite validation.