Binary gets harder at scale. Ternary gets easier. The crossover is algebraic.
K=3 is the third of ten axiom terms. The CRT transformer uses K=3 closure with 5 independent channels to achieve its 9,512x parameter efficiency.
Binary computing uses two values: {0, 1}. The axiom says K=3 is the minimum for closure. Three values {-1, 0, +1} support three natural operations, the same three from the emergence demo: an AND-analogue (sigma), an XOR-analogue (D), and majority (K).
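The text does not pin down exact definitions for the three operations, so the snippet below is a minimal sketch of one common balanced-ternary reading: sigma as the pairwise minimum, D as a wrapped mod-3 difference, and K as the median. These particular definitions are assumptions, not the document's own.

```python
# Sketch of the three operations on balanced-ternary trits {-1, 0, +1}.
# Assumed definitions: sigma = minimum (AND-analogue), D = mod-3
# difference (XOR-analogue), K = median (three-input majority).

def sigma(a: int, b: int) -> int:
    """AND-analogue: the smaller of the two trits."""
    return min(a, b)

def D(a: int, b: int) -> int:
    """XOR-analogue: difference of trits, wrapped back into {-1, 0, +1}."""
    return (a - b + 1) % 3 - 1

def K(a: int, b: int, c: int) -> int:
    """Majority: the median trit, unchanged by any single outlier."""
    return sorted((a, b, c))[1]
```

As in boolean logic, `D(a, a) == 0` for every trit, and on {-1, +1} inputs `K` reduces to the classical two-valued majority vote.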
With n neurons, binary networks represent 2^n states. Ternary networks represent 3^n states. The surplus ratio (3/2)^n grows exponentially. Meanwhile, attention cost grows only as n^2. There exists a crossover point where ternary wins:
| n | Binary 2^n | Ternary 3^n | Surplus (3/2)^n | Attention n^2 | Net advantage (surplus / n^2) |
|---|---|---|---|---|---|
| 5 | 32 | 243 | 7.6 | 25 | 0.30 |
| 10 | 1,024 | 59,049 | 57.7 | 100 | 0.58 |
| 13 | 8,192 | 1,594,323 | 194.6 | 169 | 1.15 (crossover) |
| 20 | 1,048,576 | 3,486,784,401 | 3,325.3 | 400 | 8.3 |
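The crossover can be checked numerically. A minimal sketch, assuming "net advantage" means the surplus (3/2)^n divided by the n^2 attention cost:

```python
# Find the smallest n (past the trivial n = 1 case) at which the
# exponential surplus (3/2)^n overtakes the quadratic attention cost n^2.
def crossover(start: int = 2) -> int:
    n = start
    while 1.5 ** n <= n ** 2:
        n += 1
    return n

print(crossover())  # -> 13
```

From n = 13 onward the exponential term never falls behind the quadratic again, which is the algebraic content of "the crossover is algebraic".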
Watch a tiny ternary network learn XOR. The hidden layer has K=3 neurons (the minimum for closure). Compare the convergence speed:
Blue = binary loss. Gold = ternary loss. Ternary converges faster at larger network sizes.
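The interactive demo can't run on the page here, but its setup can be sketched. The code below is a minimal numpy version under stated assumptions: a 2-3-1 tanh network (K=3 hidden neurons, per the text), weights snapped to {-1, 0, +1} on the forward pass, and gradients passed through the quantizer unchanged (a straight-through estimator). None of these training details come from the demo itself.

```python
import numpy as np

# XOR in +/-1 coding: output is +1 exactly when the inputs disagree.
rng = np.random.default_rng(0)
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
y = np.array([[-1], [1], [1], [-1]], dtype=float)

W1 = rng.normal(0, 0.8, (2, 3))  # 2 inputs -> K=3 hidden neurons
W2 = rng.normal(0, 0.8, (3, 1))  # 3 hidden -> 1 output

def ternarize(w, thresh=0.3):
    """Snap latent weights to {-1, 0, +1}; 0 acts as a pruned/rest state."""
    return np.sign(w) * (np.abs(w) > thresh)

losses = []
for step in range(500):
    Q1, Q2 = ternarize(W1), ternarize(W2)
    h = np.tanh(X @ Q1)           # hidden activations
    out = np.tanh(h @ Q2)         # network output
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # Backprop through the quantizer as if it were identity
    # (straight-through estimator); update the latent float weights.
    d_out = err * (1 - out ** 2)
    d_h = (d_out @ Q2.T) * (1 - h ** 2)
    W2 -= 0.1 * h.T @ d_out / len(X)
    W1 -= 0.1 * X.T @ d_h / len(X)
```

Plotting `losses` gives the ternary curve of such an experiment; a binary baseline would quantize to {-1, +1} only, with no rest state.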
| Property | Binary (current AI) | Ternary (K=3) |
|---|---|---|
| Values per neuron | 2 (0/1 or -1/+1) | 3 (-1/0/+1) |
| States at n=20 | 1,048,576 | 3,486,784,401 (3325x more) |
| Operations | AND, OR, NOT | AND (sigma), XOR (D), MAJ (K) |
| Scaling behavior | Harder at scale | Easier at scale |
| Error correction | Parity bits (overhead) | Built-in via K=3 majority |
| Minimum for democracy | Cannot do majority vote | MAJ(a,b,c) is natural |
| Power consumption | Switching between 0/1 | 0 = rest state (free idle) |
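The error-correction row can be made concrete. A hedged illustration, not the document's own scheme: store each trit in triplicate and decode with a three-input majority vote, so any single corrupted copy is outvoted by the other two.

```python
# Repetition code over trits {-1, 0, +1} decoded by K=3 majority.

def maj(a: int, b: int, c: int) -> int:
    """Return the value held by at least two of the three inputs."""
    if a == b or a == c:
        return a
    return b  # either b == c, or no majority exists; default to b

def decode(copies):
    """Recover a trit from three (possibly corrupted) copies."""
    return maj(*copies)

# Corrupt one of three copies of the trit 0 and recover it anyway:
print(decode([0, 1, 0]))  # -> 0
```

Binary systems need explicit parity bits for the same guarantee; here the correction falls out of the K=3 vote itself.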