Solve abstract reasoning puzzles with 32 primitives + composition search. Zero training. Zero neural nets. CC0.
ARC-AGI: 800 puzzles. Each has 2-5 training examples (input -> output grids). Given a new input, predict the output. No two puzzles use the same rule.

THE SCALING APPROACH: Train a billion-parameter model on millions of examples. Cost: $100M+ in compute. Accuracy: ~50% (best public).

THE AXIOM APPROACH: 32 simple primitives + composition search.
Primitives: flip, rotate, tile, crop, scale, gravity, flood fill, ...
Search: try each primitive. Then try pairs. Then try each with color mapping.
Cost: $0. Runs in your browser. No training data.

Stage 1: SANDPILE — 32 primitives, depth-1 + depth-2 composition
Stage 2: SPLIT+XOR — CRT decompose-recombine (the axiom solver)
Stage 3: LEVEL-K — neighborhood pattern learning
Stage 4: CONSTANT — output doesn't depend on input

Current score: 61/800 (7.6%). The interesting part: split+xor alone accounts for 77% of the correctly solved eval tasks. CRT decompose-recombine dominates.

This is NOT about beating GPT-4. It's about proving that STRUCTURE beats SCALE. 32 primitives vs 175 billion parameters.
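The composition search described above can be sketched as follows. This is an illustrative toy, not the solver's actual code (which runs in the browser): the four grid functions stand in for the real 32-primitive catalog, and all names (`PRIMITIVES`, `fits`, `search`) are hypothetical.

```python
import itertools

# Toy stand-ins for the primitive catalog. Grids are tuples of tuples of ints.
def flip_h(g):    return tuple(row[::-1] for row in g)   # mirror left-right
def flip_v(g):    return g[::-1]                          # mirror top-bottom
def rot90(g):     return tuple(zip(*g[::-1]))             # rotate clockwise
def transpose(g): return tuple(zip(*g))

PRIMITIVES = [flip_h, flip_v, rot90, transpose]

def fits(fn, examples):
    """True if fn maps every training input to its training output."""
    return all(fn(inp) == out for inp, out in examples)

def search(examples):
    """Depth-1, then depth-2 composition search over the primitives.
    Returns the program as a list of functions, or None."""
    for p in PRIMITIVES:                                      # depth 1
        if fits(p, examples):
            return [p]
    for p, q in itertools.product(PRIMITIVES, repeat=2):      # depth 2
        if fits(lambda g: q(p(g)), examples):
            return [p, q]
    return None
```

With 32 primitives this is 32 + 32² = 1056 candidate programs per puzzle before color-mapping variants, which is why the whole search is cheap enough to run client-side.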
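The LEVEL-K stage ("neighborhood pattern learning") can be sketched as a lookup table from each cell's (2k+1)x(2k+1) input neighborhood to its output color. This is an assumed reading of the stage name, not the project's code; `-1` as out-of-bounds padding and the copy-through fallback are my choices.

```python
def neighborhood(grid, r, c, k):
    """The (2k+1)x(2k+1) window around (r, c), flattened; -1 pads edges."""
    h, w = len(grid), len(grid[0])
    return tuple(
        grid[rr][cc] if 0 <= rr < h and 0 <= cc < w else -1
        for rr in range(r - k, r + k + 1)
        for cc in range(c - k, c + k + 1)
    )

def learn_levelk(examples, k=1):
    """Build neighborhood -> output-color table from same-shape train pairs.
    Returns None if two identical neighborhoods demand different outputs."""
    table = {}
    for inp, out in examples:
        for r in range(len(inp)):
            for c in range(len(inp[0])):
                key = neighborhood(inp, r, c, k)
                if table.get(key, out[r][c]) != out[r][c]:
                    return None          # conflicting evidence: rule fails
                table[key] = out[r][c]
    return table

def apply_levelk(table, inp, k=1):
    """Rewrite each cell via the learned table; unseen patterns copy through."""
    return [
        [table.get(neighborhood(inp, r, c, k), inp[r][c])
         for c in range(len(inp[0]))]
        for r in range(len(inp))
    ]
```

The conflict check is what keeps this stage honest: if the training pairs are not a function of the local neighborhood, it abstains instead of guessing.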
CC0 1.0 Universal - No Rights Reserved. ARC-AGI solver is public domain prior art.
32 primitives. Composition search. Zero training data. Zero neural networks.
antonlebed.com | CC0 License