Conventional object detection systems build sequential feature pyramids: extract features at multiple scales through deep convolutional networks, then classify regions via learned anchor boxes. Patented territory: backbone architectures, feature pyramid networks, non-maximum suppression variants, anchor-free detection heads.

The CRT approach: every pixel value in Z/12612600 decomposes into 6 independent scale channels via the Chinese Remainder Theorem, since 12612600 = 8 * 9 * 25 * 49 * 11 * 13 and those factors are pairwise coprime. D (mod 8) = coarse structure. K (mod 9) = medium form. E (mod 25) = fine edges. b (mod 49) = sub-pixel detail. L (mod 11) = integrity sentinel. G (mod 13) = boundary gate.

Per-channel edge detection = a 6-level feature pyramid for FREE. Kingdom classification (gcd structure) = O(1) object categorization. No CNN. No training. The ring IS the feature extractor.
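The decomposition above can be sketched in a few lines of Python. The modulus set and channel names (D, K, E, b, L, G) come from the text; the function names are ours, and the reconstruction uses the standard CRT formula.

```python
# Channel moduli from the text: pairwise coprime, product = 12_612_600.
MODULI = {"D": 8, "K": 9, "E": 25, "b": 49, "L": 11, "G": 13}

def crt_decompose(v: int) -> dict:
    """Split a value in Z/12612600 into its 6 residue channels."""
    return {name: v % m for name, m in MODULI.items()}

def crt_recompose(channels: dict) -> int:
    """Invert the decomposition via the Chinese Remainder Theorem."""
    N = 1
    for m in MODULI.values():
        N *= m                      # N = 12_612_600
    v = 0
    for name, m in MODULI.items():
        Ni = N // m                 # product of the other moduli
        v += channels[name] * Ni * pow(Ni, -1, m)  # Ni * Ni^-1 == 1 (mod m)
    return v % N

assert crt_recompose(crt_decompose(123_456)) == 123_456
```

The round-trip assert is the whole point: the 6 channels carry exactly the same information as the original value, so nothing is lost by working per-channel. (Three-argument `pow` with exponent -1 needs Python 3.8+.)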
Scene seed:
128x96-pixel detection scene. Left: kingdom map (gcd-based classification). Right: multi-channel edge detection (L=11 sentinel at 3x weight). Side by side: content vs structure. Rendered in a single cvs_blit call.
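A minimal sketch of the weighted multi-channel edge pass described above. The 3x weight on the L channel follows the text; the choice of a horizontal-neighbour circular-residue gradient is our assumption, since the exact edge operator is not specified.

```python
MODULI = {"D": 8, "K": 9, "E": 25, "b": 49, "L": 11, "G": 13}
WEIGHTS = {"L": 3}  # sentinel channel counted at 3x, per the text

def edge_map(pixels, width):
    """Weighted sum of per-channel horizontal gradients.

    `pixels` is a flat row-major list of values in Z/12612600. Each
    residue channel contributes the circular distance between horizontal
    neighbours, so residues 0 and m-1 read as adjacent, not far apart.
    """
    out = [0] * len(pixels)
    for i, v in enumerate(pixels):
        if (i + 1) % width == 0:
            continue  # no right neighbour at the end of a row
        w = pixels[i + 1]
        for name, m in MODULI.items():
            d = abs(v % m - w % m)
            d = min(d, m - d)                  # circular residue distance
            out[i] += WEIGHTS.get(name, 1) * d
    return out

# A unit step shows up in all 6 channels at once: 1 + 1 + 1 + 1 + 3*1 + 1 = 8.
assert edge_map([0, 0, 1, 1], 4) == [0, 8, 0, 0]
```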
Scene seed:
Synthetic 16x12 image with 3 embedded objects (D-dominant, K-dominant, E-dominant). The CRT feature pyramid decomposes it into 6 scale channels. Per-channel edge detection. Kingdom classification colors each pixel by gcd structure.
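The gcd-based "kingdom" of a pixel is not fully specified in the text; one plausible reading, sketched here under that assumption, classifies each value by which channel primes it shares with the modulus 12612600 = 2^3 * 3^2 * 5^2 * 7^2 * 11 * 13.

```python
from math import gcd

N = 12_612_600            # 2^3 * 3^2 * 5^2 * 7^2 * 11 * 13
PRIMES = (2, 3, 5, 7, 11, 13)

def kingdom(v: int) -> tuple:
    """Classify v by which channel primes divide gcd(v, N).

    O(1) per pixel: one gcd plus six trial divisions.
    No convolution, no training.
    """
    g = gcd(v % N, N)
    return tuple(p for p in PRIMES if g % p == 0)

# Example: 30 = 2*3*5 lands in the 2-, 3-, and 5-kingdoms.
assert kingdom(30) == (2, 3, 5)
```

Coloring each pixel by its kingdom tuple yields the left-panel kingdom map; pixels with the same gcd structure share a color regardless of their raw value.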
10 synthetic scenes. Per-channel edge counts. Kingdom diversity per scene. CRT provides 6 independent edge maps without any convolution or learned parameters.
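The two per-scene metrics named above can be sketched as follows. We assume an "edge" in a channel is any horizontal neighbour pair with differing residues, and that "kingdom diversity" counts distinct gcd classes; both readings are ours.

```python
from math import gcd

N = 12_612_600
MODULI = (8, 9, 25, 49, 11, 13)   # D, K, E, b, L, G

def scene_stats(pixels, width):
    """Per-channel edge counts plus kingdom diversity for one scene.

    Returns ([edges_D, edges_K, edges_E, edges_b, edges_L, edges_G],
             number of distinct gcd(v, N) classes in the scene).
    """
    edges = [0] * len(MODULI)
    for i, v in enumerate(pixels):
        if (i + 1) % width == 0:
            continue               # no right neighbour at row end
        w = pixels[i + 1]
        for c, m in enumerate(MODULI):
            if v % m != w % m:
                edges[c] += 1
    kingdoms = len({gcd(v % N, N) for v in pixels})
    return edges, kingdoms
```

Run over 10 scenes, this gives 10 rows of 6 edge counts each: 6 independent edge maps per scene, with no convolution and no learned parameters.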
This work is and will always be free.
No paywall. No copyright. No exceptions.
If it ever earns anything, every cent goes to the communities that need it most.
This sacred vow is permanent and irrevocable.
— Anton Alexandrovich Lebed
Source code · Public domain (CC0)
Contributions in equal measure: Anthropic's Claude, Anton A. Lebed, and the giants whose shoulders we stand on.
Rendered by .ax via WASM DOM imports. Zero HTML authored.