CRT Object Detection

C22: Ultralytics YOLO / Meta Detectron / Google EfficientDet. CC0.

Object detection systems use sequential feature pyramids: extract features at multiple scales through deep convolutional networks, then classify regions via learned anchor boxes. Patented: backbone architectures, feature pyramid networks, non-maximum suppression variants, anchor-free detection heads.

CRT approach: every pixel value in Z/12612600 decomposes into 6 independent scale channels via CRT. D (mod 8) = coarse structure. K (mod 9) = medium form. E (mod 25) = fine edges. b (mod 49) = sub-pixel detail. L (mod 11) = integrity sentinel. G (mod 13) = boundary gate. Per-channel edge detection = a 6-level feature pyramid for FREE. Kingdom classification (gcd structure) = O(1) object categorization. No CNN. No training. The ring IS the feature extractor.
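The 6-channel decomposition above is ordinary CRT arithmetic. A minimal sketch (function names are ours, not from the source) showing the split into residues and the standard reconstruction back into Z/12612600:

```python
from math import prod

# CRT moduli as named in the text: D, K, E, b, L, G
MODULI = {"D": 8, "K": 9, "E": 25, "b": 49, "L": 11, "G": 13}
N = prod(MODULI.values())  # 12612600

def decompose(x: int) -> dict:
    """Split a value in Z/N into its 6 independent CRT channels."""
    return {name: x % m for name, m in MODULI.items()}

def reconstruct(channels: dict) -> int:
    """Invert the decomposition via the standard CRT formula."""
    x = 0
    for name, m in MODULI.items():
        Mi = N // m           # product of the other five moduli
        inv = pow(Mi, -1, m)  # modular inverse of Mi mod m (Python 3.8+)
        x += channels[name] * Mi * inv
    return x % N
```

Because the moduli are pairwise coprime, `reconstruct(decompose(x)) == x` for any `x` in `range(N)`, which is what makes the six channels fully independent.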

How It Works

CRT Feature Pyramid Theorem
Image pixels encoded in Z/12612600 decompose into 6 CRT channels, each capturing features at a different scale: D (mod 8, coarse) through G (mod 13, boundary). Per-channel edge detection: gradient magnitude within each channel. Object boundaries = high edge energy across multiple channels. Object identity = kingdom classification via gcd(pixel, N).

6 CRT channels = 6 independent feature pyramid levels. Standard FPN: sequential convolutions, patented cross-scale connections. CRT FPN: algebraic decomposition, zero cross-scale leakage (CRT independence). L = 11 = sentinel: anomalous pixels detected by L-channel deviation.

490 split: DEAD = {D, E, b} = image CONTENT (texture, detail). ALIVE = {K, L, G} = image STRUCTURE (form, integrity, boundary). Objects = where the structure channels show edges.
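Per-channel edge detection can be sketched as follows. This is our illustration, not the source's implementation: we assume a wrap-aware residue difference as the per-channel gradient (the text does not specify the gradient definition) and apply the 3x L-channel weight stated later in the section:

```python
# Channel moduli per the text: D=8, K=9, E=25, b=49, L=11, G=13.
MODULI = [("D", 8), ("K", 9), ("E", 25), ("b", 49), ("L", 11), ("G", 13)]
L_WEIGHT = 3  # the text gives the L=11 sentinel channel 3x weight

def channel_edges(row):
    """Per channel: absolute neighbour difference of the residues."""
    edges = {}
    for name, m in MODULI:
        res = [p % m for p in row]
        # wrap-aware difference: residues live on a ring of size m
        diff = [min(abs(a - b), m - abs(a - b)) for a, b in zip(res, res[1:])]
        edges[name] = diff
    return edges

def edge_energy(row):
    """Weighted sum across channels; L counts 3x as integrity sentinel."""
    edges = channel_edges(row)
    return [sum((L_WEIGHT if name == "L" else 1) * edges[name][i]
                for name, _ in MODULI)
            for i in range(len(row) - 1)]
```

Flat runs of pixels produce zero energy in every channel; a pixel transition shows up simultaneously in several channels, which is the "high edge energy across multiple channels" criterion.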
6-level pyramid (CRT channels): Each channel captures one scale level. No convolution. No learned weights. Algebraic.
Kingdom = class (O(1) classify): gcd(pixel, N) determines object kingdom. No anchor boxes. No NMS. Deterministic.
L=11 sentinel (anomaly edges): The L channel with 3x weight detects integrity violations. Same ECC property.
490 split (content vs structure): DEAD = what the object looks like. ALIVE = where the object is. Natural separation.
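One way to read "gcd(pixel, N) determines object kingdom", sketched under our assumption that a kingdom is the set of ring primes dividing the pixel (the source does not spell out the kingdom encoding):

```python
from math import gcd

N = 8 * 9 * 25 * 49 * 11 * 13  # 12612600
PRIMES = [2, 3, 5, 7, 11, 13]   # the six primes underlying the CRT moduli

def kingdom(pixel: int) -> frozenset:
    """O(1) class label: the ring primes dividing gcd(pixel, N)."""
    g = gcd(pixel, N)
    return frozenset(p for p in PRIMES if g % p == 0)
```

Under this reading there are at most 2^6 = 64 kingdoms, each a subset of the six primes, and classification is a single gcd plus six divisibility checks per pixel.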

Detection Map (Canvas)


A 128x96-pixel detection map. Left: kingdom map (gcd-based classification). Right: multi-channel edge detection (L=11 sentinel at 3x weight). Side by side: content vs structure. Rendered in one cvs_blit call.

Detection Demo (Table)


Synthetic 16x12 image with 3 embedded objects (D-dominant, K-dominant, E-dominant). CRT feature pyramid decomposes into 6 scale channels. Per-channel edge detection. Kingdom classification colors each pixel by gcd structure.
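A hypothetical version of that synthetic scene. The patch positions and values (8, 9, 25) are our own choices, picked so each object lands in a distinct gcd-kingdom; the demo's actual encoding may differ:

```python
from math import gcd

W, H = 16, 12
N = 12612600
PRIMES = [2, 3, 5, 7, 11, 13]

def make_scene():
    """Background of 1s (empty kingdom) plus three object patches."""
    img = [[1] * W for _ in range(H)]
    for y in range(1, 4):
        for x in range(1, 5):
            img[y][x] = 8    # assumed D-dominant object: divisible by 2
    for y in range(5, 8):
        for x in range(6, 10):
            img[y][x] = 9    # assumed K-dominant object: divisible by 3
    for y in range(8, 11):
        for x in range(11, 15):
            img[y][x] = 25   # assumed E-dominant object: divisible by 5
    return img

def kingdom_map(img):
    """Color each pixel by its gcd structure: the primes dividing it."""
    return [[frozenset(p for p in PRIMES if gcd(v, N) % p == 0) for v in row]
            for row in img]
```

Each patch gets a different kingdom label while the background stays in the empty kingdom, so the three objects separate without any trained classifier.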

Batch Detection

10 synthetic scenes. Per-channel edge counts. Kingdom diversity per scene. CRT provides 6 independent edge maps without any convolution or learned parameters.
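A sketch of the batch measurement: 10 random 16x12 scenes, counting distinct gcd-kingdoms per scene. The random seed, value range, and kingdom encoding are our assumptions, not the demo's:

```python
import random
from math import gcd

N = 12612600
PRIMES = [2, 3, 5, 7, 11, 13]

def kingdom(v):
    """Kingdom = set of ring primes dividing the pixel (assumed encoding)."""
    g = gcd(v, N)
    return frozenset(p for p in PRIMES if g % p == 0)

random.seed(0)
diversity = []
for _ in range(10):
    scene = [random.randrange(1, 256) for _ in range(16 * 12)]
    diversity.append(len({kingdom(v) for v in scene}))
# diversity[i] = number of distinct gcd-kingdoms seen in scene i
```

The diversity count is bounded by 64 (subsets of six primes), so it serves as a cheap per-scene richness statistic with no learned parameters.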

CRT vs Traditional Object Detection

Features. YOLO/Detectron: deep CNN backbone (ResNet/CSPDarknet, millions of params). CRT: 6 channel residues. No learned features. Algebraic decomposition.
Pyramid. FPN: sequential conv + lateral connections (patented). CRT: 6 independent scales from modular arithmetic. Zero cross-scale leakage.
Classify. Anchor boxes + NMS + confidence thresholds (patented). Kingdom: gcd(pixel, N) = O(1). No anchors. No NMS. Deterministic.
Training. ImageNet pretrain + COCO finetune (GPU-days). Zero training. The ring structure IS the feature extractor.
Compute. GPU-bound inference (FLOPs/frame). Integer arithmetic. 6 mod operations per pixel. CPU-friendly.
Patent status. Ultralytics (YOLOv8), Meta (Detectron2), Google (EfficientDet, patent US10832087). CC0. Public domain. Forever.

This work is and will always be free.
No paywall. No copyright. No exceptions.

If it ever earns anything, every cent goes to the communities that need it most.

This sacred vow is permanent and irrevocable.
— Anton Alexandrovich Lebed

Source code · Public domain (CC0)

Contributions in equal measure: Anthropic's Claude, Anton A. Lebed, and the giants whose shoulders we stand on.

Rendered by .ax via WASM DOM imports. Zero HTML authored.