Federated learning trains models across distributed participants without sharing raw data. Standard FL aggregates full model updates: expensive, and gradients can leak training data. CRT FL: each participant trains ONE channel; aggregation is just CRT reconstruction. L=11 channels = free poison detection. Block-diagonal structure = ZERO cross-participant gradient leakage. Privacy by algebra, not by protocol.
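The one-channel-per-participant split can be sketched in a few lines. The moduli and the weight value below are illustrative assumptions, not the demo's actual parameters:

```python
# Residue split sketch (assumption: moduli and weight scale are illustrative;
# the demo's actual values are not given in the text).
MODULI = [7, 11, 13, 17, 19, 23]   # pairwise coprime, one channel each

true_w = 123456                     # the global "true weight", as an integer
channels = [true_w % m for m in MODULI]
# Each participant holds ONE residue. Alone, a residue pins w down only
# modulo m -- that is the algebraic privacy claim in miniature.
print(channels)  # -> [4, 3, 8, 2, 13, 15]
```

A single channel never sees the other residues, so no protocol-level masking is needed to keep them apart.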
Seed (sets true weight):
6 participants each learn their channel's weight from private data. CRT reconstructs the global model. All 6 match = exact reconstruction.
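The reconstruction step above is plain CRT. A minimal sketch, assuming illustrative moduli and an integer-valued weight (the demo's actual values are not stated):

```python
from math import prod

# Minimal CRT aggregation sketch. MODULI and true_w are illustrative
# assumptions, not the demo's actual parameters.
MODULI = [7, 11, 13, 17, 19, 23]   # one pairwise-coprime modulus per participant

def crt_reconstruct(residues, moduli):
    """Recover the unique x mod prod(moduli) matching every channel residue."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse (3.8+)
    return x % M

true_w = 123456                        # the seeded "true weight"
residues = [true_w % m for m in MODULI]  # one residue per participant
assert crt_reconstruct(residues, MODULI) == true_w  # exact reconstruction
```

Reconstruction is exact whenever the true weight is below the product of the moduli, which is why all channels agreeing implies an exact global model.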
Injects 80% corrupted data into participant L. Compare clean vs poisoned weights per channel.
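One way a poisoned channel can surface, as a hedged sketch: assume illustrative moduli, a hypothetical bound `W` that every honest weight fits under, and an arbitrary corrupted residue (none of these specifics appear in the demo text):

```python
from math import prod

# Poison-detection sketch. MODULI, W, and the corrupted residue are all
# illustrative assumptions, not the demo's actual parameters.
MODULI = [7, 11, 13, 17, 19, 23]
W = 2**17   # hypothetical bound: honest weights assumed to fit below this

def crt_reconstruct(residues, moduli):
    M = prod(moduli)
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(residues, moduli)) % M

true_w = 123456
clean = [true_w % m for m in MODULI]

poisoned = list(clean)
poisoned[2] = 0   # this channel's participant trained on corrupted data

# Clean residues reconstruct below the bound; a corrupted residue throws the
# reconstruction far outside it, making the poisoning visible in aggregate.
assert crt_reconstruct(clean, MODULI) == true_w
assert crt_reconstruct(poisoned, MODULI) >= W   # poisoning detected

# Leave-one-out localization: dropping the poisoned channel restores a
# plausible (sub-bound) weight.
loo = crt_reconstruct(poisoned[:2] + poisoned[3:], MODULI[:2] + MODULI[3:])
assert loo == true_w
```

Comparing clean versus poisoned reconstructions per channel, as the demo does, is exactly this kind of consistency check applied channel by channel.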
This work is and will always be free.
No paywall. No copyright. No exceptions.
If it ever earns anything, every cent goes to the communities that need it most.
This sacred vow is permanent and irrevocable.
— Anton Alexandrovich Lebed
Source code · Public domain (CC0)
Contributions in equal measure: Anthropic's Claude, Anton A. Lebed, and the giants whose shoulders we stand on.
Rendered by .ax via WASM DOM imports. Zero HTML authored.