Sats for compute.
An open protocol that uses Lightning micropayments to coordinate decentralized AI work — model training and autoresearch bounties. Contribute compute or optimize software; the protocol verifies your work is useful, and you get paid — instantly, in bitcoin, with no permission required.
Training AI models today requires massive server farms that only a handful of companies can afford. Meanwhile, millions of computers sit idle — laptops, Mac Minis, gaming PCs, mining rigs with no profitable coins left to mine. And every piece of production software could be better — faster, more accurate, more efficient — if someone would just spend the compute to optimize it.
Existing "decentralized AI" projects (Bittensor, etc.) require you to buy and stake their token, navigate opaque reward systems where the rich earn more regardless of contribution quality, and hope the token doesn't crash.
What if you could just run some software and earn bitcoin?
Install the software on any computer you have — a laptop, Mac Mini, gaming PC, old mining rig. It uses your spare processing power to help train AI models while you're not using it. You get paid in bitcoin, automatically.
Download the software and start it up. It handles everything behind the scenes — setting up a Lightning wallet, connecting to the network, and finding work. No configuration, no accounts, no sign-ups. Just install and go.
While your machine is idle, the software picks up training tasks automatically. It downloads a piece of an AI model, improves it using your processing power, and sends the results back. You don't need to know anything about AI — the software handles all of it. Your normal computer use always takes priority.
A coordinator server verifies every contribution: did this actually make the model better? This isn't a lottery — if your computer did real work, it will pass. The check is deterministic and transparent, meaning anyone can verify the results independently.
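A minimal sketch of what a deterministic, replayable check can look like (the loss function and parameters are illustrative, not the protocol's actual API): the coordinator re-evaluates the submitted update on a fixed evaluation set and accepts only if the loss went down. Anyone holding the same parameters and evaluation set computes the same answer.

```python
def evaluate_loss(model_params, eval_set):
    """Illustrative loss: mean squared error of a tiny linear model."""
    w, b = model_params
    return sum((w * x + b - y) ** 2 for x, y in eval_set) / len(eval_set)

def verify_contribution(old_params, new_params, eval_set):
    """Deterministic check: accept iff the update lowered eval loss.
    No randomness, so any third party can replay it and get the same result."""
    old_loss = evaluate_loss(old_params, eval_set)
    new_loss = evaluate_loss(new_params, eval_set)
    return new_loss < old_loss, old_loss - new_loss

# Toy eval set for the target function y = 2x; a real coordinator would
# use held-out training data.
eval_set = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
passed, improvement = verify_contribution((1.5, 0.0), (1.9, 0.0), eval_set)
```

The point is the shape of the check, not the model: pass/fail is a pure function of the submitted weights and public evaluation data.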
Pass verification and you get paid instantly via Lightning. More compute, more sats — better hardware earns more. If a contribution doesn't pass, the payment refunds automatically. Nobody can steal your earnings — this is enforced by Lightning hold invoices, not trust.
The same L402 payment infrastructure, hold invoice escrow, and coordinator validation power both modes. Your computer does the work, the coordinator verifies it, Lightning settles payment.
Your GPU trains a piece of an AI model, compresses the result, and submits it for validation. The coordinator checks whether your work actually improved the model. If it did, you get paid proportional to the improvement. Requires a GPU or an Apple Silicon machine with 16+ GB of memory.
Sponsors post bounties: "improve this metric, earn sats." AI agents on your machine download the baseline, run autonomous experiments, and submit improvements. The coordinator validates against a held-out test set and pays proportional to improvement. Works on any computer — no GPU required. Anything with a quantifiable metric can be a bounty: classification accuracy, latency, prompt quality, code performance.
Training is the hard technical problem. Autoresearch bounties are the scalable product — with an essentially unbounded addressable market. Read the whitepaper for the full design.
The training loop runs in ~70-second rounds. You need a payment system that can keep up — settling rewards faster than the work cycle, with fees low enough that micropayments make sense. Lightning is the only system that fits.
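Some back-of-the-envelope arithmetic makes the constraint concrete (the per-round payout and fee figures below are illustrative assumptions, not protocol rates):

```python
ROUND_SECONDS = 70
SECONDS_PER_DAY = 86_400

# A peer that works all day settles over a thousand payments.
rounds_per_day = SECONDS_PER_DAY // ROUND_SECONDS

example_payout_sats = 100     # hypothetical per-round reward
onchain_fee_sats = 2_000      # illustrative on-chain tx fee
lightning_fee_sats = 1        # illustrative Lightning routing fee

# On-chain, the fee is a multiple of the reward; on Lightning it is ~1%.
onchain_overhead = onchain_fee_sats / example_payout_sats
lightning_overhead = lightning_fee_sats / example_payout_sats
```

At over a thousand settlements per peer per day, any system with multi-second finality or fees above a few sats makes per-round payment impossible.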
                 Bittensor TAO           l402-train
───────────────  ──────────────────────  ────────────────────────────────
Settlement       ~12s consensus          <500ms (Lightning)
Entry barrier    Stake thousands of $    ~$10 channel open
Who gets paid?   Mostly big stakers      Whoever does useful work
Transparency     Opaque scoring          Anyone can replay the validation
Identity         Wallet + staking        None required
You earn         A speculative token     Bitcoin
Governance       Token holder votes      None — open protocol
Hold invoices are the key primitive. The coordinator locks payment when you submit work, and can only release it if your contribution passes validation. If it fails, funds return automatically via timeout. The coordinator cannot steal funds and cannot withhold earned payment.
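Underneath a hold invoice is a plain hashlock. The sketch below shows only that primitive (illustrative Python, not lnd's API; which party holds the preimage is a protocol design choice): payment can settle only when a preimage matching the invoice's payment hash is revealed, and an in-flight payment that never sees the preimage times out and returns to the payer.

```python
import hashlib

# The invoice commits to the SHA-256 hash of a secret preimage.
# (Secret chosen deterministically here only so the example is reproducible.)
preimage = hashlib.sha256(b"illustrative-secret").digest()
payment_hash = hashlib.sha256(preimage).hexdigest()

def can_settle(claimed_preimage: bytes, payment_hash: str) -> bool:
    """A hold invoice settles only if the revealed preimage hashes to the
    invoice's payment hash. Every Lightning node on the route enforces
    this check; no single operator can override it."""
    return hashlib.sha256(claimed_preimage).hexdigest() == payment_hash
```

In practice the payment sits in an accepted-but-unsettled state while validation runs; on a pass the preimage is released and the payment settles, and on a fail the hashlock simply expires and funds refund.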
Each peer opens a Lightning channel to the coordinator. Submission fees flow one direction, rewards flow the other — channels naturally rebalance. The coordinator sits behind an Aperture L402 reverse proxy.
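A minimal client-side sketch of the L402 handshake an Aperture-style proxy drives (header values are made up, and real peers would use a Lightning client library rather than hand-rolled parsing): the proxy answers an unpaid request with HTTP 402 and a `WWW-Authenticate: L402` challenge carrying a macaroon and a Lightning invoice; after paying, the client presents the macaroon plus the payment preimage as proof.

```python
import re

def parse_l402_challenge(www_authenticate: str) -> dict:
    """Parse the 402 challenge header, which per the L402 (formerly LSAT)
    spec has the shape:
      WWW-Authenticate: L402 macaroon="<base64>", invoice="<bolt11>"
    """
    m = re.match(r'L402 macaroon="([^"]+)", invoice="([^"]+)"', www_authenticate)
    if not m:
        raise ValueError("not an L402 challenge")
    return {"macaroon": m.group(1), "invoice": m.group(2)}

def l402_authorization(macaroon: str, preimage_hex: str) -> str:
    """The preimage obtained by paying the invoice proves payment."""
    return f"L402 {macaroon}:{preimage_hex}"

# Illustrative values, not real credentials.
challenge = parse_l402_challenge(
    'L402 macaroon="AGIAJE...", invoice="lnbc10n1..."'
)
auth_header = l402_authorization(challenge["macaroon"], "deadbeef" * 8)
```

The macaroon scopes what the payment entitles the client to; the preimage ties that entitlement to a settled Lightning payment.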
The coordinator is centralized, and that's deliberate. But its power is strictly limited by Lightning itself.
The coordinator cannot steal your funds. When you submit work, your reward is locked in a Lightning hold invoice. If your work passes verification, the payment releases to you automatically. If it doesn't, the payment refunds automatically. The coordinator never has custody of your bitcoin.
The coordinator cannot withhold earned payment. Once your work passes the deterministic quality check, the hold invoice settles. This is enforced at the Lightning protocol level — it's not a policy, it's math.
The worst a bad coordinator can do is reject valid work — in which case you get refunded (minus the small submission fee) and take your compute elsewhere. Think of coordinators like mining pools: centralized operators, but competitive and replaceable. Don't like one? Work for a different one.
Federated validation — where multiple independent validators must agree before payment settles — is on the roadmap to reduce even this remaining trust.
Single-machine simulation: training, compression, validation, and hold invoice settlement all running on one box with regtest Lightning. Prove the full loop before adding networking.
Split into coordinator and peer processes. Real HTTP communication, real L402 payment gating via Aperture. Submit work, get paid — two separate programs talking over the network.
Coordinator on a VPS, peer on local hardware, communicating over the real internet with real Lightning payments. Testnet first, then mainnet.
Multiple peers — some honest, some trying to cheat (submitting garbage, copying others' work, poisoning the model). Prove the protocol catches bad actors and only pays for real contributions.
A parallel track built on the same L402 infrastructure. Anyone posts a bounty: "improve this metric, earn sats." AI agents compete, the coordinator validates against a held-out test set, payment proportional to improvement. No GPU required — runs on any computer.
Full protocol design: how the payment flow works, coordinator trust model, economics, and security analysis.
Technical breakdown of the largest decentralized training run to date — what worked, what didn't, and what we're building on.
Game theory behind the protocol: why hold invoices work for escrow, how to catch cheaters, comparison with Bittensor's token economics.
L402 protocol mechanics, channel capacity math, Lightning Labs' agent tools, and why Lightning beats every alternative for this use case.
How federated learning (Google, Apple) differs from decentralized training, gradient privacy implications, and where l402-train sits on the trust spectrum.
Cloud GPU pricing, consumer hardware operating costs, Bittensor miner economics, break-even analysis, and Bitcoin mining comparison.
Critical survey of 12 projects: what actually shipped vs. testnet-stage vs. narrative. Prime Intellect, Bittensor, Gensyn, Together AI, and more.
What your computer can actually train: hardware tiers, benchmarks, MLX vs CUDA, power/heat/noise, and how background training works.
The Lightning-native API payment ecosystem: Aperture, Lightning Agent Tools, Fewsats, client libraries, x402 comparison, and how l402-train extends L402 bidirectionally.
L402 micropayments for AI inference: live services, unit economics (99%+ gross margin), why Lightning beats credit cards for sub-cent transactions, and the autoresearch compute market.