Implementation Plan

Research prototype — proving the core thesis with real code, real payments, and real numbers.

Related: Whitepaper

Project Philosophy

This is a research prototype, not a production system. The goal is to prove the core thesis — "Lightning micropayments can coordinate quality-verified compute contributions" — with real code, real payments, and real numbers. Start small, validate incrementally, publish results.

Together AI proved decentralized training could work at meaningful scale — then abandoned it because centralized infrastructure was a better business. The thesis of this project is that Lightning micropayments change the equation: per-contribution payment granularity, near-zero transaction costs, and no token overhead make the economics work where token-based systems failed.

The protocol has two modes that share the same L402 infrastructure: training coordination (gradient exchange with quality-proportional payment) and autoresearch bounties (AI agents compete to optimize any quantifiable metric, paid per validated improvement). Training is the hard technical problem that proves the protocol. Autoresearch bounties are the scalable product — they require no GPU, run on any hardware, and have an essentially unbounded addressable market. Both are developed in parallel.


Two Tracks, Shared Infrastructure

| | Track A: Training | Track B: Autoresearch |
| --- | --- | --- |
| What | Decentralized model training with gradient exchange | AI agents compete to optimize anything with a metric |
| Hardware | GPU / Apple Silicon (16+ GB VRAM) | Any computer that can run a coding agent |
| Coordination | Synchronized ~70s rounds, SparseLoCo compression | Fully independent — agents never coordinate |
| Verification | Gradient quality scoring (loss delta) | Deterministic: did the held-out metric improve? |
| Shared infra | L402 payment gating, hold invoice escrow, coordinator validation, Lightning settlement | (same) |
| Phases | 0 → 1 → 2 → 3 | B0 → B1 → B2 (starts at Phase 1) |

Track B starts as soon as Phase 1’s L402 infrastructure is working. The bounty coordinator is a simpler application of the same payment flow — no gradient compression, no model checkpoints, just "submit a diff, validate against held-out eval, pay for improvements." This means the autoresearch product can ship months before multi-peer training is battle-tested.


TRACK A: TRAINING

Phase 0: Local End-to-End Loop

Timeline: 2 weeks

Goal: Single-machine simulation running the complete protocol loop: local training → gradient compression → validation scoring → payment settlement. All on the MacBook with regtest Lightning.

Why this first: Before involving any networking, peers, or real money, prove the software architecture works end-to-end. Get a tight eval loop running fast.

Components

  1. sparseloco.py — SparseLoCo compression in MLX
    • Top-k sparsification (k=64 per chunk of 4096)
    • 2-bit quantization of selected values
    • Index encoding
    • Error feedback buffer (decay=0.95)
    • Test on Qwen2.5-0.5B. Train locally for 30 steps, compute pseudo-gradient (weight diff), compress, decompress, verify round-trip fidelity
    • Metric: compression ratio achieved + loss degradation from compress/decompress round-trip vs dense gradient
  2. validator.py — Gauntlet-style loss scoring
    • Take compressed gradient, decompress, apply to model checkpoint
    • Measure loss on held-out validation batch before and after
    • Output: quality score (loss delta) normalized against baseline
    • Pure function: f(checkpoint, gradient, val_data) → score. Deterministic replay is free
  3. Regtest Lightning — Two LND nodes in Docker via lightning-agent-tools
    • Coordinator node + simulated peer node (Tier 2 security — local keys, restricted perms)
    • Create payment channel between them
    • Test: issue hold invoice → pay → settle on validation pass / expire on fail
  4. protocol_sim.py — Single-machine protocol loop
    for round in range(N):
      1. Peer trains locally for 30 steps (MLX)
      2. Peer compresses pseudo-gradient (sparseloco.py)
      3. Coordinator issues hold invoice for reward
      4. Peer "submits" gradient (local function call, no HTTP)
      5. Coordinator validates (validator.py) → quality_score
      6. If quality_score > threshold: settle hold invoice
      7. Else: let hold invoice expire
      8. Log: round, quality_score, payment_settled, compression_ratio
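The compression path in sparseloco.py can be sketched as follows. This is a NumPy sketch for brevity (the MLX version would use the analogous `mx.array` ops); the chunk size, k, and error-feedback decay come from the plan, while the 2-bit quantization and index-encoding steps are omitted. It assumes the gradient length is divisible by the chunk size.

```python
import numpy as np

CHUNK, K, DECAY = 4096, 64, 0.95  # chunk size, survivors per chunk, EF decay

def compress(grad, buf):
    """Top-k sparsification with error feedback (quantization omitted).
    Assumes grad.size is a multiple of CHUNK."""
    g = (grad + buf).reshape(-1, CHUNK)    # fold accumulated error back in
    idx = np.argpartition(np.abs(g), -K, axis=1)[:, -K:]   # top-k per chunk
    vals = np.take_along_axis(g, idx, axis=1)
    dense = np.zeros_like(g)
    np.put_along_axis(dense, idx, vals, axis=1)
    new_buf = DECAY * (g - dense).reshape(grad.shape)      # what top-k dropped
    return (idx, vals), new_buf

def decompress(idx, vals, shape):
    """Rebuild a dense gradient from the surviving (index, value) pairs."""
    dense = np.zeros((idx.shape[0], CHUNK))
    np.put_along_axis(dense, idx, vals, axis=1)
    return dense.reshape(shape)
```

Keeping K of every CHUNK values retains 64/4096 ≈ 1.6% of entries, roughly a 64x sparsification before quantization; the error buffer carries the dropped mass into the next round so it is not lost.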

Economic Benchmarking

Phase 0 also establishes baseline economics. Measure actual performance and power draw against the break-even analysis:
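The electricity-only break-even reduces to one formula, sketched here with the assumptions used elsewhere in this plan (US average $0.16/kWh, BTC at $70,000):

```python
def break_even_sats_per_hr(watts: float, usd_per_kwh: float = 0.16,
                           btc_usd: float = 70_000) -> float:
    """Sats/hr a peer must earn to cover its marginal electricity cost."""
    usd_per_hr = watts / 1000 * usd_per_kwh
    return usd_per_hr / btc_usd * 100_000_000  # 1 BTC = 100M sats

# e.g. a 40 W Mac Mini M4 Pro must clear roughly 9 sats/hr
```

Phase 0's job is to compare measured tokens-trained-per-sat against this floor for the actual hardware.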

Validates

Dependencies


Phase 1: L402-Gated HTTP Exchange

Timeline: 2 weeks

Goal: Split coordinator and peer into separate processes communicating over HTTP with L402 payment gating. Still on one machine, but real HTTP and real L402 flows.

Components

  1. coordinator.py — FastAPI service behind Aperture proxy
    • PUT /gradient — L402-gated gradient submission (peer pays submission fee)
    • GET /checkpoint — L402-gated checkpoint download
    • GET /reward-schedule — public endpoint showing current bounty rates
    • Validation runs server-side after gradient upload
    • Hold invoice issued at upload time, settled or expired based on validation score
  2. peer.py — Client using lnget for automatic L402 payment
    • Training loop → compress → lnget PUT /gradient → receive payment (or not)
    • --max-cost flag enforces per-request spending caps
  3. Aperture configuration
    • Pricing: ~100 sats submission fee for PUT /gradient, ~50 sats for GET /checkpoint
    • Macaroon caveats: per-peer spending limits, time-bounded sessions
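The client side of the L402 challenge is worth making concrete. A minimal sketch, assuming the standard L402 header shapes (the server answers 402 with `WWW-Authenticate: L402 macaroon="…", invoice="…"`; the client pays the invoice and retries with `Authorization: L402 <macaroon>:<preimage>`). lnget automates all of this; these helpers only illustrate the flow:

```python
import re

def parse_l402_challenge(header: str) -> tuple[str, str]:
    """Split a 402 WWW-Authenticate challenge into (macaroon, invoice)."""
    macaroon = re.search(r'macaroon="([^"]+)"', header).group(1)
    invoice = re.search(r'invoice="([^"]+)"', header).group(1)
    return macaroon, invoice

def l402_auth_header(macaroon: str, preimage_hex: str) -> str:
    """Authorization header the peer sends on the paid retry."""
    return f"L402 {macaroon}:{preimage_hex}"

# Flow: PUT /gradient -> 402 + challenge -> pay invoice via LND (payment
# yields the preimage) -> retry PUT with l402_auth_header(...).
# The --max-cost cap belongs between decoding the invoice amount and paying.
```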

L402 Ecosystem Notes

Validates


Phase 2: Two-Machine Proof of Concept

Timeline: 4 weeks

Goal: Run the protocol across two separate machines over the real internet with real (small) Lightning payments.

Components

  1. Coordinator on Hetzner VPS
    • Deploy coordinator service + LND (Neutrino light client) + Aperture
    • Channel capacity: minimal for testing (100K–1M sats, ~$100–$1000)
  2. Primary test peer: Mac Mini M4 Pro 24 GB
    • MLX training, LND light client, direct payment channel to coordinator
    • The sweet spot hardware: $799, 30–50W, 150–200 tok/s on 3B model
    • Real Lightning payments: submit gradients, receive rewards
  3. Stretch: RTX 4090 peer (CUDA path)
    • PyTorch + CUDA training, validates cross-framework gradient exchange
    • 500+ tok/s on 3B, 450W — tests the power/performance tradeoff
  4. Testnet → Mainnet
    • Start on Bitcoin testnet (free, no real money)
    • Move to mainnet when stable (budget: ~$100–500)

Economic Validation

Validates

Deliverable: conference demo


Phase 3: Multi-Peer Simulation + Byzantine Testing

Timeline: 4 weeks

Goal: Simulate 3–5 peers submitting varying quality gradients + 1 real peer on MacBook. Test incentive mechanics and Byzantine resistance.

Verification of untrusted computation is the hardest unsolved problem in decentralized training. Gensyn's Verde (probabilistic proof-of-learning) has been in development since 2022 and remains in testnet. Prime Intellect's TOPLOC works but is narrow (RL rollouts only). l402-train's approach — deterministic loss scoring on held-out data — is simpler and immediately testable, but must prove it catches real attack vectors.
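The "pure function" framing from Phase 0 is what makes the determinism claim testable. A sketch, with `eval_loss` and `apply_update` as stand-ins for the real MLX model code:

```python
def score_gradient(checkpoint, gradient, val_batch, eval_loss, apply_update):
    """f(checkpoint, gradient, val_data) -> score. No randomness and no
    wall-clock dependence, so any third party can replay the exact same
    computation and get the exact same score."""
    loss_before = eval_loss(checkpoint, val_batch)
    loss_after = eval_loss(apply_update(checkpoint, gradient), val_batch)
    return loss_before - loss_after   # positive = the gradient helped
```

Per the Phase 0 loop, the coordinator settles the hold invoice when this score clears the threshold and lets it expire otherwise; Phase 3's job is to show the score actually separates honest gradients from the simulated attacks.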

Simulated Peer Profiles

Test Questions

Deliverable: technical paper with empirical results — real Lightning payments combined with real gradient validation and Byzantine resistance testing is a combination no prior project has demonstrated.


TRACK B: AUTORESEARCH BOUNTIES

Phase B0: Bounty Runner Framework

Timeline: 2 weeks (parallel with Phase 1)

Goal: Build the bounty coordinator as a second mode of the existing coordinator service. Same L402 infrastructure, different task type.

Components

  1. bounty_coordinator.py — FastAPI endpoints behind same Aperture proxy
    • GET /bounties — public listing of active bounties
    • GET /bounty/{id} — L402-gated baseline download (code + public eval set)
    • POST /bounty/{id}/submit — submit improvement (diff + claimed score)
    • Validation: apply diff to baseline, run eval on held-out set, score improvement
    • Hold invoice created at submission, settled proportional to improvement
  2. bounty_agent.py — Reference agent client
    • Downloads bounty baseline via L402
    • Runs autoresearch loop locally (Karpathy pattern: edit → eval → keep/discard)
    • Submits improvements to coordinator
    • Works with any coding agent backend (Claude Code, Codex, local models)
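The agent's inner loop is the Karpathy pattern named above. A sketch, with `evaluate` standing in for the public eval script and `propose_edit` for whichever coding-agent backend is plugged in:

```python
def autoresearch_loop(baseline, evaluate, propose_edit, budget: int = 50):
    """Greedy edit -> eval -> keep/discard loop. `baseline` is whatever the
    bounty downloaded; `budget` caps how many candidate edits are tried."""
    best, best_score = baseline, evaluate(baseline)
    for _ in range(budget):
        candidate = propose_edit(best)
        score = evaluate(candidate)
        if score > best_score:            # keep only strict improvements
            best, best_score = candidate, score
    return best, best_score   # submit `best` as a diff to the coordinator
```

Because the agent only ever sees the public eval set, the coordinator's held-out re-scoring is what decides whether the submission actually pays.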

Why This Is Simpler Than Training

Validates


Phase B1: First Live Bounties

Timeline: 2 weeks (parallel with Phase 2)

Goal: Post real bounties with real sats, have real agents compete. Prove the two-sided market works.

First Bounties

Anti-Gaming Validation

Validates

Deliverable: working bounty marketplace with real payments — standalone product, no GPU required.


Phase B2: Multi-Sponsor Marketplace

Timeline: 4 weeks

Goal: Open the bounty coordinator for external sponsors to post their own bounties. Two-sided marketplace: sponsors post bounties, agents compete.

Components

  1. Sponsor onboarding
    • Sponsor deposits bounty pool via Lightning (held in coordinator channel)
    • Uploads target files, eval script, public eval dataset
    • Coordinator generates held-out eval set or accepts sponsor-provided held-out hash
  2. Public bounty board
    • Browse active bounties with: description, metric, bounty amount, deadline, current best score
    • Leaderboard per bounty (anonymized agent IDs + scores)
    • Historical data: completed bounties, total sats paid, average improvements
  3. Coordinator economics
    • 5–10% fee on bounty payouts (covers validation compute + infrastructure)
    • L402 access fees on baseline downloads (covers bandwidth)
    • Self-sustaining business model independent of training revenue
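One way the settlement arithmetic could look. The linear improvement-proportional rule and the target-based cap are assumptions for illustration; the plan only specifies "settled proportional to improvement" and a 5–10% fee:

```python
def settle_bounty(pool_sats: int, improvement: float, target: float,
                  fee_rate: float = 0.05) -> tuple[int, int]:
    """Payout proportional to validated improvement, capped at the pool,
    minus the coordinator fee. Returns (agent_payout, coordinator_fee)."""
    frac = max(0.0, min(improvement / target, 1.0))  # clamp to [0, 1]
    gross = int(pool_sats * frac)
    fee = int(gross * fee_rate)
    return gross - fee, fee
```

For example, halving the gap to target on a 100K-sat pool would pay the agent 47,500 sats and the coordinator 2,500 under a 5% fee.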

Deliverable: open-source bounty marketplace — the "SETI@home for software optimization" that Karpathy envisioned, coordinated by Lightning.


Target Hardware

Training hardware requirements are based on the consumer hardware guide and economics analysis. Autoresearch bounties have no minimum hardware — any computer that can run a coding agent (Claude Code, Codex, or a local model) can compete.

| Tier | Hardware | Model Range | tok/s (3B) | Power | Break-even* |
| --- | --- | --- | --- | --- | --- |
| Entry | MacBook Air M3 16 GB | 0.5B–1B | 40–60 | 20 W | 5 sats/hr |
| Sweet spot | Mac Mini M4 Pro 24 GB | 0.5B–7B | 150–200 | 40 W | 9 sats/hr |
| Workhorse | Mac Studio M2 Ultra 192 GB | 0.5B–30B | ~475 | 90 W | 21 sats/hr |
| Power | RTX 4090 system (24 GB) | 0.5B–13B | 500–628 | 450 W | 103 sats/hr |

Not viable: Raspberry Pi, AMD RX 580 and older, 8 GB machines

*Electricity-only break-even at US average $0.16/kWh, BTC = $70,000


Competitive Landscape

Based on the landscape survey of 12 projects:

What exists: Only Prime Intellect (INTELLECT-1/2/3) and Together AI (GPT-JT, before pivoting) have trained competitive models via decentralized infrastructure. Bittensor is an inference marketplace with empirically demonstrated stake-weighted rewards. Gensyn has been in testnet for 3+ years. Every project except Hivemind requires a custom token.

Where l402-train fits: The only protocol using Bitcoin Lightning for payment coordination. No token, no staking, quality-proportional rewards via hold invoices. The tradeoff is starting with a single coordinator and small models (0.5B–3B), which is the honest scope for a research prototype. See the L402 ecosystem survey for how the protocol extends L402 bidirectionally.


What to Skip for Prototype

| Whitepaper Feature | Skip? | Why |
| --- | --- | --- |
| DLC-bound settlement | Yes | Hold invoices sufficient for PoC |
| Federated multi-validator | Yes | Single coordinator fine; deterministic replay is what matters |
| 72B scale | Yes | 0.5B–3B on MLX. Proving the mechanism, not training a model |
| Heterogeneous SparseLoCo | Yes | Single-tier peers only |
| USDT (Taproot Assets) | Yes | Sats-only for prototype |

Key Risks

  1. SparseLoCo on MLX — No existing implementation. Top-k and quantization are straightforward; error-feedback buffer management is the hard part
  2. Aperture custom validation — L402 gating is supported, but "validate before settling hold invoice" may need to be handled outside Aperture
  3. LND on VPS — 4 GB of RAM may be tight alongside existing services; LND may need to run on the MacBook instead
  4. MLX scale gap — a 0.5B proof of concept is fine, but closing the gap to publishable 7B+ results requires renting GPU time

Deliverables Summary

| Phase | Track | Deliverable | Publishable? |
| --- | --- | --- | --- |
| 0 | Training | Single-machine simulation with economics data | No — but provides all the numbers |
| 1 | Training | L402-gated gradient exchange | Blog post / tweet thread |
| B0 | Autoresearch | Bounty runner framework | Blog post / tweet thread |
| 2 | Training | Two-machine PoC over real internet | Conference demo |
| B1 | Autoresearch | First live bounties with real sats | Open-source product launch |
| 3 | Training | Multi-peer + Byzantine resistance | Technical paper with empirical results |
| B2 | Autoresearch | Multi-sponsor bounty marketplace | Standalone product |