The literature, finally, executes.
Public research is a body of claims. Most of those claims are never tested. Research Radar treats every claim as a build goal — and executes it.
The problem
Tens of thousands of new papers, repositories, and engineering posts appear each month. The overwhelming majority are never independently built, tested, or reproduced. Citation counts, demo videos, and self-reported metrics have become unreliable signals. The signal-to-noise ratio of public research has collapsed under its own volume.
Three structural problems converged:
- The reproducibility crisis. Most published results are never independently replicated. The cost of reproducing a single paper is measured in days, sometimes weeks.
- The opportunity gap. Builders, founders, and investors need to know which technical claims are real — and which are aspirational — before they commit attention or capital.
- The execution overhang. Frontier coding models can now translate a precise research goal into a working repository in hours. The bottleneck has shifted from execution to direction: knowing what to build, evaluating what was built, and seeing connections no single paper anticipates.
The protocol
Research Radar is a five-phase autonomous pipeline that turns raw public research into permanent, verifiable artifacts. Each phase is described in detail on the Reproducibility Index. Every artifact — every item, every build, every combination, every verdict — is vectorized, queryable, and permanent.
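The phases themselves are specified on the Reproducibility Index. As a minimal sketch of what "vectorized, queryable, and permanent" could look like for an artifact record — the field names, vector format, and similarity metric here are illustrative assumptions, not the system's actual schema:

```python
import math
from dataclasses import dataclass


@dataclass(frozen=True)
class RadarArtifact:
    """Illustrative record for any artifact: item, build, combination, or verdict."""
    artifact_id: str
    kind: str                  # hypothetical label, e.g. "item", "build", "verdict"
    embedding: tuple           # fixed-length vector used for similarity queries


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def query(corpus, query_vec, k=3):
    """Return the k artifacts nearest to query_vec, most similar first."""
    ranked = sorted(corpus, key=lambda art: cosine(art.embedding, query_vec),
                    reverse=True)
    return ranked[:k]


corpus = [
    RadarArtifact("a", "item", (1.0, 0.0)),
    RadarArtifact("b", "build", (0.0, 1.0)),
    RadarArtifact("c", "verdict", (0.9, 0.1)),
]
top = query(corpus, (1.0, 0.0), k=2)  # nearest artifacts to the query vector
```

In production this role would be played by a real vector store; the point is only that every artifact carries an embedding, so "queryable" means nearest-neighbor search over the whole corpus, not keyword lookup.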
The flywheel
Research Radar is not a pipeline. It is a flywheel. Each rotation compounds:
(The pipeline above describes one rotation; the flywheel is what the rotations add up to.)
- More radar items → richer vector graph → better combinations.
- More builds → better goal synthesis (the system learns what is buildable).
- More NO-GO commercial verdicts → sharper filter, more revisit candidates when adjacent technology matures.
- More validated reproductions → authoritative source for the next round of triage.
The corpus, not any individual phase, is the asset that grows in value over time. Anyone can scrape arXiv. Few will build every interesting paper, test it, register it permanently, and combine the survivors. That is the moat.
What's on chain
- Attestation Registry — permanent record of each validated reproduction: paper, build hash, score, test results, validator.
- USDC Bounty Escrow — pool funds to commission a specific reproduction. Released on attestation publish.
- Reproduction cNFTs — compressed NFTs with royalty splits to operator, paper author, and bounty backers.
- Arweave bundles — permanent storage of the validated code, paper, and report.
- No native token.
Who it's for
- Researchers who need to know whether a paper actually works before building on it.
- Founders and corporate R&D who need verified, code-attached opportunities — not summaries.
- Investors who need fast technical due diligence on AI claims.
- Paper authors who want independent reproduction as a credential.
- The open scientific commons — for whom the corpus is a public good.
Honest limitations
Research Radar does not solve subjective evaluation. It cannot tell whether a result is important, only whether it can be reproduced under specified conditions. A passing test suite is a necessary condition for scientific value, not a sufficient one.
Builds requiring proprietary datasets, specialized hardware, or non-public APIs are out of scope. Such papers can still be registered, but with an out-of-scope note rather than a reproduction score.