Reproducibility Index · 6710087a

osp:69779686

The open-source toolkit to study and modify advanced AI systems

Independent reproduction of this paper. Validated on 2026-04-11 via a lab build. Passed on first attempt without repair.

Quality score: 90%
Tests passed: 121/121
Repair rounds: 0
Status: reproduced

Checks

Syntax

syntax_parse (blocking): All 52 .py files parse cleanly
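A check like syntax_parse can be sketched in a few lines with Python's standard ast module. This is an illustrative reconstruction, not the validator's actual code; the function name and repo layout are assumptions:

```python
import ast
from pathlib import Path

def parse_failures(repo_root: str) -> list[str]:
    """Return paths of .py files under repo_root that fail to parse."""
    failures = []
    for path in Path(repo_root).rglob("*.py"):
        try:
            # ast.parse raises SyntaxError on any invalid source file.
            ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        except SyntaxError:
            failures.append(str(path))
    return failures
```

A repo passes this check when the returned list is empty.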

Imports

import_llm_causal_prober (blocking): import llm_causal_prober -- OK

Dependencies

dep_metadata (major): Found pyproject.toml

Tests

pytest_run (major): pytest: 121 passed, 0 failed, 0 errors (exit 0)

Packaging

pip_installable (major): pip install -e . --dry-run succeeded

Git State

worktrees_merged (major): No .worktrees directory
on_main_branch (minor): Current branch: feature/215a8779-llm-causal-prober (not main)

Cleanup

syspath_hacks (minor): 1 test file uses sys.path hacks
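A minimal sketch of how such a check might locate the offending files (the regex, helper name, and test-file glob are assumptions, not the validator's implementation). The usual fix is to `pip install -e .` so tests import the installed package rather than patching the import path:

```python
import re
from pathlib import Path

# Lines like `sys.path.insert(0, ...)` or `sys.path.append(...)` in test
# files usually signal that the package under test is not installed.
_SYSPATH_HACK = re.compile(r"\bsys\.path\.(insert|append)\s*\(")

def files_with_syspath_hacks(test_dir: str) -> list[str]:
    """Return test files that manipulate sys.path at import time."""
    hits = []
    for path in Path(test_dir).rglob("test_*.py"):
        if _SYSPATH_HACK.search(path.read_text(encoding="utf-8")):
            hits.append(str(path))
    return hits
```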

Whole-Repo Review

claude_review (info): Claude review: 5/10 — The codebase shows a well-structured attempt at causal probing in LLMs but suffers from critical wiring issues and inter…

Citation & embed

Badge

Embeddable SVG, no auth required.

Research Radar reproduction badge
<img src="https://api.research-radar.com/v1/validations/6710087a/badge.svg" alt="Research Radar reproduction badge">
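For Markdown READMEs, the same SVG can be embedded with image syntax (the alt text here is illustrative):

```markdown
![Research Radar reproduction badge](https://api.research-radar.com/v1/validations/6710087a/badge.svg)
```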

BibTeX

Drop into your .bib file.

Download .bib →