KV-Fold: One-Step KV-Cache Recurrence for Long-Context Inference
An independent reproduction of this paper, validated via a lab build. It passed on the first attempt with no repair rounds.
| Quality score | Tests passed | Repair rounds | Status |
|---|---|---|---|
| 75% | 21/21 | 0 | reproduced |
Checks
| Category | Result | Check | Severity | Detail |
|---|---|---|---|---|
| Syntax | ✓ | syntax_parse | blocking | All 15 .py files parse cleanly |
| Imports | ✓ | import_longfold | blocking | `import longfold` OK |
| Dependencies | ✓ | dep_metadata | major | Found pyproject.toml |
| Tests | ✓ | pytest_run | major | pytest: 21 passed, 0 failed, 0 errors (exit 0) |
| Packaging | ✓ | pip_installable | major | `pip install -e . --dry-run` succeeded |
| Git State | ✓ | worktrees_merged | major | No .worktrees directory |
| Git State | ✓ | on_main_branch | minor | Current branch: master |
| Cleanup | ✗ | no_venv_in_tree | major | .venv/ present (4977 MB); should be listed in .gitignore |
| Whole-Repo Review | ✓ | claude_review | info | Claude review: 5/10 — "The code attempts to implement a KV-cache folding mechanism for long-context transformer inference but contains critical …" |
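The blocking `syntax_parse` check above amounts to parsing every `.py` file in the repository with Python's `ast` module. A minimal sketch of that step (the helper name `check_syntax` and the repository path are illustrative, not the validator's actual code):

```python
import ast
import pathlib


def check_syntax(repo: pathlib.Path) -> list[str]:
    """Return paths of .py files that fail to parse; an empty list means all clean."""
    failures = []
    for path in repo.rglob("*.py"):
        # Skip vendored virtualenvs, such as the .venv/ flagged by the cleanup check.
        if ".venv" in path.parts:
            continue
        try:
            ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        except SyntaxError:
            failures.append(str(path))
    return failures
```

A check like this passes when `check_syntax(repo)` returns an empty list, mirroring the "All 15 .py files parse cleanly" result reported above.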
Citation & embed
Badge
`<img src="https://api.research-radar.com/v1/validations/c2f87825/badge.svg">`