ghfs

Benchmarks

Real measurements, real tasks

10 runs × 2 conditions × 2 task types, all with Claude Opus 4.6. No cherry-picking — every run is shown.

Single-Issue Task

Investigate one issue, plan a fix

Given an issue requesting i18n (internationalization) support, the AI investigates the codebase and produces an implementation plan. A focused task that represents typical day-to-day AI-assisted development.

Metric              Without ghfs   With ghfs   Delta
Tokens (avg)        1,356K         1,164K      192K saved
AI API calls (avg)  1.0            0.0         pre-fetched by ghfs
Time (avg)          151s           131s        20s faster
Stability (CV)      37.3%          22.2%       more consistent

Token consumption per run

Run   Without ghfs   With ghfs
#1    1,234K         1,451K
#2    1,520K         1,257K
#3    2,390K         1,147K
#4    889K           885K
#5    1,497K         864K
#6    1,892K         1,667K
#7    770K           1,086K
#8    806K           1,081K
#9    1,204K         1,289K
#10   1,359K         919K

ghfs makes AI behavior more stable and efficient

With ghfs, the AI skips the tool call to fetch the issue and reads it directly from disk. This reduces token consumption by 14% on average while making behavior significantly more consistent across runs (CV 37.3% → 22.2%).
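The headline figures can be reproduced from the per-run table above. A minimal sketch using Python's `statistics` module (note: `stdev` is the sample standard deviation, which matches the reported CVs):

```python
from statistics import mean, stdev

# Per-run token counts (in thousands) from the single-issue benchmark table.
without_ghfs = [1234, 1520, 2390, 889, 1497, 1892, 770, 806, 1204, 1359]
with_ghfs = [1451, 1257, 1147, 885, 864, 1667, 1086, 1081, 1289, 919]

def cv(runs):
    """Coefficient of variation: sample std dev as a percentage of the mean."""
    return 100 * stdev(runs) / mean(runs)

saved = mean(without_ghfs) - mean(with_ghfs)      # ~192K tokens
reduction = 100 * saved / mean(without_ghfs)      # ~14%
print(f"saved {saved:.0f}K ({reduction:.1f}%), "
      f"CV {cv(without_ghfs):.1f}% -> {cv(with_ghfs):.1f}%")
# → saved 192K (14.1%), CV 37.3% -> 22.2%
```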

Multi-Issue Task

Cross-cutting research across all open issues

The AI reads all open issues, analyzes priorities, and produces a roadmap report. This is where ghfs changes the game — because issues are pre-fetched in the background, the AI makes zero API calls during the task.

AI API calls: 0 (all issue data pre-fetched in background)
Issues analyzed: 26 (vs. 23 avg without ghfs)
Consistency: 10/10 runs (zero variance in API calls)

Token consumption per run

Run   Without ghfs   With ghfs
#1    871K           925K
#2    424K           1,007K
#3    890K           1,014K
#4    869K           1,107K
#5    481K           821K
#6    739K           1,173K
#7    687K           1,097K
#8    477K           1,340K
#9    709K           1,296K
#10   1,055K         948K

Tokens increase by 49% — here's why that's a trade-off, not a problem

ghfs provides all issue data at once, which uses more tokens than selectively fetching a subset via CLI. However, it delivers complete coverage (26 vs 23 issues), more consistent behavior (CV 28.9% → 15.3%), and zero external API calls.
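The trade-off figures quoted above can likewise be recomputed from the multi-issue run table. Same approach as before (`stdev` is the sample standard deviation; small rounding differences from the quoted CVs are possible):

```python
from statistics import mean, stdev

# Per-run token counts (in thousands) from the multi-issue benchmark table.
without_ghfs = [871, 424, 890, 869, 481, 739, 687, 477, 709, 1055]
with_ghfs = [925, 1007, 1014, 1107, 821, 1173, 1097, 1340, 1296, 948]

increase = 100 * (mean(with_ghfs) / mean(without_ghfs) - 1)  # ~49% more tokens
cv_without = 100 * stdev(without_ghfs) / mean(without_ghfs)  # ~29%
cv_with = 100 * stdev(with_ghfs) / mean(with_ghfs)           # ~15%
print(f"tokens +{increase:.0f}%, CV {cv_without:.1f}% -> {cv_with:.1f}%")
```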

Want even fewer tokens? We're already on it.

Our roadmap includes features that let the AI load only the issues it needs — dramatically reducing this trade-off.


Coming Next

The best of both worlds

Planned features that will eliminate the multi-issue token trade-off while keeping all the benefits.

Planned #407

Query Files

Load only the issues you need with condition-based filtering. Query for specific labels, states, or keywords — solving the multi-issue token increase.

Exploring #412

Semantic Search

Find related issues intelligently using embeddings and vector search. Instead of reading every issue to find connections, the AI asks a question and gets the relevant issues back — no full scan needed.

Methodology

Model: claude-opus-4-6 (Claude Code default)
Version: pre-release
Runs: 10 per condition
Conditions: ghfs enabled vs. disabled
Task 1 (single-issue): given an i18n request, investigate and produce an implementation plan
Task 2 (multi-issue): analyze all open issues and produce a priority roadmap
Fair conditions: clean working state, identical prompts, no cached conversations, prompt cache warmed before measurement