# Better Code Review Graph — Manual Setup Guide

**2026-05-02 Update (v+):** Plugin install (Method 1) now uses pure stdio mode. API keys are optional env vars. The previous “Zero-Config Relay” auto-spawn pattern has been removed. If you relied on the relay form to enter API keys, please:
- Set the env var directly in plugin config (Method 1), OR
- Use HTTP self-host mode (advanced; out of scope of this guide).
## Method overview

This plugin supports one install method only: stdio via plugin install (`uvx`/`npx`). Reason: the plugin needs direct host access to your project files (Godot project / repo path) and doesn’t ship Docker or HTTP variants.
For comparison, the other 6 plugins in this stack (better-notion-mcp, better-email-mcp, better-telegram-mcp, wet-mcp, mnemo-mcp, imagine-mcp) support 3 methods:
- Default — Plugin install (`uvx`/`npx`) stdio
- Fallback — Docker stdio (Windows/macOS PATH issues)
- Recommended — Docker HTTP (multi-device, OAuth/relay form, claude.ai web)
⚠️ Mutually exclusive — pick ONE per plugin (applies to those 6 plugins, not crg): For the 6 plugins above that offer Method 2 (Docker stdio) or Method 3 (HTTP), do NOT stack `/plugin install` AND a user `mcpServers` override — both would load simultaneously and create duplicate entries (plugin’s `npx`/`uvx` stdio + your override). Plugin matching is by endpoint (URL or command string) per CC docs, not by name — and `npx`/`uvx` ≠ `docker` ≠ HTTP URL, so all three are distinct endpoints. Choosing Method 2 or Method 3 means losing the plugin’s skills/agents/hooks/commands. `better-code-review-graph` only offers Method 1, so this note is informational only — there is no Docker stdio or HTTP variant to conflict with the plugin install here.
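To make the duplicate-entry failure mode concrete, here is a sketch of a user-level `mcpServers` override; the server name and Docker image are hypothetical, standing in for one of the 6 plugins that do offer a Docker method:

```json
{
  "mcpServers": {
    "better-notion-mcp": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "n24q02m/better-notion-mcp:latest"]
    }
  }
}
```

If the same server’s plugin is also installed, the plugin’s `npx`/`uvx` command string and this `docker` command string are two different endpoints, so both entries load and you get duplicates.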
## Prerequisites

- Python 3.13 (3.14+ is NOT supported)
- `uv` or `uvx` installed (docs)
- Docker (optional, for containerized setup)
- A code repository to analyze
## Method 1: Claude Code Plugin (Recommended)

Plugin marketplace install runs the server in pure stdio mode with optional API key env vars. No daemon-bridge, no auto-spawn, no relay form. The graph is stored locally in SQLite — no external graph database required.
### Credential prompts at install

When you run `/plugin install`, Claude Code prompts you for the following credentials (declared in `userConfig` per CC docs). Sensitive values are stored in your system keychain and persist across `/plugin update`:
| Field | Required | Where to obtain |
|---|---|---|
| `JINA_AI_API_KEY` | Optional | https://jina.ai/api-key |
| `GEMINI_API_KEY` | Optional | https://aistudio.google.com/apikey |
| `OPENAI_API_KEY` | Optional | https://platform.openai.com/api-keys |
| `COHERE_API_KEY` | Optional | https://dashboard.cohere.com/api-keys |
- Open Claude Code.
- Install the plugin (Claude Code prompts for `JINA_AI_API_KEY` — press Enter to skip):

  ```shell
  /plugin marketplace add n24q02m/claude-plugins
  /plugin install better-code-review-graph@n24q02m-plugins
  ```

- The server starts automatically when Claude Code launches.
- The SessionStart hook auto-builds the graph for the current project; PostToolUse updates it after edits.
## Credential Setup

All API keys are optional. The server works with local ONNX embeddings out of the box.

### Stdio Mode (Env Vars)

Set API keys in your MCP client env block or shell profile:
```shell
export JINA_AI_API_KEY="jina_..."
export GEMINI_API_KEY="AIza..."
```

### Environment Variable Reference

| Variable | Required | Default | Description |
|---|---|---|---|
| `JINA_AI_API_KEY` | No | — | Jina AI: embedding + reranking (highest priority) |
| `GEMINI_API_KEY` | No | — | Gemini: embedding (free tier). Also accepts `GOOGLE_API_KEY` |
| `OPENAI_API_KEY` | No | — | OpenAI: embedding |
| `COHERE_API_KEY` | No | — | Cohere: embedding + reranking. Also accepts `CO_API_KEY` |
| `EMBEDDING_BACKEND` | No | auto-detect | `cloud` or `local` (ONNX) |
| `EMBEDDING_MODEL` | No | auto-detect | Cloud embedding model name |
| `TRANSPORT_MODE` | No | `stdio` | Set to `http` to enable HTTP transport (multi-user) |
| `PUBLIC_URL` | Yes (http) | — | Server’s public URL for relay form |
| `DCR_SERVER_SECRET` | Yes (http) | — | HMAC secret for stateless Dynamic Client Registration |
| `PORT` | No | 8080 | Server port (http mode only) |
| `LOG_LEVEL` | No | INFO | Logging level |
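If you run the server standalone through an MCP client config file rather than the plugin install, the same variables go in the server entry’s `env` block. A minimal sketch, assuming a `uvx`-launched server; the package name and exact config shape are assumptions, so adjust for your client:

```json
{
  "mcpServers": {
    "better-code-review-graph": {
      "command": "uvx",
      "args": ["better-code-review-graph"],
      "env": {
        "JINA_AI_API_KEY": "jina_...",
        "LOG_LEVEL": "INFO"
      }
    }
  }
}
```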
## Embedding Provider Priority

Cloud auto-detection order: Jina AI > Gemini > OpenAI > Cohere > Local ONNX (Qwen3)
All embeddings are stored at 768 dimensions. Switching providers does NOT invalidate existing vectors.
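The auto-detection order above can be sketched as a simple first-match lookup. The function and table names here are illustrative, not the server’s actual internals:

```python
import os

# Documented auto-detection order: Jina AI > Gemini > OpenAI > Cohere > local ONNX.
# GOOGLE_API_KEY and CO_API_KEY are accepted as aliases (see the table above).
PROVIDER_KEYS = [
    ("jina", ["JINA_AI_API_KEY"]),
    ("gemini", ["GEMINI_API_KEY", "GOOGLE_API_KEY"]),
    ("openai", ["OPENAI_API_KEY"]),
    ("cohere", ["COHERE_API_KEY", "CO_API_KEY"]),
]

def select_embedding_provider(env=os.environ):
    """Return the first provider whose API key is set, else fall back to local ONNX."""
    for provider, keys in PROVIDER_KEYS:
        if any(env.get(k) for k in keys):
            return provider
    return "local-onnx"
```

Because vectors are stored at a fixed 768 dimensions, switching which branch wins here does not invalidate the existing graph.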
## Supported Languages

Python, TypeScript, JavaScript, Go, Rust, Java, C#, Ruby, Kotlin, Swift, PHP, C/C++, Solidity
## Ignore Files

Create `.code-review-graphignore` in your project root to exclude paths:

```
generated/**
*.generated.ts
vendor/**
node_modules/**
```

## Troubleshooting
### Graph build finds no files

Ensure the `repo_path` parameter points to the root of a code repository. Check that the project contains files in a supported language.
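A quick way to check this is a `find` over a few of the supported extensions. The path is a placeholder and the extensions are only a subset of the full list above:

```shell
# REPO is a placeholder — substitute your actual repo root.
REPO=/path/to/your/repo

# List up to 20 files in a few of the supported languages.
# An empty result means the graph build has nothing to index.
find "$REPO" -type f \
  \( -name '*.py' -o -name '*.ts' -o -name '*.go' -o -name '*.rs' \) \
  2>/dev/null | head -n 20
```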
### First embedding is slow

On first use, the local ONNX embedding model (~570MB) is downloaded. Subsequent runs are instant. Use cloud embedding (any API key) to avoid this download.
### “No graph found” error

Build the graph first:

```
graph(action="build", repo_path="/path/to/your/repo")
```

### Docker cannot access repo files
Ensure the volume mount is correct. The repo path inside the container is `/repo`:

```shell
docker run -i --rm -v "/absolute/path/to/repo:/repo:ro" n24q02m/better-code-review-graph:latest
```