
WET (Web Extended Toolkit) — Manual Setup Guide


2026-05-02 Update (v<auto>+): Plugin install (Method 1) uses stdio mode. Basic SearXNG search works without any environment variables; advanced features (GDrive sync, Brave, Serper, Gemini) need optional env vars OR HTTP mode for OAuth flows. The previous “Zero-Config Relay” auto-spawn pattern has been removed.

This plugin supports 3 install methods. Pick the one that matches your use case:

| Priority | Method | Transport | Best for |
| --- | --- | --- | --- |
| 1. Default | Plugin install (uvx/npx) | stdio | Quick local start, single workstation, no OAuth/HTTP needed. |
| 2. Fallback | Docker stdio (docker run -i --rm) | stdio | Windows/macOS where native uvx/npx hits PATH or Python-version issues. |
| 3. Recommended | Docker HTTP (docker run -p 8080:8080) | HTTP | Multi-device, OAuth/relay-form auth, team self-host, claude.ai web compatibility. |

All MCP servers across this stack share this priority hierarchy. Note: 2 plugins (better-godot-mcp and better-code-review-graph) only support Method 1 (stdio) — they need direct host access to project files / repo paths and don’t ship Docker / HTTP variants.

⚠️ Mutually exclusive — pick ONE per plugin: If you choose Method 2 (Docker stdio override) OR Method 3 (HTTP), do NOT also /plugin install this plugin via marketplace. Otherwise both load simultaneously and create duplicate entries in the /mcp dialog (the plugin’s stdio entry plus your override). Plugin matching is by endpoint (URL or command string) per CC docs, not by name — and npx/uvx ≠ docker ≠ HTTP URL, so all three are distinct endpoints. Trade-off: choosing Method 2 or Method 3 means you lose this plugin’s skills/agents/hooks/commands. For full plugin features, use Method 1 (default plugin install) with userConfig credentials prompted at install time.

Prerequisites:

  • Python 3.13 (3.14+ is NOT supported due to SearXNG incompatibility)
  • uv or uvx installed (docs)
  • Docker (optional, for containerized setup)
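A quick sanity check of the interpreter prerequisite above (a sketch; having `python3.13` on PATH is an assumption — uv can also provision the interpreter for you):

```shell
# Verify a Python 3.13 interpreter is available (3.14+ is unsupported)
py="$(command -v python3.13 || true)"
if [ -n "$py" ]; then
  "$py" -c 'import sys; assert sys.version_info[:2] == (3, 13)'
  echo "python3.13 OK"
else
  # uv can install one if the system interpreter is missing
  echo "python3.13 not found; try: uv python install 3.13"
fi
```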

For Claude Code users, the plugin approach is the simplest. Plugin install uses stdio mode — basic SearXNG web search works without any env vars. Advanced features require optional API keys.

When you run /plugin install, Claude Code prompts you for the following credentials (declared in userConfig per CC docs). Sensitive values are stored in your system keychain and persist across /plugin update:

| Field | Required | Where to obtain |
| --- | --- | --- |
| JINA_AI_API_KEY | Optional | https://jina.ai/api-key (highest-priority embedding + reranking) |
| GEMINI_API_KEY | Optional | https://aistudio.google.com/apikey |
| OPENAI_API_KEY | Optional | https://platform.openai.com/api-keys |
| COHERE_API_KEY | Optional | https://dashboard.cohere.com/api-keys |
| GITHUB_TOKEN | Optional | https://github.com/settings/tokens (raises the GitHub rate limit from 60 to 5,000 requests/hr for library docs discovery) |
  1. Open Claude Code.
  2. Install the plugin (Claude Code prompts for JINA_AI_API_KEY + GEMINI_API_KEY — press Enter to skip):
    /plugin marketplace add n24q02m/claude-plugins
    /plugin install wet-mcp@n24q02m-plugins
  3. Restart Claude Code — the server starts automatically when CC launches with the values injected.

Without env vars: basic SearXNG metasearch, content extraction, library docs, ONNX local embedding/reranking all work. With env vars: cloud embedding/reranking (faster), Gemini LLM analysis, premium search providers.

Note: This installs the full plugin (skills + agents + hooks + commands + stdio MCP server). If you’d rather use Method 2 (Docker stdio) or Method 3 (HTTP) below, DO NOT /plugin install this plugin — pick Method 2 or Method 3 instead. All three methods are mutually exclusive (see Method overview).

⚠️ Before adding the Docker stdio override below, ensure this plugin is NOT installed via marketplace: Run /plugin uninstall wet-mcp@n24q02m-plugins first if you previously ran /plugin install. Otherwise both entries (plugin’s npx/uvx stdio + your docker run stdio) will load simultaneously since plugin matches by endpoint (command string), not by name.

Trade-off accepted: Choosing this method means you lose this plugin’s skills/agents/hooks/commands. Use Method 1 instead if you want full plugin features.

  1. Pull the image:

    docker pull n24q02m/wet-mcp:latest
  2. Run with environment variables:

    docker run -i --rm \
      --name mcp-wet \
      -v wet-data:/data \
      -e JINA_AI_API_KEY=your_key_here \
      -e GEMINI_API_KEY=your_key_here \
      n24q02m/wet-mcp:latest
  3. Or add to your MCP client config:

    {
      "mcpServers": {
        "wet": {
          "command": "docker",
          "args": [
            "run", "-i", "--rm",
            "--name", "mcp-wet",
            "-v", "wet-data:/data",
            "-e", "JINA_AI_API_KEY",
            "-e", "GEMINI_API_KEY",
            "-e", "GITHUB_TOKEN",
            "n24q02m/wet-mcp:latest"
          ]
        }
      }
    }

Stdio mode is the default and works for most personal/single-user scenarios. Consider switching to HTTP mode (Method 3 self-host) when you need:

  • claude.ai web compatibility — HTTP transport is required to connect plugins to claude.ai web client (stdio only works with desktop clients)
  • One server shared across N Claude Code sessions — single daemon serves all sessions instead of spawning a fresh stdio process per session (lower memory, shared cache)
  • Browser-based GDrive OAuth flow — HTTP mode performs the Google Device Code flow via the bundled public client; no manual GOOGLE_DRIVE_CLIENT_ID setup required
  • Multi-device credential sync — self-host the HTTP server once, log in from multiple machines without re-pasting API keys
  • Multi-user team sharing — single self-hosted instance supports N users with per-JWT-sub credential isolation
  • Always-on persistent process — ideal for webhooks, scheduled agents, or background automation

⚠️ Before adding the HTTP override below, ensure this plugin is NOT installed via marketplace: Run /plugin uninstall wet-mcp@n24q02m-plugins first if you previously ran /plugin install. Otherwise both entries (plugin’s stdio + your HTTP override) will load simultaneously since plugin matches by endpoint, not name.

Trade-off accepted: Choosing this method means you lose this plugin’s skills/agents/hooks/commands. For example, the wet-mcp:fact-check skill will no longer be available. Use Method 1 instead if you want full plugin features.

Switching transport vs. setting credentials: The userConfig prompt only configures credentials for stdio mode (Method 1 / Option 1). To switch transport to HTTP, override mcpServers in your client settings per the snippets below — this is a separate path from userConfig and is not driven by the install prompt.

HTTP mode runs as a persistent multi-user server with browser-based credential setup. GDrive OAuth uses a bundled public Google Desktop client (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) per Google’s official Desktop OAuth pattern — no user-side OAuth registration is required. Users authenticate via the device-code flow in their browser.

  1. Run the server in HTTP mode:

    docker run -d --name wet-mcp-http \
      -p 8084:8084 \
      -v wet-data:/data \
      -e MCP_TRANSPORT=http \
      -e PUBLIC_URL=https://wet.example.com \
      -e MCP_DCR_SERVER_SECRET=your-random-secret \
      n24q02m/wet-mcp:latest
  2. Configure your MCP client to connect to the HTTP endpoint:

    {
      "mcpServers": {
        "wet": {
          "url": "https://wet.example.com/mcp"
        }
      }
    }
  3. On first call, the client redirects to the relay form. Fill in API keys (all optional) and — if SYNC_ENABLED=true — complete the GDrive device-code flow in your browser using the bundled public client.

Each user receives an isolated credential vault keyed by JWT sub. No per-user OAuth registration needed.
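For always-on self-hosting, the HTTP-mode docker run above can equivalently be expressed as a Compose file. This is a sketch: the image, port mapping, volume, and env vars are taken from the run command, and PUBLIC_URL plus the secret remain placeholders to adjust for your deployment.

```yaml
# Sketch: Compose equivalent of the HTTP-mode `docker run` above
services:
  wet-mcp:
    image: n24q02m/wet-mcp:latest
    container_name: wet-mcp-http
    ports:
      - "8084:8084"
    volumes:
      - wet-data:/data
    environment:
      MCP_TRANSPORT: http
      PUBLIC_URL: https://wet.example.com
      MCP_DCR_SERVER_SECRET: your-random-secret
    restart: unless-stopped

volumes:
  wet-data:
```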

Public HTTP deployments expose <your-domain>/authorize to URL discovery. To prevent random Internet users from accessing the relay form, mint a relay password:

openssl rand -hex 32
# Save in your secret store / .env as:
MCP_RELAY_PASSWORD=<generated-32-byte-hex>

Share this password out-of-band (Signal/email/SMS) with anyone you invite to use your server. They will see a login form when first opening /authorize; once logged in, the cookie persists 24 hours.

Single-user dev exception: If PUBLIC_URL=http://localhost:8080, you can leave MCP_RELAY_PASSWORD empty to disable the gate. The server logs a warning if you skip the password with a non-localhost PUBLIC_URL.
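Wiring the generated password into the container might look like the following sketch; the docker flags mirror the HTTP-mode command earlier, and MCP_RELAY_PASSWORD is the variable named above:

```shell
# Generate a 64-hex-char relay password and keep it in .env
RELAY_PW="$(openssl rand -hex 32)"
printf 'MCP_RELAY_PASSWORD=%s\n' "$RELAY_PW" >> .env
# Pass it to the HTTP-mode container alongside the other env vars:
#   docker run -d ... -e MCP_RELAY_PASSWORD="$RELAY_PW" n24q02m/wet-mcp:latest
echo "generated ${#RELAY_PW}-char password"
```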

wet-mcp requires Python 3.13 because SearXNG is incompatible with Python 3.14+. Always pass --python 3.13 to uvx:

uvx --python 3.13 wet-mcp
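If you configure the stdio server manually rather than via the plugin, the same interpreter pin can be carried into the client config. This is a sketch; the server name "wet" is illustrative:

```json
{
  "mcpServers": {
    "wet": {
      "command": "uvx",
      "args": ["--python", "3.13", "wet-mcp"]
    }
  }
}
```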

On first start, the server downloads:

  • SearXNG search engine
  • Playwright chromium browser
  • ONNX embedding and reranker models (~1.1GB total)

Use the warmup command to pre-download: setup(action="warmup")

If port 41592 is in use, change it:

export WET_SEARXNG_PORT=41593

If you encounter permission errors with the Docker volume:

docker run -i --rm -v wet-data:/data --user $(id -u):$(id -g) n24q02m/wet-mcp:latest

If ONNX model download fails behind a proxy, use cloud embedding instead by setting any API key (e.g., GEMINI_API_KEY).

All environment variables are optional. See docs/setup-with-agent.md for the complete table.

| Variable | Default | Description |
| --- | --- | --- |
| JINA_AI_API_KEY | (unset) | Jina AI: search + extraction + embedding + reranking |
| GEMINI_API_KEY | (unset) | Gemini: LLM + embedding (free tier) |
| OPENAI_API_KEY | (unset) | OpenAI: LLM + embedding |
| COHERE_API_KEY | (unset) | Cohere: embedding + reranking |
| BRAVE_API_KEY | (unset) | Brave Search API key (premium search) |
| SERPER_API_KEY | (unset) | Serper search API key (premium search) |
| GITHUB_TOKEN | auto-detect | GitHub token for docs discovery |
| WET_AUTO_SEARXNG | true | Auto-start embedded SearXNG |
| SYNC_ENABLED | false | Enable Google Drive sync |
| LOG_LEVEL | INFO | Logging level |
Provider fallback order (first configured provider wins):

  • Embedding: Jina AI > Gemini > OpenAI > Cohere > Local ONNX (Qwen3)
  • Reranking: Jina AI > Cohere > Local ONNX (Qwen3)
  • LLM: Gemini > OpenAI > Disabled
  • Search: Brave > Serper > Jina AI > SearXNG (always available locally)
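The selection logic implied by these chains can be sketched as a small shell helper. This is illustrative only: the server's internal implementation may differ, and `pick_embedding_provider` is a hypothetical name.

```shell
# Illustrative only: pick the first configured provider, mirroring the
# documented embedding priority Jina > Gemini > OpenAI > Cohere > local ONNX.
pick_embedding_provider() {
  if   [ -n "$JINA_AI_API_KEY" ]; then echo jina
  elif [ -n "$GEMINI_API_KEY" ];  then echo gemini
  elif [ -n "$OPENAI_API_KEY" ];  then echo openai
  elif [ -n "$COHERE_API_KEY" ];  then echo cohere
  else echo onnx-local
  fi
}

pick_embedding_provider   # falls back to "onnx-local" when no keys are set
```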