Wick Documentation

Browser-grade web access for AI agents. Install in 30 seconds, fetch any page.

Quick Start

Install Wick, configure your agent, and fetch your first page.

1. Install

macOS (Homebrew):

brew tap wickproject/wick && brew install wick

Linux (apt):

curl -fsSL https://wickproject.github.io/wick/apt/install.sh | bash

Any platform (npm):

npm install -g wick-mcp

2. Configure

wick setup

Auto-detects Claude Code, Cursor, and other MCP clients. Writes the config for you.

3. Fetch

Ask your agent to read a webpage. It will use wick_fetch automatically.

# Or test from the command line:
wick fetch https://www.nytimes.com

Tip: If your agent still uses its built-in fetch, add instructions to prefer Wick. See Agent Configuration below.

Agent Configuration

Make your agent always use Wick instead of its built-in fetch (which is often blocked by anti-bot systems).

Claude Code

Add to your project's CLAUDE.md:

When fetching web pages, always use the wick_fetch MCP tool
instead of the built-in WebFetch tool. wick_fetch bypasses
anti-bot protection and returns cleaner content.
Use wick_search for web searches.

Cursor

Add to .cursorrules:

When you need to read a webpage or fetch a URL, use the
wick_fetch tool. When you need to search the web, use
the wick_search tool.

Other MCP Agents

Add to AGENTS.md, your system prompt, or equivalent instructions file:

You have access to wick_fetch and wick_search MCP tools.

- Use wick_fetch to read any URL. Returns clean markdown
  and bypasses anti-bot protection.
- Use wick_search to search the web.
- Always prefer these over built-in fetch/browse tools.

Manual MCP Config

If wick setup didn't detect your client, add this to your MCP configuration JSON:

{
  "mcpServers": {
    "wick": {
      "command": "wick",
      "args": ["serve", "--mcp"]
    }
  }
}
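If your client keeps its MCP configuration in a JSON file, you can merge this entry programmatically instead of hand-editing. A minimal sketch in Python; the config path varies by client, and the example path in the comment is an assumption:

```python
import json
from pathlib import Path

def add_wick_server(config_path: str) -> None:
    """Merge the wick MCP server entry into an existing config file."""
    path = Path(config_path).expanduser()
    # Load the existing config, or start from an empty one.
    config = json.loads(path.read_text()) if path.exists() else {}
    servers = config.setdefault("mcpServers", {})
    # Same entry as the manual snippet above; other servers are preserved.
    servers["wick"] = {"command": "wick", "args": ["serve", "--mcp"]}
    path.write_text(json.dumps(config, indent=2))

# Example (path is client-specific, shown here as a placeholder):
# add_wick_server("~/.cursor/mcp.json")
```

Existing entries under mcpServers are left untouched, so this is safe to run against a config that already lists other servers.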

Config file locations:

wick_fetch

Fetch a web page and get clean, LLM-friendly content. Uses Chrome's network stack to bypass anti-bot protection.

Parameter        Type     Default       Description
url              string   required      The URL to fetch
format           string   "markdown"    Output format: markdown, html, or text
respect_robots   boolean  true          Whether to respect robots.txt restrictions
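Under the hood, an MCP client wraps these parameters in a JSON-RPC tools/call request. A sketch of what that message looks like for wick_fetch; the envelope shape comes from the MCP spec, and you don't normally construct it yourself:

```python
import json

def wick_fetch_request(url: str, format: str = "markdown",
                       respect_robots: bool = True, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call request for the wick_fetch tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "wick_fetch",
            # Arguments mirror the parameter table above.
            "arguments": {
                "url": url,
                "format": format,
                "respect_robots": respect_robots,
            },
        },
    })

print(wick_fetch_request("https://example.com", format="text"))
```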

Example: Markdown (default)

# Returns clean markdown with title
wick fetch https://en.wikipedia.org/wiki/Rust_(programming_language)

# Output:
# Rust (programming language)

Rust is a general-purpose programming language emphasizing
performance, type safety, and concurrency...

Example: Raw HTML

wick fetch https://example.com --format html

# Returns full page HTML source, no extraction

Example: Plain text

wick fetch https://example.com --format text

# Returns visible text content, no formatting

Robots.txt

By default, Wick respects robots.txt. To override:

wick fetch https://reddit.com/r/technology --no-robots

Note: When you override robots.txt, you take responsibility for respecting the site's terms of service.
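Conceptually, the respect_robots check is a standard robots.txt lookup before the fetch. Python's stdlib parser illustrates the idea (a sketch of the concept, not Wick's implementation):

```python
from urllib.robotparser import RobotFileParser

def allowed(robots_txt: str, url: str, agent: str = "*") -> bool:
    """Return True if the given robots.txt rules permit fetching url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, url)

rules = """\
User-agent: *
Disallow: /private/
"""
print(allowed(rules, "https://example.com/public"))     # True
print(allowed(rules, "https://example.com/private/x"))  # False
```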

wick_search

Search the web and get structured results. Use wick_fetch to read any result in full.

Parameter     Type     Default    Description
query         string   required   Search query
num_results   number   5          Number of results to return (1-20)

Example

wick search "rust async runtime" --num 3

# Output:
1. Tokio - An asynchronous Rust runtime
   The Tokio project provides the building blocks needed
   for writing reliable, high-performance async apps...

2. Async in Rust - The Rust Programming Language
   Rust's async/await syntax makes writing asynchronous
   code feel almost like writing synchronous code...

3. async-std - Async version of the Rust standard library
   async-std provides an async version of std, making
   async programming easy and portable...
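If a script consumes this CLI output, the numbered blocks are easy to split back into structured records. A sketch that assumes the plain-text format shown above:

```python
import re

def parse_results(output: str) -> list[dict]:
    """Split numbered 'N. Title' blocks into {title, snippet} records."""
    results = []
    # Each result starts with 'N. ' at the beginning of a line.
    for block in re.split(r"(?m)^\d+\.\s+", output):
        lines = [line.strip() for line in block.strip().splitlines() if line.strip()]
        if not lines:
            continue
        results.append({"title": lines[0], "snippet": " ".join(lines[1:])})
    return results
```

Feeding it the example output above yields three records whose titles match the result headings.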

wick_session

Manage persistent browser sessions. Clear cookies and cache to start fresh.

Parameter   Type     Description
action      string   "clear" — removes all cookies and session data

# Clear all stored cookies and cache
wick session clear

CLI: wick fetch

wick fetch <URL> [OPTIONS]

Options:
  --format <FORMAT>    Output: markdown, html, text  [default: markdown]
  --no-robots          Ignore robots.txt restrictions

Examples

# Fetch as markdown
wick fetch https://www.nytimes.com

# Fetch raw HTML
wick fetch https://www.nytimes.com --format html

# Fetch plain text
wick fetch https://www.nytimes.com --format text

# Ignore robots.txt
wick fetch https://reddit.com/r/technology --no-robots

CLI: wick search

wick search <QUERY> [OPTIONS]

Options:
  -n, --num <NUM>     Number of results  [default: 5]

Examples

wick search "MCP server protocol"
wick search "rust error handling" --num 10

CLI: setup / serve / version

wick setup

Auto-detect and configure MCP clients.

wick setup
# Detects Claude Code, Cursor, etc. and writes MCP config.

wick serve --mcp

Start the MCP server on stdio. Used by MCP clients — you don't usually run this directly.

wick serve --mcp

wick version

wick version
# wick 0.2.0 (rust)

Output Formats

Wick supports three output formats via the format parameter:

Markdown (default)

The full page converted to clean markdown. Strips <script>, <style>, <nav>, <header>, <footer>, and <aside> tags. Adds a title as an H1 heading.

Best for: LLM consumption, content extraction, research.

HTML

The raw page HTML exactly as received. No processing or extraction.

Best for: debugging, custom parsing, when you need the full DOM.

Text

Visible text content only. All HTML tags stripped, no markdown formatting.

Best for: word counts, text analysis, simple extraction.
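The text format amounts to walking the HTML and keeping only visible text. A minimal sketch with Python's stdlib parser (an illustration of the idea, not Wick's actual extractor):

```python
from html.parser import HTMLParser

class VisibleText(HTMLParser):
    """Collect text, skipping tags whose content is never visible prose."""
    SKIP = {"script", "style", "nav", "header", "footer", "aside"}

    def __init__(self):
        super().__init__()
        self.depth = 0    # how many SKIP tags we are currently inside
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.parts.append(data.strip())

def visible_text(html: str) -> str:
    parser = VisibleText()
    parser.feed(html)
    return " ".join(parser.parts)

print(visible_text("<nav>menu</nav><p>Hello <b>world</b></p><script>x()</script>"))
# → Hello world
```

The markdown format applies the same tag filtering but keeps structure (headings, links, lists) instead of flattening everything to a string.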

How It Works

When your agent calls wick_fetch, here's what happens:

  1. MCP protocol — Your agent sends a tool call via the Model Context Protocol. Wick runs as a local MCP server on stdio.
  2. Chrome TLS — The request goes through Chrome's actual network stack (BoringSSL, HTTP/2, QUIC). The TLS fingerprint is identical to a real Chrome browser.
  3. Your IP — Because Wick runs locally, the request exits from your residential IP address. No cloud proxy, no datacenter IPs that get flagged.
  4. Content extraction — The HTML response is converted to clean markdown. Navigation, ads, and boilerplate are stripped.
  5. Response — Your agent receives clean, LLM-friendly content it can reason about immediately.

Why not just use curl? Anti-bot systems (Cloudflare, Akamai, etc.) fingerprint the TLS handshake. curl, Python requests, Go's net/http, and Node's fetch all have distinct TLS signatures that get blocked instantly. Wick's request is indistinguishable from a real Chrome browser visiting the page.

Troubleshooting

Still getting 403 errors

If your agent is still getting blocked, it may be using its built-in fetch instead of Wick. Check:

  1. Run wick fetch <url> from your terminal — if this works, the issue is agent configuration
  2. Add instructions to CLAUDE.md or .cursorrules to prefer wick_fetch (see Agent Configuration)
  3. Some sites block based on IP reputation — try from a different network

wick: command not found

The binary isn't in your PATH. Try:

# Check if it's installed
which wick
ls /opt/homebrew/bin/wick   # macOS Homebrew
ls /usr/local/bin/wick      # Linux

# Or run from the repo
./rust/target/release/wick version

wick setup didn't detect my client

You can configure manually. See Manual MCP Config above.

Slow responses

First fetches may be slower while the connection pool warms up. Subsequent fetches to the same domain are faster. If consistently slow:

robots.txt blocking

Wick respects robots.txt by default. If a site blocks automated access:

wick fetch https://example.com --no-robots

The response will tell you when robots.txt is the reason for a block.