A Live Stats Page for Wick
April 21, 2026
Wick now publishes a live stats page showing which sites it actually works on, which strategy won, and where it's currently losing. The data is pulled from real per-fetch telemetry, aggregated over the last seven days, and refreshed every five minutes.
If you want to know whether Wick will actually work on a given site — go look at the page.
Why a public stats page
"Does this scraper work?" is a near-useless question. A scraper is always a moving target: sites change their anti-bot rules, a new Chrome release changes the TLS fingerprint, a Cloudflare rule update breaks a fingerprinting technique. What matters is "does it work today, on this site, at this version." That's exactly what the stats page answers.
It's also accountability. When a site stops working, it shows up in the public "Current failures" section. I can't quietly let regressions sit; you can see whether I'm fixing things.
What's actually collected
Every fetch records a small JSON envelope:
{
  "host": "nytimes.com",
  "strategy": "cef",
  "escalated_from": "cronet",
  "ok": true,
  "status": 200,
  "timing_ms": 1840,
  "version": "0.10.0",
  "os": "macos"
}
That's it. No URL paths, no query strings, no page content, no titles. Your IP address isn't stored as a data point (Cloudflare sees it at ingest, like any HTTP request, but it doesn't land in the event table). No user identifier. No machine ID.
Opt out with WICK_TELEMETRY=0 or touch ~/.wick/no-telemetry. Either disables all telemetry including the daily usage pings and the failure reports. Details: docs.html#telemetry.
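For the curious, the opt-out check is simple enough to sketch. This is a hypothetical illustration, not Wick's actual source; the env var and sentinel file path match the ones documented above:

```python
import os
from pathlib import Path

def telemetry_enabled() -> bool:
    """Telemetry is off if WICK_TELEMETRY=0 is set
    or the ~/.wick/no-telemetry sentinel file exists."""
    if os.environ.get("WICK_TELEMETRY") == "0":
        return False
    if (Path.home() / ".wick" / "no-telemetry").exists():
        return False
    return True
```

Either switch disables everything; there is no partial opt-out to reason about.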
What you can read off the page
Three tables, all grouped by host:
- Top hosts by fetch volume. What the user base actually fetches, with overall success rate and a strategy breakdown. Use this to see whether your target site is mainstream (in the top 30) and what strategy usually wins there.
- Hosts that need CEF. Sites where the lightweight Cronet path fails but the embedded-Chromium CEF path succeeds. These are the sites that justify installing CEF (wick install cef).
- Current failures. Hosts where nothing is working — no strategy has returned a real page in seven days. This is the active work queue. If you rely on one of these, open an issue or a PR.
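The "Hosts that need CEF" table falls straight out of the event envelopes. A minimal sketch of that aggregation, assuming the field names shown earlier (this is illustrative, not the page's actual backend query):

```python
from collections import defaultdict

def hosts_needing_cef(events):
    """Return hosts where Cronet never succeeds but CEF does,
    given fetch-event dicts with host, strategy, and ok fields."""
    stats = defaultdict(lambda: {"cronet_ok": 0, "cronet_total": 0, "cef_ok": 0})
    for e in events:
        s = stats[e["host"]]
        if e["strategy"] == "cronet":
            s["cronet_total"] += 1
            s["cronet_ok"] += e["ok"]
        elif e["strategy"] == "cef":
            s["cef_ok"] += e["ok"]
    return sorted(h for h, s in stats.items()
                  if s["cronet_total"] and not s["cronet_ok"] and s["cef_ok"])
```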
The underlying idea
A scraper that doesn't learn is going to degrade. A scraper that learns in the aggregate gets better with every fetch. Wick's client-side site_cache does half of this — each install remembers "Cronet worked on this site" or "we had to escalate to CEF" and uses that on the next fetch. The public stats page does the other half, at the population level: across all installs, which strategies actually succeed on which sites?
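The client-side half can be pictured as a tiny per-host key-value store. A sketch of the idea, assuming a simple JSON file layout (not Wick's actual site_cache format):

```python
import json
from pathlib import Path

class SiteCache:
    """Remembers which strategy last worked for each host,
    so the next fetch starts from the cheapest thing known to work."""

    def __init__(self, path: Path):
        self.path = path
        self.data = json.loads(path.read_text()) if path.exists() else {}

    def preferred(self, host: str, default: str = "cronet") -> str:
        return self.data.get(host, default)

    def record(self, host: str, strategy: str, ok: bool) -> None:
        if ok:
            self.data[host] = strategy
        self.path.write_text(json.dumps(self.data))
```

The point of the design: each install pays the escalation cost once per host, then skips straight to the winning strategy.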
When we see a site move from "100% Cronet success" to "50% Cronet, escalating to CEF" over a few days, that's a signal — the site just updated its anti-bot rules. We can investigate and ship a fix. When a site drops to "0% across all strategies," it goes on the failures list and gets worked on.
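The population-level signal is just an escalation rate per host, tracked over time. A hedged sketch, assuming the envelope fields shown above (the real pipeline presumably windows this by day; field names are the assumption here):

```python
def escalation_rate(events, host):
    """Fraction of fetches for `host` that escalated
    from Cronet to CEF, per the escalated_from field."""
    relevant = [e for e in events if e["host"] == host]
    if not relevant:
        return 0.0
    escalated = sum(1 for e in relevant
                    if e.get("escalated_from") == "cronet")
    return escalated / len(relevant)
```

A rate drifting from ~0 toward ~0.5 over a few days is the "site just changed its anti-bot rules" signal described above.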
Open source scrapers usually die not because of one big failure but because the long tail of per-site breakage accumulates faster than anyone feels like fixing it. The hope is that making that queue public, with real numbers, keeps the feedback loop tight.
Try it
Stats page: getwick.dev/stats.html.
If you're already running Wick, it's already contributing — unless you've opted out, which is fine.
Not yet? brew tap wickproject/wick && brew install wick, or grab it from GitHub.