Ask your AI about https://radar.kaistone.ai/

AI Crawler Detection Research Project

Kaistone Radar is an open research initiative dedicated to understanding how artificial intelligence systems interact with the web. As AI assistants and large language models become more widely used, the crawlers that feed them data are visiting websites at an increasing rate. This project brings transparency to that process by providing tools that track when and how AI crawlers access web content.

The project works by embedding a lightweight tracking beacon into web pages. When any visitor, human or AI crawler, loads a page carrying the beacon, the beacon fires and records metadata about the visit: the timestamp, IP address, and user-agent string. By matching user-agent patterns, Kaistone Radar can identify specific AI systems such as OpenAI's GPTBot, Anthropic's ClaudeBot, PerplexityBot, Google's AI crawlers, and many others.
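Detection of this kind typically reduces to pattern checks against the User-Agent header. The following is a minimal sketch of that idea, not the project's actual rule set: identifyCrawler and BOT_PATTERNS are hypothetical names, and the pattern list is deliberately short, covering only crawlers named above.

```javascript
// Sketch: classify a visitor by matching its user-agent string against
// known crawler tokens. Real detection rules are far more extensive.
const BOT_PATTERNS = [
  { name: "GPTBot", pattern: /GPTBot/i },
  { name: "ClaudeBot", pattern: /ClaudeBot/i },
  { name: "PerplexityBot", pattern: /PerplexityBot/i },
  { name: "Google-Extended", pattern: /Google-Extended/i },
];

function identifyCrawler(userAgent) {
  // First matching pattern wins; anything unmatched is treated as unknown
  // (which includes ordinary human browsers).
  const match = BOT_PATTERNS.find((b) => b.pattern.test(userAgent || ""));
  return match ? match.name : "unknown";
}

console.log(identifyCrawler("Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)"));
// → "GPTBot"
```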

All collected data is displayed on a real-time dashboard that shows visit statistics, bot identification breakdowns, and a chronological hit log. The project is fully open source and designed to be deployed in minutes on Netlify. No database or external services are required — hit data is stored using Netlify Blobs, a simple key-value store built into the platform.
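Persisting a hit in a key-value store can be sketched as follows. Netlify Blobs is the project's real backing store; here a plain Map stands in so the example is self-contained, and the key scheme and field names are illustrative, not the project's actual schema.

```javascript
// Sketch: persist one hit record in a key-value store.
// A Map stands in for a Netlify Blobs store so this runs anywhere.
const store = new Map();

function recordHit(ip, userAgent, now = new Date()) {
  const hit = {
    timestamp: now.toISOString(),
    ip,
    userAgent,
  };
  // A timestamp-based key keeps the hit log chronologically sortable.
  const key = `hit:${now.getTime()}`;
  store.set(key, JSON.stringify(hit));
  return key;
}

const key = recordHit("203.0.113.7", "Mozilla/5.0 (compatible; GPTBot/1.0)");
console.log(JSON.parse(store.get(key)).userAgent);
```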

Why This Matters

Website operators often have limited visibility into which AI systems are accessing their content. While robots.txt provides a mechanism to control crawler access, it relies on voluntary compliance and does not provide monitoring capabilities. Kaistone Radar fills this gap by offering passive detection — it does not block any crawlers, but it does record their visits so site owners can make informed decisions about their content policies.
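For comparison, the robots.txt mechanism mentioned above lets operators ask specific crawlers to stay away. GPTBot and ClaudeBot are the user-agent tokens those crawlers publish; honoring such rules remains voluntary, which is exactly the gap passive detection addresses.

```txt
# Ask specific AI crawlers not to fetch any content.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
```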

Research Findings

The Research Findings page is a living, open log of observations and discoveries from this project — contributed by human researchers, developers, and AI systems. Findings document gaps in the detection mechanism, crawler behavior patterns, and proposed improvements. View all findings →

Crawl Depth Experiment

Explore the Crawl Depth Tree — a seven-level-deep structure of linked pages designed to measure how deeply AI crawlers follow links. Each page in the tree contains a tracking beacon and links to four child pages, creating over 21,000 possible paths for crawlers to traverse. The dashboard visualizes which pages were visited and how deep each crawler went.
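The "over 21,000" figure can be checked with a quick sketch, assuming the tree is a root page plus seven levels of children below it (the reading under which the arithmetic works out); treePageCount is a hypothetical helper, not part of the project.

```javascript
// Count pages in a tree where every page links to `branching` children,
// assuming a root page plus `depth` levels beneath it.
function treePageCount(branching, depth) {
  let total = 0;
  for (let level = 0; level <= depth; level++) {
    total += branching ** level; // 1 root, then 4, 16, ... pages per level
  }
  return total;
}

// Each page is reached by exactly one root-to-page path, so pages = paths.
console.log(treePageCount(4, 7)); // → 21845, i.e. "over 21,000"
```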

Research Notice: This page is part of an AI crawler detection research project. A 1×1 transparent tracking pixel is embedded on this page to log visits from AI crawlers and web browsers. No personally identifiable information is collected beyond IP addresses and user-agent strings, which are standard HTTP request headers. Visit the live dashboard to see the collected data.