full-website crawling with asynchronous content extraction
Crawls an entire website and extracts content from all discovered pages in a single asynchronous operation. The firecrawl_crawl tool accepts a base URL and options (max pages, allowed domains, exclude patterns, scrape options) and returns a crawl ID for polling. The crawler discovers URLs, extracts markdown content from each page, and stores results server-side. Clients poll firecrawl_crawl_status to retrieve results as they complete, implementing an async job pattern rather than blocking until completion.
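A minimal sketch of what initiating a crawl might look like from a client. The field names (`maxPages`, `allowedDomains`, `excludePatterns`, `scrapeOptions`) mirror the options listed above but are assumptions, not a confirmed schema; a real client would pass these arguments to the `firecrawl_crawl` tool, while here the server's job-ID response is simulated locally.

```python
import uuid

# Hypothetical request builder for the firecrawl_crawl tool.
# Argument names are assumptions based on the description above.
def start_crawl(base_url, *, max_pages=100, allowed_domains=None,
                exclude_patterns=None, scrape_options=None):
    """Assemble tool arguments and return a crawl job ID for polling."""
    args = {
        "url": base_url,
        "maxPages": max_pages,
        "allowedDomains": allowed_domains or [],
        "excludePatterns": exclude_patterns or [],
        "scrapeOptions": scrape_options or {"formats": ["markdown"]},
    }
    # A real client would invoke the MCP tool here; we simulate the
    # server responding with a job ID to be polled later.
    return {"args": args, "crawl_id": str(uuid.uuid4())}

job = start_crawl("https://example.com", max_pages=50,
                  exclude_patterns=["/blog/*"])
print(job["crawl_id"])  # ID used by later firecrawl_crawl_status polls
```

The key design point is that the call returns immediately with an ID rather than waiting for the crawl to finish.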
Unique: Implements server-side asynchronous crawling with job-based result retrieval, decoupling the crawl initiation from result consumption. The MCP server handles polling coordination through firecrawl_crawl_status, allowing AI agents to initiate long-running crawls and check progress without blocking. Firecrawl's backend manages the entire crawl lifecycle including URL discovery, content extraction, and result storage.
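The polling half of the job pattern can be sketched as follows. A fake backend stands in for the Firecrawl server so the loop is self-contained; the status values (`"scraping"`, `"completed"`) and response fields (`completed`, `total`, `data`) are illustrative assumptions, not a confirmed response schema.

```python
import time

class FakeCrawlBackend:
    """Simulates the server-side crawl: pages 'finish' between polls."""
    def __init__(self, urls):
        self.pending = list(urls)
        self.done = []

    def crawl_status(self, crawl_id):
        # Each status check, pretend up to two more pages completed.
        for url in self.pending[:2]:
            self.done.append({"url": url, "markdown": f"# Page at {url}"})
        del self.pending[:2]
        status = "completed" if not self.pending else "scraping"
        return {"status": status,
                "completed": len(self.done),
                "total": len(self.done) + len(self.pending),
                "data": list(self.done)}

def poll_until_done(backend, crawl_id, interval=0.0, max_polls=50):
    """Poll crawl status until the job completes, then return results.

    Mirrors the pattern above: the agent checks progress repeatedly
    instead of blocking on a single long-running call.
    """
    for _ in range(max_polls):
        resp = backend.crawl_status(crawl_id)
        if resp["status"] == "completed":
            return resp["data"]
        time.sleep(interval)  # back off between polls in real use
    raise TimeoutError("crawl did not finish within max_polls")

backend = FakeCrawlBackend(["https://example.com/",
                            "https://example.com/docs",
                            "https://example.com/about"])
pages = poll_until_done(backend, crawl_id="job-123")
print(len(pages))  # → 3
```

In real use the poll interval would be nonzero (with backoff) and partial results in `data` could be consumed before the job reaches `"completed"`.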
vs alternatives: More scalable than sequential scraping because crawling happens server-side in parallel; simpler than managing Puppeteer/Playwright browser pools because Firecrawl abstracts browser automation and handles rate limiting internally.