Overview
Firecrawl MCP Server
A Model Context Protocol (MCP) server implementation that integrates with Firecrawl to provide web scraping capabilities.
Big thanks to @vrknetha and @knacklabs for the initial implementation!
What is the Firecrawl MCP Server?
The Firecrawl MCP Server is a powerful tool designed for web scraping, crawling, and data extraction. It leverages the Model Context Protocol to enable seamless integration with a wide range of applications, allowing users to gather and analyze web data efficiently. With its robust feature set, it serves developers and researchers who want to automate web data collection.
Features
- Web scraping, crawling, and discovery: Efficiently gather data from websites.
- Search and content extraction: Find and extract relevant information from a variety of sources.
- Deep research and batch scraping: Conduct extensive research and scrape multiple pages at once.
- Automatic retries and rate limiting: Built-in error handling ensures reliable data collection.
- Cloud and self-hosted support: Run flexibly in the cloud or in a self-hosted environment.
- SSE support: Use Server-Sent Events for real-time data streaming.
Try our MCP server in the playground on MCP.so or on Klavis AI.
How to Install the Firecrawl MCP Server
Running with npx
env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
Manual Installation
npm install -g firecrawl-mcp
Running on Cursor
For Cursor version 0.45.6 and above, refer to the Cursor MCP Server Configuration Guide for detailed instructions.
- Open Cursor Settings
- Go to Features > MCP Servers
- Click "+ Add new global MCP server"
- Enter the configuration code shown in the Details section below.
Running on Windsurf
Add the following configuration to your ./codeium/windsurf/model_config.json:
{
"mcpServers": {
"mcp-server-firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "YOUR_API_KEY"
}
}
}
}
Pricing
Firecrawl offers multiple pricing tiers based on usage and features. For detailed pricing information, visit the Firecrawl pricing page.
Useful Tips
- API key management: Always keep your Firecrawl API key secure and never expose it in public repositories.
- Rate limits: Be mindful of the rate limits imposed by the Firecrawl API to avoid service interruptions.
- Batch processing: Use batch scraping for better efficiency when working with multiple URLs.
- Error handling: Implement robust error handling in your scripts to manage retries and failures gracefully (see the sketch below).
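As a rough sketch of the last two tips, the TypeScript helper below shows one way a calling script might wrap its requests with retries and exponential backoff. It is illustrative only and not part of the Firecrawl MCP server; the function name and defaults are hypothetical, chosen to mirror the retry settings described in the Details section.
// retry.ts - illustrative sketch only; not part of firecrawl-mcp
async function withRetry<T>(
  task: () => Promise<T>,
  maxAttempts = 3,       // mirrors FIRECRAWL_RETRY_MAX_ATTEMPTS
  initialDelayMs = 1000, // mirrors FIRECRAWL_RETRY_INITIAL_DELAY
  maxDelayMs = 10000,    // mirrors FIRECRAWL_RETRY_MAX_DELAY
  backoffFactor = 2      // mirrors FIRECRAWL_RETRY_BACKOFF_FACTOR
): Promise<T> {
  let delay = initialDelayMs;
  for (let attempt = 1; ; attempt++) {
    try {
      return await task(); // success: return the result
    } catch (err) {
      if (attempt >= maxAttempts) throw err; // out of attempts: surface the error
      await new Promise((resolve) => setTimeout(resolve, delay)); // wait before retrying
      delay = Math.min(delay * backoffFactor, maxDelayMs); // back off exponentially, capped
    }
  }
}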
FAQ
What is the Firecrawl MCP Server used for?
The Firecrawl MCP Server is primarily used for web scraping, allowing users to extract data from websites efficiently.
How do I get my Firecrawl API key?
You can create an account on the Firecrawl website to obtain your API key.
Can I run the Firecrawl MCP Server locally?
Yes, the Firecrawl MCP Server can run locally or in a cloud environment, depending on your needs.
Which programming languages can I use to integrate with the Firecrawl MCP Server?
The Firecrawl MCP Server can be integrated from any programming language that supports HTTP requests, making it flexible for a wide range of applications.
Is there support for troubleshooting?
Yes, Firecrawl provides documentation and community support for common issues. You can also contact their support team for assistance.
Details
Firecrawl MCP Server
A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for web scraping capabilities.
Big thanks to @vrknetha, @knacklabs for the initial implementation!
Features
- Web scraping, crawling, and discovery
- Search and content extraction
- Deep research and batch scraping
- Automatic retries and rate limiting
- Cloud and self-hosted support
- SSE support
Play around with our MCP Server on MCP.so's playground or on Klavis AI.
Installation
Running with npx
env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
Manual Installation
npm install -g firecrawl-mcp
Running on Cursor
Configuring Cursor 🖥️
Note: Requires Cursor version 0.45.6+. For the most up-to-date configuration instructions, please refer to the official Cursor documentation on configuring MCP servers: Cursor MCP Server Configuration Guide.
To configure Firecrawl MCP in Cursor v0.48.6
- Open Cursor Settings
- Go to Features > MCP Servers
- Click "+ Add new global MCP server"
- Enter the following code:
{ "mcpServers": { "firecrawl-mcp": { "command": "npx", "args": ["-y", "firecrawl-mcp"], "env": { "FIRECRAWL_API_KEY": "YOUR-API-KEY" } } } }
To configure Firecrawl MCP in Cursor v0.45.6
- Open Cursor Settings
- Go to Features > MCP Servers
- Click "+ Add New MCP Server"
- Enter the following:
- Name: "firecrawl-mcp" (or your preferred name)
- Type: "command"
- Command:
env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp
If you are using Windows and are running into issues, try
cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"
Replace your-api-key with your Firecrawl API key. If you don't have one yet, you can create an account and get it from https://www.firecrawl.dev/app/api-keys
After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.
Running on Windsurf
Add this to your ./codeium/windsurf/model_config.json:
{
"mcpServers": {
"mcp-server-firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "YOUR_API_KEY"
}
}
}
}
Running with SSE Local Mode
To run the server using Server-Sent Events (SSE) locally instead of the default stdio transport:
env SSE_LOCAL=true FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
Use the url: http://localhost:3000/sse
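Once the server is running in SSE mode, an MCP client that supports URL-based server entries can be pointed at that endpoint. The exact configuration keys vary by client, so treat the snippet below as a sketch rather than a documented configuration:
{
  "mcpServers": {
    "firecrawl-mcp": {
      "url": "http://localhost:3000/sse"
    }
  }
}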
Installing via Smithery (Legacy)
To install Firecrawl for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude
Running on VS Code
For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing Ctrl + Shift + P and typing Preferences: Open User Settings (JSON).
{
"mcp": {
"inputs": [
{
"type": "promptString",
"id": "apiKey",
"description": "Firecrawl API Key",
"password": true
}
],
"servers": {
"firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "${input:apiKey}"
}
}
}
}
}
Optionally, you can add it to a file called .vscode/mcp.json in your workspace. This will allow you to share the configuration with others:
{
"inputs": [
{
"type": "promptString",
"id": "apiKey",
"description": "Firecrawl API Key",
"password": true
}
],
"servers": {
"firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "${input:apiKey}"
}
}
}
}
Configuration
Environment Variables
Required for Cloud API
- FIRECRAWL_API_KEY: Your Firecrawl API key
  - Required when using the cloud API (default)
  - Optional when using a self-hosted instance with FIRECRAWL_API_URL
- FIRECRAWL_API_URL (Optional): Custom API endpoint for self-hosted instances
  - Example: https://firecrawl.your-domain.com
  - If not provided, the cloud API will be used (requires an API key)
Optional Configuration
Retry Configuration
- FIRECRAWL_RETRY_MAX_ATTEMPTS: Maximum number of retry attempts (default: 3)
- FIRECRAWL_RETRY_INITIAL_DELAY: Initial delay in milliseconds before the first retry (default: 1000)
- FIRECRAWL_RETRY_MAX_DELAY: Maximum delay in milliseconds between retries (default: 10000)
- FIRECRAWL_RETRY_BACKOFF_FACTOR: Exponential backoff multiplier (default: 2)
Credit Usage Monitoring
- FIRECRAWL_CREDIT_WARNING_THRESHOLD: Credit usage warning threshold (default: 1000)
- FIRECRAWL_CREDIT_CRITICAL_THRESHOLD: Credit usage critical threshold (default: 100)
Configuration Examples
For cloud API usage with custom retry and credit monitoring:
# Required for cloud API
export FIRECRAWL_API_KEY=your-api-key
# Optional retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5        # Increase max retry attempts
export FIRECRAWL_RETRY_INITIAL_DELAY=2000    # Start with 2s delay
export FIRECRAWL_RETRY_MAX_DELAY=30000       # Maximum 30s delay
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3      # More aggressive backoff
# Optional credit monitoring
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000   # Warning at 2000 credits
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500   # Critical at 500 credits
For self-hosted instance:
# Required for self-hosted
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com
# Optional authentication for self-hosted
export FIRECRAWL_API_KEY=your-api-key        # If your instance requires auth
# Custom retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=10
export FIRECRAWL_RETRY_INITIAL_DELAY=500     # Start with faster retries
Usage with Claude Desktop
Add this to your claude_desktop_config.json
:
{
"mcpServers": {
"mcp-server-firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE",
"FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
"FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
"FIRECRAWL_RETRY_MAX_DELAY": "30000",
"FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",
"FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
"FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
}
}
}
}
System Configuration
The server includes several configurable parameters that can be set via environment variables. Here are the default values if not configured:
const CONFIG = {
retry: {
maxAttempts: 3, // Number of retry attempts for rate-limited requests
initialDelay: 1000, // Initial delay before first retry (in milliseconds)
maxDelay: 10000, // Maximum delay between retries (in milliseconds)
backoffFactor: 2, // Multiplier for exponential backoff
},
credit: {
warningThreshold: 1000, // Warn when credit usage reaches this level
criticalThreshold: 100, // Critical alert when credit usage reaches this level
},
};
These configurations control:
- Retry Behavior
  - Automatically retries failed requests due to rate limits
  - Uses exponential backoff to avoid overwhelming the API
  - Example: With default settings, retries will be attempted at:
    - 1st retry: 1 second delay
    - 2nd retry: 2 seconds delay
    - 3rd retry: 4 seconds delay (capped at maxDelay)
- Credit Usage Monitoring
  - Tracks API credit consumption for cloud API usage
  - Provides warnings at specified thresholds
  - Helps prevent unexpected service interruption
  - Example: With default settings:
    - Warning at 1000 credits remaining
    - Critical alert at 100 credits remaining
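As a small illustration (TypeScript, not part of the server code), the schedule above can be derived directly from the default CONFIG values:
// Illustrative only: derive the retry delay schedule from the defaults shown above.
const retryDefaults = { maxAttempts: 3, initialDelay: 1000, maxDelay: 10000, backoffFactor: 2 };
const delays = Array.from({ length: retryDefaults.maxAttempts }, (_, attempt) =>
  Math.min(retryDefaults.initialDelay * retryDefaults.backoffFactor ** attempt, retryDefaults.maxDelay)
);
console.log(delays); // [1000, 2000, 4000] -> 1s, 2s, then 4s, capped at maxDelay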
Rate Limiting and Batch Processing
The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities:
- Automatic rate limit handling with exponential backoff
- Efficient parallel processing for batch operations
- Smart request queuing and throttling
- Automatic retries for transient errors
How to Choose a Tool
Use this guide to select the right tool for your task:
- If you know the exact URL(s) you want:
- For one: use scrape
- For many: use batch_scrape
- If you need to discover URLs on a site: use map
- If you want to search the web for info: use search
- If you want to extract structured data: use extract
- If you want to analyze a whole site or section: use crawl (with limits!)
- If you want to do in-depth research: use deep_research
- If you want to generate LLMs.txt: use generate_llmstxt
Quick Reference Table
| Tool | Best for | Returns |
|---|---|---|
| scrape | Single page content | markdown/html |
| batch_scrape | Multiple known URLs | markdown/html[] |
| map | Discovering URLs on a site | URL[] |
| crawl | Multi-page extraction (with limits) | markdown/html[] |
| search | Web search for info | results[] |
| extract | Structured data from pages | JSON |
| deep_research | In-depth, multi-source research | summary, sources |
| generate_llmstxt | LLMs.txt for a domain | text |
Available Tools
1. Scrape Tool (firecrawl_scrape)
Scrape content from a single URL with advanced options.
Best for:
- Single page content extraction, when you know exactly which page contains the information.
Not recommended for:
- Extracting content from multiple pages (use batch_scrape for known URLs, or map + batch_scrape to discover URLs first, or crawl for full page content)
- When you're unsure which page contains the information (use search)
- When you need structured data (use extract)
Common mistakes:
- Using scrape for a list of URLs (use batch_scrape instead).
Prompt Example:
"Get the content of the page at https://example.com."
Usage Example:
{
"name": "firecrawl_scrape",
"arguments": {
"url": "https://example.com",
"formats": ["markdown"],
"onlyMainContent": true,
"waitFor": 1000,
"timeout": 30000,
"mobile": false,
"includeTags": ["article", "main"],
"excludeTags": ["nav", "footer"],
"skipTlsVerification": false
}
}
Returns:
- Markdown, HTML, or other formats as specified.
2. Batch Scrape Tool (firecrawl_batch_scrape)
Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.
Best for:
- Retrieving content from multiple pages, when you know exactly which pages to scrape.
Not recommended for:
- Discovering URLs (use map first if you don't know the URLs)
- Scraping a single page (use scrape)
Common mistakes:
- Using batch_scrape with too many URLs at once (may hit rate limits or token overflow)
Prompt Example:
"Get the content of these three blog posts: [url1, url2, url3]."
Usage Example:
{
"name": "firecrawl_batch_scrape",
"arguments": {
"urls": ["https://example1.com", "https://example2.com"],
"options": {
"formats": ["markdown"],
"onlyMainContent": true
}
}
}
Returns:
- Response includes operation ID for status checking:
{
"content": [
{
"type": "text",
"text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress."
}
],
"isError": false
}
3. Check Batch Status (firecrawl_check_batch_status)
Check the status of a batch operation.
{
"name": "firecrawl_check_batch_status",
"arguments": {
"id": "batch_1"
}
}
4. Map Tool (firecrawl_map)
Map a website to discover all indexed URLs on the site.
Best for:
- Discovering URLs on a website before deciding what to scrape
- Finding specific sections of a website
Not recommended for:
- When you already know which specific URL you need (use scrape or batch_scrape)
- When you need the content of the pages (use scrape after mapping)
Common mistakes:
- Using crawl to discover URLs instead of map
Prompt Example:
"List all URLs on example.com."
Usage Example:
{
"name": "firecrawl_map",
"arguments": {
"url": "https://example.com"
}
}
Returns:
- Array of URLs found on the site
5. Search Tool (firecrawl_search)
Search the web and optionally extract content from search results.
Best for:
- Finding specific information across multiple websites, when you don't know which website has the information.
- When you need the most relevant content for a query
Not recommended for:
- When you already know which website to scrape (use scrape)
- When you need comprehensive coverage of a single website (use map or crawl)
Common mistakes:
- Using crawl or map for open-ended questions (use search instead)
Usage Example:
{
"name": "firecrawl_search",
"arguments": {
"query": "latest AI research papers 2023",
"limit": 5,
"lang": "en",
"country": "us",
"scrapeOptions": {
"formats": ["markdown"],
"onlyMainContent": true
}
}
}
Returns:
- Array of search results (with optional scraped content)
Prompt Example:
"Find the latest research papers on AI published in 2023."
6. Crawl Tool (firecrawl_crawl)
Starts an asynchronous crawl job on a website and extracts content from all pages.
Best for:
- Extracting content from multiple related pages, when you need comprehensive coverage.
Not recommended for:
- Extracting content from a single page (use scrape)
- When token limits are a concern (use map + batch_scrape)
- When you need fast results (crawling can be slow)
Warning: Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control.
Common mistakes:
- Setting limit or maxDepth too high (causes token overflow)
- Using crawl for a single page (use scrape instead)
Prompt Example:
"Get all blog posts from the first two levels of example.com/blog."
Usage Example:
{
"name": "firecrawl_crawl",
"arguments": {
"url": "https://example.com/blog/*",
"maxDepth": 2,
"limit": 100,
"allowExternalLinks": false,
"deduplicateSimilarURLs": true
}
}
Returns:
- Response includes operation ID for status checking:
{
"content": [
{
"type": "text",
"text": "Started crawl for: https://example.com/* with job ID: 550e8400-e29b-41d4-a716-446655440000. Use firecrawl_check_crawl_status to check progress."
}
],
"isError": false
}
7. Check Crawl Status (firecrawl_check_crawl_status)
Check the status of a crawl job.
{
"name": "firecrawl_check_crawl_status",
"arguments": {
"id": "550e8400-e29b-41d4-a716-446655440000"
}
}
Returns:
- Response includes the status of the crawl job.
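As a sketch, the response follows the same MCP text-content shape used by the other tools, with a text field summarizing the job's progress; the exact wording below is illustrative, not the server's guaranteed output:
{
  "content": [
    {
      "type": "text",
      "text": "Crawl job status: completed. 100/100 pages scraped. (illustrative example)"
    }
  ],
  "isError": false
}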
8. Extract Tool (firecrawl_extract)
Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
Best for:
- Extracting specific structured data like prices, names, details.
Not recommended for:
- When you need the full content of a page (use scrape)
- When you're not looking for specific structured data
Arguments:
- urls: Array of URLs to extract information from
- prompt: Custom prompt for the LLM extraction
- systemPrompt: System prompt to guide the LLM
- schema: JSON schema for structured data extraction
- allowExternalLinks: Allow extraction from external links
- enableWebSearch: Enable web search for additional context
- includeSubdomains: Include subdomains in extraction
When using a self-hosted instance, the extraction will use your configured LLM. For the cloud API, it uses Firecrawl's managed LLM service.
Prompt Example:
"Extract the product name, price, and description from these product pages."
Usage Example:
{
"name": "firecrawl_extract",
"arguments": {
"urls": ["https://example.com/page1", "https://example.com/page2"],
"prompt": "Extract product information including name, price, and description",
"systemPrompt": "You are a helpful assistant that extracts product information",
"schema": {
"type": "object",
"properties": {
"name": { "type": "string" },
"price": { "type": "number" },
"description": { "type": "string" }
},
"required": ["name", "price"]
},
"allowExternalLinks": false,
"enableWebSearch": false,
"includeSubdomains": false
}
}
Returns:
- Extracted structured data as defined by your schema
{
"content": [
{
"type": "text",
"text": {
"name": "Example Product",
"price": 99.99,
"description": "This is an example product description"
}
}
],
"isError": false
}
9. Deep Research Tool (firecrawl_deep_research)
Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.
Best for:
- Complex research questions requiring multiple sources, in-depth analysis.
Not recommended for:
- Simple questions that can be answered with a single search
- When you need very specific information from a known page (use scrape)
- When you need results quickly (deep research can take time)
Arguments:
- query (string, required): The research question or topic to explore.
- maxDepth (number, optional): Maximum recursive depth for crawling/search (default: 3).
- timeLimit (number, optional): Time limit in seconds for the research session (default: 120).
- maxUrls (number, optional): Maximum number of URLs to analyze (default: 50).
Prompt Example:
"Research the environmental impact of electric vehicles versus gasoline vehicles."
Usage Example:
{
"name": "firecrawl_deep_research",
"arguments": {
"query": "What are the environmental impacts of electric vehicles compared to gasoline vehicles?",
"maxDepth": 3,
"timeLimit": 120,
"maxUrls": 50
}
}
Returns:
- Final analysis generated by an LLM based on research. (data.finalAnalysis)
- May also include structured activities and sources used in the research process.
10. Generate LLMs.txt Tool (firecrawl_generate_llmstxt)
Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.
Best for:
- Creating machine-readable permission guidelines for AI models.
Not recommended for:
- General content extraction or research
Arguments:
- url (string, required): The base URL of the website to analyze.
- maxUrls (number, optional): Max number of URLs to include (default: 10).
- showFullText (boolean, optional): Whether to include llms-full.txt contents in the response.
Prompt Example:
"Generate an LLMs.txt file for example.com."
Usage Example:
{
"name": "firecrawl_generate_llmstxt",
"arguments": {
"url": "https://example.com",
"maxUrls": 20,
"showFullText": true
}
}
Returns:
- LLMs.txt file contents (and optionally llms-full.txt)
Logging System
The server includes comprehensive logging:
- Operation status and progress
- Performance metrics
- Credit usage monitoring
- Rate limit tracking
- Error conditions
Example log messages:
[INFO] Firecrawl MCP Server initialized successfully
[INFO] Starting scrape for URL: https://example.com
[INFO] Batch operation queued with ID: batch_1
[WARNING] Credit usage has reached warning threshold
[ERROR] Rate limit exceeded, retrying in 2s...
Error Handling
The server provides robust error handling:
- Automatic retries for transient errors
- Rate limit handling with backoff
- Detailed error messages
- Credit usage warnings
- Network resilience
Example error response:
{
"content": [
{
"type": "text",
"text": "Error: Rate limit exceeded. Retrying in 2 seconds..."
}
],
"isError": true
}
Development
# Install dependencies
npm install
# Build
npm run build
# Run tests
npm test
Contributing
- Fork the repository
- Create your feature branch
- Run tests:
npm test
- Submit a pull request
Thanks to contributors
Thanks to @vrknetha, @cawstudios for the initial implementation!
Thanks to MCP.so and Klavis AI for hosting and @gstarwd, @xiangkaiz and @zihaolin96 for integrating our server.
License
MIT License - see LICENSE file for details
Server Configuration
{
"mcpServers": {
"firecrawl-mcp": {
"command": "npx",
"args": [
"-y",
"firecrawl-mcp"
],
"env": {
"FIRECRAWL_API_KEY": "fc-af1b3ac1a0c2402485402fd0e34da158"
}
}
}
}