Pinecone Model Context Protocol server for Claude Desktop.
A Model Context Protocol server that allows reading from and writing to Pinecone, with basic RAG support.
Overview
What is MCP-Pinecone?
MCP-Pinecone is a Model Context Protocol server designed to facilitate read and write operations against Pinecone, a vector database service. The server provides a basic retrieval-augmented generation (RAG) capability, allowing developers to integrate AI features into their applications. By using MCP-Pinecone, users can streamline their data retrieval and make the information stored in Pinecone easier to access and use.
Features of MCP-Pinecone
- Pinecone integration: connects directly to Pinecone for efficient data management and retrieval.
- RAG capability: supports retrieval-augmented generation, improving the quality of generated responses by drawing on external data.
- Open source: publicly available, so developers can contribute to and modify the code as needed.
- User-friendly: designed with ease of use in mind, accessible to both newcomers and experienced developers.
- Active community: backed by a community of contributors, ensuring ongoing improvement and support.
How to Use MCP-Pinecone
- Install: install the server with uv (the project is a Python package), or clone the repository for development:
git clone https://github.com/sirmews/mcp-pinecone.git
cd mcp-pinecone
uv sync
- Configure: set your Pinecone API key and index name, either as command-line arguments or in your Claude Desktop configuration.
- Start the server: run the server to begin interacting with Pinecone:
uv run mcp-pinecone --index-name {your-index-name} --api-key {your-secret-api-key}
- Send requests: use the provided MCP tools to read data from and write data to Pinecone. See the documentation for details on the available tools and their usage.
- Contribute: if you would like to contribute, feel free to fork the repository, make your changes, and submit a pull request.
Frequently Asked Questions
What is Pinecone?
Pinecone is a fully managed vector database built for machine learning applications. It lets users store and query high-dimensional vectors efficiently, making it well suited to AI and data science workloads.
How does MCP-Pinecone enhance data retrieval?
By implementing retrieval-augmented generation (RAG), MCP-Pinecone lets applications generate responses based on both the model's training and real-time data from Pinecone, producing more accurate and contextually relevant output.
Is MCP-Pinecone free to use?
Yes, MCP-Pinecone is an open-source project and free to use. Note, however, that using Pinecone itself may incur charges depending on your usage of its service.
Can I contribute to the MCP-Pinecone project?
Absolutely! Contributions are welcome. You can fork the repository, make your improvements, and submit a pull request to share your enhancements with the community.
Where can I find the documentation for MCP-Pinecone?
Documentation is included in the repository. See the README file for setup instructions and usage guidelines.
Details
Pinecone Model Context Protocol Server for Claude Desktop.
Read and write to a Pinecone index.
Components
flowchart TB
subgraph Client["MCP Client (e.g., Claude Desktop)"]
UI[User Interface]
end
subgraph MCPServer["MCP Server (pinecone-mcp)"]
Server[Server Class]
subgraph Handlers["Request Handlers"]
ListRes[list_resources]
ReadRes[read_resource]
ListTools[list_tools]
CallTool[call_tool]
GetPrompt[get_prompt]
ListPrompts[list_prompts]
end
subgraph Tools["Implemented Tools"]
SemSearch[semantic-search]
ReadDoc[read-document]
ListDocs[list-documents]
PineconeStats[pinecone-stats]
ProcessDoc[process-document]
end
end
subgraph PineconeService["Pinecone Service"]
PC[Pinecone Client]
subgraph PineconeFunctions["Pinecone Operations"]
Search[search_records]
Upsert[upsert_records]
Fetch[fetch_records]
List[list_records]
Embed[generate_embeddings]
end
Index[(Pinecone Index)]
end
%% Connections
UI --> Server
Server --> Handlers
ListTools --> Tools
CallTool --> Tools
Tools --> PC
PC --> PineconeFunctions
PineconeFunctions --> Index
%% Data flow for semantic search
SemSearch --> Search
Search --> Embed
Embed --> Index
%% Data flow for document operations
ProcessDoc --> Upsert
ReadDoc --> Fetch
ListRes --> List
classDef primary fill:#2563eb,stroke:#1d4ed8,color:white
classDef secondary fill:#4b5563,stroke:#374151,color:white
classDef storage fill:#059669,stroke:#047857,color:white
class Server,PC primary
class Tools,Handlers secondary
class Index storage
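The semantic-search data flow in the diagram (query → embedding → index lookup) can be sketched with an in-memory stand-in for the Pinecone index. This is purely illustrative: `embed` here is a toy character-frequency embedding, not Pinecone's inference API, and the list of strings stands in for stored records.

```python
import math

def embed(text):
    # Toy embedding: character-frequency vector over a-z.
    # Stand-in for Pinecone's inference-API embeddings.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query, docs, top_k=3):
    # Mirrors the flow above: embed the query, then rank
    # stored records by vector similarity.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:top_k]
```

In the real server, `semantic_search` corresponds to the semantic-search tool, with the embedding and similarity ranking delegated to Pinecone.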
Resources
The server implements the ability to read and write to a Pinecone index.
Tools
- semantic-search: Search for records in the Pinecone index.
- read-document: Read a document from the Pinecone index.
- list-documents: List all documents in the Pinecone index.
- pinecone-stats: Get stats about the Pinecone index, including the number of records, dimensions, and namespaces.
- process-document: Process a document into chunks and upsert them into the Pinecone index. This performs the overall steps of chunking, embedding, and upserting.
Note: embeddings are generated via Pinecone's inference API, and chunking is done with a token-based chunker. Written by copying a lot from LangChain and debugging with Claude.
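As a rough illustration of the chunking step in process-document, here is a minimal sketch of an overlapping chunker. It splits on whitespace as a stand-in for a real token-based chunker (the actual implementation counts tokens, not words), and assumes max_tokens is larger than overlap:

```python
def chunk_text(text, max_tokens=200, overlap=20):
    # Split text into overlapping windows of at most max_tokens
    # "tokens"; whitespace splitting stands in for a real tokenizer.
    tokens = text.split()
    chunks = []
    step = max_tokens - overlap  # assumes max_tokens > overlap
    for start in range(0, len(tokens), step):
        window = tokens[start:start + max_tokens]
        chunks.append(" ".join(window))
        if start + max_tokens >= len(tokens):
            break
    return chunks
```

Each chunk would then be embedded and upserted as its own record, which is what lets semantic-search retrieve passages rather than whole documents.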
Quickstart
Installing via Smithery
To install Pinecone MCP Server for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install mcp-pinecone --client claude
Install the server
We recommend using uv to install the server locally for Claude.
uvx install mcp-pinecone
OR
uv pip install mcp-pinecone
Add your config as described below.
Claude Desktop
On MacOS: ~/Library/Application\ Support/Claude/claude_desktop_config.json
On Windows: %APPDATA%/Claude/claude_desktop_config.json
Note: You might need to use the direct path to uv. Use which uv to find the path.
Development/Unpublished Servers Configuration
"mcpServers": {
"mcp-pinecone": {
"command": "uv",
"args": [
"--directory",
"{project_dir}",
"run",
"mcp-pinecone"
]
}
}
Published Servers Configuration
"mcpServers": {
"mcp-pinecone": {
"command": "uvx",
"args": [
"--index-name",
"{your-index-name}",
"--api-key",
"{your-secret-api-key}",
"mcp-pinecone"
]
}
}
Sign up to Pinecone
You can sign up for a Pinecone account here.
Get an API key
Create a new index in Pinecone, replacing {your-index-name} and get an API key from the Pinecone dashboard, replacing {your-secret-api-key} in the config.
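For scripted setups, a small helper can generate the published-server entry shown above for claude_desktop_config.json. The function name is illustrative and not part of mcp-pinecone:

```python
import json

def pinecone_mcp_config(index_name, api_key):
    # Illustrative helper (not part of mcp-pinecone): builds the
    # "Published Servers" entry for claude_desktop_config.json.
    return {
        "mcpServers": {
            "mcp-pinecone": {
                "command": "uvx",
                "args": [
                    "--index-name", index_name,
                    "--api-key", api_key,
                    "mcp-pinecone",
                ],
            }
        }
    }

# Print the JSON to merge into your config file.
print(json.dumps(pinecone_mcp_config("my-index", "my-key"), indent=2))
```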
Development
Building and Publishing
To prepare the package for distribution:
- Sync dependencies and update lockfile:
uv sync
- Build package distributions:
uv build
This will create source and wheel distributions in the dist/ directory.
- Publish to PyPI:
uv publish
Note: You'll need to set PyPI credentials via environment variables or command flags:
- Token: --token or UV_PUBLISH_TOKEN
- Or username/password: --username/UV_PUBLISH_USERNAME and --password/UV_PUBLISH_PASSWORD
Debugging
Since MCP servers run over stdio, debugging can be challenging. For the best debugging experience, we strongly recommend using the MCP Inspector.
You can launch the MCP Inspector via npm with this command:
npx @modelcontextprotocol/inspector uv --directory {project_dir} run mcp-pinecone
Upon launching, the Inspector will display a URL that you can access in your browser to begin debugging.
License
This project is licensed under the MIT License. See the LICENSE file for details.
Source Code
The source code is available on GitHub.
Contributing
Send your ideas and feedback to me on Bluesky or by opening an issue.
Server Configuration
{
"mcpServers": {
"mcp-pinecone": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"ghcr.io/metorial/mcp-container--sirmews--mcp-pinecone--mcp-pinecone",
"mcp-pinecone --index-name index-name --api-key api-key"
],
"env": {}
}
}
}