Pinecone Model Context Protocol Server For Claude Desktop.
Model Context Protocol server to allow for reading and writing from Pinecone. Basic Retrieval-Augmented Generation (RAG).
Overview
What is MCP-Pinecone?
MCP-Pinecone is a Model Context Protocol server designed to facilitate reading and writing operations from Pinecone, a vector database service. This server provides a basic Retrieval-Augmented Generation (RAG) capability, allowing developers to seamlessly integrate advanced AI functionalities into their applications. By leveraging MCP-Pinecone, users can enhance their data retrieval processes, making it easier to access and utilize information stored in Pinecone.
Features of MCP-Pinecone
- Integration with Pinecone: Directly connects to Pinecone, enabling efficient data management and retrieval.
- RAG Capabilities: Supports Retrieval-Augmented Generation, enhancing the quality of generated responses by utilizing external data.
- Open Source: Available for public use, allowing developers to contribute and modify the code as needed.
- User-Friendly Interface: Designed with ease of use in mind, making it accessible for both novice and experienced developers.
- Active Community: Backed by a community of contributors, ensuring continuous improvement and support.
How to Use MCP-Pinecone
- Installation: Install the package with uv, or clone the repository from GitHub if you want to work on the source.
uv pip install mcp-pinecone
git clone https://github.com/sirmews/mcp-pinecone.git
- Configuration: Set your Pinecone API key and index name in your MCP client configuration (see the Claude Desktop configuration below).
- Running the Server: Your MCP client (e.g., Claude Desktop) launches the server over stdio once it is configured; there is no separate server process to start.
- Making Requests: Use the provided MCP tools to read and write data in Pinecone. Refer to the documentation for detailed instructions on the available tools and their usage.
- Contributing: If you wish to contribute to the project, feel free to fork the repository, make your changes, and submit a pull request.
Frequently Asked Questions
What is Pinecone?
Pinecone is a fully managed vector database designed for machine learning applications. It allows users to store and query high-dimensional vectors efficiently, making it ideal for applications involving AI and data science.
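To make "querying high-dimensional vectors" concrete, here is a minimal sketch of similarity search over a few toy vectors in plain Python. This is only an illustration of the idea; a real Pinecone index does this at scale with approximate nearest-neighbor search, and the record names and vectors below are made up:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, records, k=1):
    """Return the ids of the k records most similar to the query vector."""
    scored = [(cosine_similarity(query, vec), doc_id) for doc_id, vec in records]
    return [doc_id for _, doc_id in sorted(scored, reverse=True)[:k]]

records = [
    ("doc-about-cats", [0.9, 0.1, 0.0]),
    ("doc-about-dogs", [0.1, 0.9, 0.0]),
    ("doc-about-cars", [0.0, 0.1, 0.9]),
]
print(top_k([0.8, 0.2, 0.0], records, k=1))  # nearest record first
```

In practice the vectors are embeddings of text, so "nearest vector" means "most semantically similar document".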
How does MCP-Pinecone enhance data retrieval?
By implementing Retrieval-Augmented Generation (RAG), MCP-Pinecone allows applications to generate responses based on both the model's training and real-time data from Pinecone, leading to more accurate and contextually relevant outputs.
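The RAG loop described above boils down to: retrieve relevant records, then prepend them to the model prompt. A minimal sketch of the prompt-assembly step, with hard-coded chunks standing in for results that mcp-pinecone would fetch via its semantic-search tool:

```python
def build_rag_prompt(question, retrieved_chunks):
    """Combine retrieved context with the user question into one prompt."""
    context = "\n---\n".join(retrieved_chunks)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
    )

# Stand-in retrieval results; in practice these come from the vector index.
chunks = ["Pinecone stores vectors.", "MCP servers expose tools."]
prompt = build_rag_prompt("What does Pinecone store?", chunks)
print(prompt)
```

The model then answers from the supplied context rather than from its training data alone, which is what makes the output more current and grounded.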
Is MCP-Pinecone free to use?
Yes, MCP-Pinecone is an open-source project, and you can use it for free. However, using Pinecone may incur costs based on your usage of their services.
Can I contribute to the MCP-Pinecone project?
Absolutely! Contributions are welcome. You can fork the repository, make improvements, and submit a pull request to share your enhancements with the community.
Where can I find the documentation for MCP-Pinecone?
Documentation is typically included in the repository. You can also check the README file for setup instructions and usage guidelines.
Details
Pinecone Model Context Protocol Server for Claude Desktop.
Read and write to a Pinecone index.
Components
flowchart TB
subgraph Client["MCP Client (e.g., Claude Desktop)"]
UI[User Interface]
end
subgraph MCPServer["MCP Server (pinecone-mcp)"]
Server[Server Class]
subgraph Handlers["Request Handlers"]
ListRes[list_resources]
ReadRes[read_resource]
ListTools[list_tools]
CallTool[call_tool]
GetPrompt[get_prompt]
ListPrompts[list_prompts]
end
subgraph Tools["Implemented Tools"]
SemSearch[semantic-search]
ReadDoc[read-document]
ListDocs[list-documents]
PineconeStats[pinecone-stats]
ProcessDoc[process-document]
end
end
subgraph PineconeService["Pinecone Service"]
PC[Pinecone Client]
subgraph PineconeFunctions["Pinecone Operations"]
Search[search_records]
Upsert[upsert_records]
Fetch[fetch_records]
List[list_records]
Embed[generate_embeddings]
end
Index[(Pinecone Index)]
end
%% Connections
UI --> Server
Server --> Handlers
ListTools --> Tools
CallTool --> Tools
Tools --> PC
PC --> PineconeFunctions
PineconeFunctions --> Index
%% Data flow for semantic search
SemSearch --> Search
Search --> Embed
Embed --> Index
%% Data flow for document operations
ProcessDoc --> Upsert
ReadDoc --> Fetch
ListRes --> List
classDef primary fill:#2563eb,stroke:#1d4ed8,color:white
classDef secondary fill:#4b5563,stroke:#374151,color:white
classDef storage fill:#059669,stroke:#047857,color:white
class Server,PC primary
class Tools,Handlers secondary
class Index storage
Resources
The server implements the ability to read and write to a Pinecone index.
Tools
- semantic-search: Search for records in the Pinecone index.
- read-document: Read a document from the Pinecone index.
- list-documents: List all documents in the Pinecone index.
- pinecone-stats: Get stats about the Pinecone index, including the number of records, dimensions, and namespaces.
- process-document: Process a document into chunks and upsert them into the Pinecone index. This performs the overall steps of chunking, embedding, and upserting.
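An MCP server's call_tool handler typically dispatches on the tool name. A minimal sketch of that dispatch pattern; the handler bodies here are placeholders, not the project's actual implementations:

```python
# Hypothetical stand-ins for the real tool implementations.
def semantic_search(args):
    return f"searching for {args['query']!r}"

def pinecone_stats(args):
    return "records: 0, dimensions: 1024, namespaces: []"

TOOL_HANDLERS = {
    "semantic-search": semantic_search,
    "pinecone-stats": pinecone_stats,
}

def call_tool(name, args):
    """Dispatch an MCP call_tool request to the matching handler."""
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        raise ValueError(f"unknown tool: {name}")
    return handler(args)

print(call_tool("semantic-search", {"query": "vector databases"}))
```

The real server registers these handlers with the MCP SDK's Server class, which routes incoming requests the same way.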
Note: embeddings are generated via Pinecone's inference API and chunking is done with a token-based chunker. Written by copying a lot from langchain and debugging with Claude.
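The note above mentions a token-based chunker. A minimal sketch of the idea, using whitespace tokens as a stand-in for a real tokenizer (the chunk sizes and overlap below are illustrative, not the project's defaults); overlap keeps some context shared across chunk boundaries:

```python
def chunk_tokens(text, chunk_size=5, overlap=1):
    """Split text into overlapping chunks of at most chunk_size tokens."""
    tokens = text.split()  # stand-in for a real tokenizer
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + chunk_size]))
        if start + chunk_size >= len(tokens):
            break
    return chunks

text = "one two three four five six seven eight"
for chunk in chunk_tokens(text, chunk_size=5, overlap=1):
    print(chunk)
```

Each chunk is then embedded and upserted as its own record, which is what lets semantic-search return passages rather than whole documents.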
Quickstart
Installing via Smithery
To install Pinecone MCP Server for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install mcp-pinecone --client claude
Install the server
We recommend using uv to install the server locally for Claude.
uvx install mcp-pinecone
OR
uv pip install mcp-pinecone
Add your config as described below.
Claude Desktop
On MacOS: ~/Library/Application\ Support/Claude/claude_desktop_config.json
On Windows: %APPDATA%/Claude/claude_desktop_config.json
Note: You might need to use the direct path to uv. Use which uv to find the path.
Development/Unpublished Servers Configuration
"mcpServers": {
"mcp-pinecone": {
"command": "uv",
"args": [
"--directory",
"{project_dir}",
"run",
"mcp-pinecone"
]
}
}
Published Servers Configuration
"mcpServers": {
"mcp-pinecone": {
"command": "uvx",
"args": [
"--index-name",
"{your-index-name}",
"--api-key",
"{your-secret-api-key}",
"mcp-pinecone"
]
}
}
Sign up to Pinecone
You can sign up for a Pinecone account here.
Get an API key
Create a new index in Pinecone and get an API key from the Pinecone dashboard. In the config, replace {your-index-name} with your index name and {your-secret-api-key} with your API key.
Development
Building and Publishing
To prepare the package for distribution:
- Sync dependencies and update lockfile:
uv sync
- Build package distributions:
uv build
This will create source and wheel distributions in the dist/ directory.
- Publish to PyPI:
uv publish
Note: You'll need to set PyPI credentials via environment variables or command flags:
- Token: --token or UV_PUBLISH_TOKEN
- Or username/password: --username/UV_PUBLISH_USERNAME and --password/UV_PUBLISH_PASSWORD
Debugging
Since MCP servers run over stdio, debugging can be challenging. For the best debugging experience, we strongly recommend using the MCP Inspector.
You can launch the MCP Inspector via npm with this command:
npx @modelcontextprotocol/inspector uv --directory {project_dir} run mcp-pinecone
Upon launching, the Inspector will display a URL that you can access in your browser to begin debugging.
License
This project is licensed under the MIT License. See the LICENSE file for details.
Source Code
The source code is available on GitHub.
Contributing
Send your ideas and feedback to me on Bluesky or by opening an issue.
Server Config
{
"mcpServers": {
"mcp-pinecone": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"ghcr.io/metorial/mcp-container--sirmews--mcp-pinecone--mcp-pinecone",
"mcp-pinecone --index-name index-name --api-key api-key"
],
"env": {}
}
}
}