by @pinecone
Official Pinecone Developer MCP Server implementation (Early Access, not for production), enabling AI assistants to interact with the Pinecone vector database via the standardized Model Context Protocol (MCP). It is focused on improving the developer experience with coding assistants.

Provides 9 core tools (described below): search-docs, list-indexes, describe-index, describe-index-stats, create-index-for-model, upsert-records, search-records, cascading-search, and rerank-documents.

Supports ONLY indexes with integrated embedding; external embedding models are NOT supported. A Pinecone API key is required for index operations, although documentation search works without one.

Built with TypeScript (98.7%) and JavaScript (1.3%); requires Node.js v18+. Published as the npm package @pinecone-database/mcp. Installation: run npx -y @pinecone-database/mcp with the PINECONE_API_KEY environment variable set.

Supported clients: Cursor (project/.cursor/mcp.json or global ~/.cursor/mcp.json), Claude Desktop (claude_desktop_config.json), Claude Code (claude mcp add-json command), and Gemini CLI (gemini extensions install).

Example prompts: "Search Pinecone docs for metadata filtering", "List all my indexes", "Create an index called my-docs using the multilingual-e5-large model", "Upsert these documents into my index", "Search my index for authentication best practices", "What namespaces exist in my index?"

Key features: documentation search without an API key, index configuration based on application needs, code generation informed by index config and docs, upserting/searching data to test queries, and result evaluation in a dev environment.
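As a sketch of the client configuration described here, a Cursor mcp.json (or equivalent Claude Desktop claude_desktop_config.json) entry launching the server over stdio might look like the following; the server label "pinecone" is an arbitrary name, and the API key value is a placeholder:

```json
{
  "mcpServers": {
    "pinecone": {
      "command": "npx",
      "args": ["-y", "@pinecone-database/mcp"],
      "env": {
        "PINECONE_API_KEY": "<your-api-key>"
      }
    }
  }
}
```

With this entry in place, the AI tool spawns the server via npx on startup and passes the API key through the environment.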
Limitations: only indexes with integrated inference are supported; there is no support for Assistants, standalone embeddings, vector search, or indexes without integrated inference.

GitHub: 52 stars, 19 forks, 99 commits, 1 open issue, 5 open PRs, 6 contributors. Apache-2.0 license.

Configuration: stdio transport via the npx command.

Troubleshooting: verify Node.js v18+ is installed, check that npx is on the PATH, confirm the configuration file is valid JSON, restart the AI tool after changing configuration, verify the API key in the Pinecone console, and check that a corporate firewall is not blocking access to api.pinecone.io.

Community contributions are welcome via GitHub issues and PRs. This server is separate from the Assistant MCP, which is designed for AI assistants with knowledge base context.

Use cases: coding assistance, index management automation, documentation-informed code generation, query testing, and result evaluation in a dev environment.
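For Claude Code, the claude mcp add-json registration mentioned above might look like the following; the server label "pinecone" and the API key value are placeholders, and this is a configuration sketch rather than a verified command line:

```shell
claude mcp add-json pinecone '{
  "command": "npx",
  "args": ["-y", "@pinecone-database/mcp"],
  "env": { "PINECONE_API_KEY": "<your-api-key>" }
}'
```

This registers the same stdio launch configuration shown for Cursor and Claude Desktop, just supplied inline as JSON.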
This server provides the following tools for AI assistants:
search-docs: Searches the official Pinecone documentation for guides, API references, and examples
list-indexes: Lists all Pinecone indexes in your project with their configurations
describe-index: Describes the configuration of a specific index (dimensions, metric, model, cloud, region)
describe-index-stats: Provides statistics about the data in an index (number of records, available namespaces, dimensions)
create-index-for-model: Creates a new index that uses an integrated inference model to embed text as vectors
upsert-records: Inserts or updates records in an index with integrated inference (text is automatically embedded)
search-records: Searches for records in an index based on a text query, using integrated inference for embedding; supports metadata filtering and reranking
cascading-search: Searches for records across multiple indexes, deduplicating and reranking the results for the best matches
rerank-documents: Reranks a collection of records or text documents using a specialized reranking model for improved relevance
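Under MCP, an AI assistant invokes each of these tools by sending a JSON-RPC 2.0 "tools/call" request to the server over stdio. The sketch below builds such a request for search-records; the tool name comes from the list above, but the argument keys (indexName, query, topK) and their values are illustrative assumptions, not the server's documented schema:

```typescript
// Shape of a JSON-RPC 2.0 tool-invocation request in the Model Context Protocol.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

// Build a tools/call request for an arbitrary tool and argument set.
function buildToolCall(
  id: number,
  tool: string,
  args: Record<string, unknown>,
): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
}

// Hypothetical invocation matching the "Search my index for authentication
// best practices" example prompt; index name and argument keys are assumed.
const request = buildToolCall(1, "search-records", {
  indexName: "my-docs",
  query: "authentication best practices",
  topK: 5,
});
console.log(JSON.stringify(request));
```

The assistant serializes this object onto the server's stdin and reads the matching response (correlated by id) from stdout; the MCP client library in each supported tool handles this framing for you.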