By dbt Labs
The official dbt Labs Model Context Protocol (MCP) server enables AI integration with dbt-managed data assets across all supported data platforms. It provides a standardized framework for consistent, governed access to models, metrics, lineage, and freshness from AI tools, offering 50+ tools across 7 categories:

- dbt CLI (10 tools): build, compile, docs, list, parse, run, test, show, get_model_lineage_dev, get_node_details_dev
- Semantic Layer (6 tools): list_metrics, list_saved_queries, get_dimensions, get_entities, query_metrics, get_metrics_compiled_sql
- Metadata Discovery (19 tools): get_all_models, get_all_sources, get_exposure_details, get_exposures, get_lineage, get_macro_details, get_mart_models, get_model_children, get_model_details, get_model_health, get_model_parents, get_model_performance, get_related_models, get_seed_details, get_semantic_model_details, get_snapshot_details, get_source_details, get_test_details, search
- Administrative API (13 tools): list_jobs, get_job_details, get_project_details, trigger_job_run, list_jobs_runs, get_job_run_details, cancel_job_run, retry_job_run, list_job_run_artifacts, get_job_run_artifact, get_job_run_error
- SQL (2 tools): text_to_sql, execute_sql
- Codegen (3 tools; requires the dbt-codegen package, disabled by default): generate_source, generate_model_yaml, generate_staging_model
- Fusion (2 tools, with Fusion-exclusive column-level lineage): compile_sql, get_column_lineage

Two deployment modes are available:

- Local MCP Server: runs on your machine (installed via uvx) with full dbt CLI access. Supports dbt Core, the dbt Cloud CLI, and the Fusion engine; works with local projects without a dbt platform account, with optional dbt platform API integration.
- Remote MCP Server: connects over HTTP with no local installation. Suited to consumption-focused use cases (querying metrics, exploring metadata, viewing lineage) and to environments restricted from installing software.

Only text_to_sql consumes dbt Copilot credits; all other tools are free. When Copilot credits are exhausted, however, the remote MCP server blocks all tools, including local-proxied ones.

Installation: uvx (a Python package runner) installs dbt-mcp locally. GitHub: 470 stars, 100 forks, Apache-2.0 license, 93.7% Python, 38 contributors, 53 releases (latest v1.8.1). An experimental MCP Bundle (dbt-mcp.mcpb) is published with each release for MCPB-aware clients.

Architecture: an agent connects to the full variety of tools through the standardized MCP protocol. Use cases include conversational data access, agent-driven dbt workflow automation, AI-assisted development, natural-language querying, metadata exploration, lineage analysis, SQL generation, and model documentation.

Access limits: the Discovery API and Semantic Layer API are limited by plan type. Warning: dbt CLI commands can modify data models, sources, and warehouse objects; proceed only if you trust the connected client.

Setup integrations are documented for Claude, Cursor, and VS Code, using token authentication and tool-use capabilities. Environment variables:

- DBT_TOKEN: required; execute_sql needs a personal access token (PAT) rather than a service token
- DISABLE_DBT_CODEGEN: codegen tools are disabled by default; set to false to enable them
- DISABLE_MCP_SERVER_METADATA: get_mcp_server_version is disabled by default; set to false to enable it

Repository: https://github.com/dbt-labs/dbt-mcp (includes an examples directory for custom agent development). Documentation: https://docs.getdbt.com/docs/dbt-ai/about-mcp, with guides for setup, integration, and tool usage. Feedback: GitHub Issues or the community Slack #tools-dbt-mcp channel. This is an official dbt Labs product, maintained by the dbt team and production-ready for data team workflows.
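For the local deployment mode, a minimal sketch of the kind of MCP client configuration entry (the mcpServers format used by clients such as Claude Desktop) that launches dbt-mcp via uvx, expressed here in Python for illustration. The exact config schema for your client is an assumption to verify against the documentation linked above; the token value is a placeholder.

```python
import json

# Assumed mcpServers-style client config entry for the local dbt-mcp server.
# Command and env var names follow the description above; the token is a
# placeholder, and your client's exact schema may differ.
config = {
    "mcpServers": {
        "dbt": {
            "command": "uvx",
            "args": ["dbt-mcp"],
            "env": {
                "DBT_TOKEN": "<your-personal-access-token>",  # placeholder
                "DISABLE_DBT_CODEGEN": "false",  # opt in to codegen tools
            },
        }
    }
}

print(json.dumps(config, indent=2))
```

Writing this dict out as JSON yields the entry you would drop into the client's configuration file.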
This server provides the following tools for AI assistants:
build: Executes models, tests, snapshots, and seeds in dependency order, building the complete dbt project with all its resources.
compile: Generates executable SQL from models, tests, and analyses without running them; validates SQL syntax and compilation.
docs: Generates documentation for the dbt project, including lineage graphs, model descriptions, and data dictionaries.
list: Lists resources in the dbt project (models, tests, sources, snapshots, and seeds) with filtering options.
parse: Parses and validates the project files for syntax correctness and configuration errors without executing anything.
run: Executes models to materialize them in the database, running the SQL transformations defined in dbt models.
test: Runs tests to validate data and model integrity, executing data-quality checks and schema tests.
show: Runs a query against the data warehouse and displays the results; useful for ad-hoc querying and validation.
get_model_lineage_dev: Gets the lineage of a model from the local development environment, showing upstream and downstream dependencies.
get_node_details_dev: Gets details about a specific node (model, test, source, etc.) from the local development environment.
list_metrics: Retrieves all metrics defined in the dbt Semantic Layer, listing the business metrics available for querying.
list_saved_queries: Retrieves all saved queries from the dbt Semantic Layer, listing pre-defined analytical queries.
get_dimensions: Gets the dimensions associated with specified metrics, showing the available grouping and filtering options.
get_entities: Gets the entities associated with specified metrics, showing primary- and foreign-key relationships.
query_metrics: Queries metrics with optional grouping, ordering, filtering, and limiting; executes analytical queries against the Semantic Layer.
get_metrics_compiled_sql: Returns the compiled SQL generated for specified metrics and groupings without executing the query; useful for debugging and understanding metric logic.
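MCP clients invoke tools like query_metrics through a JSON-RPC "tools/call" request. The sketch below builds such an envelope; the argument names (metrics, group_by, limit) and the example values ("revenue", "metric_time") are illustrative assumptions, not the server's exact input schema.

```python
import json

def make_tool_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 envelope for an MCP tools/call request."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical query_metrics arguments; check the tool's real schema
# (e.g. via tools/list) before sending.
request = make_tool_call(
    "query_metrics",
    {"metrics": ["revenue"], "group_by": ["metric_time"], "limit": 10},
)
print(json.dumps(request, indent=2))
```

The same envelope shape applies to every tool in this catalog; only the name and arguments change.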
get_all_models: Gets all models in the dbt project with metadata including names, schemas, tags, and configuration.
get_mart_models: Gets all mart models (the final transformed models for business use) with filtering and metadata.
get_model_details: Gets comprehensive details for a specific model, including SQL, columns, tests, documentation, and configuration.
get_model_parents: Gets the parent nodes (upstream dependencies) of a specific model, showing its data sources and transformations.
get_model_children: Gets the child models (downstream dependencies), showing where a model's data is used.
get_model_health: Gets health signals for a specific model, including test pass rates, freshness, and execution status.
get_model_performance: Gets execution information for models, including runtime, row counts, and test results.
get_all_sources: Gets all source tables with metadata and freshness information, showing upstream data sources.
get_lineage: Gets complete lineage (ancestors and descendants) for a dbt resource, with depth control and type filtering (macros are excluded by default).
get_source_details: Gets details for a specific source table, including schema, columns, and freshness configuration.
get_exposures: Gets all exposures (downstream uses of dbt models such as dashboards, reports, and ML models).
get_exposure_details: Gets details for a specific exposure or list of exposures, including dependencies and metadata.
get_related_models: Uses semantic search to find dbt models similar to the query even without an exact string match; AI-powered model discovery.
get_macro_details: Gets details for a specific macro, including SQL, arguments, and documentation.
get_seed_details: Gets details for a specific seed (a CSV file loaded into the database), including columns and configuration.
get_semantic_model_details: Gets details for a specific semantic model, including its metrics, dimensions, and entities.
get_snapshot_details: Gets details for a specific snapshot (a slowly-changing-dimension implementation).
get_test_details: Gets details for a specific test, including SQL, configuration, and recent results.
search: Searches for dbt resources using natural-language queries; AI-powered search across models, tests, sources, and docs.
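Conceptually, the lineage tools above (get_lineage, get_model_parents, get_model_children) expose graph traversal over the project DAG with a depth cutoff. A minimal sketch of that ancestor traversal, over an invented toy DAG (the model names and mapping below are made up for illustration, not part of the server):

```python
from collections import deque

# Toy DAG: each node maps to its parents (upstream dependencies).
PARENTS = {
    "mart_orders": ["stg_orders", "stg_customers"],
    "stg_orders": ["raw_orders"],
    "stg_customers": ["raw_customers"],
    "raw_orders": [],
    "raw_customers": [],
}

def ancestors(node, max_depth=None):
    """BFS upstream from node, optionally stopping after max_depth levels."""
    seen, queue = set(), deque([(node, 0)])
    while queue:
        current, depth = queue.popleft()
        if max_depth is not None and depth >= max_depth:
            continue  # depth cutoff reached; do not expand further
        for parent in PARENTS.get(current, []):
            if parent not in seen:
                seen.add(parent)
                queue.append((parent, depth + 1))
    return seen

print(sorted(ancestors("mart_orders")))               # full upstream lineage
print(sorted(ancestors("mart_orders", max_depth=1)))  # direct parents only
```

Descendant traversal (get_model_children, the downstream half of get_lineage) is the same walk over an inverted mapping.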
list_jobs: Lists all jobs in a dbt account, with filtering by project, environment, or status.
get_job_details: Gets detailed information for a specific job, including configuration, settings, schedule, and environment.
get_project_details: Gets project information for a specific dbt project, including repositories, environments, and connections.
trigger_job_run: Triggers a job run with optional overrides such as Git branch, schema, or execution parameters.
list_jobs_runs: Lists runs in an account, with optional filtering by job, status, time range, or other criteria.
get_job_run_details: Gets comprehensive run information, including execution details, steps, artifacts, timing, and debug logs.
cancel_job_run: Cancels a running job to stop execution immediately; useful for long-running or stuck jobs.
retry_job_run: Retries a failed job run with the same or modified parameters.
list_job_run_artifacts: Lists all available artifacts for a job run (manifest.json, catalog.json, run_results.json, logs, etc.).
get_job_run_artifact: Downloads specific artifact files from job runs for analysis, debugging, or integration with other tools.
get_job_run_error: Retrieves error details for failed job runs to help with troubleshooting (optionally including warning and deprecation details).
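A typical agent workflow combines these tools: trigger_job_run, then poll get_job_run_details until the run reaches a terminal state, then fetch artifacts or error details. The sketch below shows only the client-side status interpretation; the numeric status codes (1 queued, 2 starting, 3 running, 10 success, 20 error, 30 cancelled) follow the dbt platform Administrative API convention as I understand it and should be verified against the current API docs.

```python
# Assumed dbt Administrative API run-status codes; verify before relying
# on them in production.
TERMINAL_STATUSES = {10: "success", 20: "error", 30: "cancelled"}
IN_PROGRESS_STATUSES = {1: "queued", 2: "starting", 3: "running"}

def interpret_run_status(status_code):
    """Map a run status code to (label, is_terminal)."""
    if status_code in TERMINAL_STATUSES:
        return TERMINAL_STATUSES[status_code], True
    if status_code in IN_PROGRESS_STATUSES:
        return IN_PROGRESS_STATUSES[status_code], False
    return "unknown", False

# A poller would call get_job_run_details, feed the returned status into
# interpret_run_status, and stop once is_terminal is True -- then call
# get_job_run_error on failure or list_job_run_artifacts on success.
```

Keeping the status interpretation in a pure function like this makes the polling loop trivial to test without hitting the API.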
text_to_sql: Generates SQL from natural-language requests using AI. Consumes dbt Copilot credits and requires project context and schema understanding.
execute_sql: Executes SQL on the dbt platform backend infrastructure, with support for Semantic Layer SQL syntax. Note: DBT_TOKEN must be a personal access token (PAT) rather than a service token.
generate_source: Creates source YAML definitions automatically from database schemas. Requires the dbt-codegen package; disabled by default (set DISABLE_DBT_CODEGEN=false to enable).
generate_model_yaml: Generates documentation YAML for existing dbt models, including column names, data types, and description placeholders. Requires dbt-codegen; disabled by default.
generate_staging_model: Creates staging SQL models that transform raw source data into clean staging models. Requires dbt-codegen; disabled by default.
compile_sql: Compiles a SQL statement in the context of the current project and environment using the Fusion engine. Remote MCP tool.
get_column_lineage: Fusion-exclusive. Gets column-level lineage across a project DAG for a specific column; available in both local (via the LSP) and remote MCP.
get_mcp_server_version: Returns the current version of the dbt MCP server. Disabled by default (set DISABLE_MCP_SERVER_METADATA=false to enable).