Cube vs dbt Semantic Layer: Which Metrics Layer in 2026?

Comparing Cube and dbt's MetricFlow for your semantic layer? We break down the differences and explain where Bonnard fits as a third option.


Two approaches to the semantic layer. Cube defines and serves metrics via API with pre-aggregation caching. dbt's MetricFlow defines metrics as code within the dbt workflow but relies on partner tools for serving. Both are open source. Both solve real problems. Here's how they compare, and where a third option fits.

Cube vs dbt MetricFlow at a Glance

| Feature | Cube | dbt MetricFlow |
| --- | --- | --- |
| Approach | Headless BI / semantic layer server | Metrics-as-code within dbt |
| Metric definition | YAML or JavaScript cubes | YAML (MetricFlow syntax) |
| Serving layer | REST API, SQL API, GraphQL | No (needs dbt Cloud API or partner integration) |
| Caching / pre-aggregation | CubeStore | No |
| AI agent support (MCP) | No | No |
| Embedded analytics | REST API + custom build | No |
| Multi-tenancy | Security contexts (manual config) | No |
| Dashboards | No | No |
| License | Apache 2.0 (server) | Apache 2.0 |
| Pricing | Free (self-host) / Cube Cloud (paid) | Free (OSS) / dbt Cloud (paid for Semantic Layer API) |
| dbt integration | Manual schema definition | Native |

What are Cube's strengths and weaknesses?

What Cube Does Well

Serving layer. Cube is a running server. Define your cubes in YAML (or JavaScript), and they're queryable via REST, SQL, or GraphQL APIs. This is the fundamental difference from MetricFlow: Cube actually serves metrics, not just defines them.
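
For illustration, a minimal Cube data model might look like this (the table and column names are hypothetical):

```yaml
cubes:
  - name: orders
    sql_table: public.orders

    measures:
      - name: count
        type: count
      - name: total_revenue
        sql: amount
        type: sum

    dimensions:
      - name: status
        sql: status
        type: string
      - name: created_at
        sql: created_at
        type: time
```

Once deployed, this cube is immediately queryable — for example, by POSTing a JSON query like {"measures": ["orders.total_revenue"], "dimensions": ["orders.status"]} to the REST API.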

CubeStore pre-aggregation. Cube pre-computes and caches frequently queried metric combinations in CubeStore, its columnar storage engine. This reduces warehouse load and improves query latency significantly. For high-volume use cases, pre-aggregation is a real advantage.
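
Pre-aggregations are declared alongside the cube. A sketch, reusing the hypothetical orders cube from above:

```yaml
    pre_aggregations:
      - name: revenue_by_status_daily
        measures:
          - total_revenue
        dimensions:
          - status
        time_dimension: created_at
        granularity: day
```

Queries that match this shape are answered from CubeStore instead of hitting the warehouse.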

Self-hosted or Cloud. Cube's server is Apache 2.0. Run it yourself or use Cube Cloud for managed infrastructure with monitoring, auto-scaling, and team collaboration features.

JavaScript schemas. Cube supports dynamic schema generation with JavaScript. If you need to generate cubes programmatically based on database introspection or multi-tenant configurations, JavaScript schemas give you that flexibility.

Where Cube Falls Short

No MCP support. Cube exposes REST, SQL, and GraphQL APIs. None of these speak MCP. Your AI agents can't connect to Cube directly. You'd need to build a custom MCP wrapper around Cube's API.

No React SDK. Cube gives you APIs and expects you to build your own frontend. There are community templates, but no production-ready chart components.

No dashboards. Cube is headless by design. If you want dashboards, you build them yourself or use a separate BI tool on top.

Manual multi-tenancy. Cube's security contexts handle multi-tenancy, but the configuration is manual JavaScript. Managing connection pools, schema compilation, and tenant isolation at scale requires significant engineering effort.
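
To make the manual effort concrete, a Cube multi-tenant setup typically involves configuration along these lines (the tenant claim and member names are illustrative):

```javascript
// cube.js — sketch of manual multi-tenant configuration
module.exports = {
  // Inject a tenant filter into every incoming query
  queryRewrite: (query, { securityContext }) => {
    if (!securityContext.tenantId) {
      throw new Error('No tenant in security context');
    }
    query.filters.push({
      member: 'orders.tenant_id', // hypothetical tenant column
      operator: 'equals',
      values: [securityContext.tenantId],
    });
    return query;
  },
  // Compile a separate schema instance per tenant
  contextToAppId: ({ securityContext }) =>
    `CUBE_APP_${securityContext.tenantId}`,
};
```

Every tenant boundary — filtering, schema compilation, refresh scheduling — is code you write and maintain yourself.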

What are dbt MetricFlow's strengths and weaknesses?

What MetricFlow Does Well

Native dbt integration. If your team already uses dbt, MetricFlow fits naturally into your existing workflow. Metrics are defined alongside your models in the same project. Same dbt build, same Git-based governance, same CI/CD pipeline.

Metrics-as-code. MetricFlow definitions are YAML files in your dbt project. They're versioned, reviewable, and testable. The metric definitions become part of your data documentation.
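
A sketch of what this looks like in practice, assuming a dbt model named orders with hypothetical columns:

```yaml
semantic_models:
  - name: orders
    model: ref('orders')
    defaults:
      agg_time_dimension: ordered_at
    entities:
      - name: order_id
        type: primary
    dimensions:
      - name: ordered_at
        type: time
        type_params:
          time_granularity: day
    measures:
      - name: order_total
        agg: sum

metrics:
  - name: revenue
    label: Revenue
    description: Total order revenue
    type: simple
    type_params:
      measure: order_total
```

These files live in the dbt project and go through the same pull-request review as any model change.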

Growing ecosystem. dbt's Semantic Layer API (available in dbt Cloud) is integrated with Looker, Hex, Mode, and other BI tools. The ecosystem of consumers is expanding.

Where MetricFlow Falls Short

No serving layer. This is the critical gap. MetricFlow defines metrics but doesn't serve them. To query MetricFlow metrics from an application, you need dbt Cloud's Semantic Layer API (paid) or a partner integration. There's no self-hosted API endpoint for MetricFlow metrics.

No caching. MetricFlow doesn't pre-aggregate or cache. Every query hits your warehouse directly. For high-volume or customer-facing use cases, this adds cost and latency.

No multi-tenancy. dbt operates at the warehouse level. There's no concept of tenant isolation, row-level security per customer, or per-tenant API keys within MetricFlow.

No embedded analytics. MetricFlow doesn't ship UI components. Charts, dashboards, and embedded analytics are entirely separate concerns that need separate tools.

No AI agent support. No MCP integration. No protocol for AI agents to query governed MetricFlow metrics.

Where does Bonnard fit?

Bonnard builds on Cube's query engine and adds everything needed to ship governed analytics to AI agents and B2B customers.

| Feature | Cube | dbt MetricFlow | Bonnard |
| --- | --- | --- | --- |
| Semantic layer engine | Cube | MetricFlow | Cube (same engine) |
| Serving layer | REST, SQL, GraphQL | No | REST, SQL, TypeScript SDK |
| MCP for AI agents | No | No | Native (publishable keys per tenant) |
| Embedded analytics | Custom build | No | React SDK (BarChart, LineChart, BigValue) |
| Dashboards | No | No | Markdown dashboards, deployed via CLI |
| Multi-tenancy | Manual security contexts | No | Publishable keys + row-level security |
| Pre-aggregation | CubeStore | No | Same pre-aggregation engine |
| dbt integration | Manual | Native | bon datasource add --from-dbt |
| CLI | No | dbt CLI | bon deploy, bon mcp, bon query |

What Bonnard Adds to Cube's Engine

MCP server. Bonnard deploys as an MCP server with four tools: explore_schema, query, sql_query, and describe_field. Claude, Cursor, ChatGPT, and CrewAI connect directly to governed metrics. Publishable keys per tenant let your customers connect their own AI tools. This is the foundation of agentic analytics.

React SDK. @bonnard/react ships BarChart, LineChart, BigValue, and useBonnardQuery. Embed governed, multi-tenant analytics in your product without building a chart layer from scratch.

Markdown dashboards. Author dashboards in markdown. Deploy them alongside your schema with bon deploy. Each tenant gets their own view, access-controlled automatically.

Multi-tenant publishable keys. Token exchange maps your existing auth into the security context. Every query is filtered based on tenant context. No JavaScript configuration, no per-tenant schema compilation.

Admin UI with schema catalog. Browse models, views, and measures. Inspect field definitions and change history with diffs. Graph view of schema relationships.

dbt Integration

bon datasource add --from-dbt imports your dbt models. Your dbt-built tables become the foundation of your Bonnard semantic layer. The workflow:

  1. dbt transforms raw data into clean tables
  2. Bonnard defines business metrics on top
  3. bon deploy pushes the schema
  4. AI agents, apps, and dashboards consume governed metrics

dbt handles the T in ELT. Bonnard handles everything after.
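
The steps above can be sketched as a terminal session (only the commands named in this article; any additional flags are assumptions):

```shell
# 1. dbt builds clean tables in the warehouse
dbt build

# 2. Import the dbt models as a Bonnard datasource
bon datasource add --from-dbt

# 3. Define business metrics in the Bonnard schema, then push it
bon deploy

# 4. Serve governed metrics to AI agents and apps
bon mcp
```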

FAQ

Do I need to choose between Cube and dbt?

No. They solve different problems. dbt transforms data. Cube (and Bonnard) defines and serves metrics. Many teams use dbt for transformations and Bonnard (which includes Cube's engine) for the semantic layer.

Can I migrate from Cube to Bonnard?

If you have Cube YAML schemas, drop them into your Bonnard project and run bon deploy: same warehouse connectors, same query semantics. Note that Bonnard doesn't support Cube's JavaScript schemas, so those would need to be converted to YAML first.

Is Bonnard free?

Yes. Apache 2.0 for the server. MIT for the CLI. Self-host with every feature included. Bonnard Cloud is available for managed infrastructure.

Does Bonnard work with dbt Cloud?

Bonnard connects directly to your warehouse, where dbt Cloud materializes your models. You don't need dbt Cloud's Semantic Layer API. bon datasource add --from-dbt reads your dbt project locally.


Which option is best for AI agent integration?

Bonnard. Neither Cube nor dbt MetricFlow supports MCP. Bonnard deploys as an MCP server with publishable keys per tenant, giving AI agents governed access to your metrics out of the box.

Or skip the debate. Try Bonnard.

Bonnard builds on Cube's engine and adds MCP for AI agents, React SDK, markdown dashboards, and multi-tenant publishable keys. Import your dbt models with one command.