
BigQuery MCP server

Google BigQuery is a fully managed, serverless data warehouse that enables scalable SQL analysis across petabytes of data. With this MCP server, AI agents can query datasets, manage tables, run machine learning models, and handle data import/export operations through natural language commands.

Setting up an MCP server

This article covers the standard steps for creating an MCP server in AI Gateway and connecting it to an AI client. The steps are the same for every integration — application-specific details (API credentials, OAuth endpoints, and scopes) are covered in the individual application pages.

Before you begin

You'll need:

  • Access to AI Gateway with permission to create MCP servers
  • API credentials for the application you're connecting (see the relevant application page for what to collect)

Create an MCP server

Find the API in the catalog

  1. Sign in to AI Gateway and select MCP Servers from the left navigation.
  2. Select New MCP Server.
  3. Search for the application you want to connect, then select it from the catalog.

Configure the server

  1. Enter a Name for your server — something descriptive that identifies both the application and its purpose (for example, "Zendesk Support — Prod").
  2. Enter a Description so your team knows what the server is for.
  3. Set the Timeout value. 30 seconds works for most APIs; increase to 60 seconds for APIs that return large payloads.
  4. Toggle Production mode on if this server will be used in a live workflow.
  5. Select Next.

Configure authentication

Enter the authentication details for the application. This varies by service — see the Authentication section of the relevant application page for the specific credentials, OAuth URLs, and scopes to use.

Configure security

  1. Set any Rate limits appropriate for your use case and the API's own limits.
  2. Enable Logging if you want AI Gateway to record requests and responses for auditing.
  3. Select Next.

Deploy

Review the summary, then select Deploy. AI Gateway provisions the server and provides a server URL you'll use when configuring your AI client.


Connect to an AI client

Once your server is deployed, you'll need to add it to the AI client your team uses. Setup steps vary by client; follow the instructions for your specific client.

Tips

  • You can create multiple MCP servers for the same application — for example, a read-only server for reporting agents and a read-write server for automation workflows.
  • If you're unsure which OAuth scopes to request, start with the minimum read-only set and add write scopes only when needed. Most application pages include scope recommendations.
  • You can edit a server's name, description, timeout, and security settings after deployment without redeploying.

Authentication

BigQuery supports two authentication methods. Service account authentication is recommended for server-to-server integrations, while OAuth 2.0 is suitable for user-centric workflows.

Service Account

Generate a JSON key from a Google Cloud service account with BigQuery Admin or Editor permissions, then upload it during MCP server creation.
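If you want to confirm the key works before uploading it, you can point the official Python client at it. This is a minimal sketch, assuming the google-cloud-bigquery package is installed and the key is saved as key.json (a placeholder path):

```python
from google.cloud import bigquery

# Create a client directly from the downloaded service-account key.
# "key.json" is a placeholder for your actual key file path.
client = bigquery.Client.from_service_account_json("key.json")

# Run a trivial query to confirm the key grants BigQuery access.
rows = client.query("SELECT 1 AS ok").result()
for row in rows:
    print(row.ok)  # prints: 1
```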

OAuth 2.0

Configure OAuth 2.0 in the MCP server setup:

  • Authorization URL: https://accounts.google.com/o/oauth2/v2/auth
  • Token URL: https://oauth2.googleapis.com/token
  • Scopes:
    • https://www.googleapis.com/auth/bigquery — Full read/write access to BigQuery
    • https://www.googleapis.com/auth/bigquery.readonly — Read-only access to datasets and tables
    • https://www.googleapis.com/auth/bigquery.insertdata — Limited to data insertion only
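To sanity-check the OAuth client and scopes outside AI Gateway, you can run them through Google's Python auth library. A minimal sketch, assuming the google-auth-oauthlib and google-cloud-bigquery packages and a downloaded OAuth client file named client_secret.json (a placeholder):

```python
from google_auth_oauthlib.flow import InstalledAppFlow
from google.cloud import bigquery

# Request only the read-only scope to start; add write scopes when needed.
SCOPES = ["https://www.googleapis.com/auth/bigquery.readonly"]

# "client_secret.json" is a placeholder for your OAuth client file.
flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
credentials = flow.run_local_server(port=0)  # opens a browser for consent

# Use the resulting user credentials with the BigQuery client.
# "my-project" is a placeholder project ID.
client = bigquery.Client(credentials=credentials, project="my-project")
print(client.query("SELECT 1").result().total_rows)
```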

Available tools

The BigQuery MCP server exposes query execution, dataset and table management, data loading and export, BigQuery ML operations, and access control APIs.

  • Query Execution: Run interactive and batch queries, retrieve results, list jobs, configure query parameters, set destination tables, and control caching
  • Dataset Management: Create, update, and delete datasets; share datasets; manage row-level security and access control lists
  • Table Operations: Create, update, and delete tables; copy tables; manage partitioning and clustering; update schemas
  • Data Loading: Stream real-time data, load batch data from Cloud Storage, import from Cloud SQL, and load JSON files
  • Data Export: Export query results and tables to Cloud Storage or Bigtable, in formats such as Avro
  • BigQuery ML: Create and train regression, classification, time-series, and clustering models; evaluate performance; make predictions
  • Job Management: Monitor query jobs, estimate processing costs, track execution status, and retrieve statistics
  • IAM & Security: Grant dataset access, create service accounts, set row-level security, configure data retention policies
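To illustrate the query-execution surface, here is a hedged sketch of a parameterized query with a destination table and cache control, using the official Python client; the project, dataset, and table names are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project ID

job_config = bigquery.QueryJobConfig(
    # Parameterize user input instead of string-formatting it into SQL.
    query_parameters=[
        bigquery.ScalarQueryParameter("min_views", "INT64", 1000),
    ],
    # Write results to a table so downstream tools can reuse them.
    destination="my-project.analytics.popular_pages",  # placeholder table
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    use_query_cache=True,  # reuse cached results when the query repeats
)

sql = """
    SELECT page, views
    FROM `my-project.analytics.page_views`
    WHERE views >= @min_views
"""
job = client.query(sql, job_config=job_config)
print(f"Job {job.job_id} wrote {job.result().total_rows} rows")
```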

Tips

Use partitioning and clustering to reduce the data each query scans, select only the columns you need instead of using SELECT *, and rely on query caching to reuse recent results.
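For example, partitioning and clustering are set when the table is created; this sketch runs the DDL through the Python client, with placeholder project, dataset, and column names:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder

# Partition by day and cluster by a frequently filtered column, so queries
# that filter on event_date and user_id scan far less data.
client.query("""
    CREATE TABLE IF NOT EXISTS `my-project.analytics.events`
    (
        event_date DATE,
        user_id STRING,
        payload JSON
    )
    PARTITION BY event_date
    CLUSTER BY user_id
""").result()
```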

Set maximum-bytes-billed limits before running queries, and monitor slot usage to keep spending under control.
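Both controls are available on query jobs. A sketch with the Python client, using placeholder names: a dry run estimates bytes before execution, and maximum_bytes_billed makes the job fail rather than bill past the cap:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder

sql = "SELECT page, views FROM `my-project.analytics.page_views`"

# Dry run: BigQuery plans the query and reports bytes without running it.
dry_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
dry_job = client.query(sql, job_config=dry_config)
print(f"Estimated bytes: {dry_job.total_bytes_processed}")

# Hard cap: the job fails instead of billing past 100 MB.
capped = bigquery.QueryJobConfig(maximum_bytes_billed=100 * 1024 * 1024)
client.query(sql, job_config=capped).result()
```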

Batch multiple small files together before loading, use appropriate formats like Parquet, and validate schemas to catch errors early.
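As an illustration, a single batch load of many Parquet files from Cloud Storage might look like this with the Python client (the bucket, path, and table names are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,  # columnar, self-describing
)

# A wildcard URI loads many files in one job, which is cheaper and faster
# than running one load job per small file.
load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/*.parquet",  # placeholder bucket and path
    "my-project.analytics.events",       # placeholder destination table
    job_config=job_config,
)
load_job.result()  # waits for completion and raises on schema errors
```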

Start with simple BigQuery ML model types, such as linear or logistic regression, and add complexity incrementally.
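For instance, a linear regression in BigQuery ML is a single CREATE MODEL statement; this sketch runs it through the Python client, with placeholder dataset and column names:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder

# A simple linear regression; the label column is what the model predicts.
client.query("""
    CREATE OR REPLACE MODEL `my-project.analytics.views_model`
    OPTIONS (model_type = 'linear_reg', input_label_cols = ['views']) AS
    SELECT page, hour_of_day, views
    FROM `my-project.analytics.page_views`
""").result()

# Evaluate before adding features or switching to a more complex model type.
for row in client.query(
    "SELECT * FROM ML.EVALUATE(MODEL `my-project.analytics.views_model`)"
).result():
    print(dict(row))
```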

Use materialized views to speed up frequently run queries, and tune their refresh settings to match how often the underlying data changes.
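A sketch of the DDL through the Python client, with placeholder names; enable_refresh and refresh_interval_minutes control how often BigQuery refreshes the view:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder

# Precompute a frequently requested aggregate; BigQuery keeps the view
# fresh automatically, refreshing at most once per hour here.
client.query("""
    CREATE MATERIALIZED VIEW IF NOT EXISTS `my-project.analytics.daily_views`
    OPTIONS (enable_refresh = true, refresh_interval_minutes = 60) AS
    SELECT event_date, COUNT(*) AS views
    FROM `my-project.analytics.events`
    GROUP BY event_date
""").result()
```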

Grant the minimum necessary permissions through IAM roles, and audit access regularly.
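Dataset-level grants can also be managed programmatically; a minimal sketch with the Python client, assuming a placeholder dataset and user email:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder

dataset = client.get_dataset("my-project.analytics")  # placeholder dataset

# Append a read-only grant rather than replacing the existing entries.
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="userByEmail",
        entity_id="analyst@example.com",  # placeholder user
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```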

Use Cloud Data Loss Prevention scanning to find sensitive columns, then protect them with column-level access controls or encryption.