BigQuery MCP server
Google BigQuery is a fully managed, serverless data warehouse that enables scalable SQL analytics over petabytes of data. With this MCP server, AI agents can query datasets, manage tables, run machine learning models, and handle data import/export operations through natural language commands.
Setting up an MCP server
This article covers the standard steps for creating an MCP server in AI Gateway and connecting it to an AI client. The steps are the same for every integration — application-specific details (API credentials, OAuth endpoints, and scopes) are covered in the individual application pages.
Before you begin
You'll need:
- Access to AI Gateway with permission to create MCP servers
- API credentials for the application you're connecting (see the relevant application page for what to collect)
Create an MCP server
Find the API in the catalog
- Sign in to AI Gateway and select MCP Servers from the left navigation.
- Select New MCP Server.
- Search for the application you want to connect, then select it from the catalog.
Configure the server
- Enter a Name for your server — something descriptive that identifies both the application and its purpose (for example, "Zendesk Support — Prod").
- Enter a Description so your team knows what the server is for.
- Set the Timeout value. 30 seconds works for most APIs; increase to 60 seconds for APIs that return large payloads.
- Toggle Production mode on if this server will be used in a live workflow.
- Select Next.
Configure authentication
Enter the authentication details for the application. This varies by service — see the Authentication section of the relevant application page for the specific credentials, OAuth URLs, and scopes to use.
Configure security
- Set any Rate limits appropriate for your use case and the API's own limits.
- Enable Logging if you want AI Gateway to record requests and responses for auditing.
- Select Next.
Deploy
Review the summary, then select Deploy. AI Gateway provisions the server and provides a server URL you'll use when configuring your AI client.
Connect to an AI client
Once your server is deployed, add it to the AI client your team uses. Setup steps vary by client; see your client's documentation for how to register a remote MCP server.
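The exact configuration format depends on your client, but most MCP clients accept a named server entry pointing at the server URL that AI Gateway provides. A hypothetical example (the key names and URL below are placeholders, not a specific client's schema):

```json
{
  "mcpServers": {
    "bigquery": {
      "url": "https://gateway.example.com/mcp/bigquery"
    }
  }
}
```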
Tips
- You can create multiple MCP servers for the same application — for example, a read-only server for reporting agents and a read-write server for automation workflows.
- If you're unsure which OAuth scopes to request, start with the minimum read-only set and add write scopes only when needed. Most application pages include scope recommendations.
- You can edit a server's name, description, timeout, and security settings after deployment without redeploying.
Authentication
BigQuery supports two authentication methods. Service account authentication is recommended for server-to-server integrations, while OAuth 2.0 is suitable for user-centric workflows.
Service Account
Generate a JSON key from a Google Cloud service account with BigQuery Admin or Editor permissions, then upload it during MCP server creation.
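Before uploading the key, you can sanity-check it locally. A minimal stdlib sketch — the required-field set below is illustrative, and the sample key is fabricated for demonstration:

```python
import json

# Fields present in a typical Google Cloud service-account JSON key.
# (Illustrative check; the key you generate may include additional fields.)
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email", "token_uri"}

def validate_key(raw: str) -> list[str]:
    """Return the missing required fields (empty list means the key looks usable)."""
    key = json.loads(raw)
    missing = sorted(REQUIRED_FIELDS - key.keys())
    if key.get("type") != "service_account":
        missing.append("type=service_account")
    return missing

# Fabricated sample key for demonstration only.
sample = json.dumps({
    "type": "service_account",
    "project_id": "my-project",
    "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
    "client_email": "bq-agent@my-project.iam.gserviceaccount.com",
    "token_uri": "https://oauth2.googleapis.com/token",
})
print(validate_key(sample))  # [] means all required fields are present
```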
OAuth 2.0
Configure OAuth 2.0 in the MCP server setup:
- Authorization URL: https://accounts.google.com/o/oauth2/v2/auth
- Token URL: https://oauth2.googleapis.com/token
- Scopes:
  - https://www.googleapis.com/auth/bigquery — Full read/write access to BigQuery
  - https://www.googleapis.com/auth/bigquery.readonly — Read-only access to datasets and tables
  - https://www.googleapis.com/auth/bigquery.insertdata — Limited to data insertion only
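To see how these pieces fit together, here is a stdlib sketch that assembles the Google authorization URL from the endpoint and scope above. The client ID and redirect URI are placeholders for your own OAuth client registration:

```python
from urllib.parse import urlencode

# Endpoint and scope from the configuration above.
AUTH_URL = "https://accounts.google.com/o/oauth2/v2/auth"

params = {
    "client_id": "YOUR_CLIENT_ID.apps.googleusercontent.com",  # placeholder
    "redirect_uri": "https://gateway.example.com/oauth/callback",  # placeholder
    "response_type": "code",
    "scope": "https://www.googleapis.com/auth/bigquery.readonly",
    "access_type": "offline",  # request a refresh token for long-lived agents
}

url = f"{AUTH_URL}?{urlencode(params)}"
print(url)
```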
Available tools
The BigQuery MCP server exposes query execution, dataset and table management, data loading and export, BigQuery ML operations, and access control APIs.
| Tool | Purpose |
|---|---|
| Query Execution | Run interactive and batch queries, retrieve results, list jobs, configure query parameters, set destination tables, and control caching |
| Dataset Management | Create, update, and delete datasets; share datasets; manage row-level security and access control lists |
| Table Operations | Create, update, and delete tables; copy tables; manage partitioning and clustering; update schemas |
| Data Loading | Stream real-time data, load batch data from Cloud Storage, import from Cloud SQL, and load JSON files |
| Data Export | Export query results and tables to Cloud Storage or Bigtable; choose output formats such as Avro, CSV, and JSON |
| BigQuery ML | Create and train regression, classification, time-series, and clustering models; evaluate performance; make predictions |
| Job Management | Monitor query jobs, estimate processing costs, track execution status, and retrieve statistics |
| IAM & Security | Grant dataset access, create service accounts, set row-level security, configure data retention policies |
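Under the hood, MCP clients invoke these tools with JSON-RPC `tools/call` requests. A sketch of what a query-execution call might look like — the tool name and argument shape are illustrative; the server's published tool list defines the real ones:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "run_query",
    "arguments": {
      "query": "SELECT name, SUM(number) AS total FROM `bigquery-public-data.usa_names.usa_1910_2013` GROUP BY name ORDER BY total DESC LIMIT 10"
    }
  }
}
```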
Tips
- Partition and cluster tables to reduce the data each query scans, avoid SELECT *, and enable query caching to reuse results.
- Set byte-processing limits before executing queries and monitor slot usage to control spending.
- Batch small files together before loading, prefer efficient formats such as Parquet, and validate schemas to catch errors early.
- Start with simple ML models and add complexity incrementally.
- Use materialized views for frequently run queries and schedule their refreshes strategically.
- Grant the minimum necessary permissions through IAM roles and audit access regularly.
- Identify sensitive columns with Cloud Data Loss Prevention scanning and protect them with encryption or masking.
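Byte limits are easier to reason about with a rough cost conversion. A stdlib helper — the per-TiB rate below is illustrative only; check current BigQuery on-demand pricing for your region:

```python
TIB = 2**40  # bytes per tebibyte

def estimated_cost_usd(bytes_processed: int, usd_per_tib: float = 6.25) -> float:
    """Rough on-demand cost for a query, given dry-run bytes processed.

    The default rate is an assumption for illustration, not current pricing.
    """
    return bytes_processed / TIB * usd_per_tib

# A query whose dry run reports 500 GB processed:
print(round(estimated_cost_usd(500 * 10**9), 2))  # → 2.84
```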
Cequence AI Gateway