Zscaler Workload Segmentation MCP server
Zscaler Workload Segmentation provides identity-based microsegmentation for workloads across data centers and clouds. An MCP server for Workload Segmentation allows AI agents to discover application dependencies, create zero trust policies, simulate policy changes safely, and prevent lateral movement without needing direct console access.
Setting up an MCP server
This article covers the standard steps for creating an MCP server in AI Gateway and connecting it to an AI client. The steps are the same for every integration — application-specific details (API credentials, OAuth endpoints, and scopes) are covered in the individual application pages.
Before you begin
You'll need:
- Access to AI Gateway with permission to create MCP servers
- API credentials for the application you're connecting (see the relevant application page for what to collect)
Create an MCP server
Find the API in the catalog
- Sign in to AI Gateway and select MCP Servers from the left navigation.
- Select New MCP Server.
- Search for the application you want to connect, then select it from the catalog.
Configure the server
- Enter a Name for your server — something descriptive that identifies both the application and its purpose (for example, "Zendesk Support — Prod").
- Enter a Description so your team knows what the server is for.
- Set the Timeout value. 30 seconds works for most APIs; increase to 60 seconds for APIs that return large payloads.
- Toggle Production mode on if this server will be used in a live workflow.
- Select Next.
Configure authentication
Enter the authentication details for the application. This varies by service — see the Authentication section of the relevant application page for the specific credentials, OAuth URLs, and scopes to use.
Configure security
- Set any Rate limits appropriate for your use case and the API's own limits.
- Enable Logging if you want AI Gateway to record requests and responses for auditing.
- Select Next.
Deploy
Review the summary, then select Deploy. AI Gateway provisions the server and provides a server URL you'll use when configuring your AI client.
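Many MCP clients accept a JSON entry that points at a remote server URL. The exact file name and schema vary by client, and the server name and URL below are placeholders — substitute the server URL AI Gateway provided at deploy time:

```json
{
  "mcpServers": {
    "zscaler-workload-segmentation": {
      "url": "https://YOUR-GATEWAY-HOST/mcp/zscaler-workload-segmentation"
    }
  }
}
```

Check your client's documentation for where this configuration lives and whether it expects a `url`, `command`, or other transport field.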
Connect to an AI client
Once your server is deployed, add it to the AI client your team uses. Follow your client's instructions for registering a remote MCP server, supplying the server URL from the previous step.
Tips
- You can create multiple MCP servers for the same application — for example, a read-only server for reporting agents and a read-write server for automation workflows.
- If you're unsure which OAuth scopes to request, start with the minimum read-only set and add write scopes only when needed. Most application pages include scope recommendations.
- You can edit a server's name, description, timeout, and security settings after deployment without redeploying.
Authentication
Zscaler Workload Segmentation uses API key authentication. The base URL is https://api.workload.zscaler.com. Generate API credentials from Settings > API Keys in the Workload Segmentation console and note your organization ID. Pass the key and secret in the X-API-Key and X-API-Secret request headers on every call.
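A minimal sketch of how a client might assemble these headers. The header names and base URL come from the section above; the helper function and environment variable names are illustrative assumptions, not part of the product:

```python
import os

# Base URL from the Authentication section above.
BASE_URL = "https://api.workload.zscaler.com"


def auth_headers(api_key: str, api_secret: str) -> dict:
    """Build the request headers for Workload Segmentation API calls.

    The X-API-Key / X-API-Secret header names are documented above;
    everything else here is an illustrative convention.
    """
    return {
        "X-API-Key": api_key,
        "X-API-Secret": api_secret,
        "Content-Type": "application/json",
    }


# Example: read credentials from the environment rather than hard-coding them.
# ZWS_API_KEY / ZWS_API_SECRET are hypothetical variable names.
headers = auth_headers(
    os.environ.get("ZWS_API_KEY", ""),
    os.environ.get("ZWS_API_SECRET", ""),
)
```

Keeping credentials in environment variables (or a secrets manager) avoids committing them to source control alongside your gateway configuration.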
Available tools
The tools enable workload discovery and identity management, application dependency mapping, policy creation and simulation, traffic analytics, and compliance reporting. They help you build zero trust policies, test policy changes safely before enforcement, detect anomalous traffic, and prove compliance with segmentation requirements.
| Tool | Description |
|---|---|
| Workload Discovery | Discover all workloads, identify unprotected services, find dependencies, map topology |
| Workload Identity | Create workload identities, tag by tier, group by business unit, classify by data sensitivity |
| Fingerprinting | Generate workload fingerprints, verify identity, track changes, monitor drift |
| Dependency Discovery | Map application communications, discover service dependencies, identify data flows, track API calls |
| Topology Visualization | Show application topology, display traffic patterns, highlight critical paths, map east-west traffic |
| Policy Creation | Create zero trust policies, build allow lists, define service boundaries, set communication rules |
| Policy Templates | Apply PCI compliance templates, use three-tier app templates, deploy microservices policies |
| Granular Controls | Control by port and protocol, restrict by process, limit by user context, filter by metadata |
| Policy Simulation | Simulate policy changes, preview blocked connections, test before deployment, validate rules |
| Impact Assessment | Show affected workloads, calculate disruption risk, identify dependencies, measure blast radius |
| Safe Deployment | Stage policy rollout, monitor mode first, gradual enforcement, rollback capability |
| Flow Analysis | Show traffic patterns, monitor data volumes, track connection counts, analyze protocols |
| Anomaly Detection | Detect unusual traffic, find policy violations, identify new connections, monitor behavioral changes |
| Performance Metrics | Measure latency impact, track throughput, monitor connection health, assess overhead |
| Lateral Movement Prevention | Block unauthorized connections, prevent service hopping, stop attack propagation |
| Compliance Reports | Generate compliance reports, document segmentation, prove policy enforcement, track changes |
| CI/CD Integration | Integrate with Jenkins, support GitOps, infrastructure as code, policy as code |
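The Policy Creation, Policy Simulation, and Safe Deployment tools above suggest a simulate-before-enforce workflow. The sketch below illustrates that flow in plain Python; the payload field names (`source`, `destination`, `mode`, and so on) are assumptions for illustration, not the real API schema:

```python
def build_policy(name: str, source_tag: str, dest_tag: str,
                 port: int, protocol: str = "TCP") -> dict:
    # Illustrative zero trust allow rule; field names are assumptions.
    return {
        "name": name,
        "action": "allow",
        "source": {"tag": source_tag},
        "destination": {"tag": dest_tag, "port": port, "protocol": protocol},
        "mode": "monitor",  # start in monitor mode, per Safe Deployment
    }


def promote_to_enforce(policy: dict, simulated_blocked: int) -> dict:
    # Gate enforcement on the simulation result: only enforce when the
    # simulation reports no legitimate connections would be blocked.
    if simulated_blocked == 0:
        return {**policy, "mode": "enforce"}
    return policy  # stay in monitor mode and investigate


# Example: allow the web tier to reach the database tier on 5432 only.
web_to_db = build_policy("web-to-db", "tier:web", "tier:db", 5432)
```

Treating policies as data like this is also what makes the CI/CD and policy-as-code integrations in the table practical: policy definitions can live in version control and pass through the same simulation gate on every change.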
Tips
- Test policy changes in simulation mode first and review the impact assessment before enforcing any policy in production.
- Deploy policies gradually in monitor mode to establish a baseline, then enable enforcement once you're confident the rules won't disrupt legitimate traffic.
- Define clear role-based access controls for who can create policies.
- Enable monitoring and alerting on all policy changes to track who made modifications and when.