Building a Dataverse MCP Server: Give Your AI Agents Enterprise Data Access
By Gennoor Tech · March 8, 2026
An MCP server for Dataverse gives any AI agent — Claude, GPT, Copilot, or open-source models — the ability to query tables, create records, and navigate relationships in your business data through a standardized protocol.
Dataverse is the data backbone of Power Platform, Dynamics 365, and countless enterprise apps. Now imagine giving any AI agent — regardless of provider — secure, structured access to all of it. This comprehensive guide will show you exactly how to build a production-ready Dataverse MCP server that connects AI agents to your enterprise data.
How a Dataverse MCP Server Works
- Authentication — The server uses OAuth 2.0 with a service principal to connect to Dataverse securely, eliminating the need for user credentials in automated scenarios.
- Tool exposure — It advertises tools like query_table, get_record, create_record, list_tables, and describe_table that AI agents can discover and invoke.
- Schema awareness — The agent can discover what tables and columns exist, understand data types and relationships, then formulate appropriate queries without hardcoded knowledge.
- Relationship traversal — Navigate from Customer to Orders to Products through Dataverse relationships, following foreign keys and lookups automatically.
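Concretely, the tool catalog the server advertises during discovery can be modeled as plain descriptors. The names follow this article; the schema shapes are illustrative JSON Schema fragments, and the exact registration call depends on your MCP SDK version, so treat this as a sketch:

```javascript
// Tool descriptors the MCP server advertises during tool discovery.
// An MCP SDK would return these from a tools/list handler.
const tools = [
  {
    name: 'list_tables',
    description: 'List all tables in the Dataverse environment',
    inputSchema: { type: 'object', properties: {} }
  },
  {
    name: 'describe_table',
    description: 'Return schema details for one table',
    inputSchema: {
      type: 'object',
      properties: { tableName: { type: 'string' } },
      required: ['tableName']
    }
  },
  {
    name: 'query_table',
    description: 'Query a table with OData options',
    inputSchema: {
      type: 'object',
      properties: {
        tableName: { type: 'string' },
        filter: { type: 'string' },
        select: { type: 'string' },
        top: { type: 'number' }
      },
      required: ['tableName']
    }
  }
];
```

An agent reads these descriptors, decides which tool matches the user's request, and invokes it with arguments that satisfy the schema.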
Architecture Diagram Explained
The Dataverse MCP server architecture consists of four layers:
Client Layer
AI agents (Claude, GPT, custom agents) act as MCP clients. They connect to the server via stdio or HTTP transport and call exposed tools.
Server Layer
The MCP server exposes standardized tools and handles tool execution. It translates generic tool calls into Dataverse-specific operations.
Authentication Layer
OAuth 2.0 service principal authenticates to Azure AD and obtains access tokens for Dataverse. Tokens are cached and refreshed automatically.
Dataverse Layer
The Dataverse Web API executes queries, creates records, and returns results. Row-level security is enforced based on the service principal's permissions.
OAuth 2.0 Service Principal Setup Walkthrough
Before building the server, configure Azure AD authentication:
Step 1: Create an App Registration
In the Azure Portal, navigate to Microsoft Entra ID (formerly Azure Active Directory) > App registrations > New registration. Name it "Dataverse MCP Server" and leave the redirect URI blank (we'll use the client credentials flow).
Step 2: Create a Client Secret
In your app registration, go to Certificates & secrets > New client secret. Set expiration to 24 months and save the secret value — you won't see it again. Store it in Azure Key Vault or your secret management system.
Step 3: Grant Dataverse Permissions
Go to API permissions > Add a permission > Dynamics CRM > Delegated permissions. Add user_impersonation and click "Grant admin consent". Note that for the client credentials flow, actual data access is governed by the security roles assigned to the application user in Step 4; the delegated permission matters mainly for interactive, on-behalf-of-a-user scenarios.
Step 4: Create a Dataverse Application User
In Power Platform admin center, select your environment > Settings > Users + permissions > Application users. Create a new application user linked to your app registration. Assign appropriate security roles (System Administrator for development, custom roles for production).
Step 5: Test Authentication
Use curl to verify you can obtain an access token:
```shell
curl -X POST "https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token" \
  -d "client_id={client-id}" \
  -d "client_secret={client-secret}" \
  -d "grant_type=client_credentials" \
  -d "scope=https://{org}.crm.dynamics.com/.default"
```
If successful, you'll receive an access token. This proves your service principal can authenticate to Dataverse.
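The same token request can be issued from the server itself. The helper below only builds the request (endpoint URL and form-encoded body) so the moving parts stay visible; tenantId, clientId, clientSecret, and orgUrl are placeholders you'd load from configuration or a secret store:

```javascript
// Build the client-credentials token request for Microsoft Entra ID.
// Returns the token endpoint URL and a form-encoded body ready for POSTing.
function buildTokenRequest(tenantId, clientId, clientSecret, orgUrl) {
  const url = `https://login.microsoftonline.com/${tenantId}/oauth2/v2.0/token`;
  const body = new URLSearchParams({
    client_id: clientId,
    client_secret: clientSecret,
    grant_type: 'client_credentials',
    scope: `${orgUrl}/.default`
  });
  return { url, body: body.toString() };
}

// Usage with fetch (Node 18+): POST the body with
// Content-Type: application/x-www-form-urlencoded, then read
// access_token and expires_in from the JSON response.
```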
Implementing Each Tool
Now let's build the MCP server tools one by one:
Tool 1: list_tables
Returns all tables in the Dataverse environment:
```javascript
async function listTables() {
  const response = await axios.get(
    `${dataverseUrl}/api/data/v9.2/EntityDefinitions?$select=LogicalName,DisplayName&$filter=IsCustomizable/Value eq true`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  return response.data.value.map(entity => ({
    name: entity.LogicalName,
    displayName: entity.DisplayName?.UserLocalizedLabel?.Label
  }));
}
```
This tool enables schema discovery. AI agents can ask "What tables are available?" and get a comprehensive list.
Tool 2: describe_table
Returns schema details for a specific table:
```javascript
async function describeTable(tableName) {
  const response = await axios.get(
    `${dataverseUrl}/api/data/v9.2/EntityDefinitions(LogicalName='${tableName}')?$expand=Attributes($select=LogicalName,DisplayName,AttributeType,IsValidForCreate,IsValidForUpdate)`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  return {
    name: response.data.LogicalName,
    displayName: response.data.DisplayName?.UserLocalizedLabel?.Label,
    primaryKey: response.data.PrimaryIdAttribute,
    attributes: response.data.Attributes.map(attr => ({
      name: attr.LogicalName,
      displayName: attr.DisplayName?.UserLocalizedLabel?.Label,
      type: attr.AttributeType,
      createable: attr.IsValidForCreate,
      updateable: attr.IsValidForUpdate
    }))
  };
}
```
With this tool, agents understand table structure and can generate valid queries and create operations.
Tool 3: query_table
Executes queries using OData syntax:
```javascript
async function queryTable(tableName, filter, select, orderBy, top) {
  const params = [];
  if (filter) params.push(`$filter=${encodeURIComponent(filter)}`);
  if (select) params.push(`$select=${select}`);
  if (orderBy) params.push(`$orderby=${orderBy}`);
  if (top) params.push(`$top=${top}`);
  const url = `${dataverseUrl}/api/data/v9.2/${tableName}?${params.join('&')}`;
  const response = await axios.get(url, {
    headers: { Authorization: `Bearer ${accessToken}` }
  });
  return response.data.value;
}
```
This is the most frequently used tool. AI agents query data to answer questions, validate information, or find records to update.
Tool 4: get_record
Retrieves a specific record by ID:
```javascript
async function getRecord(tableName, recordId, select) {
  let url = `${dataverseUrl}/api/data/v9.2/${tableName}(${recordId})`;
  if (select) url += `?$select=${select}`;
  const response = await axios.get(url, {
    headers: { Authorization: `Bearer ${accessToken}` }
  });
  return response.data;
}
```
Use this when you know the exact record ID and need to fetch details efficiently.
Tool 5: create_record
Creates a new record:
```javascript
async function createRecord(tableName, data) {
  const response = await axios.post(
    `${dataverseUrl}/api/data/v9.2/${tableName}`,
    data,
    {
      headers: {
        Authorization: `Bearer ${accessToken}`,
        'Content-Type': 'application/json',
        'OData-MaxVersion': '4.0',
        'OData-Version': '4.0'
      }
    }
  );
  // The new record's ID is returned in the OData-EntityId header,
  // e.g. https://org.crm.dynamics.com/api/data/v9.2/accounts(guid)
  const recordId = response.headers['odata-entityid'].split('(')[1].split(')')[0];
  return { id: recordId, url: response.headers['odata-entityid'] };
}
```
This enables AI agents to take action: create support cases, log activities, add contacts.
Tool 6: update_record
Updates an existing record:
```javascript
async function updateRecord(tableName, recordId, data) {
  await axios.patch(
    `${dataverseUrl}/api/data/v9.2/${tableName}(${recordId})`,
    data,
    {
      headers: {
        Authorization: `Bearer ${accessToken}`,
        'Content-Type': 'application/json'
      }
    }
  );
  return { success: true, id: recordId };
}
```
Tool 7: delete_record
Deletes a record (use with caution):
```javascript
async function deleteRecord(tableName, recordId) {
  await axios.delete(
    `${dataverseUrl}/api/data/v9.2/${tableName}(${recordId})`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  return { success: true, id: recordId };
}
```
Implement strict authorization checks before exposing this tool in production.
FetchXML vs OData Query Patterns
Dataverse supports two query languages:
OData
OData is RESTful and URL-based. Advantages:
- Easier to construct programmatically
- Standard HTTP semantics (GET for queries, POST for creates)
- Better for simple queries and single-table operations
Example: accounts?$filter=revenue gt 1000000&$select=name,revenue&$orderby=revenue desc
FetchXML
FetchXML is XML-based and more powerful. Advantages:
- Supports complex joins across multiple tables
- Aggregate functions (SUM, AVG, COUNT)
- Grouping and linked entities
- Better performance for complex queries
Example FetchXML query:
```xml
<fetch>
  <entity name="account">
    <attribute name="name"/>
    <attribute name="revenue"/>
    <link-entity name="contact" from="parentcustomerid" to="accountid">
      <attribute name="fullname"/>
    </link-entity>
  </entity>
</fetch>
```
For MCP servers, start with OData for simplicity. Add a FetchXML tool later for advanced scenarios.
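When you do add that FetchXML tool, the query is passed to the Web API as a URL-encoded fetchXml parameter on the entity set (the plural collection name, e.g. accounts). A minimal URL builder, assuming dataverseUrl is your environment root:

```javascript
// Build a Web API URL that executes a FetchXML query against an entity set.
// dataverseUrl is assumed to be e.g. https://org.crm.dynamics.com
function buildFetchXmlUrl(dataverseUrl, entitySet, fetchXml) {
  return `${dataverseUrl}/api/data/v9.2/${entitySet}?fetchXml=${encodeURIComponent(fetchXml)}`;
}
```

A GET against the returned URL (with the usual Authorization header) yields the same value array shape as an OData query, so the result handling in query_table can be reused.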
Handling Dataverse Relationships and Lookups
Dataverse relationships are critical for enterprise data models:
One-to-Many Relationships
An Account has many Contacts. Query contacts for an account:
contacts?$filter=_parentcustomerid_value eq {account-id}
The _parentcustomerid_value is the lookup field navigation property.
Many-to-One Relationships
A Contact belongs to an Account. Expand the relationship in a query:
contacts({contact-id})?$select=fullname&$expand=parentcustomerid_account($select=name,revenue)
This returns the contact with their parent account details inline.
Many-to-Many Relationships
A Contact can have multiple Marketing Lists, and a Marketing List has multiple Contacts. Query through the relationship:
contacts({contact-id})/listmember_association?$select=listname
Best Practice: Relationship Tool
Consider adding a dedicated describe_relationships tool that returns relationship metadata. This helps AI agents understand how tables connect.
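A sketch of what that tool could return. The mapper below reshapes the OneToManyRelationships portion of an EntityDefinitions metadata response into a compact summary; the input field names (SchemaName, ReferencedEntity, ReferencingEntity, ReferencingAttribute) follow the Web API's relationship metadata, but verify them against your environment:

```javascript
// Reshape raw one-to-many relationship metadata into a compact summary
// an AI agent can reason about. `metadata` is the JSON body from e.g.
// EntityDefinitions(LogicalName='account')?$expand=OneToManyRelationships(...)
function describeRelationships(metadata) {
  return (metadata.OneToManyRelationships || []).map(rel => ({
    name: rel.SchemaName,
    from: rel.ReferencedEntity,      // the "one" side
    to: rel.ReferencingEntity,       // the "many" side
    lookupField: rel.ReferencingAttribute
  }));
}
```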
Row-Level Security Enforcement
Dataverse security is enforced at multiple levels:
Business Unit Security
Records are owned by business units. The service principal's security roles determine which business units it can access.
Record-Based Security
Security roles define create, read, write, delete, append, and append-to privileges at the record level. Configure your application user with appropriate roles.
Field-Level Security
Sensitive fields (SSN, salary) can have field-level security. Even if the service principal can read the record, secured fields may be hidden.
Hierarchical Security
Managers can access their direct reports' records. Configure hierarchical security if your agents need manager visibility.
MCP Server Enforcement
The MCP server inherits all Dataverse security. You don't need to re-implement security logic — Dataverse APIs enforce it automatically. However, add application-level logging to track what each agent accesses for audit purposes.
Caching Strategies
Reduce load on Dataverse with strategic caching:
Metadata Caching
Table schemas change rarely. Cache list_tables and describe_table results for 1 hour. Refresh on startup and periodically in the background.
Data Caching
Cache read-only reference data (product catalogs, price lists, territory mappings). Use Redis or in-memory caching with TTLs appropriate to data volatility.
Negative Caching
Cache "not found" responses to prevent repeated queries for non-existent records. Use a short TTL (5 minutes) to handle cases where records are created shortly after being queried.
Cache Invalidation
Subscribe to Dataverse webhooks or use Azure Service Bus to receive notifications when data changes. Invalidate relevant cache entries proactively.
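A minimal in-memory TTL cache along these lines covers the metadata case for a single-process server (the injectable now function exists only to make expiry testable):

```javascript
// Simple in-memory cache with per-entry time-to-live.
// Suitable for metadata results like list_tables and describe_table.
class TtlCache {
  constructor(now = Date.now) {
    this.now = now;
    this.entries = new Map();
  }
  set(key, value, ttlMs) {
    this.entries.set(key, { value, expiresAt: this.now() + ttlMs });
  }
  get(key) {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (this.now() > entry.expiresAt) {
      this.entries.delete(key); // lazy eviction on read
      return undefined;
    }
    return entry.value;
  }
}
```

Wrapping list_tables is then a two-liner: return the cached value when present, otherwise fetch from Dataverse and set it with a one-hour TTL. For multi-instance deployments, swap the Map for Redis behind the same interface.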
Error Handling and Retry Logic
Dataverse APIs can fail for various reasons. Implement robust error handling:
Authentication Failures
If the access token expires mid-operation, catch 401 errors, refresh the token, and retry the operation automatically.
Throttling
Dataverse enforces service protection limits (roughly 6,000 requests per user per 5-minute sliding window). When throttled, you'll receive a 429 status with a Retry-After header. Implement exponential backoff and respect the Retry-After value.
Transient Failures
Network glitches and temporary service issues happen. Retry failed requests up to 3 times with exponential backoff (1s, 2s, 4s).
Validation Errors
If create/update operations fail validation (missing required fields, invalid data types), return clear error messages to the AI agent. Include the field names and constraints that were violated.
Permission Errors
403 Forbidden means the service principal lacks permission. Log these for security monitoring and return user-friendly messages.
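The throttling and transient-failure rules above combine into one retry policy. The delay helper is kept pure so the schedule is easy to verify; withRetries wraps any async operation (both names are ours, not a library API):

```javascript
// Delay for a given attempt: honor Retry-After (seconds) when the server
// sent one, otherwise exponential backoff starting at 1s (1s, 2s, 4s, ...).
function backoffDelayMs(attempt, retryAfterSeconds) {
  if (retryAfterSeconds != null) return retryAfterSeconds * 1000;
  return 1000 * 2 ** attempt;
}

// Retry an async operation on 429, 5xx, or network errors, up to maxAttempts.
// Assumes axios-style errors where err.response carries status and headers.
async function withRetries(operation, maxAttempts = 3,
    sleep = ms => new Promise(resolve => setTimeout(resolve, ms))) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      const status = err.response?.status;
      // No response at all usually means a network-level failure.
      const retryable = status === 429 || status >= 500 || status === undefined;
      if (!retryable || attempt + 1 >= maxAttempts) throw err;
      const retryAfter = err.response?.headers?.['retry-after'];
      await sleep(backoffDelayMs(attempt, retryAfter ? Number(retryAfter) : null));
    }
  }
}
```

Permission (403) and validation (400) errors fall through immediately, which matches the guidance above: retrying them only burns quota.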
Pagination for Large Datasets
Dataverse returns up to 5,000 records per request. Implement pagination for larger result sets:
OData Pagination
Note that the Dataverse Web API does not support $skip, so classic offset paging ($top plus $skip) won't work. Limit result size with $top, and control page size with the Prefer: odata.maxpagesize request header:
accounts?$select=name&$top=100
To move past the first page, follow the continuation link Dataverse returns.
Cursor-Based Pagination
Dataverse response includes an @odata.nextLink when more results exist:
```json
{
  "value": [ "...100 records..." ],
  "@odata.nextLink": "https://org.crm.dynamics.com/api/data/v9.2/accounts?$skiptoken=..."
}
```
Follow the next link to retrieve subsequent pages; this server-driven paging is the mechanism Dataverse supports for walking large result sets.
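The follow-the-link loop is short enough to sketch in full. The page fetcher is injected so the same loop works against the live API or a test double:

```javascript
// Collect all pages of a Dataverse query by following @odata.nextLink.
// fetchPage(url) must return the parsed response body for that URL
// (e.g. an axios/fetch wrapper that adds the Authorization header).
function collectAllPages(fetchPage, firstUrl) {
  const records = [];
  let url = firstUrl;
  while (url) {
    const page = fetchPage(url);
    records.push(...page.value);
    url = page['@odata.nextLink']; // undefined on the last page
  }
  return records;
}
```

In production you'd cap the number of pages fetched per tool call, since an agent rarely needs an unbounded result set.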
MCP Tool Design
For query tools, include a maxResults parameter (default 100, max 1000). If more results exist, return a cursor token the agent can use to fetch the next page:
```json
{
  "results": [ "..." ],
  "nextCursor": "encoded-continuation-token"
}
```
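One simple way to implement that cursor is to wrap Dataverse's @odata.nextLink in an opaque token, so clients never depend on its internals (base64 here; sign or encrypt the token if the link could leak sensitive filter values):

```javascript
// Encode/decode an opaque continuation cursor around Dataverse's nextLink.
function encodeCursor(nextLink) {
  return Buffer.from(nextLink, 'utf8').toString('base64');
}

function decodeCursor(cursor) {
  return Buffer.from(cursor, 'base64').toString('utf8');
}
```

On the next tool call, decode the cursor back into the nextLink URL and continue fetching from there.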
Real-World Implementation Patterns
Claims Processing
An insurance company processes claims using an AI agent connected to Dataverse via MCP:
- Customer submits claim via email
- Email parsing agent extracts claim details
- Agent calls query_table to find the policy in Dataverse
- Agent validates coverage using get_record to fetch policy details
- Agent calls create_record to log a new claim
- Agent updates claim status through workflow as adjudication progresses
Result: Claims processing time reduced from 48 hours to 2 hours. Human adjusters focus on complex cases while the agent handles routine claims automatically.
Sales Intelligence
A sales team uses an AI assistant to prepare for customer meetings:
- Sales rep asks "Give me a briefing on Contoso"
- AI calls query_table to find the account
- AI calls query_table with a relationship filter to get all opportunities
- AI calls query_table to fetch recent activities
- AI analyzes the data and generates a briefing with opportunity risks and next best actions
Result: Meeting prep time reduced from 30 minutes to 2 minutes. Sales reps enter meetings better informed and more confident.
Customer Service Automation
A support agent uses AI to resolve customer inquiries:
- Customer messages "I need to update my address"
- AI calls query_table to find the customer by email or phone
- AI extracts the new address from the message using NLP
- AI calls update_record to update the customer's address
- AI calls create_record to log the interaction as a case activity
- AI responds to the customer confirming the update
Result: 60% of address change requests handled without human intervention. Support agents focus on complex issues requiring empathy and judgment.
Performance Optimization
Minimize Round Trips
Use $expand to fetch related records in a single query instead of multiple sequential queries:
accounts({id})?$expand=contact_customer_accounts($select=fullname,emailaddress)
This retrieves the account and all related contacts in one HTTP request.
Select Only Needed Fields
Always use $select to fetch only the columns you need:
accounts?$select=name,revenue,industrycode
This reduces payload size and improves query performance, especially for tables with many columns.
Use Batch Requests
Dataverse supports OData batch requests to execute multiple operations in a single HTTP request. Use this when creating multiple related records or executing independent queries.
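An OData $batch payload is a multipart/mixed body where each sub-request is an embedded HTTP request. This is a sketch of a read-only batch body builder under those assumptions; the boundary name is arbitrary, and write operations would additionally need changeset parts, which this omits:

```javascript
// Build a multipart/mixed body for an OData $batch of GET requests.
// Each path is relative to the org root, e.g. '/api/data/v9.2/accounts?$select=name'.
// POST the result to {dataverseUrl}/api/data/v9.2/$batch with header
// Content-Type: multipart/mixed;boundary={boundary}
function buildBatchBody(boundary, paths) {
  const parts = paths.map(path =>
    [
      `--${boundary}`,
      'Content-Type: application/http',
      'Content-Transfer-Encoding: binary',
      '',
      `GET ${path} HTTP/1.1`,
      'Accept: application/json',
      ''
    ].join('\r\n')
  );
  return parts.join('\r\n') + `\r\n--${boundary}--\r\n`;
}
```

The response comes back as a matching multipart body, one embedded HTTP response per sub-request, so you'll need a small parser on the receiving side as well.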
Connection Pooling
Reuse HTTP connections to Dataverse rather than creating new connections for each request. Most HTTP libraries (axios, fetch) handle this automatically, but verify your configuration.
Monitor API Usage
Track API call counts and response times. Identify which tools consume the most resources and optimize accordingly. Use Azure Application Insights or similar monitoring tools.
Testing Strategies
Unit Testing
Mock Dataverse API responses to test tool logic in isolation:
```javascript
jest.mock('axios');
axios.get.mockResolvedValue({ data: { value: [{ name: 'Contoso' }] } });

const results = await queryTable('accounts', "name eq 'Contoso'");
expect(results).toHaveLength(1);
```
Integration Testing
Test against a dedicated Dataverse developer environment with known test data. Verify:
- Authentication succeeds with service principal
- Schema discovery returns expected tables and columns
- Queries return correct results
- Create/update/delete operations work as expected
- Error handling triggers on invalid operations
MCP Client Testing
Use the MCP inspector tool to manually test each tool from a client perspective. Verify tool schemas are correct and results are formatted properly.
Load Testing
Simulate multiple concurrent agents calling tools simultaneously. Verify:
- Token caching prevents authentication bottlenecks
- Connection pooling handles concurrent requests
- Rate limiting doesn't cause failures
- Memory usage remains stable under load
Deployment and Monitoring
Deployment Options
- Azure Container Instances — Simple containerized deployment for HTTP-based MCP servers
- Azure App Service — Fully managed platform with auto-scaling and easy deployment
- Azure Kubernetes Service — For large-scale deployments with complex orchestration needs
- On-premises servers — Run stdio-based MCP servers on local machines for development or air-gapped environments
Monitoring
Instrument your MCP server with comprehensive logging:
- Log every tool call with client identity, tool name, parameters, and execution time
- Log authentication events (token acquisition, refresh, expiration)
- Log errors with full stack traces and context
- Export logs to Azure Application Insights or Log Analytics for analysis
Alerting
Set up alerts for:
- Error rate exceeds 5% over 5 minutes
- Average response time exceeds 2 seconds
- Authentication failures indicate token expiration
- Rate limiting triggers suggest capacity issues
Health Checks
Implement a health check endpoint that verifies:
- The server is running
- Dataverse API is reachable
- Authentication is working
- Critical dependencies (cache, database) are healthy
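Those checks can roll up into a single health response. The check names below are ours; wire each boolean to a real probe (a lightweight metadata ping for Dataverse, a token acquisition for auth, and so on):

```javascript
// Aggregate individual probe results into one health report.
// `checks` maps a check name to true (healthy) or false (failing).
function buildHealthReport(checks) {
  const failing = Object.entries(checks)
    .filter(([, ok]) => !ok)
    .map(([name]) => name);
  return {
    status: failing.length === 0 ? 'healthy' : 'unhealthy',
    failing
  };
}
```

Serve the report from a GET /health endpoint, returning 200 when healthy and 503 otherwise, so load balancers and orchestrators can act on it.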
Integration with Copilot Studio and Other AI Frameworks
Copilot Studio Integration
Copilot Studio has been adding native MCP support; where that isn't available in your tenant, you can bridge through Power Automate:
- Create a Power Automate flow that calls your MCP server's HTTP endpoint
- Expose the flow as a Power Platform connector
- Use the connector in your Copilot Studio agent
This adds latency but enables Copilot Studio agents to access Dataverse through your MCP server's tools.
LangChain Integration
LangChain supports MCP through custom tool implementations. Wrap your MCP client in a LangChain tool and add it to your agent's toolkit.
Semantic Kernel Integration
Semantic Kernel can invoke MCP servers through HTTP transport. Register your server as a plugin and expose tools as semantic functions.
Direct API Integration
Any AI framework that supports function calling can use your MCP server. Implement a thin adapter layer that translates MCP tool calls to your framework's function calling format.
Getting Started Today
Building a Dataverse MCP server is a weekend project that unlocks massive value:
- Set up service principal authentication (2 hours)
- Implement list_tables and describe_table tools (2 hours)
- Add query_table tool with OData support (3 hours)
- Test with MCP inspector and refine (2 hours)
- Deploy to Azure and connect your first AI agent (1 hour)
Start with read-only tools to minimize risk. Once you've proven value, add write operations with proper authorization and audit logging.
For step-by-step implementation guidance, explore our AI integration workshops or dive into more technical deep dives on our blog covering enterprise AI patterns, Microsoft Power Platform, and agentic architectures.
Jalal Ahmed Khan
Microsoft Certified Trainer (MCT) · Founder, Gennoor Tech
14+ years in enterprise AI and cloud technologies. Delivered AI transformation programs for Fortune 500 companies across 6 countries including Boeing, Aramco, HDFC Bank, and Siemens. Holds 16 active Microsoft certifications including Azure AI Engineer and Power BI Analyst.