Copilot Studio (SaaS) integrated with a Fabric Data Agent

The Request Lifecycle: A 3-Stage Handshake

Stage 1: Copilot Studio -> Azure AI Foundry (The Intent)
  • The Action: The user asks: “What is the total AUM for Client X?”
  • The Network: Copilot Studio (SaaS) identifies that it needs a “Data Action.” It sends a secure REST API call to your specific Model Endpoint in Azure AI Foundry.
  • The “Sovereign” Logic: This happens via a Managed Connection. Because both are in your UK South tenant, the traffic never leaves the Microsoft internal network. It’s authenticated via Entra ID SSO, so the system knows exactly which RM is asking.
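Stage 1 can be sketched as the request Copilot Studio assembles on the user's behalf. This is a minimal illustration only: the endpoint URL, deployment name, and payload shape below are hypothetical placeholders, not the exact Foundry wire format; the point is that the user's Entra ID token travels with the call.

```python
# Sketch of Stage 1: the "Data Action" call from Copilot Studio to an Azure
# AI Foundry model endpoint. URL, deployment, and body shape are illustrative.

def build_foundry_request(user_question: str, access_token: str,
                          endpoint: str = "https://my-foundry.uksouth.example",
                          deployment: str = "wealth-assistant") -> dict:
    """Assemble the HTTPS request sent to the model endpoint."""
    return {
        "url": f"{endpoint}/deployments/{deployment}/chat/completions",
        "headers": {
            # Entra ID SSO: the user's own token flows through, so the
            # system knows exactly which RM is asking.
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        "body": {
            "messages": [{"role": "user", "content": user_question}],
        },
    }

req = build_foundry_request("What is the total AUM for Client X?", "<entra-token>")
```

The request is built but deliberately not sent here; in the real flow the Managed Connection keeps this traffic inside the Microsoft network.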
Stage 2: AI Foundry -> Fabric Data Agent (The Reasoning)
  • The Action: The LLM (e.g., GPT-5.4) in Foundry “reasons” over the prompt. It realizes it doesn’t have the answer in its training data, so it triggers a Tool Call to the Fabric Data Agent.
  • The Network: This is a Service-to-Service (S2S) call. Foundry essentially says: “I need the AUM for Client X. Fabric Agent, go write the SQL to find this in the Gold Lakehouse.”
  • The “Sovereign” Logic: This uses Managed Private Endpoints. The AI Foundry “reaches into” the Fabric environment through a private tunnel that you’ve approved in the network settings.
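The Stage 2 hop follows the standard LLM tool-calling pattern: the model emits a tool call, and the host routes it to the backend. The tool name, schema, and stub agent below are hypothetical stand-ins; in production the dispatch would be the S2S call over the approved Managed Private Endpoint.

```python
# Sketch of Stage 2: routing the model's tool call to the Fabric Data Agent.
# Tool name and schema are illustrative, not a real Foundry/Fabric contract.

FABRIC_TOOL = {
    "type": "function",
    "function": {
        "name": "query_fabric_data_agent",
        "description": "Ask the Fabric Data Agent a natural-language "
                       "question about data in the Gold Lakehouse.",
        "parameters": {
            "type": "object",
            "properties": {"question": {"type": "string"}},
            "required": ["question"],
        },
    },
}

def fabric_data_agent(question: str) -> str:
    """Stub: stands in for the S2S call over a Managed Private Endpoint."""
    return f"SQL generated and executed for: {question}"

def dispatch_tool_call(tool_call: dict) -> str:
    """Route a tool call emitted by the model to the matching backend."""
    if tool_call["name"] == "query_fabric_data_agent":
        return fabric_data_agent(tool_call["arguments"]["question"])
    raise ValueError(f"Unknown tool: {tool_call['name']}")

result = dispatch_tool_call({
    "name": "query_fabric_data_agent",
    "arguments": {"question": "Total AUM for Client X"},
})
```

The design point is that the LLM never touches the Lakehouse directly; it only names the tool and the question, and the host decides where that call is allowed to go.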
Stage 3: Fabric Data Agent -> OneLake (The Retrieval)
  • The Action: The Fabric Data Agent translates the request into a high-performance SQL query. It fetches only the specific rows needed from OneLake.
  • The Network: The query is executed against the Distributed Compute of Fabric.
  • The “Sovereign” Logic: This is Zero-Copy. We aren’t moving the database to the AI; we are moving a tiny “Answer” (e.g., “£2.4M”) back through the chain.
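The zero-copy idea in Stage 3 can be shown with a small stand-in: only the aggregate crosses back, never the table. Here sqlite3 substitutes for the Fabric SQL endpoint, and the table and column names are illustrative.

```python
# Sketch of Stage 3: the agent runs SQL against the store and returns only
# the tiny answer. sqlite3 stands in for Fabric's SQL endpoint; schema is
# illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gold_positions (client TEXT, aum_gbp REAL)")
conn.executemany(
    "INSERT INTO gold_positions VALUES (?, ?)",
    [("Client X", 1_400_000.0), ("Client X", 1_000_000.0),
     ("Client Y", 900_000.0)],
)

def total_aum(client: str) -> float:
    """Fetch only the aggregate row: the answer moves, the data does not."""
    row = conn.execute(
        "SELECT SUM(aum_gbp) FROM gold_positions WHERE client = ?", (client,)
    ).fetchone()
    return row[0]

answer = total_aum("Client X")  # 2,400,000.0, rendered upstream as "£2.4M"
```

Whatever the row count in OneLake, the payload travelling back up the chain is one number.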
