API Management
Announcing Public Preview of API Management WordPress plugin to build customized developer portals
The Azure API Management WordPress plugin enables our customers to leverage the power of WordPress to build their own unique developer portal. API managers and administrators can bring up a new developer portal in a matter of minutes and customize the theme and layout, add stylesheets, or localize the portal into different languages.

Build. Secure. Launch Your Private MCP Registry with Azure API Center.
We are thrilled to embrace a new era in the world of MCP registries. As organizations increasingly build and consume MCP servers, the need for a secure, governed, robust, and easily discoverable tools catalog has become critical. Today, we are excited to show you how to do just that with MCP Center, a live example demonstrating how Azure API Center (APIC) can serve as a private, enterprise-ready MCP registry. The registry puts your MCPs just one click away for developers, ensuring no setup fuss and a direct path to coding brilliance.

Why a private registry? 🤔
Public OSS registries have been instrumental in driving growth and innovation across the MCP ecosystem. But as adoption scales, so does the need for tighter security, governance, and control - and this is where private MCP registries, and Azure API Center, step in. Azure API Center offers a powerful and centralized approach to MCP discovery and governance across diverse teams and services within an organization. Let's delve into the key benefits of leveraging a private MCP registry with Azure API Center.

Security and Trust: The Foundation of AI Adoption
Review and Verification: Public registries, by their open nature, accept submissions from a wide range of developers. This can introduce risks from tools with limited security practices or even malicious intent. A private registry empowers your organization to thoroughly review and verify every MCP server before it becomes accessible to internal developers or AI agents (like Copilot Studio and AI Foundry). This eliminates the risk of introducing random, potentially vulnerable first- or third-party tools into your ecosystem.
Reduced Attack Surface: By controlling which MCP servers are accessible, organizations significantly shrink their potential attack surface. When your AI agents interact solely with known and secure internal tools, the likelihood of external attackers exploiting vulnerabilities in unvetted solutions is drastically reduced.
Enterprise-Grade Authentication and Authorization: Private registries enable the enforcement of your existing robust enterprise authentication and authorization mechanisms (e.g., OAuth 2.0) across all MCP servers. Public registries, in contrast, may have varying or less stringent authentication requirements.
Enforced AI Gateway Control (Azure API Management): Beyond vetting, a private registry enables organizations to route all MCP server traffic through an AI gateway such as Azure API Management. This ensures that every interaction, whether internal or external, adheres to strict security policies, including centralized authentication, authorization, rate limiting, and threat protection, creating a secure front for your AI services.

Governance and Control: Navigating the AI Landscape with Confidence
Centralized Oversight and "Single Source of Truth": A private registry provides a centralized "single source of truth" for all AI-related tools and data connections within your organization. This empowers comprehensive oversight of AI initiatives, clearly identifying ownership and accountability for each MCP server.
Preventing "Shadow AI": Without a formal registry, individual teams might independently develop or integrate AI tools, leading to "shadow AI" - unmanaged and unmonitored AI deployments that can pose significant risks. A private registry encourages a standardized approach, bringing all AI tools under central governance and visibility.
Tailored Tool Development: Organizations can develop and host MCP servers specifically tailored to their unique needs and requirements. This means optimized efficiency and utility, providing specialized tools you won't typically find in broader public registries.
Simplified Integration and Accelerated Development: A well-managed private registry simplifies the discovery and integration of internal tools for your AI developers. This significantly accelerates the development and deployment of AI-powered applications, fostering innovation.

Good news! Azure API Center can be created for free in any Azure subscription. You can find a detailed guide to help you get started: Inventory and Discover MCP Servers in Your API Center - Azure API Center

Get involved 💡
Your remote MCP server can be discoverable on API Center's MCP Discovery page today! Bring your MCP server and reach Azure customers! These Microsoft partners are shaping the future of the MCP ecosystem by making their remote MCP servers discoverable via API Center's MCP Discovery page.

Early Partners:
Atlassian – Connect to Jira and Confluence for issue tracking and documentation
Box – Use Box to securely store, manage, and share your photos, videos, and documents in the cloud
Neon – Manage and query Neon Postgres databases with natural language
Pipedream – Add 1000s of APIs with built-in authentication and 10,000+ tools to your AI assistant or agent (coming soon)
Stripe – Payment processing and financial infrastructure tools

If you would like your remote MCP server to be featured in our Discovery panel, reach out to us at GitHub/mcp-center by commenting under the following GitHub issue: MCP Server Onboarding Request

Ready to Get Started? 🚀
Modernize your AI strategy and empower your teams with enhanced discovery, security, and governance of agentic tools. Now's the time to explore creating your own private enterprise MCP registry. Check out MCP Center, a public showcase demonstrating how you can build your own enterprise MCP registry - MCP Center - Build Your Own Enterprise MCP Registry - or go ahead and create your Azure API Center today!

Introducing API Management Support in the Azure SRE Agent
In May, the Azure SRE Agent was introduced - an AI-powered Site Reliability Engineering (SRE) assistant built to help customers identify, diagnose, and resolve issues across their Azure environments faster and with less manual effort. Today, we're excited to highlight how the SRE Agent now extends these capabilities to Azure API Management (APIM), delivering deep operational visibility, guided troubleshooting, and intelligent remediation for customers running critical APIs at scale.

API Management sits at the center of API application architectures, acting as a unified entry point for services, enforcing security, transforming requests, and routing traffic to backends. Ensuring the reliability of this layer is crucial - but as systems grow more distributed, it becomes harder to isolate failures, detect misconfigurations, or trace degraded performance to its root cause. The SRE Agent helps APIM users stay ahead of these challenges by providing both diagnostics and remediation tailored for API Management environments. You can ask the SRE Agent direct API Management questions or raise concerns such as:

"My API Management is giving me 503 errors"
"We updated our policies yesterday, and now the backend is timing out."
"Can you help me figure out why requests to our billing API are failing?"
"Show me recent changes to our APIM instance."
"What's the failure rate on our orders operation this week?"

Proactively Monitor API Management App Health
The SRE Agent continuously monitors the overall health of your API Management service. It tracks key metrics such as CPU utilization, latency, error rates, and availability over time, surfacing abnormal patterns and offering insight into capacity. This helps teams anticipate issues before they impact users and plan for scaling with confidence.

Visualize Backend Connections and Health
One of the most valuable APIM capabilities introduced with the agent is backend mapping. The agent can identify which backend services each API operation routes to and visualize the health of those backends. This makes it much easier to answer operational questions like:

"Which backend is responsible for the spike in errors on my /checkout API?"
"Are there any timeouts happening from APIM to service X?"

Drill into Backend App Issues
If the root cause lies in a backend application - whether it's a service hosted in Azure Container Apps, Azure Functions, App Service, or another compute platform - the SRE Agent can go further. It analyzes backend-specific metrics such as memory and CPU usage, response time distribution, recent deployments, and any logged exceptions. The agent correlates this backend behavior with the observed degradation at the API Management layer to provide a full-stack view of what's happening. For example:

"Your backend container app failed 37% of requests in the last hour due to out-of-memory errors. This correlated with a 5xx spike at the /stock/check API operation."

Detect and Fix Configuration Issues
The SRE Agent also helps uncover common configuration issues that lead to downtime or silent failures, including:

Malformed API policies
Missing or misapplied network rules (NSGs, VNet)
Incorrect scaling configuration or quota enforcement

But it doesn't stop at diagnostics. Where safe and possible, the agent can also perform remediation with your approval - for example, by adjusting NSG rules or scaling your API Management instance.
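To make the monitoring side concrete, the sketch below shows the kind of metric query the agent automates for you, using the azure-monitor-query library against an APIM instance. The resource ID is a placeholder, and the metric name is an assumption to check against your instance's metric list; this illustrates the mechanics, not the agent's actual implementation.

```python
# Sketch: querying an APIM metric the way the SRE Agent might surface it.
# Assumptions: azure-identity and azure-monitor-query are installed, and the
# metric name below matches your APIM tier (check your instance's metric list).
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

client = MetricsQueryClient(DefaultAzureCredential())

# Placeholder resource ID for your API Management instance.
resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.ApiManagement/service/<apim-name>"
)

response = client.query_resource(
    resource_id,
    metric_names=["Capacity"],  # assumed metric; gateway CPU/memory metrics vary by tier
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.AVERAGE],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.average)
```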
Built for Teams that Depend on APIM
If API Management is critical to your infrastructure, the SRE Agent gives you an extra layer of confidence - offering the clarity and tooling needed to maintain uptime, reduce operational overhead, and catch issues before they escalate. The APIM-specific capabilities of the SRE Agent are now available and can be used in any SRE Agent resource (currently in preview).

Sign up for preview access

We're excited to bring this level of intelligence and automation to APIM, and we're looking forward to your feedback as we continue to evolve the experience.

Additional resources
Azure SRE Agent overview (preview) | Microsoft Learn
Introducing Azure SRE Agent | Microsoft Community Hub

Workspaces Are Now Generally Available in Azure API Management Premium v2
We’re excited to announce the general availability of workspaces and workspace gateways in the Premium v2 tier of Azure API Management! Premium v2 tier remains in preview at the time of this announcement. Workspaces enable management and governance of APIs at scale. Whether you're supporting hundreds of APIs across teams or enabling new lines of business to independently manage their APIs, workspaces make it easier to adopt a federated API management model with central governance, observability, and security. To start using workspaces in Premium v2: Create an API Management Premium v2 service in a region where workspaces are available. Follow the documentation to create and set up workspaces. Learn more about workspaces.Announcing the Public Preview of the Applications feature in Azure API management
API Management now supports built-in OAuth 2.0 application-based access to product APIs using the client credentials flow. This feature allows API managers to register Microsoft Entra ID applications, streamlining secure API access for developers through OAuth 2.0 authorization. API publishers and developers can now more effectively manage client identity, access, and authorization flows.

With this feature:
API managers can identify which products require OAuth authorization by setting a product property to enable application-based access
API managers can create and manage client applications and assign them access to specific products
Developers can see their registered applications in the API Management developer portal and use OAuth tokens to securely call APIs and products
OAuth tokens presented in API requests are validated by the API Management gateway to authorize access to the product's APIs

This feature simplifies identity and access management in API programs, enabling a more secure and scalable approach to API consumption.

Enable OAuth authorization
API managers can now identify specific products that are protected by Microsoft Entra ID by enabling "Application based access". This ensures that only valid client applications holding a secure OAuth token from Microsoft Entra ID can access the APIs associated with this product. An application is created in Microsoft Entra corresponding to the product, with an appropriate app role.

Register client applications and assign products
API managers can register client applications, identify specific developers as owners of these applications, and assign products to these applications. This creates a new application in Microsoft Entra and assigns API permissions to access the product.

Securely access the API using client applications
Developers can log in to the API Management developer portal and see the applications assigned to them. They can retrieve the application credentials, call Microsoft Entra to get an OAuth token, and use this token to call the APIM gateway and securely access the product/API.

Preview limitations
The public preview of Applications is a limited-access feature. To participate in the preview and enable Applications in your APIM service instance, you must complete a request form. The Azure API Management team will review your request and respond via email within five business days.

Learn more
Securely access product APIs with Microsoft Entra applications
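For context, the developer-side token acquisition described above is a standard client credentials flow. Here is a minimal sketch with MSAL; the tenant ID, app IDs, scope, and gateway URL are placeholders, and the exact scope value for your product's Entra application comes from the feature's documentation.

```python
# Sketch: client credentials flow against Microsoft Entra ID, then calling
# an API behind the APIM gateway with the resulting bearer token.
# The IDs, scope, and URL below are placeholders, not real values.
import msal
import requests

app = msal.ConfidentialClientApplication(
    client_id="<entra-app-client-id>",
    client_credential="<client-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

# Scope of the product's Entra application (assumed .default form).
result = app.acquire_token_for_client(scopes=["api://<product-app-id>/.default"])
if "access_token" not in result:
    raise RuntimeError(result.get("error_description"))

resp = requests.get(
    "https://<apim-name>.azure-api.net/<api-path>/<operation>",
    headers={"Authorization": f"Bearer {result['access_token']}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```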
Azure API Management: Your Auth Gateway for MCP Servers

The Model Context Protocol (MCP) is quickly becoming the standard for integrating tools 🛠️ with agents 🤖, and Azure API Management is at the forefront, ready to support this open-source protocol 🚀. You may have already encountered discussions about MCP, so let's clarify some key concepts:

Model Context Protocol (MCP) is a standardized way (a protocol) for AI models to interact with external tools (and either read data or perform actions) and to enrich context for ANY language model.
AI Agents/Assistants are autonomous LLM-powered applications with the ability to use tools to connect to external services required to accomplish tasks on behalf of users.
Tools are components made available to agents, allowing them to interact with external systems, perform computation, and take actions to achieve specific goals.
Azure API Management: As a platform-as-a-service, API Management supports the complete API lifecycle, enabling organizations to create, publish, secure, and analyze APIs with built-in governance, security, analytics, and scalability.

New Cool Kid in Town - MCP
AI agents are becoming widely adopted due to enhanced Large Language Model (LLM) capabilities. However, even the most advanced models face limitations due to their isolation from external data. Each new data source requires a custom implementation to extract, prepare, and make data accessible to the model(s) - a lot of heavy lifting. Anthropic developed an open-source standard, the Model Context Protocol (MCP), to connect your agents to external data sources, whether local (databases or computer files) or remote services (systems available over the internet, e.g., through APIs).

MCP Hosts: LLM applications such as chat apps or AI assistants in your IDEs (like GitHub Copilot in VS Code) that need to access external capabilities
MCP Clients: Protocol clients that maintain 1:1 connections with servers, inside the host application
MCP Servers: Lightweight programs that each expose specific capabilities and provide context, tools, and prompts to clients
MCP Protocol: Transport layer in the middle

At its core, MCP follows a client-server architecture where a host application can connect to multiple servers. Whenever your MCP host or client needs a tool, it connects to the MCP server. The MCP server then connects to, for example, a database or an API. MCP hosts and servers connect with each other through the MCP protocol. You can create your own custom MCP servers that connect to your own or organizational data sources. For a quick start, please visit our GitHub repository to learn how to build a remote MCP server using Azure Functions without authentication: https://aka.ms/mcp-remote

Remote vs. Local MCP Servers
The MCP standard supports two modes of operation:
Remote MCP servers: MCP clients connect to MCP servers over the Internet, establishing a connection using HTTP and Server-Sent Events (SSE), and authorizing the MCP client access to resources on the user's account using OAuth.
Local MCP servers: MCP clients connect to MCP servers on the same machine, using stdio as a local transport method.

Azure API Management as the AI Auth Gateway
Now that we have learned that MCP servers can connect to remote services through an API, the question arises: how can we expose our remote MCP servers in a secure and scalable way? This is where Azure API Management comes in - a way to securely and safely expose tools as MCP servers.
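Before looking at what API Management adds, it helps to see what a single MCP tool invocation looks like on the wire: a JSON-RPC 2.0 request posted to the server's message endpoint. The endpoint path, tool name, and arguments below are illustrative placeholders; a real MCP client also negotiates the endpoint during the SSE handshake and performs an initialize exchange first.

```python
# Sketch: the JSON-RPC 2.0 shape of an MCP "tools/call" request.
# The URL, tool name, and arguments are illustrative placeholders;
# a real MCP client performs an initialize handshake before calling tools.
import json

import requests

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_issues",                # hypothetical tool name
        "arguments": {"repo": "contoso/app"}  # hypothetical arguments
    },
}

resp = requests.post(
    "https://<gateway>.azure-api.net/mcp/messages",  # placeholder endpoint
    data=json.dumps(request),
    headers={"Content-Type": "application/json"},
    timeout=30,
)
print(resp.status_code, resp.text)
```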
Azure API Management provides:
Security: AI agents often need to access sensitive data. API Management, acting as a remote MCP proxy, safeguards organizational data through authentication and authorization.
Scalability: As the number of LLM interactions and external tool integrations grows, API Management ensures the system can handle the load.

Security remains a critical piece of building MCP servers, as agents need to securely connect to protected endpoints (tools) to perform certain actions or read protected data. When building remote MCP servers, you need a way to allow users to log in (authenticate) and to grant the MCP client access to resources on their account (authorize).

MCP - Current Authorization Challenges
State: 4/10/2025
Recent changes in MCP authorization have sparked significant debate within the community.

🔍 Key challenges with the authorization changes: The MCP server is now treated as both a resource server AND an authorization server. This dual role has fundamental implications for MCP server developers and runtime operations.

💡 Our solution: To address these challenges, we recommend using Azure API Management as your authorization gateway for remote MCP servers. 🔗 For an enterprise-ready solution, please check out our azd up sample repo to learn how to build a remote MCP server using Azure API Management as your authentication gateway: https://aka.ms/mcp-remote-apim-auth

The Authorization Flow
The workflow involves three core components: the MCP client, the APIM gateway, and the MCP server, with Microsoft Entra managing authentication (AuthN) and authorization (AuthZ). Using the OAuth protocol, the client starts by calling the APIM gateway, which redirects the user to Entra for login and consent. Once authenticated, Entra provides an access token to the gateway, which then exchanges a code with the client to generate an MCP server token. This token allows the client to communicate securely with the server via the gateway, ensuring user validation and scope verification. Finally, the MCP server establishes a session key for ongoing communication through a dedicated message endpoint. Diagram source: https://aka.ms/mcp-remote-apim-auth-diagram

Conclusion
Azure API Management (APIM) is an essential tool for enterprise customers looking to integrate AI models with external tools using the Model Context Protocol (MCP). In this blog, we've emphasized the simplicity of connecting AI agents to various data sources through MCP, streamlining previously complex implementations. Given the critical role of secure access to platforms and services for AI agents, APIM offers robust solutions for managing OAuth tokens and ensuring secure access to protected endpoints, making it an invaluable asset for enterprises, despite the challenges of authentication.

API Management: An Enterprise Solution for Securing MCP Servers
Azure API Management is an essential tool for enterprise customers looking to integrate AI models with external tools using the Model Context Protocol (MCP). It is designed to help you securely expose your remote MCP servers. MCP servers are still very new, and as the technology evolves, API Management provides an enterprise-ready solution that will evolve with the latest technology. Stay tuned for further feature announcements soon!

Acknowledgments
This post and work were made possible thanks to the hard work and dedication of our incredible team.
Special thanks to Pranami Jhawar, Julia Kasper, Julia Muiruri, Annaji Sharma Ganti, Jack Pa, Chaoyi Yuan and Alex Vieira for their invaluable contributions.

Additional Resources
MCP Client Server integration with APIM as AI gateway:
Blog post: https://aka.ms/remote-mcp-apim-auth-blog
Sequence diagram: https://aka.ms/mcp-remote-apim-auth-diagram
APIM lab: https://aka.ms/ai-gateway-lab-mcp-client-auth
Python: https://aka.ms/mcp-remote-apim-auth
.NET: https://aka.ms/mcp-remote-apim-auth-dotnet
On-Behalf-Of authorization: https://aka.ms/mcp-obo-sample
3rd-party APIs – backend auth via Credential Manager:
Blog post: https://aka.ms/remote-mcp-apim-lab-blog
APIM lab: https://aka.ms/ai-gateway-lab-mcp
YouTube video: https://aka.ms/ai-gateway-lab-demo

Logic Apps Aviators Newsletter - July 25
In this issue:
Ace Aviator of the Month
News from our product group
News from our community

Ace Aviator of the Month
July's Ace Aviator: Şahin Özdemir

What's your role and title? What are your responsibilities?
I currently work for Rubicon Cloud Advisor, a Dutch company specialized in digital transformations, cloud adoption and AI implementation. At Rubicon I fulfil the role of Application and Integration Architect, while also being a Professional Scrum Trainer at Scrum.org. Even though this sounds like two completely different roles, in practice both go closely hand in hand. I firmly believe that good architecture, a strong development process, and application of best practices are key pillars for delivering high-quality solutions to my clients. Therefore, both roles come in handy in my day-to-day job (combined with my strong background in software development).

I work closely with companies and their teams in making their journey to Azure - especially Azure Integration Services - successful. Most of the time this journey starts with a business need or challenge, and I work with my clients to get a deeper understanding of their needs. This results in further analysis, capturing requirements, defining architecture, solution design, setting the stage for development (ALM) and being involved in quality assurance. At the same time, I think it's important to stay relevant from a technical perspective. That's why I also like being involved with implementing the solution. This way, I hear the technical struggles teams face and I can help them find the right solution.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?
Not a single day is the same, although there are some recurring activities. Specific parts of my day (or sprint) are dedicated to Scrum-related activities - whether it's participating in the daily scrum, having sprint reviews with stakeholders, planning the next sprint, refining the backlog with the team, or just aligning with the PO or stakeholders.

I'm frequently involved in cross-organizational meetings focused on projects at scale. I contribute from the perspective of architecture, technical expertise, and integration strategy. In my role as a solution architect, I'm engaged in designing and implementing a critical integration platform for my client. This platform connects and exchanges data between many internal departments and external vendors - an effort that requires frequent alignment and collaboration. I'm always looking for opportunities to expand our hybrid integration platform. Exploring how Azure resources may add value to our platform, and working closely with the team to realize such improvements to the platform's capabilities, is something I enjoy.

Outside of the regular meetings, I often focus on designing new integrations and having working sessions with stakeholders to understand what they want. Based on these discussions, I assess the technical and architectural aspects of the solution. Every integration that lands on the platform is measured against both architectural and development principles and guidelines. I contribute to reviewing the solutions that have been developed, ensuring that each integration is high-quality, consistent, easy to understand, and maintainable. I support the platform team wherever possible, and if time permits, I develop parts of the solution myself - I see this as a great way to stay relevant from a technological perspective.
All the spare time I have, I spend on writing technical articles that may help others.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?
Because I enjoy helping others. Every day I work with a team of smart professionals on integration solutions and custom code within the Azure platform. Along the way, we regularly encounter challenges, limitations, or issues. In those moments, it's incredibly helpful to find solutions online or to have a community that can think along with you. Over the past few years, there have been many occasions where I just couldn't find a solution online for a technical problem with Logic Apps. In these cases, we either came up with a creative solution ourselves or received support from Microsoft. When the integration community faces a similar challenge, it's pretty much wasteful to tackle the same hurdles again. By documenting an approach or solution, others may save their invaluable time looking for a solution.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?
It is OK that you don't know everything. Just start doing, experiment, stay curious, challenge yourself, don't be afraid to ask questions, fail, learn and keep going!

What has helped you grow professionally?
I have spent a fair amount of my career at a big consulting firm. I started off as a software engineer and worked all the way up to senior manager and architect. A long journey like that gives great and well-dosed opportunities and learning experiences: focusing on your technical (in-depth) skillset first, followed by working on your soft skills like consulting, guiding and leading teams, solutioning and architecture. If I had not followed this path at that company, I would not be the person I am now professionally. Be OK with the fact that growth doesn't happen overnight - no shortcuts, no magic pills. It's like a good red wine that needs time to mature. So do many challenging projects, become all-round and then choose a specialization, ask for constructive feedback, fail many times and take your time to reflect and learn. And don't forget to have a strong work ethic and an ongoing curiosity to learn new things. In the end I found that - from a technological perspective - quality attributes (the "-ilities"), enterprise application integration and Scrum made my heart skip a beat. So my advice is to always pursue what brings you joy!

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?
Overall, I must say that I'm happy with the current state of Logic Apps. Nevertheless, if I had a magic wand, I would like to see the service plans for Logic Apps Standard brought in line with Function Apps. The plans for Function Apps have much better tiers from a memory, cores and pricing perspective. And being able to scale out and in based on specific metrics is more flexible than Logic Apps Standard currently offers. Having more CPU/memory available in the plans would also improve the overall performance of Logic Apps in general, even though performance optimizations of many actions would also be more than welcome.

What I currently really miss in the HTTP connector (and possibly others) is the ability to have better control over request timeouts. Even though the setting is there, it is capped at 4 minutes max. In practice, we need to deliver data to external APIs that work synchronously and take more time to complete.
Giving better control over these timeouts would make the usability of workflows even better! And even though some nice additions to the initialization of variables have been made recently, I would like to see the ability to initialize variables at any point in the workflow. For example, the foreach loop can be executed in parallel, and therefore the current global variables are not thread-safe, which leads to unexpected behavior.

News from our product group

Logic Apps Live June 2025
Missed Logic Apps Live in June? You can watch it here. We focused on the big Logic Apps announcements from Integrate 2025. There are a lot of great things to check!

Feedback Opportunity: SRE Agent + Logic Apps
Discover the new Applications feature in Azure API Management, enabling OAuth-based access to APIs and products. Streamline secure API access with built-in OAuth 2.0 application-based authorization.

Configure SQL Storage for Standard Logic Apps
Azure Logic Apps traditionally rely on Azure Storage to manage workflow states and runtime data. However, with the introduction of SQL as a storage provider (currently in preview), developers now have a compelling alternative that offers greater control, flexibility, and integration with existing SQL infrastructure. This post explores the benefits, configuration steps, and considerations for using SQL storage with Standard Logic Apps.

Announcing General Availability: Azure Logic Apps Standard Automated Test Framework
We're excited to announce the General Availability (GA) of the Azure Logic Apps Standard Automated Test Framework - a major step forward in enabling developers to build, test, and maintain enterprise-grade workflows with confidence and agility.

Announcing General Availability: Azure Logic Apps Standard Custom Code with .NET 8
We're excited to announce the General Availability (GA) of Custom Code support in Azure Logic Apps Standard with .NET 8. This release marks a significant step forward in enabling developers to build more powerful, flexible, and maintainable integration workflows using familiar .NET tools and practices. With this capability, developers can now embed custom .NET 8 code directly within their Logic Apps Standard workflows. This unlocks advanced logic scenarios, promotes code reuse, and allows seamless integration with existing .NET libraries and services - making it easier than ever to build enterprise-grade solutions on Azure.

Business Process Tracking Reaches General Availability
Business Process Tracking provides key insights to business stakeholders from your Logic Apps (Standard) implementation in an efficient and timely manner. Today, we are pleased to announce the General Availability of this capability, allowing customers to leverage it in their production workloads.

Announcement: General Availability of Logic Apps Hybrid Deployment Model
We're excited to announce the Public Preview of two major integrations that bring the power of Azure Logic Apps to AI Agents in Foundry - Logic Apps as Tools and AI Agent Service Connector. Learn more in our announcement post!

Announcing Public Preview: Organizational Templates in Azure Logic Apps
We're excited to announce the Public Preview of Organizational Templates in Azure Logic Apps - empowering teams to author, share, and reuse automation patterns across their organization. With this release, we're also rolling out a brand-new UI experience to easily create templates directly from your workflows - no manual packaging required!
OpenTelemetry in Azure Logic Apps (Standard and Hybrid)
OpenTelemetry provides a unified, vendor-agnostic framework for collecting telemetry data - logs, metrics, and traces - across different services and infrastructure layers. It simplifies monitoring and makes it easier to integrate with a variety of observability backends such as Azure Monitor, Grafana Tempo, Jaeger, and others. For Logic Apps - especially when deployed in hybrid or on-premises scenarios - OpenTelemetry is a powerful addition that elevates diagnostic capabilities beyond the default Application Insights telemetry.

Logic App Standard - When High Memory / CPU Usage Strikes and What to Do
Monitoring your applications is essential, as it ensures that you know what's happening and are not caught by surprise when something happens. One possible event is the performance of your application starting to decrease, with processing becoming slower than usual. This may happen for various reasons, and in this blog post we discuss high memory and CPU usage, why it affects your Logic App, and some of the causes we've seen identified as the root cause for some customers.

Introducing Agent in a Day
Agent in a Day represents a fantastic opportunity for customers to participate in hackathon-style contests where attendees learn how to build agents and then can apply them to their unique business use cases. For partners, Agent in a Day represents a great way to engage your customers by building agents with them and uncovering new use cases.

Introducing Confluent Kafka Connector (Public Preview)
We are pleased to announce the introduction of the Confluent Kafka Connector in Logic Apps (Standard), which allows you to both send and receive messages between Logic Apps and Confluent Kafka. Confluent Kafka is a distributed streaming platform for building real-time data pipelines and streaming applications. It is used across many industries, including financial services, omnichannel retail, autonomous cars, fraud detection services, microservices and IoT deployments. Our current connector offering supports both triggers (receive) and sending (publish) within Logic Apps.

News from our community

Logic App Standard: Throw exceptions like a pro!
Post by Şahin Özdemir
Learn how to throw exceptions in Logic App Standard using a simple Compose action - no code needed, just clever workflow design.

Azure Logic Apps: are you handling large blobs? Keep memory usage under control.
Post by Stefano Demiliani
Struggling with large blob files in Logic Apps? Learn how to keep memory usage under control and avoid out-of-memory errors with smart workflow design and a few performance-boosting tricks.

De-SOAPing Services: SOAP to REST using Azure API Management
Video by Stephen W Thomas
Struggling with legacy SOAP integrations from BizTalk to Azure? Check out this video on simplifying SOAP-to-REST conversions using Azure API Management and learn how easily you can manage SOAP envelopes and streamline your Logic Apps integrations!

Integrating Entra ID and AI Agent workflows in Azure Logic Apps
Post by Brian Veldman
Discover how to build AI-powered workflows in Azure Logic Apps that interact with Entra ID, automate tasks, and adapt dynamically using agentic tools and OpenAI models.
Advanced KQL Queries for Logic Apps in Application Insights: A Practical Guide
Post by Dieter Gobeyn
Boost Logic App performance with advanced KQL queries in Application Insights - spot bottlenecks, analyze slow actions, and optimize workflows without upgrading your hosting plan.

How to Build an AI Agent with Azure Logic Apps
Post by Cameron McKay
Learn how to build your first AI Agent in Azure Logic Apps using Agent Loop - connect to OpenAI, design smart prompts, and automate tasks like weather reporting with low-code workflows.

You Can Now Initialize All Your Variables In One Single Action
Post by Luis Rigueira
You can now initialize multiple variables in Logic Apps with a single action - making your workflows cleaner, faster, and easier to manage. It is a Friday Fact, brought to you by Luis Rigueira!

Integration Insights Podcast: The Future of Integration
Video by Sagar Sharma and Jochen Toelen
In this two-part episode of the Integration Insights podcast, Sagar, Jochen and Kent dive into how integration is evolving in a cloud-first world. From BizTalk migrations to hybrid deployments with Azure Arc, they share practical insights and best practices to future-proof your integration strategy. A must-listen! You can watch part 2 here.

Event Grid vs Service Bus vs Event Hubs vs Storage Queues: Choosing the Right Messaging Backbone in Azure
Post by Prashant Singh
Confused by Azure's messaging options? This guide breaks down Event Grid, Service Bus, Event Hubs, and Storage Queues - helping you choose the right tool for real-time events, telemetry, enterprise workflows, or lightweight tasks.

IntelliSense in Logic Apps Just Got Smarter - Matching Brackets in the Expression Editor!
Post by Sandro Pereira
Logic Apps just got a lot friendlier - bracket matching in the expression editor now highlights pairs as you type, making it easier to write and debug complex expressions. A Friday Fact from Sandro Pereira.

How to Build Resilient Integrations for Mission-Critical Systems
Post by Lilan Sameera
Learn how to build resilient integrations for mission-critical systems using Logic Apps, Service Bus, and Event Hub - ensuring reliable data delivery, smart retries, and clean outputs even under pressure.

Enhancing AI Integrations with MCP and Azure API Management
As AI agents and assistants become increasingly central to modern applications and experiences, the need for seamless, secure integration with external tools and data sources is more critical than ever. The Model Context Protocol (MCP) is emerging as a key open standard enabling these integrations - allowing AI models to interact with APIs, databases and other services in a consistent, scalable way.

Understanding MCP
MCP utilizes a client-host-server architecture built upon JSON-RPC 2.0 for messaging. Communication between clients and servers occurs over defined transport layers, primarily:
stdio: Standard input/output, suitable for efficient communication when the client and server run on the same machine.
HTTP with Server-Sent Events (SSE): Uses HTTP POST for client-to-server messages and SSE for server-to-client messages, enabling communication over networks, including remote servers.

Why MCP Matters
While Large Language Models (LLMs) are powerful, their utility is often limited by their inability to access real-time or proprietary data. Traditionally, integrating new data sources or tools required custom connectors/implementations and significant engineering effort. MCP addresses this by providing a unified protocol for connecting agents to both local and remote data sources, unifying and streamlining integrations.

Leveraging Azure API Management for remote MCP servers
Azure API Management is a fully managed platform for publishing, securing, and monitoring APIs. By treating MCP server endpoints like other backend APIs, organizations can apply familiar governance, security, and operational controls. With MCP adoption, the need for robust management of these backend services will intensify. API Management retains a vital role in governing these underlying assets by:
Applying security controls to protect the backend resources.
Ensuring reliability.
Effective monitoring and troubleshooting with tracing of requests and context flow.

In this blog post, I will walk you through a practical example: hosting an MCP server behind Azure API Management, configuring credential management, and connecting with GitHub Copilot.

A Practical Example: Automating Issue Triage
To follow along with this scenario, please check out our Model Context Protocol (MCP) lab available at AI-Gateway/labs/model-context-protocol

Let's move from theory to practice by exploring how MCP, Azure API Management (APIM) and GitHub Copilot can transform a common engineering workflow. Imagine you're an engineering manager aiming to streamline your team's issue triage process - reducing manual steps and improving efficiency.

Example workflow:
Engineers log bugs/feature requests as GitHub issues
Following a manual review, a corresponding incident ticket is generated in ServiceNow

This manual handoff is inefficient and error prone. Let's see how we can automate this process - securely connecting GitHub and ServiceNow, and enabling an AI agent (GitHub Copilot in VS Code) to handle triage tasks on your behalf. A significant challenge in this integration involves securely managing delegated access to backend APIs, like GitHub and ServiceNow, from your MCP server. Azure API Management's credential manager solves this by centralizing secure credential storage and facilitating the secure creation of connections to your third-party backend APIs.
Build and deploy your MCP server(s)
We'll start by building two MCP servers:

GitHub Issues MCP Server: Provides tools to authenticate on GitHub (authorize_github), retrieve user information (get_user) and list issues for a specified repository (list_issues).
ServiceNow Incidents MCP Server: Provides tools to authenticate with ServiceNow (authorize_servicenow), list existing incidents (list_incidents) and create new incidents (create_incident).

We are using Azure API Management to secure and protect both MCP servers, which are built using Azure Container Apps. Azure API Management's credential manager centralizes secure credential storage and facilitates the secure creation of connections to your backend third-party APIs.

Client auth: You can leverage API Management subscriptions to generate subscription keys, enabling client access to these APIs. Optionally, to further secure the /sse and /messages endpoints, we apply the validate-jwt policy to ensure that only clients presenting a valid JWT can access these endpoints, preventing unauthorized access. (See: AI-Gateway/labs/model-context-protocol/src/github/apim-api/auth-client-policy.xml)

After registering OAuth applications in GitHub and ServiceNow, we update APIM's credential manager with the respective Client IDs and Client Secrets. This enables APIM to perform OAuth flows on behalf of users, securely storing and managing tokens for backend calls to GitHub and ServiceNow.

Connecting your MCP Server in VS Code
With your MCP servers deployed and secured behind Azure API Management, the next step is to connect them to your development workflow. Visual Studio Code now supports MCP, enabling GitHub Copilot's agent mode to connect to any MCP-compatible server and extend its capabilities.

Open the Command Palette and type in MCP: Add Server...
Select server type as HTTP (HTTP or Server-Sent Events)
Paste in the Server URL
Provide a Server ID

This process automatically updates your settings.json with the MCP server configuration. Once added, GitHub Copilot can connect to your MCP servers and access the defined tools, enabling agentic workflows such as issue triage and automation. You can repeat these steps to add the ServiceNow MCP Server.

Understanding Authentication and Authorization with Credential Manager
When a user initiates an authentication workflow (e.g., via the authorize_github tool), GitHub Copilot triggers the MCP server to generate an authorization request and a unique login URL. The user is redirected to a consent page, where their registered OAuth application requests permissions to access their GitHub account. Azure API Management acts as a secure intermediary, managing the OAuth flow and token storage.

Flow of authorize_github:
Step 1 - Connection initiation: The GitHub Copilot agent invokes an SSE connection to API Management via the MCP client (VS Code)
Step 2 - Tool discovery: APIM forwards the request to the GitHub MCP server, which responds with the available tools
Step 3 - Authorization request: GitHub Copilot selects and executes the authorize_github tool. The MCP server generates an authorization_id for the chat session.
Step 4 - User consent: If it's the first login, APIM requests a login redirect URL from the MCP server. The MCP server sends the login URL to the client, prompting the user to authenticate with GitHub. Upon successful login, GitHub redirects the client with an authorization code.
Step 5 - Token exchange and storage: The MCP client sends the authorization code to API Management. APIM exchanges the code for access and refresh tokens from GitHub, securely stores them, and creates an Access Control List (ACL) for the service principal.
Step 6 - Confirmation: APIM confirms successful authentication to the MCP client, and the user can now perform authenticated actions, such as accessing private repositories.

Check out the Python logic for how to implement it: AI-Gateway/labs/model-context-protocol/src/github/mcp-server/mcp-server.py

Understanding Tool Calling with Underlying APIs in API Management
Using the list_issues tool (see the sketch after this article for a rough illustration):
Connection confirmed: APIM confirms the connection to the MCP client.
Issue retrieval: The MCP client requests issues from the MCP server. The MCP server attaches the authorization_id as a header and forwards the request to APIM. The list of issues is returned to the agent.

You can use the same process to add the ServiceNow MCP Server. With both servers connected, the GitHub Copilot agent can extract issues from a private repo in GitHub and create new incidents in ServiceNow, automating your triage workflow. You can define additional tools such as a suggest_assignee tool, assign_engineer tool, update_incident_status tool, notify_engineer tool, request_feedback tool and others to demonstrate a truly closed-loop, automated engineering workflow - from issue creation to resolution and feedback. Take a look at this brief demo showcasing the entire end-to-end process.

Summary
Azure API Management (APIM) is an essential tool for enterprise customers looking to integrate AI models with external tools using the Model Context Protocol (MCP). In this blog, we demonstrated how Azure API Management's credential manager enables the secure creation of connections to your backend APIs. By integrating MCP servers with VS Code and leveraging APIM for OAuth flows and token management, you can enable secure, agentic automation across your engineering tools. This approach not only streamlines workflows like issue triage and incident creation but also ensures enterprise-grade security and governance for all APIs.

Additional Resources
Using Credential Manager will help with managing OAuth 2.0 tokens to backend services.
Client auth for remote MCP servers:
AZD up: https://aka.ms/mcp-remote-apim-auth
AI lab Client Auth: AI-Gateway/labs/mcp-client-authorization/mcp-client-authorization.ipynb
Blog post: https://aka.ms/remote-mcp-apim-auth-blog

If you have any questions or would like to learn more about how MCP and Azure API Management can benefit your organization, feel free to reach out to us. We are always here to help and provide further insights. Connect with us on LinkedIn (Julia Kasper & Julia Muiruri) and follow for more updates, insights, and discussions on AI integrations and API management.
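As a rough sketch of the tool-calling flow above, the MCP server's forwarding step might look like the following. The header name, endpoint path, and payload shape are illustrative assumptions based on this lab's description, not a fixed contract; only the Ocp-Apim-Subscription-Key header is standard APIM.

```python
# Sketch: the MCP server forwarding a list_issues call to APIM, attaching the
# session's authorization_id as a header so APIM can look up the stored token.
# Header name, URL, and payload shape are assumptions for illustration.
import requests

def list_issues(apim_base_url: str, subscription_key: str,
                authorization_id: str, repo: str) -> list:
    resp = requests.get(
        f"{apim_base_url}/github/issues",  # hypothetical APIM-fronted route
        params={"repo": repo},
        headers={
            "Ocp-Apim-Subscription-Key": subscription_key,
            "Authorization-Id": authorization_id,  # hypothetical header name
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

issues = list_issues(
    "https://<apim-name>.azure-api.net", "<subscription-key>",
    "<authorization-id-from-session>", "contoso/app",
)
print(len(issues), "issues returned")
```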
Expose REST APIs as MCP servers with Azure API Management and API Center (now in preview)

As AI-powered agents and large language models (LLMs) become central to modern application experiences, developers and enterprises need seamless, secure ways to connect these models to real-world data and capabilities. Today, we're excited to introduce two powerful preview capabilities in the Azure API Management platform:
Expose REST APIs in Azure API Management as remote Model Context Protocol (MCP) servers
Discover and manage MCP servers using API Center as a centralized enterprise registry

Together, these updates help customers securely operationalize APIs for AI workloads and improve how APIs are managed and shared across organizations.

Unlocking the value of AI through secure API integration
While LLMs are incredibly capable, they are stateless and isolated unless connected to external tools and systems. Model Context Protocol (MCP) is an open standard designed to bridge this gap by allowing agents to invoke tools - such as APIs - via a standardized, JSON-RPC-based interface. With this release, Azure empowers you to operationalize your APIs for AI integration - securely, observably, and at scale.

1. Expose REST APIs as MCP servers with Azure API Management
An MCP server exposes selected API operations to AI clients over JSON-RPC via HTTP or Server-Sent Events (SSE). These operations, referred to as "tools," can be invoked by AI agents through natural language prompts. With this new capability, you can expose your existing REST APIs in Azure API Management as MCP servers - without rebuilding or rehosting them.

Addressing common challenges
Before this capability, customers faced several challenges when implementing MCP support:
Duplicated development effort: Building MCP servers from scratch often led to unnecessary work when existing REST APIs already provided much of the needed functionality.
Security concerns: Server trust - malicious servers could impersonate trusted ones; credential management - self-hosted MCP implementations often had to manage sensitive credentials like OAuth tokens.
Registry and discovery: Without a centralized registry, discovering and managing MCP tools was manual and fragmented, making it hard to scale securely across teams.

API Management now addresses these concerns by serving as a managed, policy-enforced hosting surface for MCP tools - offering centralized control, observability, and security.

Benefits of using Azure API Management with MCP
By exposing MCP servers through Azure API Management, customers gain:
Centralized governance for API access, authentication, and usage policies
Secure connectivity using OAuth 2.0 and subscription keys
Granular control over which API operations are exposed to AI agents as tools
Built-in observability through APIM's monitoring and diagnostics features

How it works
MCP servers: In your API Management instance, navigate to MCP servers.
Choose an API: Select + Create a new MCP Server and pick the REST API you wish to expose.
Configure the MCP server: Select the API operations you want to expose as tools. These can be all or a subset of your API's methods.
Test and integrate: Use tools like MCP Inspector or Visual Studio Code (in agent mode) to connect, test, and invoke the tools from your AI host.

Getting started and availability
This feature is now in public preview and being gradually rolled out to early access customers.
To use the MCP server capability in Azure API Management:

Prerequisites
Your APIM instance must be on a SKUv1 tier: Premium, Standard, or Basic
Your service must be enrolled in the AI Gateway early update group (activation may take up to 2 hours)
Use the Azure portal with a feature flag: append ?Microsoft_Azure_ApiManagement=mcp to your portal URL to access the MCP server configuration experience

Note: Support for SKUv2 and broader availability will follow in upcoming updates. Full setup instructions and test guidance can be found via aka.ms/apimdocs/exportmcp.

2. Centralized MCP registry and discovery with Azure API Center
As enterprises adopt MCP servers at scale, the need for a centralized, governed registry becomes critical. Azure API Center now provides this capability - serving as a single, enterprise-grade system of record for managing MCP endpoints. With API Center, teams can:
Maintain a comprehensive inventory of MCP servers.
Track version history, ownership, and metadata.
Enforce governance policies across environments.
Simplify compliance and reduce operational overhead.

API Center also addresses enterprise-grade security by allowing administrators to define who can discover, access, and consume specific MCP servers - ensuring only authorized users can interact with sensitive tools. To support developer adoption, API Center includes:
Semantic search and a modern discovery UI.
Easy filtering based on capabilities, metadata, and usage context.
Tight integration with Copilot Studio and GitHub Copilot, enabling developers to use MCP tools directly within their coding workflows.

These capabilities reduce duplication, streamline workflows, and help teams securely scale MCP usage across the organization.

Getting started
This feature is now in preview and accessible to customers:
https://aka.ms/apicenter/docs/mcp
AI Gateway Lab | MCP Registry

3. What's next
These new previews are just the beginning. We're already working on:
Azure API Management (APIM) - passthrough MCP server support: We're enabling APIM to act as a transparent proxy between your APIs and AI agents, with no custom server logic needed. This will simplify onboarding and reduce operational overhead.
Azure API Center (APIC) - deeper integration with Copilot Studio and VS Code: Today, developers must perform manual steps to surface API Center data in Copilot workflows. We're working to make this experience more visual and seamless, allowing developers to discover and consume MCP servers directly from familiar tools like VS Code and Copilot Studio.

For questions or feedback, reach out to your Microsoft account team or visit:
Azure API Management documentation
Azure API Center documentation

— The Azure API Management & API Center Teams

Autoscaling Now Available in Azure API Management v2 Tiers
Gateway-Level Metrics: Deep Insight into Performance
Azure API Management now exposes fine-grained metrics for each Azure API Management v2 gateway instance, giving you more control and observability. These enhancements give you deeper visibility into your infrastructure and the ability to scale automatically based on real-time usage - without manual effort.

Key Gateway Metrics
CPU Percentage of Gateway – available in Basic v2, Standard v2, and Premium v2
Memory Percentage of Gateway – available in Basic v2 and Standard v2

These metrics are essential for performance monitoring, diagnostics, and intelligent scaling.

Native Autoscaling: Adaptive, Metric-Driven Scaling
With gateway-level metrics in place, Azure Monitor autoscale rules can now drive automatic scaling of Azure API Management v2 gateways.

How It Works
You define scaling rules that automatically increase or decrease gateway instances based on:
CPU percentage
Memory percentage (for Basic v2 and Standard v2)

Autoscale evaluates these metrics against your thresholds and acts accordingly, eliminating the need for manual scaling or complex scripts.

Benefits of Autoscaling in Azure API Management v2 Tiers
Autoscaling in Azure API Management brings several critical benefits for operational resilience, efficiency, and cost control:
Reliability: Maintain consistent performance by automatically scaling out during periods of high traffic. Your APIs stay responsive and available - even under sudden load spikes.
Operational Efficiency: Automated scaling eliminates manual, error-prone intervention. This allows teams to focus on innovation, not infrastructure management.
Cost Optimization: When traffic drops, autoscale automatically scales in to reduce the number of gateway instances - helping you save on infrastructure costs without sacrificing performance.

Use Case Highlights
Autoscaling is ideal for:
APIs with unpredictable or seasonal traffic
Enterprise systems needing automated resiliency
Teams seeking cost control and governance
Premium environments that demand always-on performance

Get Started Today
Enabling autoscaling is easy via the Azure portal (a scripted alternative is sketched after these steps):
Open your API Management instance
Go to Settings > Scale out (Autoscale)
Enable autoscaling and define rules using gateway metrics
Monitor performance in real time via Azure Monitor

Configuration walkthrough: Autoscale your Azure API Management v2 instance
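For teams that prefer infrastructure-as-code over the portal steps above, here is a minimal sketch using the azure-mgmt-monitor package. The metric name is an assumption derived from the display name "CPU Percentage of Gateway", and the dictionary shapes follow the Azure Monitor autoscale resource model; verify both against the autoscale documentation before relying on this.

```python
# Sketch: defining an Azure Monitor autoscale rule for an APIM v2 gateway.
# Assumptions: azure-identity and azure-mgmt-monitor are installed; the
# metric name and dict shapes below are unverified placeholders to confirm
# against the Azure Monitor autoscale docs for your APIM tier.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<sub-id>"
client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

apim_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/<rg>"
    "/providers/Microsoft.ApiManagement/service/<apim-name>"
)

scale_out_rule = {
    "metric_trigger": {
        "metric_name": "CpuPercentageOfGateway",  # assumed metric ID; verify
        "metric_resource_uri": apim_id,
        "time_grain": "PT1M",
        "statistic": "Average",
        "time_window": "PT10M",
        "time_aggregation": "Average",
        "operator": "GreaterThan",
        "threshold": 70,
    },
    "scale_action": {
        "direction": "Increase",
        "type": "ChangeCount",
        "value": "1",
        "cooldown": "PT10M",
    },
}

setting = client.autoscale_settings.create_or_update(
    resource_group_name="<rg>",
    autoscale_setting_name="apim-gateway-autoscale",
    parameters={
        "location": "eastus",
        "target_resource_uri": apim_id,
        "enabled": True,
        "profiles": [{
            "name": "default",
            "capacity": {"minimum": "1", "maximum": "5", "default": "1"},
            "rules": [scale_out_rule],
        }],
    },
)
print(setting.id)
```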