azure functions
Announcing a flexible, predictable billing model for Azure SRE Agent
Billing for Azure SRE Agent will start on September 1, 2025. Announced at Microsoft Build 2025, Azure SRE Agent is a pre-built AI agent for root cause analysis, uptime improvement, and operational cost reduction. Learn more about the billing model and example scenarios.

Announcing the public preview launch of Azure Functions durable task scheduler
We are excited to roll out the public preview of the Azure Functions durable task scheduler. This new Azure-managed backend is designed to provide high performance, improve reliability, reduce operational overhead, and simplify the monitoring of your stateful orchestrations. If you missed the initial announcement of the private preview, see this blog post.

Durable Task Scheduler
Durable Functions simplifies the development of complex, stateful, and long-running apps in a serverless environment. It allows developers to orchestrate multiple function calls without having to handle fault tolerance. It's great for scenarios like orchestrating multiple agents, distributed transactions, big data processing, batch processing like ETL (extract, transform, load), asynchronous APIs, and essentially any scenario that requires chaining function calls with state persistence. The durable task scheduler is a new storage provider for Durable Functions, designed to address the challenges and gaps identified by our durable customers with the existing bring-your-own-storage options.

Over the past few months, since the initial limited early access launch of the durable task scheduler, we've been working closely with our customers to understand their requirements and ensure they are fully supported in using the durable task scheduler successfully. We've also dedicated significant effort to strengthening the fundamentals – expanding regional availability, solidifying APIs, and ensuring the durable task scheduler is reliable, secure, scalable, and can be leveraged from any of the supported Durable Functions programming languages. Now, we're excited to open the gates and make the durable task scheduler available to the public. Some notable capabilities and enhancements over the existing "bring your own storage" options include:

Azure Managed
Unlike the other existing storage providers for Durable Functions, the durable task scheduler offers dedicated resources that are fully managed by Azure. You no longer need to bring your own storage account for storing orchestration and entity state, as it is completely built in. Looking ahead, the roadmap includes additional operational capabilities, such as auto-purging old execution history, handling failover, and other Business Continuity and Disaster Recovery (BCDR) capabilities.

Superior Performance and Scalability
Enhanced throughput for processing orchestrations and entities, ideal for demanding and high-scale applications. Efficiently manages sudden bursts of events, ensuring reliable and quick processing of your orchestrations across your function app instances.

The table below compares the throughput of the durable task scheduler provider and the Azure Storage provider. The function app used for this test runs on one to four Elastic Premium EP2 instances. The orchestration code was written in C# using the .NET isolated worker model on .NET 8. The same app was used for all storage providers, and the only change was the backend storage provider configuration. The test is triggered using an HTTP trigger which starts 5,000 orchestrations concurrently. The benchmark used a standard orchestrator function calling five activity functions sequentially, each returning a "Hello, {cityName}!" string. This specific benchmark showed that the durable task scheduler is roughly five times faster than the Azure Storage provider.
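The chaining pattern used in the benchmark can be expressed in any of the supported Durable Functions languages. As a rough sketch only (the benchmark itself was written in C# on the .NET isolated worker; the 'SayHello' activity name, the city list, and the bindings here are assumptions for illustration), a sequential orchestrator in PowerShell might look like this:

```powershell
# run.ps1 of an orchestrator function (an orchestrationTrigger binding is assumed).
param($Context)

# Call the same activity five times in sequence; Durable Functions checkpoints
# progress after each call, so the orchestration survives host restarts.
$cities = @('Tokyo', 'Seattle', 'London', 'Mumbai', 'Cairo')
$outputs = @()

foreach ($city in $cities) {
    # 'SayHello' is a hypothetical activity function that returns "Hello, <city>!".
    $outputs += Invoke-DurableActivity -FunctionName 'SayHello' -Input $city
}

return $outputs
```

The equivalent C# orchestrator uses context.CallActivityAsync in the same sequential loop; switching the backend storage provider (durable task scheduler versus Azure Storage) does not change this code, only the host configuration.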
Orchestration Debugging and Management Dashboard
Simplify the monitoring and management of orchestrations with an intuitive out-of-the-box UI. It offers clear visibility into orchestration errors and lifecycle events through detailed visual diagrams, providing essential information on exceptions and processing times. It also enables interactive orchestration management, allowing you to perform ad hoc actions such as suspending, resuming, raising events, and terminating orchestrations. Monitor the inputs and outputs passed between orchestrations and activities. Exceptions are surfaced, making it easy to identify where and why an orchestration may have failed.

Security Best Practices
Uses identity-based authentication with Role-Based Access Control (RBAC) for enterprise-grade authorization, eliminating the need for SAS tokens or access keys.

Local Emulator
To simplify the development experience, we are also launching a durable task scheduler emulator that can be run as a container on your development machine. The emulator supports the same durable task scheduler runtime APIs and stores data in local memory, enabling a completely offline debugging experience. The emulator also allows you to run the durable task scheduler management dashboard locally.

Pricing Plan
We're excited to announce the initial launch of the durable task scheduler with a Dedicated, fixed pricing plan. One of the key pieces of feedback that we've consistently received from customers is the desire for more upfront billing transparency. To address this, we've introduced a fixed pricing model with the option to purchase a specific amount of performance and storage through an abstraction called a Capacity Unit (CU). A single CU provides:

- Single tenancy with dedicated resources for predictable performance
- Up to 2,000 work items* dispatched per second
- 50 GB of orchestration data storage

A Capacity Unit (CU) is a measure of the resources allocated to your durable task scheduler. Each CU represents a pre-allocated amount of CPU, memory, and storage resources. A single CU guarantees the dispatch of a certain number of work items and provides a defined amount of storage. If additional performance and/or storage are needed, more CUs can be purchased*. A Work Item is a message dispatched by the durable task scheduler to your application, triggering the execution of orchestrator, activity, or entity functions. The number of work items that can be dispatched per second is determined by the Capacity Units allocated to the durable task scheduler. For detailed instructions on determining the number of work items your application needs and the number of CUs you should purchase, please refer to the guidance provided here.

*At the beginning of the public preview phase, schedulers will be temporarily limited to a single CU.
*Billing for the durable task scheduler will begin on May 1st, 2025.
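To make the sizing guidance concrete, here is a rough, purely illustrative calculation; the work-item counting rules below are an assumption for illustration, so use the linked guidance for the authoritative accounting. Suppose each orchestration calls five activities sequentially, and assume each activity costs one activity work item plus one orchestrator work item to process its result, plus one orchestrator work item to start the orchestration: roughly 5 + 5 + 1 = 11 work items per orchestration. A single CU dispatching up to 2,000 work items per second would then sustain on the order of 2,000 / 11, or about 180 such orchestrations per second; if you needed around 500 orchestrations per second, you would budget roughly three CUs (500 x 11 / 2,000 = 2.75).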
Under the Hood
The Durable Functions team has been continuously evolving the architecture of the backends that persist the state of orchestrations and entities; the durable task scheduler is the latest installment in this series, and it includes both the most successful characteristics of its predecessors, as well as some significant improvements of its own. Below, we shed some light on what is new. Of course, it is not necessary to understand these internal implementation details, and they are subject to change as we keep improving and optimizing the design.

Like the MSSQL provider, the durable task scheduler uses a SQL database as the storage foundation, to provide robustness and versatility. Like the Netherite provider, it uses a partitioned design to achieve scale-out, and a pipelining optimization to boost partition persistence. Unlike the previous backends, however, the durable task scheduler runs as a service, on its own compute nodes, to which workers are connected by gRPC. This significantly improves latency and load balancing. It strongly isolates the workflow management logic from the user application, allowing them to be scaled separately.

What can we expect next for the durable task scheduler?
One of the most exciting developments is the significant interest we've received in leveraging the durable task scheduler across other Azure compute offerings beyond Azure Functions, such as Azure Container Apps (ACA) and Azure Kubernetes Service (AKS). As we continue to enhance the integration with Durable Functions, we have also updated the durable task SDKs, which are the underlying technology behind the Durable Task Framework and Durable Functions, to support the durable task scheduler directly. We refer to these durable task SDKs as the "portable SDKs" because they are client-only SDKs that connect directly to the durable task scheduler, where the managed orchestration engine resides, eliminating any dependency on the underlying compute platform, hence the name "portable". By utilizing the portable SDKs to author your orchestrations as code, you can deploy your orchestrations across any Azure compute offering. This allows you to leverage the durable task scheduler as the backend, benefiting from its full set of capabilities. If you would like to discuss this further with our team or are interested in trying out the portable SDK yourself, please feel free to reach out to us at DurableTaskScheduler@microsoft.com. We welcome your questions and feedback.

We've also received feedback from customers requesting a versioning mechanism to facilitate zero-downtime deployments. This feature would enable you to manage breaking workflow changes by allowing all in-flight orchestrations using the older version to complete, while switching new orchestrations to the updated version. This is already in development and will be available in the near future. Lastly, we are in the process of introducing critical enterprise features under the category of Business Continuity and Disaster Recovery (BCDR). We understand the importance of these capabilities as our customers rely on the durable task scheduler for critical production scenarios.

Get started with the durable task scheduler
Migrating to the durable task scheduler from an existing Durable Functions application is a quick process. The transition is purely a configuration change, meaning your existing orchestrations and business logic remain unchanged. The durable task scheduler is provided through a new Azure resource known as a scheduler. Each scheduler can contain one or multiple task hubs, which are sub-resources. A task hub, an established concept within Durable Functions, is responsible for managing the state and execution of orchestrations and activities. Think of a task hub as a logical way to separate your applications that require orchestration execution. (Pictured: a durable task scheduler resource in the Azure portal that includes a task hub named dts-github-agent.) Once you have created a scheduler and task hub(s), simply add the library package to your project and update your host.json to point your function app to the durable task scheduler endpoint and task hub. That's all there is to it.
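As a rough sketch of what that host.json change can look like (the property and setting names below are assumptions drawn from the preview documentation, so verify them against the getting started page mentioned below before relying on them):

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "%TASKHUB_NAME%",
      "storageProvider": {
        "type": "azureManaged",
        "connectionStringName": "DURABLE_TASK_SCHEDULER_CONNECTION_STRING"
      }
    }
  }
}
```

The referenced app settings would hold your task hub name and the scheduler endpoint string (for example, an endpoint plus Authentication=ManagedIdentity), which is where the identity-based authentication described above comes into play.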
With the correct authentication configuration applied, your applications can fully leverage the capabilities of the durable task scheduler. For more detailed information on how to migrate or start using the durable task scheduler, visit our getting started page here.

Get in touch to learn more
We are always interested in engaging with both existing and potential new customers. If any of the above interests you, if you have any questions, or if you simply want to discuss your scenarios and explore how you can leverage the durable task scheduler, feel free to reach out to us anytime. Our line is always open - DurableTaskScheduler@microsoft.com.

Call Function App from Azure Data Factory with Managed Identity Authentication
Integrating Azure Function Apps into your Azure Data Factory (ADF) workflows is a common practice. To enhance security beyond the use of function API keys, leveraging managed identity authentication is strongly recommended. Because many existing guides have become outdated with recent updates to Azure services, this article provides a comprehensive, up-to-date walkthrough on configuring managed identity in ADF to securely call Function Apps. The provided methods can also be adapted to other Azure services that need to call Function Apps with managed identity authentication.

The high-level process is:
1. Enable Managed Identity on Data Factory
2. Configure Microsoft Entra Sign-in on Azure Function App
3. Configure Linked Service in Data Factory
4. Assign Permissions to the Data Factory in Azure Function

Step 1: Enable Managed Identity on Data Factory
In the Data Factory portal, go to Managed Identities and enable a system-assigned managed identity.

Step 2: Configure Microsoft Entra Sign-in on Azure Function App
1. Go to the Function App portal and enable Authentication. Choose "Microsoft" as the identity provider.
2. Add an app registration to the app; it can be an existing one, or you can let the platform create a new app registration.
3. Next, allow ADF as a client application to authenticate to the function app. This step is a new requirement not covered in older guides; if these settings are not correctly set, a 403 response will be returned. Add the Application ID of the ADF managed identity to the allowed client applications and the Object ID of the ADF managed identity to the allowed identities. If requests are only allowed from specific tenants, add the Tenant ID of the managed identity in the last box.
4. This part sets the function app's response to unauthenticated requests. Set the response to "HTTP 401 Unauthorized: recommended for APIs", as a sign-in page is not feasible for API calls from ADF.
5. Then click next and use the default permission option.
6. After everything is set, click "Add" to complete the configuration. Copy the generated App (client) ID, as this is used in Data Factory to handle authorization.

Step 3: Configure Linked Service in Data Factory
1. To use an Azure Function activity in a pipeline, follow the steps here: Create an Azure Function activity with UI.
2. Then edit or create a new Azure Function linked service.
3. Change the authentication method to System Assigned Managed Identity, and paste the App (client) ID of the function app's identity provider (copied in Step 2) into Resource ID. This step is necessary, as authorization does not work without it.

Step 4: Assign Permissions to the Data Factory in Azure Function
1. On the function app portal, go to Access control (IAM) and add a new role assignment.
2. Assign the Reader role.
3. Assign the Data Factory's managed identity to that role.

After everything is set, test that the function app can be called from Azure Data Factory successfully.

Reference:
https://prodata.ie/2022/06/16/enabling-managed-identity-authentication-on-azure-functions-in-data-factory/
https://learn.microsoft.com/en-us/azure/data-factory/control-flow-azure-function-activity
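As an optional sanity check outside of ADF, you can call the function with a bearer token acquired by some other managed identity. The following is only a rough PowerShell sketch: all names are placeholders, it assumes the app registration kept its default Application ID URI of api://<app-client-id>, and the identity you test from must itself be added to the allowed client applications/identities, otherwise a 403 is the expected result.

```powershell
# Run from an Azure resource that has a managed identity (for example, a VM).
# If that identity is not in the function app's allowed client applications /
# allowed identities lists, the call below should return 403 by design.
Connect-AzAccount -Identity

# Assumes the default Application ID URI of api://<app-client-id> (placeholder).
$token = (Get-AzAccessToken -ResourceUrl 'api://<app-client-id>').Token

Invoke-RestMethod `
    -Uri 'https://<function-app-name>.azurewebsites.net/api/<function-name>' `
    -Headers @{ Authorization = "Bearer $token" } `
    -Method Get
```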
Capture Java Thread Dump from Kudu console on Windows App Service

A Java thread dump is a snapshot of all the threads in a Java process. It is a great tool that can help you troubleshoot performance issues in your Java app. Azure App Service provides the ability to capture Java thread dumps easily using its Diagnose and solve problems tool. However, if you're unable to use it or prefer a more customized approach, there's also a way to capture thread dumps from the Kudu console. This article provides a step-by-step guide on how to do this.
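The full article walks through the exact steps; as a rough idea of what the capture can look like from the Kudu debug console (CMD), assuming you have located the Java process and the bin folder of the JDK configured for your app (the path varies by Java version, and the PID and file name below are illustrative):

```cmd
:: List running JVMs and their process IDs
jcmd -l

:: Write a thread dump for the Java process (PID 4711 is illustrative)
:: to the App Service log folder
jcmd 4711 Thread.print > D:\home\LogFiles\threaddump.txt
```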
Troubleshoot Az Module within Logic App Standard

In Logic App Standard, you can run PowerShell code using the Execute PowerShell Script action. If your PowerShell script depends on the Az module, the logic app will try to download and install the Az module within the runtime. In some cases, when the logic app is VNET integrated (has an NSG, or is routed through a virtual appliance) or is hosted within an internal ASE, the logic app will be unable to reach the required PowerShell endpoints. The download then fails, and the action fails with the error below. This can be seen within the log folder on the Kudu site (path: C:\home\LogFiles\Application\Functions\Function\Powershell), and the error appears as:

Exception: Failed to install function app dependencies. Error: 'Failed to install function app dependencies. Error: 'The running command stopped because the preference variable "ErrorActionPreference" or common parameter is set to Stop: Unable to save the module 'Az'.'

To troubleshoot this, you need to verify that the logic app can reach the required URLs. You can try the simple steps below.

From the Kudu console, run this curl command:

curl https://cdn.powershellgallery.com/packages/az.accounts.5.1.1.nupkg --output az.accounts.5.1.1.nupkg

It should respond as below. If there is a reachability issue, you would see the below response instead.

You can also test by creating a simple workflow, as below. When you run the HTTP action, it should succeed; however, if you have a reachability issue, it will fail as below. This means the logic app cannot reach the required endpoints, which are listed in our documentation (Troubleshooting cmdlets - PowerShell | Microsoft Learn). To resolve this, ensure that the NSG, firewall, or virtual appliance allows the logic app to reach the mentioned endpoints.

Another scenario where the Az module installation can fail is when the storage account capacity is almost fully utilized and there is no space available to download the Az module. You can confirm this by referencing only the Az.Accounts module, as this module is small in size. If the Az.Accounts module works, then the issue is very likely due to storage capacity.
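If you prefer to test from inside the workflow itself rather than from Kudu, a minimal probe like the following could be dropped into an Execute PowerShell Script action; it has no Az dependency, so it runs even while module installation is failing (the URL and messages are illustrative):

```powershell
# Probe PowerShell Gallery reachability from within the logic app runtime.
try {
    $response = Invoke-WebRequest -Uri 'https://www.powershellgallery.com/api/v2' -UseBasicParsing -TimeoutSec 30
    Write-Output "PowerShell Gallery reachable. Status: $($response.StatusCode)"
}
catch {
    Write-Output "PowerShell Gallery NOT reachable: $($_.Exception.Message)"
}
```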
Important Changes to App Service Managed Certificates: Is Your Certificate Affected?

Overview
As part of an upcoming industry-wide change, DigiCert, the Certificate Authority (CA) for Azure App Service Managed Certificates (ASMC), is required to migrate to a new validation platform to meet multi-perspective issuance corroboration (MPIC) requirements. While most certificates will not be impacted by this change, certain site configurations and setups may prevent certificate issuance or renewal starting July 28, 2025.

What Will the Change Look Like?
For most customers: No disruption. Certificate issuance and renewals will continue as expected for eligible site configurations.
For impacted scenarios: Certificate requests will fail (no certificate issued) starting July 28, 2025, if your site configuration is not supported. Existing certificates will remain valid until their expiration (up to six months after last renewal).

Impacted Scenarios
You will be affected by this change if any of the following apply to your site configurations:

Your site is not publicly accessible: Public accessibility to your app is required. If your app is only accessible privately (e.g., requiring a client certificate for access, disabling public network access, using private endpoints or IP restrictions), you will not be able to create or renew a managed certificate. Other site configurations or setup methods not explicitly listed here that restrict public access, such as firewalls, authentication gateways, or any custom access policies, can also impact eligibility for managed certificate issuance or renewal.
Action: Ensure your app is accessible from the public internet. However, if you need to limit access to your app, then you must acquire your own SSL certificate and add it to your site.

Your site uses Azure Traffic Manager "nested" or "external" endpoints: Only "Azure Endpoints" on Traffic Manager will be supported for certificate creation and renewal. "Nested endpoints" and "External endpoints" will not be supported.
Action: Transition to using "Azure Endpoints". However, if you cannot, then you must obtain a different SSL certificate for your domain and add it to your site.

Your site relies on the *.trafficmanager.net domain: Certificates for *.trafficmanager.net domains will not be supported for creation or renewal.
Action: Add a custom domain to your app and point the custom domain to your *.trafficmanager.net domain. After that, secure the custom domain with a new SSL certificate.

If none of the above applies, no further action is required.

How to Identify Impacted Resources?
To assist with the upcoming changes, you can use Azure Resource Graph (ARG) queries to help identify resources that may be affected under each scenario. Please note that these queries are provided as a starting point and may not capture every configuration. Review your environment for any unique setups or custom configurations.

Scenario 1: Sites Not Publicly Accessible
This ARG query retrieves a list of sites that either have the public network access property disabled or are configured to use client certificates. It then filters for sites that are using App Service Managed Certificates (ASMC) for their custom hostname SSL bindings. These certificates are the ones that could be affected by the upcoming changes. However, please note that this query does not provide complete coverage, as there may be additional configurations impacting public access to your app that are not included here. Ultimately, this query serves as a helpful guide for users, but a thorough review of your environment is recommended.
You can copy this query, paste it into Azure Resource Graph Explorer, and then click "Run query" to view the results for your environment.

// ARG Query: Identify App Service sites that commonly restrict public access and use ASMC for custom hostname SSL bindings
resources
| where type == "microsoft.web/sites"
// Extract relevant properties for public access and client certificate settings
| extend publicNetworkAccess = tolower(tostring(properties.publicNetworkAccess)), clientCertEnabled = tolower(tostring(properties.clientCertEnabled))
// Filter for sites that either have public network access disabled
// or have client certificates enabled (both can restrict public access)
| where publicNetworkAccess == "disabled" or clientCertEnabled != "false"
// Expand the list of SSL bindings for each site
| mv-expand hostNameSslState = properties.hostNameSslStates
| extend hostName = tostring(hostNameSslState.name), thumbprint = tostring(hostNameSslState.thumbprint)
// Only consider custom domains (exclude default *.azurewebsites.net) and sites with an SSL certificate bound
| where tolower(hostName) !endswith "azurewebsites.net" and isnotempty(thumbprint)
// Select key site properties for output
| project siteName = name, siteId = id, siteResourceGroup = resourceGroup, thumbprint, publicNetworkAccess, clientCertEnabled
// Join with certificates to find only those using App Service Managed Certificates (ASMC)
// ASMCs are identified by the presence of the "canonicalName" property
| join kind=inner (
    resources
    | where type == "microsoft.web/certificates"
    | extend certThumbprint = tostring(properties.thumbprint), canonicalName = tostring(properties.canonicalName) // Only ASMC uses the "canonicalName" property
    | where isnotempty(canonicalName)
    | project certName = name, certId = id, certResourceGroup = tostring(properties.resourceGroup), certExpiration = properties.expirationDate, certThumbprint, canonicalName
) on $left.thumbprint == $right.certThumbprint
// Final output: sites with restricted public access and using ASMC for custom hostname SSL bindings
| project siteName, siteId, siteResourceGroup, publicNetworkAccess, clientCertEnabled, thumbprint, certName, certId, certResourceGroup, certExpiration, canonicalName

Scenario 2: Traffic Manager Endpoint Types
For this scenario, please manually review your Traffic Manager profile configurations to ensure only "Azure Endpoints" are in use. We recommend inspecting your Traffic Manager profiles directly in the Azure portal or using relevant APIs to confirm your setup and ensure compliance with the new requirements.

Scenario 3: Certificates Issued to *.trafficmanager.net Domains
This ARG query helps you identify App Service Managed Certificates (ASMC) that were issued to *.trafficmanager.net domains. In addition, it also checks whether any web apps are currently using those certificates for custom domain SSL bindings. You can copy this query, paste it into Azure Resource Graph Explorer, and then click "Run query" to view the results for your environment.
// ARG Query: Identify App Service Managed Certificates (ASMC) issued to *.trafficmanager.net domains
// Also checks if any web apps are currently using those certificates for custom domain SSL bindings
resources
| where type == "microsoft.web/certificates"
// Extract the certificate thumbprint and canonicalName (ASMCs have a canonicalName property)
| extend certThumbprint = tostring(properties.thumbprint), canonicalName = tostring(properties.canonicalName) // Only ASMC uses the "canonicalName" property
// Filter for certificates issued to *.trafficmanager.net domains
| where canonicalName endswith "trafficmanager.net"
// Select key certificate properties for output
| project certName = name, certId = id, certResourceGroup = tostring(properties.resourceGroup), certExpiration = properties.expirationDate, certThumbprint, canonicalName
// Join with web apps to see if any are using these certificates for SSL bindings
| join kind=leftouter (
    resources
    | where type == "microsoft.web/sites"
    // Expand the list of SSL bindings for each site
    | mv-expand hostNameSslState = properties.hostNameSslStates
    | extend hostName = tostring(hostNameSslState.name), thumbprint = tostring(hostNameSslState.thumbprint)
    // Only consider bindings for *.trafficmanager.net custom domains with a certificate bound
    | where tolower(hostName) endswith "trafficmanager.net" and isnotempty(thumbprint)
    // Select key site properties for output
    | project siteName = name, siteId = id, siteResourceGroup = resourceGroup, thumbprint
) on $left.certThumbprint == $right.thumbprint
// Final output: ASMCs for *.trafficmanager.net domains and any web apps using them
| project certName, certId, certResourceGroup, certExpiration, canonicalName, siteName, siteId, siteResourceGroup

Ongoing Updates
We will continue to update this post with any new queries or important changes as they become available. Be sure to check back for the latest information.

Note on Comments
We hope this information helps you navigate the upcoming changes. To keep this post clear and focused, comments are closed. If you have questions, need help, or want to share tips or alternative detection methods, please visit our official support channels or the Microsoft Q&A, where our team and the community can assist you.
Collaborative Function App Development Using Repo Branches

In this example, I demonstrate a Windows-based Function App using PowerShell, with deployment via Azure DevOps (ADO) and a Bicep template. Local development is done in VSCode.

Scenario: Your Function App project resides in a shared repository maintained by a team. Each developer works on a separate branch. Whenever a branch is updated, the Function App is deployed to a slot named after that branch. If the slot doesn't exist, it will be automatically created.

How to use it:

1. Create a Function App. You can create a Function App using any method of your choice.
2. Prepare a corresponding repo in Azure DevOps. Set up your repo structure for the Function App source code.
3. Create Function App code using the VSCode wizard. In this example, we use PowerShell and create an anonymous HTTP trigger. Then, we manually add three additional files. The resulting directory structure looks like this:

deploy.yml

trigger:
  branches:
    include:
      - '*'

pool:
  vmImage: 'ubuntu-latest'

variables:
  azureSubscription: '<YOUR_CONNECTION_STRING_FROM_ADO>'
  functionAppName: '<YOUR_FUNCTION_APP_NAME>'
  resourceGroup: '<YOUR_RG_NAME>'
  location: '<YOUR_LOCATION_NAME>'

steps:
  - checkout: self

  - task: AzureCLI@2
    name: DeploySlotInfra
    inputs:
      azureSubscription: $(azureSubscription)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        BRANCH_NAME=$(Build.SourceBranchName)
        if [ "$BRANCH_NAME" = "master" ]; then
          echo "##[command]Deploying production infrastructure"
          az deployment group create \
            --resource-group $(resourceGroup) \
            --template-file deploy-master.bicep \
            --parameters functionAppName=$(functionAppName) location=$(location)
        else
          SLOT_NAME="$BRANCH_NAME"
          echo "##[command]Deploying slot: $SLOT_NAME"
          az deployment group create \
            --resource-group $(resourceGroup) \
            --template-file deploy.bicep \
            --parameters functionAppName=$(functionAppName) slotName=$SLOT_NAME location=$(location)
        fi

  - task: ArchiveFiles@2
    displayName: 'Package Function App as ZIP'
    inputs:
      rootFolderOrFile: '$(System.DefaultWorkingDirectory)/'
      includeRootFolder: false
      archiveType: zip
      archiveFile: '$(Build.ArtifactStagingDirectory)/functionapp.zip'
      replaceExistingArchive: true

  - task: AzureCLI@2
    name: ZipDeploy
    inputs:
      azureSubscription: $(azureSubscription)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        BRANCH_NAME=$(Build.SourceBranchName)
        if [ "$BRANCH_NAME" = "master" ]; then
          echo "##[command]Deploying code to production"
          az functionapp deployment source config-zip \
            --name $(functionAppName) \
            --resource-group $(resourceGroup) \
            --src "$(Build.ArtifactStagingDirectory)/functionapp.zip"
        else
          SLOT_NAME="$BRANCH_NAME"
          echo "##[command]Deploying code to slot: $SLOT_NAME"
          az functionapp deployment source config-zip \
            --name $(functionAppName) \
            --resource-group $(resourceGroup) \
            --slot $SLOT_NAME \
            --src "$(Build.ArtifactStagingDirectory)/functionapp.zip"
        fi

Please replace all <YOUR_XXX> placeholders with values relevant to your environment. Additionally, update the two instances of "master" to match your repo's default branch name (e.g., main), as updates from this branch will always deploy to the production slot.
deploy-master.bicep

@description('Function App Name')
param functionAppName string

@description('Function App location')
param location string

resource functionApp 'Microsoft.Web/sites@2022-09-01' existing = {
  name: functionAppName
}

resource appSettings 'Microsoft.Web/sites/config@2022-09-01' = {
  name: 'appsettings'
  parent: functionApp
  properties: {
    FUNCTIONS_EXTENSION_VERSION: '~4'
  }
}

deploy.bicep

@description('Function App Name')
param functionAppName string

@description('Slot Name (e.g., dev, test, feature-xxx)')
param slotName string

@description('Function App location')
param location string

resource functionApp 'Microsoft.Web/sites@2022-09-01' existing = {
  name: functionAppName
}

resource functionSlot 'Microsoft.Web/sites/slots@2022-09-01' = {
  name: slotName
  parent: functionApp
  location: location
  properties: {
    serverFarmId: functionApp.properties.serverFarmId
  }
}

resource slotAppSettings 'Microsoft.Web/sites/slots/config@2022-09-01' = {
  name: 'appsettings'
  parent: functionSlot
  properties: {
    FUNCTIONS_EXTENSION_VERSION: '~4'
  }
}

4. Deploy from the master branch. Once deployed, the HTTP trigger becomes active in the production slot, and can be accessed via:
https://<FUNCTION_APP_NAME>.azurewebsites.net/api/<TRIGGER_NAME>

5. Switch to a custom branch like member1 and create a test HTTP trigger. After publishing, a new deployment slot named member1 will be created (if not already existing). You can open it in the Azure Portal and view its dedicated interface. The branch-specific HTTP trigger will now work at the following URL:
https://<FUNCTION_APP_NAME>-<BRANCH_NAME>.azurewebsites.net/api/<TRIGGER_NAME>

Notice: Using deployment slots for collaborative development is subject to slot count and SKU limits. For example, the Premium SKU supports up to 20 slots. See the Azure subscription and service limits, quotas, and constraints - Azure Resource Manager | Microsoft Learn for details. If you need to delete a slot after use, you can do so using PowerShell with the Remove-AzWebAppSlot command: Remove-AzWebAppSlot (Az.Websites) | Microsoft Learn
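For example, removing the slot created for the member1 branch used above might look like this (the resource names are the same placeholders as in the pipeline):

```powershell
# Requires the Az.Websites module and an authenticated Az session.
Remove-AzWebAppSlot -ResourceGroupName '<YOUR_RG_NAME>' -Name '<YOUR_FUNCTION_APP_NAME>' -Slot 'member1'
```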
Azure API Management Your Auth Gateway For MCP Servers

The Model Context Protocol (MCP) is quickly becoming the standard for integrating Tools 🛠️ with Agents 🤖, and Azure API Management is at the forefront, ready to support this open-source protocol 🚀. You may have already encountered discussions about MCP, so let's clarify some key concepts:

Model Context Protocol (MCP) is a standardized way (a protocol) for AI models to interact with external tools, either reading data or performing actions, and to enrich context for any language model.
AI Agents/Assistants are autonomous LLM-powered applications with the ability to use tools to connect to external services required to accomplish tasks on behalf of users.
Tools are components made available to Agents, allowing them to interact with external systems, perform computation, and take actions to achieve specific goals.
Azure API Management: As a platform-as-a-service, API Management supports the complete API lifecycle, enabling organizations to create, publish, secure, and analyze APIs with built-in governance, security, analytics, and scalability.

New Cool Kid in Town - MCP
AI Agents are becoming widely adopted due to enhanced Large Language Model (LLM) capabilities. However, even the most advanced models face limitations due to their isolation from external data. Each new data source requires custom implementations to extract, prepare, and make data accessible for any model(s) - a lot of heavy lifting. Anthropic developed an open-source standard, the Model Context Protocol (MCP), to connect your agents to external data sources such as local data sources (databases or computer files) or remote services (systems available over the internet through e.g. APIs).

MCP Hosts: LLM applications such as chat apps or AI assistants in your IDEs (like GitHub Copilot in VS Code) that need to access external capabilities
MCP Clients: Protocol clients that maintain 1:1 connections with servers, inside the host application
MCP Servers: Lightweight programs that each expose specific capabilities and provide context, tools, and prompts to clients
MCP Protocol: Transport layer in the middle

At its core, MCP follows a client-server architecture where a host application can connect to multiple servers. Whenever your MCP host or client needs a tool, it connects to the MCP server. The MCP server then connects to, for example, a database or an API. MCP hosts and servers communicate with each other through the MCP protocol. You can create your own custom MCP servers that connect to your own or organizational data sources. For a quick start, please visit our GitHub repository to learn how to build a remote MCP server using Azure Functions without authentication: https://aka.ms/mcp-remote

Remote vs. Local MCP Servers
The MCP standard supports two modes of operation:
Remote MCP servers: MCP clients connect to MCP servers over the Internet, establishing a connection using HTTP and Server-Sent Events (SSE), and authorizing the MCP client access to resources on the user's account using OAuth.
Local MCP servers: MCP clients connect to MCP servers on the same machine, using stdio as a local transport method.

Azure API Management as the AI Auth Gateway
Now that we know MCP servers can connect to remote services through an API, the question arises: how can we expose our remote MCP servers in a secure and scalable way? This is where Azure API Management comes in, providing a way to securely and safely expose tools as MCP servers.
Azure API Management provides:
Security: AI agents often need to access sensitive data. API Management as a remote MCP proxy safeguards organizational data through authentication and authorization.
Scalability: As the number of LLM interactions and external tool integrations grows, API Management ensures the system can handle the load.

Security remains a critical piece of building MCP servers, as agents will need to securely connect to protected endpoints (tools) to perform certain actions or read protected data. When building remote MCP servers, you need a way to allow users to log in (authenticate) and to grant the MCP client access to resources on their account (authorization).

MCP - Current Authorization Challenges
State: 4/10/2025
Recent changes in MCP authorization have sparked significant debate within the community.
🔍 𝗞𝗲𝘆 𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲𝘀 with the Authorization Changes: The MCP server is now treated as both a resource server AND an authorization server. This dual role has fundamental implications for MCP server developers and runtime operations.
💡 𝗢𝘂𝗿 𝗦𝗼𝗹𝘂𝘁𝗶𝗼𝗻: To address these challenges, we recommend using 𝗔𝘇𝘂𝗿𝗲 𝗔𝗣𝗜 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁 as your authorization gateway for remote MCP servers.
🔗 For an enterprise-ready solution, please check out our azd up sample repo to learn how to build a remote MCP server using Azure API Management as your authentication gateway: https://aka.ms/mcp-remote-apim-auth

The Authorization Flow
The workflow involves three core components: the MCP client, the APIM Gateway, and the MCP server, with Microsoft Entra managing authentication (AuthN) and authorization (AuthZ). Using the OAuth protocol, the client starts by calling the APIM Gateway, which redirects the user to Entra for login and consent. Once authenticated, Entra provides an access token to the Gateway, which then exchanges a code with the client to generate an MCP server token. This token allows the client to communicate securely with the server via the Gateway, ensuring user validation and scope verification. Finally, the MCP server establishes a session key for ongoing communication through a dedicated message endpoint. Diagram source: https://aka.ms/mcp-remote-apim-auth-diagram

Conclusion
Azure API Management (APIM) is an essential tool for enterprise customers looking to integrate AI models with external tools using the Model Context Protocol (MCP). In this blog, we've emphasized the simplicity of connecting AI agents to various data sources through MCP, streamlining previously complex implementations. Given the critical role of secure access to platforms and services for AI agents, APIM offers robust solutions for managing OAuth tokens and ensuring secure access to protected endpoints, making it an invaluable asset for enterprises, despite the challenges of authentication.

API Management: An Enterprise Solution for Securing MCP Servers
Azure API Management is an essential tool for enterprise customers looking to integrate AI models with external tools using the Model Context Protocol (MCP). It is designed to help you securely expose your remote MCP servers. MCP servers are still very new, and as the technology evolves, API Management provides an enterprise-ready solution that will evolve with the latest technology. Stay tuned for further feature announcements soon!

Acknowledgments
This post and work were made possible thanks to the hard work and dedication of our incredible team.
Special thanks to Pranami Jhawar, Julia Kasper, Julia Muiruri, Annaji Sharma Ganti Jack Pa, Chaoyi Yuan and Alex Vieira for their invaluable contributions.

Additional Resources

MCP Client Server integration with APIM as AI gateway
Blog Post: https://aka.ms/remote-mcp-apim-auth-blog
Sequence Diagram: https://aka.ms/mcp-remote-apim-auth-diagram
APIM lab: https://aka.ms/ai-gateway-lab-mcp-client-auth
Python: https://aka.ms/mcp-remote-apim-auth
.NET: https://aka.ms/mcp-remote-apim-auth-dotnet
On-Behalf-Of Authorization: https://aka.ms/mcp-obo-sample

3rd Party APIs – Backend Auth via Credential Manager:
Blog Post: https://aka.ms/remote-mcp-apim-lab-blog
APIM lab: https://aka.ms/ai-gateway-lab-mcp
YouTube Video: https://aka.ms/ai-gateway-lab-demo
Blogsite AI Voice Answer machine

Hi all, I wanted to quickly write to show how I thought about building a system on Azure that allows my blogsite to answer questions a reader may suddenly have in mind while reading through a blog post, to extend learning.

The basic flow is:
- User loads a blog post
- On load, the page populates 3 buttons a third of the way into the page, each with randomly AI-generated questions related to the page that a reader might ask about the page content
- On clicking a button, the question is answered through voice, with the answer being 'just' enough to answer the question without being over-bearing (at least that's my feeling!)

The architecture is constructed as the following:

I wrote in full on how I did this for my blog here: https://www.imaginarium.dev/voice-ai-for-blog/

I wanted to hear whether I'm missing anything here in the design, particularly any security considerations on the Azure side. Any ways to improve on the AI voice implementation? I'm using the Azure OpenAI neural voices at the moment. Gemini voices lately are really good too!! I even thought about using a custom neural voice of my own, but I ran into issues when trying to do that within Azure due to not having an enterprise subscription readily available to be allowed this capability. Thoughts?