Partner showcase: Developing for on-device AI with Surface Copilot+ PCs with Teknikos
Explore how Teknikos built a Windows-native AI document assistant using Phi Silica and the Windows App SDK. This blog highlights the architecture behind the Teknikos PDF Explorer, which runs entirely on-device using NPUs—delivering fast, private, and cost-stable AI experiences without cloud dependencies. Ideal for developers and BDMs exploring hybrid AI and Copilot+ PC capabilities.

10 Things You Might Not Know You Could Do with Azure Communication Services
Azure Communication Services gives developers the building blocks for voice, video, chat, SMS, and more. But the real magic happens when you start combining those capabilities with other Azure services to solve real-world problems. This blog isn’t a feature list or a product pitch. It’s a collection of creative, practical scenarios that show what’s possible with Azure Communication Services today. Each one is based on real questions, real demos, and real developer experiences. Some are simple. Some are surprisingly powerful. All of them are designed to spark ideas. We’ve included links to sample code, documentation, and visuals to help you dive deeper. And we’ll keep this post updated as new scenarios emerge, so if you’ve built something cool, let us know!

Build a Voice Assistant That Understands Users—and Follows Through
🔎 Quick Look
What it does: Create a voice-first assistant that can understand, respond, and follow up using natural language.
Why it matters: Offers a more intelligent, flexible alternative to traditional IVRs.
What you'll need: Azure Communication Services for voice, Azure OpenAI, and backend logic to handle actions.
Most voice agents are limited to scripted menus or keyword matching. But with Azure Communication Services and Azure OpenAI, you can build a voice experience that actually understands what users are saying and responds with meaningful action. In this demo, a user calls a virtual assistant looking for dinner inspiration. Instead of navigating a rigid menu, they just talk naturally. The assistant interprets the request, asks follow-up questions, and sends a personalized recipe link via SMS—all powered by Azure Communication Services for both the voice and messaging workflows. This kind of voice-first interaction is ideal for customer support, concierge services, or any scenario where users want to speak naturally and get something done. Watch the demo video to see the full experience in action or explore the demo yourself here.

Send Responsive Messages in Real-Time
🔎 Quick Look
What it does: Trigger personalized messages based on real-time user behavior (like missed appointments or failed logins).
Why it matters: Helps you move beyond static reminders to more timely, relevant communication.
What you’ll need: Azure Communication Services, Azure Event Grid, Azure OpenAI, and an event source like a Logic App or backend service.
Most messaging systems are built around schedules: send a reminder at 9 AM, a follow-up two days later, and so on. But what if your messages could respond to what your users are doing right now? With Azure Communication Services, you can build event-driven workflows that trigger messages based on real-time behavior. A customer misses an appointment. A user completes a transaction. A login attempt fails. Using Azure Event Grid, you can detect these events, generate a tailored message with Azure OpenAI, and send it instantly via SMS, email, or WhatsApp using Azure Communication Services. This approach helps teams move beyond static, one-size-fits-all messaging. It enables timely, relevant communication that’s easier to maintain and scale - without manually scripting every variation.
Learn more and get started:
Azure Communication Services as an Event Grid source
Handle SMS events with Event Grid
Push notifications overview
Use Event Grid to send calling push notifications
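To make the event-driven pattern concrete, here is a minimal sketch of an Azure Function (Python v2 programming model) that reacts to an Event Grid event and sends a text through the Azure Communication Services SMS SDK. The connection string, phone numbers, and event payload shape are placeholders, and the message body could just as easily be generated with Azure OpenAI first.

import os
import azure.functions as func
from azure.communication.sms import SmsClient

app = func.FunctionApp()

@app.event_grid_trigger(arg_name="event")
def on_missed_appointment(event: func.EventGridEvent):
    # Assumed payload: the event carries the customer's phone number.
    data = event.get_json()
    sms_client = SmsClient.from_connection_string(os.environ["ACS_CONNECTION_STRING"])
    sms_client.send(
        from_=os.environ["ACS_PHONE_NUMBER"],  # your ACS-acquired number
        to=data["customerPhone"],
        message="Sorry we missed you today. Reply RESCHEDULE to pick a new time.",
    )

The same handler could branch on the event type (transaction completed, login failed) and choose the channel (SMS, email, or WhatsApp) accordingly.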
Let Users Schedule Appointments by Text – In Their Own Words
🔎 Quick Look
What it does: Enable natural language scheduling over SMS. No apps, menus, or portals required.
Why it matters: Makes scheduling faster and more user-friendly, especially for service-based businesses.
What you’ll need: Azure Communication Services for SMS, Azure OpenAI to interpret intent, and a backend or Logic App to manage availability and confirmations.
Coordinating appointments over email or phone is slow and manual. Even traditional SMS-based schedulers often rely on rigid decision trees that break when users type something unexpected. This demo takes a smarter approach. By combining Azure Communication Services with Azure OpenAI, it lets users book, confirm, or reschedule appointments through natural, conversational SMS - no app, no portal, no menus. Just text like you normally would:
“Hey, can I move my appointment to next Tuesday?”
“Do you have anything earlier in the day?”
Behind the scenes, Azure Communication Services handles the messaging layer, while OpenAI interprets the user’s intent and routes it to backend logic that manages availability and confirmations. It’s a lightweight, flexible solution that’s ideal for clinics, service providers, or any business that wants to streamline scheduling—without sacrificing user experience. Try the SMS scheduling demo. Everything you need to get started is in the README.

Reach Customers on WhatsApp – Right Alongside SMS & Email
🔎 Quick Look
What it does: Send messages across WhatsApp, SMS, and email from a single workflow.
Why it matters: Increases engagement by meeting users where they are.
What you’ll need: Azure Communication Services with the Advanced Messaging SDK, and verified sender setup for each channel.
Your customers are already on WhatsApp. Now your app can be too, without rearchitecting your entire messaging stack. Azure Communication Services lets you send and receive WhatsApp messages using the same platform you already use for SMS, email, and chat. That means you can reuse your existing workflows, backend logic, and delivery infrastructure - just with a new channel that meets your users where they are. Whether it’s appointment reminders, shipping updates, or live customer support, WhatsApp becomes just another part of your communication toolkit. You can trigger messages using Azure Event Grid, automate replies with Azure Bot Framework, and manage everything through the Advanced Messaging SDK. Want to see it in action? This quickstart guide walks you through registering your WhatsApp Business Account, connecting it to Azure Communication Services, and sending both text and media messages.
Screenshot: channels selected from the blade menu.
Learn more:
Overview of Advanced Messaging for WhatsApp
Send text and media WhatsApp messages (Quickstart)
Publish an agent to WhatsApp using Copilot Studio
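For a feel of the Advanced Messaging SDK, here is a rough Python sketch. It assumes the azure-communication-messages package, a WhatsApp channel already connected to your resource, and a recipient inside WhatsApp's 24-hour customer-initiated window; all IDs and numbers are placeholders.

import os
from azure.communication.messages import NotificationMessagesClient
from azure.communication.messages.models import TextNotificationContent

client = NotificationMessagesClient.from_connection_string(
    os.environ["ACS_CONNECTION_STRING"]
)

# channel_registration_id identifies the WhatsApp channel connected in the portal.
message = TextNotificationContent(
    channel_registration_id=os.environ["WHATSAPP_CHANNEL_ID"],
    to=["+14255550123"],  # placeholder recipient
    content="Your order #1234 has shipped and should arrive Thursday.",
)
result = client.send(message)
print(result.receipts)  # delivery receipts per recipient

Outside that 24-hour window, WhatsApp requires an approved template message instead of free-form text; the SDK has a corresponding template content type for that case.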
Let Customers Join a Teams Meeting – Without a Teams Account
🔎 Quick Look
What it does: Embed a browser-based Teams meeting experience into your app or site.
Why it matters: Makes it easy for customers to join secure meetings without downloading Teams or signing in.
What you’ll need: Azure Communication Services with Teams interop, a Teams meeting link, and a web app or portal.
Not every customer wants to download an app or create a Microsoft account just to join a meeting. With Azure Communication Services, you can embed a fully branded, browser-based meeting experience into your app or website that connects directly to a Microsoft Teams meeting - no Teams account required. This is especially useful for industries like healthcare, legal, or financial services, where external participants need to join secure consultations or appointments without friction. You control the UI, the branding, and the flow, while Azure Communication Services handles the real-time voice and video connection to Teams. You can see how this works in the interop-quickstart demo, which shows how to create a Teams meeting, generate a join link, and embed the experience in a custom app.

Handle Teams Calls Inside Your CRM—No App Switch Required
🔎 Quick Look
What it does: Let agents make and receive Teams calls directly inside Dynamics 365 or a custom contact center UI.
Why it matters: Reduces context switching and improves agent efficiency.
What you’ll need: Teams Phone Extensibility, Azure Communication Services Call Automation, Dynamics 365 or another CCaaS.
Most contact center agents juggle multiple tools - CRM, phone, notes, AI assistants - just to handle a single call. But what if they could do it all in one place? With Teams Phone Extensibility, powered by Azure Communication Services, agents can make and receive Teams calls directly inside Dynamics 365 or any custom contact center app. No need to open the Teams client. Here’s what’s possible:
Answer calls in a custom agent desktop, routed through Teams Phone.
Trigger AI workflows mid-call—like summarizing the conversation with Azure OpenAI or escalating to a supervisor.
Initiate outbound calls from bots or workflows using ACS’s Call Automation APIs.
Record and analyze calls with full control over logic and storage.
It’s a surprising way to bring AI, voice, and CRM together, without rebuilding your contact center from scratch.

Embed Secure Video Visits into Your Healthcare App – Fast
🔎 Quick Look
What it does: Add HIPAA-compliant video calling with identity integration.
Why it matters: Enables secure, branded telehealth or consultation experiences.
What you’ll need: Azure Communication Services for video, Azure AD B2C, and a secure frontend.
Telehealth is here to stay. But building a secure, compliant video experience from scratch can be a heavy lift. Azure Communication Services makes it easier. With built-in support for HIPAA, GDPR, and SOC 2, encrypted media transport, and identity integration via Azure AD B2C, Azure Communication Services lets you embed video calling directly into your app—without compromising on privacy or user experience. The Sample Builder shows how to combine video, chat, and SMS into a seamless patient-provider experience. It’s ready to deploy, customize, and scale.
Learn more:
Azure Communication Services HIPAA compliance overview
Quickstart - Add video calling to your app - An Azure Communication Services quickstart | Microsoft Learn
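Much of the heavy lifting here happens in the browser with the ACS calling libraries, but the server-side identity piece is small: after your own sign-in (for example, Azure AD B2C) succeeds, you mint an Azure Communication Services identity and a short-lived calling token for that patient. A minimal sketch with the Python identity SDK, with names and the surrounding web framework left to you:

import os
from azure.communication.identity import CommunicationIdentityClient

identity_client = CommunicationIdentityClient.from_connection_string(
    os.environ["ACS_CONNECTION_STRING"]
)

# Create (or look up) an ACS identity for the signed-in patient, then issue a
# short-lived token scoped to voice/video calling ("voip").
user, token = identity_client.create_user_and_token(scopes=["voip"])
print("token expires:", token.expires_on)
# Return token.token to the browser; the ACS calling client uses it to join the visit.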
Combine AI and Human Support in a Single Chat Experience
🔎 Quick Look
What it does: Start with an AI assistant and escalate to a human agent with full context.
Why it matters: Scales support while preserving the human touch when needed.
What you’ll need: Azure Communication Services for chat, Azure OpenAI, bot framework, and agent handoff logic.
Most customer service chats start with automation—but they shouldn’t get stuck there. With Azure Communication Services, you can build a chat experience that begins with an AI assistant and hands off to a human agent when it makes sense. This demo shows how it works: a customer starts chatting through a web widget. An AI assistant, powered by Azure OpenAI, handles common questions and tasks. If the conversation gets complex or the user asks for help, the chat transitions smoothly to a live agent—no context lost. Agents can even generate AI-powered summaries to get up to speed quickly before jumping in. It’s a practical way to scale support without sacrificing the human touch.
Screenshot: on the left, a dialog box displays the user experience, while on the right, the agent's view shows the conversation summary and includes a button to take over the automated chat.

Build a Voice-First AI Virtual Assistant in Under a Week
🔎 Quick Look
What it does: Launch a branded voice assistant quickly using Zammo.ai and ACS.
Why it matters: Speeds up deployment of voice experiences across channels.
What you’ll need: Zammo.ai, Azure Communication Services for voice, and a publishing channel (e.g., Alexa, web).
When Montgomery County, Maryland needed to support COVID-19 vaccine registration, they didn’t have months to build a solution. In just six business days, they launched a voice-first virtual assistant that handled 100% of inbound calls: automating appointment scheduling, supporting English and Spanish, and deflecting thousands of calls from live agents. They partnered with Zammo.ai to build the experience, all without writing custom code. Where Azure Communication Services fits in: Azure Communication Services powered the voice infrastructure, enabling a scalable, multilingual experience that saved time, reduced hold times by 90%, and helped the county serve residents more equitably. Don’t take our word for it: learn more about how it came together here.

Know What You’ll Pay, Before You Ship
🔎 Quick Look
What it does: Estimate costs and usage before you build.
Why it matters: Helps you plan and budget more effectively.
What you’ll need: Azure Communication Services pricing calculator, usage estimator, and billing dashboard.
One of the first questions developers ask when building with Azure Communication Services is: “How much is this going to cost me?” And the answer is: it depends, but in a good way. Azure Communication Services uses a flexible, pay-as-you-go pricing model. You’re only billed for what you use - no upfront commitments, no recurring subscription fees. That makes it easy to prototype, test, and scale without overcommitting. Each communication channel (SMS, email, voice/video calling, and WhatsApp) has its own pricing structure based on usage volume, geography, and delivery method. For example:
SMS to U.S. numbers is priced differently than international messages.
Voice calls vary depending on whether you’re using VoIP, PSTN, or Teams interop.
WhatsApp pricing may involve partner-based rates through Messaging Connect.
There are a few exceptions to the pay-as-you-go model. For instance, leasing a dedicated phone number incurs a monthly fee. But overall, the model is transparent and developer-friendly. To help you estimate costs and plan ahead, here are some helpful resources:
Azure portal pricing calculator: Azure Communication Services pricing | Microsoft Azure
Azure Communication Services Pricing Overview: Azure Communication Services pricing | Microsoft Azure

What Will You Build Next?
Azure Communication Services gives you the flexibility to build the communication experience your users actually want - whether that’s a quick SMS, a secure video call, or a voice assistant that gets things done. And when you combine ACS with other Azure services like OpenAI, Event Grid, and Bot Framework, the possibilities expand even further.
We’ll keep this post updated as new scenarios and demos emerge. If you’ve built something interesting with ACS, we’d love to hear about it—and maybe even feature it in a future post. Check out our official documentation to get started today!

Build. Secure. Launch Your Private MCP Registry with Azure API Center.
We are thrilled to embrace a new era in the world of MCP registries. As organizations increasingly build and consume MCP servers, the need for a secure, governed, robust and easily discoverable tools catalog has become critical. Today, we are excited to show you how to do just that with MCP Center, a live example demonstrating how Azure API Center (APIC) can serve as a private and enterprise-ready MCP registry. The registry puts your MCP servers just one click away for developers, ensuring no setup fuss and a direct path to coding brilliance.

Why a private registry? 🤔
Public OSS registries have been instrumental in driving growth and innovation across the MCP ecosystem. But as adoption scales, so does the need for tighter security, governance, and control. This is where private MCP registries step in. Azure API Center offers a powerful and centralized approach to MCP discovery and governance across diverse teams and services within an organization. Let's delve into the key benefits of leveraging a private MCP registry with Azure API Center.

Security and Trust: The Foundation of AI Adoption
Review and Verification: Public registries, by their open nature, accept submissions from a wide range of developers. This can introduce risks from tools with limited security practices or even malicious intent. A private registry empowers your organization to thoroughly review and verify every MCP server before it becomes accessible to internal developers or AI agents (like Copilot Studio and AI Foundry). This eliminates the risk of introducing random, potentially vulnerable first- or third-party tools into your ecosystem.
Reduced Attack Surface: By controlling which MCP servers are accessible, organizations significantly shrink their potential attack surface. When your AI agents interact solely with known and secure internal tools, the likelihood of external attackers exploiting vulnerabilities in unvetted solutions is drastically reduced.
Enterprise-Grade Authentication and Authorization: Private registries enable the enforcement of your existing robust enterprise authentication and authorization mechanisms (e.g., OAuth 2.0) across all MCP servers. Public registries, in contrast, may have varying or less stringent authentication requirements.
Enforced AI Gateway Control (Azure API Management): Beyond vetting, a private registry enables organizations to route all MCP server traffic through an AI gateway such as Azure API Management. This ensures that every interaction, whether internal or external, adheres to strict security policies, including centralized authentication, authorization, rate limiting, and threat protection, creating a secure front for your AI services.

Governance and Control: Navigating the AI Landscape with Confidence
Centralized Oversight and "Single Source of Truth": A private registry provides a centralized "single source of truth" for all AI-related tools and data connections within your organization. This empowers comprehensive oversight of AI initiatives, clearly identifying ownership and accountability for each MCP server.
Preventing "Shadow AI": Without a formal registry, individual teams might independently develop or integrate AI tools, leading to "shadow AI" – unmanaged and unmonitored AI deployments that can pose significant risks. A private registry encourages a standardized approach, bringing all AI tools under central governance and visibility.
Tailored Tool Development: Organizations can develop and host MCP servers specifically tailored to their unique needs and requirements. This means optimized efficiency and utility, providing specialized tools you won't typically find in broader public registries.
Simplified Integration and Accelerated Development: A well-managed private registry simplifies the discovery and integration of internal tools for your AI developers. This significantly accelerates the development and deployment of AI-powered applications, fostering innovation.
Good news! Azure API Center can be created for free in any Azure subscription. You can find a detailed guide to help you get started: Inventory and Discover MCP Servers in Your API Center - Azure API Center

Get involved 💡
Your remote MCP server can be discoverable on API Center’s MCP Discovery page today! Bring your MCP server and reach Azure customers! These Microsoft partners are shaping the future of the MCP ecosystem by making their remote MCP servers discoverable via API Center’s MCP Discovery page.
Early Partners:
Atlassian – Connect to Jira and Confluence for issue tracking and documentation
Box – Use Box to securely store, manage and share your photos, videos, and documents in the cloud
Neon – Manage and query Neon Postgres databases with natural language
Pipedream – Add 1000s of APIs with built-in authentication and 10,000+ tools to your AI assistant or agent (coming soon)
Stripe – Payment processing and financial infrastructure tools
If partners would like their remote MCP servers to be featured in our Discover Panel, reach out to us here: GitHub/mcp-center and comment under the following GitHub issue: MCP Server Onboarding Request

Ready to Get Started? 🚀
Modernize your AI strategy and empower your teams with enhanced discovery, security, and governance of agentic tools. Now's the time to explore creating your own private enterprise MCP registry. Check out MCP Center, a public showcase demonstrating how you can build your own enterprise MCP registry - MCP Center - Build Your Own Enterprise MCP Registry - or go ahead and create your Azure API Center today!

Fueling the Agentic Web Revolution with NLWeb and PostgreSQL
We’re excited to announce that NLWeb (Natural Language Web), Microsoft’s open project for natural language interfaces on websites, now supports PostgreSQL. With this enhancement, developers can leverage PostgreSQL and NLWeb to transform any website into an AI-powered application or Model Context Protocol (MCP) server. This integration allows organizations to utilize a familiar, robust database as the foundation for conversational AI experiences, streamlining deployment and maximizing data security and scalability. Soon, autonomous agents, not just human users, will consume and interpret website content, transforming how information is accessed and utilized online. During Microsoft //Build 2025, Microsoft introduced the era of the open agentic web: a new paradigm in which autonomous agents seamlessly interact across individual, organizational, team, and end-to-end business contexts. To realize the future of an open agentic web, Microsoft announced the NLWeb project. NLWeb transforms any website into an AI-powered application with just a few lines of code, by connecting it to an AI model and a knowledge base.
In this post, we’ll cover:
What NLWeb is and how it works with vector databases
How pgvector enables vector similarity search in PostgreSQL for NLWeb
How to get started using NLWeb with Postgres
Let’s dive in and see how Postgres + NLWeb can redefine conversational web interfaces while keeping your data in a familiar, powerful database.

What is NLWeb? A Quick Overview of Conversational Web Interfaces
NLWeb is an open project developed by Microsoft to simplify adding conversational AI interfaces to websites. How NLWeb works under the hood:
Processes existing data/website content that exists in semi-structured formats like Schema.org, RSS, and other data that websites already publish
Embeds and indexes all the content in a vector store (e.g., PostgreSQL with pgvector)
Routes user queries through several processes which handle natural language understanding, reranking, and retrieval
Answers queries with an LLM
The result is a high-quality natural language interface on top of web data, giving developers the ability to let users “talk to” web data. By default, every NLWeb instance is also a Model Context Protocol (MCP) server, allowing websites to make their content discoverable and accessible to agents and other participants in the MCP ecosystem if they choose. Importantly, NLWeb is platform-agnostic and supports many major operating systems, AI models, and vector stores. The NLWeb project is also modular by design, so developers can bring their own retrieval system and model APIs, and define their own extensions.

NLWeb with PostgreSQL
PostgreSQL is now embedded into the NLWeb reference stack as a native retriever, creating a scalable and flexible path for deploying NLWeb instances using open-source infrastructure.

Retrieval Powered by pgvector
NLWeb leverages pgvector, a PostgreSQL extension for efficient vector similarity search, to handle natural language retrieval at scale. By integrating pgvector into the NLWeb stack, teams can eliminate the need for external vector databases. Web data stored in PostgreSQL becomes immediately searchable and usable for NLWeb experiences, streamlining infrastructure and enhancing security. PostgreSQL's robust governance features and wide adoption align with NLWeb’s mission to enable conversational AI for any website or content platform.
With pgvector retrieval built in, developers can confidently launch NLWeb instances on their own databases, with no additional infrastructure required.

Implementation example
We are going to use NLWeb and Postgres to create a conversational AI app and MCP server that will let us chat with content from the Talking Postgres with Claire Giordano Podcast!

Prerequisites
An active Azure account.
Enable and configure the vector (pgvector) extension.
Create an Azure AI Foundry project.
Deploy the models gpt-4.1, gpt-4.1-mini and text-embedding-3-small.
Install Visual Studio Code.
Install the Python extension.
Install Python 3.11.x.
Install the Azure CLI (latest version).

Getting started
All the code and sample datasets are available in this GitHub repository.

Step 1: Set up the NLWeb Server
1. Clone or download the code from the repo.
git clone https://github.com/microsoft/NLWeb
cd NLWeb
2. Open a terminal to create a virtual Python environment and activate it.
python -m venv myenv
source myenv/bin/activate  # Or on Windows: myenv\Scripts\activate
3. Go to the 'code/python' folder in NLWeb to install the dependencies.
cd code/python
pip install -r requirements.txt
4. Go to the project root folder in NLWeb and copy the .env.template file to a new .env file.
cd ../../
cp .env.template .env
5. In the .env file, update the API key you will use for your LLM endpoint of choice and update the Postgres connection string. For example:
AZURE_OPENAI_ENDPOINT="https://TODO.openai.azure.com/"
AZURE_OPENAI_API_KEY="<TODO>"
# If using Postgres connection string
POSTGRES_CONNECTION_STRING="postgresql://<HOST>:<PORT>/<DATABASE>?user=<USERNAME>&sslmode=require"
POSTGRES_PASSWORD="<PASSWORD>"
6. Update your config files (located in the config folder) to make sure your preferred providers match your .env file. There are three files that may need changes.
config_llm.yaml: Update the first line to the LLM provider you set in the .env file. By default it is Azure OpenAI. You can also adjust the models you call here by updating the models noted. By default, we are assuming gpt-4.1 and gpt-4.1-mini.
config_embedding.yaml: Update the first line to your preferred embedding provider. By default it is Azure OpenAI, using text-embedding-3-small.
config_retrieval.yaml: Update the first line to postgres. Set write_endpoint to postgres, and set enabled to 'true' for the postgres retrieval endpoint in the list of possible endpoints.

Step 2: Initialize the Postgres Server
Go to the 'code/python/misc' folder in NLWeb to run the Postgres initializer. NOTE: If you are using Azure Postgres Flexible Server, make sure you have the `vector` extension allow-listed and make sure the database has the vector extension enabled.
cd code/python/misc
python postgres_load.py

Step 3: Ingest Data from the Talking Postgres Podcast
Now we will load some data in our local vector database to test with. We've listed a few RSS feeds you can choose from below. Go to the 'code/python' folder in NLWeb and run the command. The format of the command is as follows (make sure you are still in the 'python' folder when you run this):
python -m data_loading.db_load <RSS URL> <site-name>
Talking Postgres with Claire Giordano Podcast:
python -m data_loading.db_load https://feeds.transistor.fm/talkingpostgres Talking-Postgres
(Optional) You can check the documents table in your Postgres database to verify that all the data from the website was uploaded.
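If you want to inspect the ingested rows directly from Python, a small sanity-check script along these lines works. It assumes the documents table exposes url, site, and embedding columns (adjust names to match your deployment) and that the psycopg driver is installed.

import os
import psycopg

with psycopg.connect(
    os.environ["POSTGRES_CONNECTION_STRING"],
    password=os.environ["POSTGRES_PASSWORD"],
) as conn:
    with conn.cursor() as cur:
        # How many podcast items landed in the table for this site?
        cur.execute("SELECT count(*) FROM documents WHERE site = %s", ("Talking-Postgres",))
        print("rows ingested:", cur.fetchone()[0])

        # Raw pgvector similarity query: '<->' is the Euclidean-distance operator.
        # A real query vector must come from the same embedding model used at ingest
        # time (text-embedding-3-small, 1536 dimensions); zeros are only a placeholder.
        placeholder = "[" + ",".join(["0"] * 1536) + "]"
        cur.execute(
            "SELECT url FROM documents ORDER BY embedding <-> %s::vector LIMIT 5",
            (placeholder,),
        )
        for (url,) in cur.fetchall():
            print(url)

NLWeb's own retrieval path does this for you; the point here is only to confirm that the ingest step populated the table.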
Test NLWeb Server
Start your NLWeb server (again from the 'python' folder):
python app-file.py
Go to http://localhost:8000/
Start asking questions about the Talking Postgres with Claire Giordano Podcast; you may try different modes.
Trying List Mode:
Sample Prompt: "I want to listen to something that talks about the advances in vector search such as DiskANN"
Trying Generate Mode:
Sample Prompt: "What did Shireesh Thota say about the future of Postgres?"

Running NLWeb with MCP
1. If you do not already have it, install MCP in your venv:
pip install mcp
2. Next, configure your Claude MCP server. If you don't have the config file already, you can create the file at the following locations:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
The default MCP JSON file needs to be modified as shown below:
macOS Example Configuration
{
  "mcpServers": {
    "ask_nlw": {
      "command": "/Users/yourname/NLWeb/myenv/bin/python",
      "args": [
        "/Users/yourname/NLWeb/code/chatbot_interface.py",
        "--server",
        "http://localhost:8000",
        "--endpoint",
        "/mcp"
      ],
      "cwd": "/Users/yourname/NLWeb/code"
    }
  }
}
Windows Example Configuration
{
  "mcpServers": {
    "ask_nlw": {
      "command": "C:\\Users\\yourusername\\NLWeb\\myenv\\Scripts\\python",
      "args": [
        "C:\\Users\\yourusername\\NLWeb\\code\\chatbot_interface.py",
        "--server",
        "http://localhost:8000",
        "--endpoint",
        "/mcp"
      ],
      "cwd": "C:\\Users\\yourusername\\NLWeb\\code"
    }
  }
}
Note: For Windows paths, you need to use double backslashes (\\) to escape the backslash character in JSON.
3. Go to the 'code/python' folder in NLWeb and run the command. Enter your virtual environment and start your NLWeb local server. Make sure it is configured to access the data you would like to ask about from Claude.
# On macOS
source ../myenv/bin/activate
python app-file.py
# On Windows
..\myenv\Scripts\activate
python app-file.py
4. Open Claude Desktop. It should ask you to trust the 'ask_nlw' external connection if it is configured correctly. After clicking yes and the welcome page appears, you should see 'ask_nlw' in the bottom right '+' options. Select it to start a query.
5. To query NLWeb, just type 'ask_nlw' in your prompt to Claude. You'll notice that you also get the full JSON script for your results. Remember, you must have your local NLWeb server started to use this option.

Learn More
Vector Store in Azure Postgres Flexible Server
Generative AI in Azure Postgres Flexible Server
NLWeb GitHub repo includes:
A reference server for handling natural language queries
pgvector integration

Getting to Know MCP: An Introduction to the Model Context Protocol (MCP)
Don't miss the upcoming "Let's Learn – MCP" event on Microsoft Reactor on July 24, designed for anyone who wants to get to know the new standard for intelligent agents (the Model Context Protocol) and learn how to put it into practice. The session is in Italian and the demos are in Python, but it is part of a series of live streams available in many languages.

Messaging Connect: Global SMS Coverage Now Available in Azure Communication Services
Messaging Connect is now in public preview. It is a new partner-based model for Azure Communication Services that enables global SMS coverage in over 190 countries, starting with Infobip. It simplifies compliant delivery, removes provisioning complexity, and helps developers reach users (or agents reach humans) worldwide using the Azure Communication Services SMS API they already know.

Register for the Cloud and AI Platforms FY26 Partner Playbook Walkthrough
Join us live on July 24, 2025, at 8:00 AM Pacific Time or 5:00 PM Pacific Time to learn about the exciting go-to-market (GTM) priorities and initiatives planned for FY26 across the Cloud and AI platforms solutions area, aligned to the System Integrator partner audience. We are excited to invite you to an exclusive session where we will unveil the FY26 Cloud & AI Platform Partner Playbook. This comprehensive guide is designed to help you leverage Microsoft's latest investments and strategies to grow your business and enhance customer engagement.
What to Expect:
Business Overview: Gain insights into the FY26 investments and strategic priorities.
Customer Opportunities: Understand the growth drivers and market opportunities for FY26.
Solution Plays: Learn how to migrate and modernize your estate, innovate with Azure AI apps and agents, and unify your data platform.
Partner Skilling: Discover opportunities for partner skilling and certifications to enhance your capabilities.
Why Attend?
Exclusive Insights: Get a detailed overview of the playbook and how it can benefit your business.
Interactive Sessions: Participate in discussions and Q&A sessions to address your specific needs and questions.
We look forward to your participation and collaboration in driving mutual growth and success. Please register now to secure your spot. Can't attend the live event? Register anyway to get notified when the event content is available on demand. Register here for the morning or evening session!

How do I A/B test different versions of my agent?
We tend to think of A/B testing as a marketer’s tool (e.g., headline A vs. headline B). But it’s just as useful for developers building agents. Why? Because most agent improvements are experimental. You’re changing one or more of the following:
the system prompt
the model
the tool selection
the output format
the interaction flow
But without a structured way to test those changes, you’re just guessing. You might think your updated version is smarter or more helpful…but until you compare, you won’t know! A/B testing helps you turn instincts into insight and gives you real data to back your next decision.
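One lightweight way to get that data is to split traffic deterministically between two variants and log an outcome per conversation. A minimal, framework-agnostic sketch (the variant definitions, the call_agent hook, and the logged metrics are placeholders for your own agent stack):

import hashlib
import json
import time

VARIANTS = {
    "A": {"system_prompt": "You are a concise assistant."},
    "B": {"system_prompt": "You are a friendly assistant that asks clarifying questions."},
}

def assign_variant(user_id: str) -> str:
    # Stable hash so the same user always sees the same variant (50/50 split).
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "A" if bucket < 50 else "B"

def run_experiment(user_id: str, message: str, call_agent) -> dict:
    variant = assign_variant(user_id)
    start = time.time()
    reply = call_agent(VARIANTS[variant]["system_prompt"], message)  # your agent call
    record = {
        "user": user_id,
        "variant": variant,
        "latency_s": round(time.time() - start, 3),
        "reply_chars": len(reply),
        # Add the outcomes you actually care about: task success, thumbs-up, escalation.
    }
    print(json.dumps(record))  # in practice, send this to your analytics store
    return record

Once enough conversations are logged per variant, comparing the outcome metric across A and B tells you whether the change actually helped.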
Introducing API Management Support in the Azure SRE Agent

In May, the Azure SRE Agent was introduced - an AI-powered Site Reliability Engineering (SRE) assistant built to help customers identify, diagnose, and resolve issues across their Azure environments faster and with less manual effort. Today, we’re excited to highlight how the SRE Agent now extends these capabilities to Azure API Management (APIM), delivering deep operational visibility, guided troubleshooting, and intelligent remediation for customers running critical APIs at scale. API Management sits at the center of API application architectures, acting as a unified entry point for services, enforcing security, transforming requests, and routing traffic to backends. Ensuring the reliability of this layer is crucial - but as systems grow more distributed, it becomes harder to isolate failures, detect misconfigurations, or trace degraded performance to its root cause. The SRE Agent helps APIM users stay ahead of these challenges by providing both diagnostics and remediation tailored for API Management environments. You can bring API Management questions or concerns directly to the SRE Agent, such as:
“My API Management is giving me 503 errors”
“We updated our policies yesterday, and now the backend is timing out.”
“Can you help me figure out why requests to our billing API are failing?”
“Show me recent changes to our APIM instance.”
“What’s the failure rate on our orders operation this week?”

Proactively Monitor API Management App Health
The SRE Agent continuously monitors the overall health of your API Management service. It tracks key metrics such as CPU utilization, latency, error rates, and availability over time, surfacing any abnormal patterns and offering insight into capacity. This helps teams anticipate issues before they impact users and plan for scaling with confidence.

Visualize Backend Connections and Health
One of the most valuable APIM capabilities introduced with the agent is backend mapping. The agent can identify which backend services each API operation routes to, and visualize the health of those backends. This makes it much easier to answer operational questions like:
“Which backend is responsible for the spike in errors on my /checkout API?”
“Are there any timeouts happening from APIM to service X?”

Drill into Backend App Issues
If the root cause lies in a backend application - whether it’s a service hosted in Azure Container Apps, Azure Functions, App Service, or another compute platform - the SRE Agent can go further. It analyzes backend-specific metrics such as memory and CPU usage, response time distribution, recent deployments, and any logged exceptions. The agent correlates this backend behavior with the observed degradation at the API Management layer to provide a full-stack view of what’s happening. For example: “Your backend container app failed 37% of requests in the last hour due to out-of-memory errors. This correlated with a 5xx spike at the /stock/check API operation.”

Detect and Fix Configuration Issues
The SRE Agent also helps uncover common configuration issues that lead to downtime or silent failures, including:
Malformed API policies
Missing or misapplied network rules (NSGs, VNet)
Incorrect scaling configuration or quota enforcement
But it doesn’t stop at diagnostics. Where safe and possible, the agent can also perform remediation with your approval - for example, by adjusting NSG rules, scaling your API Management instance, and so on.
Built for Teams that Depend on APIM
If API Management is critical to your infrastructure, the SRE Agent gives you an extra layer of confidence - offering the clarity and tooling needed to maintain uptime, reduce operational overhead, and catch issues before they escalate. The APIM-specific capabilities of the SRE Agent are now available and can be used in any SRE Agent resource (currently in preview). Sign up for preview access. We’re excited to bring this level of intelligence and automation to APIM, and we’re looking forward to your feedback as we continue to evolve the experience.
Additional resources
Azure SRE Agent overview (preview) | Microsoft Learn
Introducing Azure SRE Agent | Microsoft Community Hub