From the course: Microsoft Azure AI Essentials: Workloads and Machine Learning on Azure

Introduction to Azure AI Content Safety

- [Instructor] Content moderation involves monitoring and managing user-generated content on digital platforms to ensure it follows guidelines, regulations, and ethical standards. Azure AI Content Safety is an AI service designed to detect harmful user-generated and AI-generated content in apps and services. It's ideal for user prompts submitted to generative AI services and the resulting content; online marketplaces that moderate product catalogs and user-generated content; gaming companies that manage game artifacts and chat rooms; social messaging platforms that moderate images and text from users; enterprise media companies that centralize content moderation; and K-12 education providers that filter out inappropriate content for students and educators. Azure AI Content Safety offers the following service features. Analyze text scans text for sexual content, violence, hate, and self-harm with multi-level severity assessments. Analyze image scans images for the same categories and severity…
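The per-category severity assessments mentioned above are what an application typically acts on. As a minimal sketch, assuming a response shaped like the one returned by the `azure-ai-contentsafety` Python SDK's `analyze_text` call (a list of per-category severity scores), a moderation decision might be applied like this; the threshold values and the `moderate` helper are illustrative, not part of the service:

```python
# Illustrative moderation-decision helper for Azure AI Content Safety results.
# The four category names match the service's harm categories; the threshold
# of 2 per category is an assumed policy choice, not a service default.

DEFAULT_THRESHOLDS = {"Hate": 2, "SelfHarm": 2, "Sexual": 2, "Violence": 2}

def moderate(categories_analysis, thresholds=DEFAULT_THRESHOLDS):
    """Given [{"category": str, "severity": int}, ...], return
    ("block", [flagged categories]) if any severity meets its
    category threshold, otherwise ("allow", [])."""
    flagged = [
        c["category"]
        for c in categories_analysis
        if c["severity"] >= thresholds.get(c["category"], 2)
    ]
    return ("block", flagged) if flagged else ("allow", [])

# Example: a result where hate speech was detected at severity 4
decision = moderate([
    {"category": "Hate", "severity": 4},
    {"category": "Violence", "severity": 0},
])
# decision == ("block", ["Hate"])
```

In a real app, the `categories_analysis` list would come from the service response rather than being constructed by hand, and the thresholds would be tuned per category to match the platform's moderation policy.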
