
Azure AI Foundry Integration

Version: 0.5.3 (beta)
Compatible Kibana version(s): 9.0.0 or higher
Supported Serverless project types: Security, Observability
Subscription level: Basic
Level of support: Elastic

Azure AI Foundry provides a comprehensive suite of AI services that enable developers to build, deploy, and manage AI solutions efficiently. The Azure AI Foundry integration collects metrics through Azure Monitor, facilitating robust monitoring and operational insights.

The Azure AI Foundry logs data stream captures the gateway log events. These are the supported Azure log categories:

Data Stream    Log Category
logs           Audit
logs           RequestResponse
logs           ApiManagementGatewayLogs

Refer to the Azure Logs page for more information on how to set up and use this integration.

Azure AI Foundry provides native logging and monitoring to track the telemetry of the service; the Audit and RequestResponse log categories fall under this native logging. The default logging does not capture the inputs and outputs of the service, but it is useful for confirming that the service operates as expected.

API Management services provide advanced logging capabilities, and the ApiManagementGatewayLogs category falls under this advanced logging. It is not available directly in the Azure AI Foundry service itself: you have to set up an API Management service in Azure to access the Azure AI Foundry models. When the setup is complete, add a diagnostic setting for the API Management service.

For more information on how to implement a comprehensive solution that uses API Management services to monitor Azure AI Foundry services, check the AI Foundry API page.

Diagnostic settings

Enable the Logs related to ApiManagement Gateway category to stream the logs to the event hub; a scripted sketch of this step follows the diagram below.

┌──────────────────┐      ┌──────────────┐     ┌─────────────────┐
│   APIM service   │      │  Diagnostic  │     │    Event Hub    │
│    <<source>>    │─────▶│   settings   │────▶│ <<destination>> │
└──────────────────┘      └──────────────┘     └─────────────────┘
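If you prefer to script this step rather than use the portal, the diagnostic setting can also be created with the azure-mgmt-monitor Python SDK. The sketch below is a minimal, illustrative example only: the subscription, resource group, APIM instance, Event Hub identifiers, and the setting name are placeholders, and the "GatewayLogs" category string is an assumption based on the "Logs related to ApiManagement Gateway" option, so verify the exact category and authorization rule in your own diagnostic settings blade.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

# Placeholder identifiers -- substitute your own subscription, APIM instance,
# and Event Hub details before running.
subscription_id = "{guid}"
apim_resource_id = (
    "/subscriptions/{guid}/resourceGroups/{resource-group-name}"
    "/providers/Microsoft.ApiManagement/service/{apim-name}"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Stream the gateway log category to an Event Hub. "GatewayLogs" is assumed to
# be the diagnostic category behind "Logs related to ApiManagement Gateway";
# confirm it in the portal for your resource.
client.diagnostic_settings.create_or_update(
    resource_uri=apim_resource_id,
    name="stream-gateway-logs-to-eventhub",  # hypothetical setting name
    parameters={
        "logs": [{"category": "GatewayLogs", "enabled": True}],
        "event_hub_authorization_rule_id": (
            "/subscriptions/{guid}/resourceGroups/{resource-group-name}"
            "/providers/Microsoft.EventHub/namespaces/{namespace}"
            "/authorizationRules/RootManageSharedAccessKey"
        ),
        "event_hub_name": "{event-hub-name}",
    },
)
```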

The metrics data stream collects the cognitive service metrics that are specific to the Azure AI Foundry service; a query sketch follows the metric list below.

Model HTTP Request Metrics:

  • Requests: Total number of calls made to the model API over a period of time.

Model HTTP Latency Metrics:

  • Latency: Measures the time to the first byte of the response, the time to the last byte of the response, and the overall request latency.

Model Usage Metrics:

  • Token Usage: Number of prompt tokens processed (input), completion tokens generated (output), and the total tokens for a model.
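The integration retrieves these values through the Azure Monitor metrics API. As a rough illustration of what is collected, the sketch below queries one of the model metrics directly with the azure-monitor-query Python client; the resource ID and the metric name string ("Requests") are placeholders, so adjust them to the metrics exposed by your Azure AI Foundry resource.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Placeholder fully qualified resource ID of an Azure AI Foundry
# (Cognitive Services) account.
resource_id = (
    "/subscriptions/{guid}/resourceGroups/{resource-group-name}"
    "/providers/Microsoft.CognitiveServices/accounts/{resource-name}"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Query the request count with the same 5-minute timegrain the integration uses.
response = client.query_resource(
    resource_id,
    metric_names=["Requests"],  # metric name is illustrative
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.TOTAL],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.total)
```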

Before you start, check the Authentication and costs section.

Follow these step-by-step instructions on how to set up an Azure metrics integration.

Period:: (string) Reporting interval. Metrics will have a timegrain of 5 minutes, so the Period configuration option for azure_ai_foundry should have a value of 300s or a multiple of 300s for relevant results (see the sketch after these options).

Resource IDs:: ([]string) The fully qualified IDs of the resources, including the resource name and resource type. They have the format /subscriptions/{guid}/resourceGroups/{resource-group-name}/providers/{resource-provider-namespace}/{resource-type}/{resource-name}. Should return a list of resources.

Resource Groups:: ([]string) This option returns all Azure AI Foundry services inside the specified resource groups.

If no resource filter is specified, then all Azure AI Foundry services inside the entire subscription will be considered.
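As a quick illustration of the two constraints above, here is a small sketch with two hypothetical helpers (the names are my own, not part of the integration): one checks that a Period value is a multiple of 300s, and one assembles a fully qualified resource ID in the documented format.

```python
def is_valid_period(period: str) -> bool:
    """Return True if a Period such as "300s" or "900s" is a multiple of 300s."""
    if not period.endswith("s"):
        return False
    seconds = int(period[:-1])
    return seconds > 0 and seconds % 300 == 0


def resource_id(subscription: str, resource_group: str, name: str) -> str:
    """Build a fully qualified Cognitive Services resource ID (illustrative)."""
    return (
        f"/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.CognitiveServices/accounts/{name}"
    )


assert is_valid_period("300s") and is_valid_period("900s")
assert not is_valid_period("450s")
```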

The primary aggregation value will be retrieved for all the metrics contained in the namespaces. The aggregation options are avg, sum, min, max, total, count.


The Azure AI Foundry metrics provide insights into the performance and usage of your AI resources. These metrics help in monitoring and optimizing your deployments.

ECS Field Reference

For more details on ECS fields, check the ECS Field Reference documentation.