Azure Stream Analytics
Automating Data Vault processes on Microsoft Fabric with VaultSpeed
This article is authored by Jonas De Keuster from VaultSpeed and co-authored by Michael Olschimke, co-founder and CEO of Scalefree International GmbH, and Trung Ta, senior BI consultant at Scalefree International GmbH. Technical review by Ian Clarke and Naveed Hussain, GBBs (Cloud Scale Analytics) for EMEA at Microsoft.

Businesses often struggle to align their understanding of processes and products across the disparate systems that run their operations. In our previous blogs in this series, we explored the advantages of Data Vault as a methodology and why it is increasingly recognized for its automation-friendly approach to modern data warehousing. Data Vault's modular structure, scalability, and flexibility address the challenges of integrating diverse and evolving data sources. The key to successfully implementing a Data Vault, however, lies in automation.

Data Vault's pattern-based modeling - organized around hubs, links, and satellites - provides a standardized framework well suited to integrating data from horizontally scattered operational source systems. Automation tools like VaultSpeed build on this methodology by simplifying the generation of Data Vault structures, streamlining workflows, and enabling rapid delivery of analytics-ready data solutions. By combining the strengths of Data Vault with VaultSpeed's automation capabilities, organizations can overcome the inefficiencies of traditional ETL processes and achieve scalable, adaptable data integration.

Examples of such operational systems include Microsoft Dynamics 365 for CRM and ERP, SAP for enterprise resource planning, and Salesforce for customer data. Attempts to harmonize this complexity have historically relied on pre-built industry data models. These models often fell short, requiring significant customization and failing to accommodate unique business processes.

Different approaches to Data Integration

Industry data models offer a standardized way to structure data, providing a head start for organizations with well-aligned business processes. They work well in stable, regulated environments where consistency is key. For organizations dealing with diverse sources and fast-changing requirements, however, Data Vault offers greater flexibility: its modular, scalable approach supports evolving data landscapes without the need to reshape existing models. Both approaches aim to streamline integration; Data Vault simply offers more adaptability when complexity and change are the norm. Choosing the right approach therefore depends on the use case.

Tackling data complexity with automation

Integrating data from horizontally distributed sources is one of the biggest challenges data engineers face. VaultSpeed addresses this by connecting the physical metadata from source systems with the business's conceptual data model, creating a "town plan" for building a Data Vault model. This town plan serves as the bedrock for automating the various stages of the data pipeline. By aligning the technical and business perspectives on the data, VaultSpeed enables the automated generation of logical and physical data models. This streamlines the design process and ensures consistency between the conceptual understanding of the data and its physical implementation.
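To make the hub, link, and satellite pattern concrete, here is a minimal T-SQL sketch of the three entity types for a hypothetical customer/order slice of a model. The table and column names are illustrative, not actual VaultSpeed output; they simply follow common Data Vault conventions (hash keys, load dates, record sources).

```sql
-- Hub: one row per unique business key (names are illustrative).
CREATE TABLE raw_vault.hub_customer (
    hk_customer    CHAR(32)     NOT NULL,  -- hash of the business key
    customer_id    VARCHAR(50)  NOT NULL,  -- business key from the source
    load_date      DATETIME2    NOT NULL,
    record_source  VARCHAR(100) NOT NULL
);

-- Link: one row per unique relationship between hubs.
CREATE TABLE raw_vault.lnk_customer_order (
    hk_customer_order  CHAR(32)     NOT NULL,  -- hash of the combined keys
    hk_customer        CHAR(32)     NOT NULL,
    hk_order           CHAR(32)     NOT NULL,
    load_date          DATETIME2    NOT NULL,
    record_source      VARCHAR(100) NOT NULL
);

-- Satellite: descriptive attributes, historized by load date.
CREATE TABLE raw_vault.sat_customer_details (
    hk_customer    CHAR(32)     NOT NULL,
    load_date      DATETIME2    NOT NULL,
    hash_diff      CHAR(32)     NOT NULL,  -- change-detection hash
    customer_name  VARCHAR(200),
    customer_city  VARCHAR(100),
    record_source  VARCHAR(100) NOT NULL
);
```

Hubs hold only business keys, links only relationships, and satellites all descriptive, historized attributes - a separation repetitive enough that the structures can be generated from metadata rather than hand-written.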
Furthermore, VaultSpeed's automation extends to the generation of transformation code: the code that converts data from its source format into the structure defined by the Data Vault model. Automating this step reduces the potential for errors and accelerates the development of the data integration pipeline. In addition to data models and transformation code, VaultSpeed automates workflow orchestration - defining and managing the tasks required to extract, transform, and load data into the Data Vault - so the integration process executes reliably and efficiently.

How VaultSpeed automates integration

The following section examines the steps in the VaultSpeed workflow and how it combines metadata-driven and data-driven modeling approaches to streamline data integration and automate the stages of the data pipeline (a sketch of the kind of loading code this produces appears after the list):

1. Harvest metadata: VaultSpeed collects metadata from source systems such as OneLake, Azure SQL, SAP, and Dynamics 365, capturing schema details, relationships, and dependencies.
2. Align with conceptual models: Using a business's conceptual data model as a guiding framework, VaultSpeed ensures that physical source metadata is mapped consistently to the target Data Vault structure.
3. Generate logical and physical models: VaultSpeed leverages its metadata repository and automation templates to produce fully defined logical and physical Data Vault models, including hubs, links, and satellites.
4. Automate code creation: Once the models are defined, VaultSpeed generates the necessary transformation and workflow code from templates with embedded Data Vault standards and conventions. This ensures seamless data ingestion and integration and consistent population of the Data Vault model.

By automating these steps, VaultSpeed eliminates the manual effort required for traditional data modeling and integration, reducing errors and addressing the inefficiencies of traditional ETL. Because the approach is model-driven, the code is always in sync with the data model.
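As a rough illustration of the kind of transformation code such templates emit, here is a standard Data Vault hub load in T-SQL: insert-only, and idempotent because only business keys not yet present in the hub are added. The staging table and its columns are assumed for the example.

```sql
-- Standard hub-loading pattern: insert only unseen business keys.
INSERT INTO raw_vault.hub_customer (hk_customer, customer_id, load_date, record_source)
SELECT DISTINCT
    stg.hk_customer,
    stg.customer_id,
    stg.load_date,
    stg.record_source
FROM staging.customer AS stg
WHERE NOT EXISTS (
    SELECT 1
    FROM raw_vault.hub_customer AS hub
    WHERE hub.hk_customer = stg.hk_customer
);
```

Link and satellite loads follow equally mechanical patterns (satellites additionally compare a hash diff to detect changed attributes), which is why this layer lends itself so well to template-based generation.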
Unified integration with Microsoft Fabric

Microsoft Fabric offers a robust ecosystem for data ingestion, storage, and analytics, and VaultSpeed embeds seamlessly within it to deliver an efficient, automated data pipeline. Here is how the process works:

1. Ingestion (extract and load): Tools like ADF, Fivetran, or OneLake replication bring data from various sources into Fabric, handling the extraction and replication of raw data from operational systems. Microsoft Fabric also supports mirrored databases, enabling real-time data replication from sources like Cosmos DB, Azure SQL, or application data into the Fabric environment. This keeps data synchronized across the ecosystem and provides a consistent foundation for downstream modeling and analytics.

2. Data repository or platform: Microsoft Fabric is the data platform, providing the infrastructure for storing, managing, and securing the ingested data. Fabric uniquely supports both warehouse and lakehouse experiences, bringing them together under a unified data architecture. Organizations can combine structured, transactional data with unstructured or semi-structured data in a single platform, eliminating silos and enabling broader analytics use cases.

3. Modeling and transformation: VaultSpeed takes over at this stage, using its automation to model and transform data into a Data Vault structure - creating hubs, links, and satellites while ensuring alignment with business taxonomies. Unlike traditional ETL tools, VaultSpeed is not involved in the runtime execution of transformations. Instead, it generates code that runs within Microsoft Fabric. This approach improves performance, reduces vendor lock-in, and enhances security, since no data flows through VaultSpeed itself. By focusing exclusively on model-driven automation, VaultSpeed lets organizations keep full control over their data processing while still benefiting from automation efficiencies. Additionally, Fabric's VertiPaq engine manages the transformation workloads automatically, ensuring good performance without extensive manual tuning - a key capability in a Data Vault context, where performance is critical for handling large data volumes and complex transformations. This simplifies operations for data engineers and keeps query performance efficient even as data volumes and complexity grow.

4. Consume: The integrated data layer within Microsoft Fabric serves multiple consumption paths. Tools like Power BI turn it into actionable insights through analytics dashboards, and the same data foundation can drive AI use cases such as machine learning models or intelligent applications.

By connecting ingestion tools, a unified data platform, and analytics or AI solutions, VaultSpeed ensures a streamlined, integrated workflow that maximizes the value of the Microsoft Fabric ecosystem.

Loading at multiple speeds: real-time Data Vaults with Fabric

Loading data into a Data Vault often requires balancing traditional batch processes with the demands of real-time ingestion within a unified model. Microsoft Fabric's event-driven tools, such as Data Activator, let organizations process data streams in real time while still supporting traditional batch loads. VaultSpeed complements these capabilities by ensuring that both modes of ingestion feed seamlessly into the same Data Vault model, eliminating the need for separate architectures like the Lambda pattern. Key capabilities for a real-time Data Vault include:

- Event-driven updates: Automatically trigger incremental loads into the Data Vault when changes occur in Cosmos DB, OneLake, or other sources.
- Automated workflow orchestration: VaultSpeed's Flow Management Control (FMC) automates the entire ingestion, transformation, and loading workflow, including delta detection, incremental updates, and batch processes, ensuring optimal efficiency regardless of the speed of data arrival. FMC integrates natively with Azure Data Factory (ADF) for seamless orchestration within the Microsoft ecosystem; for more complex or distributed workflows it also supports Apache Airflow, providing flexibility in managing a wide range of data pipelines.
- Seamless integration: Maintain synchronized pipelines for historical and real-time data within the Fabric environment. FMC intelligently manages multiple data streams, dynamically adjusting to workload demands to support both high-volume batch loads and real-time event-driven updates.

These capabilities ensure analytics dashboards reflect the latest data, delivering immediate value to decision-makers.

Automating the gold layer and delivering data products at scale

Power BI is a cornerstone of Microsoft Fabric, and VaultSpeed makes it easier for data modelers to connect the dots. By automating the creation of the gold layer, VaultSpeed enables seamless integration between Data Vaults and Power BI. Benefits for data teams (a sketch of a gold-layer view follows the list):

- Automated gold layer: VaultSpeed automates the creation of the gold layer, including templates for star schemas, One Big Table (OBT), and other analytics-ready structures. These templates let businesses generate consistent, scalable presentation layers without manual intervention.
- Accelerated time to insight: By reducing manual preparation steps, VaultSpeed enables teams to deliver dashboards and reports quickly, ensuring faster access to actionable insights.
- Deliver data products: The ability to automate and standardize star schemas and other presentation models lets organizations deliver analytics-ready data products at scale, efficiently meeting the needs of multiple business domains.
- Improved data governance: VaultSpeed's lineage tracking ensures compliance and transparency, providing full traceability from raw data to the presentation layer.
- No-code automation: Eliminate custom scripting, freeing up time for innovation and higher-value tasks.
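As referenced above, here is a sketch of what a generated gold-layer object can look like underneath: a virtual customer dimension built over the hub and satellite from the earlier examples. VaultSpeed generates equivalents from its templates, so treat the names and the latest-record logic as illustrative only.

```sql
-- Customer dimension as a virtual (view-based) gold-layer object:
-- the hub supplies the key, the satellite the latest attribute values.
CREATE VIEW gold.dim_customer AS
SELECT
    hub.hk_customer AS customer_key,
    hub.customer_id,
    sat.customer_name,
    sat.customer_city
FROM raw_vault.hub_customer AS hub
JOIN raw_vault.sat_customer_details AS sat
    ON sat.hk_customer = hub.hk_customer
WHERE sat.load_date = (
    SELECT MAX(s2.load_date)  -- most recent satellite row per customer
    FROM raw_vault.sat_customer_details AS s2
    WHERE s2.hk_customer = sat.hk_customer
);
```

Because the dimension is a view, it stays in sync with the Raw Data Vault automatically; when performance requires it, PIT tables can replace the correlated subquery, as the next article in the series discusses.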
Conclusion

Integrating VaultSpeed and Microsoft Fabric redefines how data modelers and engineers approach Data Vault 2.0. This partnership unlocks the full potential of modern data ecosystems by automating workflows, enabling real-time insights, and streamlining analytics. If you're ready to transform your data workflows, VaultSpeed and Microsoft Fabric provide the tools you need to succeed. The following article will focus on the DataOps side of automation.

Further reading

- Automating common understanding: Integrating different data source views into one comprehensive perspective
- Why Data Vault is the best model for data warehouse automation: read the eBook
- The Elephant in the Fridge by John Giles: a great reference on conceptual data modeling for Data Vault

About VaultSpeed

VaultSpeed empowers enterprises to deliver data products at scale through advanced automation for modern data ecosystems, including data lakehouse, data mesh, and fabric architectures. The no-code platform eliminates nearly all traditional ETL tasks, delivering significant improvements in automation across areas like data modeling, engineering, testing, and deployment. With seamless integration to platforms like Microsoft Fabric or Databricks, VaultSpeed enables organizations to automate the entire software development lifecycle for data products, accelerating delivery from design to deployment. VaultSpeed addresses inefficiencies in traditional data processes, transforming how data engineers and business users collaborate to build flexible, scalable data foundations for AI and analytics.

About the Authors

Jonas De Keuster is VP Product at VaultSpeed. He has close to 10 years of experience as a DWH consultant in industries such as banking, insurance, healthcare, and HR services, gained before joining the data automation vendor. This background helps him understand current customer needs and engage in conversations with members of the data industry.

Michael Olschimke is co-founder and CEO of Scalefree International GmbH, a European Big Data consulting firm. The firm empowers clients across all industries to use Data Vault 2.0 and similar Big Data solutions. Michael has trained thousands of data warehousing professionals in industry, taught academic classes, and published regularly on these topics.

Trung Ta is a senior BI consultant at Scalefree International GmbH. With over 7 years of experience in data warehousing and BI, he has advised Scalefree's clients across industries (banking, insurance, government, etc.) and of various sizes in establishing and maintaining their data architectures.
Trung's expertise lies within Data Vault 2.0 architecture, modeling, and implementation, with a specific focus on data automation tools.

Delivering Information with Azure Synapse and Data Vault 2.0
Data Vault is designed to integrate data from multiple sources, deconstruct the data into its fundamental components, and store and organize it so that any target structure can be derived quickly. This article focused on generating information models, often dimensional models, using virtual entities, which are used in the data architecture to deliver information. After all, dimensional models are easier for dashboarding solutions to consume, and business users know how to use dimensions and facts to aggregate their measures. However, PIT and bridge tables are usually needed to maintain the desired level of performance. They also simplify the implementation of dimension and fact entities and, for those reasons, are frequently found in Data Vault-based data platforms. This article completes the information delivery part of the series; the following articles will focus on the automation aspects of Data Vault modeling and implementation.
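To illustrate the performance point: a PIT (point-in-time) table pre-computes, per hub key and snapshot date, which satellite row was current at that moment, so a dimension query becomes plain equi-joins instead of correlated subqueries. A hedged sketch, reusing the illustrative customer entities from the previous article - the PIT table and its pointer column are assumptions, not a fixed schema:

```sql
-- The PIT table stores, per hub key and snapshot date, the load_date
-- of the satellite row that was current at that snapshot.
SELECT
    pit.snapshot_date,
    hub.customer_id,
    sat.customer_name,
    sat.customer_city
FROM raw_vault.pit_customer AS pit
JOIN raw_vault.hub_customer AS hub
    ON hub.hk_customer = pit.hk_customer
JOIN raw_vault.sat_customer_details AS sat
    ON  sat.hk_customer = pit.hk_customer
    AND sat.load_date   = pit.sat_customer_details_ldts  -- pointer to the current row
WHERE pit.snapshot_date = '2024-01-01';
```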
Azure Stream Analytics Virtual Network Integration Goes GA!

We are thrilled to announce that the highly anticipated capability of running your Azure Stream Analytics (ASA) job in an Azure Virtual Network (VNET) is now generally available (GA)! This feature, previously in public preview, is set to revolutionize how you secure and manage your ASA jobs by leveraging the power of virtual networks.

What Does This Mean for You?

With VNET integration, you can now lock down access to your ASA jobs within your virtual network infrastructure. This provides enhanced security through network isolation, ensuring that your data remains protected and accessible only within your private network. By deploying a containerized instance of your ASA job inside your VNET, you can privately access your resources using:

- Private endpoints: Connect your VNET-injected ASA job to your data sources privately via Azure Private Link. Your data traffic remains within the Azure backbone network, reducing exposure to the public internet and enhancing security.
- Service endpoints: Connect your data sources directly to your VNET-injected ASA job, simplifying the network architecture through direct connectivity.
- Service tags: Manage network security by defining rules that allow or deny traffic to Azure Stream Analytics, controlling which services can communicate with your ASA jobs.

Overall, VNET integration enhances the security of your ASA jobs by leveraging Azure's robust networking features.

Expanded Regional Availability

We are also excited to announce that this capability is now available in additional regions! Along with the existing regions (West US, Canada Central, East US, East US 2, Central US, West Europe, and North Europe), you can now enable VNET integration in the following regions:

- Australia East
- France Central
- North Central US
- Southeast Asia
- Brazil South
- Japan East
- UK South
- Central India

These regions were added in response to customer feedback. If you have suggestions for additional regions, please complete this form: https://forms.office.com/r/NFKdb3W6ti?origin=lprLink This expansion ensures that more customers around the globe can benefit from the enhanced security and network isolation of VNET integration.

Getting Started

To get started with VNET integration for your ASA jobs, follow these steps:

1. Set up your VNET: Create or use an existing Azure Virtual Network.
2. Create a subnet: Add a dedicated subnet for your ASA job within the VNET.
3. Set up an Azure NAT Gateway or disable outbound connectivity: Enhance security and reliability by setting up an Azure NAT Gateway or disabling default outbound connectivity.
4. Associate a storage account: Ensure you have a General Purpose V2 (GPV2) storage account linked to your ASA job.
5. Configure your ASA job:
   - Azure portal: Go to Networking, select "Run this job in virtual network," and follow the prompts to configure and save.
   - Visual Studio Code: In the JobConfig.json file, set up the VirtualNetworkConfiguration to reference the subnet.
6. Check permissions: Make sure you have the necessary role-based access control permissions on the subnet or higher.

For detailed instructions and requirements, refer to the official documentation: Run your Stream Analytics in Azure virtual network - Azure Stream Analytics | Microsoft Learn.

Join the Revolution

Stay tuned for more updates and exciting features as we continue to innovate and improve Azure Stream Analytics.
Our other Ignite releases include "Azure Stream Analytics Kafka Connectors is Now Generally Available!" If you have any questions or need assistance, feel free to reach out to us at askasa@microsoft.com. Happy streaming!

Azure Stream Analytics Kafka Connectors is Now Generally Available!
We are excited to announce that Kafka input and output with Azure Stream Analytics is now generally available! This marks a major milestone in our commitment to empowering our users with robust and innovative solutions. With the Stream Analytics Kafka connectors, users can natively read and write data to and from Kafka topics, taking full advantage of Stream Analytics' rich capabilities and features even when the data resides outside of Azure. Azure Stream Analytics is a job service, so you do not have to spend time managing clusters, and downtime concerns are alleviated with a 99.99% SLA (Service Level Agreement) at the job level.

Key Benefits:

- A Stream Analytics job can ingest Kafka events from anywhere, process them, and output them to any number of Azure services as well as to other Kafka clusters.
- No need for workarounds such as MirrorMaker or Kafka extensions with Azure Functions to process Kafka data with Azure Stream Analytics.
- The solution is low-code and entirely managed by the Azure Stream Analytics team at Microsoft.

Getting Started:

To get started with the Stream Analytics Kafka input and output connectors, refer to these links:

- Stream data from Kafka into Azure Stream Analytics
- Kafka output from Azure Stream Analytics

You can add Kafka input or output to a new or an existing Stream Analytics job in a few simple clicks. To add Kafka input, go to Inputs under Job topology, click Add input, and select Kafka. For Kafka output, go to Outputs under Job topology, click Add output, and select Kafka. Next, you will be presented with the Kafka connection configuration; once it is filled in, you can test the connection with the Kafka cluster.

VNET Integration:

You can connect to a Kafka cluster from Azure Stream Analytics whether it is in the cloud or on premises with a public endpoint. You can also securely connect to a Kafka cluster inside a virtual network with Azure Stream Analytics. Visit the "Run your Azure Stream Analytics job in an Azure Virtual Network" documentation for more information.

Automated deployment with ARM templates

ARM templates allow for quick and automated deployment of Stream Analytics jobs. To deploy a Stream Analytics job with Kafka input or output quickly and automatically, include the following sample snippet in the job's ARM template:

```json
"type": "Kafka",
"properties": {
    "consumerGroupId": "string",
    "bootstrapServers": "string",
    "topicName": "string",
    "securityProtocol": "string",
    "securityProtocolKeyVaultName": "string",
    "sasl": {
        "mechanism": "string",
        "username": "string",
        "password": "string"
    },
    "tls": {
        "keystoreKey": "string",
        "keystoreCertificateChain": "string",
        "keyPassword": "string",
        "truststoreCertificates": "string"
    }
}
```

We can't wait to see what you'll build with the Azure Stream Analytics Kafka input and output connectors. Try it out today and let us know your feedback. Stay tuned for more updates as we continue to innovate and enhance this feature.

Call to Action: For direct help with using the Azure Stream Analytics Kafka input, please reach out to askasa@microsoft.com. To learn more about Azure Stream Analytics, click here.
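To round things out, here is a minimal sketch of a Stream Analytics query that reads from a Kafka input and writes to a Kafka output. The aliases (KafkaInput, KafkaOutput) and payload fields are illustrative - they are whatever you named when configuring the inputs and outputs in the job topology.

```sql
-- Filter events from one Kafka topic into another.
-- Aliases and field names are illustrative, not fixed values.
SELECT
    deviceId,
    temperature
INTO KafkaOutput
FROM KafkaInput
WHERE temperature > 90
```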
Generally Available: Protocol Buffers (Protobuf) with Azure Stream Analytics

We are excited to announce that the Azure Stream Analytics built-in Protobuf deserializer is now generally available! The built-in deserializer works out of the box, aligning Protobuf with the other event formats we support, including JSON and Avro. Using Protobuf, you can handle data streams in a compact binary format, making it ideal for high-throughput, low-latency applications.

Protobuf

Protocol Buffers (Protobuf) is a language-neutral, platform-neutral mechanism for serializing structured data, developed by Google. It is widely used for communication between services or for saving data in a compact and efficient format.

Configure your Stream Analytics job

The Protobuf deserializer is simple to configure. When setting up your Stream Analytics input, select Protobuf as the event serialization format, upload the Protobuf definition file (.proto file), and complete the configuration by specifying the message type and prefix style.

Steps to configure a Stream Analytics job to deserialize events in Protobuf:

1. After creating your job, go to Inputs.
2. Click Add input and choose the input to configure.
3. Under Event serialization format, select Protobuf from the dropdown list.
4. Provide the following settings:
   - Protobuf definition file: a file that specifies the structure and data types of your Protobuf events.
   - Message type: in a .proto schema, each message represents a data structure with specific fields; add the message type you want to deserialize.
   - Prefix style: the setting that determines how the length of a message is encoded, so Protobuf events are deserialized correctly.

We can't wait to see what you'll build with the Azure Stream Analytics built-in Protobuf deserializer. Try it out today and let us know your feedback. Stay tuned for more updates as we continue to innovate and enhance this feature.

Call to Action: For direct help with using the Azure Stream Analytics Protobuf input, please reach out to askasa@microsoft.com. To learn more about Azure Stream Analytics, click here. To learn more about the Azure Stream Analytics Protobuf deserializer, click here.
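Once the input is configured, deserialized Protobuf events are queried like any other Stream Analytics input. A small sketch, assuming a hypothetical .proto message with deviceId, temperature, and eventTime fields and an output named PowerBIOutput:

```sql
-- Field names come from the uploaded .proto definition (assumed here);
-- eventTime is assumed to be a datetime field usable for TIMESTAMP BY.
SELECT
    deviceId,
    AVG(temperature) AS avgTemperature
INTO PowerBIOutput
FROM ProtobufInput TIMESTAMP BY eventTime
GROUP BY deviceId, TumblingWindow(minute, 5)
```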
Simplifying Migration to Fabric Real-Time Intelligence for Power BI Real Time Reports

Power BI real-time streaming has been the preferred way for users to visualize streaming data. Real-time streaming in Power BI is being retired, so we recommend that users start planning the migration of their data processing pipelines to Fabric Real-Time Intelligence.

Simplicity meets Power - Integrate Azure Stream Analytics with Delta Lake
Native support for Delta Lake in Azure Stream Analytics is now generally available. Explore where simplicity meets power in real-time data processing and unlock the potential of your data-driven decisions. Join us today to discover new possibilities for data-driven decision-making.

Public Preview: Protocol Buffers (Protobuf) with Azure Stream Analytics
Azure Stream Analytics now allows you to seamlessly process events in the Protobuf data format using a built-in Protocol Buffers deserializer.

What is Protobuf?

Protobuf is Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data into a binary format. This brings many performance benefits, such as reduced bandwidth for sending messages, compact storage, and cross-language compatibility.

Protobuf with Azure Stream Analytics

Previously, ingesting Protocol Buffers into Azure Stream Analytics required our custom deserializer, which did not provide an intuitive experience. With the built-in deserializer, Azure Stream Analytics supports Protobuf out of the box, aligning it with the other file formats we support, including JSON and Avro.

Configure your Stream Analytics job

The Protobuf deserializer is simple to configure. When setting up your Stream Analytics input, select Protobuf as the file format and upload the Protobuf definition file (.proto file). Complete your configuration by specifying the message type and prefix style.

Conclusion

Azure Stream Analytics now offers a built-in Protobuf deserializer. With this release, you can specify Protocol Buffers as the format of the events you are ingesting - a more user-friendly experience than using a custom deserializer. To use the built-in deserializer, specify the Protobuf definition file, message type, and prefix style. To learn more, visit the documentation for using Protocol Buffers with Azure Stream Analytics.