Recent Discussions
Dynamics AX connector stops getting records after a certain amount of time
Hello everyone, I am using the Dynamics AX connector to get data out of Finance. After a certain amount of time it suddenly stops returning new records and keeps running until it reaches the general timeout. It retrieves around 290,000 records in roughly 1:30:00 and then keeps running without getting any new records. Sometimes it gets stuck earlier or later. Sometimes it also gives me this error: Failure happened on 'Source' side. ErrorCode=ODataRequestTimeout,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Fail to get response from odata service in a expected time.,Source=Microsoft.DataTransfer.Runtime.ODataConnector,''Type=System.Threading.Tasks.TaskCanceledException,Message=A task was canceled.,Source=mscorlib,'
This is my pipeline JSON:
{ "name": "HICT - Init Sync SalesOrders", "properties": { "activities": [ { "name": "Get FO SalesOrders", "type": "Copy", "dependsOn": [], "policy": { "timeout": "0.23:00:00", "retry": 0, "retryIntervalInSeconds": 30, "secureOutput": false, "secureInput": false }, "userProperties": [], "typeProperties": { "source": { "type": "DynamicsAXSource", "query": "$filter=FM_InterCompanyOrder eq Microsoft.Dynamics.DataEntities.NoYes'No' and dataAreaId eq 'prev'&$select=SalesOrderNumber,SalesOrderName,IsDeliveryAddressPrivate,FormattedInvoiceAddress,FormattedDeliveryAddress,ArePricesIncludingSalesTax,RequestedReceiptDate,QuotationNumber,PriceCustomerGroupCode,PBS_PreferredInvoiceDate,PaymentTermsBaseDate,OrderTotalTaxAmount,OrderTotalChargesAmount,OrderTotalAmount,TotalDiscountAmount,IsInvoiceAddressPrivate,InvoiceBuildingCompliment,InvoiceAddressZipCode,LanguageId,IsDeliveryAddressOrderSpecific,IsOneTimeCustomer,InvoiceAddressStreetNumber,InvoiceAddressStreet,InvoiceAddressStateId,InvoiceAddressPostBox,InvoiceAddressLongitude,InvoiceAddressLatitude,InvoiceAddressDistrictName,InvoiceAddressCountyId,InvoiceAddressCountryRegionISOCode,InvoiceAddressCity,FM_Deadline,Email,DeliveryTermsCode,DeliveryModeCode,DeliveryBuildingCompliment,DeliveryAddressCountryRegionISOCode,DeliveryAddressZipCode,DeliveryAddressStreetNumber,SalesOrderStatus,DeliveryAddressStreet,DeliveryAddressStateId,SalesOrderPromisingMethod,DeliveryAddressPostBox,DeliveryAddressName,DeliveryAddressLongitude,DeliveryAddressLocationId,DeliveryAddressLatitude,DeliveryAddressDunsNumber,DeliveryAddressDistrictName,DeliveryAddressDescription,DeliveryAddressCountyId,DeliveryAddressCity,CustomersOrderReference,IsSalesProcessingStopped,CustomerRequisitionNumber,SalesOrderProcessingStatus,CurrencyCode,ConfirmedShippingDate,ConfirmedReceiptDate,SalesOrderOriginCode,URL,OrderingCustomerAccountNumber,InvoiceCustomerAccountNumber,ContactPersonId,FM_WorkerSalesTaker,FM_SalesResponsible,PaymentTermsName,DefaultShippingSiteId,DefaultShippingWarehouseId,DeliveryModeCode,dataAreaId,FM_InterCompanyOrder&cross-company=true", "httpRequestTimeout": "00:15:00", "additionalHeaders": { "Prefer": "odata.maxpagesize=1000" }, "retrieveEnumValuesAsString": true }, "sink": { "type": "JsonSink", "storeSettings": { "type": "AzureBlobStorageWriteSettings", "copyBehavior": "FlattenHierarchy" }, "formatSettings": { "type": "JsonWriteSettings" } }, "enableStaging": false, "enableSkipIncompatibleRow": true, "logSettings": { "enableCopyActivityLog": true, "copyActivityLogSettings": { "logLevel": "Warning", "enableReliableLogging": false }, "logLocationSettings": { "linkedServiceName": { "referenceName": "AzureBlobStorage", "type": "LinkedServiceReference" }, "path": "ceexports" } } }, "inputs": [ { "referenceName": 
"AX_SalesOrders_Dynamics_365_FO_ACC", "type": "DatasetReference" } ], "outputs": [ { "referenceName": "Orders_FO_D365_Data_JSON", "type": "DatasetReference" } ] }, { "name": "Get_All_CE_Table_Data", "type": "ForEach", "dependsOn": [ { "activity": "Get FO SalesOrders", "dependencyConditions": [ "Completed" ] } ], "userProperties": [], "typeProperties": { "items": { "value": "@pipeline().parameters.CE_Tables", "type": "Expression" }, "activities": [ { "name": "Copy_CE_TableData", "type": "Copy", "dependsOn": [], "policy": { "timeout": "0.12:00:00", "retry": 0, "retryIntervalInSeconds": 30, "secureOutput": false, "secureInput": false }, "userProperties": [], "typeProperties": { "source": { "type": "CommonDataServiceForAppsSource" }, "sink": { "type": "DelimitedTextSink", "storeSettings": { "type": "AzureBlobStorageWriteSettings", "copyBehavior": "FlattenHierarchy" }, "formatSettings": { "type": "DelimitedTextWriteSettings", "quoteAllText": true, "fileExtension": ".txt" } }, "enableStaging": false }, "inputs": [ { "referenceName": "CE_Look_Up_Tables", "type": "DatasetReference", "parameters": { "entiryName": "@item().sourceDataset" } } ], "outputs": [ { "referenceName": "CE_GenericBlobSink", "type": "DatasetReference", "parameters": { "sinkPath": { "value": "@item().sinkPath", "type": "Expression" } } } ] } ] } }, { "name": "Transform_Create_CE_JSON", "type": "ExecuteDataFlow", "dependsOn": [ { "activity": "Get_All_CE_Table_Data", "dependencyConditions": [ "Succeeded" ] } ], "policy": { "timeout": "0.12:00:00", "retry": 0, "retryIntervalInSeconds": 30, "secureOutput": false, "secureInput": false }, "userProperties": [], "typeProperties": { "dataflow": { "referenceName": "FO_Transform_CE_Select", "type": "DataFlowReference" }, "compute": { "coreCount": 16, "computeType": "General" }, "traceLevel": "Fine" } } ], "parameters": { "CE_Tables": { "type": "array", "defaultValue": [ { "name": "D365_CE_ACC_AccountRelations", "sourceDataset": "crmp_accountrelation", "sinkPath": "ce-exports/D365_CE_ACC_AccountRelations.json" }, { "name": "D365_CE_ACC_ContactRelations", "sourceDataset": "crmp_contactrelation", "sinkPath": "ce-exports/D365_CE_ACC_ContactRelations.json" }, { "name": "D365_CE_ACC_PriceCustomerGroup", "sourceDataset": "msdyn_pricecustomergroup", "sinkPath": "ce-exports/D365_CE_ACC_PriceCustomerGroup.json" }, { "name": "D365_CE_ACC_SalesOrderOrigin", "sourceDataset": "odin_salesorderorigin", "sinkPath": "ce-exports/D365_CE_ACC_SalesOrderOrigin.json" }, { "name": "D365_CE_ACC_ShipVia", "sourceDataset": "msdyn_shipvia", "sinkPath": "ce-exports/D365_CE_ACC_ShipVia.json" }, { "name": "D365_CE_ACC_SystemUser", "sourceDataset": "systemuser", "sinkPath": "ce-exports/D365_CE_ACC_SystemUser.json" }, { "name": "D365_CE_ACC_TermsOfDelivery", "sourceDataset": "msdyn_termsofdelivery", "sinkPath": "ce-exports/D365_CE_ACC_TermsOfDelivery.json" }, { "name": "D365_CE_ACC_Worker", "sourceDataset": "cdm_worker", "sinkPath": "ce-exports/D365_CE_ACC_Worker.json" }, { "name": "D365_CE_ACC_TransactionCurrency", "sourceDataset": "transactioncurrency", "sinkPath": "ce-exports/D365_CE_ACC_TransactionCurrency.json" }, { "name": "D365_CE_ACC_Warehouse", "sourceDataset": "msdyn_warehouse", "sinkPath": "ce-exports/D365_CE_ACC_Warehouse.json" }, { "name": "D365_CE_ACC_OperationalSite", "sourceDataset": "msdyn_operationalsite", "sinkPath": "ce-exports/D365_CE_ACC_OperationalSite.json" }, { "name": "D365_CE_ACC_PaymentTerms", "sourceDataset": "odin_paymentterms", "sinkPath": "ce-exports/D365_CE_ACC_PaymentTerms.json" } ] } }, 
"annotations": [], "lastPublishTime": "2025-07-30T12:55:32Z" }, "type": "Microsoft.DataFactory/factories/pipelines" }15Views0likes0CommentsHow to use existing cache for external table when acceleration in progress
How to use existing cache for external table when acceleration is in progress
I enabled query acceleration for an external table that binds to a 1 TB delta table on ADLS, but the acceleration process takes about 1.5 hours to complete. While acceleration is still in progress, querying the table is noticeably slower than after acceleration has completed. How can I keep using the existing acceleration cache/index during that period, and have Kusto switch to the new index only once acceleration has completed?
timechart legend in Azure Data Explorer
Hi, I'm creating a timechart dashboard in Azure Data Explorer and facing an issue with the legend labels. The legend entries carry extra prefixes and suffixes, such as "Endpoint" or "Count". How can I remove these and show only the actual value in the legend? Thank you!
Azure Data Factory ForEach Loop Fails Despite Inner Activity Error Handling - Seeking Best Practices
Hello Azure Data Factory Community, I'm encountering a persistent issue with my ADF pipeline where a ForEach loop is failing, even though I've implemented error handling for the inner activities. I'm looking for insights and best practices on how to prevent internal activity failures from propagating up and causing the entire ForEach loop (and subsequently the pipeline) to fail, while still logging all outcomes.
My Setup: My pipeline processes records using a ForEach loop. Inside the loop, I have a Web activity (Sample_put_record) that calls an external API. This API call can either succeed or fail for individual records. My current error handling within the ForEach iteration is structured as follows:
1. Sample_put_record (Web Activity): Makes the API call.
2. Conditional Logic: I've tried two main approaches:
- Approach A (Direct Success/Failure Paths): The Sample_put_record activity has a green arrow (on success) leading to a Log Success Items (Script activity) and a red arrow (on failure) leading to a Log Failed Items (Script activity). Both logging activities are followed by Wait activities (Dummy Wait For Success/Failure).
- Approach B (If Condition Wrapper): I've wrapped the Sample_put_record activity and its success/failure logging within an If Condition activity. The If Condition's expression is @equals(activity('Sample_put_record').status, 'Succeeded'). The True branch contains the success logging, and the False branch contains the failure logging. The intention here was for the If Condition to always report success, regardless of the Sample_put_record outcome, to prevent the ForEach from failing.
The Problem: Despite these error handling attempts, the ForEach loop (and thus the overall pipeline) still fails when a Sample_put_record activity fails. The error message I typically see for the ForEach activity is "Activity failed because an inner activity failed." When using the If Condition wrapper, the If Condition itself sometimes fails with the same error, indicating that an activity within its True or False branch is still causing a hard failure. For example, a common failure for Sample_put_record is: "valid":false,"message":"WARNING: There was no xxxxxxxxxxxxxxxxxxxxxxxxx scheduled..." (a user configuration/data issue). Even when my Log Failed Items script attempts to capture this, the ForEach still breaks.
What I've Ensured/Considered:
- Wait Activity Configuration: Wait activities are configured with reasonable durations and do not appear to be the direct cause of failure.
- No Unhandled Exceptions: I'm trying to ensure no unhandled exceptions are propagating from my error handling activities.
- Pipeline Status Goal: My ultimate goal is for the overall pipeline status to be Succeeded as long as the pipeline completes its execution, even if some Sample_put_record calls fail and are logged. I need to rely on the logs to identify actual failures, not the pipeline status.
My Questions to the Community:
1. What is the definitive best practice in Azure Data Factory to ensure a ForEach loop never fails due to an inner activity failure, assuming the inner activity's failure is properly logged and handled within that iteration?
2. Are there specific nuances or common pitfalls with If Condition activities or Script activities within ForEach loops that could still cause failure propagation, even with try-catch and success exits?
3. How do you typically structure your ADF pipelines to achieve this level of resilience, where internal failures are logged but don't impact the overall pipeline success status?
4. Are there any specific configurations on the ForEach activity itself (e.g., a Continue on error setting, if it exists for ForEach) or other activities that I might be overlooking?
Any detailed examples, architectural patterns, or debugging tips would be greatly appreciated. Thank you in advance for your help!
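A pattern that comes up a lot for exactly this situation is to move the per-item work into a child pipeline and call it from the ForEach with an Execute Pipeline activity: the ForEach then only sees the child run's status, and the child can log the API failure internally and still finish as Succeeded. A rough sketch of the inner activity, assuming a hypothetical child pipeline named Process_Single_Record that wraps Sample_put_record and its logging (names and parameter passing are illustrative):

{
    "name": "Run_Process_Single_Record",
    "type": "ExecutePipeline",
    "typeProperties": {
        "pipeline": { "referenceName": "Process_Single_Record", "type": "PipelineReference" },
        "waitOnCompletion": true,
        "parameters": { "record": "@item()" }
    }
}

The catch is that the child itself has to end on a succeeded leaf activity when the API call fails (for example, Sample_put_record with only an on-failure path into Log Failed Items), otherwise the failure simply moves one level up.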
Synapse Webhook Action with Private Logic App
Hi all, I have a Synapse workspace with public access disabled, using private endpoints for both inbound access and outbound access from the managed VNet. I also have a Logic App with private endpoints. Synapse and the Logic App are in separate virtual networks but peered together at a central hub site. Each has access to private DNS zones with records to resolve each resource. When I disabled public network access on the Logic App, I could no longer use a Webhook activity from a Synapse pipeline with a callback URI. A Web activity works just fine, but with the Webhook activity I get a 403 Forbidden response from the Logic App. Ordinarily this looks like a permission issue, but when public network access is enabled, the Logic App workflow works fine. When the Webhook activity fails to run, there is no activity run logged on the Logic App. There's something that the Webhook activity is not getting back from the Logic App when public network access is disabled. I've been trying to find a solution (including sending back a 202 response to Synapse from the Logic App), but it continues to baffle me. Has anyone else successfully configured the Synapse Webhook activity to call a workflow in a Standard Logic App over private endpoints? Any ideas or suggestions to troubleshoot this?
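One thing worth double-checking in this setup: the Webhook activity only completes when the workflow calls back to the callBackUri that Synapse sends in the request body, so both directions need a working network path. A minimal sketch of that callback as an HTTP action in a Standard Logic App workflow, assuming the trigger body carries the callBackUri property and a preceding action named Do_The_Actual_Work (both names are illustrative; any small JSON body works, since the POST itself is what completes the activity):

"Callback_To_Synapse": {
    "type": "Http",
    "inputs": {
        "method": "POST",
        "uri": "@{triggerBody()?['callBackUri']}",
        "body": { "message": "done" }
    },
    "runAfter": { "Do_The_Actual_Work": [ "Succeeded" ] }
}

The 403 with no run logged on the Logic App side suggests the trigger call is being rejected before a run ever starts, so it may be worth comparing exactly which URL and authentication the Webhook activity uses versus the working Web activity, and confirming the callback URI is reachable from the Logic App's outbound path.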
June 2025 updates for Azure Database for PostgreSQL
Big news this month: PostgreSQL 17 is now GA with in-place upgrades, and our Migration Service fully supports PG17, making adoption smoother than ever. Also in this release:
- Online Migration is now generally available
- SSD v2 HA (Preview) with 10s failovers and better resilience
- Azure PostgreSQL now available in Indonesia Central
- VS Code extension enhancements for smoother dev experience
- Enhanced role management for improved admin control
- Ansible collection updated for latest REST API
Check all these updates in this month's recap blog: https://techcommunity.microsoft.com/blog/adforpostgresql/june-2025-recap-azure-database-for-postgresql/4412095 Check it out and tell us which feature you're most excited about!
Copy Activity Successful, But Times Out
This appears to be an edge case, but I wanted to share. A copy activity is successful, but times out. Duration is 1:58:55; it times out at 2:00:12. It runs a second time and is successful, loading duplicate records. The duplicate records are the undesired result.
Copy Activity
General: Timeout: 0.02:00:00, Retry: 2
Source: mySQL, parameterized SQL, parameterized
Sink: Synapse SQL Pool, parameterized, copy method: COPY command
Settings: Use V2 hierarchy storage for staging
General: Synapse/ADF Managed Network
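Since it is the retry that loads the duplicates, one mitigation is to make the copy idempotent so a second attempt overwrites the slice instead of appending to it. A sketch of a Synapse sink with a pre-copy script, assuming a hypothetical staging table and load-date parameter (adjust the names to the real schema):

"sink": {
    "type": "SqlDWSink",
    "preCopyScript": "DELETE FROM stg.MyTable WHERE LoadDate = '@{pipeline().parameters.LoadDate}'",
    "allowCopyCommand": true
}

Raising the activity timeout a little above the observed duration, or dropping the retry count to 0 and handling reruns explicitly, are other ways to avoid the double load when a run finishes this close to the limit.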
Advice requested: how to capture full SQL CDC changes using Dataflow and ADLS gen2
Hi, I'm working on a fairly simple ETL process using Dataflow in Azure Data Factory, where I want to capture the changes in a CDC-enabled SQL table and store them in Delta Lake format in an ADLS gen2 sink. The resulting dataset will be further processed, but for me this is the end of the line. I don't have an expert understanding of all the details of the Delta Lake format, but I do know that I can use it to store changes to my data over time. So in the sink, I enabled all update methods (Insert, Delete, Upsert, Update), since my CDC source should be able to figure out the correct row transformation. Key columns are set to the primary key columns in SQL. All this works fine as long as I configure my source to use CDC with 'netChanges: true'. That yields a single change row for each record, which is correctly stored in the sink. But I want to capture all changes since the previous run, so I want to set the source to 'netChanges: false'. That yields rows for every change since the previous time the dataflow ran. But for every table that actually has records with more than one change, the dataflow fails saying "Cannot perform Merge as multiple source rows matched and attempted to modify the same target row in the Delta table in possibly conflicting ways." I take that to mean that my dataflow is, as it is, not smart enough to loop through all changes in the source and apply them to the sink in order. So apparently something else has to be done. My intuition says that, since CDC actually provides all the metadata to make this possible, there's probably an out-of-the-box way to achieve what I want, but I can't readily find the magic box I should tick. I can probably build it out by hand, by somehow looping over all changes and applying them in order, but before I go down that route I came here to learn from the experts whether this is indeed the only way, or, preferably, whether there is a neat trick I missed to get this done easily. Thanks so much for your advice! BR
PostgreSQL 17 General Availability with In-Place Upgrade Support
We're excited to share that PostgreSQL 17 is now Generally Available on Azure Database for PostgreSQL - Flexible Server! This release brings community-driven enhancements including improved vacuum performance, smarter query planning, enhanced JSON functions, and dynamic logical replication. It also includes support for in-place major version upgrades, allowing customers to upgrade directly from PostgreSQL 11-16 to 17 without needing to migrate data or change connection strings. PostgreSQL 17 is now the default version for new server creations and major version upgrades. Read the full blog post: http://aka.ms/PG17 Let us know if you have feedback or questions!
Solution: Handling Concurrency in Azure Data Factory with Marker Files and Web Activities
Hi everyone, I wanted to share a concurrency issue we encountered in Azure Data Factory (ADF) and how we resolved it using a small but effective enhancement, one that might be useful if you're working with shared Blob Storage across multiple environments (like Dev, Test, and Prod).
Background: Shared Blob Storage & Marker Files
In our ADF pipelines, we extract data from various sources (e.g., SharePoint, Oracle) and store them in Azure Blob Storage. That Blob container is shared across multiple environments. To prevent duplicate extractions, we use marker files:
- started.marker - created when a copy begins
- completed.marker - created when the copy finishes successfully
If both markers exist, pipelines reuse the existing file (caching logic). This mechanism was already in place and worked well under normal conditions.
The Issue: Race Conditions
We observed that simultaneous executions from multiple environments sometimes led to:
- Overlapping attempts to create the same started.marker
- Duplicate copy activities
- Corrupted Blob files
This became a serious concern because the Blob file was later loaded into Azure SQL Server, and any corruption led to failed loads.
The Fix: Web Activity + REST API
To solve this, we modified only the creation of started.marker by:
- Replacing the Copy Activity with a Web Activity that calls the Azure Storage REST API
- The API uses Azure Blob Storage's conditional header If-None-Match: * to safely create the file only if it doesn't exist
- If the file already exists, the API returns "BlobAlreadyExists", which the pipeline handles by skipping
The Copy Activity is still used to copy the data and create the completed.marker; no changes needed there.
Updated Flow
1. Check marker files: if both exist (started and completed) → use cached file. If only started.marker → wait and retry. If none → continue to step 2.
2. Web Activity calls the REST API to create started.marker. Success → proceed with the copy in step 3. Failure → another run already started → skip/retry.
3. Copy Activity performs the data extract.
4. Copy Activity creates completed.marker.
Benefits
- Atomic creation of started.marker → no race conditions
- Minimal change to existing pipeline logic with marker files
- Reliable downstream loads into Azure SQL Server
- Preserves existing architecture (no full redesign)
Would love to hear:
- Have you used similar marker-based patterns in ADF?
- Any other approaches to concurrency control that worked for your team?
Thanks for reading! Hope this helps someone facing similar issues.
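For anyone who wants to reuse this, the conditional create can be expressed as a Web activity doing a Put Blob call with the If-None-Match: * header; Azure Storage then rejects the request with a 409 BlobAlreadyExists if another run created the marker first, and that failure becomes the "someone else already started" branch. A rough sketch, assuming managed identity authentication and placeholder account/container names (adjust the API version and auth to your setup):

{
    "name": "Create_Started_Marker",
    "type": "WebActivity",
    "typeProperties": {
        "method": "PUT",
        "url": "https://<storageaccount>.blob.core.windows.net/<container>/started.marker",
        "headers": {
            "x-ms-version": "2021-08-06",
            "x-ms-blob-type": "BlockBlob",
            "If-None-Match": "*"
        },
        "body": "started",
        "authentication": { "type": "MSI", "resource": "https://storage.azure.com/" }
    }
}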
Blob Storage Event Trigger Disappears
Yesterday I ran into an odd situation where there was a resource lock and I was unable to rename pipelines or drop/create storage event triggers. An admin cleared the lock and I was able to remove and clean up the triggers and pipelines. Today, when I try to recreate the blob storage trigger to process a file when it appears in a container, the trigger creates just fine, but on refresh it disappears. If I try to recreate it again with the same name as the one that went away, the ADF UI says it already exists. I cannot assign it to a pipeline because the UI does not see it. Any insight as to where it is, how I can see it, or even which logs would have such activity recorded to give a clue as to what is going on? This seems like a bug.
Parameter controls are not showing Display text
Hi, After a recent update to the Azure Data Explorer Web UI, the Parameter controls are not displaying correctly. The Display Text for parameters is not shown by default; instead, the raw Value is displayed until the control is clicked, at which point the correct Display Text appears. Could you please investigate this issue and provide guidance on a resolution? Thank you,
June 2023 Update: Azure Database for PostgreSQL Flexible Server Unveils New Features
The Azure Database for PostgreSQL Flexible Server's June 2023 update is live! Now enjoy:
- Easier major version upgrades with reduced downtime.
- Server recovery feature for dropped servers.
- A more user-friendly Connect experience.
- Improved server performance with new IO enhancements.
- Auto-growing storage and online disk resize, now in public preview.
We also support minor versions PostgreSQL 15.2 (preview), 14.7, 13.10, 12.14, 11.19. Big thanks to our dedicated team! Check out our blog for more details: https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/june-2023-recap-azure-database-postgresql-flexible-server/ba-p/3868650
July 2023 Recap: Azure Database PostgreSQL Flexible Server
- Support for PostgreSQL 15 is now available (general availability).
- Automation Tasks have been introduced for Streamlined Management (preview phase).
- Flexible Server Migration Tooling has been enhanced (general availability).
- Hardware Options have been expanded with the addition of AMD Compute SKUs (general availability).
These updates represent substantial improvements in performance, scalability, and efficiency. Whether you are a developer, a Database Administrator (DBA), or an individual passionate about PostgreSQL, we trust that these enhancements will contribute positively to your experience with our platform. Should you find these updates valuable, we encourage you to engage with us through appropriate channels of communication. Thank you for your continued support and interest in Azure Database for PostgreSQL Flexible Server.
Autoscaling with Azure: A Comprehensive Guide to PostgreSQL Optimization Using Azure Automation Task
Autoscaling Azure PostgreSQL Server with Automation Tasks
Read our latest article detailing the power of autoscaling the Azure Database for PostgreSQL Flexible Server using Azure Automation Tasks. This new feature can revolutionize how we manage resources by streamlining operations and minimizing human error.
August 2023 Recap: Azure Database for PostgreSQL Flexible Server
Absolutely thrilled to unveil our latest blog post, "August 2023 Recap: Azure Database for PostgreSQL Flexible Server". This month is jam-packed with feature updates designed to amplify your experience!
1. Autovacuum Monitoring - Elevate your database health with improved tools and metrics.
2. Flexible DNS Zone Linking - Simplify your server setup process for multiple networking models.
3. Server Parameter Visibility enhancements - Now view hidden parameters for better performance optimization.
4. Single to Flexible Server Migration Tooling - Simplified migration experience with automated extension allow listing.
Don't miss out! Read the full scoop here: August 2023 Recap: Azure Database for PostgreSQL Flexible Server
PostgreSQL 16 generally available (September 14, 2023)
Detailed Release Notes - https://www.postgresql.org/about/news/postgresql-16-released-2715/
How has PostgreSQL 16's new feature set changed the game for your database operations? Share your favorite enhancements and unexpected wins!
November 2023 Recap: Azure PostgreSQL Flexible Server
Excited to share our November 2023 updates for Azure Database for PostgreSQL Flexible Server:
- Server Logs management has been streamlined for better monitoring and troubleshooting, along with customizable retention periods.
- Embracing the latest in security, we now support TLS Version 1.3, ensuring the most secure and efficient client-server communications.
- Migrations are smoother with our new Pre-Migration Validation feature, making your transition to Flexible Server seamless.
- Microsoft Defender integration, providing proactive anomaly detection and real-time alerts to safeguard your databases.
- Additionally, we've upgraded user and role migration capabilities for a more accurate and hassle-free experience.
Link - https://lnkd.in/gMMGaiAK
Stay tuned for more updates, and feel free to share your experiences with these new features!
February 2024 Recap: Azure PostgreSQL Flexible Server
Azure Database for PostgreSQL Flexible Server - Feb '24 Feature Recap:
- General Availability of Private Endpoints across all public Azure regions for secure, flexible connectivity.
- Latest extension versions to enhance your PostgreSQL performance and security.
- Latest Postgres minor versions (16.1, 15.5, 14.10, 13.13, 12.17, 11.22) now supported for automatic upgrades.
- Enhanced Major Version Upgrade Logging for smoother upgrades.
- pgvector 0.6.0 introduced for better vector similarity searches.
- Real-time Text Translation now available with the Azure_AI extension.
- Easier Online Migration from Single Server to Flexible Server in public preview.
We recommend reading our latest blog post to explore these updates in detail - https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/february-2024-recap-azure-postgresql-flexible-server/ba-p/4089037
Events
Recent Blogs
- We're thrilled to announce that Schema Migration support in Azure Database Migration Service (DMS) is now generally available (GA)! This milestone marks a significant leap forward in simplifying and ... Aug 04, 2025
- 1 MIN READ: We worked on a service request where our customer was trying to enable their Python application, hosted on Azure App Service, to connect securely to Azure SQL Database using a user-assigned managed identi... Aug 01, 2025