Best Practices Guide

Azure Optimization Best Practices

Published: July 26, 2018 | Last Updated: January 27, 2021 | 3 minutes to read

Microsoft Azure is a powerful cloud platform that enables users to store and quickly process terabytes of data. To keep costs down while managing and securing an Azure-based platform, you need an optimized architecture.

Building an optimized Azure architecture involves numerous assets, including databases, data warehouses, machine learning models, and data streaming services. With so many elements, it can be difficult to keep costs low while operating at peak performance.

At MAQ Software, we’ve migrated hundreds of applications to Azure. Based on our experience, we have compiled five best practices to optimize your Azure platform for high performance and low cost.

For General Use

  1. To reduce Azure costs, turn off or scale down virtual machines (VMs) at set times

    Switch off resources when they are not in use, or scale down VMs during off-peak hours, to reduce overall subscription costs. There are several ways to schedule auto-shutdowns for Azure VMs. When provisioning a new VM, Azure offers a setting to schedule a daily shutdown time in your chosen time zone.

    When you administer multiple VMs, runbooks are ideal for scheduling automatic shutdowns. Runbooks in Service Management Automation and Microsoft Azure Automation are Windows PowerShell workflows or PowerShell scripts.

    As part of our DevOps process, we maintain four parallel environments: Development, Testing, UAT, and Production. We use ARO Toolbox and runbooks to automatically shut down all non-production environments at the end of business hours; the environments start again automatically before business hours resume. When a VM is stopped (deallocated), Azure does not charge compute or network fees for it, though it does charge a small fee for the VM's storage. Depending on usage patterns, turning VMs off outside of business hours can reduce your Azure subscription costs by 20% to 30%.
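    As an illustration, the following minimal runbook sketch (PowerShell, Az modules) stops tagged VMs. It assumes the Automation account runs with a managed identity that has rights to stop VMs, and that the target VMs carry a hypothetical AutoShutdown tag; adjust names and scheduling to your environment.

      # Minimal auto-shutdown runbook sketch.
      # Assumes a system-assigned managed identity with rights to stop VMs,
      # and a hypothetical "AutoShutdown = true" tag on the target VMs.
      Connect-AzAccount -Identity

      $vms = Get-AzVM | Where-Object { $_.Tags -and $_.Tags["AutoShutdown"] -eq "true" }

      foreach ($vm in $vms) {
          # Stop-AzVM deallocates the VM, so compute and network charges stop;
          # the underlying disks still incur storage charges.
          Stop-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force
      }

    A matching runbook that calls Start-AzVM on the same tag can bring the environments back up before business hours.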

    This best practice also applies to any Azure resource used for compute, including Azure Databricks, Azure Synapse, Azure SQL Data Warehouse, Azure SQL Database, Azure App Service, and Azure Cloud Services.

  2. Use automated tools to monitor network resource configurations and changes

    Use the Azure Activity Log to monitor network resource configurations and to detect changes to network settings and resources, including those related to Azure Functions deployments.

    Create alerts in Azure Monitor that highlight changes to critical network settings or resources. With alerts, you can maintain a 360-degree view of all your resources and identify easy optimization opportunities.
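    As a starting point, the sketch below (PowerShell, Az.Monitor module) lists recent activity against networking resources; the seven-day window is an arbitrary example. Pair a review like this with Azure Monitor alerts for continuous coverage.

      # Sketch: review recent operations on Microsoft.Network resources so
      # unexpected configuration changes stand out.
      Connect-AzAccount

      Get-AzActivityLog -ResourceProvider "Microsoft.Network" `
                        -StartTime (Get-Date).AddDays(-7) |
          Select-Object EventTimestamp, Caller, OperationName, ResourceId |
          Format-Table -AutoSize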

When using Azure SQL Data Warehouse (ADW)

  1. To optimize query execution time in Azure SQL Data Warehouse, use resource classes

    Resource classes govern the number of queries that run at the same time (concurrency) and the compute resources (memory) assigned to each query. Select the resource class that best suits your needs. Smaller resource classes reduce the maximum memory per query but increase concurrency. Larger resource classes increase the maximum memory per query but reduce concurrency.

    There are two types of resource classes:

    • Static resource classes: Choose a static resource class when your resource expectations vary throughout the day. For example, a static resource class works well when your data warehouse contains straightforward data that is queried by many people. With this resource class, scaling the data warehouse does not change the amount of memory allocated to the user. As a result, the system can execute more queries at the same time.
    • Dynamic resource classes: Choose a dynamic resource class when queries are complex but do not need high concurrency, for example, when generating daily or weekly reports viewed by leadership. If the reports process large amounts of data, scaling the data warehouse provides more memory to the user's existing resource class. Note that large dynamic resource classes use many concurrency slots, reducing the resources available for additional queries.

    To summarize: use static resource classes for fixed data sets and use dynamic resource classes for growing data sets.
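    In Azure SQL Data Warehouse, resource classes are assigned by adding a user to the corresponding database role with plain T-SQL. The sketch below wraps that T-SQL in PowerShell only to keep these examples in one language; the server, database, and user names are hypothetical, and it assumes a recent SqlServer module that supports Azure AD access tokens.

      # Sketch: give a hypothetical loading user more memory per query (staticrc40)
      # and a hypothetical reporting user higher concurrency (staticrc20).
      $token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net/").Token

      $assignRoles = "EXEC sp_addrolemember 'staticrc40', 'LoadUser'; EXEC sp_addrolemember 'staticrc20', 'ReportUser';"

      Invoke-Sqlcmd -ServerInstance "yourserver.database.windows.net" `
                    -Database "yourdatawarehouse" `
                    -AccessToken $token `
                    -Query $assignRoles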

When using Azure Analysis Services (AAS)

  1. To process large volumes of data, dynamically scale Azure Analysis Services

    To improve processing performance, schedule an automatic scale-up of AAS through a runbook immediately before processing large volumes of data. To optimize costs, schedule a scale-down immediately after processing.
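    A minimal runbook sketch of this pattern follows. The server, resource group, and SKU names are examples; it assumes the Automation account has a managed identity and the Az.AnalysisServices module available.

      # Sketch: scale an AAS server up before heavy processing, then back down.
      Connect-AzAccount -Identity

      # Scale up ahead of the processing window
      Set-AzAnalysisServicesServer -Name "aasprocessing" -ResourceGroupName "analytics-rg" -Sku "S2"

      # ... trigger or wait for model processing here ...

      # Scale back down once processing completes, to contain cost
      Set-AzAnalysisServicesServer -Name "aasprocessing" -ResourceGroupName "analytics-rg" -Sku "S0"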

  2. When processing large amounts of data with Azure Analysis Services, use partitioning

    Use Azure Durable Functions to process partitions in AAS. Partitioning reduces data latency and processing time: by dividing large tables into small, logical partitions, you enable the system to process each partition independently. For example, partition large enterprise data such as global support and sales information.
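    For illustration, the sketch below refreshes a single partition with a TMSL refresh command sent through Invoke-ASCmd (SqlServer module); in practice, a Durable Functions orchestrator or a runbook would fan out one such refresh per partition. The server, model, table, and partition names are hypothetical.

      # Sketch: refresh one partition of a large table so partitions can be
      # processed independently (and in parallel by an orchestrator).
      $tmsl = @{
          refresh = @{
              type    = "dataOnly"
              objects = @(
                  @{ database = "SalesModel"; table = "GlobalSales"; partition = "GlobalSales_2020" }
              )
          }
      } | ConvertTo-Json -Depth 4

      # Prompts for an Azure AD account with processing permissions on the model
      Invoke-ASCmd -Server "asazure://westus.asazure.windows.net/aasprocessing" `
                   -Credential (Get-Credential) `
                   -Query $tmsl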

References

Microsoft offers additional documentation that provides a high-level framework for best practices. We strongly encourage you to review the following resources: