Azure Architecture Center

Azure Application Architecture Guide
A guide to designing scalable, resilient, and highly available
applications, based on proven practices that we have learned from customer engagements.

Reference Architectures
A set of recommended architectures for Azure. Each architecture includes best practices, prescriptive steps, and a deployable solution.

Microsoft Cloud Adoption Framework for Azure
A process for creating an organization-wide cloud adoption strategy, focusing on policies, governance, and infrastructure.

Build Microservices on Azure
This design guide takes you through the process of designing and building a microservices architecture on Azure. A reference implementation is included.

Azure Data Architecture Guide
A structured approach to designing data-centric solutions on Microsoft Azure.

High Performance Computing (HPC) on Azure
Design guidance and component information for building High Performance Computing (HPC) applications on Azure.

Cloud Best Practices
Best practices for cloud applications, covering aspects such as auto-scaling, caching, data partitioning, API design, and others.

Design for Resiliency
Learn how to design resilient applications for Azure.

Azure Building Blocks
Simplify deployment of Azure resources. With a single settings file, deploy complex architectures in Azure.

Cloud Design Patterns
Design patterns for developers and solution architects. Each pattern describes a problem, a pattern that addresses the problem, and an example based on Azure.

Design Review Checklists
Checklists to assist developers and solution architects during the design process.

Azure for AWS Professionals
Leverage your AWS experience in Microsoft Azure.

Performance Antipatterns
How to detect and fix some common causes of performance and scalability problems in cloud applications.

Azure Virtual Datacenter
When deploying enterprise workloads to the cloud, organizations must balance governance with developer agility. Azure Virtual Datacenter provides models to achieve this balance with an emphasis on governance.

Azure Application Architecture Guide

This guide presents a structured approach for designing applications on Azure that are scalable, resilient, and highly available. It is based on proven practices that we have learned from customer engagements.

Introduction
The cloud is changing the way applications are designed. Instead of monoliths, applications are decomposed into smaller, decentralized services. These services communicate through APIs or by using asynchronous messaging or eventing. Applications scale horizontally, adding new instances as demand requires.

These trends bring new challenges. Application state is distributed. Operations are done in parallel and asynchronously. The system as a whole must be resilient when failures occur. Deployments must be automated and predictable. Monitoring and telemetry are critical for gaining insight into the system. The Azure Application Architecture Guide is designed to help you navigate these changes.
Traditional on-premises | Modern cloud
Monolithic, centralized | Decomposed, decentralized
Design for predictable scalability | Design for elastic scale
Relational database | Polyglot persistence (mix of storage technologies)
Strong consistency | Eventual consistency
Serial and synchronized processing | Parallel and asynchronous processing
Design to avoid failures (MTBF) | Design for failure (MTTR)
Occasional big updates | Frequent small updates
Manual management | Automated self-management
Snowflake servers | Immutable infrastructure

This guide is intended for application architects, developers, and operations teams. It's not a how-to guide for using individual Azure services. After reading this guide, you will understand the architectural patterns and best practices to apply when building on the Azure cloud platform. You can also download an e-book version of the guide.

How this guide is structured
The Azure Application Architecture Guide is organized as a series of steps, from the architecture and design to implementation. For each step, there is supporting guidance that will help you with the design of your application architecture.

Architecture styles
The first decision point is the most fundamental. What kind of architecture are you building? It might be a microservices architecture, a more traditional N-tier application, or a big data solution. We have identified several distinct architecture styles. There are benefits and challenges to each.
Learn more: Architecture styles

Technology choices
Two technology choices should be decided early on, because they affect the entire architecture. These are the choices of compute service and data stores. Compute refers to the hosting model for the computing resources that your application runs on. Data stores include databases but also storage for message queues, caches, logs, and anything else that an application might persist to storage.
Learn more: Choosing a compute service, Choosing a data store

Design principles
We have identified ten high-level design principles that will make your application more scalable, resilient, and manageable. These design principles apply to any architecture style. Throughout the design process, keep these ten high-level design principles in mind. Then consider the set of best practices for specific aspects of the architecture, such as auto-scaling, caching, data partitioning, API design, and others.
Learn more: Design principles

Quality pillars
A successful cloud application will focus on five pillars of software quality: scalability, availability, resiliency, management, and security. Use our design review checklists to review your architecture according to these quality pillars.
Learn more: Quality pillars

Architecture styles
An architecture style is a family of architectures that share certain characteristics. For example, N-tier is a common architecture style. More recently, microservice architectures have started to gain favor. Architecture styles don't require the use of particular technologies, but some technologies are well-suited for certain architectures. For example, containers are a natural fit for microservices.

We have identified a set of architecture styles that are commonly found in cloud applications. The article for each style includes:
A description and logical diagram of the style.
Recommendations for when to choose this style.
Benefits, challenges, and best practices.
A recommended deployment using relevant Azure services.
A quick tour of the styles
This section gives a quick tour of the architecture styles that we've identified, along with some high-level considerations for their use. Read more details in the linked topics.

N-tier
N-tier is a traditional architecture for enterprise applications. Dependencies are managed by dividing the application into layers that perform logical functions, such as presentation, business logic, and data access. A layer can only call into layers that sit below it. However, this horizontal layering can be a liability. It can be hard to introduce changes in one part of the application without touching the rest of the application. That makes frequent updates a challenge, limiting how quickly new features can be added.

N-tier is a natural fit for migrating existing applications that already use a layered architecture. For that reason, N-tier is most often seen in infrastructure as a service (IaaS) solutions, or applications that use a mix of IaaS and managed services.

Web-Queue-Worker
For a purely PaaS solution, consider a Web-Queue-Worker architecture. In this style, the application has a web front end that handles HTTP requests and a back-end worker that performs CPU-intensive tasks or long-running operations. The front end communicates with the worker through an asynchronous message queue.

Web-Queue-Worker is suitable for relatively simple domains with some resource-intensive tasks. Like N-tier, the architecture is easy to understand. The use of managed services simplifies deployment and operations. But with a complex domain, it can be hard to manage dependencies. The front end and the worker can easily become large, monolithic components that are hard to maintain and update. As with N-tier, this can reduce the frequency of updates and limit innovation.

Microservices
If your application has a more complex domain, consider moving to a Microservices architecture. A microservices application is composed of many small, independent services. Each service implements a single business capability. Services are loosely coupled, communicating through API contracts.

Each service can be built by a small, focused development team. Individual services can be deployed without a lot of coordination between teams, which encourages frequent updates. A microservice architecture is more complex to build and manage than either N-tier or Web-Queue-Worker. It requires a mature development and DevOps culture. But done right, this style can lead to higher release velocity, faster innovation, and a more resilient architecture.

CQRS
The CQRS (Command and Query Responsibility Segregation) style separates read and write operations into separate models. This isolates the parts of the system that update data from the parts that read the data. Moreover, reads can be executed against a materialized view that is physically separate from the write database. That lets you scale the read and write workloads independently, and optimize the materialized view for queries.

CQRS makes the most sense when it's applied to a subsystem of a larger architecture. Generally, you shouldn't impose it across the entire application, as that will just create unneeded complexity. Consider it for collaborative domains where many users access the same data.

Event-driven architecture
Event-driven architectures use a publish-subscribe (pub-sub) model, where producers publish events, and consumers subscribe to them.
The producers are independent from the consumers, and consumers are independent from each other.

Consider an event-driven architecture for applications that ingest and process a large volume of data with very low latency, such as IoT solutions. The style is also useful when different subsystems must perform different types of processing on the same event data.

Big Data, Big Compute
Big Data and Big Compute are specialized architecture styles for workloads that fit certain specific profiles. Big data divides a very large dataset into chunks, performing parallel processing across the entire set, for analysis and reporting. Big compute, also called high-performance computing (HPC), makes parallel computations across a large number (thousands) of cores. Domains include simulations, modeling, and 3-D rendering.

Architecture styles as constraints
An architecture style places constraints on the design, including the set of elements that can appear and the allowed relationships between those elements. Constraints guide the "shape" of an architecture by restricting the universe of choices. When an architecture conforms to the constraints of a particular style, certain desirable properties emerge.

For example, the constraints in microservices include:
A service represents a single responsibility.
Every service is independent of the others.
Data is private to the service that owns it. Services do not share data.

By adhering to these constraints, what emerges is a system where services can be deployed independently, faults are isolated, frequent updates are possible, and it's easy to introduce new technologies into the application.

Before choosing an architecture style, make sure that you understand the underlying principles and constraints of that style. Otherwise, you can end up with a design that conforms to the style at a superficial level, but does not achieve the full potential of that style. It's also important to be pragmatic. Sometimes it's better to relax a constraint, rather than insist on architectural purity.

The following table summarizes how each style manages dependencies, and the types of domain that are best suited for each.

Architecture style | Dependency management | Domain type
N-tier | Horizontal tiers divided by subnet | Traditional business domain. Frequency of updates is low.
Web-Queue-Worker | Front-end and back-end jobs, decoupled by async messaging. | Relatively simple domain with some resource-intensive tasks.
Microservices | Vertically (functionally) decomposed services that call each other through APIs. | Complicated domain. Frequent updates.
CQRS | Read/write segregation. Schema and scale are optimized separately. | Collaborative domain where lots of users access the same data.
Event-driven architecture | Producer/consumer. Independent view per subsystem. | IoT and real-time systems.
Big data | Divide a huge dataset into small chunks. Parallel processing on local datasets. | Batch and real-time data analysis. Predictive analysis using ML.
Big compute | Data allocation to thousands of cores. | Compute-intensive domains such as simulation.

Consider challenges and benefits
Constraints also create challenges, so it's important to understand the trade-offs when adopting any of these styles. Do the benefits of the architecture style outweigh the challenges, for this subdomain and bounded context?
Here are some of the types of challenges to consider when selecting an architecture style:

Complexity. Is the complexity of the architecture justified for your domain? Conversely, is the style too simplistic for your domain? In that case, you risk ending up with a "big ball of mud", because the architecture does not help you to manage dependencies cleanly.
Asynchronous messaging and eventual consistency. Asynchronous messaging can be used to decouple services, and increase reliability (because messages can be retried) and scalability. However, this also creates challenges such as at-least-once delivery semantics and eventual consistency.
Inter-service communication. As you decompose an application into separate services, there is a risk that communication between services will cause unacceptable latency or create network congestion (for example, in a microservices architecture).
Manageability. How hard is it to manage the application, monitor, deploy updates, and so on?

N-tier architecture style
An N-tier architecture divides an application into logical layers and physical tiers.

[Diagram: a client connects through a WAF to the web tier, which calls two middle tiers backed by a cache, messaging, a remote service, and the data tier.]

Layers are a way to separate responsibilities and manage dependencies. Each layer has a specific responsibility. A higher layer can use services in a lower layer, but not the other way around.

Tiers are physically separated, running on separate machines. A tier can call another tier directly, or use asynchronous messaging (message queue). Although each layer might be hosted in its own tier, that's not required. Several layers might be hosted on the same tier. Physically separating the tiers improves scalability and resiliency, but also adds latency from the additional network communication.

A traditional three-tier application has a presentation tier, a middle tier, and a database tier. The middle tier is optional. More complex applications can have more than three tiers. The diagram above shows an application with two middle tiers, encapsulating different areas of functionality.

An N-tier application can have a closed layer architecture or an open layer architecture:
In a closed layer architecture, a layer can only call the next layer immediately down.
In an open layer architecture, a layer can call any of the layers below it.

A closed layer architecture limits the dependencies between layers. However, it might create unnecessary network traffic, if one layer simply passes requests along to the next layer.

When to use this architecture
N-tier architectures are typically implemented as infrastructure-as-a-service (IaaS) applications, with each tier running on a separate set of VMs. However, an N-tier application doesn't need to be pure IaaS. Often, it's advantageous to use managed services for some parts of the architecture, particularly caching, messaging, and data storage.

Consider an N-tier architecture for:
Simple web applications.
Migrating an on-premises application to Azure with minimal refactoring.
Unified development of on-premises and cloud applications.

N-tier architectures are very common in traditional on-premises applications, so they are a natural fit for migrating existing workloads to Azure.

Benefits
Portability between cloud and on-premises, and between cloud platforms.
Lower learning curve for most developers.
Natural evolution from the traditional application model.
Open to heterogeneous environments (Windows/Linux).

Challenges
It's easy to end up with a middle tier that just does CRUD operations on the database, adding extra latency without doing any useful work.
Monolithic design prevents independent deployment of features.
Managing an IaaS application is more work than an application that uses only managed services.
It can be difficult to manage network security in a large system.

Best practices
Use autoscaling to handle changes in load. See Autoscaling best practices.
Use asynchronous messaging to decouple tiers.
Cache semi-static data. See Caching best practices.
Configure the database tier for high availability, using a solution such as SQL Server Always On Availability Groups.
Place a web application firewall (WAF) between the front end and the Internet.
Place each tier in its own subnet, and use subnets as a security boundary.
Restrict access to the data tier, by allowing requests only from the middle tier(s).

N-tier architecture on virtual machines
This section describes a recommended N-tier architecture running on VMs. Each tier consists of two or more VMs, placed in an availability set or VM scale set. Multiple VMs provide resiliency in case one VM fails. Load balancers are used to distribute requests across the VMs in a tier. A tier can be scaled horizontally by adding more VMs to the pool.

Each tier is also placed inside its own subnet, meaning the internal IP addresses of the VMs in a tier fall within the same address range. That makes it easy to apply network security group (NSG) rules and route tables to individual tiers.

The web and business tiers are stateless. Any VM can handle any request for that tier. The data tier should consist of a replicated database. For Windows, we recommend SQL Server, using Always On Availability Groups for high availability. For Linux, choose a database that supports replication, such as Apache Cassandra.

Network security groups (NSGs) restrict access to each tier. For example, the database tier only allows access from the business tier.

For more information about running N-tier applications on Azure:
Run Windows VMs for an N-tier application
Windows N-tier application on Azure with SQL Server
Microsoft Learn module: Tour the N-tier architecture style

Additional considerations
N-tier architectures are not restricted to three tiers. For more complex applications, it is common to have more tiers. In that case, consider using layer-7 routing to route requests to a particular tier.
Tiers are the boundary of scalability, reliability, and security. Consider having separate tiers for services with different requirements in those areas.
Use VM scale sets for autoscaling.
Look for places in the architecture where you can use a managed service without significant refactoring. In particular, look at caching, messaging, storage, and databases.
For higher security, place a network DMZ in front of the application. The DMZ includes network virtual appliances (NVAs) that implement security functionality such as firewalls and packet inspection. For more information, see Network DMZ reference architecture.
For high availability, place two or more NVAs in an availability set, with an external load balancer to distribute Internet requests across the instances. For more information, see Deploy highly available network virtual appliances.
Do not allow direct RDP or SSH access to VMs that are running application code. Instead, operators should log into a jumpbox, also called a bastion host.
This is a VM on the network that administrators use to connect to the other VMs. The jumpbox has an NSG that allows RDP or SSH only from approved public IP addresses.
You can extend the Azure virtual network to your on-premises network using a site-to-site virtual private network (VPN) or Azure ExpressRoute. For more information, see Hybrid network reference architecture.
If your organization uses Active Directory to manage identity, you may want to extend your Active Directory environment to the Azure VNet. For more information, see Identity management reference architecture.
If you need higher availability than the Azure SLA for VMs provides, replicate the application across two regions and use Azure Traffic Manager for failover. For more information, see Run Windows VMs in multiple regions or Run Linux VMs in multiple regions.

Web-Queue-Worker architecture style
The core components of this architecture are a web front end that serves client requests, and a worker that performs resource-intensive tasks, long-running workflows, or batch jobs. The web front end communicates with the worker through a message queue.

[Diagram: the client calls the web front end, which puts messages on a queue that the worker processes; supporting components include a database, cache, CDN for static content, identity provider, and remote services.]

Other components that are commonly incorporated into this architecture include:
One or more databases.
A cache to store values from the database for quick reads.
A CDN to serve static content.
Remote services, such as an email or SMS service. Often these are provided by third parties.
An identity provider for authentication.

The web and worker are both stateless. Session state can be stored in a distributed cache. Any long-running work is done asynchronously by the worker. The worker can be triggered by messages on the queue, or run on a schedule for batch processing. The worker is an optional component. If there are no long-running operations, the worker can be omitted.

The front end might consist of a web API. On the client side, the web API can be consumed by a single-page application that makes AJAX calls, or by a native client application.

When to use this architecture
The Web-Queue-Worker architecture is typically implemented using managed compute services, either Azure App Service or Azure Cloud Services.

Consider this architecture style for:
Applications with a relatively simple domain.
Applications with some long-running workflows or batch operations.
When you want to use managed services, rather than infrastructure as a service (IaaS).

Benefits
Relatively simple architecture that is easy to understand.
Easy to deploy and manage.
Clear separation of concerns.
The front end is decoupled from the worker using asynchronous messaging.
The front end and the worker can be scaled independently.

Challenges
Without careful design, the front end and the worker can become large, monolithic components that are difficult to maintain and update.
There may be hidden dependencies, if the front end and worker share data schemas or code modules.

Best practices
Expose a well-designed API to the client. See API design best practices.
Autoscale to handle changes in load. See Autoscaling best practices.
Cache semi-static data. See Caching best practices.
Use a CDN to host static content. See CDN best practices.
Use polyglot persistence when appropriate. See Use the best data store for the job.
Partition data to improve scalability, reduce contention, and optimize performance. See Data partitioning best practices.
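To make the front-end/worker hand-off concrete, here is a minimal sketch using Azure Storage queues from Python. It is illustrative only: the queue name, connection string, and message shape are assumptions, and a production front end and worker would run as separate App Service apps or WebJobs with retries and poison-message handling.

```python
import json
import os

from azure.storage.queue import QueueClient  # pip install azure-storage-queue

# Both the web front end and the worker talk to the same Storage queue.
# The connection string variable and queue name are placeholders, and the
# queue is assumed to already exist (for example, created during deployment).
conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
queue = QueueClient.from_connection_string(conn_str, queue_name="work-items")


def enqueue_order(order_id: str) -> None:
    """Called by the web front end: accept the request, then hand the
    long-running work to the worker by posting a message to the queue."""
    queue.send_message(json.dumps({"type": "process-order", "order_id": order_id}))


def run_worker_once() -> None:
    """Called by the worker (for example, a WebJob on a schedule or in a
    continuous loop): pull messages, do the slow work, then delete them."""
    for msg in queue.receive_messages(messages_per_page=16, visibility_timeout=300):
        work_item = json.loads(msg.content)
        # ... perform the CPU-intensive or long-running task here ...
        queue.delete_message(msg)  # delete only after the work succeeds
```

Because the front end only enqueues a small message and returns, it stays responsive; the worker reads from the queue at its own pace and can be scaled out independently when the queue length grows.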
Web-Queue-Worker on Azure App Service
This section describes a recommended Web-Queue-Worker architecture that uses Azure App Service. The front end is implemented as an Azure App Service web app, and the worker is implemented as a WebJob. The web app and the WebJob are both associated with an App Service plan that provides the VM instances.

You can use either Azure Service Bus or Azure Storage queues for the message queue. (The diagram shows an Azure Storage queue.)

Azure Redis Cache stores session state and other data that needs low-latency access. Azure CDN is used to cache static content such as images, CSS, or HTML.

For storage, choose the storage technologies that best fit the needs of the application. You might use multiple storage technologies (polyglot persistence). To illustrate this idea, the diagram shows Azure SQL Database and Azure Cosmos DB.

For more details, see App Service web application reference architecture.

Additional considerations
Not every transaction has to go through the queue and worker to storage. The web front end can perform simple read/write operations directly. Workers are designed for resource-intensive tasks or long-running workflows. In some cases, you might not need a worker at all.
Use the built-in autoscale feature of App Service to scale out the number of VM instances. If the load on the application follows predictable patterns, use schedule-based autoscale. If the load is unpredictable, use metrics-based autoscaling rules.
Consider putting the web app and the WebJob into separate App Service plans. That way, they are hosted on separate VM instances and can be scaled independently.
Use separate App Service plans for production and testing. Otherwise, if you use the same plan for production and testing, it means your tests are running on your production VMs.
Use deployment slots to manage deployments. This lets you deploy an updated version to a staging slot, then swap over to the new version. It also lets you swap back to the previous version, if there was a problem with the update.

Microservices architecture style
A microservices architecture consists of a collection of small, autonomous services. Each service is self-contained and should implement a single business capability.

[Diagram: clients call an API gateway, which routes requests to individual services; supporting components include an identity provider, CDN for static content, remote services, management, and service discovery.]

In some ways, microservices are the natural evolution of service-oriented architectures (SOA), but there are differences between microservices and SOA. Here are some defining characteristics of a microservice:
In a microservices architecture, services are small, independent, and loosely coupled.
Each service is a separate codebase, which can be managed by a small development team.
Services can be deployed independently. A team can update an existing service without rebuilding and redeploying the entire application.
Services are responsible for persisting their own data or external state. This differs from the traditional model, where a separate data layer handles data persistence.
Services communicate with each other by using well-defined APIs. Internal implementation details of each service are hidden from other services.
Services don't need to share the same technology stack, libraries, or frameworks.

Besides the services themselves, some other components appear in a typical microservices architecture:
Management. The management component is responsible for placing services on nodes, identifying failures, rebalancing services across nodes, and so forth.
Service Discovery. Maintains a list of services and which nodes they are located on. Enables service lookup to find the endpoint for a service.
API Gateway. The API gateway is the entry point for clients. Clients don't call services directly. Instead, they call the API gateway, which forwards the call to the appropriate services on the back end. The API gateway might aggregate the responses from several services and return the aggregated response.

The advantages of using an API gateway include:
It decouples clients from services. Services can be versioned or refactored without needing to update all of the clients.
Services can use messaging protocols that are not web friendly, such as AMQP.
The API gateway can perform other cross-cutting functions such as authentication, logging, SSL termination, and load balancing.

When to use this architecture
Consider this architecture style for:
Large applications that require a high release velocity.
Complex applications that need to be highly scalable.
Applications with rich domains or many subdomains.
An organization that consists of small development teams.

Benefits
Independent deployments. You can update a service without redeploying the entire application, and roll back or roll forward an update if something goes wrong. Bug fixes and feature releases are more manageable and less risky.
Independent development. A single development team can build, test, and deploy a service. The result is continuous innovation and a faster release cadence.
Small, focused teams. Teams can focus on one service. The smaller scope of each service makes the code base easier to understand, and it's easier for new team members to ramp up.
Fault isolation. If a service goes down, it won't take out the entire application. However, that doesn't mean you get resiliency for free. You still need to follow resiliency best practices and design patterns. See Designing resilient applications for Azure.
Mixed technology stacks. Teams can pick the technology that best fits their service.
Granular scaling. Services can be scaled independently. At the same time, the higher density of services per VM means that VM resources are fully utilized. Using placement constraints, a service can be matched to a VM profile (high CPU, high memory, and so on).

Challenges
Complexity. A microservices application has more moving parts than the equivalent monolithic application. Each service is simpler, but the system as a whole is more complex.
Development and test. Developing against service dependencies requires a different approach. Existing tools are not necessarily designed to work with service dependencies. Refactoring across service boundaries can be difficult. It is also challenging to test service dependencies, especially when the application is evolving quickly.
Lack of governance. The decentralized approach to building microservices has advantages, but it can also lead to problems. You may end up with so many different languages and frameworks that the application becomes hard to maintain. It may be useful to put some project-wide standards in place, without overly restricting teams' flexibility. This especially applies to cross-cutting functionality such as logging.
Network congestion and latency. The use of many small, granular services can result in more interservice communication.
Also, if the chain of service dependencies gets too long (service A calls B, which calls C...), the additional latency can become a problem. You will need to design APIs carefully. Avoid overly chatty APIs, think about serialization formats, and look for places to use asynchronous communication patterns.
Data integrity. Each microservice is responsible for its own data persistence. As a result, data consistency can be a challenge. Embrace eventual consistency where possible.
Management. Succeeding with microservices requires a mature DevOps culture. Correlated logging across services can be challenging. Typically, logging must correlate multiple service calls for a single user operation.
Versioning. Updates to a service must not break services that depend on it. Multiple services could be updated at any given time, so without careful design, you might have problems with backward or forward compatibility.
Skillset. Microservices are highly distributed systems. Carefully evaluate whether the team has the skills and experience to be successful.

Best practices
Model services around the business domain.
Decentralize everything. Individual teams are responsible for designing and building services. Avoid sharing code or data schemas.
Data storage should be private to the service that owns the data. Use the best storage for each service and data type.
Services communicate through well-designed APIs. Avoid leaking implementation details. APIs should model the domain, not the internal implementation of the service.
Avoid coupling between services. Causes of coupling include shared database schemas and rigid communication protocols.
Offload cross-cutting concerns, such as authentication and SSL termination, to the gateway.
Keep domain knowledge out of the gateway. The gateway should handle and route client requests without any knowledge of the business rules or domain logic. Otherwise, the gateway becomes a dependency and can cause coupling between services.
Services should have loose coupling and high functional cohesion. Functions that are likely to change together should be packaged and deployed together. If they reside in separate services, those services end up being tightly coupled, because a change in one service will require updating the other service. Overly chatty communication between two services may be a symptom of tight coupling and low cohesion.
Isolate failures. Use resiliency strategies to prevent failures within a service from cascading. See Resiliency patterns and Designing resilient applications.

Next steps
For detailed guidance about building a microservices architecture on Azure, see Designing, building, and operating microservices on Azure.

CQRS architecture style
Command and Query Responsibility Segregation (CQRS) is an architecture style that separates read operations from write operations.

[Diagram: the client sends commands to the command (write) model, which updates the write database, and sends queries to the query model, which reads from a materialized view kept up to date by events from the write side.]

In traditional architectures, the same data model is used to query and update a database. That's simple and works well for basic CRUD operations. In more complex applications, however, this approach can become unwieldy. For example, on the read side, the application may perform many different queries, returning data transfer objects (DTOs) with different shapes. Object mapping can become complicated. On the write side, the model may implement complex validation and business logic. As a result, you can end up with an overly complex model that does too much.
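The paragraphs that follow describe how CQRS addresses these problems by splitting the model into a command side and a query side. As a preview, a minimal, framework-free Python sketch of that split might look like this; the class names, fields, and in-memory dictionaries are purely illustrative stand-ins for whatever write store, read store, and messaging technology a real system would use.

```python
from dataclasses import dataclass


# --- Write side: task-based commands with validation and business logic ---
@dataclass
class BookHotelRoom:                 # a command expresses intent, not a field update
    reservation_id: str
    room_type: str


class ReservationCommandHandler:
    def __init__(self, write_store: dict, publish) -> None:
        self.write_store = write_store   # stands in for the write database
        self.publish = publish           # publishes events that update the read side

    def handle(self, cmd: BookHotelRoom) -> None:
        if cmd.reservation_id in self.write_store:
            raise ValueError("reservation already exists")
        self.write_store[cmd.reservation_id] = {"room_type": cmd.room_type, "status": "Reserved"}
        self.publish({"type": "RoomBooked", "reservation_id": cmd.reservation_id,
                      "room_type": cmd.room_type})


# --- Read side: a simple query model over a denormalized view ---
class ReservationQueries:
    def __init__(self, read_view: dict) -> None:
        self.read_view = read_view       # stands in for a materialized view

    def reservations_by_room_type(self, room_type: str) -> list:
        # Queries never modify data; they return plain DTO-like dictionaries.
        return [r for r in self.read_view.values() if r["room_type"] == room_type]


# An event handler keeps the read view in sync with the write store.
def apply_room_booked(read_view: dict, event: dict) -> None:
    read_view[event["reservation_id"]] = {"room_type": event["room_type"], "status": "Reserved"}


# Wiring it together (in memory, for illustration only):
write_store, read_view = {}, {}
handler = ReservationCommandHandler(write_store, lambda e: apply_room_booked(read_view, e))
handler.handle(BookHotelRoom("r-42", "double"))
print(ReservationQueries(read_view).reservations_by_room_type("double"))
```

The point is the shape, not the storage: commands are task based ("book a hotel room"), queries only read and return simple DTO-like objects, and an event published by the write side keeps the read view eventually consistent.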
Another potential problem is that read and write workloads are often asymmetrical, with very different performance and scale requirements.

CQRS addresses these problems by separating reads and writes into separate models, using commands to update data, and queries to read data.
Commands should be task based, rather than data centric. ("Book hotel room," not "set ReservationStatus to Reserved.") Commands may be placed on a queue for asynchronous processing, rather than being processed synchronously.
Queries never modify the database. A query returns a DTO that does not encapsulate any domain knowledge.

For greater isolation, you can physically separate the read data from the write data. In that case, the read database can use its own data schema that is optimized for queries. For example, it can store a materialized view of the data, in order to avoid complex joins or complex O/RM mappings. It might even use a different type of data store. For example, the write database might be relational, while the read database is a document database.

If separate read and write databases are used, they must be kept in sync. Typically this is accomplished by having the write model publish an event whenever it updates the database. Updating the database and publishing the event must occur in a single transaction.

Some implementations of CQRS use the Event Sourcing pattern. With this pattern, application state is stored as a sequence of events. Each event represents a set of changes to the data. The current state is constructed by replaying the events. In a CQRS context, one benefit of Event Sourcing is that the same events can be used to notify other components; in particular, to notify the read model. The read model uses the events to create a snapshot of the current state, which is more efficient for queries. However, Event Sourcing adds complexity to the design.

[Diagram: with Event Sourcing, the command model appends events to an event store, which publishes them to build the materialized view that the query model reads.]

When to use this architecture
Consider CQRS for collaborative domains where many users access the same data, especially when the read and write workloads are asymmetrical.

CQRS is not a top-level architecture that applies to an entire system. Apply CQRS only to those subsystems where there is clear value in separating reads and writes. Otherwise, you are creating additional complexity for no benefit.

Benefits
Independent scaling. CQRS allows the read and write workloads to scale independently, and may result in less lock contention.
Optimized data schemas. The read side can use a schema that is optimized for queries, while the write side uses a schema that is optimized for updates.
Security. It's easier to ensure that only the right domain entities are performing writes on the data.
Separation of concerns. Segregating the read and write sides can result in models that are more maintainable and flexible. Most of the complex business logic goes into the write model. The read model can be relatively simple.
Simpler queries. By storing a materialized view in the read database, the application can avoid complex joins when querying.

Challenges
Complexity. The basic idea of CQRS is simple. But it can lead to a more complex application design, especially if it includes the Event Sourcing pattern.
Messaging. Although CQRS does not require messaging, it's common to use messaging to process commands and publish update events. In that case, the application must handle message failures or duplicate messages.
Eventual consistency.
If you separate the read and write databases, the read data may be stale.

Best practices
For more information about implementing CQRS, see the CQRS pattern.
Consider using the Event Sourcing pattern to avoid update conflicts.
Consider using the Materialized View pattern for the read model, to optimize the schema for queries.

CQRS in microservices
CQRS can be especially useful in a microservices architecture. One of the principles of microservices is that a service cannot directly access another service's data store.

In the following diagram, Service A writes to a data store, and Service B keeps a materialized view of the data. Service A publishes an event whenever it writes to the data store. Service B subscribes to the event.

Event-driven architecture style
An event-driven architecture consists of event producers that generate a stream of events, and event consumers that listen for the events.

[Diagram: event producers send events into an event ingestion component, which delivers them to multiple event consumers.]

Events are delivered in near real time, so consumers can respond immediately to events as they occur. Producers are decoupled from consumers; a producer doesn't know which consumers are listening. Consumers are also decoupled from each other, and every consumer sees all of the events. This differs from a Competing Consumers pattern, where consumers pull messages from a queue and a message is processed just once (assuming no errors). In some systems, such as IoT, events must be ingested at very high volumes.

An event-driven architecture can use a pub/sub model or an event stream model.
Pub/sub: The messaging infrastructure keeps track of subscriptions. When an event is published, it sends the event to each subscriber. After an event is received, it cannot be replayed, and new subscribers do not see the event.
Event streaming: Events are written to a log. Events are strictly ordered (within a partition) and durable. Clients don't subscribe to the stream; instead, a client can read from any part of the stream. The client is responsible for advancing its position in the stream. That means a client can join at any time, and can replay events.

On the consumer side, there are some common variations:
Simple event processing. An event immediately triggers an action in the consumer. For example, you could use Azure Functions with a Service Bus trigger, so that a function executes whenever a message is published to a Service Bus topic.
Complex event processing. A consumer processes a series of events, looking for patterns in the event data, using a technology such as Azure Stream Analytics or Apache Storm. For example, you could aggregate readings from an embedded device over a time window, and generate a notification if the moving average crosses a certain threshold.
Event stream processing. Use a data streaming platform, such as Azure IoT Hub or Apache Kafka, as a pipeline to ingest events and feed them to stream processors. The stream processors process or transform the stream. There may be multiple stream processors for different subsystems of the application. This approach is a good fit for IoT workloads.

The source of the events may be external to the system, such as physical devices in an IoT solution. In that case, the system must be able to ingest the data at the volume and throughput that is required by the data source.

In the logical diagram above, each type of consumer is shown as a single box.
In practice, it's common to have multiple instances of a consumer, to avoid having the consumer become a single point of failure in the system. Multiple instances might also be necessary to handle the volume and frequency of events. Also, a single consumer might process events on multiple threads. This can create challenges if events must be processed in order, or require exactly-once semantics. See Minimize Coordination.

When to use this architecture
Multiple subsystems must process the same events.
Real-time processing with minimum time lag.
Complex event processing, such as pattern matching or aggregation over time windows.
High volume and high velocity of data, such as IoT.

Benefits
Producers and consumers are decoupled.
No point-to-point integrations. It's easy to add new consumers to the system.
Consumers can respond to events immediately as they arrive.
Highly scalable and distributed.
Subsystems have independent views of the event stream.

Challenges
Guaranteed delivery. In some systems, especially in IoT scenarios, it's crucial to guarantee that events are delivered.
Processing events in order or exactly once. Each consumer type typically runs in multiple instances, for resiliency and scalability. This can create a challenge if the events must be processed in order (within a consumer type), or if the processing logic is not idempotent.

Big data architecture style
A big data architecture is designed to handle the ingestion, processing, and analysis of data that is too large or complex for traditional database systems.

[Diagram: data sources feed data storage and real-time message ingestion; batch processing and stream processing populate an analytical data store used for analytics and reporting, with orchestration coordinating the workflow.]

Big data solutions typically involve one or more of the following types of workload:
Batch processing of big data sources at rest.
Real-time processing of big data in motion.
Interactive exploration of big data.
Predictive analytics and machine learning.

Most big data architectures include some or all of the following components:
Data sources: All big data solutions start with one or more data sources. Examples include application data stores, such as relational databases; static files produced by applications, such as web server log files; and real-time data sources, such as IoT devices.
Data storage: Data for batch processing operations is typically stored in a distributed file store that can hold high volumes of large files in various formats. This kind of store is often called a data lake. Options for implementing this storage include Azure Data Lake Store or blob containers in Azure Storage.
Batch processing: Because the data sets are so large, often a big data solution must process data files using long-running batch jobs to filter, aggregate, and otherwise prepare the data for analysis. Usually these jobs involve reading source files, processing them, and writing the output to new files. Options include running U-SQL jobs in Azure Data Lake Analytics, using Hive, Pig, or custom Map/Reduce jobs in an HDInsight Hadoop cluster, or using Java, Scala, or Python programs in an HDInsight Spark cluster.
Real-time message ingestion: If the solution includes real-time sources, the architecture must include a way to capture and store real-time messages for stream processing. This might be a simple data store, where incoming messages are dropped into a folder for processing.
However, many solutions need a message ingestion store to act as a buffer for messages, and to support scale-out processing, reliable delivery, and other message queuing semantics. Options include Azure Event Hubs, Azure IoT Hub, and Kafka.
Stream processing: After capturing real-time messages, the solution must process them by filtering, aggregating, and otherwise preparing the data for analysis. The processed stream data is then written to an output sink. Azure Stream Analytics provides a managed stream processing service based on perpetually running SQL queries that operate on unbounded streams. You can also use open source Apache streaming technologies like Storm and Spark Streaming in an HDInsight cluster.
Analytical data store: Many big data solutions prepare data for analysis and then serve the processed data in a structured format that can be queried using analytical tools. The analytical data store used to serve these queries can be a Kimball-style relational data warehouse, as seen in most traditional business intelligence (BI) solutions. Alternatively, the data could be presented through a low-latency NoSQL technology such as HBase, or an interactive Hive database that provides a metadata abstraction over data files in the distributed data store. Azure SQL Data Warehouse provides a managed service for large-scale, cloud-based data warehousing. HDInsight supports Interactive Hive, HBase, and Spark SQL, which can also be used to serve data for analysis.
Analysis and reporting: The goal of most big data solutions is to provide insights into the data through analysis and reporting. To empower users to analyze the data, the architecture may include a data modeling layer, such as a multidimensional OLAP cube or tabular data model in Azure Analysis Services. It might also support self-service BI, using the modeling and visualization technologies in Microsoft Power BI or Microsoft Excel. Analysis and reporting can also take the form of interactive data exploration by data scientists or data analysts. For these scenarios, many Azure services support analytical notebooks, such as Jupyter, enabling these users to leverage their existing skills with Python or R. For large-scale data exploration, you can use Microsoft R Server, either standalone or with Spark.
Orchestration: Most big data solutions consist of repeated data processing operations, encapsulated in workflows, that transform source data, move data between multiple sources and sinks, load the processed data into an analytical data store, or push the results straight to a report or dashboard. To automate these workflows, you can use an orchestration technology such as Azure Data Factory or Apache Oozie and Sqoop.

Azure includes many services that can be used in a big data architecture. They fall roughly into two categories:
Managed services, including Azure Data Lake Store, Azure Data Lake Analytics, Azure SQL Data Warehouse, Azure Stream Analytics, Azure Event Hubs, Azure IoT Hub, and Azure Data Factory.
Open source technologies based on the Apache Hadoop platform, including HDFS, HBase, Hive, Pig, Spark, Storm, Oozie, Sqoop, and Kafka. These technologies are available on Azure in the Azure HDInsight service.

These options are not mutually exclusive, and many solutions combine open source technologies with Azure services.

When to use this architecture
Consider this architecture style when you need to:
Store and process data in volumes too large for a traditional database.
Transform unstructured data for analysis and reporting.
Capture, process, and analyze unbounded streams of data in real time, or with low latency. Use Azure Machine Learning or Microsoft Cognitive Services. Benefits Technology choices. You can mix and match Azure managed services and Apache technologies in HDInsight clusters, to capitalize on existing skills or technology investments. Performance through parallelism. Big data solutions take advantage of parallelism, enabling high-performance solutions that scale to large volumes of data. Elastic scale. All of the components in the big data architecture support scale-out provisioning, so that you can adjust your solution to small or large workloads, and pay only for the resources that you use. Interoperability with existing solutions. The components of the big data architecture are also used for IoT processing and enterprise BI solutions, enabling you to create an integrated solution across data workloads. Challenges Complexity. Big data solutions can be extremely complex, with numerous components to handle data ingestion from multiple data sources. It can be challenging to build, test, and troubleshoot big data processes. Moreover, there may be a large number of configuration settings across multiple systems that must be used in order to optimize performance. Skillset. Many big data technologies are highly specialized, and use frameworks and languages that are not typical of more general application architectures. On the other hand, big data technologies are evolving new APIs that build on more established languages. For example, the U-SQL language in Azure Data Lake Analytics is based on a combination of Transact-SQL and C#. Similarly, SQL-based APIs are available for Hive, HBase, and Spark. Technology maturity. Many of the technologies used in big data are evolving. While core Hadoop technologies such as Hive and Pig have stabilized, emerging technologies such as Spark introduce extensive changes and enhancements with each new release. Managed services such as Azure Data Lake Analytics and Azure Data Factory are relatively young, compared with other Azure services, and will likely evolve over time. Security. Big data solutions usually rely on storing all static data in a centralized data lake. Securing access to this data can be challenging, especially when the data must be ingested and consumed by multiple applications and platforms. Best practices Leverage parallelism. Most big data processing technologies distribute the workload across multiple processing units. This requires that static data files are created and stored in a splittable format. Distributed file systems such as HDFS can optimize read and write performance, and the actual processing is performed by multiple cluster nodes in parallel, which reduces overall job times. Partition data. Batch processing usually happens on a recurring schedule — for example, weekly or monthly. Partition data files, and data structures such as tables, based on temporal periods that match the processing schedule. That simplifies data ingestion and job scheduling, and makes it easier to troubleshoot failures. Also, partitioning tables that are used in Hive, U-SQL, or SQL queries can significantly improve query performance. Apply schema-on-read semantics. Using a data lake lets you combine storage for files in multiple formats, whether structured, semi-structured, or unstructured. Use schema-on-read semantics, which project a schema onto the data when the data is processed, not when the data is stored. This builds flexibility into the solution, and prevents bottlenecks during data ingestion caused by data validation and type checking.
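As an illustration of the last two practices, here is a small PySpark sketch that reads date-partitioned JSON files from a data lake and projects a schema at read time. The path layout, field names, and storage URI are illustrative assumptions rather than a prescribed convention.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("schema-on-read-example").getOrCreate()

# The schema is projected when the data is read; the raw files were stored as-is.
event_schema = StructType([
    StructField("deviceId", StringType()),
    StructField("temperature", DoubleType()),
    StructField("eventTime", TimestampType()),
])

# A hypothetical folder layout partitioned by date lines up with a daily batch schedule.
events = (
    spark.read
         .schema(event_schema)
         .json("adl://<your-data-lake>/events/year=2019/month=03/day=*/")
)

events.groupBy("deviceId").avg("temperature").show()
```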
Process data in-place. Traditional BI solutions often use an extract, transform, and load (ETL) process to move data into a data warehouse. With larger volumes of data, and a greater variety of formats, big data solutions generally use variations of ETL, such as transform, extract, and load (TEL). With this approach, the data is processed within the distributed data store, transforming it to the required structure, before moving the transformed data into an analytical data store. Balance utilization and time costs. For batch processing jobs, it's important to consider two factors: the per-unit cost of the compute nodes, and the per-minute cost of using those nodes to complete the job. For example, a batch job may take eight hours with four cluster nodes. However, it might turn out that the job uses all four nodes only during the first two hours, and after that, only two nodes are required. In that case, running the entire job on two nodes would increase the total job time, but would not double it, so the total cost would be less. (For example, at the same per-node rate, four nodes for eight hours consume 32 node-hours, while the same job run entirely on two nodes might take ten hours and consume only 20 node-hours.) In some business scenarios, a longer processing time may be preferable to the higher cost of using under-utilized cluster resources. Separate cluster resources. When deploying HDInsight clusters, you will normally achieve better performance by provisioning separate cluster resources for each type of workload. For example, although Spark clusters include Hive, if you need to perform extensive processing with both Hive and Spark, you should consider deploying separate dedicated Spark and Hadoop clusters. Similarly, if you are using HBase and Storm for low-latency stream processing and Hive for batch processing, consider separate clusters for Storm, HBase, and Hadoop. Orchestrate data ingestion. In some cases, existing business applications may write data files for batch processing directly into Azure storage blob containers, where they can be consumed by HDInsight or Azure Data Lake Analytics. However, you will often need to orchestrate the ingestion of data from on-premises or external data sources into the data lake. Use an orchestration workflow or pipeline, such as those supported by Azure Data Factory or Oozie, to achieve this in a predictable and centrally manageable fashion. Scrub sensitive data early. The data ingestion workflow should scrub sensitive data early in the process, to avoid storing it in the data lake. IoT architecture Internet of Things (IoT) is a specialized subset of big data solutions. The following diagram shows a possible logical architecture for IoT. The diagram emphasizes the event-streaming components of the architecture. The cloud gateway ingests device events at the cloud boundary, using a reliable, low-latency messaging system. Devices might send events directly to the cloud gateway, or through a field gateway. A field gateway is a specialized device or software, usually colocated with the devices, that receives events and forwards them to the cloud gateway. The field gateway might also preprocess the raw device events, performing functions such as filtering, aggregation, or protocol transformation. After ingestion, events go through one or more stream processors that can route the data (for example, to storage) or perform analytics and other processing. The following are some common types of processing. (This list is certainly not exhaustive.)
Writing event data to cold storage, for archiving or batch analytics. Hot path analytics, analyzing the event stream in (near) real time, to detect anomalies, recognize patterns over rolling time windows, or trigger alerts when a specific condition occurs in the stream. Handling special types of non-telemetry messages from devices, such as notifications and alarms. Machine learning. The boxes that are shaded gray show components of an IoT system that are not directly related to event streaming, but are included here for completeness. The device registry is a database of the provisioned devices, including the device IDs and usually device metadata, such as location. The provisioning API is a common external interface for provisioning and registering new devices. Some IoT solutions allow command and control messages to be sent to devices. This section has presented a very high-level view of IoT, and there are many subtleties and challenges to consider. For a more detailed reference architecture and discussion, see the Microsoft Azure IoT Reference Architecture (PDF download). Next steps Learn more about big data architectures.

Big compute architecture style 3/13/2019 • 3 minutes to read • Edit Online The term big compute describes large-scale workloads that require a large number of cores, often numbering in the hundreds or thousands. Scenarios include image rendering, fluid dynamics, financial risk modeling, oil exploration, drug design, and engineering stress analysis, among others. Here are some typical characteristics of big compute applications: The work can be split into discrete tasks, which can be run across many cores simultaneously. Each task is finite. It takes some input, does some processing, and produces output. The entire application runs for a finite amount of time (minutes to days). A common pattern is to provision a large number of cores in a burst, and then spin down to zero once the application completes. The application does not need to stay up 24/7. However, the system must handle node failures or application crashes. For some applications, tasks are independent and can run in parallel. In other cases, tasks are tightly coupled, meaning they must interact or exchange intermediate results. In that case, consider using high-speed networking technologies such as InfiniBand and remote direct memory access (RDMA). Depending on your workload, you might use compute-intensive VM sizes (H16r, H16mr, and A9). When to use this architecture Computationally intensive operations such as simulation and number crunching. Simulations that are computationally intensive and must be split across CPUs in multiple computers (10s to 1000s). Simulations that require too much memory for one computer, and must be split across multiple computers. Long-running computations that would take too long to complete on a single computer. Smaller computations that must be run 100s or 1000s of times, such as Monte Carlo simulations. Benefits High performance with "embarrassingly parallel" processing. Can harness hundreds or thousands of computer cores to solve large problems faster. Access to specialized high-performance hardware, with dedicated high-speed InfiniBand networks. You can provision VMs as needed to do work, and then tear them down. Challenges Managing the VM infrastructure. Managing the volume of number crunching. Provisioning thousands of cores in a timely manner. For tightly coupled tasks, adding more cores can have diminishing returns. You may need to experiment to find the optimum number of cores.
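To illustrate the task model described above, here is a minimal local sketch of an embarrassingly parallel workload in pure Python, standing in for the kind of discrete, finite tasks that a Batch or HPC scheduler would distribute across many nodes. The Monte Carlo estimate of pi is just an illustrative stand-in computation.

```python
import random
from multiprocessing import Pool

def run_task(samples: int) -> int:
    """One finite, independent task: count random points that fall inside the unit circle."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return inside

if __name__ == "__main__":
    tasks = [1_000_000] * 16          # 16 discrete tasks, each with its own input
    with Pool() as pool:              # local cores here; a scheduler would fan tasks out to nodes
        hits = pool.map(run_task, tasks)
    print("pi is approximately", 4 * sum(hits) / sum(tasks))
```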
Big compute using Azure Batch Azure Batch is a managed service for running large-scale high-performance computing (HPC) applications. Using Azure Batch, you configure a VM pool, and upload the applications and data files. Then the Batch service provisions the VMs, assigns tasks to the VMs, runs the tasks, and monitors the progress. Batch can automatically scale out the VMs in response to the workload. Batch also provides job scheduling. Big compute running on Virtual Machines You can use Microsoft HPC Pack to administer a cluster of VMs, and schedule and monitor HPC jobs. With this approach, you must provision and manage the VMs and network infrastructure. Consider this approach if you have existing HPC workloads and want to move some or all of it to Azure. You can move the entire HPC cluster to Azure, or keep your HPC cluster on-premises but use Azure for burst capacity. For more information, see Batch and HPC solutions for large-scale computing workloads. HPC Pack deployed to Azure In this scenario, the HPC cluster is created entirely within Azure. The head node provides management and job scheduling services to the cluster. For tightly coupled tasks, use an RDMA network that provides very high bandwidth, low-latency communication between VMs. For more information, see Deploy an HPC Pack 2016 cluster in Azure. Burst an HPC cluster to Azure In this scenario, an organization is running HPC Pack on-premises, and uses Azure VMs for burst capacity. The cluster head node is on-premises. ExpressRoute or VPN Gateway connects the on-premises network to the Azure VNet.

Overview of Azure compute options 3/13/2019 • 3 minutes to read • Edit Online The term compute refers to the hosting model for the computing resources that your application runs on. Overview At one end of the spectrum is Infrastructure-as-a-Service (IaaS). With IaaS, you provision the VMs that you need, along with associated network and storage components. Then you deploy whatever software and applications you want onto those VMs. This model is the closest to a traditional on-premises environment, except that Microsoft manages the infrastructure. You still manage the individual VMs. Platform as a service (PaaS) provides a managed hosting environment, where you can deploy your application without needing to manage VMs or networking resources. For example, instead of creating individual VMs, you specify an instance count, and the service will provision, configure, and manage the necessary resources. Azure App Service is an example of a PaaS service. There is a spectrum from IaaS to pure PaaS. For example, Azure VMs can auto-scale by using VM Scale Sets. This automatic scaling capability isn't strictly PaaS, but it's the type of management feature that might be found in a PaaS service. Functions-as-a-Service (FaaS) goes even further in removing the need to worry about the hosting environment. Instead of creating compute instances and deploying code to those instances, you simply deploy your code, and the service automatically runs it. You don't need to administer the compute resources. These services use a serverless architecture, and seamlessly scale up or down to whatever level necessary to handle the traffic. Azure Functions is a FaaS service. IaaS gives the most control, flexibility, and portability. FaaS provides simplicity, elastic scale, and potential cost savings, because you pay only for the time your code is running. PaaS falls somewhere between the two.
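To make the FaaS model concrete, here is a minimal sketch of an HTTP-triggered Azure Function written for the Python worker. The function name, route, and greeting logic are illustrative, and the accompanying function.json binding configuration is omitted.

```python
import logging
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # The platform provisions and scales the workers; this code only handles one request.
    name = req.params.get("name", "world")
    logging.info("Handling greeting request for %s", name)
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```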
In general, the more flexibility a service provides, the more you are responsible for configuring and managing the resources. FaaS services automatically manage nearly all aspects of running an application, while IaaS solutions require you to provision, configure, and manage the VMs and network components you create. Azure compute options Here are the main compute options currently available in Azure: Virtual Machines are an IaaS service, allowing you to deploy and manage VMs inside a virtual network (VNet). App Service is a managed PaaS offering for hosting web apps, mobile app back ends, RESTful APIs, or automated business processes. Service Fabric is a distributed systems platform that can run in many environments, including Azure or on premises. Service Fabric is an orchestrator of microservices across a cluster of machines. Azure Kubernetes Service is a managed Kubernetes service for running containerized applications. Azure Container Instances offer the fastest and simplest way to run a container in Azure, without having to provision any virtual machines and without having to adopt a higher-level service. Azure Functions is a managed FaaS service. Azure Batch is a managed service for running large-scale parallel and high-performance computing (HPC) applications. Cloud Services is a managed service for running cloud applications. It uses a PaaS hosting model. When selecting a compute option, here are some factors to consider: Hosting model. How is the service hosted? What requirements and limitations are imposed by this hosting environment? DevOps. Is there built-in support for application upgrades? What is the deployment model? Scalability. How does the service handle adding or removing instances? Can it auto-scale based on load and other metrics? Availability. What is the service SLA? Cost. In addition to the cost of the service itself, consider the operations cost for managing a solution built on that service. For example, IaaS solutions might have a higher operations cost. What are the overall limitations of each service? What kind of application architectures are appropriate for this service? Next steps To help select a compute service for your application, use the Decision tree for Azure compute services. For a more detailed comparison of compute options in Azure, see Criteria for choosing an Azure compute service.

Decision tree for Azure compute services 3/13/2019 • 2 minutes to read • Edit Online Azure offers a number of ways to host your application code. The term compute refers to the hosting model for the computing resources that your application runs on. The following flowchart will help you to choose a compute service for your application. The flowchart guides you through a set of key decision criteria to reach a recommendation. Treat this flowchart as a starting point. Every application has unique requirements, so use the recommendation as a starting point. Then perform a more detailed evaluation, looking at aspects such as: Feature set Service limits Cost SLA Regional availability Developer ecosystem and team skills Compute comparison tables If your application consists of multiple workloads, evaluate each workload separately. A complete solution may incorporate two or more compute services. For more information about your options for hosting containers in Azure, see Azure Containers. Flowchart Definitions Lift and shift is a strategy for migrating a workload to the cloud without redesigning the application or making code changes. Also called rehosting.
For more information, see Azure migration center. Cloud optimized is a strategy for migrating to the cloud by refactoring an application to take advantage of cloud-native features and capabilities. Next steps For additional criteria to consider, see Criteria for choosing an Azure compute service.

Criteria for choosing an Azure compute service 3/13/2019 • 3 minutes to read • Edit Online The term compute refers to the hosting model for the computing resources that your application runs on. The following tables compare Azure compute services across several axes. Refer to these tables when selecting a compute option for your application.

Hosting model
CRITERIA | VIRTUAL MACHINES | APP SERVICE | SERVICE FABRIC | AZURE FUNCTIONS | AZURE KUBERNETES SERVICE | CONTAINER INSTANCES | AZURE BATCH
Application composition | Agnostic | Applications, containers | Services, guest executables, containers | Functions | Containers | Containers | Scheduled jobs
Density | Agnostic | Multiple apps per instance via app service plans | Multiple services per VM | Serverless [1] | Multiple containers per node | No dedicated instances | Multiple apps per VM
Minimum number of nodes | 1 [2] | 1 | 5 [3] | Serverless [1] | 3 [3] | No dedicated nodes | 1 [4]
State management | Stateless or stateful | Stateless | Stateless or stateful | Stateless | Stateless or stateful | Stateless | Stateless
Web hosting | Agnostic | Built in | Agnostic | Not applicable | Agnostic | Agnostic | No
Can be deployed to dedicated VNet? | Supported | Supported [5] | Supported | Supported [5] | Supported | Not supported | Supported
Hybrid connectivity | Supported | Supported [6] | Supported | Supported [7] | Supported | Not supported | Supported
Notes 1. If using Consumption plan. If using App Service plan, functions run on the VMs allocated for your App Service plan. See Choose the correct service plan for Azure Functions. 2. Higher SLA with two or more instances. 3. Recommended for production environments. 4. Can scale down to zero after job completes. 5. Requires App Service Environment (ASE). 6. Use Azure App Service Hybrid Connections. 7. Requires App Service plan.

DevOps
CRITERIA | VIRTUAL MACHINES | APP SERVICE | SERVICE FABRIC | AZURE FUNCTIONS | AZURE KUBERNETES SERVICE | CONTAINER INSTANCES | AZURE BATCH
Local debugging | Agnostic | IIS Express, others [1] | Local node cluster | Visual Studio or Azure Functions CLI | Minikube, others | Local container runtime | Not supported
Programming model | Agnostic | Web and API applications, WebJobs for background tasks | Guest executable, Service model, Actor model, Containers | Functions with triggers | Agnostic | Agnostic | Command line application
Application update | No built-in support | Deployment slots | Rolling upgrade (per service) | Deployment slots | Rolling update | Not applicable |
Notes 1. Options include IIS Express for ASP.NET or node.js (iisnode); PHP web server; Azure Toolkit for IntelliJ, Azure Toolkit for Eclipse. App Service also supports remote debugging of deployed web app. 2. See Resource Manager providers, regions, API versions and schemas.

Scalability
CRITERIA | VIRTUAL MACHINES | APP SERVICE | SERVICE FABRIC | AZURE FUNCTIONS | AZURE KUBERNETES SERVICE | CONTAINER INSTANCES | AZURE BATCH
Autoscaling | VM scale sets | Built-in service | VM Scale Sets | Built-in service | Not supported | Not supported | N/A
Load balancer | Azure Load Balancer | Integrated | Azure Load Balancer | Integrated | Integrated | No built-in support | Azure Load Balancer
Scale limit [1] | Platform image: 1000 nodes per VMSS, Custom image: 100 nodes per VMSS | 20 instances, 100 with App Service Environment | 100 nodes per VMSS | 200 instances per Function app | 100 nodes per cluster (default limit) | 20 container groups per subscription (default limit) | 20 core limit (default limit)
Notes 1. See Azure subscription and service limits, quotas, and constraints.

Availability
CRITERIA | VIRTUAL MACHINES | APP SERVICE | SERVICE FABRIC | AZURE FUNCTIONS | AZURE KUBERNETES SERVICE | CONTAINER INSTANCES | AZURE BATCH
SLA | SLA for Virtual Machines | SLA for App Service | SLA for Service Fabric | SLA for Functions | SLA for AKS | SLA for Container Instances | SLA for Azure Batch
Multi-region failover | Traffic manager | Traffic manager | Traffic manager, Multi-Region Cluster | Not supported | Traffic manager | Not supported | Not supported

Other
CRITERIA | VIRTUAL MACHINES | APP SERVICE | SERVICE FABRIC | AZURE FUNCTIONS | AZURE KUBERNETES SERVICE | CONTAINER INSTANCES | AZURE BATCH
SSL | Configured in VM | Supported | Supported | Supported | Ingress controller | Use sidecar container | Supported
Cost | Windows, Linux | App Service pricing | Service Fabric pricing | Azure Functions pricing | AKS pricing | Container Instances pricing | Azure Batch pricing
Suitable architecture styles | N-Tier, Big compute (HPC) | Web-Queue-Worker, N-Tier | Microservices, Event-driven architecture | Microservices, Event-driven architecture | Microservices, Event-driven architecture | Microservices, task automation, batch jobs | Big compute (HPC)

Choose the right data store 3/25/2019 • 10 minutes to read • Edit Online Modern business systems manage increasingly large volumes of data. Data may be ingested from external services, generated by the system itself, or created by users. These data sets may have extremely varied characteristics and processing requirements. Businesses use data to assess trends, trigger business processes, audit their operations, analyze customer behavior, and many other things. This heterogeneity means that a single data store is usually not the best approach. Instead, it's often better to store different types of data in different data stores, each focused towards a specific workload or usage pattern. The term polyglot persistence is used to describe solutions that use a mix of data store technologies. Selecting the right data store for your requirements is a key design decision. There are literally hundreds of implementations to choose from among SQL and NoSQL databases. Data stores are often categorized by how they structure data and the types of operations they support. This article describes several of the most common storage models. Note that a particular data store technology may support multiple storage models. For example, a relational database management system (RDBMS) may also support key/value or graph storage. In fact, there is a general trend for so-called multi-model support, where a single database system supports several models. But it's still useful to understand the different models at a high level. Not all data stores in a given category provide the same feature set. Most data stores provide server-side functionality to query and process data. Sometimes this functionality is built into the data storage engine. In other cases, the data storage and processing capabilities are separated, and there may be several options for processing and analysis. Data stores also support different programmatic and management interfaces. Generally, you should start by considering which storage model is best suited for your requirements. Then consider a particular data store within that category, based on factors such as feature set, cost, and ease of management. Relational database management systems Relational databases organize data as a series of two-dimensional tables with rows and columns.
Each table has its own columns, and every row in a table has the same set of columns. This model is mathematically based, and most vendors provide a dialect of the Structured Query Language (SQL) for retrieving and managing data. An RDBMS typically implements a transactionally consistent mechanism that conforms to the ACID (Atomic, Consistent, Isolated, Durable) model for updating information. An RDBMS typically supports a schema-on-write model, where the data structure is defined ahead of time, and all read or write operations must use the schema. This is in contrast to most NoSQL data stores, particularly key/value types, where the schema-on-read model assumes that the client will be imposing its own interpretive schema on data coming out of the database, and is agnostic to the data format being written. An RDBMS is very useful when strong consistency guarantees are important — where all changes are atomic, and transactions always leave the data in a consistent state. However, the underlying structures do not lend themselves to scaling out by distributing storage and processing across machines. Also, information stored in an RDBMS must be put into a relational structure by following the normalization process. While this process is well understood, it can lead to inefficiencies, because of the need to disassemble logical entities into rows in separate tables, and then reassemble the data when running queries. Relevant Azure services: Azure SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL, Azure Database for MariaDB. Key/value stores A key/value store is essentially a large hash table. You associate each data value with a unique key, and the key/value store uses this key to store the data by using an appropriate hashing function. The hashing function is selected to provide an even distribution of hashed keys across the data storage. Most key/value stores only support simple query, insert, and delete operations. To modify a value (either partially or completely), an application must overwrite the existing data for the entire value. In most implementations, reading or writing a single value is an atomic operation. If the value is large, writing may take some time. An application can store arbitrary data as a set of values, although some key/value stores impose limits on the maximum size of values. The stored values are opaque to the storage system software. Any schema information must be provided and interpreted by the application. Essentially, values are blobs and the key/value store simply retrieves or stores the value by key. Key/value stores are highly optimized for applications performing simple lookups, but are less suitable for systems that need to query data across different key/value stores. Key/value stores are also not optimized for scenarios where querying by value is important, rather than performing lookups based only on keys. For example, with a relational database, you can find a record by using a WHERE clause, but key/value stores usually do not have this type of lookup capability for values. A single key/value store can be extremely scalable, as the data store can easily distribute data across multiple nodes on separate machines. Relevant Azure services: Cosmos DB, Azure Redis Cache.
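As a brief illustration of the key/value model, here is a sketch using the redis Python client against an Azure Cache for Redis instance. The host name, key, and access key are placeholders; the point is that the store only sees an opaque value, and the application decides how to serialize and interpret it.

```python
import json
import redis

# Placeholder connection details for an Azure Cache for Redis instance.
cache = redis.Redis(
    host="<your-cache-name>.redis.cache.windows.net",
    port=6380,
    password="<access-key>",
    ssl=True,
)

# The value is opaque to the store: the application chooses the serialization (JSON here).
profile = {"userId": "42", "theme": "dark", "locale": "en-US"}
cache.set("profile:42", json.dumps(profile), ex=3600)  # expire after one hour

raw = cache.get("profile:42")
if raw is not None:
    print(json.loads(raw))
```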
Document databases A document database is conceptually similar to a key/value store, except that it stores a collection of named fields and data (known as documents), each of which could be simple scalar items or compound elements such as lists and child collections. The data in the fields of a document can be encoded in a variety of ways, including XML, YAML, JSON, BSON, or even stored as plain text. Unlike key/value stores, the fields in documents are exposed to the storage management system, enabling an application to query and filter data by using the values in these fields. Typically, a document contains the entire data for an entity. What items constitute an entity is application-specific. For example, an entity could contain the details of a customer, an order, or a combination of both. A single document may contain information that would be spread across several relational tables in an RDBMS. A document store does not require that all documents have the same structure. This free-form approach provides a great deal of flexibility. Applications can store different data in documents as business requirements change. The application can retrieve documents by using the document key. This is a unique identifier for the document, which is often hashed, to help distribute data evenly. Some document databases create the document key automatically. Others enable you to specify an attribute of the document to use as the key. The application can also query documents based on the value of one or more fields. Some document databases support indexing to facilitate fast lookup of documents based on one or more indexed fields. Many document databases support in-place updates, enabling an application to modify the values of specific fields in a document without rewriting the entire document. Read and write operations over multiple fields in a single document are usually atomic. Relevant Azure service: Cosmos DB. Graph databases A graph database stores two types of information, nodes and edges. You can think of nodes as entities. Edges specify the relationships between nodes. Both nodes and edges can have properties that provide information about that node or edge, similar to columns in a table. Edges can also have a direction indicating the nature of the relationship. The purpose of a graph database is to allow an application to efficiently perform queries that traverse the network of nodes and edges, and to analyze the relationships between entities. The following diagram shows an organization's personnel database structured as a graph. The entities are employees and departments, and the edges indicate reporting relationships and the department in which employees work. In this graph, the arrows on the edges show the direction of the relationships. This structure makes it straightforward to perform queries such as "Find all employees who report directly or indirectly to Sarah" or "Who works in the same department as John?" For large graphs with lots of entities and relationships, you can perform very complex analyses very quickly. Many graph databases provide a query language that you can use to traverse a network of relationships efficiently. Relevant Azure service: Cosmos DB.
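To show why this kind of traversal is natural, here is a tiny, self-contained Python sketch of the "who reports directly or indirectly to Sarah" query over an in-memory reports-to relation. The names and edges are invented for illustration; a graph database would answer the same question with a traversal query language such as Gremlin.

```python
from collections import deque

# Hypothetical "reports to" edges: manager -> direct reports.
reports_to = {
    "Sarah": ["John", "Priya"],
    "John": ["Li", "Marta"],
    "Priya": ["Omar"],
}

def all_reports(manager: str) -> list[str]:
    """Breadth-first traversal over the edges to find direct and indirect reports."""
    found, queue = [], deque(reports_to.get(manager, []))
    while queue:
        person = queue.popleft()
        found.append(person)
        queue.extend(reports_to.get(person, []))
    return found

print(all_reports("Sarah"))  # ['John', 'Priya', 'Li', 'Marta', 'Omar']
```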
Column-family databases A column-family database organizes data into rows and columns. In its simplest form, a column-family database can appear very similar to a relational database, at least conceptually. The real power of a column-family database lies in its denormalized approach to structuring sparse data. You can think of a column-family database as holding tabular data with rows and columns, but the columns are divided into groups known as column families. Each column family holds a set of columns that are logically related and are typically retrieved or manipulated as a unit. Other data that is accessed separately can be stored in separate column families. Within a column family, new columns can be added dynamically, and rows can be sparse (that is, a row doesn't need to have a value for every column). The following diagram shows an example with two column families, Identity and Contact Info. The data for a single entity has the same row key in each column family. This structure, where the rows for any given object in a column family can vary dynamically, is an important benefit of the column-family approach, making this form of data store highly suited for storing structured, volatile data. Unlike a key/value store or a document database, most column-family databases store data in key order, rather than by computing a hash. Many implementations allow you to create indexes over specific columns in a column family. Indexes let you retrieve data by column value, rather than by row key. Read and write operations for a row are usually atomic within a single column family, although some implementations provide atomicity across the entire row, spanning multiple column families. Relevant Azure service: HBase in HDInsight. Data analytics Data analytics stores provide massively parallel solutions for ingesting, storing, and analyzing data. This data is distributed across multiple servers using a shared-nothing architecture to maximize scalability and minimize dependencies. The data is unlikely to be static, so these stores must be able to handle large quantities of information, arriving in a variety of formats from multiple streams, while continuing to process new queries. Relevant Azure services: SQL Data Warehouse, Azure Data Lake, Azure Data Explorer. Search Engine Databases A search engine database supports the ability to search for information held in external data stores and services. A search engine database can be used to index massive volumes of data and provide near real-time access to these indexes. Although search engine databases are commonly thought of as being synonymous with the web, many large-scale systems use them to provide structured and ad-hoc search capabilities on top of their own databases. The key characteristics of a search engine database are the ability to store and index information very quickly, and provide fast response times for search requests. Indexes can be multi-dimensional and may support free-text searches across large volumes of text data. Indexing can be performed using a pull model, triggered by the search engine database, or using a push model, initiated by external application code. Searching can be exact or fuzzy. A fuzzy search finds documents that match a set of terms and calculates how closely they match. Some search engines also support linguistic analysis that can return matches based on synonyms, genre expansions (for example, matching dogs to pets), and stemming (matching words with the same root). Relevant Azure service: Azure Search. Time Series Databases Time series data is a set of values organized by time, and a time series database is a database that is optimized for this type of data. Time series databases must support a very high number of writes, as they typically collect large amounts of data in real time from a large number of sources. Updates are rare, and deletes are often done as bulk operations. Although the records written to a time-series database are generally small, there are often a large number of records, and total data size can grow rapidly. Time series databases are good for storing telemetry data. Scenarios include IoT sensors or application/system counters. Relevant Azure service: Time Series Insights.
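As a small illustration of how time-ordered telemetry is typically handled, here is a pandas sketch that takes readings arriving in time order and downsamples them into per-minute averages. The sensor values are invented; a dedicated time series database performs this kind of time-bucketed aggregation at much larger scale.

```python
import pandas as pd

# Invented telemetry: readings arrive in time order and are effectively append-only.
readings = pd.DataFrame({
    "eventTime": pd.to_datetime([
        "2019-03-13 10:00:05", "2019-03-13 10:00:35",
        "2019-03-13 10:01:10", "2019-03-13 10:01:50",
    ]),
    "deviceId": ["sensor-1", "sensor-1", "sensor-1", "sensor-1"],
    "temperature": [21.4, 21.6, 22.0, 22.3],
})

# Time is the primary axis: index by timestamp, then aggregate into one-minute buckets.
per_minute = (
    readings.set_index("eventTime")
            .groupby("deviceId")["temperature"]
            .resample("1min")
            .mean()
)
print(per_minute)
```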
Object storage Object storage is optimized for storing and retrieving large binary objects (images, files, video and audio streams, large application data objects and documents, virtual machine disk images). Objects in these store types are composed of the stored data, some metadata, and a unique ID for accessing the object. Object stores enable the management of extremely large amounts of unstructured data. Relevant Azure service: Blob Storage. Shared files Sometimes, using simple flat files can be the most effective means of storing and retrieving information. Using file shares enables files to be accessed across a network. Given appropriate security and concurrent access control mechanisms, sharing data in this way can enable distributed services to provide highly scalable data access for performing basic, low-level operations such as simple read and write requests. Relevant Azure service: File Storage.

Criteria for choosing a data store 3/13/2019 • 9 minutes to read • Edit Online Azure supports many types of data storage solutions, each providing different features and capabilities. This article describes the comparison criteria you should use when evaluating a data store. The goal is to help you determine which data storage types can meet your solution's requirements. General Considerations To start your comparison, gather as much of the following information as you can about your data needs. This information will help you to determine which data storage types will meet your needs. Functional requirements Data format. What type of data are you intending to store? Common types include transactional data, JSON objects, telemetry, search indexes, or flat files. Data size. How large are the entities you need to store? Will these entities need to be maintained as a single document, or can they be split across multiple documents, tables, collections, and so forth? Scale and structure. What is the overall amount of storage capacity you need? Do you anticipate partitioning your data? Data relationships. Will your data need to support one-to-many or many-to-many relationships? Are relationships themselves an important part of the data? Will you need to join or otherwise combine data from within the same dataset, or from external datasets? Consistency model. How important is it for updates made in one node to appear in other nodes, before further changes can be made? Can you accept eventual consistency? Do you need ACID guarantees for transactions? Schema flexibility. What kind of schemas will you apply to your data? Will you use a fixed schema, a schema-on-write approach, or a schema-on-read approach? Concurrency. What kind of concurrency mechanism do you want to use when updating and synchronizing data? Will the application perform many updates that could potentially conflict? If so, you may require record locking and pessimistic concurrency control. Alternatively, can you support optimistic concurrency controls? If so, is simple timestamp-based concurrency control enough, or do you need the added functionality of multi-version concurrency control? Data movement. Will your solution need to perform ETL tasks to move data to other stores or data warehouses? Data lifecycle.
Is the data write-once, read-many? Can it be moved into cool or cold storage? Other supported features. Do you need any other specific features, such as schema validation, aggregation, indexing, full-text search, MapReduce, or other query capabilities? Non-functional requirements Performance and scalability. What are your data performance requirements? Do you have specific requirements for data ingestion rates and data processing rates? What are the acceptable response times for querying and aggregation of data once ingested? How large will you need the data store to scale up? Is your workload more read-heavy or write-heavy? Reliability. What overall SLA do you need to support? What level of fault-tolerance do you need to provide for data consumers? What kind of backup and restore capabilities do you need? Replication. Will your data need to be distributed among multiple replicas or regions? What kind of data replication capabilities do you require? Limits. Will the limits of a particular data store support your requirements for scale, number of connections, and throughput? Management and cost Managed service. When possible, use a managed data service, unless you require specific capabilities that can only be found in an IaaS-hosted data store. Region availability. For managed services, is the service available in all Azure regions? Does your solution need to be hosted in certain Azure regions? Portability. Will your data need to be migrated to on-premises, external datacenters, or other cloud hosting environments? Licensing. Do you have a preference for a proprietary versus OSS license type? Are there any other external restrictions on what type of license you can use? Overall cost. What is the overall cost of using the service within your solution? How many instances will need to run, to support your uptime and throughput requirements? Consider operations costs in this calculation. One reason to prefer managed services is the reduced operational cost. Cost effectiveness. Can you partition your data, to store it more cost effectively? For example, can you move large objects out of an expensive relational database into an object store? Security Security. What type of encryption do you require? Do you need encryption at rest? What authentication mechanism do you want to use to connect to your data? Auditing. What kind of audit log do you need to generate? Networking requirements. Do you need to restrict or otherwise manage access to your data from other network resources? Does data need to be accessible only from inside the Azure environment? Does the data need to be accessible from specific IP addresses or subnets? Does it need to be accessible from applications or services hosted on-premises or in other external datacenters? DevOps Skill set. Are there particular programming languages, operating systems, or other technology that your team is particularly adept at using? Are there others that would be difficult for your team to work with? Clients. Is there good client support for your development languages? The following sections compare various data store models in terms of workload profile, data types, and example use cases. Relational database management systems (RDBMS) Workload Both the creation of new records and updates to existing data happen regularly. Multiple operations have to be completed in a single transaction. Requires aggregation functions to perform cross-tabulation. Strong integration with reporting tools is required. Relationships are enforced using database constraints.
Indexes are used to optimize query performance. Allows access to specific subsets of data. Data type Data is highly normalized. Database schemas are required and enforced. Many-to-many relationships between data entities in the database. Constraints are defined in the schema and imposed on any data in the database. Data requires high integrity. Indexes and relationships need to be maintained accurately. Data requires strong consistency. Transactions operate in a way that ensures all data are 100% consistent for all users and processes. Size of individual data entries is intended to be small to medium-sized. Examples Line of business (human capital management, customer relationship management, enterprise resource planning) Inventory management Reporting database Accounting Asset management Fund management Order management Document databases Workload General purpose. Insert and update operations are common. Both the creation of new records and updates to existing data happen regularly. No object-relational impedance mismatch. Documents can better match the object structures used in application code. Optimistic concurrency is more commonly used. Data must be modified and processed by consuming application. Data requires index on multiple fields. Individual documents are retrieved and written as a single block. Data type Data can be managed in de-normalized way. Size of individual document data is relatively small. Each document type can use its own schema. Documents can include optional fields. Document data is semi-structured, meaning that data types of each field are not strictly defined. Data aggregation is supported. Examples Product catalog User accounts Bill of materials Personalization Content management Operations data Inventory management Transaction history data Materialized view of other NoSQL stores. Replaces file/BLOB indexing. Key/value stores Workload Data is identified and accessed using a single ID key, like a dictionary. Massively scalable. No joins, lock, or unions are required. No aggregation mechanisms are used. Secondary indexes are generally not used. Data type Data size tends to be large. Each key is associated with a single value, which is an unmanaged data BLOB. There is no schema enforcement. No relationships between entities. Examples Data caching Session management User preference and profile management Product recommendation and ad serving Dictionaries Graph databases Workload The relationships between data items are very complex, involving many hops between related data items. The relationship between data items are dynamic and change over time. Relationships between objects are first-class citizens, without requiring foreign-keys and joins to traverse. Data type Data is comprised of nodes and relationships. Nodes are similar to table rows or JSON documents. Relationships are just as important as nodes, and are exposed directly in the query language. Composite objects, such as a person with multiple phone numbers, tend to be broken into separate, smaller nodes, combined with traversable relationships Examples Organization charts Social graphs Fraud detection Analytics Recommendation engines Column-family databases Workload Most column-family databases perform write operations extremely quickly. Update and delete operations are rare. Designed to provide high throughput and low-latency access. Supports easy query access to a particular set of fields within a much larger record. Massively scalable. 
Data type Data is stored in tables consisting of a key column and one or more column families. Specific columns can vary by individual rows. Individual cells are accessed via get and put commands Multiple rows are returned using a scan command. Examples Recommendations Personalization Sensor data Telemetry Messaging Social media analytics Web analytics Activity monitoring Weather and other time-series data Search engine databases Workload Indexing data from multiple sources and services. Queries are ad-hoc and can be complex. Requires aggregation. Full text search is required. Ad hoc self-service query is required. Data analysis with index on all fields is required. Data type Semi-structured or unstructured Text Text with reference to structured data Examples Product catalogs Site search Logging Analytics Shopping sites Data warehouse Workload Data analytics Enterprise BI Data type Historical data from multiple sources. Usually denormalized in a "star" or "snowflake" schema, consisting of fact and dimension tables. Usually loaded with new data on a scheduled basis. Dimension tables often include multiple historic versions of an entity, referred to as a slowly changing dimension. Examples An enterprise data warehouse that provides data for analytical models, reports, and dashboards. Time series databases Workload An overwhelming proportion of operations (95-99%) are writes. Records are generally appended sequentially in time order. Updates are rare. Deletes occur in bulk, and are made to contiguous blocks or records. Read requests can be larger than available memory. It's common for multiple reads to occur simultaneously. Data is read sequentially in either ascending or descending time order. Data type A time stamp that is used as the primary key and sorting mechanism. Measurements from the entry or descriptions of what the entry represents. Tags that define additional information about the type, origin, and other information about the entry. Examples Monitoring and event telemetry. Sensor or other IoT data. Object storage Workload Identified by key. Objects may be publicly or privately accessible. Content is typically an asset such as a spreadsheet, image, or video file. Content must be durable (persistent), and external to any application tier or virtual machine. Data type Data size is large. Blob data. Value is opaque. Examples Images, videos, office documents, PDFs CSS, Scripts, CSV Static HTML, JSON Log and audit files Database backups Shared files Workload Migration from existing apps that interact with the file system. Requires SMB interface. Data type Files in a hierarchical set of folders. Accessible with standard I/O libraries. Examples Legacy files Shared content accessible among a number of VMs or app instances Ten design principles for Azure applications 3/13/2019 • 2 minutes to read • Edit Online Follow these design principles to make your application more scalable, resilient, and manageable. Design for self healing. In a distributed system, failures happen. Design your application to be self healing when failures occur. Make all things redundant. Build redundancy into your application, to avoid having single points of failure. Minimize coordination. Minimize coordination between application services to achieve scalability. Design to scale out. Design your application so that it can scale horizontally, adding or removing new instances as demand requires. Partition around limits. Use partitioning to work around database, network, and compute limits. Design for operations. 
Design your application so that the operations team has the tools they need. Use managed services. When possible, use platform as a service (PaaS) rather than infrastructure as a service (IaaS). Use the best data store for the job. Pick the storage technology that is the best fit for your data and how it will be used. Design for evolution. All successful applications change over time. An evolutionary design is key for continuous innovation. Build for the needs of business. Every design decision must be justified by a business requirement.

Design for self healing 3/13/2019 • 4 minutes to read • Edit Online Design your application to be self healing when failures occur. In a distributed system, failures happen. Hardware can fail. The network can have transient failures. Rarely, an entire service or region may experience a disruption, but even those must be planned for. Therefore, design an application to be self healing when failures occur. This requires a three-pronged approach: Detect failures. Respond to failures gracefully. Log and monitor failures, to give operational insight. How you respond to a particular type of failure may depend on your application's availability requirements. For example, if you require very high availability, you might automatically fail over to a secondary region during a regional outage. However, that will incur a higher cost than a single-region deployment. Also, don't just consider big events like regional outages, which are generally rare. You should focus as much, if not more, on handling local, short-lived failures, such as network connectivity failures or failed database connections. Recommendations Retry failed operations. Transient failures may occur due to momentary loss of network connectivity, a dropped database connection, or a timeout when a service is busy. Build retry logic into your application to handle transient failures. For many Azure services, the client SDK implements automatic retries. For more information, see Transient fault handling and the Retry pattern. Protect failing remote services (Circuit Breaker). It's good to retry after a transient failure, but if the failure persists, you can end up with too many callers hammering a failing service. This can lead to cascading failures, as requests back up. Use the Circuit Breaker pattern to fail fast (without making the remote call) when an operation is likely to fail. Isolate critical resources (Bulkhead). Failures in one subsystem can sometimes cascade. This can happen if a failure causes some resources, such as threads or sockets, not to get freed in a timely manner, leading to resource exhaustion. To avoid this, partition a system into isolated groups, so that a failure in one partition does not bring down the entire system. Perform load leveling. Applications may experience sudden spikes in traffic that can overwhelm services on the backend. To avoid this, use the Queue-Based Load Leveling pattern to queue work items to run asynchronously. The queue acts as a buffer that smooths out peaks in the load. Fail over. If an instance can't be reached, fail over to another instance. For things that are stateless, like a web server, put several instances behind a load balancer or traffic manager. For things that store state, like a database, use replicas and fail over. Depending on the data store and how it replicates, this may require the application to deal with eventual consistency. Compensate failed transactions. In general, avoid distributed transactions, as they require coordination across services and resources. Instead, compose an operation from smaller individual transactions. If the operation fails midway through, use Compensating Transactions to undo any step that already completed.
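The following is a minimal, self-contained Python sketch of that idea: each completed step registers an undo action, and a failure part-way through runs the compensations in reverse order. The booking steps are invented for illustration and are not a prescribed implementation of the pattern.

```python
def reserve_flight(order):
    print("flight reserved")
    return lambda: print("flight reservation cancelled")

def reserve_hotel(order):
    print("hotel reserved")
    return lambda: print("hotel reservation cancelled")

def charge_card(order):
    raise RuntimeError("payment service unavailable")

def book_trip(order):
    compensations = []
    try:
        for step in (reserve_flight, reserve_hotel, charge_card):
            undo = step(order)
            compensations.append(undo)
    except Exception:
        # The operation failed midway: undo the steps that already completed, newest first.
        for undo in reversed(compensations):
            undo()
        raise

try:
    book_trip({"id": 123})
except RuntimeError as err:
    print("trip not booked:", err)
```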
Checkpoint long-running transactions. Checkpoints can provide resiliency if a long-running operation fails. When the operation restarts (for example, it is picked up by another VM), it can be resumed from the last checkpoint. Degrade gracefully. Sometimes you can't work around a problem, but you can provide reduced functionality that is still useful. Consider an application that shows a catalog of books. If the application can't retrieve the thumbnail image for the cover, it might show a placeholder image. Entire subsystems might be noncritical for the application. For example, in an e-commerce site, showing product recommendations is probably less critical than processing orders. Throttle clients. Sometimes a small number of users create excessive load, which can reduce your application's availability for other users. In this situation, throttle the client for a certain period of time. See the Throttling pattern. Block bad actors. Just because you throttle a client, it doesn't mean the client was acting maliciously. It just means the client exceeded their service quota. But if a client consistently exceeds their quota or otherwise behaves badly, you might block them. Define an out-of-band process for users to request getting unblocked. Use leader election. When you need to coordinate a task, use Leader Election to select a coordinator. That way, the coordinator is not a single point of failure. If the coordinator fails, a new one is selected. Rather than implement a leader election algorithm from scratch, consider an off-the-shelf solution such as Zookeeper. Test with fault injection. All too often, the success path is well tested but not the failure path. A system could run in production for a long time before a failure path is exercised. Use fault injection to test the resiliency of the system to failures, either by triggering actual failures or by simulating them. Embrace chaos engineering. Chaos engineering extends the notion of fault injection, by randomly injecting failures or abnormal conditions into production instances. For a structured approach to making your applications self healing, see Design resilient applications for Azure.

Make all things redundant 3/13/2019 • 2 minutes to read • Edit Online Build redundancy into your application, to avoid having single points of failure. A resilient application routes around failure. Identify the critical paths in your application. Is there redundancy at each point in the path? If a subsystem fails, will the application fail over to something else? Recommendations Consider business requirements. The amount of redundancy built into a system can affect both cost and complexity. Your architecture should be informed by your business requirements, such as recovery time objective (RTO). For example, a multi-region deployment is more expensive than a single-region deployment, and is more complicated to manage. You will need operational procedures to handle failover and failback. The additional cost and complexity might be justified for some business scenarios and not others. Place VMs behind a load balancer. Don't use a single VM for mission-critical workloads. Instead, place multiple VMs behind a load balancer.
If any VM becomes unavailable, the load balancer distributes traffic to the remaining healthy VMs. To learn how to deploy this configuration, see Multiple VMs for scalability and availability. Replicate databases. Azure SQL Database and Cosmos DB automatically replicate the data within a region, and you can enable geo-replication across regions. If you are using an IaaS database solution, choose one that supports replication and failover, such as SQL Server Always On Availability Groups. Enable geo-replication. Geo-replication for Azure SQL Database and Cosmos DB creates secondary readable replicas of your data in one or more secondary regions. In the event of an outage, the database can fail over to the secondary region for writes. Partition for availability. Database partitioning is often used to improve scalability, but it can also improve availability. If one shard goes down, the other shards can still be reached. A failure in one shard will only disrupt a subset of the total transactions. Deploy to more than one region. For the highest availability, deploy the application to more than one region. That way, in the rare case when a problem affects an entire region, the application can fail over to another region. The following diagram shows a multi-region application that uses Azure Traffic Manager to handle failover. (Diagram: a multi-region deployment, with Azure Traffic Manager routing traffic to Region 1 and Region 2.) Synchronize front-end and back-end failover. Use Azure Traffic Manager to fail over the front end. If the front end becomes unreachable in one region, Traffic Manager will route new requests to the secondary region. Depending on your database solution, you may need to coordinate failing over the database. Use automatic failover but manual failback. Use Traffic Manager for automatic failover, but not for automatic failback. Automatic failback carries a risk that you might switch to the primary region before the region is completely healthy. Instead, verify that all application subsystems are healthy before manually failing back. Also, depending on the database, you might need to check data consistency before failing back. Include redundancy for Traffic Manager. Traffic Manager is a possible failure point. Review the Traffic Manager SLA, and determine whether using Traffic Manager alone meets your business requirements for high availability. If not, consider adding another traffic management solution as a failback. If the Azure Traffic Manager service fails, change your CNAME records in DNS to point to the other traffic management service.

Minimize coordination 3/13/2019 • 4 minutes to read • Edit Online Minimize coordination between application services to achieve scalability. Most cloud applications consist of multiple application services — web front ends, databases, business processes, reporting and analysis, and so on. To achieve scalability and reliability, each of those services should run on multiple instances. What happens when two instances try to perform concurrent operations that affect some shared state? In some cases, there must be coordination across nodes, for example to preserve ACID guarantees. In this diagram, Node2 is waiting for Node1 to release a database lock: (Diagram: Node 1 updates the Orders table while Node 2's update of OrderItems is blocked.) Coordination limits the benefits of horizontal scale and creates bottlenecks. In this example, as you scale out the application and add more instances, you'll see increased lock contention. In the worst case, the front-end instances will spend most of their time waiting on locks.
"Exactly once" semantics are another frequent source of coordination. For example, an order must be processed exactly once. Two workers are listening for new orders. Worker1 picks up an order for processing. The application must ensure that Worker2 doesn't duplicate the work, but also if Worker1 crashes, the order isn't dropped. Orders Create order order #123 Worker 1 ? Process orders Worker 2 You can use a pattern such as Scheduler Agent Supervisor to coordinate between the workers, but in this case a better approach might be to partition the work. Each worker is assigned a certain range of orders (say, by billing region). If a worker crashes, a new instance picks up where the previous instance left off, but multiple instances aren't contending. Recommendations Embrace eventual consistency. When data is distributed, it takes coordination to enforce strong consistency guarantees. For example, suppose an operation updates two databases. Instead of putting it into a single transaction scope, it's better if the system can accommodate eventual consistency, perhaps by using the Compensating Transaction pattern to logically roll back after a failure. Use domain events to synchronize state. A domain event is an event that records when something happens that has significance within the domain. Interested services can listen for the event, rather than using a global transaction to coordinate across multiple services. If this approach is used, the system must tolerate eventual consistency (see previous item). Consider patterns such as CQRS and event sourcing. These two patterns can help to reduce contention between read workloads and write workloads. The CQRS pattern separates read operations from write operations. In some implementations, the read data is physically separated from the write data. In the Event Sourcing pattern, state changes are recorded as a series of events to an append-only data store. Appending an event to the stream is an atomic operation, requiring minimal locking. These two patterns complement each other. If the write-only store in CQRS uses event sourcing, the read-only store can listen for the same events to create a readable snapshot of the current state, optimized for queries. Before adopting CQRS or event sourcing, however, be aware of the challenges of this approach. For more information, see CQRS architecture style. Partition data. Avoid putting all of your data into one data schema that is shared across many application services. A microservices architecture enforces this principle by making each service responsible for its own data store. Within a single database, partitioning the data into shards can improve concurrency, because a service writing to one shard does not affect a service writing to a different shard. Design idempotent operations. When possible, design operations to be idempotent. That way, they can be handled using at-least-once semantics. For example, you can put work items on a queue. If a worker crashes in the middle of an operation, another worker simply picks up the work item. Use asynchronous parallel processing. If an operation requires multiple steps that are performed asynchronously (such as remote service calls), you might be able to call them in parallel, and then aggregate the results. This approach assumes that each step does not depend on the results of the previous step. Use optimistic concurrency when possible. Pessimistic concurrency control uses database locks to prevent conflicts. 
This can cause poor performance and reduce availability. With optimistic concurrency control, each transaction modifies a copy or snapshot of the data. When the transaction is committed, the database engine validates the transaction and rejects any transactions that would affect database consistency. Azure SQL Database and SQL Server support optimistic concurrency through snapshot isolation. Some Azure storage services support optimistic concurrency through the use of Etags, including Azure Cosmos DB and Azure Storage. Consider MapReduce or other parallel, distributed algorithms. Depending on the data and type of work to be performed, you may be able to split the work into independent tasks that can be performed by multiple nodes working in parallel. See Big compute architecture style. Use leader election for coordination. In cases where you need to coordinate operations, make sure the coordinator does not become a single point of failure in the application. Using the Leader Election pattern, one instance is the leader at any time, and acts as the coordinator. If the leader fails, a new instance is elected to be the leader. Design to scale out 3/13/2019 • 2 minutes to read • Edit Online Design your application so that it can scale horizontally A primary advantage of the cloud is elastic scaling — the ability to use as much capacity as you need, scaling out as load increases, and scaling in when the extra capacity is not needed. Design your application so that it can scale horizontally, adding or removing new instances as demand requires. Recommendations Avoid instance stickiness. Stickiness, or session affinity, is when requests from the same client are always routed to the same server. Stickiness limits the application's ability to scale out. For example, traffic from a high-volume user will not be distributed across instances. Causes of stickiness include storing session state in memory, and using machine-specific keys for encryption. Make sure that any instance can handle any request. Identify bottlenecks. Scaling out isn't a magic fix for every performance issue. For example, if your backend database is the bottleneck, it won't help to add more web servers. Identify and resolve the bottlenecks in the system first, before throwing more instances at the problem. Stateful parts of the system are the most likely cause of bottlenecks. Decompose workloads by scalability requirements. Applications often consist of multiple workloads, with different requirements for scaling. For example, an application might have a public-facing site and a separate administration site. The public site may experience sudden surges in traffic, while the administration site has a smaller, more predictable load. Offload resource-intensive tasks. Tasks that require a lot of CPU or I/O resources should be moved to background jobs when possible, to minimize the load on the front end that is handling user requests. Use built-in autoscaling features. Many Azure compute services have built-in support for autoscaling. If the application has a predictable, regular workload, scale out on a schedule. For example, scale out during business hours. Otherwise, if the workload is not predictable, use performance metrics such as CPU or request queue length to trigger autoscaling. For autoscaling best practices, see Autoscaling. Consider aggressive autoscaling for critical workloads. For critical workloads, you want to keep ahead of demand. 
It's better to add new instances quickly under heavy load to handle the additional traffic, and then gradually scale back. Design for scale-in. Remember that with elastic scale, the application will have periods of scale-in, when instances get removed. The application must gracefully handle instances being removed. Here are some ways to handle scale-in: Listen for shutdown events (when available) and shut down cleanly. Clients/consumers of a service should support transient fault handling and retry. For long-running tasks, consider breaking up the work, using checkpoints or the Pipes and Filters pattern. Put work items on a queue so that another instance can pick up the work, if an instance is removed in the middle of processing. Partition around limits 3/13/2019 • 2 minutes to read • Edit Online Use partitioning to work around database, network, and compute limits. In the cloud, all services have limits in their ability to scale up. Azure service limits are documented in Azure subscription and service limits, quotas, and constraints. Limits include number of cores, database size, query throughput, and network throughput. If your system grows sufficiently large, you may hit one or more of these limits. Use partitioning to work around these limits. There are many ways to partition a system, such as: Partition a database to avoid limits on database size, data I/O, or number of concurrent sessions. Partition a queue or message bus to avoid limits on the number of requests or the number of concurrent connections. Partition an App Service web app to avoid limits on the number of instances per App Service plan. A database can be partitioned horizontally, vertically, or functionally. In horizontal partitioning, also called sharding, each partition holds data for a subset of the total data set. The partitions share the same data schema. For example, customers whose names start with A–M go into one partition, N–Z into another partition. In vertical partitioning, each partition holds a subset of the fields for the items in the data store. For example, put frequently accessed fields in one partition, and less frequently accessed fields in another. In functional partitioning, data is partitioned according to how it is used by each bounded context in the system. For example, store invoice data in one partition and product inventory data in another. The schemas are independent. For more detailed guidance, see Data partitioning. Recommendations Partition different parts of the application. Databases are one obvious candidate for partitioning, but also consider storage, cache, queues, and compute instances. Design the partition key to avoid hot spots. If you partition a database, but one shard still gets the majority of the requests, then you haven't solved your problem. Ideally, load gets distributed evenly across all the partitions. For example, hash by customer ID and not the first letter of the customer name, because some letters are more frequent. The same principle applies when partitioning a message queue. Pick a partition key that leads to an even distribution of messages across the set of queues. For more information, see Sharding. Partition around Azure subscription and service limits. Individual components and services have limits, but there are also limits for subscriptions and resource groups. For very large applications, you might need to partition around those limits. Partition at different levels. Consider a database server deployed on a VM. The VM has a VHD that is backed by Azure Storage.
The storage account belongs to an Azure subscription. Notice that each step in the hierarchy has limits. The database server may have a connection pool limit. VMs have CPU and network limits. Storage has IOPS limits. The subscription has limits on the number of VM cores. Generally, it's easier to partition lower in the hierarchy. Only large applications should need to partition at the subscription level. Design for operations 3/13/2019 • 2 minutes to read • Edit Online Design an application so that the operations team has the tools they need. The cloud has dramatically changed the role of the operations team. They are no longer responsible for managing the hardware and infrastructure that hosts the application. That said, operations is still a critical part of running a successful cloud application. Some of the important functions of the operations team include deployment, monitoring, escalation, incident response, and security auditing. Robust logging and tracing are particularly important in cloud applications. Involve the operations team in design and planning, to ensure the application gives them the data and insight they need to be successful. Recommendations Make all things observable. Once a solution is deployed and running, logs and traces are your primary insight into the system. Tracing records a path through the system, and is useful to pinpoint bottlenecks, performance issues, and failure points. Logging captures individual events such as application state changes, errors, and exceptions. Log in production, or else you lose insight at the very times when you need it the most. Instrument for monitoring. Monitoring gives insight into how well (or poorly) an application is performing, in terms of availability, performance, and system health. For example, monitoring tells you whether you are meeting your SLA. Monitoring happens during the normal operation of the system. It should be as close to real-time as possible, so that the operations staff can react to issues quickly. Ideally, monitoring can help avert problems before they lead to a critical failure. For more information, see Monitoring and diagnostics. Instrument for root cause analysis. Root cause analysis is the process of finding the underlying cause of failures. It occurs after a failure has already happened. Use distributed tracing. Use a distributed tracing system that is designed for concurrency, asynchrony, and cloud scale. Traces should include a correlation ID that flows across service boundaries. A single operation may involve calls to multiple application services. If an operation fails, the correlation ID helps to pinpoint the cause of the failure. Standardize logs and metrics. The operations team will need to aggregate logs from across the various services in your solution. If every service uses its own logging format, it becomes difficult or impossible to get useful information from them. Define a common schema that includes fields such as correlation ID, event name, IP address of the sender, and so forth. Individual services can derive custom schemas that inherit the base schema, and contain additional fields. Automate management tasks, including provisioning, deployment, and monitoring. Automating a task makes it repeatable and less prone to human errors. Treat configuration as code. Check configuration files into a version control system, so that you can track and version your changes, and roll back if needed.
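The following minimal Python sketch illustrates the standardized-logging and correlation-ID recommendations above. The field names (correlationId, event, service) and the order-service scenario are illustrative assumptions, not a prescribed schema; the point is that every service emits the same base fields and adds its own on top.

```python
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("order-service")

def make_event(correlation_id, event, service, **extra):
    # The shared base schema: correlation ID, event name, and service name.
    # Individual services extend it with their own fields via **extra.
    record = {"correlationId": correlation_id, "event": event, "service": service}
    record.update(extra)
    return json.dumps(record)

def handle_order(order_id, correlation_id=None):
    # Reuse the caller's correlation ID if one flowed in with the request;
    # otherwise start a new one so downstream calls can carry it forward.
    correlation_id = correlation_id or str(uuid.uuid4())
    log.info(make_event(correlation_id, "OrderReceived", "order-service", orderId=order_id))
    try:
        # ... call downstream services here, passing correlation_id along ...
        log.info(make_event(correlation_id, "OrderCompleted", "order-service", orderId=order_id))
    except Exception as exc:
        log.error(make_event(correlation_id, "OrderFailed", "order-service",
                             orderId=order_id, error=str(exc)))
        raise

handle_order("12345")
```

Because every service emits the same JSON shape, the operations team can aggregate the logs and join events from different services on the correlation ID when diagnosing a failed operation.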
Use managed services 3/13/2019 • 2 minutes to read • Edit Online When possible, use platform as a service (PaaS) rather than infrastructure as a service (IaaS). IaaS is like having a box of parts. You can build anything, but you have to assemble it yourself. Managed services are easier to configure and administer. You don't need to provision VMs, set up VNets, manage patches and updates, and all of the other overhead associated with running software on a VM. For example, suppose your application needs a message queue. You could set up your own messaging service on a VM, using something like RabbitMQ. But Azure Service Bus already provides reliable messaging as a service, and it's simpler to set up. Just create a Service Bus namespace (which can be done as part of a deployment script) and then call Service Bus using the client SDK. Of course, your application may have specific requirements that make an IaaS approach more suitable. However, even if your application is based on IaaS, look for places where it may be natural to incorporate managed services. These include cache, queues, and data storage.
INSTEAD OF RUNNING...    CONSIDER USING...
Active Directory         Azure Active Directory Domain Services
Elasticsearch            Azure Search
Hadoop                   HDInsight
IIS                      App Service
MongoDB                  Cosmos DB
Redis                    Azure Redis Cache
SQL Server               Azure SQL Database
Use the best data store for the job 3/13/2019 • 2 minutes to read • Edit Online Pick the storage technology that is the best fit for your data and how it will be used. Gone are the days when you would just stick all of your data into a big relational SQL database. Relational databases are very good at what they do — providing ACID guarantees for transactions over relational data. But they come with some costs: Queries may require expensive joins. Data must be normalized and conform to a predefined schema (schema on write). Lock contention may impact performance. In any large solution, it's likely that a single data store technology won't fill all your needs. Alternatives to relational databases include key/value stores, document databases, search engine databases, time series databases, column family databases, and graph databases. Each has pros and cons, and different types of data fit more naturally into one or another. For example, you might store a product catalog in a document database, such as Cosmos DB, which allows for a flexible schema. In that case, each product description is a self-contained document. For queries over the entire catalog, you might index the catalog and store the index in Azure Search. Product inventory might go into a SQL database, because that data requires ACID guarantees. Remember that data includes more than just the persisted application data. It also includes application logs, events, messages, and caches. Recommendations Don't use a relational database for everything. Consider other data stores when appropriate. See Choose the right data store. Embrace polyglot persistence. In any large solution, it's likely that a single data store technology won't fill all your needs. Consider the type of data. For example, put transactional data into SQL, put JSON documents into a document database, put telemetry data into a time series database, put application logs in Elasticsearch, and put blobs in Azure Blob Storage. Prefer availability over (strong) consistency. The CAP theorem implies that a distributed system must make trade-offs between availability and consistency.
(Network partitions, the other leg of the CAP theorem, can never be completely avoided.) Often, you can achieve higher availability by adopting an eventual consistency model. Consider the skill set of the development team. There are advantages to using polyglot persistence, but it's possible to go overboard. Adopting a new data storage technology requires a new set of skills. The development team must understand how to get the most out of the technology. They must understand appropriate usage patterns, how to optimize queries, tune for performance, and so on. Factor this in when considering storage technologies. Use compensating transactions. A side effect of polyglot persistence is that a single transaction might write data to multiple stores. If something fails, use compensating transactions to undo any steps that already completed. Look at bounded contexts. Bounded context is a term from domain-driven design. A bounded context is an explicit boundary around a domain model, and defines which parts of the domain the model applies to. Ideally, a bounded context maps to a subdomain of the business domain. The bounded contexts in your system are a natural place to consider polyglot persistence. For example, "products" may appear in both the Product Catalog subdomain and the Product Inventory subdomain, but it's very likely that these two subdomains have different requirements for storing, updating, and querying products. Design for evolution 3/13/2019 • 3 minutes to read • Edit Online An evolutionary design is key for continuous innovation. All successful applications change over time, whether to fix bugs, add new features, bring in new technologies, or make existing systems more scalable and resilient. If all the parts of an application are tightly coupled, it becomes very hard to introduce changes into the system. A change in one part of the application may break another part, or cause changes to ripple through the entire codebase. This problem is not limited to monolithic applications. An application can be decomposed into services, but still exhibit the sort of tight coupling that leaves the system rigid and brittle. But when services are designed to evolve, teams can innovate and continuously deliver new features. Microservices are becoming a popular way to achieve an evolutionary design, because they address many of the considerations listed here. Recommendations Enforce high cohesion and loose coupling. A service is cohesive if it provides functionality that logically belongs together. Services are loosely coupled if you can change one service without changing the other. High cohesion generally means that changes in one function will require changes in other related functions. If you find that updating a service requires coordinated updates to other services, it may be a sign that your services are not cohesive. One of the goals of domain-driven design (DDD) is to identify those boundaries. Encapsulate domain knowledge. When a client consumes a service, the responsibility for enforcing the business rules of the domain should not fall on the client. Instead, the service should encapsulate all of the domain knowledge that falls under its responsibility. Otherwise, every client has to enforce the business rules, and you end up with domain knowledge spread across different parts of the application. Use asynchronous messaging. Asynchronous messaging is a way to decouple the message producer from the consumer.
The producer does not depend on the consumer responding to the message or taking any particular action. With a pub/sub architecture, the producer may not even know who is consuming the message. New services can easily consume the messages without any modifications to the producer. Don't build domain knowledge into a gateway. Gateways can be useful in a microservices architecture, for things like request routing, protocol translation, load balancing, or authentication. However, the gateway should be restricted to this sort of infrastructure functionality. It should not implement any domain knowledge, to avoid becoming a heavy dependency. Expose open interfaces. Avoid creating custom translation layers that sit between services. Instead, a service should expose an API with a well-defined API contract. The API should be versioned, so that you can evolve the API while maintaining backward compatibility. That way, you can update a service without coordinating updates to all of the upstream services that depend on it. Public-facing services should expose a RESTful API over HTTP. Backend services might use an RPC-style messaging protocol for performance reasons. Design and test against service contracts. When services expose well-defined APIs, you can develop and test against those APIs. That way, you can develop and test an individual service without spinning up all of its dependent services. (Of course, you would still perform integration and load testing against the real services.) Abstract infrastructure away from domain logic. Don't let domain logic get mixed up with infrastructure-related functionality, such as messaging or persistence. Otherwise, changes in the domain logic will require updates to the infrastructure layers and vice versa. Offload cross-cutting concerns to a separate service. For example, if several services need to authenticate requests, you could move this functionality into its own service. Then you could evolve the authentication service — for example, by adding a new authentication flow — without touching any of the services that use it. Deploy services independently. When the DevOps team can deploy a single service independently of other services in the application, updates can happen more quickly and safely. Bug fixes and new features can be rolled out at a more regular cadence. Design both the application and the release process to support independent updates. Build for the needs of the business 3/13/2019 • 2 minutes to read • Edit Online Every design decision must be justified by a business requirement. This design principle may seem obvious, but it's crucial to keep in mind when designing a solution. Do you anticipate millions of users, or a few thousand? Is a one-hour application outage acceptable? Do you expect large bursts in traffic, or a very predictable workload? Ultimately, every design decision must be justified by a business requirement. Recommendations Define business objectives, including the recovery time objective (RTO), recovery point objective (RPO), and maximum tolerable outage (MTO). These numbers should inform decisions about the architecture. For example, to achieve a low RTO, you might implement automated failover to a secondary region. But if your solution can tolerate a higher RTO, that degree of redundancy might be unnecessary. Document service level agreements (SLA) and service level objectives (SLO), including availability and performance metrics. You might build a solution that delivers 99.95% availability. Is that enough?
The answer is a business decision. Model the application around the business domain. Start by analyzing the business requirements. Use these requirements to model the application. Consider using a domain-driven design (DDD ) approach to create domain models that reflect the business processes and use cases. Capture both functional and nonfunctional requirements. Functional requirements let you judge whether the application does the right thing. Nonfunctional requirements let you judge whether the application does those things well. In particular, make sure that you understand your requirements for scalability, availability, and latency. These requirements will influence design decisions and choice of technology. Decompose by workload. The term "workload" in this context means a discrete capability or computing task, which can be logically separated from other tasks. Different workloads may have different requirements for availability, scalability, data consistency, and disaster recovery. Plan for growth. A solution might meet your current needs, in terms of number of users, volume of transactions, data storage, and so forth. However, a robust application can handle growth without major architectural changes. See Design to scale out and Partition around limits. Also consider that your business model and business requirements will likely change over time. If an application's service model and data models are too rigid, it becomes hard to evolve the application for new use cases and scenarios. See Design for evolution. Manage costs. In a traditional on-premises application, you pay upfront for hardware (CAPEX). In a cloud application, you pay for the resources that you consume. Make sure that you understand the pricing model for the services that you consume. The total cost will include network bandwidth usage, storage, IP addresses, service consumption, and other factors. See Azure pricing for more information. Also consider your operations costs. In the cloud, you don't have to manage the hardware or other infrastructure, but you still need to manage your applications, including DevOps, incident response, disaster recovery, and so forth. Pillars of software quality 3/13/2019 • 10 minutes to read • Edit Online A successful cloud application will focus on these five pillars of software quality: Scalability, availability, resiliency, management, and security. PILLAR DESCRIPTION Scalability The ability of a system to handle increased load. Availability The proportion of time that a system is functional and working. Resiliency The ability of a system to recover from failures and continue to function. Management Operations processes that keep a system running in production. Security Protecting applications and data from threats. Scalability Scalability is the ability of a system to handle increased load. There are two main ways that an application can scale. Vertical scaling (scaling up) means increasing the capacity of a resource, for example by using a larger VM size. Horizontal scaling (scaling out) is adding new instances of a resource, such as VMs or database replicas. Horizontal scaling has significant advantages over vertical scaling: True cloud scale. Applications can be designed to run on hundreds or even thousands of nodes, reaching scales that are not possible on a single node. Horizontal scale is elastic. You can add more instances if load increases, or remove them during quieter periods. 
Scaling out can be triggered automatically, either on a schedule or in response to changes in load. Scaling out may be cheaper than scaling up. Running several small VMs can cost less than a single large VM. Horizontal scaling can also improve resiliency, by adding redundancy. If an instance goes down, the application keeps running. An advantage of vertical scaling is that you can do it without making any changes to the application. But at some point you'll hit a limit, where you can't scale up any more. At that point, any further scaling must be horizontal. Horizontal scale must be designed into the system. For example, you can scale out VMs by placing them behind a load balancer. But each VM in the pool must be able to handle any client request, so the application must be stateless or store state externally (say, in a distributed cache). Managed PaaS services often have horizontal scaling and auto-scaling built in. The ease of scaling these services is a major advantage of using PaaS services. Just adding more instances doesn't mean an application will scale, however. It might simply push the bottleneck somewhere else. For example, if you scale a web front-end to handle more client requests, that might trigger lock contention in the database. You would then need to consider additional measures, such as optimistic concurrency or data partitioning, to enable more throughput to the database. Always conduct performance and load testing to find these potential bottlenecks. The stateful parts of a system, such as databases, are the most common cause of bottlenecks, and require careful design to scale horizontally. Resolving one bottleneck may reveal other bottlenecks elsewhere. Use the Scalability checklist to review your design from a scalability standpoint. Scalability guidance Design patterns for scalability and performance Best practices: Autoscaling, Background jobs, Caching, CDN, Data partitioning Availability Availability is the proportion of time that the system is functional and working. It is usually measured as a percentage of uptime. Application errors, infrastructure problems, and system load can all reduce availability. A cloud application should have a service level objective (SLO) that clearly defines the expected availability, and how the availability is measured. When defining availability, look at the critical path. The web front-end might be able to service client requests, but if every transaction fails because it can't connect to the database, the application is not available to users. Availability is often described in terms of "9s" — for example, "four 9s" means 99.99% uptime. The following table shows the potential cumulative downtime at different availability levels.
% UPTIME   DOWNTIME PER WEEK   DOWNTIME PER MONTH   DOWNTIME PER YEAR
99%        1.68 hours          7.2 hours            3.65 days
99.9%      10 minutes          43.2 minutes         8.76 hours
99.95%     5 minutes           21.6 minutes         4.38 hours
99.99%     1 minute            4.32 minutes         52.56 minutes
99.999%    6 seconds           26 seconds           5.26 minutes
Notice that 99% uptime could translate to an almost 2-hour service outage per week. For many applications, especially consumer-facing applications, that is not an acceptable SLO. On the other hand, five 9s (99.999%) means no more than 5 minutes of downtime in a year. It's challenging enough just detecting an outage that quickly, let alone resolving the issue. To get very high availability (99.99% or higher), you can't rely on manual intervention to recover from failures.
The application must be self-diagnosing and self-healing, which is where resiliency becomes crucial. In Azure, the Service Level Agreement (SLA) describes Microsoft's commitments for uptime and connectivity. If the SLA for a particular service is 99.95%, it means you should expect the service to be available 99.95% of the time. Applications often depend on multiple services. In general, the probability of each service experiencing downtime is independent of the other. For example, suppose your application depends on two services, each with a 99.9% SLA. The composite SLA for both services is 99.9% × 99.9% ≈ 99.8%, or slightly less than each service by itself. Use the Availability checklist to review your design from an availability standpoint. Availability guidance Design patterns for availability Best practices: Autoscaling, Background jobs Resiliency Resiliency is the ability of the system to recover from failures and continue to function. The goal of resiliency is to return the application to a fully functioning state after a failure occurs. Resiliency is closely related to availability. In traditional application development, there has been a focus on reducing mean time between failures (MTBF). Effort was spent trying to prevent the system from failing. In cloud computing, a different mindset is required, due to several factors: Distributed systems are complex, and a failure at one point can potentially cascade throughout the system. Costs for cloud environments are kept low through the use of commodity hardware, so occasional hardware failures must be expected. Applications often depend on external services, which may become temporarily unavailable or throttle high-volume users. Today's users expect an application to be available 24/7 without ever going offline. All of these factors mean that cloud applications must be designed to expect occasional failures and recover from them. Azure has many resiliency features already built into the platform. For example: Azure Storage, SQL Database, and Cosmos DB all provide built-in data replication, both within a region and across regions. Azure Managed Disks are automatically placed in different storage scale units, to limit the effects of hardware failures. VMs in an availability set are spread across several fault domains. A fault domain is a group of VMs that share a common power source and network switch. Spreading VMs across fault domains limits the impact of physical hardware failures, network outages, or power interruptions. That said, you still need to build resiliency into your application. Resiliency strategies can be applied at all levels of the architecture. Some mitigations are more tactical in nature — for example, retrying a remote call after a transient network failure. Other mitigations are more strategic, such as failing over the entire application to a secondary region. Tactical mitigations can make a big difference. While it's rare for an entire region to experience a disruption, transient problems such as network congestion are more common — so target these first. Having the right monitoring and diagnostics is also important, both to detect failures when they happen, and to find the root causes. When designing an application to be resilient, you must understand your availability requirements. How much downtime is acceptable? This is partly a function of cost. How much will potential downtime cost your business? How much should you invest in making the application highly available?
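A back-of-the-envelope sketch of the arithmetic behind the availability table and the composite SLA example above can help answer those questions. This is plain probability, not an Azure SLA calculator; the 99.9% figures and the fallback scenario are illustrative assumptions.

```python
HOURS_PER_YEAR = 365 * 24

def composite_sla(*slas):
    """SLA of a chain of services that must all be available (probabilities multiply)."""
    result = 1.0
    for sla in slas:
        result *= sla
    return result

def downtime_hours_per_year(sla):
    """Yearly downtime budget implied by a given availability level."""
    return (1 - sla) * HOURS_PER_YEAR

# Two dependent services, each with a 99.9% SLA, as in the example above.
chain = composite_sla(0.999, 0.999)
print(f"composite SLA: {chain:.4%}")                         # roughly 99.80%
print(f"downtime budget: {downtime_hours_per_year(chain):.1f} hours/year")

# Adding an independent fallback path: the operation fails only when the
# primary AND the fallback are both down, so availability increases.
primary, fallback = 0.999, 0.999
with_fallback = 1 - (1 - primary) * (1 - fallback)
print(f"with an independent fallback: {with_fallback:.4%}")  # roughly 99.9999%
```

Running the numbers this way makes the business trade-off concrete: a 99.8% composite SLA allows around 17 hours of downtime per year, and you can weigh the cost of that downtime against the cost of the redundancy needed to reduce it.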
Use the Resiliency checklist to review your design from a resiliency standpoint. Resiliency guidance Designing resilient applications for Azure Design patterns for resiliency Best practices: Transient fault handling, Retry guidance for specific services Management and DevOps This pillar covers the operations processes that keep an application running in production. Deployments must be reliable and predictable. They should be automated to reduce the chance of human error. They should be a fast and routine process, so they don't slow down the release of new features or bug fixes. Equally important, you must be able to quickly roll back or roll forward if an update has problems. Monitoring and diagnostics are crucial. Cloud applications run in a remote datacenter where you do not have full control of the infrastructure or, in some cases, the operating system. In a large application, it's not practical to log into VMs to troubleshoot an issue or sift through log files. With PaaS services, there may not even be a dedicated VM to log into. Monitoring and diagnostics give insight into the system, so that you know when and where failures occur. All systems must be observable. Use a common and consistent logging schema that lets you correlate events across systems. The monitoring and diagnostics process has several distinct phases: Instrumentation. Generating the raw data, from application logs, web server logs, diagnostics built into the Azure platform, and other sources. Collection and storage. Consolidating the data into one place. Analysis and diagnosis. To troubleshoot issues and see the overall health. Visualization and alerts. Using telemetry data to spot trends or alert the operations team. Use the DevOps checklist to review your design from a management and DevOps standpoint. Management and DevOps guidance Design patterns for management and monitoring Best practices: Monitoring and diagnostics Security You must think about security throughout the entire lifecycle of an application, from design and implementation to deployment and operations. The Azure platform provides protections against a variety of threats, such as network intrusion and DDoS attacks. But you still need to build security into your application and into your DevOps processes. Here are some broad security areas to consider. Identity management Consider using Azure Active Directory (Azure AD ) to authenticate and authorize users. Azure AD is a fully managed identity and access management service. You can use it to create domains that exist purely on Azure, or integrate with your on-premises Active Directory identities. Azure AD also integrates with Office365, Dynamics CRM Online, and many third-party SaaS applications. For consumer-facing applications, Azure Active Directory B2C lets users authenticate with their existing social accounts (such as Facebook, Google, or LinkedIn), or create a new user account that is managed by Azure AD. If you want to integrate an on-premises Active Directory environment with an Azure network, several approaches are possible, depending on your requirements. For more information, see our Identity Management reference architectures. Protecting your infrastructure Control access to the Azure resources that you deploy. Every Azure subscription has a trust relationship with an Azure AD tenant. Use Role-Based Access Control (RBAC ) to grant users within your organization the correct permissions to Azure resources. Grant access by assigning RBAC role to users or groups at a certain scope. 
The scope can be a subscription, a resource group, or a single resource. Audit all changes to infrastructure. Application security In general, the security best practices for application development still apply in the cloud. These include things like using SSL everywhere, protecting against CSRF and XSS attacks, preventing SQL injection attacks, and so on. Cloud applications often use managed services that have access keys. Never check these into source control. Consider storing application secrets in Azure Key Vault. Data sovereignty and encryption Make sure that your data remains in the correct geopolitical zone when using Azure's highly available, geo-replicated storage. Azure's geo-replicated storage uses the concept of a paired region in the same geopolitical region. Use Key Vault to safeguard cryptographic keys and secrets. By using Key Vault, you can encrypt keys and secrets by using keys that are protected by hardware security modules (HSMs). Many Azure storage and DB services support data encryption at rest, including Azure Storage, Azure SQL Database, Azure SQL Data Warehouse, and Cosmos DB. Security resources Azure Security Center provides integrated security monitoring and policy management across your Azure subscriptions. Azure Security Documentation Microsoft Trust Center Cloud Design Patterns 3/13/2019 • 6 minutes to read • Edit Online These design patterns are useful for building reliable, scalable, secure applications in the cloud. Each pattern describes the problem that the pattern addresses, considerations for applying the pattern, and an example based on Microsoft Azure. Most of the patterns include code samples or snippets that show how to implement the pattern on Azure. However, most of the patterns are relevant to any distributed system, whether hosted on Azure or on other cloud platforms. Challenges in cloud development Availability Availability is the proportion of time that the system is functional and working, usually measured as a percentage of uptime. It can be affected by system errors, infrastructure problems, malicious attacks, and system load. Cloud applications typically provide users with a service level agreement (SLA), so applications must be designed to maximize availability. Data Management Data management is the key element of cloud applications, and influences most of the quality attributes. Data is typically hosted in different locations and across multiple servers for reasons such as performance, scalability or availability, and this can present a range of challenges. For example, data consistency must be maintained, and data will typically need to be synchronized across different locations. Design and Implementation Good design encompasses factors such as consistency and coherence in component design and deployment, maintainability to simplify administration and development, and reusability to allow components and subsystems to be used in other applications and in other scenarios. Decisions made during the design and implementation phase have a huge impact on the quality and the total cost of ownership of cloud-hosted applications and services. Messaging The distributed nature of cloud applications requires a messaging infrastructure that connects the components and services, ideally in a loosely coupled manner in order to maximize scalability.
Asynchronous messaging is widely used, and provides many benefits, but also brings challenges such as the ordering of messages, poison message management, idempotency, and more. Management and Monitoring Cloud applications run in a remote datacenter where you do not have full control of the infrastructure or, in some cases, the operating system. This can make management and monitoring more difficult than an on-premises deployment. Applications must expose runtime information that administrators and operators can use to manage and monitor the system, as well as supporting changing business requirements and customization without requiring the application to be stopped or redeployed. Performance and Scalability Performance is an indication of the responsiveness of a system to execute any action within a given time interval, while scalability is the ability of a system either to handle increases in load without impact on performance or for the available resources to be readily increased. Cloud applications typically encounter variable workloads and peaks in activity. Predicting these, especially in a multi-tenant scenario, is almost impossible. Instead, applications should be able to scale out within limits to meet peaks in demand, and scale in when demand decreases. Scalability concerns not just compute instances, but other elements such as data storage, messaging infrastructure, and more. Resiliency Resiliency is the ability of a system to gracefully handle and recover from failures. The nature of cloud hosting, where applications are often multi-tenant, use shared platform services, compete for resources and bandwidth, communicate over the Internet, and run on commodity hardware means there is an increased likelihood that both transient and more permanent faults will arise. Detecting failures, and recovering quickly and efficiently, is necessary to maintain resiliency. Security Security is the capability of a system to prevent malicious or accidental actions outside of the designed usage, and to prevent disclosure or loss of information. Cloud applications are exposed on the Internet outside trusted on-premises boundaries, are often open to the public, and may serve untrusted users. Applications must be designed and deployed in a way that protects them from malicious attacks, restricts access to only approved users, and protects sensitive data. Catalog of patterns PATTERN SUMMARY Ambassador Create helper services that send network requests on behalf of a consumer service or application. Anti-Corruption Layer Implement a façade or adapter layer between a modern application and a legacy system. Backends for Frontends Create separate backend services to be consumed by specific frontend applications or interfaces. Bulkhead Isolate elements of an application into pools so that if one fails, the others will continue to function. Cache-Aside Load data on demand into a cache from a data store. Circuit Breaker Handle faults that might take a variable amount of time to fix when connecting to a remote service or resource. Claim Check Split a large message into a claim check and a payload to avoid overwhelming a message bus. Compensating Transaction Undo the work performed by a series of steps, which together define an eventually consistent operation. Competing Consumers Enable multiple concurrent consumers to process messages received on the same messaging channel.
PATTERN SUMMARY Compute Resource Consolidation Consolidate multiple tasks or operations into a single computational unit CQRS Segregate operations that read data from operations that update data by using separate interfaces. Event Sourcing Use an append-only store to record the full series of events that describe actions taken on data in a domain. External Configuration Store Move configuration information out of the application deployment package to a centralized location. Federated Identity Delegate authentication to an external identity provider. Gatekeeper Protect applications and services by using a dedicated host instance that acts as a broker between clients and the application or service, validates and sanitizes requests, and passes requests and data between them. Gateway Aggregation Use a gateway to aggregate multiple individual requests into a single request. Gateway Offloading Offload shared or specialized service functionality to a gateway proxy. Gateway Routing Route requests to multiple services using a single endpoint. Health Endpoint Monitoring Implement functional checks in an application that external tools can access through exposed endpoints at regular intervals. Index Table Create indexes over the fields in data stores that are frequently referenced by queries. Leader Election Coordinate the actions performed by a collection of collaborating task instances in a distributed application by electing one instance as the leader that assumes responsibility for managing the other instances. Materialized View Generate prepopulated views over the data in one or more data stores when the data isn't ideally formatted for required query operations. Pipes and Filters Break down a task that performs complex processing into a series of separate elements that can be reused. Priority Queue Prioritize requests sent to services so that requests with a higher priority are received and processed more quickly than those with a lower priority. Publisher/Subscriber Enable an application to announce events to multiple interested consumers aynchronously, without coupling the senders to the receivers. PATTERN SUMMARY Queue-Based Load Leveling Use a queue that acts as a buffer between a task and a service that it invokes in order to smooth intermittent heavy loads. Retry Enable an application to handle anticipated, temporary failures when it tries to connect to a service or network resource by transparently retrying an operation that's previously failed. Scheduler Agent Supervisor Coordinate a set of actions across a distributed set of services and other remote resources. Sharding Divide a data store into a set of horizontal partitions or shards. Sidecar Deploy components of an application into a separate process or container to provide isolation and encapsulation. Static Content Hosting Deploy static content to a cloud-based storage service that can deliver them directly to the client. Strangler Incrementally migrate a legacy system by gradually replacing specific pieces of functionality with new applications and services. Throttling Control the consumption of resources used by an instance of an application, an individual tenant, or an entire service. Valet Key Use a token or key that provides clients with restricted direct access to a specific resource or service. Availability patterns 3/13/2019 • 2 minutes to read • Edit Online Availability defines the proportion of time that the system is functional and working. 
It will be affected by system errors, infrastructure problems, malicious attacks, and system load. It is usually measured as a percentage of uptime. Cloud applications typically provide users with a service level agreement (SLA), which means that applications must be designed and implemented in a way that maximizes availability. PATTERN SUMMARY Health Endpoint Monitoring Implement functional checks in an application that external tools can access through exposed endpoints at regular intervals. Queue-Based Load Leveling Use a queue that acts as a buffer between a task and a service that it invokes in order to smooth intermittent heavy loads. Throttling Control the consumption of resources used by an instance of an application, an individual tenant, or an entire service. Data Management patterns 3/13/2019 • 2 minutes to read • Edit Online Data management is the key element of cloud applications, and influences most of the quality attributes. Data is typically hosted in different locations and across multiple servers for reasons such as performance, scalability or availability, and this can present a range of challenges. For example, data consistency must be maintained, and data will typically need to be synchronized across different locations. PATTERN SUMMARY Cache-Aside Load data on demand into a cache from a data store CQRS Segregate operations that read data from operations that update data by using separate interfaces. Event Sourcing Use an append-only store to record the full series of events that describe actions taken on data in a domain. Index Table Create indexes over the fields in data stores that are frequently referenced by queries. Materialized View Generate prepopulated views over the data in one or more data stores when the data isn't ideally formatted for required query operations. Sharding Divide a data store into a set of horizontal partitions or shards. Static Content Hosting Deploy static content to a cloud-based storage service that can deliver them directly to the client. Valet Key Use a token or key that provides clients with restricted direct access to a specific resource or service. Design and Implementation patterns 3/13/2019 • 2 minutes to read • Edit Online Good design encompasses factors such as consistency and coherence in component design and deployment, maintainability to simplify administration and development, and reusability to allow components and subsystems to be used in other applications and in other scenarios. Decisions made during the design and implementation phase have a huge impact on the quality and the total cost of ownership of cloud hosted applications and services. PATTERN SUMMARY Ambassador Create helper services that send network requests on behalf of a consumer service or application. Anti-Corruption Layer Implement a façade or adapter layer between a modern application and a legacy system. Backends for Frontends Create separate backend services to be consumed by specific frontend applications or interfaces. CQRS Segregate operations that read data from operations that update data by using separate interfaces. Compute Resource Consolidation Consolidate multiple tasks or operations into a single computational unit External Configuration Store Move configuration information out of the application deployment package to a centralized location. Gateway Aggregation Use a gateway to aggregate multiple individual requests into a single request. Gateway Offloading Offload shared or specialized service functionality to a gateway proxy. 
Gateway Routing Route requests to multiple services using a single endpoint. Leader Election Coordinate the actions performed by a collection of collaborating task instances in a distributed application by electing one instance as the leader that assumes responsibility for managing the other instances. Pipes and Filters Break down a task that performs complex processing into a series of separate elements that can be reused. Sidecar Deploy components of an application into a separate process or container to provide isolation and encapsulation. Static Content Hosting Deploy static content to a cloud-based storage service that can deliver them directly to the client. PATTERN SUMMARY Strangler Incrementally migrate a legacy system by gradually replacing specific pieces of functionality with new applications and services. Messaging patterns 3/13/2019 • 2 minutes to read • Edit Online The distributed nature of cloud applications requires a messaging infrastructure that connects the components and services, ideally in a loosely coupled manner in order to maximize scalability. Asynchronous messaging is widely used, and provides many benefits, but also brings challenges such as the ordering of messages, poison message management, idempotency, and more. PATTERN SUMMARY Claim Check Split a large message into a claim check and a payload to avoid overwhelming a message bus. Competing Consumers Enable multiple concurrent consumers to process messages received on the same messaging channel. Pipes and Filters Break down a task that performs complex processing into a series of separate elements that can be reused. Priority Queue Prioritize requests sent to services so that requests with a higher priority are received and processed more quickly than those with a lower priority. Publisher-Subscriber Enable an application to announce events to multiple interested consumers asynchronously, without coupling the senders to the receivers. Queue-Based Load Leveling Use a queue that acts as a buffer between a task and a service that it invokes in order to smooth intermittent heavy loads. Scheduler Agent Supervisor Coordinate a set of actions across a distributed set of services and other remote resources. Management and Monitoring patterns 3/13/2019 • 2 minutes to read • Edit Online Cloud applications run in a remote datacenter where you do not have full control of the infrastructure or, in some cases, the operating system. This can make management and monitoring more difficult than an on-premises deployment. Applications must expose runtime information that administrators and operators can use to manage and monitor the system, as well as supporting changing business requirements and customization without requiring the application to be stopped or redeployed. PATTERN SUMMARY Ambassador Create helper services that send network requests on behalf of a consumer service or application. Anti-Corruption Layer Implement a façade or adapter layer between a modern application and a legacy system. External Configuration Store Move configuration information out of the application deployment package to a centralized location. Gateway Aggregation Use a gateway to aggregate multiple individual requests into a single request. Gateway Offloading Offload shared or specialized service functionality to a gateway proxy. Gateway Routing Route requests to multiple services using a single endpoint.
Health Endpoint Monitoring Implement functional checks in an application that external tools can access through exposed endpoints at regular intervals. Sidecar Deploy components of an application into a separate process or container to provide isolation and encapsulation. Strangler Incrementally migrate a legacy system by gradually replacing specific pieces of functionality with new applications and services. Performance and Scalability patterns 3/13/2019 • 2 minutes to read • Edit Online Performance is an indication of the responsiveness of a system to execute any action within a given time interval, while scalability is ability of a system either to handle increases in load without impact on performance or for the available resources to be readily increased. Cloud applications typically encounter variable workloads and peaks in activity. Predicting these, especially in a multi-tenant scenario, is almost impossible. Instead, applications should be able to scale out within limits to meet peaks in demand, and scale in when demand decreases. Scalability concerns not just compute instances, but other elements such as data storage, messaging infrastructure, and more. PATTERN SUMMARY Cache-Aside Load data on demand into a cache from a data store CQRS Segregate operations that read data from operations that update data by using separate interfaces. Event Sourcing Use an append-only store to record the full series of events that describe actions taken on data in a domain. Index Table Create indexes over the fields in data stores that are frequently referenced by queries. Materialized View Generate prepopulated views over the data in one or more data stores when the data isn't ideally formatted for required query operations. Priority Queue Prioritize requests sent to services so that requests with a higher priority are received and processed more quickly than those with a lower priority. Queue-Based Load Leveling Use a queue that acts as a buffer between a task and a service that it invokes in order to smooth intermittent heavy loads. Sharding Divide a data store into a set of horizontal partitions or shards. Static Content Hosting Deploy static content to a cloud-based storage service that can deliver them directly to the client. Throttling Control the consumption of resources used by an instance of an application, an individual tenant, or an entire service. Resiliency patterns 3/13/2019 • 2 minutes to read • Edit Online Resiliency is the ability of a system to gracefully handle and recover from failures. The nature of cloud hosting, where applications are often multi-tenant, use shared platform services, compete for resources and bandwidth, communicate over the Internet, and run on commodity hardware means there is an increased likelihood that both transient and more permanent faults will arise. Detecting failures, and recovering quickly and efficiently, is necessary to maintain resiliency. PATTERN SUMMARY Bulkhead Isolate elements of an application into pools so that if one fails, the others will continue to function. Circuit Breaker Handle faults that might take a variable amount of time to fix when connecting to a remote service or resource. Compensating Transaction Undo the work performed by a series of steps, which together define an eventually consistent operation. Health Endpoint Monitoring Implement functional checks in an application that external tools can access through exposed endpoints at regular intervals. 
Leader Election Coordinate the actions performed by a collection of collaborating task instances in a distributed application by electing one instance as the leader that assumes responsibility for managing the other instances. Queue-Based Load Leveling Use a queue that acts as a buffer between a task and a service that it invokes in order to smooth intermittent heavy loads. Retry Enable an application to handle anticipated, temporary failures when it tries to connect to a service or network resource by transparently retrying an operation that's previously failed. Scheduler Agent Supervisor Coordinate a set of actions across a distributed set of services and other remote resources. Security patterns 3/13/2019 • 2 minutes to read • Edit Online Security is the capability of a system to prevent malicious or accidental actions outside of the designed usage, and to prevent disclosure or loss of information. Cloud applications are exposed on the Internet outside trusted on-premises boundaries, are often open to the public, and may serve untrusted users. Applications must be designed and deployed in a way that protects them from malicious attacks, restricts access to only approved users, and protects sensitive data. PATTERN SUMMARY Federated Identity Delegate authentication to an external identity provider. Gatekeeper Protect applications and services by using a dedicated host instance that acts as a broker between clients and the application or service, validates and sanitizes requests, and passes requests and data between them. Valet Key Use a token or key that provides clients with restricted direct access to a specific resource or service. Ambassador pattern 3/13/2019 • 3 minutes to read • Edit Online Create helper services that send network requests on behalf of a consumer service or application. An ambassador service can be thought of as an out-of-process proxy that is co-located with the client. This pattern can be useful for offloading common client connectivity tasks such as monitoring, logging, routing, security (such as TLS), and resiliency patterns in a language-agnostic way. It is often used with legacy applications, or other applications that are difficult to modify, in order to extend their networking capabilities. It can also enable a specialized team to implement those features. Context and problem Resilient cloud-based applications require features such as circuit breaking, routing, metering and monitoring, and the ability to make network-related configuration updates. It may be difficult or impossible to update legacy applications or existing code libraries to add these features, because the code is no longer maintained or can't be easily modified by the development team. Network calls may also require substantial configuration for connection, authentication, and authorization. If these calls are used across multiple applications, built using multiple languages and frameworks, the calls must be configured for each of these instances. In addition, network and security functionality may need to be managed by a central team within your organization. With a large code base, it can be risky for that team to update application code they aren't familiar with. Solution Put client frameworks and libraries into an external process that acts as a proxy between your application and external services. Deploy the proxy on the same host environment as your application to allow control over routing, resiliency, security features, and to avoid any host-related access restrictions.
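The original article illustrates this solution with a diagram rather than code. As a rough sketch only, the following C# program shows the general shape of a hand-rolled ambassador: a small out-of-process HTTP proxy that the application calls on an assumed local port (localhost:9000) and that forwards GET requests to an assumed downstream address (https://remote-service.example.com), adding retry and logging on the application's behalf. The port, the address, and the retry policy are illustrative assumptions, not part of the pattern's definition.

// Minimal ambassador sketch (illustrative, not from the original article).
// Assumptions: the application sends its outbound calls to http://localhost:9000/,
// and https://remote-service.example.com is the real downstream service.
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class AmbassadorProxy
{
    private static readonly HttpClient client = new HttpClient();
    private const string RemoteServiceBaseUrl = "https://remote-service.example.com";

    static async Task Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:9000/");
        listener.Start();

        while (true)
        {
            var context = await listener.GetContextAsync();
            var targetUri = RemoteServiceBaseUrl + context.Request.RawUrl;

            // Retry and log on behalf of the consuming application (GET only, for brevity).
            for (int attempt = 1; attempt <= 3; attempt++)
            {
                try
                {
                    var response = await client.GetAsync(targetUri);
                    Console.WriteLine($"{DateTime.UtcNow:o} GET {targetUri} -> {(int)response.StatusCode} (attempt {attempt})");

                    context.Response.StatusCode = (int)response.StatusCode;
                    var body = await response.Content.ReadAsByteArrayAsync();
                    await context.Response.OutputStream.WriteAsync(body, 0, body.Length);
                    break;
                }
                catch (HttpRequestException) when (attempt < 3)
                {
                    await Task.Delay(TimeSpan.FromSeconds(attempt)); // simple backoff before retrying
                }
                catch (HttpRequestException)
                {
                    context.Response.StatusCode = 502; // all retries failed
                }
            }

            context.Response.Close();
        }
    }
}

In practice this role is more often filled by an off-the-shelf proxy deployed alongside the application (see the Sidecar pattern) than by custom code, but the sketch shows where the offloaded connectivity logic lives.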
You can also use the ambassador pattern to standardize and extend instrumentation. The proxy can monitor performance metrics such as latency or resource usage, and this monitoring happens in the same host environment as the application. Features that are offloaded to the ambassador can be managed independently of the application. You can update and modify the ambassador without disturbing the application's legacy functionality. It also allows for separate, specialized teams to implement and maintain security, networking, or authentication features that have been moved to the ambassador. Ambassador services can be deployed as a sidecar to accompany the lifecycle of a consuming application or service. Alternatively, if an ambassador is shared by multiple separate processes on a common host, it can be deployed as a daemon or Windows service. If the consuming service is containerized, the ambassador should be created as a separate container on the same host, with the appropriate links configured for communication. Issues and considerations The proxy adds some latency overhead. Consider whether a client library, invoked directly by the application, is a better approach. Consider the possible impact of including generalized features in the proxy. For example, the ambassador could handle retries, but that might not be safe unless all operations are idempotent. Consider a mechanism to allow the client to pass some context to the proxy, as well as back to the client. For example, include HTTP request headers to opt out of retry or specify the maximum number of times to retry. Consider how you will package and deploy the proxy. Consider whether to use a single shared instance for all clients or an instance for each client. When to use this pattern Use this pattern when you: Need to build a common set of client connectivity features for multiple languages or frameworks. Need to offload cross-cutting client connectivity concerns to infrastructure developers or other more specialized teams. Need to support cloud or cluster connectivity requirements in a legacy application or an application that is difficult to modify. This pattern may not be suitable: When network request latency is critical. A proxy will introduce some overhead, although minimal, and in some cases this may affect the application. When client connectivity features are consumed by a single language. In that case, a better option might be a client library that is distributed to the development teams as a package. When connectivity features cannot be generalized and require deeper integration with the client application. Example The following diagram shows an application making a request to a remote service via an ambassador proxy. The ambassador provides routing, circuit breaking, and logging. It calls the remote service and then returns the response to the client application: Related guidance Sidecar pattern Anti-Corruption Layer pattern 3/13/2019 • 2 minutes to read • Edit Online Implement a façade or adapter layer between different subsystems that don't share the same semantics. This layer translates requests that one subsystem makes to the other subsystem. Use this pattern to ensure that an application's design is not limited by dependencies on outside subsystems. This pattern was first described by Eric Evans in Domain-Driven Design. Context and problem Most applications rely on other systems for some data or functionality. 
For example, when a legacy application is migrated to a modern system, it may still need existing legacy resources. New features must be able to call the legacy system. This is especially true of gradual migrations, where different features of a larger application are moved to a modern system over time. Often these legacy systems suffer from quality issues such as convoluted data schemas or obsolete APIs. The features and technologies used in legacy systems can vary widely from more modern systems. To interoperate with the legacy system, the new application may need to support outdated infrastructure, protocols, data models, APIs, or other features that you wouldn't otherwise put into a modern application. Maintaining access between new and legacy systems can force the new system to adhere to at least some of the legacy system's APIs or other semantics. When these legacy features have quality issues, supporting them "corrupts" what might otherwise be a cleanly designed modern application. Similar issues can arise with any external system that your development team doesn't control, not just legacy systems. Solution Isolate the different subsystems by placing an anti-corruption layer between them. This layer translates communications between the two systems, allowing one system to remain unchanged while the other can avoid compromising its design and technological approach. The diagram above shows an application with two subsystems. Subsystem A calls to subsystem B through an anti-corruption layer. Communication between subsystem A and the anti-corruption layer always uses the data model and architecture of subsystem A. Calls from the anti-corruption layer to subsystem B conform to that subsystem's data model or methods. The anti-corruption layer contains all of the logic necessary to translate between the two systems. The layer can be implemented as a component within the application or as an independent service. Issues and considerations The anti-corruption layer may add latency to calls made between the two systems. The anti-corruption layer adds an additional service that must be managed and maintained. Consider how your anti-corruption layer will scale. Consider whether you need more than one anti-corruption layer. You may want to decompose functionality into multiple services using different technologies or languages, or there may be other reasons to partition the anti-corruption layer. Consider how the anti-corruption layer will be managed in relation with your other applications or services. How will it be integrated into your monitoring, release, and configuration processes? Make sure transaction and data consistency are maintained and can be monitored. Consider whether the anti-corruption layer needs to handle all communication between different subsystems, or just a subset of features. If the anti-corruption layer is part of an application migration strategy, consider whether it will be permanent, or will be retired after all legacy functionality has been migrated. When to use this pattern Use this pattern when: A migration is planned to happen over multiple stages, but integration between new and legacy systems needs to be maintained. Two or more subsystems have different semantics, but still need to communicate. This pattern may not be suitable if there are no significant semantic differences between new and legacy systems. 
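The original article describes the layer conceptually; the following C# sketch is an illustrative example only, with every type and member name (Customer, LegacyCustomerRecord, ILegacyCustomerGateway, CustomerAdapter) invented for the purpose. It shows the essential idea: one adapter is the only component that understands both the modern domain model and the legacy record format, so the legacy schema never leaks into the new application.

// Illustrative anti-corruption layer (hypothetical types, not from the original article).
using System.Threading.Tasks;

// The clean model used by the modern application (subsystem A).
public record Customer(string Id, string FullName, string Email);

// The shape exposed by the legacy system (subsystem B), e.g. a flat mainframe record.
public class LegacyCustomerRecord
{
    public string CUST_ID { get; set; }
    public string NAME_LINE_1 { get; set; }
    public string NAME_LINE_2 { get; set; }
    public string EMAIL_ADDR { get; set; }
}

// Whatever the legacy system actually offers (a SOAP client, a file export, a stored procedure, ...).
public interface ILegacyCustomerGateway
{
    Task<LegacyCustomerRecord> FetchAsync(string legacyKey);
}

// The anti-corruption layer: the only place that understands both representations.
public class CustomerAdapter
{
    private readonly ILegacyCustomerGateway legacyGateway;

    public CustomerAdapter(ILegacyCustomerGateway legacyGateway) => this.legacyGateway = legacyGateway;

    public async Task<Customer> GetCustomerAsync(string customerId)
    {
        // Translate the modern identifier into the zero-padded key the legacy system expects.
        var record = await legacyGateway.FetchAsync(customerId.PadLeft(6, '0'));

        // Map the legacy schema into the clean domain model; its quirks stay behind this boundary.
        return new Customer(
            customerId,
            $"{record.NAME_LINE_1} {record.NAME_LINE_2}".Trim(),
            record.EMAIL_ADDR);
    }
}

Retries, protocol conversion, and data cleanup for the legacy system would also live behind this boundary, keeping those concerns out of the new code.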
Related guidance Strangler pattern Backends for Frontends pattern 3/13/2019 • 3 minutes to read • Edit Online Create separate backend services to be consumed by specific frontend applications or interfaces. This pattern is useful when you want to avoid customizing a single backend for multiple interfaces. This pattern was first described by Sam Newman. Context and problem An application may initially be targeted at a desktop web UI. Typically, a backend service is developed in parallel that provides the features needed for that UI. As the application's user base grows, a mobile application is developed that must interact with the same backend. The backend service becomes a general-purpose backend, serving the requirements of both the desktop and mobile interfaces. But the capabilities of a mobile device differ significantly from a desktop browser, in terms of screen size, performance, and display limitations. As a result, the requirements for a mobile application backend differ from the desktop web UI. These differences result in competing requirements for the backend. The backend requires regular and significant changes to serve both the desktop web UI and the mobile application. Often, separate interface teams work on each frontend, causing the backend to become a bottleneck in the development process. Conflicting update requirements, and the need to keep the service working for both frontends, can result in spending a lot of effort on a single deployable resource. As the development activity focuses on the backend service, a separate team may be created to manage and maintain the backend. Ultimately, this results in a disconnect between the interface and backend development teams, placing a burden on the backend team to balance the competing requirements of the different UI teams. When one interface team requires changes to the backend, those changes must be validated with other interface teams before they can be integrated into the backend. Solution Create one backend per user interface. Fine tune the behavior and performance of each backend to best match the needs of the frontend environment, without worrying about affecting other frontend experiences. Because each backend is specific to one interface, it can be optimized for that interface. As a result, it will be smaller, less complex, and likely faster than a generic backend that tries to satisfy the requirements for all interfaces. Each interface team has autonomy to control their own backend and doesn't rely on a centralized backend development team. This gives the interface team flexibility in language selection, release cadence, prioritization of workload, and feature integration in their backend. For more information, see Pattern: Backends For Frontends. Issues and considerations Consider how many backends to deploy. If different interfaces (such as mobile clients) will make the same requests, consider whether it is necessary to implement a backend for each interface, or if a single backend will suffice. Code duplication across services is highly likely when implementing this pattern. Frontend-focused backend services should only contain client-specific logic and behavior. General business logic and other global features should be managed elsewhere in your application. Think about how this pattern might be reflected in the responsibilities of a development team. Consider how long it will take to implement this pattern. 
Will the effort of building the new backends incur technical debt, while you continue to support the existing generic backend? When to use this pattern Use this pattern when: A shared or general purpose backend service must be maintained with significant development overhead. You want to optimize the backend for the requirements of specific client interfaces. Customizations are made to a general-purpose backend to accommodate multiple interfaces. An alternative language is better suited for the backend of a different user interface. This pattern may not be suitable: When interfaces make the same or similar requests to the backend. When only one interface is used to interact with the backend. Related guidance Gateway Aggregation pattern Gateway Offloading pattern Gateway Routing pattern Bulkhead pattern 3/13/2019 • 4 minutes to read • Edit Online Isolate elements of an application into pools so that if one fails, the others will continue to function. This pattern is named Bulkhead because it resembles the sectioned partitions of a ship's hull. If the hull of a ship is compromised, only the damaged section fills with water, which prevents the ship from sinking. Context and problem A cloud-based application may include multiple services, with each service having one or more consumers. Excessive load or failure in a service will impact all consumers of the service. Moreover, a consumer may send requests to multiple services simultaneously, using resources for each request. When the consumer sends a request to a service that is misconfigured or not responding, the resources used by the client's request may not be freed in a timely manner. As requests to the service continue, those resources may be exhausted. For example, the client's connection pool may be exhausted. At that point, requests by the consumer to other services are affected. Eventually the consumer can no longer send requests to other services, not just the original unresponsive service. The same issue of resource exhaustion affects services with multiple consumers. A large number of requests originating from one client may exhaust available resources in the service. Other consumers are no longer able to consume the service, causing a cascading failure effect. Solution Partition service instances into different groups, based on consumer load and availability requirements. This design helps to isolate failures, and allows you to sustain service functionality for some consumers, even during a failure. A consumer can also partition resources, to ensure that resources used to call one service don't affect the resources used to call another service. For example, a consumer that calls multiple services may be assigned a connection pool for each service. If a service begins to fail, it only affects the connection pool assigned for that service, allowing the consumer to continue using the other services (a sketch of this approach follows the list of benefits below). The benefits of this pattern include: Isolates consumers and services from cascading failures. An issue affecting a consumer or service can be isolated within its own bulkhead, preventing the entire solution from failing. Allows you to preserve some functionality in the event of a service failure. Other services and features of the application will continue to work. Allows you to deploy services that offer a different quality of service for consuming applications. A high-priority consumer pool can be configured to use high-priority services.
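A consumer-side bulkhead doesn't require special infrastructure. The following C# sketch is illustrative only — the service names and limits are assumptions, and libraries such as Polly provide more complete implementations — but it shows the idea of giving each downstream service its own small pool of permits so that a slow or failing service can exhaust only its own pool.

// Illustrative consumer-side bulkhead (not from the original article): each downstream
// service gets its own pool of permits, so calls to a slow or failing service can
// exhaust only that pool. Service names and limits are assumptions.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class BulkheadHttpClient
{
    private static readonly HttpClient http = new HttpClient();

    // One isolated "pool" (here a semaphore) per downstream service.
    private static readonly Dictionary<string, SemaphoreSlim> pools = new()
    {
        ["serviceA"] = new SemaphoreSlim(10),  // at most 10 concurrent calls to service A
        ["serviceB"] = new SemaphoreSlim(10),
        ["serviceC"] = new SemaphoreSlim(5)
    };

    public static async Task<string> GetAsync(string serviceName, string url)
    {
        var pool = pools[serviceName];

        // If the pool for this service is exhausted, fail fast instead of
        // consuming the threads and connections needed by the other services.
        if (!await pool.WaitAsync(TimeSpan.FromMilliseconds(250)))
            throw new InvalidOperationException($"Bulkhead for {serviceName} is full.");

        try
        {
            return await http.GetStringAsync(url);
        }
        finally
        {
            pool.Release();
        }
    }
}

Failing fast when a pool is full is what keeps one unresponsive dependency from consuming the resources that calls to the other services need.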
The following diagram shows bulkheads structured around connection pools that call individual services. If Service A fails or causes some other issue, the connection pool is isolated, so only workloads using the thread pool assigned to Service A are affected. Workloads that use Service B and C are not affected and can continue working without interruption. The next diagram shows multiple clients calling a single service. Each client is assigned a separate service instance. Client 1 has made too many requests and overwhelmed its instance. Because each service instance is isolated from the others, the other clients can continue making calls. Issues and considerations Define partitions around the business and technical requirements of the application. When partitioning services or consumers into bulkheads, consider the level of isolation offered by the technology as well as the overhead in terms of cost, performance and manageability. Consider combining bulkheads with retry, circuit breaker, and throttling patterns to provide more sophisticated fault handling. When partitioning consumers into bulkheads, consider using processes, thread pools, and semaphores. Projects like Netflix Hystrix and Polly offer a framework for creating consumer bulkheads. When partitioning services into bulkheads, consider deploying them into separate virtual machines, containers, or processes. Containers offer a good balance of resource isolation with fairly low overhead. Services that communicate using asynchronous messages can be isolated through different sets of queues. Each queue can have a dedicated set of instances processing messages on the queue, or a single group of instances using an algorithm to dequeue and dispatch processing. Determine the level of granularity for the bulkheads. For example, if you want to distribute tenants across partitions, you could place each tenant into a separate partition, or put several tenants into one partition. Monitor each partition's performance and SLA. When to use this pattern Use this pattern to: Isolate resources used to consume a set of backend services, especially if the application can provide some level of functionality even when one of the services is not responding. Isolate critical consumers from standard consumers. Protect the application from cascading failures. This pattern may not be suitable when: Less efficient use of resources may not be acceptable in the project. The added complexity is not necessary. Example The following Kubernetes configuration file creates an isolated container to run a single service, with its own CPU and memory resources and limits.

apiVersion: v1
kind: Pod
metadata:
  name: drone-management
spec:
  containers:
  - name: drone-management-container
    image: drone-service
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "1"

Related guidance Designing resilient applications for Azure Circuit Breaker pattern Retry pattern Throttling pattern Cache-Aside pattern 3/13/2019 • 7 minutes to read • Edit Online Load data on demand into a cache from a data store. This can improve performance and also helps to maintain consistency between data held in the cache and data in the underlying data store. Context and problem Applications use a cache to improve repeated access to information held in a data store. However, it's impractical to expect that cached data will always be completely consistent with the data in the data store.
Applications should implement a strategy that helps to ensure that the data in the cache is as up-to-date as possible, but can also detect and handle situations that arise when the data in the cache has become stale. Solution Many commercial caching systems provide read-through and write-through/write-behind operations. In these systems, an application retrieves data by referencing the cache. If the data isn't in the cache, it's retrieved from the data store and added to the cache. Any modifications to data held in the cache are automatically written back to the data store as well. For caches that don't provide this functionality, it's the responsibility of the applications that use the cache to maintain the data. An application can emulate the functionality of read-through caching by implementing the cache-aside strategy. This strategy loads data into the cache on demand. The figure illustrates using the Cache-Aside pattern to store data in the cache. If an application updates information, it can follow the write-through strategy by making the modification to the data store, and by invalidating the corresponding item in the cache. When the item is next required, using the cache-aside strategy will cause the updated data to be retrieved from the data store and added back into the cache. Issues and considerations Consider the following points when deciding how to implement this pattern: Lifetime of cached data. Many caches implement an expiration policy that invalidates data and removes it from the cache if it's not accessed for a specified period. For cache-aside to be effective, ensure that the expiration policy matches the pattern of access for applications that use the data. Don't make the expiration period too short because this can cause applications to continually retrieve data from the data store and add it to the cache. Similarly, don't make the expiration period so long that the cached data is likely to become stale. Remember that caching is most effective for relatively static data, or data that is read frequently. Evicting data. Most caches have a limited size compared to the data store where the data originates, and they'll evict data if necessary. Most caches adopt a least-recently-used policy for selecting items to evict, but this might be customizable. Configure the global expiration property and other properties of the cache, and the expiration property of each cached item, to ensure that the cache is cost effective. It isn't always appropriate to apply a global eviction policy to every item in the cache. For example, if a cached item is very expensive to retrieve from the data store, it can be beneficial to keep this item in the cache at the expense of more frequently accessed but less costly items. Priming the cache. Many solutions prepopulate the cache with the data that an application is likely to need as part of the startup processing. The Cache-Aside pattern can still be useful if some of this data expires or is evicted. Consistency. Implementing the Cache-Aside pattern doesn't guarantee consistency between the data store and the cache. An item in the data store can be changed at any time by an external process, and this change might not be reflected in the cache until the next time the item is loaded. In a system that replicates data across data stores, this problem can become serious if synchronization occurs frequently. Local (in-memory) caching. A cache could be local to an application instance and stored in-memory. 
Cache-aside can be useful in this environment if an application repeatedly accesses the same data. However, a local cache is private and so different application instances could each have a copy of the same cached data. This data could quickly become inconsistent between caches, so it might be necessary to expire data held in a private cache and refresh it more frequently. In these scenarios, consider investigating the use of a shared or a distributed caching mechanism. When to use this pattern Use this pattern when: A cache doesn't provide native read-through and write-through operations. Resource demand is unpredictable. This pattern enables applications to load data on demand. It makes no assumptions about which data an application will require in advance. This pattern might not be suitable: When the cached data set is static. If the data will fit into the available cache space, prime the cache with the data on startup and apply a policy that prevents the data from expiring. For caching session state information in a web application hosted in a web farm. In this environment, you should avoid introducing dependencies based on client-server affinity. Example In Microsoft Azure you can use Azure Redis Cache to create a distributed cache that can be shared by multiple instances of an application. The following code examples use the StackExchange.Redis client, which is a Redis client library written for .NET. To connect to an Azure Redis Cache instance, call the static ConnectionMultiplexer.Connect method and pass in the connection string. The method returns a ConnectionMultiplexer that represents the connection. One approach to sharing a ConnectionMultiplexer instance in your application is to have a static property that returns a connected instance, similar to the following example. This approach provides a thread-safe way to initialize only a single connected instance.

// Redis Connection string info
private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
{
    string cacheConnection = ConfigurationManager.AppSettings["CacheConnection"].ToString();
    return ConnectionMultiplexer.Connect(cacheConnection);
});

public static ConnectionMultiplexer Connection => lazyConnection.Value;

The GetMyEntityAsync method in the following code example shows an implementation of the Cache-Aside pattern. This method retrieves an object from the cache using the read-through approach. An object is identified by using an integer ID as the key. The GetMyEntityAsync method tries to retrieve an item with this key from the cache. If a matching item is found, it's returned. If there's no match in the cache, the GetMyEntityAsync method retrieves the object from a data store, adds it to the cache, and then returns it. The code that actually reads the data from the data store is not shown here, because it depends on the data store. Note that the cached item is configured to expire to prevent it from becoming stale if it's updated elsewhere.

// Set five minute expiration as a default
private const double DefaultExpirationTimeInMinutes = 5.0;

public async Task<MyEntity> GetMyEntityAsync(int id)
{
    // Define a unique key for this method and its parameters.
    var key = $"MyEntity:{id}";
    var cache = Connection.GetDatabase();

    // Try to get the entity from the cache.
    var json = await cache.StringGetAsync(key).ConfigureAwait(false);
    var value = string.IsNullOrWhiteSpace(json) ?
        default(MyEntity) : JsonConvert.DeserializeObject<MyEntity>(json);

    if (value == null) // Cache miss
    {
        // If there's a cache miss, get the entity from the original store and cache it.
        // Code has been omitted because it is data store dependent.
        value = ...;

        // Avoid caching a null value.
        if (value != null)
        {
            // Put the item in the cache with a custom expiration time that
            // depends on how critical it is to have stale data.
            await cache.StringSetAsync(key, JsonConvert.SerializeObject(value)).ConfigureAwait(false);
            await cache.KeyExpireAsync(key, TimeSpan.FromMinutes(DefaultExpirationTimeInMinutes)).ConfigureAwait(false);
        }
    }

    return value;
}

The examples use Redis Cache to access the store and retrieve information from the cache. For more information, see Using Microsoft Azure Redis Cache and How to create a Web App with Redis Cache. The UpdateEntityAsync method shown below demonstrates how to invalidate an object in the cache when the value is changed by the application. The code updates the original data store and then removes the cached item from the cache.

public async Task UpdateEntityAsync(MyEntity entity)
{
    // Update the object in the original data store.
    await this.store.UpdateEntityAsync(entity).ConfigureAwait(false);

    // Invalidate the current cache object.
    var cache = Connection.GetDatabase();
    var id = entity.Id;
    var key = $"MyEntity:{id}"; // The key for the cached object.
    await cache.KeyDeleteAsync(key).ConfigureAwait(false); // Delete this key from the cache.
}

NOTE The order of the steps is important. Update the data store before removing the item from the cache. If you remove the cached item first, there is a small window of time when a client might fetch the item before the data store is updated. That will result in a cache miss (because the item was removed from the cache), causing the earlier version of the item to be fetched from the data store and added back into the cache. The result will be stale cache data. Related guidance The following information may be relevant when implementing this pattern: Caching Guidance. Provides additional information on how you can cache data in a cloud solution, and the issues that you should consider when you implement a cache. Data Consistency Primer. Cloud applications typically use data that's spread across data stores. Managing and maintaining data consistency in this environment is a critical aspect of the system, particularly the concurrency and availability issues that can arise. This primer describes issues about consistency across distributed data, and summarizes how an application can implement eventual consistency to maintain the availability of data. Circuit Breaker pattern 3/13/2019 • 17 minutes to read • Edit Online Handle faults that might take a variable amount of time to recover from, when connecting to a remote service or resource. This can improve the stability and resiliency of an application. Context and problem In a distributed environment, calls to remote resources and services can fail due to transient faults, such as slow network connections, timeouts, or the resources being overcommitted or temporarily unavailable. These faults typically correct themselves after a short period of time, and a robust cloud application should be prepared to handle them by using a strategy such as the Retry pattern. However, there can also be situations where faults are due to unanticipated events, and that might take much longer to fix.
These faults can range in severity from a partial loss of connectivity to the complete failure of a service. In these situations it might be pointless for an application to continually retry an operation that is unlikely to succeed, and instead the application should quickly accept that the operation has failed and handle this failure accordingly. Additionally, if a service is very busy, failure in one part of the system might lead to cascading failures. For example, an operation that invokes a service could be configured to implement a timeout, and reply with a failure message if the service fails to respond within this period. However, this strategy could cause many concurrent requests to the same operation to be blocked until the timeout period expires. These blocked requests might hold critical system resources such as memory, threads, database connections, and so on. Consequently, these resources could become exhausted, causing failure of other possibly unrelated parts of the system that need to use the same resources. In these situations, it would be preferable for the operation to fail immediately, and only attempt to invoke the service if it's likely to succeed. Note that setting a shorter timeout might help to resolve this problem, but the timeout shouldn't be so short that the operation fails most of the time, even if the request to the service would eventually succeed. Solution The Circuit Breaker pattern, popularized by Michael Nygard in his book, Release It!, can prevent an application from repeatedly trying to execute an operation that's likely to fail. Allowing it to continue without waiting for the fault to be fixed or wasting CPU cycles while it determines that the fault is long lasting. The Circuit Breaker pattern also enables an application to detect whether the fault has been resolved. If the problem appears to have been fixed, the application can try to invoke the operation. The purpose of the Circuit Breaker pattern is different than the Retry pattern. The Retry pattern enables an application to retry an operation in the expectation that it'll succeed. The Circuit Breaker pattern prevents an application from performing an operation that is likely to fail. An application can combine these two patterns by using the Retry pattern to invoke an operation through a circuit breaker. However, the retry logic should be sensitive to any exceptions returned by the circuit breaker and abandon retry attempts if the circuit breaker indicates that a fault is not transient. A circuit breaker acts as a proxy for operations that might fail. The proxy should monitor the number of recent failures that have occurred, and use this information to decide whether to allow the operation to proceed, or simply return an exception immediately. The proxy can be implemented as a state machine with the following states that mimic the functionality of an electrical circuit breaker: Closed: The request from the application is routed to the operation. The proxy maintains a count of the number of recent failures, and if the call to the operation is unsuccessful the proxy increments this count. If the number of recent failures exceeds a specified threshold within a given time period, the proxy is placed into the Open state. At this point the proxy starts a timeout timer, and when this timer expires the proxy is placed into the Half-Open state. 
The purpose of the timeout timer is to give the system time to fix the problem that caused the failure before allowing the application to try to perform the operation again. Open: The request from the application fails immediately and an exception is returned to the application. Half-Open: A limited number of requests from the application are allowed to pass through and invoke the operation. If these requests are successful, it's assumed that the fault that was previously causing the failure has been fixed and the circuit breaker switches to the Closed state (the failure counter is reset). If any request fails, the circuit breaker assumes that the fault is still present so it reverts back to the Open state and restarts the timeout timer to give the system a further period of time to recover from the failure. The Half-Open state is useful to prevent a recovering service from suddenly being flooded with requests. As a service recovers, it might be able to support a limited volume of requests until the recovery is complete, but while recovery is in progress a flood of work can cause the service to time out or fail again. In the figure, the failure counter used by the Closed state is time based. It's automatically reset at periodic intervals. This helps to prevent the circuit breaker from entering the Open state if it experiences occasional failures. The failure threshold that trips the circuit breaker into the Open state is only reached when a specified number of failures have occurred during a specified interval. The counter used by the Half-Open state records the number of successful attempts to invoke the operation. The circuit breaker reverts to the Closed state after a specified number of consecutive operation invocations have been successful. If any invocation fails, the circuit breaker enters the Open state immediately and the success counter will be reset the next time it enters the Half-Open state. How the system recovers is handled externally, possibly by restoring or restarting a failed component or repairing a network connection. The Circuit Breaker pattern provides stability while the system recovers from a failure and minimizes the impact on performance. It can help to maintain the response time of the system by quickly rejecting a request for an operation that's likely to fail, rather than waiting for the operation to time out, or never return. If the circuit breaker raises an event each time it changes state, this information can be used to monitor the health of the part of the system protected by the circuit breaker, or to alert an administrator when a circuit breaker trips to the Open state. The pattern is customizable and can be adapted according to the type of the possible failure. For example, you can apply an increasing timeout timer to a circuit breaker. You could place the circuit breaker in the Open state for a few seconds initially, and then if the failure hasn't been resolved increase the timeout to a few minutes, and so on. In some cases, rather than the Open state returning failure and raising an exception, it could be useful to return a default value that is meaningful to the application. Issues and considerations You should consider the following points when deciding how to implement this pattern: Exception Handling. An application invoking an operation through a circuit breaker must be prepared to handle the exceptions raised if the operation is unavailable. The way exceptions are handled will be application specific. 
For example, an application could temporarily degrade its functionality, invoke an alternative operation to try to perform the same task or obtain the same data, or report the exception to the user and ask them to try again later. Types of Exceptions. A request might fail for many reasons, some of which might indicate a more severe type of failure than others. For example, a request might fail because a remote service has crashed and will take several minutes to recover, or because of a timeout due to the service being temporarily overloaded. A circuit breaker might be able to examine the types of exceptions that occur and adjust its strategy depending on the nature of these exceptions. For example, it might require a larger number of timeout exceptions to trip the circuit breaker to the Open state compared to the number of failures due to the service being completely unavailable. Logging. A circuit breaker should log all failed requests (and possibly successful requests) to enable an administrator to monitor the health of the operation. Recoverability. You should configure the circuit breaker to match the likely recovery pattern of the operation it's protecting. For example, if the circuit breaker remains in the Open state for a long period, it could raise exceptions even if the reason for the failure has been resolved. Similarly, a circuit breaker could fluctuate and reduce the response times of applications if it switches from the Open state to the Half-Open state too quickly. Testing Failed Operations. In the Open state, rather than using a timer to determine when to switch to the Half-Open state, a circuit breaker can instead periodically ping the remote service or resource to determine whether it's become available again. This ping could take the form of an attempt to invoke an operation that had previously failed, or it could use a special operation provided by the remote service specifically for testing the health of the service, as described by the Health Endpoint Monitoring pattern. Manual Override. In a system where the recovery time for a failing operation is extremely variable, it's beneficial to provide a manual reset option that enables an administrator to close a circuit breaker (and reset the failure counter). Similarly, an administrator could force a circuit breaker into the Open state (and restart the timeout timer) if the operation protected by the circuit breaker is temporarily unavailable. Concurrency. The same circuit breaker could be accessed by a large number of concurrent instances of an application. The implementation shouldn't block concurrent requests or add excessive overhead to each call to an operation. Resource Differentiation. Be careful when using a single circuit breaker for one type of resource if there might be multiple underlying independent providers. For example, in a data store that contains multiple shards, one shard might be fully accessible while another is experiencing a temporary issue. If the error responses in these scenarios are merged, an application might try to access some shards even when failure is highly likely, while access to other shards might be blocked even though it's likely to succeed. Accelerated Circuit Breaking. Sometimes a failure response can contain enough information for the circuit breaker to trip immediately and stay tripped for a minimum amount of time. 
For example, the error response from a shared resource that's overloaded could indicate that an immediate retry isn't recommended and that the application should instead try again in a few minutes. NOTE A service can return HTTP 429 (Too Many Requests) if it is throttling the client, or HTTP 503 (Service Unavailable) if the service is not currently available. The response can include additional information, such as the anticipated duration of the delay. Replaying Failed Requests. In the Open state, rather than simply failing quickly, a circuit breaker could also record the details of each request to a journal and arrange for these requests to be replayed when the remote resource or service becomes available. Inappropriate Timeouts on External Services. A circuit breaker might not be able to fully protect applications from operations that fail in external services that are configured with a lengthy timeout period. If the timeout is too long, a thread running a circuit breaker might be blocked for an extended period before the circuit breaker indicates that the operation has failed. In this time, many other application instances might also try to invoke the service through the circuit breaker and tie up a significant number of threads before they all fail. When to use this pattern Use this pattern: To prevent an application from trying to invoke a remote service or access a shared resource if this operation is highly likely to fail. This pattern isn't recommended: For handling access to local private resources in an application, such as in-memory data structure. In this environment, using a circuit breaker would add overhead to your system. As a substitute for handling exceptions in the business logic of your applications. Example In a web application, several of the pages are populated with data retrieved from an external service. If the system implements minimal caching, most hits to these pages will cause a round trip to the service. Connections from the web application to the service could be configured with a timeout period (typically 60 seconds), and if the service doesn't respond in this time the logic in each web page will assume that the service is unavailable and throw an exception. However, if the service fails and the system is very busy, users could be forced to wait for up to 60 seconds before an exception occurs. Eventually resources such as memory, connections, and threads could be exhausted, preventing other users from connecting to the system, even if they aren't accessing pages that retrieve data from the service. Scaling the system by adding further web servers and implementing load balancing might delay when resources become exhausted, but it won't resolve the issue because user requests will still be unresponsive and all web servers could still eventually run out of resources. Wrapping the logic that connects to the service and retrieves the data in a circuit breaker could help to solve this problem and handle the service failure more elegantly. User requests will still fail, but they'll fail more quickly and the resources won't be blocked. The class maintains state information about a circuit breaker in an object that implements the ICircuitBreakerStateStore interface shown in the following code. 
interface ICircuitBreakerStateStore
{
    CircuitBreakerStateEnum State { get; }

    Exception LastException { get; }

    DateTime LastStateChangedDateUtc { get; }

    void Trip(Exception ex);

    void Reset();

    void HalfOpen();

    bool IsClosed { get; }
}

The State property indicates the current state of the circuit breaker, and will be either Open, HalfOpen, or Closed as defined by the CircuitBreakerStateEnum enumeration. The IsClosed property should be true if the circuit breaker is closed, but false if it's open or half open. The Trip method switches the state of the circuit breaker to the open state and records the exception that caused the change in state, together with the date and time that the exception occurred. The LastException and the LastStateChangedDateUtc properties return this information. The Reset method closes the circuit breaker, and the HalfOpen method sets the circuit breaker to half open. The InMemoryCircuitBreakerStateStore class in the example contains an implementation of the ICircuitBreakerStateStore interface. The CircuitBreaker class creates an instance of this class to hold the state of the circuit breaker. The ExecuteAction method in the CircuitBreaker class wraps an operation, specified as an Action delegate. If the circuit breaker is closed, ExecuteAction invokes the Action delegate. If the operation fails, an exception handler calls TrackException, which sets the circuit breaker state to open. The following code example highlights this flow.

public class CircuitBreaker
{
    private readonly ICircuitBreakerStateStore stateStore =
        CircuitBreakerStateStoreFactory.GetCircuitBreakerStateStore();

    private readonly object halfOpenSyncObject = new object();
    ...

    public bool IsClosed { get { return stateStore.IsClosed; } }

    public bool IsOpen { get { return !IsClosed; } }

    public void ExecuteAction(Action action)
    {
        ...
        if (IsOpen)
        {
            // The circuit breaker is Open.
            ... (see code sample below for details)
        }

        // The circuit breaker is Closed, execute the action.
        try
        {
            action();
        }
        catch (Exception ex)
        {
            // If an exception still occurs here, simply
            // retrip the breaker immediately.
            this.TrackException(ex);

            // Throw the exception so that the caller can tell
            // the type of exception that was thrown.
            throw;
        }
    }

    private void TrackException(Exception ex)
    {
        // For simplicity in this example, open the circuit breaker on the first exception.
        // In reality this would be more complex. A certain type of exception, such as one
        // that indicates a service is offline, might trip the circuit breaker immediately.
        // Alternatively it might count exceptions locally or across multiple instances and
        // use this value over time, or the exception/success ratio based on the exception
        // types, to open the circuit breaker.
        this.stateStore.Trip(ex);
    }
}

The following example shows the code (omitted from the previous example) that is executed if the circuit breaker isn't closed. It first checks if the circuit breaker has been open for a period longer than the time specified by the local OpenToHalfOpenWaitTime field in the CircuitBreaker class. If this is the case, the ExecuteAction method sets the circuit breaker to half open, then tries to perform the operation specified by the Action delegate. If the operation is successful, the circuit breaker is reset to the closed state. If the operation fails, it is tripped back to the open state and the time the exception occurred is updated so that the circuit breaker will wait for a further period before trying to perform the operation again.
If the circuit breaker has only been open for a short time, less than the OpenToHalfOpenWaitTime value, the ExecuteAction method simply throws a CircuitBreakerOpenException exception and returns the error that caused the circuit breaker to transition to the open state. Additionally, it uses a lock to prevent the circuit breaker from trying to perform concurrent calls to the operation while it's half open. A concurrent attempt to invoke the operation will be handled as if the circuit breaker was open, and it'll fail with an exception as described later.

...
if (IsOpen)
{
    // The circuit breaker is Open. Check if the Open timeout has expired.
    // If it has, set the state to HalfOpen. Another approach might be to
    // check for the HalfOpen state that had been set by some other operation.
    if (stateStore.LastStateChangedDateUtc + OpenToHalfOpenWaitTime < DateTime.UtcNow)
    {
        // The Open timeout has expired. Allow one operation to execute. Note that, in
        // this example, the circuit breaker is set to HalfOpen after being
        // in the Open state for some period of time. An alternative would be to set
        // this using some other approach such as a timer, test method, manually, and
        // so on, and check the state here to determine how to handle execution
        // of the action.
        // Limit the number of threads to be executed when the breaker is HalfOpen.
        // An alternative would be to use a more complex approach to determine which
        // threads or how many are allowed to execute, or to execute a simple test
        // method instead.
        bool lockTaken = false;
        try
        {
            Monitor.TryEnter(halfOpenSyncObject, ref lockTaken);
            if (lockTaken)
            {
                // Set the circuit breaker state to HalfOpen.
                stateStore.HalfOpen();

                // Attempt the operation.
                action();

                // If this action succeeds, reset the state and allow other operations.
                // In reality, instead of immediately returning to the Closed state, a counter
                // here would record the number of successful operations and return the
                // circuit breaker to the Closed state only after a specified number succeed.
                this.stateStore.Reset();
                return;
            }
        }
        catch (Exception ex)
        {
            // If there's still an exception, trip the breaker again immediately.
            this.stateStore.Trip(ex);

            // Throw the exception so that the caller knows which exception occurred.
            throw;
        }
        finally
        {
            if (lockTaken)
            {
                Monitor.Exit(halfOpenSyncObject);
            }
        }
    }

    // The Open timeout hasn't yet expired. Throw a CircuitBreakerOpen exception to
    // inform the caller that the call was not actually attempted,
    // and return the most recent exception received.
    throw new CircuitBreakerOpenException(stateStore.LastException);
}
...

To use a CircuitBreaker object to protect an operation, an application creates an instance of the CircuitBreaker class and invokes the ExecuteAction method, specifying the operation to be performed as the parameter. The application should be prepared to catch the CircuitBreakerOpenException exception if the operation fails because the circuit breaker is open. The following code shows an example:

var breaker = new CircuitBreaker();

try
{
    breaker.ExecuteAction(() =>
    {
        // Operation protected by the circuit breaker.
        ...
    });
}
catch (CircuitBreakerOpenException ex)
{
    // Perform some different action when the breaker is open.
    // Last exception details are in the inner exception.
    ...
}
catch (Exception ex)
{
    ...
}

Related patterns and guidance The following patterns might also be useful when implementing this pattern: Retry pattern.
Describes how an application can handle anticipated temporary failures when it tries to connect to a service or network resource by transparently retrying an operation that has previously failed. Health Endpoint Monitoring pattern. A circuit breaker might be able to test the health of a service by sending a request to an endpoint exposed by the service. The service should return information indicating its status. Claim-Check Pattern 3/13/2019 • 6 minutes to read • Edit Online Split a large message into a claim check and a payload. Send the claim check to the messaging platform and store the payload to an external service. This pattern allows large messages to be processed, while protecting the message bus and the client from being overwhelmed or slowed down. This pattern also helps to reduce costs, as storage is usually cheaper than resource units used by the messaging platform. This pattern is also known as Reference-Based Messaging, and was originally described in the book Enterprise Integration Patterns, by Gregor Hohpe and Bobby Woolf. Context and problem A messaging-based architecture at some point must be able to send, receive, and manipulate large messages. Such messages may contain anything, including images (for example, MRI scans), sound files (for example, callcenter calls), text documents, or any kind of binary data of arbitrary size. Sending such large messages to the message bus directly is not recommended, because they require more resources and bandwidth to be consumed. Large messages can also slow down the entire solution, because messaging platforms are usually fine-tuned to handle huge quantities of small messages. Also, most messaging platforms have limits on message size, so you may need to work around these limits for large messages. Solution Store the entire message payload into an external service, such as a database. Get the reference to the stored payload, and send just that reference to the message bus. The reference acts like a claim check used to retrieve a piece of luggage, hence the name of the pattern. Clients interested in processing that specific message can use the obtained reference to retrieve the payload, if needed. Issues and considerations Consider the following points when deciding how to implement this pattern: Consider deleting the message data after consuming it, if you don't need to archive the messages. Although blob storage is relatively cheap, it costs some money in the long run, especially if there is a lot of data. Deleting the message can be done synchronously by the application that receives and processes the message, or asynchronously by a separate dedicated process. The asynchronous approach removes old data with no impact on the throughput and message processing performance of the receiving application. Storing and retrieving the message causes some additional overhead and latency. You may want to implement logic in the sending application to use this pattern only when the message size exceeds the data limit of the message bus. The pattern would be skipped for smaller messages. This approach would result in a conditional claim-check pattern. When to use this pattern This pattern should be used whenever a message cannot fit the supported message limit of the chosen message bus technology. For example, Event Hubs currently has a limit of 256 KB (Basic Tier), while Event Grid supports only 64-KB messages. The pattern can also be used if the payload should be accessed only by services that are authorized to see it. 
By offloading the payload to an external resource, stricter authentication and authorization rules can be put in place, to ensure that security is enforced when sensitive data is stored in the payload. Examples On Azure, this pattern can be implemented in several ways and with different technologies, but there are two main categories. In both cases, the receiver has the responsibility to read the claim check and use it to retrieve the payload. Automatic claim-check generation. This approach uses Azure Event Grid to automatically generate the claim check and push it into the message bus. Manual claim-check generation. In this approach, the sender is responsible for managing the payload. The sender stores the payload using the appropriate service, gets or generates the claim check, and sends the claim check to the message bus. Event Grid is an event routing service and tries to deliver events within a configurable amount of time up to 24 hours. After that, events are either discarded or dead lettered. If you need to archive the event payloads or replay the event stream, you can add an Event Grid subscription to Event Hubs or Queue Storage, where messages can be retained for longer periods and archiving messages is supported. For information about fine tuning Event Grid message delivery and retry, and dead letter configuration, see Dead letter and retry policies. Automatic claim-check generation with Blob Storage and Event Grid In this approach, the sender drops the message payload into a designated Azure Blob Storage container. Event Grid automatically generates a tag/reference and sends it to a supported message bus, such as Azure Storage Queues. The receiver can poll the queue, get the message, and then use the stored reference data to download the payload directly from Blob Storage. The same Event Grid message can be directly consumed by Azure Functions, without needing to go through a message bus. This approach takes full advantage of the serverless nature of both Event Grid and Functions. You can find example code for this approach here. Event Grid with Event Hubs Similar to the previous example, Event Grid automatically generates a message when a payload is written to an Azure Blob container. But in this example, the message bus is implemented using Event Hubs. A client can register itself to receive the stream of messages as they are written to the event hub. The event hub can also be configured to archive messages, making them available as an Avro file that can be queried using tools like Apache Spark, Apache Drill, or any of the available Avro libraries. You can find example code for this approach here. Claim check generation with Service Bus This solution takes advantage of a specific Service Bus plugin, ServiceBus.AttachmentPlugin, which makes the claim-check workflow easy to implement. The plugin converts any message body into an attachment that gets stored in Azure Blob Storage when the message is sent.

using ServiceBus.AttachmentPlugin;
...
using ServiceBus.AttachmentPlugin;
...

// Getting connection information
var serviceBusConnectionString = Environment.GetEnvironmentVariable("SERVICE_BUS_CONNECTION_STRING");
var queueName = Environment.GetEnvironmentVariable("QUEUE_NAME");
var storageConnectionString = Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING");

// Creating config for sending message
var config = new AzureStorageAttachmentConfiguration(storageConnectionString);

// Creating and registering the sender using Service Bus Connection String and Queue Name
var sender = new MessageSender(serviceBusConnectionString, queueName);
sender.RegisterAzureStorageAttachmentPlugin(config);

// Create payload
var payload = new { data = "random data string for testing" };
var serialized = JsonConvert.SerializeObject(payload);
var payloadAsBytes = Encoding.UTF8.GetBytes(serialized);
var message = new Message(payloadAsBytes);

// Send the message
await sender.SendAsync(message);

The Service Bus message acts as a notification queue, which a client can subscribe to. When the consumer receives the message, the plugin makes it possible to directly read the message data from Blob Storage. You can then choose how to process the message further. An advantage of this approach is that it abstracts the claim-check workflow from the sender and receiver. You can find example code for this approach here.

Manual claim-check generation with Kafka In this example, a Kafka client writes the payload to Azure Blob Storage. Then it sends a notification message using Kafka-enabled Event Hubs. The consumer receives the message and can access the payload from Blob Storage. This example shows how a different messaging protocol can be used to implement the claim-check pattern in Azure. For example, you might need to support existing Kafka clients. You can find example code for this approach here.

Related patterns and guidance The examples described above are available on GitHub. The Enterprise Integration Patterns site has a description of this pattern. For another example, see Dealing with large Service Bus messages using claim check pattern (blog post). An alternative pattern for handling large messages is Split and Aggregate.

Command and Query Responsibility Segregation (CQRS) pattern 3/13/2019 • 11 minutes to read • Edit Online

Segregate operations that read data from operations that update data by using separate interfaces. This can maximize performance, scalability, and security. It also supports the evolution of the system over time through higher flexibility, and prevents update commands from causing merge conflicts at the domain level.

Context and problem In traditional data management systems, both commands (updates to the data) and queries (requests for data) are executed against the same set of entities in a single data repository. These entities can be a subset of the rows in one or more tables in a relational database such as SQL Server. Typically in these systems, all create, read, update, and delete (CRUD) operations are applied to the same representation of the entity. For example, a data transfer object (DTO) representing a customer is retrieved from the data store by the data access layer (DAL) and displayed on the screen. A user updates some fields of the DTO (perhaps through data binding) and the DTO is then saved back in the data store by the DAL. The same DTO is used for both the read and write operations. The figure illustrates a traditional CRUD architecture. Traditional CRUD designs work well when only limited business logic is applied to the data operations.
Scaffold mechanisms provided by development tools can create data access code very quickly, which can then be customized as required. However, the traditional CRUD approach has some disadvantages: It often means that there's a mismatch between the read and write representations of the data, such as additional columns or properties that must be updated correctly even though they aren't required as part of an operation. It risks data contention when records are locked in the data store in a collaborative domain, where multiple actors operate in parallel on the same set of data. Or update conflicts caused by concurrent updates when optimistic locking is used. These risks increase as the complexity and throughput of the system grows. In addition, the traditional approach can have a negative effect on performance due to load on the data store and data access layer, and the complexity of queries required to retrieve information. It can make managing security and permissions more complex because each entity is subject to both read and write operations, which might expose data in the wrong context. Solution Command and Query Responsibility Segregation (CQRS ) is a pattern that segregates the operations that read data (queries) from the operations that update data (commands) by using separate interfaces. This means that the data models used for querying and updates are different. The models can then be isolated, as shown in the following figure, although that's not an absolute requirement. Compared to the single data model used in CRUD -based systems, the use of separate query and update models for the data in CQRS -based systems simplifies design and implementation. However, one disadvantage is that unlike CRUD designs, CQRS code can't automatically be generated using scaffold mechanisms. The query model for reading data and the update model for writing data can access the same physical store, perhaps by using SQL views or by generating projections on the fly. However, it's common to separate the data into different physical stores to maximize performance, scalability, and security, as shown in the next figure. The read store can be a read-only replica of the write store, or the read and write stores can have a different structure altogether. Using multiple read-only replicas of the read store can greatly increase query performance and application UI responsiveness, especially in distributed scenarios where read-only replicas are located close to the application instances. Some database systems (SQL Server) provide additional features such as failover replicas to maximize availability. Separation of the read and write stores also allows each to be scaled appropriately to match the load. For example, read stores typically encounter a much higher load than write stores. When the query/read model contains denormalized data (see Materialized View pattern), performance is maximized when reading data for each of the views in an application or when querying the data in the system. Issues and considerations Consider the following points when deciding how to implement this pattern: Dividing the data store into separate physical stores for read and write operations can increase the performance and security of a system, but it can add complexity in terms of resiliency and eventual consistency. 
The read model store must be updated to reflect changes to the write model store, and it can be difficult to detect when a user has issued a request based on stale read data, which means that the operation can't be completed. For a description of eventual consistency see the Data Consistency Primer. Consider applying CQRS to limited sections of your system where it will be most valuable. A typical approach to deploying eventual consistency is to use event sourcing in conjunction with CQRS so that the write model is an append-only stream of events driven by execution of commands. These events are used to update materialized views that act as the read model. For more information see Event Sourcing and CQRS. When to use this pattern Use this pattern in the following situations: Collaborative domains where multiple operations are performed in parallel on the same data. CQRS allows you to define commands with enough granularity to minimize merge conflicts at the domain level (any conflicts that do arise can be merged by the command), even when updating what appears to be the same type of data. Task-based user interfaces where users are guided through a complex process as a series of steps or with complex domain models. Also, useful for teams already familiar with domain-driven design (DDD ) techniques. The write model has a full command-processing stack with business logic, input validation, and business validation to ensure that everything is always consistent for each of the aggregates (each cluster of associated objects treated as a unit for data changes) in the write model. The read model has no business logic or validation stack and just returns a DTO for use in a view model. The read model is eventually consistent with the write model. Scenarios where performance of data reads must be fine tuned separately from performance of data writes, especially when the read/write ratio is very high, and when horizontal scaling is required. For example, in many systems the number of read operations is many times greater than the number of write operations. To accommodate this, consider scaling out the read model, but running the write model on only one or a few instances. A small number of write model instances also helps to minimize the occurrence of merge conflicts. Scenarios where one team of developers can focus on the complex domain model that is part of the write model, and another team can focus on the read model and the user interfaces. Scenarios where the system is expected to evolve over time and might contain multiple versions of the model, or where business rules change regularly. Integration with other systems, especially in combination with event sourcing, where the temporal failure of one subsystem shouldn't affect the availability of the others. This pattern isn't recommended in the following situations: Where the domain or the business rules are simple. Where a simple CRUD -style user interface and the related data access operations are sufficient. For implementation across the whole system. There are specific components of an overall data management scenario where CQRS can be useful, but it can add considerable and unnecessary complexity when it isn't required. Event Sourcing and CQRS The CQRS pattern is often used along with the Event Sourcing pattern. CQRS -based systems use separate read and write data models, each tailored to relevant tasks and often located in physically separate stores. 
When used with the Event Sourcing pattern, the store of events is the write model, and is the official source of information. The read model of a CQRS-based system provides materialized views of the data, typically as highly denormalized views. These views are tailored to the interfaces and display requirements of the application, which helps to maximize both display and query performance. Using the stream of events as the write store, rather than the actual data at a point in time, avoids update conflicts on a single aggregate and maximizes performance and scalability. The events can be used to asynchronously generate materialized views of the data that are used to populate the read store. Because the event store is the official source of information, it is possible to delete the materialized views and replay all past events to create a new representation of the current state when the system evolves, or when the read model must change. The materialized views are in effect a durable read-only cache of the data.

When using CQRS combined with the Event Sourcing pattern, consider the following: As with any system where the write and read stores are separate, systems based on this pattern are only eventually consistent. There will be some delay between the event being generated and the data store being updated. The pattern adds complexity because code must be created to initiate and handle events, and assemble or update the appropriate views or objects required by queries or a read model. The complexity of the CQRS pattern when used with the Event Sourcing pattern can make a successful implementation more difficult, and requires a different approach to designing systems. However, event sourcing can make it easier to model the domain, and makes it easier to rebuild views or create new ones because the intent of the changes in the data is preserved. Generating materialized views for use in the read model or projections of the data by replaying and handling the events for specific entities or collections of entities can require significant processing time and resource usage. This is especially true if it requires summation or analysis of values over long periods, because all the associated events might need to be examined. Resolve this by implementing snapshots of the data at scheduled intervals, such as a total count of the number of times a specific action has occurred, or the current state of an entity.

Example The following code shows some extracts from an example of a CQRS implementation that uses different definitions for the read and the write models. The model interfaces don't dictate any features of the underlying data stores, and they can evolve and be fine-tuned independently because these interfaces are separated. The following code shows the read model definition.

// Query interface
namespace ReadModel
{
    public interface ProductsDao
    {
        ProductDisplay FindById(int productId);
        ICollection<ProductDisplay> FindByName(string name);
        ICollection<ProductInventory> FindOutOfStockProducts();
        ICollection<ProductDisplay> FindRelatedProducts(int productId);
    }

    public class ProductDisplay
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
        public decimal UnitPrice { get; set; }
        public bool IsOutOfStock { get; set; }
        public double UserRating { get; set; }
    }

    public class ProductInventory
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public int CurrentStock { get; set; }
    }
}
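As noted in the Event Sourcing and CQRS section above, events raised by the write model are often used to keep a read model like this one up to date. The following sketch shows one way a projection handler might apply such an event to the ProductDisplay class; the ProductRated event, the IReadModelStore abstraction, and the simplistic averaging logic are illustrative assumptions and are not part of the sample solution.

using System.Threading.Tasks;
using ReadModel; // the read model definition shown above

// Hypothetical event published by the write model when a product is rated.
public class ProductRated
{
    public int ProductId { get; set; }
    public int Rating { get; set; }
}

// Assumed abstraction over the read store (for example, a document database).
public interface IReadModelStore
{
    Task<ProductDisplay> FindProductDisplayAsync(int productId);
    Task SaveProductDisplayAsync(ProductDisplay display);
}

// Projects write-model events into the denormalized read model.
public class ProductDisplayProjection
{
    private readonly IReadModelStore store;

    public ProductDisplayProjection(IReadModelStore store)
    {
        this.store = store;
    }

    public async Task HandleAsync(ProductRated e)
    {
        var display = await this.store.FindProductDisplayAsync(e.ProductId);

        // Naive running average, purely for illustration.
        display.UserRating = (display.UserRating + e.Rating) / 2.0;

        await this.store.SaveProductDisplayAsync(display);
    }
}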
The system allows users to rate products. The application code does this using the RateProduct command shown in the following code.

public interface ICommand
{
    Guid Id { get; }
}

public class RateProduct : ICommand
{
    public RateProduct()
    {
        this.Id = Guid.NewGuid();
    }

    public Guid Id { get; set; }
    public int ProductId { get; set; }
    public int Rating { get; set; }
    public int UserId { get; set; }
}

The system uses the ProductsCommandHandler class to handle commands sent by the application. Clients typically send commands to the domain through a messaging system such as a queue. The command handler accepts these commands and invokes methods of the domain interface. The granularity of each command is designed to reduce the chance of conflicting requests. The following code shows an outline of the ProductsCommandHandler class.

public class ProductsCommandHandler :
    ICommandHandler<AddNewProduct>,
    ICommandHandler<RateProduct>,
    ICommandHandler<AddToInventory>,
    ICommandHandler<ConfirmItemsShipped>,
    ICommandHandler<UpdateStockFromInventoryRecount>
{
    private readonly IRepository<Product> repository;

    public ProductsCommandHandler(IRepository<Product> repository)
    {
        this.repository = repository;
    }

    void Handle(AddNewProduct command)
    {
        ...
    }

    void Handle(RateProduct command)
    {
        var product = repository.Find(command.ProductId);
        if (product != null)
        {
            product.RateProduct(command.UserId, command.Rating);
            repository.Save(product);
        }
    }

    void Handle(AddToInventory command)
    {
        ...
    }

    void Handle(ConfirmItemsShipped command)
    {
        ...
    }

    void Handle(UpdateStockFromInventoryRecount command)
    {
        ...
    }
}

The following code shows the IProductsDomain interface from the write model.

public interface IProductsDomain
{
    void AddNewProduct(int id, string name, string description, decimal price);
    void RateProduct(int userId, int rating);
    void AddToInventory(int productId, int quantity);
    void ConfirmItemsShipped(int productId, int quantity);
    void UpdateStockFromInventoryRecount(int productId, int updatedQuantity);
}

Also notice how the IProductsDomain interface contains methods that have a meaning in the domain. Typically, in a CRUD environment these methods would have generic names such as Save or Update, and have a DTO as the only argument. The CQRS approach can be designed to meet the needs of this organization's business and inventory management systems.

Related patterns and guidance The following patterns and guidance are useful when implementing this pattern: For a comparison of CQRS with other architectural styles, see Architecture styles and CQRS architecture style. Data Consistency Primer. Explains the issues that are typically encountered due to eventual consistency between the read and write data stores when using the CQRS pattern, and how these issues can be resolved. Data Partitioning Guidance. Describes how the read and write data stores used in the CQRS pattern can be divided into partitions that can be managed and accessed separately to improve scalability, reduce contention, and optimize performance. Event Sourcing pattern. Describes in more detail how Event Sourcing can be used with the CQRS pattern to simplify tasks in complex domains while improving performance, scalability, and responsiveness. It also describes how to provide consistency for transactional data while maintaining full audit trails and history that can enable compensating actions. Materialized View pattern. The read model of a CQRS implementation can contain materialized views of the write model data, or the read model can be used to generate materialized views.
The patterns & practices guide CQRS Journey. In particular, Introducing the Command Query Responsibility Segregation pattern explores the pattern and when it's useful, and Epilogue: Lessons Learned helps you understand some of the issues that come up when using this pattern. The post CQRS by Martin Fowler, which explains the basics of the pattern and links to other useful resources. Compensating Transaction pattern 3/13/2019 • 7 minutes to read • Edit Online Undo the work performed by a series of steps, which together define an eventually consistent operation, if one or more of the steps fail. Operations that follow the eventual consistency model are commonly found in cloudhosted applications that implement complex business processes and workflows. Context and problem Applications running in the cloud frequently modify data. This data might be spread across various data sources held in different geographic locations. To avoid contention and improve performance in a distributed environment, an application shouldn't try to provide strong transactional consistency. Rather, the application should implement eventual consistency. In this model, a typical business operation consists of a series of separate steps. While these steps are being performed, the overall view of the system state might be inconsistent, but when the operation has completed and all of the steps have been executed the system should become consistent again. The Data Consistency Primer provides information about why distributed transactions don't scale well, and the principles of the eventual consistency model. A challenge in the eventual consistency model is how to handle a step that has failed. In this case it might be necessary to undo all of the work completed by the previous steps in the operation. However, the data can't simply be rolled back because other concurrent instances of the application might have changed it. Even in cases where the data hasn't been changed by a concurrent instance, undoing a step might not simply be a matter of restoring the original state. It might be necessary to apply various business-specific rules (see the travel website described in the Example section). If an operation that implements eventual consistency spans several heterogeneous data stores, undoing the steps in the operation will require visiting each data store in turn. The work performed in every data store must be undone reliably to prevent the system from remaining inconsistent. Not all data affected by an operation that implements eventual consistency might be held in a database. In a service oriented architecture (SOA) environment an operation could invoke an action in a service, and cause a change in the state held by that service. To undo the operation, this state change must also be undone. This can involve invoking the service again and performing another action that reverses the effects of the first. Solution The solution is to implement a compensating transaction. The steps in a compensating transaction must undo the effects of the steps in the original operation. A compensating transaction might not be able to simply replace the current state with the state the system was in at the start of the operation because this approach could overwrite changes made by other concurrent instances of an application. Instead, it must be an intelligent process that takes into account any work done by concurrent instances. 
This process will usually be application specific, driven by the nature of the work performed by the original operation. A common approach is to use a workflow to implement an eventually consistent operation that requires compensation. As the original operation proceeds, the system records information about each step and how the work performed by that step can be undone. If the operation fails at any point, the workflow rewinds back through the steps it's completed and performs the work that reverses each step. Note that a compensating transaction might not have to undo the work in the exact reverse order of the original operation, and it might be possible to perform some of the undo steps in parallel. This approach is similar to the Sagas strategy discussed in Clemens Vasters’ blog. A compensating transaction is also an eventually consistent operation and it could also fail. The system should be able to resume the compensating transaction at the point of failure and continue. It might be necessary to repeat a step that's failed, so the steps in a compensating transaction should be defined as idempotent commands. For more information, see Idempotency Patterns on Jonathan Oliver’s blog. In some cases it might not be possible to recover from a step that has failed except through manual intervention. In these situations the system should raise an alert and provide as much information as possible about the reason for the failure.

Issues and considerations Consider the following points when deciding how to implement this pattern: It might not be easy to determine when a step in an operation that implements eventual consistency has failed. A step might not fail immediately, but instead could block. It might be necessary to implement some form of timeout mechanism. Compensation logic isn't easily generalized. A compensating transaction is application specific. It relies on the application having sufficient information to be able to undo the effects of each step in a failed operation. You should define the steps in a compensating transaction as idempotent commands. This enables the steps to be repeated if the compensating transaction itself fails. The infrastructure that handles the steps in the original operation, and the compensating transaction, must be resilient. It must not lose the information required to compensate for a failing step, and it must be able to reliably monitor the progress of the compensation logic. A compensating transaction doesn't necessarily return the data in the system to the state it was in at the start of the original operation. Instead, it compensates for the work performed by the steps that completed successfully before the operation failed. The order of the steps in the compensating transaction doesn't necessarily have to be the exact opposite of the steps in the original operation. For example, one data store might be more sensitive to inconsistencies than another, and so the steps in the compensating transaction that undo the changes to this store should occur first. Placing a short-term timeout-based lock on each resource that's required to complete an operation, and obtaining these resources in advance, can help increase the likelihood that the overall activity will succeed. The work should be performed only after all the resources have been acquired. All actions must be finalized before the locks expire. Consider using retry logic that is more forgiving than usual to minimize failures that trigger a compensating transaction. If a step in an operation that implements eventual consistency fails, try handling the failure as a transient exception and repeat the step. Only stop the operation and initiate a compensating transaction if a step fails repeatedly or irrecoverably. Many of the challenges of implementing a compensating transaction are the same as those with implementing eventual consistency. See the section Considerations for Implementing Eventual Consistency in the Data Consistency Primer for more information.
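As a minimal sketch of the workflow-based approach described in the Solution above, the following class records, for each completed step, how to undo it, and rewinds through those records when a later step fails. The CompensatingWorkflow class and its delegates are illustrative assumptions, not an Azure API; a production workflow would also persist this state durably so the compensation can be resumed after a failure.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Minimal sketch of an eventually consistent operation with compensation.
// Each step supplies the action to perform and an idempotent undo action.
public class CompensatingWorkflow
{
    private readonly List<(string Name, Func<Task> Action, Func<Task> Compensation)> steps =
        new List<(string, Func<Task>, Func<Task>)>();

    public void AddStep(string name, Func<Task> action, Func<Task> compensation) =>
        this.steps.Add((name, action, compensation));

    public async Task RunAsync()
    {
        var completed = new Stack<(string Name, Func<Task> Compensation)>();

        try
        {
            foreach (var step in this.steps)
            {
                await step.Action();

                // Record how to undo this step only after it has succeeded.
                completed.Push((step.Name, step.Compensation));
            }
        }
        catch (Exception)
        {
            // Rewind: undo the completed steps, most recent first. Each compensation
            // should be idempotent so that it can be retried safely if it also fails.
            while (completed.Count > 0)
            {
                var (name, compensate) = completed.Pop();
                await compensate();
            }

            throw;
        }
    }
}

In the travel example below, each booking step would register a matching cancellation, including any business rules such as cancellation fees, as its compensation.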
When to use this pattern Use this pattern only for operations that must be undone if they fail. If possible, design solutions to avoid the complexity of requiring compensating transactions.

Example A travel website lets customers book itineraries. A single itinerary might comprise a series of flights and hotels. A customer traveling from Seattle to London and then on to Paris could perform the following steps when creating an itinerary:
1. Book a seat on flight F1 from Seattle to London.
2. Book a seat on flight F2 from London to Paris.
3. Book a seat on flight F3 from Paris to Seattle.
4. Reserve a room at hotel H1 in London.
5. Reserve a room at hotel H2 in Paris.

These steps constitute an eventually consistent operation, although each step is a separate action. Therefore, as well as performing these steps, the system must also record the counter operations necessary to undo each step in case the customer decides to cancel the itinerary. The steps necessary to perform the counter operations can then run as a compensating transaction. Notice that the steps in the compensating transaction might not be the exact opposite of the original steps, and the logic in each step in the compensating transaction must take into account any business-specific rules. For example, unbooking a seat on a flight might not entitle the customer to a complete refund of any money paid. The figure illustrates generating a compensating transaction to undo a long-running transaction to book a travel itinerary.

NOTE It might be possible for the steps in the compensating transaction to be performed in parallel, depending on how you've designed the compensating logic for each step.

In many business solutions, failure of a single step doesn't always necessitate rolling the system back by using a compensating transaction. For example, if, after having booked flights F1, F2, and F3 in the travel website scenario, the customer is unable to reserve a room at hotel H1, it's preferable to offer the customer a room at a different hotel in the same city rather than canceling the flights. The customer can still decide to cancel (in which case the compensating transaction runs and undoes the bookings made on flights F1, F2, and F3), but this decision should be made by the customer rather than by the system.

Related patterns and guidance The following patterns and guidance might also be relevant when implementing this pattern: Data Consistency Primer. The Compensating Transaction pattern is often used to undo operations that implement the eventual consistency model. This primer provides information on the benefits and tradeoffs of eventual consistency. Scheduler-Agent-Supervisor pattern. Describes how to implement resilient systems that perform business operations that use distributed services and resources. Sometimes, it might be necessary to undo the work performed by an operation by using a compensating transaction. Retry pattern.
Compensating transactions can be expensive to perform, and it might be possible to minimize their use by implementing an effective policy of retrying failing operations by following the Retry pattern. Competing Consumers pattern 3/13/2019 • 9 minutes to read • Edit Online Enable multiple concurrent consumers to process messages received on the same messaging channel. This enables a system to process multiple messages concurrently to optimize throughput, to improve scalability and availability, and to balance the workload. Context and problem An application running in the cloud is expected to handle a large number of requests. Rather than process each request synchronously, a common technique is for the application to pass them through a messaging system to another service (a consumer service) that handles them asynchronously. This strategy helps to ensure that the business logic in the application isn't blocked while the requests are being processed. The number of requests can vary significantly over time for many reasons. A sudden increase in user activity or aggregated requests coming from multiple tenants can cause an unpredictable workload. At peak hours a system might need to process many hundreds of requests per second, while at other times the number could be very small. Additionally, the nature of the work performed to handle these requests might be highly variable. Using a single instance of the consumer service can cause that instance to become flooded with requests, or the messaging system might be overloaded by an influx of messages coming from the application. To handle this fluctuating workload, the system can run multiple instances of the consumer service. However, these consumers must be coordinated to ensure that each message is only delivered to a single consumer. The workload also needs to be load balanced across consumers to prevent an instance from becoming a bottleneck. Solution Use a message queue to implement the communication channel between the application and the instances of the consumer service. The application posts requests in the form of messages to the queue, and the consumer service instances receive messages from the queue and process them. This approach enables the same pool of consumer service instances to handle messages from any instance of the application. The figure illustrates using a message queue to distribute work to instances of a service. This solution has the following benefits: It provides a load-leveled system that can handle wide variations in the volume of requests sent by application instances. The queue acts as a buffer between the application instances and the consumer service instances. This can help to minimize the impact on availability and responsiveness for both the application and the service instances, as described by the Queue-based Load Leveling pattern. Handling a message that requires some long-running processing doesn't prevent other messages from being handled concurrently by other instances of the consumer service. It improves reliability. If a producer communicates directly with a consumer instead of using this pattern, but doesn't monitor the consumer, there's a high probability that messages could be lost or fail to be processed if the consumer fails. In this pattern, messages aren't sent to a specific service instance. A failed service instance won't block a producer, and messages can be processed by any working service instance. 
It doesn't require complex coordination between the consumers, or between the producer and the consumer instances. The message queue ensures that each message is delivered at least once. It's scalable. The system can dynamically increase or decrease the number of instances of the consumer service as the volume of messages fluctuates. It can improve resiliency if the message queue provides transactional read operations. If a consumer service instance reads and processes the message as part of a transactional operation, and the consumer service instance fails, this pattern can ensure that the message will be returned to the queue to be picked up and handled by another instance of the consumer service.

Issues and considerations Consider the following points when deciding how to implement this pattern:

Message ordering. The order in which consumer service instances receive messages isn't guaranteed, and doesn't necessarily reflect the order in which the messages were created. Design the system to ensure that message processing is idempotent because this will help to eliminate any dependency on the order in which messages are handled. For more information, see Idempotency Patterns on Jonathan Oliver’s blog. Microsoft Azure Service Bus Queues can implement guaranteed first-in-first-out ordering of messages by using message sessions. For more information, see Messaging Patterns Using Sessions.

Designing services for resiliency. If the system is designed to detect and restart failed service instances, it might be necessary to implement the processing performed by the service instances as idempotent operations to minimize the effects of a single message being retrieved and processed more than once.

Detecting poison messages. A malformed message, or a task that requires access to resources that aren't available, can cause a service instance to fail. The system should prevent such messages being returned to the queue, and instead capture and store the details of these messages elsewhere so that they can be analyzed if necessary.

Handling results. The service instance handling a message is fully decoupled from the application logic that generates the message, and they might not be able to communicate directly. If the service instance generates results that must be passed back to the application logic, this information must be stored in a location that's accessible to both. In order to prevent the application logic from retrieving incomplete data the system must indicate when processing is complete. If you're using Azure, a worker process can pass results back to the application logic by using a dedicated message reply queue. The application logic must be able to correlate these results with the original message. This scenario is described in more detail in the Asynchronous Messaging Primer.

Scaling the messaging system. In a large-scale solution, a single message queue could be overwhelmed by the number of messages and become a bottleneck in the system. In this situation, consider partitioning the messaging system to send messages from specific producers to a particular queue, or use load balancing to distribute messages across multiple message queues.

Ensuring reliability of the messaging system. A reliable messaging system is needed to guarantee that after the application enqueues a message it won't be lost. This is essential for ensuring that all messages are delivered at least once.
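Because the queue delivers each message at least once and ordering isn't guaranteed, the considerations above recommend making message processing idempotent. The following sketch shows one simple way a consumer might detect and skip duplicates; the IProcessedMessageStore abstraction and the use of MessageId as the deduplication key are illustrative assumptions, and it uses the BrokeredMessage type from the same SDK as the example below.

using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

// Assumed store that remembers which message IDs have already been handled,
// for example a table keyed by message ID.
public interface IProcessedMessageStore
{
    Task<bool> HasProcessedAsync(string messageId);
    Task MarkProcessedAsync(string messageId);
}

public class IdempotentConsumer
{
    private readonly IProcessedMessageStore processedMessages;

    public IdempotentConsumer(IProcessedMessageStore processedMessages)
    {
        this.processedMessages = processedMessages;
    }

    public async Task ProcessAsync(BrokeredMessage message)
    {
        // At-least-once delivery means the same message can arrive more than once.
        if (await this.processedMessages.HasProcessedAsync(message.MessageId))
        {
            return; // Duplicate delivery: safe to ignore.
        }

        // BUSINESS PROCESSING CODE HERE

        await this.processedMessages.MarkProcessedAsync(message.MessageId);
    }
}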
When to use this pattern Use this pattern when: The workload for an application is divided into tasks that can run asynchronously. Tasks are independent and can run in parallel. The volume of work is highly variable, requiring a scalable solution. The solution must provide high availability, and must be resilient if the processing for a task fails. This pattern might not be useful when: It's not easy to separate the application workload into discrete tasks, or there's a high degree of dependence between tasks. Tasks must be performed synchronously, and the application logic must wait for a task to complete before continuing. Tasks must be performed in a specific sequence. Some messaging systems support sessions that enable a producer to group messages together and ensure that they're all handled by the same consumer. This mechanism can be used with prioritized messages (if they are supported) to implement a form of message ordering that delivers messages in sequence from a producer to a single consumer.

Example Azure provides storage queues and Service Bus queues that can act as a mechanism for implementing this pattern. The application logic can post messages to a queue, and consumers implemented as tasks in one or more roles can retrieve messages from this queue and process them. For resiliency, a Service Bus queue enables a consumer to use PeekLock mode when it retrieves a message from the queue. This mode doesn't actually remove the message, but simply hides it from other consumers. The original consumer can delete the message when it's finished processing it. If the consumer fails, the peek lock will time out and the message will become visible again, allowing another consumer to retrieve it. For detailed information on using Azure Service Bus queues, see Service Bus queues, topics, and subscriptions. For information on using Azure storage queues, see Get started with Azure Queue storage using .NET. The following code from the QueueManager class, in the CompetingConsumers solution available on GitHub, shows how you can create a queue by using a QueueClient instance in the Start event handler in a web or worker role.

private string queueName = ...;
private string connectionString = ...;
...

public async Task Start()
{
    // Check if the queue already exists.
    var manager = NamespaceManager.CreateFromConnectionString(this.connectionString);
    if (!manager.QueueExists(this.queueName))
    {
        var queueDescription = new QueueDescription(this.queueName);

        // Set the maximum delivery count for messages in the queue. A message
        // is automatically dead-lettered after this number of deliveries. The
        // default value for dead letter count is 10.
        queueDescription.MaxDeliveryCount = 3;

        await manager.CreateQueueAsync(queueDescription);
    }
    ...

    // Create the queue client. By default the PeekLock method is used.
    this.client = QueueClient.CreateFromConnectionString(
        this.connectionString, this.queueName);
}

The next code snippet shows how an application can create and send a batch of messages to the queue.

public async Task SendMessagesAsync()
{
    // Simulate sending a batch of messages to the queue.
    var messages = new List<BrokeredMessage>();

    for (int i = 0; i < 10; i++)
    {
        var message = new BrokeredMessage() { MessageId = Guid.NewGuid().ToString() };
        messages.Add(message);
    }

    await this.client.SendBatchAsync(messages);
}

The following code shows how a consumer service instance can receive messages from the queue by following an event-driven approach.
The processMessageTask parameter to the ReceiveMessages method is a delegate that references the code to run when a message is received. This code is run asynchronously.

private ManualResetEvent pauseProcessingEvent;
...

public void ReceiveMessages(Func<BrokeredMessage, Task> processMessageTask)
{
    // Set up the options for the message pump.
    var options = new OnMessageOptions();

    // When AutoComplete is disabled it's necessary to manually
    // complete or abandon the messages and handle any errors.
    options.AutoComplete = false;
    options.MaxConcurrentCalls = 10;
    options.ExceptionReceived += this.OptionsOnExceptionReceived;

    // Use of the Service Bus OnMessage message pump.
    // The OnMessage method must be called once, otherwise an exception will occur.
    this.client.OnMessageAsync(
        async (msg) =>
        {
            // Will block the current thread if Stop is called.
            this.pauseProcessingEvent.WaitOne();

            // Execute processing task here.
            await processMessageTask(msg);
        },
        options);
}
...

private void OptionsOnExceptionReceived(object sender,
    ExceptionReceivedEventArgs exceptionReceivedEventArgs)
{
    ...
}

Note that autoscaling features, such as those available in Azure, can be used to start and stop role instances as the queue length fluctuates. For more information, see Autoscaling Guidance. Also, it's not necessary to maintain a one-to-one correspondence between role instances and worker processes; a single role instance can implement multiple worker processes. For more information, see Compute Resource Consolidation pattern.

Related patterns and guidance The following patterns and guidance might be relevant when implementing this pattern: Asynchronous Messaging Primer. Message queues are an asynchronous communications mechanism. If a consumer service needs to send a reply to an application, it might be necessary to implement some form of response messaging. The Asynchronous Messaging Primer provides information on how to implement request/reply messaging using message queues. Autoscaling Guidance. It might be possible to start and stop instances of a consumer service because the length of the queue that applications post messages on varies. Autoscaling can help to maintain throughput during times of peak processing. Compute Resource Consolidation pattern. It might be possible to consolidate multiple instances of a consumer service into a single process to reduce costs and management overhead. The Compute Resource Consolidation pattern describes the benefits and tradeoffs of following this approach. Queue-based Load Leveling pattern. Introducing a message queue can add resiliency to the system, enabling service instances to handle widely varying volumes of requests from application instances. The message queue acts as a buffer, which levels the load. The Queue-based Load Leveling pattern describes this scenario in more detail. This pattern has a sample application associated with it.

Compute Resource Consolidation pattern 3/13/2019 • 12 minutes to read • Edit Online

Consolidate multiple tasks or operations into a single computational unit. This can increase compute resource utilization, and reduce the costs and management overhead associated with performing compute processing in cloud-hosted applications.

Context and problem A cloud application often implements a variety of operations.
In some solutions it makes sense to follow the design principle of separation of concerns initially, and divide these operations into separate computational units that are hosted and deployed individually (for example, as separate App Service web apps, separate Virtual Machines, or separate Cloud Service roles). However, although this strategy can help simplify the logical design of the solution, deploying a large number of computational units as part of the same application can increase runtime hosting costs and make management of the system more complex. As an example, the figure shows the simplified structure of a cloud-hosted solution that is implemented using more than one computational unit. Each computational unit runs in its own virtual environment. Each function has been implemented as a separate task (labeled Task A through Task E ) running in its own computational unit. Each computational unit consumes chargeable resources, even when it's idle or lightly used. Therefore, this isn't always the most cost-effective solution. In Azure, this concern applies to roles in a Cloud Service, App Services, and Virtual Machines. These items run in their own virtual environment. Running a collection of separate roles, websites, or virtual machines that are designed to perform a set of well-defined operations, but that need to communicate and cooperate as part of a single solution, can be an inefficient use of resources. Solution To help reduce costs, increase utilization, improve communication speed, and reduce management it's possible to consolidate multiple tasks or operations into a single computational unit. Tasks can be grouped according to criteria based on the features provided by the environment and the costs associated with these features. A common approach is to look for tasks that have a similar profile concerning their scalability, lifetime, and processing requirements. Grouping these together allows them to scale as a unit. The elasticity provided by many cloud environments enables additional instances of a computational unit to be started and stopped according to the workload. For example, Azure provides autoscaling that you can apply to roles in a Cloud Service, App Services, and Virtual Machines. For more information, see Autoscaling Guidance. As a counter example to show how scalability can be used to determine which operations shouldn't be grouped together, consider the following two tasks: Task 1 polls for infrequent, time-insensitive messages sent to a queue. Task 2 handles high-volume bursts of network traffic. The second task requires elasticity that can involve starting and stopping a large number of instances of the computational unit. Applying the same scaling to the first task would simply result in more tasks listening for infrequent messages on the same queue, and is a waste of resources. In many cloud environments it's possible to specify the resources available to a computational unit in terms of the number of CPU cores, memory, disk space, and so on. Generally, the more resources specified, the greater the cost. To save money, it's important to maximize the work an expensive computational unit performs, and not let it become inactive for an extended period. If there are tasks that require a great deal of CPU power in short bursts, consider consolidating these into a single computational unit that provides the necessary power. 
However, it's important to balance this need to keep expensive resources busy against the contention that could occur if they are over stressed. Long-running, compute-intensive tasks shouldn't share the same computational unit, for example. Issues and considerations Consider the following points when implementing this pattern: Scalability and elasticity. Many cloud solutions implement scalability and elasticity at the level of the computational unit by starting and stopping instances of units. Avoid grouping tasks that have conflicting scalability requirements in the same computational unit. Lifetime. The cloud infrastructure periodically recycles the virtual environment that hosts a computational unit. When there are many long-running tasks inside a computational unit, it might be necessary to configure the unit to prevent it from being recycled until these tasks have finished. Alternatively, design the tasks by using a checkpointing approach that enables them to stop cleanly, and continue at the point they were interrupted when the computational unit is restarted. Release cadence. If the implementation or configuration of a task changes frequently, it might be necessary to stop the computational unit hosting the updated code, reconfigure and redeploy the unit, and then restart it. This process will also require that all other tasks within the same computational unit are stopped, redeployed, and restarted. Security. Tasks in the same computational unit might share the same security context and be able to access the same resources. There must be a high degree of trust between the tasks, and confidence that one task isn't going to corrupt or adversely affect another. Additionally, increasing the number of tasks running in a computational unit increases the attack surface of the unit. Each task is only as secure as the one with the most vulnerabilities. Fault tolerance. If one task in a computational unit fails or behaves abnormally, it can affect the other tasks running within the same unit. For example, if one task fails to start correctly it can cause the entire startup logic for the computational unit to fail, and prevent other tasks in the same unit from running. Contention. Avoid introducing contention between tasks that compete for resources in the same computational unit. Ideally, tasks that share the same computational unit should exhibit different resource utilization characteristics. For example, two compute-intensive tasks should probably not reside in the same computational unit, and neither should two tasks that consume large amounts of memory. However, mixing a compute intensive task with a task that requires a large amount of memory is a workable combination. NOTE Consider consolidating compute resources only for a system that's been in production for a period of time so that operators and developers can monitor the system and create a heat map that identifies how each task utilizes differing resources. This map can be used to determine which tasks are good candidates for sharing compute resources. Complexity. Combining multiple tasks into a single computational unit adds complexity to the code in the unit, possibly making it more difficult to test, debug, and maintain. Stable logical architecture. Design and implement the code in each task so that it shouldn't need to change, even if the physical environment the task runs in does change. Other strategies. Consolidating compute resources is only one way to help reduce costs associated with running multiple tasks concurrently. 
It requires careful planning and monitoring to ensure that it remains an effective approach. Other strategies might be more appropriate, depending on the nature of the work and where the users of these tasks are located. For example, functional decomposition of the workload (as described by the Compute Partitioning Guidance) might be a better option.

When to use this pattern Use this pattern for tasks that are not cost effective if they run in their own computational units. If a task spends much of its time idle, running this task in a dedicated unit can be expensive. This pattern might not be suitable for tasks that perform critical fault-tolerant operations, or tasks that process highly sensitive or private data and require their own security context. These tasks should run in their own isolated environment, in a separate computational unit.

Example When building a cloud service on Azure, it’s possible to consolidate the processing performed by multiple tasks into a single role. Typically this is a worker role that performs background or asynchronous processing tasks. In some cases it's possible to include background or asynchronous processing tasks in the web role. This technique helps to reduce costs and simplify deployment, although it can impact the scalability and responsiveness of the public-facing interface provided by the web role. The role is responsible for starting and stopping the tasks. When the Azure fabric controller loads a role, it raises the Start event for the role. You can override the OnStart method of the WebRole or WorkerRole class to handle this event, perhaps to initialize the data and other resources on which the tasks depend. When the OnStart method completes, the role can start responding to requests. You can find more information and guidance about using the OnStart and Run methods in a role in the Application Startup Processes section in the patterns & practices guide Moving Applications to the Cloud.

Keep the code in the OnStart method as concise as possible. Azure doesn't impose any limit on the time taken for this method to complete, but the role won't be able to start responding to network requests sent to it until this method completes. When the OnStart method has finished, the role executes the Run method. At this point, the fabric controller can start sending requests to the role. Place the code that actually creates the tasks in the Run method. Note that the Run method defines the lifetime of the role instance. When this method completes, the fabric controller will arrange for the role to be shut down. When a role shuts down or is recycled, the fabric controller prevents any more incoming requests being received from the load balancer and raises the Stop event. You can capture this event by overriding the OnStop method of the role and perform any tidying up required before the role terminates. Any actions performed in the OnStop method must be completed within five minutes (or 30 seconds if you are using the Azure emulator on a local computer). Otherwise the Azure fabric controller assumes that the role has stalled and will force it to stop. The tasks are started by the Run method, which then waits for the tasks to complete. The tasks implement the business logic of the cloud service, and can respond to messages posted to the role through the Azure load balancer. The figure shows the lifecycle of tasks and resources in a role in an Azure cloud service.
The WorkerRole.cs file in the ComputeResourceConsolidation.Worker project shows an example of how you might implement this pattern in an Azure cloud service. The ComputeResourceConsolidation.Worker project is part of the ComputeResourceConsolidation solution available for download from GitHub. The MyWorkerTask1 and the MyWorkerTask2 methods illustrate how to perform different tasks within the same worker role. The following code shows MyWorkerTask1. This is a simple task that sleeps for 30 seconds and then outputs a trace message. It repeats this process until the task is canceled. The code in MyWorkerTask2 is similar.

// A sample worker role task.
private static async Task MyWorkerTask1(CancellationToken ct)
{
    // Fixed interval to wake up and check for work and/or do work.
    var interval = TimeSpan.FromSeconds(30);

    try
    {
        while (!ct.IsCancellationRequested)
        {
            // Wake up and do some background processing if not canceled.
            // TASK PROCESSING CODE HERE
            Trace.TraceInformation("Doing Worker Task 1 Work");

            // Go back to sleep for a period of time unless asked to cancel.
            // Task.Delay will throw an OperationCanceledException when canceled.
            await Task.Delay(interval, ct);
        }
    }
    catch (OperationCanceledException)
    {
        // Expect this exception to be thrown in normal circumstances or check
        // the cancellation token. If the role instances are shutting down, a
        // cancellation request will be signaled.
        Trace.TraceInformation("Stopping service, cancellation requested");

        // Rethrow the exception.
        throw;
    }
}

The sample code shows a common implementation of a background process. In a real world application you can follow this same structure, except that you should place your own processing logic in the body of the loop that waits for the cancellation request. After the worker role has initialized the resources it uses, the Run method starts the two tasks concurrently, as shown here.

/// <summary>
/// The cancellation token source used to cooperatively cancel running tasks
/// </summary>
private readonly CancellationTokenSource cts = new CancellationTokenSource();

/// <summary>
/// List of running tasks on the role instance
/// </summary>
private readonly List<Task> tasks = new List<Task>();

// RoleEntry Run() is called after OnStart().
// Returning from Run() will cause a role instance to recycle.
public override void Run()
{
    // Start worker tasks and add to the task list
    tasks.Add(MyWorkerTask1(cts.Token));
    tasks.Add(MyWorkerTask2(cts.Token));

    foreach (var worker in this.workerTasks)
    {
        this.tasks.Add(worker);
    }

    Trace.TraceInformation("Worker host tasks started");

    // The assumption is that all tasks should remain running and not return,
    // similar to role entry Run() behavior.
    try
    {
        Task.WaitAll(tasks.ToArray());
    }
    catch (AggregateException ex)
    {
        Trace.TraceError(ex.Message);

        // If any of the inner exceptions in the aggregate exception
        // are not cancellation exceptions then re-throw the exception.
        ex.Handle(innerEx => (innerEx is OperationCanceledException));
    }

    // If there wasn't a cancellation request, stop all tasks and return from Run().
    // An alternative to canceling and returning when a task exits would be to restart the task.
    if (!cts.IsCancellationRequested)
    {
        Trace.TraceInformation("Task returned without cancellation request");
        Stop(TimeSpan.FromMinutes(5));
    }
}
...

In this example, the Run method waits for tasks to be completed.
If a task is canceled, the Run method assumes that the role is being shut down and waits for the remaining tasks to be canceled before finishing (it waits for a maximum of five minutes before terminating). If a task fails due to an expected exception, the Run method cancels the task. You could implement more comprehensive monitoring and exception handling strategies in the Run method such as restarting tasks that have failed, or including code that enables the role to stop and start individual tasks. The Stop method shown in the following code is called when the fabric controller shuts down the role instance (it's invoked from the OnStop method). The code stops each task gracefully by canceling it. If any task takes more than five minutes to complete, the cancellation processing in the Stop method ceases waiting and the role is terminated.

// Stop running tasks and wait for tasks to complete before returning
// unless the timeout expires.
private void Stop(TimeSpan timeout)
{
    Trace.TraceInformation("Stop called. Canceling tasks.");

    // Cancel running tasks.
    cts.Cancel();

    Trace.TraceInformation("Waiting for canceled tasks to finish and return");

    // Wait for all the tasks to complete before returning. Note that the
    // emulator currently allows 30 seconds and Azure allows five
    // minutes for processing to complete.
    try
    {
        Task.WaitAll(tasks.ToArray(), timeout);
    }
    catch (AggregateException ex)
    {
        Trace.TraceError(ex.Message);

        // If any of the inner exceptions in the aggregate exception
        // are not cancellation exceptions then rethrow the exception.
        ex.Handle(innerEx => (innerEx is OperationCanceledException));
    }
}

Related patterns and guidance The following patterns and guidance might also be relevant when implementing this pattern: Autoscaling Guidance. Autoscaling can be used to start and stop instances of service hosting computational resources, depending on the anticipated demand for processing. Compute Partitioning Guidance. Describes how to allocate the services and components in a cloud service in a way that helps to minimize running costs while maintaining the scalability, performance, availability, and security of the service. This pattern includes a downloadable sample application.

Event Sourcing pattern 3/13/2019 • 14 minutes to read • Edit Online

Instead of storing just the current state of the data in a domain, use an append-only store to record the full series of actions taken on that data. The store acts as the system of record and can be used to materialize the domain objects. This can simplify tasks in complex domains, by avoiding the need to synchronize the data model and the business domain, while improving performance, scalability, and responsiveness. It can also provide consistency for transactional data, and maintain full audit trails and history that can enable compensating actions.

Context and problem Most applications work with data, and the typical approach is for the application to maintain the current state of the data by updating it as users work with it. For example, in the traditional create, read, update, and delete (CRUD) model a typical data process is to read data from the store, make some modifications to it, and update the current state of the data with the new values, often by using transactions that lock the data. The CRUD approach has some limitations: CRUD systems perform update operations directly against a data store, which can slow down performance and responsiveness, and limit scalability, due to the processing overhead it requires.
In a collaborative domain with many concurrent users, data update conflicts are more likely because the update operations take place on a single item of data. Unless there's an additional auditing mechanism that records the details of each operation in a separate log, history is lost. For a deeper understanding of the limits of the CRUD approach see CRUD, Only When You Can Afford It. Solution The Event Sourcing pattern defines an approach to handling operations on data that's driven by a sequence of events, each of which is recorded in an append-only store. Application code sends a series of events that imperatively describe each action that has occurred on the data to the event store, where they're persisted. Each event represents a set of changes to the data (such as AddedItemToOrder ). The events are persisted in an event store that acts as the system of record (the authoritative data source) about the current state of the data. The event store typically publishes these events so that consumers can be notified and can handle them if needed. Consumers could, for example, initiate tasks that apply the operations in the events to other systems, or perform any other associated action that's required to complete the operation. Notice that the application code that generates the events is decoupled from the systems that subscribe to the events. Typical uses of the events published by the event store are to maintain materialized views of entities as actions in the application change them, and for integration with external systems. For example, a system can maintain a materialized view of all customer orders that's used to populate parts of the UI. As the application adds new orders, adds or removes items on the order, and adds shipping information, the events that describe these changes can be handled and used to update the materialized view. In addition, at any point it's possible for applications to read the history of events, and use it to materialize the current state of an entity by playing back and consuming all the events related to that entity. This can occur on demand to materialize a domain object when handling a request, or through a scheduled task so that the state of the entity can be stored as a materialized view to support the presentation layer. The figure shows an overview of the pattern, including some of the options for using the event stream such as creating a materialized view, integrating events with external applications and systems, and replaying events to create projections of the current state of specific entities. The Event Sourcing pattern provides the following advantages: Events are immutable and can be stored using an append-only operation. The user interface, workflow, or process that initiated an event can continue, and tasks that handle the events can run in the background. This, combined with the fact that there's no contention during the processing of transactions, can vastly improve performance and scalability for applications, especially for the presentation level or user interface. Events are simple objects that describe some action that occurred, together with any associated data required to describe the action represented by the event. Events don't directly update a data store. They're simply recorded for handling at the appropriate time. This can simplify implementation and management. Events typically have meaning for a domain expert, whereas object-relational impedance mismatch can make complex database tables hard to understand. 
Tables are artificial constructs that represent the current state of the system, not the events that occurred. Event sourcing can help prevent concurrent updates from causing conflicts because it avoids the requirement to directly update objects in the data store. However, the domain model must still be designed to protect itself from requests that might result in an inconsistent state. The append-only storage of events provides an audit trail that can be used to monitor actions taken against a data store, regenerate the current state as materialized views or projections by replaying the events at any time, and assist in testing and debugging the system. In addition, the requirement to use compensating events to cancel changes provides a history of changes that were reversed, which wouldn't be the case if the model simply stored the current state. The list of events can also be used to analyze application performance and detect user behavior trends, or to obtain other useful business information. The event store raises events, and tasks perform operations in response to those events. This decoupling of the tasks from the events provides flexibility and extensibility. Tasks know about the type of event and the event data, but not about the operation that triggered the event. In addition, multiple tasks can handle each event. This enables easy integration with other services and systems that only listen for new events raised by the event store. However, the event sourcing events tend to be very low level, and it might be necessary to generate specific integration events instead. Event sourcing is commonly combined with the CQRS pattern by performing the data management tasks in response to the events, and by materializing views from the stored events. Issues and considerations Consider the following points when deciding how to implement this pattern: The system will only be eventually consistent when creating materialized views or generating projections of data by replaying events. There's some delay between an application adding events to the event store as the result of handling a request, the events being published, and consumers of the events handling them. During this period, new events that describe further changes to entities might have arrived at the event store. NOTE See the Data Consistency Primer for information about eventual consistency. The event store is the permanent source of information, and so the event data should never be updated. The only way to update an entity to undo a change is to add a compensating event to the event store. If the format (rather than the data) of the persisted events needs to change, perhaps during a migration, it can be difficult to combine existing events in the store with the new version. It might be necessary to iterate through all the events making changes so they're compliant with the new format, or add new events that use the new format. Consider using a version stamp on each version of the event schema to maintain both the old and the new event formats. Multi-threaded applications and multiple instances of applications might be storing events in the event store. The consistency of events in the event store is vital, as is the order of events that affect a specific entity (the order that changes occur to an entity affects its current state). Adding a timestamp to every event can help to avoid issues. Another common practice is to annotate each event resulting from a request with an incremental identifier. 
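To make the ordering and conflict-detection considerations above more concrete, the following minimal C# sketch (hypothetical types, not taken from any sample referenced in this guide) shows an event envelope that carries an entity identifier, an incremental version number, and a timestamp, together with a simple in-memory store that rejects an append when the caller's expected version is stale:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical event envelope: the entity identifier, an incremental per-entity
// version number, and a timestamp support ordering and conflict detection.
public class EventData
{
    public Guid EntityId { get; set; }
    public long Version { get; set; }
    public DateTimeOffset Timestamp { get; set; }
    public string EventType { get; set; }
    public string Payload { get; set; }
}

// A minimal in-memory event store that rejects an append when another writer
// has already stored an event with the same entity identifier and version.
public class InMemoryEventStore
{
    private readonly Dictionary<Guid, List<EventData>> streams = new Dictionary<Guid, List<EventData>>();
    private readonly object syncRoot = new object();

    public void Append(Guid entityId, string eventType, string payload, long expectedVersion)
    {
        lock (this.syncRoot)
        {
            if (!this.streams.TryGetValue(entityId, out var stream))
            {
                stream = new List<EventData>();
                this.streams[entityId] = stream;
            }

            var currentVersion = stream.Count == 0 ? 0 : stream.Max(e => e.Version);
            if (currentVersion != expectedVersion)
            {
                // A concurrent writer appended an event first; the caller can
                // reload the stream and retry or reconcile.
                throw new InvalidOperationException(
                    $"Concurrency conflict: expected version {expectedVersion} but found {currentVersion}.");
            }

            stream.Add(new EventData
            {
                EntityId = entityId,
                Version = currentVersion + 1,
                Timestamp = DateTimeOffset.UtcNow,
                EventType = eventType,
                Payload = payload
            });
        }
    }

    // Replaying the ordered stream is how the current state of an entity is rebuilt.
    public IEnumerable<EventData> ReadStream(Guid entityId) =>
        this.streams.TryGetValue(entityId, out var stream)
            ? stream.OrderBy(e => e.Version)
            : Enumerable.Empty<EventData>();
}

A real event store would persist the stream durably and might surface the conflict as a retryable result rather than an exception; the sketch only illustrates the shape of the version check.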
If two actions attempt to add events for the same entity at the same time, the event store can reject an event that matches an existing entity identifier and event identifier. There's no standard approach, or existing mechanisms such as SQL queries, for reading the events to obtain information. The only data that can be extracted is a stream of events using an event identifier as the criteria. The event ID typically maps to individual entities. The current state of an entity can be determined only by replaying all of the events that relate to it against the original state of that entity. The length of each event stream affects managing and updating the system. If the streams are large, consider creating snapshots at specific intervals such as a specified number of events. The current state of the entity can be obtained from the snapshot and by replaying any events that occurred after that point in time. For more information about creating snapshots of data, see Snapshot on Martin Fowler’s Enterprise Application Architecture website and Master-Subordinate Snapshot Replication. Even though event sourcing minimizes the chance of conflicting updates to the data, the application must still be able to deal with inconsistencies that result from eventual consistency and the lack of transactions. For example, an event that indicates a reduction in stock inventory might arrive in the data store while an order for that item is being placed, resulting in a requirement to reconcile the two operations either by advising the customer or creating a back order. Event publication might be “at least once,” and so consumers of the events must be idempotent. They must not reapply the update described in an event if the event is handled more than once. For example, if multiple instances of a consumer maintain an aggregate an entity's property, such as the total number of orders placed, only one must succeed in incrementing the aggregate when an order placed event occurs. While this isn't a key characteristic of event sourcing, it's the usual implementation decision. When to use this pattern Use this pattern in the following scenarios: When you want to capture intent, purpose, or reason in the data. For example, changes to a customer entity can be captured as a series of specific event types such as Moved home, Closed account, or Deceased. When it's vital to minimize or completely avoid the occurrence of conflicting updates to data. When you want to record events that occur, and be able to replay them to restore the state of a system, roll back changes, or keep a history and audit log. For example, when a task involves multiple steps you might need to execute actions to revert updates and then replay some steps to bring the data back into a consistent state. When using events is a natural feature of the operation of the application, and requires little additional development or implementation effort. When you need to decouple the process of inputting or updating data from the tasks required to apply these actions. This might be to improve UI performance, or to distribute events to other listeners that take action when the events occur. For example, integrating a payroll system with an expense submission website so that events raised by the event store in response to data updates made in the website are consumed by both the website and the payroll system. 
When you want flexibility to be able to change the format of materialized models and entity data if requirements change, or—when used in conjunction with CQRS —you need to adapt a read model or the views that expose the data. When used in conjunction with CQRS, and eventual consistency is acceptable while a read model is updated, or the performance impact of rehydrating entities and data from an event stream is acceptable. This pattern might not be useful in the following situations: Small or simple domains, systems that have little or no business logic, or nondomain systems that naturally work well with traditional CRUD data management mechanisms. Systems where consistency and real-time updates to the views of the data are required. Systems where audit trails, history, and capabilities to roll back and replay actions are not required. Systems where there's only a very low occurrence of conflicting updates to the underlying data. For example, systems that predominantly add data rather than updating it. Example A conference management system needs to track the number of completed bookings for a conference so that it can check whether there are seats still available when a potential attendee tries to make a booking. The system could store the total number of bookings for a conference in at least two ways: The system could store the information about the total number of bookings as a separate entity in a database that holds booking information. As bookings are made or canceled, the system could increment or decrement this number as appropriate. This approach is simple in theory, but can cause scalability issues if a large number of attendees are attempting to book seats during a short period of time. For example, in the last day or so prior to the booking period closing. The system could store information about bookings and cancellations as events held in an event store. It could then calculate the number of seats available by replaying these events. This approach can be more scalable due to the immutability of events. The system only needs to be able to read data from the event store, or append data to the event store. Event information about bookings and cancellations is never modified. The following diagram illustrates how the seat reservation subsystem of the conference management system might be implemented using event sourcing. The sequence of actions for reserving two seats is as follows: 1. The user interface issues a command to reserve seats for two attendees. The command is handled by a separate command handler. A piece of logic that is decoupled from the user interface and is responsible for handling requests posted as commands. 2. An aggregate containing information about all reservations for the conference is constructed by querying the events that describe bookings and cancellations. This aggregate is called SeatAvailability , and is contained within a domain model that exposes methods for querying and modifying the data in the aggregate. Some optimizations to consider are using snapshots (so that you don’t need to query and replay the full list of events to obtain the current state of the aggregate), and maintaining a cached copy of the aggregate in memory. 3. The command handler invokes a method exposed by the domain model to make the reservations. 4. The SeatAvailability aggregate records an event containing the number of seats that were reserved. The next time the aggregate applies events, all the reservations will be used to compute how many seats remain. 5. 
The system appends the new event to the list of events in the event store. If a user cancels a seat, the system follows a similar process except the command handler issues a command that generates a seat cancellation event and appends it to the event store. As well as providing more scope for scalability, using an event store also provides a complete history, or audit trail, of the bookings and cancellations for a conference. The events in the event store are the accurate record. There is no need to persist aggregates in any other way because the system can easily replay the events and restore the state to any point in time. You can find more information about this example in Introducing Event Sourcing. Related patterns and guidance The following patterns and guidance might also be relevant when implementing this pattern: Command and Query Responsibility Segregation (CQRS ) pattern. The write store that provides the permanent source of information for a CQRS implementation is often based on an implementation of the Event Sourcing pattern. Describes how to segregate the operations that read data in an application from the operations that update data by using separate interfaces. Materialized View pattern. The data store used in a system based on event sourcing is typically not well suited to efficient querying. Instead, a common approach is to generate prepopulated views of the data at regular intervals, or when the data changes. Shows how this can be done. Compensating Transaction pattern. The existing data in an event sourcing store is not updated, instead new entries are added that transition the state of entities to the new values. To reverse a change, compensating entries are used because it isn't possible to simply reverse the previous change. Describes how to undo the work that was performed by a previous operation. Data Consistency Primer. When using event sourcing with a separate read store or materialized views, the read data won't be immediately consistent, instead it'll be only eventually consistent. Summarizes the issues surrounding maintaining consistency over distributed data. Data Partitioning Guidance. Data is often partitioned when using event sourcing to improve scalability, reduce contention, and optimize performance. Describes how to divide data into discrete partitions, and the issues that can arise. External Configuration Store pattern 3/13/2019 • 10 minutes to read • Edit Online Move configuration information out of the application deployment package to a centralized location. This can provide opportunities for easier management and control of configuration data, and for sharing configuration data across applications and application instances. Context and problem The majority of application runtime environments include configuration information that's held in files deployed with the application. In some cases, it's possible to edit these files to change the application behavior after it's been deployed. However, changes to the configuration require the application be redeployed, often resulting in unacceptable downtime and other administrative overhead. Local configuration files also limit the configuration to a single application, but sometimes it would be useful to share configuration settings across multiple applications. Examples include database connection strings, UI theme information, or the URLs of queues and storage used by a related set of applications. 
It's challenging to manage changes to local configurations across multiple running instances of the application, especially in a cloud-hosted scenario. It can result in instances using different configuration settings while the update is being deployed. In addition, updates to applications and components might require changes to configuration schemas. Many configuration systems don't support different versions of configuration information. Solution Store the configuration information in external storage, and provide an interface that can be used to quickly and efficiently read and update configuration settings. The type of external store depends on the hosting and runtime environment of the application. In a cloud-hosted scenario it's typically a cloud-based storage service, but could be a hosted database or other system. The backing store you choose for configuration information should have an interface that provides consistent and easy-to-use access. It should expose the information in a correctly typed and structured format. The implementation might also need to authorize users’ access in order to protect configuration data, and be flexible enough to allow storage of multiple versions of the configuration (such as development, staging, or production, including multiple release versions of each one). Many built-in configuration systems read the data when the application starts up, and cache the data in memory to provide fast access and minimize the impact on application performance. Depending on the type of backing store used, and the latency of this store, it might be helpful to implement a caching mechanism within the external configuration store. For more information, see the Caching Guidance. The figure illustrates an overview of the External Configuration Store pattern with optional local cache. Issues and considerations Consider the following points when deciding how to implement this pattern: Choose a backing store that offers acceptable performance, high availability, robustness, and can be backed up as part of the application maintenance and administration process. In a cloud-hosted application, using a cloud storage mechanism is usually a good choice to meet these requirements. Design the schema of the backing store to allow flexibility in the types of information it can hold. Ensure that it provides for all configuration requirements such as typed data, collections of settings, multiple versions of settings, and any other features that the applications using it require. The schema should be easy to extend to support additional settings as requirements change. Consider the physical capabilities of the backing store, how it relates to the way configuration information is stored, and the effects on performance. For example, storing an XML document containing configuration information will require either the configuration interface or the application to parse the document in order to read individual settings. It'll make updating a setting more complicated, though caching the settings can help to offset slower read performance. Consider how the configuration interface will permit control of the scope and inheritance of configuration settings. For example, it might be a requirement to scope configuration settings at the organization, application, and the machine level. It might need to support delegation of control over access to different scopes, and to prevent or allow individual applications to override settings. 
Ensure that the configuration interface can expose the configuration data in the required formats such as typed values, collections, key/value pairs, or property bags. Consider how the configuration store interface will behave when settings contain errors, or don't exist in the backing store. It might be appropriate to return default settings and log errors. Also consider aspects such as the case sensitivity of configuration setting keys or names, the storage and handling of binary data, and the ways that null or empty values are handled. Consider how to protect the configuration data to allow access to only the appropriate users and applications. This is likely a feature of the configuration store interface, but it's also necessary to ensure that the data in the backing store can't be accessed directly without the appropriate permission. Ensure strict separation between the permissions required to read and to write configuration data. Also consider whether you need to encrypt some or all of the configuration settings, and how this'll be implemented in the configuration store interface. Centrally stored configurations, which change application behavior during runtime, are critically important and should be deployed, updated, and managed using the same mechanisms as deploying application code. For example, changes that can affect more than one application must be carried out using a full test and staged deployment approach to ensure that the change is appropriate for all applications that use this configuration. If an administrator edits a setting to update one application, it could adversely impact other applications that use the same setting. If an application caches configuration information, the application needs to be alerted if the configuration changes. It might be possible to implement an expiration policy over cached configuration data so that this information is automatically refreshed periodically and any changes picked up (and acted on). When to use this pattern This pattern is useful for: Configuration settings that are shared between multiple applications and application instances, or where a standard configuration must be enforced across multiple applications and application instances. A standard configuration system that doesn't support all of the required configuration settings, such as storing images or complex data types. As a complementary store for some of the settings for applications, perhaps allowing applications to override some or all of the centrally-stored settings. As a way to simplify administration of multiple applications, and optionally for monitoring use of configuration settings by logging some or all types of access to the configuration store. Example In a Microsoft Azure hosted application, a typical choice for storing configuration information externally is to use Azure Storage. This is resilient, offers high performance, and is replicated three times with automatic failover to offer high availability. Azure Table storage provides a key/value store with the ability to use a flexible schema for the values. Azure Blob storage provides a hierarchical, container-based store that can hold any type of data in individually named blobs. The following example shows how a configuration store can be implemented over Blob storage to store and expose configuration information. The BlobSettingsStore class abstracts Blob storage for holding configuration information, and implements the ISettingsStore interface shown in the following code. 
This code is provided in the ExternalConfigurationStore.Cloud project in the ExternalConfigurationStore solution, available from GitHub.

public interface ISettingsStore
{
    Task<string> GetVersionAsync();

    Task<Dictionary<string, string>> FindAllAsync();
}

This interface defines methods for retrieving and updating configuration settings held in the configuration store, and includes a version number that can be used to detect whether any configuration settings have been modified recently. The BlobSettingsStore class uses the ETag property of the blob to implement versioning. The ETag property is updated automatically each time the blob is written. By design, this simple solution exposes all configuration settings as string values rather than typed values. The ExternalConfigurationManager class provides a wrapper around a BlobSettingsStore object. An application can use this class to store and retrieve configuration information. This class uses the Microsoft Reactive Extensions library to expose any changes made to the configuration through an implementation of the IObservable interface. If a setting is modified by calling the SetAppSetting method, the Changed event is raised and all subscribers to this event will be notified. Note that all settings are also cached in a Dictionary object inside the ExternalConfigurationManager class for fast access. The GetSetting method used to retrieve a configuration setting reads the data from the cache. If the setting isn't found in the cache, it's retrieved from the BlobSettingsStore object instead. The GetSettings method invokes the CheckForConfigurationChangesAsync method to detect whether the configuration information in blob storage has changed. It does this by examining the version number and comparing it with the current version number held by the ExternalConfigurationManager object. If one or more changes have occurred, the Changed event is raised and the configuration settings cached in the Dictionary object are refreshed. This is an application of the Cache-Aside pattern. The following code sample shows how the Changed event, the GetSettings method, and the CheckForConfigurationChangesAsync method are implemented:

public class ExternalConfigurationManager : IDisposable
{
    // An abstraction of the configuration store.
    private readonly ISettingsStore settings;
    private readonly ISubject<KeyValuePair<string, string>> changed;
    ...
    private readonly ReaderWriterLockSlim settingsCacheLock = new ReaderWriterLockSlim();
    private readonly SemaphoreSlim syncCacheSemaphore = new SemaphoreSlim(1);
    ...
    private Dictionary<string, string> settingsCache;
    private string currentVersion;
    ...
    public ExternalConfigurationManager(ISettingsStore settings, ...)
    {
        this.settings = settings;
        ...
    }
    ...
    public IObservable<KeyValuePair<string, string>> Changed => this.changed.AsObservable();
    ...
    public string GetAppSetting(string key)
    {
        ...
        // Try to get the value from the settings cache.
        // If there's a cache miss, get the setting from the settings store and refresh the settings cache.
        string value;
        try
        {
            this.settingsCacheLock.EnterReadLock();

            this.settingsCache.TryGetValue(key, out value);
        }
        finally
        {
            this.settingsCacheLock.ExitReadLock();
        }

        return value;
    }
    ...
    private async Task CheckForConfigurationChangesAsync()
    {
        try
        {
            // It is assumed that updates are infrequent.
            // To avoid race conditions in refreshing the cache, synchronize access to the in-memory cache.
            await this.syncCacheSemaphore.WaitAsync();

            var latestVersion = await this.settings.GetVersionAsync();

            // If the versions are the same, nothing has changed in the configuration.
            if (this.currentVersion == latestVersion)
                return;

            // Get the latest settings from the settings store and publish changes.
            var latestSettings = await this.settings.FindAllAsync();

            // Refresh the settings cache.
            try
            {
                this.settingsCacheLock.EnterWriteLock();

                if (this.settingsCache != null)
                {
                    // Notify settings changed.
                    latestSettings.Except(this.settingsCache).ToList().ForEach(kv => this.changed.OnNext(kv));
                }
                this.settingsCache = latestSettings;
            }
            finally
            {
                this.settingsCacheLock.ExitWriteLock();
            }

            // Update the current version.
            this.currentVersion = latestVersion;
        }
        catch (Exception ex)
        {
            this.changed.OnError(ex);
        }
        finally
        {
            this.syncCacheSemaphore.Release();
        }
    }
}

The ExternalConfigurationManager class also provides a property named Environment. This property supports varying configurations for an application running in different environments, such as staging and production. An ExternalConfigurationManager object can also query the BlobSettingsStore object periodically for any changes. In the following code, the StartMonitor method calls CheckForConfigurationChangesAsync at an interval to detect any changes and raise the Changed event, as described earlier.

public class ExternalConfigurationManager : IDisposable
{
    ...
    private readonly ISubject<KeyValuePair<string, string>> changed;
    private Dictionary<string, string> settingsCache;
    private readonly CancellationTokenSource cts = new CancellationTokenSource();
    private Task monitoringTask;
    private readonly TimeSpan interval;

    private readonly SemaphoreSlim timerSemaphore = new SemaphoreSlim(1);
    ...
    public ExternalConfigurationManager(string environment) : this(new BlobSettingsStore(environment), TimeSpan.FromSeconds(15), environment)
    {
    }

    public ExternalConfigurationManager(ISettingsStore settings, TimeSpan interval, string environment)
    {
        this.settings = settings;
        this.interval = interval;
        this.CheckForConfigurationChangesAsync().Wait();
        this.changed = new Subject<KeyValuePair<string, string>>();
        this.Environment = environment;
    }
    ...
    /// <summary>
    /// Check to see if the current instance is monitoring for changes
    /// </summary>
    public bool IsMonitoring => this.monitoringTask != null && !this.monitoringTask.IsCompleted;

    /// <summary>
    /// Start the background monitoring for configuration changes in the central store
    /// </summary>
    public void StartMonitor()
    {
        if (this.IsMonitoring)
            return;

        try
        {
            this.timerSemaphore.Wait();

            // Check again to make sure we are not already running.
            if (this.IsMonitoring)
                return;

            // Start running our task loop.
            this.monitoringTask = ConfigChangeMonitor();
        }
        finally
        {
            this.timerSemaphore.Release();
        }
    }

    /// <summary>
    /// Loop that monitors for configuration changes
    /// </summary>
    /// <returns></returns>
    public async Task ConfigChangeMonitor()
    {
        while (!cts.Token.IsCancellationRequested)
        {
            await this.CheckForConfigurationChangesAsync();
            await Task.Delay(this.interval, cts.Token);
        }
    }

    /// <summary>
    /// Stop monitoring for configuration changes
    /// </summary>
    public void StopMonitor()
    {
        try
        {
            this.timerSemaphore.Wait();

            // Signal the task to stop.
            this.cts.Cancel();

            // Wait for the loop to stop.
            this.monitoringTask.Wait();

            this.monitoringTask = null;
        }
        finally
        {
            this.timerSemaphore.Release();
        }
    }

    public void Dispose()
    {
        this.cts.Cancel();
    }
    ...
}

The ExternalConfigurationManager class is instantiated as a singleton instance by the ExternalConfiguration class shown below.

public static class ExternalConfiguration
{
    private static readonly Lazy<ExternalConfigurationManager> configuredInstance = new Lazy<ExternalConfigurationManager>(
        () =>
        {
            var environment = CloudConfigurationManager.GetSetting("environment");
            return new ExternalConfigurationManager(environment);
        });

    public static ExternalConfigurationManager Instance => configuredInstance.Value;
}

The following code is taken from the WorkerRole class in the ExternalConfigurationStore.Cloud project. It shows how the application uses the ExternalConfiguration class to read a setting.

public override void Run()
{
    // Start monitoring configuration changes.
    ExternalConfiguration.Instance.StartMonitor();

    // Get a setting.
    var setting = ExternalConfiguration.Instance.GetAppSetting("setting1");
    Trace.TraceInformation("Worker Role: Get setting1, value: " + setting);

    this.completeEvent.WaitOne();
}

The following code, also from the WorkerRole class, shows how the application subscribes to configuration events.

public override bool OnStart()
{
    ...
    // Subscribe to the event.
    ExternalConfiguration.Instance.Changed.Subscribe(
        m => Trace.TraceInformation("Configuration has changed. Key:{0} Value:{1}", m.Key, m.Value),
        ex => Trace.TraceError("Error detected: " + ex.Message));
    ...
}

Related patterns and guidance
A sample that demonstrates this pattern is available on GitHub.

Federated Identity pattern
3/13/2019 • 7 minutes to read • Edit Online

Delegate authentication to an external identity provider. This can simplify development, minimize the requirement for user administration, and improve the user experience of the application.

Context and problem
Users typically need to work with multiple applications provided and hosted by different organizations they have a business relationship with. These users might be required to use specific (and different) credentials for each one. This can:
Cause a disjointed user experience. Users often forget sign-in credentials when they have many different ones.
Expose security vulnerabilities. When a user leaves the company the account must immediately be deprovisioned. It's easy to overlook this in large organizations.
Complicate user management. Administrators must manage credentials for all of the users, and perform additional tasks such as providing password reminders.
Users typically prefer to use the same credentials for all these applications.

Solution
Implement an authentication mechanism that can use federated identity. Separate user authentication from the application code, and delegate authentication to a trusted identity provider. This can simplify development and allow users to authenticate using a wider range of identity providers (IdP) while minimizing the administrative overhead. It also allows you to clearly decouple authentication from authorization. The trusted identity providers include corporate directories, on-premises federation services, other security token services (STS) provided by business partners, or social identity providers that can authenticate users who have, for example, a Microsoft, Google, Yahoo!, or Facebook account. The figure illustrates the Federated Identity pattern when a client application needs to access a service that requires authentication. The authentication is performed by an IdP that works in concert with an STS. The IdP issues security tokens that provide information about the authenticated user.
This information, referred to as claims, includes the user’s identity, and might also include other information such as role membership and more granular access rights. This model is often called claims-based access control. Applications and services authorize access to features and functionality based on the claims contained in the token. The service that requires authentication must trust the IdP. The client application contacts the IdP that performs the authentication. If the authentication is successful, the IdP returns a token containing the claims that identify the user to the STS (note that the IdP and STS can be the same service). The STS can transform and augment the claims in the token based on predefined rules, before returning it to the client. The client application can then pass this token to the service as proof of its identity. There might be additional STSs in the chain of trust. For example, in the scenario described later, an onpremises STS trusts another STS that is responsible for accessing an identity provider to authenticate the user. This approach is common in enterprise scenarios where there's an on-premises STS and directory. Federated authentication provides a standards-based solution to the issue of trusting identities across diverse domains, and can support single sign-on. It's becoming more common across all types of applications, especially cloud-hosted applications, because it supports single sign-on without requiring a direct network connection to identity providers. The user doesn't have to enter credentials for every application. This increases security because it prevents the creation of credentials required to access many different applications, and it also hides the user’s credentials from all but the original identity provider. Applications see just the authenticated identity information contained within the token. Federated identity also has the major advantage that management of the identity and credentials is the responsibility of the identity provider. The application or service doesn't need to provide identity management features. In addition, in corporate scenarios, the corporate directory doesn't need to know about the user if it trusts the identity provider. This removes all the administrative overhead of managing the user identity within the directory. Issues and considerations Consider the following when designing applications that implement federated authentication: Authentication can be a single point of failure. If you deploy your application to multiple datacenters, consider deploying your identity management mechanism to the same datacenters to maintain application reliability and availability. Authentication tools make it possible to configure access control based on role claims contained in the authentication token. This is often referred to as role-based access control (RBAC ), and it can allow a more granular level of control over access to features and resources. Unlike a corporate directory, claims-based authentication using social identity providers doesn't usually provide information about the authenticated user other than an email address, and perhaps a name. Some social identity providers, such as a Microsoft account, provide only a unique identifier. The application usually needs to maintain some information on registered users, and be able to match this information to the identifier contained in the claims in the token. 
Typically this is done through registration when the user first accesses the application, and information is then injected into the token as additional claims after each authentication. If there's more than one identity provider configured for the STS, it must detect which identity provider the user should be redirected to for authentication. This process is called home realm discovery. The STS might be able to do this automatically based on an email address or user name that the user provides, a subdomain of the application that the user is accessing, the user’s IP address scope, or on the contents of a cookie stored in the user’s browser. For example, if the user entered an email address in the Microsoft domain, such as user@live.com, the STS will redirect the user to the Microsoft account sign-in page. On later visits, the STS could use a cookie to indicate that the last sign in was with a Microsoft account. If automatic discovery can't determine the home realm, the STS will display a home realm discovery page that lists the trusted identity providers, and the user must select the one they want to use. When to use this pattern This pattern is useful for scenarios such as: Single sign-on in the enterprise. In this scenario you need to authenticate employees for corporate applications that are hosted in the cloud outside the corporate security boundary, without requiring them to sign in every time they visit an application. The user experience is the same as when using on-premises applications where they're authenticated when signing in to a corporate network, and from then on have access to all relevant applications without needing to sign in again. Federated identity with multiple partners. In this scenario you need to authenticate both corporate employees and business partners who don't have accounts in the corporate directory. This is common in business-to-business applications, applications that integrate with third-party services, and where companies with different IT systems have merged or shared resources. Federated identity in SaaS applications. In this scenario independent software vendors provide a readyto-use service for multiple clients or tenants. Each tenant authenticates using a suitable identity provider. For example, business users will use their corporate credentials, while consumers and clients of the tenant will use their social identity credentials. This pattern might not be useful in the following situations: All users of the application can be authenticated by one identity provider, and there's no requirement to authenticate using any other identity provider. This is typical in business applications that use a corporate directory (accessible within the application) for authentication, by using a VPN, or (in a cloud-hosted scenario) through a virtual network connection between the on-premises directory and the application. The application was originally built using a different authentication mechanism, perhaps with custom user stores, or doesn't have the capability to handle the negotiation standards used by claims-based technologies. Retrofitting claims-based authentication and access control into existing applications can be complex, and probably not cost effective. Example An organization hosts a multi-tenant software as a service (SaaS ) application in Microsoft Azure. The application includes a website that tenants can use to manage the application for their own users. 
The application allows tenants to access the website by using a federated identity that is generated by Active Directory Federation Services (ADFS ) when a user is authenticated by that organization’s own Active Directory. The figure shows how tenants authenticate with their own identity provider (step 1), in this case ADFS. After successfully authenticating a tenant, ADFS issues a token. The client browser forwards this token to the SaaS application’s federation provider, which trusts tokens issued by the tenant’s ADFS, in order to get back a token that is valid for the SaaS federation provider (step 2). If necessary, the SaaS federation provider performs a transformation on the claims in the token into claims that the application recognizes (step 3) before returning the new token to the client browser. The application trusts tokens issued by the SaaS federation provider and uses the claims in the token to apply authorization rules (step 4). Tenants won't need to remember separate credentials to access the application, and an administrator at the tenant’s company can configure in its own ADFS the list of users that can access the application. Related guidance Microsoft Azure Active Directory Active Directory Domain Services Active Directory Federation Services Identity management for multitenant applications in Microsoft Azure Multitenant Applications in Azure Gatekeeper pattern 3/13/2019 • 4 minutes to read • Edit Online Protect applications and services by using a dedicated host instance that acts as a broker between clients and the application or service, validates and sanitizes requests, and passes requests and data between them. This can provide an additional layer of security, and limit the attack surface of the system. Context and problem Applications expose their functionality to clients by accepting and processing requests. In cloud-hosted scenarios, applications expose endpoints clients connect to, and typically include the code to handle the requests from clients. This code performs authentication and validation, some or all request processing, and is likely to accesses storage and other services on behalf of the client. If a malicious user is able to compromise the system and gain access to the application’s hosting environment, the security mechanisms it uses such as credentials and storage keys, and the services and data it accesses, are exposed. As a result, the malicious user can gain unrestrained access to sensitive information and other services. Solution To minimize the risk of clients gaining access to sensitive information and services, decouple hosts or tasks that expose public endpoints from the code that processes requests and accesses storage. You can achieve this by using a façade or a dedicated task that interacts with clients and then hands off the request—perhaps through a decoupled interface—to the hosts or tasks that'll handle the request. The figure provides a high-level overview of this pattern. The gatekeeper pattern can be used to simply protect storage, or it can be used as a more comprehensive façade to protect all of the functions of the application. The important factors are: Controlled validation. The gatekeeper validates all requests, and rejects those that don't meet validation requirements. Limited risk and exposure. The gatekeeper doesn't have access to the credentials or keys used by the trusted host to access storage and services. If the gatekeeper is compromised, the attacker doesn't get access to these credentials or keys. 
Appropriate security. The gatekeeper runs in a limited privilege mode, while the rest of the application runs in the full trust mode required to access storage and services. If the gatekeeper is compromised, it can't directly access the application services or data. This pattern acts like a firewall in a typical network topography. It allows the gatekeeper to examine requests and make a decision about whether to pass the request on to the trusted host (sometimes called the keymaster) that performs the required tasks. This decision typically requires the gatekeeper to validate and sanitize the request content before passing it on to the trusted host. Issues and considerations Consider the following points when deciding how to implement this pattern: Ensure that the trusted hosts the gatekeeper passes requests to expose only internal or protected endpoints, and connect only to the gatekeeper. The trusted hosts shouldn't expose any external endpoints or interfaces. The gatekeeper must run in a limited privilege mode. Typically this means running the gatekeeper and the trusted host in separate hosted services or virtual machines. The gatekeeper shouldn't perform any processing related to the application or services, or access any data. Its function is purely to validate and sanitize requests. The trusted hosts might need to perform additional validation of requests, but the core validation should be performed by the gatekeeper. Use a secure communication channel (HTTPS, SSL, or TLS ) between the gatekeeper and the trusted hosts or tasks where this is possible. However, some hosting environments don't support HTTPS on internal endpoints. Adding the extra layer to the application to implement the gatekeeper pattern is likely to have some impact on performance due to the additional processing and network communication it requires. The gatekeeper instance could be a single point of failure. To minimize the impact of a failure, consider deploying additional instances and using an autoscaling mechanism to ensure capacity to maintain availability. When to use this pattern This pattern is useful for: Applications that handle sensitive information, expose services that must have a high degree of protection from malicious attacks, or perform mission-critical operations that shouldn't be disrupted. Distributed applications where it's necessary to perform request validation separately from the main tasks, or to centralize this validation to simplify maintenance and administration. Example In a cloud-hosted scenario, this pattern can be implemented by decoupling the gatekeeper role or virtual machine from the trusted roles and services in an application. Do this by using an internal endpoint, a queue, or storage as an intermediate communication mechanism. The figure illustrates using an internal endpoint. Related patterns The Valet Key pattern might also be relevant when implementing the Gatekeeper pattern. When communicating between the Gatekeeper and trusted roles it's good practice to enhance security by using keys or tokens that limit permissions for accessing resources. Describes how to use a token or key that provides clients with restricted direct access to a specific resource or service. Gateway Aggregation pattern 3/13/2019 • 3 minutes to read • Edit Online Use a gateway to aggregate multiple individual requests into a single request. This pattern is useful when a client must make multiple calls to different backend systems to perform an operation. 
Context and problem To perform a single task, a client may have to make multiple calls to various backend services. An application that relies on many services to perform a task must expend resources on each request. When any new feature or service is added to the application, additional requests are needed, further increasing resource requirements and network calls. This chattiness between a client and a backend can adversely impact the performance and scale of the application. Microservice architectures have made this problem more common, as applications built around many smaller services naturally have a higher amount of cross-service calls. In the following diagram, the client sends requests to each service (1,2,3). Each service processes the request and sends the response back to the application (4,5,6). Over a cellular network with typically high latency, using individual requests in this manner is inefficient and could result in broken connectivity or incomplete requests. While each request may be done in parallel, the application must send, wait, and process data for each request, all on separate connections, increasing the chance of failure. Solution Use a gateway to reduce chattiness between the client and the services. The gateway receives client requests, dispatches requests to the various backend systems, and then aggregates the results and sends them back to the requesting client. This pattern can reduce the number of requests that the application makes to backend services, and improve application performance over high-latency networks. In the following diagram, the application sends a request to the gateway (1). The request contains a package of additional requests. The gateway decomposes these and processes each request by sending it to the relevant service (2). Each service returns a response to the gateway (3). The gateway combines the responses from each service and sends the response to the application (4). The application makes a single request and receives only a single response from the gateway. Issues and considerations The gateway should not introduce service coupling across the backend services. The gateway should be located near the backend services to reduce latency as much as possible. The gateway service may introduce a single point of failure. Ensure the gateway is properly designed to meet your application's availability requirements. The gateway may introduce a bottleneck. Ensure the gateway has adequate performance to handle load and can be scaled to meet your anticipated growth. Perform load testing against the gateway to ensure you don't introduce cascading failures for services. Implement a resilient design, using techniques such as bulkheads, circuit breaking, retry, and timeouts. If one or more service calls takes too long, it may be acceptable to timeout and return a partial set of data. Consider how your application will handle this scenario. Use asynchronous I/O to ensure that a delay at the backend doesn't cause performance issues in the application. Implement distributed tracing using correlation IDs to track each individual call. Monitor request metrics and response sizes. Consider returning cached data as a failover strategy to handle failures. Instead of building aggregation into the gateway, consider placing an aggregation service behind the gateway. Request aggregation will likely have different resource requirements than other services in the gateway and may impact the gateway's routing and offloading functionality. 
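To illustrate the fan-out and aggregation flow described in the solution above, before the NGINX example later in this pattern, the following C# sketch dispatches a request to several backend services in parallel and returns whatever responses arrive in time. It is a simplified, hypothetical illustration rather than part of this pattern's sample; the timeout value, the partial-failure handling, and the backend URLs supplied by the caller are all assumptions.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

public static class GatewayAggregator
{
    private static readonly HttpClient client = new HttpClient
    {
        // Time out slow backend calls so one service can't stall the whole aggregate response.
        Timeout = TimeSpan.FromSeconds(5)
    };

    // Fans one client request out to several backend services in parallel and
    // aggregates the results into a single response for the caller.
    public static async Task<IReadOnlyDictionary<string, string>> AggregateAsync(IEnumerable<string> backendUrls)
    {
        var calls = backendUrls.ToDictionary(url => url, url => CallBackendAsync(url));
        await Task.WhenAll(calls.Values);
        return calls.ToDictionary(kv => kv.Key, kv => kv.Value.Result);
    }

    private static async Task<string> CallBackendAsync(string url)
    {
        try
        {
            using (var response = await client.GetAsync(url))
            {
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            }
        }
        catch (Exception ex) when (ex is HttpRequestException || ex is TaskCanceledException)
        {
            // Returning null for this service keeps the aggregate response partial
            // rather than failing the whole operation.
            return null;
        }
    }
}

A caller might invoke it as shown below, where the URLs are placeholders for real backend endpoints.

var results = await GatewayAggregator.AggregateAsync(new[] { "http://service1/data", "http://service2/data" });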
When to use this pattern
Use this pattern when:
A client needs to communicate with multiple backend services to perform an operation.
The client may use networks with significant latency, such as cellular networks.
This pattern may not be suitable when:
You want to reduce the number of calls between a client and a single service across multiple operations. In that scenario, it may be better to add a batch operation to the service.
The client or application is located near the backend services and latency is not a significant factor.

Example
The following example illustrates how to create a simple gateway aggregation NGINX service using Lua.

worker_processes 4;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;

        location = /batch {
            content_by_lua '
                ngx.req.read_body()

                -- read json body content
                local cjson = require "cjson"
                local batch = cjson.decode(ngx.req.get_body_data())["batch"]

                -- create capture_multi table
                local requests = {}
                for i, item in ipairs(batch) do
                    table.insert(requests, {item.relative_url, { method = ngx.HTTP_GET}})
                end

                -- execute batch requests in parallel
                local results = {}
                local resps = { ngx.location.capture_multi(requests) }
                for i, res in ipairs(resps) do
                    table.insert(results, {status = res.status, body = cjson.decode(res.body), header = res.header})
                end

                ngx.say(cjson.encode({results = results}))
            ';
        }

        location = /service1 {
            default_type application/json;
            echo '{"attr1":"val1"}';
        }

        location = /service2 {
            default_type application/json;
            echo '{"attr2":"val2"}';
        }
    }
}

Related guidance
Backends for Frontends pattern
Gateway Offloading pattern
Gateway Routing pattern

Gateway Offloading pattern
3/13/2019 • 3 minutes to read • Edit Online

Offload shared or specialized service functionality to a gateway proxy. This pattern can simplify application development by moving shared service functionality, such as the use of SSL certificates, from other parts of the application into the gateway.

Context and problem
Some features are commonly used across multiple services, and these features require configuration, management, and maintenance. A shared or specialized service that is distributed with every application deployment increases the administrative overhead and increases the likelihood of deployment error. Any updates to a shared feature must be deployed across all services that share that feature. Properly handling security issues (token validation, encryption, SSL certificate management) and other complex tasks can require team members to have highly specialized skills. For example, a certificate needed by an application must be configured and deployed on all application instances. With each new deployment, the certificate must be managed to ensure that it does not expire. Any common certificate that is due to expire must be updated, tested, and verified on every application deployment. Other common services such as authentication, authorization, logging, monitoring, or throttling can be difficult to implement and manage across a large number of deployments. It may be better to consolidate this type of functionality, in order to reduce overhead and the chance of errors.

Solution
Offload some features into an API gateway, particularly cross-cutting concerns such as certificate management, authentication, SSL termination, monitoring, protocol translation, or throttling. The following diagram shows an API gateway that terminates inbound SSL connections. It requests data on behalf of the original requestor from any HTTP server upstream of the API gateway.
Benefits of this pattern include:
Simplify the development of services by removing the need to distribute and maintain supporting resources, such as web server certificates and configuration for secure websites. Simpler configuration results in easier management and scalability and makes service upgrades simpler.
Allow dedicated teams to implement features that require specialized expertise, such as security. This allows your core team to focus on the application functionality, leaving these specialized but cross-cutting concerns to the relevant experts.
Provide some consistency for request and response logging and monitoring. Even if a service is not correctly instrumented, the gateway can be configured to ensure a minimum level of monitoring and logging.

Issues and considerations
Ensure the API gateway is highly available and resilient to failure. Avoid single points of failure by running multiple instances of your API gateway.
Ensure the gateway is designed for the capacity and scaling requirements of your application and endpoints. Make sure the gateway does not become a bottleneck for the application and is sufficiently scalable.
Only offload features that are used by the entire application, such as security or data transfer.
Business logic should never be offloaded to the API gateway.
If you need to track transactions, consider generating correlation IDs for logging purposes.

When to use this pattern
Use this pattern when:
An application deployment has a shared concern such as SSL certificates or encryption.
A feature that is common across application deployments that may have different resource requirements, such as memory resources, storage capacity, or network connections.
You wish to move the responsibility for issues such as network security, throttling, or other network boundary concerns to a more specialized team.
This pattern may not be suitable if it introduces coupling across services.

Example
Using Nginx as the SSL offload appliance, the following configuration terminates an inbound SSL connection and distributes the connection to one of three upstream HTTP servers.

upstream iis {
    server 10.3.0.10 max_fails=3 fail_timeout=15s;
    server 10.3.0.20 max_fails=3 fail_timeout=15s;
    server 10.3.0.30 max_fails=3 fail_timeout=15s;
}

server {
    listen 443;
    ssl on;
    ssl_certificate /etc/nginx/ssl/domain.cer;
    ssl_certificate_key /etc/nginx/ssl/domain.key;

    location / {
        set $targ iis;
        proxy_pass http://$targ;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}

On Azure, this can be achieved by setting up SSL termination on Application Gateway.

Related guidance
Backends for Frontends pattern
Gateway Aggregation pattern
Gateway Routing pattern

Gateway Routing pattern
3/13/2019 • 3 minutes to read • Edit Online

Route requests to multiple services using a single endpoint. This pattern is useful when you wish to expose multiple services on a single endpoint and route to the appropriate service based on the request.

Context and problem
When a client needs to consume multiple services, setting up a separate endpoint for each service and having the client manage each endpoint can be challenging. For example, an e-commerce application might provide services such as search, reviews, cart, checkout, and order history. Each service has a different API that the client must interact with, and the client must know about each endpoint in order to connect to the services.
If an API changes, the client must be updated as well. If you refactor a service into two or more separate services, the code must change in both the service and the client. Solution Place a gateway in front of a set of applications, services, or deployments. Use application Layer 7 routing to route the request to the appropriate instances. With this pattern, the client application only needs to know about and communicate with a single endpoint. If a service is consolidated or decomposed, the client does not necessarily require updating. It can continue making requests to the gateway, and only the routing changes. A gateway also lets you abstract backend services from the clients, allowing you to keep client calls simple while enabling changes in the backend services behind the gateway. Client calls can be routed to whatever service or services need to handle the expected client behavior, allowing you to add, split, and reorganize services behind the gateway without changing the client. This pattern can also help with deployment, by allowing you to manage how updates are rolled out to users. When a new version of your service is deployed, it can be deployed in parallel with the existing version. Routing lets you control what version of the service is presented to the clients, giving you the flexibility to use various release strategies, whether incremental, parallel, or complete rollouts of updates. Any issues discovered after the new service is deployed can be quickly reverted by making a configuration change at the gateway, without affecting clients. Issues and considerations The gateway service may introduce a single point of failure. Ensure it is properly designed to meet your availability requirements. Consider resiliency and fault tolerance capabilities when implementing. The gateway service may introduce a bottleneck. Ensure the gateway has adequate performance to handle load and can easily scale in line with your growth expectations. Perform load testing against the gateway to ensure you don't introduce cascading failures for services. Gateway routing is level 7. It can be based on IP, port, header, or URL. When to use this pattern Use this pattern when: A client needs to consume multiple services that can be accessed behind a gateway. You wish to simplify client applications by using a single endpoint. You need to route requests from externally addressable endpoints to internal virtual endpoints, such as exposing ports on a VM to cluster virtual IP addresses. This pattern may not be suitable when you have a simple application that uses only one or two services. Example Using Nginx as the router, the following is a simple example configuration file for a server that routes requests for applications residing on different virtual directories to different machines at the back end. server { listen 80; server_name domain.com; location /app1 { proxy_pass http://10.0.3.10:80; } location /app2 { proxy_pass http://10.0.3.20:80; } location /app3 { proxy_pass http://10.0.3.30:80; } } On Azure, multiple services can be set up behind an Application Gateway instance, which provides layer-7 routing. Related guidance Backends for Frontends pattern Gateway Aggregation pattern Gateway Offloading pattern Health Endpoint Monitoring pattern 3/13/2019 • 13 minutes to read • Edit Online Implement functional checks in an application that external tools can access through exposed endpoints at regular intervals. This can help to verify that applications and services are performing correctly. 
Context and problem It's a good practice, and often a business requirement, to monitor web applications and back-end services, to ensure they're available and performing correctly. However, it's more difficult to monitor services running in the cloud than it is to monitor on-premises services. For example, you don't have full control of the hosting environment, and the services typically depend on other services provided by platform vendors and others. There are many factors that affect cloud-hosted applications such as network latency, the performance and availability of the underlying compute and storage systems, and the network bandwidth between them. The service can fail entirely or partially due to any of these factors. Therefore, you must verify at regular intervals that the service is performing correctly to ensure the required level of availability, which might be part of your service level agreement (SLA). Solution Implement health monitoring by sending requests to an endpoint on the application. The application should perform the necessary checks, and return an indication of its status. A health monitoring check typically combines two factors: The checks (if any) performed by the application or service in response to the request to the health verification endpoint. Analysis of the results by the tool or framework that performs the health verification check. The response code indicates the status of the application and, optionally, any components or services it uses. The latency or response time check is performed by the monitoring tool or framework. The figure provides an overview of the pattern. Other checks that might be carried out by the health monitoring code in the application include: Checking cloud storage or a database for availability and response time. Checking other resources or services located in the application, or located elsewhere but used by the application. Services and tools are available that monitor web applications by submitting a request to a configurable set of endpoints, and evaluating the results against a set of configurable rules. It's relatively easy to create a service endpoint whose sole purpose is to perform some functional tests on the system. Typical checks that can be performed by the monitoring tools include: Validating the response code. For example, an HTTP response of 200 (OK) indicates that the application responded without error. The monitoring system might also check for other response codes to give more comprehensive results. Checking the content of the response to detect errors, even when a 200 (OK) status code is returned. This can detect errors that affect only a section of the returned web page or service response. For example, checking the title of a page or looking for a specific phrase that indicates the correct page was returned. Measuring the response time, which indicates a combination of the network latency and the time that the application took to execute the request. An increasing value can indicate an emerging problem with the application or network. Checking resources or services located outside the application, such as a content delivery network used by the application to deliver content from global caches. Checking for expiration of SSL certificates. Measuring the response time of a DNS lookup for the URL of the application to measure DNS latency and DNS failures. Validating the URL returned by the DNS lookup to ensure correct entries. 
This can help to avoid malicious request redirection through a successful attack on the DNS server. It's also useful, where possible, to run these checks from different on-premises or hosted locations to measure and compare response times. Ideally you should monitor applications from locations that are close to customers to get an accurate view of the performance from each location. In addition to providing a more robust checking mechanism, the results can help you decide on the deployment location for the application—and whether to deploy it in more than one datacenter. Tests should also be run against all the service instances that customers use to ensure the application is working correctly for all customers. For example, if customer storage is spread across more than one storage account, the monitoring process should check all of these. Issues and considerations Consider the following points when deciding how to implement this pattern: How to validate the response. For example, is just a single 200 (OK) status code sufficient to verify the application is working correctly? While this provides the most basic measure of application availability, and is the minimum implementation of this pattern, it provides little information about the operations, trends, and possible upcoming issues in the application. Make sure that the application correctly returns a 200 (OK) only when the target resource is found and processed. In some scenarios, such as when using a master page to host the target web page, the server sends back a 200 (OK) status code instead of a 404 (Not Found) code, even when the target content page was not found. The number of endpoints to expose for an application. One approach is to expose at least one endpoint for the core services that the application uses and another for lower priority services, allowing different levels of importance to be assigned to each monitoring result. Also consider exposing more endpoints, such as one for each core service, for additional monitoring granularity. For example, a health verification check might check the database, storage, and an external geocoding service that an application uses, with each requiring a different level of uptime and response time. The application could still be healthy if the geocoding service, or some other background task, is unavailable for a few minutes. Whether to use the same endpoint for monitoring as is used for general access, but to a specific path designed for health verification checks, for example, /HealthCheck/{GUID }/ on the general access endpoint. This allows some functional tests in the application to be run by the monitoring tools, such as adding a new user registration, signing in, and placing a test order, while also verifying that the general access endpoint is available. The type of information to collect in the service in response to monitoring requests, and how to return this information. Most existing tools and frameworks look only at the HTTP status code that the endpoint returns. To return and validate additional information, you might have to create a custom monitoring utility or service. How much information to collect. Performing excessive processing during the check can overload the application and impact other users. The time it takes might exceed the timeout of the monitoring system so it marks the application as unavailable. 
Most applications include instrumentation such as error handlers and performance counters that log performance and detailed error information, this might be sufficient instead of returning additional information from a health verification check. Caching the endpoint status. It could be expensive to run the health check too frequently. If the health status is reported through a dashboard, for example, you don't want every request from the dashboard to trigger a health check. Instead, periodically check the system health and cache the status. Expose an endpoint that returns the cached status. How to configure security for the monitoring endpoints to protect them from public access, which might expose the application to malicious attacks, risk the exposure of sensitive information, or attract denial of service (DoS ) attacks. Typically this should be done in the application configuration so that it can be updated easily without restarting the application. Consider using one or more of the following techniques: Secure the endpoint by requiring authentication. You can do this by using an authentication security key in the request header or by passing credentials with the request, provided that the monitoring service or tool supports authentication. Use an obscure or hidden endpoint. For example, expose the endpoint on a different IP address to that used by the default application URL, configure the endpoint on a nonstandard HTTP port, and/or use a complex path to the test page. You can usually specify additional endpoint addresses and ports in the application configuration, and add entries for these endpoints to the DNS server if required to avoid having to specify the IP address directly. Expose a method on an endpoint that accepts a parameter such as a key value or an operation mode value. Depending on the value supplied for this parameter, when a request is received the code can perform a specific test or set of tests, or return a 404 (Not Found) error if the parameter value isn't recognized. The recognized parameter values could be set in the application configuration. DoS attacks are likely to have less impact on a separate endpoint that performs basic functional tests without compromising the operation of the application. Ideally, avoid using a test that might expose sensitive information. If you must return information that might be useful to an attacker, consider how you'll protect the endpoint and the data from unauthorized access. In this case just relying on obscurity isn't enough. You should also consider using an HTTPS connection and encrypting any sensitive data, although this will increase the load on the server. How to access an endpoint that's secured using authentication. Not all tools and frameworks can be configured to include credentials with the health verification request. For example, Microsoft Azure builtin health verification features can't provide authentication credentials. Some third-party alternatives are Pingdom, Panopta, NewRelic, and Statuscake. How to ensure that the monitoring agent is performing correctly. One approach is to expose an endpoint that simply returns a value from the application configuration or a random value that can be used to test the agent. Also ensure that the monitoring system performs checks on itself, such as a self-test and built-in test, to avoid it issuing false positive results. When to use this pattern This pattern is useful for: Monitoring websites and web applications to verify availability. 
Monitoring websites and web applications to check for correct operation. Monitoring middle-tier or shared services to detect and isolate a failure that could disrupt other applications. Complementing existing instrumentation in the application, such as performance counters and error handlers. Health verification checking doesn't replace the requirement for logging and auditing in the application. Instrumentation can provide valuable information for an existing framework that monitors counters and error logs to detect failures or other issues. However, it can't provide information if the application is unavailable. Example The following code examples, taken from the HealthCheckController class (a sample that demonstrates this pattern is available on GitHub), demonstrates exposing an endpoint for performing a range of health checks. The CoreServices method, shown below in C#, performs a series of checks on services used in the application. If all of the tests run without error, the method returns a 200 (OK) status code. If any of the tests raises an exception, the method returns a 500 (Internal Error) status code. The method could optionally return additional information when an error occurs, if the monitoring tool or framework is able to use it. public ActionResult CoreServices() { try { // Run a simple check to ensure the database is available. DataStore.Instance.CoreHealthCheck(); // Run a simple check on our external service. MyExternalService.Instance.CoreHealthCheck(); } catch (Exception ex) { Trace.TraceError("Exception in basic health check: {0}", ex.Message); // This can optionally return different status codes based on the exception. // Optionally it could return more details about the exception. // The additional information could be used by administrators who access the // endpoint with a browser, or using a ping utility that can display the // additional information. return new HttpStatusCodeResult((int)HttpStatusCode.InternalServerError); } return new HttpStatusCodeResult((int)HttpStatusCode.OK); } The ObscurePath method shows how you can read a path from the application configuration and use it as the endpoint for tests. This example, in C#, also shows how you can accept an ID as a parameter and use it to check for valid requests. public ActionResult ObscurePath(string id) { // The id could be used as a simple way to obscure or hide the endpoint. // The id to match could be retrieved from configuration and, if matched, // perform a specific set of tests and return the result. If not matched it // could return a 404 (Not Found) status. // The obscure path can be set through configuration to hide the endpoint. var hiddenPathKey = CloudConfigurationManager.GetSetting("Test.ObscurePath"); // If the value passed does not match that in configuration, return 404 (Not Found). if (!string.Equals(id, hiddenPathKey)) { return new HttpStatusCodeResult((int)HttpStatusCode.NotFound); } // Else continue and run the tests... // Return results from the core services test. return this.CoreServices(); } The TestResponseFromConfig method shows how you can expose an endpoint that performs a check for a specified configuration setting value. public ActionResult TestResponseFromConfig() { // Health check that returns a response code set in configuration for testing. 
var returnStatusCodeSetting = CloudConfigurationManager.GetSetting( "Test.ReturnStatusCode"); int returnStatusCode; if (!int.TryParse(returnStatusCodeSetting, out returnStatusCode)) { returnStatusCode = (int)HttpStatusCode.OK; } return new HttpStatusCodeResult(returnStatusCode); } Monitoring endpoints in Azure hosted applications Some options for monitoring endpoints in Azure applications are: Use the built-in monitoring features of Azure. Use a third-party service or a framework such as Microsoft System Center Operations Manager. Create a custom utility or a service that runs on your own or on a hosted server. Even though Azure provides a reasonably comprehensive set of monitoring options, you can use additional services and tools to provide extra information. Azure Management Services provides a built-in monitoring mechanism for alert rules. The alerts section of the management services page in the Azure portal allows you to configure up to ten alert rules per subscription for your services. These rules specify a condition and a threshold value for a service such as CPU load, or the number of requests or errors per second, and the service can automatically send email notifications to addresses you define in each rule. The conditions you can monitor vary depending on the hosting mechanism you choose for your application (such as Web Sites, Cloud Services, Virtual Machines, or Mobile Services), but all of these include the ability to create an alert rule that uses a web endpoint you specify in the settings for your service. This endpoint should respond in a timely way so that the alert system can detect that the application is operating correctly. Read more information about creating alert notifications. If you host your application in Azure Cloud Services web and worker roles or Virtual Machines, you can take advantage of one of the built-in services in Azure called Traffic Manager. Traffic Manager is a routing and loadbalancing service that can distribute requests to specific instances of your Cloud Services hosted application based on a range of rules and settings. In addition to routing requests, Traffic Manager pings a URL, port, and relative path that you specify on a regular basis to determine which instances of the application defined in its rules are active and are responding to requests. If it detects a status code 200 (OK), it marks the application as available. Any other status code causes Traffic Manager to mark the application as offline. You can view the status in the Traffic Manager console, and configure the rule to reroute requests to other instances of the application that are responding. However, Traffic Manager will only wait ten seconds to receive a response from the monitoring URL. Therefore, you should ensure that your health verification code executes in this time, allowing for network latency for the round trip from Traffic Manager to your application and back again. Read more information about using Traffic Manager to monitor your applications. Traffic Manager is also discussed in Multiple Datacenter Deployment Guidance. Related guidance The following guidance can be useful when implementing this pattern: Instrumentation and Telemetry Guidance. Checking the health of services and components is typically done by probing, but it's also useful to have information in place to monitor application performance and detect events that occur at runtime. This data can be transmitted back to monitoring tools as additional information for health monitoring. 
Instrumentation and Telemetry Guidance explores gathering remote diagnostics information that's collected by instrumentation in applications. Receiving alert notifications. This pattern includes a downloadable sample application. Index Table pattern 3/13/2019 • 9 minutes to read • Edit Online Create indexes over the fields in data stores that are frequently referenced by queries. This pattern can improve query performance by allowing applications to more quickly locate the data to retrieve from a data store. Context and problem Many data stores organize the data for a collection of entities using the primary key. An application can use this key to locate and retrieve data. The figure shows an example of a data store holding customer information. The primary key is the Customer ID. The figure shows customer information organized by the primary key (Customer ID ). While the primary key is valuable for queries that fetch data based on the value of this key, an application might not be able to use the primary key if it needs to retrieve data based on some other field. In the customers example, an application can't use the Customer ID primary key to retrieve customers if it queries data solely by referencing the value of some other attribute, such as the town in which the customer is located. To perform a query such as this, the application might have to fetch and examine every customer record, which could be a slow process. Many relational database management systems support secondary indexes. A secondary index is a separate data structure that's organized by one or more nonprimary (secondary) key fields, and it indicates where the data for each indexed value is stored. The items in a secondary index are typically sorted by the value of the secondary keys to enable fast lookup of data. These indexes are usually maintained automatically by the database management system. You can create as many secondary indexes as you need to support the different queries that your application performs. For example, in a Customers table in a relational database where the Customer ID is the primary key, it's beneficial to add a secondary index over the town field if the application frequently looks up customers by the town where they reside. However, although secondary indexes are common in relational systems, most NoSQL data stores used by cloud applications don't provide an equivalent feature. Solution If the data store doesn't support secondary indexes, you can emulate them manually by creating your own index tables. An index table organizes the data by a specified key. Three strategies are commonly used for structuring an index table, depending on the number of secondary indexes that are required and the nature of the queries that an application performs. The first strategy is to duplicate the data in each index table but organize it by different keys (complete denormalization). The next figure shows index tables that organize the same customer information by Town and LastName. This strategy is appropriate if the data is relatively static compared to the number of times it's queried using each key. If the data is more dynamic, the processing overhead of maintaining each index table becomes too large for this approach to be useful. Also, if the volume of data is very large, the amount of space required to store the duplicate data is significant. 
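As a rough illustration of this first strategy, the following sketch duplicates the same customer entities into two index tables, one organized by Town and one by LastName. The Customer type and the in-memory dictionaries are hypothetical stand-ins for real table storage; the point is the shape of the write path, where every insert touches every index table.

using System.Collections.Generic;

public class Customer
{
    public string CustomerId { get; set; }
    public string Town { get; set; }
    public string LastName { get; set; }
}

public class DenormalizedCustomerIndexes
{
    // Each index table holds a full copy of the entities, organized by a
    // different secondary key (complete denormalization).
    private readonly Dictionary<string, List<Customer>> byTown = new Dictionary<string, List<Customer>>();
    private readonly Dictionary<string, List<Customer>> byLastName = new Dictionary<string, List<Customer>>();

    public void Insert(Customer customer)
    {
        // Every write must update every index table, which is why this
        // strategy suits data that is read far more often than it changes.
        Add(byTown, customer.Town, customer);
        Add(byLastName, customer.LastName, customer);
    }

    public IReadOnlyList<Customer> FindByTown(string town) =>
        byTown.TryGetValue(town, out var matches) ? matches : new List<Customer>();

    private static void Add(Dictionary<string, List<Customer>> index, string key, Customer customer)
    {
        if (!index.TryGetValue(key, out var list))
        {
            list = new List<Customer>();
            index[key] = list;
        }
        list.Add(customer);
    }
}

A real implementation would use separate physical tables or partitions rather than in-memory collections, but the write amplification is the same.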
The second strategy is to create normalized index tables organized by different keys and reference the original data by using the primary key rather than duplicating it, as shown in the following figure. The original data is called a fact table. This technique saves space and reduces the overhead of maintaining duplicate data. The disadvantage is that an application has to perform two lookup operations to find data using a secondary key. It has to find the primary key for the data in the index table, and then use the primary key to look up the data in the fact table. The third strategy is to create partially normalized index tables organized by different keys that duplicate frequently retrieved fields. Reference the fact table to access less frequently accessed fields. The next figure shows how commonly accessed data is duplicated in each index table. With this strategy, you can strike a balance between the first two approaches. The data for common queries can be retrieved quickly by using a single lookup, while the space and maintenance overhead isn't as significant as duplicating the entire data set. If an application frequently queries data by specifying a combination of values (for example, “Find all customers that live in Redmond and that have a last name of Smith”), you could implement the keys to the items in the index table as a concatenation of the Town attribute and the LastName attribute. The next figure shows an index table based on composite keys. The keys are sorted by Town, and then by LastName for records that have the same value for Town. Index tables can speed up query operations over sharded data, and are especially useful where the shard key is hashed. The next figure shows an example where the shard key is a hash of the Customer ID. The index table can organize data by the nonhashed value (Town and LastName), and provide the hashed shard key as the lookup data. This can save the application from repeatedly calculating hash keys (an expensive operation) if it needs to retrieve data that falls within a range, or it needs to fetch data in order of the nonhashed key. For example, a query such as “Find all customers that live in Redmond” can be quickly resolved by locating the matching items in the index table, where they're all stored in a contiguous block. Then, follow the references to the customer data using the shard keys stored in the index table. Issues and considerations Consider the following points when deciding how to implement this pattern: The overhead of maintaining secondary indexes can be significant. You must analyze and understand the queries that your application uses. Only create index tables when they're likely to be used regularly. Don't create speculative index tables to support queries that an application doesn't perform, or performs only occasionally. Duplicating data in an index table can add significant overhead in storage costs and the effort required to maintain multiple copies of data. Implementing an index table as a normalized structure that references the original data requires an application to perform two lookup operations to find data. The first operation searches the index table to retrieve the primary key, and the second uses the primary key to fetch the data. If a system incorporates a number of index tables over very large data sets, it can be difficult to maintain consistency between index tables and the original data. It might be possible to design the application around the eventual consistency model. 
For example, to insert, update, or delete data, an application could post a message to a queue and let a separate task perform the operation and maintain the index tables that reference this data asynchronously. For more information about implementing eventual consistency, see the Data Consistency Primer. Microsoft Azure storage tables support transactional updates for changes made to data held in the same partition (referred to as entity group transactions). If you can store the data for a fact table and one or more index tables in the same partition, you can use this feature to help ensure consistency. Index tables might themselves be partitioned or sharded. When to use this pattern Use this pattern to improve query performance when an application frequently needs to retrieve data by using a key other than the primary (or shard) key. This pattern might not be useful when: Data is volatile. An index table can become out of date very quickly, making it ineffective or making the overhead of maintaining the index table greater than any savings made by using it. A field selected as the secondary key for an index table is nondiscriminating and can only have a small set of values (for example, gender). The balance of the data values for a field selected as the secondary key for an index table are highly skewed. For example, if 90% of the records contain the same value in a field, then creating and maintaining an index table to look up data based on this field might create more overhead than scanning sequentially through the data. However, if queries very frequently target values that lie in the remaining 10%, this index can be useful. You should understand the queries that your application is performing, and how frequently they're performed. Example Azure storage tables provide a highly scalable key/value data store for applications running in the cloud. Applications store and retrieve data values by specifying a key. The data values can contain multiple fields, but the structure of a data item is opaque to table storage, which simply handles a data item as an array of bytes. Azure storage tables also support sharding. The sharding key includes two elements, a partition key and a row key. Items that have the same partition key are stored in the same partition (shard), and the items are stored in row key order within a shard. Table storage is optimized for performing queries that fetch data falling within a contiguous range of row key values within a partition. If you're building cloud applications that store information in Azure tables, you should structure your data with this feature in mind. For example, consider an application that stores information about movies. The application frequently queries movies by genre (action, documentary, historical, comedy, drama, and so on). You could create an Azure table with partitions for each genre by using the genre as the partition key, and specifying the movie name as the row key, as shown in the next figure. This approach is less effective if the application also needs to query movies by starring actor. In this case, you can create a separate Azure table that acts as an index table. The partition key is the actor and the row key is the movie name. The data for each actor will be stored in separate partitions. If a movie stars more than one actor, the same movie will occur in multiple partitions. You can duplicate the movie data in the values held by each partition by adopting the first approach described in the Solution section above. 
However, it's likely that each movie will be replicated several times (once for each actor), so it might be more efficient to partially denormalize the data to support the most common queries (such as the names of the other actors) and enable an application to retrieve any remaining details by including the partition key necessary to find the complete information in the genre partitions. This approach is described by the third option in the Solution section. The next figure shows this approach. Related patterns and guidance The following patterns and guidance might also be relevant when implementing this pattern: Data Consistency Primer. An index table must be maintained as the data that it indexes changes. In the cloud, it might not be possible or appropriate to perform operations that update an index as part of the same transaction that modifies the data. In that case, an eventually consistent approach is more suitable. Provides information on the issues surrounding eventual consistency. Sharding pattern. The Index Table pattern is frequently used in conjunction with data partitioned by using shards. The Sharding pattern provides more information on how to divide a data store into a set of shards. Materialized View pattern. Instead of indexing data to support queries that summarize data, it might be more appropriate to create a materialized view of the data. Describes how to support efficient summary queries by generating prepopulated views over data. Leader Election pattern 3/13/2019 • 11 minutes to read • Edit Online Coordinate the actions performed by a collection of collaborating instances in a distributed application by electing one instance as the leader that assumes responsibility for managing the others. This can help to ensure that instances don't conflict with each other, cause contention for shared resources, or inadvertently interfere with the work that other instances are performing. Context and problem A typical cloud application has many tasks acting in a coordinated manner. These tasks could all be instances running the same code and requiring access to the same resources, or they might be working together in parallel to perform the individual parts of a complex calculation. The task instances might run separately for much of the time, but it might also be necessary to coordinate the actions of each instance to ensure that they don’t conflict, cause contention for shared resources, or accidentally interfere with the work that other task instances are performing. For example: In a cloud-based system that implements horizontal scaling, multiple instances of the same task could be running at the same time with each instance serving a different user. If these instances write to a shared resource, it's necessary to coordinate their actions to prevent each instance from overwriting the changes made by the others. If the tasks are performing individual elements of a complex calculation in parallel, the results need to be aggregated when they all complete. The task instances are all peers, so there isn't a natural leader that can act as the coordinator or aggregator. Solution A single task instance should be elected to act as the leader, and this instance should coordinate the actions of the other subordinate task instances. If all of the task instances are running the same code, they are each capable of acting as the leader. Therefore, the election process must be managed carefully to prevent two or more instances taking over the leader role at the same time. 
The system must provide a robust mechanism for selecting the leader. This method has to cope with events such as network outages or process failures. In many solutions, the subordinate task instances monitor the leader through some type of heartbeat method, or by polling. If the designated leader terminates unexpectedly, or a network failure makes the leader unavailable to the subordinate task instances, it's necessary for them to elect a new leader. There are several strategies for electing a leader among a set of tasks in a distributed environment, including: Selecting the task instance with the lowest-ranked instance or process ID. Racing to acquire a shared, distributed mutex. The first task instance that acquires the mutex is the leader. However, the system must ensure that, if the leader terminates or becomes disconnected from the rest of the system, the mutex is released to allow another task instance to become the leader. Implementing one of the common leader election algorithms such as the Bully Algorithm or the Ring Algorithm. These algorithms assume that each candidate in the election has a unique ID, and that it can communicate with the other candidates reliably. Issues and considerations Consider the following points when deciding how to implement this pattern: The process of electing a leader should be resilient to transient and persistent failures. It must be possible to detect when the leader has failed or has become otherwise unavailable (such as due to a communications failure). How quickly detection is needed is system dependent. Some systems might be able to function for a short time without a leader, during which a transient fault might be fixed. In other cases, it might be necessary to detect leader failure immediately and trigger a new election. In a system that implements horizontal autoscaling, the leader could be terminated if the system scales back and shuts down some of the computing resources. Using a shared, distributed mutex introduces a dependency on the external service that provides the mutex. The service constitutes a single point of failure. If it becomes unavailable for any reason, the system won't be able to elect a leader. Using a single dedicated process as the leader is a straightforward approach. However, if the process fails there could be a significant delay while it's restarted. The resulting latency can affect the performance and response times of other processes if they're waiting for the leader to coordinate an operation. Implementing one of the leader election algorithms manually provides the greatest flexibility for tuning and optimizing the code. When to use this pattern Use this pattern when the tasks in a distributed application, such as a cloud-hosted solution, need careful coordination and there's no natural leader. Avoid making the leader a bottleneck in the system. The purpose of the leader is to coordinate the work of the subordinate tasks, and it doesn't necessarily have to participate in this work itself—although it should be able to do so if the task isn't elected as the leader. This pattern might not be useful if: There's a natural leader or dedicated process that can always act as the leader. For example, it might be possible to implement a singleton process that coordinates the task instances. If this process fails or becomes unhealthy, the system can shut it down and restart it. The coordination between tasks can be achieved using a more lightweight method. 
For example, if several task instances simply need coordinated access to a shared resource, a better solution is to use optimistic or pessimistic locking to control access. A third-party solution is more appropriate. For example, the Microsoft Azure HDInsight service (based on Apache Hadoop) uses the services provided by Apache Zookeeper to coordinate the map and reduce tasks that collect and summarize data. Example The DistributedMutex project in the LeaderElection solution (a sample that demonstrates this pattern is available on GitHub) shows how to use a lease on an Azure Storage blob to provide a mechanism for implementing a shared, distributed mutex. This mutex can be used to elect a leader among a group of role instances in an Azure cloud service. The first role instance to acquire the lease is elected the leader, and remains the leader until it releases the lease or isn't able to renew the lease. Other role instances can continue to monitor the blob lease in case the leader is no longer available. A blob lease is an exclusive write lock over a blob. A single blob can be the subject of only one lease at any point in time. A role instance can request a lease over a specified blob, and it'll be granted the lease if no other role instance holds a lease over the same blob. Otherwise the request will throw an exception. To avoid a faulted role instance retaining the lease indefinitely, specify a lifetime for the lease. When this expires, the lease becomes available. However, while a role instance holds the lease it can request that the lease is renewed, and it'll be granted the lease for a further period of time. The role instance can continually repeat this process if it wants to retain the lease. For more information on how to lease a blob, see Lease Blob (REST API). The BlobDistributedMutex class in the C# example below contains the RunTaskWhenMutexAquired method that enables a role instance to attempt to acquire a lease over a specified blob. The details of the blob (the name, container, and storage account) are passed to the constructor in a BlobSettings object when the BlobDistributedMutex object is created (this object is a simple struct that is included in the sample code). The constructor also accepts a Task that references the code that the role instance should run if it successfully acquires the lease over the blob and is elected the leader. Note that the code that handles the low -level details of acquiring the lease is implemented in a separate helper class named BlobLeaseManager . public class BlobDistributedMutex { ... private readonly BlobSettings blobSettings; private readonly Func<CancellationToken, Task> taskToRunWhenLeaseAcquired; ... public BlobDistributedMutex(BlobSettings blobSettings, Func<CancellationToken, Task> taskToRunWhenLeaseAquired) { this.blobSettings = blobSettings; this.taskToRunWhenLeaseAquired = taskToRunWhenLeaseAquired; } public async Task RunTaskWhenMutexAcquired(CancellationToken token) { var leaseManager = new BlobLeaseManager(blobSettings); await this.RunTaskWhenBlobLeaseAcquired(leaseManager, token); } ... The RunTaskWhenMutexAquired method in the code sample above invokes the RunTaskWhenBlobLeaseAcquired method shown in the following code sample to actually acquire the lease. The RunTaskWhenBlobLeaseAcquired method runs asynchronously. If the lease is successfully acquired, the role instance has been elected the leader. The purpose of the taskToRunWhenLeaseAcquired delegate is to perform the work that coordinates the other role instances. 
If the lease isn't acquired, another role instance has been elected as the leader and the current role instance remains a subordinate. Note that the TryAcquireLeaseOrWait method is a helper method that uses the BlobLeaseManager object to acquire the lease. private async Task RunTaskWhenBlobLeaseAcquired( BlobLeaseManager leaseManager, CancellationToken token) { while (!token.IsCancellationRequested) { // Try to acquire the blob lease. // Otherwise wait for a short time before trying again. string leaseId = await this.TryAquireLeaseOrWait(leaseManager, token); if (!string.IsNullOrEmpty(leaseId)) { // Create a new linked cancellation token source so that if either the // original token is canceled or the lease can't be renewed, the // leader task can be canceled. using (var leaseCts = CancellationTokenSource.CreateLinkedTokenSource(new[] { token })) { // Run the leader task. var leaderTask = this.taskToRunWhenLeaseAquired.Invoke(leaseCts.Token); ... } } } ... } The task started by the leader also runs asynchronously. While this task is running, the RunTaskWhenBlobLeaseAquired method shown in the following code sample periodically attempts to renew the lease. This helps to ensure that the role instance remains the leader. In the sample solution, the delay between renewal requests is less than the time specified for the duration of the lease in order to prevent another role instance from being elected the leader. If the renewal fails for any reason, the task is canceled. If the lease fails to be renewed or the task is canceled (possibly as a result of the role instance shutting down), the lease is released. At this point, this or another role instance might be elected as the leader. The code extract below shows this part of the process. private async Task RunTaskWhenBlobLeaseAcquired( BlobLeaseManager leaseManager, CancellationToken token) { while (...) { ... if (...) { ... using (var leaseCts = ...) { ... // Keep renewing the lease in regular intervals. // If the lease can't be renewed, then the task completes. var renewLeaseTask = this.KeepRenewingLease(leaseManager, leaseId, leaseCts.Token); // When any task completes (either the leader task itself or when it // couldn't renew the lease) then cancel the other task. await CancelAllWhenAnyCompletes(leaderTask, renewLeaseTask, leaseCts); } } } } ... } The KeepRenewingLease method is another helper method that uses the BlobLeaseManager object to renew the lease. The CancelAllWhenAnyCompletes method cancels the tasks specified as the first two parameters. The following diagram illustrates using the BlobDistributedMutex class to elect a leader and run a task that coordinates operations. The following code example shows how to use the BlobDistributedMutex class in a worker role. This code acquires a lease over a blob named MyLeaderCoordinatorTask in the lease's container in development storage, and specifies that the code defined in the MyLeaderCoordinatorTask method should run if the role instance is elected the leader. var settings = new BlobSettings(CloudStorageAccount.DevelopmentStorageAccount, "leases", "MyLeaderCoordinatorTask"); var cts = new CancellationTokenSource(); var mutex = new BlobDistributedMutex(settings, MyLeaderCoordinatorTask); mutex.RunTaskWhenMutexAcquired(this.cts.Token); ... // Method that runs if the role instance is elected the leader private static async Task MyLeaderCoordinatorTask(CancellationToken token) { ... } Note the following points about the sample solution: The blob is a potential single point of failure. 
If the blob service becomes unavailable, or is inaccessible, the leader won't be able to renew the lease and no other role instance will be able to acquire the lease. In this case, no role instance will be able to act as the leader. However, the blob service is designed to be resilient, so complete failure of the blob service is considered to be extremely unlikely. If the task being performed by the leader stalls, the leader might continue to renew the lease, preventing any other role instance from acquiring the lease and taking over the leader role in order to coordinate tasks. In the real world, the health of the leader should be checked at frequent intervals. The election process is nondeterministic. You can't make any assumptions about which role instance will acquire the blob lease and become the leader. The blob used as the target of the blob lease shouldn't be used for any other purpose. If a role instance attempts to store data in this blob, this data won't be accessible unless the role instance is the leader and holds the blob lease. Related patterns and guidance The following guidance might also be relevant when implementing this pattern: This pattern has a downloadable sample application. Autoscaling Guidance. It's possible to start and stop instances of the task hosts as the load on the application varies. Autoscaling can help to maintain throughput and performance during times of peak processing. Compute Partitioning Guidance. This guidance describes how to allocate tasks to hosts in a cloud service in a way that helps to minimize running costs while maintaining the scalability, performance, availability, and security of the service. The Task-based Asynchronous Pattern. An example illustrating the Bully Algorithm. An example illustrating the Ring Algorithm. Apache Curator a client library for Apache ZooKeeper. The article Lease Blob (REST API) on MSDN. Materialized View pattern 3/13/2019 • 7 minutes to read • Edit Online Generate prepopulated views over the data in one or more data stores when the data isn't ideally formatted for required query operations. This can help support efficient querying and data extraction, and improve application performance. Context and problem When storing data, the priority for developers and data administrators is often focused on how the data is stored, as opposed to how it's read. The chosen storage format is usually closely related to the format of the data, requirements for managing data size and data integrity, and the kind of store in use. For example, when using NoSQL document store, the data is often represented as a series of aggregates, each containing all of the information for that entity. However, this can have a negative effect on queries. When a query only needs a subset of the data from some entities, such as a summary of orders for several customers without all of the order details, it must extract all of the data for the relevant entities in order to obtain the required information. Solution To support efficient querying, a common solution is to generate, in advance, a view that materializes the data in a format suited to the required results set. The Materialized View pattern describes generating prepopulated views of data in environments where the source data isn't in a suitable format for querying, where generating a suitable query is difficult, or where query performance is poor due to the nature of the data or the data store. 
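For instance, a minimal sketch of the idea in C# (the Order and ProductSalesSummary types are hypothetical): a background task periodically rebuilds a summary view from the source orders, so that queries read the prepopulated result instead of aggregating on demand.

using System.Collections.Generic;
using System.Linq;

public class Order
{
    public string ProductId { get; set; }
    public string CustomerId { get; set; }
    public decimal Total { get; set; }
}

public class ProductSalesSummary
{
    public string ProductId { get; set; }
    public decimal TotalSales { get; set; }
    public int CustomerCount { get; set; }
}

public static class SalesSummaryViewBuilder
{
    // Rebuilds the entire view from the source data. The view is disposable:
    // if it is lost or becomes stale, it can simply be regenerated by running
    // this method again.
    public static IList<ProductSalesSummary> Rebuild(IEnumerable<Order> orders)
    {
        return orders
            .GroupBy(o => o.ProductId)
            .Select(g => new ProductSalesSummary
            {
                ProductId = g.Key,
                TotalSales = g.Sum(o => o.Total),
                CustomerCount = g.Select(o => o.CustomerId).Distinct().Count()
            })
            .ToList();
    }
}

The rebuilt summary would then be written to whatever store serves the queries (a cache, a table, or a separate partition), either on a schedule or in response to change events.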
These materialized views, which only contain data required by a query, allow applications to quickly obtain the information they need. In addition to joining tables or combining data entities, materialized views can include the current values of calculated columns or data items, the results of combining values or executing transformations on the data items, and values specified as part of the query. A materialized view can even be optimized for just a single query. A key point is that a materialized view and the data it contains is completely disposable because it can be entirely rebuilt from the source data stores. A materialized view is never updated directly by an application, and so it's a specialized cache. When the source data for the view changes, the view must be updated to include the new information. You can schedule this to happen automatically, or when the system detects a change to the original data. In some cases it might be necessary to regenerate the view manually. The figure shows an example of how the Materialized View pattern might be used. Issues and considerations Consider the following points when deciding how to implement this pattern: How and when the view will be updated. Ideally it'll regenerate in response to an event indicating a change to the source data, although this can lead to excessive overhead if the source data changes rapidly. Alternatively, consider using a scheduled task, an external trigger, or a manual action to regenerate the view. In some systems, such as when using the Event Sourcing pattern to maintain a store of only the events that modified the data, materialized views are necessary. Prepopulating views by examining all events to determine the current state might be the only way to obtain information from the event store. If you're not using Event Sourcing, you need to consider whether a materialized view is helpful or not. Materialized views tend to be specifically tailored to one, or a small number of queries. If many queries are used, materialized views can result in unacceptable storage capacity requirements and storage cost. Consider the impact on data consistency when generating the view, and when updating the view if this occurs on a schedule. If the source data is changing at the point when the view is generated, the copy of the data in the view won't be fully consistent with the original data. Consider where you'll store the view. The view doesn't have to be located in the same store or partition as the original data. It can be a subset from a few different partitions combined. A view can be rebuilt if lost. Because of that, if the view is transient and is only used to improve query performance by reflecting the current state of the data, or to improve scalability, it can be stored in a cache or in a less reliable location. When defining a materialized view, maximize its value by adding data items or columns to it based on computation or transformation of existing data items, on values passed in the query, or on combinations of these values when appropriate. Where the storage mechanism supports it, consider indexing the materialized view to further increase performance. Most relational databases support indexing for views, as do big data solutions based on Apache Hadoop. When to use this pattern This pattern is useful when: Creating materialized views over data that's difficult to query directly, or where queries must be very complex to extract data that's stored in a normalized, semi-structured, or unstructured way. 
Creating temporary views that can dramatically improve query performance, or can act directly as source views or data transfer objects for the UI, for reporting, or for display. Supporting occasionally connected or disconnected scenarios where connection to the data store isn't always available. The view can be cached locally in this case. Simplifying queries and exposing data for experimentation in a way that doesn't require knowledge of the source data format. For example, by joining different tables in one or more databases, or one or more domains in NoSQL stores, and then formatting the data to fit its eventual use. Providing access to specific subsets of the source data that, for security or privacy reasons, shouldn't be generally accessible, open to modification, or fully exposed to users. Bridging different data stores, to take advantage of their individual capabilities. For example, using a cloud store that's efficient for writing as the reference data store, and a relational database that offers good query and read performance to hold the materialized views. This pattern isn't useful in the following situations: The source data is simple and easy to query. The source data changes very quickly, or can be accessed without using a view. In these cases, you should avoid the processing overhead of creating views. Consistency is a high priority. The views might not always be fully consistent with the original data. Example The following figure shows an example of using the Materialized View pattern to generate a summary of sales. Data in the Order, OrderItem, and Customer tables in separate partitions in an Azure storage account are combined to generate a view containing the total sales value for each product in the Electronics category, along with a count of the number of customers who made purchases of each item. Creating this materialized view requires complex queries. However, by exposing the query result as a materialized view, users can easily obtain the results and use them directly or incorporate them in another query. The view is likely to be used in a reporting system or dashboard, and can be updated on a scheduled basis such as weekly. Although this example utilizes Azure table storage, many relational database management systems also provide native support for materialized views. Related patterns and guidance The following patterns and guidance might also be relevant when implementing this pattern: Data Consistency Primer. The summary information in a materialized view has to be maintained so that it reflects the underlying data values. As the data values change, it might not be practical to update the summary data in real time, and instead you'll have to adopt an eventually consistent approach. Summarizes the issues surrounding maintaining consistency over distributed data, and describes the benefits and tradeoffs of different consistency models. Command and Query Responsibility Segregation (CQRS ) pattern. Use to update the information in a materialized view by responding to events that occur when the underlying data values change. Event Sourcing pattern. Use in conjunction with the CQRS pattern to maintain the information in a materialized view. When the data values a materialized view is based on are changed, the system can raise events that describe these changes and save them in an event store. Index Table pattern. 
The data in a materialized view is typically organized by a primary key, but queries might need to retrieve information from this view by examining data in other fields. Use to create secondary indexes over data sets for data stores that don't support native secondary indexes. Pipes and Filters pattern 3/13/2019 • 12 minutes to read • Edit Online Decompose a task that performs complex processing into a series of separate elements that can be reused. This can improve performance, scalability, and reusability by allowing task elements that perform the processing to be deployed and scaled independently. Context and problem An application is required to perform a variety of tasks of varying complexity on the information that it processes. A straightforward but inflexible approach to implementing an application is to perform this processing as a monolithic module. However, this approach is likely to reduce the opportunities for refactoring the code, optimizing it, or reusing it if parts of the same processing are required elsewhere within the application. The figure illustrates the issues with processing data using the monolithic approach. An application receives and processes data from two sources. The data from each source is processed by a separate module that performs a series of tasks to transform this data, before passing the result to the business logic of the application. Some of the tasks that the monolithic modules perform are functionally very similar, but the modules have been designed separately. The code that implements the tasks is closely coupled in a module, and has been developed with little or no thought given to reuse or scalability. However, the processing tasks performed by each module, or the deployment requirements for each task, could change as business requirements are updated. Some tasks might be compute intensive and could benefit from running on powerful hardware, while others might not require such expensive resources. Also, additional processing might be required in the future, or the order in which the tasks performed by the processing could change. A solution is required that addresses these issues, and increases the possibilities for code reuse. Solution Break down the processing required for each stream into a set of separate components (or filters), each performing a single task. By standardizing the format of the data that each component receives and sends, these filters can be combined together into a pipeline. This helps to avoid duplicating code, and makes it easy to remove, replace, or integrate additional components if the processing requirements change. The next figure shows a solution implemented using pipes and filters. The time it takes to process a single request depends on the speed of the slowest filter in the pipeline. One or more filters could be a bottleneck, especially if a large number of requests appear in a stream from a particular data source. A key advantage of the pipeline structure is that it provides opportunities for running parallel instances of slow filters, enabling the system to spread the load and improve throughput. The filters that make up a pipeline can run on different machines, enabling them to be scaled independently and take advantage of the elasticity that many cloud environments provide. A filter that is computationally intensive can run on high performance hardware, while other less demanding filters can be hosted on less expensive commodity hardware. 
The filters don't even have to be in the same data center or geographical location, which allows each element in a pipeline to run in an environment that is close to the resources it requires. The next figure shows an example applied to the pipeline for the data from Source 1. If the input and output of a filter are structured as a stream, it's possible to perform the processing for each filter in parallel. The first filter in the pipeline can start its work and output its results, which are passed directly on to the next filter in the sequence before the first filter has completed its work. Another benefit is the resiliency that this model can provide. If a filter fails or the machine it's running on is no longer available, the pipeline can reschedule the work that the filter was performing and direct this work to another instance of the component. Failure of a single filter doesn't necessarily result in failure of the entire pipeline. Using the Pipes and Filters pattern in conjunction with the Compensating Transaction pattern is an alternative approach to implementing distributed transactions. A distributed transaction can be broken down into separate, compensable tasks, each of which can be implemented by using a filter that also implements the Compensating Transaction pattern. The filters in a pipeline can be implemented as separate hosted tasks running close to the data that they maintain. Issues and considerations You should consider the following points when deciding how to implement this pattern: Complexity. The increased flexibility that this pattern provides can also introduce complexity, especially if the filters in a pipeline are distributed across different servers. Reliability. Use an infrastructure that ensures that data flowing between filters in a pipeline won't be lost. Idempotency. If a filter in a pipeline fails after receiving a message and the work is rescheduled to another instance of the filter, part of the work might have already been completed. If this work updates some aspect of the global state (such as information stored in a database), the same update could be repeated. A similar issue might occur if a filter fails after posting its results to the next filter in the pipeline, but before indicating that it's completed its work successfully. In these cases, the same work could be repeated by another instance of the filter, causing the same results to be posted twice. This could result in subsequent filters in the pipeline processing the same data twice. Therefore filters in a pipeline should be designed to be idempotent. For more information see Idempotency Patterns on Jonathan Oliver’s blog. Repeated messages. If a filter in a pipeline fails after posting a message to the next stage of the pipeline, another instance of the filter might be run, and it'll post a copy of the same message to the pipeline. This could cause two instances of the same message to be passed to the next filter. To avoid this, the pipeline should detect and eliminate duplicate messages. If you're implementing the pipeline by using message queues (such as Microsoft Azure Service Bus queues), the message queuing infrastructure might provide automatic duplicate message detection and removal. Context and state. In a pipeline, each filter essentially runs in isolation and shouldn't make any assumptions about how it was invoked. This means that each filter should be provided with sufficient context to perform its work. This context could include a large amount of state information. 
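To make the solution concrete before looking at when to apply it, the following sketch shows the core composition idea in C#: every filter accepts and returns the same standardized message type, so individual filters can be reordered, replaced, or removed without changing their neighbors. This is an illustrative, in-process sketch only; the PipelineMessage type and Filter delegate are hypothetical names, and a distributed implementation would pass messages between filters over an infrastructure such as the message queues used in the example that follows.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// A standardized message format that every filter receives and returns (hypothetical type).
public class PipelineMessage
{
    public string Body { get; set; }
    public Dictionary<string, object> Properties { get; } = new Dictionary<string, object>();
}

// Each filter is a single-purpose transformation from one message to another.
public delegate Task<PipelineMessage> Filter(PipelineMessage input);

public static class Pipeline
{
    // Combine individual filters into one processing pipeline. Because every filter
    // uses the same input and output type, filters can be reordered, removed, or
    // replaced without changing the surrounding code.
    public static Filter Compose(params Filter[] filters)
    {
        return async (message) =>
        {
            var current = message;
            foreach (var filter in filters)
            {
                current = await filter(current);
            }
            return current;
        };
    }

    // Example usage: two small filters combined into a single pipeline.
    public static async Task Demo()
    {
        var pipeline = Compose(
            msg => { msg.Body = msg.Body.Trim(); return Task.FromResult(msg); },
            msg => { msg.Properties["ProcessedAtUtc"] = DateTime.UtcNow; return Task.FromResult(msg); });

        var result = await pipeline(new PipelineMessage { Body = "  raw data  " });
        Console.WriteLine(result.Body); // Prints "raw data".
    }
}

In a cloud deployment, the in-memory hand-off between filters would be replaced by durable channels, which is what the Service Bus based example later in this pattern demonstrates.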
When to use this pattern Use this pattern when: The processing required by an application can easily be broken down into a set of independent steps. The processing steps performed by an application have different scalability requirements. It's possible to group filters that should scale together in the same process. For more information, see the Compute Resource Consolidation pattern. Flexibility is required to allow reordering of the processing steps performed by an application, or the capability to add and remove steps. The system can benefit from distributing the processing for steps across different servers. A reliable solution is required that minimizes the effects of failure in a step while data is being processed. This pattern might not be useful when: The processing steps performed by an application aren't independent, or they have to be performed together as part of the same transaction. The amount of context or state information required by a step makes this approach inefficient. It might be possible to persist state information to a database instead, but don't use this strategy if the additional load on the database causes excessive contention. Example You can use a sequence of message queues to provide the infrastructure required to implement a pipeline. An initial message queue receives unprocessed messages. A component implemented as a filter task listens for a message on this queue, performs its work, and then posts the transformed message to the next queue in the sequence. Another filter task can listen for messages on this queue, process them, post the results to another queue, and so on until the fully transformed data appears in the final message in the queue. The next figure illustrates implementing a pipeline using message queues. If you're building a solution on Azure you can use Service Bus queues to provide a reliable and scalable queuing mechanism. The ServiceBusPipeFilter class shown below in C# demonstrates how you can implement a filter that receives input messages from a queue, processes these messages, and posts the results to another queue. The ServiceBusPipeFilter class is defined in the PipesAndFilters.Shared project available from GitHub.

public class ServiceBusPipeFilter
{
    ...
    private readonly string inQueuePath;
    private readonly string outQueuePath;
    ...
    private QueueClient inQueue;
    private QueueClient outQueue;
    ...

    public ServiceBusPipeFilter(..., string inQueuePath, string outQueuePath = null)
    {
        ...
        this.inQueuePath = inQueuePath;
        this.outQueuePath = outQueuePath;
    }

    public void Start()
    {
        ...
        // Create the outbound filter queue if it doesn't exist.
        ...
        this.outQueue = QueueClient.CreateFromConnectionString(...);

        ...
        // Create the inbound and outbound queue clients.
        this.inQueue = QueueClient.CreateFromConnectionString(...);
    }

    public void OnPipeFilterMessageAsync(
        Func<BrokeredMessage, Task<BrokeredMessage>> asyncFilterTask, ...)
    {
        ...

        this.inQueue.OnMessageAsync(
            async (msg) =>
        {
            ...

            // Process the filter and send the output to the
            // next queue in the pipeline.
            var outMessage = await asyncFilterTask(msg);

            // Send the message from the filter processor
            // to the next queue in the pipeline.
            if (outQueue != null)
            {
                await outQueue.SendAsync(outMessage);
            }

            // Note: There's a chance that the same message could be sent twice
            // or that a message gets processed by an upstream or downstream
            // filter at the same time. This would happen in a situation where
            // processing of a message was completed, it was sent to the next
            // pipe/queue, and then failed to complete when using the PeekLock
            // method. Idempotent message processing and concurrency should be
            // considered in a real-world implementation.
        },
        options);
    }

    public async Task Close(TimeSpan timespan)
    {
        // Pause the processing threads.
        this.pauseProcessingEvent.Reset();

        // There's no clean approach for waiting for the threads to complete
        // the processing. This example simply stops any new processing, waits
        // for the existing thread to complete, then closes the message pump
        // and finally returns.
        Thread.Sleep(timespan);
        this.inQueue.Close();
        ...
    }

    ...
}

The Start method in the ServiceBusPipeFilter class connects to a pair of input and output queues, and the Close method disconnects from the input queue. The OnPipeFilterMessageAsync method performs the actual processing of messages; the asyncFilterTask parameter to this method specifies the processing to be performed. The OnPipeFilterMessageAsync method waits for incoming messages on the input queue, runs the code specified by the asyncFilterTask parameter over each message as it arrives, and posts the results to the output queue. The queues themselves are specified by the constructor. The sample solution implements filters in a set of worker roles. Each worker role can be scaled independently, depending on the complexity of the business processing that it performs or the resources required for processing. Additionally, multiple instances of each worker role can be run in parallel to improve throughput. The following code shows an Azure worker role named PipeFilterARoleEntry, defined in the PipeFilterA project in the sample solution.

public class PipeFilterARoleEntry : RoleEntryPoint
{
    ...
    private ServiceBusPipeFilter pipeFilterA;

    public override bool OnStart()
    {
        ...
        this.pipeFilterA = new ServiceBusPipeFilter(
            ...,
            Constants.QueueAPath,
            Constants.QueueBPath);

        this.pipeFilterA.Start();
        ...
    }

    public override void Run()
    {
        this.pipeFilterA.OnPipeFilterMessageAsync(async (msg) =>
        {
            // Clone the message and update it.
            // Properties set by the broker (Delivery count, enqueue time, ...)
            // aren't cloned and must be copied over if required.
            var newMsg = msg.Clone();

            await Task.Delay(500); // DOING WORK

            Trace.TraceInformation("Filter A processed message:{0} at {1}",
                msg.MessageId, DateTime.UtcNow);

            newMsg.Properties.Add(Constants.FilterAMessageKey, "Complete");

            return newMsg;
        });

        ...
    }

    ...
}

This role contains a ServiceBusPipeFilter object. The OnStart method in the role connects to the queues for receiving input messages and posting output messages (the names of the queues are defined in the Constants class). The Run method invokes the OnPipeFilterMessageAsync method to perform some processing on each message that's received (in this example, the processing is simulated by waiting for a short period of time). When processing is complete, a new message is constructed containing the results (in this case, the input message has a custom property added), and this message is posted to the output queue. The sample code contains another worker role named PipeFilterBRoleEntry in the PipeFilterB project. This role is similar to PipeFilterARoleEntry except that it performs different processing in the Run method. In the example solution, these two roles are combined to construct a pipeline; the output queue for the PipeFilterARoleEntry role is the input queue for the PipeFilterBRoleEntry role.
The sample solution also provides two additional roles named InitialSenderRoleEntry (in the InitialSender project) and FinalReceiverRoleEntry (in the FinalReceiver project). The InitialSenderRoleEntry role provides the initial message in the pipeline. The OnStart method connects to a single queue and the Run method posts a message to this queue. This queue is the input queue used by the PipeFilterARoleEntry role, so posting a message to it causes the message to be received and processed by the PipeFilterARoleEntry role. The processed message then passes through the PipeFilterBRoleEntry role. The input queue for the FinalReceiverRoleEntry role is the output queue for the PipeFilterBRoleEntry role. The Run method in the FinalReceiverRoleEntry role, shown below, receives the message and performs some final processing. Then it writes the values of the custom properties added by the filters in the pipeline to the trace output.

public class FinalReceiverRoleEntry : RoleEntryPoint
{
    ...
    // Final queue/pipe in the pipeline to process data from.
    private ServiceBusPipeFilter queueFinal;

    public override bool OnStart()
    {
        ...
        // Set up the queue.
        this.queueFinal = new ServiceBusPipeFilter(..., Constants.QueueFinalPath);
        this.queueFinal.Start();
        ...
    }

    public override void Run()
    {
        this.queueFinal.OnPipeFilterMessageAsync(
            async (msg) =>
            {
                await Task.Delay(500); // DOING WORK

                // The pipeline message was received.
                Trace.TraceInformation(
                    "Pipeline Message Complete - FilterA:{0} FilterB:{1}",
                    msg.Properties[Constants.FilterAMessageKey],
                    msg.Properties[Constants.FilterBMessageKey]);

                return null;
            });
        ...
    }

    ...
}

Related patterns and guidance The following patterns and guidance might also be relevant when implementing this pattern: A sample that demonstrates this pattern is available on GitHub. Competing Consumers pattern. A pipeline can contain multiple instances of one or more filters. This approach is useful for running parallel instances of slow filters, enabling the system to spread the load and improve throughput. Each instance of a filter will compete for input with the other instances, but two instances of a filter shouldn't be able to process the same data. Provides an explanation of this approach. Compute Resource Consolidation pattern. It might be possible to group filters that should scale together into the same process. Provides more information about the benefits and tradeoffs of this strategy. Compensating Transaction pattern. A filter can be implemented as an operation that can be reversed, or that has a compensating operation that restores the state to a previous version in the event of a failure. Explains how this can be implemented to maintain or achieve eventual consistency. Idempotency Patterns on Jonathan Oliver’s blog. Priority Queue pattern Prioritize requests sent to services so that requests with a higher priority are received and processed more quickly than those with a lower priority. This pattern is useful in applications that offer different service level guarantees to individual clients. Context and Problem Applications can delegate specific tasks to other services, for example, to perform background processing or to integrate with other applications or services. In the cloud, a message queue is typically used to delegate tasks to background processing. In many cases, the order in which requests are received by a service isn't important. In some cases, though, it's necessary to prioritize specific requests.
These requests should be processed earlier than lower priority requests that were sent previously by the application. Solution A queue is usually a first-in, first-out (FIFO ) structure, and consumers typically receive messages in the same order that they were posted to the queue. However, some message queues support priority messaging. The application posting a message can assign a priority and the messages in the queue are automatically reordered so that those with a higher priority will be received before those with a lower priority. The figure illustrates a queue with priority messaging. Most message queue implementations support multiple consumers (following the Competing Consumers pattern), and the number of consumer processes can be scaled up or down depending on demand. In systems that don't support priority-based message queues, an alternative solution is to maintain a separate queue for each priority. The application is responsible for posting messages to the appropriate queue. Each queue can have a separate pool of consumers. Higher priority queues can have a larger pool of consumers running on faster hardware than lower priority queues. The next figure illustrates using separate message queues for each priority. A variation on this strategy is to have a single pool of consumers that check for messages on high priority queues first, and only then start to fetch messages from lower priority queues. There are some semantic differences between a solution that uses a single pool of consumer processes (either with a single queue that supports messages with different priorities or with multiple queues that each handle messages of a single priority), and a solution that uses multiple queues with a separate pool for each queue. In the single pool approach, higher priority messages are always received and processed before lower priority messages. In theory, messages that have a very low priority could be continually superseded and might never be processed. In the multiple pool approach, lower priority messages will always be processed, just not as quickly as those of a higher priority (depending on the relative size of the pools and the resources that they have available). Using a priority queuing mechanism can provide the following advantages: It allows applications to meet business requirements that require prioritization of availability or performance, such as offering different levels of service to specific groups of customers. It can help to minimize operational costs. In the single queue approach, you can scale back the number of consumers if necessary. High priority messages will still be processed first (although possibly more slowly), and lower priority messages might be delayed for longer. If you've implemented the multiple message queue approach with separate pools of consumers for each queue, you can reduce the pool of consumers for lower priority queues, or even suspend processing for some very low priority queues by stopping all the consumers that listen for messages on those queues. The multiple message queue approach can help maximize application performance and scalability by partitioning messages based on processing requirements. For example, vital tasks can be prioritized to be handled by receivers that run immediately while less important background tasks can be handled by receivers that are scheduled to run at less busy periods. 
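The single-pool variation described above can be sketched as a consumer loop that always drains higher priority queues before taking work from lower priority ones. This is an illustrative sketch rather than part of the sample application discussed later; the PriorityPollingConsumer type is hypothetical, and the in-memory ConcurrentQueue<string> instances stand in for whatever queuing technology is actually used. As noted above, with this approach very low priority messages can be starved if higher priority work keeps arriving.

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class PriorityPollingConsumer
{
    // Queues ordered from highest priority (index 0) to lowest priority.
    // In a cloud deployment each would be a separate message queue.
    private readonly ConcurrentQueue<string>[] queuesByPriority;

    public PriorityPollingConsumer(params ConcurrentQueue<string>[] queuesByPriority)
    {
        this.queuesByPriority = queuesByPriority;
    }

    public async Task RunAsync(CancellationToken cancellationToken)
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            var processed = false;

            // Check the higher priority queues first; only fall through to a lower
            // priority queue when every queue above it is currently empty.
            foreach (var queue in queuesByPriority)
            {
                if (queue.TryDequeue(out var message))
                {
                    Console.WriteLine("Processing: " + message);
                    processed = true;
                    break;
                }
            }

            if (!processed)
            {
                // No work available anywhere; back off briefly before polling again.
                await Task.Delay(TimeSpan.FromMilliseconds(500));
            }
        }
    }
}

A call such as new PriorityPollingConsumer(highQueue, lowQueue).RunAsync(token) gives higher priority items strict precedence, which is why the multiple-pool approach is usually preferred when low priority work must still be guaranteed to make progress.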
Issues and Considerations Consider the following points when deciding how to implement this pattern: Define the priorities in the context of the solution. For example, high priority could mean that messages should be processed within ten seconds. Identify the requirements for handling high priority items, and the other resources that should be allocated to meet these criteria. Decide if all high priority items must be processed before any lower priority items. If the messages are being processed by a single pool of consumers, you have to provide a mechanism that can preempt and suspend a task that's handling a low priority message if a higher priority message becomes available. In the multiple queue approach, when using a single pool of consumer processes that listen on all queues rather than a dedicated consumer pool for each queue, the consumer must apply an algorithm that ensures it always services messages from higher priority queues before those from lower priority queues. Monitor the processing speed on high and low priority queues to ensure that messages in these queues are processed at the expected rates. If you need to guarantee that low priority messages will be processed, it's necessary to implement the multiple message queue approach with multiple pools of consumers. Alternatively, in a queue that supports message prioritization, it's possible to dynamically increase the priority of a queued message as it ages. However, this approach depends on the message queue providing this feature. Using a separate queue for each message priority works best for systems that have a small number of welldefined priorities. Message priorities can be determined logically by the system. For example, rather than having explicit high and low priority messages, they could be designated as “fee paying customer,” or “non-fee paying customer.” Depending on your business model, your system can allocate more resources to processing messages from fee paying customers than non-fee paying ones. There might be a financial and processing cost associated with checking a queue for a message (some commercial messaging systems charge a small fee each time a message is posted or retrieved, and each time a queue is queried for messages). This cost increases when checking multiple queues. It's possible to dynamically adjust the size of a pool of consumers based on the length of the queue that the pool is servicing. For more information, see the Autoscaling Guidance. When to use this pattern This pattern is useful in scenarios where: The system must handle multiple tasks that have different priorities. Different users or tenants should be served with different priority. Example Microsoft Azure doesn't provide a queuing mechanism that natively supports automatic prioritization of messages through sorting. However, it does provide Azure Service Bus topics and subscriptions that support a queuing mechanism that provides message filtering, together with a wide range of flexible capabilities that make it ideal for use in most priority queue implementations. An Azure solution can implement a Service Bus topic an application can post messages to, in the same way as a queue. Messages can contain metadata in the form of application-defined custom properties. Service Bus subscriptions can be associated with the topic, and these subscriptions can filter messages based on their properties. When an application sends a message to a topic, the message is directed to the appropriate subscription where it can be read by a consumer. 
Consumer processes can retrieve messages from a subscription using the same semantics as a message queue (a subscription is a logical queue). The following figure illustrates implementing a priority queue with Azure Service Bus topics and subscriptions. In the figure above, the application creates several messages and assigns a custom property called Priority in each message with a value, either High or Low. The application posts these messages to a topic. The topic has two associated subscriptions that both filter messages by examining the Priority property. One subscription accepts messages where the Priority property is set to High, and the other accepts messages where the Priority property is set to Low. A pool of consumers reads messages from each subscription. The high priority subscription has a larger pool, and these consumers might be running on more powerful computers with more resources available than the consumers in the low priority pool. Note that there's nothing special about the designation of high and low priority messages in this example. They're simply labels specified as properties in each message, and are used to direct messages to a specific subscription. If additional priorities are required, it's relatively easy to create further subscriptions and pools of consumer processes to handle these priorities. The PriorityQueue solution available on GitHub contains an implementation of this approach. This solution contains two worker role projects named PriorityQueue.High and PriorityQueue.Low. These worker roles inherit from the PriorityWorkerRole class that contains the functionality for connecting to a specified subscription in the OnStart method. The PriorityQueue.High and PriorityQueue.Low worker roles connect to different subscriptions, defined by their configuration settings. An administrator can configure different numbers of each role to be run. Typically there'll be more instances of the PriorityQueue.High worker role than the PriorityQueue.Low worker role. The Run method in the PriorityWorkerRole class arranges for the virtual ProcessMessage method (also defined in the PriorityWorkerRole class) to be run for each message received on the queue. The following code shows the Run and ProcessMessage methods. The QueueManager class, defined in the PriorityQueue.Shared project, provides helper methods for using Azure Service Bus queues.

public class PriorityWorkerRole : RoleEntryPoint
{
    private QueueManager queueManager;
    ...

    public override void Run()
    {
        // Start listening for messages on the subscription.
        var subscriptionName = CloudConfigurationManager.GetSetting("SubscriptionName");
        this.queueManager.ReceiveMessages(subscriptionName, this.ProcessMessage);
        ...;
    }

    ...

    protected virtual async Task ProcessMessage(BrokeredMessage message)
    {
        // Simulating processing.
        await Task.Delay(TimeSpan.FromSeconds(2));
    }
}

The PriorityQueue.High and PriorityQueue.Low worker roles both override the default functionality of the ProcessMessage method. The code below shows the ProcessMessage method for the PriorityQueue.High worker role.

protected override async Task ProcessMessage(BrokeredMessage message)
{
    // Simulate message processing for High priority messages.
    await base.ProcessMessage(message);

    Trace.TraceInformation("High priority message processed by " +
        RoleEnvironment.CurrentRoleInstance.Id + " MessageId: " + message.MessageId);
}

When an application posts messages to the topic associated with the subscriptions used by the PriorityQueue.High and PriorityQueue.Low worker roles, it specifies the priority by using the Priority custom property, as shown in the following code example. This code (implemented in the WorkerRole class in the PriorityQueue.Sender project) uses the SendBatchAsync helper method of the QueueManager class to post messages to a topic in batches.

// Send a low priority batch.
var lowMessages = new List<BrokeredMessage>();

for (int i = 0; i < 10; i++)
{
    var message = new BrokeredMessage() { MessageId = Guid.NewGuid().ToString() };
    message.Properties["Priority"] = Priority.Low;
    lowMessages.Add(message);
}

this.queueManager.SendBatchAsync(lowMessages).Wait();
...

// Send a high priority batch.
var highMessages = new List<BrokeredMessage>();

for (int i = 0; i < 10; i++)
{
    var message = new BrokeredMessage() { MessageId = Guid.NewGuid().ToString() };
    message.Properties["Priority"] = Priority.High;
    highMessages.Add(message);
}

this.queueManager.SendBatchAsync(highMessages).Wait();

Related patterns and guidance The following patterns and guidance might also be relevant when implementing this pattern: A sample that demonstrates this pattern is available on GitHub. Asynchronous Messaging Primer. A consumer service that processes a request might need to send a reply to the instance of the application that posted the request. Provides information on the strategies that you can use to implement request/response messaging. Competing Consumers pattern. To increase the throughput of the queues, it’s possible to have multiple consumers that listen on the same queue, and process the tasks in parallel. These consumers will compete for messages, but only one should be able to process each message. Provides more information on the benefits and tradeoffs of implementing this approach. Throttling pattern. You can implement throttling by using queues. Priority messaging can be used to ensure that requests from critical applications, or applications being run by high-value customers, are given priority over requests from less important applications. Autoscaling Guidance. It might be possible to scale the size of the pool of consumer processes handling a queue depending on the length of the queue. This strategy can help to improve performance, especially for pools handling high priority messages. Enterprise Integration Patterns with Service Bus on Abhishek Lal’s blog. Publisher-Subscriber pattern Enable an application to announce events to multiple interested consumers asynchronously, without coupling the senders to the receivers. Also called: Pub/sub messaging Context and problem In cloud-based and distributed applications, components of the system often need to provide information to other components as events happen. Asynchronous messaging is an effective way to decouple senders from consumers, and avoid blocking the sender to wait for a response. However, using a dedicated message queue for each consumer does not effectively scale to many consumers. Also, some of the consumers might be interested in only a subset of the information. How can the sender announce events to all interested consumers without knowing their identities?
Solution Introduce an asynchronous messaging subsystem that includes the following: An input messaging channel used by the sender. The sender packages events into messages, using a known message format, and sends these messages via the input channel. The sender in this pattern is also called the publisher. NOTE A message is a packet of data. An event is a message that notifies other components about a change or an action that has taken place. One output messaging channel per consumer. The consumers are known as subscribers. A mechanism for copying each message from the input channel to the output channels for all subscribers interested in that message. This operation is typically handled by an intermediary such as a message broker or event bus. The following diagram shows the logical components of this pattern: Pub/sub messaging has the following benefits: It decouples subsystems that still need to communicate. Subsystems can be managed independently, and messages can be properly managed even if one or more receivers are offline. It increases scalability and improves responsiveness of the sender. The sender can quickly send a single message to the input channel, then return to its core processing responsibilities. The messaging infrastructure is responsible for ensuring messages are delivered to interested subscribers. It improves reliability. Asynchronous messaging helps applications continue to run smoothly under increased loads and handle intermittent failures more effectively. It allows for deferred or scheduled processing. Subscribers can wait to pick up messages until off-peak hours, or messages can be routed or processed according to a specific schedule. It enables simpler integration between systems using different platforms, programming languages, or communication protocols, as well as between on-premises systems and applications running in the cloud. It facilitates asynchronous workflows across an enterprise. It improves testability. Channels can be monitored and messages can be inspected or logged as part of an overall integration test strategy. It provides separation of concerns for your applications. Each application can focus on its core capabilities, while the messaging infrastructure handles everything required to reliably route messages to multiple consumers. Issues and considerations Consider the following points when deciding how to implement this pattern: Existing technologies. It is strongly recommended to use available messaging products and services that support a publish-subscribe model, rather than building your own. In Azure, consider using Service Bus or Event Grid. Other technologies that can be used for pub/sub messaging include Redis, RabbitMQ, and Apache Kafka. Subscription handling. The messaging infrastructure must provide mechanisms that consumers can use to subscribe to or unsubscribe from available channels. Security. Connecting to any message channel must be restricted by security policy to prevent eavesdropping by unauthorized users or applications. Subsets of messages. Subscribers are usually only interested in a subset of the messages distributed by a publisher. Messaging services often allow subscribers to narrow the set of messages received by: Topics. Each topic has a dedicated output channel, and each consumer can subscribe to all relevant topics. Content filtering. Messages are inspected and distributed based on the content of each message. Each subscriber can specify the content it is interested in. Wildcard subscribers.
Consider allowing subscribers to subscribe to multiple topics via wildcards. Bi-directional communication. The channels in a publish-subscribe system are treated as unidirectional. If a specific subscriber needs to send acknowledgement or communicate status back to the publisher, consider using the Request/Reply Pattern. This pattern uses one channel to send a message to the subscriber, and a separate reply channel for communicating back to the publisher. Message ordering. The order in which consumer instances receive messages isn't guaranteed, and doesn't necessarily reflect the order in which the messages were created. Design the system to ensure that message processing is idempotent to help eliminate any dependency on the order of message handling. Message priority. Some solutions may require that messages are processed in a specific order. The Priority Queue pattern provides a mechanism for ensuring specific messages are delivered before others. Poison messages. A malformed message, or a task that requires access to resources that aren't available, can cause a service instance to fail. The system should prevent such messages from being returned to the queue. Instead, capture and store the details of these messages elsewhere so that they can be analyzed if necessary. Repeated messages. The same message might be sent more than once. For example, the sender might fail after posting a message. Then a new instance of the sender might start up and repeat the message. The messaging infrastructure should implement duplicate message detection and removal (also known as deduping) based on message IDs in order to provide at-most-once delivery of messages. Message expiration. A message might have a limited lifetime. If it isn't processed within this period, it might no longer be relevant and should be discarded. A sender can specify an expiration time as part of the data in the message. A receiver can examine this information before deciding whether to perform the business logic associated with the message. Message scheduling. A message might be temporarily embargoed and should not be processed until a specific date and time. The message should not be available to a receiver until this time. When to use this pattern Use this pattern when: An application needs to broadcast information to a significant number of consumers. An application needs to communicate with one or more independently-developed applications or services, which may use different platforms, programming languages, and communication protocols. An application can send information to consumers without requiring real-time responses from the consumers. The systems being integrated are designed to support an eventual consistency model for their data. An application needs to communicate information to multiple consumers, which may have different availability requirements or uptime schedules than the sender. This pattern might not be useful when: An application has only a few consumers who need significantly different information from the producing application. An application requires near real-time interaction with consumers. Example The following diagram shows an enterprise integration architecture that uses Service Bus to coordinate workflows, and Event Grid to notify subsystems of events that occur. For more information, see Enterprise integration on Azure using message queues and events.
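The mechanics of the pattern (an input channel that the publisher posts to, and a broker that copies each message to the output channel of every interested subscriber) can be sketched with a minimal in-process broker. The example below is purely illustrative and synchronous; SimpleMessageBroker is a hypothetical type, and a production system would instead rely on a messaging service such as Service Bus, Event Grid, or Kafka for durable channels, filtering, and delivery guarantees.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

public class SimpleMessageBroker
{
    // One list of output handlers (subscriber channels) per topic.
    private readonly ConcurrentDictionary<string, List<Action<string>>> subscribersByTopic =
        new ConcurrentDictionary<string, List<Action<string>>>();

    // A subscriber registers interest in a topic and supplies the handler that
    // represents its output channel.
    public void Subscribe(string topic, Action<string> handler)
    {
        var handlers = subscribersByTopic.GetOrAdd(topic, _ => new List<Action<string>>());
        lock (handlers)
        {
            handlers.Add(handler);
        }
    }

    // The publisher posts to the input channel (the topic) without knowing which
    // subscribers, if any, will receive the message.
    public void Publish(string topic, string message)
    {
        if (!subscribersByTopic.TryGetValue(topic, out var handlers))
        {
            return;
        }

        lock (handlers)
        {
            // Copy the message to every subscriber's output channel.
            foreach (var handler in handlers)
            {
                handler(message);
            }
        }
    }
}

For example, broker.Subscribe("orders", m => Console.WriteLine("Billing: " + m)); broker.Subscribe("orders", m => Console.WriteLine("Shipping: " + m)); broker.Publish("orders", "OrderPlaced:1234"); delivers the same event to both subscribers without the publisher knowing either of them exists.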
Related patterns and guidance The following patterns and guidance might be relevant when implementing this pattern: Choose between Azure services that deliver messages. The Event-driven architecture style uses pub/sub messaging. Asynchronous Messaging Primer. Message queues are an asynchronous communications mechanism. If a consumer service needs to send a reply to an application, it might be necessary to implement some form of response messaging. The Asynchronous Messaging Primer provides information on how to implement request/reply messaging using message queues. Observer Pattern. The Publish-Subscribe pattern builds on the Observer pattern by decoupling subjects from observers via asynchronous messaging. Message Broker Pattern. Many messaging subsystems that support a publish-subscribe model are implemented via a message broker. Queue-Based Load Leveling pattern Use a queue that acts as a buffer between a task and a service it invokes in order to smooth intermittent heavy loads that can cause the service to fail or the task to time out. This can help to minimize the impact of peaks in demand on availability and responsiveness for both the task and the service. Context and problem Many solutions in the cloud involve running tasks that invoke services. In this environment, if a service is subjected to intermittent heavy loads, it can cause performance or reliability issues. A service could be part of the same solution as the tasks that use it, or it could be a third-party service providing access to frequently used resources such as a cache or a storage service. If the same service is used by a number of tasks running concurrently, it can be difficult to predict the volume of requests to the service at any time. A service might experience peaks in demand that cause it to overload and be unable to respond to requests in a timely manner. Flooding a service with a large number of concurrent requests can also result in the service failing if it's unable to handle the contention these requests cause. Solution Refactor the solution and introduce a queue between the task and the service. The task and the service run asynchronously. The task posts a message containing the data required by the service to a queue. The queue acts as a buffer, storing the message until it's retrieved by the service. The service retrieves the messages from the queue and processes them. Requests from a number of tasks, which can be generated at a highly variable rate, can be passed to the service through the same message queue. This figure shows using a queue to level the load on a service. The queue decouples the tasks from the service, and the service can handle the messages at its own pace regardless of the volume of requests from concurrent tasks. Additionally, there's no delay to a task if the service isn't available at the time it posts a message to the queue. This pattern provides the following benefits: It can help to maximize availability because delays arising in services won't have an immediate and direct impact on the application, which can continue to post messages to the queue even when the service isn't available or isn't currently processing messages. It can help to maximize scalability because both the number of queues and the number of services can be varied to meet demand. It can help to control costs because the number of service instances deployed only has to be adequate to meet average load rather than the peak load.
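As a minimal illustration of the solution described above, the following sketch uses an in-memory queue to stand in for the messaging infrastructure: producers enqueue requests in a burst, while a single consumer drains the queue at a steady rate so the burst never reaches the back-end service directly. All names and timings are illustrative assumptions, not part of any Azure sample.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class LoadLevelingSketch
{
    // The queue is the buffer between the bursty tasks and the slower service.
    private static readonly ConcurrentQueue<string> requestQueue = new ConcurrentQueue<string>();

    public static async Task Main()
    {
        // The tasks post work in a burst and return immediately; they never wait
        // for the service to catch up.
        var producer = Task.Run(() =>
        {
            for (int i = 0; i < 100; i++)
            {
                requestQueue.Enqueue("request-" + i);
            }
        });

        // The service drains the queue at its own steady pace, so the burst above
        // never translates directly into a burst of load on the service.
        var consumer = Task.Run(async () =>
        {
            while (true)
            {
                if (requestQueue.TryDequeue(out var request))
                {
                    Console.WriteLine("Processing " + request);
                    await Task.Delay(100); // Simulated fixed service processing rate.
                }
                else
                {
                    await Task.Delay(250); // Idle: wait for more work to arrive.
                }
            }
        });

        await producer;
        await Task.Delay(TimeSpan.FromSeconds(5)); // Let the consumer make some progress.
    }
}

In a real deployment the consumer would typically be a separate, independently scaled service (for example, the Azure Functions app described in the example later in this pattern), and the rate control would be tuned against the capacity of the data store it protects.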
Some services implement throttling when demand reaches a threshold beyond which the system could fail. Throttling can reduce the functionality available. You can implement load leveling with these services to ensure that this threshold isn't reached. Issues and considerations Consider the following points when deciding how to implement this pattern: It's necessary to implement application logic that controls the rate at which services handle messages to avoid overwhelming the target resource. Avoid passing spikes in demand to the next stage of the system. Test the system under load to ensure that it provides the required leveling, and adjust the number of queues and the number of service instances that handle messages to achieve this. Message queues are a one-way communication mechanism. If a task expects a reply from a service, it might be necessary to implement a mechanism that the service can use to send a response. For more information, see the Asynchronous Messaging Primer. Be careful if you apply autoscaling to services that are listening for requests on the queue. This can result in increased contention for any resources that these services share and diminish the effectiveness of using the queue to level the load. When to use this pattern This pattern is useful to any application that uses services that are subject to overloading. This pattern isn't useful if the application expects a response from the service with minimal latency. Example A web app writes data to an external data store. If a large number of instances of the web app run concurrently, the data store might be unable to respond to requests quickly enough, causing requests to time out, be throttled, or otherwise fail. The following diagram shows a data store being overwhelmed by a large number of concurrent requests from instances of an application. To resolve this, you can use a queue to level the load between the application instances and the data store. An Azure Functions app reads the messages from the queue and performs the read/write requests to the data store. The application logic in the function app can control the rate at which it passes requests to the data store, to prevent the store from being overwhelmed. (Otherwise the function app will just re-introduce the same problem at the back end.) Related patterns and guidance The following patterns and guidance might also be relevant when implementing this pattern: Asynchronous Messaging Primer. Message queues are inherently asynchronous. It might be necessary to redesign the application logic in a task if it's adapted from communicating directly with a service to using a message queue. Similarly, it might be necessary to refactor a service to accept requests from a message queue. Alternatively, it might be possible to implement a proxy service, as described in the example. Competing Consumers pattern. It might be possible to run multiple instances of a service, each acting as a message consumer from the load-leveling queue. You can use this approach to adjust the rate at which messages are received and passed to a service. Throttling pattern. A simple way to implement throttling with a service is to use queue-based load leveling and route all requests to a service through a message queue. The service can process requests at a rate that ensures that resources required by the service aren't exhausted, and to reduce the amount of contention that could occur. Choose between Azure messaging services. 
Information about choosing a messaging and queuing mechanism in Azure applications. Improve scalability in an Azure web application. This reference architecture includes queue-based load leveling as part of the architecture. Retry pattern 3/13/2019 • 10 minutes to read • Edit Online Enable an application to handle transient failures when it tries to connect to a service or network resource, by transparently retrying a failed operation. This can improve the stability of the application. Context and problem An application that communicates with elements running in the cloud has to be sensitive to the transient faults that can occur in this environment. Faults include the momentary loss of network connectivity to components and services, the temporary unavailability of a service, or timeouts that occur when a service is busy. These faults are typically self-correcting, and if the action that triggered a fault is repeated after a suitable delay it's likely to be successful. For example, a database service that's processing a large number of concurrent requests can implement a throttling strategy that temporarily rejects any further requests until its workload has eased. An application trying to access the database might fail to connect, but if it tries again after a delay it might succeed. Solution In the cloud, transient faults aren't uncommon and an application should be designed to handle them elegantly and transparently. This minimizes the effects faults can have on the business tasks the application is performing. If an application detects a failure when it tries to send a request to a remote service, it can handle the failure using the following strategies: Cancel. If the fault indicates that the failure isn't transient or is unlikely to be successful if repeated, the application should cancel the operation and report an exception. For example, an authentication failure caused by providing invalid credentials is not likely to succeed no matter how many times it's attempted. Retry. If the specific fault reported is unusual or rare, it might have been caused by unusual circumstances such as a network packet becoming corrupted while it was being transmitted. In this case, the application could retry the failing request again immediately because the same failure is unlikely to be repeated and the request will probably be successful. Retry after delay. If the fault is caused by one of the more commonplace connectivity or busy failures, the network or service might need a short period while the connectivity issues are corrected or the backlog of work is cleared. The application should wait for a suitable time before retrying the request. For the more common transient failures, the period between retries should be chosen to spread requests from multiple instances of the application as evenly as possible. This reduces the chance of a busy service continuing to be overloaded. If many instances of an application are continually overwhelming a service with retry requests, it'll take the service longer to recover. If the request still fails, the application can wait and make another attempt. If necessary, this process can be repeated with increasing delays between retry attempts, until some maximum number of requests have been attempted. The delay can be increased incrementally or exponentially, depending on the type of failure and the probability that it'll be corrected during this time. The following diagram illustrates invoking an operation in a hosted service using this pattern. 
If the request is unsuccessful after a predefined number of attempts, the application should treat the fault as an exception and handle it accordingly. The application should wrap all attempts to access a remote service in code that implements a retry policy matching one of the strategies listed above. Requests sent to different services can be subject to different policies. Some vendors provide libraries that implement retry policies, where the application can specify the maximum number of retries, the time between retry attempts, and other parameters. An application should log the details of faults and failing operations. This information is useful to operators. If a service is frequently unavailable or busy, it's often because the service has exhausted its resources. You can reduce the frequency of these faults by scaling out the service. For example, if a database service is continually overloaded, it might be beneficial to partition the database and spread the load across multiple servers. Microsoft Entity Framework provides facilities for retrying database operations. Also, most Azure services and client SDKs include a retry mechanism. For more information, see Retry guidance for specific services. Issues and considerations You should consider the following points when deciding how to implement this pattern. The retry policy should be tuned to match the business requirements of the application and the nature of the failure. For some noncritical operations, it's better to fail fast rather than retry several times and impact the throughput of the application. For example, in an interactive web application accessing a remote service, it's better to fail after a smaller number of retries with only a short delay between retry attempts, and display a suitable message to the user (for example, “please try again later”). For a batch application, it might be more appropriate to increase the number of retry attempts with an exponentially increasing delay between attempts. An aggressive retry policy with minimal delay between attempts, and a large number of retries, could further degrade a busy service that's running close to or at capacity. This retry policy could also affect the responsiveness of the application if it's continually trying to perform a failing operation. If a request still fails after a significant number of retries, it's better for the application to prevent further requests going to the same resource and simply report a failure immediately. When the period expires, the application can tentatively allow one or more requests through to see whether they're successful. For more details of this strategy, see the Circuit Breaker pattern. Consider whether the operation is idempotent. If so, it's inherently safe to retry. Otherwise, retries could cause the operation to be executed more than once, with unintended side effects. For example, a service might receive the request, process the request successfully, but fail to send a response. At that point, the retry logic might resend the request, assuming that the first request wasn't received. A request to a service can fail for a variety of reasons raising different exceptions depending on the nature of the failure. Some exceptions indicate a failure that can be resolved quickly, while others indicate that the failure is longer lasting. It's useful for the retry policy to adjust the time between retry attempts based on the type of the exception. 
Consider how retrying an operation that's part of a transaction will affect the overall transaction consistency. Fine tune the retry policy for transactional operations to maximize the chance of success and reduce the need to undo all the transaction steps. Ensure that all retry code is fully tested against a variety of failure conditions. Check that it doesn't severely impact the performance or reliability of the application, cause excessive load on services and resources, or generate race conditions or bottlenecks. Implement retry logic only where the full context of a failing operation is understood. For example, if a task that contains a retry policy invokes another task that also contains a retry policy, this extra layer of retries can add long delays to the processing. It might be better to configure the lower-level task to fail fast and report the reason for the failure back to the task that invoked it. This higher-level task can then handle the failure based on its own policy. It's important to log all connectivity failures that cause a retry so that underlying problems with the application, services, or resources can be identified. Investigate the faults that are most likely to occur for a service or a resource to discover if they're likely to be long lasting or terminal. If they are, it's better to handle the fault as an exception. The application can report or log the exception, and then try to continue either by invoking an alternative service (if one is available), or by offering degraded functionality. For more information on how to detect and handle long-lasting faults, see the Circuit Breaker pattern. When to use this pattern Use this pattern when an application could experience transient faults as it interacts with a remote service or accesses a remote resource. These faults are expected to be short lived, and repeating a request that has previously failed could succeed on a subsequent attempt. This pattern might not be useful: When a fault is likely to be long lasting, because this can affect the responsiveness of an application. The application might be wasting time and resources trying to repeat a request that's likely to fail. For handling failures that aren't due to transient faults, such as internal exceptions caused by errors in the business logic of an application. As an alternative to addressing scalability issues in a system. If an application experiences frequent busy faults, it's often a sign that the service or resource being accessed should be scaled up. Example This example in C# illustrates an implementation of the Retry pattern. The OperationWithBasicRetryAsync method, shown below, invokes an external service asynchronously through the TransientOperationAsync method. The details of the TransientOperationAsync method will be specific to the service and are omitted from the sample code.

private int retryCount = 3;
private readonly TimeSpan delay = TimeSpan.FromSeconds(5);

public async Task OperationWithBasicRetryAsync()
{
    int currentRetry = 0;

    for (;;)
    {
        try
        {
            // Call external service.
            await TransientOperationAsync();

            // Return or break.
            break;
        }
        catch (Exception ex)
        {
            Trace.TraceError("Operation Exception");

            currentRetry++;

            // Check if the exception thrown was a transient exception
            // based on the logic in the error detection strategy.
            // Determine whether to retry the operation, as well as how
            // long to wait, based on the retry strategy.
            if (currentRetry > this.retryCount || !IsTransient(ex))
            {
                // If this isn't a transient error or we shouldn't retry,
                // rethrow the exception.
                throw;
            }
        }

        // Wait to retry the operation.
        // Consider calculating an exponential delay here and
        // using a strategy best suited for the operation and fault.
        await Task.Delay(delay);
    }
}

// Async method that wraps a call to a remote service (details not shown).
private async Task TransientOperationAsync()
{
    ...
}

The statement that invokes this method is contained in a try/catch block wrapped in a for loop. The for loop exits if the call to the TransientOperationAsync method succeeds without throwing an exception. If the TransientOperationAsync method fails, the catch block examines the reason for the failure. If it's believed to be a transient error the code waits for a short delay before retrying the operation. The for loop also tracks the number of times that the operation has been attempted, and if the code fails three times the exception is assumed to be more long lasting. If the exception isn't transient or it's long lasting, the catch handler throws an exception. This exception exits the for loop and should be caught by the code that invokes the OperationWithBasicRetryAsync method. The IsTransient method, shown below, checks for a specific set of exceptions that are relevant to the environment the code is run in. The definition of a transient exception will vary according to the resources being accessed and the environment the operation is being performed in.

private bool IsTransient(Exception ex)
{
    // Determine if the exception is transient.
    // In some cases this is as simple as checking the exception type, in other
    // cases it might be necessary to inspect other properties of the exception.
    if (ex is OperationTransientException)
        return true;

    var webException = ex as WebException;
    if (webException != null)
    {
        // If the web exception contains one of the following status values
        // it might be transient.
        return new[] {WebExceptionStatus.ConnectionClosed,
                      WebExceptionStatus.Timeout,
                      WebExceptionStatus.RequestCanceled }.Contains(webException.Status);
    }

    // Additional exception checking logic goes here.
    return false;
}

Related patterns and guidance Circuit Breaker pattern. The Retry pattern is useful for handling transient faults. If a failure is expected to be more long lasting, it might be more appropriate to implement the Circuit Breaker pattern. The Retry pattern can also be used in conjunction with a circuit breaker to provide a comprehensive approach to handling faults. Retry guidance for specific services Connection Resiliency Scheduler Agent Supervisor pattern Coordinate a set of distributed actions as a single operation. If any of the actions fail, try to handle the failures transparently, or else undo the work that was performed, so the entire operation succeeds or fails as a whole. This can add resiliency to a distributed system, by enabling it to recover and retry actions that fail due to transient exceptions, long-lasting faults, and process failures. Context and problem An application performs tasks that include a number of steps, some of which might invoke remote services or access remote resources. The individual steps might be independent of each other, but they are orchestrated by the application logic that implements the task.
Whenever possible, the application should ensure that the task runs to completion and resolve any failures that might occur when accessing remote services or resources. Failures can occur for many reasons. For example, the network might be down, communications could be interrupted, a remote service might be unresponsive or in an unstable state, or a remote resource might be temporarily inaccessible, perhaps due to resource constraints. In many cases the failures will be transient and can be handled by using the Retry pattern. If the application detects a more permanent fault it can't easily recover from, it must be able to restore the system to a consistent state and ensure integrity of the entire operation. Solution The Scheduler Agent Supervisor pattern defines the following actors. These actors orchestrate the steps to be performed as part of the overall task. The Scheduler arranges for the steps that make up the task to be executed and orchestrates their operation. These steps can be combined into a pipeline or workflow. The Scheduler is responsible for ensuring that the steps in this workflow are performed in the right order. As each step is performed, the Scheduler records the state of the workflow, such as "step not yet started," "step running," or "step completed." The state information should also include an upper limit of the time allowed for the step to finish, called the complete-by time. If a step requires access to a remote service or resource, the Scheduler invokes the appropriate Agent, passing it the details of the work to be performed. The Scheduler typically communicates with an Agent using asynchronous request/response messaging. This can be implemented using queues, although other distributed messaging technologies could be used instead. The Scheduler performs a similar function to the Process Manager in the Process Manager pattern. The actual workflow is typically defined and implemented by a workflow engine that's controlled by the Scheduler. This approach decouples the business logic in the workflow from the Scheduler. The Agent contains logic that encapsulates a call to a remote service, or access to a remote resource referenced by a step in a task. Each Agent typically wraps calls to a single service or resource, implementing the appropriate error handling and retry logic (subject to a timeout constraint, described later). If the steps in the workflow being run by the Scheduler use several services and resources across different steps, each step might reference a different Agent (this is an implementation detail of the pattern). The Supervisor monitors the status of the steps in the task being performed by the Scheduler. It runs periodically (the frequency will be system specific), and examines the status of steps maintained by the Scheduler. If it detects any that have timed out or failed, it arranges for the appropriate Agent to recover the step or execute the appropriate remedial action (this might involve modifying the status of a step). Note that the recovery or remedial actions are implemented by the Scheduler and Agents. The Supervisor should simply request that these actions be performed. The Scheduler, Agent, and Supervisor are logical components and their physical implementation depends on the technology being used. For example, several logical agents might be implemented as part of a single web service. The Scheduler maintains information about the progress of the task and the state of each step in a durable data store, called the state store. 
The Supervisor can use this information to help determine whether a step has failed. The figure illustrates the relationship between the Scheduler, the Agents, the Supervisor, and the state store.

NOTE
This diagram shows a simplified version of the pattern. In a real implementation, there might be many instances of the Scheduler running concurrently, each handling a subset of tasks. Similarly, the system could run multiple instances of each Agent, or even multiple Supervisors. In this case, Supervisors must coordinate their work with each other carefully to ensure that they don’t compete to recover the same failed steps and tasks. The Leader Election pattern provides one possible solution to this problem.

When the application is ready to run a task, it submits a request to the Scheduler. The Scheduler records initial state information about the task and its steps (for example, step not yet started) in the state store and then starts performing the operations defined by the workflow. As the Scheduler starts each step, it updates the information about the state of that step in the state store (for example, step running).

If a step references a remote service or resource, the Scheduler sends a message to the appropriate Agent. The message contains the information that the Agent needs to pass to the service or access the resource, in addition to the complete-by time for the operation. If the Agent completes its operation successfully, it returns a response to the Scheduler. The Scheduler can then update the state information in the state store (for example, step completed) and perform the next step. This process continues until the entire task is complete.

An Agent can implement any retry logic that's necessary to perform its work. However, if the Agent doesn't complete its work before the complete-by period expires, the Scheduler will assume that the operation has failed. In this case, the Agent should stop its work and not try to return anything to the Scheduler (not even an error message), or try any form of recovery. The reason for this restriction is that, after a step has timed out or failed, another instance of the Agent might be scheduled to run the failing step (this process is described later).

If the Agent fails, the Scheduler won't receive a response. The pattern doesn't make a distinction between a step that has timed out and one that has genuinely failed.

If a step times out or fails, the state store will contain a record that indicates that the step is running, but the complete-by time will have passed. The Supervisor looks for steps like this and tries to recover them. One possible strategy is for the Supervisor to update the complete-by value to extend the time available to complete the step, and then send a message to the Scheduler identifying the step that has timed out. The Scheduler can then try to repeat this step. However, this design requires the tasks to be idempotent.

The Supervisor might need to prevent the same step from being retried if it continually fails or times out. To do this, the Supervisor could maintain a retry count for each step, along with the state information, in the state store. If this count exceeds a predefined threshold, the Supervisor can adopt a strategy of waiting for an extended period before notifying the Scheduler that it should retry the step, in the expectation that the fault will be resolved during this period.
Alternatively, the Supervisor can send a message to the Scheduler to request the entire task be undone by implementing a Compensating Transaction pattern. This approach will depend on the Scheduler and Agents providing the information necessary to implement the compensating operations for each step that completed successfully. It isn't the purpose of the Supervisor to monitor the Scheduler and Agents, and restart them if they fail. This aspect of the system should be handled by the infrastructure these components are running in. Similarly, the Supervisor shouldn't have knowledge of the actual business operations that the tasks being performed by the Scheduler are running (including how to compensate should these tasks fail). This is the purpose of the workflow logic implemented by the Scheduler. The sole responsibility of the Supervisor is to determine whether a step has failed and arrange either for it to be repeated or for the entire task containing the failed step to be undone. If the Scheduler is restarted after a failure, or the workflow being performed by the Scheduler terminates unexpectedly, the Scheduler should be able to determine the status of any inflight task that it was handling when it failed, and be prepared to resume this task from that point. The implementation details of this process are likely to be system specific. If the task can't be recovered, it might be necessary to undo the work already performed by the task. This might also require implementing a compensating transaction. The key advantage of this pattern is that the system is resilient in the event of unexpected temporary or unrecoverable failures. The system can be constructed to be self healing. For example, if an Agent or the Scheduler fails, a new one can be started and the Supervisor can arrange for a task to be resumed. If the Supervisor fails, another instance can be started and can take over from where the failure occurred. If the Supervisor is scheduled to run periodically, a new instance can be automatically started after a predefined interval. The state store can be replicated to reach an even greater degree of resiliency. Issues and considerations You should consider the following points when deciding how to implement this pattern: This pattern can be difficult to implement and requires thorough testing of each possible failure mode of the system. The recovery/retry logic implemented by the Scheduler is complex and dependent on state information held in the state store. It might also be necessary to record the information required to implement a compensating transaction in a durable data store. How often the Supervisor runs will be important. It should run often enough to prevent any failed steps from blocking an application for an extended period, but it shouldn't run so often that it becomes an overhead. The steps performed by an Agent could be run more than once. The logic that implements these steps should be idempotent. When to use this pattern Use this pattern when a process that runs in a distributed environment, such as the cloud, must be resilient to communications failure and/or operational failure. This pattern might not be suitable for tasks that don't invoke remote services or access remote resources. Example A web application that implements an ecommerce system has been deployed on Microsoft Azure. Users can run this application to browse the available products and to place orders. 
The user interface runs as a web role, and the order processing elements of the application are implemented as a set of worker roles. Part of the order processing logic involves accessing a remote service, and this aspect of the system could be prone to transient or more long-lasting faults. For this reason, the designers used the Scheduler Agent Supervisor pattern to implement the order processing elements of the system.

When a customer places an order, the application constructs a message that describes the order and posts this message to a queue. A separate submission process, running in a worker role, retrieves the message, inserts the order details into the orders database, and creates a record for the order process in the state store. Note that the inserts into the orders database and the state store are performed as part of the same operation. The submission process is designed to ensure that both inserts complete together.

The state information that the submission process creates for the order includes:
OrderID. The ID of the order in the orders database.
LockedBy. The instance ID of the worker role handling the order. There might be multiple concurrent instances of the worker role running the Scheduler, but each order should only be handled by a single instance.
CompleteBy. The time the order should be processed by.
ProcessState. The current state of the task handling the order. The possible states are:
    Pending. The order has been created but processing hasn't yet been started.
    Processing. The order is currently being processed.
    Processed. The order has been processed successfully.
    Error. The order processing has failed.
FailureCount. The number of times that processing has been tried for the order.

In this state information, the OrderID field is copied from the order ID of the new order. The LockedBy and CompleteBy fields are set to null, the ProcessState field is set to Pending, and the FailureCount field is set to 0.

NOTE
In this example, the order handling logic is relatively simple and only has a single step that invokes a remote service. In a more complex multistep scenario, the submission process would likely involve several steps, and so several records would be created in the state store — each one describing the state of an individual step.

The Scheduler also runs as part of a worker role and implements the business logic that handles the order. An instance of the Scheduler polling for new orders examines the state store for records where the LockedBy field is null and the ProcessState field is pending. When the Scheduler finds a new order, it immediately populates the LockedBy field with its own instance ID, sets the CompleteBy field to an appropriate time, and sets the ProcessState field to processing. The code is designed to be exclusive and atomic to ensure that two concurrent instances of the Scheduler can't try to handle the same order simultaneously (a minimal sketch of one way to implement this claim follows below).

The Scheduler then runs the business workflow to process the order asynchronously, passing it the value in the OrderID field from the state store. The workflow handling the order retrieves the details of the order from the orders database and performs its work. When a step in the order processing workflow needs to invoke the remote service, it uses an Agent. The workflow step communicates with the Agent using a pair of Azure Service Bus message queues acting as a request/response channel. The figure shows a high-level view of the solution.
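The exclusive, atomic claim described above can be implemented in several ways. As a minimal sketch, if the state store were held in a SQL database (the OrderState table and parameter names here are hypothetical and aren't part of the sample), a single conditional UPDATE lets only one Scheduler instance win the race for a pending order:

using System;
using System.Data.SqlClient;

// Attempts to claim one pending, unlocked order for this Scheduler instance.
// Returns true if the claim succeeded (exactly one row was updated).
private bool TryClaimOrder(string connectionString, Guid orderId,
                           string instanceId, TimeSpan processingWindow)
{
    const string claimSql =
        @"UPDATE OrderState
          SET LockedBy = @instanceId,
              CompleteBy = @completeBy,
              ProcessState = 'Processing'
          WHERE OrderId = @orderId
            AND LockedBy IS NULL
            AND ProcessState = 'Pending'";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(claimSql, connection))
    {
        command.Parameters.AddWithValue("@instanceId", instanceId);
        command.Parameters.AddWithValue("@completeBy", DateTime.UtcNow.Add(processingWindow));
        command.Parameters.AddWithValue("@orderId", orderId);

        connection.Open();

        // The WHERE clause makes the claim conditional, so two concurrent
        // Scheduler instances can't both succeed for the same order.
        return command.ExecuteNonQuery() == 1;
    }
}

The Supervisor's recovery step can be expressed in the same style: a conditional UPDATE that resets LockedBy to null, extends CompleteBy, and increments FailureCount only for rows whose complete-by time has already passed.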
The message sent to the Agent from a workflow step describes the order and includes the complete-by time. If the Agent receives a response from the remote service before the complete-by time expires, it posts a reply message on the Service Bus queue on which the workflow is listening. When the workflow step receives the valid reply message, it completes its processing and the Scheduler sets the ProcessState field of the order state to processed. At this point, the order processing has completed successfully.

If the complete-by time expires before the Agent receives a response from the remote service, the Agent simply halts its processing and terminates handling the order. Similarly, if the workflow handling the order exceeds the complete-by time, it also terminates. In both cases, the state of the order in the state store remains set to processing, but the complete-by time indicates that the time for processing the order has passed and the process is deemed to have failed. Note that if the Agent that's accessing the remote service, or the workflow that's handling the order (or both) terminate unexpectedly, the information in the state store will again remain set to processing and eventually will have an expired complete-by value.

If the Agent detects an unrecoverable, nontransient fault while it's trying to contact the remote service, it can send an error response back to the workflow. The Scheduler can set the status of the order to error and raise an event that alerts an operator. The operator can then try to resolve the reason for the failure manually and resubmit the failed processing step.

The Supervisor periodically examines the state store looking for orders with an expired complete-by value. If the Supervisor finds a record, it increments the FailureCount field. If the failure count value is below a specified threshold value, the Supervisor resets the LockedBy field to null, updates the CompleteBy field with a new expiration time, and sets the ProcessState field to pending. An instance of the Scheduler can pick up this order and perform its processing as before. If the failure count value exceeds a specified threshold, the reason for the failure is assumed to be nontransient. The Supervisor sets the status of the order to error and raises an event that alerts an operator.

In this example, the Supervisor is implemented in a separate worker role. You can use a variety of strategies to arrange for the Supervisor task to be run, including using the Azure Scheduler service (not to be confused with the Scheduler component in this pattern). For more information about the Azure Scheduler service, visit the Scheduler page.

Although it isn't shown in this example, the Scheduler might need to keep the application that submitted the order informed about the progress and status of the order. The application and the Scheduler are isolated from each other to eliminate any dependencies between them. The application has no knowledge of which instance of the Scheduler is handling the order, and the Scheduler is unaware of which specific application instance posted the order. To allow the order status to be reported, the application could use its own private response queue. The details of this response queue would be included as part of the request sent to the submission process, which would include this information in the state store. The Scheduler would then post messages to this queue indicating the status of the order (request received, order completed, order failed, and so on).
It should include the order ID in these messages so they can be correlated with the original request by the application. Related patterns and guidance The following patterns and guidance might also be relevant when implementing this pattern: Retry pattern. An Agent can use this pattern to transparently retry an operation that accesses a remote service or resource that has previously failed. Use when the expectation is that the cause of the failure is transient and can be corrected. Circuit Breaker pattern. An Agent can use this pattern to handle faults that take a variable amount of time to correct when connecting to a remote service or resource. Compensating Transaction pattern. If the workflow being performed by a Scheduler can't be completed successfully, it might be necessary to undo any work it's previously performed. The Compensating Transaction pattern describes how this can be achieved for operations that follow the eventual consistency model. These types of operations are commonly implemented by a Scheduler that performs complex business processes and workflows. Asynchronous Messaging Primer. The components in the Scheduler Agent Supervisor pattern typically run decoupled from each other and communicate asynchronously. Describes some of the approaches that can be used to implement asynchronous communication based on message queues. Leader Election pattern. It might be necessary to coordinate the actions of multiple instances of a Supervisor to prevent them from attempting to recover the same failed process. The Leader Election pattern describes how to do this. Cloud Architecture: The Scheduler-Agent-Supervisor Pattern on Clemens Vasters' blog Process Manager pattern Reference 6: A Saga on Sagas. An example showing how the CQRS pattern uses a process manager (part of the CQRS Journey guidance). Microsoft Azure Scheduler Sharding pattern 3/13/2019 • 19 minutes to read • Edit Online Divide a data store into a set of horizontal partitions or shards. This can improve scalability when storing and accessing large volumes of data. Context and problem A data store hosted by a single server might be subject to the following limitations: Storage space. A data store for a large-scale cloud application is expected to contain a huge volume of data that could increase significantly over time. A server typically provides only a finite amount of disk storage, but you can replace existing disks with larger ones, or add further disks to a machine as data volumes grow. However, the system will eventually reach a limit where it isn't possible to easily increase the storage capacity on a given server. Computing resources. A cloud application is required to support a large number of concurrent users, each of which run queries that retrieve information from the data store. A single server hosting the data store might not be able to provide the necessary computing power to support this load, resulting in extended response times for users and frequent failures as applications attempting to store and retrieve data time out. It might be possible to add memory or upgrade processors, but the system will reach a limit when it isn't possible to increase the compute resources any further. Network bandwidth. Ultimately, the performance of a data store running on a single server is governed by the rate the server can receive requests and send replies. It's possible that the volume of network traffic might exceed the capacity of the network used to connect to the server, resulting in failed requests. Geography. 
It might be necessary to store data generated by specific users in the same region as those users for legal, compliance, or performance reasons, or to reduce latency of data access. If the users are dispersed across different countries or regions, it might not be possible to store the entire data for the application in a single data store. Scaling vertically by adding more disk capacity, processing power, memory, and network connections can postpone the effects of some of these limitations, but it's likely to only be a temporary solution. A commercial cloud application capable of supporting large numbers of users and high volumes of data must be able to scale almost indefinitely, so vertical scaling isn't necessarily the best solution. Solution Divide the data store into horizontal partitions or shards. Each shard has the same schema, but holds its own distinct subset of the data. A shard is a data store in its own right (it can contain the data for many entities of different types), running on a server acting as a storage node. This pattern has the following benefits: You can scale the system out by adding further shards running on additional storage nodes. A system can use off-the-shelf hardware rather than specialized and expensive computers for each storage node. You can reduce contention and improve performance by balancing the workload across shards. In the cloud, shards can be located physically close to the users that'll access the data. When dividing a data store up into shards, decide which data should be placed in each shard. A shard typically contains items that fall within a specified range determined by one or more attributes of the data. These attributes form the shard key (sometimes referred to as the partition key). The shard key should be static. It shouldn't be based on data that might change. Sharding physically organizes the data. When an application stores and retrieves data, the sharding logic directs the application to the appropriate shard. This sharding logic can be implemented as part of the data access code in the application, or it could be implemented by the data storage system if it transparently supports sharding. Abstracting the physical location of the data in the sharding logic provides a high level of control over which shards contain which data. It also enables data to migrate between shards without reworking the business logic of an application if the data in the shards need to be redistributed later (for example, if the shards become unbalanced). The tradeoff is the additional data access overhead required in determining the location of each data item as it's retrieved. To ensure optimal performance and scalability, it's important to split the data in a way that's appropriate for the types of queries that the application performs. In many cases, it's unlikely that the sharding scheme will exactly match the requirements of every query. For example, in a multi-tenant system an application might need to retrieve tenant data using the tenant ID, but it might also need to look up this data based on some other attribute such as the tenant’s name or location. To handle these situations, implement a sharding strategy with a shard key that supports the most commonly performed queries. If queries regularly retrieve data using a combination of attribute values, you can likely define a composite shard key by linking attributes together. 
Alternatively, use a pattern such as Index Table to provide fast lookup to data based on attributes that aren't covered by the shard key. Sharding strategies Three strategies are commonly used when selecting the shard key and deciding how to distribute data across shards. Note that there doesn't have to be a one-to-one correspondence between shards and the servers that host them—a single server can host multiple shards. The strategies are: The Lookup strategy. In this strategy the sharding logic implements a map that routes a request for data to the shard that contains that data using the shard key. In a multi-tenant application all the data for a tenant might be stored together in a shard using the tenant ID as the shard key. Multiple tenants might share the same shard, but the data for a single tenant won't be spread across multiple shards. The figure illustrates sharding tenant data based on tenant IDs. The mapping between the shard key and the physical storage can be based on physical shards where each shard key maps to a physical partition. Alternatively, a more flexible technique for rebalancing shards is virtual partitioning, where shard keys map to the same number of virtual shards, which in turn map to fewer physical partitions. In this approach, an application locates data using a shard key that refers to a virtual shard, and the system transparently maps virtual shards to physical partitions. The mapping between a virtual shard and a physical partition can change without requiring the application code be modified to use a different set of shard keys. The Range strategy. This strategy groups related items together in the same shard, and orders them by shard key—the shard keys are sequential. It's useful for applications that frequently retrieve sets of items using range queries (queries that return a set of data items for a shard key that falls within a given range). For example, if an application regularly needs to find all orders placed in a given month, this data can be retrieved more quickly if all orders for a month are stored in date and time order in the same shard. If each order was stored in a different shard, they'd have to be fetched individually by performing a large number of point queries (queries that return a single data item). The next figure illustrates storing sequential sets (ranges) of data in shard. In this example, the shard key is a composite key containing the order month as the most significant element, followed by the order day and the time. The data for orders is naturally sorted when new orders are created and added to a shard. Some data stores support two-part shard keys containing a partition key element that identifies the shard and a row key that uniquely identifies an item in the shard. Data is usually held in row key order in the shard. Items that are subject to range queries and need to be grouped together can use a shard key that has the same value for the partition key but a unique value for the row key. The Hash strategy. The purpose of this strategy is to reduce the chance of hotspots (shards that receive a disproportionate amount of load). It distributes the data across the shards in a way that achieves a balance between the size of each shard and the average load that each shard will encounter. The sharding logic computes the shard to store an item in based on a hash of one or more attributes of the data. 
The chosen hashing function should distribute data evenly across the shards, possibly by introducing some random element into the computation. The next figure illustrates sharding tenant data based on a hash of tenant IDs. To understand the advantage of the Hash strategy over other sharding strategies, consider how a multi-tenant application that enrolls new tenants sequentially might assign the tenants to shards in the data store. When using the Range strategy, the data for tenants 1 to n will all be stored in shard A, the data for tenants n+1 to m will all be stored in shard B, and so on. If the most recently registered tenants are also the most active, most data activity will occur in a small number of shards, which could cause hotspots. In contrast, the Hash strategy allocates tenants to shards based on a hash of their tenant ID. This means that sequential tenants are most likely to be allocated to different shards, which will distribute the load across them. The previous figure shows this for tenants 55 and 56. The three sharding strategies have the following advantages and considerations: Lookup. This offers more control over the way that shards are configured and used. Using virtual shards reduces the impact when rebalancing data because new physical partitions can be added to even out the workload. The mapping between a virtual shard and the physical partitions that implement the shard can be modified without affecting application code that uses a shard key to store and retrieve data. Looking up shard locations can impose an additional overhead. Range. This is easy to implement and works well with range queries because they can often fetch multiple data items from a single shard in a single operation. This strategy offers easier data management. For example, if users in the same region are in the same shard, updates can be scheduled in each time zone based on the local load and demand pattern. However, this strategy doesn't provide optimal balancing between shards. Rebalancing shards is difficult and might not resolve the problem of uneven load if the majority of activity is for adjacent shard keys. Hash. This strategy offers a better chance of more even data and load distribution. Request routing can be accomplished directly by using the hash function. There's no need to maintain a map. Note that computing the hash might impose an additional overhead. Also, rebalancing shards is difficult. Most common sharding systems implement one of the approaches described above, but you should also consider the business requirements of your applications and their patterns of data usage. For example, in a multitenant application: You can shard data based on workload. You could segregate the data for highly volatile tenants in separate shards. The speed of data access for other tenants might be improved as a result. You can shard data based on the location of tenants. You can take the data for tenants in a specific geographic region offline for backup and maintenance during off-peak hours in that region, while the data for tenants in other regions remains online and accessible during their business hours. High-value tenants could be assigned their own private, high performing, lightly loaded shards, whereas lower-value tenants might be expected to share more densely-packed, busy shards. The data for tenants that need a high degree of data isolation and privacy can be stored on a completely separate server. 
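To make the Lookup and Hash strategies described above concrete, the following is a minimal sketch of sharding logic for each. The shard map contents, connection strings, and method names are hypothetical; in practice the map would be loaded from a root database or configuration rather than hard-coded.

using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

public static class ShardingLogic
{
    // Lookup strategy: an explicit map from a shard key (here a tenant ID)
    // to a virtual shard, and from virtual shards to physical connection strings.
    private static readonly Dictionary<int, int> tenantToVirtualShard = new Dictionary<int, int>
    {
        { 55, 0 }, { 56, 1 } // In practice, populated from a shard map store.
    };

    private static readonly string[] virtualShardConnectionStrings =
    {
        "Server=shard-a;Database=Tenants;...",   // Placeholder connection strings.
        "Server=shard-b;Database=Tenants;..."
    };

    public static string GetConnectionStringByLookup(int tenantId)
    {
        // Rebalancing only requires changing the map, not the application code.
        return virtualShardConnectionStrings[tenantToVirtualShard[tenantId]];
    }

    // Hash strategy: a stable hash of the shard key chooses the shard.
    // Note that string.GetHashCode isn't guaranteed to be stable across
    // processes, so a deterministic hash (SHA-256 here) is used instead.
    public static int GetShardIndexByHash(string shardKey, int shardCount)
    {
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(shardKey));
            uint prefix = BitConverter.ToUInt32(hash, 0);
            return (int)(prefix % (uint)shardCount);
        }
    }
}

Note that changing the shard count in the hash approach moves most keys to different shards, which is one reason rebalancing is difficult for the Hash strategy, as discussed below.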
Scaling and data movement operations
Each of the sharding strategies implies different capabilities and levels of complexity for managing scale in, scale out, data movement, and maintaining state.

The Lookup strategy permits scaling and data movement operations to be carried out at the user level, either online or offline. The technique is to suspend some or all user activity (perhaps during off-peak periods), move the data to the new virtual partition or physical shard, change the mappings, invalidate or refresh any caches that hold this data, and then allow user activity to resume. Often this type of operation can be centrally managed. The Lookup strategy requires state to be highly cacheable and replica friendly.

The Range strategy imposes some limitations on scaling and data movement operations, which must typically be carried out when a part or all of the data store is offline because the data must be split and merged across the shards. Moving the data to rebalance shards might not resolve the problem of uneven load if the majority of activity is for adjacent shard keys or data identifiers that are within the same range. The Range strategy might also require some state to be maintained in order to map ranges to the physical partitions.

The Hash strategy makes scaling and data movement operations more complex because the partition keys are hashes of the shard keys or data identifiers. The new location of each shard must be determined from the hash function, or the function modified to provide the correct mappings. However, the Hash strategy doesn't require maintenance of state.

Issues and considerations
Consider the following points when deciding how to implement this pattern:

Sharding is complementary to other forms of partitioning, such as vertical partitioning and functional partitioning. For example, a single shard can contain entities that have been partitioned vertically, and a functional partition can be implemented as multiple shards. For more information about partitioning, see the Data Partitioning Guidance.

Keep shards balanced so they all handle a similar volume of I/O. As data is inserted and deleted, it's necessary to periodically rebalance the shards to guarantee an even distribution and to reduce the chance of hotspots. Rebalancing can be an expensive operation. To reduce the necessity of rebalancing, plan for growth by ensuring that each shard contains sufficient free space to handle the expected volume of changes. You should also develop strategies and scripts you can use to quickly rebalance shards if this becomes necessary.

Use stable data for the shard key. If the shard key changes, the corresponding data item might have to move between shards, increasing the amount of work performed by update operations. For this reason, avoid basing the shard key on potentially volatile information. Instead, look for attributes that are invariant or that naturally form a key.

Ensure that shard keys are unique. For example, avoid using autoincrementing fields as the shard key. In some systems, autoincremented fields can't be coordinated across shards, possibly resulting in items in different shards having the same shard key. Autoincremented values in other fields that are not shard keys can also cause problems. For example, if you use autoincremented fields to generate unique IDs, then two different items located in different shards might be assigned the same ID.

It might not be possible to design a shard key that matches the requirements of every possible query against the data.
Shard the data to support the most frequently performed queries, and if necessary create secondary index tables to support queries that retrieve data using criteria based on attributes that aren't part of the shard key. For more information, see the Index Table pattern. Queries that access only a single shard are more efficient than those that retrieve data from multiple shards, so avoid implementing a sharding system that results in applications performing large numbers of queries that join data held in different shards. Remember that a single shard can contain the data for multiple types of entities. Consider denormalizing your data to keep related entities that are commonly queried together (such as the details of customers and the orders that they have placed) in the same shard to reduce the number of separate reads that an application performs. If an entity in one shard references an entity stored in another shard, include the shard key for the second entity as part of the schema for the first entity. This can help to improve the performance of queries that reference related data across shards. If an application must perform queries that retrieve data from multiple shards, it might be possible to fetch this data by using parallel tasks. Examples include fan-out queries, where data from multiple shards is retrieved in parallel and then aggregated into a single result. However, this approach inevitably adds some complexity to the data access logic of a solution. For many applications, creating a larger number of small shards can be more efficient than having a small number of large shards because they can offer increased opportunities for load balancing. This can also be useful if you anticipate the need to migrate shards from one physical location to another. Moving a small shard is quicker than moving a large one. Make sure the resources available to each shard storage node are sufficient to handle the scalability requirements in terms of data size and throughput. For more information, see the section “Designing Partitions for Scalability” in the Data Partitioning Guidance. Consider replicating reference data to all shards. If an operation that retrieves data from a shard also references static or slow -moving data as part of the same query, add this data to the shard. The application can then fetch all of the data for the query easily, without having to make an additional round trip to a separate data store. If reference data held in multiple shards changes, the system must synchronize these changes across all shards. The system can experience a degree of inconsistency while this synchronization occurs. If you do this, you should design your applications to be able to handle it. It can be difficult to maintain referential integrity and consistency between shards, so you should minimize operations that affect data in multiple shards. If an application must modify data across shards, evaluate whether complete data consistency is actually required. Instead, a common approach in the cloud is to implement eventual consistency. The data in each partition is updated separately, and the application logic must take responsibility for ensuring that the updates all complete successfully, as well as handling the inconsistencies that can arise from querying data while an eventually consistent operation is running. For more information about implementing eventual consistency, see the Data Consistency Primer. Configuring and managing a large number of shards can be a challenge. 
Tasks such as monitoring, backing up, checking for consistency, and logging or auditing must be accomplished on multiple shards and servers, possibly held in multiple locations. These tasks are likely to be implemented using scripts or other automation solutions, but that might not completely eliminate the additional administrative requirements.

Shards can be geolocated so that the data that they contain is close to the instances of an application that use it. This approach can considerably improve performance, but requires additional consideration for tasks that must access multiple shards in different locations.

When to use this pattern
Use this pattern when a data store is likely to need to scale beyond the resources available to a single storage node, or to improve performance by reducing contention in a data store.

NOTE
The primary focus of sharding is to improve the performance and scalability of a system, but as a by-product it can also improve availability due to how the data is divided into separate partitions. A failure in one partition doesn't necessarily prevent an application from accessing data held in other partitions, and an operator can perform maintenance or recovery of one or more partitions without making the entire data for an application inaccessible. For more information, see the Data Partitioning Guidance.

Example
The following example in C# uses a set of SQL Server databases acting as shards. Each database holds a subset of the data used by an application. The application retrieves data that's distributed across the shards using its own sharding logic (this is an example of a fan-out query). The details of the data that's located in each shard are returned by a method called GetShards. This method returns an enumerable list of ShardInformation objects, where the ShardInformation type contains an identifier for each shard and the SQL Server connection string that an application should use to connect to the shard (the connection strings aren't shown in the code example).

private IEnumerable<ShardInformation> GetShards()
{
    // This retrieves the connection information from a shard store
    // (commonly a root database).
    return new[]
    {
        new ShardInformation
        {
            Id = 1,
            ConnectionString = ...
        },
        new ShardInformation
        {
            Id = 2,
            ConnectionString = ...
        }
    };
}

The code below shows how the application uses the list of ShardInformation objects to perform a query that fetches data from each shard in parallel. The details of the query aren't shown, but in this example the data that's retrieved contains a string that could hold information such as the name of a customer if the shards contain the details of customers. The results are aggregated into a ConcurrentBag collection for processing by the application.

// Retrieve the shards as a ShardInformation[] instance.
var shards = GetShards();

var results = new ConcurrentBag<string>();

// Execute the query against each shard in the shard list.
// This list would typically be retrieved from configuration
// or from a root/master shard store.
Parallel.ForEach(shards, shard =>
{
    // NOTE: Transient fault handling isn't included,
    // but should be incorporated when used in a real world application.
    using (var con = new SqlConnection(shard.ConnectionString))
    {
        con.Open();
        var cmd = new SqlCommand("SELECT ... FROM ...", con);

        Trace.TraceInformation("Executing command against shard: {0}", shard.Id);

        var reader = cmd.ExecuteReader();

        // Read the results into a thread-safe data structure.
        while (reader.Read())
        {
            results.Add(reader.GetString(0));
        }
    }
});

Trace.TraceInformation("Fanout query complete - Record Count: {0}", results.Count);

Related patterns and guidance
The following patterns and guidance might also be relevant when implementing this pattern:
Data Consistency Primer. It might be necessary to maintain consistency for data distributed across different shards. Summarizes the issues surrounding maintaining consistency over distributed data, and describes the benefits and tradeoffs of different consistency models.
Data Partitioning Guidance. Sharding a data store can introduce a range of additional issues. Describes these issues in relation to partitioning data stores in the cloud to improve scalability, reduce contention, and optimize performance.
Index Table pattern. Sometimes it isn't possible to completely support queries just through the design of the shard key. Enables an application to quickly retrieve data from a large data store by specifying a key other than the shard key.
Materialized View pattern. To maintain the performance of some query operations, it's useful to create materialized views that aggregate and summarize data, especially if this summary data is based on information that's distributed across shards. Describes how to generate and populate these views.

Sidecar pattern
Deploy components of an application into a separate process or container to provide isolation and encapsulation. This pattern can also enable applications to be composed of heterogeneous components and technologies.

This pattern is named Sidecar because it resembles a sidecar attached to a motorcycle. In the pattern, the sidecar is attached to a parent application and provides supporting features for the application. The sidecar also shares the same lifecycle as the parent application, being created and retired alongside the parent. The sidecar pattern is sometimes referred to as the sidekick pattern and is a decomposition pattern.

Context and Problem
Applications and services often require related functionality, such as monitoring, logging, configuration, and networking services. These peripheral tasks can be implemented as separate components or services. If they are tightly integrated into the application, they can run in the same process as the application, making efficient use of shared resources. However, this also means they are not well isolated, and an outage in one of these components can affect other components or the entire application. Also, they usually need to be implemented using the same language as the parent application. As a result, the component and the application are closely interdependent.

If the application is decomposed into services, then each service can be built using different languages and technologies. While this gives more flexibility, it means that each component has its own dependencies and requires language-specific libraries to access the underlying platform and any resources shared with the parent application. In addition, deploying these features as separate services can add latency to the application. Managing the code and dependencies for these language-specific interfaces can also add considerable complexity, especially for hosting, deployment, and management.

Solution
Co-locate a cohesive set of tasks with the primary application, but place them inside their own process or container, providing a homogeneous interface for platform services across languages.
A sidecar service is not necessarily part of the application, but is connected to it. It goes wherever the parent application goes. Sidecars are supporting processes or services that are deployed with the primary application. On a motorcycle, the sidecar is attached to one motorcycle, and each motorcycle can have its own sidecar. In the same way, a sidecar service shares the fate of its parent application. For each instance of the application, an instance of the sidecar is deployed and hosted alongside it.

Advantages of using a sidecar pattern include:
A sidecar is independent from its primary application in terms of runtime environment and programming language, so you don't need to develop one sidecar per language.
The sidecar can access the same resources as the primary application. For example, a sidecar can monitor system resources used by both the sidecar and the primary application.
Because of its proximity to the primary application, there’s no significant latency when communicating between them.
Even for applications that don’t provide an extensibility mechanism, you can use a sidecar to extend functionality by attaching it as its own process in the same host or sub-container as the primary application.

The sidecar pattern is often used with containers and referred to as a sidecar container or sidekick container.

Issues and Considerations
Consider the deployment and packaging format you will use to deploy services, processes, or containers. Containers are particularly well suited to the sidecar pattern.
When designing a sidecar service, carefully decide on the interprocess communication mechanism. Try to use language- or framework-agnostic technologies unless performance requirements make that impractical.
Before putting functionality into a sidecar, consider whether it would work better as a separate service or a more traditional daemon. Also consider whether the functionality could be implemented as a library or using a traditional extension mechanism. Language-specific libraries may have a deeper level of integration and less network overhead.

When to Use this Pattern
Use this pattern when:
Your primary application uses a heterogeneous set of languages and frameworks. A component located in a sidecar service can be consumed by applications written in different languages using different frameworks.
A component is owned by a remote team or a different organization.
A component or feature must be co-located on the same host as the application.
You need a service that shares the overall lifecycle of your main application, but can be independently updated.
You need fine-grained control over resource limits for a particular resource or component. For example, you may want to restrict the amount of memory a specific component uses. You can deploy the component as a sidecar and manage memory usage independently of the main application.

This pattern may not be suitable:
When interprocess communication needs to be optimized. Communication between a parent application and sidecar services includes some overhead, notably latency in the calls. This may not be an acceptable trade-off for chatty interfaces.
For small applications where the resource cost of deploying a sidecar service for each instance is not worth the advantage of isolation.
When the service needs to scale differently than or independently from the main application. If so, it may be better to deploy the feature as a separate service.

Example
The sidecar pattern is applicable to many scenarios.
Some common examples:
Infrastructure API. The infrastructure development team creates a service that's deployed alongside each application, instead of a language-specific client library to access the infrastructure. The service is loaded as a sidecar and provides a common layer for infrastructure services, including logging, environment data, configuration store, discovery, health checks, and watchdog services. The sidecar also monitors the parent application's host environment and process (or container) and logs the information to a centralized service.
Manage NGINX/HAProxy. Deploy NGINX with a sidecar service that monitors environment state, then updates the NGINX configuration file and recycles the process when a change in state is needed.
Ambassador sidecar. Deploy an ambassador service as a sidecar. The application calls through the ambassador, which handles request logging, routing, circuit breaking, and other connectivity-related features.
Offload proxy. Place an NGINX proxy in front of a node.js service instance, to handle serving static file content for the service.

Related guidance
Ambassador pattern

Static Content Hosting pattern
Deploy static content to a cloud-based storage service that can deliver it directly to the client. This can reduce the need for potentially expensive compute instances.

Context and problem
Web applications typically include some elements of static content. This static content might include HTML pages and other resources such as images and documents that are available to the client, either as part of an HTML page (such as inline images, style sheets, and client-side JavaScript files) or as separate downloads (such as PDF documents). Although web servers are optimized for dynamic rendering and output caching, they still have to handle requests to download static content. This consumes processing cycles that could often be put to better use.

Solution
In most cloud hosting environments, you can put some of an application's resources and static pages in a storage service. The storage service can serve requests for these resources, reducing load on the compute resources that handle other web requests. The cost for cloud-hosted storage is typically much less than for compute instances. When hosting some parts of an application in a storage service, the main considerations are related to deployment of the application and to securing resources that aren't intended to be available to anonymous users.

Issues and considerations
Consider the following points when deciding how to implement this pattern:
The hosted storage service must expose an HTTP endpoint that users can access to download the static resources. Some storage services also support HTTPS, so it's possible to host resources in storage services that require SSL.
For maximum performance and availability, consider using a content delivery network (CDN) to cache the contents of the storage container in multiple datacenters around the world. However, you'll likely have to pay for using the CDN.
Storage accounts are often geo-replicated by default to provide resiliency against events that might affect a datacenter. This means that the IP address might change, but the URL will remain the same.
When some content is located in a storage account and other content is in a hosted compute instance, it becomes more challenging to deploy and update the application.
You might have to perform separate deployments, and version the application and content to manage it more easily—especially when the static content includes script files or UI components. However, if only static resources have to be updated, they can simply be uploaded to the storage account without needing to redeploy the application package.
Storage services might not support the use of custom domain names. In this case it's necessary to specify the full URL of the resources in links because they'll be in a different domain from the dynamically generated content containing the links.
The storage containers must be configured for public read access, but it's vital to ensure that they aren't configured for public write access to prevent users from being able to upload content.
Consider using a valet key or token to control access to resources that shouldn't be available anonymously. See the Valet Key pattern for more information.

When to use this pattern
This pattern is useful for:
Minimizing the hosting cost for websites and applications that contain some static resources.
Minimizing the hosting cost for websites that consist of only static content and resources. Depending on the capabilities of the hosting provider's storage system, it might be possible to entirely host a fully static website in a storage account.
Exposing static resources and content for applications running in other hosting environments or on-premises servers.
Locating content in more than one geographical area using a content delivery network that caches the contents of the storage account in multiple datacenters around the world.
Monitoring costs and bandwidth usage. Using a separate storage account for some or all of the static content allows the costs to be more easily separated from hosting and runtime costs.

This pattern might not be useful in the following situations:
The application needs to perform some processing on the static content before delivering it to the client. For example, it might be necessary to add a timestamp to a document.
The volume of static content is very small. The overhead of retrieving this content from separate storage can outweigh the cost benefit of separating it out from the compute resource.

Example
Azure Storage supports serving static content directly from a storage container. Files are served through anonymous access requests. By default, files have a URL in a subdomain of core.windows.net, such as https://contoso.z4.web.core.windows.net/image.png. You can configure a custom domain name, and use Azure CDN to access the files over HTTPS. For more information, see Static website hosting in Azure Storage.

Static website hosting makes the files available for anonymous access. If you need to control who can access the files, you can store files in Azure blob storage and then generate shared access signatures to limit access.

The links in the pages delivered to the client must specify the full URL of the resource. If the resource is protected with a valet key, such as a shared access signature, this signature must be included in the URL.

A sample application that demonstrates using external storage for static resources is available on GitHub. This sample uses configuration files to specify the storage account and container that holds the static content.
<Setting name="StaticContent.StorageConnectionString" value="UseDevelopmentStorage=true" />
<Setting name="StaticContent.Container" value="static-content" />

The Settings class in the file Settings.cs of the StaticContentHosting.Web project contains methods to extract these values and build a string value containing the cloud storage account container URL.

public class Settings
{
    public static string StaticContentStorageConnectionString
    {
        get
        {
            return RoleEnvironment.GetConfigurationSettingValue(
                "StaticContent.StorageConnectionString");
        }
    }

    public static string StaticContentContainer
    {
        get
        {
            return RoleEnvironment.GetConfigurationSettingValue("StaticContent.Container");
        }
    }

    public static string StaticContentBaseUrl
    {
        get
        {
            var account = CloudStorageAccount.Parse(StaticContentStorageConnectionString);

            return string.Format("{0}/{1}", account.BlobEndpoint.ToString().TrimEnd('/'),
                                            StaticContentContainer.TrimStart('/'));
        }
    }
}

The StaticContentUrlHtmlHelper class in the file StaticContentUrlHtmlHelper.cs exposes a method named StaticContentUrl that generates a URL containing the path to the cloud storage account if the URL passed to it starts with the ASP.NET root path character (~).

public static class StaticContentUrlHtmlHelper
{
    public static string StaticContentUrl(this HtmlHelper helper, string contentPath)
    {
        if (contentPath.StartsWith("~"))
        {
            contentPath = contentPath.Substring(1);
        }

        contentPath = string.Format("{0}/{1}", Settings.StaticContentBaseUrl.TrimEnd('/'),
                                               contentPath.TrimStart('/'));

        var url = new UrlHelper(helper.ViewContext.RequestContext);

        return url.Content(contentPath);
    }
}

The file Index.cshtml in the Views\Home folder contains an image element that uses the StaticContentUrl method to create the URL for its src attribute.

<img src="@Html.StaticContentUrl("~/media/orderedList1.png")" alt="Test Image" />

Related patterns and guidance
Static Content Hosting sample. A sample application that demonstrates this pattern.
Valet Key pattern. If the target resources aren't supposed to be available to anonymous users, use this pattern to restrict direct access.
Serverless web application on Azure. A reference architecture that uses static website hosting with Azure Functions to implement a serverless web app.

Strangler pattern
Incrementally migrate a legacy system by gradually replacing specific pieces of functionality with new applications and services. As features from the legacy system are replaced, the new system eventually replaces all of the old system's features, strangling the old system and allowing you to decommission it.

Context and problem
As systems age, the development tools, hosting technology, and even system architectures they were built on can become increasingly obsolete. As new features and functionality are added, the complexity of these applications can increase dramatically, making them harder to maintain or add new features to. Completely replacing a complex system can be a huge undertaking. Often, you will need a gradual migration to a new system, while keeping the old system to handle features that haven't been migrated yet. However, running two separate versions of an application means that clients have to know where particular features are located. Every time a feature or service is migrated, clients need to be updated to point to the new location.

Solution
Incrementally replace specific pieces of functionality with new applications and services.
Create a façade that intercepts requests going to the backend legacy system. The façade routes these requests either to the legacy application or the new services. Existing features can be migrated to the new system gradually, and consumers can continue using the same interface, unaware that any migration has taken place. This pattern helps to minimize risk from the migration, and spread the development effort over time. With the façade safely routing users to the correct application, you can add functionality to the new system at whatever pace you like, while ensuring the legacy application continues to function. Over time, as features are migrated to the new system, the legacy system is eventually "strangled" and is no longer necessary. Once this process is complete, the legacy system can safely be retired. Issues and considerations Consider how to handle services and data stores that are potentially used by both new and legacy systems. Make sure both can access these resources side-by-side. Structure new applications and services in a way that they can easily be intercepted and replaced in future strangler migrations. At some point, when the migration is complete, the strangler façade will either go away or evolve into an adaptor for legacy clients. Make sure the façade keeps up with the migration. Make sure the façade doesn't become a single point of failure or a performance bottleneck. When to use this pattern Use this pattern when gradually migrating a back-end application to a new architecture. This pattern may not be suitable: When requests to the back-end system cannot be intercepted. For smaller systems where the complexity of wholesale replacement is low. Related guidance Martin Fowler's blog post on StranglerApplication Throttling pattern 3/13/2019 • 7 minutes to read • Edit Online Control the consumption of resources used by an instance of an application, an individual tenant, or an entire service. This can allow the system to continue to function and meet service level agreements, even when an increase in demand places an extreme load on resources. Context and problem The load on a cloud application typically varies over time based on the number of active users or the types of activities they are performing. For example, more users are likely to be active during business hours, or the system might be required to perform computationally expensive analytics at the end of each month. There might also be sudden and unanticipated bursts in activity. If the processing requirements of the system exceed the capacity of the resources that are available, it'll suffer from poor performance and can even fail. If the system has to meet an agreed level of service, such failure could be unacceptable. There're many strategies available for handling varying load in the cloud, depending on the business goals for the application. One strategy is to use autoscaling to match the provisioned resources to the user needs at any given time. This has the potential to consistently meet user demand, while optimizing running costs. However, while autoscaling can trigger the provisioning of additional resources, this provisioning isn't immediate. If demand grows quickly, there can be a window of time where there's a resource deficit. Solution An alternative strategy to autoscaling is to allow applications to use resources only up to a limit, and then throttle them when this limit is reached. 
The system should monitor how it's using resources so that, when usage exceeds the threshold, it can throttle requests from one or more users. This will enable the system to continue functioning and meet any service level agreements (SLAs) that are in place. For more information on monitoring resource usage, see the Instrumentation and Telemetry Guidance. The system could implement several throttling strategies, including: Rejecting requests from an individual user who's already accessed system APIs more than n times per second over a given period of time. This requires the system to meter the use of resources for each tenant or user running an application. For more information, see the Service Metering Guidance. Disabling or degrading the functionality of selected nonessential services so that essential services can run unimpeded with sufficient resources. For example, if the application is streaming video output, it could switch to a lower resolution. Using load leveling to smooth the volume of activity (this approach is covered in more detail by the Queue-based Load Leveling pattern). In a multi-tenant environment, this approach will reduce the performance for every tenant. If the system must support a mix of tenants with different SLAs, the work for high-value tenants might be performed immediately. Requests for other tenants can be held back, and handled when the backlog has eased. The Priority Queue pattern could be used to help implement this approach. Deferring operations being performed on behalf of lower priority applications or tenants. These operations can be suspended or limited, with an exception generated to inform the tenant that the system is busy and that the operation should be retried later. The figure shows an area graph for resource use (a combination of memory, CPU, bandwidth, and other factors) against time for applications that are making use of three features. A feature is an area of functionality, such as a component that performs a specific set of tasks, a piece of code that performs a complex calculation, or an element that provides a service such as an in-memory cache. These features are labeled A, B, and C. The area immediately below the line for a feature indicates the resources that are used by applications when they invoke this feature. For example, the area below the line for Feature A shows the resources used by applications that are making use of Feature A, and the area between the lines for Feature A and Feature B indicates the resources used by applications invoking Feature B. Aggregating the areas for each feature shows the total resource use of the system. The previous figure illustrates the effects of deferring operations. Just prior to time T1, the total resources allocated to all applications using these features reach a threshold (the limit of resource use). At this point, the applications are in danger of exhausting the resources available. In this system, Feature B is less critical than Feature A or Feature C, so it's temporarily disabled and the resources that it was using are released. Between times T1 and T2, the applications using Feature A and Feature C continue running as normal. Eventually, the resource use of these two features diminishes to the point when, at time T2, there is sufficient capacity to enable Feature B again. The autoscaling and throttling approaches can also be combined to help keep the applications responsive and within SLAs. 
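Before looking at how throttling combines with autoscaling, the following minimal C# sketch illustrates the first strategy listed above: rejecting requests from a tenant (or user) that has exceeded an agreed request rate. The class name, the fixed-window counting scheme, and the suggestion to respond with HTTP 429 (Too Many Requests) and a Retry-After header are illustrative assumptions, not part of the pattern's sample code.

using System;
using System.Collections.Concurrent;

// Hypothetical per-tenant request limiter: allows at most 'limit' requests per tenant
// within each fixed time window and rejects the rest, so the service as a whole stays
// within its capacity. A real system would also emit metering and telemetry data.
public class TenantRequestThrottle
{
    private readonly int limit;
    private readonly TimeSpan window;
    private readonly ConcurrentDictionary<string, (DateTime WindowStart, int Count)> counters =
        new ConcurrentDictionary<string, (DateTime WindowStart, int Count)>();

    public TenantRequestThrottle(int limit, TimeSpan window)
    {
        this.limit = limit;
        this.window = window;
    }

    // Returns true if the request should be processed, or false if it should be rejected
    // (for example, with HTTP 429 Too Many Requests and a Retry-After header).
    public bool TryAcquire(string tenantId)
    {
        var now = DateTime.UtcNow;
        var entry = counters.AddOrUpdate(
            tenantId,
            _ => (now, 1),
            (_, current) => now - current.WindowStart >= window
                ? (now, 1)                                   // start a new window
                : (current.WindowStart, current.Count + 1)); // count within the current window

        return entry.Count <= limit;
    }
}

A gateway or front-end message handler would call TryAcquire for each incoming request and reject the call when it returns false, while telemetry on the rejection rate can feed the scaling decisions discussed next.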
If the demand is expected to remain high, throttling provides a temporary solution while the system scales out. At this point, the full functionality of the system can be restored. The next figure shows an area graph of the overall resource use by all applications running in a system against time, and illustrates how throttling can be combined with autoscaling. At time T1, the threshold specifying the soft limit of resource use is reached. At this point, the system can start to scale out. However, if the new resources don't become available quickly enough, then the existing resources might be exhausted and the system could fail. To prevent this from occurring, the system is temporarily throttled, as described earlier. When autoscaling has completed and the additional resources are available, throttling can be relaxed. Issues and considerations You should consider the following points when deciding how to implement this pattern: Throttling an application, and the strategy to use, is an architectural decision that impacts the entire design of a system. Throttling should be considered early in the application design process because it isn't easy to add once a system has been implemented. Throttling must be performed quickly. The system must be capable of detecting an increase in activity and react accordingly. The system must also be able to revert to its original state quickly after the load has eased. This requires that the appropriate performance data is continually captured and monitored. If a service needs to temporarily deny a user request, it should return a specific error code so the client application understands that the reason for the refusal to perform an operation is due to throttling. The client application can wait for a period before retrying the request. Throttling can be used as a temporary measure while a system autoscales. In some cases it's better to simply throttle, rather than to scale, if a burst in activity is sudden and isn't expected to be long lived because scaling can add considerably to running costs. If throttling is being used as a temporary measure while a system autoscales, and if resource demands grow very quickly, the system might not be able to continue functioning—even when operating in a throttled mode. If this isn't acceptable, consider maintaining larger capacity reserves and configuring more aggressive autoscaling. When to use this pattern Use this pattern: To ensure that a system continues to meet service level agreements. To prevent a single tenant from monopolizing the resources provided by an application. To handle bursts in activity. To help cost-optimize a system by limiting the maximum resource levels needed to keep it functioning. Example The final figure illustrates how throttling can be implemented in a multi-tenant system. Users from each of the tenant organizations access a cloud-hosted application where they fill out and submit surveys. The application contains instrumentation that monitors the rate at which these users are submitting requests to the application. In order to prevent the users from one tenant affecting the responsiveness and availability of the application for all other users, a limit is applied to the number of requests per second the users from any one tenant can submit. The application blocks requests that exceed this limit. Related patterns and guidance The following patterns and guidance may also be relevant when implementing this pattern: Instrumentation and Telemetry Guidance. 
Throttling depends on gathering information about how heavily a service is being used. Describes how to generate and capture custom monitoring information. Service Metering Guidance. Describes how to meter the use of services in order to gain an understanding of how they are used. This information can be useful in determining how to throttle a service. Autoscaling Guidance. Throttling can be used as an interim measure while a system autoscales, or to remove the need for a system to autoscale. Contains information on autoscaling strategies. Queue-based Load Leveling pattern. Queue-based load leveling is a commonly used mechanism for implementing throttling. A queue can act as a buffer that helps to even out the rate at which requests sent by an application are delivered to a service. Priority Queue pattern. A system can use priority queuing as part of its throttling strategy to maintain performance for critical or higher value applications, while reducing the performance of less important applications. Valet Key pattern 3/13/2019 • 13 minutes to read • Edit Online Use a token that provides clients with restricted direct access to a specific resource, in order to offload data transfer from the application. This is particularly useful in applications that use cloud-hosted storage systems or queues, and can minimize cost and maximize scalability and performance. Context and problem Client programs and web browsers often need to read and write files or data streams to and from an application’s storage. Typically, the application will handle the movement of the data — either by fetching it from storage and streaming it to the client, or by reading the uploaded stream from the client and storing it in the data store. However, this approach absorbs valuable resources such as compute, memory, and bandwidth. Data stores have the ability to handle upload and download of data directly, without requiring that the application perform any processing to move this data. But, this typically requires the client to have access to the security credentials for the store. This can be a useful technique to minimize data transfer costs and the requirement to scale out the application, and to maximize performance. It means, though, that the application is no longer able to manage the security of the data. After the client has a connection to the data store for direct access, the application can't act as the gatekeeper. It's no longer in control of the process and can't prevent subsequent uploads or downloads from the data store. This isn't a realistic approach in distributed systems that need to serve untrusted clients. Instead, applications must be able to securely control access to data in a granular way, but still reduce the load on the server by setting up this connection and then allowing the client to communicate directly with the data store to perform the required read or write operations. Solution You need to resolve the problem of controlling access to a data store where the store can't manage authentication and authorization of clients. One typical solution is to restrict access to the data store’s public connection and provide the client with a key or token that the data store can validate. This key or token is usually referred to as a valet key. It provides time-limited access to specific resources and allows only predefined operations such as reading and writing to storage or queues, or uploading and downloading in a web browser. 
Applications can create and issue valet keys to client devices and web browsers quickly and easily, allowing clients to perform the required operations without requiring the application to directly handle the data transfer. This removes the processing overhead, and the impact on performance and scalability, from the application and the server. The client uses this token to access a specific resource in the data store for only a specific period, and with specific restrictions on access permissions, as shown in the figure. After the specified period, the key becomes invalid and won't allow access to the resource. It's also possible to configure a key that has other dependencies, such as the scope of the data. For example, depending on the data store capabilities, the key can specify a complete table in a data store, or only specific rows in a table. In cloud storage systems the key can specify a container, or just a specific item within a container. The key can also be invalidated by the application. This is a useful approach if the client notifies the server that the data transfer operation is complete. The server can then invalidate that key to prevent further access. Using this pattern can simplify managing access to resources because there's no requirement to create and authenticate a user, grant permissions, and then remove the user again. It also makes it easy to limit the location, the permission, and the validity period—all by simply generating a key at runtime. The important factors are to limit the validity period, and especially the location of the resource, as tightly as possible so that the recipient can only use it for the intended purpose. Issues and considerations Consider the following points when deciding how to implement this pattern: Manage the validity status and period of the key. If leaked or compromised, the key effectively unlocks the target item and makes it available for malicious use during the validity period. A key can usually be revoked or disabled, depending on how it was issued. Server-side policies can be changed or, the server key it was signed with can be invalidated. Specify a short validity period to minimize the risk of allowing unauthorized operations to take place against the data store. However, if the validity period is too short, the client might not be able to complete the operation before the key expires. Allow authorized users to renew the key before the validity period expires if multiple accesses to the protected resource are required. Control the level of access the key will provide. Typically, the key should allow the user to only perform the actions necessary to complete the operation, such as read-only access if the client shouldn't be able to upload data to the data store. For file uploads, it's common to specify a key that provides write-only permission, as well as the location and the validity period. It's critical to accurately specify the resource or the set of resources to which the key applies. Consider how to control users’ behavior. Implementing this pattern means some loss of control over the resources users are granted access to. The level of control that can be exerted is limited by the capabilities of the policies and permissions available for the service or the target data store. For example, it's usually not possible to create a key that limits the size of the data to be written to storage, or the number of times the key can be used to access a file. 
This can result in huge unexpected costs for data transfer, even when used by the intended client, and might be caused by an error in the code that causes repeated upload or download. To limit the number of times a file can be uploaded, where possible, force the client to notify the application when one operation has completed. For example, some data stores raise events the application code can use to monitor operations and control user behavior. However, it's hard to enforce quotas for individual users in a multi-tenant scenario where the same key is used by all the users from one tenant. Validate, and optionally sanitize, all uploaded data. A malicious user that gains access to the key could upload data designed to compromise the system. Alternatively, authorized users might upload data that's invalid and, when processed, could result in an error or system failure. To protect against this, ensure that all uploaded data is validated and checked for malicious content before use. Audit all operations. Many key-based mechanisms can log operations such as uploads, downloads, and failures. These logs can usually be incorporated into an audit process, and also used for billing if the user is charged based on file size or data volume. Use the logs to detect authentication failures that might be caused by issues with the key provider, or accidental removal of a stored access policy. Deliver the key securely. It can be embedded in a URL that the user activates in a web page, or it can be used in a server redirection operation so that the download occurs automatically. Always use HTTPS to deliver the key over a secure channel. Protect sensitive data in transit. Sensitive data delivered through the application will usually take place using SSL or TLS, and this should be enforced for clients accessing the data store directly. Other issues to be aware of when implementing this pattern are: If the client doesn't, or can't, notify the server of completion of the operation, and the only limit is the expiration period of the key, the application won't be able to perform auditing operations such as counting the number of uploads or downloads, or preventing multiple uploads or downloads. The flexibility of key policies that can be generated might be limited. For example, some mechanisms only allow the use of a timed expiration period. Others aren't able to specify a sufficient granularity of read/write permissions. If the start time for the key or token validity period is specified, ensure that it's a little earlier than the current server time to allow for client clocks that might be slightly out of synchronization. The default, if not specified, is usually the current server time. The URL containing the key will be recorded in server log files. While the key will typically have expired before the log files are used for analysis, ensure that you limit access to them. If log data is transmitted to a monitoring system or stored in another location, consider implementing a delay to prevent leakage of keys until after their validity period has expired. If the client code runs in a web browser, the browser might need to support cross-origin resource sharing (CORS ) to enable code that executes within the web browser to access data in a different domain from the one that served the page. Some older browsers and some data stores don't support CORS, and code that runs in these browsers might be able to use a valet key to provide access to data in a different domain, such as a cloud storage account. 
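Several of the considerations above depend on being able to revoke or disable a key before its validity period ends. With Azure Storage, this is typically easier when the key is associated with a stored access policy on the container rather than being entirely self-contained. The following sketch uses the same Azure Storage client library as the example later in this pattern; the method names and the policy name are illustrative assumptions.

using System;
using Microsoft.WindowsAzure.Storage.Blob;

public static class AccessPolicyManager
{
    // Creates (or replaces) a stored access policy named "upload-policy" on the container.
    // SAS tokens issued against this policy can later be revoked by removing the policy.
    public static void CreateUploadPolicy(CloudBlobContainer container)
    {
        var permissions = container.GetPermissions();
        permissions.SharedAccessPolicies["upload-policy"] = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Write,
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
        };
        container.SetPermissions(permissions);
    }

    // Revokes every SAS token that references the stored policy by deleting the policy.
    public static void RevokeUploadPolicy(CloudBlobContainer container)
    {
        var permissions = container.GetPermissions();
        permissions.SharedAccessPolicies.Remove("upload-policy");
        container.SetPermissions(permissions);
    }
}

A token generated with blob.GetSharedAccessSignature(null, "upload-policy") inherits its permissions and expiry from the stored policy, so changing or removing the policy takes effect without reissuing tokens.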
When to use this pattern This pattern is useful for the following situations: To minimize resource loading and maximize performance and scalability. Using a valet key doesn't require the resource to be locked, no remote server call is required, there's no limit on the number of valet keys that can be issued, and it avoids a single point of failure resulting from performing the data transfer through the application code. Creating a valet key is typically a simple cryptographic operation of signing a string with a key. To minimize operational cost. Enabling direct access to stores and queues is resource and cost efficient, can result in fewer network round trips, and might allow for a reduction in the number of compute resources required. When clients regularly upload or download data, particularly where there's a large volume or when each operation involves large files. When the application has limited compute resources available, either due to hosting limitations or cost considerations. In this scenario, the pattern is even more helpful if there are many concurrent data uploads or downloads because it relieves the application from handling the data transfer. When the data is stored in a remote data store or a different datacenter. If the application was required to act as a gatekeeper, there might be a charge for the additional bandwidth of transferring the data between datacenters, or across public or private networks between the client and the application, and then between the application and the data store. This pattern might not be useful in the following situations: If the application must perform some task on the data before it's stored or before it's sent to the client. For example, if the application needs to perform validation, log access success, or execute a transformation on the data. However, some data stores and clients are able to negotiate and carry out simple transformations such as compression and decompression (for example, a web browser can usually handle GZip formats). If the design of an existing application makes it difficult to incorporate the pattern. Using this pattern typically requires a different architectural approach for delivering and receiving data. If it's necessary to maintain audit trails or control the number of times a data transfer operation is executed, and the valet key mechanism in use doesn't support notifications that the server can use to manage these operations. If it's necessary to limit the size of the data, especially during upload operations. The only solution to this is for the application to check the data size after the operation is complete, or check the size of uploads after a specified period or on a scheduled basis. Example Azure supports shared access signatures on Azure Storage for granular access control to data in blobs, tables, and queues, and for Service Bus queues and topics. A shared access signature token can be configured to provide specific access rights such as read, write, update, and delete to a specific table; a key range within a table; a queue; a blob; or a blob container. The validity can be a specified time period or with no time limit. Azure shared access signatures also support server-stored access policies that can be associated with a specific resource such as a table or blob. This feature provides additional control and flexibility compared to applicationgenerated shared access signature tokens, and should be used whenever possible. 
Settings defined in a server-stored policy can be changed and are reflected in the token without requiring a new token to be issued, but settings defined in the token can't be changed without issuing a new token. This approach also makes it possible to revoke a valid shared access signature token before it's expired. For more information, see Introducing Table SAS (Shared Access Signature), Queue SAS and update to Blob SAS and Using Shared Access Signatures on MSDN.
The following code shows how to create a shared access signature token that's valid for five minutes. The GetSharedAccessReferenceForUpload method returns a shared access signature token that can be used to upload a file to Azure Blob Storage.

public class ValuesController : ApiController
{
    private readonly CloudStorageAccount account;
    private readonly string blobContainer;
    ...
    /// <summary>
    /// Return a limited access key that allows the caller to upload a file
    /// to this specific destination for a defined period of time.
    /// </summary>
    private StorageEntitySas GetSharedAccessReferenceForUpload(string blobName)
    {
        var blobClient = this.account.CreateCloudBlobClient();
        var container = blobClient.GetContainerReference(this.blobContainer);

        var blob = container.GetBlockBlobReference(blobName);

        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Write,

            // Specify a start time five minutes earlier to allow for client clock skew.
            SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),

            // Specify a validity period of five minutes starting from now.
            SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(5)
        };

        // Create the signature.
        var sas = blob.GetSharedAccessSignature(policy);

        return new StorageEntitySas
        {
            BlobUri = blob.Uri,
            Credentials = sas,
            Name = blobName
        };
    }

    public struct StorageEntitySas
    {
        public string Credentials;
        public Uri BlobUri;
        public string Name;
    }
}

The complete sample is available in the ValetKey solution available for download from GitHub. The ValetKey.Web project in this solution contains a web application that includes the ValuesController class shown above. A sample client application that uses this web application to retrieve a shared access signature key and upload a file to blob storage is available in the ValetKey.Client project.

Next steps
The following patterns and guidance might also be relevant when implementing this pattern:
A sample that demonstrates this pattern is available on GitHub.
Gatekeeper pattern. This pattern can be used in conjunction with the Valet Key pattern to protect applications and services by using a dedicated host instance that acts as a broker between clients and the application or service. The gatekeeper validates and sanitizes requests, and passes requests and data between the client and the application. It can provide an additional layer of security, and reduce the attack surface of the system.
Static Content Hosting pattern. Describes how to deploy static resources to a cloud-based storage service that can deliver these resources directly to the client to reduce the requirement for expensive compute instances. Where the resources aren't intended to be publicly available, the Valet Key pattern can be used to secure them.
Introducing Table SAS (Shared Access Signature), Queue SAS and update to Blob SAS
Using Shared Access Signatures
Shared Access Signature Authentication with Service Bus

API design
3/13/2019 • 28 minutes to read • Edit Online

Most modern web applications expose APIs that clients can use to interact with the application.
A well-designed web API should aim to support: Platform independence. Any client should be able to call the API, regardless of how the API is implemented internally. This requires using standard protocols, and having a mechanism whereby the client and the web service can agree on the format of the data to exchange. Service evolution. The web API should be able to evolve and add functionality independently from client applications. As the API evolves, existing client applications should continue to function without modification. All functionality should be discoverable so that client applications can fully use it. This guidance describes issues that you should consider when designing a web API. Introduction to REST In 2000, Roy Fielding proposed Representational State Transfer (REST) as an architectural approach to designing web services. REST is an architectural style for building distributed systems based on hypermedia. REST is independent of any underlying protocol and is not necessarily tied to HTTP. However, most common REST implementations use HTTP as the application protocol, and this guide focuses on designing REST APIs for HTTP. A primary advantage of REST over HTTP is that it uses open standards, and does not bind the implementation of the API or the client applications to any specific implementation. For example, a REST web service could be written in ASP.NET, and client applications can use any language or toolset that can generate HTTP requests and parse HTTP responses. Here are some of the main design principles of RESTful APIs using HTTP: REST APIs are designed around resources, which are any kind of object, data, or service that can be accessed by the client. A resource has an identifier, which is a URI that uniquely identifies that resource. For example, the URI for a particular customer order might be: https://adventure-works.com/orders/1 Clients interact with a service by exchanging representations of resources. Many web APIs use JSON as the exchange format. For example, a GET request to the URI listed above might return this response body: {"orderId":1,"orderValue":99.90,"productId":1,"quantity":1} REST APIs use a uniform interface, which helps to decouple the client and service implementations. For REST APIs built on HTTP, the uniform interface includes using standard HTTP verbs to perform operations on resources. The most common operations are GET, POST, PUT, PATCH, and DELETE. REST APIs use a stateless request model. HTTP requests should be independent and may occur in any order, so keeping transient state information between requests is not feasible. The only place where information is stored is in the resources themselves, and each request should be an atomic operation. This constraint enables web services to be highly scalable, because there is no need to retain any affinity between clients and specific servers. Any server can handle any request from any client. That said, other factors can limit scalability. For example, many web services write to a backend data store, which may be hard to scale out. (The article Data Partitioning describes strategies to scale out a data store.) REST APIs are driven by hypermedia links that are contained in the representation. For example, the following shows a JSON representation of an order. It contains links to get or update the customer associated with the order. 
{ "orderID":3, "productID":2, "quantity":4, "orderValue":16.60, "links": [ {"rel":"product","href":"https://adventure-works.com/customers/3", "action":"GET" }, {"rel":"product","href":"https://adventure-works.com/customers/3", "action":"PUT" } ] } In 2008, Leonard Richardson proposed the following maturity model for web APIs: Level 0: Define one URI, and all operations are POST requests to this URI. Level 1: Create separate URIs for individual resources. Level 2: Use HTTP methods to define operations on resources. Level 3: Use hypermedia (HATEOAS, described below ). Level 3 corresponds to a truly RESTful API according to Fielding's definition. In practice, many published web APIs fall somewhere around level 2. Organize the API around resources Focus on the business entities that the web API exposes. For example, in an e-commerce system, the primary entities might be customers and orders. Creating an order can be achieved by sending an HTTP POST request that contains the order information. The HTTP response indicates whether the order was placed successfully or not. When possible, resource URIs should be based on nouns (the resource) and not verbs (the operations on the resource). https://adventure-works.com/orders // Good https://adventure-works.com/create-order // Avoid A resource does not have to be based on a single physical data item. For example, an order resource might be implemented internally as several tables in a relational database, but presented to the client as a single entity. Avoid creating APIs that simply mirror the internal structure of a database. The purpose of REST is to model entities and the operations that an application can perform on those entities. A client should not be exposed to the internal implementation. Entities are often grouped together into collections (orders, customers). A collection is a separate resource from the item within the collection, and should have its own URI. For example, the following URI might represent the collection of orders: https://adventure-works.com/orders Sending an HTTP GET request to the collection URI retrieves a list of items in the collection. Each item in the collection also has its own unique URI. An HTTP GET request to the item's URI returns the details of that item. Adopt a consistent naming convention in URIs. In general, it helps to use plural nouns for URIs that reference collections. It's a good practice to organize URIs for collections and items into a hierarchy. For example, /customers is the path to the customers collection, and /customers/5 is the path to the customer with ID equal to 5. This approach helps to keep the web API intuitive. Also, many web API frameworks can route requests based on parameterized URI paths, so you could define a route for the path /customers/{id} . Also consider the relationships between different types of resources and how you might expose these associations. For example, the /customers/5/orders might represent all of the orders for customer 5. You could also go in the other direction, and represent the association from an order back to a customer with a URI such as /orders/99/customer . However, extending this model too far can become cumbersome to implement. A better solution is to provide navigable links to associated resources in the body of the HTTP response message. This mechanism is described in more detail in the section Using the HATEOAS Approach to Enable Navigation To Related Resources later. 
In more complex systems, it can be tempting to provide URIs that enable a client to navigate through several levels of relationships, such as /customers/1/orders/99/products . However, this level of complexity can be difficult to maintain and is inflexible if the relationships between resources change in the future. Instead, try to keep URIs relatively simple. Once an application has a reference to a resource, it should be possible to use this reference to find items related to that resource. The preceding query can be replaced with the URI /customers/1/orders to find all the orders for customer 1, and then /orders/99/products to find the products in this order. TIP Avoid requiring resource URIs more complex than collection/item/collection. Another factor is that all web requests impose a load on the web server. The more requests, the bigger the load. Therefore, try to avoid "chatty" web APIs that expose a large number of small resources. Such an API may require a client application to send multiple requests to find all of the data that it requires. Instead, you might want to denormalize the data and combine related information into bigger resources that can be retrieved with a single request. However, you need to balance this approach against the overhead of fetching data that the client doesn't need. Retrieving large objects can increase the latency of a request and incur additional bandwidth costs. For more information about these performance antipatterns, see Chatty I/O and Extraneous Fetching. Avoid introducing dependencies between the web API and the underlying data sources. For example, if your data is stored in a relational database, the web API doesn't need to expose each table as a collection of resources. In fact, that's probably a poor design. Instead, think of the web API as an abstraction of the database. If necessary, introduce a mapping layer between the database and the web API. That way, client applications are isolated from changes to the underlying database scheme. Finally, it might not be possible to map every operation implemented by a web API to a specific resource. You can handle such non-resource scenarios through HTTP requests that invoke a function and return the results as an HTTP response message. For example, a web API that implements simple calculator operations such as add and subtract could provide URIs that expose these operations as pseudo resources and use the query string to specify the parameters required. For example a GET request to the URI /add?operand1=99&operand2=1 would return a response message with the body containing the value 100. However, only use these forms of URIs sparingly. Define operations in terms of HTTP methods The HTTP protocol defines a number of methods that assign semantic meaning to a request. The common HTTP methods used by most RESTful web APIs are: GET retrieves a representation of the resource at the specified URI. The body of the response message contains the details of the requested resource. POST creates a new resource at the specified URI. The body of the request message provides the details of the new resource. Note that POST can also be used to trigger operations that don't actually create resources. PUT either creates or replaces the resource at the specified URI. The body of the request message specifies the resource to be created or updated. PATCH performs a partial update of a resource. The request body specifies the set of changes to apply to the resource. DELETE removes the resource at the specified URI. 
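The methods just listed map directly onto controller actions in most web frameworks. As a minimal, hypothetical sketch in ASP.NET Web API (the controller, model, and in-memory store are illustrative, and validation is omitted), a customers collection might be exposed like this:

using System.Collections.Generic;
using System.Web.Http;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// With the default route template (api/{controller}/{id}), these actions handle
// GET /api/customers, GET /api/customers/5, POST /api/customers, and DELETE /api/customers/5.
public class CustomersController : ApiController
{
    // In-memory store for illustration only.
    private static readonly Dictionary<int, Customer> store = new Dictionary<int, Customer>();

    // GET the collection.
    public IEnumerable<Customer> Get()
    {
        return store.Values;
    }

    // GET an individual item, or 404 (Not Found) if it doesn't exist.
    public IHttpActionResult Get(int id)
    {
        Customer customer;
        return store.TryGetValue(id, out customer) ? (IHttpActionResult)Ok(customer) : NotFound();
    }

    // POST a new item to the collection, returning 201 (Created) with a Location header.
    public IHttpActionResult Post(Customer customer)
    {
        store[customer.Id] = customer;
        return Created($"/api/customers/{customer.Id}", customer);
    }

    // DELETE an individual item.
    public IHttpActionResult Delete(int id)
    {
        store.Remove(id);
        return StatusCode(System.Net.HttpStatusCode.NoContent);
    }
}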
The effect of a specific request should depend on whether the resource is a collection or an individual item. The following table summarizes the common conventions adopted by most RESTful implementations using the e-commerce example. Note that not all of these requests might be implemented; it depends on the specific scenario.

RESOURCE: /customers
  POST: Create a new customer
  GET: Retrieve all customers
  PUT: Bulk update of customers
  DELETE: Remove all customers

RESOURCE: /customers/1
  POST: Error
  GET: Retrieve the details for customer 1
  PUT: Update the details of customer 1 if it exists
  DELETE: Remove customer 1

RESOURCE: /customers/1/orders
  POST: Create a new order for customer 1
  GET: Retrieve all orders for customer 1
  PUT: Bulk update of orders for customer 1
  DELETE: Remove all orders for customer 1

The differences between POST, PUT, and PATCH can be confusing.
A POST request creates a resource. The server assigns a URI for the new resource, and returns that URI to the client. In the REST model, you frequently apply POST requests to collections. The new resource is added to the collection. A POST request can also be used to submit data for processing to an existing resource, without any new resource being created.
A PUT request creates a resource or updates an existing resource. The client specifies the URI for the resource. The request body contains a complete representation of the resource. If a resource with this URI already exists, it is replaced. Otherwise a new resource is created, if the server supports doing so. PUT requests are most frequently applied to resources that are individual items, such as a specific customer, rather than collections. A server might support updates but not creation via PUT. Whether to support creation via PUT depends on whether the client can meaningfully assign a URI to a resource before it exists. If not, then use POST to create resources and PUT or PATCH to update.
A PATCH request performs a partial update to an existing resource. The client specifies the URI for the resource. The request body specifies a set of changes to apply to the resource. This can be more efficient than using PUT, because the client only sends the changes, not the entire representation of the resource. Technically PATCH can also create a new resource (by specifying a set of updates to a "null" resource), if the server supports this.
PUT requests must be idempotent. If a client submits the same PUT request multiple times, the results should always be the same (the same resource will be modified with the same values). POST and PATCH requests are not guaranteed to be idempotent.

Conform to HTTP semantics
This section describes some typical considerations for designing an API that conforms to the HTTP specification. However, it doesn't cover every possible detail or scenario. When in doubt, consult the HTTP specifications.

Media types
As mentioned earlier, clients and servers exchange representations of resources. For example, in a POST request, the request body contains a representation of the resource to create. In a GET request, the response body contains a representation of the fetched resource.
In the HTTP protocol, formats are specified through the use of media types, also called MIME types. For non-binary data, most web APIs support JSON (media type = application/json) and possibly XML (media type = application/xml).
The Content-Type header in a request or response specifies the format of the representation.
Here is an example of a POST request that includes JSON data:

POST https://adventure-works.com/orders HTTP/1.1
Content-Type: application/json; charset=utf-8
Content-Length: 57

{"Id":1,"Name":"Gizmo","Category":"Widgets","Price":1.99}

If the server doesn't support the media type, it should return HTTP status code 415 (Unsupported Media Type).
A client request can include an Accept header that contains a list of media types the client will accept from the server in the response message. For example:

GET https://adventure-works.com/orders/2 HTTP/1.1
Accept: application/json

If the server cannot match any of the media type(s) listed, it should return HTTP status code 406 (Not Acceptable).

GET methods
A successful GET method typically returns HTTP status code 200 (OK). If the resource cannot be found, the method should return 404 (Not Found).

POST methods
If a POST method creates a new resource, it returns HTTP status code 201 (Created). The URI of the new resource is included in the Location header of the response. The response body contains a representation of the resource.
If the method does some processing but does not create a new resource, the method can return HTTP status code 200 and include the result of the operation in the response body. Alternatively, if there is no result to return, the method can return HTTP status code 204 (No Content) with no response body.
If the client puts invalid data into the request, the server should return HTTP status code 400 (Bad Request). The response body can contain additional information about the error or a link to a URI that provides more details.

PUT methods
If a PUT method creates a new resource, it returns HTTP status code 201 (Created), as with a POST method. If the method updates an existing resource, it returns either 200 (OK) or 204 (No Content). In some cases, it might not be possible to update an existing resource. In that case, consider returning HTTP status code 409 (Conflict).
Consider implementing bulk HTTP PUT operations that can batch updates to multiple resources in a collection. The PUT request should specify the URI of the collection, and the request body should specify the details of the resources to be modified. This approach can help to reduce chattiness and improve performance.

PATCH methods
With a PATCH request, the client sends a set of updates to an existing resource, in the form of a patch document. The server processes the patch document to perform the update. The patch document doesn't describe the whole resource, only a set of changes to apply. The specification for the PATCH method (RFC 5789) doesn't define a particular format for patch documents. The format must be inferred from the media type in the request.
JSON is probably the most common data format for web APIs. There are two main JSON-based patch formats, called JSON patch and JSON merge patch.
JSON merge patch is somewhat simpler. The patch document has the same structure as the original JSON resource, but includes just the subset of fields that should be changed or added. In addition, a field can be deleted by specifying null for the field value in the patch document. (That means merge patch is not suitable if the original resource can have explicit null values.)
For example, suppose the original resource has the following JSON representation:

{
  "name":"gizmo",
  "category":"widgets",
  "color":"blue",
  "price":10
}

Here is a possible JSON merge patch for this resource:

{
  "price":12,
  "color":null,
  "size":"small"
}

This tells the server to update price, delete color, and add size — name and category are not modified. For the exact details of JSON merge patch, see RFC 7396. The media type for JSON merge patch is application/merge-patch+json.
Merge patch is not suitable if the original resource can contain explicit null values, due to the special meaning of null in the patch document. Also, the patch document doesn't specify the order that the server should apply the updates. That may or may not matter, depending on the data and the domain. JSON patch, defined in RFC 6902, is more flexible. It specifies the changes as a sequence of operations to apply. Operations include add, remove, replace, copy, and test (to validate values). The media type for JSON patch is application/json-patch+json.
Here are some typical error conditions that might be encountered when processing a PATCH request, along with the appropriate HTTP status code.

ERROR CONDITION: The patch document format isn't supported.
HTTP STATUS CODE: 415 (Unsupported Media Type)

ERROR CONDITION: Malformed patch document.
HTTP STATUS CODE: 400 (Bad Request)

ERROR CONDITION: The patch document is valid, but the changes can't be applied to the resource in its current state.
HTTP STATUS CODE: 409 (Conflict)

DELETE methods
If the delete operation is successful, the web server should respond with HTTP status code 204, indicating that the process has been successfully handled, but that the response body contains no further information. If the resource doesn't exist, the web server can return HTTP 404 (Not Found).

Asynchronous operations
Sometimes a POST, PUT, PATCH, or DELETE operation might require processing that takes a while to complete. If you wait for completion before sending a response to the client, it may cause unacceptable latency. If so, consider making the operation asynchronous. Return HTTP status code 202 (Accepted) to indicate the request was accepted for processing but is not completed.
You should expose an endpoint that returns the status of an asynchronous request, so the client can monitor the status by polling the status endpoint. Include the URI of the status endpoint in the Location header of the 202 response. For example:

HTTP/1.1 202 Accepted
Location: /api/status/12345

If the client sends a GET request to this endpoint, the response should contain the current status of the request. Optionally, it could also include an estimated time to completion or a link to cancel the operation.

HTTP/1.1 200 OK
Content-Type: application/json

{
  "status":"In progress",
  "link": { "rel":"cancel", "method":"delete", "href":"/api/status/12345" }
}

If the asynchronous operation creates a new resource, the status endpoint should return status code 303 (See Other) after the operation completes. In the 303 response, include a Location header that gives the URI of the new resource:

HTTP/1.1 303 See Other
Location: /api/orders/12345

For more information, see Asynchronous operations in REST.

Filter and paginate data
Exposing a collection of resources through a single URI can lead to applications fetching large amounts of data when only a subset of the information is required. For example, suppose a client application needs to find all orders with a cost over a specific value. It might retrieve all orders from the /orders URI and then filter these orders on the client side.
Clearly this process is highly inefficient. It wastes network bandwidth and processing power on the server hosting the web API. Instead, the API can allow passing a filter in the query string of the URI, such as /orders?minCost=n. The web API is then responsible for parsing and handling the minCost parameter in the query string and returning the filtered results on the server side. GET requests over collection resources can potentially return a large number of items. You should design a web API to limit the amount of data returned by any single request. Consider supporting query strings that specify the maximum number of items to retrieve and a starting offset into the collection. For example: /orders?limit=25&offset=50 Also consider imposing an upper limit on the number of items returned, to help prevent Denial of Service attacks. To assist client applications, GET requests that return paginated data should also include some form of metadata that indicate the total number of resources available in the collection. You can use a similar strategy to sort data as it is fetched, by providing a sort parameter that takes a field name as the value, such as /orders?sort=ProductID. However, this approach can have a negative effect on caching, because query string parameters form part of the resource identifier used by many cache implementations as the key to cached data. You can extend this approach to limit the fields returned for each item, if each item contains a large amount of data. For example, you could use a query string parameter that accepts a comma-delimited list of fields, such as /orders?fields=ProductID,Quantity. Give all optional parameters in query strings meaningful defaults. For example, set the limit parameter to 10 and the offset parameter to 0 if you implement pagination, set the sort parameter to the key of the resource if you implement ordering, and set the fields parameter to all fields in the resource if you support projections. Support partial responses for large binary resources A resource may contain large binary fields, such as files or images. To overcome problems caused by unreliable and intermittent connections and to improve response times, consider enabling such resources to be retrieved in chunks. To do this, the web API should support the Accept-Ranges header for GET requests for large resources. This header indicates that the GET operation supports partial requests. The client application can submit GET requests that return a subset of a resource, specified as a range of bytes. Also, consider implementing HTTP HEAD requests for these resources. A HEAD request is similar to a GET request, except that it only returns the HTTP headers that describe the resource, with an empty message body. A client application can issue a HEAD request to determine whether to fetch a resource by using partial GET requests. For example: HEAD https://adventure-works.com/products/10?fields=productImage HTTP/1.1 Here is an example response message: HTTP/1.1 200 OK Accept-Ranges: bytes Content-Type: image/jpeg Content-Length: 4580 The Content-Length header gives the total size of the resource, and the Accept-Ranges header indicates that the corresponding GET operation supports partial results. The client application can use this information to retrieve the image in smaller chunks. 
The first request fetches the first 2500 bytes by using the Range header:

GET https://adventure-works.com/products/10?fields=productImage HTTP/1.1
Range: bytes=0-2499

The response message indicates that this is a partial response by returning HTTP status code 206. The Content-Length header specifies the actual number of bytes returned in the message body (not the size of the resource), and the Content-Range header indicates which part of the resource this is (bytes 0-2499 out of 4580):

HTTP/1.1 206 Partial Content
Accept-Ranges: bytes
Content-Type: image/jpeg
Content-Length: 2500
Content-Range: bytes 0-2499/4580
[...]

A subsequent request from the client application can retrieve the remainder of the resource.

Use HATEOAS to enable navigation to related resources
One of the primary motivations behind REST is that it should be possible to navigate the entire set of resources without requiring prior knowledge of the URI scheme. Each HTTP GET request should return the information necessary to find the resources related directly to the requested object through hyperlinks included in the response, and it should also be provided with information that describes the operations available on each of these resources. This principle is known as HATEOAS, or Hypermedia as the Engine of Application State. The system is effectively a finite state machine, and the response to each request contains the information necessary to move from one state to another; no other information should be necessary.
NOTE Currently there are no standards or specifications that define how to model the HATEOAS principle. The examples shown in this section illustrate one possible solution.
For example, to handle the relationship between an order and a customer, the representation of an order could include links that identify the available operations for the customer of the order. Here is a possible representation:

{
  "orderID":3,
  "productID":2,
  "quantity":4,
  "orderValue":16.60,
  "links":[
    { "rel":"customer", "href":"https://adventure-works.com/customers/3", "action":"GET", "types":["text/xml","application/json"] },
    { "rel":"customer", "href":"https://adventure-works.com/customers/3", "action":"PUT", "types":["application/x-www-form-urlencoded"] },
    { "rel":"customer", "href":"https://adventure-works.com/customers/3", "action":"DELETE", "types":[] },
    { "rel":"self", "href":"https://adventure-works.com/orders/3", "action":"GET", "types":["text/xml","application/json"] },
    { "rel":"self", "href":"https://adventure-works.com/orders/3", "action":"PUT", "types":["application/x-www-form-urlencoded"] },
    { "rel":"self", "href":"https://adventure-works.com/orders/3", "action":"DELETE", "types":[] }
  ]
}

In this example, the links array has a set of links. Each link represents an operation on a related entity. The data for each link includes the relationship ("customer"), the URI (https://adventure-works.com/customers/3), the HTTP method, and the supported MIME types. This is all the information that a client application needs to be able to invoke the operation.
The links array also includes self-referencing information about the resource itself that has been retrieved. These have the relationship self.
The set of links that are returned may change, depending on the state of the resource. This is what is meant by hypermedia being the "engine of application state."

Versioning a RESTful web API
It is highly unlikely that a web API will remain static.
As business requirements change new collections of resources may be added, the relationships between resources might change, and the structure of the data in resources might be amended. While updating a web API to handle new or differing requirements is a relatively straightforward process, you must consider the effects that such changes will have on client applications consuming the web API. The issue is that although the developer designing and implementing a web API has full control over that API, the developer does not have the same degree of control over client applications which may be built by third party organizations operating remotely. The primary imperative is to enable existing client applications to continue functioning unchanged while allowing new client applications to take advantage of new features and resources. Versioning enables a web API to indicate the features and resources that it exposes, and a client application can submit requests that are directed to a specific version of a feature or resource. The following sections describe several different approaches, each of which has its own benefits and trade-offs. No versioning This is the simplest approach, and may be acceptable for some internal APIs. Big changes could be represented as new resources or new links. Adding content to existing resources might not present a breaking change as client applications that are not expecting to see this content will simply ignore it. For example, a request to the URI https://adventure-works.com/customers/3 should return the details of a single customer containing id , name , and address fields expected by the client application: HTTP/1.1 200 OK Content-Type: application/json; charset=utf-8 {"id":3,"name":"Contoso LLC","address":"1 Microsoft Way Redmond WA 98053"} NOTE For simplicity, the example responses shown in this section do not include HATEOAS links. If the DateCreated field is added to the schema of the customer resource, then the response would look like this: HTTP/1.1 200 OK Content-Type: application/json; charset=utf-8 {"id":3,"name":"Contoso LLC","dateCreated":"2014-09-04T12:11:38.0376089Z","address":"1 Microsoft Way Redmond WA 98053"} Existing client applications might continue functioning correctly if they are capable of ignoring unrecognized fields, while new client applications can be designed to handle this new field. However, if more radical changes to the schema of resources occur (such as removing or renaming fields) or the relationships between resources change then these may constitute breaking changes that prevent existing client applications from functioning correctly. In these situations you should consider one of the following approaches. URI versioning Each time you modify the web API or change the schema of resources, you add a version number to the URI for each resource. The previously existing URIs should continue to operate as before, returning resources that conform to their original schema. 
Extending the previous example, if the address field is restructured into sub-fields containing each constituent part of the address (such as streetAddress, city, state, and zipCode), this version of the resource could be exposed through a URI containing a version number, such as https://adventure-works.com/v2/customers/3:

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{"id":3,"name":"Contoso LLC","dateCreated":"2014-09-04T12:11:38.0376089Z","address":{"streetAddress":"1 Microsoft Way","city":"Redmond","state":"WA","zipCode":98053}}

This versioning mechanism is very simple but depends on the server routing the request to the appropriate endpoint. However, it can become unwieldy as the web API matures through several iterations and the server has to support a number of different versions. Also, from a purist's point of view, in all cases the client applications are fetching the same data (customer 3), so the URI should not really be different depending on the version. This scheme also complicates implementation of HATEOAS as all links will need to include the version number in their URIs.

Query string versioning
Rather than providing multiple URIs, you can specify the version of the resource by using a parameter within the query string appended to the HTTP request, such as https://adventure-works.com/customers/3?version=2. The version parameter should default to a meaningful value such as 1 if it is omitted by older client applications.
This approach has the semantic advantage that the same resource is always retrieved from the same URI, but it depends on the code that handles the request to parse the query string and send back the appropriate HTTP response. This approach also suffers from the same complications for implementing HATEOAS as the URI versioning mechanism.
NOTE Some older web browsers and web proxies will not cache responses for requests that include a query string in the URI. This can degrade performance for web applications that use a web API and that run from within such a web browser.

Header versioning
Rather than appending the version number as a query string parameter, you could implement a custom header that indicates the version of the resource. This approach requires that the client application adds the appropriate header to any requests, although the code handling the client request could use a default value (version 1) if the version header is omitted. The following examples use a custom header named Custom-Header. The value of this header indicates the version of the web API.
Version 1:

GET https://adventure-works.com/customers/3 HTTP/1.1
Custom-Header: api-version=1

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{"id":3,"name":"Contoso LLC","address":"1 Microsoft Way Redmond WA 98053"}

Version 2:

GET https://adventure-works.com/customers/3 HTTP/1.1
Custom-Header: api-version=2

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{"id":3,"name":"Contoso LLC","dateCreated":"2014-09-04T12:11:38.0376089Z","address":{"streetAddress":"1 Microsoft Way","city":"Redmond","state":"WA","zipCode":98053}}

Note that as with the previous two approaches, implementing HATEOAS requires including the appropriate custom header in any links.

Media type versioning
When a client application sends an HTTP GET request to a web server it should stipulate the format of the content that it can handle by using an Accept header, as described earlier in this guidance.
Frequently the purpose of the Accept header is to allow the client application to specify whether the body of the response should be XML, JSON, or some other common format that the client can parse. However, it is possible to define custom media types that include information enabling the client application to indicate which version of a resource it is expecting. The following example shows a request that specifies an Accept header with the value application/vnd.adventureworks.v1+json. The vnd.adventure-works.v1 element indicates to the web server that it should return version 1 of the resource, while the json element specifies that the format of the response body should be JSON: GET https://adventure-works.com/customers/3 HTTP/1.1 Accept: application/vnd.adventure-works.v1+json The code handling the request is responsible for processing the Accept header and honoring it as far as possible (the client application may specify multiple formats in the Accept header, in which case the web server can choose the most appropriate format for the response body). The web server confirms the format of the data in the response body by using the Content-Type header: HTTP/1.1 200 OK Content-Type: application/vnd.adventure-works.v1+json; charset=utf-8 {"id":3,"name":"Contoso LLC","address":"1 Microsoft Way Redmond WA 98053"} If the Accept header does not specify any known media types, the web server could generate an HTTP 406 (Not Acceptable) response message or return a message with a default media type. This approach is arguably the purest of the versioning mechanisms and lends itself naturally to HATEOAS, which can include the MIME type of related data in resource links. NOTE When you select a versioning strategy, you should also consider the implications on performance, especially caching on the web server. The URI versioning and Query String versioning schemes are cache-friendly inasmuch as the same URI/query string combination refers to the same data each time. The Header versioning and Media Type versioning mechanisms typically require additional logic to examine the values in the custom header or the Accept header. In a large-scale environment, many clients using different versions of a web API can result in a significant amount of duplicated data in a server-side cache. This issue can become acute if a client application communicates with a web server through a proxy that implements caching, and that only forwards a request to the web server if it does not currently hold a copy of the requested data in its cache. Open API Initiative The Open API Initiative was created by an industry consortium to standardize REST API descriptions across vendors. As part of this initiative, the Swagger 2.0 specification was renamed the OpenAPI Specification (OAS ) and brought under the Open API Initiative. You may want to adopt OpenAPI for your web APIs. Some points to consider: The OpenAPI Specification comes with a set of opinionated guidelines on how a REST API should be designed. That has advantages for interoperability, but requires more care when designing your API to conform to the specification. OpenAPI promotes a contract-first approach, rather than an implementation-first approach. Contract-first means you design the API contract (the interface) first and then write code that implements the contract. Tools like Swagger can generate client libraries or documentation from API contracts. For example, see ASP.NET Web API Help Pages using Swagger. More information Microsoft REST API Guidelines. 
Detailed recommendations for designing public REST APIs. Web API Checklist. A useful list of items to consider when designing and implementing a web API. Open API Initiative. Documentation and implementation details on Open API. API implementation 3/13/2019 • 46 minutes to read A carefully designed RESTful web API defines the resources, relationships, and navigation schemes that are accessible to client applications. When you implement and deploy a web API, you should consider the physical requirements of the environment hosting the web API and the way in which the web API is constructed rather than the logical structure of the data. This guidance focuses on best practices for implementing a web API and publishing it to make it available to client applications. For detailed information about web API design, see API Design Guidance. Processing requests Consider the following points when you implement the code to handle requests. GET, PUT, DELETE, HEAD, and PATCH actions should be idempotent The code that implements these requests should not impose any side effects. The same request repeated over the same resource should result in the same state. For example, sending multiple DELETE requests to the same URI should have the same effect, although the HTTP status code in the response messages may be different. The first DELETE request might return status code 204 (No Content), while a subsequent DELETE request might return status code 404 (Not Found). NOTE The article Idempotency Patterns on Jonathan Oliver's blog provides an overview of idempotency and how it relates to data management operations. POST actions that create new resources should not have unrelated side effects If a POST request is intended to create a new resource, the effects of the request should be limited to the new resource (and possibly any directly related resources if there is some sort of linkage involved). For example, in an e-commerce system, a POST request that creates a new order for a customer might also amend inventory levels and generate billing information, but it should not modify information not directly related to the order or have any other side effects on the overall state of the system. Avoid implementing chatty POST, PUT, and DELETE operations Support POST, PUT, and DELETE requests over resource collections. A POST request can contain the details for multiple new resources and add them all to the same collection, a PUT request can replace the entire set of resources in a collection, and a DELETE request can remove an entire collection. The OData support included in ASP.NET Web API 2 provides the ability to batch requests. A client application can package up several web API requests and send them to the server in a single HTTP request, and receive a single HTTP response that contains the replies to each request. For more information, see Introducing Batch Support in Web API and Web API OData. Follow the HTTP specification when sending a response A web API must return messages that contain the correct HTTP status code to enable the client to determine how to handle the result, the appropriate HTTP headers so that the client understands the nature of the result, and a suitably formatted body to enable the client to parse the result. For example, a POST operation should return status code 201 (Created) and the response message should include the URI of the newly created resource in the Location header of the response message.
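As a minimal sketch rather than a prescribed implementation, an ASP.NET Web API POST operation could satisfy this rule as shown below; the Customer type, the ICustomerRepository field, and the route name are assumptions made for the example:

using System.Web.Http;

public class CustomersController : ApiController
{
    private readonly ICustomerRepository repository;    // hypothetical data access component

    [HttpGet]
    [Route("customers/{id:int}", Name = "GetCustomerById")]
    public IHttpActionResult GetCustomer(int id)
    {
        var customer = repository.GetCustomer(id);
        if (customer == null)
        {
            return NotFound();
        }
        return Ok(customer);
    }

    [HttpPost]
    [Route("customers")]
    public IHttpActionResult CreateCustomer(Customer customer)
    {
        if (customer == null)
        {
            return BadRequest("A customer must be supplied in the request body");
        }

        Customer newCustomer = repository.AddCustomer(customer);   // persists and assigns an ID

        // CreatedAtRoute returns 201 (Created) and sets the Location header to the URI
        // generated from the named GET route above (for example, customers/42)
        return CreatedAtRoute("GetCustomerById", new { id = newCustomer.CustomerID }, newCustomer);
    }
}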
Support content negotiation The body of a response message may contain data in a variety of formats. For example, an HTTP GET request could return data in JSON, or XML format. When the client submits a request, it can include an Accept header that specifies the data formats that it can handle. These formats are specified as media types. For example, a client that issues a GET request that retrieves an image can specify an Accept header that lists the media types that the client can handle, such as image/jpeg, image/gif, image/png . When the web API returns the result, it should format the data by using one of these media types and specify the format in the Content-Type header of the response. If the client does not specify an Accept header, then use a sensible default format for the response body. As an example, the ASP.NET Web API framework defaults to JSON for text-based data. Provide links to support HATEOAS -style navigation and discovery of resources The HATEOAS approach enables a client to navigate and discover resources from an initial starting point. This is achieved by using links containing URIs; when a client issues an HTTP GET request to obtain a resource, the response should contain URIs that enable a client application to quickly locate any directly related resources. For example, in a web API that supports an e-commerce solution, a customer may have placed many orders. When a client application retrieves the details for a customer, the response should include links that enable the client application to send HTTP GET requests that can retrieve these orders. Additionally, HATEOAS -style links should describe the other operations (POST, PUT, DELETE, and so on) that each linked resource supports together with the corresponding URI to perform each request. This approach is described in more detail in API Design. Currently there are no standards that govern the implementation of HATEOAS, but the following example illustrates one possible approach. In this example, an HTTP GET request that finds the details for a customer returns a response that include HATEOAS links that reference the orders for that customer: GET https://adventure-works.com/customers/2 HTTP/1.1 Accept: text/json ... HTTP/1.1 200 OK ... Content-Type: application/json; charset=utf-8 ... Content-Length: ... {"CustomerID":2,"CustomerName":"Bert","Links":[ {"rel":"self", "href":"https://adventure-works.com/customers/2", "action":"GET", "types":["text/xml","application/json"]}, {"rel":"self", "href":"https://adventure-works.com/customers/2", "action":"PUT", "types":["application/x-www-form-urlencoded"]}, {"rel":"self", "href":"https://adventure-works.com/customers/2", "action":"DELETE", "types":[]}, {"rel":"orders", "href":"https://adventure-works.com/customers/2/orders", "action":"GET", "types":["text/xml","application/json"]}, {"rel":"orders", "href":"https://adventure-works.com/customers/2/orders", "action":"POST", "types":["application/x-www-form-urlencoded"]} ]} In this example, the customer data is represented by the Customer class shown in the following code snippet. The HATEOAS links are held in the Links collection property: public class Customer { public int CustomerID { get; set; } public string CustomerName { get; set; } public List<Link> Links { get; set; } ... 
}

public class Link
{
    public string Rel { get; set; }
    public string Href { get; set; }
    public string Action { get; set; }
    public string[] Types { get; set; }
}

The HTTP GET operation retrieves the customer data from storage and constructs a Customer object, and then populates the Links collection. The result is formatted as a JSON response message. Each link comprises the following fields: The relationship ( Rel ) between the object being returned and the object described by the link. In this case self indicates that the link is a reference back to the object itself (similar to a this pointer in many object-oriented languages), and orders is the name of a collection containing the related order information. The hyperlink ( Href ) for the object being described by the link in the form of a URI. The type of HTTP request ( Action ) that can be sent to this URI. The format of any data ( Types ) that should be provided in the HTTP request or that can be returned in the response, depending on the type of the request. The HATEOAS links shown in the example HTTP response indicate that a client application can perform the following operations: An HTTP GET request to the URI https://adventure-works.com/customers/2 to fetch the details of the customer (again). The data can be returned as XML or JSON. An HTTP PUT request to the URI https://adventure-works.com/customers/2 to modify the details of the customer. The new data must be provided in the request message in x-www-form-urlencoded format. An HTTP DELETE request to the URI https://adventure-works.com/customers/2 to delete the customer. The request does not expect any additional information or return data in the response message body. An HTTP GET request to the URI https://adventure-works.com/customers/2/orders to find all the orders for the customer. The data can be returned as XML or JSON. An HTTP POST request to the URI https://adventure-works.com/customers/2/orders to create a new order for this customer. The data must be provided in the request message in x-www-form-urlencoded format. Handling exceptions Consider the following points if an operation throws an uncaught exception. Capture exceptions and return a meaningful response to clients The code that implements an HTTP operation should provide comprehensive exception handling rather than letting uncaught exceptions propagate to the framework. If an exception makes it impossible to complete the operation successfully, the exception can be passed back in the response message, but it should include a meaningful description of the error that caused the exception. The response should also use the appropriate HTTP status code rather than simply returning status code 500 for every situation. For example, if a user request causes a database update that violates a constraint (such as attempting to delete a customer that has outstanding orders), you should return status code 409 (Conflict) and a message body indicating the reason for the conflict. If some other condition renders the request unachievable, you can return status code 400 (Bad Request). You can find a full list of HTTP status codes on the Status Code Definitions page on the W3C website. The following code example traps different conditions and returns an appropriate response.
[HttpDelete]
[Route("customers/{id:int}")]
public IHttpActionResult DeleteCustomer(int id)
{
    try
    {
        // Find the customer to be deleted in the repository
        var customerToDelete = repository.GetCustomer(id);

        // If there is no such customer, return an error response
        // with status code 404 (Not Found)
        if (customerToDelete == null)
        {
            return NotFound();
        }

        // Remove the customer from the repository
        // The DeleteCustomer method returns true if the customer
        // was successfully deleted
        if (repository.DeleteCustomer(id))
        {
            // Return a response message with status code 204 (No Content)
            // to indicate that the operation was successful
            return StatusCode(HttpStatusCode.NoContent);
        }
        else
        {
            // Otherwise return a 400 (Bad Request) error response
            return BadRequest(Strings.CustomerNotDeleted);
        }
    }
    catch
    {
        // If an uncaught exception occurs, return an error response
        // with status code 500 (Internal Server Error)
        return InternalServerError();
    }
}

TIP Do not include information that could be useful to an attacker attempting to penetrate your API. Many web servers trap error conditions themselves before they reach the web API. For example, if you configure authentication for a web site and the user fails to provide the correct authentication information, the web server should respond with status code 401 (Unauthorized). Once a client has been authenticated, your code can perform its own checks to verify that the client should be able to access the requested resource. If this authorization fails, you should return status code 403 (Forbidden). Handle exceptions consistently and log information about errors To handle exceptions in a consistent manner, consider implementing a global error handling strategy across the entire web API. You should also incorporate error logging that captures the full details of each exception; this error log can contain detailed information as long as it is not made accessible over the web to clients. Distinguish between client-side errors and server-side errors The HTTP protocol distinguishes between errors that occur due to the client application (the HTTP 4xx status codes), and errors that are caused by a mishap on the server (the HTTP 5xx status codes). Make sure that you respect this convention in any error response messages. Optimizing client-side data access In a distributed environment such as that involving a web server and client applications, one of the primary sources of concern is the network. This can act as a considerable bottleneck, especially if a client application is frequently sending requests or receiving data. Therefore you should aim to minimize the amount of traffic that flows across the network. Consider the following points when you implement the code to retrieve and maintain data: Support client-side caching The HTTP 1.1 protocol supports caching in clients and intermediate servers through which a request is routed by the use of the Cache-Control header. When a client application sends an HTTP GET request to the web API, the response can include a Cache-Control header that indicates whether the data in the body of the response can be safely cached by the client or an intermediate server through which the request has been routed, and for how long before it should expire and be considered out-of-date. The following example shows an HTTP GET request and the corresponding response that includes a Cache-Control header:

GET https://adventure-works.com/orders/2 HTTP/1.1

HTTP/1.1 200 OK
...
Cache-Control: max-age=600, private
Content-Type: text/json; charset=utf-8
Content-Length: ...
{"orderID":2,"productID":4,"quantity":2,"orderValue":10.00}

In this example, the Cache-Control header specifies that the data returned expires after 600 seconds, and is only suitable for a single client and must not be stored in a shared cache used by other clients (it is private). The Cache-Control header could specify public rather than private, in which case the data can be stored in a shared cache, or it could specify no-store, in which case the data must not be cached by the client. The following code example shows how to construct a Cache-Control header in a response message:

public class OrdersController : ApiController
{
    ...
    [Route("api/orders/{id:int:min(0)}")]
    [HttpGet]
    public IHttpActionResult FindOrderByID(int id)
    {
        // Find the matching order
        Order order = ...;
        ...
        // Create a Cache-Control header for the response
        var cacheControlHeader = new CacheControlHeaderValue();
        cacheControlHeader.Private = true;
        cacheControlHeader.MaxAge = new TimeSpan(0, 10, 0);
        ...

        // Return a response message containing the order and the cache control header
        OkResultWithCaching<Order> response = new OkResultWithCaching<Order>(order, this)
        {
            CacheControlHeader = cacheControlHeader
        };
        return response;
    }
    ...
}

This code makes use of a custom IHttpActionResult class named OkResultWithCaching. This class enables the controller to set the cache header contents:

public class OkResultWithCaching<T> : OkNegotiatedContentResult<T>
{
    public OkResultWithCaching(T content, ApiController controller)
        : base(content, controller) { }

    public OkResultWithCaching(T content, IContentNegotiator contentNegotiator, HttpRequestMessage request, IEnumerable<MediaTypeFormatter> formatters)
        : base(content, contentNegotiator, request, formatters) { }

    public CacheControlHeaderValue CacheControlHeader { get; set; }
    public EntityTagHeaderValue ETag { get; set; }

    public override async Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
    {
        HttpResponseMessage response;
        try
        {
            response = await base.ExecuteAsync(cancellationToken);
            response.Headers.CacheControl = this.CacheControlHeader;
            response.Headers.ETag = ETag;
        }
        catch (OperationCanceledException)
        {
            response = new HttpResponseMessage(HttpStatusCode.Conflict) { ReasonPhrase = "Operation was cancelled" };
        }
        return response;
    }
}

NOTE The HTTP protocol also defines the no-cache directive for the Cache-Control header. Rather confusingly, this directive does not mean "do not cache" but rather "revalidate the cached information with the server before returning it"; the data can still be cached, but it is checked each time it is used to ensure that it is still current. Cache management is the responsibility of the client application or intermediate server, but if properly implemented it can save bandwidth and improve performance by removing the need to fetch data that has already been recently retrieved. The max-age value in the Cache-Control header is only a guide and not a guarantee that the corresponding data won't change during the specified time. The web API should set the max-age to a suitable value depending on the expected volatility of the data. When this period expires, the client should discard the object from the cache. NOTE Most modern web browsers support client-side caching by adding the appropriate cache-control headers to requests and examining the headers of the results, as described.
However, some older browsers will not cache the values returned from a URL that includes a query string. This is not usually an issue for custom client applications which implement their own cache management strategy based on the protocol discussed here. Some older proxies exhibit the same behavior and might not cache requests based on URLs with query strings. This could be an issue for custom client applications that connect to a web server through such a proxy. Provide ETags to optimize query processing When a client application retrieves an object, the response message can also include an ETag (Entity Tag). An ETag is an opaque string that indicates the version of a resource; each time a resource changes the Etag is also modified. This ETag should be cached as part of the data by the client application. The following code example shows how to add an ETag as part of the response to an HTTP GET request. This code uses the GetHashCode method of an object to generate a numeric value that identifies the object (you can override this method if necessary and generate your own hash using an algorithm such as MD5) : public class OrdersController : ApiController { ... public IHttpActionResult FindOrderByID(int id) { // Find the matching order Order order = ...; ... var hashedOrder = order.GetHashCode(); string hashedOrderEtag = $"\"{hashedOrder}\""; var eTag = new EntityTagHeaderValue(hashedOrderEtag); // Return a response message containing the order and the cache control header OkResultWithCaching<Order> response = new OkResultWithCaching<Order>(order, this) { ..., ETag = eTag }; return response; } ... } The response message posted by the web API looks like this: HTTP/1.1 200 OK ... Cache-Control: max-age=600, private Content-Type: text/json; charset=utf-8 ETag: "2147483648" Content-Length: ... {"orderID":2,"productID":4,"quantity":2,"orderValue":10.00} TIP For security reasons, do not allow sensitive data or data returned over an authenticated (HTTPS) connection to be cached. A client application can issue a subsequent GET request to retrieve the same resource at any time, and if the resource has changed (it has a different ETag) the cached version should be discarded and the new version added to the cache. If a resource is large and requires a significant amount of bandwidth to transmit back to the client, repeated requests to fetch the same data can become inefficient. To combat this, the HTTP protocol defines the following process for optimizing GET requests that you should support in a web API: The client constructs a GET request containing the ETag for the currently cached version of the resource referenced in an If-None-Match HTTP header: GET https://adventure-works.com/orders/2 HTTP/1.1 If-None-Match: "2147483648" The GET operation in the web API obtains the current ETag for the requested data (order 2 in the above example), and compares it to the value in the If-None-Match header. If the current ETag for the requested data matches the ETag provided by the request, the resource has not changed and the web API should return an HTTP response with an empty message body and a status code of 304 (Not Modified). If the current ETag for the requested data does not match the ETag provided by the request, then the data has changed and the web API should return an HTTP response with the new data in the message body and a status code of 200 (OK). If the requested data no longer exists then the web API should return an HTTP response with the status code of 404 (Not Found). 
The client uses the status code to maintain the cache. If the data has not changed (status code 304) then the object can remain cached and the client application should continue to use this version of the object. If the data has changed (status code 200) then the cached object should be discarded and the new one inserted. If the data is no longer available (status code 404) then the object should be removed from the cache. NOTE If the response contains the Cache-Control header no-store then the object should always be removed from the cache regardless of the HTTP status code. The code below shows the FindOrderByID method extended to support the If-None-Match header. Notice that if the If-None-Match header is omitted, the specified order is always retrieved:

public class OrdersController : ApiController
{
    [Route("api/orders/{id:int:min(0)}")]
    [HttpGet]
    public IHttpActionResult FindOrderByID(int id)
    {
        try
        {
            // Find the matching order
            Order order = ...;

            // If there is no such order then return NotFound
            if (order == null)
            {
                return NotFound();
            }

            // Generate the ETag for the order
            var hashedOrder = order.GetHashCode();
            string hashedOrderEtag = $"\"{hashedOrder}\"";

            // Create the Cache-Control and ETag headers for the response
            IHttpActionResult response;
            var cacheControlHeader = new CacheControlHeaderValue();
            cacheControlHeader.Public = true;
            cacheControlHeader.MaxAge = new TimeSpan(0, 10, 0);
            var eTag = new EntityTagHeaderValue(hashedOrderEtag);

            // Retrieve the If-None-Match header from the request (if it exists)
            var nonMatchEtags = Request.Headers.IfNoneMatch;

            // If there is an ETag in the If-None-Match header and this ETag matches that of
            // the order just retrieved, then create a Not Modified response message
            if (nonMatchEtags.Count > 0 &&
                String.CompareOrdinal(nonMatchEtags.First().Tag, hashedOrderEtag) == 0)
            {
                response = new EmptyResultWithCaching()
                {
                    StatusCode = HttpStatusCode.NotModified,
                    CacheControlHeader = cacheControlHeader,
                    ETag = eTag
                };
            }
            // Otherwise create a response message that contains the order details
            else
            {
                response = new OkResultWithCaching<Order>(order, this)
                {
                    CacheControlHeader = cacheControlHeader,
                    ETag = eTag
                };
            }

            return response;
        }
        catch
        {
            return InternalServerError();
        }
    }
    ...
}

This example incorporates an additional custom IHttpActionResult class named EmptyResultWithCaching. This class simply acts as a wrapper around an HttpResponseMessage object that does not contain a response body:

public class EmptyResultWithCaching : IHttpActionResult
{
    public CacheControlHeaderValue CacheControlHeader { get; set; }
    public EntityTagHeaderValue ETag { get; set; }
    public HttpStatusCode StatusCode { get; set; }
    public Uri Location { get; set; }

    public async Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
    {
        HttpResponseMessage response = new HttpResponseMessage(StatusCode);
        response.Headers.CacheControl = this.CacheControlHeader;
        response.Headers.ETag = this.ETag;
        response.Headers.Location = this.Location;
        return response;
    }
}

TIP In this example, the ETag for the data is generated by hashing the data retrieved from the underlying data source. If the ETag can be computed in some other way, then the process can be optimized further and the data only needs to be fetched from the data source if it has changed. This approach is especially useful if the data is large or accessing the data source can result in significant latency (for example, if the data source is a remote database).
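The client side of this exchange is not shown above. The following sketch, which assumes a .NET client using HttpClient and keeps a single cached representation purely for illustration, shows how a client application might issue the conditional GET and react to the status codes just described:

using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public class OrderClientCache
{
    private readonly HttpClient client = new HttpClient();

    // Cached representation and ETag for a single resource (illustrative only;
    // a real client would key these by URI).
    private string cachedBody;
    private EntityTagHeaderValue cachedETag;

    public async Task<string> GetOrderAsync(string uri)
    {
        var request = new HttpRequestMessage(HttpMethod.Get, uri);
        if (cachedETag != null)
        {
            // Ask the web API to return the body only if the resource has changed
            request.Headers.IfNoneMatch.Add(cachedETag);
        }

        HttpResponseMessage response = await client.SendAsync(request);

        if (response.StatusCode == HttpStatusCode.NotModified)
        {
            return cachedBody;                         // 304: keep using the cached copy
        }

        if (response.StatusCode == HttpStatusCode.NotFound)
        {
            cachedBody = null;                         // 404: remove the object from the cache
            cachedETag = null;
            return null;
        }

        response.EnsureSuccessStatusCode();            // any other error becomes an exception
        cachedBody = await response.Content.ReadAsStringAsync();   // 200: replace the cached copy
        cachedETag = response.Headers.ETag;
        return cachedBody;
    }
}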
Use ETags to Support Optimistic Concurrency To enable updates over previously cached data, the HTTP protocol supports an optimistic concurrency strategy. If, after fetching and caching a resource, the client application subsequently sends a PUT or DELETE request to change or remove the resource, it should include an If-Match header that references the ETag. The web API can then use this information to determine whether the resource has already been changed by another user since it was retrieved and send an appropriate response back to the client application as follows: The client constructs a PUT request containing the new details for the resource and the ETag for the currently cached version of the resource referenced in an If-Match HTTP header. The following example shows a PUT request that updates an order:

PUT https://adventure-works.com/orders/1 HTTP/1.1
If-Match: "2282343857"
Content-Type: application/x-www-form-urlencoded
Content-Length: ...
productID=3&quantity=5&orderValue=250

The PUT operation in the web API obtains the current ETag for the requested data (order 1 in the above example), and compares it to the value in the If-Match header. If the current ETag for the requested data matches the ETag provided by the request, the resource has not changed and the web API should perform the update, returning a message with HTTP status code 204 (No Content) if it is successful. The response can include Cache-Control and ETag headers for the updated version of the resource. The response should always include the Location header that references the URI of the newly updated resource. If the current ETag for the requested data does not match the ETag provided by the request, then the data has been changed by another user since it was fetched and the web API should return an HTTP response with an empty message body and a status code of 412 (Precondition Failed). If the resource to be updated no longer exists then the web API should return an HTTP response with the status code of 404 (Not Found). The client uses the status code and response headers to maintain the cache. If the data has been updated (status code 204) then the object can remain cached (as long as the Cache-Control header does not specify no-store) but the ETag should be updated. If the data was changed by another user (status code 412) or not found (status code 404) then the cached object should be discarded. The next code example shows an implementation of the PUT operation for the Orders controller:

public class OrdersController : ApiController
{
    [HttpPut]
    [Route("api/orders/{id:int}")]
    public IHttpActionResult UpdateExistingOrder(int id, DTOOrder order)
    {
        try
        {
            var baseUri = Constants.GetUriFromConfig();
            var orderToUpdate = this.ordersRepository.GetOrder(id);
            if (orderToUpdate == null)
            {
                return NotFound();
            }

            var hashedOrder = orderToUpdate.GetHashCode();
            string hashedOrderEtag = $"\"{hashedOrder}\"";

            // Retrieve the If-Match header from the request (if it exists)
            var matchEtags = Request.Headers.IfMatch;

            // If there is an ETag in the If-Match header and this ETag matches that of the
            // order just retrieved, or if there is no ETag, then update the order
            if ((matchEtags.Count > 0 &&
                 String.CompareOrdinal(matchEtags.First().Tag, hashedOrderEtag) == 0) ||
                matchEtags.Count == 0)
            {
                // Modify the order
                orderToUpdate.OrderValue = order.OrderValue;
                orderToUpdate.ProductID = order.ProductID;
                orderToUpdate.Quantity = order.Quantity;

                // Save the order back to the data store
                // ...
                // Create the No Content response with Cache-Control, ETag, and Location headers
                var cacheControlHeader = new CacheControlHeaderValue();
                cacheControlHeader.Private = true;
                cacheControlHeader.MaxAge = new TimeSpan(0, 10, 0);

                hashedOrder = order.GetHashCode();
                hashedOrderEtag = $"\"{hashedOrder}\"";
                var eTag = new EntityTagHeaderValue(hashedOrderEtag);

                var location = new Uri($"{baseUri}/{Constants.ORDERS}/{id}");
                var response = new EmptyResultWithCaching()
                {
                    StatusCode = HttpStatusCode.NoContent,
                    CacheControlHeader = cacheControlHeader,
                    ETag = eTag,
                    Location = location
                };
                return response;
            }

            // Otherwise return a Precondition Failed response
            return StatusCode(HttpStatusCode.PreconditionFailed);
        }
        catch
        {
            return InternalServerError();
        }
    }
    ...
}

TIP Use of the If-Match header is entirely optional, and if it is omitted the web API will always attempt to update the specified order, possibly blindly overwriting an update made by another user. To avoid problems due to lost updates, always provide an If-Match header. Handling large requests and responses There may be occasions when a client application needs to issue requests that send or receive data that may be several megabytes (or bigger) in size. Waiting while this amount of data is transmitted could cause the client application to become unresponsive. Consider the following points when you need to handle requests that include significant amounts of data: Optimize requests and responses that involve large objects Some resources may be large objects or include large fields, such as graphics images or other types of binary data. A web API should support streaming to enable optimized uploading and downloading of these resources. The HTTP protocol provides the chunked transfer encoding mechanism to stream large data objects back to a client. When the client sends an HTTP GET request for a large object, the web API can send the reply back in piecemeal chunks over an HTTP connection. The length of the data in the reply may not be known initially (it might be generated), so the server hosting the web API should send a response message that specifies the Transfer-Encoding: chunked header rather than a Content-Length header. The client application can receive each chunk in turn to build up the complete response. The data transfer completes when the server sends back a final chunk with zero size. A single request could conceivably result in a massive object that consumes considerable resources. If, during the streaming process, the web API determines that the amount of data in a request has exceeded some acceptable bounds, it can abort the operation and return a response message with status code 413 (Request Entity Too Large). You can minimize the size of large objects transmitted over the network by using HTTP compression. This approach helps to reduce the amount of network traffic and the associated network latency, but at the cost of requiring additional processing at the client and the server hosting the web API. For example, a client application that expects to receive compressed data can include an Accept-Encoding: gzip request header (other data compression algorithms can also be specified). If the server supports compression it should respond with the content held in gzip format in the message body and the Content-Encoding: gzip response header.
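For example, a .NET client application could opt in to compressed responses as shown in the following sketch (the helper class and URI are illustrative); setting AutomaticDecompression causes HttpClient to add the Accept-Encoding header and to decompress any response that arrives with a matching Content-Encoding header:

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class CompressedOrderClient
{
    public static async Task<string> GetOrderJsonAsync(string uri)
    {
        var handler = new HttpClientHandler
        {
            // Advertise gzip and deflate support and decompress responses transparently
            AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
        };

        using (var client = new HttpClient(handler))
        {
            // The response body is returned already decompressed
            return await client.GetStringAsync(uri);
        }
    }
}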
You can combine compression with streaming: compress the data first before streaming it, and specify the gzip content encoding and chunked transfer encoding in the message headers. Also note that some web servers (such as Internet Information Server) can be configured to automatically compress HTTP responses regardless of whether the web API compresses the data or not. Implement partial responses for clients that do not support asynchronous operations As an alternative to asynchronous streaming, a client application can explicitly request data for large objects in chunks, known as partial responses. The client application sends an HTTP HEAD request to obtain information about the object. If the web API supports partial responses, it should respond to the HEAD request with a response message that contains an Accept-Ranges header and a Content-Length header that indicates the total size of the object, but the body of the message should be empty. The client application can use this information to construct a series of GET requests that each specify a range of bytes to receive. The web API should return a response message with HTTP status 206 (Partial Content), a Content-Length header that specifies the actual amount of data included in the body of the response message, and a Content-Range header that indicates which part (such as bytes 4000 to 8000) of the object this data represents. HTTP HEAD requests and partial responses are described in more detail in API Design. Avoid sending unnecessary 100-Continue status messages in client applications A client application that is about to send a large amount of data to a server may determine first whether the server is actually willing to accept the request. Prior to sending the data, the client application can submit an HTTP request with an Expect: 100-Continue header, a Content-Length header that indicates the size of the data, but an empty message body. If the server is willing to handle the request, it should respond with a message that specifies the HTTP status 100 (Continue). The client application can then proceed and send the complete request including the data in the message body. If you are hosting a service by using IIS, the HTTP.sys driver automatically detects and handles Expect: 100-Continue headers before passing requests to your web application. This means that you are unlikely to see these headers in your application code, and you can assume that IIS has already filtered any messages that it deems to be unfit or too large. If you are building client applications by using the .NET Framework, then all POST and PUT messages will first send messages with Expect: 100-Continue headers by default. As with the server side, the process is handled transparently by the .NET Framework. However, this process results in each POST and PUT request causing two round-trips to the server, even for small requests. If your application is not sending requests with large amounts of data, you can disable this feature by using the ServicePointManager class to create ServicePoint objects in the client application. A ServicePoint object handles the connections that the client makes to a server based on the scheme and host fragments of URIs that identify resources on the server. You can then set the Expect100Continue property of the ServicePoint object to false. All subsequent POST and PUT requests made by the client through a URI that matches the scheme and host fragments of the ServicePoint object will be sent without Expect: 100-Continue headers.
The following code shows how to configure a ServicePoint object that configures all requests sent to URIs with a scheme of http and a host of www.contoso.com . Uri uri = new Uri("https://www.contoso.com/"); ServicePoint sp = ServicePointManager.FindServicePoint(uri); sp.Expect100Continue = false; You can also set the static Expect100Continue property of the ServicePointManager class to specify the default value of this property for all subsequently created ServicePoint objects. Support pagination for requests that may return large numbers of objects If a collection contains a large number of resources, issuing a GET request to the corresponding URI could result in significant processing on the server hosting the web API affecting performance, and generate a significant amount of network traffic resulting in increased latency. To handle these cases, the web API should support query strings that enable the client application to refine requests or fetch data in more manageable, discrete blocks (or pages). The code below shows the GetAllOrders method in the Orders controller. This method retrieves the details of orders. If this method was unconstrained, it could conceivably return a large amount of data. The limit and offset parameters are intended to reduce the volume of data to a smaller subset, in this case only the first 10 orders by default: public class OrdersController : ApiController { ... [Route("api/orders")] [HttpGet] public IEnumerable<Order> GetAllOrders(int limit=10, int offset=0) { // Find the number of orders specified by the limit parameter // starting with the order specified by the offset parameter var orders = ... return orders; } ... } A client application can issue a request to retrieve 30 orders starting at offset 50 by using the URI https://www.adventure-works.com/api/orders?limit=30&offset=50 . TIP Avoid enabling client applications to specify query strings that result in a URI that is more than 2000 characters long. Many web clients and servers cannot handle URIs that are this long. Maintaining responsiveness, scalability, and availability The same web API might be used by many client applications running anywhere in the world. It is important to ensure that the web API is implemented to maintain responsiveness under a heavy load, to be scalable to support a highly varying workload, and to guarantee availability for clients that perform business-critical operations. Consider the following points when determining how to meet these requirements: Provide asynchronous support for long-running requests A request that might take a long time to process should be performed without blocking the client that submitted the request. The web API can perform some initial checking to validate the request, initiate a separate task to perform the work, and then return a response message with HTTP code 202 (Accepted). The task could run asynchronously as part of the web API processing, or it could be offloaded to a background task. The web API should also provide a mechanism to return the results of the processing to the client application. You can achieve this by providing a polling mechanism for client applications to periodically query whether the processing has finished and obtain the result, or enabling the web API to send a notification when the operation has completed. You can implement a simple polling mechanism by providing a polling URI that acts as a virtual resource using the following approach: 1. The client application sends the initial request to the web API. 2. 
The web API stores information about the request in a table held in table storage or Microsoft Azure Cache, and generates a unique key for this entry, possibly in the form of a GUID. 3. The web API initiates the processing as a separate task. The web API records the state of the task in the table as Running. 4. The web API returns a response message with HTTP status code 202 (Accepted), and the GUID of the table entry in the body of the message. 5. When the task has completed, the web API stores the results in the table, and sets the state of the task to Complete. Note that if the task fails, the web API could also store information about the failure and set the status to Failed. 6. While the task is running, the client can continue performing its own processing. It can periodically send a request to the URI /polling/{guid } where {guid } is the GUID returned in the 202 response message by the web API. 7. The web API at the /polling/{guid } URI queries the state of the corresponding task in the table and returns a response message with HTTP status code 200 (OK) containing this state (Running, Complete, or Failed). If the task has completed or failed, the response message can also include the results of the processing or any information available about the reason for the failure. Options for implementing notifications include: Using an Azure Notification Hub to push asynchronous responses to client applications. For more information, see Azure Notification Hubs Notify Users. Using the Comet model to retain a persistent network connection between the client and the server hosting the web API, and using this connection to push messages from the server back to the client. The MSDN magazine article Building a Simple Comet Application in the Microsoft .NET Framework describes an example solution. Using SignalR to push data in real-time from the web server to the client over a persistent network connection. SignalR is available for ASP.NET web applications as a NuGet package. You can find more information on the ASP.NET SignalR website. Ensure that each request is stateless Each request should be considered atomic. There should be no dependencies between one request made by a client application and any subsequent requests submitted by the same client. This approach assists in scalability; instances of the web service can be deployed on a number of servers. Client requests can be directed at any of these instances and the results should always be the same. It also improves availability for a similar reason; if a web server fails requests can be routed to another instance (by using Azure Traffic Manager) while the server is restarted with no ill effects on client applications. Track clients and implement throttling to reduce the chances of DOS attacks If a specific client makes a large number of requests within a given period of time it might monopolize the service and affect the performance of other clients. To mitigate this issue, a web API can monitor calls from client applications either by tracking the IP address of all incoming requests or by logging each authenticated access. You can use this information to limit resource access. If a client exceeds a defined limit, the web API can return a response message with status 503 (Service Unavailable) and include a Retry-After header that specifies when the client can send the next request without it being declined. This strategy can help to reduce the chances of a Denial Of Service (DOS ) attack from a set of clients stalling the system. 
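As an illustration of one possible approach (not a prescribed implementation), a throttling check along these lines could be written as an ASP.NET Web API message handler; the X-Client-Id header, the fixed limit of 100 requests per minute, and the in-memory counters are all assumptions made for the sketch, and a production implementation would normally use a shared store and a more robust client identification scheme:

using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading;
using System.Threading.Tasks;

public class ThrottlingHandler : DelegatingHandler
{
    private const int RequestsPerWindow = 100;
    private static readonly TimeSpan Window = TimeSpan.FromMinutes(1);
    private static readonly ConcurrentDictionary<string, Counter> counters =
        new ConcurrentDictionary<string, Counter>();

    private class Counter
    {
        public DateTime WindowStart;
        public int Count;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Identify the client; a hypothetical X-Client-Id header is used here, but the
        // client IP address or the authenticated identity could be used instead
        string clientId = request.Headers.Contains("X-Client-Id")
            ? string.Join(",", request.Headers.GetValues("X-Client-Id"))
            : "anonymous";

        DateTime now = DateTime.UtcNow;
        Counter counter = counters.GetOrAdd(clientId, _ => new Counter { WindowStart = now });

        bool limitExceeded;
        DateTime windowStart;
        lock (counter)
        {
            if (now - counter.WindowStart > Window)
            {
                counter.WindowStart = now;      // start a new counting window
                counter.Count = 0;
            }
            counter.Count++;
            limitExceeded = counter.Count > RequestsPerWindow;
            windowStart = counter.WindowStart;
        }

        if (limitExceeded)
        {
            // Reject the request and tell the client when it can safely retry
            var response = new HttpResponseMessage(HttpStatusCode.ServiceUnavailable)
            {
                RequestMessage = request
            };
            response.Headers.RetryAfter = new RetryConditionHeaderValue(windowStart + Window - now);
            return response;
        }

        return await base.SendAsync(request, cancellationToken);
    }
}

If this approach were used, the handler could be registered at startup with config.MessageHandlers.Add(new ThrottlingHandler()); so that it runs before requests reach the controllers.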
Manage persistent HTTP connections carefully The HTTP protocol supports persistent HTTP connections where they are available. Under HTTP 1.0, a client application can include the Connection: Keep-Alive header to indicate to the server that it can use the same connection to send subsequent requests rather than opening new ones. The connection closes automatically if the client does not reuse the connection within a period defined by the host. This behavior is the default in HTTP 1.1 as used by Azure services, so there is no need to include Keep-Alive headers in messages. Keeping a connection open can help to improve responsiveness by reducing latency and network congestion, but it can be detrimental to scalability by keeping unnecessary connections open for longer than required, limiting the ability of other concurrent clients to connect. It can also affect battery life if the client application is running on a mobile device; if the application only makes occasional requests to the server, maintaining an open connection can cause the battery to drain more quickly. To ensure that a connection is not made persistent with HTTP 1.1, the client can include a Connection: Close header with messages to override the default behavior. Similarly, if a server is handling a very large number of clients it can include a Connection: Close header in response messages, which should close the connection and save server resources. NOTE Persistent HTTP connections are a purely optional feature to reduce the network overhead associated with repeatedly establishing a communications channel. Neither the web API nor the client application should depend on a persistent HTTP connection being available. Do not use persistent HTTP connections to implement Comet-style notification systems; instead you should utilize sockets (or websockets if available) at the TCP layer. Finally, note that Keep-Alive headers are of limited use if a client application communicates with a server via a proxy; only the connection between the client and the proxy will be persistent. Publishing and managing a web API To make a web API available for client applications, the web API must be deployed to a host environment. This environment is typically a web server, although it may be some other type of host process. You should consider the following points when publishing a web API: All requests must be authenticated and authorized, and the appropriate level of access control must be enforced. A commercial web API might be subject to various quality guarantees concerning response times. It is important to ensure that the host environment is scalable if the load can vary significantly over time. It may be necessary to meter requests for monetization purposes. It might be necessary to regulate the flow of traffic to the web API, and implement throttling for specific clients that have exhausted their quotas. Regulatory requirements might mandate logging and auditing of all requests and responses. To ensure availability, it may be necessary to monitor the health of the server hosting the web API and restart it if necessary. It is useful to be able to decouple these issues from the technical issues concerning the implementation of the web API. For this reason, consider creating a façade that runs as a separate process and routes requests to the web API. The façade can provide the management operations and forward validated requests to the web API.
Using a façade can also bring many functional advantages, including: Acting as an integration point for multiple web APIs. Transforming messages and translating communications protocols for clients built by using varying technologies. Caching requests and responses to reduce load on the server hosting the web API. Testing a web API A web API should be tested as thoroughly as any other piece of software. You should consider creating unit tests to validate the functionality. The nature of a web API brings its own additional requirements to verify that it operates correctly. You should pay particular attention to the following aspects: Test all routes to verify that they invoke the correct operations. Be especially aware of HTTP status code 405 (Method Not Allowed) being returned unexpectedly as this can indicate a mismatch between a route and the HTTP methods (GET, POST, PUT, DELETE ) that can be dispatched to that route. Send HTTP requests to routes that do not support them, such as submitting a POST request to a specific resource (POST requests should only be sent to resource collections). In these cases, the only valid response should be status code 405 (Not Allowed). Verify that all routes are protected properly and are subject to the appropriate authentication and authorization checks. NOTE Some aspects of security such as user authentication are most likely to be the responsibility of the host environment rather than the web API, but it is still necessary to include security tests as part of the deployment process. Test the exception handling performed by each operation and verify that an appropriate and meaningful HTTP response is passed back to the client application. Verify that request and response messages are well-formed. For example, if an HTTP POST request contains the data for a new resource in x-www -form-urlencoded format, confirm that the corresponding operation correctly parses the data, creates the resources, and returns a response containing the details of the new resource, including the correct Location header. Verify all links and URIs in response messages. For example, an HTTP POST message should return the URI of the newly-created resource. All HATEOAS links should be valid. Ensure that each operation returns the correct status codes for different combinations of input. For example: If a query is successful, it should return status code 200 (OK) If a resource is not found, the operation should return HTTP status code 404 (Not Found). If the client sends a request that successfully deletes a resource, the status code should be 204 (No Content). If the client sends a request that creates a new resource, the status code should be 201 (Created). Watch out for unexpected response status codes in the 5xx range. These messages are usually reported by the host server to indicate that it was unable to fulfill a valid request. Test the different request header combinations that a client application can specify and ensure that the web API returns the expected information in response messages. Test query strings. If an operation can take optional parameters (such as pagination requests), test the different combinations and order of parameters. Verify that asynchronous operations complete successfully. If the web API supports streaming for requests that return large binary objects (such as video or audio), ensure that client requests are not blocked while the data is streamed. 
If the web API implements polling for long-running data modification operations, verify that the operations report their status correctly as they proceed. You should also create and run performance tests to check that the web API operates satisfactorily under load. You can build a web performance and load test project by using Visual Studio Ultimate. For more information, see Run performance tests on an application before a release. Using Azure API Management On Azure, consider using Azure API Management to publish and manage a web API. Using this facility, you can generate a service that acts as a façade for one or more web APIs. The service is itself a scalable web service that you can create and configure by using the Azure Management portal. You can use this service to publish and manage a web API as follows: 1. Deploy the web API to a website, Azure cloud service, or Azure virtual machine. 2. Connect the API management service to the web API. Requests sent to the URL of the management API are mapped to URIs in the web API. The same API management service can route requests to more than one web API. This enables you to aggregate multiple web APIs into a single management service. Similarly, the same web API can be referenced from more than one API management service if you need to restrict or partition the functionality available to different applications. NOTE The URIs in HATEOAS links generated as part of the response for HTTP GET requests should reference the URL of the API management service and not the web server hosting the web API. 3. For each web API, specify the HTTP operations that the web API exposes together with any optional parameters that an operation can take as input. You can also configure whether the API management service should cache the response received from the web API to optimize repeated requests for the same data. Record the details of the HTTP responses that each operation can generate. This information is used to generate documentation for developers, so it is important that it is accurate and complete. You can either define operations manually using the wizards provided by the Azure Management portal, or you can import them from a file containing the definitions in WADL or Swagger format. 4. Configure the security settings for communications between the API management service and the web server hosting the web API. The API management service currently supports Basic authentication and mutual authentication using certificates, and OAuth 2.0 user authorization. 5. Create a product. A product is the unit of publication; you add the web APIs that you previously connected to the management service to the product. When the product is published, the web APIs become available to developers. NOTE Prior to publishing a product, you can also define user groups that can access the product and add users to these groups. This gives you control over the developers and applications that can use the web API. If a web API is subject to approval, a developer must send a request to the product administrator before being able to access it. The administrator can grant or deny access to the developer. Existing developers can also be blocked if circumstances change. 6. Configure policies for each web API.
Policies govern aspects such as whether cross-domain calls should be allowed, how to authenticate clients, whether to convert between XML and JSON data formats transparently, whether to restrict calls from a given IP range, usage quotas, and whether to limit the call rate. Policies can be applied globally across the entire product, for a single web API in a product, or for individual operations in a web API. For more information, see the API Management Documentation. TIP Azure provides the Azure Traffic Manager which enables you to implement failover and load-balancing, and reduce latency across multiple instances of a web site hosted in different geographic locations. You can use Azure Traffic Manager in conjunction with the API Management Service; the API Management Service can route requests to instances of a web site through Azure Traffic Manager. For more information, see Traffic Manager routing Methods. In this structure, if you are using custom DNS names for your web sites, you should configure the appropriate CNAME record for each web site to point to the DNS name of the Azure Traffic Manager web site. Supporting client-side developers Developers constructing client applications typically require information on how to access the web API, and documentation concerning the parameters, data types, return types, and return codes that describe the different requests and responses between the web service and the client application. Document the REST operations for a web API The Azure API Management Service includes a developer portal that describes the REST operations exposed by a web API. When a product has been published it appears on this portal. Developers can use this portal to sign up for access; the administrator can then approve or deny the request. If the developer is approved, they are assigned a subscription key that is used to authenticate calls from the client applications that they develop. This key must be provided with each web API call otherwise it will be rejected. This portal also provides: Documentation for the product, listing the operations that it exposes, the parameters required, and the different responses that can be returned. Note that this information is generated from the details provided in step 3 in the list in the Publishing a web API by using the Microsoft Azure API Management Service section. Code snippets that show how to invoke operations from several languages, including JavaScript, C#, Java, Ruby, Python, and PHP. A developers' console that enables a developer to send an HTTP request to test each operation in the product and view the results. A page where the developer can report any issues or problems found. The Azure Management portal enables you to customize the developer portal to change the styling and layout to match the branding of your organization. Implement a client SDK Building a client application that invokes REST requests to access a web API requires writing a significant amount of code to construct each request and format it appropriately, send the request to the server hosting the web service, and parse the response to work out whether the request succeeded or failed and extract any data returned. To insulate the client application from these concerns, you can provide an SDK that wraps the REST interface and abstracts these low -level details inside a more functional set of methods. 
A client application uses these methods, which transparently convert calls into REST requests and then convert the responses back into method return values. This is a common technique that is implemented by many services, including the Azure SDK. Creating a client-side SDK is a considerable undertaking as it has to be implemented consistently and tested carefully. However, much of this process can be made mechanical, and many vendors supply tools that can automate many of these tasks. Monitoring a web API Depending on how you have published and deployed your web API you can monitor the web API directly, or you can gather usage and health information by analyzing the traffic that passes through the API Management service. Monitoring a web API directly If you have implemented your web API by using the ASP.NET Web API template (either as a Web API project or as a Web role in an Azure cloud service) and Visual Studio 2013, you can gather availability, performance, and usage data by using ASP.NET Application Insights. Application Insights is a package that transparently tracks and records information about requests and responses when the web API is deployed to the cloud; once the package is installed and configured, you don't need to amend any code in your web API to use it. When you deploy the web API to an Azure web site, all traffic is examined and the following statistics are gathered: Server response time. Number of server requests and the details of each request. The top slowest requests in terms of average response time. The details of any failed requests. The number of sessions initiated by different browsers and user agents. The most frequently viewed pages (primarily useful for web applications rather than web APIs). The different user roles accessing the web API. You can view this data in real time from the Azure Management portal. You can also create webtests that monitor the health of the web API. A webtest sends a periodic request to a specified URI in the web API and captures the response. You can specify the definition of a successful response (such as HTTP status code 200), and if the request does not return this response you can arrange for an alert to be sent to an administrator. If necessary, the administrator can restart the server hosting the web API if it has failed. For more information, see Application Insights - Get started with ASP.NET. Monitoring a web API through the API Management Service If you have published your web API by using the API Management service, the API Management page on the Azure Management portal contains a dashboard that enables you to view the overall performance of the service. The Analytics page enables you to drill down into the details of how the product is being used. This page contains the following tabs: Usage. This tab provides information about the number of API calls made and the bandwidth used to handle these calls over time. You can filter usage details by product, API, and operation. Health. This tab enables you to view the outcome of API requests (the HTTP status codes returned), the effectiveness of the caching policy, the API response time, and the service response time. Again, you can filter health data by product, API, and operation. Activity. This tab provides a text summary of the numbers of successful calls, failed calls, blocked calls, average response time, and response times for each product, web API, and operation. This page also lists the number of calls made by each developer. At a glance. 
This tab displays a summary of the performance data, including the developers responsible for making the most API calls, and the products, web APIs, and operations that received these calls. You can use this information to determine whether a particular web API or operation is causing a bottleneck, and if necessary scale the host environment and add more servers. You can also ascertain whether one or more applications are using a disproportionate volume of resources and apply the appropriate policies to set quotas and limit call rates.
NOTE You can change the details for a published product, and the changes are applied immediately. For example, you can add or remove an operation from a web API without requiring that you republish the product that contains the web API.
More information
ASP.NET Web API OData contains examples and further information on implementing an OData web API by using ASP.NET.
Introducing batch support in Web API and Web API OData describes how to implement batch operations in a web API by using OData.
Idempotency patterns on Jonathan Oliver’s blog provides an overview of idempotency and how it relates to data management operations.
Status Code Definitions on the W3C website contains a full list of HTTP status codes and their descriptions.
Run background tasks with WebJobs provides information and examples on using WebJobs to perform background operations.
Azure Notification Hubs notify users shows how to use an Azure Notification Hub to push asynchronous responses to client applications.
API Management describes how to publish a product that provides controlled and secure access to a web API.
Azure API Management REST API reference describes how to use the API Management REST API to build custom management applications.
Traffic Manager routing methods summarizes how Azure Traffic Manager can be used to load-balance requests across multiple instances of a website hosting a web API.
Application Insights - Get started with ASP.NET provides detailed information on installing and configuring Application Insights in an ASP.NET Web API project.
Autoscaling
Autoscaling is the process of dynamically allocating resources to match performance requirements. As the volume of work grows, an application may need additional resources to maintain the desired performance levels and satisfy service-level agreements (SLAs). As demand slackens and the additional resources are no longer needed, they can be deallocated to minimize costs. Autoscaling takes advantage of the elasticity of cloud-hosted environments while easing management overhead. It reduces the need for an operator to continually monitor the performance of a system and make decisions about adding or removing resources.
There are two main ways that an application can scale:
Vertical scaling, also called scaling up and down, means changing the capacity of a resource. For example, you could move an application to a larger VM size. Vertical scaling often requires making the system temporarily unavailable while it is being redeployed. Therefore, it's less common to automate vertical scaling.
Horizontal scaling, also called scaling out and in, means adding or removing instances of a resource. The application continues running without interruption as new resources are provisioned. When the provisioning process is complete, the solution is deployed on these additional resources. If demand drops, the additional resources can be shut down cleanly and deallocated.
Many cloud-based systems, including Microsoft Azure, support automatic horizontal scaling. The rest of this article focuses on horizontal scaling. NOTE Autoscaling mostly applies to compute resources. While it's possible to horizontally scale a database or message queue, this usually involves data partitioning, which is generally not automated. Overview An autoscaling strategy typically involves the following pieces: Instrumentation and monitoring systems at the application, service, and infrastructure levels. These systems capture key metrics, such as response times, queue lengths, CPU utilization, and memory usage. Decision-making logic that evaluates these metrics against predefined thresholds or schedules, and decides whether to scale. Components that scale the system. Testing, monitoring, and tuning of the autoscaling strategy to ensure that it functions as expected. Azure provides built-in autoscaling mechanisms that address common scenarios. If a particular service or technology does not have built-in autoscaling functionality, or if you have specific autoscaling requirements beyond its capabilities, you might consider a custom implementation. A custom implementation would collect operational and system metrics, analyze the metrics, and then scale resources accordingly. Configure autoscaling for an Azure solution Azure provides built-in autoscaling for most compute options. Virtual Machines support autoscaling through the use of VM Scale Sets, which are a way to manage a set of Azure virtual machines as a group. See How to use automatic scaling and Virtual Machine Scale Sets. Service Fabric also supports auto-scaling through VM Scale Sets. Every node type in a Service Fabric cluster is set up as a separate VM scale set. That way, each node type can be scaled in or out independently. See Scale a Service Fabric cluster in or out using auto-scale rules. Azure App Service has built-in autoscaling. Autoscale settings apply to all of the apps within an App Service. See Scale instance count manually or automatically. Azure Cloud Services has built-in autoscaling at the role level. See How to configure auto scaling for a Cloud Service in the portal. These compute options all use Azure Monitor autoscale to provide a common set of autoscaling functionality. Azure Functions differs from the previous compute options, because you don't need to configure any autoscale rules. Instead, Azure Functions automatically allocates compute power when your code is running, scaling out as necessary to handle load. For more information, see Choose the correct hosting plan for Azure Functions. Finally, a custom autoscaling solution can sometimes be useful. For example, you could use Azure diagnostics and application-based metrics, along with custom code to monitor and export the application metrics. Then you could define custom rules based on these metrics, and use Resource Manager REST APIs to trigger autoscaling. However, a custom solution is not simple to implement, and should be considered only if none of the previous approaches can fulfill your requirements. Use the built-in autoscaling features of the platform, if they meet your requirements. If not, carefully consider whether you really need more complex scaling features. Examples of additional requirements may include more granularity of control, different ways to detect trigger events for scaling, scaling across subscriptions, and scaling other types of resources. 
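To make the moving parts of such a custom implementation concrete, the following C# sketch shows decision-making logic that compares an aggregated CPU metric against thresholds and adjusts the instance count within fixed limits. The IMetricSource and IScaler interfaces, the thresholds, and the SimpleAutoscaler class are all hypothetical; in a real solution the scaler would call the Resource Manager REST APIs or an SDK.

using System;
using System.Threading.Tasks;

// Hypothetical abstractions: how metrics are read and how scaling is applied
// is left to the caller (for example, via the Resource Manager REST APIs).
public interface IMetricSource
{
    Task<double> GetAverageCpuPercentAsync(TimeSpan window);
}

public interface IScaler
{
    Task<int> GetInstanceCountAsync();
    Task SetInstanceCountAsync(int count);
}

public class SimpleAutoscaler
{
    private readonly IMetricSource _metrics;
    private readonly IScaler _scaler;
    private readonly int _min;
    private readonly int _max;

    public SimpleAutoscaler(IMetricSource metrics, IScaler scaler, int min, int max)
    {
        _metrics = metrics;
        _scaler = scaler;
        _min = min;
        _max = max;
    }

    // Evaluate a single reactive rule: scale out above 70% CPU, in below 50%,
    // using a value aggregated over the last 10 minutes rather than a point reading.
    public async Task EvaluateAsync()
    {
        double avgCpu = await _metrics.GetAverageCpuPercentAsync(TimeSpan.FromMinutes(10));
        int current = await _scaler.GetInstanceCountAsync();
        int desired = current;

        if (avgCpu > 70) desired = current + 1;
        else if (avgCpu < 50) desired = current - 1;

        desired = Math.Max(_min, Math.Min(_max, desired));   // respect instance limits
        if (desired != current)
        {
            await _scaler.SetInstanceCountAsync(desired);
        }
    }
}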
Use Azure Monitor autoscale
Azure Monitor autoscale provides a common set of autoscaling functionality for VM Scale Sets, Azure App Service, and Azure Cloud Services. Scaling can be performed on a schedule, or based on a runtime metric, such as CPU or memory usage. Examples:
Scale out to 10 instances on weekdays, and scale in to 4 instances on Saturday and Sunday.
Scale out by one instance if average CPU usage is above 70%, and scale in by one instance if CPU usage falls below 50%.
Scale out by one instance if the number of messages in a queue exceeds a certain threshold.
For a list of built-in metrics, see Azure Monitor autoscaling common metrics. You can also implement custom metrics by using Application Insights. You can configure autoscaling by using PowerShell, the Azure CLI, an Azure Resource Manager template, or the Azure portal. For more detailed control, use the Azure Resource Manager REST API. The Azure Monitoring Service Management Library and the Microsoft Insights Library (in preview) are SDKs that allow you to collect metrics from different resources and perform autoscaling by making use of the REST APIs. For resources where Azure Resource Manager support isn't available, or if you are using Azure Cloud Services, the Service Management REST API can be used for autoscaling. In all other cases, use Azure Resource Manager.
Consider the following points when using Azure autoscale:
Consider whether you can predict the load on the application well enough to use scheduled autoscaling, adding and removing instances to meet anticipated peaks in demand. If this isn't possible, use reactive autoscaling based on runtime metrics, in order to handle unpredictable changes in demand. Typically, you can combine these approaches. For example, create a strategy that adds resources based on a schedule of the times when you know the application is most busy. This helps to ensure that capacity is available when required, without any delay from starting new instances. For each scheduled rule, define metrics that allow reactive autoscaling during that period to ensure that the application can handle sustained but unpredictable peaks in demand.
It's often difficult to understand the relationship between metrics and capacity requirements, especially when an application is initially deployed. Provision a little extra capacity at the beginning, and then monitor and tune the autoscaling rules to bring the capacity closer to the actual load.
Configure the autoscaling rules, and then monitor the performance of your application over time. Use the results of this monitoring to adjust the way in which the system scales if necessary. However, keep in mind that autoscaling is not an instantaneous process. It takes time to react to a metric such as average CPU utilization exceeding (or falling below) a specified threshold.
Autoscaling rules that use a detection mechanism based on a measured trigger attribute (such as CPU usage or queue length) use an aggregated value over time, rather than instantaneous values, to trigger an autoscaling action. By default, the aggregate is an average of the values. This prevents the system from reacting too quickly, or causing rapid oscillation. It also allows time for new instances that are auto-started to settle into running mode, preventing additional autoscaling actions from occurring while the new instances are starting up.
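As a rough illustration of this kind of aggregation, the following hypothetical C# helper keeps recent samples and exposes only their average over a window; a scaling decision based on it reacts to sustained load rather than a momentary spike.

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only: aggregate samples over a time window so that a single
// spike does not trigger a scale action, mirroring how autoscale rules
// evaluate an average over a period rather than an instantaneous value.
public class WindowedMetric
{
    private readonly TimeSpan _window;
    private readonly List<(DateTime Time, double Value)> _samples = new List<(DateTime, double)>();

    public WindowedMetric(TimeSpan window) => _window = window;

    public void Record(double value) => _samples.Add((DateTime.UtcNow, value));

    public double Average()
    {
        DateTime cutoff = DateTime.UtcNow - _window;
        _samples.RemoveAll(s => s.Time < cutoff);            // keep only recent samples
        return _samples.Count == 0 ? 0 : _samples.Average(s => s.Value);
    }
}

// Usage sketch: trigger a scale-out only when the 10-minute average crosses the threshold.
// var cpu = new WindowedMetric(TimeSpan.FromMinutes(10));
// cpu.Record(currentCpuPercent);
// bool scaleOut = cpu.Average() > 70;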
For Azure Cloud Services and Azure Virtual Machines, the default period for the aggregation is 45 minutes, so it can take up to this period of time for the metric to trigger autoscaling in response to spikes in demand. You can change the aggregation period by using the SDK, but be aware that periods of fewer than 25 minutes may cause unpredictable results. For Web Apps, the averaging period is much shorter, allowing new instances to be available in about five minutes after a change to the average trigger measure. If you configure autoscaling using the SDK rather than the portal, you can specify a more detailed schedule during which the rules are active. You can also create your own metrics and use them with or without any of the existing ones in your autoscaling rules. For example, you may wish to use alternative counters, such as the number of requests per second or the average memory availability, or use custom counters that measure specific business processes. When autoscaling Service Fabric, the node types in your cluster are made of VM scale sets at the backend, so you need to set up auto-scale rules for each node type. Take into account the number of nodes that you must have before you set up auto-scaling. The minimum number of nodes that you must have for the primary node type is driven by the reliability level you have chosen. For more info, see scale a Service Fabric cluster in or out using auto-scale rules. You can use the portal to link resources such as SQL Database instances and queues to a Cloud Service instance. This allows you to more easily access the separate manual and automatic scaling configuration options for each of the linked resources. For more information, see How to: Link a resource to a cloud service. When you configure multiple policies and rules, they could conflict with each other. Autoscale uses the following conflict resolution rules to ensure that there is always a sufficient number of instances running: Scale out operations always take precedence over scale in operations. When scale out operations conflict, the rule that initiates the largest increase in the number of instances takes precedence. When scale in operations conflict, the rule that initiates the smallest decrease in the number of instances takes precedence. In an App Service Environment any worker pool or front-end metrics can be used to define autoscale rules. For more information, see Autoscaling and App Service Environment. Application design considerations Autoscaling isn't an instant solution. Simply adding resources to a system or running more instances of a process doesn't guarantee that the performance of the system will improve. Consider the following points when designing an autoscaling strategy: The system must be designed to be horizontally scalable. Avoid making assumptions about instance affinity; do not design solutions that require that the code is always running in a specific instance of a process. When scaling a cloud service or web site horizontally, don't assume that a series of requests from the same source will always be routed to the same instance. For the same reason, design services to be stateless to avoid requiring a series of requests from an application to always be routed to the same instance of a service. When designing a service that reads messages from a queue and processes them, don't make any assumptions about which instance of the service handles a specific message. Autoscaling could start additional instances of a service as the queue length grows. 
The Competing Consumers pattern describes how to handle this scenario.
If the solution implements a long-running task, design this task to support both scaling out and scaling in. Without due care, such a task could prevent an instance of a process from being shut down cleanly when the system scales in, or it could lose data if the process is forcibly terminated. Ideally, refactor a long-running task and break up the processing that it performs into smaller, discrete chunks. The Pipes and Filters pattern provides an example of how you can achieve this. Alternatively, you can implement a checkpoint mechanism that records state information about the task at regular intervals, and save this state in durable storage that can be accessed by any instance of the process running the task. In this way, if the process is shut down, the work that it was performing can be resumed from the last checkpoint by using another instance.
When background tasks run on separate compute instances, such as in worker roles of a cloud services hosted application, you may need to scale different parts of the application using different scaling policies. For example, you may need to deploy additional user interface (UI) compute instances without increasing the number of background compute instances, or the opposite of this. If you offer different levels of service (such as basic and premium service packages), you may need to scale out the compute resources for premium service packages more aggressively than those for basic service packages in order to meet SLAs.
Consider using the length of the queue over which UI and background compute instances communicate as a criterion for your autoscaling strategy. This is the best indicator of an imbalance or difference between the current load and the processing capacity of the background task.
If you base your autoscaling strategy on counters that measure business processes, such as the number of orders placed per hour or the average execution time of a complex transaction, ensure that you fully understand the relationship between the results from these types of counters and the actual compute capacity requirements. It may be necessary to scale more than one component or compute unit in response to changes in business process counters.
To prevent a system from attempting to scale out excessively, and to avoid the costs associated with running many thousands of instances, consider limiting the maximum number of instances that can be automatically added. Most autoscaling mechanisms allow you to specify the minimum and maximum number of instances for a rule. In addition, consider gracefully degrading the functionality that the system provides if the maximum number of instances has been deployed, and the system is still overloaded.
Keep in mind that autoscaling might not be the most appropriate mechanism to handle a sudden burst in workload. It takes time to provision and start new instances of a service or add resources to a system, and the peak demand may have passed by the time these additional resources have been made available. In this scenario, it may be better to throttle the service. For more information, see the Throttling pattern.
Conversely, if you do need the capacity to process all requests when the volume fluctuates rapidly, and cost isn't a major contributing factor, consider using an aggressive autoscaling strategy that starts additional instances more quickly.
You can also use a scheduled policy that starts a sufficient number of instances to meet the maximum load before that load is expected.
The autoscaling mechanism should monitor the autoscaling process, and log the details of each autoscaling event (what triggered it, what resources were added or removed, and when). If you create a custom autoscaling mechanism, ensure that it incorporates this capability. Analyze the information to help measure the effectiveness of the autoscaling strategy, and tune it if necessary. You can tune both in the short term, as the usage patterns become more obvious, and over the long term, as the business expands or the requirements of the application evolve. If an application reaches the upper limit defined for autoscaling, the mechanism might also alert an operator who could manually start additional resources if necessary. Note that, under these circumstances, the operator may also be responsible for manually removing these resources after the workload eases.
Related patterns and guidance
The following patterns and guidance may also be relevant to your scenario when implementing autoscaling:
Throttling pattern. This pattern describes how an application can continue to function and meet SLAs when an increase in demand places an extreme load on resources. Throttling can be used with autoscaling to prevent a system from being overwhelmed while the system scales out.
Competing Consumers pattern. This pattern describes how to implement a pool of service instances that can handle messages from any application instance. Autoscaling can be used to start and stop service instances to match the anticipated workload. This approach enables a system to process multiple messages concurrently to optimize throughput, improve scalability and availability, and balance the workload.
Monitoring and diagnostics. Instrumentation and telemetry are vital for gathering the information that can drive the autoscaling process.
Background jobs
Many types of applications require background tasks that run independently of the user interface (UI). Examples include batch jobs, intensive processing tasks, and long-running processes such as workflows. Background jobs can be executed without requiring user interaction--the application can start the job and then continue to process interactive requests from users. This can help to minimize the load on the application UI, which can improve availability and reduce interactive response times.
For example, if an application is required to generate thumbnails of images that are uploaded by users, it can do this as a background job and save the thumbnail to storage when it is complete--without the user needing to wait for the process to be completed. In the same way, a user placing an order can initiate a background workflow that processes the order, while the UI allows the user to continue browsing the web app. When the background job is complete, it can update the stored orders data and send an email to the user that confirms the order.
When you consider whether to implement a task as a background job, the main criterion is whether the task can run without user interaction and without the UI needing to wait for the job to be completed. Tasks that require the user or the UI to wait while they are completed might not be appropriate as background jobs.
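To illustrate the thumbnail example above, a sketch of the hand-off from the UI to a background job might look like the following C# controller, assuming ASP.NET Core and the Azure.Storage.Queues client; the ThumbnailRequest type, route, and queue name are hypothetical. The request is queued and the caller gets an immediate 202 (Accepted) response instead of waiting for the work to finish.

using System.Text.Json;
using System.Threading.Tasks;
using Azure.Storage.Queues;          // Azure Storage queue client
using Microsoft.AspNetCore.Mvc;

// Hypothetical request payload for the thumbnail scenario in the text.
public record ThumbnailRequest(string ImageBlobName);

[ApiController]
[Route("api/thumbnails")]
public class ThumbnailController : ControllerBase
{
    private readonly QueueClient _queue;   // e.g. a queue named "thumbnail-requests", injected at startup

    public ThumbnailController(QueueClient queue) => _queue = queue;

    [HttpPost]
    public async Task<IActionResult> Post(ThumbnailRequest request)
    {
        // Hand the work to a background job and return immediately;
        // the user does not wait for the thumbnail to be generated.
        await _queue.SendMessageAsync(JsonSerializer.Serialize(request));
        return Accepted();
    }
}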
Types of background jobs
Background jobs typically include one or more of the following types of jobs:
CPU-intensive jobs, such as mathematical calculations or structural model analysis.
I/O-intensive jobs, such as executing a series of storage transactions or indexing files.
Batch jobs, such as nightly data updates or scheduled processing.
Long-running workflows, such as order fulfillment, or provisioning services and systems.
Sensitive-data processing where the task is handed off to a more secure location for processing. For example, you might not want to process sensitive data within a web app. Instead, you might use a pattern such as the Gatekeeper pattern to transfer the data to an isolated background process that has access to protected storage.
Triggers
Background jobs can be initiated in several different ways. They fall into one of the following categories:
Event-driven triggers. The task is started in response to an event, typically an action taken by a user or a step in a workflow.
Schedule-driven triggers. The task is invoked on a schedule based on a timer. This might be a recurring schedule or a one-off invocation that is specified for a later time.
Event-driven triggers
Event-driven invocation uses a trigger to start the background task. Examples of using event-driven triggers include:
The UI or another job places a message in a queue. The message contains data about an action that has taken place, such as the user placing an order. The background task listens on this queue and detects the arrival of a new message. It reads the message and uses the data in it as the input to the background job.
The UI or another job saves or updates a value in storage. The background task monitors the storage and detects changes. It reads the data and uses it as the input to the background job.
The UI or another job makes a request to an endpoint, such as an HTTPS URI, or an API that is exposed as a web service. It passes the data that is required to complete the background task as part of the request. The endpoint or web service invokes the background task, which uses the data as its input.
Typical examples of tasks that are suited to event-driven invocation include image processing, workflows, sending information to remote services, sending email messages, and provisioning new users in multitenant applications.
Schedule-driven triggers
Schedule-driven invocation uses a timer to start the background task. Examples of using schedule-driven triggers include:
A timer that is running locally within the application or as part of the application's operating system invokes a background task on a regular basis.
A timer that is running in a different application, or a timer service such as Azure Scheduler, sends a request to an API or web service on a regular basis. The API or web service invokes the background task.
A separate process or application starts a timer that causes the background task to be invoked once after a specified time delay, or at a specific time.
Typical examples of tasks that are suited to schedule-driven invocation include batch-processing routines (such as updating related-products lists for users based on their recent behavior), routine data processing tasks (such as updating indexes or generating accumulated results), data analysis for daily reports, data retention cleanup, and data consistency checks.
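As a minimal sketch of a schedule-driven trigger, the following C# loop invokes a background task at a fixed interval from within the hosting process; the NightlyCleanupScheduler name and the interval are illustrative, and a dedicated scheduler service could play the same role from outside the process. Note that if the host is scaled out, every instance runs its own timer, which leads directly to the caution that follows.

using System;
using System.Threading;
using System.Threading.Tasks;

// Illustrative schedule-driven trigger: a local timer invokes the background
// task on a recurring schedule.
public class NightlyCleanupScheduler
{
    public async Task RunAsync(Func<Task> backgroundTask, TimeSpan interval, CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            await backgroundTask();   // for example, a data retention cleanup routine
            try
            {
                await Task.Delay(interval, token);   // wait until the next scheduled run
            }
            catch (TaskCanceledException)
            {
                // Shutdown was requested while waiting; exit the loop cleanly.
            }
        }
    }
}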
If you use a schedule-driven task that must run as a single instance, be aware of the following: If the compute instance that is running the scheduler (such as a virtual machine using Windows scheduled tasks) is scaled, you will have multiple instances of the scheduler running. These could start multiple instances of the task. If tasks run for longer than the period between scheduler events, the scheduler may start another instance of the task while the previous one is still running. Returning results Background jobs execute asynchronously in a separate process, or even in a separate location, from the UI or the process that invoked the background task. Ideally, background tasks are “fire and forget” operations, and their execution progress has no impact on the UI or the calling process. This means that the calling process does not wait for completion of the tasks. Therefore, it cannot automatically detect when the task ends. If you require a background task to communicate with the calling task to indicate progress or completion, you must implement a mechanism for this. Some examples are: Write a status indicator value to storage that is accessible to the UI or caller task, which can monitor or check this value when required. Other data that the background task must return to the caller can be placed into the same storage. Establish a reply queue that the UI or caller listens on. The background task can send messages to the queue that indicate status and completion. Data that the background task must return to the caller can be placed into the messages. If you are using Azure Service Bus, you can use the ReplyTo and CorrelationId properties to implement this capability. Expose an API or endpoint from the background task that the UI or caller can access to obtain status information. Data that the background task must return to the caller can be included in the response. Have the background task call back to the UI or caller through an API to indicate status at predefined points or on completion. This might be through events raised locally or through a publish-and-subscribe mechanism. Data that the background task must return to the caller can be included in the request or event payload. Hosting environment You can host background tasks by using a range of different Azure platform services: Azure Web Apps and WebJobs. You can use WebJobs to execute custom jobs based on a range of different types of scripts or executable programs within the context of a web app. Azure Virtual Machines. If you have a Windows service or want to use the Windows Task Scheduler, it is common to host your background tasks within a dedicated virtual machine. Azure Batch. Batch is a platform service that schedules compute-intensive work to run on a managed collection of virtual machines. It can automatically scale compute resources. Azure Kubernetes Service (AKS ). Azure Kubernetes Service provides a managed hosting environment for Kubernetes on Azure. The following sections describe each of these options in more detail, and include considerations to help you choose the appropriate option. Azure Web Apps and WebJobs You can use Azure WebJobs to execute custom jobs as background tasks within an Azure Web App. WebJobs run within the context of your web app as a continuous process. WebJobs also run in response to a trigger event from Azure Scheduler or external factors, such as changes to storage blobs and message queues. Jobs can be started and stopped on demand, and shut down gracefully. 
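For example, a queue-triggered WebJobs function might look like the following C# sketch, assuming the Azure WebJobs SDK; the queue name and function are hypothetical, and the body is a placeholder for the real processing.

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class Functions
{
    // Runs whenever a message appears on the "thumbnail-requests" storage queue.
    // The WebJobs SDK binds the message body to the string parameter.
    public static void ProcessThumbnailRequest(
        [QueueTrigger("thumbnail-requests")] string message,
        ILogger logger)
    {
        logger.LogInformation("Processing request: {Message}", message);
        // Generate the thumbnail and save it to storage here.
    }
}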
If a continuously running WebJob fails, it is automatically restarted. Retry and error actions are configurable. When you configure a WebJob:
If you want the job to respond to an event-driven trigger, you should configure it as Run continuously. The script or program is stored in the folder named site/wwwroot/app_data/jobs/continuous.
If you want the job to respond to a schedule-driven trigger, you should configure it as Run on a schedule. The script or program is stored in the folder named site/wwwroot/app_data/jobs/triggered.
If you choose the Run on demand option when you configure a job, it will execute the same code as the Run on a schedule option when you start it.
Azure WebJobs run within the sandbox of the web app. This means that they can access environment variables and share information, such as connection strings, with the web app. The job has access to the unique identifier of the machine that is running the job. The connection string named AzureWebJobsStorage provides access to Azure storage queues, blobs, and tables for application data, and access to Service Bus for messaging and communication. The connection string named AzureWebJobsDashboard provides access to the job action log files.
Azure WebJobs have the following characteristics:
Security: WebJobs are protected by the deployment credentials of the web app.
Supported file types: You can define WebJobs by using command scripts (.cmd), batch files (.bat), PowerShell scripts (.ps1), bash shell scripts (.sh), PHP scripts (.php), Python scripts (.py), JavaScript code (.js), and executable programs (.exe, .jar, and more).
Deployment: You can deploy scripts and executables by using the Azure portal, by using Visual Studio, by using the Azure WebJobs SDK, or by copying them directly to the following locations:
For triggered execution: site/wwwroot/app_data/jobs/triggered/{job name}
For continuous execution: site/wwwroot/app_data/jobs/continuous/{job name}
Logging: Console.Out is treated (marked) as INFO. Console.Error is treated as ERROR. You can access monitoring and diagnostics information by using the Azure portal. You can download log files directly from the site. They are saved in the following locations:
For triggered execution: Vfs/data/jobs/triggered/jobName
For continuous execution: Vfs/data/jobs/continuous/jobName
Configuration: You can configure WebJobs by using the portal, the REST API, and PowerShell. You can use a configuration file named settings.job in the same root directory as the job script to provide configuration information for a job. For example:
{ "stopping_wait_time": 60 }
{ "is_singleton": true }
Considerations
By default, WebJobs scale with the web app. However, you can configure jobs to run on a single instance by setting the is_singleton configuration property to true. Single-instance WebJobs are useful for tasks that you do not want to scale or run as multiple simultaneous instances, such as reindexing, data analysis, and similar tasks.
To minimize the impact of jobs on the performance of the web app, consider creating an empty Azure Web App instance in a new App Service plan to host WebJobs that may be long running or resource intensive.
Azure Virtual Machines
Background tasks might be implemented in a way that prevents them from being deployed to Azure Web Apps, or these options might not be convenient. Typical examples are Windows services, and third-party utilities and executable programs.
Another example might be programs written for an execution environment that is different from the one hosting the application. For example, it might be a Unix or Linux program that you want to execute from a Windows or .NET application. You can choose from a range of operating systems for an Azure virtual machine, and run your service or executable on that virtual machine.
To help you choose when to use Virtual Machines, see Azure App Services, Cloud Services and Virtual Machines comparison. For information about the options for Virtual Machines, see Sizes for Windows virtual machines in Azure. For more information about the operating systems and prebuilt images that are available for Virtual Machines, see Azure Virtual Machines Marketplace.
To initiate the background task in a separate virtual machine, you have a range of options:
You can execute the task on demand directly from your application by sending a request to an endpoint that the task exposes. This passes in any data that the task requires. This endpoint invokes the task.
You can configure the task to run on a schedule by using a scheduler or timer that is available in your chosen operating system. For example, on Windows you can use Windows Task Scheduler to execute scripts and tasks. Or, if you have SQL Server installed on the virtual machine, you can use the SQL Server Agent to execute scripts and tasks.
You can use Azure Scheduler to initiate the task by adding a message to a queue that the task listens on, or by sending a request to an API that the task exposes.
See the earlier section Triggers for more information about how you can initiate background tasks.
Considerations
Consider the following points when you are deciding whether to deploy background tasks in an Azure virtual machine:
Hosting background tasks in a separate Azure virtual machine provides flexibility and allows precise control over initiation, execution, scheduling, and resource allocation. However, it will increase runtime cost if a virtual machine must be deployed just to run background tasks.
There is no facility to monitor the tasks in the Azure portal and no automated restart capability for failed tasks, although you can monitor the basic status of the virtual machine and manage it by using the Azure Resource Manager cmdlets. However, there are no facilities to control processes and threads in compute nodes. Typically, using a virtual machine will require additional effort to implement a mechanism that collects data from instrumentation in the task, and from the operating system in the virtual machine. One solution that might be appropriate is to use the System Center Management Pack for Azure.
You might consider creating monitoring probes that are exposed through HTTP endpoints. The code for these probes could perform health checks, collect operational information and statistics--or collate error information and return it to a management application. For more information, see the Health Endpoint Monitoring pattern.
For more information, see:
Virtual Machines
Azure Virtual Machines FAQ
Azure Batch
Consider Azure Batch if you need to run large, parallel high-performance computing (HPC) workloads across tens, hundreds, or thousands of VMs. The Batch service provisions the VMs, assigns tasks to the VMs, runs the tasks, and monitors the progress. Batch can automatically scale out the VMs in response to the workload. Batch also provides job scheduling. Azure Batch supports both Linux and Windows VMs.
Considerations Batch works well with intrinsically parallel workloads. It can also perform parallel calculations with a reduce step at the end, or run Message Passing Interface (MPI) applications for parallel tasks that require message passing between nodes. An Azure Batch job runs on a pool of nodes (VMs). One approach is to allocate a pool only when needed and then delete it after the job completes. This maximizes utilization, because nodes are not idle, but the job must wait for nodes to be allocated. Alternatively, you can create a pool ahead of time. That approach minimizes the time that it takes for a job to start, but can result in having nodes that sit idle. For more information, see Pool and compute node lifetime. For more information, see: What is Azure Batch? Develop large-scale parallel compute solutions with Batch Batch and HPC solutions for large-scale computing workloads Azure Kubernetes Service Azure Kubernetes Service (AKS ) manages your hosted Kubernetes environment, which makes it easy to deploy and manage containerized applications. Containers can be useful for running background jobs. Some of the benefits include: Containers support high-density hosting. You can isolate a background task in a container, while placing multiple containers in each VM. The container orchestrator handles internal load balancing, configuring the internal network, and other configuration tasks. Containers can be started and stopped as needed. Azure Container Registry allows you to register your containers inside Azure boundaries. This comes with security, privacy, and proximity benefits. Considerations Requires an understanding of how to use a container orchestrator. Depending on the skillset of your DevOps team, this may or may not be an issue. For more information, see: Overview of containers in Azure Introduction to private Docker container registries Partitioning If you decide to include background tasks within an existing compute instance, you must consider how this will affect the quality attributes of the compute instance and the background task itself. These factors will help you to decide whether to colocate the tasks with the existing compute instance or separate them out into a separate compute instance: Availability: Background tasks might not need to have the same level of availability as other parts of the application, in particular the UI and other parts that are directly involved in user interaction. Background tasks might be more tolerant of latency, retried connection failures, and other factors that affect availability because the operations can be queued. However, there must be sufficient capacity to prevent the backup of requests that could block queues and affect the application as a whole. Scalability: Background tasks are likely to have a different scalability requirement than the UI and the interactive parts of the application. Scaling the UI might be necessary to meet peaks in demand, while outstanding background tasks might be completed during less busy times by a fewer number of compute instances. Resiliency: Failure of a compute instance that just hosts background tasks might not fatally affect the application as a whole if the requests for these tasks can be queued or postponed until the task is available again. If the compute instance and/or tasks can be restarted within an appropriate interval, users of the application might not be affected. 
Security: Background tasks might have different security requirements or restrictions than the UI or other parts of the application. By using a separate compute instance, you can specify a different security environment for the tasks. You can also use patterns such as Gatekeeper to isolate the background compute instances from the UI in order to maximize security and separation.
Performance: You can choose the type of compute instance for background tasks to specifically match the performance requirements of the tasks. This might mean using a less expensive compute option if the tasks do not require the same processing capabilities as the UI, or a larger instance if they require additional capacity and resources.
Manageability: Background tasks might have a different development and deployment rhythm from the main application code or the UI. Deploying them to a separate compute instance can simplify updates and versioning.
Cost: Adding compute instances to execute background tasks increases hosting costs. You should carefully consider the trade-off between additional capacity and these extra costs.
For more information, see the Leader Election pattern and the Competing Consumers pattern.
Conflicts
If you have multiple instances of a background job, it is possible that they will compete for access to resources and services, such as databases and storage. This concurrent access can result in resource contention, which might cause conflicts in availability of the services and in the integrity of data in storage. You can resolve resource contention by using a pessimistic locking approach. This prevents competing instances of a task from concurrently accessing a service or corrupting data.
Another approach to resolve conflicts is to define background tasks as a singleton, so that there is only ever one instance running. However, this eliminates the reliability and performance benefits that a multiple-instance configuration can provide. This is especially true if the UI can supply sufficient work to keep more than one background task busy.
It is vital to ensure that the background task can automatically restart and that it has sufficient capacity to cope with peaks in demand. You can achieve this by allocating a compute instance with sufficient resources, by implementing a queueing mechanism that can store requests for later execution when demand decreases, or by using a combination of these techniques.
Coordination
The background tasks might be complex and might require multiple individual tasks to execute to produce a result or to fulfill all the requirements. It is common in these scenarios to divide the task into smaller discrete steps or subtasks that can be executed by multiple consumers. Multistep jobs can be more efficient and more flexible because individual steps might be reusable in multiple jobs. It is also easy to add, remove, or modify the order of the steps.
Coordinating multiple tasks and steps can be challenging, but there are three common patterns that you can use to guide your implementation of a solution:
Decomposing a task into multiple reusable steps. An application might be required to perform a variety of tasks of varying complexity on the information that it processes. A straightforward but inflexible approach to implementing this application might be to perform this processing as a monolithic module.
However, this approach is likely to reduce the opportunities for refactoring the code, optimizing it, or reusing it if parts of the same processing are required elsewhere within the application. For more information, see the Pipes and Filters pattern. Managing execution of the steps for a task. An application might perform tasks that comprise a number of steps (some of which might invoke remote services or access remote resources). The individual steps might be independent of each other, but they are orchestrated by the application logic that implements the task. For more information, see Scheduler Agent Supervisor pattern. Managing recovery for task steps that fail. An application might need to undo the work that is performed by a series of steps (which together define an eventually consistent operation) if one or more of the steps fail. For more information, see the Compensating Transaction pattern. Resiliency considerations Background tasks must be resilient in order to provide reliable services to the application. When you are planning and designing background tasks, consider the following points: Background tasks must be able to gracefully handle restarts without corrupting data or introducing inconsistency into the application. For long-running or multistep tasks, consider using check pointing by saving the state of jobs in persistent storage, or as messages in a queue if this is appropriate. For example, you can persist state information in a message in a queue and incrementally update this state information with the task progress so that the task can be processed from the last known good checkpoint--instead of restarting from the beginning. When using Azure Service Bus queues, you can use message sessions to enable the same scenario. Sessions allow you to save and retrieve the application processing state by using the SetState and GetState methods. For more information about designing reliable multistep processes and workflows, see the Scheduler Agent Supervisor pattern. When you use queues to communicate with background tasks, the queues can act as a buffer to store requests that are sent to the tasks while the application is under higher than usual load. This allows the tasks to catch up with the UI during less busy periods. It also means that restarts will not block the UI. For more information, see the Queue-Based Load Leveling pattern. If some tasks are more important than others, consider implementing the Priority Queue pattern to ensure that these tasks run before less important ones. Background tasks that are initiated by messages or process messages must be designed to handle inconsistencies, such as messages arriving out of order, messages that repeatedly cause an error (often referred to as poison messages), and messages that are delivered more than once. Consider the following: Messages that must be processed in a specific order, such as those that change data based on the existing data value (for example, adding a value to an existing value), might not arrive in the original order in which they were sent. Alternatively, they might be handled by different instances of a background task in a different order due to varying loads on each instance. Messages that must be processed in a specific order should include a sequence number, key, or some other indicator that background tasks can use to ensure that they are processed in the correct order. If you are using Azure Service Bus, you can use message sessions to guarantee the order of delivery. 
However, it is usually more efficient, where possible, to design the process so that the message order is not important.
Typically, a background task will peek at messages in the queue, which temporarily hides them from other message consumers. Then it deletes the messages after they have been successfully processed. If a background task fails when processing a message, that message will reappear on the queue after the peek time-out expires. It will be processed by another instance of the task or during the next processing cycle of this instance. If the message consistently causes an error in the consumer, it will block the task, the queue, and eventually the application itself when the queue becomes full. Therefore, it is vital to detect and remove poison messages from the queue. If you are using Azure Service Bus, messages that cause an error can be moved automatically or manually to an associated dead letter queue.
Queues guarantee at-least-once delivery, but they might deliver the same message more than once. In addition, if a background task fails after processing a message but before deleting it from the queue, the message will become available for processing again. Background tasks should be idempotent, which means that processing the same message more than once does not cause an error or inconsistency in the application’s data. Some operations are naturally idempotent, such as setting a stored value to a specific new value. However, operations such as adding a value to an existing stored value without checking that the stored value is still the same as when the message was originally sent will cause inconsistencies. Azure Service Bus queues can be configured to automatically remove duplicated messages.
Some messaging systems, such as Azure storage queues and Azure Service Bus queues, support a de-queue count property that indicates the number of times a message has been read from the queue. This can be useful in handling repeated and poison messages. For more information, see Asynchronous Messaging Primer and Idempotency Patterns.
Scaling and performance considerations
Background tasks must offer sufficient performance to ensure they do not block the application, or cause inconsistencies due to delayed operation when the system is under load. Typically, performance is improved by scaling the compute instances that host the background tasks. When you are planning and designing background tasks, consider the following points around scalability and performance:
Azure supports autoscaling (both scaling out and scaling back in) based on current demand and load or on a predefined schedule, for Web Apps and Virtual Machines hosted deployments. Use this feature to ensure that the application as a whole has sufficient performance capabilities while minimizing runtime costs.
Where background tasks have a different performance capability from the other parts of an application (for example, the UI or components such as the data access layer), hosting the background tasks together in a separate compute service allows the UI and background tasks to scale independently to manage the load. If multiple background tasks have significantly different performance capabilities from each other, consider dividing them and scaling each type independently. However, note that this might increase runtime costs.
Simply scaling the compute resources might not be sufficient to prevent loss of performance under load.
You might also need to scale storage queues and other resources to prevent a single point of the overall processing chain from becoming a bottleneck. Also, consider other limitations, such as the maximum throughput of storage and other services that the application and the background tasks rely on.
Background tasks must be designed for scaling. For example, they must be able to dynamically detect the number of storage queues in use in order to listen on or send messages to the appropriate queue.
By default, WebJobs scale with their associated Azure Web Apps instance. However, if you want a WebJob to run as only a single instance, you can create a settings.job file that contains the JSON data { "is_singleton": true }. This forces Azure to only run one instance of the WebJob, even if there are multiple instances of the associated web app. This can be a useful technique for scheduled jobs that must run as only a single instance.
Related patterns
Compute Partitioning Guidance
Caching
Caching is a common technique that aims to improve the performance and scalability of a system. It does this by temporarily copying frequently accessed data to fast storage that's located close to the application. If this fast data storage is located closer to the application than the original source, then caching can significantly improve response times for client applications by serving data more quickly.
Caching is most effective when a client instance repeatedly reads the same data, especially if all the following conditions apply to the original data store:
It remains relatively static.
It's slow compared to the speed of the cache.
It's subject to a high level of contention.
It's far away, so network latency can cause access to be slow.
Caching in distributed applications
Distributed applications typically implement either or both of the following strategies when caching data:
Using a private cache, where data is held locally on the computer that's running an instance of an application or service.
Using a shared cache, serving as a common source that can be accessed by multiple processes and/or machines.
In both cases, caching can be performed client-side and/or server-side. Client-side caching is done by the process that provides the user interface for a system, such as a web browser or desktop application. Server-side caching is done by the process that provides the business services that are running remotely.
Private caching
The most basic type of cache is an in-memory store. It's held in the address space of a single process and accessed directly by the code that runs in that process. This type of cache is very quick to access. It can also provide an extremely effective means for storing modest amounts of static data, since the size of a cache is typically constrained by the volume of memory that's available on the machine hosting the process.
If you need to cache more information than is physically possible in memory, you can write cached data to the local file system. This will be slower to access than data that's held in memory, but should still be faster and more reliable than retrieving data across a network.
If you have multiple instances of an application that uses this model running concurrently, each application instance has its own independent cache holding its own copy of the data. Think of a cache as a snapshot of the original data at some point in the past.
If this data is not static, it is likely that different application instances hold different versions of the data in their caches. Therefore, the same query performed by these instances can return different results, as shown in Figure 1.
Figure 1: Using an in-memory cache in different instances of an application.
Shared caching
Using a shared cache can help alleviate concerns that data might differ in each cache, which can occur with in-memory caching. Shared caching ensures that different application instances see the same view of cached data. It does this by locating the cache in a separate location, typically hosted as part of a separate service, as shown in Figure 2.
Figure 2: Using a shared cache.
An important benefit of the shared caching approach is the scalability it provides. Many shared cache services are implemented by using a cluster of servers, and utilize software that distributes the data across the cluster in a transparent manner. An application instance simply sends a request to the cache service. The underlying infrastructure is responsible for determining the location of the cached data in the cluster. You can easily scale the cache by adding more servers.
There are two main disadvantages of the shared caching approach:
The cache is slower to access because it is no longer held locally to each application instance.
The requirement to implement a separate cache service might add complexity to the solution.
Considerations for using caching
The following sections describe in more detail the considerations for designing and using a cache.
Decide when to cache data
Caching can dramatically improve performance, scalability, and availability. The more data that you have and the larger the number of users that need to access this data, the greater the benefits of caching become. That's because caching reduces the latency and contention that's associated with handling large volumes of concurrent requests in the original data store.
For example, a database might support a limited number of concurrent connections. Retrieving data from a shared cache, however, rather than the underlying database, makes it possible for a client application to access this data even if the number of available connections is currently exhausted. Additionally, if the database becomes unavailable, client applications might be able to continue by using the data that's held in the cache.
Consider caching data that is read frequently but modified infrequently (for example, data that has a higher proportion of read operations than write operations). However, we don't recommend that you use the cache as the authoritative store of critical information. Instead, ensure that all changes that your application cannot afford to lose are always saved to a persistent data store. This means that if the cache is unavailable, your application can still continue to operate by using the data store, and you won't lose important information.
Determine how to cache data effectively
The key to using a cache effectively lies in determining the most appropriate data to cache, and caching it at the appropriate time. The data can be added to the cache on demand the first time it is retrieved by an application. This means that the application needs to fetch the data only once from the data store, and that subsequent access can be satisfied by using the cache. Alternatively, a cache can be partially or fully populated with data in advance, typically when the application starts (an approach known as seeding).
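The on-demand approach is often described as the Cache-Aside pattern. A minimal C# sketch follows; the ICache abstraction, the ProductRepository class, and the 30-minute expiration are hypothetical and stand in for whatever cache client and data access code the application actually uses.

using System;
using System.Threading.Tasks;

// Hypothetical cache abstraction; in practice this could be backed by a shared
// cache service such as Azure Cache for Redis, or by an in-memory store.
public interface ICache
{
    Task<T> GetAsync<T>(string key) where T : class;
    Task SetAsync<T>(string key, T value, TimeSpan expiresAfter) where T : class;
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductRepository
{
    private readonly ICache _cache;
    private readonly Func<int, Task<Product>> _loadFromDatabase;   // hypothetical data access delegate

    public ProductRepository(ICache cache, Func<int, Task<Product>> loadFromDatabase)
    {
        _cache = cache;
        _loadFromDatabase = loadFromDatabase;
    }

    // Cache-aside: try the cache first, fall back to the data store on a miss,
    // and populate the cache so that subsequent reads are served quickly.
    public async Task<Product> GetProductAsync(int id)
    {
        string key = $"product:{id}";
        Product product = await _cache.GetAsync<Product>(key);
        if (product == null)
        {
            product = await _loadFromDatabase(id);
            await _cache.SetAsync(key, product, TimeSpan.FromMinutes(30));
        }
        return product;
    }
}

Seeding, by contrast, loads selected data into the cache before it is requested, typically when the application starts.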
However, it might not be advisable to implement seeding for a large cache because this approach can impose a sudden, high load on the original data store when the application starts running. Often an analysis of usage patterns can help you decide whether to fully or partially prepopulate a cache, and to choose the data to cache. For example, it can be useful to seed the cache with the static user profile data for customers who use the application regularly (perhaps every day), but not for customers who use the application only once a week. Caching typically works well with data that is immutable or that changes infrequently. Examples include reference information such as product and pricing information in an e-commerce application, or shared static resources that are costly to construct. Some or all of this data can be loaded into the cache at application startup to minimize demand on resources and to improve performance. It might also be appropriate to have a background process that periodically updates reference data in the cache to ensure it is up to date, or that refreshes the cache when reference data changes. Caching is less useful for dynamic data, although there are some exceptions to this consideration (see the section Cache highly dynamic data later in this article for more information). When the original data changes regularly, either the cached information becomes stale very quickly or the overhead of synchronizing the cache with the original data store reduces the effectiveness of caching. Note that a cache does not have to include the complete data for an entity. For example, if a data item represents a multivalued object such as a bank customer with a name, address, and account balance, some of these elements might remain static (such as the name and address), while others (such as the account balance) might be more dynamic. In these situations, it can be useful to cache the static portions of the data and retrieve (or calculate) only the remaining information when it is required. We recommend that you carry out performance testing and usage analysis to determine whether pre-population or on-demand loading of the cache, or a combination of both, is appropriate. The decision should be based on the volatility and usage pattern of the data. Cache utilization and performance analysis is particularly important in applications that encounter heavy loads and must be highly scalable. For example, in highly scalable scenarios it might make sense to seed the cache to reduce the load on the data store at peak times. Caching can also be used to avoid repeating computations while the application is running. If an operation transforms data or performs a complicated calculation, it can save the results of the operation in the cache. If the same calculation is required afterward, the application can simply retrieve the results from the cache. An application can modify data that's held in a cache. However, we recommend thinking of the cache as a transient data store that could disappear at any time. Do not store valuable data in the cache only; make sure that you maintain the information in the original data store as well. This means that if the cache becomes unavailable, you minimize the chance of losing data. Cache highly dynamic data When you store rapidly-changing information in a persistent data store, it can impose an overhead on the system. For example, consider a device that continually reports status or some other measurement. 
If an application chooses not to cache this data on the basis that the cached information will nearly always be outdated, then the same consideration could be true when storing and retrieving this information from the data store. In the time it takes to save and fetch this data, it might have changed. In a situation such as this, consider the benefits of storing the dynamic information directly in the cache instead of in the persistent data store. If the data is non-critical and does not require auditing, then it doesn't matter if the occasional change is lost.

Manage data expiration in a cache In most cases, data that's held in a cache is a copy of data that's held in the original data store. The data in the original data store might change after it was cached, causing the cached data to become stale. Many caching systems enable you to configure the cache to expire data and reduce the period for which data may be out of date. When cached data expires, it's removed from the cache, and the application must retrieve the data from the original data store (it can put the newly fetched information back into the cache). You can set a default expiration policy when you configure the cache. In many cache services, you can also stipulate the expiration period for individual objects when you store them programmatically in the cache. Some caches enable you to specify the expiration period as an absolute value, or as a sliding value that causes the item to be removed from the cache if it is not accessed within the specified time. This setting overrides any cache-wide expiration policy, but only for the specified objects.

NOTE Consider the expiration period for the cache and the objects that it contains carefully. If you make it too short, objects will expire too quickly and you will reduce the benefits of using the cache. If you make the period too long, you risk the data becoming stale.

It's also possible that the cache might fill up if data is allowed to remain resident for a long time. In this case, any requests to add new items to the cache might cause some items to be forcibly removed in a process known as eviction. Cache services typically evict data on a least-recently-used (LRU) basis, but you can usually override this policy and prevent items from being evicted. However, if you adopt this approach, you risk exceeding the memory that's available in the cache. An application that attempts to add an item to the cache will fail with an exception. Some caching implementations might provide additional eviction policies. There are several types of eviction policies. These include: A most-recently-used policy (in the expectation that the data will not be required again). A first-in-first-out policy (oldest data is evicted first). An explicit removal policy based on a triggered event (such as the data being modified).

Invalidate data in a client-side cache Data that's held in a client-side cache is generally considered to be outside the auspices of the service that provides the data to the client. A service cannot directly force a client to add or remove information from a client-side cache. This means that it's possible for a client that uses a poorly configured cache to continue using outdated information. For example, if the expiration policies of the cache aren't properly implemented, a client might use outdated information that's cached locally when the information in the original data source has changed.
If you are building a web application that serves data over an HTTP connection, you can implicitly force a web client (such as a browser or web proxy) to fetch the most recent information. One way to do this is to change the URI of a resource whenever that resource is updated. Web clients typically use the URI of a resource as the key in the client-side cache, so if the URI changes, the web client ignores any previously cached versions of a resource and fetches the new version instead.

Managing concurrency in a cache Caches are often designed to be shared by multiple instances of an application. Each application instance can read and modify data in the cache. Consequently, the same concurrency issues that arise with any shared data store also apply to a cache. In a situation where an application needs to modify data that's held in the cache, you might need to ensure that updates made by one instance of the application do not overwrite the changes made by another instance. Depending on the nature of the data and the likelihood of collisions, you can adopt one of two approaches to concurrency: Optimistic. Immediately prior to updating the data, the application checks to see whether the data in the cache has changed since it was retrieved. If the data is still the same, the change can be made. Otherwise, the application has to decide whether to update it. (The business logic that drives this decision will be application-specific.) This approach is suitable for situations where updates are infrequent, or where collisions are unlikely to occur. Pessimistic. When it retrieves the data, the application locks it in the cache to prevent another instance from changing it. This process ensures that collisions cannot occur, but it can also block other instances that need to process the same data. Pessimistic concurrency can affect the scalability of a solution and is recommended only for short-lived operations. This approach might be appropriate for situations where collisions are more likely, especially if an application updates multiple items in the cache and must ensure that these changes are applied consistently.

Implement high availability and scalability, and improve performance Avoid using a cache as the primary repository of data; this is the role of the original data store from which the cache is populated. The original data store is responsible for ensuring the persistence of the data. Be careful not to introduce critical dependencies on the availability of a shared cache service into your solutions. An application should be able to continue functioning if the service that provides the shared cache is unavailable. The application should not hang or fail while waiting for the cache service to resume. Therefore, the application must be prepared to detect the availability of the cache service and fall back to the original data store if the cache is inaccessible. The Circuit-Breaker pattern is useful for handling this scenario. The service that provides the cache can be recovered, and once it becomes available, the cache can be repopulated as data is read from the original data store, following a strategy such as the Cache-aside pattern. However, system scalability may be affected if the application falls back to the original data store when the cache is temporarily unavailable. While the data store is being recovered, the original data store could be swamped with requests for data, resulting in timeouts and failed connections.
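As a minimal sketch of this fallback behavior (not a complete implementation of the Circuit Breaker or Cache-aside patterns), the following code reads from a shared Redis cache first and falls back to the original data store if the cache is unreachable. The GetProductFromDatabaseAsync method is a hypothetical placeholder for a query against the data store; it is not part of any library.

using System.Threading.Tasks;
using StackExchange.Redis;

public class ProductReader
{
    private readonly IDatabase cache; // Shared Redis cache (may be temporarily unavailable)

    public ProductReader(IDatabase cache)
    {
        this.cache = cache;
    }

    public async Task<string> GetProductAsync(string key)
    {
        try
        {
            // Attempt to read from the shared cache first
            RedisValue cached = await cache.StringGetAsync(key);
            if (cached.HasValue)
            {
                return cached;
            }
        }
        catch (RedisConnectionException)
        {
            // The cache service is unreachable; fall through to the data store.
            // A production system might record this failure and open a circuit breaker.
        }

        // Cache miss or cache unavailable: read from the original data store
        string value = await GetProductFromDatabaseAsync(key);

        try
        {
            // Repopulate the cache on a best-effort basis (cache-aside)
            await cache.StringSetAsync(key, value);
        }
        catch (RedisConnectionException)
        {
            // Ignore; the data store remains the authoritative source
        }

        return value;
    }

    private Task<string> GetProductFromDatabaseAsync(string key)
    {
        // Placeholder for a query against the original data store
        return Task.FromResult("product data for " + key);
    }
}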
Consider implementing a local, private cache in each instance of an application, together with the shared cache that all application instances access. When the application retrieves an item, it can check first in its local cache, then in the shared cache, and finally in the original data store. The local cache can be populated using the data in either the shared cache, or in the database if the shared cache is unavailable. This approach requires careful configuration to prevent the local cache from becoming too stale with respect to the shared cache. However, the local cache acts as a buffer if the shared cache is unreachable. Figure 3 shows this structure. Figure 3: Using a local private cache with a shared cache.

To support large caches that hold relatively long-lived data, some cache services provide a high-availability option that implements automatic failover if the cache becomes unavailable. This approach typically involves replicating the cached data that's stored on a primary cache server to a secondary cache server, and switching to the secondary server if the primary server fails or connectivity is lost. To reduce the latency that's associated with writing to multiple destinations, the replication to the secondary server might occur asynchronously when data is written to the cache on the primary server. This approach leads to the possibility that some cached information might be lost in the event of a failure, but the proportion of this data should be small compared to the overall size of the cache.

If a shared cache is large, it might be beneficial to partition the cached data across nodes to reduce the chances of contention and improve scalability. Many shared caches support the ability to dynamically add (and remove) nodes and rebalance the data across partitions. This approach might involve clustering, in which the collection of nodes is presented to client applications as a seamless, single cache. Internally, however, the data is dispersed between nodes following a predefined distribution strategy that balances the load evenly. For more information about possible partitioning strategies, see Data partitioning guidance. Clustering can also increase the availability of the cache. If a node fails, the remainder of the cache is still accessible. Clustering is frequently used in conjunction with replication and failover. Each node can be replicated, and the replica can be quickly brought online if the node fails.

Many read and write operations are likely to involve single data values or objects. However, at times it might be necessary to store or retrieve large volumes of data quickly. For example, seeding a cache could involve writing hundreds or thousands of items to the cache. An application might also need to retrieve a large number of related items from the cache as part of the same request. Many large-scale caches provide batch operations for these purposes. This enables a client application to package up a large volume of items into a single request and reduces the overhead that's associated with performing a large number of small requests.

Caching and eventual consistency For the cache-aside pattern to work, the instance of the application that populates the cache must have access to the most recent and consistent version of the data. In a system that implements eventual consistency (such as a replicated data store) this might not be the case. One instance of an application could modify a data item and invalidate the cached version of that item.
Another instance of the application might attempt to read this item from a cache, which causes a cache miss, so it reads the data from the data store and adds it to the cache. However, if the data store has not been fully synchronized with the other replicas, the application instance could read and populate the cache with the old value. For more information about handling data consistency, see the Data consistency primer.

Protect cached data Irrespective of the cache service you use, consider how to protect the data that's held in the cache from unauthorized access. There are two main concerns: The privacy of the data in the cache. The privacy of data as it flows between the cache and the application that's using the cache. To protect data in the cache, the cache service might implement an authentication mechanism that requires that applications specify the following: Which identities can access data in the cache. Which operations (read and write) these identities are allowed to perform. To reduce overhead that's associated with reading and writing data, after an identity has been granted write and/or read access to the cache, that identity can use any data in the cache. If you need to restrict access to subsets of the cached data, you can do one of the following: Split the cache into partitions (by using different cache servers) and only grant access to identities for the partitions that they should be allowed to use. Encrypt the data in each subset by using different keys, and provide the encryption keys only to identities that should have access to each subset. A client application might still be able to retrieve all of the data in the cache, but it will only be able to decrypt the data for which it has the keys. You must also protect the data as it flows in and out of the cache. To do this, you depend on the security features provided by the network infrastructure that client applications use to connect to the cache. If the cache is implemented using an on-site server within the same organization that hosts the client applications, then the isolation of the network itself might not require you to take additional steps. If the cache is located remotely and requires a TCP or HTTP connection over a public network (such as the Internet), consider implementing SSL.

Considerations for implementing caching in Azure Azure Redis Cache is an implementation of the open source Redis cache that runs as a service in an Azure datacenter. It provides a caching service that can be accessed from any Azure application, whether the application is implemented as a cloud service, a website, or inside an Azure virtual machine. Caches can be shared by client applications that have the appropriate access key. Azure Redis Cache is a high-performance caching solution that provides availability, scalability, and security. It typically runs as a service spread across one or more dedicated machines. It attempts to store as much information as it can in memory to ensure fast access. This architecture is intended to provide low latency and high throughput by reducing the need to perform slow I/O operations. Azure Redis Cache is compatible with many of the various APIs that are used by client applications. If you have existing applications that already use Redis running on-premises, Azure Redis Cache provides a quick migration path to caching in the cloud.

Features of Redis Redis is more than a simple cache server.
It provides a distributed in-memory database with an extensive command set that supports many common scenarios. These are described later in this document, in the section Using Redis caching. This section summarizes some of the key features that Redis provides. Redis as an in-memory database Redis supports both read and write operations. In Redis, writes can be protected from system failure either by being stored periodically in a local snapshot file or in an append-only log file. This is not the case in many caches (which should be considered transitory data stores). All writes are asynchronous and do not block clients from reading and writing data. When Redis starts running, it reads the data from the snapshot or log file and uses it to construct the in-memory cache. For more information, see Redis persistence on the Redis website. NOTE Redis does not guarantee that all writes will be saved in the event of a catastrophic failure, but at worst you might lose only a few seconds worth of data. Remember that a cache is not intended to act as an authoritative data source, and it is the responsibility of the applications using the cache to ensure that critical data is saved successfully to an appropriate data store. For more information, see the Cache-aside pattern. Redis data types Redis is a key-value store, where values can contain simple types or complex data structures such as hashes, lists, and sets. It supports a set of atomic operations on these data types. Keys can be permanent or tagged with a limited time-to-live, at which point the key and its corresponding value are automatically removed from the cache. For more information about Redis keys and values, visit the page An introduction to Redis data types and abstractions on the Redis website. Redis replication and clustering Redis supports master/subordinate replication to help ensure availability and maintain throughput. Write operations to a Redis master node are replicated to one or more subordinate nodes. Read operations can be served by the master or any of the subordinates. In the event of a network partition, subordinates can continue to serve data and then transparently resynchronize with the master when the connection is reestablished. For further details, visit the Replication page on the Redis website. Redis also provides clustering, which enables you to transparently partition data into shards across servers and spread the load. This feature improves scalability, because new Redis servers can be added and the data repartitioned as the size of the cache increases. Furthermore, each server in the cluster can be replicated by using master/subordinate replication. This ensures availability across each node in the cluster. For more information about clustering and sharding, visit the Redis cluster tutorial page on the Redis website. Redis memory use A Redis cache has a finite size that depends on the resources available on the host computer. When you configure a Redis server, you can specify the maximum amount of memory it can use. You can also configure a key in a Redis cache to have an expiration time, after which it is automatically removed from the cache. This feature can help prevent the in-memory cache from filling with old or stale data. As memory fills up, Redis can automatically evict keys and their values by following a number of policies. 
The default is LRU (least recently used), but you can also select other policies such as evicting keys at random or turning off eviction altogether (in which case, attempts to add items to the cache fail if it is full). The page Using Redis as an LRU cache provides more information.

Redis transactions and batches Redis enables a client application to submit a series of operations that read and write data in the cache as an atomic transaction. All the commands in the transaction are guaranteed to run sequentially, and no commands issued by other concurrent clients will be interwoven between them. However, these are not true transactions as a relational database would perform them. Transaction processing consists of two stages--the first is when the commands are queued, and the second is when the commands are run. During the command queuing stage, the commands that comprise the transaction are submitted by the client. If some sort of error occurs at this point (such as a syntax error, or the wrong number of parameters) then Redis refuses to process the entire transaction and discards it. During the run phase, Redis performs each queued command in sequence. If a command fails during this phase, Redis continues with the next queued command and does not roll back the effects of any commands that have already been run. This simplified form of transaction helps to maintain performance and avoid performance problems that are caused by contention. Redis does implement a form of optimistic locking to assist in maintaining consistency. For detailed information about transactions and locking with Redis, visit the Transactions page on the Redis website. Redis also supports non-transactional batching of requests. The Redis protocol that clients use to send commands to a Redis server enables a client to send a series of operations as part of the same request. This can help to reduce packet fragmentation on the network. When the batch is processed, each command is performed. If any of these commands are malformed, they will be rejected (which doesn't happen with a transaction), but the remaining commands will be performed. There is also no guarantee about the order in which the commands in the batch will be processed.

Redis security Redis is focused purely on providing fast access to data, and is designed to run inside a trusted environment that can be accessed only by trusted clients. Redis supports a limited security model based on password authentication. (It is possible to remove authentication completely, although we don't recommend this.) All authenticated clients share the same global password and have access to the same resources. If you need more comprehensive sign-in security, you must implement your own security layer in front of the Redis server, and all client requests should pass through this additional layer. Redis should not be directly exposed to untrusted or unauthenticated clients. You can restrict access to commands by disabling them or renaming them (and by providing only privileged clients with the new names). Redis does not directly support any form of data encryption, so all encoding must be performed by client applications. Additionally, Redis does not provide any form of transport security. If you need to protect data as it flows across the network, we recommend implementing an SSL proxy. For more information, visit the Redis security page on the Redis website.

NOTE Azure Redis Cache provides its own security layer through which clients connect.
The underlying Redis servers are not exposed to the public network. Azure Redis cache Azure Redis Cache provides access to Redis servers that are hosted at an Azure datacenter. It acts as a façade that provides access control and security. You can provision a cache by using the Azure portal. The portal provides a number of predefined configurations. These range from a 53 GB cache running as a dedicated service that supports SSL communications (for privacy) and master/subordinate replication with an SLA of 99.9% availability, down to a 250 MB cache without replication (no availability guarantees) running on shared hardware. Using the Azure portal, you can also configure the eviction policy of the cache, and control access to the cache by adding users to the roles provided. These roles, which define the operations that members can perform, include Owner, Contributor, and Reader. For example, members of the Owner role have complete control over the cache (including security) and its contents, members of the Contributor role can read and write information in the cache, and members of the Reader role can only retrieve data from the cache. Most administrative tasks are performed through the Azure portal. For this reason, many of the administrative commands that are available in the standard version of Redis are not available, including the ability to modify the configuration programmatically, shut down the Redis server, configure additional subordinates, or forcibly save data to disk. The Azure portal includes a convenient graphical display that enables you to monitor the performance of the cache. For example, you can view the number of connections being made, the number of requests being performed, the volume of reads and writes, and the number of cache hits versus cache misses. Using this information, you can determine the effectiveness of the cache and if necessary, switch to a different configuration or change the eviction policy. Additionally, you can create alerts that send email messages to an administrator if one or more critical metrics fall outside of an expected range. For example, you might want to alert an administrator if the number of cache misses exceeds a specified value in the last hour, because it means the cache might be too small or data might be being evicted too quickly. You can also monitor the CPU, memory, and network usage for the cache. For further information and examples showing how to create and configure an Azure Redis Cache, visit the page Lap around Azure Redis Cache on the Azure blog. Caching session state and HTML output If you're building ASP.NET web applications that run by using Azure web roles, you can save session state information and HTML output in an Azure Redis Cache. The session state provider for Azure Redis Cache enables you to share session information between different instances of an ASP.NET web application, and is very useful in web farm situations where client-server affinity is not available and caching session data in-memory would not be appropriate. Using the session state provider with Azure Redis Cache delivers several benefits, including: Sharing session state with a large number of instances of ASP.NET web applications. Providing improved scalability. Supporting controlled, concurrent access to the same session state data for multiple readers and a single writer. Using compression to save memory and improve network performance. For more information, see ASP.NET session state provider for Azure Redis Cache. 
NOTE Do not use the session state provider for Azure Redis Cache with ASP.NET applications that run outside of the Azure environment. The latency of accessing the cache from outside of Azure can eliminate the performance benefits of caching data.

Similarly, the output cache provider for Azure Redis Cache enables you to save the HTTP responses generated by an ASP.NET web application. Using the output cache provider with Azure Redis Cache can improve the response times of applications that render complex HTML output. Application instances that generate similar responses can use the shared output fragments in the cache rather than generating this HTML output afresh. For more information, see ASP.NET output cache provider for Azure Redis Cache.

Building a custom Redis cache Azure Redis Cache acts as a façade to the underlying Redis servers. If you require an advanced configuration that is not covered by the Azure Redis cache (such as a cache bigger than 53 GB), you can build and host your own Redis servers by using Azure virtual machines. This is a potentially complex process because you might need to create several VMs to act as master and subordinate nodes if you want to implement replication. Furthermore, if you wish to create a cluster, then you need multiple masters and subordinate servers. A minimal clustered replication topology that provides a high degree of availability and scalability comprises at least six VMs organized as three pairs of master/subordinate servers (a cluster must contain at least three master nodes). Each master/subordinate pair should be located close together to minimize latency. However, each set of pairs can be running in different Azure datacenters located in different regions, if you wish to locate cached data close to the applications that are most likely to use it. For an example of building and configuring a Redis node running as an Azure VM, see Running Redis on a CentOS Linux VM in Azure.

NOTE If you implement your own Redis cache in this way, you are responsible for monitoring, managing, and securing the service.

Partitioning a Redis cache Partitioning the cache involves splitting the cache across multiple computers. This structure gives you several advantages over using a single cache server, including: Creating a cache that is much bigger than can be stored on a single server. Distributing data across servers, improving availability. If one server fails or becomes inaccessible, the data that it holds is unavailable, but the data on the remaining servers can still be accessed. For a cache, this is not crucial because the cached data is only a transient copy of the data that's held in a database. Cached data on a server that becomes inaccessible can be cached on a different server instead. Spreading the load across servers, thereby improving performance and scalability. Geolocating data close to the users that access it, thus reducing latency. For a cache, the most common form of partitioning is sharding. In this strategy, each partition (or shard) is a Redis cache in its own right. Data is directed to a specific partition by using sharding logic, which can use a variety of approaches to distribute the data. The Sharding pattern provides more information about implementing sharding. To implement partitioning in a Redis cache, you can take one of the following approaches: Server-side query routing. In this technique, a client application sends a request to any of the Redis servers that comprise the cache (probably the closest server).
Each Redis server stores metadata that describes the partition that it holds, and also contains information about which partitions are located on other servers. The Redis server examines the client request. If it can be resolved locally, it will perform the requested operation. Otherwise it will forward the request on to the appropriate server. This model is implemented by Redis clustering, and is described in more detail on the Redis cluster tutorial page on the Redis website. Redis clustering is transparent to client applications, and additional Redis servers can be added to the cluster (and the data re-partitioned) without requiring that you reconfigure the clients. Client-side partitioning. In this model, the client application contains logic (possibly in the form of a library) that routes requests to the appropriate Redis server. This approach can be used with Azure Redis Cache. Create multiple Azure Redis Caches (one for each data partition) and implement the client-side logic that routes the requests to the correct cache. If the partitioning scheme changes (if additional Azure Redis Caches are created, for example), client applications might need to be reconfigured. Proxy-assisted partitioning. In this scheme, client applications send requests to an intermediary proxy service which understands how the data is partitioned and then routes the request to the appropriate Redis server. This approach can also be used with Azure Redis Cache; the proxy service can be implemented as an Azure cloud service. This approach requires an additional level of complexity to implement the service, and requests might take longer to perform than using client-side partitioning. The page Partitioning: how to split data among multiple Redis instances on the Redis website provides further information about implementing partitioning with Redis. Implement Redis cache client applications Redis supports client applications written in numerous programming languages. If you are building new applications by using the .NET Framework, the recommended approach is to use the StackExchange.Redis client library. This library provides a .NET Framework object model that abstracts the details for connecting to a Redis server, sending commands, and receiving responses. It is available in Visual Studio as a NuGet package. You can use this same library to connect to an Azure Redis Cache, or a custom Redis cache hosted on a VM. To connect to a Redis server you use the static Connect method of the ConnectionMultiplexer class. The connection that this method creates is designed to be used throughout the lifetime of the client application, and the same connection can be used by multiple concurrent threads. Do not reconnect and disconnect each time you perform a Redis operation because this can degrade performance. You can specify the connection parameters, such as the address of the Redis host and the password. If you are using Azure Redis Cache, the password is either the primary or secondary key that is generated for Azure Redis Cache by using the Azure Management portal. After you have connected to the Redis server, you can obtain a handle on the Redis database that acts as the cache. The Redis connection provides the GetDatabase method to do this. You can then retrieve items from the cache and store data in the cache by using the StringGet and StringSet methods. 
These methods take a key as a parameter and either return the item in the cache that has a matching key (StringGet) or add the item to the cache with this key (StringSet). Depending on the location of the Redis server, many operations might incur some latency while a request is transmitted to the server and a response is returned to the client. The StackExchange library provides asynchronous versions of many of the methods that it exposes to help client applications remain responsive. These methods support the Task-based Asynchronous pattern in the .NET Framework. The following code snippet shows a method named RetrieveItem. It illustrates an implementation of the cache-aside pattern based on Redis and the StackExchange library. The method takes a string key value and attempts to retrieve the corresponding item from the Redis cache by calling the StringGetAsync method (the asynchronous version of StringGet). If the item is not found, it is fetched from the underlying data source using the GetItemFromDataSourceAsync method (which is a local method and not part of the StackExchange library). It's then added to the cache by using the StringSetAsync method so it can be retrieved more quickly next time.

// Connect to the Azure Redis cache
ConfigurationOptions config = new ConfigurationOptions();
config.EndPoints.Add("<your DNS name>.redis.cache.windows.net");
config.Password = "<Redis cache key from management portal>";
ConnectionMultiplexer redisHostConnection = ConnectionMultiplexer.Connect(config);
IDatabase cache = redisHostConnection.GetDatabase();
...
private async Task<string> RetrieveItem(string itemKey)
{
    // Attempt to retrieve the item from the Redis cache
    string itemValue = await cache.StringGetAsync(itemKey);

    // If the value returned is null, the item was not found in the cache
    // So retrieve the item from the data source and add it to the cache
    if (itemValue == null)
    {
        itemValue = await GetItemFromDataSourceAsync(itemKey);
        await cache.StringSetAsync(itemKey, itemValue);
    }

    // Return the item
    return itemValue;
}

The StringGet and StringSet methods are not restricted to retrieving or storing string values. They can take any item that is serialized as an array of bytes. If you need to save a .NET object, you can serialize it as a byte stream and use the StringSet method to write it to the cache. Similarly, you can read an object from the cache by using the StringGet method and deserializing it as a .NET object.
The following code shows a set of extension methods for the IDatabase interface (the GetDatabase method of a Redis connection returns an IDatabase object), and some sample code that uses these methods to read and write a BlogPost object to the cache:

public static class RedisCacheExtensions
{
    public static async Task<T> GetAsync<T>(this IDatabase cache, string key)
    {
        return Deserialize<T>(await cache.StringGetAsync(key));
    }

    public static async Task<object> GetAsync(this IDatabase cache, string key)
    {
        return Deserialize<object>(await cache.StringGetAsync(key));
    }

    public static async Task SetAsync(this IDatabase cache, string key, object value)
    {
        await cache.StringSetAsync(key, Serialize(value));
    }

    static byte[] Serialize(object o)
    {
        byte[] objectDataAsStream = null;
        if (o != null)
        {
            BinaryFormatter binaryFormatter = new BinaryFormatter();
            using (MemoryStream memoryStream = new MemoryStream())
            {
                binaryFormatter.Serialize(memoryStream, o);
                objectDataAsStream = memoryStream.ToArray();
            }
        }
        return objectDataAsStream;
    }

    static T Deserialize<T>(byte[] stream)
    {
        T result = default(T);
        if (stream != null)
        {
            BinaryFormatter binaryFormatter = new BinaryFormatter();
            using (MemoryStream memoryStream = new MemoryStream(stream))
            {
                result = (T)binaryFormatter.Deserialize(memoryStream);
            }
        }
        return result;
    }
}

The following code illustrates a method named RetrieveBlogPost that uses these extension methods to read and write a serializable BlogPost object to the cache following the cache-aside pattern:

// The BlogPost type
[Serializable]
public class BlogPost
{
    private HashSet<string> tags;

    public BlogPost(int id, string title, int score, IEnumerable<string> tags)
    {
        this.Id = id;
        this.Title = title;
        this.Score = score;
        this.tags = new HashSet<string>(tags);
    }

    public int Id { get; set; }
    public string Title { get; set; }
    public int Score { get; set; }
    public ICollection<string> Tags => this.tags;
}
...
private async Task<BlogPost> RetrieveBlogPost(string blogPostKey)
{
    BlogPost blogPost = await cache.GetAsync<BlogPost>(blogPostKey);
    if (blogPost == null)
    {
        blogPost = await GetBlogPostFromDataSourceAsync(blogPostKey);
        await cache.SetAsync(blogPostKey, blogPost);
    }
    return blogPost;
}

Redis supports command pipelining if a client application sends multiple asynchronous requests. Redis can multiplex the requests using the same connection rather than receiving and responding to commands in a strict sequence. This approach helps to reduce latency by making more efficient use of the network. The following code snippet shows an example that retrieves the details of two customers concurrently. The code submits two requests and then performs some other processing (not shown) before waiting to receive the results. The Wait method of the cache object is similar to the .NET Framework Task.Wait method:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
var task1 = cache.StringGetAsync("customer:1");
var task2 = cache.StringGetAsync("customer:2");
...
var customer1 = cache.Wait(task1);
var customer2 = cache.Wait(task2);

For additional information on writing client applications that can use Azure Redis Cache, see the Azure Cache for Redis documentation. More information is also available at StackExchange.Redis. The page Pipelines and multiplexers on the same website provides more information about asynchronous operations and pipelining with Redis and the StackExchange library.
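If you prefer not to block on the Wait method, the same pair of pipelined reads can be awaited instead. The following is a minimal sketch, assuming the connection and cache objects shown above and an enclosing async method, that uses Task.WhenAll to wait for both operations to complete:

var task1 = cache.StringGetAsync("customer:1");
var task2 = cache.StringGetAsync("customer:2");
...
// Await both pending operations without blocking the calling thread
await Task.WhenAll(task1, task2);
RedisValue customer1 = task1.Result; // Both tasks have completed, so reading Result does not block
RedisValue customer2 = task2.Result;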
Using Redis caching The simplest use of Redis for caching is key-value pairs where the value is an uninterpreted string of arbitrary length that can contain any binary data. (It is essentially an array of bytes that can be treated as a string.) This scenario was illustrated in the section Implement Redis cache client applications earlier in this article. Note that keys also contain uninterpreted data, so you can use any binary information as the key. The longer the key is, however, the more space it will take to store, and the longer it will take to perform lookup operations. For usability and ease of maintenance, design your keyspace carefully and use meaningful (but not verbose) keys. For example, use structured keys such as "customer:100" to represent the key for the customer with ID 100 rather than simply "100". This scheme enables you to easily distinguish between values that store different data types. For example, you could also use the key "orders:100" to represent the key for the order with ID 100. Apart from one-dimensional binary strings, a value in a Redis key-value pair can also hold more structured information, including lists, sets (sorted and unsorted), and hashes. Redis provides a comprehensive command set that can manipulate these types, and many of these commands are available to .NET Framework applications through a client library such as StackExchange. The page An introduction to Redis data types and abstractions on the Redis website provides a more detailed overview of these types and the commands that you can use to manipulate them. This section summarizes some common use cases for these data types and commands.

Perform atomic and batch operations Redis supports a series of atomic get-and-set operations on string values. These operations remove the possible race hazards that might occur when using separate GET and SET commands. The operations that are available include:

INCR, INCRBY, DECR, and DECRBY, which perform atomic increment and decrement operations on integer numeric data values. The StackExchange library provides overloaded versions of the IDatabase.StringIncrementAsync and IDatabase.StringDecrementAsync methods to perform these operations and return the resulting value that is stored in the cache. The following code snippet illustrates how to use these methods:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
await cache.StringSetAsync("data:counter", 99);
...
long oldValue = await cache.StringIncrementAsync("data:counter");
// Increment by 1 (the default)
// oldValue should be 100

long newValue = await cache.StringDecrementAsync("data:counter", 50);
// Decrement by 50
// newValue should be 50

GETSET, which retrieves the value that's associated with a key and changes it to a new value. The StackExchange library makes this operation available through the IDatabase.StringGetSetAsync method. The code snippet below shows an example of this method. This code returns the current value that's associated with the key "data:counter" from the previous example. Then it resets the value for this key back to zero, all as part of the same operation:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
string oldValue = await cache.StringGetSetAsync("data:counter", 0);

MGET and MSET, which can return or change a set of string values as a single operation.
The IDatabase.StringGetAsync and IDatabase.StringSetAsync methods are overloaded to support this functionality, as shown in the following example:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
// Create a list of key-value pairs
var keysAndValues = new List<KeyValuePair<RedisKey, RedisValue>>()
{
    new KeyValuePair<RedisKey, RedisValue>("data:key1", "value1"),
    new KeyValuePair<RedisKey, RedisValue>("data:key99", "value2"),
    new KeyValuePair<RedisKey, RedisValue>("data:key322", "value3")
};

// Store the list of key-value pairs in the cache
cache.StringSet(keysAndValues.ToArray());
...
// Find all values that match a list of keys
RedisKey[] keys = { "data:key1", "data:key99", "data:key322" };
// values should contain { "value1", "value2", "value3" }
RedisValue[] values = cache.StringGet(keys);

You can also combine multiple operations into a single Redis transaction as described in the Redis transactions and batches section earlier in this article. The StackExchange library provides support for transactions through the ITransaction interface. You create an ITransaction object by using the IDatabase.CreateTransaction method. You invoke commands to the transaction by using the methods provided by the ITransaction object. The ITransaction interface provides access to a set of methods that's similar to those accessed by the IDatabase interface, except that all the methods are asynchronous. This means that they are only performed when the ITransaction.Execute method is invoked. The value that's returned by the ITransaction.Execute method indicates whether the transaction was created successfully (true) or if it failed (false). The following code snippet shows an example that increments and decrements two counters as part of the same transaction:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
ITransaction transaction = cache.CreateTransaction();
var tx1 = transaction.StringIncrementAsync("data:counter1");
var tx2 = transaction.StringDecrementAsync("data:counter2");
bool result = transaction.Execute();
Console.WriteLine("Transaction {0}", result ? "succeeded" : "failed");
Console.WriteLine("Result of increment: {0}", tx1.Result);
Console.WriteLine("Result of decrement: {0}", tx2.Result);

Remember that Redis transactions are unlike transactions in relational databases. The Execute method simply queues all the commands that comprise the transaction to be run, and if any of them is malformed then the transaction is stopped. If all the commands have been queued successfully, each command runs asynchronously. If any command fails, the others still continue processing. If you need to verify that a command has completed successfully, you must fetch the results of the command by using the Result property of the corresponding task, as shown in the example above. Reading the Result property will block the calling thread until the task has completed. For more information, see Transactions in Redis.

When performing batch operations, you can use the IBatch interface of the StackExchange library. This interface provides access to a set of methods similar to those accessed by the IDatabase interface, except that all the methods are asynchronous. You create an IBatch object by using the IDatabase.CreateBatch method, and then run the batch by using the IBatch.Execute method, as shown in the following example.
This code simply sets a string value, increments and decrements the same counters used in the previous example, and displays the results:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
IBatch batch = cache.CreateBatch();
batch.StringSetAsync("data:key1", 11);
var t1 = batch.StringIncrementAsync("data:counter1");
var t2 = batch.StringDecrementAsync("data:counter2");
batch.Execute();
Console.WriteLine("{0}", t1.Result);
Console.WriteLine("{0}", t2.Result);

It is important to understand that unlike a transaction, if a command in a batch fails because it is malformed, the other commands might still run. The IBatch.Execute method does not return any indication of success or failure.

Perform fire and forget cache operations Redis supports fire and forget operations by using command flags. In this situation, the client simply initiates an operation but has no interest in the result and does not wait for the command to be completed. The example below shows how to perform the INCR command as a fire and forget operation:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
await cache.StringSetAsync("data:key1", 99);
...
cache.StringIncrement("data:key1", flags: CommandFlags.FireAndForget);

Specify automatically expiring keys When you store an item in a Redis cache, you can specify a timeout after which the item will be automatically removed from the cache. You can also query how much more time a key has before it expires by using the TTL command. This command is available to StackExchange applications by using the IDatabase.KeyTimeToLive method. The following code snippet shows how to set an expiration time of 20 seconds on a key, and query the remaining lifetime of the key:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
// Add a key with an expiration time of 20 seconds
await cache.StringSetAsync("data:key1", 99, TimeSpan.FromSeconds(20));
...
// Query how much time a key has left to live
// If the key has already expired, the KeyTimeToLive function returns null
TimeSpan? expiry = cache.KeyTimeToLive("data:key1");

You can also set the expiration time to a specific date and time by using the EXPIREAT command, which is available in the StackExchange library as the KeyExpireAsync method:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
// Add a key with an expiration date of midnight on 1st January 2015
await cache.StringSetAsync("data:key1", 99);
await cache.KeyExpireAsync("data:key1", new DateTime(2015, 1, 1, 0, 0, 0, DateTimeKind.Utc));
...

TIP You can manually remove an item from the cache by using the DEL command, which is available through the StackExchange library as the IDatabase.KeyDeleteAsync method.

Use tags to cross-correlate cached items A Redis set is a collection of multiple items that share a single key. You can create a set by using the SADD command. You can retrieve the items in a set by using the SMEMBERS command. The StackExchange library implements the SADD command with the IDatabase.SetAddAsync method, and the SMEMBERS command with the IDatabase.SetMembersAsync method. You can also combine existing sets to create new sets by using the SDIFF (set difference), SINTER (set intersection), and SUNION (set union) commands. The StackExchange library unifies these operations in the IDatabase.SetCombineAsync method.
The first parameter to this method specifies the set operation to perform. The following code snippets show how sets can be useful for quickly storing and retrieving collections of related items. This code uses the BlogPost type that was described in the section Implement Redis Cache Client Applications earlier in this article. A BlogPost object contains four fields — an ID, a title, a ranking score, and a collection of tags. The first code snippet below shows the sample data that's used for populating a C# list of BlogPost objects: List<string[]> tags = new List<string[]> { new[] { "iot","csharp" }, new[] { "iot","azure","csharp" }, new[] { "csharp","git","big data" }, new[] { "iot","git","database" }, new[] { "database","git" }, new[] { "csharp","database" }, new[] { "iot" }, new[] { "iot","database","git" }, new[] { "azure","database","big data","git","csharp" }, new[] { "azure" } }; List<BlogPost> posts = new List<BlogPost>(); int blogKey = 1; int numberOfPosts = 20; Random random = new Random(); for (int i = 0; i < numberOfPosts; i++) { blogKey++; posts.Add(new BlogPost( blogKey, // Blog post ID string.Format(CultureInfo.InvariantCulture, "Blog Post #{0}", blogKey), // Blog post title random.Next(100, 10000), // Ranking score tags[i % tags.Count])); // Tags--assigned from a collection // in the tags list } You can store the tags for each BlogPost object as a set in a Redis cache and associate each set with the ID of the BlogPost . This enables an application to quickly find all the tags that belong to a specific blog post. To enable searching in the opposite direction and find all blog posts that share a specific tag, you can create another set that holds the blog posts referencing the tag ID in the key: ConnectionMultiplexer redisHostConnection = ...; IDatabase cache = redisHostConnection.GetDatabase(); ... // Tags are easily represented as Redis Sets foreach (BlogPost post in posts) { string redisKey = string.Format(CultureInfo.InvariantCulture, "blog:posts:{0}:tags", post.Id); // Add tags to the blog post in Redis await cache.SetAddAsync( redisKey, post.Tags.Select(s => (RedisValue)s).ToArray()); // Now do the inverse so we can figure out which blog posts have a given tag foreach (var tag in post.Tags) { await cache.SetAddAsync(string.Format(CultureInfo.InvariantCulture, "tag:{0}:blog:posts", tag), post.Id); } } These structures enable you to perform many common queries very efficiently. For example, you can find and display all of the tags for blog post 1 like this: // Show the tags for blog post #1 foreach (var value in await cache.SetMembersAsync("blog:posts:1:tags")) { Console.WriteLine(value); } You can find all tags that are common to blog post 1 and blog post 2 by performing a set intersection operation, as follows: // Show the tags in common for blog posts #1 and #2 foreach (var value in await cache.SetCombineAsync(SetOperation.Intersect, new RedisKey[] { "blog:posts:1:tags", "blog:posts:2:tags" })) { Console.WriteLine(value); } And you can find all blog posts that contain a specific tag: // Show the ids of the blog posts that have the tag "iot". foreach (var value in await cache.SetMembersAsync("tag:iot:blog:posts")) { Console.WriteLine(value); } Find recently accessed items A common task required of many applications is to find the most recently accessed items. For example, a blogging site might want to display information about the most recently read blog posts. You can implement this functionality by using a Redis list. 
A Redis list contains multiple items that share the same key. The list acts as a double-ended queue. You can push items to either end of the list by using the LPUSH (left push) and RPUSH (right push) commands. You can retrieve items from either end of the list by using the LPOP and RPOP commands. You can also return a range of elements by using the LRANGE command. The code snippets below show how you can perform these operations by using the StackExchange library. This code uses the BlogPost type from the previous examples. As a blog post is read by a user, the IDatabase.ListLeftPushAsync method pushes the title of the blog post onto a list that's associated with the key "blog:recent_posts" in the Redis cache.

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
string redisKey = "blog:recent_posts";
BlogPost blogPost = ...; // Reference to the blog post that has just been read

// Push the blog post onto the list
await cache.ListLeftPushAsync(redisKey, blogPost.Title);

As more blog posts are read, their titles are pushed onto the same list. The list is ordered by the sequence in which the titles have been added. The most recently read blog posts are towards the left end of the list. (If the same blog post is read more than once, it will have multiple entries in the list.) You can display the titles of the most recently read posts by using the IDatabase.ListRange method. This method takes the key that contains the list, a starting point, and an ending point. The following code retrieves the titles of the 10 blog posts (items from 0 to 9) at the left-most end of the list:

// Show latest ten posts
foreach (string postTitle in await cache.ListRangeAsync(redisKey, 0, 9))
{
    Console.WriteLine(postTitle);
}

Note that the ListRangeAsync method does not remove items from the list. To do this, you can use the IDatabase.ListLeftPopAsync and IDatabase.ListRightPopAsync methods. To prevent the list from growing indefinitely, you can periodically cull items by trimming the list. The code snippet below shows you how to remove all but the five left-most items from the list:

await cache.ListTrimAsync(redisKey, 0, 4);

Implement a leader board By default, the items in a set are not held in any specific order. You can create an ordered set by using the ZADD command (the IDatabase.SortedSetAdd method in the StackExchange library). The items are ordered by using a numeric value called a score, which is provided as a parameter to the command. The following code snippet adds the title of a blog post to an ordered list. In this example, each blog post also has a score field that contains the ranking of the blog post.

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
string redisKey = "blog:post_rankings";
BlogPost blogPost = ...; // Reference to a blog post that has just been rated
await cache.SortedSetAddAsync(redisKey, blogPost.Title, blogPost.Score);

You can retrieve the blog post titles and scores in ascending score order by using the IDatabase.SortedSetRangeByRankWithScores method:

foreach (var post in await cache.SortedSetRangeByRankWithScoresAsync(redisKey))
{
    Console.WriteLine(post);
}

NOTE The StackExchange library also provides the IDatabase.SortedSetRangeByRankAsync method, which returns the data in score order, but it does not return the scores.
Implement a leader board

By default, the items in a set are not held in any specific order. You can create an ordered set by using the ZADD command (the IDatabase.SortedSetAdd method in the StackExchange library). The items are ordered by using a numeric value called a score, which is provided as a parameter to the command. The following code snippet adds the title of a blog post to an ordered list. In this example, each blog post also has a score field that contains the ranking of the blog post.

```csharp
ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
string redisKey = "blog:post_rankings";
BlogPost blogPost = ...; // Reference to a blog post that has just been rated
await cache.SortedSetAddAsync(redisKey, blogPost.Title, blogPost.Score);
```

You can retrieve the blog post titles and scores in ascending score order by using the IDatabase.SortedSetRangeByRankWithScores method:

```csharp
foreach (var post in await cache.SortedSetRangeByRankWithScoresAsync(redisKey))
{
    Console.WriteLine(post);
}
```

NOTE
The StackExchange library also provides the IDatabase.SortedSetRangeByRankAsync method, which returns the data in score order, but it does not return the scores.

You can also retrieve items in descending order of scores, and limit the number of items that are returned by providing additional parameters to the IDatabase.SortedSetRangeByRankWithScoresAsync method. The next example displays the titles and scores of the top 10 ranked blog posts:

```csharp
foreach (var post in await cache.SortedSetRangeByRankWithScoresAsync(
    redisKey, 0, 9, Order.Descending))
{
    Console.WriteLine(post);
}
```

The next example uses the IDatabase.SortedSetRangeByScoreWithScoresAsync method, which you can use to limit the items that are returned to those that fall within a given score range:

```csharp
// Blog posts with scores between 5000 and 100000
foreach (var post in await cache.SortedSetRangeByScoreWithScoresAsync(
    redisKey, 5000, 100000))
{
    Console.WriteLine(post);
}
```

Message by using channels

Apart from acting as a data cache, a Redis server provides messaging through a high-performance publisher/subscriber mechanism. Client applications can subscribe to a channel, and other applications or services can publish messages to the channel. Subscribing applications will then receive these messages and can process them.

Redis provides the SUBSCRIBE command for client applications to use to subscribe to channels. This command expects the name of one or more channels on which the application will accept messages. The StackExchange library includes the ISubscriber interface, which enables a .NET Framework application to subscribe and publish to channels.

You create an ISubscriber object by using the GetSubscriber method of the connection to the Redis server. Then you listen for messages on a channel by using the SubscribeAsync method of this object. The following code example shows how to subscribe to a channel named "messages:blogPosts":

```csharp
ConnectionMultiplexer redisHostConnection = ...;
ISubscriber subscriber = redisHostConnection.GetSubscriber();
...
await subscriber.SubscribeAsync("messages:blogPosts",
    (channel, message) => Console.WriteLine("Title is: {0}", message));
```

The first parameter to the SubscribeAsync method is the name of the channel. This name follows the same conventions that are used by keys in the cache. The name can contain any binary data, although it is advisable to use relatively short, meaningful strings to help ensure good performance and maintainability. Note also that the namespace used by channels is separate from that used by keys. This means you can have channels and keys that have the same name, although this may make your application code more difficult to maintain.

The second parameter is an Action delegate. This delegate runs asynchronously whenever a new message appears on the channel. This example simply displays the message on the console (the message will contain the title of a blog post).

To publish to a channel, an application can use the Redis PUBLISH command. The StackExchange library provides the ISubscriber.PublishAsync method to perform this operation. The next code snippet shows how to publish a message to the "messages:blogPosts" channel:

```csharp
ConnectionMultiplexer redisHostConnection = ...;
ISubscriber subscriber = redisHostConnection.GetSubscriber();
...
BlogPost blogPost = ...;
await subscriber.PublishAsync("messages:blogPosts", blogPost.Title);
```

There are several points you should understand about the publish/subscribe mechanism:

Multiple subscribers can subscribe to the same channel, and they will all receive the messages that are published to that channel.
Subscribers only receive messages that have been published after they have subscribed. Channels are not buffered; once a message is published, the Redis infrastructure pushes the message to each subscriber and then removes it.

By default, messages are received by subscribers in the order in which they are sent. In a highly active system with a large number of messages and many subscribers and publishers, guaranteed sequential delivery of messages can slow performance of the system. If each message is independent and the order is unimportant, you can enable concurrent processing by the Redis system, which can help to improve responsiveness. You can achieve this in a StackExchange client by setting the PreserveAsyncOrder property of the connection used by the subscriber to false:

```csharp
ConnectionMultiplexer redisHostConnection = ...;
redisHostConnection.PreserveAsyncOrder = false;
ISubscriber subscriber = redisHostConnection.GetSubscriber();
```

Serialization considerations

When you choose a serialization format, consider tradeoffs between performance, interoperability, versioning, compatibility with existing systems, data compression, and memory overhead. When you are evaluating performance, remember that benchmarks are highly dependent on context. They may not reflect your actual workload, and may not consider newer libraries or versions. There is no single "fastest" serializer for all scenarios. Some options to consider include:

Protocol Buffers (also called protobuf) is a serialization format developed by Google for serializing structured data efficiently. It uses strongly typed definition files to define message structures. These definition files are then compiled to language-specific code for serializing and deserializing messages. Protobuf can be used over existing RPC mechanisms, or it can generate an RPC service.

Apache Thrift uses a similar approach, with strongly typed definition files and a compilation step to generate the serialization code and RPC services.

Apache Avro provides similar functionality to Protocol Buffers and Thrift, but there is no compilation step. Instead, serialized data always includes a schema that describes the structure.

JSON is an open standard that uses human-readable text fields. It has broad cross-platform support. JSON does not use message schemas. Being a text-based format, it is not very efficient over the wire. In some cases, however, you may be returning cached items directly to a client via HTTP, in which case storing JSON could save the cost of deserializing from another format and then serializing to JSON.

BSON is a binary serialization format that uses a structure similar to JSON. BSON was designed to be lightweight, easy to scan, and fast to serialize and deserialize, relative to JSON. Payloads are comparable in size to JSON. Depending on the data, a BSON payload may be smaller or larger than a JSON payload. BSON has some additional data types that are not available in JSON, notably BinData (for byte arrays) and Date.

MessagePack is a binary serialization format that is designed to be compact for transmission over the wire. There are no message schemas or message type checking.

Bond is a cross-platform framework for working with schematized data. It supports cross-language serialization and deserialization. Notable differences from other systems listed here are support for inheritance, type aliases, and generics.

gRPC is an open source RPC system developed by Google.
By default, it uses Protocol Buffers as its definition language and underlying message interchange format. Related patterns and guidance The following patterns might also be relevant to your scenario when you implement caching in your applications: Cache-aside pattern: This pattern describes how to load data on demand into a cache from a data store. This pattern also helps to maintain consistency between data that's held in the cache and the data in the original data store. The Sharding pattern provides information about implementing horizontal partitioning to help improve scalability when storing and accessing large volumes of data. More information Azure Redis Cache documentation Azure Redis Cache FAQ Task-based Asynchronous pattern Redis documentation StackExchange.Redis Data partitioning guide Content delivery networks (CDNs) 3/13/2019 • 8 minutes to read • Edit Online A content delivery network (CDN ) is a distributed network of servers that can efficiently deliver web content to users. CDNs store cached content on edge servers that are close to end-users, to minimize latency. CDNs are typically used to deliver static content such as images, style sheets, documents, client-side scripts, and HTML pages. The major advantages of using a CDN are lower latency and faster delivery of content to users, regardless of their geographical location in relation to the datacenter where the application is hosted. CDNs can also help to reduce load on a web application, because the application does not have to service requests for the content that is hosted in the CDN. In Azure, the Azure Content Delivery Network is a global CDN solution for delivering high-bandwidth content that is hosted in Azure or any other location. Using Azure CDN, you can cache publicly available objects loaded from Azure blob storage, a web application, virtual machine, any publicly accessible web server. This topic describes some general best practices and considerations when using a CDN. For more information, see Azure CDN. How and why a CDN is used Typical uses for a CDN include: Delivering static resources for client applications, often from a website. These resources can be images, style sheets, documents, files, client-side scripts, HTML pages, HTML fragments, or any other content that the server does not need to modify for each request. The application can create items at runtime and make them available to the CDN (for example, by creating a list of current news headlines), but it does not do so for each request. Delivering public static and shared content to devices such as mobile phones and tablet computers. The application itself is a web service that offers an API to clients running on the various devices. The CDN can also deliver static datasets (via the web service) for the clients to use, perhaps to generate the client UI. For example, the CDN could be used to distribute JSON or XML documents. Serving entire websites that consist of only public static content to clients, without requiring any dedicated compute resources. Streaming video files to the client on demand. Video benefits from the low latency and reliable connectivity available from the globally located datacenters that offer CDN connections. Microsoft Azure Media Services (AMS ) integrates with Azure CDN to deliver content directly to the CDN for further distribution. For more information, see Streaming endpoints overview. Generally improving the experience for users, especially those located far from the datacenter hosting the application. 
These users might otherwise suffer higher latency. A large proportion of the total size of the content in a web application is often static, and using the CDN can help to maintain performance and overall user experience while eliminating the requirement to deploy the application to multiple datacenters. For a list of Azure CDN node locations, see Azure CDN POP Locations.

Supporting IoT (Internet of Things) solutions. The huge numbers of devices and appliances involved in an IoT solution could easily overwhelm an application if it had to distribute firmware updates directly to each device.

Coping with peaks and surges in demand without requiring the application to scale, avoiding the consequent increased running costs. For example, when an update to an operating system is released for a hardware device such as a specific model of router, or for a consumer device such as a smart TV, there will be a huge peak in demand as it is downloaded by millions of users and devices over a short period.

Challenges

There are several challenges to take into account when planning to use a CDN.

Deployment. Decide the origin from which the CDN fetches the content, and whether you need to deploy the content in more than one storage system. Take into account the process for deploying static content and resources. For example, you may need to implement a separate step to load content into Azure blob storage.

Versioning and cache-control. Consider how you will update static content and deploy new versions. Understand how the CDN performs caching and time-to-live (TTL). For Azure CDN, see How caching works.

Testing. It can be difficult to perform local testing of your CDN settings when developing and testing an application locally or in a staging environment.

Search engine optimization (SEO). Content such as images and documents are served from a different domain when you use the CDN. This can have an effect on SEO for this content.

Content security. Not all CDNs offer access control for content. Some CDN services, including Azure CDN, support token-based authentication to protect CDN content. For more information, see Securing Azure Content Delivery Network assets with token authentication.

Client security. Clients might connect from an environment that does not allow access to resources on the CDN. This could be a security-constrained environment that limits access to only a set of known sources, or one that prevents loading of resources from anything other than the page origin. A fallback implementation is required to handle these cases.

Resilience. The CDN is a potential single point of failure for an application.

Scenarios where a CDN may be less useful include:

If the content has a low hit rate, it might be accessed only a few times while it is valid (determined by its time-to-live setting).

If the data is private, such as for large enterprises or supply chain ecosystems.

General guidelines and good practices

Using a CDN is a good way to minimize the load on your application, and maximize availability and performance. Consider adopting this strategy for all of the appropriate content and resources your application uses. Consider the points in the following sections when designing your strategy to use a CDN.

Deployment

Static content may need to be provisioned and deployed independently from the application if you do not include it in the application deployment package or process.
Consider how this will affect the versioning approach you use to manage both the application components and the static resource content.

Consider using bundling and minification techniques to reduce load times for clients. Bundling combines multiple files into a single file. Minification removes unnecessary characters from scripts and CSS files without altering functionality.

If you need to deploy the content to an additional location, this will be an extra step in the deployment process. If the application updates the content for the CDN, perhaps at regular intervals or in response to an event, it must store the updated content in any additional locations as well as the endpoint for the CDN.

Consider how you will handle local development and testing when some static content is expected to be served from a CDN. For example, you could pre-deploy the content to the CDN as part of your build script. Alternatively, use compile directives or flags to control how the application loads the resources. For example, in debug mode, the application could load static resources from a local folder. In release mode, the application would use the CDN.

Consider the options for file compression, such as gzip (GNU zip). Compression may be performed on the origin server by the web application that hosts the content, or directly on the edge servers by the CDN. For more information, see Improve performance by compressing files in Azure CDN.

Routing and versioning

You may need to use different CDN instances at various times. For example, when you deploy a new version of the application you may want to use a new CDN and retain the old CDN (holding content in an older format) for previous versions. If you use Azure blob storage as the content origin, you can create a separate storage account or a separate container and point the CDN endpoint to it.

Do not use the query string to denote different versions of the application in links to resources on the CDN because, when retrieving content from Azure blob storage, the query string is part of the resource name (the blob name). This approach can also affect how the client caches resources.

Deploying new versions of static content when you update an application can be a challenge if the previous resources are cached on the CDN. For more information, see the section on cache control, below.

Consider restricting the CDN content access by country. Azure CDN allows you to filter requests based on the country of origin and restrict the content delivered. For more information, see Restrict access to your content by country.

Cache control

Consider how to manage caching within the system. For example, in Azure CDN, you can set global caching rules, and then set custom caching for particular origin endpoints. You can also control how caching is performed in a CDN by sending cache-directive headers at the origin. For more information, see How caching works.

To prevent objects from being available on the CDN, you can delete them from the origin, remove or delete the CDN endpoint, or in the case of blob storage, make the container or blob private. However, items are not removed from the CDN until their time-to-live expires. You can also manually purge a CDN endpoint.

Security

The CDN can deliver content over HTTPS (SSL), by using the certificate provided by the CDN, as well as over standard HTTP. To avoid browser warnings about mixed content, you might need to use HTTPS to request static content that is displayed in pages loaded through HTTPS.
If you deliver static assets such as font files by using the CDN, you might encounter same-origin policy issues if you use an XMLHttpRequest call to request these resources from a different domain. Many web browsers prevent cross-origin resource sharing (CORS ) unless the web server is configured to set the appropriate response headers. You can configure the CDN to support CORS by using one of the following methods: Configure the CDN to add CORS headers to the responses. For more information, see Using Azure CDN with CORS. If the origin is Azure blob storage, add CORS rules to the storage endpoint. For more information, see Cross-Origin Resource Sharing (CORS ) Support for the Azure Storage Services. Configure the application to set the CORS headers. For example, see Enabling Cross-Origin Requests (CORS ) in the ASP.NET Core documentation. CDN fallback Consider how your application will cope with a failure or temporary unavailability of the CDN. Client applications may be able to use copies of the resources that were cached locally (on the client) during previous requests, or you can include code that detects failure and instead requests resources from the origin (the application folder or Azure blob container that holds the resources) if the CDN is unavailable. Horizontal, vertical, and functional data partitioning 3/13/2019 • 17 minutes to read • Edit Online In many large-scale solutions, data is divided into partitions that can be managed and accessed separately. Partitioning can improve scalability, reduce contention, and optimize performance. It can also provide a mechanism for dividing data by usage pattern. For example, you can archive older data in cheaper data storage. However, the partitioning strategy must be chosen carefully to maximize the benefits while minimizing adverse effects. NOTE In this article, the term partitioning means the process of physically dividing data into separate data stores. It is not the same as SQL Server table partitioning. Why partition data? Improve scalability. When you scale up a single database system, it will eventually reach a physical hardware limit. If you divide data across multiple partitions, each hosted on a separate server, you can scale out the system almost indefinitely. Improve performance. Data access operations on each partition take place over a smaller volume of data. Correctly done, partitioning can make your system more efficient. Operations that affect more than one partition can run in parallel. Improve security. In some cases, you can separate sensitive and non-sensitive data into different partitions and apply different security controls to the sensitive data. Provide operational flexibility. Partitioning offers many opportunities for fine tuning operations, maximizing administrative efficiency, and minimizing cost. For example, you can define different strategies for management, monitoring, backup and restore, and other administrative tasks based on the importance of the data in each partition. Match the data store to the pattern of use. Partitioning allows each partition to be deployed on a different type of data store, based on cost and the built-in features that data store offers. For example, large binary data can be stored in blob storage, while more structured data can be held in a document database. See Choose the right data store. Improve availability. Separating data across multiple servers avoids a single point of failure. If one instance fails, only the data in that partition is unavailable. 
Operations on other partitions can continue. For managed PaaS data stores, this consideration is less relevant, because these services are designed with built-in redundancy.

Designing partitions

There are three typical strategies for partitioning data:

Horizontal partitioning (often called sharding). In this strategy, each partition is a separate data store, but all partitions have the same schema. Each partition is known as a shard and holds a specific subset of the data, such as all the orders for a specific set of customers.

Vertical partitioning. In this strategy, each partition holds a subset of the fields for items in the data store. The fields are divided according to their pattern of use. For example, frequently accessed fields might be placed in one vertical partition and less frequently accessed fields in another.

Functional partitioning. In this strategy, data is aggregated according to how it is used by each bounded context in the system. For example, an e-commerce system might store invoice data in one partition and product inventory data in another.

These strategies can be combined, and we recommend that you consider them all when you design a partitioning scheme. For example, you might divide data into shards and then use vertical partitioning to further subdivide the data in each shard.

Horizontal partitioning (sharding)

Figure 1 shows horizontal partitioning or sharding. In this example, product inventory data is divided into shards based on the product key. Each shard holds the data for a contiguous range of shard keys (A-G and H-Z), organized alphabetically. Sharding spreads the load over more computers, which reduces contention and improves performance.

Figure 1. Horizontally partitioning (sharding) data based on a partition key.

The most important factor is the choice of a sharding key. It can be difficult to change the key after the system is in operation. The key must ensure that data is partitioned to spread the workload as evenly as possible across the shards.

The shards don't have to be the same size. It's more important to balance the number of requests. Some shards might be very large, but each item has a low number of access operations. Other shards might be smaller, but each item is accessed much more frequently. It's also important to ensure that a single shard does not exceed the scale limits (in terms of capacity and processing resources) of the data store.

Avoid creating "hot" partitions that can affect performance and availability. For example, using the first letter of a customer's name causes an unbalanced distribution, because some letters are more common than others. Instead, use a hash of a customer identifier to distribute data more evenly across partitions (a minimal hashing sketch follows at the end of this section).

Choose a sharding key that minimizes any future requirements to split large shards, coalesce small shards into larger partitions, or change the schema. These operations can be very time consuming, and might require taking one or more shards offline while they are performed.

If shards are replicated, it might be possible to keep some of the replicas online while others are split, merged, or reconfigured. However, the system might need to limit the operations that can be performed during the reconfiguration. For example, the data in the replicas might be marked as read-only to prevent data inconsistencies.

For more information about horizontal partitioning, see the Sharding pattern.
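To make the hashing guidance above concrete, the following sketch shows one way to derive a stable shard assignment from a customer identifier. It is an illustrative example only, not part of any Azure SDK: the ShardKeyHelper class, the GetShardIndex method, and the choice of SHA-256 with a fixed shard count are assumptions made for this sketch, and a production system would normally pair such a function with a shard map so that shards can be rebalanced later.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class ShardKeyHelper
{
    // Hypothetical helper: maps a customer ID to one of 'shardCount' shards.
    // A stable hash avoids the skew caused by partitioning on the first letter
    // of a name, because the hash output is evenly distributed.
    public static int GetShardIndex(string customerId, int shardCount)
    {
        if (string.IsNullOrEmpty(customerId)) throw new ArgumentException("customerId is required");
        if (shardCount <= 0) throw new ArgumentOutOfRangeException(nameof(shardCount));

        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(customerId));
            // Interpret the first four bytes of the hash as an unsigned integer.
            uint value = BitConverter.ToUInt32(hash, 0);
            return (int)(value % (uint)shardCount);
        }
    }
}

// Example usage: customer "C1029" always resolves to the same shard.
// int shard = ShardKeyHelper.GetShardIndex("C1029", shardCount: 8);
```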
Vertical partitioning The most common use for vertical partitioning is to reduce the I/O and performance costs associated with fetching items that are frequently accessed. Figure 2 shows an example of vertical partitioning. In this example, different properties of an item are stored in different partitions. One partition holds data that is accessed more frequently, including product name, description, and price. Another partition holds inventory data: the stock count and last-ordered date. Figure 2. Vertically partitioning data by its pattern of use. In this example, the application regularly queries the product name, description, and price when displaying the product details to customers. Stock count and last- ordered date are held in a separate partition because these two items are commonly used together. Other advantages of vertical partitioning: Relatively slow -moving data (product name, description, and price) can be separated from the more dynamic data (stock level and last ordered date). Slow moving data is a good candidate for an application to cache in memory. Sensitive data can be stored in a separate partition with additional security controls. Vertical partitioning can reduce the amount of concurrent access that's needed. Vertical partitioning operates at the entity level within a data store, partially normalizing an entity to break it down from a wide item to a set of narrow items. It is ideally suited for column-oriented data stores such as HBase and Cassandra. If the data in a collection of columns is unlikely to change, you can also consider using column stores in SQL Server. Functional partitioning When it's possible to identify a bounded context for each distinct business area in an application, functional partitioning is a way to improve isolation and data access performance. Another common use for functional partitioning is to separate read-write data from read-only data. Figure 3 shows an overview of functional partitioning where inventory data is separated from customer data. Figure 3. Functionally partitioning data by bounded context or subdomain. This partitioning strategy can help reduce data access contention across different parts of a system. Designing partitions for scalability It's vital to consider size and workload for each partition and balance them so that data is distributed to achieve maximum scalability. However, you must also partition the data so that it does not exceed the scaling limits of a single partition store. Follow these steps when designing partitions for scalability: 1. Analyze the application to understand the data access patterns, such as the size of the result set returned by each query, the frequency of access, the inherent latency, and the server-side compute processing requirements. In many cases, a few major entities will demand most of the processing resources. 2. Use this analysis to determine the current and future scalability targets, such as data size and workload. Then distribute the data across the partitions to meet the scalability target. For horizontal partitioning, choosing the right shard key is important to make sure distribution is even. For more information, see the [Sharding pattern]. 3. Make sure each partition has enough resources to handle the scalability requirements, in terms of data size and throughput. Depending on the data store, there might be a limit on the amount of storage space, processing power, or network bandwidth per partition. 
If the requirements are likely to exceed these limits, you may need to refine your partitioning strategy or split data out further, possibly combining two or more strategies. 4. Monitor the system to verify that data is distributed as expected and that the partitions can handle the load. Actual usage does not always match what an analysis predicts. If so, it might be possible to rebalance the partitions, or else redesign some parts of the system to gain the required balance. Some cloud environments allocate resources in terms of infrastructure boundaries. Ensure that the limits of your selected boundary provide enough room for any anticipated growth in the volume of data, in terms of data storage, processing power, and bandwidth. For example, if you use Azure table storage, there is a limit to the volume of requests that can be handled by a single partition in a particular period of time. (See Azure storage scalability and performance targets.) A busy shard might require more resources than a single partition can handle. If so, the shard might need to be repartitioned to spread the load. If the total size or throughput of these tables exceeds the capacity of a storage account, you might need to create additional storage accounts and spread the tables across these accounts. Designing partitions for query performance Query performance can often be boosted by using smaller data sets and by running parallel queries. Each partition should contain a small proportion of the entire data set. This reduction in volume can improve the performance of queries. However, partitioning is not an alternative for designing and configuring a database appropriately. For example, make sure that you have the necessary indexes in place. Follow these steps when designing partitions for query performance: 1. Examine the application requirements and performance: Use business requirements to determine the critical queries that must always perform quickly. Monitor the system to identify any queries that perform slowly. Find which queries are performed most frequently. Even if a single query has a minimal cost, the cumulative resource consumption could be significant. 2. Partition the data that is causing slow performance: Limit the size of each partition so that the query response time is within target. If you use horizontal partitioning, design the shard key so that the application can easily select the right partition. This prevents the query from having to scan through every partition. Consider the location of a partition. If possible, try to keep data in partitions that are geographically close to the applications and users that access it. 3. If an entity has throughput and query performance requirements, use functional partitioning based on that entity. If this still doesn't satisfy the requirements, apply horizontal partitioning as well. In most cases a single partitioning strategy will suffice, but in some cases it is more efficient to combine both strategies. 4. Consider running queries in parallel across partitions to improve performance. Designing partitions for availability Partitioning data can improve the availability of applications by ensuring that the entire dataset does not constitute a single point of failure and that individual subsets of the dataset can be managed independently. Consider the following factors that affect availability: How critical the data is to business operations. 
Identify which data is critical business information, such as transactions, and which data is less critical operational data, such as log files.

Consider storing critical data in highly available partitions with an appropriate backup plan.

Establish separate management and monitoring procedures for the different datasets.

Place data that has the same level of criticality in the same partition so that it can be backed up together at an appropriate frequency. For example, partitions that hold transaction data might need to be backed up more frequently than partitions that hold logging or trace information.

How individual partitions can be managed. Designing partitions to support independent management and maintenance provides several advantages. For example:

If a partition fails, it can be recovered independently without affecting applications that access data in other partitions.

Partitioning data by geographical area allows scheduled maintenance tasks to occur at off-peak hours for each location. Ensure that partitions are not so large that planned maintenance cannot be completed during this period.

Whether to replicate critical data across partitions. This strategy can improve availability and performance, but can also introduce consistency issues. It takes time to synchronize changes with every replica. During this period, different partitions will contain different data values.

Application design considerations

Partitioning adds complexity to the design and development of your system. Consider partitioning as a fundamental part of system design even if the system initially only contains a single partition. If you address partitioning as an afterthought, it will be more challenging because you already have a live system to maintain:

Data access logic will need to be modified.

Large quantities of existing data may need to be migrated, to distribute it across partitions.

Users expect to be able to continue using the system during the migration.

In some cases, partitioning is not considered important because the initial dataset is small and can be easily handled by a single server. This might be true for some workloads, but many commercial systems need to expand as the number of users increases. Moreover, it's not only large data stores that benefit from partitioning. For example, a small data store might be heavily accessed by hundreds of concurrent clients. Partitioning the data in this situation can help to reduce contention and improve throughput.

Consider the following points when you design a data partitioning scheme:

Minimize cross-partition data access operations. Where possible, keep data for the most common database operations together in each partition to minimize cross-partition data access operations. Querying across partitions can be more time-consuming than querying within a single partition, but optimizing partitions for one set of queries might adversely affect other sets of queries. If you must query across partitions, minimize query time by running parallel queries and aggregating the results within the application. (This approach might not be possible in some cases, such as when the result from one query is used in the next query.)

Consider replicating static reference data. If queries use relatively static reference data, such as postal code tables or product lists, consider replicating this data in all of the partitions to reduce separate lookup operations in different partitions.
This approach can also reduce the likelihood of the reference data becoming a "hot" dataset, with heavy traffic from across the entire system. However, there is an additional cost associated with synchronizing any changes to the reference data.

Minimize cross-partition joins. Where possible, minimize requirements for referential integrity across vertical and functional partitions. In these schemes, the application is responsible for maintaining referential integrity across partitions. Queries that join data across multiple partitions are inefficient because the application typically needs to perform consecutive queries based on a key and then a foreign key. Instead, consider replicating or denormalizing the relevant data. If cross-partition joins are necessary, run parallel queries over the partitions and join the data within the application.

Embrace eventual consistency. Evaluate whether strong consistency is actually a requirement. A common approach in distributed systems is to implement eventual consistency. The data in each partition is updated separately, and the application logic ensures that the updates are all completed successfully. It also handles the inconsistencies that can arise from querying data while an eventually consistent operation is running.

Consider how queries locate the correct partition. If a query must scan all partitions to locate the required data, there is a significant impact on performance, even when multiple parallel queries are running. With vertical and functional partitioning, queries can naturally specify the partition. Horizontal partitioning, on the other hand, can make locating an item difficult, because every shard has the same schema. A typical solution is to maintain a map that is used to look up the shard location for specific items. This map can be implemented in the sharding logic of the application, or maintained by the data store if it supports transparent sharding (see the sketch after this list for a minimal in-application map).

Consider periodically rebalancing shards. With horizontal partitioning, rebalancing shards can help distribute the data evenly by size and by workload to minimize hotspots, maximize query performance, and work around physical storage limitations. However, this is a complex task that often requires the use of a custom tool or process.

Replicate partitions. If you replicate each partition, it provides additional protection against failure. If a single replica fails, queries can be directed towards a working copy.

If you reach the physical limits of a partitioning strategy, you might need to extend the scalability to a different level. For example, if partitioning is at the database level, you might need to locate or replicate partitions in multiple databases. If partitioning is already at the database level, and physical limitations are an issue, it might mean that you need to locate or replicate partitions in multiple hosting accounts.

Avoid transactions that access data in multiple partitions. Some data stores implement transactional consistency and integrity for operations that modify data, but only when the data is located in a single partition. If you need transactional support across multiple partitions, you will probably need to implement this as part of your application logic, because most partitioning systems do not provide native support.
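The shard-lookup map mentioned in the points above can be as simple as an in-memory list that the application consults before opening a connection. The sketch below is illustrative only: the TenantShardMap class and the placeholder connection strings are assumptions for this example rather than part of any Azure library (it also reuses the hypothetical ShardKeyHelper from the earlier sharding sketch), and a real system would load the map from durable storage, or use the Elastic Database client library described later, so that the mapping can change without redeploying the application.

```csharp
using System.Collections.Generic;

// Hypothetical in-application shard map: tenant key -> connection string.
public class TenantShardMap
{
    private readonly IReadOnlyList<string> shardConnectionStrings;

    public TenantShardMap(IReadOnlyList<string> shardConnectionStrings)
    {
        this.shardConnectionStrings = shardConnectionStrings;
    }

    // Resolve the shard that holds a given tenant's data.
    public string GetConnectionString(string tenantId)
    {
        // Reuse the hashed-key approach from the earlier sharding sketch so that
        // tenants are spread evenly across the available shards.
        int shardIndex = ShardKeyHelper.GetShardIndex(tenantId, shardConnectionStrings.Count);
        return shardConnectionStrings[shardIndex];
    }
}

// Example usage (connection strings are placeholders):
// var map = new TenantShardMap(new[] { "Server=shard0;...", "Server=shard1;..." });
// string conn = map.GetConnectionString("tenant-42");
```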
All data stores require some operational management and monitoring activity. These tasks include loading data, backing up and restoring data, reorganizing data, and ensuring that the system is performing correctly and efficiently.

Consider the following factors that affect operational management:

How to implement appropriate management and operational tasks when the data is partitioned. These tasks might include backup and restore, archiving data, monitoring the system, and other administrative tasks. For example, maintaining logical consistency during backup and restore operations can be a challenge.

How to load the data into multiple partitions and add new data that's arriving from other sources. Some tools and utilities might not support sharded data operations such as loading data into the correct partition.

How to archive and delete the data on a regular basis. To prevent the excessive growth of partitions, you need to archive and delete data on a regular basis (perhaps monthly). It might be necessary to transform the data to match a different archive schema.

How to locate data integrity issues. Consider running a periodic process to locate any data integrity issues, such as data in one partition that references missing information in another. The process can either attempt to fix these issues automatically or simply generate a report for manual review.

Rebalancing partitions

As a system matures, you might have to adjust the partitioning scheme. For example, individual partitions might start to get a disproportionate volume of traffic and become hot, leading to excessive contention. Or you might have underestimated the volume of data in some partitions, causing some partitions to approach capacity limits.

Some data stores, such as Cosmos DB, can automatically rebalance partitions. In other cases, rebalancing is an administrative task that consists of two stages:

1. Determine a new partitioning strategy. Which partitions need to be split (or possibly combined)? What is the new partition key?
2. Migrate data from the old partitioning scheme to the new set of partitions.

Depending on the data store, you might be able to migrate data between partitions while they are in use. This is called online migration. If that's not possible, you might need to make partitions unavailable while the data is relocated (offline migration).

Offline migration

Offline migration is generally simpler, because it reduces the chances of contention occurring. Conceptually, offline migration works as follows:

1. Mark the partition offline.
2. Split-merge and move the data to the new partitions.
3. Verify the data.
4. Bring the new partitions online.
5. Remove the old partition.

Optionally, you can mark a partition as read-only in step 1, so that applications can still read the data while it is being moved.

Online migration

Online migration is more complex to perform but less disruptive. The process is similar to offline migration, except the original partition is not marked offline. Depending on the granularity of the migration process (for example, item by item versus shard by shard), the data access code in the client applications might have to handle reading and writing data that's held in two locations, the original partition and the new partition.

Related patterns

The following design patterns might be relevant to your scenario:

The sharding pattern describes some common strategies for sharding data.

The index table pattern shows how to create secondary indexes over data. An application can quickly retrieve data with this approach, by using queries that do not reference the primary key of a collection.

The materialized view pattern describes how to generate pre-populated views that summarize data to support fast query operations.
This approach can be useful in a partitioned data store if the partitions that contain the data being summarized are distributed across multiple sites. Next steps Learn about partitioning strategies for specific Azure services. See Data partitioning strategies Data partitioning strategies 3/13/2019 • 31 minutes to read • Edit Online This article describes some strategies for partitioning data in various Azure data stores. For general guidance about when to partition data and best practices, see Data partitioning Partitioning Azure SQL Database A single SQL database has a limit to the volume of data that it can contain. Throughput is constrained by architectural factors and the number of concurrent connections that it supports. Elastic pools support horizontal scaling for a SQL database. Using elastic pools, you can partition your data into shards that are spread across multiple SQL databases. You can also add or remove shards as the volume of data that you need to handle grows and shrinks. Elastic pools can also help reduce contention by distributing the load across databases. Each shard is implemented as a SQL database. A shard can hold more than one dataset (called a shardlet). Each database maintains metadata that describes the shardlets that it contains. A shardlet can be a single data item, or a group of items that share the same shardlet key. For example, in a multitenant application, the shardlet key can be the tenant ID, and all data for a tenant can be held in the same shardlet. Client applications are responsible for associating a dataset with a shardlet key. A separate SQL database acts as a global shard map manager. This database has a list of all the shards and shardlets in the system. The application connects to the shard map manager database to obtain a copy of the shard map. It caches the shard map locally, and uses the map to route data requests to the appropriate shard. This functionality is hidden behind a series of APIs that are contained in the Elastic Database client library, which is available for Java and .NET. For more information about elastic pools, see Scaling out with Azure SQL Database. To reduce latency and improve availability, you can replicate the global shard map manager database. With the Premium pricing tiers, you can configure active geo-replication to continuously copy data to databases in different regions. Alternatively, use Azure SQL Data Sync or Azure Data Factory to replicate the shard map manager database across regions. This form of replication runs periodically and is more suitable if the shard map changes infrequently, and does not require Premium tier. Elastic Database provides two schemes for mapping data to shardlets and storing them in shards: A list shard map associates a single key to a shardlet. For example, in a multitenant system, the data for each tenant can be associated with a unique key and stored in its own shardlet. To guarantee isolation, each shardlet can be held within its own shard. A range shard map associates a set of contiguous key values to a shardlet. For example, you can group the data for a set of tenants (each with their own key) within the same shardlet. This scheme is less expensive than the first, because tenants share data storage, but has less isolation. A single shard can contain the data for several shardlets. For example, you can use list shardlets to store data for different non-contiguous tenants in the same shard. 
You can also mix range shardlets and list shardlets in the same shard, although they will be addressed through different maps. The following diagram shows this approach:

Elastic pools make it possible to add and remove shards as the volume of data shrinks and grows. Client applications can create and delete shards dynamically, and transparently update the shard map manager. However, removing a shard is a destructive operation that also requires deleting all the data in that shard.

If an application needs to split a shard into two separate shards or combine shards, use the split-merge tool. This tool runs as an Azure web service, and migrates data safely between shards.

The partitioning scheme can significantly affect the performance of your system. It can also affect the rate at which shards have to be added or removed, or that data must be repartitioned across shards. Consider the following points:

Group data that is used together in the same shard, and avoid operations that access data from multiple shards. A shard is a SQL database in its own right, and cross-database joins must be performed on the client side. Although SQL Database does not support cross-database joins, you can use the Elastic Database tools to perform multi-shard queries. A multi-shard query sends individual queries to each database and merges the results.

Don't design a system that has dependencies between shards. Referential integrity constraints, triggers, and stored procedures in one database cannot reference objects in another.

If you have reference data that is frequently used by queries, consider replicating this data across shards. This approach can remove the need to join data across databases. Ideally, such data should be static or slow-moving, to minimize the replication effort and reduce the chances of it becoming stale.

Shardlets that belong to the same shard map should have the same schema. This rule is not enforced by SQL Database, but data management and querying becomes very complex if each shardlet has a different schema. Instead, create separate shard maps for each schema. Remember that data belonging to different shardlets can be stored in the same shard.

Transactional operations are only supported for data within a shard, and not across shards. Transactions can span shardlets as long as they are part of the same shard. Therefore, if your business logic needs to perform transactions, either store the data in the same shard or implement eventual consistency.

Place shards close to the users that access the data in those shards. This strategy helps reduce latency.

Avoid having a mixture of highly active and relatively inactive shards. Try to spread the load evenly across shards. This might require hashing the sharding keys. If you are geo-locating shards, make sure that the hashed keys map to shardlets held in shards stored close to the users that access that data.

Partitioning Azure table storage

Azure table storage is a key-value store that's designed around partitioning. All entities are stored in a partition, and partitions are managed internally by Azure table storage. Each entity stored in a table must provide a two-part key (an example entity follows these definitions) that includes:

The partition key. This is a string value that determines the partition where Azure table storage will place the entity. All entities with the same partition key are stored in the same partition.

The row key. This is a string value that identifies the entity within the partition. All entities within a partition are sorted lexically, in ascending order, by this key.
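To illustrate the two-part key, the following sketch defines an entity that sets both the partition key and the row key, mirroring the Customer Info example described below. It uses the TableEntity, CloudTable, and TableOperation types from the Microsoft.Azure.Cosmos.Table package (the legacy table SDK; Azure.Data.Tables is a newer alternative); the CustomerEntity class, the AddCustomerAsync helper, and the choice of city as the partition key are assumptions made for this illustration.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos.Table;

// Hypothetical entity: partitioned by city, identified within the partition by customer ID.
public class CustomerEntity : TableEntity
{
    public CustomerEntity() { }

    public CustomerEntity(string city, string customerId)
    {
        PartitionKey = city;   // Determines which partition holds the entity
        RowKey = customerId;   // Must be unique within the partition
    }

    public string Name { get; set; }
}

public static class CustomerStore
{
    public static async Task AddCustomerAsync(CloudTable table, CustomerEntity customer)
    {
        // Entities that share a PartitionKey land in the same partition,
        // so queries scoped to one city stay on a single partition server.
        TableOperation insert = TableOperation.Insert(customer);
        await table.ExecuteAsync(insert);
    }
}
```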
The partition key/row key combination must be unique for each entity and cannot exceed 1 KB in length.

If an entity is added to a table with a previously unused partition key, Azure table storage creates a new partition for this entity. Other entities with the same partition key will be stored in the same partition. This mechanism effectively implements an automatic scale-out strategy. Each partition is stored on a single server in an Azure datacenter, which helps ensure that queries that retrieve data from a single partition run quickly.

Microsoft has published scalability targets for Azure Storage. If your system is likely to exceed these limits, consider splitting entities into multiple tables. Use vertical partitioning to divide the fields into the groups that are most likely to be accessed together.

The following diagram shows the logical structure of an example storage account. The storage account contains three tables: Customer Info, Product Info, and Order Info. Each table has multiple partitions. In the Customer Info table, the data is partitioned according to the city where the customer is located. The row key contains the customer ID. In the Product Info table, products are partitioned by product category, and the row key contains the product number. In the Order Info table, the orders are partitioned by order date, and the row key specifies the time the order was received. Note that all data is ordered by the row key in each partition.

Consider the following points when you design your entities for Azure table storage:

Select a partition key and row key based on how the data is accessed. Choose a partition key/row key combination that supports the majority of your queries. The most efficient queries retrieve data by specifying the partition key and the row key. Queries that specify a partition key and a range of row keys can be completed by scanning a single partition. This is relatively fast because the data is held in row key order. If queries don't specify which partition to scan, every partition must be scanned.

If an entity has one natural key, then use it as the partition key and specify an empty string as the row key. If an entity has a composite key consisting of two properties, select the slowest changing property as the partition key and the other as the row key. If an entity has more than two key properties, use a concatenation of properties to provide the partition and row keys.

If you regularly perform queries that look up data by using fields other than the partition and row keys, consider implementing the Index Table pattern, or consider using a different data store that supports indexing, such as Cosmos DB.

If you generate partition keys by using a monotonic sequence (such as "0001", "0002", "0003") and each partition only contains a limited amount of data, Azure table storage can physically group these partitions together on the same server. Azure Storage assumes that the application is most likely to perform queries across a contiguous range of partitions (range queries) and is optimized for this case. However, this approach can lead to hotspots, because all insertions of new entities are likely to be concentrated at one end of the contiguous range. It can also reduce scalability. To spread the load more evenly, consider hashing the partition key.

Azure table storage supports transactional operations for entities that belong to the same partition.
An application can perform multiple insert, update, delete, replace, or merge operations as an atomic unit, as long as the transaction doesn't include more than 100 entities and the payload of the request doesn't exceed 4 MB. Operations that span multiple partitions are not transactional, and might require you to implement eventual consistency. For more information about table storage and transactions, see Performing entity group transactions. Consider the granularity of the partition key: Using the same partition key for every entity results in a single partition that's held on one server. This prevents the partition from scaling out and focuses the load on a single server. As a result, this approach is only suitable for storing a small number of entities. However, it does ensure that all entities can participate in entity group transactions. Using a unique partition key for every entity causes the table storage service to create a separate partition for each entity, possibly resulting in a large number of small partitions. This approach is more scalable than using a single partition key, but entity group transactions are not possible. Also, queries that fetch more than one entity might involve reading from more than one server. However, if the application performs range queries, then using a monotonic sequence for the partition keys might help to optimize these queries. Sharing the partition key across a subset of entities makes it possible to group related entities in the same partition. Operations that involve related entities can be performed by using entity group transactions, and queries that fetch a set of related entities can be satisfied by accessing a single server. For more information, see Azure storage table design guide. Partitioning Azure blob storage Azure blob storage makes it possible to hold large binary objects. Use block blobs in scenarios when you need to upload or download large volumes of data quickly. Use page blobs for applications that require random rather than serial access to parts of the data. Each blob (either block or page) is held in a container in an Azure storage account. You can use containers to group related blobs that have the same security requirements. This grouping is logical rather than physical. Inside a container, each blob has a unique name. The partition key for a blob is account name + container name + blob name. The partition key is used to partition data into ranges and these ranges are load-balanced across the system. Blobs can be distributed across many servers in order to scale out access to them, but a single blob can only be served by a single server. If your naming scheme uses timestamps or numerical identifiers, it can lead to excessive traffic going to one partition, limiting the system from effectively load balancing. For instance, if you have daily operations that use a blob object with a timestamp such as yyyy-mm -dd, all the traffic for that operation would go to a single partition server. Instead, consider prefixing the name with a 3-digit hash. For more information, see Partition Naming Convention The actions of writing a single block or page are atomic, but operations that span blocks, pages, or blobs are not. If you need to ensure consistency when performing write operations across blocks, pages, and blobs, take out a write lock by using a blob lease. Partitioning Azure storage queues Azure storage queues enable you to implement asynchronous messaging between processes. 
An Azure storage account can contain any number of queues, and each queue can contain any number of messages. The only limitation is the space that's available in the storage account. The maximum size of an individual message is 64 KB. If you require messages bigger than this, then consider using Azure Service Bus queues instead. Each storage queue has a unique name within the storage account that contains it. Azure partitions queues based on the name. All messages for the same queue are stored in the same partition, which is controlled by a single server. Different queues can be managed by different servers to help balance the load. The allocation of queues to servers is transparent to applications and users. In a large-scale application, don't use the same storage queue for all instances of the application because this approach might cause the server that's hosting the queue to become a hotspot. Instead, use different queues for different functional areas of the application. Azure storage queues do not support transactions, so directing messages to different queues should have little effect on messaging consistency. An Azure storage queue can handle up to 2,000 messages per second. If you need to process messages at a greater rate than this, consider creating multiple queues. For example, in a global application, create separate storage queues in separate storage accounts to handle application instances that are running in each region. Partitioning Azure Service Bus Azure Service Bus uses a message broker to handle messages that are sent to a Service Bus queue or topic. By default, all messages that are sent to a queue or topic are handled by the same message broker process. This architecture can place a limitation on the overall throughput of the message queue. However, you can also partition a queue or topic when it is created. You do this by setting the EnablePartitioning property of the queue or topic description to true. A partitioned queue or topic is divided into multiple fragments, each of which is backed by a separate message store and message broker. Service Bus takes responsibility for creating and managing these fragments. When an application posts a message to a partitioned queue or topic, Service Bus assigns the message to a fragment for that queue or topic. When an application receives a message from a queue or subscription, Service Bus checks each fragment for the next available message and then passes it to the application for processing. This structure helps distribute the load across message brokers and message stores, increasing scalability and improving availability. If the message broker or message store for one fragment is temporarily unavailable, Service Bus can retrieve messages from one of the remaining available fragments. Service Bus assigns a message to a fragment as follows: If the message belongs to a session, all messages with the same value for the SessionId property are sent to the same fragment. If the message does not belong to a session, but the sender has specified a value for the PartitionKey property, then all messages with the same PartitionKey value are sent to the same fragment. NOTE If the SessionId and PartitionKey properties are both specified, then they must be set to the same value or the message will be rejected. If the SessionId and PartitionKey properties for a message are not specified, but duplicate detection is enabled, the MessageId property will be used. All messages with the same MessageId will be directed to the same fragment. 
If messages do not include a SessionId, PartitionKey, or MessageId property, then Service Bus assigns messages to fragments sequentially. If a fragment is unavailable, Service Bus will move on to the next. This means that a temporary fault in the messaging infrastructure does not cause the message-send operation to fail. Consider the following points when deciding if or how to partition a Service Bus message queue or topic: Service Bus queues and topics are created within the scope of a Service Bus namespace. Service Bus currently allows up to 100 partitioned queues or topics per namespace. Each Service Bus namespace imposes quotas on the available resources, such as the number of subscriptions per topic, the number of concurrent send and receive requests per second, and the maximum number of concurrent connections that can be established. These quotas are documented in Service Bus quotas. If you expect to exceed these values, then create additional namespaces with their own queues and topics, and spread the work across these namespaces. For example, in a global application, create separate namespaces in each region and configure application instances to use the queues and topics in the nearest namespace. Messages that are sent as part of a transaction must specify a partition key. This can be a SessionId, PartitionKey, or MessageId property. All messages that are sent as part of the same transaction must specify the same partition key because they must be handled by the same message broker process. You cannot send messages to different queues or topics within the same transaction. Partitioned queues and topics can't be configured to be automatically deleted when they become idle. Partitioned queues and topics can't currently be used with the Advanced Message Queuing Protocol (AMQP) if you are building cross-platform or hybrid solutions. Partitioning Cosmos DB Azure Cosmos DB is a NoSQL database that can store JSON documents using the Azure Cosmos DB SQL API. A document in a Cosmos DB database is a JSON-serialized representation of an object or other piece of data. No fixed schemas are enforced except that every document must contain a unique ID. Documents are organized into collections. You can group related documents together in a collection. For example, in a system that maintains blog postings, you can store the contents of each blog post as a document in a collection. You can also create collections for each subject type. Alternatively, in a multitenant application, such as a system where different authors control and manage their own blog posts, you can partition blogs by author and create separate collections for each author. The storage space that's allocated to collections is elastic and can shrink or grow as needed. Cosmos DB supports automatic partitioning of data based on an application-defined partition key. A logical partition is a partition that stores all the data for a single partition key value. All documents that share the same value for the partition key are placed within the same logical partition. Cosmos DB distributes values according to the hash of the partition key. A logical partition has a maximum size of 10 GB. Therefore, the choice of the partition key is an important decision at design time. Choose a property with a wide range of values and even access patterns. For more information, see Partition and scale in Azure Cosmos DB. NOTE Each Cosmos DB database has a performance level that determines the amount of resources it gets. 
A performance level is associated with a request unit (RU) rate limit. The RU rate limit specifies the volume of resources that's reserved and available for exclusive use by that collection. The cost of a collection depends on the performance level that's selected for that collection. The higher the performance level (and RU rate limit) the higher the charge. You can adjust the performance level of a collection by using the Azure portal. For more information, see Request Units in Azure Cosmos DB. If the partitioning mechanism that Cosmos DB provides is not sufficient, you may need to shard the data at the application level. Document collections provide a natural mechanism for partitioning data within a single database. The simplest way to implement sharding is to create a collection for each shard. Containers are logical resources and can span one or more servers. Fixed-size containers have a maximum limit of 10 GB and 10,000 RU/s throughput. Unlimited containers do not have a maximum storage size, but must specify a partition key. With application sharding, the client application must direct requests to the appropriate shard, usually by implementing its own mapping mechanism based on some attributes of the data that define the shard key. All databases are created in the context of a Cosmos DB database account. A single account can contain several databases, and it specifies in which regions the databases are created. Each account also enforces its own access control. You can use Cosmos DB accounts to geo-locate shards (collections within databases) close to the users who need to access them, and enforce restrictions so that only those users can connect to them. Consider the following points when deciding how to partition data with the Cosmos DB SQL API: The resources available to a Cosmos DB database are subject to the quota limitations of the account. Each database can hold a number of collections, and each collection is associated with a performance level that governs the RU rate limit (reserved throughput) for that collection. For more information, see Azure subscription and service limits, quotas, and constraints. Each document must have an attribute that can be used to uniquely identify that document within the collection in which it is held. This attribute is different from the shard key, which defines which collection holds the document. A collection can contain a large number of documents. In theory, it's limited only by the maximum length of the document ID. The document ID can be up to 255 characters. All operations against a document are performed within the context of a transaction. Transactions are scoped to the collection in which the document is contained. If an operation fails, the work that it has performed is rolled back. While a document is subject to an operation, any changes that are made are subject to snapshot-level isolation. This mechanism guarantees that if, for example, a request to create a new document fails, another user who's querying the database simultaneously will not see a partial document that is then removed. Database queries are also scoped to the collection level. A single query can retrieve data from only one collection. If you need to retrieve data from multiple collections, you must query each collection individually and merge the results in your application code. Cosmos DB supports programmable items that can all be stored in a collection alongside documents. 
These include stored procedures, user-defined functions, and triggers (written in JavaScript). These items can access any document within the same collection. Furthermore, these items run either inside the scope of the ambient transaction (in the case of a trigger that fires as the result of a create, delete, or replace operation performed against a document), or by starting a new transaction (in the case of a stored procedure that is run as the result of an explicit client request). If the code in a programmable item throws an exception, the transaction is rolled back. You can use stored procedures and triggers to maintain integrity and consistency between documents, but these documents must all be part of the same collection. The collections that you intend to hold in the databases should be unlikely to exceed the throughput limits defined by the performance levels of the collections. For more information, see Request Units in Azure Cosmos DB. If you anticipate reaching these limits, consider splitting collections across databases in different accounts to reduce the load per collection. Partitioning Azure Search The ability to search for data is often the primary method of navigation and exploration that's provided by many web applications. It helps users find resources quickly (for example, products in an e-commerce application) based on combinations of search criteria. The Azure Search service provides full-text search capabilities over web content, and includes features such as type-ahead, suggested queries based on near matches, and faceted navigation. For more information, see What is Azure Search?. Azure Search stores searchable content as JSON documents in a database. You define indexes that specify the searchable fields in these documents and provide these definitions to Azure Search. When a user submits a search request, Azure Search uses the appropriate indexes to find matching items. To reduce contention, the storage that's used by Azure Search can be divided into 1, 2, 3, 4, 6, or 12 partitions, and each partition can be replicated up to 6 times. The product of the number of partitions multiplied by the number of replicas is called the search unit (SU). A single instance of Azure Search can contain a maximum of 36 SUs (a database with 12 partitions only supports a maximum of 3 replicas). You are billed for each SU that is allocated to your service. As the volume of searchable content increases or the rate of search requests grows, you can add SUs to an existing instance of Azure Search to handle the extra load. Azure Search itself distributes the documents evenly across the partitions. No manual partitioning strategies are currently supported. Each partition can contain a maximum of 15 million documents or occupy 300 GB of storage space (whichever is smaller). You can create up to 50 indexes. The performance of the service varies and depends on the complexity of the documents, the available indexes, and the effects of network latency. On average, a single replica (1 SU) should be able to handle 15 queries per second (QPS), although we recommend performing benchmarking with your own data to obtain a more precise measure of throughput. For more information, see Service limits in Azure Search. NOTE You can store a limited set of data types in searchable documents, including strings, Booleans, numeric data, datetime data, and some geographical data. For more details, see the page Supported data types (Azure Search) on the Microsoft website. 
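To illustrate the search unit calculation described above: a service provisioned with 6 partitions and 3 replicas consumes 6 x 3 = 18 SUs, and increasing the replica count from 3 to 6 to absorb a higher query rate doubles that to 36 SUs, which is the maximum for a single service.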
You have limited control over how Azure Search partitions data for each instance of the service. However, in a global environment you might be able to improve performance and reduce latency and contention further by partitioning the service itself using either of the following strategies: Create an instance of Azure Search in each geographic region, and ensure that client applications are directed towards the nearest available instance. This strategy requires that any updates to searchable content are replicated in a timely manner across all instances of the service. Create two tiers of Azure Search: A local service in each region that contains the data that's most frequently accessed by users in that region. Users can direct requests here for fast but limited results. A global service that encompasses all the data. Users can direct requests here for slower but more complete results. This approach is most suitable when there is a significant regional variation in the data that's being searched. Partitioning Azure Redis Cache Azure Redis Cache provides a shared caching service in the cloud that's based on the Redis key-value data store. As its name implies, Azure Redis Cache is intended as a caching solution. Use it only for holding transient data and not as a permanent data store. Applications that use Azure Redis Cache should be able to continue functioning if the cache is unavailable. Azure Redis Cache supports primary/secondary replication to provide high availability, but currently limits the maximum cache size to 53 GB. If you need more space than this, you must create additional caches. For more information, see Azure Redis Cache. Partitioning a Redis data store involves splitting the data across instances of the Redis service. Each instance constitutes a single partition. Azure Redis Cache abstracts the Redis services behind a façade and does not expose them directly. The simplest way to implement partitioning is to create multiple Azure Redis Cache instances and spread the data across them. You can associate each data item with an identifier (a partition key) that specifies which cache stores the data item. The client application logic can then use this identifier to route requests to the appropriate partition. This scheme is very simple, but if the partitioning scheme changes (for example, if additional Azure Redis Cache instances are created), client applications might need to be reconfigured. Native Redis (not Azure Redis Cache) supports server-side partitioning based on Redis clustering. In this approach, you can divide the data evenly across servers by using a hashing mechanism. Each Redis server stores metadata that describes the range of hash keys that the partition holds, and also contains information about which hash keys are located in the partitions on other servers. Client applications simply send requests to any of the participating Redis servers (probably the closest one). The Redis server examines the client request. If it can be resolved locally, it performs the requested operation. Otherwise it forwards the request on to the appropriate server. This model is implemented by using Redis clustering, and is described in more detail on the Redis cluster tutorial page on the Redis website. Redis clustering is transparent to client applications. Additional Redis servers can be added to the cluster (and the data can be re-partitioned) without requiring that you reconfigure the clients. IMPORTANT Azure Redis Cache currently supports Redis clustering in premium tier only. 
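To make this client-side routing approach concrete, the following is a minimal Python sketch using the redis-py client. The cache host names, the hash-based key-to-partition rule, and the access keys are illustrative assumptions rather than a recommended design, and production code would also need a strategy for remapping keys when partitions are added or removed.

```python
import hashlib
import redis  # redis-py client

# Each Azure Redis Cache instance acts as one partition (hypothetical endpoints and keys).
PARTITIONS = [
    redis.Redis(host="cache-0.redis.cache.windows.net", port=6380, ssl=True, password="<key-0>"),
    redis.Redis(host="cache-1.redis.cache.windows.net", port=6380, ssl=True, password="<key-1>"),
    redis.Redis(host="cache-2.redis.cache.windows.net", port=6380, ssl=True, password="<key-2>"),
]

def partition_for(key: str) -> redis.Redis:
    """Route a partition key to a cache instance by hashing it."""
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return PARTITIONS[int(digest, 16) % len(PARTITIONS)]

def cache_set(key: str, value: str, ttl_seconds: int = 3600) -> None:
    # Store the value in whichever partition the key maps to, with an expiry.
    partition_for(key).set(key, value, ex=ttl_seconds)

def cache_get(key: str):
    return partition_for(key).get(key)

# Example usage: the customer ID acts as the partition key.
cache_set("customer:99", '{"name": "Contoso"}')
print(cache_get("customer:99"))
```

Because the mapping lives entirely in the client, changing the number of partitions changes where keys land, which is why the scheme is simple but sensitive to reconfiguration.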
The page Partitioning: how to split data among multiple Redis instances on the Redis website provides more information about implementing partitioning with Redis. The remainder of this section assumes that you are implementing client-side or proxy-assisted partitioning. Consider the following points when deciding how to partition data with Azure Redis Cache: Azure Redis Cache is not intended to act as a permanent data store, so whatever partitioning scheme you implement, your application code must be able to retrieve data from a location that's not the cache. Data that is frequently accessed together should be kept in the same partition. Redis is a powerful key-value store that provides several highly optimized mechanisms for structuring data. These mechanisms can be one of the following: simple strings (binary data up to 512 MB in length); aggregate types such as lists (which can act as queues and stacks); sets (ordered and unordered); and hashes (which can group related fields together, such as the items that represent the fields in an object). The aggregate types enable you to associate many related values with the same key. A Redis key identifies a list, set, or hash rather than the data items that it contains. These types are all available with Azure Redis Cache and are described by the Data types page on the Redis website. For example, in part of an e-commerce system that tracks the orders that are placed by customers, the details of each customer can be stored in a Redis hash that is keyed by using the customer ID. Each hash can hold a collection of order IDs for the customer. A separate Redis set can hold the orders, again structured as hashes, and keyed by using the order ID. Figure 8 shows this structure. Note that Redis does not implement any form of referential integrity, so it is the developer's responsibility to maintain the relationships between customers and orders. Figure 8. Suggested structure in Redis storage for recording customer orders and their details. NOTE In Redis, all keys are binary data values (like Redis strings) and can contain up to 512 MB of data. In theory, a key can contain almost any information. However, we recommend adopting a consistent naming convention for keys that is descriptive of the type of data and that identifies the entity, but is not excessively long. A common approach is to use keys of the form "entity_type:ID". For example, you can use "customer:99" to indicate the key for a customer with the ID 99. You can implement vertical partitioning by storing related information in different aggregations in the same database. For example, in an e-commerce application, you can store commonly accessed information about products in one Redis hash and less frequently used detailed information in another. Both hashes can use the same product ID as part of the key. For example, you can use "product:nn" (where nn is the product ID) for the product information and "product_details:nn" for the detailed data. This strategy can help reduce the volume of data that most queries are likely to retrieve. You can repartition a Redis data store, but keep in mind that it's a complex and time-consuming task. Redis clustering can repartition data automatically, but this capability is not available with Azure Redis Cache. Therefore, when you design your partitioning scheme, try to leave sufficient free space in each partition to allow for expected data growth over time. 
However, remember that Azure Redis Cache is intended to cache data temporarily, and that data held in the cache can have a limited lifetime specified as a time-to-live (TTL) value. For relatively volatile data, the TTL can be short, but for static data the TTL can be a lot longer. Avoid storing large amounts of long-lived data in the cache if the volume of this data is likely to fill the cache. You can specify an eviction policy that causes Azure Redis Cache to remove data if space is at a premium. NOTE When you use Azure Redis Cache, you specify the maximum size of the cache (from 250 MB to 53 GB) by selecting the appropriate pricing tier. However, after an Azure Redis Cache has been created, you cannot increase (or decrease) its size. Redis batches and transactions cannot span multiple connections, so all data that is affected by a batch or transaction should be held in the same database (shard). NOTE A sequence of operations in a Redis transaction is not necessarily atomic. The commands that compose a transaction are verified and queued before they run. If an error occurs during this phase, the entire queue is discarded. However, after the transaction has been successfully submitted, the queued commands run in sequence. If any command fails, only that command stops running. All previous and subsequent commands in the queue are performed. For more information, go to the Transactions page on the Redis website. Redis supports a limited number of atomic operations. The only operations of this type that support multiple keys and values are MGET and MSET operations. MGET operations return a collection of values for a specified list of keys, and MSET operations store a collection of values for a specified list of keys. If you need to use these operations, the key-value pairs that are referenced by the MSET and MGET commands must be stored within the same database. Partitioning Azure Service Fabric Azure Service Fabric is a microservices platform that provides a runtime for distributed applications in the cloud. Service Fabric supports .NET guest executables, stateful and stateless services, and containers. Stateful services provide a reliable collection to persistently store data in a key-value collection within the Service Fabric cluster. For more information about strategies for partitioning keys in a reliable collection, see guidelines and recommendations for reliable collections in Azure Service Fabric. More information Overview of Azure Service Fabric is an introduction to Azure Service Fabric. Partition Service Fabric reliable services provides more information about reliable services in Azure Service Fabric. Partitioning Azure Event Hubs Azure Event Hubs is designed for data streaming at massive scale, and partitioning is built into the service to enable horizontal scaling. Each consumer only reads a specific partition of the message stream. The event publisher is only aware of its partition key, not the partition to which the events are published. This decoupling of key and partition insulates the sender from needing to know too much about the downstream processing. (It's also possible to send events directly to a given partition, but generally that's not recommended.) Consider long-term scale when you select the partition count. After an event hub is created, you can't change the number of partitions. For more information about using partitions in Event Hubs, see What is Event Hubs?. 
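As a brief illustration of publishing with a partition key, the following sketch uses the azure-eventhub Python package (version 5 or later is assumed); the connection string, event hub name, payloads, and the choice of a device ID as the partition key are placeholders rather than recommendations.

```python
from azure.eventhub import EventHubProducerClient, EventData

# Placeholder connection details for an existing event hub.
producer = EventHubProducerClient.from_connection_string(
    conn_str="<namespace-connection-string>",
    eventhub_name="<event-hub-name>",
)

# All events in this batch share a partition key, so Event Hubs delivers them
# to the same partition; the publisher never selects the partition itself.
batch = producer.create_batch(partition_key="device-42")
batch.add(EventData('{"temperature": 21.5}'))
batch.add(EventData('{"temperature": 21.7}'))
producer.send_batch(batch)
producer.close()
```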
For considerations about trade-offs between availability and consistency, see Availability and consistency in Event Hubs. Monitoring and diagnostics Distributed applications and services running in the cloud are, by their nature, complex pieces of software that comprise many moving parts. In a production environment, it's important to be able to track the way in which users utilize your system, trace resource utilization, and generally monitor the health and performance of your system. You can use this information as a diagnostic aid to detect and correct issues, and also to help spot potential problems and prevent them from occurring. Monitoring and diagnostics scenarios You can use monitoring to gain an insight into how well a system is functioning. Monitoring is a crucial part of maintaining quality-of-service targets. Common scenarios for collecting monitoring data include: Ensuring that the system remains healthy. Tracking the availability of the system and its component elements. Maintaining performance to ensure that the throughput of the system does not degrade unexpectedly as the volume of work increases. Guaranteeing that the system meets any service-level agreements (SLAs) established with customers. Protecting the privacy and security of the system, users, and their data. Tracking the operations that are performed for auditing or regulatory purposes. Monitoring the day-to-day usage of the system and spotting trends that might lead to problems if they're not addressed. Tracking issues that occur, from initial report through to analysis of possible causes, rectification, consequent software updates, and deployment. Tracing operations and debugging software releases. NOTE This list is not intended to be comprehensive. This document focuses on these scenarios as the most common situations for performing monitoring. There might be others that are less common or are specific to your environment. The following sections describe these scenarios in more detail. The information for each scenario is discussed in the following format: (1) a brief overview of the scenario; (2) the typical requirements of this scenario; (3) the raw instrumentation data that's required to support the scenario, and possible sources of this information; and (4) how this raw data can be analyzed and combined to generate meaningful diagnostic information. Health monitoring A system is healthy if it is running and capable of processing requests. The purpose of health monitoring is to generate a snapshot of the current health of the system so that you can verify that all components of the system are functioning as expected. Requirements for health monitoring An operator should be alerted quickly (within a matter of seconds) if any part of the system is deemed to be unhealthy. The operator should be able to ascertain which parts of the system are functioning normally, and which parts are experiencing problems. System health can be highlighted through a traffic-light system: red for unhealthy (the system has stopped), yellow for partially healthy (the system is running with reduced functionality), and green for completely healthy. A comprehensive health-monitoring system enables an operator to drill down through the system to view the health status of subsystems and components. For example, if the overall system is depicted as partially healthy, the operator should be able to zoom in and determine which functionality is currently unavailable. 
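As an illustrative sketch of the traffic-light requirement (the status names and the aggregation rule are assumptions made for this example, not part of any prescribed design), a health roll-up might look like the following:

```python
def overall_health(component_states: dict) -> str:
    """Roll per-component health up into a traffic-light status.

    component_states maps a component name to "healthy" or "unhealthy"
    (hypothetical states reported by health endpoints or probes).
    """
    states = component_states.values()
    if all(s == "healthy" for s in states):
        return "green"   # completely healthy
    if any(s == "healthy" for s in states):
        return "yellow"  # running with reduced functionality
    return "red"         # the system has stopped

# Example: the checkout subsystem is down, but the rest of the system is up.
print(overall_health({"web": "healthy", "catalog": "healthy", "checkout": "unhealthy"}))  # yellow
```

A drill-down view would retain the per-component detail alongside the rolled-up value so the operator can see which functionality is unavailable.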
Data sources, instrumentation, and data-collection requirements The raw data that's required to support health monitoring can be generated as a result of: Tracing execution of user requests. This information can be used to determine which requests have succeeded, which have failed, and how long each request takes. Synthetic user monitoring. This process simulates the steps performed by a user and follows a predefined series of steps. The results of each step should be captured. Logging exceptions, faults, and warnings. This information can be captured as a result of trace statements embedded into the application code, as well as retrieving information from the event logs of any services that the system references. Monitoring the health of any third-party services that the system uses. This monitoring might require retrieving and parsing health data that these services supply. This information might take a variety of formats. Endpoint monitoring. This mechanism is described in more detail in the "Availability monitoring" section. Collecting ambient performance information, such as background CPU utilization or I/O (including network) activity. Analyzing health data The primary focus of health monitoring is to quickly indicate whether the system is running. Hot analysis of the immediate data can trigger an alert if a critical component is detected as unhealthy. (It fails to respond to a consecutive series of pings, for example.) The operator can then take the appropriate corrective action. A more advanced system might include a predictive element that performs a cold analysis over recent and current workloads. A cold analysis can spot trends and determine whether the system is likely to remain healthy or whether the system will need additional resources. This predictive element should be based on critical performance metrics, such as: The rate of requests directed at each service or subsystem. The response times of these requests. The volume of data flowing into and out of each service. If the value of any metric exceeds a defined threshold, the system can raise an alert to enable an operator or autoscaling (if available) to take the preventative actions necessary to maintain system health. These actions might involve adding resources, restarting one or more services that are failing, or applying throttling to lower-priority requests. Availability monitoring A truly healthy system requires that the components and subsystems that compose the system are available. Availability monitoring is closely related to health monitoring. But whereas health monitoring provides an immediate view of the current health of the system, availability monitoring is concerned with tracking the availability of the system and its components to generate statistics about the uptime of the system. In many systems, some components (such as a database) are configured with built-in redundancy to permit rapid failover in the event of a serious fault or loss of connectivity. Ideally, users should not be aware that such a failure has occurred. But from an availability monitoring perspective, it's necessary to gather as much information as possible about such failures to determine the cause and take corrective actions to prevent them from recurring. The data that's required to track availability might depend on a number of lower-level factors. Many of these factors might be specific to the application, system, and environment. 
An effective monitoring system captures the availability data that corresponds to these low-level factors and then aggregates them to give an overall picture of the system. For example, in an e-commerce system, the business functionality that enables a customer to place orders might depend on the repository where order details are stored and the payment system that handles the monetary transactions for paying for these orders. The availability of the order-placement part of the system is therefore a function of the availability of the repository and the payment subsystem. Requirements for availability monitoring An operator should also be able to view the historical availability of each system and subsystem, and use this information to spot any trends that might cause one or more subsystems to periodically fail. (Do services start to fail at a particular time of day that corresponds to peak processing hours?) A monitoring solution should provide an immediate and historical view of the availability or unavailability of each subsystem. It should also be capable of quickly alerting an operator when one or more services fail or when users can't connect to services. This is a matter of not only monitoring each service, but also examining the actions that each user performs if these actions fail when they attempt to communicate with a service. To some extent, a degree of connectivity failure is normal and might be due to transient errors. But it might be useful to allow the system to raise an alert for the number of connectivity failures to a specified subsystem that occur during a specific period. Data sources, instrumentation, and data-collection requirements As with health monitoring, the raw data that's required to support availability monitoring can be generated as a result of synthetic user monitoring and logging any exceptions, faults, and warnings that might occur. In addition, availability data can be obtained from performing endpoint monitoring. The application can expose one or more health endpoints, each testing access to a functional area within the system. The monitoring system can ping each endpoint by following a defined schedule and collect the results (success or fail). All timeouts, network connectivity failures, and connection retry attempts must be recorded. All data should be time-stamped. Analyzing availability data The instrumentation data must be aggregated and correlated to support the following types of analysis: The immediate availability of the system and subsystems. The availability failure rates of the system and subsystems. Ideally, an operator should be able to correlate failures with specific activities: what was happening when the system failed? A historical view of failure rates of the system or any subsystems across any specified period, and the load on the system (number of user requests, for example) when a failure occurred. The reasons for unavailability of the system or any subsystems. For example, the reasons might be service not running, connectivity lost, connected but timing out, and connected but returning errors. You can calculate the percentage availability of a service over a period of time by using the following formula: %Availability = ((Total Time – Total Downtime) / Total Time) * 100 This is useful for SLA purposes. (SLA monitoring is described in more detail later in this guidance.) The definition of downtime depends on the service. 
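As a worked example of the availability formula: a service that accumulates 50 minutes of downtime during a 30-day month (43,200 minutes of total time) has an availability of ((43,200 – 50) / 43,200) * 100, or approximately 99.88 percent, for that month.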
For example, Visual Studio Team Services Build Service defines downtime as the period (total accumulated minutes) during which Build Service is unavailable. A minute is considered unavailable if all continuous HTTP requests to Build Service to perform customer-initiated operations throughout the minute either result in an error code or do not return a response. Performance monitoring As the system is placed under more and more stress (by increasing the volume of users), the size of the datasets that these users access grows and the possibility of failure of one or more components becomes more likely. Frequently, component failure is preceded by a decrease in performance. If you're able to detect such a decrease, you can take proactive steps to remedy the situation. System performance depends on a number of factors. Each factor is typically measured through key performance indicators (KPIs), such as the number of database transactions per second or the volume of network requests that are successfully serviced in a specified time frame. Some of these KPIs might be available as specific performance measures, whereas others might be derived from a combination of metrics. NOTE Determining poor or good performance requires that you understand the level of performance at which the system should be capable of running. This requires observing the system while it's functioning under a typical load and capturing the data for each KPI over a period of time. This might involve running the system under a simulated load in a test environment and gathering the appropriate data before deploying the system to a production environment. You should also ensure that monitoring for performance purposes does not become a burden on the system. You might be able to dynamically adjust the level of detail for the data that the performance monitoring process gathers. Requirements for performance monitoring To examine system performance, an operator typically needs to see information that includes: The response rates for user requests. The number of concurrent user requests. The volume of network traffic. The rates at which business transactions are being completed. The average processing time for requests. It can also be helpful to provide tools that enable an operator to help spot correlations, such as: The number of concurrent users versus request latency times (how long it takes to start processing a request after the user has sent it). The number of concurrent users versus the average response time (how long it takes to complete a request after it has started processing). The volume of requests versus the number of processing errors. Along with this high-level functional information, an operator should be able to obtain a detailed view of the performance for each component in the system. This data is typically provided through low-level performance counters that track information such as: Memory utilization. Number of threads. CPU processing time. Request queue length. Disk or network I/O rates and errors. Number of bytes written or read. Middleware indicators, such as queue length. All visualizations should allow an operator to specify a time period. The displayed data might be a snapshot of the current situation and/or a historical view of the performance. An operator should be able to raise an alert based on any performance measure for any specified value during any specified time interval. 
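As a minimal, product-agnostic sketch of such an alerting rule, the following illustrative function evaluates timestamped samples of a single performance measure against an operator-defined threshold over a sliding window; the sample data, window length, and alert condition are assumptions made for the example.

```python
from datetime import datetime, timedelta

def should_alert(samples, threshold, window, now=None):
    """Return True if every sample of the measure in the trailing window exceeds the threshold.

    samples   -- iterable of (timestamp, value) pairs for one performance measure
    threshold -- operator-defined limit for that measure
    window    -- timedelta defining the evaluation interval
    """
    now = now or datetime.utcnow()
    recent = [value for ts, value in samples if now - ts <= window]
    # An empty window produces no alert here; a real system might treat missing data differently.
    return bool(recent) and all(value > threshold for value in recent)

# Illustrative usage: alert if request latency stayed above 500 ms for the last 5 minutes.
now = datetime.utcnow()
latency_ms = [(now - timedelta(minutes=m), v) for m, v in [(6, 320), (4, 540), (2, 610), (0, 580)]]
print(should_alert(latency_ms, threshold=500, window=timedelta(minutes=5), now=now))  # True
```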
Data sources, instrumentation, and data-collection requirements You can gather high-level performance data (throughput, number of concurrent users, number of business transactions, error rates, and so on) by monitoring the progress of users' requests as they arrive and pass through the system. This involves incorporating tracing statements at key points in the application code, together with timing information. All faults, exceptions, and warnings should be captured with sufficient data for correlating them with the requests that caused them. The Internet Information Services (IIS) log is another useful source. If possible, you should also capture performance data for any external systems that the application uses. These external systems might provide their own performance counters or other features for requesting performance data. If this is not possible, record information such as the start time and end time of each request made to an external system, together with the status (success, fail, or warning) of the operation. For example, you can use a stopwatch approach to time requests: start a timer when the request starts and then stop the timer when the request finishes. Low-level performance data for individual components in a system might be available through features and services such as Windows performance counters and Azure Diagnostics. Analyzing performance data Much of the analysis work consists of aggregating performance data by user request type and/or the subsystem or service to which each request is sent. An example of a user request is adding an item to a shopping cart or performing the checkout process in an e-commerce system. Another common requirement is summarizing performance data in selected percentiles. For example, an operator might determine the response times for 99 percent of requests, 95 percent of requests, and 70 percent of requests. There might be SLA targets or other goals set for each percentile. The ongoing results should be reported in near real time to help detect immediate issues. The results should also be aggregated over the longer time for statistical purposes. In the case of latency issues affecting performance, an operator should be able to quickly identify the cause of the bottleneck by examining the latency of each step that each request performs. The performance data must therefore provide a means of correlating performance measures for each step to tie them to a specific request. Depending on the visualization requirements, it might be useful to generate and store a data cube that contains views of the raw data. This data cube can allow complex ad hoc querying and analysis of the performance information. Security monitoring All commercial systems that include sensitive data must implement a security structure. The complexity of the security mechanism is usually a function of the sensitivity of the data. In a system that requires users to be authenticated, you should record: All sign-in attempts, whether they fail or succeed. All operations performed by — and the details of all resources accessed by — an authenticated user. When a user ends a session and signs out. Monitoring might be able to help detect attacks on the system. For example, a large number of failed sign-in attempts might indicate a brute-force attack. An unexpected surge in requests might be the result of a distributed denial-of-service (DDoS) attack. You must be prepared to monitor all requests to all resources regardless of the source of these requests. 
A system that has a sign-in vulnerability might accidentally expose resources to the outside world without requiring a user to actually sign in. Requirements for security monitoring The most critical aspects of security monitoring should enable an operator to quickly: Detect attempted intrusions by an unauthenticated entity. Identify attempts by entities to perform operations on data for which they have not been granted access. Determine whether the system, or some part of the system, is under attack from outside or inside. (For example, a malicious authenticated user might be attempting to bring the system down.) To support these requirements, an operator should be notified: If one account makes repeated failed sign-in attempts within a specified period. If one authenticated account repeatedly tries to access a prohibited resource during a specified period. If a large number of unauthenticated or unauthorized requests occur during a specified period. The information that's provided to an operator should include the host address of the source for each request. If security violations regularly arise from a particular range of addresses, these hosts might be blocked. A key part in maintaining the security of a system is being able to quickly detect actions that deviate from the usual pattern. Information such as the number of failed and/or successful sign-in requests can be displayed visually to help detect whether there is a spike in activity at an unusual time. (An example of this activity is users signing in at 3:00 AM and performing a large number of operations when their working day starts at 9:00 AM.) This information can also be used to help configure time-based autoscaling. For example, if an operator observes that a large number of users regularly sign in at a particular time of day, the operator can arrange to start additional authentication services to handle the volume of work, and then shut down these additional services when the peak has passed. Data sources, instrumentation, and data-collection requirements Security is an all-encompassing aspect of most distributed systems. The pertinent data is likely to be generated at multiple points throughout a system. You should consider adopting a Security Information and Event Management (SIEM) approach to gather the security-related information that results from events raised by the application, network equipment, servers, firewalls, antivirus software, and other intrusion-prevention elements. Security monitoring can incorporate data from tools that are not part of your application. These tools can include utilities that identify port-scanning activities by external agencies, or network filters that detect attempts to gain unauthenticated access to your application and data. In all cases, the gathered data must enable an administrator to determine the nature of any attack and take the appropriate countermeasures. Analyzing security data A feature of security monitoring is the variety of sources from which the data arises. The different formats and level of detail often require complex analysis of the captured data to tie it together into a coherent thread of information. Apart from the simplest of cases (such as detecting a large number of failed sign-ins, or repeated attempts to gain unauthorized access to critical resources), it might not be possible to perform any complex automated processing of security data. 
Instead, it might be preferable to write this data, time-stamped but otherwise in its original form, to a secure repository to allow for expert manual analysis. SLA monitoring Many commercial systems that support paying customers make guarantees about the performance of the system in the form of SLAs. Essentially, SLAs state that the system can handle a defined volume of work within an agreed time frame and without losing critical information. SLA monitoring is concerned with ensuring that the system can meet measurable SLAs. NOTE SLA monitoring is closely related to performance monitoring. But whereas performance monitoring is concerned with ensuring that the system functions optimally, SLA monitoring is governed by a contractual obligation that defines what optimally actually means. SLAs are often defined in terms of: Overall system availability. For example, an organization might guarantee that the system will be available for 99.9 percent of the time. This equates to no more than 9 hours of downtime per year, or approximately 10 minutes a week. Operational throughput. This aspect is often expressed as one or more high-water marks, such as guaranteeing that the system can support up to 100,000 concurrent user requests or handle 10,000 concurrent business transactions. Operational response time. The system might also make guarantees for the rate at which requests are processed. An example is that 99 percent of all business transactions will finish within 2 seconds, and no single transaction will take longer than 10 seconds. NOTE Some contracts for commercial systems might also include SLAs for customer support. An example is that all help-desk requests will elicit a response within 5 minutes, and that 99 percent of all problems will be fully addressed within 1 working day. Effective issue tracking (described later in this section) is key to meeting SLAs such as these. Requirements for SLA monitoring At the highest level, an operator should be able to determine at a glance whether the system is meeting the agreed SLAs or not. And if not, the operator should be able to drill down and examine the underlying factors to determine the reasons for substandard performance. Typical high-level indicators that can be depicted visually include: The percentage of service uptime. The application throughput (measured in terms of successful transactions and/or operations per second). The number of successful/failing application requests. The number of application and system faults, exceptions, and warnings. All of these indicators should be capable of being filtered by a specified period of time. A cloud application will likely comprise a number of subsystems and components. An operator should be able to select a high-level indicator and see how it's composed from the health of the underlying elements. For example, if the uptime of the overall system falls below an acceptable value, an operator should be able to zoom in and determine which elements are contributing to this failure. NOTE System uptime needs to be defined carefully. In a system that uses redundancy to ensure maximum availability, individual instances of elements might fail, but the system can remain functional. System uptime as presented by health monitoring should indicate the aggregate uptime of each element and not necessarily whether the system has actually halted. Additionally, failures might be isolated. 
So even if a specific system is unavailable, the remainder of the system might remain available, although with decreased functionality. (In an e-commerce system, a failure in the system might prevent a customer from placing orders, but the customer might still be able to browse the product catalog.) For alerting purposes, the system should be able to raise an event if any of the high-level indicators exceed a specified threshold. The lower-level details of the various factors that compose the high-level indicator should be available as contextual data to the alerting system. Data sources, instrumentation, and data-collection requirements The raw data that's required to support SLA monitoring is similar to the raw data that's required for performance monitoring, together with some aspects of health and availability monitoring. (See those sections for more details.) You can capture this data by: Performing endpoint monitoring. Logging exceptions, faults, and warnings. Tracing the execution of user requests. Monitoring the availability of any third-party services that the system uses. Using performance metrics and counters. All data must be timed and time-stamped. Analyzing SLA data The instrumentation data must be aggregated to generate a picture of the overall performance of the system. Aggregated data must also support drill-down to enable examination of the performance of the underlying subsystems. For example, you should be able to: Calculate the total number of user requests during a specified period and determine the success and failure rate of these requests. Combine the response times of user requests to generate an overall view of system response times. Analyze the progress of user requests to break down the overall response time of a request into the response times of the individual work items in that request. Determine the overall availability of the system as a percentage of uptime for any specific period. Analyze the percentage time availability of the individual components and services in the system. This might involve parsing logs that third-party services have generated. Many commercial systems are required to report real performance figures against agreed SLAs for a specified period, typically a month. This information can be used to calculate credits or other forms of repayments for customers if the SLAs are not met during that period. You can calculate availability for a service by using the technique described in the section Analyzing availability data. For internal purposes, an organization might also track the number and nature of incidents that caused services to fail. Learning how to resolve these issues quickly, or eliminate them completely, will help to reduce downtime and meet SLAs. Auditing Depending on the nature of the application, there might be statutory or other legal regulations that specify requirements for auditing users' operations and recording all data access. Auditing can provide evidence that links customers to specific requests. Non-repudiation is an important factor in many e-business systems to help maintain trust between a customer and the organization that's responsible for the application or service. Requirements for auditing An analyst must be able to trace the sequence of business operations that users are performing so that you can reconstruct users' actions. This might be necessary simply as a matter of record, or as part of a forensic investigation. Audit information is highly sensitive. 
It will likely include data that identifies the users of the system, together with the tasks that they're performing. For this reason, audit information will most likely take the form of reports that are available only to trusted analysts rather than as an interactive system that supports drill-down of graphical operations. An analyst should be able to generate a range of reports. For example, reports might list all users' activities occurring during a specified time frame, detail the chronology of activity for a single user, or list the sequence of operations performed against one or more resources. Data sources, instrumentation, and data-collection requirements The primary sources of information for auditing can include: The security system that manages user authentication. Trace logs that record user activity. Security logs that track all identifiable and unidentifiable network requests. The format of the audit data and the way in which it's stored might be driven by regulatory requirements. For example, it might not be possible to clean the data in any way. (It must be recorded in its original format.) Access to the repository where it's held must be protected to prevent tampering. Analyzing audit data An analyst must be able to access the raw data in its entirety, in its original form. Aside from the requirement to generate common audit reports, the tools for analyzing this data are likely to be specialized and kept external to the system. Usage monitoring Usage monitoring tracks how the features and components of an application are used. An operator can use the gathered data to: Determine which features are heavily used and determine any potential hotspots in the system. High-traffic elements might benefit from functional partitioning or even replication to spread the load more evenly. An operator can also use this information to ascertain which features are infrequently used and are possible candidates for retirement or replacement in a future version of the system. Obtain information about the operational events of the system under normal use. For example, in an e-commerce site, you can record the statistical information about the number of transactions and the volume of customers that are responsible for them. This information can be used for capacity planning as the number of customers grows. Detect (possibly indirectly) user satisfaction with the performance or functionality of the system. For example, if a large number of customers in an e-commerce system regularly abandon their shopping carts, this might be due to a problem with the checkout functionality. Generate billing information. A commercial application or multitenant service might charge customers for the resources that they use. Enforce quotas. If a user in a multitenant system exceeds their paid quota of processing time or resource usage during a specified period, their access can be limited or processing can be throttled. Requirements for usage monitoring To examine system usage, an operator typically needs to see information that includes: The number of requests that are processed by each subsystem and directed to each resource. The work that each user is performing. The volume of data storage that each user occupies. The resources that each user is accessing. An operator should also be able to generate graphs. For example, a graph might display the most resource-hungry users, or the most frequently accessed resources or system features. 
Data sources, instrumentation, and data-collection requirements Usage tracking can be performed at a relatively high level. It can note the start and end times of each request and the nature of the request (read, write, and so on, depending on the resource in question). You can obtain this information by: Tracing user activity. Capturing performance counters that measure the utilization for each resource. Monitoring the resource consumption by each user. For metering purposes, you also need to be able to identify which users are responsible for performing which operations, and the resources that these operations utilize. The gathered information should be detailed enough to enable accurate billing. Issue tracking Customers and other users might report issues if unexpected events or behavior occurs in the system. Issue tracking is concerned with managing these issues, associating them with efforts to resolve any underlying problems in the system, and informing customers of possible resolutions. Requirements for issue tracking Operators often perform issue tracking by using a separate system that enables them to record and report the details of problems that users report. These details can include the tasks that the user was trying to perform, symptoms of the problem, the sequence of events, and any error or warning messages that were issued. Data sources, instrumentation, and data-collection requirements The initial data source for issue-tracking data is the user who reported the issue in the first place. The user might be able to provide additional data such as: A crash dump (if the application includes a component that runs on the user's desktop). A screen snapshot. The date and time when the error occurred, together with any other environmental information such as the user's location. This information can be used to help the debugging effort and help construct a backlog for future releases of the software. Analyzing issue-tracking data Different users might report the same problem. The issue-tracking system should associate common reports. The progress of the debugging effort should be recorded against each issue report. When the problem is resolved, the customer can be informed of the solution. If a user reports an issue that has a known solution in the issue-tracking system, the operator should be able to inform the user of the solution immediately. Tracing operations and debugging software releases When a user reports an issue, the user is often only aware of the immediate effect that it has on their operations. The user can only report the results of their own experience back to an operator who is responsible for maintaining the system. These experiences are usually just a visible symptom of one or more fundamental problems. In many cases, an analyst will need to dig through the chronology of the underlying operations to establish the root cause of the problem. This process is called root cause analysis. NOTE Root cause analysis might uncover inefficiencies in the design of an application. In these situations, it might be possible to rework the affected elements and deploy them as part of a subsequent release. This process requires careful control, and the updated components should be monitored closely. Requirements for tracing and debugging For tracing unexpected events and other problems, it's vital that the monitoring data provides enough information to enable an analyst to trace back to the origins of these issues and reconstruct the sequence of events that occurred. 
This information must be sufficient to enable an analyst to diagnose the root cause of any problems. A developer can then make the necessary modifications to prevent them from recurring. Data sources, instrumentation, and data-collection requirements Troubleshooting can involve tracing all the methods (and their parameters) invoked as part of an operation to build up a tree that depicts the logical flow through the system when a customer makes a specific request. Exceptions and warnings that the system generates as a result of this flow need to be captured and logged. To support debugging, the system can provide hooks that enable an operator to capture state information at crucial points in the system. Or, the system can deliver detailed step-by-step information as selected operations progress. Capturing data at this level of detail can impose an additional load on the system and should be a temporary process. An operator uses this process mainly when a highly unusual series of events occurs and is difficult to replicate, or when a new release of one or more elements into a system requires careful monitoring to ensure that the elements function as expected. The monitoring and diagnostics pipeline Monitoring a large-scale distributed system poses a significant challenge. Each of the scenarios described in the previous section should not necessarily be considered in isolation. There is likely to be a significant overlap in the monitoring and diagnostic data that's required for each situation, although this data might need to be processed and presented in different ways. For these reasons, you should take a holistic view of monitoring and diagnostics. You can envisage the entire monitoring and diagnostics process as a pipeline that comprises the stages shown in Figure 1. Figure 1. The stages in the monitoring and diagnostics pipeline. Figure 1 highlights how the data for monitoring and diagnostics can come from a variety of data sources. The instrumentation and collection stages are concerned with identifying the sources from where the data needs to be captured, determining which data to capture, how to capture it, and how to format this data so that it can be easily examined. The analysis/diagnosis stage takes the raw data and uses it to generate meaningful information that an operator can use to determine the state of the system. The operator can use this information to make decisions about possible actions to take, and then feed the results back into the instrumentation and collection stages. The visualization/alerting stage presents a consumable view of the system state. It can display information in near real time by using a series of dashboards. And it can generate reports, graphs, and charts to provide a historical view of the data that can help identify long-term trends. If information indicates that a KPI is likely to exceed acceptable bounds, this stage can also trigger an alert to an operator. In some cases, an alert can also be used to trigger an automated process that attempts to take corrective actions, such as autoscaling. Note that these steps constitute a continuous-flow process where the stages are happening in parallel. Ideally, all the phases should be dynamically configurable. At some points, especially when a system has been newly deployed or is experiencing problems, it might be necessary to gather extended data on a more frequent basis. 
At other times, it should be possible to revert to capturing a base level of essential information to verify that the system is functioning properly. Additionally, the entire monitoring process should be considered a live, ongoing solution that's subject to fine-tuning and improvements as a result of feedback. For example, you might start with measuring many factors to determine system health. Analysis over time might lead to a refinement as you discard measures that aren't relevant, enabling you to more precisely focus on the data that you need while minimizing background noise.
Sources of monitoring and diagnostic data
The information that the monitoring process uses can come from several sources, as illustrated in Figure 1. At the application level, information comes from trace logs incorporated into the code of the system. Developers should follow a standard approach for tracking the flow of control through their code. For example, an entry to a method can emit a trace message that specifies the name of the method, the current time, the value of each parameter, and any other pertinent information. Recording the entry and exit times can also prove useful.
You should log all exceptions and warnings, and ensure that you retain a full trace of any nested exceptions and warnings. Ideally, you should also capture information that identifies the user who is running the code, together with activity correlation information (to track requests as they pass through the system). And you should log attempts to access all resources such as message queues, databases, files, and other dependent services. This information can be used for metering and auditing purposes.
Many applications use libraries and frameworks to perform common tasks such as accessing a data store or communicating over a network. These frameworks might be configurable to provide their own trace messages and raw diagnostic information, such as transaction rates and data transmission successes and failures.
NOTE
Many modern frameworks automatically publish performance and trace events. Capturing this information is simply a matter of providing a means to retrieve and store it where it can be processed and analyzed.
The operating system where the application is running can be a source of low-level system-wide information, such as performance counters that indicate I/O rates, memory utilization, and CPU usage. Operating system errors (such as the failure to open a file correctly) might also be reported.
You should also consider the underlying infrastructure and components on which your system runs. Virtual machines, virtual networks, and storage services can all be sources of important infrastructure-level performance counters and other diagnostic data.
If your application uses other external services, such as a web server or database management system, these services might publish their own trace information, logs, and performance counters. Examples include SQL Server Dynamic Management Views for tracking operations performed against a SQL Server database, and IIS trace logs for recording requests made to a web server.
As the components of a system are modified and new versions are deployed, it's important to be able to attribute issues, events, and metrics to each version. This information should be tied back to the release pipeline so that problems with a specific version of a component can be tracked quickly and rectified.
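To make the application-level tracing described at the start of this section more concrete, the following minimal Python sketch shows one way to emit entry and exit trace messages with parameter values, timing, and an identifier that can later be used for correlation. The decorator name and the sample business function are hypothetical; a real system would route these records through whatever logging framework and correlation mechanism you have standardized on.

```python
import functools
import logging
import time
import uuid

logger = logging.getLogger("app.trace")

def traced(func):
    """Log entry, exit, parameter values, timing, and failures for a call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        activity = str(uuid.uuid4())          # correlation ID for this invocation
        logger.info("ENTER %s args=%r kwargs=%r activity=%s",
                    func.__name__, args, kwargs, activity)
        start = time.monotonic()
        try:
            return func(*args, **kwargs)
        except Exception:
            logger.exception("FAILED %s activity=%s", func.__name__, activity)
            raise
        finally:
            logger.info("EXIT %s elapsed=%.3fs activity=%s",
                        func.__name__, time.monotonic() - start, activity)
    return wrapper

@traced
def place_order(customer_id, items):
    # Hypothetical business logic.
    return {"customer": customer_id, "count": len(items)}
```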
Security issues might occur at any point in the system. For example, a user might attempt to sign in with an invalid user ID or password. An authenticated user might try to obtain unauthorized access to a resource. Or a user might provide an invalid or outdated key to access encrypted information. Security-related information for successful and failing requests should always be logged.
The section Instrumenting an application contains more guidance on the information that you should capture. But you can use a variety of strategies to gather this information:
- Application/system monitoring. This strategy uses internal sources within the application, application frameworks, operating system, and infrastructure. The application code can generate its own monitoring data at notable points during the lifecycle of a client request. The application can include tracing statements that might be selectively enabled or disabled as circumstances dictate. It might also be possible to inject diagnostics dynamically by using a diagnostics framework. These frameworks typically provide plug-ins that can attach to various instrumentation points in your code and capture trace data at these points. Additionally, your code and/or the underlying infrastructure might raise events at critical points. Monitoring agents that are configured to listen for these events can record the event information.
- Real user monitoring. This approach records the interactions between a user and the application and observes the flow of each request and response. This information can have a two-fold purpose: it can be used for metering usage by each user, and it can be used to determine whether users are receiving a suitable quality of service (for example, fast response times, low latency, and minimal errors). You can use the captured data to identify areas of concern where failures occur most often. You can also use the data to identify elements where the system slows down, possibly due to hotspots in the application or some other form of bottleneck. If you implement this approach carefully, it might be possible to reconstruct users' flows through the application for debugging and testing purposes.
IMPORTANT
You should consider the data that's captured by monitoring real users to be highly sensitive because it might include confidential material. If you save captured data, store it securely. If you want to use the data for performance monitoring or debugging purposes, strip out all personally identifiable information first.
- Synthetic user monitoring. In this approach, you write your own test client that simulates a user and performs a configurable but typical series of operations (a minimal sketch follows this list). You can track the performance of the test client to help determine the state of the system. You can also use multiple instances of the test client as part of a load-testing operation to establish how the system responds under stress, and what sort of monitoring output is generated under these conditions.
NOTE
You can implement real and synthetic user monitoring by including code that traces and times the execution of method calls and other critical parts of an application.
- Profiling. This approach is primarily targeted at monitoring and improving application performance. Rather than operating at the functional level of real and synthetic user monitoring, it captures lower-level information as the application runs. You can implement profiling by using periodic sampling of the execution state of an application (determining which piece of code the application is running at a given point in time). You can also use instrumentation that inserts probes into the code at important junctures (such as the start and end of a method call) and records which methods were invoked, at what time, and how long each call took. You can then analyze this data to determine which parts of the application might cause performance problems.
- Endpoint monitoring. This technique uses one or more diagnostic endpoints that the application exposes specifically to enable monitoring. An endpoint provides a pathway into the application code and can return information about the health of the system. Different endpoints can focus on various aspects of the functionality. You can write your own diagnostics client that sends periodic requests to these endpoints and assimilate the responses. For more information, see the Health Endpoint Monitoring pattern.
For maximum coverage, you should use a combination of these techniques.
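As a rough illustration of the synthetic user monitoring approach referenced in the list above, the sketch below runs a scripted sequence of operations and records latency and success for each step. The endpoints listed are placeholders; a real test client would exercise your own application's typical user journey and forward the results to your telemetry pipeline.

```python
import time
import urllib.request

# Placeholder endpoints representing one typical user journey.
OPERATIONS = [
    "https://example.com/api/catalog",
    "https://example.com/api/cart",
    "https://example.com/api/checkout",
]

def run_synthetic_session():
    """Run one scripted 'user' session and return per-step latency and outcome."""
    results = []
    for url in OPERATIONS:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                ok = 200 <= response.status < 300
        except Exception:
            ok = False
        results.append({"operation": url, "ok": ok,
                        "latency_s": round(time.monotonic() - start, 3)})
    return results

if __name__ == "__main__":
    while True:
        for step in run_synthetic_session():
            print(step)       # in practice, forward to the telemetry pipeline
        time.sleep(60)        # repeat the scripted journey once per minute
```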
Instrumenting an application
Instrumentation is a critical part of the monitoring process. You can make meaningful decisions about the performance and health of a system only if you first capture the data that enables you to make these decisions. The information that you gather by using instrumentation should be sufficient to enable you to assess performance, diagnose problems, and make decisions without requiring you to sign in to a remote production server to perform tracing (and debugging) manually. Instrumentation data typically comprises metrics and information that's written to trace logs.
The contents of a trace log can be the result of textual data that's written by the application or binary data that's created as the result of a trace event (if the application is using Event Tracing for Windows (ETW)). They can also be generated from system logs that record events arising from parts of the infrastructure, such as a web server. Textual log messages are often designed to be human-readable, but they should also be written in a format that enables an automated system to parse them easily.
You should also categorize logs. Don't write all trace data to a single log, but use separate logs to record the trace output from different operational aspects of the system. You can then quickly filter log messages by reading from the appropriate log rather than having to process a single lengthy file. Never write information that has different security requirements (such as audit information and debugging data) to the same log.
NOTE
A log might be implemented as a file on the file system, or it might be held in some other format, such as a blob in blob storage. Log information might also be held in more structured storage, such as rows in a table.
Metrics will generally be a measure or count of some aspect or resource in the system at a specific time, with one or more associated tags or dimensions (sometimes called a sample). A single instance of a metric is usually not useful in isolation. Instead, metrics have to be captured over time. The key issue to consider is which metrics you should record and how frequently. Generating data for metrics too often can impose a significant additional load on the system, whereas capturing metrics infrequently might cause you to miss the circumstances that lead to a significant event. The considerations will vary from metric to metric. For example, CPU utilization on a server might vary significantly from second to second, but high utilization becomes an issue only if it's long-lived over a number of minutes.
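The following sketch illustrates one way to capture a metric on a fixed interval, with dimensions and a UTC timestamp attached to each sample. The probe function and the sink are stand-ins supplied by the caller; the interval you choose should reflect the trade-off described above between collection cost and the risk of missing short-lived events.

```python
import datetime
import threading
import time

def sample_metric(name, probe, dimensions, interval_s, sink):
    """Call probe() every interval_s seconds and emit a tagged, timestamped sample."""
    def loop():
        while True:
            sink({
                "metric": name,
                "value": probe(),                      # e.g. read CPU utilization
                "dimensions": dimensions,              # tags such as role and node
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            time.sleep(interval_s)
    threading.Thread(target=loop, daemon=True).start()

# Example usage with a hypothetical probe and an in-memory sink.
samples = []
sample_metric("cpu_percent", probe=lambda: 42.0,
              dimensions={"role": "web", "node": "web-01"},
              interval_s=15, sink=samples.append)
```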
Information for correlating data
You can easily monitor individual system-level performance counters, capture metrics for resources, and obtain application trace information from various log files. But some forms of monitoring require the analysis and diagnostics stage in the monitoring pipeline to correlate the data that's retrieved from several sources. This data might take several forms in the raw data, and the analysis process must be provided with sufficient instrumentation data to be able to map these different forms.
For example, at the application framework level, a task might be identified by a thread ID. Within an application, the same work might be associated with the user ID for the user who is performing that task. Also, there's unlikely to be a 1:1 mapping between threads and user requests, because asynchronous operations might reuse the same threads to perform operations on behalf of more than one user. To complicate matters further, a single request might be handled by more than one thread as execution flows through the system. If possible, associate each request with a unique activity ID that's propagated through the system as part of the request context. (The technique for generating and including activity IDs in trace information depends on the technology that's used to capture the trace data.)
All monitoring data should be time-stamped in the same way. For consistency, record all dates and times by using Coordinated Universal Time. This will help you more easily trace sequences of events.
NOTE
Computers operating in different time zones and networks might not be synchronized. Don't depend on using time stamps alone for correlating instrumentation data that spans multiple machines.
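One common way to propagate an activity ID with the request context in Python is a contextvars variable that a logging filter stamps onto every record, together with a UTC timestamp. This is only a sketch of the idea, assuming a single process; the names used are illustrative, and the equivalent mechanism depends on the framework and technology stack you use to capture trace data.

```python
import contextvars
import datetime
import logging
import uuid

# Activity ID carried with the request context within this process.
activity_id = contextvars.ContextVar("activity_id", default=None)

class ActivityFilter(logging.Filter):
    """Stamp every record with the current activity ID and a UTC timestamp."""
    def filter(self, record):
        record.activity_id = activity_id.get() or "-"
        record.utc_time = datetime.datetime.now(datetime.timezone.utc).isoformat()
        return True

handler = logging.StreamHandler()
handler.addFilter(ActivityFilter())
handler.setFormatter(logging.Formatter("%(utc_time)s %(activity_id)s %(message)s"))
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def handle_request(payload, incoming_activity_id=None):
    # Reuse an activity ID that arrived with the request, or mint a new one.
    activity_id.set(incoming_activity_id or str(uuid.uuid4()))
    logger.info("request received: %r", payload)

handle_request({"action": "get_profile"})
```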
Information to include in the instrumentation data
Consider the following points when you're deciding which instrumentation data you need to collect:
- Make sure that information captured by trace events is machine and human readable. Adopt well-defined schemas for this information to facilitate automated processing of log data across systems, and to provide consistency to operations and engineering staff reading the logs.
- Include environmental information, such as the deployment environment, the machine on which the process is running, the details of the process, and the call stack.
- Enable profiling only when necessary because it can impose a significant overhead on the system. Profiling by using instrumentation records an event (such as a method call) every time it occurs, whereas sampling records only selected events. The selection can be time-based (once every n seconds), or frequency-based (once every n requests). If events occur very frequently, profiling by instrumentation might cause too much of a burden and itself affect overall performance. In this case, the sampling approach might be preferable. However, if the frequency of events is low, sampling might miss them. In this case, instrumentation might be the better approach.
- Provide sufficient context to enable a developer or administrator to determine the source of each request. This might include some form of activity ID that identifies a specific instance of a request. It might also include information that can be used to correlate this activity with the computational work performed and the resources used. Note that this work might cross process and machine boundaries. For metering, the context should also include (either directly or indirectly via other correlated information) a reference to the customer who caused the request to be made. This context provides valuable information about the application state at the time that the monitoring data was captured.
- Record all requests, and the locations or regions from which these requests are made. This information can assist in determining whether there are any location-specific hotspots. This information can also be useful in determining whether to repartition an application or the data that it uses.
- Record and capture the details of exceptions carefully. Often, critical debug information is lost as a result of poor exception handling. Capture the full details of exceptions that the application throws, including any inner exceptions and other context information. Include the call stack if possible.
- Be consistent in the data that the different elements of your application capture, because this can assist in analyzing events and correlating them with user requests. Consider using a comprehensive and configurable logging package to gather information, rather than depending on developers to adopt the same approach as they implement different parts of the system.
- Gather data from key performance counters, such as the volume of I/O being performed, network utilization, number of requests, memory use, and CPU utilization. Some infrastructure services might provide their own specific performance counters, such as the number of connections to a database, the rate at which transactions are being performed, and the number of transactions that succeed or fail. Applications might also define their own specific performance counters.
- Log all calls made to external services, such as database systems, web services, or other system-level services that are part of the infrastructure. Record information about the time taken to perform each call, and the success or failure of the call. If possible, capture information about all retry attempts and failures for any transient errors that occur.
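As an example of the last point, the following sketch wraps calls to an external dependency and records timing, outcome, and retry attempts. The helper name and the simple fixed-delay retry policy are illustrative assumptions; production code would normally use your standard retry library and logging pipeline.

```python
import logging
import time

logger = logging.getLogger("app.dependencies")

def call_with_logging(name, operation, max_attempts=3, delay_s=1.0):
    """Invoke an external-service call, logging timing, outcome, and retries."""
    for attempt in range(1, max_attempts + 1):
        start = time.monotonic()
        try:
            result = operation()
            logger.info("dependency=%s attempt=%d outcome=success elapsed=%.3fs",
                        name, attempt, time.monotonic() - start)
            return result
        except Exception as exc:
            logger.warning("dependency=%s attempt=%d outcome=failure elapsed=%.3fs error=%r",
                           name, attempt, time.monotonic() - start, exc)
            if attempt == max_attempts:
                raise
            time.sleep(delay_s)   # a real policy would usually back off exponentially

# Example usage with a hypothetical database call:
# call_with_logging("orders-db", lambda: orders_db.execute("SELECT 1"))
```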
Ensuring compatibility with telemetry systems
In many cases, the information that instrumentation produces is generated as a series of events and passed to a separate telemetry system for processing and analysis. A telemetry system is typically independent of any specific application or technology, but it expects information to follow a specific format that's usually defined by a schema. The schema effectively specifies a contract that defines the data fields and types that the telemetry system can ingest. The schema should be generalized to allow for data arriving from a range of platforms and devices.
A common schema should include fields that are common to all instrumentation events, such as the event name, the event time, the IP address of the sender, and the details that are required for correlating with other events (such as a user ID, a device ID, and an application ID). Remember that any number of devices might raise events, so the schema should not depend on the device type. Additionally, various devices might raise events for the same application; the application might support roaming or some other form of cross-device distribution.
The schema might also include domain fields that are relevant to a particular scenario that's common across different applications. This might be information about exceptions, application start and end events, and success and/or failure of web service API calls. All applications that use the same set of domain fields should emit the same set of events, enabling a set of common reports and analytics to be built. Finally, a schema might contain custom fields for capturing the details of application-specific events.
Best practices for instrumenting applications
The following list summarizes best practices for instrumenting a distributed application running in the cloud.
- Make logs easy to read and easy to parse. Use structured logging where possible. Be concise and descriptive in log messages.
- In all logs, identify the source and provide context and timing information as each log record is written.
- Use the same time zone and format for all time stamps. This will help to correlate events for operations that span hardware and services running in different geographic regions.
- Categorize logs and write messages to the appropriate log file.
- Do not disclose sensitive information about the system or personal information about users. Scrub this information before it's logged, but ensure that the relevant details are retained. For example, remove the ID and password from any database connection strings, but write the remaining information to the log so that an analyst can determine that the system is accessing the correct database.
- Log all critical exceptions, but enable the administrator to turn logging on and off for lower levels of exceptions and warnings.
- Also, capture and log all retry logic information. This data can be useful in monitoring the transient health of the system.
- Trace out-of-process calls, such as requests to external web services or databases.
- Don't mix log messages with different security requirements in the same log file. For example, don't write debug and audit information to the same log.
- With the exception of auditing events, make sure that all logging calls are fire-and-forget operations that do not block the progress of business operations. Auditing events are exceptional because they are critical to the business and can be classified as a fundamental part of business operations.
- Make sure that logging is extensible and does not have any direct dependencies on a concrete target. For example, rather than writing information by using System.Diagnostics.Trace, define an abstract interface (such as ILogger) that exposes logging methods and that can be implemented through any appropriate means. (A minimal sketch follows this list.)
- Make sure that all logging is fail-safe and never triggers any cascading errors. Logging must not throw any exceptions.
- Treat instrumentation as an ongoing iterative process and review logs regularly, not just when there is a problem.
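The abstraction and fail-safety points above can be illustrated with a short sketch: an ILogger-style interface with one concrete implementation that swallows logging failures so they never propagate into business code. The class names echo the ILogger example mentioned in the list but are otherwise hypothetical.

```python
import abc
import logging

class ILogger(abc.ABC):
    """Abstract logging interface; callers never depend on a concrete target."""
    @abc.abstractmethod
    def log(self, level, message, **fields): ...

class StdLibLogger(ILogger):
    """One possible implementation backed by the standard logging module."""
    def __init__(self, name="app"):
        self._logger = logging.getLogger(name)

    def log(self, level, message, **fields):
        try:
            self._logger.log(level, "%s %s", message, fields)
        except Exception:
            pass   # logging is fail-safe: it must never raise into business code

logger: ILogger = StdLibLogger()
logger.log(logging.INFO, "order accepted", order_id=123, region="westus")
```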
Collecting and storing data
The collection stage of the monitoring process is concerned with retrieving the information that instrumentation generates, formatting this data to make it easier for the analysis/diagnosis stage to consume, and saving the transformed data in reliable storage. The instrumentation data that you gather from different parts of a distributed system can be held in a variety of locations and with varying formats. For example, your application code might generate trace log files and generate application event log data, whereas performance counters that monitor key aspects of the infrastructure that your application uses can be captured through other technologies. Any third-party components and services that your application uses might provide instrumentation information in different formats, by using separate trace files, blob storage, or even a custom data store.
Data collection is often performed through a collection service that can run autonomously from the application that generates the instrumentation data. Figure 2 illustrates an example of this architecture, highlighting the instrumentation data-collection subsystem.
Figure 2. Collecting instrumentation data.
Note that this is a simplified view. The collection service is not necessarily a single process and might comprise many constituent parts running on different machines, as described in the following sections. Additionally, if the analysis of some telemetry data must be performed quickly (hot analysis, as described in the section Supporting hot, warm, and cold analysis later in this document), local components that operate outside the collection service might perform the analysis tasks immediately. Figure 2 depicts this situation for selected events. After analytical processing, the results can be sent directly to the visualization and alerting subsystem. Data that's subjected to warm or cold analysis is held in storage while it awaits processing.
For Azure applications and services, Azure Diagnostics provides one possible solution for capturing data. Azure Diagnostics gathers data from the following sources for each compute node, aggregates it, and then uploads it to Azure Storage:
- IIS logs
- IIS Failed Request logs
- Windows event logs
- Performance counters
- Crash dumps
- Azure Diagnostics infrastructure logs
- Custom error logs
- .NET EventSource
- Manifest-based ETW
For more information, see the article Azure: Telemetry Basics and Troubleshooting.
Strategies for collecting instrumentation data
Considering the elastic nature of the cloud, and to avoid the necessity of manually retrieving telemetry data from every node in the system, you should arrange for the data to be transferred to a central location and consolidated. In a system that spans multiple datacenters, it might be useful to first collect, consolidate, and store data on a region-by-region basis, and then aggregate the regional data into a single central system.
To optimize the use of bandwidth, you can elect to transfer less urgent data in chunks, as batches. However, the data must not be delayed indefinitely, especially if it contains time-sensitive information.
Pulling and pushing instrumentation data
The instrumentation data-collection subsystem can actively retrieve instrumentation data from the various logs and other sources for each instance of the application (the pull model). Or, it can act as a passive receiver that waits for the data to be sent from the components that constitute each instance of the application (the push model).
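In skeletal form, a pull-model agent might look like the sketch below: it periodically reads any new lines from local log files and ships them to central storage. The file paths and the ship function are placeholders invented for illustration; Azure Diagnostics provides a production-grade equivalent of this mechanism, as described next.

```python
import glob
import os
import time

offsets = {}   # file path -> byte offset already shipped

def ship(lines, source):
    """Placeholder: forward collected lines to central storage (blob, queue, ...)."""
    print(f"shipping {len(lines)} lines from {source}")

def poll_local_logs(pattern="/var/log/myapp/*.log", interval_s=30):
    """Periodically pull any new lines from local log files and push them centrally."""
    while True:
        for path in glob.glob(pattern):
            with open(path, "r", errors="replace") as f:
                f.seek(offsets.get(path, 0))
                new_lines = f.readlines()
                offsets[path] = f.tell()
            if new_lines:
                ship(new_lines, source=os.path.basename(path))
        time.sleep(interval_s)
```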
One approach to implementing the pull model is to use monitoring agents that run locally with each instance of the application. A monitoring agent is a separate process that periodically retrieves (pulls) telemetry data collected at the local node and writes this information directly to centralized storage that all instances of the application share. This is the mechanism that Azure Diagnostics implements. Each instance of an Azure web or worker role can be configured to capture diagnostic and other trace information that's stored locally. The monitoring agent that runs alongside each instance copies the specified data to Azure Storage. The article Enabling Diagnostics in Azure Cloud Services and Virtual Machines provides more details on this process. Some elements, such as IIS logs, crash dumps, and custom error logs, are written to blob storage. Data from the Windows event log, ETW events, and performance counters is recorded in table storage. Figure 3 illustrates this mechanism.
Figure 3. Using a monitoring agent to pull information and write to shared storage.
NOTE
Using a monitoring agent is ideally suited to capturing instrumentation data that's naturally pulled from a data source. An example is information from SQL Server Dynamic Management Views or the length of an Azure Service Bus queue.
It's feasible to use the approach just described to store telemetry data for a small-scale application running on a limited number of nodes in a single location. However, a complex, highly scalable, global cloud application might generate huge volumes of data from hundreds of web and worker roles, database shards, and other services. This flood of data can easily overwhelm the I/O bandwidth available with a single, central location. Therefore, your telemetry solution must be scalable to prevent it from acting as a bottleneck as the system expands. Ideally, your solution should incorporate a degree of redundancy to reduce the risks of losing important monitoring information (such as auditing or billing data) if part of the system fails.
To address these issues, you can implement queuing, as shown in Figure 4. In this architecture, the local monitoring agent (if it can be configured appropriately) or custom data-collection service (if not) posts data to a queue. A separate process running asynchronously (the storage writing service in Figure 4) takes the data in this queue and writes it to shared storage. A message queue is suitable for this scenario because it provides "at least once" semantics that help ensure that queued data will not be lost after it's posted. You can implement the storage writing service by using a separate worker role.
Figure 4. Using a queue to buffer instrumentation data.
The local data-collection service can add data to a queue immediately after it's received. The queue acts as a buffer, and the storage writing service can retrieve and write the data at its own pace. By default, a queue operates on a first-in, first-out basis. But you can prioritize messages to accelerate them through the queue if they contain data that must be handled more quickly. For more information, see the Priority Queue pattern. Alternatively, you can use different channels (such as Service Bus topics) to direct data to different destinations depending on the form of analytical processing that's required.
For scalability, you can run multiple instances of the storage writing service. If there is a high volume of events, you can use an event hub to dispatch the data to different compute resources for processing and storage.
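The buffering idea can be sketched in-process as a bounded queue between the local collector and a storage writer that drains it at its own pace. In a real deployment the queue would be a durable service such as an Azure Storage queue or Service Bus, but the shape of the code is similar; the names below are illustrative.

```python
import queue
import threading

telemetry_queue = queue.Queue(maxsize=10_000)   # local buffer between agent and writer

def enqueue_event(event):
    """Called by the local collection agent; never blocks business operations."""
    try:
        telemetry_queue.put_nowait(event)
    except queue.Full:
        pass   # prefer dropping telemetry over stalling the application

def storage_writer(write_batch, batch_size=100):
    """Drains the queue and writes batches to shared storage at its own pace."""
    while True:
        batch = [telemetry_queue.get()]            # block until something arrives
        while len(batch) < batch_size:
            try:
                batch.append(telemetry_queue.get_nowait())
            except queue.Empty:
                break
        write_batch(batch)                         # e.g. append to table or blob storage

threading.Thread(target=storage_writer,
                 args=(lambda batch: print(f"wrote {len(batch)} events"),),
                 daemon=True).start()
```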
Consolidating instrumentation data
The instrumentation data that the data-collection service retrieves from a single instance of an application gives a localized view of the health and performance of that instance. To assess the overall health of the system, it's necessary to consolidate some aspects of the data in the local views. You can perform this after the data has been stored, but in some cases, you can also achieve it as the data is collected. Rather than being written directly to shared storage, the instrumentation data can pass through a separate data consolidation service that combines data and acts as a filter and cleanup process. For example, instrumentation data that includes the same correlation information such as an activity ID can be amalgamated. (It's possible that a user starts performing a business operation on one node and then gets transferred to another node in the event of node failure, or depending on how load balancing is configured.) This process can also detect and remove any duplicated data (always a possibility if the telemetry service uses message queues to push instrumentation data out to storage). Figure 5 illustrates an example of this structure.
Figure 5. Using a separate service to consolidate and clean up instrumentation data.
Storing instrumentation data
The previous discussions have depicted a rather simplistic view of the way in which instrumentation data is stored. In reality, it can make sense to store the different types of information by using technologies that are most appropriate to the way in which each type is likely to be used.
For example, Azure blob and table storage have some similarities in the way in which they're accessed. But they have limitations in the operations that you can perform by using them, and the granularity of the data that they hold is quite different. If you need to perform more analytical operations or require full-text search capabilities on the data, it might be more appropriate to use data storage that provides capabilities that are optimized for specific types of queries and data access. For example:
- Performance counter data can be stored in a SQL database to enable ad hoc analysis.
- Trace logs might be better stored in Azure Cosmos DB.
- Security information can be written to HDFS.
- Information that requires full-text search can be stored through Elasticsearch (which can also speed searches by using rich indexing).
You can implement an additional service that periodically retrieves the data from shared storage, partitions and filters the data according to its purpose, and then writes it to an appropriate set of data stores as shown in Figure 6. An alternative approach is to include this functionality in the consolidation and cleanup process and write the data directly to these stores as it's retrieved rather than saving it in an intermediate shared storage area. Each approach has its advantages and disadvantages. Implementing a separate partitioning service lessens the load on the consolidation and cleanup service, and it enables at least some of the partitioned data to be regenerated if necessary (depending on how much data is retained in shared storage). However, it consumes additional resources. Also, there might be a delay between the receipt of instrumentation data from each application instance and the conversion of this data into actionable information.
Figure 6. Partitioning data according to analytical and storage requirements.
The same instrumentation data might be required for more than one purpose. For example, performance counters can be used to provide a historical view of system performance over time. This information might be combined with other usage data to generate customer billing information. In these situations, the same data might be sent to more than one destination, such as a document database that can act as a long-term store for holding billing information, and a multidimensional store for handling complex performance analytics.
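A simple way to picture the partitioning step is a dispatcher that routes each consolidated record to every store that needs it. The record shape and sink names below are invented purely for illustration; the actual stores would be whatever technologies you chose per the list above.

```python
def partition_and_store(record, sinks):
    """Route one consolidated instrumentation record to every store that needs it."""
    kind = record.get("kind")
    if kind == "performance_counter":
        sinks["sql"](record)         # ad hoc analysis
        sinks["billing"](record)     # the same data can also feed billing reports
    elif kind == "trace":
        sinks["documents"](record)   # e.g. a document database
    elif kind == "security":
        sinks["archive"](record)     # e.g. HDFS or another long-term archive
    if record.get("alertable"):
        sinks["fast"](record)        # fast, indexed store for the alerting system

# Example usage with in-memory stand-ins for the real stores.
stores = {name: [] for name in ("sql", "billing", "documents", "archive", "fast")}
sinks = {name: stores[name].append for name in stores}
partition_and_store({"kind": "performance_counter", "alertable": True,
                     "counter": "cpu_percent", "value": 93}, sinks)
```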
You should also consider how urgently the data is required. Data that provides information for alerting must be accessed quickly, so it should be held in fast data storage and indexed or structured to optimize the queries that the alerting system performs. In some cases, it might be necessary for the telemetry service that gathers the data on each node to format and save data locally so that a local instance of the alerting system can quickly notify you of any issues. The same data can be dispatched to the storage writing service shown in the previous diagrams and stored centrally if it's also required for other purposes.
Information that's used for more considered analysis, for reporting, and for spotting historical trends is less urgent and can be stored in a manner that supports data mining and ad hoc queries. For more information, see the section Supporting hot, warm, and cold analysis later in this document.
Log rotation and data retention
Instrumentation can generate considerable volumes of data. This data can be held in several places, starting with the raw log files, trace files, and other information captured at each node to the consolidated, cleaned, and partitioned view of this data held in shared storage. In some cases, after the data has been processed and transferred, the original raw source data can be removed from each node. In other cases, it might be necessary or simply useful to save the raw information. For example, data that's generated for debugging purposes might be best left available in its raw form but can then be discarded quickly after any bugs have been rectified.
Performance data often has a longer life so that it can be used for spotting performance trends and for capacity planning. The consolidated view of this data is usually kept online for a finite period to enable fast access. After that, it can be archived or discarded. Data gathered for metering and billing customers might need to be saved indefinitely. Additionally, regulatory requirements might dictate that information collected for auditing and security purposes also needs to be archived and saved. This data is also sensitive and might need to be encrypted or otherwise protected to prevent tampering.
You should never record users' passwords or other information that might be used to commit identity fraud. Such details should be scrubbed from the data before it's stored.
Down-sampling
It's useful to store historical data so you can spot long-term trends. Rather than saving old data in its entirety, it might be possible to down-sample the data to reduce its resolution and save storage costs. As an example, rather than saving minute-by-minute performance indicators, you can consolidate data that's more than a month old to form an hour-by-hour view.
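A minimal down-sampling pass over minute-level samples might look like the sketch below, which collapses them into hourly averages. The input format is assumed to be (ISO-8601 UTC timestamp, value) pairs; a real pipeline would also decide which aggregates (average, maximum, percentiles) to retain.

```python
from collections import defaultdict
from datetime import datetime

def downsample_hourly(samples):
    """Collapse (ISO-8601 UTC timestamp, value) samples into hourly averages."""
    buckets = defaultdict(list)
    for ts, value in samples:
        hour = datetime.fromisoformat(ts).replace(minute=0, second=0, microsecond=0)
        buckets[hour].append(value)
    return {hour.isoformat(): sum(values) / len(values)
            for hour, values in buckets.items()}

minute_data = [("2019-03-01T10:01:00+00:00", 40.0),
               ("2019-03-01T10:31:00+00:00", 60.0),
               ("2019-03-01T11:05:00+00:00", 20.0)]
print(downsample_hourly(minute_data))
# {'2019-03-01T10:00:00+00:00': 50.0, '2019-03-01T11:00:00+00:00': 20.0}
```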
Best practices for collecting and storing logging information
The following list summarizes best practices for capturing and storing logging information:
- The monitoring agent or data-collection service should run as an out-of-process service and should be simple to deploy.
- All output from the monitoring agent or data-collection service should be an agnostic format that's independent of the machine, operating system, or network protocol. For example, emit information in a self-describing format such as JSON, MessagePack, or Protobuf rather than ETL/ETW. Using a standard format enables the system to construct processing pipelines; components that read, transform, and send data in the agreed format can be easily integrated.
- The monitoring and data-collection process must be fail-safe and must not trigger any cascading error conditions.
- In the event of a transient failure in sending information to a data sink, the monitoring agent or data-collection service should be prepared to reorder telemetry data so that the newest information is sent first. (The monitoring agent/data-collection service might elect to drop the older data, or save it locally and transmit it later to catch up, at its own discretion.)
Analyzing data and diagnosing issues
An important part of the monitoring and diagnostics process is analyzing the gathered data to obtain a picture of the overall well-being of the system. You should have defined your own KPIs and performance metrics, and it's important to understand how you can structure the data that has been gathered to meet your analysis requirements. It's also important to understand how the data that's captured in different metrics and log files is correlated, because this information can be key to tracking a sequence of events and help diagnose problems that arise.
As described in the section Consolidating instrumentation data, the data for each part of the system is typically captured locally, but it generally needs to be combined with data generated at other sites that participate in the system. This information requires careful correlation to ensure that data is combined accurately. For example, the usage data for an operation might span a node that hosts a website to which a user connects, a node that runs a separate service accessed as part of this operation, and data storage held on another node. This information needs to be tied together to provide an overall view of the resource and processing usage for the operation. Some pre-processing and filtering of data might occur on the node on which the data is captured, whereas aggregation and formatting are more likely to occur on a central node.
Supporting hot, warm, and cold analysis
Analyzing and reformatting data for visualization, reporting, and alerting purposes can be a complex process that consumes its own set of resources. Some forms of monitoring are time-critical and require immediate analysis of data to be effective. This is known as hot analysis. Examples include the analyses that are required for alerting and some aspects of security monitoring (such as detecting an attack on the system). Data that's required for these purposes must be quickly available and structured for efficient processing. In some cases, it might be necessary to move the analysis processing to the individual nodes where the data is held.
Other forms of analysis are less time-critical and might require some computation and aggregation after the raw data has been received. This is called warm analysis. Performance analysis often falls into this category. In this case, an isolated, single performance event is unlikely to be statistically significant. (It might be caused by a sudden spike or glitch.) The data from a series of events should provide a more reliable picture of system performance.
Warm analysis can also be used to help diagnose health issues. A health event is typically processed through hot analysis and can raise an alert immediately. An operator should be able to drill into the reasons for the health event by examining the data from the warm path. This data should contain information about the events leading up to the issue that caused the health event.
Some types of monitoring generate more long-term data. This analysis can be performed at a later date, possibly according to a predefined schedule.
In some cases, the analysis might need to perform complex filtering of large volumes of data captured over a period of time. This is called cold analysis. The key requirement is that the data is stored safely after it has been captured. For example, usage monitoring and auditing require an accurate picture of the state of the system at regular points in time, but this state information does not have to be available for processing immediately after it has been gathered.
An operator can also use cold analysis to provide the data for predictive health analysis. The operator can gather historical information over a specified period and use it in conjunction with the current health data (retrieved from the hot path) to spot trends that might soon cause health issues. In these cases, it might be necessary to raise an alert so that corrective action can be taken.
Correlating data
The data that instrumentation captures can provide a snapshot of the system state, but the purpose of analysis is to make this data actionable. For example: What has caused an intense I/O loading at the system level at a specific time? Is it the result of a large number of database operations? Is this reflected in the database response times, the number of transactions per second, and application response times at the same juncture? If so, one remedial action that might reduce the load might be to shard the data over more servers.
In addition, exceptions can arise as a result of a fault in any level of the system. An exception in one level often triggers another fault in the level above. For these reasons, you need to be able to correlate the different types of monitoring data at each level to produce an overall view of the state of the system and the applications that are running on it. You can then use this information to make decisions about whether the system is functioning acceptably or not, and determine what can be done to improve the quality of the system.
As described in the section Information for correlating data, you must ensure that the raw instrumentation data includes sufficient context and activity ID information to support the required aggregations for correlating events. Additionally, this data might be held in different formats, and it might be necessary to parse this information to convert it into a standardized format for analysis.
Troubleshooting and diagnosing issues
Diagnosis requires the ability to determine the cause of faults or unexpected behavior, including performing root cause analysis. The information that's required typically includes:
- Detailed information from event logs and traces, either for the entire system or for a specified subsystem during a specified time window.
- Complete stack traces resulting from exceptions and faults of any specified level that occur within the system or a specified subsystem during a specified period.
- Crash dumps for any failed processes either anywhere in the system or for a specified subsystem during a specified time window.
- Activity logs recording the operations that are performed either by all users or for selected users during a specified period.
Analyzing data for troubleshooting purposes often requires a deep technical understanding of the system architecture and the various components that compose the solution. As a result, a large degree of manual intervention is often required to interpret the data, establish the cause of problems, and recommend an appropriate strategy to correct them.
It might be appropriate simply to store a copy of this information in its original format and make it available for cold analysis by an expert.
Visualizing data and raising alerts
An important aspect of any monitoring system is the ability to present the data in such a way that an operator can quickly spot any trends or problems. Also important is the ability to quickly inform an operator if a significant event has occurred that might require attention. Data presentation can take several forms, including visualization by using dashboards, alerting, and reporting.
Visualization by using dashboards
The most common way to visualize data is to use dashboards that can display information as a series of charts, graphs, or some other illustration. These items can be parameterized, and an analyst should be able to select the important parameters (such as the time period) for any specific situation.
Dashboards can be organized hierarchically. Top-level dashboards can give an overall view of each aspect of the system but enable an operator to drill down to the details. For example, a dashboard that depicts the overall disk I/O for the system should allow an analyst to view the I/O rates for each individual disk to ascertain whether one or more specific devices account for a disproportionate volume of traffic. Ideally, the dashboard should also display related information, such as the source of each request (the user or activity) that's generating this I/O. This information can then be used to determine whether (and how) to spread the load more evenly across devices, and whether the system would perform better if more devices were added.
A dashboard might also use color-coding or some other visual cues to indicate values that appear anomalous or that are outside an expected range. Using the previous example:
- A disk with an I/O rate that's approaching its maximum capacity over an extended period (a hot disk) can be highlighted in red.
- A disk with an I/O rate that periodically runs at its maximum limit over short periods (a warm disk) can be highlighted in yellow.
- A disk that's exhibiting normal usage can be displayed in green.
Note that for a dashboard system to work effectively, it must have the raw data to work with. If you are building your own dashboard system, or using a dashboard developed by another organization, you must understand which instrumentation data you need to collect, at what levels of granularity, and how it should be formatted for the dashboard to consume.
A good dashboard not only displays information, it also enables an analyst to pose ad hoc questions about that information. Some systems provide management tools that an operator can use to perform these tasks and explore the underlying data. Alternatively, depending on the repository that's used to hold this information, it might be possible to query this data directly, or import it into tools such as Microsoft Excel for further analysis and reporting.
NOTE
You should restrict access to dashboards to authorized personnel, because this information might be commercially sensitive. You should also protect the underlying data for dashboards to prevent users from changing it.
Raising alerts
Alerting is the process of analyzing the monitoring and instrumentation data and generating a notification if a significant event is detected. Alerting helps ensure that the system remains healthy, responsive, and secure.
It's an important part of any system that makes performance, availability, and privacy guarantees to the users where the data might need to be acted on immediately. An operator might need to be notified of the event that triggered the alert. Alerting can also be used to invoke system functions such as autoscaling.
Alerting usually depends on the following instrumentation data:
- Security events. If the event logs indicate that repeated authentication and/or authorization failures are occurring, the system might be under attack and an operator should be informed.
- Performance metrics. The system must quickly respond if a particular performance metric exceeds a specified threshold.
- Availability information. If a fault is detected, it might be necessary to quickly restart one or more subsystems, or fail over to a backup resource. Repeated faults in a subsystem might indicate more serious concerns.
Operators might receive alert information by using many delivery channels such as email, a pager device, or an SMS text message. An alert might also include an indication of how critical a situation is. Many alerting systems support subscriber groups, and all operators who are members of the same group can receive the same set of alerts.
An alerting system should be customizable, and the appropriate values from the underlying instrumentation data can be provided as parameters. This approach enables an operator to filter data and focus on those thresholds or combinations of values that are of interest. Note that in some cases, the raw instrumentation data can be provided to the alerting system. In other situations, it might be more appropriate to supply aggregated data. (For example, an alert can be triggered if the CPU utilization for a node has exceeded 90 percent over the last 10 minutes.) The details provided to the alerting system should also include any appropriate summary and context information. This data can help reduce the possibility that false-positive events will trip an alert.
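A sliding-window evaluation of the "90 percent CPU over the last 10 minutes" example above can be sketched as follows. The threshold, window length, and coverage heuristic are illustrative assumptions; a production alerting system would also handle gaps in the data and suppress duplicate notifications.

```python
import time
from collections import deque

class ThresholdAlert:
    """Signal when a metric stays above a threshold for an entire window."""
    def __init__(self, threshold=90.0, window_s=600):
        self.threshold = threshold
        self.window_s = window_s
        self.samples = deque()             # (timestamp, value) pairs

    def observe(self, value, now=None):
        now = time.time() if now is None else now
        self.samples.append((now, value))
        while self.samples and self.samples[0][0] < now - self.window_s:
            self.samples.popleft()
        # Only alert once samples span (most of) the window and all exceed the threshold.
        spans_window = now - self.samples[0][0] >= self.window_s * 0.9
        return spans_window and all(v > self.threshold for _, v in self.samples)

cpu_alert = ThresholdAlert(threshold=90.0, window_s=600)
# if cpu_alert.observe(current_cpu_percent):
#     notify_operator("CPU above 90 percent for the last 10 minutes")
```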
Reporting
Reporting is used to generate an overall view of the system. It might incorporate historical data in addition to current information. Reporting requirements themselves fall into two broad categories: operational reporting and security reporting.
Operational reporting typically includes the following aspects:
- Aggregating statistics that you can use to understand resource utilization of the overall system or specified subsystems during a specified time window.
- Identifying trends in resource usage for the overall system or specified subsystems during a specified period.
- Monitoring the exceptions that have occurred throughout the system or in specified subsystems during a specified period.
- Determining the efficiency of the application in terms of the deployed resources, and understanding whether the volume of resources (and their associated cost) can be reduced without affecting performance unnecessarily.
Security reporting is concerned with tracking customers' use of the system. It can include:
- Auditing user operations. This requires recording the individual requests that each user performs, together with dates and times. The data should be structured to enable an administrator to quickly reconstruct the sequence of operations that a user performs over a specified period.
- Tracking resource use by user. This requires recording how each request for a user accesses the various resources that compose the system, and for how long. An administrator must be able to use this data to generate a utilization report by user over a specified period, possibly for billing purposes.
In many cases, batch processes can generate reports according to a defined schedule. (Latency is not normally an issue.) But they should also be available for generation on an ad hoc basis if needed. As an example, if you are storing data in a relational database such as Azure SQL Database, you can use a tool such as SQL Server Reporting Services to extract and format data and present it as a set of reports.
Related patterns and guidance
- Autoscaling guidance describes how to decrease management overhead by reducing the need for an operator to continually monitor the performance of a system and make decisions about adding or removing resources.
- Health Endpoint Monitoring pattern describes how to implement functional checks within an application that external tools can access through exposed endpoints at regular intervals.
- Priority Queue pattern shows how to prioritize queued messages so that urgent requests are received and can be processed before less urgent messages.
More information
- Monitor, diagnose, and troubleshoot Microsoft Azure Storage
- Azure: Telemetry Basics and Troubleshooting
- Enabling Diagnostics in Azure Cloud Services and Virtual Machines
- Azure Redis Cache, Azure Cosmos DB, and HDInsight
- How to use Service Bus queues
- SQL Server business intelligence in Azure Virtual Machines
- Receive alert notifications and Track service health
- Application Insights
Naming conventions for Azure resources
3/25/2019 • 9 minutes to read
This article is a summary of the naming rules and restrictions for Azure resources and a baseline set of recommendations for naming conventions. You can use these recommendations as a starting point for your own conventions specific to your needs.
The choice of a name for any resource in Microsoft Azure is important because:
- It is difficult to change a name later.
- Names must meet the requirements of their specific resource type.
- Consistent naming conventions make resources easier to locate. They can also indicate the role of a resource in a solution.
The key to success with naming conventions is establishing and following them across your applications and organizations.
Naming subscriptions
When naming Azure subscriptions, verbose names make understanding the context and purpose of each subscription clear. When working in an environment with many subscriptions, following a shared naming convention can improve clarity.
A recommended pattern for naming subscriptions is:
<Company> <Department (optional)> <Product Line (optional)> <Environment>
- Company would usually be the same for each subscription. However, some companies may have child companies within the organizational structure. These companies may be managed by a central IT group. In these cases, they could be differentiated by having both the parent company name (Contoso) and child company name (Northwind).
- Department is a name within the organization that contains a group of individuals. This item within the namespace is optional.
- Product line is a specific name for a product or function that is performed from within the department. This is generally optional for internal-facing services and applications. However, it is highly recommended for public-facing services that require easy separation and identification (such as for clear separation of billing records).
- Environment is the name that describes the deployment lifecycle of the applications or services, such as Dev, QA, or Prod.

| Company | Department | Product line or service | Environment | Full name |
| --- | --- | --- | --- | --- |
| Contoso | SocialGaming | AwesomeService | Production | Contoso SocialGaming AwesomeService Production |
| Contoso | SocialGaming | AwesomeService | Dev | Contoso SocialGaming AwesomeService Dev |
| Contoso | IT | InternalApps | Production | Contoso IT InternalApps Production |
| Contoso | IT | InternalApps | Dev | Contoso IT InternalApps Dev |

For more information on how to organize subscriptions for larger enterprises, see Azure enterprise scaffold prescriptive subscription governance.
Use affixes to avoid ambiguity
When naming resources in Azure, it is recommended to use common prefixes or suffixes to identify the type and context of the resource. While all the information about type, metadata, and context is available programmatically, applying common affixes simplifies visual identification. When incorporating affixes into your naming convention, it is important to clearly specify whether the affix is at the beginning of the name (prefix) or at the end (suffix).
For instance, here are two possible names for a service hosting a calculation engine:
- SvcCalculationEngine (prefix)
- CalculationEngineSvc (suffix)
Affixes can refer to different aspects that describe the particular resources. The following table shows some examples typically used.

| Aspect | Example | Notes |
| --- | --- | --- |
| Environment | dev, prod, QA | Identifies the environment for the resource |
| Location | uw (US West), ue (US East) | Identifies the region into which the resource is deployed |
| Instance | 1, 2, ... | For resources that have more than one named instance, such as VMs or NICs |
| Product or Service | service | Identifies the product, application, or service that the resource supports |
| Role | sql, web, messaging | Identifies the role of the associated resource |

When developing a specific naming convention for your company or projects, it is important to choose a common set of affixes and their position (suffix or prefix).
Naming rules and restrictions
Each resource or service type in Azure enforces a set of naming restrictions and scope; any naming convention or pattern must adhere to the requisite naming rules and scope. For example, while the name of a VM maps to a DNS name (and is thus required to be unique across all of Azure), the name of a VNET is scoped to the Resource Group that it is created within.
In general, avoid having any special characters (- or _) as the first or last character in any name. These characters will cause most validation rules to fail.
General

| Entity | Scope | Length | Casing | Valid characters | Suggested pattern | Example |
| --- | --- | --- | --- | --- | --- | --- |
| Resource Group | Subscription | 1-90 | Case insensitive | Alphanumeric, underscore, parentheses, hyphen, period (except at end), and Unicode characters that match the regex documented here | `<service short name>-<environment>-rg` | `profx-prod-rg` |
| Availability Set | Resource Group | 1-80 | Case insensitive | Alphanumeric, underscore, and hyphen | `<service short name>-<context>-as` | `profx-sql-as` |
| Tag | Associated Entity | 512 (name), 256 (value) | Case insensitive | Alphanumeric, special characters except `<`, `>`, `%`, `&`, `\`, `?`, `/`. See limitations here. | `"key" : "value"` | `"department" : "Central IT"` |
| Web App | Global | 1-60 | Case insensitive | Alphanumeric and hyphen | `<app_name>-<source-slot-name>` | `contoso-staging` |

Compute

| Entity | Scope | Length | Casing | Valid characters | Suggested pattern | Example |
| --- | --- | --- | --- | --- | --- | --- |
| Virtual Machine | Resource Group | 1-15 (Windows), 1-64 (Linux) | Case insensitive | Alphanumeric and hyphen | `<name>-<role>-vm<number>` | `profx-sql-vm1` |
| Function App | Global | 1-60 | Case insensitive | Alphanumeric and hyphen | `<name>-func` | `calcprofit-func` |

NOTE
Virtual machines in Azure have two distinct names: virtual machine name, and host name. When you create a VM in the portal, the same name is used for both the host name, and the virtual machine resource name. The restrictions above are for the host name. The actual resource name can have up to 64 characters.
Storage

| Entity | Scope | Length | Casing | Valid characters | Suggested pattern | Example |
| --- | --- | --- | --- | --- | --- | --- |
| Storage account name (data) | Global | 3-24 | Lowercase | Alphanumeric | `<globally unique name><number>` | `profxdata001` (use a function to calculate a unique guid for naming storage accounts) |
| Storage account name (disks) | Global | 3-24 | Lowercase | Alphanumeric | `<vm name without hyphens>st<number>` | `profxsql001st0` |
| Container name | Storage account | 3-63 | Lowercase | Alphanumeric and hyphen | `<context>` | `logs` |
| Blob name | Container | 1-1024 | Case sensitive | Any URL characters | `<variable based on blob usage>` | `<variable based on blob usage>` |
| Queue name | Storage account | 3-63 | Lowercase | Alphanumeric and hyphen | `<service short name>-<context>-<num>` | `awesomeservice-messages-001` |
| Table name | Storage account | 3-63 | Case insensitive | Alphanumeric | `<service short name><context>` | `awesomeservicelogs` |
| File name | Storage account | 3-63 | Lowercase | Alphanumeric | `<variable based on blob usage>` | `<variable based on blob usage>` |
| Data Lake Store | Global | 3-24 | Lowercase | Alphanumeric | `<name>dls` | `telemetrydls` |

Networking

| Entity | Scope | Length | Casing | Valid characters | Suggested pattern | Example |
| --- | --- | --- | --- | --- | --- | --- |
| Virtual Network (VNet) | Resource Group | 2-64 | Case insensitive | Alphanumeric, hyphen, underscore, and period | `<service short name>-vnet` | `profx-vnet` |
| Subnet | Parent VNet | 2-80 | Case insensitive | Alphanumeric, hyphen, underscore, and period | `<descriptive context>` | `web` |
| Network Interface | Resource Group | 1-80 | Case insensitive | Alphanumeric, hyphen, underscore, and period | `<vmname>-nic<num>` | `profx-sql1-vm1-nic1` |
| Network Security Group | Resource Group | 1-80 | Case insensitive | Alphanumeric, hyphen, underscore, and period | `<service short name>-<context>-nsg` | `profx-app-nsg` |
| Network Security Group Rule | Resource Group | 1-80 | Case insensitive | Alphanumeric, hyphen, underscore, and period | `<descriptive context>` | `sql-allow` |
| Public IP Address | Resource Group | 1-80 | Case insensitive | Alphanumeric, hyphen, underscore, and period | `<vm or service name>-pip` | `profx-sql1-vm1-pip` |
| Load Balancer | Resource Group | 1-80 | Case insensitive | Alphanumeric, hyphen, underscore, and period | `<service or role>-lb` | `profx-lb` |
| Load Balanced Rules Config | Load Balancer | 1-80 | Case insensitive | Alphanumeric, hyphen, underscore, and period | `<descriptive context>` | `http` |
| Azure Application Gateway | Resource Group | 1-80 | Case insensitive | Alphanumeric, hyphen, underscore, and period | `<service or role>-agw` | `profx-agw` |
| Traffic Manager Profile | Resource Group | 1-63 | Case insensitive | Alphanumeric, hyphen, and period | `<descriptive context>` | `app1` |

Containers

| Entity | Scope | Length | Casing | Valid characters | Suggested pattern | Example |
| --- | --- | --- | --- | --- | --- | --- |
| Container Registry | Global | 5-50 | Case insensitive | Alphanumeric | `<service short name>registry` | `app1registry` |
| ENTITY | SCOPE | LENGTH | CASING | VALID CHARACTERS | SUGGESTED PATTERN | EXAMPLE |
|---|---|---|---|---|---|---|
| Service Bus Namespace | Global | 6-50 | Case insensitive | Alphanumeric, hyphen, must start with a letter; see here for details | `<service short name>-bus` | app1-bus |

Organize resources with tags

The Azure Resource Manager supports tagging entities with arbitrary text strings to identify context and streamline automation. For example, the tag "sqlVersion"="sql2014ee" could identify VMs running SQL Server 2014 Enterprise Edition. Tags should be used to augment and enhance context alongside the naming conventions chosen.

TIP
One other advantage of tags is that tags span resource groups, allowing you to link and correlate entities across disparate deployments.

Each resource or resource group can have a maximum of 15 tags. The tag name is limited to 512 characters, and the tag value is limited to 256 characters. For more information on resource tagging, refer to Using tags to organize your Azure resources.

Some of the common tagging use cases are:

Billing. Grouping resources and associating them with billing or chargeback codes.
Service Context Identification. Identifying groups of resources across Resource Groups for common operations and grouping.
Access Control and Security Context. Administrative role identification based on portfolio, system, service, app, instance, etc.

TIP
Tag early, tag often. Better to have a baseline tagging scheme in place and adjust over time rather than having to retrofit after the fact.

An example of some common tagging approaches:

| TAG NAME | KEY | EXAMPLE | COMMENT |
|---|---|---|---|
| Bill To / Internal Chargeback ID | billTo | IT-Chargeback-1234 | An internal I/O or billing code |
| Operator or Directly Responsible Individual (DRI) | managedBy | joe@contoso.com | Alias or email address |
| Project Name | projectName | myproject | Name of the project or product line |
| Project Version | projectVersion | 3.4 | Version of the project or product line |
| Environment | environment | Production, Staging, QA | Environmental identifier |
| Tier | tier | Front End, Back End, Data | Tier or role/context identification |
| Data Profile | dataProfile | Public, Confidential, Restricted, Internal | Sensitivity of data stored in the resource |

Tips and tricks

Some types of resources may require additional care on naming and conventions.

Virtual machines
Especially in larger topologies, carefully naming virtual machines streamlines identifying the role and purpose of each machine, and enables more predictable scripting.

Storage accounts and storage entities
There are two primary use cases for storage accounts: backing disks for VMs, and storing data in blobs, queues, and tables. Storage accounts used for VM disks should follow the naming convention of associating them with the parent VM name (and with the potential need for multiple storage accounts for high-end VM SKUs, also apply a number suffix).

TIP
Storage accounts - whether for data or disks - should follow a naming convention that allows for multiple storage accounts to be leveraged (i.e., always using a numeric suffix).

It's possible to configure a custom domain name for accessing blob data in your Azure Storage account. The default endpoint for the Blob service is https://<name>.blob.core.windows.net. But if you map a custom domain (such as www.contoso.com) to the blob endpoint for your storage account, you can also access blob data in your storage account by using that domain.
For example, with a custom domain name, https://mystorage.blob.core.windows.net/mycontainer/myblob could be accessed as https://www.contoso.com/mycontainer/myblob. For more information about configuring this feature, refer to Configure a custom domain name for your Blob storage endpoint.

For more information on naming blobs, containers, and tables, refer to the following list:

Naming and Referencing Containers, Blobs, and Metadata
Naming Queues and Metadata
Naming Tables

A blob name can contain any combination of characters, but reserved URL characters must be properly escaped. Avoid blob names that end with a period (.), a forward slash (/), or a sequence or combination of the two. By convention, the forward slash is the virtual directory separator. Do not use a backward slash (\) in a blob name. The client APIs may allow it, but then fail to hash properly, and the signatures will not match.

It is not possible to modify the name of a storage account or container after it has been created. If you want to use a new name, you must delete it and create a new one.

TIP
We recommend that you establish a naming convention for all storage accounts and types before embarking on the development of a new service or application.

Transient fault handling

All applications that communicate with remote services and resources must be sensitive to transient faults. This is especially the case for applications that run in the cloud, where the nature of the environment and connectivity over the Internet means these types of faults are likely to be encountered more often. Transient faults include the momentary loss of network connectivity to components and services, the temporary unavailability of a service, or timeouts that arise when a service is busy. These faults are often self-correcting, and if the action is repeated after a suitable delay it is likely to succeed.

This document covers general guidance for transient fault handling. For information about handling transient faults when using Microsoft Azure services, see Azure service-specific retry guidelines.

Why do transient faults occur in the cloud?

Transient faults can occur in any environment, on any platform or operating system, and in any kind of application. In solutions that run on local, on-premises infrastructure, performance and availability of the application and its components is typically maintained through expensive and often under-used hardware redundancy, and components and resources are located close to each other. While this makes a failure less likely, it can still result in transient faults - and even an outage through unforeseen events such as external power supply or network issues, or other disaster scenarios.

Cloud hosting, including private cloud systems, can offer a higher overall availability by using shared resources, redundancy, automatic failover, and dynamic resource allocation across a huge number of commodity compute nodes. However, the nature of these environments can mean that transient faults are more likely to occur. There are several reasons for this:

Many resources in a cloud environment are shared, and access to these resources is subject to throttling in order to protect the resource. Some services will refuse connections when the load rises to a specific level, or a maximum throughput rate is reached, in order to allow processing of existing requests and to maintain performance of the service for all users.
Throttling helps to maintain the quality of service for neighbors and other tenants using the shared resource.

Cloud environments are built using vast numbers of commodity hardware units. They deliver performance by dynamically distributing the load across multiple computing units and infrastructure components, and deliver reliability by automatically recycling or replacing failed units. This dynamic nature means that transient faults and temporary connection failures may occasionally occur.

There are often more hardware components, including network infrastructure such as routers and load balancers, between the application and the resources and services it uses. This additional infrastructure can occasionally introduce additional connection latency and transient connection faults.

Network conditions between the client and the server may be variable, especially when communication crosses the Internet. Even in on-premises locations, very heavy traffic loads may slow communication and cause intermittent connection failures.

Challenges

Transient faults can have a huge effect on the perceived availability of an application, even if it has been thoroughly tested under all foreseeable circumstances. To ensure that cloud-hosted applications operate reliably, they must be able to respond to the following challenges:

The application must be able to detect faults when they occur, and determine if these faults are likely to be transient, more long-lasting, or are terminal failures. Different resources are likely to return different responses when a fault occurs, and these responses may also vary depending on the context of the operation; for example, the response for an error when reading from storage may be different from the response for an error when writing to storage. Many resources and services have well-documented transient failure contracts. However, where such information is not available, it may be difficult to discover the nature of the fault and whether it is likely to be transient.

The application must be able to retry the operation if it determines that the fault is likely to be transient, and keep track of the number of times the operation was retried.

The application must use an appropriate strategy for the retries. This strategy specifies the number of times it should retry, the delay between each attempt, and the actions to take after a failed attempt. The appropriate number of attempts and the delay between each one are often difficult to determine, and vary based on the type of resource as well as the current operating conditions of the resource and the application itself.

General guidelines

The following guidelines will help you to design a suitable transient fault handling mechanism for your applications:

Determine if there is a built-in retry mechanism: Many services provide an SDK or client library that contains a transient fault handling mechanism. The retry policy it uses is typically tailored to the nature and requirements of the target service. Alternatively, REST interfaces for services may return information that is useful in determining whether a retry is appropriate, and how long to wait before the next retry attempt. Use the built-in retry mechanism where one is available, unless you have specific and well-understood requirements that mean a different retry behavior is more appropriate.
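For example, the following sketch shows one way a client might honor a Retry-After hint returned by a REST interface before reattempting a call. It is illustrative only: the helper name, the choice of status codes treated as transient, and the fallback delay are assumptions rather than part of any specific SDK.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

static class RestRetryExample
{
    // Retries a GET when the service signals transient overload, honoring Retry-After if present.
    public static async Task<HttpResponseMessage> GetWithRetryAfterAsync(HttpClient client, Uri uri, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            HttpResponseMessage response = await client.GetAsync(uri);

            // 429 (Too Many Requests) and 503 (Service Unavailable) commonly indicate transient throttling or overload.
            bool transient = response.StatusCode == (HttpStatusCode)429 ||
                             response.StatusCode == HttpStatusCode.ServiceUnavailable;

            if (!transient || attempt == maxAttempts)
            {
                return response;
            }

            // Prefer the delay suggested by the service; fall back to a short default if no hint is returned.
            TimeSpan delay = response.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(2);
            await Task.Delay(delay);
        }
    }
}
```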
Determine if the operation is suitable for retrying: You should only retry operations where the faults are transient (typically indicated by the nature of the error), and if there is at least some likelihood that the operation will succeed when reattempted. There is no point in reattempting operations that indicate an invalid operation, such as a database update to an item that does not exist, or requests to a service or resource that has suffered a fatal error.

In general, you should implement retries only where the full impact of this can be determined, and the conditions are well understood and can be validated. If not, leave it to the calling code to implement retries. Remember that the errors returned from resources and services outside your control may evolve over time, and you may need to revisit your transient fault detection logic.

When you create services or components, consider implementing error codes and messages that will help clients determine whether they should retry failed operations. In particular, indicate if the client should retry the operation (perhaps by returning an isTransient value) and suggest a suitable delay before the next retry attempt. If you build a web service, consider returning custom errors defined within your service contracts. Even though generic clients may not be able to read these, they will be useful when building custom clients.

Determine an appropriate retry count and interval: It is vital to optimize the retry count and the interval to the type of use case. If you do not retry a sufficient number of times, the application will be unable to complete the operation and is likely to experience a failure. If you retry too many times, or with too short an interval between tries, the application can potentially hold resources such as threads, connections, and memory for long periods, which will adversely affect the health of the application.

The appropriate values for the time interval and the number of retry attempts depend on the type of operation being attempted. For example, if the operation is part of a user interaction, the interval should be short and only a few retries attempted to avoid making users wait for a response (which holds open connections and can reduce availability for other users). If the operation is part of a long-running or critical workflow, where cancelling and restarting the process is expensive or time-consuming, it is appropriate to wait longer between attempts and retry more times.

Determining the appropriate intervals between retries is the most difficult part of designing a successful strategy. Typical strategies use the following types of retry interval:

Exponential back-off. The application waits a short time before the first retry, and then exponentially increasing times between each subsequent retry. For example, it may retry the operation after 3 seconds, 12 seconds, 30 seconds, and so on.

Incremental intervals. The application waits a short time before the first retry, and then incrementally increasing times between each subsequent retry. For example, it may retry the operation after 3 seconds, 7 seconds, 13 seconds, and so on.

Regular intervals. The application waits for the same period of time between each attempt. For example, it may retry the operation every 3 seconds.

Immediate retry. Sometimes a transient fault is extremely short, perhaps caused by an event such as a network packet collision or a spike in a hardware component.
In this case, retrying the operation immediately is appropriate because it may succeed if the fault has cleared in the time it takes the application to assemble and send the next request. However, there should never be more than one immediate retry attempt, and you should switch to alternative strategies, such as exponential back-off or fallback actions, if the immediate retry fails.

Randomization. Any of the retry strategies listed above may include a randomization to prevent multiple instances of the client sending subsequent retry attempts at the same time. For example, one instance may retry the operation after 3 seconds, 11 seconds, 28 seconds, and so on, while another instance may retry the operation after 4 seconds, 12 seconds, 26 seconds, and so on. Randomization is a useful technique that may be combined with other strategies.

As a general guideline, use an exponential back-off strategy for background operations, and immediate or regular interval retry strategies for interactive operations. In both cases, you should choose the delay and the retry count so that the maximum latency for all retry attempts is within the required end-to-end latency.

Take into account the combination of all the factors that contribute to the overall maximum timeout for a retried operation. These factors include the time taken for a failed connection to produce a response (typically set by a timeout value in the client) as well as the delay between retry attempts and the maximum number of retries. The total of all these times can result in very long overall operation times, especially when using an exponential delay strategy where the interval between retries grows rapidly after each failure. If a process must meet a specific service level agreement (SLA), the overall operation time, including all timeouts and delays, must be within that defined in the SLA.

Over-aggressive retry strategies, which have too short intervals or too many retries, can have an adverse effect on the target resource or service. This may prevent the resource or service from recovering from its overloaded state, and it will continue to block or refuse requests. This results in a vicious circle where more and more requests are sent to the resource or service, and consequently its ability to recover is further reduced.

Take into account the timeout of the operations when choosing the retry intervals to avoid launching a subsequent attempt immediately (for example, if the timeout period is similar to the retry interval). Also consider if you need to keep the total possible period (the timeout plus the retry intervals) to below a specific total time. Operations that have unusually short or very long timeouts may influence how long to wait, and how often to retry the operation.

Use the type of the exception and any data it contains, or the error codes and messages returned from the service, to optimize the interval and the number of retries. For example, some exceptions or error codes (such as the HTTP code 503 Service Unavailable with a Retry-After header in the response) may indicate how long the error might last, or that the service has failed and will not respond to any subsequent attempt.

Avoid anti-patterns: In the vast majority of cases, you should avoid implementations that include duplicated layers of retry code.
Avoid designs that include cascading retry mechanisms, or that implement retry at every stage of an operation that involves a hierarchy of requests, unless you have specific requirements that demand this. In these exceptional circumstances, use policies that prevent excessive numbers of retries and delay periods, and make sure you understand the consequences. For example, if one component makes a request to another, which then accesses the target service, and you implement retry with a count of three on both calls, there will be nine retry attempts in total against the service. Many services and resources implement a built-in retry mechanism, and you should investigate how you can disable or modify this if you need to implement retries at a higher level.

Never implement an endless retry mechanism. This is likely to prevent the resource or service recovering from overload situations, and cause throttling and refused connections to continue for a longer period. Use a finite number of retries, or implement a pattern such as Circuit Breaker to allow the service to recover.

Never perform an immediate retry more than once.

Avoid using a regular retry interval, especially when you have a large number of retry attempts, when accessing services and resources in Azure. The optimum approach in this scenario is an exponential back-off strategy with a circuit-breaking capability.

Prevent multiple instances of the same client, or multiple instances of different clients, from sending retries at the same time. If this is likely to occur, introduce randomization into the retry intervals.

Test your retry strategy and implementation: Ensure you fully test your retry strategy implementation under as wide a set of circumstances as possible, especially when both the application and the target resources or services it uses are under extreme load. To check behavior during testing, you can:

Inject transient and non-transient faults into the service. For example, send invalid requests or add code that detects test requests and responds with different types of errors. For an example using TestApi, see Fault Injection Testing with TestApi and Introduction to TestApi – Part 5: Managed Code Fault Injection APIs.

Create a mock of the resource or service that returns a range of errors that the real service may return. Ensure you cover all the types of error that your retry strategy is designed to detect.

Force transient errors to occur by temporarily disabling or overloading the service if it is a custom service that you created and deployed (you should not, of course, attempt to overload any shared resources or shared services within Azure).

For HTTP-based APIs, consider using the FiddlerCore library in your automated tests to change the outcome of HTTP requests, either by adding extra round-trip times or by changing the response (such as the HTTP status code, headers, body, or other factors). This enables deterministic testing of a subset of the failure conditions, whether transient faults or other types of failure. For more information, see FiddlerCore. For examples of how to use the library, particularly the HttpMangler class, examine the source code for the Azure Storage SDK. A simple in-process alternative is sketched below.

Perform high load factor and concurrent tests to ensure that the retry mechanism and strategy works correctly under these conditions, and does not have an adverse effect on the operation of the client or cause cross-contamination between requests.
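As a lightweight alternative to an external tool, a small test double can stand in for an HTTP-based service and inject transient failures on demand so that a test can verify the retry strategy recovers. The following sketch is illustrative only; the class name and the failure count are assumptions.

```csharp
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Fails the first N requests with 503 Service Unavailable and then forwards calls normally.
class TransientFaultInjectionHandler : DelegatingHandler
{
    private int _remainingFailures;

    public TransientFaultInjectionHandler(HttpMessageHandler innerHandler, int failuresBeforeSuccess = 2)
        : base(innerHandler)
    {
        _remainingFailures = failuresBeforeSuccess;
    }

    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (Interlocked.Decrement(ref _remainingFailures) >= 0)
        {
            // Simulate a transient fault without calling the real service.
            return Task.FromResult(new HttpResponseMessage(HttpStatusCode.ServiceUnavailable)
            {
                RequestMessage = request
            });
        }

        return base.SendAsync(request, cancellationToken);
    }
}

// Usage in a test (illustrative):
// var client = new HttpClient(new TransientFaultInjectionHandler(new HttpClientHandler()));
```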
Manage retry policy configurations: A retry policy is a combination of all of the elements of your retry strategy. It defines the detection mechanism that determines whether a fault is likely to be transient, the type of interval to use (such as regular, exponential back-off, and randomization), the actual interval value(s), and the number of times to retry. Retries must be implemented in many places within even the simplest application, and in every layer of more complex applications. Rather than hard-coding the elements of each policy at multiple locations, consider using a central point for storing all the policies. For example, store the values such as the interval and retry count in application configuration files, read them at runtime, and programmatically build the retry policies. This makes it easier to manage the settings, and to modify and fine tune the values in order to respond to changing requirements and scenarios. However, design the system to store the values rather than rereading a configuration file every time, and ensure suitable defaults are used if the values cannot be obtained from configuration. In an Azure Cloud Services application, consider storing the values that are used to build the retry policies at runtime in the service configuration file so that they can be changed without needing to restart the application. Take advantage of built-in or default retry strategies available in the client APIs you use, but only where they are appropriate for your scenario. These strategies are typically general-purpose. In some scenarios they may be all that is required, but in other scenarios they may not offer the full range of options to suit your specific requirements. You must understand how the settings will affect your application through testing to determine the most appropriate values. Log and track transient and non-transient faults: As part of your retry strategy, include exception handling and other instrumentation that logs when retry attempts are made. While an occasional transient failure and retry are to be expected, and do not indicate a problem, regular and increasing numbers of retries are often an indicator of an issue that may cause a failure, or is currently degrading application performance and availability. Log transient faults as Warning entries rather than Error entries so that monitoring systems do not detect them as application errors that may trigger false alerts. Consider storing a value in your log entries that indicates if the retries were caused by throttling in the service, or by other types of faults such as connection failures, so that you can differentiate them during analysis of the data. An increase in the number of throttling errors is often an indicator of a design flaw in the application or the need to switch to a premium service that offers dedicated hardware. Consider measuring and logging the overall time taken for operations that include a retry mechanism. This is a good indicator of the overall effect of transient faults on user response times, process latency, and the efficiency of the application use cases. Also log the number of retries occurred in order to understand the factors that contributed to the response time. Consider implementing a telemetry and monitoring system that can raise alerts when the number and rate of failures, the average number of retries, or the overall times taken for operations to succeed, is increasing. 
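To make these recommendations concrete, the following sketch shows one shape a centrally defined retry helper might take: the retry count and base delay are values that would normally be read from configuration at startup, the back-off is exponential with randomization (jitter), and each retry is logged as a warning rather than an error. The helper name, delegate parameters, and default values are illustrative assumptions, not part of any Azure SDK.

```csharp
using System;
using System.Threading.Tasks;

static class RetryHelper
{
    // In a real application these values would be read once from configuration at startup.
    private static readonly int MaxRetries = 5;
    private static readonly TimeSpan BaseDelay = TimeSpan.FromSeconds(2);
    private static readonly Random Jitter = new Random(); // Not thread-safe; fine for a sketch.

    public static async Task<T> ExecuteAsync<T>(
        Func<Task<T>> operation,
        Func<Exception, bool> isTransient,
        Action<string> logWarning)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (Exception ex) when (isTransient(ex) && attempt <= MaxRetries)
            {
                // Exponential back-off with jitter: base * 2^(attempt - 1), plus up to 1 second of randomness.
                var delay = TimeSpan.FromMilliseconds(
                    BaseDelay.TotalMilliseconds * Math.Pow(2, attempt - 1) + Jitter.Next(0, 1000));

                // Transient faults are expected occasionally, so log them as warnings rather than errors.
                logWarning($"Transient fault on attempt {attempt}; retrying in {delay.TotalSeconds:F1}s. {ex.Message}");
                await Task.Delay(delay);
            }
        }
    }
}
```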
Manage operations that continually fail: There will be circumstances where the operation continues to fail at every attempt, and it is vital to consider how you will handle this situation: Although a retry strategy will define the maximum number of times that an operation should be retried, it does not prevent the application repeating the operation again, with the same number of retries. For example, if an order processing service fails with a fatal error that puts it out of action permanently, the retry strategy may detect a connection timeout and consider it to be a transient fault. The code will retry the operation a specified number of times and then give up. However, when another customer places an order, the operation will be attempted again - even though it is sure to fail every time. To prevent continual retries for operations that continually fail, consider implementing the Circuit Breaker pattern. In this pattern, if the number of failures within a specified time window exceeds the threshold, requests are returned to the caller immediately as errors, without attempting to access the failed resource or service. The application can periodically test the service, on an intermittent basis and with very long intervals between requests, to detect when it becomes available. An appropriate interval will depend on the scenario, such as the criticality of the operation and the nature of the service, and might be anything between a few minutes and several hours. At the point where the test succeeds, the application can resume normal operations and pass requests to the newly recovered service. In the meantime, it may be possible to fall back to another instance of the service (perhaps in a different datacenter or application), use a similar service that offers compatible (perhaps simpler) functionality, or perform some alternative operations in the hope that the service will become available soon. For example, it may be appropriate to store requests for the service in a queue or data store and replay them later. Otherwise you might be able to redirect the user to an alternative instance of the application, degrade the performance of the application but still offer acceptable functionality, or just return a message to the user indicating that the application is not available at present. Other considerations When deciding on the values for the number of retries and the retry intervals for a policy, consider if the operation on the service or resource is part of a long-running or multi-step operation. It may be difficult or expensive to compensate all the other operational steps that have already succeeded when one fails. In this case, a very long interval and a large number of retries may be acceptable as long as it does not block other operations by holding or locking scarce resources. Consider if retrying the same operation may cause inconsistencies in data. If some parts of a multistep process are repeated, and the operations are not idempotent, it may result in an inconsistency. For example, an operation that increments a value, if repeated, will produce an invalid result. Repeating an operation that sends a message to a queue may cause an inconsistency in the message consumer if it cannot detect duplicate messages. To prevent this, ensure that you design each step as an idempotent operation. For more information about idempotency, see Idempotency patterns. Consider the scope of the operations that will be retried. 
For example, it may be easier to implement retry code at a level that encompasses several operations, and retry them all if one fails. However, doing this may result in idempotency issues or unnecessary rollback operations.

If you choose a retry scope that encompasses several operations, take into account the total latency of all of them when determining the retry intervals, when monitoring the time taken, and before raising alerts for failures.

Consider how your retry strategy may affect neighbors and other tenants in a shared application, or when using shared resources and services. Aggressive retry policies can cause an increasing number of transient faults to occur for these other users and for applications that share the resources and services. Likewise, your application may be affected by the retry policies implemented by other users of the resources and services. For mission-critical applications, you may decide to use premium services that are not shared. This provides you with much more control over the load and consequent throttling of these resources and services, which can help to justify the additional cost.

More information

Azure service-specific retry guidelines
Circuit Breaker pattern
Compensating Transaction pattern
Idempotency patterns

Retry guidance for specific services

Most Azure services and client SDKs include a retry mechanism. However, these differ because each service has different characteristics and requirements, and so each retry mechanism is tuned to a specific service. This guide summarizes the retry mechanism features for the majority of Azure services, and includes information to help you use, adapt, or extend the retry mechanism for that service.

For general guidance on handling transient faults, and retrying connections and operations against services and resources, see Retry guidance.

The following table summarizes the retry features for the Azure services described in this guidance.

| SERVICE | RETRY CAPABILITIES | POLICY CONFIGURATION | SCOPE | TELEMETRY FEATURES |
|---|---|---|---|---|
| Azure Active Directory | Native in ADAL library | Embedded into ADAL library | Internal | None |
| Cosmos DB | Native in service | Non-configurable | Global | TraceSource |
| Data Lake Store | Native in client | Non-configurable | Individual operations | None |
| Event Hubs | Native in client | Programmatic | Client | None |
| IoT Hub | Native in client SDK | Programmatic | Client | None |
| Redis Cache | Native in client | Programmatic | Client | TextWriter |
| Search | Native in client | Programmatic | Client | ETW or Custom |
| Service Bus | Native in client | Programmatic | Namespace Manager, Messaging Factory, and Client | ETW |
| Service Fabric | Native in client | Programmatic | Client | None |
| SQL Database with ADO.NET | Polly | Declarative and programmatic | Single statements or blocks of code | Custom |
| SQL Database with Entity Framework | Native in client | Programmatic | Global per AppDomain | None |
| SQL Database with Entity Framework Core | Native in client | Programmatic | Global per AppDomain | None |
| Storage | Native in client | Programmatic | Client and individual operations | TraceSource |

NOTE
For most of the Azure built-in retry mechanisms, there is currently no way to apply a different retry policy for different types of error or exception. You should configure a policy that provides the optimum average performance and availability. One way to fine-tune the policy is to analyze log files to determine the type of transient faults that are occurring.
Azure Active Directory

Azure Active Directory (Azure AD) is a comprehensive identity and access management cloud solution that combines core directory services, advanced identity governance, security, and application access management. Azure AD also offers developers an identity management platform to deliver access control to their applications, based on centralized policy and rules.

NOTE
For retry guidance on Managed Service Identity endpoints, see How to use an Azure VM Managed Service Identity (MSI) for token acquisition.

Retry mechanism
There is a built-in retry mechanism for Azure Active Directory in the Active Directory Authentication Library (ADAL). To avoid unexpected lockouts, we recommend that third-party libraries and application code do not retry failed connections, but allow ADAL to handle retries.

Retry usage guidance
Consider the following guidelines when using Azure Active Directory:

When possible, use the ADAL library and the built-in support for retries.
If you are using the REST API for Azure Active Directory, retry the operation if the result code is 429 (Too Many Requests) or an error in the 5xx range. Do not retry for any other errors.
An exponential back-off policy is recommended for use in batch scenarios with Azure Active Directory.

Consider starting with the following settings for retrying operations. These are general purpose settings, and you should monitor the operations and fine-tune the values to suit your own scenario.

| CONTEXT | SAMPLE TARGET E2E MAX LATENCY | RETRY STRATEGY | SETTINGS | VALUES | HOW IT WORKS |
|---|---|---|---|---|---|
| Interactive, UI, or foreground | 2 sec | FixedInterval | Retry count / Retry interval / First fast retry | 3 / 500 ms / true | Attempt 1 - delay 0 sec; Attempt 2 - delay 500 ms; Attempt 3 - delay 500 ms |
| Background or batch | 60 sec | ExponentialBackoff | Retry count / Min back-off / Max back-off / Delta back-off / First fast retry | 5 / 0 sec / 60 sec / 2 sec / false | Attempt 1 - delay 0 sec; Attempt 2 - delay ~2 sec; Attempt 3 - delay ~6 sec; Attempt 4 - delay ~14 sec; Attempt 5 - delay ~30 sec |

More information
Azure Active Directory Authentication Libraries

Cosmos DB

Cosmos DB is a fully-managed multi-model database that supports schema-less JSON data. It offers configurable and reliable performance, native JavaScript transactional processing, and is built for the cloud with elastic scale.

Retry mechanism
The DocumentClient class automatically retries failed attempts. To set the number of retries and the maximum wait time, configure ConnectionPolicy.RetryOptions. Exceptions that the client raises are either beyond the retry policy or are not transient errors.

If Cosmos DB throttles the client, it returns an HTTP 429 error. Check the status code in the DocumentClientException.

Policy configuration
The following table shows the default settings for the RetryOptions class.

| SETTING | DEFAULT VALUE | DESCRIPTION |
|---|---|---|
| MaxRetryAttemptsOnThrottledRequests | 9 | The maximum number of retries if the request fails because Cosmos DB applied rate limiting on the client. |
| MaxRetryWaitTimeInSeconds | 30 | The maximum retry time in seconds. |

Example

```csharp
DocumentClient client = new DocumentClient(new Uri(endpoint), authKey);
var options = client.ConnectionPolicy.RetryOptions;
options.MaxRetryAttemptsOnThrottledRequests = 5;
options.MaxRetryWaitTimeInSeconds = 15;
```

Telemetry
Retry attempts are logged as unstructured trace messages through a .NET TraceSource. You must configure a TraceListener to capture the events and write them to a suitable destination log.
For example, if you add the following to your App.config file, traces will be generated in a text file in the same location as the executable: <configuration> <system.diagnostics> <switches> <add name="SourceSwitch" value="Verbose"/> </switches> <sources> <source name="DocDBTrace" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" > <listeners> <add name="MyTextListener" type="System.Diagnostics.TextWriterTraceListener" traceOutputOptions="DateTime,ProcessId,ThreadId" initializeData="CosmosDBTrace.txt"></add> </listeners> </source> </sources> </system.diagnostics> </configuration> Event Hubs Azure Event Hubs is a hyper-scale telemetry ingestion service that collects, transforms, and stores millions of events. Retry mechanism Retry behavior in the Azure Event Hubs Client Library is controlled by the RetryPolicy property on the EventHubClient class. The default policy retries with exponential backoff when Azure Event Hub returns a transient EventHubsException or an OperationCanceledException . Example EventHubClient client = EventHubClient.CreateFromConnectionString("[event_hub_connection_string]"); client.RetryPolicy = RetryPolicy.Default; More information .NET Standard client library for Azure Event Hubs IoT Hub Azure IoT Hub is a service for connecting, monitoring, and managing devices to develop Internet of Things (IoT) applications. Retry mechanism The Azure IoT device SDK can detect errors in the network, protocol, or application. Based on the error type, the SDK checks whether a retry needs to be performed. If the error is recoverable, the SDK begins to retry using the configured retry policy. The default retry policy is exponential back-off with random jitter, but it can be configured. Policy configuration Policy configuration differs by language. For more details, see IoT Hub retry policy configuration. More information IoT Hub retry policy Troubleshoot IoT Hub device disconnection Azure Redis Cache Azure Redis Cache is a fast data access and low latency cache service based on the popular open source Redis Cache. It is secure, managed by Microsoft, and is accessible from any application in Azure. The guidance in this section is based on using the StackExchange.Redis client to access the cache. A list of other suitable clients can be found on the Redis website, and these may have different retry mechanisms. Note that the StackExchange.Redis client uses multiplexing through a single connection. The recommended usage is to create an instance of the client at application startup and use this instance for all operations against the cache. For this reason, the connection to the cache is made only once, and so all of the guidance in this section is related to the retry policy for this initial connection — and not for each operation that accesses the cache. Retry mechanism The StackExchange.Redis client uses a connection manager class that is configured through a set of options, including: ConnectRetry. The number of times a failed connection to the cache will be retried. ReconnectRetryPolicy. The retry strategy to use. ConnectTimeout. The maximum waiting time in milliseconds. Policy configuration Retry policies are configured programmatically by setting the options for the client before connecting to the cache. This can be done by creating an instance of the ConfigurationOptions class, populating its properties, and passing it to the Connect method. The built-in classes support linear (constant) delay and exponential backoff with randomized retry intervals. 
You can also create a custom retry policy by implementing the IReconnectRetryPolicy interface. The following example configures a retry strategy using exponential backoff.

```csharp
// TotalMilliseconds is used here; the Milliseconds property would return only the
// milliseconds component of the TimeSpan (0 for whole seconds).
var deltaBackOffInMilliseconds = (int)TimeSpan.FromSeconds(5).TotalMilliseconds;
var maxDeltaBackOffInMilliseconds = (int)TimeSpan.FromSeconds(20).TotalMilliseconds;
var options = new ConfigurationOptions
{
    EndPoints = {"localhost"},
    ConnectRetry = 3,
    ReconnectRetryPolicy = new ExponentialRetry(deltaBackOffInMilliseconds, maxDeltaBackOffInMilliseconds),
    ConnectTimeout = 2000
};
ConnectionMultiplexer redis = ConnectionMultiplexer.Connect(options, writer);
```

Alternatively, you can specify the options as a string, and pass this to the Connect method. Note that the ReconnectRetryPolicy property cannot be set this way, only through code.

```csharp
var options = "localhost,connectRetry=3,connectTimeout=2000";
ConnectionMultiplexer redis = ConnectionMultiplexer.Connect(options, writer);
```

You can also specify options directly when you connect to the cache.

```csharp
var conn = ConnectionMultiplexer.Connect("redis0:6380,redis1:6380,connectRetry=3");
```

For more information, see Stack Exchange Redis Configuration in the StackExchange.Redis documentation.

The following table shows the default settings for the built-in retry policy.

| CONTEXT | SETTING | DEFAULT VALUE (V 1.2.2) | MEANING |
|---|---|---|---|
| ConfigurationOptions | ConnectRetry | 3 | The number of times to repeat connect attempts during the initial connection operation. |
| ConfigurationOptions | ConnectTimeout | Maximum 5000 ms plus SyncTimeout | Timeout (ms) for connect operations. Not a delay between retry attempts. |
| ConfigurationOptions | SyncTimeout | 1000 | Time (ms) to allow for synchronous operations. |
| ConfigurationOptions | ReconnectRetryPolicy | LinearRetry 5000 ms | Retry every 5000 ms. |

NOTE
For synchronous operations, SyncTimeout can add to the end-to-end latency, but setting the value too low can cause excessive timeouts. See How to troubleshoot Azure Redis Cache. In general, avoid using synchronous operations, and use asynchronous operations instead. For more information, see Pipelines and Multiplexers.

Retry usage guidance
Consider the following guidelines when using Azure Redis Cache:

The StackExchange Redis client manages its own retries, but only when establishing a connection to the cache when the application first starts. You can configure the connection timeout, the number of retry attempts, and the time between retries to establish this connection, but the retry policy does not apply to operations against the cache.
Instead of using a large number of retry attempts, consider falling back by accessing the original data source instead.

Telemetry
You can collect information about connections (but not other operations) using a TextWriter.

```csharp
var writer = new StringWriter();
ConnectionMultiplexer redis = ConnectionMultiplexer.Connect(options, writer);
```

An example of the output this generates is shown below.
```
localhost:6379,connectTimeout=2000,connectRetry=3
1 unique nodes specified
Requesting tie-break from localhost:6379 > __Booksleeve_TieBreak...
Allowing endpoints 00:00:02 to respond...
localhost:6379 faulted: SocketFailure on PING
localhost:6379 failed to nominate (Faulted)
> UnableToResolvePhysicalConnection on GET
No masters detected
localhost:6379: Standalone v2.0.0, master; keep-alive: 00:01:00; int: Connecting; sub: Connecting; not in use: DidNotRespond
localhost:6379: int ops=0, qu=0, qs=0, qc=1, wr=0, sync=1, socks=2; sub ops=0, qu=0, qs=0, qc=0, wr=0, socks=2
Circular op-count snapshot; int: 0 (0.00 ops/s; spans 10s); sub: 0 (0.00 ops/s; spans 10s)
Sync timeouts: 0; fire and forget: 0; last heartbeat: -1s ago
resetting failing connections to retry...
retrying; attempts left: 2...
...
```

Examples
The following code example configures a constant (linear) delay between retries when initializing the StackExchange.Redis client. This example shows how to set the configuration using a ConfigurationOptions instance.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using StackExchange.Redis;

namespace RetryCodeSamples
{
    class CacheRedisCodeSamples
    {
        public async static Task Samples()
        {
            var writer = new StringWriter();
            {
                try
                {
                    // Delay between retries. TotalMilliseconds is used so that 4 seconds becomes
                    // 4000 ms (the Milliseconds property would return 0 here).
                    var retryTimeInMilliseconds = (int)TimeSpan.FromSeconds(4).TotalMilliseconds;

                    // Using object-based configuration.
                    var options = new ConfigurationOptions
                    {
                        EndPoints = { "localhost" },
                        ConnectRetry = 3,
                        ReconnectRetryPolicy = new LinearRetry(retryTimeInMilliseconds)
                    };
                    ConnectionMultiplexer redis = ConnectionMultiplexer.Connect(options, writer);

                    // Store a reference to the multiplexer for use in the application.
                }
                catch
                {
                    Console.WriteLine(writer.ToString());
                    throw;
                }
            }
        }
    }
}
```

The next example sets the configuration by specifying the options as a string. The connection timeout is the maximum period of time to wait for a connection to the cache, not the delay between retry attempts. Note that the ReconnectRetryPolicy property can only be set by code.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using StackExchange.Redis;

namespace RetryCodeSamples
{
    class CacheRedisCodeSamples
    {
        public async static Task Samples()
        {
            var writer = new StringWriter();
            {
                try
                {
                    // Using string-based configuration.
                    var options = "localhost,connectRetry=3,connectTimeout=2000";
                    ConnectionMultiplexer redis = ConnectionMultiplexer.Connect(options, writer);

                    // Store a reference to the multiplexer for use in the application.
                }
                catch
                {
                    Console.WriteLine(writer.ToString());
                    throw;
                }
            }
        }
    }
}
```

For more examples, see Configuration on the project website.

More information
Redis website

Azure Search

Azure Search can be used to add powerful and sophisticated search capabilities to a website or application, quickly and easily tune search results, and construct rich and fine-tuned ranking models.

Retry mechanism
Retry behavior in the Azure Search SDK is controlled by the SetRetryPolicy method on the SearchServiceClient and SearchIndexClient classes. The default policy retries with exponential backoff when Azure Search returns a 5xx or 408 (Request Timeout) response.

Telemetry
Trace with ETW or by registering a custom trace provider. For more information, see the AutoRest documentation.

Service Bus

Service Bus is a cloud messaging platform that provides loosely coupled message exchange with improved scale and resiliency for components of an application, whether hosted in the cloud or on-premises.

Retry mechanism
Service Bus implements retries using implementations of the RetryPolicy base class.
All of the Service Bus clients expose a RetryPolicy property that can be set to one of the implementations of the RetryPolicy base class. The built-in implementations are:

The RetryExponential class. This exposes properties that control the back-off interval, the retry count, and the TerminationTimeBuffer property that is used to limit the total time for the operation to complete.
The NoRetry class. This is used when retries at the Service Bus API level are not required, such as when retries are managed by another process as part of a batch or multiple-step operation.

Service Bus actions can return a range of exceptions, as listed in Service Bus messaging exceptions. The list provides information about which of these indicate that retrying the operation is appropriate. For example, a ServerBusyException indicates that the client should wait for a period of time, then retry the operation. The occurrence of a ServerBusyException also causes Service Bus to switch to a different mode, in which an extra 10-second delay is added to the computed retry delays. This mode is reset after a short period.

The exceptions returned from Service Bus expose the IsTransient property that indicates if the client should retry the operation. The built-in RetryExponential policy relies on the IsTransient property in the MessagingException class, which is the base class for all Service Bus exceptions. If you create custom implementations of the RetryPolicy base class, you could use a combination of the exception type and the IsTransient property to provide more fine-grained control over retry actions. For example, you could detect a QuotaExceededException and take action to drain the queue before retrying sending a message to it.

Policy configuration
Retry policies are set programmatically, and can be set as a default policy for a NamespaceManager and for a MessagingFactory, or individually for each messaging client. To set the default retry policy for a messaging session you set the RetryPolicy of the NamespaceManager.

```csharp
namespaceManager.Settings.RetryPolicy = new RetryExponential(minBackoff: TimeSpan.FromSeconds(0.1),
                                                             maxBackoff: TimeSpan.FromSeconds(30),
                                                             maxRetryCount: 3);
```

To set the default retry policy for all clients created from a messaging factory, you set the RetryPolicy of the MessagingFactory.

```csharp
messagingFactory.RetryPolicy = new RetryExponential(minBackoff: TimeSpan.FromSeconds(0.1),
                                                    maxBackoff: TimeSpan.FromSeconds(30),
                                                    maxRetryCount: 3);
```

To set the retry policy for a messaging client, or to override its default policy, you set its RetryPolicy property using an instance of the required policy class:

```csharp
client.RetryPolicy = new RetryExponential(minBackoff: TimeSpan.FromSeconds(0.1),
                                          maxBackoff: TimeSpan.FromSeconds(30),
                                          maxRetryCount: 3);
```

The retry policy cannot be set at the individual operation level. It applies to all operations for the messaging client.

The following table shows the default settings for the built-in retry policy.

| SETTING | DEFAULT VALUE | MEANING |
|---|---|---|
| Policy | Exponential | Exponential back-off. |
| MinimalBackoff | 0 | Minimum back-off interval. This is added to the retry interval computed from deltaBackoff. |
| MaximumBackoff | 30 seconds | Maximum back-off interval. MaximumBackoff is used if the computed retry interval is greater than MaxBackoff. |
| DeltaBackoff | 3 seconds | Back-off interval between retries. Multiples of this timespan will be used for subsequent retry attempts. |
| TimeBuffer | 5 seconds | The termination time buffer associated with the retry. Retry attempts will be abandoned if the remaining time is less than TimeBuffer. |
| MaxRetryCount | 10 | The maximum number of retries. |
| ServerBusyBaseSleepTime | 10 seconds | If the last exception encountered was ServerBusyException, this value will be added to the computed retry interval. This value cannot be changed. |

Retry usage guidance
Consider the following guidelines when using Service Bus:

When using the built-in RetryExponential implementation, do not implement a fallback operation as the policy reacts to Server Busy exceptions and automatically switches to an appropriate retry mode.
Service Bus supports a feature called Paired Namespaces, which implements automatic failover to a backup queue in a separate namespace if the queue in the primary namespace fails. Messages from the secondary queue can be sent back to the primary queue when it recovers. This feature helps to address transient failures. For more information, see Asynchronous Messaging Patterns and High Availability.

Consider starting with the following settings for retrying operations. These are general purpose settings, and you should monitor the operations and fine-tune the values to suit your own scenario.

| CONTEXT | EXAMPLE MAXIMUM LATENCY | RETRY POLICY | SETTINGS | HOW IT WORKS |
|---|---|---|---|---|
| Interactive, UI, or foreground | 2 seconds* | Exponential | MinimumBackoff = 0; MaximumBackoff = 30 sec.; DeltaBackoff = 300 msec.; TimeBuffer = 300 msec.; MaxRetryCount = 2 | Attempt 1: Delay 0 sec. Attempt 2: Delay ~300 msec. Attempt 3: Delay ~900 msec. |
| Background or batch | 30 seconds | Exponential | MinimumBackoff = 1; MaximumBackoff = 30 sec.; DeltaBackoff = 1.75 sec.; TimeBuffer = 5 sec.; MaxRetryCount = 3 | Attempt 1: Delay ~1 sec. Attempt 2: Delay ~3 sec. Attempt 3: Delay ~6 sec. Attempt 4: Delay ~13 sec. |

* Not including additional delay that is added if a Server Busy response is received.

Telemetry
Service Bus logs retries as ETW events using an EventSource. You must attach an EventListener to the event source to capture the events and view them in Performance Viewer, or write them to a suitable destination log. The retry events are of the following form:

```
Microsoft-ServiceBus-Client/RetryPolicyIteration
ThreadID="14,500"
FormattedMessage="[TrackingId:] RetryExponential: Operation Get:https://retry-tests.servicebus.windows.net/TestQueue/?api-version=2014-05 at iteration 0 is retrying after 00:00:00.1000000 sleep because of Microsoft.ServiceBus.Messaging.MessagingCommunicationException: The remote name could not be resolved: 'retry-tests.servicebus.windows.net'.TrackingId:6a26f99c-dc6d-422e-8565-f89fdd0d4fe3, TimeStamp:9/5/2014 10:00:13 PM."
trackingId=""
policyType="RetryExponential"
operation="Get:https://retry-tests.servicebus.windows.net/TestQueue/?api-version=2014-05"
iteration="0"
iterationSleep="00:00:00.1000000"
lastExceptionType="Microsoft.ServiceBus.Messaging.MessagingCommunicationException"
exceptionMessage="The remote name could not be resolved: 'retry-tests.servicebus.windows.net'.TrackingId:6a26f99c-dc6d-422e-8565-f89fdd0d4fe3,TimeStamp:9/5/2014 10:00:13 PM"
```

Examples
The following code example shows how to set the retry policy for:

A namespace manager. The policy applies to all operations on that manager, and cannot be overridden for individual operations.
A messaging factory. The policy applies to all clients created from that factory, and cannot be overridden when creating individual clients.
An individual messaging client. After a client has been created, you can set the retry policy for that client. The policy applies to all operations on that client.
```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

namespace RetryCodeSamples
{
    class ServiceBusCodeSamples
    {
        private const string connectionString =
            @"Endpoint=sb://[my-namespace].servicebus.windows.net/;
                SharedAccessKeyName=RootManageSharedAccessKey;
                SharedAccessKey=C99..........Mk=";

        public async static Task Samples()
        {
            const string QueueName = "TestQueue";

            ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Http;

            var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

            // The namespace manager will have a default exponential policy with 10 retry attempts
            // and a 3 second delay delta.
            // Retry delays will be approximately 0 sec, 3 sec, 9 sec, 25 sec and the fixed 30 sec,
            // with an extra 10 sec added when receiving a ServiceBusyException.
            {
                // Set different values for the retry policy, used for all operations on the namespace manager.
                namespaceManager.Settings.RetryPolicy = new RetryExponential(
                    minBackoff: TimeSpan.FromSeconds(0),
                    maxBackoff: TimeSpan.FromSeconds(30),
                    maxRetryCount: 3);

                // Policies cannot be specified on a per-operation basis.
                if (!await namespaceManager.QueueExistsAsync(QueueName))
                {
                    await namespaceManager.CreateQueueAsync(QueueName);
                }
            }

            var messagingFactory = MessagingFactory.Create(
                namespaceManager.Address, namespaceManager.Settings.TokenProvider);

            // The messaging factory will have a default exponential policy with 10 retry attempts
            // and a 3 second delay delta.
            // Retry delays will be approximately 0 sec, 3 sec, 9 sec, 25 sec and the fixed 30 sec,
            // with an extra 10 sec added when receiving a ServiceBusyException.
            {
                // Set different values for the retry policy, used for clients created from it.
                messagingFactory.RetryPolicy = new RetryExponential(
                    minBackoff: TimeSpan.FromSeconds(1),
                    maxBackoff: TimeSpan.FromSeconds(30),
                    maxRetryCount: 3);

                // Policies cannot be specified on a per-operation basis.
                var session = await messagingFactory.AcceptMessageSessionAsync();
            }

            {
                var client = messagingFactory.CreateQueueClient(QueueName);
                // The client inherits the policy from the factory that created it.

                // Set different values for the retry policy on the client.
                client.RetryPolicy = new RetryExponential(
                    minBackoff: TimeSpan.FromSeconds(0.1),
                    maxBackoff: TimeSpan.FromSeconds(30),
                    maxRetryCount: 3);

                // Policies cannot be specified on a per-operation basis.
                var session = await client.AcceptMessageSessionAsync();
            }
        }
    }
}
```

More information
Asynchronous messaging patterns and high availability

Service Fabric

Distributing reliable services in a Service Fabric cluster guards against most of the potential transient faults discussed in this article. Some transient faults are still possible, however. For example, the naming service might be in the middle of a routing change when it gets a request, causing it to throw an exception. If the same request comes 100 milliseconds later, it will probably succeed.

Internally, Service Fabric manages this kind of transient fault. You can configure some settings by using the OperationRetrySettings class while setting up your services. The following code shows an example. In most cases, this should not be necessary, and the default settings will be fine.
FabricTransportRemotingSettings transportSettings = new FabricTransportRemotingSettings { OperationTimeout = TimeSpan.FromSeconds(30) }; var retrySettings = new OperationRetrySettings(TimeSpan.FromSeconds(15), TimeSpan.FromSeconds(1), 5); var clientFactory = new FabricTransportServiceRemotingClientFactory(transportSettings); var serviceProxyFactory = new ServiceProxyFactory((c) => clientFactory, retrySettings); var client = serviceProxyFactory.CreateServiceProxy<ISomeService>( new Uri("fabric:/SomeApp/SomeStatefulReliableService"), new ServicePartitionKey(0)); More information Remote exception handling SQL Database using ADO.NET SQL Database is a hosted SQL database available in a range of sizes and as both a standard (shared) and premium (non-shared) service. Retry mechanism SQL Database has no built-in support for retries when accessed using ADO.NET. However, the return codes from requests can be used to determine why a request failed. For more information about SQL Database throttling, see Azure SQL Database resource limits. For a list of relevant error codes, see SQL error codes for SQL Database client applications. You can use the Polly library to implement retries for SQL Database. See Transient fault handling with Polly. Retry usage guidance Consider the following guidelines when accessing SQL Database using ADO.NET: Choose the appropriate service option (shared or premium). A shared instance may suffer longer than usual connection delays and throttling due to the usage by other tenants of the shared server. If more predictable performance and reliable low latency operations are required, consider choosing the premium option. Ensure that you perform retries at the appropriate level or scope to avoid non-idempotent operations causing inconsistency in the data. Ideally, all operations should be idempotent so that they can be repeated without causing inconsistency. Where this is not the case, the retry should be performed at a level or scope that allows all related changes to be undone if one operation fails; for example, from within a transactional scope. For more information, see Cloud Service Fundamentals Data Access Layer – Transient Fault Handling. A fixed interval strategy is not recommended for use with Azure SQL Database except for interactive scenarios where there are only a few retries at very short intervals. Instead, consider using an exponential back-off strategy for the majority of scenarios. Choose a suitable value for the connection and command timeouts when defining connections. Too short a timeout may result in premature failures of connections when the database is busy. Too long a timeout may prevent the retry logic working correctly by waiting too long before detecting a failed connection. The value of the timeout is a component of the end-to-end latency; it is effectively added to the retry delay specified in the retry policy for every retry attempt. Close the connection after a certain number of retries, even when using an exponential back off retry logic, and retry the operation on a new connection. Retrying the same operation multiple times on the same connection can be a factor that contributes to connection problems. For an example of this technique, see Cloud Service Fundamentals Data Access Layer – Transient Fault Handling. When connection pooling is in use (the default) there is a chance that the same connection will be chosen from the pool, even after closing and reopening a connection. 
If this is the case, a technique to resolve it is to call the ClearPool method of the SqlConnection class to mark the connection as not reusable. However, you should do this only after several connection attempts have failed, and only when encountering the specific class of transient failures such as SQL timeouts (error code -2) related to faulty connections.
- If the data access code uses transactions initiated as TransactionScope instances, the retry logic should reopen the connection and initiate a new transaction scope. For this reason, the retryable code block should encompass the entire scope of the transaction.

Consider starting with the following settings for retrying operations. These are general purpose settings, and you should monitor the operations and fine-tune the values to suit your own scenario.

| Context | Sample target E2E max latency | Retry strategy | Settings and values | How it works |
| --- | --- | --- | --- | --- |
| Interactive, UI, or foreground | 2 sec | FixedInterval | Retry count: 3; Retry interval: 500 ms; First fast retry: true | Attempt 1 - delay 0 sec; Attempt 2 - delay 500 ms; Attempt 3 - delay 500 ms |
| Background or batch | 30 sec | ExponentialBackoff | Retry count: 5; Min back-off: 0 sec; Max back-off: 60 sec; Delta back-off: 2 sec; First fast retry: false | Attempt 1 - delay 0 sec; Attempt 2 - delay ~2 sec; Attempt 3 - delay ~6 sec; Attempt 4 - delay ~14 sec; Attempt 5 - delay ~30 sec |

NOTE: The end-to-end latency targets assume the default timeout for connections to the service. If you specify longer connection timeouts, the end-to-end latency will be extended by this additional time for every retry attempt.

Examples

This section shows how you can use Polly to access Azure SQL Database using a set of retry policies configured in the Policy class.

The following code shows an extension method on the SqlCommand class that calls ExecuteAsync with exponential backoff.

```csharp
public async static Task<SqlDataReader> ExecuteReaderWithRetryAsync(this SqlCommand command,
    CancellationToken cancellationToken = default(CancellationToken))
{
    // Helper (defined elsewhere) that validates the command has a connection assigned.
    GuardConnectionIsNotNull(command);

    var policy = Policy.Handle<Exception>().WaitAndRetryAsync(
        retryCount: 3, // Retry 3 times
        sleepDurationProvider: attempt => TimeSpan.FromMilliseconds(200 * Math.Pow(2, attempt - 1)), // Exponential backoff based on an initial 200 ms delay.
        onRetry: (exception, delay) =>
        {
            // Capture some information for logging/telemetry.
            // (Assumes a logger instance is available to the containing class.)
            logger.LogWarn($"ExecuteReaderWithRetryAsync: Retrying in {delay.TotalMilliseconds} ms due to {exception}.");
        });

    // Retry the following call according to the policy.
    return await policy.ExecuteAsync<SqlDataReader>(async token =>
    {
        // This code is executed within the Policy
        var conn = command.Connection;
        if (conn.State != System.Data.ConnectionState.Open) await conn.OpenAsync(token);
        return await command.ExecuteReaderAsync(System.Data.CommandBehavior.Default, token);
    }, cancellationToken);
}
```

This asynchronous extension method can be used as follows.

```csharp
var sqlCommand = sqlConnection.CreateCommand();
sqlCommand.CommandText = "[some query]";

using (var reader = await sqlCommand.ExecuteReaderWithRetryAsync())
{
    // Do something with the values
}
```

More information

Cloud Service Fundamentals Data Access Layer – Transient Fault Handling

For general guidance on getting the most from SQL Database, see Azure SQL Database performance and elasticity guide.

SQL Database using Entity Framework 6

SQL Database is a hosted SQL database available in a range of sizes and as both a standard (shared) and premium (non-shared) service. Entity Framework is an object-relational mapper that enables .NET developers to work with relational data using domain-specific objects.
It eliminates the need for most of the data-access code that developers usually need to write.

Retry mechanism

Retry support is provided when accessing SQL Database using Entity Framework 6.0 and higher through a mechanism called Connection resiliency / retry logic. The main features of the retry mechanism are:

- The primary abstraction is the IDbExecutionStrategy interface. This interface:
  - Defines synchronous and asynchronous Execute* methods.
  - Defines classes that can be used directly or can be configured on a database context as a default strategy, mapped to provider name, or mapped to a provider name and server name. When configured on a context, retries occur at the level of individual database operations, of which there might be several for a given context operation.
  - Defines when to retry a failed connection, and how.
- It includes several built-in implementations of the IDbExecutionStrategy interface:
  - Default: no retrying.
  - Default for SQL Database (automatic): no retrying, but inspects exceptions and wraps them with a suggestion to use the SQL Database strategy.
  - Default for SQL Database: exponential (inherited from the base class) plus SQL Database detection logic.
- It implements an exponential back-off strategy that includes randomization.
- The built-in retry classes are stateful and are not thread safe. However, they can be reused after the current operation is completed.
- If the specified retry count is exceeded, the results are wrapped in a new exception. It does not bubble up the current exception.

Policy configuration

Retry support is provided when accessing SQL Database using Entity Framework 6.0 and higher. Retry policies are configured programmatically. The configuration cannot be changed on a per-operation basis.

When configuring a strategy on the context as the default, you specify a function that creates a new strategy on demand. The following code shows how you can create a retry configuration class that extends the DbConfiguration base class.

```csharp
public class BloggingContextConfiguration : DbConfiguration
{
    public BloggingContextConfiguration()
    {
        // Set up the execution strategy for SQL Database (exponential) with 5 retries and 4 sec delay
        this.SetExecutionStrategy(
            "System.Data.SqlClient", () => new SqlAzureExecutionStrategy(5, TimeSpan.FromSeconds(4)));
    }
}
```

You can then specify this as the default retry strategy for all operations using the SetConfiguration method of the DbConfiguration instance when the application starts. By default, EF will automatically discover and use the configuration class.

```csharp
DbConfiguration.SetConfiguration(new BloggingContextConfiguration());
```

You can specify the retry configuration class for a context by annotating the context class with a DbConfigurationType attribute. However, if you have only one configuration class, EF will use it without the need to annotate the context.

```csharp
[DbConfigurationType(typeof(BloggingContextConfiguration))]
public class BloggingContext : DbContext
```

If you need to use different retry strategies for specific operations, or disable retries for specific operations, you can create a configuration class that allows you to suspend or swap strategies by setting a flag in the CallContext. The configuration class can use this flag to switch strategies, or disable the strategy you provide and use a default strategy. For more information, see Suspend Execution Strategy (EF6 onwards).
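A minimal sketch of such a configuration class is shown below. It follows the pattern described in the EF documentation; the SuspendableBlogConfiguration class name and the SuspendExecutionStrategy flag stored in the logical call context are illustrative assumptions rather than code from this guidance.

```csharp
using System;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.SqlServer;
using System.Runtime.Remoting.Messaging;

public class SuspendableBlogConfiguration : DbConfiguration
{
    public SuspendableBlogConfiguration()
    {
        // Use the no-retry default strategy when the flag is set for the current logical call context;
        // otherwise use the SQL Database exponential strategy.
        this.SetExecutionStrategy("System.Data.SqlClient", () =>
            SuspendExecutionStrategy
                ? (IDbExecutionStrategy)new DefaultExecutionStrategy()
                : new SqlAzureExecutionStrategy(5, TimeSpan.FromSeconds(4)));
    }

    // Illustrative flag; stored in the logical call context so it flows across async calls.
    public static bool SuspendExecutionStrategy
    {
        get { return (bool?)CallContext.LogicalGetData("SuspendExecutionStrategy") ?? false; }
        set { CallContext.LogicalSetData("SuspendExecutionStrategy", value); }
    }
}
```

An operation that must run without retries (for example, one that manages its own transaction) can then set SuspendExecutionStrategy to true before the call and reset it to false afterwards.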
Another technique for using specific retry strategies for individual operations is to create an instance of the required strategy class and supply the desired settings through parameters. You then invoke its ExecuteAsync method.

```csharp
var executionStrategy = new SqlAzureExecutionStrategy(5, TimeSpan.FromSeconds(4));
var blogs = await executionStrategy.ExecuteAsync(
    async () =>
    {
        using (var db = new BloggingContext("Blogs"))
        {
            // Acquire some values asynchronously and return them
        }
    },
    new CancellationToken()
);
```

The simplest way to use a DbConfiguration class is to locate it in the same assembly as the DbContext class. However, this is not appropriate when the same context is required in different scenarios, such as different interactive and background retry strategies. If the different contexts execute in separate AppDomains, you can use the built-in support for specifying configuration classes in the configuration file or set it explicitly using code. If the different contexts must execute in the same AppDomain, a custom solution will be required. For more information, see Code-Based Configuration (EF6 onwards).

The following table shows the default settings for the built-in retry policy when using EF6.

| Setting | Default value | Meaning |
| --- | --- | --- |
| Policy | Exponential | Exponential back-off. |
| MaxRetryCount | 5 | The maximum number of retries. |
| MaxDelay | 30 seconds | The maximum delay between retries. This value does not affect how the series of delays is computed. It only defines an upper bound. |
| DefaultCoefficient | 1 second | The coefficient for the exponential back-off computation. This value cannot be changed. |
| DefaultRandomFactor | 1.1 | The multiplier used to add a random delay for each entry. This value cannot be changed. |
| DefaultExponentialBase | 2 | The multiplier used to calculate the next delay. This value cannot be changed. |

Retry usage guidance

Consider the following guidelines when accessing SQL Database using EF6:

- Choose the appropriate service option (shared or premium). A shared instance may suffer longer than usual connection delays and throttling due to the usage by other tenants of the shared server. If predictable performance and reliable low latency operations are required, consider choosing the premium option.
- A fixed interval strategy is not recommended for use with Azure SQL Database. Instead, use an exponential back-off strategy because the service may be overloaded, and longer delays allow more time for it to recover.
- Choose a suitable value for the connection and command timeouts when defining connections. Base the timeout on your business logic design and on the results of testing. You may need to modify this value over time as the volumes of data or the business processes change. Too short a timeout may result in premature failures of connections when the database is busy. Too long a timeout may prevent the retry logic working correctly by waiting too long before detecting a failed connection. The value of the timeout is a component of the end-to-end latency, although you cannot easily determine how many commands will execute when saving the context. You can change the default timeout by setting the CommandTimeout property of the DbContext instance.
- Entity Framework supports retry configurations defined in configuration files. However, for maximum flexibility on Azure you should consider creating the configuration programmatically within the application.
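For example, the following is a minimal sketch of building the strategy from values read at application start rather than hard-coding them; the appSettings key names (SqlRetryCount, SqlRetryMaxDelaySeconds) are illustrative assumptions, not settings defined by this guidance.

```csharp
using System;
using System.Configuration;
using System.Data.Entity;
using System.Data.Entity.SqlServer;

public class ConfigurableBlogConfiguration : DbConfiguration
{
    public ConfigurableBlogConfiguration()
    {
        // Hypothetical setting names; fall back to defaults if they are not present.
        var retryCount = int.Parse(ConfigurationManager.AppSettings["SqlRetryCount"] ?? "5");
        var maxDelaySeconds = double.Parse(ConfigurationManager.AppSettings["SqlRetryMaxDelaySeconds"] ?? "30");

        this.SetExecutionStrategy(
            "System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy(retryCount, TimeSpan.FromSeconds(maxDelaySeconds)));
    }
}
```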
The specific parameters for the retry policies, such as the number of retries and the retry intervals, can be stored in the service configuration file and used at runtime to create the appropriate policies. This allows the settings to be changed without requiring the application to be restarted.

Consider starting with the following settings for retrying operations. You cannot specify the delay between retry attempts (it is fixed and generated as an exponential sequence). You can specify only the maximum values, as shown here, unless you create a custom retry strategy. These are general purpose settings, and you should monitor the operations and fine-tune the values to suit your own scenario.

| Context | Sample target E2E max latency | Retry policy | Settings and values | How it works |
| --- | --- | --- | --- | --- |
| Interactive, UI, or foreground | 2 seconds | Exponential | MaxRetryCount: 3; MaxDelay: 750 ms | Attempt 1 - delay 0 sec; Attempt 2 - delay 750 ms; Attempt 3 - delay 750 ms |
| Background or batch | 30 seconds | Exponential | MaxRetryCount: 5; MaxDelay: 12 seconds | Attempt 1 - delay 0 sec; Attempt 2 - delay ~1 sec; Attempt 3 - delay ~3 sec; Attempt 4 - delay ~7 sec; Attempt 5 - delay 12 sec |

NOTE: The end-to-end latency targets assume the default timeout for connections to the service. If you specify longer connection timeouts, the end-to-end latency will be extended by this additional time for every retry attempt.

Examples

The following code example defines a simple data access solution that uses Entity Framework. It sets a specific retry strategy by defining an instance of a class named BlogConfiguration that extends DbConfiguration.

```csharp
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.Entity.SqlServer;
using System.Threading.Tasks;

namespace RetryCodeSamples
{
    public class BlogConfiguration : DbConfiguration
    {
        public BlogConfiguration()
        {
            // Set up the execution strategy for SQL Database (exponential) with 5 retries and 12 sec delay.
            // These values could be loaded from configuration rather than being hard-coded.
            this.SetExecutionStrategy(
                "System.Data.SqlClient", () => new SqlAzureExecutionStrategy(5, TimeSpan.FromSeconds(12)));
        }
    }

    // Specify the configuration type if more than one has been defined.
    // [DbConfigurationType(typeof(BlogConfiguration))]
    public class BloggingContext : DbContext
    {
        // Definition of content goes here.
    }

    class EF6CodeSamples
    {
        public async static Task Samples()
        {
            // Execution strategy configured by the DbConfiguration subclass, discovered automatically
            // or explicitly indicated through configuration or with an attribute. Default is no retries.
            using (var db = new BloggingContext("Blogs"))
            {
                // Add, edit, delete blog items here, then:
                await db.SaveChangesAsync();
            }
        }
    }
}
```

More examples of using the Entity Framework retry mechanism can be found in Connection Resiliency / Retry Logic.

More information

Azure SQL Database performance and elasticity guide

SQL Database using Entity Framework Core

Entity Framework Core is an object-relational mapper that enables .NET Core developers to work with data using domain-specific objects. It eliminates the need for most of the data-access code that developers usually need to write. This version of Entity Framework was written from the ground up, and doesn't automatically inherit all the features from EF6.x.

Retry mechanism

Retry support is provided when accessing SQL Database using Entity Framework Core through a mechanism called connection resiliency. Connection resiliency was introduced in EF Core 1.1.0. The primary abstraction is the IExecutionStrategy interface.
The execution strategy for SQL Server, including SQL Azure, is aware of the exception types that can be retried and has sensible defaults for maximum retries, delay between retries, and so on.

Examples

The following code enables automatic retries when configuring the DbContext object, which represents a session with the database.

```csharp
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder
        .UseSqlServer(
            @"Server=(localdb)\mssqllocaldb;Database=EFMiscellanous.ConnectionResiliency;Trusted_Connection=True;",
            options => options.EnableRetryOnFailure());
}
```

The following code shows how to execute a transaction with automatic retries, by using an execution strategy. The transaction is defined in a delegate. If a transient failure occurs, the execution strategy will invoke the delegate again.

```csharp
using (var db = new BloggingContext())
{
    var strategy = db.Database.CreateExecutionStrategy();
    strategy.Execute(() =>
    {
        using (var transaction = db.Database.BeginTransaction())
        {
            db.Blogs.Add(new Blog { Url = "https://blogs.msdn.com/dotnet" });
            db.SaveChanges();

            db.Blogs.Add(new Blog { Url = "https://blogs.msdn.com/visualstudio" });
            db.SaveChanges();

            transaction.Commit();
        }
    });
}
```

Azure Storage

Azure Storage services include table and blob storage, files, and storage queues.

Retry mechanism

Retries occur at the individual REST operation level and are an integral part of the client API implementation. The client storage SDK uses classes that implement the IExtendedRetryPolicy interface.

There are different implementations of the interface. Storage clients can choose from policies specifically designed for accessing tables, blobs, and queues. Each implementation uses a different retry strategy that essentially defines the retry interval and other details.

The built-in classes provide support for linear (constant delay) and exponential with randomization retry intervals. There is also a no retry policy for use when another process is handling retries at a higher level. However, you can implement your own retry classes if you have specific requirements not provided by the built-in classes.

Alternate retries switch between primary and secondary storage service location if you are using read-access geo-redundant storage (RA-GRS) and the result of the request is a retryable error. See Azure Storage Redundancy Options for more information.

Policy configuration

Retry policies are configured programmatically. A typical procedure is to create and populate a TableRequestOptions, BlobRequestOptions, FileRequestOptions, or QueueRequestOptions instance.

```csharp
TableRequestOptions interactiveRequestOption = new TableRequestOptions()
{
    RetryPolicy = new LinearRetry(TimeSpan.FromMilliseconds(500), 3),
    // For Read-access geo-redundant storage, use PrimaryThenSecondary.
    // Otherwise set this to PrimaryOnly.
    LocationMode = LocationMode.PrimaryThenSecondary,
    // Maximum execution time based on the business use case.
    MaximumExecutionTime = TimeSpan.FromSeconds(2)
};
```

The request options instance can then be set on the client, and all operations with the client will use the specified request options.

```csharp
client.DefaultRequestOptions = interactiveRequestOption;
var stats = await client.GetServiceStatsAsync();
```

You can override the client request options by passing a populated instance of the request options class as a parameter to operation methods.
```csharp
var stats = await client.GetServiceStatsAsync(interactiveRequestOption, operationContext: null);
```

You use an OperationContext instance to specify the code to execute when a retry occurs and when an operation has completed. This code can collect information about the operation for use in logs and telemetry.

```csharp
// Set up notifications for an operation
var context = new OperationContext();
context.ClientRequestID = "some request id";
context.Retrying += (sender, args) =>
{
    /* Collect retry information */
};
context.RequestCompleted += (sender, args) =>
{
    /* Collect operation completion information */
};
var stats = await client.GetServiceStatsAsync(null, context);
```

In addition to indicating whether a failure is suitable for retry, the extended retry policies return a RetryContext object that indicates the number of retries, the results of the last request, and whether the next retry will happen in the primary or secondary location (see the tables below for details). The properties of the RetryContext object can be used to decide if and when to attempt a retry. For more details, see the IExtendedRetryPolicy.Evaluate method.

The following tables show the default settings for the built-in retry policies.

Request options:

| Setting | Default value | Meaning |
| --- | --- | --- |
| MaximumExecutionTime | None | Maximum execution time for the request, including all potential retry attempts. If it is not specified, then the amount of time that a request is permitted to take is unlimited. In other words, the request might hang. |
| ServerTimeout | None | Server timeout interval for the request (value is rounded to seconds). If not specified, it will use the default value for all requests to the server. Usually, the best option is to omit this setting so that the server default is used. |
| LocationMode | None | If the storage account is created with the Read-access geo-redundant storage (RA-GRS) replication option, you can use the location mode to indicate which location should receive the request. For example, if PrimaryThenSecondary is specified, requests are always sent to the primary location first. If a request fails, it is sent to the secondary location. |
| RetryPolicy | ExponentialPolicy | See below for details of each option. |

Exponential policy:

| Setting | Default value | Meaning |
| --- | --- | --- |
| maxAttempt | 3 | Number of retry attempts. |
| deltaBackoff | 4 seconds | Back-off interval between retries. Multiples of this timespan, including a random element, will be used for subsequent retry attempts. |
| MinBackoff | 3 seconds | Added to all retry intervals computed from deltaBackoff. This value cannot be changed. |
| MaxBackoff | 120 seconds | MaxBackoff is used if the computed retry interval is greater than MaxBackoff. This value cannot be changed. |

Linear policy:

| Setting | Default value | Meaning |
| --- | --- | --- |
| maxAttempt | 3 | Number of retry attempts. |
| deltaBackoff | 30 seconds | Back-off interval between retries. |

Retry usage guidance

Consider the following guidelines when accessing Azure storage services using the storage client API:

- Use the built-in retry policies from the Microsoft.WindowsAzure.Storage.RetryPolicies namespace where they are appropriate for your requirements. In most cases, these policies will be sufficient.
- Use the ExponentialRetry policy in batch operations, background tasks, or non-interactive scenarios. In these scenarios, you can typically allow more time for the service to recover, with a consequently increased chance of the operation eventually succeeding.
- Consider specifying the MaximumExecutionTime property of the RequestOptions parameter to limit the total execution time, but take into account the type and size of the operation when choosing a timeout value.
- If you need to implement a custom retry, avoid creating wrappers around the storage client classes. Instead, use the capabilities to extend the existing policies through the IExtendedRetryPolicy interface.
- If you are using read-access geo-redundant storage (RA-GRS), you can use the LocationMode to specify that retry attempts will access the secondary read-only copy of the store should the primary access fail. However, when using this option you must ensure that your application can work successfully with data that may be stale if the replication from the primary store has not yet completed.

Consider starting with the following settings for retrying operations. These are general purpose settings, and you should monitor the operations and fine-tune the values to suit your own scenario.

| Context | Sample target E2E max latency | Retry policy | Settings and values | How it works |
| --- | --- | --- | --- | --- |
| Interactive, UI, or foreground | 2 seconds | Linear | maxAttempt: 3; deltaBackoff: 500 ms | Attempt 1 - delay 500 ms; Attempt 2 - delay 500 ms; Attempt 3 - delay 500 ms |
| Background or batch | 30 seconds | Exponential | maxAttempt: 5; deltaBackoff: 4 seconds | Attempt 1 - delay ~3 sec; Attempt 2 - delay ~7 sec; Attempt 3 - delay ~15 sec |

Telemetry

Retry attempts are logged to a TraceSource. You must configure a TraceListener to capture the events and write them to a suitable destination log. You can use the TextWriterTraceListener or XmlWriterTraceListener to write the data to a log file, the EventLogTraceListener to write to the Windows Event Log, or the EventProviderTraceListener to write trace data to the ETW subsystem. You can also configure auto-flushing of the buffer, and the verbosity of events that will be logged (for example, Error, Warning, Informational, and Verbose). For more information, see Client-side Logging with the .NET Storage Client Library.

Operations can receive an OperationContext instance, which exposes a Retrying event that can be used to attach custom telemetry logic. For more information, see OperationContext.Retrying Event.

Examples

The following code example shows how to create two TableRequestOptions instances with different retry settings; one for interactive requests and one for background requests. The example then sets these two retry policies on the client so that they apply for all requests, and also sets the interactive strategy on a specific request so that it overrides the default settings applied to the client.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.RetryPolicies;
using Microsoft.WindowsAzure.Storage.Table;

namespace RetryCodeSamples
{
    class AzureStorageCodeSamples
    {
        private const string connectionString = "UseDevelopmentStorage=true";

        public async static Task Samples()
        {
            var storageAccount = CloudStorageAccount.Parse(connectionString);

            TableRequestOptions interactiveRequestOption = new TableRequestOptions()
            {
                RetryPolicy = new LinearRetry(TimeSpan.FromMilliseconds(500), 3),
                // For Read-access geo-redundant storage, use PrimaryThenSecondary.
                // Otherwise set this to PrimaryOnly.
                LocationMode = LocationMode.PrimaryThenSecondary,
                // Maximum execution time based on the business use case.
                MaximumExecutionTime = TimeSpan.FromSeconds(2)
            };

            TableRequestOptions backgroundRequestOption = new TableRequestOptions()
            {
                // Client has a default exponential retry policy with 4 sec delay and 3 retry attempts
                // Retry delays will be approximately 3 sec, 7 sec, and 15 sec
                MaximumExecutionTime = TimeSpan.FromSeconds(30),
                // PrimaryThenSecondary in case of Read-access geo-redundant storage, else set this to PrimaryOnly
                LocationMode = LocationMode.PrimaryThenSecondary
            };

            var client = storageAccount.CreateCloudTableClient();
            // Client has a default exponential retry policy with 4 sec delay and 3 retry attempts
            // Retry delays will be approximately 3 sec, 7 sec, and 15 sec
            // ServerTimeout and MaximumExecutionTime are not set

            {
                // Set properties for the client (used on all requests unless overridden)
                // Different exponential policy parameters for background scenarios
                client.DefaultRequestOptions = backgroundRequestOption;

                // Linear policy for interactive scenarios
                client.DefaultRequestOptions = interactiveRequestOption;
            }

            {
                // Set properties for a specific request
                var stats = await client.GetServiceStatsAsync(interactiveRequestOption, operationContext: null);
            }

            {
                // Set up notifications for an operation
                var context = new OperationContext();
                context.ClientRequestID = "some request id";
                context.Retrying += (sender, args) =>
                {
                    /* Collect retry information */
                };
                context.RequestCompleted += (sender, args) =>
                {
                    /* Collect operation completion information */
                };
                var stats = await client.GetServiceStatsAsync(null, context);
            }
        }
    }
}
```

More information

Azure Storage client library retry policy recommendations

Storage Client Library 2.0 – Implementing retry policies

General REST and retry guidelines

Consider the following when accessing Azure or third-party services:

- Use a systematic approach to managing retries, perhaps as reusable code, so that you can apply a consistent methodology across all clients and all solutions.
- Consider using a retry framework such as Polly to manage retries if the target service or client has no built-in retry mechanism. This will help you implement a consistent retry behavior, and it may provide a suitable default retry strategy for the target service. However, you may need to create custom retry code for services that have non-standard behavior, that do not rely on exceptions to indicate transient failures, or if you want to use a Retry-After response to manage retry behavior.
- The transient detection logic will depend on the actual client API you use to invoke the REST calls. Some clients, such as the newer HttpClient class, will not throw exceptions for completed requests with a non-success HTTP status code. The HTTP status code returned from the service can help to indicate whether the failure is transient. You may need to examine the exceptions generated by a client or the retry framework to access the status code or to determine the equivalent exception type.
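As an illustration of the last two points, here is a minimal sketch (not taken from this guidance) that uses Polly with HttpClient to treat specific status codes as transient and retry with exponential back-off; the retry count, delays, and the shared HttpClient instance are illustrative assumptions.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

static class HttpRetrySample
{
    private static readonly HttpClient httpClient = new HttpClient();

    public static Task<HttpResponseMessage> GetWithRetryAsync(string url)
    {
        // Status codes treated as transient (see the list that follows this example).
        var transientCodes = new[]
        {
            HttpStatusCode.RequestTimeout,          // 408
            (HttpStatusCode)429,                    // Too Many Requests
            HttpStatusCode.InternalServerError,     // 500
            HttpStatusCode.BadGateway,              // 502
            HttpStatusCode.ServiceUnavailable,      // 503
            HttpStatusCode.GatewayTimeout           // 504
        };

        var policy = Policy
            .Handle<HttpRequestException>()         // Connection-level failures surface as exceptions.
            .OrResult<HttpResponseMessage>(r => Array.IndexOf(transientCodes, r.StatusCode) >= 0)
            .WaitAndRetryAsync(
                retryCount: 3,
                sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))); // 2, 4, 8 sec

        // HttpClient does not throw for non-success status codes, so the policy inspects the response.
        return policy.ExecuteAsync(() => httpClient.GetAsync(url));
    }
}
```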
The following HTTP codes typically indicate that a retry is appropriate:

- 408 Request Timeout
- 429 Too Many Requests
- 500 Internal Server Error
- 502 Bad Gateway
- 503 Service Unavailable
- 504 Gateway Timeout

If you base your retry logic on exceptions, the following typically indicate a transient failure where no connection could be established:

- WebExceptionStatus.ConnectionClosed
- WebExceptionStatus.ConnectFailure
- WebExceptionStatus.Timeout
- WebExceptionStatus.RequestCanceled

In the case of a service unavailable status, the service might indicate the appropriate delay before retrying in the Retry-After response header or a different custom header. Services might also send additional information as custom headers, or embedded in the content of the response.

Do not retry for status codes representing client errors (errors in the 4xx range), except for 408 Request Timeout.

Thoroughly test your retry strategies and mechanisms under a range of conditions, such as different network states and varying system loadings.

Retry strategies

The following are the typical types of retry strategy intervals:

Exponential. A retry policy that performs a specified number of retries, using a randomized exponential back-off approach to determine the interval between retries. For example:

```csharp
var random = new Random();

var delta = (int)((Math.Pow(2.0, currentRetryCount) - 1.0) *
            random.Next((int)(this.deltaBackoff.TotalMilliseconds * 0.8),
                        (int)(this.deltaBackoff.TotalMilliseconds * 1.2)));

var interval = (int)Math.Min(checked(this.minBackoff.TotalMilliseconds + delta),
               this.maxBackoff.TotalMilliseconds);

retryInterval = TimeSpan.FromMilliseconds(interval);
```

Incremental. A retry strategy with a specified number of retry attempts and an incremental time interval between retries. For example:

```csharp
retryInterval = TimeSpan.FromMilliseconds(this.initialInterval.TotalMilliseconds +
                (this.increment.TotalMilliseconds * currentRetryCount));
```

LinearRetry. A retry policy that performs a specified number of retries, using a specified fixed time interval between retries. For example:

```csharp
retryInterval = this.deltaBackoff;
```

Transient fault handling with Polly

Polly is a library to programmatically handle retries and circuit breaker strategies. The Polly project is a member of the .NET Foundation. For services where the client does not natively support retries, Polly is a valid alternative and avoids the need to write custom retry code, which can be hard to implement correctly. Polly also provides a way to trace errors when they occur, so that you can log retries.

More information

Connection resiliency

Data Points - EF Core 1.1

Performance antipatterns for cloud applications

A performance antipattern is a common practice that is likely to cause scalability problems when an application is under pressure.

Here is a common scenario: An application behaves well during performance testing. It's released to production, and begins to handle real workloads. At that point, it starts to perform poorly: rejecting user requests, stalling, or throwing exceptions. The development team is then faced with two questions:

- Why didn't this behavior show up during testing?
- How do we fix it?

The answer to the first question is straightforward. It's very difficult in a test environment to simulate real users, their behavior patterns, and the volumes of work they might perform. The only completely sure way to understand how a system behaves under load is to observe it in production.
To be clear, we aren't suggesting that you should skip performance testing. Performance tests are crucial for getting baseline performance metrics. But you must be prepared to observe and correct performance issues when they arise in the live system.

The answer to the second question, how to fix the problem, is less straightforward. Any number of factors might contribute, and sometimes the problem only manifests under certain circumstances. Instrumentation and logging are key to finding the root cause, but you also have to know what to look for.

Based on our engagements with Microsoft Azure customers, we've identified some of the most common performance issues that customers see in production. For each antipattern, we describe why the antipattern typically occurs, symptoms of the antipattern, and techniques for resolving the problem. We also provide sample code that illustrates both the antipattern and a suggested solution.

Some of these antipatterns may seem obvious when you read the descriptions, but they occur more often than you might think. Sometimes an application inherits a design that worked on-premises, but doesn't scale in the cloud. Or an application might start with a very clean design, but as new features are added, one or more of these antipatterns creeps in. Regardless, this guide will help you to identify and fix these antipatterns.

Here is the list of the antipatterns that we've identified:

| Antipattern | Description |
| --- | --- |
| Busy Database | Offloading too much processing to a data store. |
| Busy Front End | Moving resource-intensive tasks onto background threads. |
| Chatty I/O | Continually sending many small network requests. |
| Extraneous Fetching | Retrieving more data than is needed, resulting in unnecessary I/O. |
| Improper Instantiation | Repeatedly creating and destroying objects that are designed to be shared and reused. |
| Monolithic Persistence | Using the same data store for data with very different usage patterns. |
| No Caching | Failing to cache data. |
| Synchronous I/O | Blocking the calling thread while I/O completes. |

Busy Database antipattern

Offloading processing to a database server can cause it to spend a significant proportion of time running code, rather than responding to requests to store and retrieve data.

Problem description

Many database systems can run code. Examples include stored procedures and triggers. Often, it's more efficient to perform this processing close to the data, rather than transmitting the data to a client application for processing. However, overusing these features can hurt performance, for several reasons:

- The database server may spend too much time processing, rather than accepting new client requests and fetching data.
- A database is usually a shared resource, so it can become a bottleneck during periods of high use.
- Runtime costs may be excessive if the data store is metered. That's particularly true of managed database services. For example, Azure SQL Database charges for Database Transaction Units (DTUs).
- Databases have finite capacity to scale up, and it's not trivial to scale a database horizontally. Therefore, it may be better to move processing into a compute resource, such as a VM or App Service app, that can easily scale out.

This antipattern typically occurs because:

- The database is viewed as a service rather than a repository. An application might use the database server to format data (for example, converting to XML), manipulate string data, or perform complex calculations.
- Developers try to write queries whose results can be displayed directly to users. For example, a query might combine fields, or format dates, times, and currency according to locale.
- Developers are trying to correct the Extraneous Fetching antipattern by pushing computations to the database.
- Stored procedures are used to encapsulate business logic, perhaps because they are considered easier to maintain and update.

The following example retrieves the 20 most valuable orders for a specified sales territory and formats the results as XML. It uses Transact-SQL functions to parse the data and convert the results to XML. You can find the complete sample here.

```sql
SELECT TOP 20
  soh.[SalesOrderNumber]  AS '@OrderNumber',
  soh.[Status]            AS '@Status',
  soh.[ShipDate]          AS '@ShipDate',
  YEAR(soh.[OrderDate])   AS '@OrderDateYear',
  MONTH(soh.[OrderDate])  AS '@OrderDateMonth',
  soh.[DueDate]           AS '@DueDate',
  FORMAT(ROUND(soh.[SubTotal],2),'C')  AS '@SubTotal',
  FORMAT(ROUND(soh.[TaxAmt],2),'C')    AS '@TaxAmt',
  FORMAT(ROUND(soh.[TotalDue],2),'C')  AS '@TotalDue',
  CASE WHEN soh.[TotalDue] > 5000 THEN 'Y' ELSE 'N' END AS '@ReviewRequired',
  (
  SELECT
    c.[AccountNumber]     AS '@AccountNumber',
    UPPER(LTRIM(RTRIM(REPLACE(
      CONCAT( p.[Title], ' ', p.[FirstName], ' ', p.[MiddleName], ' ', p.[LastName], ' ', p.[Suffix]),
      '  ', ' '))))       AS '@FullName'
  FROM [Sales].[Customer] c
    INNER JOIN [Person].[Person] p
      ON c.[PersonID] = p.[BusinessEntityID]
  WHERE c.[CustomerID] = soh.[CustomerID]
  FOR XML PATH ('Customer'), TYPE
  ),
  (
  SELECT
    sod.[OrderQty]        AS '@Quantity',
    FORMAT(sod.[UnitPrice],'C')          AS '@UnitPrice',
    FORMAT(ROUND(sod.[LineTotal],2),'C') AS '@LineTotal',
    sod.[ProductID]       AS '@ProductId',
    CASE WHEN (sod.[ProductID] >= 710) AND (sod.[ProductID] <= 720) AND (sod.[OrderQty] >= 5) THEN 'Y' ELSE 'N' END
                          AS '@InventoryCheckRequired'
  FROM [Sales].[SalesOrderDetail] sod
  WHERE sod.[SalesOrderID] = soh.[SalesOrderID]
  ORDER BY sod.[SalesOrderDetailID]
  FOR XML PATH ('LineItem'), TYPE, ROOT('OrderLineItems')
  )
FROM [Sales].[SalesOrderHeader] soh
WHERE soh.[TerritoryId] = @TerritoryId
ORDER BY soh.[TotalDue] DESC
FOR XML PATH ('Order'), ROOT('Orders')
```

Clearly, this is a complex query. As we'll see later, it turns out to use significant processing resources on the database server.

How to fix the problem

Move processing from the database server into other application tiers. Ideally, you should limit the database to performing data access operations, using only the capabilities that the database is optimized for, such as aggregation in an RDBMS.

For example, the previous Transact-SQL code can be replaced with a statement that simply retrieves the data to be processed.
```sql
SELECT
  soh.[SalesOrderNumber]  AS [OrderNumber],
  soh.[Status]            AS [Status],
  soh.[OrderDate]         AS [OrderDate],
  soh.[DueDate]           AS [DueDate],
  soh.[ShipDate]          AS [ShipDate],
  soh.[SubTotal]          AS [SubTotal],
  soh.[TaxAmt]            AS [TaxAmt],
  soh.[TotalDue]          AS [TotalDue],
  c.[AccountNumber]       AS [AccountNumber],
  p.[Title]               AS [CustomerTitle],
  p.[FirstName]           AS [CustomerFirstName],
  p.[MiddleName]          AS [CustomerMiddleName],
  p.[LastName]            AS [CustomerLastName],
  p.[Suffix]              AS [CustomerSuffix],
  sod.[OrderQty]          AS [Quantity],
  sod.[UnitPrice]         AS [UnitPrice],
  sod.[LineTotal]         AS [LineTotal],
  sod.[ProductID]         AS [ProductId]
FROM [Sales].[SalesOrderHeader] soh
  INNER JOIN [Sales].[Customer] c ON soh.[CustomerID] = c.[CustomerID]
  INNER JOIN [Person].[Person] p ON c.[PersonID] = p.[BusinessEntityID]
  INNER JOIN [Sales].[SalesOrderDetail] sod ON soh.[SalesOrderID] = sod.[SalesOrderID]
WHERE soh.[TerritoryId] = @TerritoryId
  AND soh.[SalesOrderId] IN (
    SELECT TOP 20 SalesOrderId
    FROM [Sales].[SalesOrderHeader] soh
    WHERE soh.[TerritoryId] = @TerritoryId
    ORDER BY soh.[TotalDue] DESC)
ORDER BY soh.[TotalDue] DESC, sod.[SalesOrderDetailID]
```

The application then uses the .NET Framework System.Xml.Linq APIs to format the results as XML.

```csharp
// Create a new SqlCommand to run the Transact-SQL query
using (var command = new SqlCommand(...))
{
    command.Parameters.AddWithValue("@TerritoryId", id);

    // Run the query and create the initial XML document
    using (var reader = await command.ExecuteReaderAsync())
    {
        var lastOrderNumber = string.Empty;
        var doc = new XDocument();
        var orders = new XElement("Orders");
        doc.Add(orders);
        XElement lineItems = null;

        // Fetch each row in turn, format the results as XML, and add them to the XML document
        while (await reader.ReadAsync())
        {
            var orderNumber = reader["OrderNumber"].ToString();
            if (orderNumber != lastOrderNumber)
            {
                lastOrderNumber = orderNumber;

                var order = new XElement("Order");
                orders.Add(order);
                var customer = new XElement("Customer");
                lineItems = new XElement("OrderLineItems");
                order.Add(customer, lineItems);

                var orderDate = (DateTime)reader["OrderDate"];
                var totalDue = (Decimal)reader["TotalDue"];
                var reviewRequired = totalDue > 5000 ? 'Y' : 'N';

                order.Add(
                    new XAttribute("OrderNumber", orderNumber),
                    new XAttribute("Status", reader["Status"]),
                    new XAttribute("ShipDate", reader["ShipDate"]),
                    ... // More attributes, not shown.

                var fullName = string.Join(" ",
                    reader["CustomerTitle"],
                    reader["CustomerFirstName"],
                    reader["CustomerMiddleName"],
                    reader["CustomerLastName"],
                    reader["CustomerSuffix"]
                )
                .Replace("  ", " ") // remove double spaces
                .Trim()
                .ToUpper();

                customer.Add(
                    new XAttribute("AccountNumber", reader["AccountNumber"]),
                    new XAttribute("FullName", fullName));
            }

            var productId = (int)reader["ProductID"];
            var quantity = (short)reader["Quantity"];
            var inventoryCheckRequired = (productId >= 710 && productId <= 720 && quantity >= 5) ? 'Y' : 'N';

            lineItems.Add(
                new XElement("LineItem",
                    new XAttribute("Quantity", quantity),
                    new XAttribute("UnitPrice", ((Decimal)reader["UnitPrice"]).ToString("C")),
                    new XAttribute("LineTotal", RoundAndFormat(reader["LineTotal"])),
                    new XAttribute("ProductId", productId),
                    new XAttribute("InventoryCheckRequired", inventoryCheckRequired)
                ));
        }
        // Match the exact formatting of the XML returned from SQL
        var xml = doc
            .ToString(SaveOptions.DisableFormatting)
            .Replace(" />", "/>");
    }
}
```

NOTE: This code is somewhat complex. For a new application, you might prefer to use a serialization library.
However, the assumption here is that the development team is refactoring an existing application, so the method needs to return the exact same format as the original code.

Considerations

- Many database systems are highly optimized to perform certain types of data processing, such as calculating aggregate values over large datasets. Don't move those types of processing out of the database.
- Do not relocate processing if doing so causes the database to transfer far more data over the network. See the Extraneous Fetching antipattern.
- If you move processing to an application tier, that tier may need to scale out to handle the additional work.

How to detect the problem

Symptoms of a busy database include a disproportionate decline in throughput and response times in operations that access the database.

You can perform the following steps to help identify this problem:

1. Use performance monitoring to identify how much time the production system spends performing database activity.
2. Examine the work performed by the database during these periods.
3. If you suspect that particular operations might cause too much database activity, perform load testing in a controlled environment. Each test should run a mixture of the suspect operations with a variable user load. Examine the telemetry from the load tests to observe how the database is used.
4. If the database activity reveals significant processing but little data traffic, review the source code to determine whether the processing can better be performed elsewhere.

If the volume of database activity is low or response times are relatively fast, then a busy database is unlikely to be a performance problem.

Example diagnosis

The following sections apply these steps to the sample application described earlier.

Monitor the volume of database activity

The following graph shows the results of running a load test against the sample application, using a step load of up to 50 concurrent users. The volume of requests quickly reaches a limit and stays at that level, while the average response time steadily increases. Note that a logarithmic scale is used for those two metrics.

The next graph shows CPU utilization and DTUs as a percentage of service quota. DTUs provide a measure of how much processing the database performs. The graph shows that CPU and DTU utilization both quickly reached 100%.

Examine the work performed by the database

It could be that the tasks performed by the database are genuine data access operations, rather than processing, so it is important to understand the SQL statements being run while the database is busy. Monitor the system to capture the SQL traffic and correlate the SQL operations with application requests.

If the database operations are purely data access operations, without a lot of processing, then the problem might be Extraneous Fetching.

Implement the solution and verify the result

The following graph shows a load test using the updated code. Throughput is significantly higher, over 400 requests per second versus 12 earlier. The average response time is also much lower, just above 0.1 seconds compared to over 4 seconds.

CPU and DTU utilization shows that the system took longer to reach saturation, despite the increased throughput.
Related resources

Extraneous Fetching antipattern

Busy Front End antipattern

Performing asynchronous work on a large number of background threads can starve other concurrent foreground tasks of resources, decreasing response times to unacceptable levels.

Problem description

Resource-intensive tasks can increase the response times for user requests and cause high latency. One way to improve response times is to offload a resource-intensive task to a separate thread. This approach lets the application stay responsive while processing happens in the background. However, tasks that run on a background thread still consume resources. If there are too many of them, the