1. Horizontally Scaling Compute Pattern
In this pattern, application workloads run on multiple interchangeable compute nodes, and capacity grows by adding nodes rather than enlarging a single server. Because the nodes are stateless and identical, they can be allocated and released as demand changes, which is how cloud resources are meant to be consumed. Potential benefits for applications using this pattern include enhanced scalability, availability, cost optimization, and user experience.
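The core of the pattern is that any node can serve any request, so a dispatcher can spread traffic across however many nodes currently exist. A minimal sketch (the `NodePool` class and node names are illustrative, not from any specific cloud SDK):

```python
import itertools

class NodePool:
    """Hypothetical pool of identical, stateless compute nodes.
    Scaling out means adding a node; a round-robin dispatcher
    spreads incoming requests across whatever nodes exist."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._cycle = itertools.cycle(self.nodes)

    def scale_out(self, node_name):
        # Horizontal scaling: add capacity by adding another node,
        # not by making any single node bigger.
        self.nodes.append(node_name)
        self._cycle = itertools.cycle(self.nodes)

    def dispatch(self, request_id):
        # Any node can handle any request because nodes hold no
        # per-user state -- a prerequisite for this pattern.
        return (next(self._cycle), request_id)

pool = NodePool(["node-1", "node-2"])
assignments = [pool.dispatch(f"req-{i}")[0] for i in range(4)]
```

In a real deployment the dispatcher role is played by a cloud load balancer; the point of the sketch is that statelessness is what lets the pool grow and shrink freely.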
2. Queue-Centric Workflow Pattern
This pattern focuses on the asynchronous delivery of command requests sent from the user interface to a back-end service for processing. You can use this pattern to decouple application tiers, especially between the web user interface and service tiers. Messages are queued and communicated from the web tier to the service tier in one direction. Reliable cloud queue services simplify implementation.
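The decoupling can be sketched with Python's standard-library `queue.Queue` standing in for a reliable cloud queue service; the tier functions and message shape below are illustrative assumptions:

```python
import queue
import threading

# queue.Queue stands in for a hosted cloud queue service.
commands = queue.Queue()
results = []

def web_tier(order_id):
    # The web UI tier only enqueues a command and returns
    # immediately; it never calls the service tier directly.
    commands.put({"command": "process-order", "order": order_id})

def service_tier():
    # The service tier consumes commands asynchronously, in one
    # direction: web tier -> queue -> service tier.
    while True:
        msg = commands.get()
        if msg is None:  # sentinel to stop the worker
            break
        results.append(f"processed {msg['order']}")
        commands.task_done()

worker = threading.Thread(target=service_tier)
worker.start()
web_tier("A-100")
web_tier("A-101")
commands.put(None)
worker.join()
```

Because the queue is the only contact point, either tier can be scaled, restarted, or temporarily unavailable without breaking the other.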
3. Auto-Scaling Pattern
This pattern makes horizontal scaling more practical and cost-effective by automating routine scaling activities: resources are added when demand rises and released when it falls, so capacity tracks load without manual intervention. Cloud-native applications can therefore handle dynamic increases or decreases in resource levels gracefully.
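A simple rule-based scaler captures the idea: pick the smallest node count that keeps per-node load within capacity, bounded above and below. This is a sketch with illustrative thresholds, not any provider's actual scaling policy:

```python
import math

def desired_nodes(current_load, capacity_per_node, min_nodes=1, max_nodes=10):
    """Rule-based auto-scaling sketch: choose the smallest node
    count that keeps per-node load at or below capacity, clamped
    between min_nodes and max_nodes (all values illustrative)."""
    needed = math.ceil(current_load / capacity_per_node)
    # Never drop below the floor (availability) or exceed the
    # ceiling (cost control).
    return max(min_nodes, min(max_nodes, needed))
```

Production auto-scaling services layer cooldown periods and smoothed metrics on top of a rule like this, so the fleet does not thrash when load oscillates around a threshold.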
4. Database Sharding Pattern
This pattern focuses on horizontally scaling data through sharding (dividing up data from a single database across two or more databases). Using this approach, you can overcome size, query performance, and transaction throughput limitations of traditional single-node databases. With managed sharding support, the economics of sharding a database become favorable.
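The essential mechanism is a deterministic routing function from a shard key to one database. A hash-based sketch (shard database names are hypothetical):

```python
import hashlib

def shard_for(key, num_shards):
    """Hash-based shard routing sketch: every key deterministically
    maps to one of num_shards databases, so all rows for a given
    key always land in (and are read from) the same shard."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# Example: route each customer's rows to one of four shard databases.
shard_dbs = [f"orders_shard_{i}" for i in range(4)]
target = shard_dbs[shard_for("customer-42", len(shard_dbs))]
```

One design note: simple modulo routing reshuffles most keys when `num_shards` changes, which is why managed sharding layers typically use consistent hashing or range-based partition maps instead.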
5. Node Failure Pattern
This pattern addresses application response when a compute node shuts down or fails. You can use this pattern to prepare, handle, and recover from occasional disruptions and failures of compute nodes where your application is running. A cloud application that does not account for node failure scenarios will not be reliable.
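One common building block for surviving node failure is retrying an idempotent operation, on the assumption that a failed attempt simply means the node went away mid-work. A minimal sketch (the retry helper and the simulated flaky call are illustrative):

```python
def run_with_retries(operation, attempts=3):
    """Failure-handling sketch: retry a call that may raise
    ConnectionError when the node running it disappears.
    Safe only if `operation` is idempotent, so repeating
    partially completed work causes no harm."""
    last_error = None
    for _ in range(attempts):
        try:
            return operation()
        except ConnectionError as err:
            last_error = err  # node failed; try again (possibly elsewhere)
    raise last_error

# Simulate an operation that fails twice before a healthy node succeeds.
calls = {"count": 0}

def flaky_operation():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("node went away")
    return "done"
```

The queue-centric pattern above complements this: if a node dies while holding a message, the queue's visibility timeout returns the message for another node to process.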
6. Multisite Deployment Pattern
This advanced pattern focuses on deploying a single application to more than one data center to improve the user experience for geographically dispersed users. It pays off only when users are distributed widely enough that serving them from multiple data centers yields a meaningful latency improvement. The pattern is also helpful for applications requiring a failover strategy if one data center becomes unavailable.
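The routing decision can be sketched as geo-affinity with failover: send each user to the data center mapped to their region, and fall back to any healthy site if the preferred one is down. All region and data-center names below are made up for illustration:

```python
def route_user(user_region, region_to_dc, dc_health):
    """Multisite routing sketch: prefer the data center assigned
    to the user's region; if it is unhealthy, fail over to any
    data center that is still up."""
    preferred = region_to_dc.get(user_region)
    if preferred is not None and dc_health.get(preferred, False):
        return preferred
    # Failover: pick any healthy site rather than failing the user.
    for dc, healthy in dc_health.items():
        if healthy:
            return dc
    raise RuntimeError("no healthy data center available")

region_to_dc = {"eu": "dc-frankfurt", "us": "dc-virginia"}
```

In practice this logic lives in a global DNS or traffic-manager service with health probes; the failover user accepts higher latency in exchange for availability.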