Case Study: How Custom Autoscaling Saved Us Time and Money
[Abstract]
If you struggle with scaling the infrastructure behind your application queue on AWS, this talk will share a solution that works: scaling the application based on the number of tasks in the queue.
[Description]
A distributed system lets an application process tasks asynchronously and also supports task scheduling. Asynchronous tasks are delivered to worker servers via message queues.
But if your message queues carry a variable load and need an autoscaling solution, the default and most common metrics available to you are CPU utilisation and memory utilisation. Unfortunately, these metrics are a poor basis for autoscaling under variable load.
The idea was to scale the workers based on the load, instead of permanently running bigger instances, and to save cost as well.
In this talk, I will share a solution that works on AWS infrastructure with Celery and Redis to scale your workers for variable load: how we solved the custom scaling problem to process all tasks in our queues in near real time (irrespective of the number of tasks in the message queue) while keeping the cost of our infrastructure as low as possible.
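To make the idea concrete, here is a minimal sketch of the kind of monitoring service the talk covers, assuming Celery's default Redis broker layout (pending tasks sit in a Redis list named after the queue). The queue name, CloudWatch namespace, and polling interval below are illustrative assumptions, not our exact production values.

# A minimal sketch: poll the Celery queue length in Redis and publish it
# as a custom CloudWatch metric. Names and intervals are assumptions.
import time

import boto3
import redis

QUEUE_NAME = "celery"          # assumed Celery queue name (Redis list key)
NAMESPACE = "Custom/Workers"   # hypothetical CloudWatch namespace

redis_client = redis.Redis(host="localhost", port=6379, db=0)
cloudwatch = boto3.client("cloudwatch")

while True:
    # LLEN on the queue key gives the number of pending tasks.
    backlog = redis_client.llen(QUEUE_NAME)

    # Publish the backlog as a custom metric; scaling decisions can then
    # act on queue depth instead of CPU or memory utilisation.
    cloudwatch.put_metric_data(
        Namespace=NAMESPACE,
        MetricData=[{
            "MetricName": "QueueBacklog",
            "Value": backlog,
            "Unit": "Count",
        }],
    )
    time.sleep(60)  # poll once a minute (assumed interval)

A CloudWatch alarm (or a small control loop) on the published QueueBacklog metric can then drive the worker fleet, which is the mechanism the scale up/down sections of the timeline walk through.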
[Talk Timeline]
- Introduction
- Distributed computing and challenges
- Problem statement
- Possible Solutions
- Why autoscaling based on CPU/memory didn't work for us
- Optimal Solution (Custom autoscaling)
- Implementation (a code sketch follows this timeline)
- Monitoring service
- Configuration
- Scale Up Logic
- Scale Down Logic
- Other similar possible solutions
- What we saved
- Q&A
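For the scale up/down logic referenced above, here is a hedged sketch in the same spirit, assuming the Celery workers run in an AWS Auto Scaling group and that one worker can drain a fixed number of tasks per cycle. The group name, tasks-per-worker target, and bounds are hypothetical, not our production configuration.

# A hedged sketch of the scaling decision, assuming an AWS Auto Scaling
# group for the workers. Thresholds and names are illustrative assumptions.
import boto3

ASG_NAME = "celery-workers"    # hypothetical Auto Scaling group name
TASKS_PER_WORKER = 100         # assumed throughput target for one worker
MIN_WORKERS, MAX_WORKERS = 1, 20

autoscaling = boto3.client("autoscaling")

def rescale(backlog: int) -> None:
    """Set the group's desired capacity from the current queue backlog."""
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME]
    )["AutoScalingGroups"][0]
    current = group["DesiredCapacity"]

    # Ceiling-divide the backlog by per-worker capacity, clamped to bounds:
    # scale up fast to drain the queue, scale down within the same limits.
    desired = max(MIN_WORKERS, min(MAX_WORKERS, -(-backlog // TASKS_PER_WORKER)))
    if desired != current:
        autoscaling.set_desired_capacity(
            AutoScalingGroupName=ASG_NAME,
            DesiredCapacity=desired,
            HonorCooldown=(desired < current),  # respect cooldown on scale down
        )

The asymmetry here (aggressive scale up, cooldown-respecting scale down) is one reasonable design choice for keeping tasks near real time without thrashing the fleet; the talk's Scale Up Logic and Scale Down Logic sections cover the trade-offs.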
[Pre-requisites]
- Understanding of how a distributed system works with a message queue
- Basics of AWS Auto Scaling groups
[Attendees takeaway] Gain an understanding of how monitoring tasks in a queue helped us process variable load. The talk is intended to share the solution to help other developers trying to solve a similar problem.
Note: The talk is suitable for both intermediate and advanced-level participants.