Amazon AWS - Auto Scaling reference



Auto Scaling 90min.com

https://railsadventures.wordpress.com/2015/02/13/auto-scaling-90min-com/


  • Metrics constantly flow from the ELB and the EC2 instances themselves into AWS CloudWatch.
    CloudWatch is AWS's monitoring platform. It receives data from within AWS, or custom data from outside, visualises the metrics as graphs, and lets you define alarms that fire when a metric crosses a threshold (see the first sketch after this list).
  • When a certain metric goes above/below its threshold, a CloudWatch alarm is triggered.
    For example, you can set an alarm for when the average CPU usage of your instances is above 60%.
    Alarms can trigger an "Auto Scaling Policy" (see the alarm sketch after the list).
  • An Auto Scaling Policy is how we scale up or down. It can be set to add a constant number of servers (add 5 servers, for example) or a certain percentage of our current number of servers (add 20%).
    We can have a few Auto Scaling Policies to handle different alarms. An example usage: add 20% of current servers if CPU > 50%, but add 75% of current servers if CPU > 85%. This way we can scale up/down faster or slower, depending on the circumstances (see the scaling-policy sketch after the list).
  • The Auto Scaling Policy decides to launch X instances from an AMI. An AMI is an image that should make your service available on boot (more on this later).
    Of course it may take some time to boot and run the proper processes, but eventually the instance should be running the service (see the launch-configuration sketch after the list).
  • The ELB performs health checks on the newly created instances. For example, the ELB might send an HTTP request to /ping on your instance. You should verify that an instance responds to /ping with 200 OK only when it is up and running with your service (see the health-check sketch after the list).
  • When the instances pass the health check they are added behind the ELB and start serving requests like all other instances in your service.
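A minimal boto3 sketch of the first item: pushing a custom metric into CloudWatch from outside AWS. The namespace "90min/App" and metric name "ActiveSessions" are made up for illustration; built-in ELB and EC2 metrics arrive without any code like this.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish one data point of a hypothetical application-level metric.
# CloudWatch graphs it and can alarm on it like any built-in metric.
cloudwatch.put_metric_data(
    Namespace="90min/App",               # illustrative namespace
    MetricData=[{
        "MetricName": "ActiveSessions",  # illustrative metric name
        "Value": 137.0,
        "Unit": "Count",
    }],
)
```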
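A sketch of the alarm from the second item: it fires when the average CPU of the group's instances stays above 60%, and its action is an Auto Scaling policy. The group name "web-asg" is assumed, and the policy ARN here is a placeholder for the one returned when the policy is created (next sketch).

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Placeholder ARN; in practice use the PolicyARN returned by
# autoscaling.put_scaling_policy() (see the next sketch).
scale_up_policy_arn = "arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:example"

cloudwatch.put_metric_alarm(
    AlarmName="web-asg-cpu-high",        # illustrative name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,                          # 5-minute average
    EvaluationPeriods=2,                 # two periods in a row
    Threshold=60.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[scale_up_policy_arn],  # trigger the scaling policy
)
```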
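The third item's example (add 20% at CPU > 50%, add 75% at CPU > 85%) could be wired up as two simple scaling policies on the same group, each attached to its own CloudWatch alarm. The policy names and the group "web-asg" are assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Gentle policy: grow the group by 20% of its current size.
gentle = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-up-20pct",
    PolicyType="SimpleScaling",
    AdjustmentType="PercentChangeInCapacity",
    ScalingAdjustment=20,
    Cooldown=300,
)

# Aggressive policy: grow the group by 75% for the CPU > 85% alarm.
aggressive = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-up-75pct",
    PolicyType="SimpleScaling",
    AdjustmentType="PercentChangeInCapacity",
    ScalingAdjustment=75,
    Cooldown=300,
)

# Each call returns a PolicyARN to use as the AlarmAction on the
# corresponding CloudWatch alarm (CPU > 50% and CPU > 85%).
print(gentle["PolicyARN"], aggressive["PolicyARN"])
```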
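A sketch of telling Auto Scaling which AMI to boot (fourth item): a launch configuration whose user data starts the service on first boot, so a fresh instance becomes useful without manual steps. The AMI ID, instance type, and service name are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# User data runs on first boot; here it just starts the (assumed) service
# so the instance can eventually pass the ELB health check.
user_data = """#!/bin/bash
systemctl start myapp.service
"""

autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v1",  # illustrative name
    ImageId="ami-0123456789abcdef0",      # placeholder AMI with the service baked in
    InstanceType="t3.small",
    UserData=user_data,
)
```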
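The /ping health check from the fifth item, configured on a classic ELB (the load balancer type in the original 2015 post). The load balancer name "web-elb" is assumed; the instance is added behind the ELB only after it passes this check.

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")  # classic ELB API

# Mark an instance healthy only after it answers /ping with 200 OK
# twice in a row; mark it unhealthy after two consecutive failures.
elb.configure_health_check(
    LoadBalancerName="web-elb",      # illustrative name
    HealthCheck={
        "Target": "HTTP:80/ping",    # protocol:port/path to probe
        "Interval": 30,              # seconds between probes
        "Timeout": 5,
        "UnhealthyThreshold": 2,
        "HealthyThreshold": 2,
    },
)
```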
