From Sword School to Qi School: Best Practices in AWS ECS and Serverless
Posted on 16-Apr-2017
TRANSCRIPT
Best Practice in AWS ECS and Serverless
• Last Updated: April 19, 2016 • Scheduled for 45 minutes
- The Challenges
- Foundational Concepts of ECS and Serverless
- New Challenges
- The Future
- Q & A
A Bit About Me
• Both an IT pro and a developer for the past 15 years
• Chief Architect of Astra Cloud (miiicasa.com) from Taiwan
• Experienced in IoT cloud platforms across multiple AWS regions globally
• Holder of all five AWS certificates:
- AWS Certified Solutions Architect - Associate
- AWS Certified SysOps Administrator - Associate
- AWS Certified Developer - Associate
- AWS Certified Solutions Architect - Professional
- AWS Certified DevOps Engineer - Professional
Challenges
• You pay too much for EC2 instances
• You pay even more for microservices
• Complexity in infrastructure: VPC, subnets, route tables, NAT, NACLs, security groups, ELB, ASG
• Complexity in A/B testing and blue/green deployment: CFN re-deploy, EB environment swap, CodePipeline/CodeDeploy, OpsWorks, etc.
• Complexity means error-proneness
More Challenges
• dev/testing/QA/staging/prod consistency
• CI & CD challenges
• Even worse when managing multiple AWS regions
• Service decoupling means nightmares
Questions
• Can I just focus on my service stack unit, instead of the computing unit (EC2)?
• Self-healing, auto-scaling, AZ-balancing?
• Log consolidation?
• Immutable and stateless architecture?
• Cost optimization and resource optimization?
• Still having full control over my tech stack (frameworks and languages)?
• Simple deployment, A/B and B/G?
AWS EC2 Container Service
"a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster of Amazon EC2 instances"
Auto Scaling Policy Design
• Scale out spot at 30%-60%
• Scale out on-demand when >= 60%
• Scale in on-demand when < 60%
• Scale in spot when <= 30%
• Keep a minimum of 1 on-demand or RI instance
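The policy above can be sketched as a plain decision function (a simplified sketch; a real deployment would wire these thresholds into CloudWatch alarms and scaling policies, and the function name and action labels here are illustrative assumptions, not AWS APIs):

```python
def scaling_actions(cpu_utilization, on_demand_count, min_on_demand=1):
    """Decide which capacity pools to adjust at a given cluster CPU utilization.

    Encodes the slide's policy: on-demand/RI instances form the baseline,
    spot instances absorb the elastic load, and on-demand scales out last
    and scales in first.
    """
    actions = []
    if cpu_utilization >= 60:
        # Heavy load: spot alone is not enough, add on-demand capacity.
        actions.append("scale-out-on-demand")
    else:
        # Below 60%: shed on-demand first, down to the RI/on-demand baseline.
        if on_demand_count > min_on_demand:
            actions.append("scale-in-on-demand")
        if cpu_utilization > 30:
            # 30%-60% band: grow the cheap spot pool.
            actions.append("scale-out-spot")
    if cpu_utilization <= 30:
        # Light load: shrink spot as well.
        actions.append("scale-in-spot")
    return actions
```

For example, at 45% utilization with 3 on-demand instances this yields both "scale-in-on-demand" and "scale-out-spot", reflecting the "on-demand as baseline, spot as stretch" idea.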
Simply Put
• Use on-demand/RI as the baseline, and stretch with spot
• On-demand scales out last, scales in first
• Try Spot Fleet if you need a couple of instances (let's talk about it next time)
Benefits and Tips
• Leverage ELB to build microservices
• Monitor service load with CloudWatch and adjust the spot fleet to scale services/tasks out/in dynamically
• Self-healing at the container level
• Fully-managed deployment and rolling updates with revisions
• Better resource utilization
• Consolidate application logs to CloudWatch Logs
• Create filters, metrics, and alarms from CloudWatch Logs
• Push your Docker images to ECR and deploy across regions with exactly the same image
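Consolidating container logs to CloudWatch Logs is configured per container through the awslogs log driver in the ECS task definition. A minimal sketch (the image URI, log group, and region are illustrative assumptions):

```json
{
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/web-service",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "web"
        }
      }
    }
  ]
}
```

Once the logs land in the group, metric filters and alarms can be built on top of them as the slide suggests.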
ECS Service Load Balancing
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html
Load Balancing on Random Ports
http://www.slideshare.net/JulienSIMON5/amazon-ecs-january-2016/12
Meteor Galaxy session-aware with random ports
http://www.slideshare.net/AmazonWebServices/dvo313-building-nextgeneration-applications-with-amazon-ecs
AWS Lambda / AWS API Gateway
“a compute service where you can upload your code to AWS Lambda and the service can run the code on your behalf using AWS infrastructure”
“a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale”
Mobile Integration
AWS Lambda
• Invocation types: RequestResponse (sync) and Event (async)
• SDK invocation with payload {"foo":"bar"} arrives as event.context = {"foo":"bar"} in the handler
API Gateway Lambda function Integration
• RESTful API: HTTP PUT /items/123 with body {"foo":"bar"}
• The Lambda handler receives event.param_id = 123 and event.http_body = {"foo":"bar"}
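A minimal handler sketch for the mapping above (assuming an API Gateway body-mapping template that copies the path parameter into `param_id` and the request body into `http_body`; those field names follow the slide and are not API Gateway defaults):

```python
def handler(event, context):
    """Handle PUT /items/{id} as mapped by an API Gateway integration template.

    Assumes the integration request template places the path parameter in
    'param_id' and the parsed JSON body in 'http_body', as on the slide.
    """
    item_id = event["param_id"]
    body = event["http_body"]
    # Echo back what was received; a real handler would persist to DynamoDB etc.
    return {"id": item_id, "stored": body}
```

Invoked with `{"param_id": "123", "http_body": {"foo": "bar"}}`, the handler returns `{"id": "123", "stored": {"foo": "bar"}}`.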
Pros
• Cloud-native, with your business code in Lambda
• No infrastructure to manage
• Leverage AWS PaaS infrastructure at scale
• Custom or federated authorization
• Very minimal cost for small-to-medium teams:
- 30M requests = $11.63 per month (Lambda)
- $4.25 per million requests (API Gateway)
http://www.slideshare.net/CaseyLee2/serverless-delivery
Cons - Lambda Limits
• Lambda's soft concurrency limit is 100
• 300-second max duration per invocation
• Lambda-in-VPC restrictions:
- private IP addresses
- ENI limit (default 20 * 5 = 100)
Cons - API Gateway
• 500-1000 QPS per AWS account
• 5M requests/month = $18.79
• 100 QPS = $974.07/month = 31,350 NTD
• No async or parallel invocation with Lambda
Cons - Performance
• Push and pull invocation models of Lambda -> delegate with higher memory
• No connection pooling -> always open/close connections in handler scope
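The "no connection pooling" point means each invocation should treat its database connection as disposable. A sketch of the open/close-per-invocation pattern (using sqlite3 purely as a stand-in for whatever database the function actually talks to):

```python
import sqlite3

def handler(event, context):
    """Open and close the DB connection inside the handler scope.

    With no pooling layer between Lambda and the database, holding a
    connection across invocations risks stale or exhausted connections,
    so each invocation opens its own and closes it before returning.
    """
    conn = sqlite3.connect(":memory:")  # stand-in for a real database DSN
    try:
        cur = conn.execute("SELECT ?", (event.get("value", 0),))
        (result,) = cur.fetchone()
        return {"result": result}
    finally:
        conn.close()  # always release the connection, even on errors
```

The per-invocation open/close adds latency, which is part of why the slide pairs this con with the suggestion to delegate to higher-memory (and thus faster) function configurations.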
Cons - Development
• Debugging via CloudWatch
• Immature CI/CD toolchains
• No support for PHP, Ruby, or Golang
• Re-deploying the whole bundle can be a pain
Use ECS
• Financial concern: when you have traffic of more than 100 QPS
• Operational concern: long-running processes or API services
• Language concern: Golang, PHP, Ruby, etc.
• Performance concern: need really big memory or CPU-optimized instances
• Protocol concern: WebSockets, MQTT, other TCP protocols
Use Serverless
• Small projects, simple business logic
• Focus on the code only
• No infrastructure management
• Stateless
• Quick microservices implementation
• Simple integration with other AWS services
- e.g. API Gateway updating DynamoDB, Kinesis, or SQS as a service proxy
Conclusions
• Containerize your stack, and try serverless as much as you can
• Build stateless applications
• Immutable architecture: every computing component can be replaced and scaled with no impact
• Focus on your business logic, not the infrastructure; forget your infrastructure
• Try not to use any EC2 at all; if you must, avoid SSHing into EC2 for manual operations
• Fully-managed and fully-automated is the way to go
• Embrace event-driven cloud computing