TRANSCRIPT

Packer, where DevOps begins
Jeff Hung
@jeffhung
• github.com/jeffhung
• Works at Trend Micro
  – Hadoop infrastructure
  – Platform-as-a-Service
• Experience
  – Running agile/scrum for 5 years
  – Running DevOps for 2 years
What is DevOps?
DevOps is like the "1992 Consensus": if you take it too seriously, you lose. (Just kidding?!)
Dev ♥ Ops
Continuous Integration / Delivery
Forever Stack
DevOps could be…
Release Early, Release Often
Fast Iteration
Forever Stack Tools
Jenkins, New Relic, Ganglia, Nagios, Cacti, Gradle, Ant, Solano, Chef, Ansible, Puppet, SaltStack, Logstash, Splunk, Papertrail, NoSQL, Balsamiq, IaaS, PaaS, Docker, Selenium
Every piece of software runs on an operating system.
Packer Workflow
Build → Provision → Post-Process
Targets: AWS EC2, VMware, VirtualBox, Docker, …
Driven by a single template: packer.json
Packer Workflow: Build
packer.json

{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY`}}",
    "aws_secret_key": "{{env `AWS_SECRET_KEY`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-9eaa1cf6",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example {{timestamp}}"
  }]
}
The variables section
The builders section
The variables section

  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY`}}",
    "aws_secret_key": "{{env `AWS_SECRET_KEY`}}"
  }
User Variables
• Call the user function to read a variable's value.
• Call the env function to read a value from an environment variable.
• The env function is only valid within the variables section.
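As a side note, a variable given a plain value in the variables section acts as a default that can be overridden with -var on the command line. A minimal sketch (the aws_region variable name here is illustrative, not from the talk):

```json
{
  "variables": {
    "aws_region": "us-east-1"
  },
  "builders": [{
    "type": "amazon-ebs",
    "region": "{{user `aws_region`}}"
  }]
}
```

Running `packer build -var 'aws_region=us-west-2' packer.json` would override the default.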
The builders section

  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-9eaa1cf6",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example {{timestamp}}"
  }]
Creates an EBS-backed AMI by launching a source AMI and repackaging it into a new AMI after provisioning.
source_ami is the source AMI to start from; ami_name names the resulting AMI, using the timestamp function to keep it unique.
$ packer build \
    -var 'aws_access_key=YOUR ACCESS KEY' \
    -var 'aws_secret_key=YOUR SECRET KEY' \
    packer.json
amazon-ebs output will be in this color.
==> amazon-ebs: Creating temporary keypair for this instance...
==> amazon-ebs: Creating temporary security group for this instance...
==> amazon-ebs: Authorizing SSH access on the temporary security group...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Waiting for instance to become ready...
==> amazon-ebs: Connecting to the instance via SSH...
==> amazon-ebs: Stopping the source instance...
==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Creating the AMI: packer-example 1371856345
==> amazon-ebs: AMI: ami-19601070
==> amazon-ebs: Waiting for AMI to become ready...
==> amazon-ebs: Terminating the source AMI instance...
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
==> amazon-ebs: Build finished.
==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
us-east-1: ami-19601070
Builders

Builders are responsible for creating machines and generating images from them for various platforms.

• Amazon EC2 (AMI)
• DigitalOcean
• Docker
• Google Compute Engine (GCE)
• OpenStack
• Parallels
• QEMU
• VirtualBox
• VMware
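For instance, the Docker builder needs little more than a base image. A hedged sketch, assuming the option names from the Packer docs of this era (commit keeps the provisioned container as an image rather than exporting a tar):

```json
{
  "builders": [{
    "type": "docker",
    "image": "ubuntu:14.04",
    "commit": true
  }]
}
```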
Packer Workflow: Provision
Customize with provisioners

{
  "variables": {…},
  "builders": […],
  "provisioners": [{
    "type": "shell",
    "script": "./scripts/install-puppet.sh"
  }, {
    "type": "puppet-masterless",
    "manifest_file": "puppet/manifest/site.pp",
    "module_paths": ["puppet/modules"],
    "hiera_config_path": "puppet/hiera.yaml"
  }]
}

Provisioners are executed one by one, in order.
Install puppet agent

  {
    "type": "shell",
    "script": "./scripts/install-puppet.sh"
  }
Provision machines using shell scripts.
Usually we reuse these scripts across different kinds of machines.
Provision with puppet scripts

  {
    "type": "puppet-masterless",
    "manifest_file": "puppet/manifest/site.pp",
    "module_paths": ["puppet/modules"],
    "hiera_config_path": "puppet/hiera.yaml"
  }
No need for a Puppet server.
Manifests, modules, and Hiera data can all be stored in Git.
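The talk does not show install-puppet.sh itself. As an assumption-laden sketch, the same step could be done with the shell provisioner's inline option instead of a separate script file (package names assume an Ubuntu source image):

```json
{
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update",
      "sudo apt-get install -y puppet"
    ]
  }]
}
```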
Provisioners

Provisioners install and configure software within running machines prior to turning them into machine images.

• Remote Shell
• Local Shell
• File Uploads
• PowerShell
• Windows Shell
• Ansible
• Chef Client/Solo
• Puppet Masterless/Server
• Salt
• Windows Restart
Packer Workflow: Post-Process → Local Repository
Packaging and Publishing

After the machine is built, we would like to:
• Package it as a zip-ball for local use
• Package it as a Vagrant box and publish it on Atlas
• Preserve the Vagrant box locally

Machine Built → Compress → Foo.zip
Machine Built → Package → Foo.box → Publish → Atlas
{
  …
  "post-processors": [{
    "type": "compress",
    "output": "{{.BuildName}}-{{isotime \"20060102\"}}.zip"
  }, [{
    "type": "vagrant",
    "output": "{{.BuildName}}-{{isotime \"20060102\"}}.box"
  }, {
    "type": "atlas",
    "token": "{{user `atlas_token`}}",
    "artifact": "trendmicro/centos62",
    "artifact_type": "virtualbox",
    "keep_input_artifact": true
  }]]
}
Post-Processor Chains
• Package as a zip-ball for local use
• Package as a Vagrant box and publish on Atlas
  {
    "type": "compress",
    "output": "{{.BuildName}}-{{isotime \"20060102\"}}.zip"
  }

Compress into a single archive. isotime takes a Go-style date format, and the compression format is inferred from the output file extension.
  [{
    "type": "vagrant",
    "output": "{{.BuildName}}-{{isotime \"20060102\"}}.box"
  }, {
    "type": "atlas",
    "token": "{{user `atlas_token`}}",
    "artifact": "trendmicro/centos62",
    "artifact_type": "virtualbox",
    "keep_input_artifact": true
  }]

The nested array defines a sequence: publish the Vagrant box to Atlas, with keep_input_artifact preserving the box packaged in the previous step locally.
Post-Processors

The post-processors section configures any post-processing done to the images built by the builders.

• compress
• vSphere
• Vagrant
• Vagrant Cloud
• Atlas
• docker-import
• docker-push
• docker-save
• docker-tag
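For Docker-built images, a chained sequence analogous to the Vagrant/Atlas chain above would tag and then push the image. A sketch, with a hypothetical repository name (a bare post-processor name like "docker-push" is shorthand for a step with no extra options):

```json
{
  "post-processors": [[
    {
      "type": "docker-tag",
      "repository": "trendmicro/myapp",
      "tag": "0.1"
    },
    "docker-push"
  ]]
}
```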
What Else Do You Need?
• Kickstart
  – Use a kickstart file to install Linux from an ISO
• chef/bento
  – Vagrant box Packer definitions by Chef
  – Published on Atlas: https://atlas.hashicorp.com/chef
• Windows
  – Windows Automated Installation Kit (AIK)
  – Unattended Windows Setup
Pets vs. Cattle

Pets: ticket-based, handcrafted, scale-up, smart hardware
Cattle: self-service, automated, scale-out, smart apps
Cattle Workflow

Jenkins drives the pipeline:
• Build RPM: code from Dev → RPMs in a YUM repo
• Build Image: Base.json → Std.json → Web.json, App.json, …, DB.json (plus Win.json) produce Dev.box, Web.box, App.box, …, DB.box, Win7.box, …, Win8.box in an image repo
• Deploy App: Tower Playbook deploys to AWS
SPN (Pets) Flow

Jenkins drives the pipeline:
• Build RPM: code from Dev → RPMs in a YUM repo
• Images: Base.json → Std.json and Win.json produce Dev.box, Win7.box, …, Win8.box
• Deploy App: Puppet Manifest deploys to DC / AWS
To Docker or not?
What is Your Flow?
• You need to define your own DevOps flow
• No need to build Rome in one day
• Consider company culture
• Consider tool adoption
Summary
• DevOps → fast iteration
• Packer as the starting point
• Builders, Provisioners, Post-Processors
• Pets or Cattle?
• Define your own DevOps workflow
THANK YOU!
Alternative Format?
But we need comments to add annotations and to disable entire experimental blocks...
It is one of the primary reasons we chose JSON as the configuration format: it is highly convenient to write a script to generate the configuration.
@mitchellh
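In that spirit, a minimal sketch (hypothetical, not from the talk) of generating packer.json from a Ruby script, where Ruby comments and flags stand in for JSON's missing comment syntax:

```ruby
require 'json'

# Toggle experimental blocks with a plain Ruby flag instead of JSON comments.
enable_docker = false

template = {
  "variables" => {
    "aws_access_key" => "{{env `AWS_ACCESS_KEY`}}",
    "aws_secret_key" => "{{env `AWS_SECRET_KEY`}}"
  },
  "builders" => [{
    "type" => "amazon-ebs",
    "region" => "us-east-1",        # annotate freely here
    "source_ami" => "ami-9eaa1cf6",
    "instance_type" => "t2.micro",
    "ssh_username" => "ubuntu",
    "ami_name" => "packer-example {{timestamp}}"
  }]
}

# Experimental Docker builder, disabled by the flag above.
if enable_docker
  template["builders"] << { "type" => "docker", "image" => "ubuntu:14.04", "commit" => true }
end

File.write("packer.json", JSON.pretty_generate(template))
```

Regenerating packer.json then becomes one `ruby gen.rb` away, and the annotations live in the generator rather than the artifact.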
.SUFFIXES: .json .yml

.yml.json:
	ruby -ryaml -rjson \
	  -e 'puts JSON.pretty_generate(YAML.load(ARGF))' \
	  < $< > $@
https://github.com/mitchellh/packer/issues/887
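The conversion rule above can be exercised without make. Assuming only Ruby's stdlib yaml and json modules, the same logic turns a commented YAML template into Packer-ready JSON (the YAML content here is an illustrative fragment):

```ruby
require 'yaml'
require 'json'

# YAML allows comments; they are simply dropped in the generated JSON.
yaml_template = <<-YAML
# Experimental AWS builder settings
variables:
  aws_region: us-east-1   # change per environment
builders:
  - type: amazon-ebs
    region: us-east-1
YAML

puts JSON.pretty_generate(YAML.load(yaml_template))
```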