[Open Source Consulting] OpenStack - Ceph, Neutron, HA, Multi-Region


OpenStack - Ceph & Neutron

Open Source Consulting (오픈소스컨설팅)

염진영 (jyy at osci.kr)

1. OpenStack
2. How to create an instance
3. Ceph - OpenStack with Ceph
4. Neutron - How Neutron works
5. OpenStack HA - controller / L3 agent
6. OpenStack multi-region

1. OpenStack

OpenStack: an open-source cloud computing platform

OpenStack components:

- Compute (Nova): creates and terminates virtual machines. KVM and Xen are the most widely used hypervisors, and Windows-family Hyper-V is supported as well. Scales out on legacy hardware with no special HW requirements.
- Object storage (Swift): a mature technology developed at Rackspace; distributes and stores large amounts of data such as images, volume backups, and snapshots.
- Block storage (Cinder): block-level storage attached to a running VM so that user data persists.
- Image (Glance): serves virtual machine images as templates for instance creation.
- Network (Neutron): a plugin/agent architecture that covers basic API usage while absorbing the requirements of many vendors; support for SDN/NFV components is planned.
- Identity (Keystone): authentication for the OpenStack services; manages identities, tokens, policies, services, and endpoints.

OpenStack characteristics:

- Six core services plus many optional and incubation services cover almost every component an IDC needs: virtual servers, networking, storage, billing, security, UI, images, orchestration, and so on.
- 596 participating companies, user groups in 180 countries, and tens of thousands of registered developers; strong leadership delivers two major releases a year.
- Many references, including NASA, PayPal, Cisco, CERN, Nectar, Intel, IBM, Seagate, and Sony.
- The technical goal is to give users and developers effectively unlimited computing resources: CPU and memory from servers, disks from storage, and connectivity from network equipment are treated as four physical building blocks and are combined on demand into virtual servers, networks, and storage.

keystone

- The core component providing the identity service
- Authenticates users and checks their access rights to the OpenStack components
- Defines and manages authorization policies
- Manages the service endpoints (network URLs) of the APIs and publishes them to users and services (the catalog)
- domain: separates users, groups, and projects, like a namespace
- project: contains users and groups
- user: carries out the roles assigned to it
- user group: manages users as a group
- role: assigned to a user; defines what authorization the user holds

Source: https://prosuncsedu.wordpress.com/2016/03/09/relationship-among-concepts-domain-project-role-user-group-user-and-token-in-openstack-keystone/
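As a quick sketch (not from the original slides; the credentials below are placeholders), the login/token flow described above maps to the CLI as follows:

# export OS_AUTH_URL=http://osc-openstack-controller.osci.kr:5000/v3
# export OS_USERNAME=admin OS_PASSWORD=secret          # placeholder credentials
# export OS_PROJECT_NAME=admin OS_USER_DOMAIN_NAME=default OS_PROJECT_DOMAIN_NAME=default
# openstack token issue                                # asks Keystone for a token; prints its id and expiry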

Glance

- Provides the OS images used when creating VM instances
- The image store can be a local filesystem, S3, Swift, rbd, and so on
- glance-api: the API for accessing Glance
- glance-registry: stores and serves metadata about the OS images
- Image store: holds the actual OS image data

Source: http://platform9.com/cnt/uploads/2015/09/glance.png
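A hedged CLI sketch of how an image such as the cirros image used later in this deck gets registered (file name illustrative):

# openstack image create --disk-format raw --container-format bare \
    --file cirros-0.3.4-x86_64-disk.raw --public cirros-0.3.4-x86_64   # stores the bits in the configured image store (e.g. rbd)
# openstack image list                                                 # the resulting id is what nova later passes as imageRef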

Cinder

- Provides block storage volumes
- Creates block devices and attaches them to instances
- Supports vendor storage (NetApp, EMC, etc.), LVM, Ceph, and more
- The OS volume provided at instance creation is an ephemeral disk that disappears when the instance is terminated; data that must survive should be written to a block device provided by Cinder
- cinder-api: the API for accessing Cinder
- cinder-volume: talks to the back end configured in the driver; manages volume attachment information
- cinder-scheduler: picks where to place a volume using filters & weights
- cinder-backup: connects to the back-end store used for backups
- data store: holds the actual data

Source: https://ilearnstack.files.wordpress.com/2013/04/cinder.png
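A hedged sketch of the volume life cycle described above (names illustrative):

# openstack volume create --size 1 vol-test-01          # cinder-scheduler picks a back end, cinder-volume creates the block device
# openstack server add volume vm-test-01 vol-test-01    # attaches the volume to a running instance as persistent storage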

Neutron – networking service

- Creates and manages networks providing L2, L3, routers, DHCP, and services (LB, firewall, VPN)
- neutron-server: the main network management component; the API for accessing Neutron; persists data to the DB
- neutron-openvswitch-agent: drives Open vSwitch to provide L2 and overlay networking
- neutron-l3-agent: uses virtual routers to connect internal private networks to the external network
- neutron-dhcp-agent: provides DHCP (via dnsmasq) to the VMs on private subnets
- neutron-metadata-agent: serves network metadata

Source: http://image.slidesharecdn.com/openstackoverview-meet-upsoct2013-131025125320-phpapp02/95/openstack-neutron-havana-overview-oct-2013-27-638.jpg?cb=1382705727
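A hedged sketch of building a tenant network with these components (names and CIDR illustrative; assumes an external network named "public"):

# openstack network create net01
# openstack subnet create --network net01 --subnet-range 10.10.10.0/24 subnet01   # dhcp-agent will serve this subnet
# openstack router create router01
# openstack router add subnet router01 subnet01            # l3-agent wires the subnet to the virtual router
# openstack router set --external-gateway public router01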

Nova

- Wires image, disk, CPU, memory, and network resources to the corresponding services and manages the VM instance life cycle
- nova-api: the API for accessing the Nova components
- nova-scheduler: decides which compute node an instance will run on, based on the configured filters and weighers
- nova-conductor: mediates between nova-compute and the database (removes direct DB access)
- nova-consoleauth: authenticates users accessing the console proxies
- nova-novncproxy: exposes VNC access to instances as a web-based console
- nova-compute: runs on each compute node; drives the hypervisor with the resources handed over by the other services to create and manage VMs
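A hedged one-line equivalent of the instance-creation walk-through in the next chapter (names illustrative, mirroring the POST body shown later):

# openstack server create --image cirros-0.3.4-x86_64 --flavor m1.tiny \
    --network net01 vm-test-01    # drives nova-api -> scheduler -> conductor -> nova-compute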

2. How to create an instance

- Required OpenStack projects: Nova, Cinder, Glance, Neutron, Keystone
- Services talk to each other over RESTful APIs
- Each service consists of several components
- Components within a service communicate through a message broker (RabbitMQ)
- Services store persistent data and object state in a database

Source: mastering-openstack (2016) - safari

1. Login

[Diagram: user → dashboard/CLI → Keystone (keystone API, keystone DB)]

- The user logs in from openstack-dashboard or the CLI with a user ID & password
- Keystone issues a token valid for a limited time (1 hour by default)

2. Obtain a token

[Diagram: as above]

Keystone answers the login with a freshly issued token, returned in the X-Subject-Token header:

HTTP/1.1 201 Created

Date: Thu, 07 Jul 2016 05:46:13 GMT

Server: Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5

X-Subject-Token:

gAAAAABXfeylv8IBfSlOS14ns0WjrOCEwNkhIeSXjrzosC_F10KcN2r6WvRhcvYw3Ke2Q9BehEVAC3LpC9xEejTNF1MjpBN

rwFOR70-LnYoQE4s1ar2Pupr2iX-H7DgD7M9NRc1FQyyO1lpu7guK4jsvETmfVUn4Cw

Vary: X-Auth-Token

x-openstack-request-id: req-644c25ed-67b9-4e0d-a8d2-6ae42ac9a724

Content-Length: 308

Keep-Alive: timeout=5, max=100

Connection: Keep-Alive

Content-Type: application/json

{"token": {"issued_at": "2016-07-07T05:46:13.000000Z", "audit_ids": ["5Ys4oIpARZu-etj8YbX6Bg"], "methods":

["password"], "expires_at": "2016-07-07T06:46:13.860861Z", "user": {"domain": {"id":

"556b867fff9c473bb20e857ee98efb8a", "name": "default"}, "id": "308dd673a1be4a8fbdc24b3493bcf513", "name":

"admin"}}}

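For reference, a hedged curl sketch of the request that yields a response like the one above (credentials and domain are placeholders):

# curl -i -X POST http://osc-openstack-controller.osci.kr:5000/v3/auth/tokens \
    -H "Content-Type: application/json" \
    -d '{"auth": {"identity": {"methods": ["password"], "password": {"user":
        {"name": "admin", "domain": {"name": "default"}, "password": "secret"}}}}}'
# the issued token comes back in the X-Subject-Token response header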

3. Request instance creation

[Diagram: as above]

The user creates an instance by choosing:
- Instance name
- Image
- Flavor
- Network
- Extras (key, security group)
and clicking "Launch Instance".

4. nova API

[Diagram: the NOVA box (nova API, scheduler, conductor, Message Broker, nova DB) and a Compute Node (nova compute, Hypervisor, Network, VM) join the picture]

POST request to create the instance (using the token received from Keystone):

POST /v2.1/be95edeb746c4f279bdbd666fc5f3f00/servers HTTP/1.1

Host: osc-openstack-controller.osci.kr:8774

X-Auth-Project-Id: be95edeb746c4f279bdbd666fc5f3f00

Accept-Encoding: gzip, deflate

Content-Length: 258

Accept: application/json

X-Auth-Token:

gAAAAABXfwWXq_PGkZAA2RZXBvNWTk5DE6ltWNkVLq3qjMUZhHipixcco5n1qkoAVZ8CIeUFL

JpP15CDhu-C6zvW46nw4wiupaPEj7ahY_8j-sXfUrKA7d7vO7kLUd78XzSWpAhgs4qTsZQK-

E_AMuPSCrlPaL3mgy73pgJW6uOWoiqbbKbde6X5ktmm_oX6VQ0ZOhwIlfks

Connection: keep-alive

User-Agent: python-novaclient

Content-Type: application/json

{"server": {"name": "vm-test-01", "imageRef": "0873afbc-7298-4361-b33d-87071e1d8df2",

"availability_zone": "nova", "flavorRef": "1", "OS-DCF:diskConfig": "AUTO", "max_count": 1,

"min_count": 1, "networks": [{"uuid": "0f4c4113-3cd5-4ffa-a43d-158152a1bac3"}]}}

5. nova – token check

[Diagram: as above]

HTTP request to Keystone to validate the API token:

GET /v3/auth/tokens HTTP/1.1

Host: osc-openstack-controller.osci.kr:35357

Accept-Encoding: gzip, deflate

X-Subject-Token:

gAAAAABXfwWXq_PGkZAA2RZXBvNWTk5DE6ltWNkVLq3qjMUZhHipixcco5n1qkoAVZ8CIeUFL

JpP15CDhu-C6zvW46nw4wiupaPEj7ahY_8j-sXfUrKA7d7vO7kLUd78XzSWpAhgs4qTsZQK-

E_AMuPSCrlPaL3mgy73pgJW6uOWoiqbbKbde6X5ktmm_oX6VQ0ZOhwIlfks

Accept: application/json

X-Auth-Token:

gAAAAABXfwkWRCC4jXbjAf5KxU99Coda5UxKC93FQcTOYNh86wTOIZdnqY2hrqg_ykBenhqvN

v6WalnpDz0v2ikzwCsvmwX7i7BfeU6dB9R9C_735phLDaudLjiKBlZK851Zc7MSPwS505JZaEo-

oi-SdflX4AMmLSXBSclkNtiF0NXXICYsPmI

Connection: keep-alive

User-Agent: python-keystoneclient

6. nova – token check complete

[Diagram: as above]

Keystone validates the API token and returns the result (allow/deny):

HTTP/1.1 200 OK

Content-Length: 369

Content-Type: application/json

Date: Fri, 08 Jul 2016 02:03:12 GMT

Connection: close

{"versions": [{"status": "SUPPORTED", "updated": "2011-01-21T11:33:21Z", "links": [{"href":

"http://10.0.0.226:8774/v2/", "rel": "self"}], "min_version": "", "version": "", "id": "v2.0"}, {"status":

"CURRENT", "updated": "2013-07-23T11:33:21Z", "links": [{"href":

"http://10.0.0.226:8774/v2.1/", "rel": "self"}], "min_version": "2.1", "version": "2.25", "id":

"v2.1"}]}

7. nova API – request validated

[Diagram: as above]

nova-api validates the incoming request and creates the initial DB entry for the instance.

8. nova API – message sent

[Diagram: as above]

nova-api sends the VM-creation request to the scheduler as an RPC call (rpc.cast) message:

{"oslo.message": "{\"_context_domain\": null, \"_context_request_id\": \"req-bf6fb61c-6001-4677-9de2-8750a7fd1 669\", \"_context_quota_class\": null, \"_context_service_catalog\":

[{\"endpoints\": [{\"adminURL\": \"http://osc-openstack- controller.osci.kr:8776/v2/be95edeb746c4f279bdbd666fc5f3f00\", \"region\": \"RegionOne\", \"internalURL\": \"http://osc-opens tack-

controller:8776/v2/be95edeb746c4f279bdbd666fc5f3f00\", \"publicURL\": \"http://osc-openstack-controller.osci.kr:8776/v2/ be95edeb746c4f279bdbd666fc5f3f00\"}], \"type\": \"volumev2\",

\"name\": \"cinderv2\"}, {\"endpoints\": [{\"adminURL\": \"http ://osc-openstack-controller.osci.kr:8776/v1/be95edeb746c4f279bdbd666fc5f3f00\", \"region\": \"RegionOne\", \"internalURL\": \

"http://osc-openstack-controller:8776/v1/be95edeb746c4f279bdbd666fc5f3f00\", \"publicURL\": \"http://osc-openstack-controller .osci.kr:8776/v1/be95edeb746c4f279bdbd666fc5f3f00\"}], \"type\":

\"volume\", \"name\": \"cinder\"}], \"_context_auth_token\": \"gAAAAABXg3yhvlTVj6mYGwsGolXCUkYxJ8uRiDHIg72zR3nsEuuE1XeKbTV0NymorlUAPrGbT1huIxcGopC1qkMySk_JoCtc4-

oK4fSngzlCE91gLTwJL4nBHF WXABWnfWVfAPV8Z3I85xPRwQtTGwEHUoTsyTiNjBQUxGNIPEuEMqfqCzkgwh28jfhxLiqf49TDdIaXfbz-\", \"_context_resource_uuid\": null, \"_co

ntext_user\": \"308dd673a1be4a8fbdc24b3493bcf513\", \"_context_user_id\": \"308dd673a1be4a8fbdc24b3493bcf513\", \"_context_sh ow_deleted\": false, \"namespace\": \"compute_task\",

\"_context_is_admin\": true, \"version\": \"1.10\", \"_context_project_ domain\": null, \"_context_timestamp\": \"2016-07-11T11:02:37.752483\", \"method\": \"build_instances\",

\"_context_remote_ad dress\": \"10.0.0.226\", \"_context_roles\": [\"admin\"], \"args\": {\"requested_networks\": {\"nova_object.version\": \"1.1\ ", \"nova_object.changes\": [\"objects\"],

\"nova_object.name\": \"NetworkRequestList\", \"nova_object.data\": {\"objects\": [{\"nova_object.version\": \"1.1\", \"nova_object.changes\": [\"network_id\", \"pci_request_id\", \"port_id\",

\"address\"], \"nova_object.name\": \"NetworkRequest\", \"nova_object.data\": {\"network_id\": \"0f4c4113-3cd5-4ffa-a43d-158152a1bac3\", \" pci_request_id\": null, \"port_id\": null,

\"address\": null}, \"nova_object.namespace\": \"nova\"}]}, \"nova_object.namespac e\": \"nova\"}, \"image\": {\"status\": \"active\", \"deleted\": false, \"container_format\": \"bare\", \"min_ram\": 0,

\"upd ated_at\": \"2016-06-08T06:54:08.000000\", \"min_disk\": 0, \"owner\": \"be95edeb746c4f279bdbd666fc5f3f00\", \"is_public\": t rue, \"deleted_at\": null, \"id\": \"0873afbc-7298-4361-

b33d-87071e1d8df2\", \"size\": 41126400, \"name\": \"cirros-0.3.4-x86 _64\", \"checksum\": \"56730d3091a764d5f8b38feeef0bfcef\", \"created_at\": \"2016-06-08T06:54:04.000000\",

\"disk_format\": \ "raw\", \"properties\": {}}, \"filter_properties\": {\"instance_type\": {\"nova_object.version\": \"1.1\", \"nova_object.name \": \"Flavor\", \"nova_object.data\": {\"disabled\": false,

\"root_gb\": 1, \"name\": \"m1.tiny\", \"flavorid\": \"1\", \"del eted\": false, \"created_at\": null, \"ephemeral_gb\": 0, \"updated_at\": null, \"memory_mb\": 512, \"vcpus\": 1, \"extra_spe cs\":

9. nova-scheduler – message received

[Diagram: as above]

nova-scheduler picks the message up from the queue and sees the instance-creation request.

10. nova-scheduler – host node selection

[Diagram: as above]

- Reads the state of the whole cluster from the nova DB
- Decides which compute node will launch the VM, based on filters, weighers, etc.

11. nova-scheduler – message sent

[Diagram: as above]

The scheduler puts a message on the queue to have the instance launched.

12. nova-compute – message received

[Diagram: as above]

nova-compute picks the scheduler's request up from the queue.

13. nova-compute – message sent (DB lookup)

[Diagram: as above]

nova-compute puts a message on the queue asking for the details of the VM to be launched.

14. nova-conductor – DB lookup, message sent

[Diagram: as above]

The conductor reads the compute node's request from the queue, looks the instance information up in the nova DB, and puts the answer back on the queue for the compute node.

15. nova-compute – message received

[Diagram: as above]

nova-compute reads the instance information from the queue.

16. nova-compute – image request to Glance

[Diagram: the Glance box (glance API, registry, glance DB) joins the picture]

nova-compute asks Glance, through its API, for the OS image the VM will use.

17. glance-api – token check

[Diagram: as above]

glance-api, just as Nova did, asks Keystone to validate the token.

18. glance-api – DB lookup, image information returned

[Diagram: a Ceph Storage cluster (3 MONs, 4 OSDs) joins the picture as the image back end]

glance-api returns the image information (image location, metadata, etc.).

19. nova-compute – network request to Neutron

[Diagram: the Neutron box (neutron server, scheduler, plugin/agent, Message Broker, neutron DB) and a Network node (DHCP, Router, IP/Port) join the picture]

nova-compute sends a request to the Neutron API to have a network allocated for the VM.

20. neutron-server – token check

[Diagram: as above]

neutron-server, just as Nova did, asks Keystone to validate the token.

21. neutron-server – agent information gathering

[Diagram: as above]

neutron-server contacts the network agents through the queue and collects the current network information.

22. neutron-server – store & message sent

[Diagram: as above]

The network information is saved to the DB and put on the queue for nova-compute.

23. nova-compute – message received (network)

[Diagram: as above]

nova-compute connects to the queue and reads the network information.

24. nova-compute – volume request to Cinder

[Diagram: the Cinder box (cinder API, scheduler, cinder backup, Message Broker, cinder DB) joins the picture]

If a volume for persistent data is also being allocated, nova-compute requests it through the Cinder API.

25. cinder-api – token check

[Diagram: as above]

cinder-api validates the token in the same way.

26. cinder-api – volume lookup & message sent

[Diagram: as above]

cinder-api looks the volume information up in its DB and puts it on the queue.

27. nova-compute – message received (volume information)

[Diagram: as above]

nova-compute connects to the queue and reads the volume information.

28. nova-compute – VM creation request to the hypervisor

[Diagram: as above]

nova-compute hands the hypervisor a VM definition built from all the resources allocated so far and asks it to create the VM.

29. Hypervisor – Ceph access & resource setup

[Diagram: as above]

When Ceph is configured as the back end for Nova, Glance, or Cinder, the hypervisor connects to the Ceph cluster and prepares the resources the instance needs (copies the OS image, creates volumes).

30. nova-compute – new instance information update

[Diagram: as above]

nova-compute updates the nova DB, via the conductor, with the information about the newly created VM.

31. nova-api – status to the UI

[Diagram: as above]

The dashboard queries the VM state from the nova DB through the nova API and renders it in the web UI.

3. Ceph - Ceph

The limits of traditional storage:

- Data volumes are exploding, overwhelming hardware-based storage
- Hard to keep up with data growth
- Hard to change
- Scale-up designs hit a ceiling
- Ongoing maintenance costs

The rise of SDS (Software Defined Storage):

- Scale-out storage, provided and managed in software, e.g. GlusterFS, HDFS, Ceph
- Agile & flexible service delivery
- Escapes lock-in to a particular storage vendor
- No investment in expensive proprietary hardware
- Lower operating costs

CEPH

- A reliable, scalable, high-performance distributed storage system providing object, block, and file storage
- Started from a paper in 2003; Inktank was founded in 2012 to commercialize Ceph, and Red Hat acquired it in April 2014
- Scales on commodity hardware, with fast recovery and replication built in
- Built on storage clusters called RADOS (Reliable Autonomic Distributed Object Store)
- Files are placed within RADOS by the CRUSH (Controlled Replication Under Scalable Hashing) algorithm

Ceph Interfaces

[Diagram: the Ceph access interfaces built on top of RADOS]

RADOS (Reliable Autonomic Distributed Object Store)

- The core of a Ceph cluster
- Made up of Ceph Monitors and Ceph object storage daemons (OSDs)
- OSDs: manage objects in logical groups (pools); store, replicate, and recover data; backfilling and rebalancing; health-check each other; access control
- Monitors: maintain the cluster map and hand it to clients; an odd number, at least 3, for quorum
- Cluster map (CRUSH map): monitor map, OSD map, placement group map, (metadata server map)
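For orientation, an illustrative way (not from the slides) to see the MON/OSD makeup of a running cluster:

# ceph -s          # health, monitor quorum, OSD count, PG states
# ceph mon stat    # the monitor quorum members
# ceph osd tree    # the OSD/CRUSH hierarchy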

How read or write

Data read & write:
1. The client receives the latest cluster map from a MON
2. The client itself computes, with CRUSH (cluster map + pool ID + file name), which OSDs to read from or write to
3. The client connects directly to those OSDs and performs the read or write
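This client-side CRUSH computation can be observed directly; a hedged example using the image pool and object id that appear later in this deck:

# ceph osd map osc_images 0873afbc-7298-4361-b33d-87071e1d8df2
# prints the placement group and the acting set of OSDs that CRUSH chose
# for that object, i.e. the OSDs the client would contact directly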

3. Ceph - OpenStack with Ceph

OpenStack with Ceph

Nova, Cinder, and Glance are integrated through the Ceph Block Device (RBD).

OpenStack with Ceph - setup

[Create pools and client keys]

# ceph osd pool create osc_vms 64
# ceph osd pool create osc_volumes 64
# ceph osd pool create osc_images 64

# ceph auth get-or-create client.osc_nova mon 'allow r' osd 'allow class-read object_prefix osc_children, allow rwx pool=osc_vms, allow rwx pool=osc_volumes, allow rx pool=osc_images' -o /etc/ceph/ceph.client.osc_nova.keyring
# ceph auth get-or-create client.osc_cinder mon 'allow r' osd 'allow class-read object_prefix osc_children, allow rwx pool=osc_volumes, allow rx pool=osc_images' -o /etc/ceph/ceph.client.osc_cinder.keyring
# ceph auth get-or-create client.osc_glance mon 'allow r' osd 'allow class-read object_prefix osc_children, allow rwx pool=osc_images' -o /etc/ceph/ceph.client.osc_glance.keyring

# vi /etc/ceph/ceph.conf
---------------------------------------------------------------------------
[client.osc_nova]
keyring = /etc/ceph/ceph.client.osc_nova.keyring
[client.osc_glance]
keyring = /etc/ceph/ceph.client.osc_glance.keyring
[client.osc_cinder]
keyring = /etc/ceph/ceph.client.osc_cinder.keyring
---------------------------------------------------------------------------

Separate the data and storage networks:
- Data network: the segment where rbd reads/writes happen (MONs, OSDs)
- Storage network: the segment where the Ceph cluster replicates/rebalances (OSDs)

Jumbo frames: mtu -> 9000
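On the OpenStack side, the matching client settings would look roughly like this (a hedged sketch; exact option names vary by release, and the secret uuid is the one visible in the libvirt XML later in this chapter):

# /etc/glance/glance-api.conf
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = osc_images
rbd_store_user = osc_glance

# /etc/cinder/cinder.conf
[DEFAULT]
enabled_backends = ceph
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = osc_volumes
rbd_user = osc_cinder
rbd_secret_uuid = 8a6768e5-a38e-44c9-b3ba-0fb0655b9938

# /etc/nova/nova.conf
[libvirt]
images_type = rbd
images_rbd_pool = osc_vms
rbd_user = osc_nova
rbd_secret_uuid = 8a6768e5-a38e-44c9-b3ba-0fb0655b9938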

With OpenStack nova & glance

# rbd -p osc_vms ls
1c6c7ed4-7fb5-438c-a0b7-9a4ca002e623_disk
356feed0-24cd-411b-8f8e-3588948f0347_disk

# rbd -p osc_images ls
0873afbc-7298-4361-b33d-87071e1d8df2

With OpenStack cinder

# rbd -p osc_volumes-ssd ls
volume-e9bd0f9f-cf7e-4cdc-8657-26bc226e93ea

# rbd -p osc_volumes ls
volume-0ccf9c54-778e-4c7b-bd5c-4998899e5a50

With OpenStack nova & cinder

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <auth username='osc_nova'>
    <secret type='ceph' uuid='8a6768e5-a38e-44c9-b3ba-0fb0655b9938'/>
  </auth>
  <source protocol='rbd' name='osc_vms/1c6c7ed4-7fb5-438c-a0b7-9a4ca002e623_disk'>
    <host name='10.0.0.97' port='6789'/>
    <host name='10.0.0.98' port='6789'/>
    <host name='10.0.0.99' port='6789'/>
  </source>
  <backingStore/>
  <target dev='vda' bus='virtio'/>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <auth username='osc_nova'>
    <secret type='ceph' uuid='8a6768e5-a38e-44c9-b3ba-0fb0655b9938'/>
  </auth>
  <source protocol='rbd' name='osc_volumes/volume-0ccf9c54-778e-4c7b-bd5c-4998899e5a50'>
    <host name='10.0.0.97' port='6789'/>
    <host name='10.0.0.98' port='6789'/>
    <host name='10.0.0.99' port='6789'/>
  </source>
  <backingStore/>
  <target dev='vdb' bus='virtio'/>
  <serial>0ccf9c54-778e-4c7b-bd5c-4998899e5a50</serial>
  <alias name='virtio-disk1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
</disk>

4. Neutron - Neutron

The limits of the traditional network architecture:

- Building complex, flexible networks is hard
- Changes are difficult
- Hard to cope with traffic growth
- Hard to roll out new services
- Ongoing maintenance costs
- A poor fit for cloud environments

SDN (Software Defined Network):

- Decouples the hardware and software of traditional network equipment, so that network functions implemented in software can be composed and operated dynamically on commodity cloud infrastructure: a software-centric network infrastructure
- Control plane: manages the information that decides how packets are handled and pushes it to the data plane
- Data plane: receives packets and forwards or drops them according to the rules defined by the control plane

Neutron

- The component that builds complex cloud networking in OpenStack
- Implemented on SDN concepts
- Uses Open vSwitch, Linux bridge, Linux network namespaces, VxLAN, VLAN, GRE, and more
- Supports multi-tenant networks
- Provides load balancing, firewall, VPN, and other services
- Offers a variety of plugins

4. Neutron - How Neutron works

Compute node

[Diagram: Integration Bridge (br-int) and Tunnel Bridge (br-tun)]

Check the Open vSwitch layout on each node with:
# ovs-vsctl show

Network node

[Diagram: Tunnel Bridge (br-tun), Integration Bridge (br-int), and External Bridge (br-ex)]

Check the Open vSwitch layout on each node with:
# ovs-vsctl show

Traffic paths (diagram slides):

- When the subnets are the same
- When the subnets differ (three stages)
- When communicating with the outside (two stages)

openvswitch

# ovs-vsctl show

c30c4452-93d8-4a09-b929-33f7585c5c21

Bridge br-tun

fail_mode: secure

Port patch-int

Interface patch-int

type: patch

options: {peer=patch-tun}

Port "vxlan-0a00006b"

Interface "vxlan-0a00006b"

type: vxlan

options: {df_default="true", in_key=flow, local_ip="10.0.0.101",

out_key=flow, remote_ip="10.0.0.107"}

Port br-tun

Interface br-tun

type: internal

Bridge br-ex

Port br-ex

Interface br-ex

type: internal

Port phy-br-ex

Interface phy-br-ex

type: patch

options: {peer=int-br-ex}

Port "enp0s25"

Interface "enp0s25"

Bridge br-int

fail_mode: secure

Port patch-tun

Interface patch-tun

type: patch

options: {peer=patch-int}

Port "qr-3a0b2d2a-3a"

tag: 4

Interface "qr-3a0b2d2a-3a"

type: internal

Port br-int

Interface br-int

type: internal

Port "qr-cd7ee3c7-5f"

tag: 7

Interface "qr-cd7ee3c7-5f"

type: internal

Port "qr-60b7ca31-3f"

tag: 11

Interface "qr-60b7ca31-3f"

type: internal

Port int-br-ex

Interface int-br-ex

type: patch

options: {peer=phy-br-ex}

openvswitch

# ovs-dpctl dump-flows

recirc_id(0),in_port(25),eth(src=fa:16:3e:58:91:dc,dst=ff:ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=192.168.0.153,tip=192.168.0.153,op=1/0xff), packets:550, bytes:23100, used:2.147s, actions:6,21,22,16,push_vlan(vid=5,pcp=0),15
recirc_id(0),in_port(26),eth(src=fa:16:3e:aa:81:32,dst=33:33:00:00:00:01),eth_type(0x86dd),ipv6(tclass=0/0x3,frag=no), packets:550, bytes:47300, used:2.147s, actions:push_vlan(vid=9,pcp=0),15,set(tunnel(tun_id=0xa,src=10.0.0.101,dst=10.0.0.102,ttl=64,flags(df,key))),pop_vlan,19,set(tunnel(tun_id=0xa,src=10.0.0.101,dst=10.0.0.103,ttl=64,flags(df,key))),19,set(tunnel(tun_id=0xa,src=10.0.0.101,dst=10.0.0.107,ttl=64,flags(df,key))),19,23
recirc_id(0),tunnel(tun_id=0x35,src=10.0.0.103,dst=10.0.0.101,ttl=64,flags(-df-csum+key)),in_port(19),skb_mark(0),eth(src=fa:16:3e:25:2a:71,dst=ff:ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=169.254.0.1,tip=169.254.0.1,op=1/0xff), packets:367, bytes:15414, used:0.402s, actions:2,push_vlan(vid=6,pcp=0),15
recirc_id(0),in_port(21),eth(src=00:19:99:e6:17:98,dst=64:e5:99:f4:73:2f),eth_type(0x0806), packets:1, bytes:42, used:5.453s, actions:22
recirc_id(0),in_port(22),eth(src=64:e5:99:f4:73:2f,dst=00:19:99:e6:17:98),eth_type(0x0800),ipv4(frag=no), packets:4765421, bytes:546913956, used:0.143s, flags:SFP., actions:21
recirc_id(0),in_port(24),eth(src=fa:16:3e:bd:8b:42,dst=ff:ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=169.254.0.1,tip=169.254.0.1,op=1/0xff), packets:550, bytes:23100, used:2.147s, actions:push_vlan(vid=12,pcp=0),15,set(tunnel(tun_id=0x26,src=10.0.0.101,dst=10.0.0.102,ttl=64,flags(df,key))),pop_vlan,19,set(tunnel(tun_id=0x26,src=10.0.0.101,dst=10.0.0.103,ttl=64,flags(df,key))),19
recirc_id(0),in_port(22),eth(src=00:26:2d:01:3a:8c,dst=00:19:99:e6:17:98),eth_type(0x0806), packets:0, bytes:0, used:never, actions:21
recirc_id(0),tunnel(tun_id=0x5a,src=10.0.0.102,dst=10.0.0.101,ttl=64,flags(-df-csum+key)),in_port(19),skb_mark(0),eth(src=fa:16:3e:da:a8:1b,dst=ff:ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=169.254.0.1,tip=169.254.0.1,op=1/0xff), packets:1464, bytes:61488, used:5.788s, actions:12,push_vlan(vid=8,pcp=0),15
recirc_id(0),in_port(22),eth(src=52:54:00:36:9f:be,dst=ff:ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=192.168.0.87,tip=192.168.0.37,op=1/0xff), packets:10614, bytes:636840, used:0.359s, actions:21,6,16,push_vlan(vid=5,pcp=0),15,pop_vlan,25
recirc_id(0),in_port(25),eth(src=fa:16:3e:58:91:dc,dst=fa:16:3e:fe:fc:f4),eth_type(0x0800),ipv4(frag=no), packets:984739, bytes:163659561, used:0.105s, flags:R., actions:22
recirc_id(0),in_port(21),eth(src=00:19:99:e6:17:98,dst=90:9f:33:65:47:02),eth_type(0x0800),ipv4(frag=no), packets:60, bytes:6580, used:0.002s, flags:P., actions:22

namespace

Namespace:
- Used to manage networking (NICs, IPs, firewall, routes) in an isolated space
- A NIC can live in only one namespace; physical NICs live in the root namespace
- Used to control several different private subnets independently

# ip netns list
qdhcp-503cddce-c04e-4623-9a82-81881cd12b3d
qdhcp-6fcfdb3b-ba47-44e1-92b8-37fb2c8098a3
qdhcp-88fc6e9a-250c-48de-9123-b7f3a0ba241e
qdhcp-acfb008b-5c7f-4c6b-840c-238218993977
qdhcp-bb7817fc-4195-40ee-9e0e-82b320707d4b
qdhcp-d7802001-8c26-450a-a6d8-03bd9e006b3b
qdhcp-e2431162-047e-4468-bea8-f4d48bf69eb4
qrouter-46fb5544-801b-4f07-a775-e1f990a48544
qrouter-68e9823e-e726-4a7b-8906-0b24b5e1de92
qrouter-72ef731c-0e66-4834-bc39-1c569552f88f
qrouter-8d1af2d6-f9d3-4445-b878-15987750faa2

[Diagram annotations: the qdhcp namespaces serve private networks 20.20.20.0/24, 172.172.0.0/24, 30.30.30.0/24, 10.10.10.0/24, 172.16.1.0/24, 10.10.10.0/24, and 100.100.100.0/24 (one currently unused); the qrouter namespaces hold L3 router gateways 10.10.10.1/24, 10.10.10.1/24, and 100.100.100.1/24]

namespace

dhcp namespace
# ip netns exec qdhcp-d7802001-8c26-450a-a6d8-03bd9e006b3b ip a
12: tap542598e9-d7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:51:5a:59 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.4/24 brd 10.10.10.255 scope global tap542598e9-d7
       valid_lft forever preferred_lft forever
(tap542598e9-d7: dhcp interface on private network 10.10.10.0/24)

router namespace (l3-agent)
# ip netns exec qrouter-68e9823e-e726-4a7b-8906-0b24b5e1de92 ip a
26: ha-6db87dac-46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:bd:8b:42 brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-6db87dac-46
       valid_lft forever preferred_lft forever
    inet 169.254.0.1/24 scope global ha-6db87dac-46
       valid_lft forever preferred_lft forever
27: qg-72eaaf00-98: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:58:91:dc brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.147/24 scope global qg-72eaaf00-98
       valid_lft forever preferred_lft forever
    inet 192.168.0.153/32 scope global qg-72eaaf00-98
       valid_lft forever preferred_lft forever
28: qr-a64b30bc-99: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:aa:81:32 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.1/24 scope global qr-a64b30bc-99
       valid_lft forever preferred_lft forever
(ha-6db87dac-46: L3 router HA interface (keepalived); qg-72eaaf00-98: L3 gateway to the public network; qr-a64b30bc-99: router interface on private network 10.10.10.0/24)

tap, linuxbridge

TAP device
- Used as the virtual network interface by hypervisors such as KVM and Xen
- An Ethernet frame sent to a TAP device is received by the VM attached to that TAP device

Linux bridge
- Behaves like a network hub, connecting physical and virtual network interfaces
- In OpenStack deployments that use Open vSwitch, it exists to implement security groups

# brctl show
bridge name         bridge id           STP enabled   interfaces
qbr023ddf37-be      8000.de52605518ba   no            qvb023ddf37-be
                                                      tap023ddf37-be
qbr06da74c5-8a      8000.0a9d8ef6e5cb   no            qvb06da74c5-8a
                                                      tap06da74c5-8a
qbr0e6ff60b-d6      8000.16a5aa6168b8   no            qvb0e6ff60b-d6
                                                      tap0e6ff60b-d6

(tap* = TAP device on the Linux bridge; qvb* = veth pair member toward Open vSwitch)

veth pair

veth pair
- Virtual network interfaces that come in pairs, like a patch cable
- Used for namespace-to-namespace communication and for connecting a Linux bridge to Open vSwitch

# ip a
220: qbr023ddf37-be: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP          (Linux bridge)
    link/ether de:52:60:55:18:ba brd ff:ff:ff:ff:ff:ff
    inet6 fe80::b0e1:fcff:fe74:ec37/64 scope link
       valid_lft forever preferred_lft forever
222: qvb023ddf37-be: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr023ddf37-be state UP qlen 1000          (interface connected to the Linux bridge)
    link/ether de:52:60:55:18:ba brd ff:ff:ff:ff:ff:ff
    inet6 fe80::dc52:60ff:fe55:18ba/64 scope link
       valid_lft forever preferred_lft forever
221: qvo023ddf37-be: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000          (interface connected to OVS)
    link/ether 02:88:95:44:44:0d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::88:95ff:fe44:440d/64 scope link
       valid_lft forever preferred_lft forever

# ethtool -S qvo023ddf37-be
NIC statistics:
     peer_ifindex: 222

Security Group

security group
- A collection of network access rules that restrict the traffic reaching an instance
- Applied as iptables rules
- iptables cannot be attached to Open vSwitch ports, so an extra Linux bridge is inserted to enforce them

Chain neutron-openvswi-i679c4649-6 (1 references)

target prot opt source destination

RETURN all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED /* Direct packets associated with a known session to the RETURN chain. */

RETURN tcp -- 0.0.0.0/0 0.0.0.0/0 tcp multiport dports 1:65535

RETURN udp -- 0.0.0.0/0 0.0.0.0/0 udp multiport dports 1:65535

RETURN icmp -- 0.0.0.0/0 0.0.0.0/0

RETURN tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22

RETURN tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80

RETURN all -- 0.0.0.0/0 0.0.0.0/0 match-set NIPv414a9c2c9-862e-485d-b02a- src

DROP all -- 0.0.0.0/0 0.0.0.0/0 state INVALID /* Drop packets that appear related to an existing connection (e.g. TCP ACK/FIN) but do not have an entry in conntrack. */

neutron-openvswi-sg-fallback all -- 0.0.0.0/0 0.0.0.0/0 /* Send unmatched traffic to the fallback chain. */
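A hedged sketch of where such rules come from (group name and ports illustrative):

# openstack security group rule create --protocol tcp --dst-port 22 default   # appears as the "tcp dpt:22" RETURN rule above
# openstack security group rule create --protocol icmp default                # appears as the icmp RETURN rule above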

Instance with Public IP

[Diagram: an instance reached through a floating (public) IP on the virtual router]

iptables for Public IP – L3 DNAT

# ip netns exec qrouter-72ef731c-0e66-4834-bc39-1c569552f88f iptables -t nat -nL

Chain neutron-l3-agent-PREROUTING (1 references)

target prot opt source destination

REDIRECT tcp -- 0.0.0.0/0 169.254.169.254 tcp dpt:80 redir ports 9697

DNAT all -- 0.0.0.0/0 192.168.0.167 to:10.10.10.60

DNAT all -- 0.0.0.0/0 192.168.0.136 to:10.10.10.27

DNAT all -- 0.0.0.0/0 192.168.0.185 to:20.20.20.5

DNAT all -- 0.0.0.0/0 192.168.0.119 to:10.10.10.19

DNAT all -- 0.0.0.0/0 192.168.0.131 to:10.10.10.69

DNAT all -- 0.0.0.0/0 192.168.0.140 to:10.10.10.55

DNAT all -- 0.0.0.0/0 192.168.0.169 to:10.10.10.90

DNAT all -- 0.0.0.0/0 192.168.0.190 to:10.10.10.94

Chain neutron-l3-agent-float-snat (1 references)

target prot opt source destination

SNAT all -- 10.10.10.60 0.0.0.0/0 to:192.168.0.167

SNAT all -- 10.10.10.27 0.0.0.0/0 to:192.168.0.136

SNAT all -- 20.20.20.5 0.0.0.0/0 to:192.168.0.185

SNAT all -- 10.10.10.19 0.0.0.0/0 to:192.168.0.119

SNAT all -- 10.10.10.69 0.0.0.0/0 to:192.168.0.131

SNAT all -- 10.10.10.55 0.0.0.0/0 to:192.168.0.140

SNAT all -- 10.10.10.90 0.0.0.0/0 to:192.168.0.169

SNAT all -- 10.10.10.94 0.0.0.0/0 to:192.168.0.190

VLAN

VLAN (Virtual LAN)
- Splits a single L2 network into multiple broadcast domains, separating networks logically
- Separation is by VLAN ID tag, so up to 4096 networks are possible
- The NICs and switches must support VLANs
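A hedged ml2 configuration sketch for VLAN tenant networks (physical network name and ID range illustrative):

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vlan
[ml2_type_vlan]
network_vlan_ranges = physnet1:100:199   # VLAN IDs neutron may hand out on physnet1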

GRE & VXLAN

GRE (Generic Routing Encapsulation) – a tunneling protocol developed by Cisco
- Underlay network IP + GRE header + overlay Ethernet frame
- Encapsulation: GRE, 42-byte overhead
- No hashing-based load balancing across paths

VxLAN (Virtual Extensible LAN) – an extension of VLAN
- Underlay network IP + UDP + VXLAN header + overlay Ethernet frame
- Encapsulation: UDP, 50-byte overhead
- Load balancing by hashing the 5-tuple
- 24-bit segment ID: more than 16M network segments

Network overlay

- VxLAN roles: network segmentation and tunneling
- Overlay networks are implemented with VxLAN tunneling
- Makes the networks of physically separate hosts behave like one flat network
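A hedged ml2/OVS-agent sketch for the VxLAN overlay (VNI range illustrative; local_ip matching the tunnel endpoints seen in the ovs-vsctl output earlier):

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
tenant_network_types = vxlan
[ml2_type_vxlan]
vni_ranges = 1:1000

# OVS agent settings (openvswitch_agent.ini on newer releases)
[ovs]
local_ip = 10.0.0.101    # this node's tunnel endpoint
[agent]
tunnel_types = vxlan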

5. OpenStack HA - controller

OSC in-house OpenStack layout

[Diagram:
- Controller Node #1-#3: mariadb (galera), NTP, RabbitMQ, Keystone (identity), Glance (image service), Cinder (block management), Nova (compute management), Neutron (network management) with the ML2 plug-in, a neutron-linuxbridge-agent (neutron-openvswitch-agent on #1), Neutron L3 agent, Neutron DHCP agent, Pacemaker, and HAProxy
- Compute Node #1-#6: KVM hypervisor, nova-compute, Neutron ML2 plug-in, neutron-linuxbridge-agent with a Linux bridge (neutron-openvswitch-agent with OpenVSwitch on #1)
- Ceph Node #1-#3: one CEPH MON plus three CEPH OSDs each]

OpenStack - HA

- Pacemaker: provides the Active-Active cluster for the HA setup; supplies the VIP
- HAProxy: load-balances the service API requests
- Galera: MariaDB multi-master clustering

출처 : production-ready-openstack(2016) - safari
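A hedged pcs sketch of how the VIP/haproxy group seen on the next slide could be built (addresses illustrative):

# pcs resource create openstack-int-vip ocf:heartbeat:IPaddr2 ip=10.0.0.230 cidr_netmask=24
# pcs resource create openstack-ext-vip ocf:heartbeat:IPaddr2 ip=192.168.0.230 cidr_netmask=24
# pcs resource create openstack-haproxy systemd:haproxy
# pcs resource group add openstack-group openstack-int-vip openstack-ext-vip openstack-haproxy
# pcs resource create openstack-nova-api systemd:openstack-nova-api clone   # active-active on every controller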

pacemaker

# pcs status

Online: [ controller1 controller2 controller3 ]

Resource Group: openstack-group

openstack-int-vip (ocf::heartbeat:IPaddr2): Started controller2

openstack-ext-vip (ocf::heartbeat:IPaddr2): Started controller2

openstack-haproxy (systemd:haproxy): Started controller2

Clone Set: openstack-nova-api-clone [openstack-nova-api]

Started: [ controller1 controller2 controller3 ]

Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]

Started: [ controller1 controller2 controller3 ]

Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]

Started: [ controller1 controller2 controller3 ]

Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]

Started: [ controller1 controller2 controller3 ]

Clone Set: openstack-cinder-api-clone [openstack-cinder-api]

Started: [ controller1 controller2 controller3 ]

Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]

Started: [ controller1 controller2 controller3 ]

Clone Set: openstack-cinder-volume-clone [openstack-cinder-volume]

Started: [ controller1 controller2 controller3 ]

Clone Set: openstack-glance-api-clone [openstack-glance-api]

Started: [ controller1 controller2 controller3 ]

Clone Set: neutron-server-clone [neutron-server]

Started: [ controller1 controller2 controller3 ]

Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]

Started: [ controller1 controller2 controller3 ]

PCSD Status:

controller1: Online

controller2: Online

controller3: Online

(The openstack-group resources (VIPs and haproxy) are active-backup, running on one controller at a time; the clone sets are active-active across all three controllers.)

5. OpenStack HA - L3 agent

network HA

# ip netns list | grep qrouter
qrouter-68e9823e-e726-4a7b-8906-0b24b5e1de92
qrouter-72ef731c-0e66-4834-bc39-1c569552f88f
qrouter-8d1af2d6-f9d3-4445-b878-15987750faa2

# neutron router-list
+----+---------------------+----------------------------------------------------------------------------------------------------------------------+-------------+------+
| id | name                | external_gateway_info                                                                                                | distributed | ha   |
+----+---------------------+----------------------------------------------------------------------------------------------------------------------+-------------+------+
| x  | demo-router         | null                                                                                                                 | False       | True |
| x  | admin-router        | {"network_id": "-", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "-", "ip_address": "192.168.0.147"}]} | False       | True |
| x  | router1             | {"network_id": "-", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "-", "ip_address": "192.168.0.120"}]} | False       | True |
| x  | Consulting-Router01 | {"network_id": "-", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "-", "ip_address": "192.168.0.111"}]} | False       | True |
+----+---------------------+----------------------------------------------------------------------------------------------------------------------+-------------+------+

router1

# ip netns exec qrouter-68e9823e-e726-4a7b-8906-0b24b5e1de92 ip a

26: ha-6db87dac-46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN

link/ether fa:16:3e:bd:8b:42 brd ff:ff:ff:ff:ff:ff

inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-6db87dac-46

valid_lft forever preferred_lft forever

inet 169.254.0.1/24 scope global ha-6db87dac-46

valid_lft forever preferred_lft forever

27: qg-72eaaf00-98: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN

link/ether fa:16:3e:58:91:dc brd ff:ff:ff:ff:ff:ff

inet 192.168.0.147/24 scope global qg-72eaaf00-98

valid_lft forever preferred_lft forever

inet 192.168.0.153/32 scope global qg-72eaaf00-98

valid_lft forever preferred_lft forever

28: qr-a64b30bc-99: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN

link/ether fa:16:3e:aa:81:32 brd ff:ff:ff:ff:ff:ff

inet 10.10.10.1/24 scope global qr-a64b30bc-99

valid_lft forever preferred_lft forever

router2

# ip netns exec qrouter-8d1af2d6-f9d3-4445-b878-15987750faa2 ip a

10: qr-cd7ee3c7-5f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN

link/ether fa:16:3e:2a:10:2b brd ff:ff:ff:ff:ff:ff

inet 100.100.100.1/24 scope global qr-cd7ee3c7-5f

valid_lft forever preferred_lft forever

inet6 fe80::f816:3eff:fe2a:102b/64 scope link nodad

valid_lft forever preferred_lft forever

18: qg-8cc7a0f3-65: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN

link/ether fa:16:3e:fe:fc:f4 brd ff:ff:ff:ff:ff:ff

inet 192.168.0.111/24 scope global qg-8cc7a0f3-65

valid_lft forever preferred_lft forever

inet 192.168.0.121/32 scope global qg-8cc7a0f3-65

valid_lft forever preferred_lft forever

inet 192.168.0.122/32 scope global qg-8cc7a0f3-65

valid_lft forever preferred_lft forever

inet 192.168.0.123/32 scope global qg-8cc7a0f3-65

valid_lft forever preferred_lft forever

inet 192.168.0.124/32 scope global qg-8cc7a0f3-65

valid_lft forever preferred_lft forever

inet 192.168.0.125/32 scope global qg-8cc7a0f3-65

valid_lft forever preferred_lft forever

21: ha-12c2936e-a4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN

link/ether fa:16:3e:da:a8:1b brd ff:ff:ff:ff:ff:ff

inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-12c2936e-a4

valid_lft forever preferred_lft forever

inet 169.254.0.1/24 scope global ha-12c2936e-a4

valid_lft forever preferred_lft forever

router3

# ip netns exec qrouter-72ef731c-0e66-4834-bc39-1c569552f88f ip a

7: qr-c562cbf6-e0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN

link/ether fa:16:3e:75:a7:4d brd ff:ff:ff:ff:ff:ff

inet 20.20.20.1/24 scope global qr-c562cbf6-e0

valid_lft forever preferred_lft forever

20: qr-60b7ca31-3f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN

link/ether fa:16:3e:4d:96:6a brd ff:ff:ff:ff:ff:ff

inet 30.30.30.1/24 scope global qr-60b7ca31-3f

valid_lft forever preferred_lft forever

22: qr-3a0b2d2a-3a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN

link/ether fa:16:3e:93:bd:76 brd ff:ff:ff:ff:ff:ff

inet 10.10.10.1/24 scope global qr-3a0b2d2a-3a

valid_lft forever preferred_lft forever

10: qg-63600375-01: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN

link/ether fa:16:3e:25:91:dd brd ff:ff:ff:ff:ff:ff

inet 192.168.0.119/32 scope global qg-63600375-01

valid_lft forever preferred_lft forever

inet 192.168.0.120/24 scope global qg-63600375-01

valid_lft forever preferred_lft forever

inet 192.168.0.131/32 scope global qg-63600375-01

valid_lft forever preferred_lft forever

inet 192.168.0.136/32 scope global qg-63600375-01

valid_lft forever preferred_lft forever

12: ha-2e38ae2e-31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN

link/ether fa:16:3e:25:2a:71 brd ff:ff:ff:ff:ff:ff

inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-2e38ae2e-31

valid_lft forever preferred_lft forever

inet 169.254.0.1/24 scope global ha-2e38ae2e-31

valid_lft forever preferred_lft forever


L3 HA

[Diagram: Network nodes 1-3, each hosting replicas of virtual routers 1-4]

# vi neutron.conf

l3_ha = true

max_l3_agents_per_router = 3

min_l3_agents_per_router = 2

l3_ha_net_cidr = 169.254.192.0/18

# ip netns list | grep qrouter

qrouter-68e9823e-e726-4a7b-8906-0b24b5e1de92

qrouter-72ef731c-0e66-4834-bc39-1c569552f88f

qrouter-8d1af2d6-f9d3-4445-b878-15987750faa2

qrouter-46fb5544-801b-4f07-a775-e1f990a48544

L3 HA (continued)

[Diagram: as above, with a keepalived process attached to each virtual router replica]

L3 HA – keepalived

[Diagram: as above]

keepalived: uses the IPVS (IP Virtual Server) kernel module, which provides load balancing, to configure a VIP and provide load-balancing/failover
- VRRP runs between the router replicas
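For reference, neutron writes one configuration per router under /var/lib/neutron/ha_confs/<router-id>/keepalived.conf; a heavily abridged, hypothetical example of its shape:

vrrp_instance VR_1 {
    state BACKUP                 # every replica starts as backup; VRRP elects the master
    interface ha-2e38ae2e-31     # the ha- interface inside the qrouter namespace
    virtual_router_id 1
    priority 50
    nopreempt
    advert_int 2                 # matches the 2s interval seen in the tcpdump later
    virtual_ipaddress {
        169.254.0.1/24 dev ha-2e38ae2e-31
    }
}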

L3 HA – 4 routers

# ps -ef | grep keepalived.con[f]
root   1350     1  0 Jun20 ?  00:01:03 keepalived -P -f /var/lib/neutron/ha_confs/68e9823e-e726-4a7b-8906-0b24b5e1de92/keepalived.conf -p /var/lib/neutron/ha_confs/68e9823e-e726-4a7b-8906-0b24b5e1de92.pid -r /var/lib/neutron/ha_confs/68e9823e-e726-4a7b-8906-0b24b5e1de92.pid-vrrp
root   1351  1350  0 Jun20 ?  00:03:04 keepalived -P -f /var/lib/neutron/ha_confs/68e9823e-e726-4a7b-8906-0b24b5e1de92/keepalived.conf -p /var/lib/neutron/ha_confs/68e9823e-e726-4a7b-8906-0b24b5e1de92.pid -r /var/lib/neutron/ha_confs/68e9823e-e726-4a7b-8906-0b24b5e1de92.pid-vrrp
root  11395     1  0 Jun09 ?  00:01:42 keepalived -P -f /var/lib/neutron/ha_confs/46fb5544-801b-4f07-a775-e1f990a48544/keepalived.conf -p /var/lib/neutron/ha_confs/46fb5544-801b-4f07-a775-e1f990a48544.pid -r /var/lib/neutron/ha_confs/46fb5544-801b-4f07-a775-e1f990a48544.pid-vrrp
root  11396 11395  0 Jun09 ?  00:02:54 keepalived -P -f /var/lib/neutron/ha_confs/46fb5544-801b-4f07-a775-e1f990a48544/keepalived.conf -p /var/lib/neutron/ha_confs/46fb5544-801b-4f07-a775-e1f990a48544.pid -r /var/lib/neutron/ha_confs/46fb5544-801b-4f07-a775-e1f990a48544.pid-vrrp
root  11443     1  0 Jun09 ?  00:01:40 keepalived -P -f /var/lib/neutron/ha_confs/8d1af2d6-f9d3-4445-b878-15987750faa2/keepalived.conf -p /var/lib/neutron/ha_confs/8d1af2d6-f9d3-4445-b878-15987750faa2.pid -r /var/lib/neutron/ha_confs/8d1af2d6-f9d3-4445-b878-15987750faa2.pid-vrrp
root  11447 11443  0 Jun09 ?  00:02:53 keepalived -P -f /var/lib/neutron/ha_confs/8d1af2d6-f9d3-4445-b878-15987750faa2/keepalived.conf -p /var/lib/neutron/ha_confs/8d1af2d6-f9d3-4445-b878-15987750faa2.pid -r /var/lib/neutron/ha_confs/8d1af2d6-f9d3-4445-b878-15987750faa2.pid-vrrp
root  11587     1  0 Jun09 ?  00:01:40 keepalived -P -f /var/lib/neutron/ha_confs/72ef731c-0e66-4834-bc39-1c569552f88f/keepalived.conf -p /var/lib/neutron/ha_confs/72ef731c-0e66-4834-bc39-1c569552f88f.pid -r /var/lib/neutron/ha_confs/72ef731c-0e66-4834-bc39-1c569552f88f.pid-vrrp
root  11588 11587  0 Jun09 ?  00:02:52 keepalived -P -f /var/lib/neutron/ha_confs/72ef731c-0e66-4834-bc39-1c569552f88f/keepalived.conf -p /var/lib/neutron/ha_confs/72ef731c-0e66-4834-bc39-1c569552f88f.pid -r /var/lib/neutron/ha_confs/72ef731c-0e66-4834-bc39-1c569552f88f.pid-vrrp

L3 HA – keepalived (Active/Standby)

[Diagram: each virtual router is Active on exactly one network node and Standby on the others, with VRRP running between the replicas]

L3 HA – VRRP health check

[Diagram: as above]

# ip netns exec qrouter-72ef731c-0e66-4834-bc39-1c569552f88f tcpdump -A -s0 -n -vvvv -i any vrrp

tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes

15:04:57.861022 IP (tos 0xc0, ttl 255, id 2918, offset 0, flags [none], proto VRRP (112), length 40)

169.254.192.7 > 224.0.0.18: vrrp 169.254.192.7 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20, addrs: 169.254.0.1

15:04:59.862077 IP (tos 0xc0, ttl 255, id 2919, offset 0, flags [none], proto VRRP (112), length 40)

169.254.192.7 > 224.0.0.18: vrrp 169.254.192.7 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20, addrs: 169.254.0.1

15:05:01.863125 IP (tos 0xc0, ttl 255, id 2920, offset 0, flags [none], proto VRRP (112), length 40)

169.254.192.7 > 224.0.0.18: vrrp 169.254.192.7 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20, addrs: 169.254.0.1

15:05:03.863625 IP (tos 0xc0, ttl 255, id 2921, offset 0, flags [none], proto VRRP (112), length 40)

169.254.192.7 > 224.0.0.18: vrrp 169.254.192.7 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20, addrs: 169.254.0.1

15:05:05.865123 IP (tos 0xc0, ttl 255, id 2922, offset 0, flags [none], proto VRRP (112), length 40)

169.254.192.7 > 224.0.0.18: vrrp 169.254.192.7 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20, addrs: 169.254.0.1

L3 HA – keepalived IP

[Diagram: as above]

On the node where a router is Active, keepalived has added the VIP 169.254.0.1/24 to the HA interface; on a Standby node the HA interface carries only its own 169.254.192.x address:

# ip netns exec qrouter-72ef731c-0e66-4834-bc39-1c569552f88f ip a

12: ha-2e38ae2e-31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc

noqueue state UNKNOWN

link/ether fa:16:3e:25:2a:71 brd ff:ff:ff:ff:ff:ff

inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-2e38ae2e-31

valid_lft forever preferred_lft forever

inet 169.254.0.1/24 scope global ha-2e38ae2e-31

valid_lft forever preferred_lft forever

# ip netns exec qrouter-72ef731c-0e66-4834-bc39-1c569552f88f ip a

6: ha-1b0a35be-94: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc

noqueue state UNKNOWN

link/ether fa:16:3e:ea:3d:99 brd ff:ff:ff:ff:ff:ff

inet 169.254.192.8/18 brd 169.254.255.255 scope global ha-1b0a35be-94

valid_lft forever preferred_lft forever

inet6 fe80::f816:3eff:feea:3d99/64 scope link

valid_lft forever preferred_lft forever


L3 HA – fail

[Diagram: as above, with one network node failing, so its Active router replicas stop sending VRRP advertisements]

L3 HA – failover

[Diagram: after the failure, VRRP elects a new Active replica on a surviving network node for each affected virtual router; keepalived moves the router's addresses there]

6. OpenStack – multi-region

Segregation

Requirements: physically separated infrastructure per service, while everything remains controllable from a single dashboard.

Multi-region layout

- Share Horizon
- Share Horizon and Keystone

Source: https://www.openstack.org/assets/presentation-media/divideandconquer-2.pdf

Add regions

Create new regions:
# openstack region create RegionTwo
# openstack region create RegionThree
# openstack region create RegionFour

Create endpoints for keystone:
# openstack endpoint create --region RegionTwo identity public http://osc-openstack2-controller.osci.kr:5000/v3
# openstack endpoint create --region RegionTwo identity admin http://osc-openstack2-controller.osci.kr:35357/v3
# openstack endpoint create --region RegionTwo identity internal http://osc-openstack2-controller:5000/v3
…
# openstack endpoint create --region RegionThree identity public http://osc-openstack3-controller.osci.kr:5000/v3
# openstack endpoint create --region RegionThree identity admin http://osc-openstack3-controller.osci.kr:35357/v3
# openstack endpoint create --region RegionThree identity internal http://osc-openstack3-controller:5000/v3
…
# openstack endpoint create --region RegionFour identity public http://osc-openstack4-controller.osci.kr:5000/v3
# openstack endpoint create --region RegionFour identity admin http://osc-openstack4-controller.osci.kr:35357/v3
# openstack endpoint create --region RegionFour identity internal http://osc-openstack4-controller:5000/v3
…
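With the endpoints in place, any client can be pointed at a given region (illustrative):

# openstack --os-region-name RegionTwo endpoint list   # the catalog as seen from RegionTwo
# openstack --os-region-name RegionTwo server list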

Dashboard with Multi-region

# vi /etc/openstack-dashboard/local_settings
AVAILABLE_REGIONS = [
    ('http://osc-openstack-controller.osci.kr:5000/v3', 'RegionOne'),
    ('http://osc-openstack2-controller.osci.kr:5000/v3', 'RegionTwo'),
    ('http://osc-openstack3-controller.osci.kr:5000/v3', 'RegionThree'),
    ('http://osc-openstack4-controller.osci.kr:5000/v3', 'RegionFour'),
]

Add each region Horizon should reach to the openstack-dashboard (django) settings.

With four regions

[Diagram: a single Horizon in front of RegionOne, RegionTwo, RegionThree, and RegionFour, each region running its own Nova, Glance, Keystone, Ceph, Cinder, and Neutron]

OPEN / SHARE / CONTRIBUTE / ADOPT / REUSE