JPoint 2017 - Where is my service, dude?

Где мой сервис, чувак? (Where is my service, dude?)


TRANSCRIPT

  • [Slides 1-8: title, speaker introduction, and agenda; the Cyrillic slide text did not survive extraction - legible fragments: "10", "6 Java", "DevOps"]

  • @EnableEverything
    @FixThis
    @FixThat
    @DoWhatever
    public class App {
        // no code - no cry
    }

    :(


  • [Diagram, built up over slides 13-19: three UI frontends on top of a tangle of API services, backed by several DBs]


  • Hope is not a strategy

  • KISS FTW

    [Diagram: a client calls the cards API and the transactions API directly, on ports :8080 and :8081]

  • ?!

  • In-place update
    Proxy
    Resource management
    Client side LB

  • KISS FTW

    [Diagram: the same client/services setup, now with vegeta generating load and ansible driving the in-place update]

  • vegeta

    $ echo "GET http://localhost" \
        | vegeta attack \
        | vegeta report

  • vegeta

    $ echo "GET http://localhost" | vegeta attack | vegeta report

    Requests      [total, rate]            500, 50.10
    Duration      [total, attack, wait]    9.9s, 9.9s, 8.2ms
    Latencies     [mean, 50, 95, 99, max]  9.7ms, 7.4ms, ...
    Bytes In      [total, mean]            73260, 146.52
    Bytes Out     [total, mean]            0, 0.00
    Success       [ratio]                  84.20%
    Status Codes  [code:count]             500:79  200:421
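    (Not from the slides: the report above - 500 requests at ~50 req/s over ~10 s - is what vegeta's -rate and -duration flags produce; the port below is an assumed example.)

    $ echo "GET http://localhost:8080/" \
        | vegeta attack -rate=50 -duration=10s \
        | vegeta report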

  • ansible

    $ ansible-playbook \
        -i inventory \
        playbook.yml

  • ansible inventory

    [frontend]
    server1
    server2

    [backend]
    server3

    [backend:vars]
    timeout = 2s

  • ansible playbook

    - hosts: frontend
      tasks:
        - apt:
            name: haproxy
            update_cache: yes
        - service:
            name: haproxy
            enabled: yes
            state: started
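    (A sketch, not from the talk: Ansible can also do the in-place update host by host with `serial`, so only one backend is down at a time; the service name is a made-up example.)

    - hosts: backend
      serial: 1
      tasks:
        - service:
            name: my-service
            state: restarted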

  • [Diagram: timeline t - the old version is stopped before the new one is ready, leaving a window where nothing serves :(]

  • In-place update
    Proxy
    Resource management
    Client side LB

  • Proxy has arrived

    [Diagram: HAProxy listens on :8080 in front of the cards API and the old transactions API on :8082]

    [Next slides: a new transactions API instance starts on :8083, HAProxy moves traffic over to it, and the old :8082 instance goes away]

  • HAProxy

    haproxy -f /my.conf -D -p /my.pid

  • HAProxy

    haproxy -f /my.conf -D -p /my.pid

    haproxy -f /my.conf -D -p /my.pid \
        -sf $(cat /my.pid)

    -sf asks the old process (by its PID) to finish, please;
    the old process answers OK!, unbinds its ports, serves its
    last connections, and exits
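    (A sketch, not from the slides: a minimal HAProxy config matching the diagrams - listen on :8080 and balance between the old and new transactions API; names and addresses are assumed.)

    frontend api
        bind :8080
        default_backend transactions

    backend transactions
        server old 127.0.0.1:8082 check
        server new 127.0.0.1:8083 check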

  • ?

    [Diagram, repeated across slides 67-75: a timeline t with requests 1-4 - the old process finishes the requests already in flight (1, 2) while haproxy sends the new ones (3, 4) to the new process]

  • Healthcheck!

    @RequestMapping("/health")
    public String getHealth() {
        return "OK";
    }
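    (A sketch, not from the slides: wiring this endpoint into HAProxy's health checking; the timings are assumed examples.)

    backend transactions
        option httpchk GET /health
        http-check expect string OK
        server new 127.0.0.1:8083 check inter 2s fall 3 rise 2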

  • :)

    healthcheck :) :)

    deployment :(

  • [Diagram: HAProxy on :8080 in front of the cards API and the transactions API on :8082]

  • [Diagram: the full UI / API / DB service mesh from earlier, revisited]

  • In-place update
    Proxy
    Resource management
    Client side LB

  • Mesos, Marathon & Co.

    [Diagram, built up over slides 85-90: Host 1 ... Host 5, each running a mesos agent (a); one host also runs the mesos master, which coordinates the agents; marathon sits on top of the master; you submit an app manifest to marathon, and it launches app instances across the hosts]
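    (A sketch, not from the slides: a minimal Marathon app manifest; the id, command, and sizes are assumed examples. $PORT0 is the port Marathon assigns to the task.)

    {
      "id": "/transactions-api",
      "cmd": "java -jar transactions-api.jar --server.port=$PORT0",
      "cpus": 0.5,
      "mem": 512,
      "instances": 3,
      "healthChecks": [
        { "protocol": "HTTP", "path": "/health" }
      ]
    }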

  • HAProxy?

    marathon -> marathon_lb.py -> HAProxy

    marathon-lb reads the service state from marathon and regenerates the HAProxy configuration on every change
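    (An assumption worth flagging: marathon-lb only exposes apps that carry its HAPROXY_GROUP label, roughly like this fragment added to the manifest above; the service port is a made-up example.)

      "labels": { "HAPROXY_GROUP": "external" },
      "portDefinitions": [ { "port": 10080, "protocol": "tcp" } ]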


  • HAProxy socket flags

    SO_REUSEADDR - bind even while old sockets linger in TIME_WAIT
    SO_REUSEPORT - let several sockets bind the same IP:PORT


  • spring-boot-starter-web

    spring-boot-starter-web pulls in
      org.springframework.boot : spring-boot-starter-tomcat

  • Tomcat

    public void bind() throws Exception {
        this.serverSock = ServerSocketChannel.open();
        this.socketProperties.setProperties(this.serverSock.socket());
        InetSocketAddress addr = /* ... */;
        this.serverSock.socket().bind(addr, this.getBacklog());
        // ...
    }

  • Tomcat

    public void setProperties(ServerSocket socket) throws SocketException {
        if (this.soReuseAddress != null) {
            socket.setReuseAddress(this.soReuseAddress.booleanValue());
        }
        // ... and no SO_REUSEPORT anywhere
    }

  • Tomcat

    // what we would like instead - but java.net.ServerSocket
    // has no setReusePort(), so Tomcat cannot opt in:
    public void setProperties(ServerSocket socket) throws SocketException {
        if (this.soReuseAddress != null) {
            socket.setReuseAddress(true);
        }
        if (this.soReusePort != null) {
            socket.setReusePort(true);
        }
        // ...
    }
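    (A sketch, not from the slides: since Java 9 you can set SO_REUSEPORT yourself through NIO channel options, on Linux/BSD; the port is an assumed example.)

    import java.net.InetSocketAddress;
    import java.net.StandardSocketOptions;
    import java.nio.channels.ServerSocketChannel;

    public class ReusePortServer {
        public static void main(String[] args) throws Exception {
            ServerSocketChannel ch = ServerSocketChannel.open();
            ch.setOption(StandardSocketOptions.SO_REUSEADDR, true);
            ch.setOption(StandardSocketOptions.SO_REUSEPORT, true); // Java 9+
            ch.bind(new InetSocketAddress(8080));
            // a second JVM can bind :8080 the same way; the kernel
            // spreads incoming connections across both listeners
            while (true) {
                ch.accept().close(); // accept and drop, just to show it runs
            }
        }
    }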


  • linux/net/core/sock_reuseport.c

    struct sock *reuseport_select_sock(hash, ...)
    {
        /* ... */
        struct sock *sk2 = NULL;

        reuse = rcu_dereference(sk->sk_reuseport_cb);
        socks = READ_ONCE(reuse->num_socks);
        sk2 = reuse->socks[reciprocal_scale(hash, socks)];
        /* ... */
        return sk2;
    }

  • linux/net/ipv4/inet_hashtables.c

    struct sock *__inet_lookup_listener(...)
    {
        u32 phash = 0;
        struct sock *result = NULL;

        phash = inet_ehashfn(net, daddr, hnum, saddr, sport);
        result = reuseport_select_sock(phash, ...);
        /* ... */
        return result;
    }

  • linux/net/ipv4/inet_hashtables.c

    static u32 inet_ehashfn(const struct net *net,
                            const __be32 laddr, const __u16 lport,
                            const __be32 faddr, const __be16 fport)
    {
        /* ... */
        return __inet_ehashfn(laddr, lport, faddr, fport,
                              inet_ehash_secret + net_hash_mix(net));
    }

  • linux/net/ipv4/inet_hashtables.c

    struct sock *__inet_lookup_listener(const __be32 saddr,
                                        __be16 sport, ...)
    {
        u32 phash = 0;
        struct sock *result = NULL;

        phash = inet_ehashfn(net, daddr, hnum, saddr, sport);
        result = reuseport_select_sock(phash, ...);
        /* ... */
        return result;
    }
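    (For context: reciprocal_scale(), from include/linux/kernel.h, is what maps the hash above onto one of the listening sockets - it scales a 32-bit value into [0, n) without a division:)

    static inline u32 reciprocal_scale(u32 val, u32 ep_ro)
    {
        return (u32)(((u64) val * ep_ro) >> 32);
    }

    So the chosen listener is a pure function of the connection 4-tuple and the current number of listening sockets.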

  • Here comes the pain

    [Slides 120-126: most of the text was lost in extraction; the legible fragments mention sockets, HAProxy connects, and RPS, and end on HAProxy :(]

  • In-place update
    Proxy
    Resource management
    Client side LB

  • We are server side

    [Diagram: app 1 -> proxy -> app 2]

  • Client side!

    [Diagram: app 1 -> app 2, no proxy in between]

  • A piece of proxy

  • Netflix Ribbon

    HttpResourceGroup resourceGroup = Ribbon
        .createHttpResourceGroup(
            "transactionsClient",
            ClientOptions.create()
                .withConfigurationBasedServerList("srv1:8080,srv2:8088"));

  • Netflix Ribbon

    HttpRequestTemplate<ByteBuf> requestTemplate = resourceGroup
        .newTemplateBuilder("requestTemplate")
        .withMethod("POST")
        .withUriTemplate("/transactions")
        .build();
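    (A sketch based on Ribbon's documented API: the template is turned into a request and executed; execute() blocks, observe() would return an Observable instead.)

    ByteBuf response = requestTemplate
        .requestBuilder()
        .build()
        .execute();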

  • Netflix Ribbon

    public class BaseLoadBalancer extends AbstractLoadBalancer ... {

        protected IRule rule = new RoundRobinRule();

        protected volatile List<Server> allServerList = Collections
            .synchronizedList(new ArrayList<Server>());

        protected volatile List<Server> upServerList = Collections
            .synchronizedList(new ArrayList<Server>());
    }

  • Netflix Ribbon

    Timer lbTimer = // ...;

    void setupPingTask() {
        lbTimer.schedule(
            new PingTask(),
            0,
            pingIntervalSeconds * 1000);
    }

  • Netflix Ribbon

    public Server chooseServer(Object key) {
        try {
            return rule.choose(key);
        } catch (Exception e) {
            log.warn("LoadBalancer: Error choosing server for key {}",
                key, e);
            return null;
        }
    }

  • Netflix Ribbon

    public class RoundRobinRule extends AbstractLoadBalancerRule {

        public Server choose(Object key) {
            nextServerIndex = incrementAndGetModulo(serverCount);
            return allServers.get(nextServerIndex);
        }
    }

  • Netflix Ribbon

    private int incrementAndGetModulo(int modulo) {
        for (;;) {
            int current = nextServerCyclicCounter.get();
            int next = (current + 1) % modulo;
            if (nextServerCyclicCounter.compareAndSet(current, next))
                return next;
        }
    }

  • Spring Cloud Netflix

    [Diagram: app 1 ... app 4 registering with and discovering each other through a central eureka server]

  • Eureka server

    @SpringBootApplication
    @EnableEurekaServer
    public class EurekaServer {
        public static void main(String[] args) {
            new SpringApplicationBuilder(EurekaServer.class)
                .web(true)
                .run(args);
        }
    }

  • Eureka client

    @SpringBootApplication
    @RestController
    @EnableEurekaClient
    public class CardsApi {
        public static void main(String... args) {
            SpringApplication.run(CardsApi.class, args);
        }
    }
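    (A sketch, not from the slides: a common way to call a eureka-registered service through Ribbon in Spring Cloud - a @LoadBalanced RestTemplate resolves the logical service name; "cards-api" is an assumed registration name.)

    @Bean
    @LoadBalanced
    RestTemplate restTemplate() {
        return new RestTemplate();
    }

    // wherever the bean is injected:
    String cards = restTemplate
        .getForObject("http://cards-api/cards", String.class);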

  • Load balancing: client-side vs server-side

    [Comparison slide; surviving fragment: per-app]

  • Summary

    In-place update
    healthcheck
    large-scale: Mesos/Marathon + Marathon-LB
    Client-side Service Discovery

    [the per-item verdicts did not survive extraction]

  • Thanks!

    stereohorse/jpoint2017
    stereohorse