Evolving architecture - further
DESCRIPTION
More tools and their evolution - how they make your life easier - from server configuration to job queuing systems, cloud computing and autoinstalls
TRANSCRIPT
Evolving architecture - further
Adding the Michelin stars
Leo Lapworth @ YAPC::EU 2008
http://leo.cuckoo.org/
(See separate evolving architecture presentation for first part)
Covering

Server configuration
Testing
Performance
Web Frameworks
Server maintenance
Object relational mapping
Cloud Computing
Kid splashing in puddle (representing the non-depth nature of this talk)
Deep sea glass squid (representing depth)
How to use this talk

How could this work for me?
Play!

Original content: 0%
It's about putting the pieces together
Managing many servers
Automatic server setup
Look at Puppet
Puppet & Puppetmaster

Kermit (Puppetmaster)
  Bert (app server)
  Ernie (image server)
  Elmo (database)

Yes, I know Kermit was Muppets and below were Sesame Street
What to manage?

Packages (installing/removing/configuring)
Services (running/restarting)
Users (adding/setting passwords)
Cron (installing)
Files and directories (installing/removing/checking)
Manage packages

# Editors
package { emacs: ensure => latest }
package { vim: ensure => latest }
package { nano: ensure => absent }
Services

class base_ssh {
  package { "openssh-client": ensure => installed }
  package { "openssh-server": ensure => installed }

  user { sshd:
    home      => "/var/run/sshd",
    shell     => "/usr/sbin/nologin",
    allowdupe => false,
  }

  service { ssh:
    ensure  => running,
    pattern => "sshd",
    require => Package["openssh-server"],
  }
}
Manage users

user { leouser:
  user     => 'leo',
  fullname => 'Leo',
  path     => $path,
  password => '$1$qW*&^%&^%$^&%$.',
}

# Setup ssh keys as well
file { "$path/$user/.ssh/authorized_keys":
  owner  => "$user",
  group  => "$user",
  mode   => 600,
  source => [ "$fileserver/default/$path/$user/ssh/authorized_keys" ],
}
Cron

cron { script_name:
  command => "/usr/local/bin/script.pl foo",
  user    => "web",
  hour    => '4',
  minute  => '30',
  ensure  => "present";
}
Node setup

node ernie {
  include base_ssh
  include base_users
  include cron::database
}
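The node definition above pulls in classes such as base_users that the slides never show. A hedged sketch of what one might contain - the class name comes from the slide, but everything inside it is hypothetical:

```puppet
# Hypothetical sketch of a class a node could include:
# one managed login user plus its home directory.
class base_users {
  user { "deploy":
    ensure => present,
    home   => "/home/deploy",
    shell  => "/bin/bash",
  }

  file { "/home/deploy":
    ensure => directory,
    owner  => "deploy",
    mode   => 755,
  }
}
```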
Installing on client

apt-get install puppet
puppetd --server=puppet.domain.com
Monitoring (simply)

With (m)any servers you need:
- Monitoring
- Alerts
Munin

Install (with puppet!):

Server:
package { "munin": ensure => latest }

Client:
package { "munin-node": ensure => latest }
Quick overview (Munin)

Server config:
[cuckoo;braga]
    address 62.222.3.44

And you get:
What's happening?

Also emails alerts:

cuckoo :: Braga :: WARNINGs:
packets is 0.00 (outside range [:1]),
CRITICALs:
sshd is 0.00 (outside range [1:]),
Custom plugins

config my_app
graph_title My App
graph_vlabel requests
graph_category Apps
graph_info Requests to my app
requests.label Requests
requests.type COUNTER

fetch my_app
requests.value 450424
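Behind that output is just an executable that munin-node runs: called with the argument "config" it prints the graph metadata, called with no argument it prints the current values. A minimal sketch in Perl - the plugin itself is hypothetical and the counter file path is an assumed location:

```perl
#!/usr/bin/perl
# Sketch of a Munin plugin for a hypothetical app request counter.
# /var/run/my_app/requests is an assumed location for the counter.
use strict;
use warnings;

# Metadata printed when munin-node calls the plugin with "config"
sub config_output {
    return "graph_title My App\n"
         . "graph_vlabel requests\n"
         . "graph_category Apps\n"
         . "graph_info Requests to my app\n"
         . "requests.label Requests\n"
         . "requests.type COUNTER\n";
}

# Current value printed on a plain fetch (no argument)
sub fetch_output {
    my $count = 0;
    if ( open my $fh, '<', '/var/run/my_app/requests' ) {
        chomp( $count = <$fh> // 0 );
        close $fh;
    }
    return "requests.value $count\n";
}

if ( @ARGV && $ARGV[0] eq 'config' ) {
    print config_output();
}
else {
    print fetch_output();
}
```

Symlink it into /etc/munin/plugins/ on the node and munin-node picks it up on restart.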
Virtual Machines

Xen - dom0 (Linux, NetBSD, Solaris - controller)
VM - domU (*nix, even some Windows) guest OS
+ Puppet = configured VMs quickly

Physical server
  Virtual machine
  Virtual machine
  Virtual machine
Testing & VMs

Why use mock objects / databases when you can start a whole new machine in < 10 mins?

TDD - Test Driven Development
TDD - Test Driven Deployment

Start with monitoring: your VM is installed when your monitoring reports OKs...
Profiling

Quote from someone talking at a Google Code Jam: "Why don't people spend more time profiling so they can make their code faster?"

Response: "Developer time is expensive... computing power is not."

However, if you do need to find bottlenecks, Devel::NYTProf looks fantastic for Perl profiling.

Or use more computers...
Problem with more computers

More hardware... to go wrong!
Cloud Computing
Cloud storage

use Net::Amazon::S3;

my $s3 = Net::Amazon::S3->new( {
    aws_access_key_id     => $aws_access_key_id,
    aws_secret_access_key => $aws_secret_access_key,
    retry                 => 1,
} );

my $bucket = $s3->bucket('$name_backup_important_stuff');

# store a file in the bucket (any size - handled well)
$bucket->add_key_filename( '1.JPG', 'DSC06256.JPG' );

# get the file back
$bucket->get_key_filename( '1.JPG', 'GET', 'backup.jpg' );
Cloud storage

• No worrying about dead disks
• No worrying about RAID configuration
• No worrying about backups
• Cost of storing 1/2 TB =~ £38 =~ €48 per month (transfer in/out is on top of that)
• Issue: Trusting someone else with your data
Cloud computing

use Net::Amazon::EC2;

my $ec2 = Net::Amazon::EC2->new(
    AWSAccessKeyId  => $aws_access_key_id,
    SecretAccessKey => $aws_secret_access_key,
);

# developer-tools/Debian-Etch_Catalyst_DBIC_TT.manifest.xml
my $ami = 'ami-bdbe5ad4'; # Your AMI here

my $instance = $ec2->run_instances(
    ImageId       => $ami,
    MinCount      => 1,
    MaxCount      => 1,
    SecurityGroup => $secure_group_name,
);
Cloud computing + Puppet

Upload your Xen instance (your own AMI)

Can now start and manage as many or as few servers as we want.
EC2 issues

When an instance goes down all local data is lost

S3 is the only permanent storage (at the moment)

Gandi - beta

See Léon's talk after the break for more
Job queuing

Running jobs across many machines:
- Resizing images
- Running calculations
- Anything that doesn't have to be realtime
Jobs to do

(Diagram: jobs waiting - image resizing, calculations, file conversions)
Put jobs into queue

my $job = TheSchwartz::Job->new(
    funcname => $funcname,
    arg      => $arg,
    uniqkey  => $uniqkey,
);
$the_schwartz->insert($job);
Jobs go into the queue

(Diagram: jobs moving into the queue)
And on each worker

$the_schwartz->can_do($funcname_1);
$the_schwartz->can_do($funcname_2);
$the_schwartz->work_until_done;
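can_do() is handed a worker class name, and TheSchwartz calls that class's work() method for each job it grabs. A hedged sketch of such a class - the package name and the resize_image() helper are invented for illustration; check the TheSchwartz docs for the exact job API:

```perl
# Sketch of a TheSchwartz worker class (names are hypothetical).
package MyApp::Worker::ResizeImage;
use strict;
use warnings;
use base 'TheSchwartz::Worker';

sub work {
    my ( $class, $job ) = @_;
    my $arg = $job->arg;    # whatever was passed to insert()

    # resize_image() is an invented helper standing in for real work
    eval { resize_image( $arg->{file}, $arg->{width} ) };
    if ($@) {
        $job->failed($@);   # record the error; the job can be retried
    }
    else {
        $job->completed;    # done - remove the job from the queue
    }
}

1;
```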
Each computer does tasks

(Diagram: Worker 1, Worker 2 and Worker 3 each picking up jobs from the queue)
Tools to make development easier
DBIx::Class (ORM)

ORM = Object Relational Mapper
    = AKA database tables to objects

Old:

my $sth = $dbh->prepare("select * from $table where id = ?");
$sth->execute($id);
my $result = $sth->fetchrow_hashref();

my $update = $dbh->prepare("update $table set FIELD = ? where id = ?");
$update->execute('new value', $result->{id});
DBIx::Class (ORM)

vs

my $result = $schema->resultset($table)->find($id);
$result->field('new value');
$result->update;
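find() is just the start; search() takes a where clause plus attributes and returns a resultset you can iterate. A hedged sketch - the 'Page' source and its columns are invented for illustration:

```perl
# Sketch of a richer DBIx::Class query (source/columns hypothetical):
# the ten most recent pages by one author, fetched lazily.
my $rs = $schema->resultset('Page')->search(
    { author => 'leo' },
    {
        order_by => { -desc => 'created' },
        rows     => 10,
    },
);

while ( my $page = $rs->next ) {
    print $page->title, "\n";
}
```

The SQL is only run when you start pulling rows, so resultsets can be chained and refined before hitting the database.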
The 'F' word - Frameworks

Do not keep reinventing the wheel

Web development - use tools so you do not re-implement the basics each time (Catalyst, Jifty, one of the many others)

Any development - see above
Catalyst

Is an MVC:

Model - objects representing data/logic
View - templates
Controller - decisions of what to do based on input (can alter model and choose views)
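Put together, a controller action typically reads input, talks to the model and picks a view. A hedged sketch - MyApp, the DB::Page model and the page.tt template are invented for illustration:

```perl
# Sketch of a small Catalyst controller (names hypothetical).
package MyApp::Controller::Page;
use strict;
use warnings;
use base 'Catalyst::Controller';

# Handles e.g. GET /page/42
sub view : Path('/page') Args(1) {
    my ( $self, $c, $id ) = @_;

    # Model: fetch the object for this request
    $c->stash->{page} = $c->model('DB::Page')->find($id);

    # View: choose the template that will render it
    $c->stash->{template} = 'page.tt';
}

1;
```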
Catalyst::Plugin::SimpleAuth

__PACKAGE__->config(
    simpleauth => { class => 'Users' },
);

sub auto : Private {
    my ( $self, $c ) = @_;
    if ( !$c->user && $c->action ne 'login' ) {
        $c->res->redirect( $c->uri_for('/login') );
        return 0;
    }
    return 1;
}
Catalyst::Plugin::*

Session
Authentication
FormValidator
FillInForm

and {flash} - a read once stash!
and - well, lots more
Exciting times

We have all these tools to play with

It takes time (to learn and setup)

but

Makes developing so much more fun
We covered...
Server configuration
- kitchens!
Load balancing http
- perlbal
Testing
- Test::More
Performance
- mod_gzip
- gzip
- headers
- Devel::NYTProf
Image/application serving
- MogileFS
- mod_perl
Caching
- Memcache
Server maintenance
- puppet
- munin
Object relational mapping
- DBIx::Class
Cloud Computing
- EC2
- S3
QUESTIONS?
Slides: http://leo.cuckoo.org/projects/ea/