
Page 1: CHEP’06 Highlights

CHEP’06 Highlights

Tony Chan

Page 2: CHEP’06 Highlights

CHEP’06 Highlights

• 478 registered participants

• 467 submitted abstracts

• President of India address

• Warm temperatures (90+ degrees)

• Traveler’s diarrhea, mosquitoes, etc.

Page 3: CHEP’06 Highlights

CHEP’06 Highlights

• LHC status

• Status of various computer facilities

• Grid Middleware reports

• Distributed computing models

• Other interesting reports

Page 4: CHEP’06 Highlights

Page 5: CHEP’06 Highlights

Barrel Toroid installation status: the mechanical installation is complete; electrical and cryogenic connections are being made now, for a first in-situ cool-down and excitation test in spring 2006.

Page 6: CHEP’06 Highlights

[Timeline figure: cosmics, first beams, first physics, full physics run across 2005-2008]

Building the Service

SC1 – Nov 04-Jan 05 – data transfer between CERN and three Tier-1s (FNAL, NIKHEF, FZK)

SC2 – Apr 05 – data distribution from CERN to 7 Tier-1s – 600 MB/s sustained for 10 days (one third of final nominal rate)

SC3 – Sep-Dec 05 – demonstrate reliable basic service – most Tier-1s, some Tier-2s; push up Tier-1 data rates to 150 MB/s (60 MB/s to tape)

SC4 – May-Aug 06 – demonstrate full service – all Tier-1s, major Tier-2s; full set of baseline services; data distribution and recording at nominal LHC rate (1.6 GB/s)

LHC Service in operation – Sep 06 – over the following six months, ramp up to full operational capacity & performance

LHC service commissioned – Apr 07
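For scale, the SC2 rate quoted above corresponds to roughly 600 MB/s × 86,400 s/day × 10 days ≈ 0.5 PB moved during that exercise.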

Page 7: CHEP’06 Highlights

Conclusions

The LHC project (machine; detectors; LCG) is well underway for physics in 2007

Detector construction is generally proceeding well, although not without concerns in some cases; an enormous integration/installation effort is ongoing – schedules are tight but are also taken very seriously.

LCG (which, like the machine and detectors, is at a technological level that defines the new ‘state of the art’) still needs to fully develop the required functionality; a new ‘paradigm’.

Large potential for exciting physics.

Page 8: CHEP’06 Highlights

Status of FNAL Tier 1

Sole Tier 1 in the Americas for CMS

2006 is first year of 3-year procurement ramp-up

Currently have 1 MSI2K, 100 TB dCache storage, single 10 Gb link

Expect to have by 2008:
– 4.3 MSI2K (2000 CPUs)
– 2 PB storage (200 servers, 1600 MB/s I/O)
– 15 Gb/s between FNAL and CERN
– 30 FTE

Page 9: CHEP’06 Highlights

Status of FNAL Tier 1 (cont.)

Supports both LCG and OSG

50% usage by local (450+) users, 50% by grid

Batch switched to Condor in 2005 – scaling well so far

Enstore/dCache deployed

dCache performed well in stress test (2-3 GB/s, 200 TB/day)

SRM v2 to be deployed for dCache storage element in early 2006

Page 10: CHEP’06 Highlights

ATLAS Canada Tier 1

Page 11: CHEP’06 Highlights

ATLAS Canada Tier 1 (cont.)

Page 12: CHEP’06 Highlights

ATLAS Canada Tier 1 (cont.)

Page 13: CHEP’06 Highlights

ATLAS Canada Tier 1 (cont.)

Page 14: CHEP’06 Highlights

Other Facilities

Tier 2 center in Manchester – scalable remote cluster management & monitoring and provisioning software (nagios, cfengine, kickstart)

Indiana/Chicago USATLAS Tier 2 center

RAL Tier 1 center

Page 15: CHEP’06 Highlights

Multi Core CPUs & ROOT

http://www.intel.com/technology/computing/archinnov/platform2015/

This is going to affect the evolution of ROOT in many areas

Page 16: CHEP’06 Highlights

Moore’s law revisited

Your laptop in 2016: 32 processors, 16 Gbytes RAM, 16 Tbytes disk

> 50× today’s laptop

Page 17: CHEP’06 Highlights

Impact on ROOT

There are many areas in ROOT that can benefit from a multi-core architecture. Because the hardware is becoming available on commodity laptops, it is urgent to implement the most obvious ones as soon as possible. Multi-core often implies multi-threading: several areas need to be made not only thread-safe but also thread-aware.

– PROOF is the obvious candidate. By default a ROOT interactive session should run in PROOF mode; it would be nice if this could be made totally transparent to a user.
– Speed up I/O with multi-threaded I/O and read-ahead
– Buffer compression in parallel (see the sketch below)
– Minimization function in parallel
– Interactive compilation with ACLiC in parallel
– etc.
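To make the "buffer compression in parallel" item concrete, here is a minimal illustrative sketch, not ROOT's actual implementation (it uses present-day C++ threads for brevity, and compressBuffer() is a hypothetical placeholder for the per-buffer compression step):

    // Illustrative sketch only (not ROOT code): compress independent I/O
    // buffers in parallel, one worker per hardware core.
    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Hypothetical stand-in for the real per-buffer compression (e.g. zlib).
    static void compressBuffer(std::vector<char>& buf) { (void)buf; }

    void compressAllBuffers(std::vector<std::vector<char>>& buffers) {
        const unsigned nWorkers =
            std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> workers;
        for (unsigned w = 0; w < nWorkers; ++w) {
            workers.emplace_back([&buffers, nWorkers, w] {
                // Each worker takes every nWorkers-th buffer; the buffers are
                // independent, so no locking is needed.
                for (std::size_t i = w; i < buffers.size(); i += nWorkers)
                    compressBuffer(buffers[i]);
            });
        }
        for (std::thread& t : workers) t.join();
    }

The same pattern (independent work units farmed out to one thread per core) applies to the parallel minimization and parallel ACLiC items as well.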

Page 18: CHEP’06 Highlights

Gridview Project Goal

Provide a high-level view of the various Grid resources and functional aspects of the LCG

Central archival, analysis, summarization, graphical presentation and pictorial visualization of data from various LCG sites and monitoring tools

Useful in GOCs/ROCs and to site admins/VO admins

Page 19: CHEP’06 Highlights

Gridview Architecture

Loosely coupled, with independent sensors, transport, archival, analysis and visualization components.

Sensors are the various LCG information providers and monitoring tools at sites

Transport used is R-GMA

Gridview provides archival, analysis and visualization

Page 20: CHEP’06 Highlights
Page 21: CHEP’06 Highlights

On-going work in Gridview

Service Availability Monitoring
– Being interfaced with SFT (Site Functional Tests) for monitoring availability of various services such as CE, SE, RB, BDII etc.
– Rating of sites according to average resource availability and acceptable thresholds
– Service availability metrics such as MTTR, uptime and failure rate to be computed and visualised (see the sketch after this list)

gLite FTS
– Gridview to be adapted to monitor file transfer statistics such as successful transfers and failure rates for FTS channels across grid sites

Enhancement of GUI & visualisation module to function as a full-fledged dashboard for LCG
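As a minimal sketch of what those availability metrics involve (illustrative only, not Gridview code; the outage intervals are invented example data):

    // Compute availability, MTTR and failure rate for one service from a
    // list of outage intervals within a fixed monitoring window.
    #include <cstdio>
    #include <vector>

    struct Outage { double start, end; };   // hours since start of the window

    int main() {
        const double window = 30 * 24.0;    // 30-day window, in hours
        std::vector<Outage> outages = {{10.0, 14.0}, {200.0, 201.5}};
        double downtime = 0.0;
        for (const Outage& o : outages) downtime += o.end - o.start;
        const double availability = 1.0 - downtime / window;
        const double mttr = outages.empty() ? 0.0 : downtime / outages.size();
        const double failureRate = outages.size() / window;  // failures per hour
        std::printf("availability %.4f, MTTR %.2f h, failure rate %.4f /h\n",
                    availability, mttr, failureRate);
        return 0;
    }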

Page 22: CHEP’06 Highlights

JobMon

Page 23: CHEP’06 Highlights

JobMon (cont.)

Page 24: CHEP’06 Highlights

JobMon (cont.)

Page 25: CHEP’06 Highlights

Introduction (TeraPaths)

The problem: support efficient/reliable/predictable peta-scale data movement in modern high-speed networks
– Multiple data flows with varying priority
– Default “best effort” network behavior can cause performance and service disruption problems

Solution: enhance network functionality with QoS features to allow prioritization and protection of data flows
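One minimal, host-side illustration of "prioritization of data flows" (generic socket code, not part of TeraPaths itself, which provisions QoS/MPLS paths inside the network): an application can mark its traffic with a DiffServ code point so that QoS-enabled routers can treat the flow preferentially.

    // Mark a TCP socket's traffic with DSCP AF31 by setting the IP TOS byte.
    #include <cstdio>
    #include <netinet/in.h>
    #include <netinet/ip.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { std::perror("socket"); return 1; }
        int tos = 0x68;  // DSCP AF31 (26) shifted into the upper six TOS bits
        if (setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos)) < 0)
            std::perror("setsockopt(IP_TOS)");
        // ... connect() and transfer as usual; routers along a QoS-enabled
        // path can then prioritize or protect this flow ...
        close(fd);
        return 0;
    }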

Page 26: CHEP’06 Highlights

The TeraPaths Project

The TeraPaths project investigates the integration and use of LAN QoS and MPLS/GMPLS-based differentiated network services in the ATLAS data-intensive distributed computing environment in order to manage the network as a critical resource

DOE: the collaboration includes BNL and the University of Michigan, as well as OSCARS (ESnet), Lambda Station (FNAL), and DWMI (SLAC)

NSF: BNL participates in UltraLight to provide the network advances required in enabling petabyte-scale analysis of globally distributed data

NSF: BNL participates in a new network initiative: PLaNetS (Physics Lambda Network System), led by CalTech

Page 27: CHEP’06 Highlights

dCache

New version (availability unknown?)

Features
– Resilient dCache (n < copies < m; see the sketch below)
– SRM v2
– Partitioning (one instance, multiple pool configurations)
– Support for xrootd protocol

Performance
– multiple I/O queues
– multiple file system servers
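A minimal sketch of the resilient-dCache rule above, keeping each file's replica count strictly between a lower and an upper bound (illustrative only, not dCache's implementation; resilientAction() is a hypothetical helper):

    #include <cstdio>

    enum class Action { Replicate, Remove, None };

    // Decide what to do for a file that currently has 'copies' replicas.
    Action resilientAction(int copies, int min, int max) {
        if (copies <= min) return Action::Replicate;  // too few: make another copy
        if (copies >= max) return Action::Remove;     // too many: drop one copy
        return Action::None;                          // within (min, max): leave it
    }

    int main() {
        // Example: require 2 < copies < 4, i.e. three replicas at steady state.
        std::printf("%d %d %d\n",
                    static_cast<int>(resilientAction(2, 2, 4)),   // Replicate
                    static_cast<int>(resilientAction(3, 2, 4)),   // None
                    static_cast<int>(resilientAction(4, 2, 4)));  // Remove
        return 0;
    }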

Page 28: CHEP’06 Highlights

Computing Resources (ATLAS)

Computing Model fairly well evolved, documented in C-TDR
– Externally reviewed
– http://doc.cern.ch//archive/electronic/cern/preprints/lhcc/public/lhcc-2005-022.pdf

There are (and will remain for some time) many unknowns
– Calibration and alignment strategy is still evolving
– Physics data access patterns MAY be exercised from June; unlikely to know the real patterns until 2007/2008!
– Still uncertainties on the event sizes, reconstruction time

Lesson from the previous round of experiments at CERN (LEP, 1989-2000)
– Reviews in 1988 underestimated the computing requirements by an order of magnitude!

Page 29: CHEP’06 Highlights

ATLAS Facilities

Event Filter Farm at CERN
– Located near the experiment; assembles data into a stream to the Tier 0 Center

Tier 0 Center at CERN
– Raw data to mass storage at CERN and to Tier 1 centers
– Swift production of Event Summary Data (ESD) and Analysis Object Data (AOD)
– Ship ESD, AOD to Tier 1 centers; mass storage at CERN

Tier 1 Centers distributed worldwide (10 centers)
– Re-reconstruction of raw data, producing new ESD, AOD
– Scheduled, group access to full ESD and AOD

Tier 2 Centers distributed worldwide (approximately 30 centers)
– Monte Carlo simulation, producing ESD, AOD shipped to Tier 1 centers
– On-demand user physics analysis

CERN Analysis Facility
– Analysis
– Heightened access to ESD and RAW/calibration data on demand

Tier 3 Centers distributed worldwide
– Physics analysis

Page 30: CHEP’06 Highlights

Processing

Tier-0:
– Prompt first-pass processing on express/calibration physics stream
– 24-48 hours later, process full physics data stream with reasonable calibrations
– Implies large data movement from T0 → T1s

Tier-1:
– Reprocess 1-2 months after arrival with better calibrations
– Reprocess all resident RAW at year end with improved calibration and software
– Implies large data movement from T1 ↔ T1 and T1 → T2

Page 31: CHEP’06 Highlights

[Diagram: ATLAS Prodsys – the production database (ProdDB) feeds the executors (Lexor, Dulcinea, CondorG, PANDA), which submit jobs through resource brokers (RB) to computing elements (CE)]

Page 32: CHEP’06 Highlights

Analysis model

Analysis model broken into two components
– Scheduled central production of augmented AOD, tuples & TAG collections from ESD; derived files moved to other T1s and to T2s
– Chaotic user analysis of augmented AOD streams, tuples, new selections etc., and individual user simulation and CPU-bound tasks matching the official MC production

Modest job traffic between T2s

Page 33: CHEP’06 Highlights

Initial experiences

• PANDA on OSG

• Analysis with the Production System

• GANGA

Page 34: CHEP’06 Highlights

Summary

• Systems have been exposed to selected users
– Positive feedback
– Direct contact with the experts still essential
– For this year: power users and grid experts …

• Main issues
– Data distribution → New DDM
– Scalability → New Prodsys/PANDA/gLite/CondorG
– Analysis in parallel to Production → Job Priorities

Page 35: CHEP’06 Highlights

ATLAS T0 Resources

Page 36: CHEP’06 Highlights

ATLAS T1 Resources

Page 37: CHEP’06 Highlights

ATLAS T2 Resources

Page 38: CHEP’06 Highlights

DIAL Performance

• The reference dataset was run as a single job

– Athena clock time was 70 minutes

• i.e. 43 ms/event, 3.0 MB/s

• Actual data transfer is about half that value

– Some of the event data is not read
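(A quick consistency check of these numbers: 70 minutes ≈ 4200 s at 43 ms/event corresponds to roughly 98,000 events, and 3.0 MB/s × 43 ms ≈ 130 kB read per event.)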

• The following figure shows results

– Local fast queue (LSF)

• Green squares

– Local short queue (Condor preemptive)

• Blue triangles

– Condor-G to local fast

• Red diamonds

– PANDA

• Violet circles

Page 39: CHEP’06 Highlights

[Plot: DIAL 1.30 AOD processing time (2/10/06) – time (sec) versus thousands of events; reference curves for single job, (single job)/10, 100 MB/s, 50 MB/s and 10k events; measured series 8feb-lfast-nfs-100, 9feb-lshort-nfs-100, 9feb-cgfast-nfs-100, 9feb-panda-nfs-100, 10feb-lfast-nfs-100, 10feb-lfast-nfs-50, 10feb-lfast-nfs-20]

Page 40: CHEP’06 Highlights

CMS Distributed Computing

Distributed model for computing in CMS
– Cope with computing requirements for storage, processing and analysis of data provided by the experiment
– Computing resources are geographically distributed, interconnected via high-throughput networks and operated by means of Grid software

Running expectations
– Beam time: 2-3×10^6 seconds in 2007, 10^7 seconds in 2008, 2009 and 2010
– Detector output rate: ~250 MB/s → 2.5 PetaBytes raw data in 2008
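(As a rough cross-check: 250 MB/s × 10^7 s ≈ 2.5 × 10^15 bytes, i.e. the quoted 2.5 PB of raw data for a nominal 2008 run.)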

Aggregate computing resources required
– CMS computing model document (CERN-LHCC-2004-035)
– CMS computing TDR released in June 2005 (CERN-LHCC-2005-023)

Page 41: CHEP’06 Highlights

Resources and data flows in 2008

[Figure: per-centre resources and labelled data flows]

Tier 0: 4.6 MSI2K, 0.4 PB disk, 4.9 PB tape, 5 Gbps WAN – sends 280 MB/s (RAW, RECO, AOD) to the Tier-1s and writes 225 MB/s (RAW) to tape; up to 1 GB/s to worker nodes (AOD analysis, calibration)

Tier 1: 2.5 MSI2K, 0.8 PB disk, 2.2 PB tape, 10 Gbps WAN – 900 MB/s to worker nodes (AOD skimming, data reprocessing); 40 MB/s (RAW, RECO, AOD) from Tier-0; AOD exchange with other Tier-1s; 240 MB/s out to its Tier-2s (skimmed AOD, some RAW+RECO); 48 MB/s (MC) in from its Tier-2s

Tier 2: 0.9 MSI2K, 0.2 PB disk, 1 Gbps WAN – receives 60 MB/s (skimmed AOD, some RAW+RECO) from a Tier-1; sends 12 MB/s (MC)

Page 42: CHEP’06 Highlights

FNAL 64-bit Tests

Benchmark tests of single/dual cores (32- and 64-bit OS/applications)

Dual cores provide 2x improvement over single core (same as BNL tests)

Better performance with 64/64 (application dependent)

Dual cores provide 2x improvement in performance/watt compared to single core

Page 43: CHEP’06 Highlights

Network Infrastructure

Harvey Newman’s talk

10 Gb/s backbones becoming widespread; move to 10s (100s?) of Gb/s in the LHC era

PCs moving in a similar direction

Digital divide (Europe/US/Japan compared to the rest of the world)

Next CHEP in Victoria, BC (Sep. 07)