Basics of parallel MPI programming (Fortran)
5/25/2018 — Basics of parallel MPI programming (Fortran) — 1/27
Basics of parallel MPI programming for Fortran

Đặng Nguyên Phương — [email protected]

November 21, 2013
Contents

1 Introduction
2 MPI
  2.1 Overview
  2.2 Installing MPICH2
  2.3 Compiling and running programs with MPICH2
3 Basic structure of an MPI program
  3.1 Program structure
  3.2 Basic concepts
  3.3 Hello world example
  3.4 Message-passing example
4 MPI routines
  4.1 MPI environment management routines
  4.2 Data types
  4.3 Message-passing mechanisms
  4.4 Blocking message-passing routines
  4.5 Non-blocking message-passing routines
  4.6 Collective communication routines
5 Some examples
  5.1 Example: computing π
  5.2 Example: matrix multiplication
References
Đặng Nguyên Phương — NMTP internal document
1 Introduction
Today, most computational programs are designed to run on a single core, that is, as serial computations. To run a program efficiently on a computer cluster or on multi-core CPUs, we need to parallelize it. The main advantage of parallel computation is the ability to process many tasks at once. Parallel programming can be done by using library functions (e.g., mpi.h) or by using features built into data-parallel compilers, such as OpenMP in Fortran F90/F95 compilers.
Parallel programming covers the design and implementation of parallel computer programs so that they run on parallel computer systems. In other words, it means parallelizing serial programs, either to solve a larger problem, to reduce the execution time, or both. Parallel programming focuses on decomposing the overall problem into smaller subtasks, assigning those subtasks to individual processors, and synchronizing them to obtain the final result. The most important principle here is concurrency: handling many tasks or processes at the same time. Therefore, before parallelizing, we must first determine whether the problem can be parallelized at all (based either on its data or on its functionality). There are two main approaches to parallel programming:

- Implicit parallelism: the compiler or some other program automatically distributes the work to the processors.
- Explicit parallelism: the programmer must partition the program manually so that it can be executed in parallel.
In addition, the programmer must also account for load balancing in the system: the processors should perform roughly equal amounts of work, and if one processor carries too heavy a load, work must be moved to a more lightly loaded one.

A parallel programming model is a collection of software techniques for expressing parallel algorithms and fitting them to applications on parallel systems. Such a model covers the applications, languages, compilers, libraries, communication systems, and parallel I/O. In practice, no single parallel machine or work-distribution scheme is effective for every problem; the programmer must therefore choose the right model, or a combination of models, to develop parallel applications on a particular system.
There are many parallel programming models today, such as multi-threading, message passing, data parallelism, and hybrid models. They are classified according to two criteria: how the processes interact (process interaction) and how the problem is decomposed (problem decomposition). By the first criterion, there are two main kinds of models: shared memory and message passing. By the second criterion, there are likewise two kinds: task parallelism and data parallelism.
- In the shared-memory model, all processes access common data through a shared region of memory.
- In the message-passing model, each process has its own local memory, and processes exchange data with one another through send and receive operations.
- Task parallelism distributes different tasks to different compute nodes; the data used by the tasks may be exactly the same.
- Data parallelism distributes data to different compute nodes to be processed simultaneously; the tasks at the compute nodes may be exactly the same.

The message-passing model is one of the most widely used models in parallel computing today. It is commonly applied to distributed systems. Its characteristics are:

- Threads use their own local memory throughout the computation.
- Multiple threads may share the same physical resource.
- Threads exchange data by sending and receiving messages.
- Data transfer usually requires coordinating operations performed by each thread. For example, a send operation in one thread must be matched by a receive operation in another thread.
This document aims to provide the basic knowledge needed to start writing a parallel program in the Fortran programming language using the message-passing mechanism with MPI-standard libraries. The goal is to run Fortran programs on multi-core machines or computer clusters to improve computational performance. In this document, the MPICH2 library is used to compile Fortran programs on the Linux operating system.
2 MPI
2.1 Overview
The message-passing model is one of the oldest and most widely used models in parallel programming. The two most popular toolkits for parallel programming in this model are PVM (Parallel Virtual Machine) and MPI (Message Passing Interface). These toolkits provide functions for exchanging information between the computing processes of a parallel computer system.

MPI (Message Passing Interface) is a standard describing the features and syntax of a parallel programming library. It was released in 1994 by the MPIF (Message Passing Interface Forum) and was later extended by the MPI-2 standard. Many libraries are based on this MPI standard, for example MPICH, OpenMPI, and LAM/MPI.
MPICH2 is a free library of MPI-standard functions for message-passing parallel programming. It supports several programming languages (C++, Fortran, Python,...) and can be used on many operating systems (Windows, Linux, MacOS,...).
2.2 Installing MPICH2
The MPICH2 package can be installed on all machines with the following command:

$ sudo apt-get install mpich2
After MPICH2 has been installed successfully, it must be configured before running in parallel. If the installed version is 1.2.x or earlier, the default process manager is MPD; from 1.3.x onward, the process manager is Hydra. The configuration for the two process managers is as follows:
3
-
5/25/2018 C b n l p tr nh song song MPI (Fortran)
4/27
ng Nguyn Phng Ti liu ni b NMTP
MPD: Create the two files mpd.hosts and .mpd.conf in the home directory (e.g., /home/phuong). The file mpd.hosts contains the names of the nodes in the system, for example:

master
node1
node2
node3

For the file .mpd.conf, we must set its access permissions with the command:

$ chmod 600 .mpd.conf

Then open the file and add the following line to it:

secretword=random_text_here

To start MPD, type the following command on the master machine:

$ mpdboot -n N
where N is the number of machines in the system.

Hydra: Similar to MPD but simpler. We only need to create a single file named hosts in the directory /home/phuong containing the names of all the nodes in the system:

master
node1
node2
node3
2.3 Compiling and running programs with MPICH2
Compiling: To compile an application with MPICH2, we can use one of the following compiler wrappers:

Language   Compiler
C          mpicc
C++        mpicxx, mpic++
Fortran    mpif77, mpif90, mpifort
For example, to compile an application written in Fortran, we can type:

$ mpif90 -o helloworld helloworld.f

Here helloworld.f is the file containing the program's source code, and the -o option lets us choose the name of the compiled executable, in this case helloworld.
Running: If the MPICH2 version in use relies on the MPD process manager, before running the program we must start MPD with the mpdboot command mentioned above, or with:

mpd &

A program compiled with MPI can then be run with:

mpirun -np N tenchuongtrinh

or

mpiexec -n N tenchuongtrinh
where N is the number of parallel tasks to run and tenchuongtrinh is the name of the application to execute.

Example:

$ mpd &
$ mpirun -np 8 helloworld
3 Basic structure of an MPI program

3.1 Program structure

The basic structure of an MPI program is as follows:
Declare headers, variables, prototypes,...
Program start
  ...
  Initialize the MPI environment
  ...
  Terminate the MPI environment
  ...
Program end
3.2 Basic concepts

A parallel MPI program usually consists of more than one executing task, also called a process. Tasks (processes) are distinguished from one another by a task index (called the rank or task ID). This index is an integer from 0 to (N-1), where N is the total number of MPI tasks running the program. In programs following the master/slave model, the system usually has one master task that controls the other tasks, called slave tasks; the master usually has rank 0 and the slaves have ranks 1 to (N-1).

The set of MPI tasks running the same program is called a group. A set of tasks in the same group that can exchange information with one another is called a communicator. When the program starts, the default communicator containing all executing tasks is MPI_COMM_WORLD.
MPI tasks communicate with one another by sending/receiving messages. Each message contains two parts, the data and a header; each header includes:

- the rank of the sending task
- the rank of the receiving task
- the message tag
- the communicator index
3.3 Hello world example

To get a first feel for writing an MPI program, we start with a simple example: the Hello world program. The steps are as follows:

First, create a file named hello.f and open it with a plain-text editor (e.g., gedit, emacs, vim,...).
Declare the program name and add the MPI header:

program hello
use mpi

Note that the MPI header must be added to every routine or procedure that is executed in parallel, via the declaration use mpi (in F77, use the declaration include 'mpif.h' instead).
Initialize the MPI environment:

program hello
use mpi
integer ierror
call MPI_INIT(ierror)

MPI_INIT initializes the MPI environment for executing parallel tasks. It returns an integer value in the variable ierror, which equals 0 (MPI_SUCCESS) if no error occurred during initialization.
Query the number of parallel tasks:

program hello
use mpi
integer ierror, ntasks
call MPI_INIT(ierror)
call MPI_COMM_SIZE(MPI_COMM_WORLD, ntasks, ierror)

MPI_COMM_SIZE returns the number of parallel tasks in the variable ntasks. The argument MPI_COMM_WORLD denotes the global communicator and is an integer constant.
Query the task's rank:

program hello
use mpi
integer ierror, ntasks, mytask
call MPI_INIT(ierror)
call MPI_COMM_SIZE(MPI_COMM_WORLD, ntasks, ierror)
call MPI_COMM_RANK(MPI_COMM_WORLD, mytask, ierror)

MPI_COMM_RANK returns the rank of the task in the variable mytask. The rank runs from 0 to ntasks-1 and is used to identify the task when coordinating sends and receives.
Print to the screen:

program hello
use mpi
integer ierror, ntasks, mytask
call MPI_INIT(ierror)
call MPI_COMM_SIZE(MPI_COMM_WORLD, ntasks, ierror)
call MPI_COMM_RANK(MPI_COMM_WORLD, mytask, ierror)
print *, "Hello world from task ", mytask, " of ", ntasks
Terminate the MPI environment:

program hello
use mpi
integer ierror, ntasks, mytask
call MPI_INIT(ierror)
call MPI_COMM_SIZE(MPI_COMM_WORLD, ntasks, ierror)
call MPI_COMM_RANK(MPI_COMM_WORLD, mytask, ierror)
print *, "Hello world from task ", mytask, " of ", ntasks
call MPI_FINALIZE(ierror)
end

MPI_FINALIZE shuts down the MPI environment; however, parallel tasks that are already executing continue to run. All MPI calls made after MPI_FINALIZE have no effect and raise an error.
3.4 Message-passing example

The Hello world example introduced 4 basic MPI routines. In practice, many parallel MPI programs can be built with only 6 basic routines; besides the 4 above, we need two more: MPI_SEND to send a message and MPI_RECV to receive one, between tasks. Their signatures are:

MPI_SEND (buffer,count,datatype,destination,tag,communicator,ierr)
MPI_RECV (buffer,count,datatype,source,tag,communicator,status,ierr)

where
buffer        array of data to send/receive
count         number of elements in the array
datatype      data type (e.g., MPI_INTEGER, MPI_REAL,...)
destination   rank of the destination task (within the communicator)
source        rank of the source task (within the communicator)
tag           message tag (an integer)
communicator  the set of tasks
status        message status
ierror        error code
To understand how to use these two routines, consider the following loop example (fixed_loop). In this example, MPI_SEND is used to send the number of completed loop iterations from each slave task (ranks 1 to N-1) to the master task (rank 0). MPI_RECV is called N-1 times by the master to receive the N-1 messages sent by the N-1 slave tasks.

The first declarations are similar to those in the Hello world example:

program fixedloop
use mpi
integer i, rank, ntasks, count, start, stop, nloops
integer total_nloops, err
integer status(MPI_STATUS_SIZE)
call MPI_INIT(err)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, err)
call MPI_COMM_SIZE(MPI_COMM_WORLD, ntasks, err)
Suppose we want to run the loop 1000 times; then each task performs 1000/ntasks iterations, where ntasks is the total number of tasks. We use each task's rank to mark that task's block of iterations:
program fixedloop
use mpi
integer i, rank, ntasks, count, start, stop, nloops
integer total_nloops, err
integer status(MPI_STATUS_SIZE)
call MPI_INIT(err)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, err)
call MPI_COMM_SIZE(MPI_COMM_WORLD, ntasks, err)
count = 1000 / ntasks
start = rank * count
stop = start + count - 1
nloops = 0
do i = start, stop
  nloops = nloops + 1
enddo
print *, "Task ", rank, " performed ", nloops, " iterations of the loop."

Here count is the number of iterations per task; the task with rank rank executes iterations rank*count through rank*count+count-1. The variable nloops counts the iterations performed by task rank and is printed to the screen. If the executing task is not the master (rank is nonzero), it sends its nloops result to the master:
program fixedloop
use mpi
integer i, rank, ntasks, count, start, stop, nloops
integer total_nloops, err
integer status(MPI_STATUS_SIZE)
call MPI_INIT(err)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, err)
call MPI_COMM_SIZE(MPI_COMM_WORLD, ntasks, err)
count = 1000 / ntasks
start = rank * count
stop = start + count - 1
nloops = 0
do i = start, stop
  nloops = nloops + 1
enddo
print *, "Task ", rank, " performed ", nloops, " iterations of the loop."
if ( rank .ne. 0 ) then
  call MPI_SEND(nloops, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, err)
endif

If this task is the master, it receives the nloops values sent by the slave tasks and accumulates them:
program fixedloop
use mpi
integer i, rank, ntasks, count, start, stop, nloops
integer total_nloops, err
integer status(MPI_STATUS_SIZE)
call MPI_INIT(err)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, err)
call MPI_COMM_SIZE(MPI_COMM_WORLD, ntasks, err)
count = 1000 / ntasks
start = rank * count
stop = start + count - 1
nloops = 0
do i = start, stop
  nloops = nloops + 1
enddo
print *, "Task ", rank, " performed ", nloops, " iterations of the loop."
if ( rank .ne. 0 ) then
  call MPI_SEND(nloops, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, err)
else
  total_nloops = nloops
  do i = 1, ntasks-1
    call MPI_RECV(nloops, 1, MPI_INTEGER, i, 0, MPI_COMM_WORLD, status, err)
    total_nloops = total_nloops + nloops
  enddo
endif

The master task then runs the remaining iterations itself, in case dividing 1000 by the total number of tasks leaves a remainder:
program fixedloop
use mpi
integer i, rank, ntasks, count, start, stop, nloops
integer total_nloops, err
integer status(MPI_STATUS_SIZE)
call MPI_INIT(err)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, err)
call MPI_COMM_SIZE(MPI_COMM_WORLD, ntasks, err)
count = 1000 / ntasks
start = rank * count
stop = start + count - 1
nloops = 0
do i = start, stop
  nloops = nloops + 1
enddo
print *, "Task ", rank, " performed ", nloops, " iterations of the loop."
if ( rank .ne. 0 ) then
  call MPI_SEND(nloops, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, err)
else
  total_nloops = nloops
  do i = 1, ntasks-1
    call MPI_RECV(nloops, 1, MPI_INTEGER, i, 0, MPI_COMM_WORLD, status, err)
    total_nloops = total_nloops + nloops
  enddo
  nloops = 0
  if ( total_nloops .lt. 1000 ) then
    do i = total_nloops, 1000-1
      nloops = nloops + 1
    enddo
  endif
endif
Finally, print the result, terminate the MPI environment, and end the program:
program fixedloop
use mpi
integer i, rank, ntasks, count, start, stop, nloops
integer total_nloops, err
integer status(MPI_STATUS_SIZE)
call MPI_INIT(err)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, err)
call MPI_COMM_SIZE(MPI_COMM_WORLD, ntasks, err)
count = 1000 / ntasks
start = rank * count
stop = start + count - 1
nloops = 0
do i = start, stop
  nloops = nloops + 1
enddo
print *, "Task ", rank, " performed ", nloops, " iterations of the loop."
if ( rank .ne. 0 ) then
  call MPI_SEND(nloops, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, err)
else
  total_nloops = nloops
  do i = 1, ntasks-1
    call MPI_RECV(nloops, 1, MPI_INTEGER, i, 0, MPI_COMM_WORLD, status, err)
    total_nloops = total_nloops + nloops
  enddo
  nloops = 0
  if ( total_nloops .lt. 1000 ) then
    do i = total_nloops, 1000-1
      nloops = nloops + 1
    enddo
  endif
  print *, "Task 0 performed the remaining ", nloops, " iterations of the loop"
endif
call MPI_FINALIZE(err)
end
4 MPI routines

4.1 MPI environment management routines

These routines set up the environment for executing MPI calls and query information such as the task rank, the MPI library, and so on.
MPI_INIT: initializes the MPI environment.

MPI_INIT (ierr)

MPI_COMM_SIZE: returns the total number of MPI tasks running in the communicator (for example, in MPI_COMM_WORLD).

MPI_COMM_SIZE (comm,size,ierr)

MPI_COMM_RANK: returns the rank of the task. Initially each task is assigned an integer from 0 to (N-1), where N is the total number of tasks in the communicator MPI_COMM_WORLD.

MPI_COMM_RANK (comm,rank,ierr)

MPI_ABORT: terminates all MPI processes.

MPI_ABORT (comm,errorcode,ierr)

MPI_GET_PROCESSOR_NAME: returns the name of the processor.

MPI_GET_PROCESSOR_NAME (name,resultlength,ierr)

MPI_INITIALIZED: returns 1 (true) if MPI_INIT() has been called, 0 otherwise.

MPI_INITIALIZED (flag,ierr)

MPI_WTIME: returns the elapsed wall-clock time (in seconds) on the calling processor.

MPI_WTIME ()

MPI_WTICK: returns the time resolution (in seconds) of MPI_WTIME().

MPI_WTICK ()

MPI_FINALIZE: terminates the MPI environment.

MPI_FINALIZE (ierr)
Example:

program simple
use mpi
integer numtasks, rank, len, rc, ierr
character(MPI_MAX_PROCESSOR_NAME) hostname
call MPI_INIT(ierr)
if (ierr .ne. MPI_SUCCESS) then
  print *, 'Error starting MPI program. Terminating.'
  call MPI_ABORT(MPI_COMM_WORLD, rc, ierr)
endif
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
call MPI_GET_PROCESSOR_NAME(hostname, len, ierr)
print *, 'Number of tasks=', numtasks, ' My rank=', rank,
&        ' Running on=', hostname
! do some work
call MPI_FINALIZE(ierr)
end
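The MPI_WTIME and MPI_WTICK routines listed above can be used to time a region of code. The following minimal sketch (not from the original document; the timed workload is a placeholder) brackets a piece of work between two MPI_WTIME calls:

```fortran
program timing
  use mpi
  integer ierr, i
  double precision t0, t1, s
  call MPI_INIT(ierr)
  t0 = MPI_WTIME()           ! wall-clock time before the work
  s = 0.0d0
  do i = 1, 1000000          ! placeholder workload
     s = s + 1.0d0 / dble(i)
  enddo
  t1 = MPI_WTIME()           ! wall-clock time after the work
  print *, 'Elapsed seconds =', t1 - t0,
&          ' timer resolution =', MPI_WTICK()
  call MPI_FINALIZE(ierr)
end
```

Each task measures its own elapsed time; MPI_WTIME is only guaranteed to be consistent across tasks if the implementation reports a globally synchronized clock.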
4.2 Data types

The basic MPI data types are listed in the following table:

Name                  Data type
MPI_CHARACTER         character
MPI_INTEGER           integer
MPI_INTEGER1          integer*1
MPI_INTEGER2          integer*2
MPI_INTEGER4          integer*4
MPI_REAL              real
MPI_REAL2             real*2
MPI_REAL4             real*4
MPI_REAL8             real*8
MPI_DOUBLE_PRECISION  double precision
MPI_COMPLEX           complex
MPI_DOUBLE_COMPLEX    double complex
MPI_LOGICAL           logical
MPI_BYTE              byte
MPI_PACKED            packed data

Users can also create their own data structures from these basic types. Such user-defined structured types are called derived data types. The routines for defining new data structures include:
MPI_TYPE_CONTIGUOUS: creates a new data type by repeating the old type count times.

MPI_TYPE_CONTIGUOUS (count,oldtype,newtype,ierr)

MPI_TYPE_VECTOR: similar to contiguous but with a fixed stride; the new type is formed by repeating a sequence of equally sized blocks of the old type at periodically spaced positions.

MPI_TYPE_VECTOR (count,blocklength,stride,oldtype,newtype,ierr)

MPI_TYPE_INDEXED: the new type is formed from a sequence of blocks of the old type, where each block may contain a different number of copies of the old type.

MPI_TYPE_INDEXED (count,blocklens(),offsets(),oldtype,newtype,ierr)

MPI_TYPE_STRUCT: as above, but each block may be built from different old data types.

MPI_TYPE_STRUCT (count,blocklens(),offsets(),oldtypes,newtype,ierr)
Figure 1 shows some examples of these ways of creating new data structures. When transferring data structures of mixed types, we can use the MPI_PACK and MPI_UNPACK routines to pack the data before sending.
Figure 1: Examples of ways to create new data structures

MPI_TYPE_EXTENT: returns the size (in bytes) of a data type.

MPI_TYPE_EXTENT (datatype,extent,ierr)

MPI_TYPE_COMMIT: registers a newly defined data type with the system.

MPI_TYPE_COMMIT (datatype,ierr)

MPI_TYPE_FREE: frees a data type.

MPI_TYPE_FREE (datatype,ierr)
Example: creating a vector data type

program vector
use mpi
integer SIZE
parameter (SIZE=4)
integer numtasks, rank, source, dest, tag, i, ierr
real*4 a(0:SIZE-1, 0:SIZE-1), b(0:SIZE-1)
integer stat(MPI_STATUS_SIZE), rowtype
! Fortran stores this array in column major order
data a / 1.0,  2.0,  3.0,  4.0,
&        5.0,  6.0,  7.0,  8.0,
&        9.0, 10.0, 11.0, 12.0,
&       13.0, 14.0, 15.0, 16.0 /
call MPI_INIT(ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
call MPI_TYPE_VECTOR(SIZE, 1, SIZE, MPI_REAL, rowtype, ierr)
call MPI_TYPE_COMMIT(rowtype, ierr)
tag = 1
if (numtasks .eq. SIZE) then
  if (rank .eq. 0) then
    do 10 i = 0, numtasks-1
      call MPI_SEND(a(i,0), 1, rowtype, i, tag,
&                   MPI_COMM_WORLD, ierr)
10  continue
  endif
  source = 0
  call MPI_RECV(b, SIZE, MPI_REAL, source, tag,
&               MPI_COMM_WORLD, stat, ierr)
  print *, 'rank=', rank, ' b=', b
else
  print *, 'Must specify', SIZE, ' processors. Terminating.'
endif
call MPI_TYPE_FREE(rowtype, ierr)
call MPI_FINALIZE(ierr)
end
4.3 Message-passing mechanisms

The communication mechanisms in MPI are:

Point-to-point: communication between pairs of tasks, where one task performs the send and the other task performs the matching receive. Messages are distinguished by the task ranks and the message tag. This mechanism offers several communication modes:

- Blocking: the send/receive call returns only when the send/receive has completed.
- Non-blocking: the send/receive call returns immediately, without regard to whether the data has actually been fully sent or received; whether it has can be checked with other routines in the MPI library.
- Synchronous: synchronous send; the send can complete only after the matching receive has started.
- Buffered: a buffer is created to hold the data before it is sent; the user can then overwrite the memory region holding the data without losing the data about to be sent.
- Ready: the send can start only if the matching receive is already posted.

The table below summarizes the point-to-point communication modes and the corresponding send/receive routines; these routines are described in detail in the following sections.
Mode              Completion condition                                       Blocking   Non-blocking
Send              message has been sent                                      MPI_SEND   MPI_ISEND
Receive           message has been received                                  MPI_RECV   MPI_IRECV
Synchronous send  when the receive has started                               MPI_SSEND  MPI_ISSEND
Buffered send     always completes, whether or not the receive has started   MPI_BSEND  MPI_IBSEND
Ready send        always completes, whether or not the receive has finished  MPI_RSEND  MPI_IRSEND
Collective communication: communication involving all tasks within the scope of a communicator. The communication patterns of this mechanism (see Figure 2) include:

- Broadcast: the same data is sent from a root task to every other task in the communicator.
- Scatter: different pieces of data are sent from the root task to the other tasks in the communicator.
- Gather: different pieces of data are collected by the root task from all other tasks in the communicator.
- Reduce: collects data from every task, reduces it, and stores the result on a root task or on all tasks.

Figure 2: Illustration of collective communication patterns
4.4 Blocking message-passing routines

Commonly used routines for blocking message passing include:

MPI_SEND: basic send.

MPI_SEND (buf,count,datatype,dest,tag,comm,ierr)

MPI_RECV: basic receive.

MPI_RECV (buf,count,datatype,source,tag,comm,status,ierr)

MPI_SSEND: synchronous send; this call waits until the message has been received (the message is held until the sender's buffer is free for reuse and the destination process has started receiving it).

MPI_SSEND (buf,count,datatype,dest,tag,comm,ierr)

MPI_BSEND: creates a buffer in which the data is stored until it is sent; the call returns once the data has been copied into the buffer.

MPI_BSEND (buf,count,datatype,dest,tag,comm,ierr)

MPI_BUFFER_ATTACH: allocates buffer space for messages sent with MPI_BSEND().

MPI_BUFFER_ATTACH (buffer,size,ierr)

MPI_BUFFER_DETACH: releases the buffer space used by MPI_BSEND().

MPI_BUFFER_DETACH (buffer,size,ierr)

MPI_RSEND: ready-mode send; it should be used only when the programmer is certain that the matching receive has already been posted.

MPI_RSEND (buf,count,datatype,dest,tag,comm,ierr)

MPI_SENDRECV: sends a message and posts a receive for a message from another task.

MPI_SENDRECV (sendbuf,sendcount,sendtype,dest,sendtag,
recvbuf,recvcount,recvtype,source,recvtag,comm,status,ierr)

MPI_WAIT: waits until the pending send or receive operation completes.

MPI_WAIT (request,status,ierr)

MPI_PROBE: performs a blocking test for a message.

MPI_PROBE (source,tag,comm,status,ierr)
Example:

program ping
use mpi
integer numtasks, rank, dest, source, count, tag, ierr
integer stat(MPI_STATUS_SIZE)
character inmsg, outmsg
outmsg = 'x'
tag = 1
call MPI_INIT(ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
if (rank .eq. 0) then
  dest = 1
  source = 1
  call MPI_SEND(outmsg, 1, MPI_CHARACTER, dest, tag,
&               MPI_COMM_WORLD, ierr)
  call MPI_RECV(inmsg, 1, MPI_CHARACTER, source, tag,
&               MPI_COMM_WORLD, stat, ierr)
else if (rank .eq. 1) then
  dest = 0
  source = 0
  call MPI_RECV(inmsg, 1, MPI_CHARACTER, source, tag,
&               MPI_COMM_WORLD, stat, ierr)
  call MPI_SEND(outmsg, 1, MPI_CHARACTER, dest, tag,
&               MPI_COMM_WORLD, ierr)
endif
call MPI_GET_COUNT(stat, MPI_CHARACTER, count, ierr)
print *, 'Task ', rank, ': Received ', count, ' char(s) from task ',
&        stat(MPI_SOURCE), ' with tag ', stat(MPI_TAG)
call MPI_FINALIZE(ierr)
end
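The buffered mode listed above (MPI_BSEND together with MPI_BUFFER_ATTACH and MPI_BUFFER_DETACH) can be sketched as follows. This is not from the original document; the buffer size and variable names are illustrative assumptions, and the reserved MPI_BSEND_OVERHEAD bytes account for the library's internal bookkeeping per buffered message:

```fortran
program bsendping
  use mpi
  integer, parameter :: BUFBYTES = 1000 + MPI_BSEND_OVERHEAD
  character(len=BUFBYTES) sendspace   ! memory attached as the send buffer
  integer ierr, rank, val, sz
  integer stat(MPI_STATUS_SIZE)
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_BUFFER_ATTACH(sendspace, BUFBYTES, ierr)
  if (rank .eq. 0) then
     val = 42
     ! returns as soon as val is copied into the attached buffer
     call MPI_BSEND(val, 1, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, ierr)
  else if (rank .eq. 1) then
     call MPI_RECV(val, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, stat, ierr)
     print *, 'task 1 received', val
  endif
  ! detach blocks until all buffered messages have been delivered
  call MPI_BUFFER_DETACH(sendspace, sz, ierr)
  call MPI_FINALIZE(ierr)
end
```

Detaching the buffer before MPI_FINALIZE ensures no buffered message is still in flight when the program shuts down.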
4.5 Non-blocking message-passing routines

Commonly used routines for non-blocking message passing include:

MPI_ISEND: non-blocking send; identifies a region of memory to serve as a send buffer.

MPI_ISEND (buf,count,datatype,dest,tag,comm,request,ierr)

MPI_IRECV: non-blocking receive; identifies a region of memory to serve as a receive buffer.

MPI_IRECV (buf,count,datatype,source,tag,comm,request,ierr)

MPI_ISSEND: non-blocking synchronous send.

MPI_ISSEND (buf,count,datatype,dest,tag,comm,request,ierr)

MPI_IBSEND: non-blocking buffered send.

MPI_IBSEND (buf,count,datatype,dest,tag,comm,request,ierr)

MPI_IRSEND: non-blocking ready-mode send.

MPI_IRSEND (buf,count,datatype,dest,tag,comm,request,ierr)

MPI_TEST: checks the completion status of the non-blocking send and receive routines ISEND() and IRECV(). The request argument is the request variable used in the send/receive call; the flag argument returns 1 (true) if the operation has completed and 0 otherwise.

MPI_TEST (request,flag,status,ierr)

MPI_IPROBE: performs a non-blocking test for a message.

MPI_IPROBE (source,tag,comm,flag,status,ierr)
Example:

program ringtopo
use mpi
integer numtasks, rank, next, prev, buf(2), tag1, tag2, ierr
integer stats(MPI_STATUS_SIZE,4), reqs(4)
tag1 = 1
tag2 = 2
call MPI_INIT(ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
prev = rank - 1
next = rank + 1
if (rank .eq. 0) then
  prev = numtasks - 1
endif
if (rank .eq. numtasks - 1) then
  next = 0
endif
call MPI_IRECV(buf(1), 1, MPI_INTEGER, prev, tag1,
&              MPI_COMM_WORLD, reqs(1), ierr)
call MPI_IRECV(buf(2), 1, MPI_INTEGER, next, tag2,
&              MPI_COMM_WORLD, reqs(2), ierr)
call MPI_ISEND(rank, 1, MPI_INTEGER, prev, tag2,
&              MPI_COMM_WORLD, reqs(3), ierr)
call MPI_ISEND(rank, 1, MPI_INTEGER, next, tag1,
&              MPI_COMM_WORLD, reqs(4), ierr)
! do some work
call MPI_WAITALL(4, reqs, stats, ierr)
call MPI_FINALIZE(ierr)
end
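MPI_TEST from the list above lets a task poll a request and keep working instead of blocking in MPI_WAIT. A minimal sketch (not from the original document; the two-rank exchange and variable names are illustrative assumptions):

```fortran
program testpoll
  use mpi
  integer ierr, rank, val
  integer req, stat(MPI_STATUS_SIZE)
  logical flag
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  if (rank .eq. 0) then
     call MPI_ISEND(rank, 1, MPI_INTEGER, 1, 0,
&                   MPI_COMM_WORLD, req, ierr)
  else if (rank .eq. 1) then
     call MPI_IRECV(val, 1, MPI_INTEGER, 0, 0,
&                   MPI_COMM_WORLD, req, ierr)
  endif
  if (rank .le. 1) then
     flag = .false.
     do while (.not. flag)
        ! flag becomes .true. once the operation has completed;
        ! other useful work could be done between polls
        call MPI_TEST(req, flag, stat, ierr)
     enddo
  endif
  call MPI_FINALIZE(ierr)
end
```

Only ranks 0 and 1 participate here, so the sketch also runs correctly with more than two tasks.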
4.6 Collective communication routines

Commonly used routines for collective communication include:

MPI_BARRIER: synchronization (barrier); a task reaching the barrier must wait until all other tasks on the same communicator have also reached it (see Figure 3).

MPI_BARRIER (comm,ierr)

Figure 3: Illustration of the barrier

MPI_BCAST: sends a copy of a buffer of size count from the root task to every other process in the same communicator.

MPI_BCAST (buffer,count,datatype,root,comm,ierr)

MPI_SCATTER: distributes a buffer to all other tasks; the buffer is split into sendcnt pieces.

MPI_SCATTER (sendbuf,sendcnt,sendtype,recvbuf,recvcnt,recvtype,root,comm,ierr)

MPI_GATHER: builds a single new buffer from pieces of data gathered from the tasks.

MPI_GATHER (sendbuf,sendcnt,sendtype,recvbuf,recvcnt,recvtype,root,comm,ierr)

MPI_ALLGATHER: like MPI_GATHER but copies the new buffer to every task.

MPI_ALLGATHER (sendbuf,sendcnt,sendtype,recvbuf,recvcnt,recvtype,comm,ierr)
MPI_REDUCE: applies a reduction operator (the op argument) across all tasks and stores the result on a single task.

MPI_REDUCE (sendbuf,recvbuf,count,datatype,op,root,comm,ierr)

The reduction operators are: MPI_MAX (maximum), MPI_MIN (minimum), MPI_SUM (sum), MPI_PROD (product), MPI_LAND (logical AND), MPI_BAND (bitwise AND), MPI_LOR (logical OR), MPI_BOR (bitwise OR), MPI_LXOR (logical XOR), MPI_BXOR (bitwise XOR), MPI_MAXLOC (maximum value and its location), MPI_MINLOC (minimum value and its location).

MPI_ALLREDUCE: like MPI_REDUCE but stores the result on every task.

MPI_ALLREDUCE (sendbuf,recvbuf,count,datatype,op,comm,ierr)

MPI_REDUCE_SCATTER: equivalent to applying MPI_REDUCE followed by MPI_SCATTER.

MPI_REDUCE_SCATTER (sendbuf,recvbuf,recvcnt,datatype,op,comm,ierr)

MPI_ALLTOALL: equivalent to applying MPI_SCATTER followed by MPI_GATHER.

MPI_ALLTOALL (sendbuf,sendcnt,sendtype,recvbuf,recvcnt,recvtype,comm,ierr)

MPI_SCAN: performs a prefix (running) reduction across the tasks.

MPI_SCAN (sendbuf,recvbuf,count,datatype,op,comm,ierr)
Figure 4: Illustration of some collective communication routines
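As a hedged illustration of MPI_SCAN (not in the original document; the contributed values are an assumption for demonstration), each task contributes its rank+1 and receives the running sum over ranks 0 up to its own:

```fortran
program scansum
  use mpi
  integer ierr, rank, sendval, recvval
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  sendval = rank + 1
  ! prefix sum: the task with rank k receives 1 + 2 + ... + (k+1)
  call MPI_SCAN(sendval, recvval, 1, MPI_INTEGER, MPI_SUM,
&               MPI_COMM_WORLD, ierr)
  print *, 'task', rank, 'prefix sum =', recvval
  call MPI_FINALIZE(ierr)
end
```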
Example:

program scatter
use mpi
integer SIZE
parameter (SIZE=4)
integer numtasks, rank, sendcount, recvcount, source, ierr
real*4 sendbuf(SIZE,SIZE), recvbuf(SIZE)
! Fortran stores this array in column major order, so the
! scatter will actually scatter columns, not rows.
data sendbuf / 1.0,  2.0,  3.0,  4.0,
&              5.0,  6.0,  7.0,  8.0,
&              9.0, 10.0, 11.0, 12.0,
&             13.0, 14.0, 15.0, 16.0 /
call MPI_INIT(ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
if (numtasks .eq. SIZE) then
  source = 1
  sendcount = SIZE
  recvcount = SIZE
  call MPI_SCATTER(sendbuf, sendcount, MPI_REAL, recvbuf,
&                  recvcount, MPI_REAL, source, MPI_COMM_WORLD, ierr)
  print *, 'rank=', rank, ' Results: ', recvbuf
else
  print *, 'Must specify', SIZE, ' processors. Terminating.'
endif
call MPI_FINALIZE(ierr)
end
5 Some examples
5.1 Example: computing the number π
In this example we will parallelize the computation of the number π. The value of π can be determined from the integral formula

    π = ∫₀¹ f(x) dx,  where f(x) = 4/(1 + x²)    (1)

This integral can be approximated numerically as

    π ≈ (1/n) ∑_{i=1}^{n} f(x_i),  where x_i = (i - 1/2)/n    (2)
The approximation formula above is straightforward to implement in Fortran
      f(a) = 4.d0 / (1.d0 + a*a)
      h = 1.0d0/n
      sum = 0.0d0
      do 10 i = 1, n
         x = h * (dble(i) - 0.5d0)
         sum = sum + f(x)
10    continue
      mypi = h*sum
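As a quick numerical cross-check of formula (2), independent of the Fortran code, here is the same midpoint sum in Python (n = 10000 is an arbitrary choice):

```python
import math

def approx_pi(n):
    """Midpoint-rule sum (1/n) * sum_i f(x_i) with f(x) = 4/(1 + x^2)."""
    h = 1.0 / n
    s = 0.0
    for i in range(1, n + 1):
        x = h * (i - 0.5)
        s += 4.0 / (1.0 + x * x)
    return h * s

print(abs(approx_pi(10000) - math.pi))   # error of roughly 1e-9
```

The error shrinks like 1/n², which is why even a modest n gives many correct digits.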
In this example we will use the collective communication mechanism. From the formula above it is easy to see that there is only one input parameter, n, so we broadcast this parameter to all tasks in the system with MPI_BCAST
      call MPI_BCAST(n, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
Each task will then evaluate the approximation formula about n/numtasks times, where numtasks is the total number of tasks, so the loop is modified to
      f(a) = 4.d0 / (1.d0 + a*a)
      h = 1.0d0/n
      sum = 0.0d0
      do 10 i = myid+1, n, numtasks
         x = h * (dble(i) - 0.5d0)
         sum = sum + f(x)
10    continue
      mypi = h*sum
where myid is the rank of the executing task.
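The stride-numtasks loop hands out indices cyclically, and it is worth convincing oneself that the tasks together cover 1..n exactly once. A plain-Python check (n = 10 and 3 tasks chosen arbitrarily):

```python
n, numtasks = 10, 3     # arbitrary example sizes

# Task myid takes the 1-based indices myid+1, myid+1+numtasks, ...
chunks = [list(range(myid + 1, n + 1, numtasks))
          for myid in range(numtasks)]
print(chunks)           # [[1, 4, 7, 10], [2, 5, 8], [3, 6, 9]]

# The chunks are disjoint and together cover 1..n exactly once.
flat = sorted(i for c in chunks for i in c)
print(flat == list(range(1, n + 1)))     # True
```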
The partial results from the tasks are summed and stored on the destination task with MPI_REDUCE

      call MPI_REDUCE(mypi, pi, 1, MPI_DOUBLE_PRECISION, MPI_SUM, 0,
     &                MPI_COMM_WORLD, ierr)
The skeleton of the program after adding the MPI calls:
      program main
      use mpi

      double precision mypi, pi, h, sum, x, f, a
      integer n, myid, numtasks, i, ierr

      f(a) = 4.d0 / (1.d0 + a*a)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)

      call MPI_BCAST(n, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

      h = 1.0d0/n
      sum = 0.0d0
      do 10 i = myid+1, n, numtasks
         x = h * (dble(i) - 0.5d0)
         sum = sum + f(x)
10    continue
      mypi = h*sum

      call MPI_REDUCE(mypi, pi, 1, MPI_DOUBLE_PRECISION, MPI_SUM, 0,
     &                MPI_COMM_WORLD, ierr)

      call MPI_FINALIZE(ierr)
      end
The complete program for computing the value of π is as follows
      program calcpi
      use mpi

      double precision PI25DT
      parameter (PI25DT = 3.141592653589793238462643d0)
      double precision mypi, pi, h, sum, x, f, a
      integer n, myid, numtasks, i, ierr

      f(a) = 4.d0 / (1.d0 + a*a)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)

10    if (myid .eq. 0) then
         print *, 'Enter the number of intervals: (0 quits)'
         read(*,*) n
      endif

      call MPI_BCAST(n, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

      if (n .le. 0) goto 30

      h = 1.0d0/n
      sum = 0.0d0
      do 20 i = myid+1, n, numtasks
         x = h * (dble(i) - 0.5d0)
         sum = sum + f(x)
20    continue
      mypi = h*sum

      call MPI_REDUCE(mypi, pi, 1, MPI_DOUBLE_PRECISION, MPI_SUM, 0,
     &                MPI_COMM_WORLD, ierr)

      if (myid .eq. 0) then
         print *, 'pi is ', pi, ' Error is ', abs(pi - PI25DT)
      endif

      goto 10
30    call MPI_FINALIZE(ierr)
      stop
      end
5.2 Example: matrix multiplication
In this example we will build a program that computes the product of two matrices in parallel. Suppose matrix C is the product of matrices A and B; in Fortran we can declare them as follows
      parameter (NRA = 62)
      parameter (NCA = 15)
      parameter (NCB = 7)

      real*8 a(NRA,NCA), b(NCA,NCB), c(NRA,NCB)
where NRA, NCA, and NCB are, respectively, the number of rows of A, the number of columns of A (equal to the number of rows of B), and the number of columns of B.
The special feature of this example is that the tasks (processes) are divided into two kinds, a master task and worker tasks, each performing different work. For readability we declare parameters for the rank of the master task (MASTER) and for the tags marking whether data was sent from the master task (FROM_MASTER) or from a worker task (FROM_WORKER).
      parameter (MASTER = 0)
      parameter (FROM_MASTER = 1)
      parameter (FROM_WORKER = 2)
The master task's work consists of:
Initializing matrices A and B
! Initialize A and B
      do 30 i=1, NRA
      do 30 j=1, NCA
         a(i,j) = (i-1)+(j-1)
30    continue
      do 40 i=1, NCA
      do 40 j=1, NCB
         b(i,j) = (i-1)*(j-1)
40    continue
Sending data to the worker tasks. In this example the columns of matrix B are divided into numworkers parts, corresponding to the number of worker tasks.
! Send matrix data to the worker tasks
      avecol = NCB/numworkers
      extra = mod(NCB, numworkers)
      offset = 1
      mtype = FROM_MASTER
      do 50 dest=1, numworkers
         if (dest .le. extra) then
            cols = avecol + 1
         else
            cols = avecol
         endif
         write(*,*) '   sending ', cols, ' cols to task ', dest
         call MPI_SEND(offset, 1, MPI_INTEGER, dest, mtype,
     &                 MPI_COMM_WORLD, ierr)
         call MPI_SEND(cols, 1, MPI_INTEGER, dest, mtype,
     &                 MPI_COMM_WORLD, ierr)
         call MPI_SEND(a, NRA*NCA, MPI_DOUBLE_PRECISION, dest,
     &                 mtype, MPI_COMM_WORLD, ierr)
         call MPI_SEND(b(1,offset), cols*NCA, MPI_DOUBLE_PRECISION,
     &                 dest, mtype, MPI_COMM_WORLD, ierr)
         offset = offset + cols
50    continue
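The avecol/extra arithmetic gives each of the first extra workers one additional column, so all NCB columns are assigned. A plain-Python rendering of the same loop (NCB = 7 as in this example; 3 workers is an arbitrary choice):

```python
NCB, numworkers = 7, 3      # NCB as in the example; 3 workers is arbitrary

avecol = NCB // numworkers  # base number of columns per worker
extra = NCB % numworkers    # leftover columns

offset, plan = 1, []
for dest in range(1, numworkers + 1):
    cols = avecol + 1 if dest <= extra else avecol  # first `extra` get one more
    plan.append((dest, offset, cols))               # (worker, first column, count)
    offset += cols
print(plan)     # [(1, 1, 3), (2, 4, 2), (3, 6, 2)]
```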
Receiving the results from the worker tasks
! Receive results from worker tasks
      mtype = FROM_WORKER
      do 60 i=1, numworkers
         source = i
         call MPI_RECV(offset, 1, MPI_INTEGER, source,
     &                 mtype, MPI_COMM_WORLD, status, ierr)
         call MPI_RECV(cols, 1, MPI_INTEGER, source,
     &                 mtype, MPI_COMM_WORLD, status, ierr)
         call MPI_RECV(c(1,offset), cols*NRA, MPI_DOUBLE_PRECISION,
     &                 source, mtype, MPI_COMM_WORLD, status, ierr)
60    continue
Printing the results
! Print results
      do 90 i=1, NRA
         do 80 j=1, NCB
            write(*,70) c(i,j)
70          format(2x, f8.2, $)
80       continue
         print *, ' '
90    continue
The worker tasks' work consists of:

Receiving data from the master task
! Receive matrix data from master task
      mtype = FROM_MASTER
      call MPI_RECV(offset, 1, MPI_INTEGER, MASTER,
     &              mtype, MPI_COMM_WORLD, status, ierr)
      call MPI_RECV(cols, 1, MPI_INTEGER, MASTER,
     &              mtype, MPI_COMM_WORLD, status, ierr)
      call MPI_RECV(a, NRA*NCA, MPI_DOUBLE_PRECISION, MASTER,
     &              mtype, MPI_COMM_WORLD, status, ierr)
      call MPI_RECV(b, cols*NCA, MPI_DOUBLE_PRECISION, MASTER,
     &              mtype, MPI_COMM_WORLD, status, ierr)
Performing the matrix multiplication
! Do matrix multiply
      do 100 k=1, cols
      do 100 i=1, NRA
         c(i,k) = 0.0
         do 100 j=1, NCA
            c(i,k) = c(i,k) + a(i,j)*b(j,k)
100   continue
Returning the results to the master task
! Send results back to master task
      mtype = FROM_WORKER
      call MPI_SEND(offset, 1, MPI_INTEGER, MASTER, mtype,
     &              MPI_COMM_WORLD, ierr)
      call MPI_SEND(cols, 1, MPI_INTEGER, MASTER, mtype,
     &              MPI_COMM_WORLD, ierr)
      call MPI_SEND(c, cols*NRA, MPI_DOUBLE_PRECISION, MASTER,
     &              mtype, MPI_COMM_WORLD, ierr)
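This scheme is correct because column k of C depends only on all of A and on column k of B, so the per-worker column blocks can be computed independently and glued back together. A small pure-Python check with arbitrary 2x3 and 3x2 matrices:

```python
def matmul(A, B):
    """Plain triple-loop product, C[i][k] = sum_j A[i][j] * B[j][k]."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for k in range(p):
        for i in range(n):
            for j in range(m):
                C[i][k] += A[i][j] * B[j][k]
    return C

A = [[1, 2, 3], [4, 5, 6]]
B = [[1, 2], [3, 4], [5, 6]]

# Worker w multiplies all of A by its column slice of B ...
col_slices = [[row[k:k+1] for row in B] for k in range(2)]
blocks = [matmul(A, s) for s in col_slices]

# ... and the master glues the column blocks back together.
C = [sum((blk[i] for blk in blocks), []) for i in range(len(A))]
print(C == matmul(A, B))   # True
```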
The complete matrix multiplication program is as follows
      program mm
      use mpi

      parameter (NRA = 62)
      parameter (NCA = 15)
      parameter (NCB = 7)
      parameter (MASTER = 0)
      parameter (FROM_MASTER = 1)
      parameter (FROM_WORKER = 2)

      integer numtasks, taskid, numworkers, source, dest, mtype,
     &        cols, avecol, extra, offset, i, j, k, ierr
      integer status(MPI_STATUS_SIZE)
      real*8 a(NRA,NCA), b(NCA,NCB), c(NRA,NCB)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, taskid, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
      numworkers = numtasks - 1
      print *, 'task ID= ', taskid

! master task
      if (taskid .eq. MASTER) then

! Initialize A and B
         do 30 i=1, NRA
         do 30 j=1, NCA
            a(i,j) = (i-1)+(j-1)
30       continue
         do 40 i=1, NCA
         do 40 j=1, NCB
            b(i,j) = (i-1)*(j-1)
40       continue

! Send matrix data to the worker tasks
         avecol = NCB/numworkers
         extra = mod(NCB, numworkers)
         offset = 1
         mtype = FROM_MASTER
         do 50 dest=1, numworkers
            if (dest .le. extra) then
               cols = avecol + 1
            else
               cols = avecol
            endif
            write(*,*) '   sending ', cols, ' cols to task ', dest
            call MPI_SEND(offset, 1, MPI_INTEGER, dest, mtype,
     &                    MPI_COMM_WORLD, ierr)
            call MPI_SEND(cols, 1, MPI_INTEGER, dest, mtype,
     &                    MPI_COMM_WORLD, ierr)
            call MPI_SEND(a, NRA*NCA, MPI_DOUBLE_PRECISION, dest,
     &                    mtype, MPI_COMM_WORLD, ierr)
            call MPI_SEND(b(1,offset), cols*NCA,
     &                    MPI_DOUBLE_PRECISION, dest, mtype,
     &                    MPI_COMM_WORLD, ierr)
            offset = offset + cols
50       continue

! Receive results from worker tasks
         mtype = FROM_WORKER
         do 60 i=1, numworkers
            source = i
            call MPI_RECV(offset, 1, MPI_INTEGER, source,
     &                    mtype, MPI_COMM_WORLD, status, ierr)
            call MPI_RECV(cols, 1, MPI_INTEGER, source,
     &                    mtype, MPI_COMM_WORLD, status, ierr)
            call MPI_RECV(c(1,offset), cols*NRA,
     &                    MPI_DOUBLE_PRECISION, source, mtype,
     &                    MPI_COMM_WORLD, status, ierr)
60       continue

! Print results
         do 90 i=1, NRA
            do 80 j=1, NCB
               write(*,70) c(i,j)
70             format(2x, f8.2, $)
80          continue
            print *, ' '
90       continue

      endif

! worker task
      if (taskid .gt. MASTER) then

! Receive matrix data from master task
         mtype = FROM_MASTER
         call MPI_RECV(offset, 1, MPI_INTEGER, MASTER,
     &                 mtype, MPI_COMM_WORLD, status, ierr)
         call MPI_RECV(cols, 1, MPI_INTEGER, MASTER,
     &                 mtype, MPI_COMM_WORLD, status, ierr)
         call MPI_RECV(a, NRA*NCA, MPI_DOUBLE_PRECISION, MASTER,
     &                 mtype, MPI_COMM_WORLD, status, ierr)
         call MPI_RECV(b, cols*NCA, MPI_DOUBLE_PRECISION, MASTER,
     &                 mtype, MPI_COMM_WORLD, status, ierr)

! Do matrix multiply
         do 100 k=1, cols
         do 100 i=1, NRA
            c(i,k) = 0.0
            do 100 j=1, NCA
               c(i,k) = c(i,k) + a(i,j)*b(j,k)
100      continue

! Send results back to master task
         mtype = FROM_WORKER
         call MPI_SEND(offset, 1, MPI_INTEGER, MASTER, mtype,
     &                 MPI_COMM_WORLD, ierr)
         call MPI_SEND(cols, 1, MPI_INTEGER, MASTER, mtype,
     &                 MPI_COMM_WORLD, ierr)
         call MPI_SEND(c, cols*NRA, MPI_DOUBLE_PRECISION, MASTER,
     &                 mtype, MPI_COMM_WORLD, ierr)

      endif

      call MPI_FINALIZE(ierr)
      end