HDDM: Hierarchical Bayesian Estimation of the Drift Diffusion Model


Uploaded by twiecki on 20-Jun-2015


DESCRIPTION

Talk presented at MathPsych. Code: http://github.com/hddm-devs/hddm

TRANSCRIPT

  • 1. HDDM: Hierarchical Bayesian Drift-Diffusion Modeling. Thomas V. Wiecki, Imri Sofer & Michael J. Frank

2. Drift-Diffusion Model

3.-4. (DDM illustration slides)

5. Traditional model fitting: either fit separate models to each subject (e.g. DMAT, fast-dm, EZ), estimating P(data_1 | θ_1) ... P(data_n | θ_n), which ignores similarities between subjects; or fit one model to all subjects, estimating P(data | θ), which ignores their differences.

6. Hierarchical model estimation: subject parameters θ_1 ... θ_n are themselves drawn from a group distribution, so we estimate P(θ_group | θ_1, ..., θ_n) and P(θ_i | data, θ_group).

7. Hierarchical Bayesian estimation

  • Pro
  • Adequately maps experimental structure onto model

8. Needs less data for individual-subject estimation
9. Constrains subject parameters (helps with extreme fits)
10. Estimates the full posterior, not just its maximum
11. ...
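Points 8 and 9 above are the shrinkage effect of partial pooling: noisy per-subject estimates get pulled toward the group mean in proportion to how unreliable they are. A minimal numerical sketch with a hierarchical normal model (illustration only, not HDDM's actual estimator; all parameter values and variable names here are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: true subject drift rates drawn from a group distribution.
group_mean, group_sd, noise_sd, n_trials = 0.5, 0.3, 1.0, 20
true_v = rng.normal(group_mean, group_sd, size=8)

# Each subject contributes only a few noisy trial-level observations.
obs = [rng.normal(v, noise_sd, size=n_trials) for v in true_v]
subj_means = np.array([o.mean() for o in obs])

# Separate fits would use subj_means directly (ignores similarities);
# complete pooling would use one grand mean (ignores differences).
grand_mean = subj_means.mean()

# Partial pooling (empirical-Bayes shrinkage): weight each subject mean
# by its precision relative to the estimated between-subject variance.
sem2 = noise_sd**2 / n_trials                    # within-subject variance of the mean
tau2 = max(subj_means.var(ddof=1) - sem2, 1e-9)  # estimated between-subject variance
shrink = tau2 / (tau2 + sem2)                    # weight on the subject's own data
partial = grand_mean + shrink * (subj_means - grand_mean)

print("shrinkage weight:", round(shrink, 3))
```

Every shrunken estimate lies between the subject's own mean and the group mean, which is exactly why extreme individual fits get reined in.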

  • Contra
  • Computationally expensive (sampling, e.g. MCMC)

12. Correct model behavior can be hard to assess (e.g. chain convergence)
13. Methods still in development
14.
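Chain convergence (point 12) is commonly checked with the Gelman-Rubin statistic R-hat, which compares between-chain to within-chain variance. A minimal sketch of that diagnostic (not HDDM's built-in implementation):

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin R-hat for an (m, n) array of m chains of n samples.
    Values near 1 indicate the chains have mixed; values well above 1
    suggest non-convergence."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    b = n * chain_means.var(ddof=1)          # between-chain variance
    w = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
    var_plus = (n - 1) / n * w + b / n       # pooled variance estimate
    return np.sqrt(var_plus / w)

rng = np.random.default_rng(1)
mixed = rng.normal(0.0, 1.0, size=(4, 1000))            # four chains, same target
stuck = mixed + np.array([[0.0], [0.0], [3.0], [3.0]])  # two chains stuck elsewhere
print(gelman_rubin(mixed), gelman_rubin(stuck))
```

The well-mixed chains give R-hat close to 1, while the shifted chains inflate the between-chain variance and push R-hat well above it.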

  • Hierarchical Bayesian estimation (via PyMC) of the parameters of the DDM in Python.
  • Ratcliff, Vandekerckhove, Tuerlinckx, Lee, Wagenmakers

Heavily optimized likelihood functions

  • Navarro & Fuss (2009) likelihood
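The Navarro & Fuss (2009) result is a fast, accuracy-controlled evaluation of the Wiener first-passage time density, switching between a small-time and a large-time series expansion. A rough sketch of that idea, with a fixed truncation and a crude switching rule instead of their adaptive error bound (parameter names are mine):

```python
import numpy as np

def wfpt_lower(t, v, a, w, k_small=7, k_large=50, switch=0.2):
    """Wiener first-passage density at the LOWER boundary at time t,
    for drift v, boundary separation a, relative start point w
    (0 < w < 1) and unit diffusion.  Uses the small-time series when
    the scaled time u = t/a**2 is below `switch`, the large-time
    series otherwise (fixed truncation for simplicity)."""
    u = t / a**2
    if u < switch:
        k = np.arange(-k_small, k_small + 1)
        f1 = np.sum((w + 2 * k) * np.exp(-(w + 2 * k) ** 2 / (2 * u)))
        f1 /= np.sqrt(2 * np.pi * u**3)
    else:
        k = np.arange(1, k_large + 1)
        f1 = np.pi * np.sum(k * np.exp(-k**2 * np.pi**2 * u / 2) * np.sin(k * np.pi * w))
    return np.exp(-v * a * w - v**2 * t / 2) * f1 / a**2

def wfpt_upper(t, v, a, w):
    # Density at the upper boundary by symmetry: flip drift and start point.
    return wfpt_lower(t, -v, a, 1 - w)

# Crude check that the two boundary densities form a proper distribution.
ts = np.linspace(1e-4, 30.0, 20001)
dens = np.array([wfpt_lower(t, 1.0, 2.0, 0.5) + wfpt_upper(t, 1.0, 2.0, 0.5) for t in ts])
step = ts[1] - ts[0]
total = (dens[:-1] + dens[1:]).sum() * step / 2  # trapezoid rule, should be near 1
```

Evaluating this density once per trial is the inner loop of DDM likelihood computation, which is why an optimized implementation matters so much for MCMC.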

15. Collapsed model for inter-trial variabilities. Flexible creation of complex models tailored to specific hypotheses (e.g. separate drift-rate parameters for different stimulus types).

16. Several convergence and goodness-of-fit diagnostics.

17. Validated: integrated tests check that parameters from simulated data can be recovered by HDDM.

18. ...it works!

19. How to get your data into HDDM: a CSV file with columns response, rt, subj_idx, difficulty:

    response, rt, subj_idx, difficulty
    1, 1.06, 1, hard
    1, 1.052, 1, hard
    1, 1.398, 1, hard
    0, 0.48, 1, easy
    1, 1.798, 1, easy
    1, 0.94, 1, easy
    1, 2.093, 2, hard
    1, 0.91, 2, hard
    0, 1.019, 2, hard
    ...

20. Model specification via configuration file:

    [depends]
    v = difficulty

    [mcmc]
    samples = 5000
    burn = 1000

21. Model fitting:

    $> hddmfit simple_difficulty.conf simple_difficulty.csv
    Creating model...
    Sampling: 100% [0000000000000000000000000000000000] Iterations: 5000

    name        mean   std    2.5q   25q    50q    75q    97.5q  mc_err
    a           2.029  0.034  1.953  2.009  2.028  2.049  2.090  0.002
    t           0.297  0.007  0.282  0.292  0.297  0.302  0.311  0.001
    v('easy',)  0.992  0.051  0.902  0.953  0.987  1.028  1.102  0.003
    v('hard',)  0.522  0.049  0.429  0.485  0.514  0.561  0.612  0.002

    logp: -1171.276303
    DIC: 2329.069932
    DIC without separate drift rates: 2373.395603

22. Output statistics: hard condition and easy condition; error responses mirrored along the y-axis.

23. Output statistics II

24. Python model creation:

    import hddm
    # Load data from csv file into a NumPy structured array
    data = hddm.load_csv('simple_subj_data.csv')
    # Create an HDDM model object
    model = hddm.HDDM(data, depends_on={'v': 'difficulty'})
    # Start MCMC sampling
    model.sample(5000, burn=2000)
    # Print fitted parameters and other model statistics
    model.print_stats()
    # Plot posterior distributions and theoretical RT distributions
    hddm.plot_posteriors(model)
    hddm.plot_post_pred(model)

25. Trial-by-trial random effects (Cavanagh, Wiecki et al., submitted)

26. Upcoming features
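The validation on slide 17 presupposes a generator that produces data in the response/rt/subj_idx/difficulty layout of slide 19. A minimal Euler-Maruyama DDM simulator that emits such rows (a sketch only; HDDM ships its own simulation utilities, and the drift values, trial counts, and function names here are illustrative):

```python
import numpy as np

def simulate_ddm_trial(v, a, t0, w=0.5, dt=1e-3, max_t=10.0, rng=None):
    """One DDM trial via Euler-Maruyama: evidence x starts at w*a and
    diffuses with drift v and unit noise until it crosses 0 or a.
    Returns (response, rt): response 1 = upper boundary, 0 = lower
    (rare timeouts at max_t are counted as lower responses here)."""
    rng = rng or np.random.default_rng()
    x, t = w * a, 0.0
    sqrt_dt = np.sqrt(dt)
    while 0.0 < x < a and t < max_t:
        x += v * dt + sqrt_dt * rng.standard_normal()
        t += dt
    return (1 if x >= a else 0), t + t0

rng = np.random.default_rng(2)
rows = []
for subj in (1, 2):
    for difficulty, drift in (("easy", 1.0), ("hard", 0.4)):
        for _ in range(50):
            resp, rt = simulate_ddm_trial(drift, a=2.0, t0=0.3, rng=rng)
            rows.append((resp, round(rt, 3), subj, difficulty))
# rows now match the response, rt, subj_idx, difficulty layout of slide 19
```

Writing these rows to a CSV and refitting with hddmfit is the shape of a parameter-recovery test: the recovered drift rates should separate the easy and hard conditions.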

  • GPU optimized likelihood (~5x speed-up)

27. Contaminant model
28. Linear Ballistic Accumulator model
29. Switch-task model. Note to developers: it is very easy to add your own models to this framework!
30. http://github.com/hddm-devs/hddm