
    Organizational Research Methods
    http://orm.sagepub.com

    DOI: 10.1177/109442810034001
    Organizational Research Methods, 2000; 3; 307
    Sheila Simsarian Webber, Gilad Chen, Stephanie C. Payne, Sean M. Marsh, and Stephen J. Zaccaro

    Enhancing Team Mental Model Measurement with Performance Appraisal Practices

    The online version of this article can be found at: http://orm.sagepub.com/cgi/content/abstract/3/4/307

    Published by: SAGE Publications, http://www.sagepublications.com
    On behalf of: The Research Methods Division of The Academy of Management

    Additional services and information for Organizational Research Methods can be found at:
    Email Alerts: http://orm.sagepub.com/cgi/alerts
    Subscriptions: http://orm.sagepub.com/subscriptions
    Reprints: http://www.sagepub.com/journalsReprints.nav
    Permissions: http://www.sagepub.com/journalsPermissions.nav
    Citations (this article cites 15 articles hosted on the SAGE Journals Online and HighWire Press platforms): http://orm.sagepub.com/cgi/content/refs/3/4/307

    2000 SAGE Publications. All rights reserved. Not for commercial use or unauthorized distribution. Downloaded from http://orm.sagepub.com at PENNSYLVANIA STATE UNIV on April 10, 2008.

    Enhancing Team Mental Model Measurement With Performance Appraisal Practices

    SHEILA SIMSARIAN WEBBER

    Concordia University

    GILAD CHEN

    George Mason University

    STEPHANIE C. PAYNE

    Texas A&M University

    SEAN M. MARSH

    STEPHEN J. ZACCARO

    George Mason University

    Little dispute exists with regard to the conceptual and practical contributions of

    team mental models (TMMs) to team-related research and applications, yet the

    measurement of TMMs poses great challenges for researchers and practitioners.

    Borrowing from performance appraisal practices, this article presents a new

    method for assessing TMMs that is user-friendly and allows for the measurement

    of both TMM accuracy and similarity. The extent to which TMM similarity and accuracy indices predict team performance in a field setting is examined. Contributions to team research and practice are discussed.

    Teams are a dominant research area because of their increased popularity within organizations. Many researchers and practitioners have examined ways to improve team

    performance through various team design issues, as well as process enhancements

    (e.g., Brannick, Salas, & Prince, 1997). One emerging finding in this research is that

    shared knowledge in teams promotes team coordination and performance (Marks &

    Zaccaro, 1997; Mathieu, Heffner, Goodwin, Salas, & Cannon-Bowers, 2000). Shared

    Authors Note: Portions of this article were presented at the 58th annual meeting of the Academy of

    Management, San Diego, California, 1998. We thank David Hofmann, Richard Klimoski, the anonymous reviewers, and the editor for providing useful comments and suggestions on this article. Correspondence regarding this article should be sent to Sheila Simsarian Webber, Department of Management, Concordia University, 1455 de Maisonneuve Blvd. West, Montreal, Quebec, Canada H3G 1M8; e-mail: swebber@vax2.concordia.ca.

    Organizational Research Methods, Vol. 3 No. 4, October 2000 307-322

    2000 Sage Publications, Inc.


    knowledge among team members also helps teams adapt to changing environments, a

    finding that has particularly important implications for action and negotiation teams

    (Marks & Zaccaro, 1997; Marks, Zaccaro, & Mathieu, in press). To enhance our

    understanding of shared knowledge in teams, researchers have borrowed the term mental model from cognitive psychology and examined the extent to which mental

    models are similar among team members (e.g., Cannon-Bowers, Salas, & Converse,

    1993; Klimoski & Mohammed, 1994; Minionis, 1994).

    Although the literature on team mental models (TMMs) is prevalent, their measurement remains a challenge for both researchers and practitioners (Kraiger & Wenzel, 1997; Mohammed, Klimoski, & Rentsch, 2000). Moreover, the majority of TMM research has been conducted in the laboratory without validation in real team environments (e.g., Heffner, Mathieu, & Goodwin, 1995; Marks et al., in press; Minionis, 1994). Therefore, more research is needed that (a) identifies better methodologies for assessing TMMs and (b) examines the extent to which laboratory findings with regard to the utility of TMMs generalize to field settings.

    Accordingly, the purpose of this study was twofold. First, we present a new meth-

    odology for assessing TMMs that has important advantages over existing methods. Second, using this methodology in a field setting, we examine the utility of indices for

    assessing TMM similarity and TMM accuracy. In the sections that follow, we will

    describe TMMs and their measurement. We will also describe how performance

    appraisal practices were applied to develop and validate a new method for measuring

    TMMs.

    TMMs

    Mental models have been defined as "the mechanisms that allow humans to generate descriptions of system purpose and form, explanations of system functioning and observed system states, and predictions of future system states" (Rouse & Morris, 1986, p. 360). Mental models incorporate both knowledge content (i.e., declarative,

    procedural, or strategic) and knowledge structure (i.e., relationships between compo-

    nents of knowledge content). Mental models are not static; they are both manipulable

    and changeable (Johnson-Laird, 1983). They help individuals make sense of situations

    and select the appropriate courses of action given the circumstances of the situation

    (Rouse & Morris, 1986).

    The extent to which team members have common knowledge about the task at hand

    has been referred to as shared or team mental models (Cannon-Bowers et al., 1993;

    Klimoski & Mohammed, 1994). It is believed that TMMs facilitate coordinated team

    performance and team decision making, which in turn contributes to the team's overall

    effectiveness (Cannon-Bowers et al., 1993). Research has shown that the extent to

    which mental models among team members are similar (i.e., TMM similarity) and

    accurate (i.e., TMM accuracy) positively relates to effective team coordination and

    performance (e.g., Heffner et al., 1995; Marks et al., in press; Mathieu et al., 2000; Minionis, 1994). Despite limited empirical research to date, TMMs are beginning to emerge as important constructs to be measured in team research and practice (Mohammed et al., 2000).

    Researchers have described a number of different types of mental models. As a

    result, it is important to identify which type of mental model is most appropriate for the


    team under investigation (Kraiger & Wenzel, 1997). Converse and Kahler (1992) dis-

    tinguished between declarative, procedural, and strategic mental models. Declarative

    mental models contain information about the concepts and elements in the domain and about the relationships between them. Procedural mental models store information about the steps that must be taken to accomplish various activities and the order in which these steps must be taken. Strategic mental models are composed of information

    that provides the basis for problem solving, including action plans to meet specific

    goals, knowledge of the context in which procedures should be implemented, actions

    to be taken if a proposed solution fails, and how to respond if necessary information is

    absent. According to Cannon-Bowers, Tannenbaum, Salas, and Volpe (1995), declarative mental models are important for the types of tasks performed by decision-making

    teams, procedural mental models are important for the types of tasks typical for pro-

    duction or assembly teams, and strategic mental models are particularly important for

    the types of tasks employed by action and negotiation teams.

    It is important to note that the content of strategic TMMs can include teamwork

    and/or taskwork (Cannon-Bowers et al., 1993). Taskwork consists of understanding the activities and action sequences that must be carried out to perform the team's task. Teamwork refers to the communication needs, compensatory behaviors, performance

    monitoring, and internal coordination strategies. In action teams, the development of

    appropriate taskwork and/or teamwork strategies is highly task specific and team spe-

    cific such that strategies employed will vary based on changes in the environmental

    conditions and/or changes in team membership (Cannon-Bowers et al., 1995).

    TMM Measurement

    Although there are a number of methods for measuring TMMs, no single approach

    is dominant in the literature (for a review, see Mohammed et al., 2000). In addition,

    some approaches appear to be more useful for certain types of mental models than are

    others. For example, measures of declarative and procedural mental models (e.g.,

    Pathfinder and concept mapping) are well established and have been used with relative

    success (Kraiger, Salas, & Cannon-Bowers, 1995; Marks et al., in press; Minionis,

    1994). Measures of strategic mental models, on the other hand, are not as well estab-

    lished. Measures also vary in their ability to capture the content of mental models, the

    structure, or both (Mohammed et al., 2000). For example, Eby, Meade, Parisi, and

    Douthitt (1999) recently developed a mental model measure of a specific content area,

    expectations for teamwork. As a result, TMM measurement may require the develop-

    ment and use of multiple types of measures (Kraiger & Wenzel, 1997).

    Furthermore, existing TMM measures have consistently demonstrated problems

    for organizational researchers and practitioners (Klimoski & Mohammed, 1994).

    Some of these problems include (a) confusing instruments, (b) time-consuming ques-

    tionnaires, and (c) cumbersome administration procedures (e.g., requiring a com-

    puter). Such problems are even greater in field settings, in which the time allocated for survey administration may be more limited. In fact, such difficulties have led some

    researchers to interpret their findings with respect to TMMs post hoc, avoiding the

    measurement aspect altogether (e.g., Athens, 1982; Kleinman & Serfaty, 1989). This

    has led to a recurrent call for faster, more user-friendly, yet valid measures of TMMs

    (Klimoski & Mohammed, 1994; Kraiger & Wenzel, 1997).


    Application of Performance Appraisal Practices to TMM Measurement

    In this study, we propose a new methodology for assessing TMMs that can be tai-

    lored to the team and task of interest. The application of this measurement technique

    permits an assessment of the content of strategic TMMs. The impetus for this study

    stems from the cumbersome nature of existing mental model measures, as well as the

    need for measures of strategic TMMs. The present measure builds on the work of

    Zaccaro, Shlechter, and Burke (2000), who developed a similar method for measuring

    TMMs in a military setting. Our investigation extends the previous application of this

    method and critically examines the utility of various indices in an effort to assess TMM

    similarity and accuracy.

    The development and implementation of the proposed TMM methodology borrow

    from many of the traditional performance appraisal practices. For instance, typically

    when completing a performance appraisal form, raters assess specific behaviors that

    tap knowledge, skills, and abilities or competencies on a Likert-type scale (Fleenor,

    Fleenor, & Grossnickle, 1996). Similarly, using the methodology described in this study, team members rate potential team actions given a specific team scenario on a

    Likert-type scale.

    Second, the scenarios developed for the TMM questionnaire are based on critical

    incidents (Flanagan, 1954), another application of performance appraisal practices.

    Critical incidents have been used widely in performance appraisal practices, particu-

    larly when developing behaviorally anchored rating scales. The value of this approach is that it focuses on behaviors that discriminate effective from ineffective performance and that these behaviors are generated by subject matter experts (SMEs) (Schneider &

    Schmitt, 1986). With the assistance of SMEs, we developed scenarios that describe

    specific critical team situations in which a variety of behaviors might be demonstrated.

    We also developed a list of potential behaviors that the team members could execute

    given the situation. These behaviors consisted of taskwork activities that were considered to be either effective or ineffective strategies by our SMEs for each situation. Finally, we use interrater agreement and reliability indices typically used in performance appraisal practices to determine the similarity of the team's mental model.

    Assessing TMM Similarity and Accuracy

    To determine TMM similarity, we calculated both interrater agreement and reliability indices with the data collected, and assessed their utility in predicting team performance. To assess TMM accuracy, we compared the team's mental model (the average of the individual team members' mental models) to the experts' mental model (the average of the experts' mental models). In this section, we discuss the assessment of similarity

    and accuracy, issues inherent in the different techniques, and our expectations with

    regard to the utility of each technique.

    TMM Similarity

    As various researchers in the performance appraisal domain have uncovered,

    assessing interrater similarity can take two distinct forms: (a) reliability/consistency or

    (b) agreement/consensus (Fleenor et al., 1996; Kozlowski & Hattrup, 1992). This dis-


    tinction can help clarify how TMM similarity should be assessed. Specifically, having similar TMMs could mean having overlapping TMMs, compatible TMMs, or identical TMMs (see Klimoski & Mohammed, 1994). On one hand, similar mental models may mean that team members share the same pattern of responses or similar (compatible) response profiles (i.e., high interrater reliability or consistency). On the

    other hand, similar mental models may mean that team members share the same

    responses in the absolute sense (i.e., high interrater agreement). Therefore, we evalu-

    ated the utility of both consistency and consensus indices as measures of TMM simi-

    larity. Unraveling the distinction between consistency and consensus is crucial to

    assessing TMM similarity and contributes to the utility of the new measure proposed

    in this research.

    Consistency. Measures of consistency are indices of reliability or the proportional

    consistency of variance among raters. Examples of consistency indices include the

    intraclass correlation coefficient (ICC) (McGraw & Wong, 1996; Shrout & Fleiss,

    1979) and coefficient alpha (Kozlowski & Hattrup, 1992). High interrater reliability

    can be obtained if ratings by k judges are different but proportional. Specifically, consistency indices attend to the similarity of the rank orderings of judges' target ratings (i.e., whether judges' scores correlate with one another) irrespective of whether the scores

    are the same. Therefore, high interrater reliability can be obtained even when there is

    little manifest agreement between judges.

    The value of examining the consistency of k raters is that raters may have the same pattern of ratings but not perfect agreement because of various rating biases (e.g., halo, restriction of range, etc.). For example, one rater may use values 2 through 4 (e.g., 2, 3, 4, 4) on a 7-point scale, whereas another rater uses 5 through 7 (e.g., 5, 6, 7, 7) on the same scale. A correlational analysis of these ratings would reveal high consistency or similarity in the pattern of ratings, whereas an index of agreement would reveal low consensus. Therefore, in cases in which a researcher expects some systematic variability in the use of a rating scale, an index of consistency may be the ideal approach (Kozlowski & Hattrup, 1992).

    Consensus. Interrater consensus or agreement refers to the interchangeability among raters. In other words, an index of consensus assesses the extent to which raters make essentially the same ratings (Kozlowski & Hattrup, 1992). James, Demaree, and Wolf (1984) describe "interchangeable" as the extent to which judges agree on a set of judgments, or the proportion of systematic variance in a set of judgments in relation to some hypothetical random variance in the judgments. High agreement between k raters can exist with low reliability because consistency indices lack power when variance among judges across targets (i.e., parallel items measuring a construct) is restricted (James et al., 1984). This occurs when agreement between judges across a set of common targets is high. Therefore, consistency measures may not be appropriate in such situations. In addition, agreement indices are less sensitive to the number of raters than are specific consistency measures such as coefficient alpha, thus making them more comparable across teams of various sizes (Cortina, 1993; Kozlowski & Hattrup, 1992).

    The most frequently used measure of agreement or consensus is rwg, or rwg(j) for j essentially parallel items. The rwg(j) formula (James et al., 1984, Equation 2) compares the mean within-group variance across all items to some hypothetical expected variance distribution of responses. When no response biases are expected, researchers usually employ the variance of an expected uniform distribution (σ²EU) of responses when calculating rwg(j). This approach is useful when little variance in judges' responses is expected among groups and across the j items. Kozlowski and Hattrup (1992) argue that consensus measures are a reference for the shared perceptions of k raters.

    Recently, Lindell and his colleagues (Lindell & Brandt, 1997, 1999; Lindell, Brandt, & Whitney, 1999) have noted several computational and theoretical problems associated with the rwg(j) index that have particular relevance to this research. First, rwg(j) can display irregular behavior when the obtained average response variance exceeds σ²EU. Consequently, rwg(j) tends to produce severely negatively skewed distributions. Moreover, the rwg(j) formula incorporates the Spearman-Brown correction formula, which is grounded in reliability theory. This is problematic, given that rwg(j) is an agreement index. Furthermore, rwg(j) can create artificially inflated agreement estimates. Specifically, truncating out-of-range values (which occur when the observed variance is greater than the expected variance) to 0 artificially inflates low agreement values (cf. Lindell & Brandt, 1997; Lindell et al., 1999).

    In an effort to deal with these limitations, Lindell et al. (1999) proposed a variation to the rwg(j) formula, r*wg(j), which compares the average of the obtained variances of the items in the scale to σ²EU or to the variance expected under a maximum dissensus distribution (σ²MV). Lindell et al. note that r*wg(j) offers more meaningful values than does rwg(j) in that an r*wg(j) value of .50 using σ²MV (or a value of 0 using σ²EU) represents agreement due to chance, and values higher or lower than .50 (or 0) represent agreement more than or less than chance, respectively. The measure r*wg(j) is a linear function of average variance; however, r*wg(j) can be more useful than average variance because it is scale invariant, making it comparable across different response scales and/or samples. Recently, Eby et al. (1999) demonstrated the use of r*wg(j) as a means for assessing shared or team mental models.
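    The variant can be sketched as a direct ratio of the mean observed item variance to a null variance, with no Spearman-Brown step and no truncation at 0. This is a sketch, not Lindell et al.'s Equation 5 verbatim: the maximum-dissensus variance (scale_max − scale_min)²/4 (raters split evenly between the endpoints) and the use of population variances are our assumptions.

```python
import statistics

def rwg_j_star(ratings, scale_min, scale_max, null="mv"):
    """r*wg(j) in the spirit of Lindell et al. (1999): a linear function
    of the average observed item variance, scale invariant, and never
    truncated at 0."""
    j = len(ratings[0])
    item_vars = [statistics.pvariance([r[i] for r in ratings]) for i in range(j)]
    mean_var = sum(item_vars) / j
    n_options = scale_max - scale_min + 1
    if null == "eu":
        # Expected uniform ("random responding") variance.
        null_var = (n_options ** 2 - 1) / 12.0
    else:
        # Maximum dissensus: raters split between the two endpoints.
        null_var = (scale_max - scale_min) ** 2 / 4.0
    return 1.0 - mean_var / null_var

# Perfect consensus -> 1.0; endpoint split -> 0.0 under the MV null.
print(rwg_j_star([[2, 2], [2, 2]], -3, 3))   # 1.0
print(rwg_j_star([[-3], [3]], -3, 3))        # 0.0
```

    Because the result is a linear rescaling of the average variance, it never displays the irregular truncation behavior noted above.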

    TMM Accuracy

    TMM accuracy is assessed by comparing a team's mental model to what is believed to be the most accurate (i.e., "most correct") TMM, as defined by a set of SMEs (Marks et al., in press). Researchers have argued that both TMM similarity and TMM accuracy are important predictors of team effectiveness (e.g., Marks et al., in press; Mathieu et al., 2000). However, TMM accuracy is a distinct construct from TMM similarity in that a team can share accurate or inaccurate information. Marks et al. (in press) found that TMM accuracy and similarity uniquely and positively related to both team coordination and team performance. However, most research to date has examined TMM accuracy in settings in which accuracy is relatively easy to define (e.g., in laboratory settings). Research has yet to examine TMM accuracy in action teams performing in a field environment.

    The Present Research

    To demonstrate the utility of the proposed TMM measurement methodology, we

    examined the ability of TMM similarity and accuracy to predict subsequent team per-

    formance. Based on our review of the literature and previous empirical studies, we

    expected that TMM similarity and accuracy would uniquely and positively predict


    team performance. With regard to TMM similarity, we further expected that interrater consensus would yield a stronger relationship with performance than would interrater consistency. This is because agreeing on one set of strategic actions (as measured by interrater agreement) is more important for interdependent team performance than simply having correlated strategic knowledge and expectations (as measured by interrater reliability).

    Method

    Participants and Setting

    Participants were 147 members (mean age = 28 years, 79% male) of 24 community league basketball teams. Of these, 19 were all-male teams and 5 were all-female teams. Team size ranged from 5 to 12 members. The number of team members who provided complete TMM data per team ranged from 2 to 11 members, with a mean of 5.71 and a standard deviation of 2.51. The number of raters per team varied because (a) not all teams had an equal number of members, and (b) not all members of participating teams chose to complete the survey. In exchange for their participation, participants were provided with discount coupons for a sporting goods store, soft drinks, and a chance to win one of two $25 gift certificates. The 24 teams who participated in this study represent 55% of the total number of teams who played in the community league. The teams played one game per week during the course of the season; male teams played 14 games, whereas female teams played 10 games.1

    Development of the TMM Questionnaire

    As described earlier, the TMM questionnaire was developed with extensive help from SMEs. First, together with one SME (an intercollegiate basketball player with some community league experience), we composed four specific scenarios that reflect critical incident situations. Each of the four scenarios described a situation in which the team is losing its championship game, but the four scenarios varied in (a) the amount of time remaining (2 minutes or a full half) and (b) the number of points down (7 or 25). These scenarios were chosen to represent varying degrees of a losing situation requiring different team strategies, with the objective of differentiating between effective and ineffective teams.

    Next, we asked a set of four SMEs (all tenured high school coaches with at least 15 years' experience) to generate 10 team-level actions that would help a team's success and 10 team-level actions that would harm a team's success in each of the four scenarios. Finally, with the assistance of the first SME, we reduced the number of SMEs' responses to 17 actions based on response redundancy (stated by two or more SMEs) and applicability of the actions to the study's setting (community league basketball). This list of actions was then reviewed by one additional SME (who did not participate in the first exercise). The final list consisted of 17 actions that could capture effective, ineffective, and neutral strategies (depending on the scenario). The same 17 actions were used for all four scenarios.

    By using multiple scenarios in which the same 17 actions could be demonstrated,

    both TMM similarity and TMM accuracy across scenarios could be assessed. An


    example of one of the four scenarios from the TMM questionnaire is provided in the appendix. Participants were asked to indicate how effective each action would be given the situation described, with responses ranging from -3 (this action would severely hurt our team's success) to +3 (this action is vital for our team's success). The TMM questionnaire was administered to a set of four SMEs (four different, tenured high school basketball coaches) whose responses were used as a basis for our accuracy measure. The average interrater agreement obtained for the SMEs' ratings across the four scenarios was very high (r*wg(j) = .81, range = .72 to .89).

    In sum, the new TMM measurement method involved four steps: (a) generating multiple critical-incident-based team scenarios with the assistance of SMEs, (b) asking a different set of SMEs to generate behaviors that vary in effectiveness across the team scenarios, (c) reducing the number of behaviors to a reasonable amount with the help of SMEs, and (d) administering the TMM questionnaire to a different set of SMEs to establish a TMM accuracy standard. Note that although we chose team-level behaviors, behaviors at different levels of analysis (e.g., individual level) can also be chosen. The level of analysis chosen should match the level of analysis in which the theory and hypotheses of interest reside (e.g., homogeneous, independent, or heterogeneous) (see Klein, Dansereau, & Hall, 1994).

    Measures

    TMM similarity. TMM similarity was assessed using two different indices: coefficient alpha and r*wg(j).2 The two similarity measures were calculated for each scenario and then averaged across scenarios. To calculate coefficient alphas, we transposed the raw data set so that raters (i.e., team members) were treated as items and actions were treated as cases. We calculated a separate alpha for each scenario for each team. We then averaged the four scenario alphas to generate a single score for each team. Higher levels of alpha indicate higher levels of within-team interrater consistency.

    The agreement measure, r*wg(j), was calculated using the formula provided by Lindell et al. (1999, Equation 5). Lindell et al. (1999) demonstrated that r*wg(j) values based on σ²EU are perfectly related to the r*wg(j) values based on σ²MV. However, when employing σ²MV, r*wg(j) values can range from 0 (absolute dissensus) to 1.0 (absolute consensus), whereas when employing the σ²EU distribution, the maximum r*wg(j) value is also 1.0, but the lowest value is less than 0 and depends on the number of response options. Therefore, to be more consistent with the range of possible values of coefficient alpha, we used σ²MV when calculating r*wg(j) values. Higher levels of r*wg(j) indicate higher levels of interrater agreement within teams.
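    The transposed-alpha computation can be sketched as follows (a minimal illustration; the function name and data are hypothetical). Raters are treated as "items" and the rated actions as "cases," so alpha is high whenever team members' rating profiles covary, even if their absolute levels differ:

```python
import statistics

def alpha_raters_as_items(ratings):
    """Cronbach's alpha on the transposed data: each rater is an
    'item', each rated action is a 'case'.

    ratings: one inner list per rater, each of length n_actions.
    """
    k = len(ratings)  # number of raters ("items" after transposing)
    # Variance of each rater's ratings across the actions.
    rater_vars = [statistics.pvariance(r) for r in ratings]
    # Variance of the per-action totals summed over raters.
    totals = [sum(action) for action in zip(*ratings)]
    return (k / (k - 1)) * (1.0 - sum(rater_vars) / statistics.pvariance(totals))

# Two raters with identical patterns shifted by three scale points:
# perfectly consistent, so alpha = 1.0 despite low absolute agreement.
print(alpha_raters_as_items([[2, 3, 4, 4], [5, 6, 7, 7]]))  # 1.0
```

    In the full procedure one such alpha would be computed per scenario per team and the four values averaged, as described above.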

TMM accuracy. TMM accuracy was assessed using one statistical index. We first calculated individual TMM accuracy scores for each item within each scenario by calculating the absolute difference between each team member's response and the average response value obtained by the SMEs. TMM accuracy levels were then calculated by averaging the individual team member mental model accuracy scores across the 17 actions and then across the four scenarios. The TMM accuracy index ranges from 0 (completely accurate TMMs) to 6 (completely inaccurate TMMs). Note that the average r*wg(j) across teams and scenarios was .73, providing sufficient statistical rationale for aggregating individual responses to the team level (James et al., 1984; Kozlowski & Hattrup, 1992).
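The accuracy computation reduces to a mean absolute deviation from the SME profile. A minimal sketch with hypothetical ratings (three actions rather than 17, for brevity):

```python
# Sketch of the TMM accuracy index: each member's mean absolute deviation
# from the SME average response, then averaged across members (and, in the
# study, across the four scenarios). 0 = perfectly accurate, 6 = maximally
# inaccurate on the -3..+3 scale. All numbers below are hypothetical.

def tmm_accuracy(member_ratings, sme_means):
    """member_ratings: per-member rating lists; sme_means: SME average per action."""
    per_member = []
    for member in member_ratings:
        devs = [abs(r - s) for r, s in zip(member, sme_means)]
        per_member.append(sum(devs) / len(devs))
    return sum(per_member) / len(per_member)

sme = [3.0, -2.0, 0.5]                   # SME average response per action
team = [[3, -2, 1], [2, -1, 0]]          # two members' ratings
score = tmm_accuracy(team, sme)          # lower score = more accurate TMM
```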

    314 ORGANIZATIONAL RESEARCH METHODS

    2000 SAGE Publications. All rights reserved. Not for commercial use or unauthorized distribution.at PENNSYLVANIA STATE UNIV on April 10, 2008http://orm.sagepub.comDownloaded from


Team performance. Team performance was measured using two indices: season winning percentage and the ratio of points scored by the team to points scored against the team in the season. These two indices were standardized and combined into a single performance measure (standardized coefficient alpha = .93).
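The composite can be formed by z-scoring each index across teams and then combining the standardized scores. A sketch with hypothetical season records (simple averaging is assumed for the combination):

```python
# Sketch of the composite performance score: standardize each index across
# teams, then combine the two z-scores. Season records are hypothetical.

def zscores(xs):
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
    return [(x - m) / sd for x in xs]

win_pct = [0.80, 0.50, 0.20]     # season winning percentage per team
pt_ratio = [1.30, 1.00, 0.75]    # points scored / points allowed per team
composite = [(a + b) / 2 for a, b in zip(zscores(win_pct), zscores(pt_ratio))]
```

Because each z-scored index has mean zero across teams, the composite is centered near zero, matching the Performance row of Table 1 (M = 0.02).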

    Procedure

We first announced the study to team captains during a preseason league organizational meeting. Team captains were told that the study examined the relationship between various team characteristics and team performance. Interested team captains provided their team name and contact information. The league allocated a single 1-hour practice for each team approximately 1 week before the first game of the season, during which we arranged to administer the TMM questionnaire. The questionnaire was completed either immediately before or after the practice in the practice area and took approximately 15 minutes to complete. At the end of the season, the management of the community league provided us with team performance information.

    Results

Table 1 provides descriptive statistics and intercorrelations for variables in this study. Average team-level interscenario correlations for the alpha and r*wg(j) measures were quite modest, suggesting substantial variability in responses across the four scenarios. Examination of the intercorrelation between the consistency measure and the consensus measure shows a correlation of .33. Examination of the relationship between the similarity and accuracy indices shows that coefficient alpha correlated significantly with the accuracy measure (r = .45, p < .05), whereas r*wg(j) did not.

Results indicated that the interrater agreement index, r*wg(j), yielded a marginally significant relationship with performance (r = .36, p < .10), whereas coefficient alpha showed no relationship with performance. Contrary to our expectations, TMM accuracy did not correlate significantly with performance. Thus, the consensus measure, r*wg(j), was found to be the TMM index most related to team performance. Hierarchical regression analysis, depicted in Table 2, further revealed that neither accuracy nor consistency accounted for variance in performance, and when entered as a separate block, the TMM agreement index, r*wg(j), accounted for an additional 14% of the variance in team performance over and above the two other TMM indices.
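The hierarchical step amounts to comparing R² for nested regression models. A stdlib-only sketch with hypothetical team-level data (the OLS solver is our own illustration, not the study's software):

```python
# Sketch of the hierarchical step: Delta R^2 is the gain in R^2 when the
# agreement index enters after accuracy and consistency. All team-level
# data below are hypothetical, not the study's data.

def r_squared(predictors, y):
    """OLS R^2 with an intercept; predictors is a list of columns."""
    cols = [[1.0] * len(y)] + predictors
    k = len(cols)
    # Normal equations A b = c, solved by Gauss-Jordan elimination.
    A = [[sum(u * v for u, v in zip(cols[i], cols[j])) for j in range(k)]
         for i in range(k)]
    c = [sum(u * yi for u, yi in zip(cols[i], y)) for i in range(k)]
    for p in range(k):
        piv = A[p][p]
        for j in range(p, k):
            A[p][j] /= piv
        c[p] /= piv
        for r in range(k):
            if r != p:
                f = A[r][p]
                for j in range(p, k):
                    A[r][j] -= f * A[p][j]
                c[r] -= f * c[p]
    y_hat = [sum(b * col[i] for b, col in zip(c, cols)) for i in range(len(y))]
    y_bar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

accuracy    = [1.2, 1.5, 1.3, 1.6, 1.1, 1.4]        # hypothetical team scores
consistency = [0.90, 0.80, 0.85, 0.70, 0.95, 0.80]
agreement   = [0.80, 0.60, 0.75, 0.55, 0.85, 0.70]
performance = [0.9, -0.5, 0.4, -0.8, 1.1, -0.2]

r2_step1 = r_squared([accuracy, consistency], performance)
r2_step2 = r_squared([accuracy, consistency, agreement], performance)
delta_r2 = r2_step2 - r2_step1   # variance explained by agreement alone
```

Because the models are nested, delta_r2 is never negative; its significance is what the step-2 F test in Table 2 evaluates.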

    Discussion

Limited research has been devoted to developing TMM measures and to testing their utility in field settings (Klimoski & Mohammed, 1994; Kraiger & Wenzel, 1997; Mohammed et al., 2000). This study specifically examined the utility of a new strategic TMM measurement technique using action teams performing in a real environment. Comparison of two TMM similarity indices and one TMM accuracy index indicated that TMM similarity, as measured by the agreement index, r*wg(j), was most predictive of team performance.

    Webber et al. / TEAM MENTAL MODEL MEASUREMENT 315


    Implications and Contributions

These results have important implications and contributions for TMM measurement, research, and practice. First, the results showed that the new strategic TMM measurement method could be used as a predictor of team performance in a field setting. Although the relationship with performance was marginally significant, the goal of predicting performance over the course of a full season was quite lofty. Furthermore, results indicated that having shared (as measured by an agreement index), rather than similar or consistent (as measured by a reliability index), strategic knowledge resulted in better team performance. Thus, by examining interrater agreement and reliability indices as measures of TMMs, we were able to specify the meaning of the term similarity in this research study. It is important to note, however, that the extent to which team members need to share similar information may vary as a function of the type of information examined. Although we found that teams need to share similar team-level strategic information, they may need to retain different declarative and procedural information pertaining to their individual roles in the team.

Although we have used a context-specific critical situation for the TMM measure in this research, the TMM measurement technique presented in this study can be used to develop TMM measures for more context-generic settings. For instance, following the steps described in the Method section, a generic teamwork TMM measure can be


Table 1
Descriptive Statistics and Intercorrelations for Variables in Study

Variable           M      SD     1      2      3      4
1. Alpha          .85    .09    .36a
2. r*wg(j)        .73    .08    .34    .31a
3. Accuracyb     1.36   0.18    .45*   .06    .30a
4. Performance   0.02   0.97    .03    .36†   .04    .93c

Note. n = 24 for all correlations.
a. Average team-level interscenario correlation.
b. Lower numbers indicate higher levels of accuracy.
c. Team-level coefficient alpha.
*p < .05. †p < .10.

Table 2
Regression Analyses of Team Performance on Team Mental Model (TMM) Indices (n = 24)

Step/Predictor          β      ΔR²    Total R²     F
1. TMM accuracy        .07
   TMM consistencya    .06     .00      .00       0.05
2. TMM agreementb      .40     .14      .14       3.15†

a. Interrater consistency (alpha) index.
b. Interrater agreement, r*wg(j), index.
†p < .10.


designed by using teamwork experts. A similar approach was recently taken by Eby et al. (1999) to measure context- and team-generic TMMs, teamwork expectations. This research goes beyond the work of Eby et al., however, by examining reliability, accuracy, and agreement. The practical utility of such TMM measures depends on several factors, including the cost of using SMEs and the extent to which the measure can be reused. The more context- and team-generic such TMM measures are, the broader their application. Furthermore, this measurement technique can also be used to assess more scenarios to capture the full domain of situations that a team might face. Thus, this TMM measurement methodology is flexible and can be used in different settings for different purposes.

Unlike results obtained in laboratory settings (Marks et al., in press; Mathieu et al., 2000), we found that TMM accuracy did not significantly relate to team performance. These results suggest that when discussing strategic TMMs, there may not be one accurate representation. Rather, accuracy is based on team-specific knowledge, which may vary across teams. For instance, if the teams studied in our research had formal coaches, comparing the accuracy of team members to their coach might have yielded different results. A related possibility is that the SMEs whose ratings were used to assess TMM accuracy shared a different frame of reference than the one shared by most of the teams studied in this research. However, TMM accuracy scores were quite high, suggesting that such an explanation is unlikely. It is also possible that TMM accuracy is more precisely assessed in laboratory settings than in field settings. Specifically, strategies for effective team performance may be easier to quantify in laboratory settings, in which experimenters control such factors as environmental demands, team design, and task structure. Another explanation for the lack of relationship between TMM accuracy and team performance may be the restricted variance on the accuracy measure (SD = 0.18). Clearly, more research is needed to better understand the nature and utility of both TMM similarity and TMM accuracy.

    Limitations and Directions for Future Research

Several limitations to this research need to be addressed. First, we did not compare the utility of strategic TMMs to other forms of TMMs (e.g., declarative and procedural TMMs). In addition, our TMM measure focused on TMM content and not on TMM structure (Mohammed et al., 2000). Kraiger and Wenzel (1997) encouraged researchers to examine multiple TMM constructs, but they also argued that "different [TMM] measures may be valuable for different applications" (p. 80). Although we believe that our strategic TMM measure is valuable for the setting of this study, a more comprehensive comparison of multiple forms of TMMs is needed before concluding which TMM measure is of greater importance in which team setting. Nevertheless, we presented a new measurement technique for assessing strategic TMMs, which may allow for a more accurate comparison between different TMM forms. Moreover, future applications of this new TMM measurement technique could assess different aspects of TMMs not examined in this study.

Second, our application of the measurement technique attempted to assess strategic TMMs. However, our specific measure could be improved by incorporating the sequence of behaviors that a team might take and by assessing the full domain of situations that a team might face. Further use of this technique across different teams should consider expanding the technique to assess the complete domain of TMMs.



Several methodological limitations also deserve attention. We examined the new TMM measurement technique in an athletic setting, which may limit the generalizability of our findings to organizational settings. Athletic teams have limited interaction (1 game per week for a total of 10 to 14 games), gender homogeneity, and secondary membership (members have other main responsibilities). However, these teams do share some characteristics (e.g., task interdependence, longevity) with action teams in other organizational settings, such as military and surgical teams (Cannon-Bowers et al., 1995). Therefore, results from this study may generalize to other settings in which action teams function. Furthermore, we were unable to obtain complete participation by all team members for each team in this research, and we had a small number of teams participating in the research. One limitation of conducting research in a field setting is the difficulty of capturing all members of each team and capturing a large number of teams. Not having all team members complete the measure poses a potential threat to the validity of our findings. Future team research needs to consider the importance of obtaining all team members' perceptions.

Future research should examine the applicability of the strategic TMM technique developed in this study in other contexts using different types of teams (e.g., top-management teams, aviation crews) and different levels of context-specific scenarios. Furthermore, although this research focused only on TMM content, it is possible that statistical methodologies such as multidimensional scaling would allow researchers to use this TMM measure to assess TMM structure. Although such use of our TMM methodology was beyond the scope of this article, future research should examine this possibility.

Researchers should also evaluate the unique contribution of strategic TMMs to team effectiveness relative to other TMM aspects and types (Kraiger & Wenzel, 1997) and relative to other team processes such as team efficacy, team cohesion, and team goals (e.g., Durham, Knight, & Locke, 1997). Furthermore, research should examine the mediating role of team processes between TMMs and performance, as such relationships have been uncovered in recent TMM research (see Mathieu et al., 2000). Research should also examine the importance of having strategic TMMs that are adaptive in environments of different levels of complexity (e.g., Marks et al., in press). To score high on the strategic TMM similarity measure used in this study, members had to not only share strategic information pertaining to a single team scenario but also share shifts in TMM ratings across different team scenarios. Thus, our study demonstrated the importance of considering the adaptability of team members' mental models as a component of a strategic TMM. Based on this study and on findings by Marks et al. (in press), it appears that shared strategic TMMs are especially crucial for effective team performance in adverse situations, in which teams need to adjust their behavior to match new situational demands. Our research captured some elements of adaptability by examining a sample of scenarios in which sharing strategic TMMs across scenarios is important. However, future research in this area should specifically examine adaptability with regard to (a) the change of TMMs under different environmental conditions, (b) the appropriateness of the change (i.e., was it an accurate change), and (c) the sharedness of the change among team members.

Finally, research should focus on how TMMs develop over time. Such research should determine the malleability of various types of TMMs and what factors contribute to TMM development. Marks et al. (in press) found that team performance, effective leader briefings, and team interaction training positively influenced changes in TMM similarity and accuracy. Future research should examine whether these findings generalize to other settings and whether other factors (e.g., familiarity with team members, relevant experience of team members) that were not captured in this research also influence the development of various forms of TMMs.

In conclusion, this study provides team researchers and practitioners with a new measurement technique for assessing strategic TMMs. Although initial findings with regard to the utility of this method are promising, more research is needed to evaluate the generalizability and applicability of this technique to other team settings. We hope this research stimulates additional TMM research and allows for more accurate and comprehensive assessment of TMM constructs.

    APPENDIX

    Strategic Team Mental Model Measure

It is HALF TIME in the championship game of the basketball league. Your team is down by TWENTY-FIVE points. If you win this game, your team is champion of the league. If not, then the team you are playing is champion.

Using the scale below, your task is to indicate the effectiveness of each of the following potential actions given the situation described above. Please give a rating for each action.

−3 −2 −1 0 +1 +2 +3
−3 = This action would severely hurt our team's success
 0 = This action would neither help nor hurt our team's success
+3 = This action is vital for our team's success

1) Quit. _____
2) Focus on fundamentals. _____
3) Pick a fight. _____
4) Complain to the referee. _____
5) Start looking forward to next week. _____
6) Pray the other team starts making mistakes. _____
7) Foul to stop the clock. _____
8) Start pressing if you aren't already. _____
9) Start playing more aggressively. _____
10) Think about personal performance. _____
11) Increase intensity level. _____
12) Make no changes. _____
13) Complain to teammates about their performance. _____
14) Try to get as many 3 pt. shots as possible. _____
15) Maintain confidence. _____
16) Really stress defense. _____
17) Put subs in. _____

    Notes

1. Male and female teams did not differ on any of the variables used in this study. In addition, gender did not affect the correlations reported in this study. Therefore, it was appropriate to combine male and female teams in our analyses.



2. Because all k raters (i.e., team members) rated all j stimuli (i.e., actions), the coefficient alpha measure was identical to an intraclass correlation coefficient, ICC(C, k) (Shrout & Fleiss, 1979).
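The equivalence in note 2 can be checked numerically. The sketch below (with hypothetical ratings) compares coefficient alpha against ICC(C, k) computed from two-way ANOVA mean squares; both functions are our own illustrations:

```python
# Numerical check of note 2: with a fully crossed raters-by-actions design,
# coefficient alpha equals the consistency-type ICC(C, k) obtained from the
# two-way ANOVA mean squares. Ratings below are hypothetical.

def cronbach_alpha(columns):
    k, n = len(columns), len(columns[0])
    def var(xs):                          # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(col[i] for col in columns) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(c) for c in columns) / var(totals))

def icc_c_k(columns):
    k, n = len(columns), len(columns[0])
    grand = sum(sum(c) for c in columns) / (n * k)
    rows = [sum(col[i] for col in columns) / k for i in range(n)]  # action means
    cols = [sum(c) / n for c in columns]                           # rater means
    ss_tot = sum((x - grand) ** 2 for c in columns for x in c)
    ss_rows = k * sum((r - grand) ** 2 for r in rows)
    ss_cols = n * sum((c - grand) ** 2 for c in cols)
    ms_r = ss_rows / (n - 1)
    ms_e = (ss_tot - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / ms_r

ratings = [[-3, 2, 0, 3, -1],            # one list per rater (team member)
           [-2, 3, 0, 3, -1],
           [-3, 2, 1, 2, -2]]
```

The two indices agree to floating-point precision for any fully crossed data set without missing ratings.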

    References

Athens, M. (1982, September). The expert team of experts approach to command and control (C2) organizations. IEEE Control Systems Magazine, 2, 30-38.

Brannick, M. T., Salas, E., & Prince, C. (1997). Team performance assessment and measurement: Theory, methods, and application. Mahwah, NJ: Lawrence Erlbaum.

Cannon-Bowers, J. A., Salas, E., & Converse, S. (1993). Shared mental models in expert team decision making. In J. Castellan, Jr. (Ed.), Current issues in individual and group decision making (pp. 221-246). Hillsdale, NJ: Lawrence Erlbaum.

Cannon-Bowers, J. A., Tannenbaum, S. I., Salas, E., & Volpe, C. E. (1995). Defining competencies and establishing team training requirements. In R. A. Guzzo & E. Salas (Eds.), Team effectiveness and decision making in organizations (pp. 333-380). San Francisco: Jossey-Bass.

Converse, S. A., & Kahler, S. E. (1992, August). Shared mental models, team performance, and knowledge acquisition. Paper presented at the 100th annual meeting of the American Psychological Association, Washington, DC.

Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and application. Journal of Applied Psychology, 78, 98-104.

Durham, C. C., Knight, D., & Locke, E. A. (1997). Effects of leader role, team-set goal difficulty, and tactics on team effectiveness. Organizational Behavior and Human Decision Processes, 72(2), 203-231.

Eby, L. T., Meade, A. W., Parisi, A. G., & Douthitt, S. S. (1999). The development of an individual-level teamwork expectations measure and the application of a within-group agreement statistic to assess shared expectations for teamwork. Organizational Research Methods, 2, 366-394.

Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin, 51, 327-358.

Fleenor, J. W., Fleenor, J. B., & Grossnickle, W. F. (1996). Interrater reliability and agreement of performance ratings: A methodological comparison. Journal of Business and Psychology, 19(3), 367-380.

Heffner, T. S., Mathieu, J. E., & Goodwin, G. F. (1995, April). An empirical investigation of mental models in team performance. Paper presented at the 10th annual meeting of the Society for Industrial and Organizational Psychology, Orlando, FL.

James, L. R., Demaree, R. G., & Wolf, G. (1984). Estimating within-group interrater reliability with and without response bias. Journal of Applied Psychology, 69(1), 85-98.

Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press.

Klein, K. L., Dansereau, F., & Hall, R. J. (1994). Levels issues in theory development, data collection, and analysis. Academy of Management Review, 19(2), 195-229.

Kleinman, D. L., & Serfaty, D. (1989). Team performance assessment in distributed decision making. In R. Gilson, J. P. Kincaid, & B. Goldiez (Eds.), Proceedings of the interactive networked simulation for training conferences (pp. 22-27). Orlando, FL: Naval Training Systems Center.

Klimoski, R. J., & Mohammed, S. (1994). Team mental model: Construct or metaphor? Journal of Management, 20(2), 403-437.

Kozlowski, S.W.J., & Hattrup, K. (1992). A disagreement about within-group agreement: Disentangling issues of consistency versus consensus. Journal of Applied Psychology, 77(2), 161-167.

Kraiger, K., Salas, E., & Cannon-Bowers, J. A. (1995). Measuring knowledge organization as a method for assessing learning during training. Human Factors, 37(4), 781-816.

Kraiger, K., & Wenzel, L. H. (1997). Conceptual development and empirical evaluation of measures of shared mental models as indicators of team effectiveness. In M. T. Brannick, E. Salas, & C. Prince (Eds.), Team performance assessment and measurement: Theory, methods, and application (pp. 63-84). Mahwah, NJ: Lawrence Erlbaum.

Lindell, M. K., & Brandt, C. J. (1997). Measuring interrater agreement for ratings of a single target. Applied Psychological Measurement, 21, 271-278.

Lindell, M. K., & Brandt, C. J. (1999). Assessing interrater agreement on the job relevance of a test: A comparison of the CVI, T, rwg(j), and r*wg(j) indexes. Journal of Applied Psychology, 84, 640-647.

Lindell, M. K., Brandt, C. J., & Whitney, D. J. (1999). A revised index of interrater agreement for multi-item ratings of a single target. Applied Psychological Measurement, 23, 127-135.

Marks, M. A., & Zaccaro, S. J. (1997, August). Leader-team dynamics in hierarchical decision-making teams. Paper presented at the 57th annual meeting of the Academy of Management, Boston.

Marks, M. A., Zaccaro, S. J., & Mathieu, J. E. (in press). Performance implications of leader briefings and training for team adaptation to novel environments. Journal of Applied Psychology.

Mathieu, J. E., Heffner, T. S., Goodwin, G. F., Salas, E., & Cannon-Bowers, J. A. (2000). The influence of shared mental models on team process and performance. Journal of Applied Psychology, 85, 273-283.

McGraw, K. O., & Wong, S. P. (1996). Forming inferences about some intraclass correlation coefficients. Psychological Methods, 1(1), 30-43.

Minionis, D. (1994). Enhancing team performance in adverse conditions: The role of shared team mental models and team training on an interdependent task. Unpublished doctoral dissertation, George Mason University, Fairfax, VA.

Mohammed, S., Klimoski, R. J., & Rentsch, J. R. (2000). The measurement of team mental models: We have no shared schema. Organizational Research Methods, 3, 123-165.

Rouse, W. B., & Morris, N. M. (1986). On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin, 100, 349-363.

Schneider, B., & Schmitt, N. (1986). Staffing organizations. Prospect Heights, IL: Waveland.

Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86, 420-428.

Zaccaro, S. J., Shlechter, T. M., & Burke, C. S. (2000). Toward an understanding of the team mental models of experienced armor platoons. Unpublished manuscript.

Sheila Simsarian Webber is an assistant professor in the Department of Management at Concordia University. She received her doctoral degree in industrial and organizational psychology from George Mason University. Her research interests include team composition and effectiveness, trust, project leadership, and client-service coproduction.

Gilad Chen is a doctoral candidate in industrial and organizational psychology at George Mason University. He received his master's degree in industrial and organizational psychology from George Mason University. His research interests focus on integrating individual differences in motivation with leadership and team processes.

Stephanie C. Payne is an assistant professor in the Department of Psychology at Texas A&M University. She received her doctoral degree in industrial and organizational psychology from George Mason University. Her research interests include individual differences, training, and efficiency in the workplace.

Sean M. Marsh is a doctoral student in industrial and organizational psychology at George Mason University. He received his master's degree in industrial and organizational psychology from George Mason University. His research interests include decision making in the workplace, leadership, and organizational strategy and policy making.


Stephen J. Zaccaro is an associate professor in the Department of Psychology at George Mason University. He received his doctoral degree in social psychology from the University of Connecticut. His research interests include team and leadership effectiveness, as well as leadership training and development.
