ABSTRACT
This study investigated the factorial validity of the 33-item self-rated Emotional Intelligence Scale (EIS: Schutte et al., 1998) for use with athletes. In Stage 1, the content validity of the EIS was assessed by a panel of experts (n = 9). Items were evaluated in terms of whether they assessed EI related to oneself and EI focused on others. Content validity further examined items in terms of awareness, regulation, and utilization of emotions. Content validity results indicated that the items describe six factors: appraisal of own emotions, regulation of own emotions, utilization of own emotions, optimism, social skills, and appraisal of others' emotions. Results also highlighted 13 items that make no direct reference to emotional experiences, and therefore it is questionable whether such items should be retained. Stage 2 tested two competing models: a single-factor model, which is the typical way researchers use the EIS, and the 6-factor model identified in Stage 1. Confirmatory factor analysis (CFA) results on EIS data from 1,681 athletes demonstrated unacceptable fit indices for the 33-item single-factor model and acceptable fit indices for the 6-factor model. Data were re-analyzed after removing the 13 items lacking emotional content, and CFA results indicated partial support for a single-factor model and further support for a five-factor model (optimism was discarded as a factor because item removal left it with a single item). Despite encouraging results for a reduced-item version of the EIS, we suggest further validation work is needed.
Key words: Mood, psychometric, regulation, construct validity, measurement.

Key Points
- Given the inherent link between construct measurement and theory testing, it is imperative for researchers to pay close attention to measurement issues. The present study investigated a self-report measure of emotional intelligence for use in sport.
- Results indicate that the 33-item single-factor model shows poor fit, whereas the 6-factor model shows acceptable fit indices.
- A revised 5-factor and 19-item model showed improved model fit.
- Despite encouraging results, we suggest further validation work is needed.
|
Emotional intelligence (EI) has emerged as a key concept among researchers and practitioners alike, and is the subject of growing interest in sport psychology (Meyer and Fletcher, 2007; Meyer and Zizzi, 2007; Thelwell et al., 2008; Zizzi et al., 2003). Further to this, meta-analysis results indicate positive relationships between EI and health-related variables (Schutte et al., 2007) and performance variables (Van Rooy and Viswesvaran, 2004). To date, only a few studies have examined EI in sport, but these early studies point to encouraging results. Zizzi et al. found that EI was associated significantly with sport performance, whereas Thelwell et al. found that EI related to perceptions of coaching effectiveness. Despite increased interest amongst sport psychologists in investigating the effects of EI, it is prudent to establish that measures of EI are valid for use in sport. Schutz, 1994 argued that demonstrating that existing measures are valid and reliable should be the first stage in the research process. To date, no published study has provided a comprehensive analysis of the validity of an EI measure for use in sport.

Emotional intelligence can be defined as "the ability to carry out accurate reasoning about emotions and the ability to use emotions and emotional knowledge to enhance thought" (Mayer et al., 2008, p. 111). Emotional intelligence can be assessed using either an objective performance-based measure (Mayer-Salovey-Caruso Emotional Intelligence Test [MSCEIT: Mayer et al., 1999]) or a subjective self-report measure (Emotional Intelligence Scale [EIS: Schutte et al., 1998]) (see Meyer and Zizzi, 2007 for a review). In a performance test, individuals are asked to answer questions for which there are correct answers. In self-report tests, individuals are asked to reflect on emotional experiences across different situations and report their subjective perceptions. These perceptions are indicative of an individual's predispositions or traits. Evidence suggests that performance and self-report measures show low to moderate correlations (Meyer and Zizzi, 2007), and that self-report measures of EI tend to correlate more strongly with personality than performance measures do (see Brackett and Mayer, 2003; Meyer and Fletcher, 2007). Additionally, performance tests of EI tend to predict objective measures of performance and cognitive ability better than self-report tests (Van Rooy and Viswesvaran, 2004).

As illustrated above and discussed in detail elsewhere (Meyer and Fletcher, 2007; Meyer and Zizzi, 2007; Petrides et al., 2007a; Petrides et al., 2007b), disagreement exists with regard to the most appropriate way to assess EI generally. Presupposing that EI can be adequately assessed using both self-report and performance tests, evidence showing weak correlations between the two suggests that they assess different aspects of the concept (Engelberg and Sjöberg, 2004; O'Connor and Little, 2003; Warwick and Nettelbeck, 2004). Decisions regarding the use of a performance or self-report measure should be informed by the relative contribution of each to the variables of interest (i.e., how strongly beliefs about EI relate to emotion vs. how strongly performance test scores relate to emotion). This approach is different from viewing one conceptualization/measure as inherently superior to the other.
With this in mind, it should be noted that self-report is the typical method of construct assessment in the sport and exercise psychology literature to date (see Vealey and Garner-Holman, 1998), and in the relatively limited sport and exercise psychology EI literature specifically (Thelwell et al., 2008; Zizzi et al., 2003). One commonly used measure of self-reported EI is the EIS (Austin et al., 2004; Schutte et al., 1998). The EIS is a 33-item measure designed to assess an individual's perceptions of the extent to which s/he can identify, understand, harness, and regulate emotions in self and others. Schutte et al. began with a set of 62 items derived from the model of Salovey and Mayer, 1990. Exploratory factor analysis on data from 346 participants yielded a four-factor model; the authors argued that removing 29 items and re-analyzing the data produced an adequate one-factor solution. Schutte et al. reported adequate internal consistency reliability (r = 0.87 to 0.90) and acceptable test-retest reliability (r = 0.78). In a subsequent study, Petrides and Furnham, 2000 identified four factors: optimism/mood regulation, appraisal of emotions, social skills, and utilization of emotions. Similarly, Saklofske et al., 2003 subjected the EIS to confirmatory factor analysis (CFA) and found moderate support, in terms of the strength of fit indices, for the four-factor model. With a view to reducing socially desirable responses by including a greater number of reverse-scored items, Besharat, 2007 found support for a 41-item four-factor model among a population of Iranian students. By contrast, Gignac et al., 2005 tested several competing models for the EIS, finding some support for a four-factor model describing appraisal of emotions in the self, appraisal of emotions in others, emotional regulation of the self, and utilization of emotions in problem solving. Gignac et al. argued that further validation work on the scale is needed if the intention of the scale is to assess the theoretical model proposed by Salovey and Mayer, 1990.

A central aspect of the nature of EI is that it is concerned with regulatory processes related to one's own emotions and the emotions of others (Gignac et al., 2005). A composite measure of EI cannot distinguish emotions related to the self from emotions related to others. It is also worth noting that multi-factorial models of the EIS, such as the four-dimensional model proposed by Petrides and Furnham, 2000, also do not distinguish EI in terms of self and others.

In summary, EI has been found to be a useful construct in general psychology, and this trend appears to be continuing with promising initial results from the relatively few studies published in sport and exercise psychology. No published study has investigated the factorial validity of either a performance test or a self-report measure of EI among athletic samples. Given the tradition of using self-report in sport and exercise psychology (Vealey and Garner-Holman, 1998), it is argued that this represents a worthwhile starting point. Therefore, the purpose of this study was to investigate the validity of the EIS. The research process involved two stages. In Stage 1, the aim was to investigate content validity by asking a panel of experts to assess the suitability of the EIS items as indicators of EI theory.
In Stage 2, the aim was to investigate the factorial validity of the EIS using CFA. Two hypothesized models (i.e., a single-factor model and a 6-factor model) were tested using CFA on a sample of athletes. The first model tested was the single-factor model, which has been used in the majority of published studies to date (see Schutte et al., 2007; Zizzi et al., 2003). The second model tested was a 6-factor model based on the work of Salovey and Mayer, 1990. The six-factor model assesses EI in self and others in terms of awareness, regulation, and utilization of emotions.
Study I
Content validity
Anastasi and Urbina, 1997 argued that construct validity should be considered at the initial stages of questionnaire development. This suggestion guided the examination of the validity of the EIS whereby items were placed into factors based on a qualitative assessment of the meaning of the item by a panel of experts.
Participants and procedure
Participants were nine sport and exercise psychology researchers (age range 27-41 years; male: n = 4, female: n = 5). Expert status was operationalized in terms of recruiting people who had published research in peer-refereed academic journals, or presented papers at academic conferences, on topics related to emotion, mood, EI, and/or psychological skills in sport and exercise settings. Participants indicated the extent to which each item assessed EI in terms of awareness, regulation, and utilization of emotions in self and others (see Salovey and Mayer, 1990; Schutte et al., 1998). Participants evaluated each item on the EIS independently and then discussed the meaning of items collectively.
Results and Discussion
Content validity results yielded the identification of six factors, with four factors describing aspects of EI related to oneself (i.e., appraisal of own emotions, regulation of own emotions, utilization of emotions, and optimism) and two factors describing aspects of EI in relation to others (i.e., regulation of others' emotions, a factor labeled social skills by Schutte et al., 1998, and appraisal of others' emotions). It should be noted that a balanced model, with three factors describing EI related to the self and three factors describing EI in others, could not be identified: it was not possible to identify items in the EIS that assess the ability to utilize the emotions in others. Items in the 6-factor model are listed in Table 1.

Content validity results also identified 13 items that lack a direct assessment of emotional experiences. Given the definition of EI presented previously (Mayer et al., 2008), each item should contain reference to emotional experiences. For example, the item "I find it hard to understand the non-verbal messages of other people" assesses perceived difficulties in interpreting non-verbal messages. It should be emphasized that this is not a direct assessment of an emotional reaction to finding it hard to understand non-verbal messages. It is possible that an individual might find non-verbal messages hard to understand, but that this difficulty might not activate an emotional response. The item might be better phrased as "I become tense (emotional content) in situations when I feel I need to understand the non-verbal messages of other people".

The strategy in the present study was to test both hypothesized models using all 33 items, and then to re-analyze the data after removing the 13 items identified above. The rationale for following this procedure was to facilitate comparisons with previous research that has investigated the validity of the 33-item EIS. The experts decided that the inclusion criterion for retaining an item in the re-analysis was that the item contain reference to emotions, moods, or feelings, or to a recognizable discrete emotion (anger, anxiety, etc.). Given the proposal that 13 items lack emotional content, it was hypothesized that a 20-item version of the EIS should demonstrate improved fit indices. However, as the content validity results left only a single item in the optimism factor, the re-analysis examined a 19-item, five-factor model.
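To make the retention rule concrete, the short Python sketch below applies the experts' inclusion criterion mechanically: an item is kept only if its wording mentions emotions, moods, feelings, or a recognizable discrete emotion. This is an illustrative sketch and not part of the original study; the keyword list is an assumption chosen for demonstration, and only the first example item is quoted from the text above.

```python
# Illustrative check of the Stage 1 retention criterion: does an item's wording
# refer to emotions, moods, feelings, or a discrete emotion? The keyword list
# is an assumption for demonstration, not the panel's actual coding scheme.
EMOTION_WORDS = {
    "emotion", "emotions", "emotional", "mood", "moods",
    "feel", "feels", "feeling", "feelings",
    "anger", "angry", "anxiety", "anxious", "tense", "happy", "sad",
}

def has_emotional_content(item_text: str) -> bool:
    """Return True if the item wording contains at least one emotion-related word."""
    words = {word.strip(".,;:?!\"'").lower() for word in item_text.split()}
    return bool(words & EMOTION_WORDS)

example_items = [
    "I find it hard to understand the non-verbal messages of other people",  # quoted above; flagged
    "I am aware of my emotions as I experience them",                         # emotion-focused; retained
]
for item in example_items:
    status = "retain" if has_emotional_content(item) else "flag for removal"
    print(f"{status}: {item}")
```

A rule like this is, of course, only a crude stand-in for the panel's qualitative judgement, but it makes explicit why the quoted non-verbal-messages item fails the criterion while an emotion-focused item passes.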
Study II
Test of factorial validity
Confirmatory factor analysis tested two models. The first was a single-factor, first-order model; research has typically summed EIS scores to produce a single score (Schutte et al., 2007; Van Rooy and Viswesvaran, 2004; Zizzi et al., 2003). The second model tested was the 6-factor model developed in Stage 1.
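To illustrate the two competing specifications, the sketch below writes them in lavaan-style syntax using the third-party semopy package for Python. This is not the authors' EQS setup: the item names and the two-items-per-factor assignment are placeholders (the actual item-to-factor assignment is given in Table 1), and semopy's default maximum likelihood estimator is assumed rather than the robust estimation described under Data analysis.

```python
# Hedged sketch of the two competing CFA specifications (not the authors' EQS code).
# Item names and item-to-factor assignments below are placeholders only.
import pandas as pd
import semopy

# Single first-order factor: every EIS item loads on one general EI factor.
SINGLE_FACTOR = """
EI =~ item1 + item2 + item3 + item4 + item5 + item6
"""

# Six correlated factors from Stage 1 (two illustrative items per factor).
SIX_FACTOR = """
AppraisalOwn    =~ item1 + item2
RegulationOwn   =~ item3 + item4
UtilizationOwn  =~ item5 + item6
Optimism        =~ item7 + item8
SocialSkills    =~ item9 + item10
AppraisalOthers =~ item11 + item12
"""

def fit_cfa(spec: str, data: pd.DataFrame) -> pd.DataFrame:
    """Fit a CFA specification and return the standard fit statistics."""
    model = semopy.Model(spec)       # factor covariances are typically estimated by default, as in lavaan
    model.fit(data)
    return semopy.calc_stats(model)  # chi-square, CFI, TLI, RMSEA, etc.

# Usage, assuming `df` holds one column per EIS item (item1, item2, ...):
# print(fit_cfa(SIX_FACTOR, df))
```

The point of the contrast is structural: the single-factor model forces every item to share one common source of variance, whereas the 6-factor model allows self- and other-focused appraisal, regulation, and utilization to load on distinct but correlated factors.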
Participants
Volunteer physical activity participants (n = 1,681) completed the 33-item EIS. Participants were university student-athletes (n = 1,072; age: M = 21.12, SD = 6.70 years), exercisers (n = 275; age: M = 22.23, SD = 9.23 years), runners (n = 80; age: M = 27.34, SD = 15.42 years), and judo players (n = 254; age: M = 34.62, SD = 15.10 years). Participants represented a heterogeneous sample of athletes who competed at levels ranging from elite/professional sport to recreational sport, as well as those for whom the primary goal was health and fitness.
Procedure
Following ethical approval from the institution of the first author, athletes were recruited via a number of different approaches (e.g., e-mail invitations, invitations in lectures, invitations on online learning modules). Student-athletes could complete either an online or a paper-and-pencil version of the EIS, and did so either before or after formal lectures; other participants (i.e., marathon runners, judo players, exercisers) completed the measure at their respective training sessions. It should be noted that Internet-based surveys have become a popular method of data collection in psychology, with evidence suggesting that online research is equivalent to traditional offline (i.e., paper-and-pencil) methods (Lonsdale et al., 2006).
Data analysis
Confirmatory factor analysis using EQS V6 (Bentler and Wu, 1995) was used to test the hypothesized models. As there was evidence of multivariate non-normality in the data, models were tested using the robust maximum likelihood method. This method has been found to effectively control for over-estimation of the χ² statistic, under-estimation of adjunct fit indices, and under-estimation of standard errors (see Hu and Bentler, 1995). The 6-factor measurement model for the EIS specified that each item related to its hypothesized factor, with the variance of each factor fixed at 1. Factors were allowed to inter-correlate freely. In terms of assessing model fit, long-standing debate continues on which fit indices are best to use. It is generally agreed (Hu and Bentler, 1995) that incremental fit indices should be greater than 0.90, with the root mean square error of approximation below 0.08. Hu and Bentler, 1999 indicated that incremental fit indices such as the CFI should be greater than 0.95, which is the criterion used in the present study.
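As a concrete illustration of the cut-off values described above, the sketch below computes the incremental indices (NFI, NNFI, CFI) and the RMSEA from the model and baseline chi-square values that an SEM program such as EQS reports. The formulas are the standard ones; the chi-square values in the usage line are hypothetical and are not taken from the present study.

```python
import math

def fit_indices(chi2_m: float, df_m: int,
                chi2_b: float, df_b: int, n: int) -> dict:
    """Standard fit indices computed from model (m) and baseline (b) chi-square values.

    chi2_m, df_m : chi-square and degrees of freedom of the tested model
    chi2_b, df_b : chi-square and degrees of freedom of the baseline (null) model
    n            : sample size
    """
    nfi = (chi2_b - chi2_m) / chi2_b
    nnfi = (chi2_b / df_b - chi2_m / df_m) / (chi2_b / df_b - 1)
    cfi = 1 - max(chi2_m - df_m, 0) / max(chi2_m - df_m, chi2_b - df_b, 1e-12)
    rmsea = math.sqrt(max(chi2_m - df_m, 0) / (df_m * (n - 1)))
    return {"NFI": round(nfi, 3), "NNFI": round(nnfi, 3),
            "CFI": round(cfi, 3), "RMSEA": round(rmsea, 3)}

# Hypothetical chi-square values for a 33-item single-factor model (df = 495)
# and its baseline model (df = 528) with n = 1,681; for illustration only.
print(fit_indices(chi2_m=1200.0, df_m=495, chi2_b=9000.0, df_b=528, n=1681))
# A model meets the criteria used here when the incremental indices exceed 0.95
# and the RMSEA falls below 0.08.
```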
Results and Discussion
Confirmatory factor analysis results for the 33-item single-factor model were: Normed Fit Index (NFI) = 0.82; Non-Normed Fit Index (NNFI) = 0.83; Comparative Fit Index (CFI) = 0.84; and Root Mean Square Error of Approximation (RMSEA) = 0.05. The incremental fit indices show poor fit, with all values lower than the 0.95 criterion suggested by Hu and Bentler, 1999; the RMSEA was acceptable. Seen collectively, the single-factor model fits the data poorly. By contrast, fit indices for the 6-factor model were: NFI = 0.92; NNFI = 0.95; CFI = 0.95; and RMSEA = 0.03; all values were acceptable other than the NFI, which was marginally below the 0.95 criterion. Factor loadings for the items in both models are contained in Table 1. As indicated in Table 1, factor loadings for the single-factor model ranged from 0.28 to 0.60, with a mean factor loading of 0.43 ± 0.07.

In terms of identifying trends to explain weakly loading items, the most discernible observation concerns reverse-scored items (i.e., items worded in the direction opposite to that of the other items). All three reverse-scored items ("I find it hard to understand the non-verbal messages people send", "it is difficult for me to understand why people feel the way they do", and "When I am faced with a challenge, I give up because I believe I will fail") demonstrate weak factor loadings in both the single- and six-factor models. Previous research has found evidence to suggest that reverse-scored items perform poorly in single-factor models (Woods, 2006). In studies testing multifactorial models, CFA tends to produce better-fitting results when all reverse-scored items are contained in the same factor and all items assess the construct in the same direction (Tomas and Amparo, 1999). It is proposed that participants might not read reverse-scored items correctly, and that poor factor loadings can be attributed to carelessness of respondents. In an examination of respondent carelessness, Woods showed that as few as 10% of careless respondents can result in the rejection of a good-fitting unidimensional scale. In the present study, two of the three reverse-scored items are clearly focused on assessing aspects of emotional control, and both have an equivalent positively worded item that conveys almost identical meaning with an acceptable factor loading. It is therefore plausible that the low factor loadings for two of the three reverse-scored items could be attributed to respondent carelessness, the effect of which is magnified by CFA. Previous research has included reverse-scored items as a strategy to improve validity (Besharat, 2007). However, it should be noted that reverse-scored items often perform poorly in athletic samples; for example, Lane et al., 1999 showed that reverse-scored items in the Competitive State Anxiety Inventory-2 (Martens et al., 1990) demonstrated weak factor loadings. It is possible that athletic samples magnify the limitations of reverse-scored items. Clearly, future research following the methodology adopted by Woods, 2006 using athletic samples is desirable (a minimal recoding sketch for reverse-scored items is given at the end of these results).

CFA procedures using the remaining 19 items were repeated for both the single-factor and 5-factor models (optimism discarded). CFA results for the single-factor model were: NFI = 0.89; NNFI = 0.90; CFI = 0.91; and RMSEA = 0.06; and for the five-factor model: NFI = 0.93; NNFI = 0.96; CFI = 0.96; and RMSEA = 0.04.
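Returning to the reverse-scored items discussed above, the sketch below shows the recoding step that reverse-worded items require before factor scores are summed. It assumes the usual 1-5 Likert response format; the column names standing in for the three reverse-worded items are illustrative labels, not the published item numbers.

```python
import pandas as pd

# Illustrative column names for the three reverse-worded EIS items (placeholders).
REVERSE_ITEMS = ["eis_rev_a", "eis_rev_b", "eis_rev_c"]

def reverse_score(responses: pd.DataFrame, items: list,
                  scale_min: int = 1, scale_max: int = 5) -> pd.DataFrame:
    """Recode reverse-worded items so that all items point in the same direction."""
    out = responses.copy()
    out[items] = (scale_min + scale_max) - out[items]
    return out

# Example: strongly disagreeing (1) with a reverse-worded item becomes a 5 after
# recoding, so higher scores consistently indicate higher self-rated EI.
sample = pd.DataFrame({"eis_rev_a": [1, 4], "eis_rev_b": [2, 5], "eis_rev_c": [3, 3]})
print(reverse_score(sample, REVERSE_ITEMS))
```

The recoding itself is trivial; the issue raised by Woods, 2006 is that careless respondents answer such items as if they were positively worded, which recoding cannot repair and which then depresses the items' factor loadings.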
In comparison with the 33-item version of the EIS, fit indices improved following the removal of 14 items for both the single-factor and the multi-factor models. Results for the single-factor model are acceptable for the RMSEA and marginally below the criterion values for acceptable fit proposed by Hu and Bentler, 1999 for the incremental indices (NFI, NNFI, and CFI). Results of the present study show that the factor structure of the EIS in sport differs from findings reported in samples from the general population (Austin et al., 2004; Besharat, 2007; Schutte et al., 1998). However, we argue that the primary explanation for these differences can be ascribed to methodological factors. It is proposed that the use of exploratory factor analysis to develop a factor structure, in the original EIS study and in subsequent validation studies, represents a limitation. Exploratory factor analysis is a data-driven approach (see Thompson and Daniel, 1996) in which a factor structure is produced from the data, rather than one in which the extent to which the data are consistent with a hypothesized model is tested. The decision to use exploratory techniques seems surprising, as the researchers were looking to test a theoretical framework rather than exploring the data with a view to producing new models. Although subsequent validation studies have used CFA, they tested models developed through exploratory procedures, and therefore retain the mathematically driven model.

How exploratory factor analysis can produce a theoretically unclear factor is exemplified by the optimism/mood regulation factor produced in the exploratory factor analysis results of Petrides and Furnham, 2000. Combining optimism and mood regulation into a single factor precludes examining the extent to which optimistic beliefs are associated with regulatory behaviors, a line of enquiry that should be salient among individuals with an unrealistic sense of optimism (Colvin et al., 1995). For example, an extreme optimist should see the positive aspects of situations, and therefore should not anticipate needing self-regulatory skills, and consequently may not develop such skills. However, over-optimism has been associated with self-enhancement, unrealistic perceptions of task difficulties, and exaggerated perceptions of control (Colvin et al., 1995). Following this logic, extreme optimists are unlikely to anticipate needing to manage intense emotions experienced before important competition. As events unfold, the true nature of task difficulties emerges, and if this results in failure to attain goals, then unpleasant emotions are likely to increase. By contrast, a pessimist might develop effective mood-management strategies as anticipatory coping efforts to manage potentially stressful situations; a pessimist might anticipate experiencing high anxiety and have developed strategies to manage these feelings. Whilst the assumptions suggested above are speculative, they indicate the difficulty of including items that might be assessing different constructs. It is worth remembering that researchers and practitioners will calculate factor scores by summing all items in a factor, and so it is imperative that the items in a factor assess the same underlying concept. As evidence suggests that high EI is associated with positive health and performance outcomes (Schutte et al., 2007; Van Rooy and Viswesvaran, 2004), high scores on each scale are assumed to be desirable.
With the above in mind, high scores on optimism might behave differently from high scores on mood regulation, and therefore optimism and mood regulation should be conceptualized independently. Taken together, the argument presented above suggests that the results of the current study could make an important contribution to research in EI by identifying a theoretically informed factor structure and removing items with limited emotional content. As indicated in a pertinent review by Mayer et al., 2008, EI is a popular research topic, with the emphasis being on relationships between the construct and behavior. Whilst such an approach is logical, it assumes that EI scales are valid and reliable. However, as indicated by the findings of the present study, the EIS contains items lacking an emotional focus. Although the EIS has been a frequently used measure of EI, previous research has identified similar limitations (Gignac et al., 2005). In agreement with the findings and suggestions made by Gignac et al., 2005, further validation work on the EIS is needed. We propose that further work should look to revise items so that they have a clear emotional focus and are developed specifically for the hypothesized factor to which they should belong.

Findings from the present study lend support to the value of conducting a content validity study and scrutinizing the intended meaning of items closely. Although it is a commonly held belief that content validity is a key part of the validation process, it typically takes second place to examining factorial validity. Jones et al., 2005 provide an example of a study that conducted a detailed analysis of content validity. Jones et al. developed content validity over several stages, using an iterative process in which experts (with knowledge of the underlying theory) and athletes (with knowledge of the emotions experienced in sport) contributed to the development of a measure grounded in a theoretical framework for assessing emotions in sport. Once the factor structure was developed, confirmatory procedures were used to test the hypothesized model. This approach is different from developing items to support a theoretical framework and then using exploratory procedures. Using this framework for developing and validating a scale for use in sport (Jones et al., 2005), we suggest that future research continue to revise items in the EIS by considering their intended meaning.
In conclusion, given the inherent link between theory testing and valid measurement, the present study sought to investigate the validity of the EIS for use in sport. Notwithstanding the debate on the nature of EI, and the extent to which the construct can be assessed through ability or self-report tests, findings from the present study suggest that researchers could use a 19-item version of the EIS to assess perceptions of (or self-reported) EI in athletes.
|
AUTHOR BIOGRAPHY

Andrew M. Lane
Employment: Professor in Sport and Exercise Psychology, School of Sport, Performing Arts and Leisure, University of Wolverhampton, UK.
Degree: BA, PGCE, MSc, PhD.
Research interests: Mood, emotion, measurement, coping, and performance.
E-mail: A.M.Lane2@wlv.ac.uk

Barbara B. Meyer
Employment: Professor and Chair, Department of Human Movement Sciences, College of Health Sciences, University of Wisconsin-Milwaukee.
Degree: BA, M.S., PhD.
Research interests: Applied sport psychology, emotional intelligence, sport injury, families in sport.
E-mail: bbmeyer@uwm.edu

Tracey J. Devonport
Employment: Senior Lecturer in Sport and Exercise Psychology, School of Sport, Performing Arts and Leisure, University of Wolverhampton, UK.
Degree: BSc, PGCE, MSc, Postgraduate Diploma in Psychology.
Research interests: Stress appraisal and coping, emotion, self-efficacy, imagery, and performance.
E-mail: T.Devonport@wlv.ac.uk

Kevin A. Davies
Employment: PhD Candidate, Research Centre for Sport, Exercise and Performance, University of Wolverhampton.
Degree: BSc (Hons), MSc.
Research interests: Stress appraisal and coping, emotion, hypnosis, measurement, and performance.
E-mail: kad@wlv.ac.uk

Richard Thelwell
Employment: Department of Sport and Exercise Science, University of Portsmouth.
Degree: BSc, PhD.
Research interests: Sport and exercise psychology.
E-mail: richard.thelwell@port.ac.uk

Gobinder S. Gill
Employment: MPhil Candidate, Research Centre for Sport, Exercise and Performance, University of Wolverhampton.
Degree:
Research interests:
E-mail:

Caren D. P. Diehl
Employment: PhD Candidate, Research Centre for Sport, Exercise and Performance, University of Wolverhampton.
Degree: BSc (Hons), MEd.
Research interests:
E-mail:

Mat Wilson
Employment: Senior Lecturer, University of Wolverhampton.
Degree:
Research interests:
E-mail: Mat.Wilson@wlv.ac.uk

Neil Weston
Employment: Department of Sport and Exercise Science, University of Portsmouth.
Degree: BSc, MSc, PhD.
Research interests: Sport and exercise psychology.
E-mail: neil.weston@port.ac.uk
REFERENCES
Anastasi A., Urbina S. (1997) Psychological testing. Upper Saddle River, N.J.. Prentice Hall.
|
Austin E.J., Saklofske D.H., Huang S.H.S., McKenney D. (2004) Measurement of trait emotional intelligence: Testing and cross-validating a modified version of Schutte et al.’s (1998) measure. Personality and Individual Differences 36, 555-562.
|
Bentler P.M., Wu E.J.C. (1995) EQS/Windows user’s guide. Encino, CA. Multivariate Software.
|
Besharat M.A. (2007) Psychometric properties of Farsi version of the Emotional Intelligence Scale-41 (FEIS-41). Personality and Individual Differences 43, 991-1000.
|
Brackett M.A., Mayer J.D. (2003) Convergent, discriminant, and incremental validity of competing measures of emotional intelligence. Personality and Social Psychology Bulletin 29, 1147-1158.
|
Colvin C.R., Block J., Funder D.C. (1995) Overly positive evaluations and personality: Negative implications for mental health. Journal of Personality and Social Psychology 68, 1152-1162.
|
Engelberg E., Sjöberg L. (2004) Emotional intelligence, affect intensity, and social adjustment. Personality and Individual Differences 37, 533-542.
|
Gignac G.E., Palmer B.R., Manocha R., Stough C. (2005) An examination of the factor structure of the Schutte self-report emotional intelligence (SSREI) scale via confirmatory factor analysis. Personality and Individual Differences 39, 1029-1042.
|
Hu L., Bentler P.M. (1995) Structural Equation Modeling: Concepts, issues, and applications. London. Sage.
|
Hu L., Bentler P.M. (1999) Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling 6, 1-55.
|
Jones M.V., Lane A.M., Bray S.R., Uphill M., Catlin J. (2005) Development of the Sport Emotions Questionnaire. Journal of Sport and Exercise Psychology 27, 407-431.
|
Lane A.M., Sewell D.F., Terry P.C., Bartram D., Nesti M.S. (1999) Confirmatory factor analysis of the Competitive State Anxiety Inventory-2. Journal of Sports Sciences 17, 505-512.
|
Lonsdale C., Hodge K., Rose E.A. (2006) Pixels vs. paper: comparing online and traditional survey methods in sport psychology. Journal of Sport and Exercise Psychology 28, 100-108.
|
Martens R., Vealey R. S., Burton D. (1990) Competitive anxiety in sport. Champaign, IL. Human Kinetics. 117-190.
|
Mayer J.D., Caruso D.R., Salovey P. (1999) Emotional intelligence meets standards for traditional intelligence. Intelligence 27, 267-298.
|
Mayer J.D., Roberts R.D., Barsade S.G. (2008) Human abilities: Emotional intelligence. Annual Review of Psychology 59, 507-536.
|
Meyer B.B., Fletcher T.B. (2007) Emotional Intelligence: A theoretical overview and implications for research and professional practice in sport psychology. Journal of Applied Sport Psychology 19, 1-15.
|
Meyer B.B., Zizzi S., Lane A.M. (2007) Mood and human performance: Conceptual, measurement, and applied issues. Hauppauge, NY. Nova Science.
|
O’Connor R.M., Little I.S. (2003) Revisiting the predictive validity of emotional intelligence: self-report versus ability-based measures. Personality and Individual Differences 35, 1893-1902.
|
Petrides K. V., Furnham A., Mavroveli S. (2007a) Emotional intelligence: Knowns and unknowns (Series in Affective Science). Oxford. Oxford University Press.
|
Petrides K. V., Pita R., Kokkinaki F. (2007b) The location of trait emotional intelligence in personality factor space. British Journal of Psychology 98, 273-289.
|
Petrides K.V., Furnham A. (2000) On the dimensional structure of emotional intelligence. Personality and Individual Differences 29, 313-320.
|
Saklofske D.H., Austin E.J., Minski P.S. (2003) Factor structure and validity of a trait emotional intelligence measure. Personality and Individual Differences 34, 707-721.
|
Salovey P., Mayer J.D. (1990) Emotional intelligence. Imagination, Cognition and Personality 9, 185-211.
|
Schutte N.S., Malouff J.M., Hall L.E., Haggerty D.J., Cooper J.T., Golden C.J., Dornheim L. (1998) Development and validation of a measure of emotional intelligence. Personality and Individual Differences 25, 167-177.
|
Schutte N.S., Malouff J.M., Thorsteinsson E.B., Bhullar N., Rooke S.E. (2007) A meta-analytic investigation of the relationship between emotional intelligence and health. Personality and Individual Differences 42, 921-933.
|
Schutz R.W., Serpa S., Alves J., Pataco V. (1994) International perspectives on sport and exercise psychology. Morgantown, WV. Fitness Information Technology.
|
Thelwell R., Lane A. M., Weston N.J.V., Greenlees I.A. (2008) Examining relationships between emotional intelligence and coaching efficacy. International Journal of Sport and Exercise Psychology 6, 224-235.
|
Thompson B., Daniel L.G. (1996) Factor analytic evidence for the construct validity of scores: A historical overview and some guidelines. Educational and Psychological Measurement 56, 197-208.
|
Tomas J.M., Amparo O. (1999) Rosenberg’s Self-Esteem scale: Two Factors or method effects. Structural Equation Modeling 6, 84-98.
|
Van Rooy D.L., Viswesvaran C. (2004) Emotional intelligence: A meta-analytic investigation of predictive validity and nomological net. Journal of Vocational Behavior 65, 71-95.
|
Vealey R.S., Garner-Holman M. (1998) Advances in sport and exercise psychology measurement. Morgantown, WV. Fitness Information Technology.
|
Warwick J., Nettelbeck T. (2004) Emotional intelligence is…?. Personality and Individual Differences 37, 1091-1100.
|
Woods C.M. (2006) Careless responding to reverse-worded items: implications for confirmatory factor analysis. Journal of Psychopathology and Behavioral Assessment 28, 186-191.
|
Zizzi S.J., Deaner H.R., Hirschhorn D.K. (2003) The relationship between emotional intelligence and performance among college baseball players. Journal of Applied Sport Psychology 15, 262-269.