SSCI - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: SSCI

1
SSCI (Chinese title and presenter name not recoverable from the source encoding)
December 29, 2006
2
Agenda
  • The conceptual model and methodology for two of my published papers
  • Reviewers' comments on each paper and my replies to reviewer concerns
    - sample, research design, procedure, etc.
  • Lessons learned
  • Final thoughts


3
The role of Methodology
  • Quality of methods and analysis has been viewed
    by journal editors as one of the most important
    characteristics of publishable journal articles
    (17 of 60 checklist items).
  • See Desrosiers et al. (2002). Writing research
    articles: Update on the article review checklist.
    In S. G. Rogelberg (Ed.), Handbook of research
    methods in industrial and organizational
    psychology (p. 460). Malden, MA: Blackwell.


4
Paper Reviewed 1
  • Tsai, W. C. (2001). Determinants and
    consequences of employee displayed positive
    emotions. Journal of Management, 27, 497-512.
  • Submission history: JOB → JOB (1st revision) →
    JAP → JOM (1st revision) → JOM (2nd revision) →
    JOM (accepted)


5
Theoretical Model
(Diagram) Psychological climate for service friendliness → Employee displayed
positive emotions → Customer reactions (purchase decision, willingness to
return, willingness to recommend)

  • Control variables: sex, clerk's experienced emotions
  • Control variables: sex, age, clerk's professional ability

6
(No Transcript)
7
Methods
Measures
8
Methods
Measures (continued)
9
Comments and Reply - 1
  • Sample - sample representativeness
  • If customers who had a negative experience with
    a clerk declined participation more frequently,
    could you have oversampled positive customer
    reactions?
  • Wouldn't customers and sales clerks in a worse
    mood be less likely to participate? I think this
    might only restrict the range of data in your
    sample.

10
Comments and Reply - 2
  • Research Design - control variable
  • You should also point out that there is a very
    great deal of variance in customer reactions
    not explained by your model. (JOB Editor)

11
Comments and Reply - 2
  • Research Design (JAP version)
  • ...there is still a great deal of variance in
    customer reactions which is not explained by the
    model in this study. Other potential predictors
    might be store atmosphere (e.g., music in the
    store, inside decoration), facilities (e.g.,
    space for rest, space for leisure), and the sales
    clerks' behaviors other than displayed emotions
    (Yoo, Park, & MacInnis, 1998).

12
Comments and Reply - 2
  • Research Design
  • ...several key variables that provide alternative
    explanations for your results are omitted... For
    instance, clerks' emotional displays might covary
    with price or customer characteristics... and no
    attempt to eliminate these alternatives.
  • (Comments from the JAP Editor)

13
Comments and Reply - 3
  • Research Design - direction of causation
  • Given the absence of an experimental or
    quasi-experimental design, no causal inferences
    can be drawn. Customer reactions may have caused
    displayed emotion or vice versa...
  • (Comments from the JAP Editor)
  • This was not an issue for the JOM reviewers.

14
Comments and Reply - 3
  • Research Design (JOM version)
  • ...because the data are cross-sectional, the
    direction of causality cannot be unambiguously
    determined. Nonetheless, some thoughts come to
    mind that may partially mitigate this concern:
    Because customers indicated their reactions
    toward the store AFTER they had encountered the
    employees' emotional displays, it is less
    plausible to argue that customer reaction is in
    fact a cause of employee displayed emotions.
    After all, causes have to be antecedent to their
    effects.

15
Comments and Reply - 4
  • Procedure
  • More description is needed of (a) how observers
    were trained, and (b) the actual process of
    observing the transactions. Were employees aware
    of the presence of the observers, for example?
    ...

16
Comments and Reply - 5
  • Measures - construct validity
  • I am not sure that your measure of professional
    ability... I encourage you to establish some
    argument for its construct validity.

17
Comments and Reply - 5
  • Measures
  • ...Alternatively, the sales clerks'
    professional ability could be rated by customers.
    However, this would lead to the problem of common
    method bias.
  • I am unable to provide evidence... given the kind
    of data I have.

18
Comments and Reply - 6
  • Measures - social desirability
  • Do you think social desirability is driving the
    responses of the sales clerks on the PANAS
    scales?

19
Comments and Reply - 6
  • Measures
  • I agree there was a possibility that sales
    clerks overstated their levels of positive
    affect. However, if social desirability were the
    major force, the variance of this variable
    should be relatively small. I checked the
    standard deviation of PA reported in
    Watson et al. (1988) and found that it was at a
    similar level (SD = 7.2, 5-point scale) to the
    value found in this study (SD = 6.4, 4-point
    scale). Thus... its effect should not be too
    serious.

20
Comments and Reply - 7
  • Measures (JOB initial version) - low alpha
  • ...alpha for the displayed emotions measure was
    .51, suggesting that the six criteria were not
    sufficiently homogeneous. However, the creation
    of this variable was still deemed meaningful
    since each of these six criteria reflected part
    of the displayed emotions construct and... As
    argued by Nunnally and Bernstein (1994), the
    heterogeneity "would be a legitimate part of the
    test if it were part of the domain of content
    implied by the construct" (p. 312).

21
Comments and Reply - 7
  • Measures (Reviewer comments)
  • I don't think your discussion surrounding the
    issue of low alpha for displayed emotions provides
    an adequate justification... I think much more
    needs to be done to try and clean up that variable.
  • I understand your point that the construct is
    heterogeneous... I see two solutions: (1) select a
    subset of the items; (2) treat the six items as
    separate indicators of displayed emotions.

22
Comments and Reply - 7
  • Measures (JOM initial version)
  • I added a paragraph to the method section as
    follows: An alternative way to deal with the low
    alpha is to treat the six criteria as separate
    indicators of displayed positive emotions rather
    than to create a composite. I have run the
    regressions in this way... (see the sketch below)
  • I also added a paragraph to the discussion.
  • Consequently, NO concerns on this issue have
    been raised by the JOM reviewers.
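
To make the two approaches concrete, here is a minimal Python sketch on synthetic data. The item names, sample size, and strength of the relationships are illustrative assumptions, not values from the paper; the point is only how a composite score (with Cronbach's alpha) differs from entering the six criteria as separate indicators.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
latent = rng.normal(size=(n, 1))                      # shared "displayed emotions" signal
items = pd.DataFrame(
    latent + rng.normal(scale=1.5, size=(n, 6)),      # six noisy observed criteria (hypothetical)
    columns=[f"criterion_{i + 1}" for i in range(6)],
)
reaction = 0.5 * latent.ravel() + rng.normal(size=n)  # hypothetical customer reaction

def cronbach_alpha(df: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = df.shape[1]
    return (k / (k - 1)) * (1 - df.var(ddof=1).sum() / df.sum(axis=1).var(ddof=1))

print(f"Cronbach's alpha for the six criteria: {cronbach_alpha(items):.2f}")

# Approach A: one composite score as the predictor (the original submission).
composite = sm.add_constant(items.mean(axis=1).rename("displayed_emotions"))
print("Composite R^2:", round(sm.OLS(reaction, composite).fit().rsquared, 3))

# Approach B: the six criteria entered as separate indicators (the revision).
separate = sm.add_constant(items)
print("Separate-indicators R^2:", round(sm.OLS(reaction, separate).fit().rsquared, 3))
```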

23
Comments and Reply - 8
  • Statistical analysis
  • It seems it would be better to test all the
    relationships at one time using structural
    equation modeling rather than testing the
    relationships separately.
  • I agree with the reviewers that hierarchical
    regression is more appropriate... Even better
    would be causal modeling... (Editor's opinion; a
    sketch of the hierarchical approach follows below)
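
A minimal sketch of the hierarchical-regression approach the editor preferred, on hypothetical data: control variables are entered in Step 1, the focal predictor in Step 2, and the increment in R^2 is examined. Variable names and effect sizes are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
data = pd.DataFrame({
    "clerk_sex": rng.integers(0, 2, n).astype(float),  # control (hypothetical)
    "clerk_ability": rng.normal(size=n),                # control (hypothetical)
    "displayed_emotions": rng.normal(size=n),           # focal predictor (hypothetical)
})
data["customer_reaction"] = (
    0.2 * data["clerk_ability"]
    + 0.4 * data["displayed_emotions"]
    + rng.normal(size=n)
)

y = data["customer_reaction"]
controls = ["clerk_sex", "clerk_ability"]

# Step 1: controls only.  Step 2: controls plus the focal predictor.
step1 = sm.OLS(y, sm.add_constant(data[controls])).fit()
step2 = sm.OLS(y, sm.add_constant(data[controls + ["displayed_emotions"]])).fit()

print(f"Step 1 R^2 = {step1.rsquared:.3f}")
print(f"Step 2 R^2 = {step2.rsquared:.3f} (increment = {step2.rsquared - step1.rsquared:.3f})")
```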

24
Comments and Reply - 9
  • Cross-cultural methodology - generalization
  • Given that your sample was Taiwanese, and that
    the journal is a US-based journal, you need to
    convince your audience of the generalizability
    of your findings.
  • I would argue that sales clerks' expressions of
    every indicator chosen in this study would make
    customers feel they are warmly welcomed in both
    Taiwan and the US, although it is likely that
    emotional indicators... may not be identical.

25
Comments and Reply - 9
  • Cross-cultural methodology - foreign scales
  • You used American scales in a foreign country.
    That is a huge problem. Without accounting for
    the specific cultural differences, your findings
    are close to meaningless.
  • Concerns about using scales originally written
    in English in a foreign country... may be
    partially mitigated for the following reasons:
  • Similar factor structures of the scales
  • Some congruent research findings

26
Paper Reviewed 2
  • Tsai, W. C., Chen, C. C., & Chiu, S. F. (2005).
    Exploring boundaries of the effects of applicant
    impression management tactics in job interviews.
    Journal of Management, 31, 108-125.
  • Submission history: JOM (1st revision) →
    JOM (2nd revision) → JOM (accepted)


27
Theoretical Model
(Diagram) Applicant IM tactics (self-focused, non-verbal) → Interviewer
evaluation, with interview length, customer-contact requirement, and
interview structure as moderators
28
Methods
Participants
  • 141 applicants for non-managerial positions and
    33 interviewers from 24 firms in Taiwan (3
    industries).
  • Only one interviewer was involved in each
    interview.

Procedure
Survey completed by interviewer
Survey completed by both interviewer and applicant
29
Methods
Measures
30
Methods
Measures (continued)
31
Methods
9 Control variables
  • Applicant: Physical Attractiveness, Sex, Age,
    and Conscientiousness
  • Interviewer: Experience, Training, Sex, Age, and
    Pre-interview Impression (α = .61)

32
Comments and Reply - 1
  • Sample
  • The applicants participated in an average of
    6.09 interviews in past years. Does this mean
    that they interviewed 6 times annually? Or does
    this mean...?

33
Comments and Reply - 2
  • Research Design - control variable
  • Including nine control variables certainly helps
    eliminate alternative explanations. At the same
    time, it cuts into the ability of your analyses
    to detect effects with small sample sizes. You
    may want to consider collecting additional data.
  • To your credit, you used a lot of control
    variables here. However, in some sense, I think
    you used too many... Please drop... (Editor's
    opinion)

34
Comments and Reply - 2
  • Research Design
  • Would you expect industry related differences to
    influence your results? If so, then you should
    control for industry. Similar differences may be
    found across organizations, although controlling
    for such differences becomes more troublesome.

35
Comments and Reply - 2
  • Research Design
  • In this study we created two dummy variables,
    with the service industry as the referent
    category (see the sketch below).
  • Note that we DID NOT include firm as a control
    variable. Given the small sample size in our
    study, adding 24 firm variables to the regression
    would greatly reduce the statistical power of our
    statistical analyses (see Cohen & Cohen, 1983);
    thus it may be inappropriate to do so.
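
For illustration, a small sketch of the dummy-coding step with the service industry as the referent category. The industry labels and number of firms here are hypothetical; only the coding scheme mirrors the reply above.

```python
import pandas as pd

# Hypothetical industry labels; "service" will serve as the referent category.
industry = pd.Series(
    ["service", "manufacturing", "finance", "service", "finance", "manufacturing"],
    name="industry",
)

# Build dummies and drop the service column, so the two remaining dummies are
# interpreted as contrasts against service-industry firms in the regression.
dummies = pd.get_dummies(industry, prefix="ind").drop(columns="ind_service").astype(int)
print(dummies)
```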

36
Comments and Reply - 2
  • Research Design (2nd comments)
  • I agree that it will take up too many degrees of
    freedom to include a dummy variable for each
    organization. However, you might acknowledge in
    the limitations that you have multiple data
    points that are linked to the same organization
    and the same interviewer, and discuss how that
    might influence your analysis.

37
Comments and Reply - 2
  • Research Design (2nd reply)
  • ...multiple data points were linked to the same
    interviewer and the same organization, and we
    still could not control for all interviewer
    effects and, of course, the organization effects.
    This might violate the statistical assumption of
    independent observations (Kenny & La Voie,
    1985).... We encourage future research to utilize
    larger sample sizes and take appropriate actions
    (e.g., standardize interviewer ratings before
    pooling across interviewers) to deal with such
    issues. (A sketch of this standardization step
    follows below.)
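
A minimal sketch of the remedy mentioned in the reply, using hypothetical ratings: z-score each interviewer's evaluations before pooling, so that differences in rater leniency or severity do not carry into the pooled analysis.

```python
import pandas as pd

# Hypothetical ratings: several applicants evaluated by the same interviewer.
ratings = pd.DataFrame({
    "interviewer_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "evaluation":     [4.0, 3.5, 4.5, 2.0, 3.0, 5.0, 4.0, 4.5],
})

# Z-score each interviewer's ratings before pooling across interviewers.
ratings["evaluation_z"] = (
    ratings.groupby("interviewer_id")["evaluation"]
    .transform(lambda x: (x - x.mean()) / x.std(ddof=1))
)
print(ratings)
```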

38
Comments and Reply - 3
  • Measures - low alpha
  • A major issue with this study is the lack of
    reliable measures. Several alphas are below .7,
    and even below the marginally acceptable .65.
  • My first preference... to go back and try
    different types of rotations and data reduction
    techniques... If that does not work, then I would
    prefer you to simply use some solo items...
    (Editor's opinion)

39
Comments and Reply - 3
  • Measures
  • We followed your suggestion by first trying
    different types of rotations and data reduction
    techniquesbut could not increase alphas. Thus,
    as suggested by you, we used a single item to
    measure non-verbal tactics
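
A small sketch of what "trying different types of rotations" can look like, using scikit-learn's FactorAnalysis on synthetic items. The two-factor structure, loadings, and item count are assumptions for illustration; they are not the study's IM items.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n = 141                                   # sample size borrowed from the paper for flavor
factors = rng.normal(size=(n, 2))         # two hypothetical latent IM factors
loadings = np.array([[0.8, 0.7, 0.6, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.8, 0.7, 0.6]])
items = factors @ loadings + rng.normal(scale=0.5, size=(n, 6))

# Compare the unrotated, varimax, and quartimax solutions for the same items.
for rotation in (None, "varimax", "quartimax"):
    fa = FactorAnalysis(n_components=2, rotation=rotation).fit(items)
    print(rotation)
    print(np.round(fa.components_.T, 2))
```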

40
Comments and Reply - 4
  • Measures - factor analysis
  • The reader needs more information about how
    these factor analyses were conducted (e.g.,
    method of extraction, method of rotation, etc.).

41
Comments and Reply - 5
  • Measures - description of selected items
  • Did you use the full scales from Kristof and her
    colleagues to measure self-focused and non-verbal
    IM tactics? If you only used selected items,
    what were the decision criteria?

42
Comments and Reply - 6
  • Statistical analysis
  • If there are high correlations, you might center
    the variables before constructing the interaction
    terms (Jaccard, Turrisi, & Wan, 1990), if you
    have not done so already.
  • We added the following explanation to the
    revision: To counter problems of
    multicollinearity in tests of interaction terms,
    we centered all independent and moderating
    variables before creating the interaction terms.
    (See the sketch below.)
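
A minimal sketch of the centering step described in the revision, on hypothetical variables: mean-center the independent and moderating variables, build the product term from the centered scores, and then estimate the moderated regression.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 150
df = pd.DataFrame({
    "im_tactics": rng.normal(5.0, 1.0, n),   # hypothetical independent variable
    "structure":  rng.normal(3.0, 0.5, n),   # hypothetical moderating variable
})
df["evaluation"] = 0.3 * df["im_tactics"] + 0.2 * df["structure"] + rng.normal(size=n)

# Mean-center the IV and the moderator, then build the product term from the
# centered scores to reduce its correlation with the component variables.
df["im_c"] = df["im_tactics"] - df["im_tactics"].mean()
df["structure_c"] = df["structure"] - df["structure"].mean()
df["im_x_structure"] = df["im_c"] * df["structure_c"]

model = sm.OLS(
    df["evaluation"],
    sm.add_constant(df[["im_c", "structure_c", "im_x_structure"]]),
).fit()
print(model.params)
```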

43
Comments and Reply - 7
  • Cross-cultural methodology - scale translation
  • The study was conducted in Taiwan. Were the
    surveys in English? And if so, how fluent were
    the participants in English?

44
Comments and Reply - 7
  • Cross-cultural methodology
  • In fact, all questionnaires in this study were
    in Chinese, as it is the official language in
    Taiwan.
  • Van de Vijver and Leung (1997) indicated the
    importance of evaluating the translatability of
    an instrument in a multilingual context... Items
    that show cultural idiosyncrasies (e.g., local
    idioms and metaphors) should be replaced. It
    seems that the scales used in our study did not
    contain such poorly translatable items.

45
Lessons Learned - 1
  • Sample
  • Has adequately detailed demographics.
  • Has acceptable return rates and attrition rates
    (e.g., addresses the influence of non-respondents
    and drop-outs).

46
Lessons Learned - 2
  • Research Design
  • Design is constructed so as to rule out
    alternative explanations.
  • Includes needed control variables.
  • Uses logical timing of measurement, especially
    regarding longitudinal designs.


47
Lessons Learned - 3
  • Procedure
  • Explains procedures in adequate detail (enough
    to allow a replication), yet is reasonably
    succinct.


48
Lessons Learned - 4
  • Measurement
  • Explains fully when existing, accepted measures
    are not used.
  • Uses measures that are free from bias (e.g.,
    social desirability).
  • Presents evidence of construct validity (e.g.,
    convergent and discriminant validity, factor
    analysis) as needed.

49
Lessons Learned - 5
  • Measurement (continued)
  • Has adequate types and levels of reliability
    (e.g., internal consistency, inter-rater).

50
Lessons Learned - 6
  • Statistical Analysis
  • Proper use of data-analytic approaches and
    techniques.
  • Use of minimally sufficient statistics, not
    unduly fancy and complex.

51
Lessons Learned - 7
  • Cross-cultural Methodology
  • Be familiar with cross-cultural research
    methodology (e.g., the use of research
    instruments).
  • Semantic equivalence
  • Conceptual equivalence
  • Scaling equivalence

52
Additional Issues
  • Avoids common method variance or explains why it
    is not a likely counter-explanation for results.
  • Avoids procedures for data collection in field
    studies that are so intrusive that there is a
    risk of changing the phenomenon under examination
    or creating Hawthorne effects.

53
A Few Reminders
  • (Reminders in Chinese; text not recoverable from
    the source encoding)

Example
54
A Few Reminders
  • I am pleased with the level of detail you
    provided in your revisions. Each point you made
    was acceptable to me and I feel you have
    adequately addressed my concerns. If you
    remember from the initial review I was the
    reviewer who was the least accepting of this
    piece. Very good work in my opinion,
    particularly the review and integration of the
    additional literature that was suggested to you.
    (by one JOM reviewer)

55
Final Thoughts
  • (Remarks in Chinese; text not recoverable from
    the source encoding)
  • Rogelberg, S. G. (Ed.). (2002). Handbook of
    research methods in industrial and organizational
    psychology. Malden, MA: Blackwell.
  • Pedhazur, E. J., & Schmelkin, L. P. (1991).
    Measurement, design, and analysis: An integrated
    approach. Lawrence Erlbaum Associates.
  • (Closing remarks in Chinese, mentioning a
    seminar; text not recoverable from the source
    encoding)

56
Final Thoughts
  • (Opening remark in Chinese; text not recoverable
    from the source encoding)
  • Importance of the research question
  • Conceptual development and definition
  • Well-organized, structured writing style
  • Source: Desrosiers et al. (2002). Writing
    research articles: Update on the article review
    checklist. In S. G. Rogelberg (Ed.), Handbook of
    research methods in industrial and organizational
    psychology (p. 460). Malden, MA: Blackwell.

57
Questions
?