The Standard Appraisal Questions - PowerPoint PPT Presentation

Provided by: michaelh9

Transcript and Presenter's Notes
1
The Standard Appraisal Questions
2
Anatomy of a scholarly journal article
  • Each section is designed to answer a question
  • Abstract: Is it of interest?
  • Introduction: Why was it done?
  • Methods: How was it done?
  • Results: What has it found?
  • Discussion: What are the implications?
  • (Conclusion): What else is of interest?

Adapted from Crombie, 1996
3
Abstract
  • Is the article of interest?
  • A summary that presents key points from each of
    the main sections
  • Structured abstracts are now commonly required;
    paragraph-form abstracts were previously accepted
  • Structured format: Purpose, Methods, Results, and
    Discussion
  • Should give an overview of the article

4
Abstract cont.
  • How relevant is the topic to the information
    sought by a reader?
  • Common flaws
  • Not a concise, clear, or accurate summary
  • Authors may put a 'spin' on the article that can
    influence novice readers

5
Introduction - Why was it done?
  • Provides the background to the study
  • Reviews previous work
  • Highlights gaps in current knowledge
  • May explain why gaps are urgent
  • Clinical importance of the topic
  • Usually expressed in epidemiological terms
  • Morbidity, mortality, cost of health services

6
Introduction cont.
  • Should wrap up with a clear statement of the
    study's purpose
  • A hypothesis to be tested or a question to be
    answered
  • If absent, did authors know what they were
    looking for?
  • Represents a flaw if missing

7
Are the aims clearly stated?
  • Explanation of why the study was carried out
  • Did the research tackle an important problem?
  • Clearly stated, tightly focused aims
  • Hypothesis specified in advance
  • Well-planned study
  • vs. "trawling," which could result in spuriously
    significant results

8
Methods - How was the study carried out?
  • Should be thorough enough to reproduce
  • However, often refers to other publications for
    details described elsewhere
  • Who was studied & how were they recruited?
  • Clinic? Diagnostic criteria? Demographics sought?
  • Information needed for generalization

9
Methods cont. - Are the data accurate?
  • How were measurements taken?
  • Steps taken to standardize measurements?
  • Scientific quality of questionnaires & measuring
    instruments
  • Should be valid & reliable
  • Which statistical methods were used in the
    analysis?
  • Were they appropriate?

10
An essential component of the Methods section
  • A published study should disclose the details of
    how the required sample size was estimated,
    including:
  • Expected or clinically important difference
    between groups that is sought
  • Acceptable probability of Type I error
  • Desired power to detect a difference if there is
    one
  • The statistics package (computer program) used to
    calculate needed sample size
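The four ingredients above fully determine the required sample size. As an illustrative sketch (not part of the presentation; the function name and defaults are assumptions), the standard normal-approximation formula for comparing two group means:

```python
from math import ceil
from scipy.stats import norm

def two_sample_n(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means
    (normal approximation): n = 2 * ((z_a + z_b) * sigma / delta)**2."""
    z_alpha = norm.ppf(1 - alpha / 2)  # acceptable Type I error
    z_beta = norm.ppf(power)           # desired power
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# detect a difference of half a standard deviation
print(two_sample_n(delta=0.5, sigma=1.0))  # 63 per group
```

Note how the required n grows quadratically as the sought effect shrinks, which is why small studies so often miss clinically important differences.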

11
Was the sample size justified?
  • Should be large enough to give an accurate
    picture of what's going on
  • What size of effect is being sought?
  • How big must the study be to detect this effect?
  • Small studies may fail to detect clinically
    important effects

12
Sample size justified? Cont.
  • What size of effect did the study have the power
    to detect?
  • May be calculated after study completion
  • Low power may be justification to repeat an
    experiment with a larger sample
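The post-hoc calculation described above can be sketched by inverting the same normal-approximation formula for two-group means; this is an illustrative example, not the presentation's method:

```python
from math import sqrt
from scipy.stats import norm

def detectable_effect(n_per_group, sigma, alpha=0.05, power=0.80):
    """Smallest between-group difference a completed study of this
    size could detect (two-sample means, normal approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * sigma * sqrt(2 / n_per_group)

# a study with only 20 subjects per group can detect only large effects
print(round(detectable_effect(20, 1.0), 2))  # about 0.89 SD
```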

13
Results - What has the study found?
  • Main findings in tables & figures, explained in
    text
  • Logical presentation: simple observations to
    complex analyses
  • Text should highlight key findings of the data

14
Results cont.
  • Text will emphasize what authors find important
  • Readers should make up their own minds
  • Do results fulfill the aims of the study?
  • What do the findings mean?
  • Reader should find flaws & assess their impact
  • Reader should decide what findings really mean

15
Are measurements likely to be valid & reliable?
  • Detailed description of measurement methods
    should be given
  • Read critically, asking how errors could be
    introduced
  • Were assessments subjective?
  • Did more than one observer assess?
  • Did authors discuss potential measurement errors?

16
Measurements cont.
  • Author should discuss how reliability & validity
    were assessed
  • It's not acceptable to use a measure that has not
    been previously validated

17
Were basic data adequately described?
  • Number of subjects and how they were obtained
  • Basic characteristics: mean & range
  • What typical measurements look like & how they
    vary
  • Important for generalizability & comparability
  • Begin with simple analyses, giving main outcomes
    in tables and/or figures

18
Do the numbers add up?
  • Subjects lost to follow-up or missing data should
    be accounted for
  • If data analyzed in subgroups, all should add up
    to total
  • Discrepancies should be accounted for
  • Small (<1%) discrepancies are unlikely to have an
    impact on findings
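These bookkeeping checks are mechanical enough to script. A minimal sketch with invented counts:

```python
# hypothetical counts as reported in an article
enrolled = 250
lost_to_followup = 12
subgroups = {"treatment": 120, "control": 118}

# subjects analyzed across subgroups should account for everyone
analyzed = sum(subgroups.values())
discrepancy = enrolled - lost_to_followup - analyzed
print(discrepancy)  # 0 means the numbers add up
```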

19
Histogram
20
Bar chart
21
Line chart
22
Table example
Table 1. Descriptive statistics of total visits,
comparing Trauma with No Trauma cases.
23
Are the statistical methods described?
  • Described in the Methods section, and referenced
  • Address assumptions about data
  • Warning sign: large numbers of tests carried out
  • Potential for spurious significance
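The multiple-testing warning can be made concrete: with independent tests each at alpha = 0.05, the chance of at least one spurious "significant" result grows rapidly. An illustrative sketch:

```python
def familywise_error(alpha, n_tests):
    """Probability of at least one false positive across
    n independent tests, each at significance level alpha."""
    return 1 - (1 - alpha) ** n_tests

# 20 independent tests at the conventional 0.05 level
print(round(familywise_error(0.05, 20), 3))  # 0.642
```

In other words, an article reporting twenty unplanned comparisons has better-than-even odds of at least one chance "finding."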

24
Was the statistical significance assessed?
  • Chance effects may appear quite large, especially
    when the sample size is small
  • A p-value < 0.05 suggests the result is unlikely
    to be due to chance alone
  • Some journals prefer confidence intervals rather
    than p-values
  • CI shows the range within which the true value
    could lie, with a certain degree of confidence
    (usually 95%)
  • A broad range is less reliable
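As an illustrative sketch (the values are invented), a normal-approximation 95% CI for a mean shows how a smaller sample widens the interval:

```python
from math import sqrt
from scipy.stats import norm

def mean_ci(mean, sd, n, conf=0.95):
    """Normal-approximation confidence interval for a sample mean."""
    z = norm.ppf(0.5 + conf / 2)   # 1.96 for a 95% interval
    half = z * sd / sqrt(n)        # half-width shrinks as n grows
    return (mean - half, mean + half)

print(mean_ci(120.0, 15.0, 100))  # roughly (117.06, 122.94)
print(mean_ci(120.0, 15.0, 16))   # wider: roughly (112.65, 127.35)
```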

25
Discussion - What are the implications?
  • Can the findings be generalized to other people,
    places & times?
  • Subjective: authors are not always impartial
  • Implications
  • What is new?
  • What does it mean for health care?
  • Is it relevant to my patients?

26
Discussion cont.
  • Author should make comparisons to other studies
    and address discrepancies
  • Conclusions
  • Should findings induce changes in clinical
    practice?
  • Do findings highlight the need for further
    research?

27
What do the main findings mean?
  • Is the effect size clinically significant?
  • Why or why not?
  • Internal consistency may be demonstrated
  • Similar results by age or sex
  • Dose response
  • Supports findings as not a chance aberration

28
What do the main findings mean?
  • Reader should consider whether the authors'
    interpretations make sense
  • Biologic plausibility
  • Timing of events

29
How are null findings interpreted?
  • Was there lack of effect?... OR
  • Was the study too small to have a reasonable
    chance of detecting anything?
  • Wide confidence interval indicates this
    possibility
  • Lack of evidence of an association is not the
    same as evidence of no association

30
Did untoward events occur during the study?
  • Many problems should have been dealt with in
    feasibility/pilot studies
  • Difficulty following research design
  • Loss of subjects to follow-up
  • Difficult to make measurements on some individuals

31
Untoward events cont.
  • Missing data may allow bias to influence the
    results
  • Midstream changes in design are worrisome
  • Outcome measures, intervention,
    inclusion/exclusion criteria, etc.
  • Data may not be comparable from beginning of the
    study to after design change

32
Are important effects overlooked?
  • Reader should look at the results for unexplored
    findings, patterns, etc.
  • Researchers, understandably, may draw attention
    to findings that fit their preconceptions
  • Do they comment on results which do not fit their
    views or try to gloss over them?

33
How do the results compare with previous reports?
  • A single study seldom provides convincing
    evidence
  • New findings are accepted only with a substantial
    body of research
  • Confidence diminishes if other studies fail to
    confirm previous results
  • Findings should be fitted into a balanced
    overview of all reported studies

34
What implications does the study have for your
practice?
  • Should this information lead to changes in the
    management of one's own patients?
  • Risk subjecting patients to useless therapy
  • Risk denying patients access to effective ones
  • Risk causing anxiety by advising them to avoid
    harmful behavior

35
Implications for your practice cont.
  • How big was the effect, and is it clinically
    important?
  • Were patients' circumstances similar to those in
    your practice?
  • Hospital patients
  • Age, gender

36
Overall Questions to Ask
  • Is the study design appropriate to address the
    research question?
  • In the Discussion section, are the findings...
  • consistent with the research question of the
    study?
  • consistent with the results presented?
  • presented in the context of current evidence?

37
Use checklists
  • Invaluable tools as you review journal articles,
    which are often very intimidating to read
  • Contain many of the questions covered in this
    lecture
  • Specific for the various types of research
    methodology involved, from case studies to RCTs

38
Read the article and evaluate the information
  • There are three basic questions that need to be
    answered for every type of study
  • Are the results of the study valid?
  • What are the results?
  • Will the results help in caring for my patient?
  • The issue of validity refers to the
    "truthfulness" of the information

39
Validity of a study?
  • Was the assignment of patients to treatment
    randomized?
  • Were all the patients who entered the trial
    properly accounted for at its conclusion?
  • Were patients, their clinicians, and study
    personnel blind to treatment?
  • Aside from the experimental intervention, were
    the groups treated equally?