Title: The Standard Appraisal Questions
1. The Standard Appraisal Questions
2. Anatomy of a scholarly journal article
- Designed to answer:
  - Abstract: Is it of interest?
  - Introduction: Why was it done?
  - Methods: How was it done?
  - Results: What has it found?
  - Discussion: What are the implications?
  - (Conclusion): What else is of interest?
Adapted from Crombie, 1996
3. Abstract
- Is the article of interest?
- A summary that presents key points from each of the main sections
- Structured abstracts are now commonly required; previously, paragraph-form abstracts were accepted
- Structured: Purpose, Methods, Results, and Discussion
- Should give an overview of the article
4. Abstract (cont.)
- How relevant is the topic to the information sought by the reader?
- Common flaws:
  - Not a concise, clear, or accurate summary
  - The author may put a spin on the article that can influence novice readers
5. Introduction: Why was it done?
- Provides the background to the study
- Reviews previous work
- Highlights gaps in current knowledge
- May explain why gaps are urgent
- Clinical importance of the topic
- Usually expressed in epidemiological terms
- Morbidity, mortality, cost of health services
6. Introduction (cont.)
- Should wrap up with a clear statement of the study's purpose
- A hypothesis to be tested or a question to be answered
- If absent, did the authors know what they were looking for?
- A missing purpose statement represents a flaw
7. Are the aims clearly stated?
- Explanation of why the study was carried out
- Did the research tackle an important problem?
- Clearly stated, tightly focused aims
- A hypothesis specified in advance indicates a well-planned study
- Versus "trawling" the data, which can produce spuriously significant results
8. Methods: How was the study carried out?
- Should be thorough enough for the study to be reproduced
- However, the section often refers to other publications for details described elsewhere
- Who was studied, and how were they recruited?
  - Clinic? Diagnostic criteria? Demographics sought?
- This information is needed for generalization
9. Methods (cont.): Are the data accurate?
- How were measurements taken?
- Were steps taken to standardize measurements?
- Scientific quality of questionnaires and measuring instruments
  - These should be valid and reliable
- Which statistical methods were used in the analysis?
- Were they appropriate?
10. An essential component of the Methods section
- A published study should disclose the details of how the required sample size was estimated, including:
  - The expected or clinically important difference between groups that is sought
  - The acceptable probability of a Type I error
  - The desired power to detect a difference if there is one
  - The statistics package (computer program) used to calculate the needed sample size
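As an illustration of how these ingredients combine, here is a minimal sketch of a two-group sample-size calculation using the normal approximation; the difference, standard deviation, alpha, and power values below are invented for the example, not drawn from any real study.

```python
# Hypothetical sample-size sketch: every numeric input is an assumption
# chosen for illustration, not a recommendation.
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for comparing two means.

    delta: smallest clinically important difference sought
    sigma: expected standard deviation of the outcome
    alpha: acceptable probability of a Type I error (two-sided)
    power: desired probability of detecting the difference if it exists
    """
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the Type I error rate
    z_beta = norm.ppf(power)           # critical value for the desired power
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

# To detect a difference of 5 units when the outcome SD is 10:
print(round(n_per_group(delta=5, sigma=10)))  # about 63 subjects per group
```

Note how each quantity the slide lists appears as an explicit parameter, which is exactly why a Methods section should report all of them.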
11. Was the sample size justified?
- The sample should be large enough to give an accurate picture of what's going on
- What size of effect is being sought?
- How big must the study be to detect this effect?
- Small studies may fail to detect clinically important effects
12. Sample size justified? (cont.)
- What size of effect did the study have the power to detect?
- This may be calculated after study completion
- Low power may be justification to repeat the experiment with a larger sample
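A sketch of this after-the-fact question, under the same normal-approximation assumptions as standard sample-size formulas: given the sample the study actually had, what difference did it have adequate power to detect? The example numbers are hypothetical.

```python
# Post-hoc detectable-effect sketch; figures are illustrative assumptions.
import math
from scipy.stats import norm

def detectable_difference(n_per_group, sigma, alpha=0.05, power=0.80):
    """Smallest mean difference a completed two-group study of this size
    had the stated power to detect (normal approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * sigma * math.sqrt(2 / n_per_group)

# A small study of 20 per group with SD = 10 had 80% power only for
# differences of roughly 8.9 units or more, so it could easily miss a
# smaller but clinically important effect.
print(round(detectable_difference(20, 10), 1))
```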
13. Results: What has the study found?
- Main findings in tables and figures, explained in the text
- Logical presentation: from simple observations to complex analyses
- The text should highlight the key findings in the data
14. Results (cont.)
- The text will emphasize what the authors find important
- Readers should make up their own minds
- Do the results fulfill the aims of the study?
- What do the findings mean?
- The reader should find flaws and assess their impact
- The reader should decide what the findings really mean
15. Are measurements likely to be valid and reliable?
- A detailed description of measurement methods should be given
- Read critically, asking how errors could be introduced
- Did more than one observer assess?
- Did authors discuss potential measurement errors?
16. Measurements (cont.)
- The authors should discuss how reliability and validity were assessed
- It is not acceptable to use a measure that has not been previously validated
17. Were the basic data adequately described?
- Number of subjects and how they were obtained
- Basic characteristics: mean and range
- What typical measurements look like and how they vary
- Important for generalizability and comparability
- Begin with simple analyses, giving the main outcomes in tables and/or figures
18. Do the numbers add up?
- Subjects lost to follow-up or missing data should be accounted for
- If data are analyzed in subgroups, all should add up to the total
- Discrepancies should be accounted for
- Small (<1%) discrepancies are unlikely to have an impact on findings
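The reconciliation described above is simple arithmetic and can be scripted; all counts here are hypothetical, invented for illustration.

```python
# Hypothetical reconciliation: do the subgroup counts plus dropouts
# add up to the enrolled total reported in the paper?
enrolled = 240
subgroups = {"treatment": 112, "control": 115}
lost_to_followup = 13

accounted = sum(subgroups.values()) + lost_to_followup
discrepancy = enrolled - accounted
print(discrepancy)  # 0 means every subject is accounted for
```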
19. Histogram
20. Bar chart
21. Line chart
22. Table example
Table 1. Descriptive statistics of total visits,
comparing Trauma with No Trauma cases.
23. Are the statistical methods described?
- They should be described in the Methods section and referenced
- Assumptions about the data should be addressed
- Warning sign: large numbers of tests carried out
  - Potential for spurious significance
24. Was statistical significance assessed?
- Chance effects may appear quite large, especially when the sample size is small
- A p-value < 0.05 provides evidence that the result is likely to be real rather than due to chance
- Some journals prefer confidence intervals to p-values
- A CI shows the range within which the true value could lie, with a certain degree of confidence (usually 95%)
- A broad range indicates a less reliable estimate
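To make the CI idea concrete, here is a minimal sketch computing an approximate 95% confidence interval for a difference in group means using the normal approximation; all the summary statistics are invented for the example.

```python
# Hypothetical CI sketch: the means, SDs, and group sizes are assumptions.
import math

def ci_mean_diff(mean1, mean2, sd1, sd2, n1, n2, z=1.96):
    """Approximate CI for the difference of two independent means
    (z = 1.96 corresponds to 95% confidence)."""
    diff = mean1 - mean2
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)  # standard error of the difference
    return diff - z * se, diff + z * se

# Hypothetical summary data: group means 12 and 10, SD 4, 50 per group
lo, hi = ci_mean_diff(12.0, 10.0, 4.0, 4.0, 50, 50)
print(f"difference = 2.0, 95% CI ({lo:.2f} to {hi:.2f})")
# The interval excludes 0, so the difference is significant at the 5% level;
# a wider interval would signal a less precise, less reliable estimate.
```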
25. Discussion: What are the implications?
- Can the findings be generalized to other people, places, and times?
- Subjective: authors are not always impartial
- Implications
- What is new?
- What does it mean for health care?
- Is it relevant to my patients?
26. Discussion (cont.)
- The authors should make comparisons to other studies and address discrepancies
- Conclusions:
  - Should the findings induce changes in clinical practice?
  - Do the findings highlight the need for further research?
27. What do the main findings mean?
- Is the effect size clinically significant?
- Why or why not?
- Internal consistency may be demonstrated:
  - Similar results by age or sex
  - Dose response
- This supports the findings as not being a chance aberration
28. What do the main findings mean? (cont.)
- The reader should consider whether the authors' interpretations make sense
- Biologic plausibility
- Timing of events
29. How are null findings interpreted?
- Was there a genuine lack of effect? Or...
- Was the study too small to have a reasonable chance of detecting anything?
  - A wide confidence interval indicates this possibility
- Lack of evidence of an association is not the same as evidence of no association
30. Did untoward events occur during the study?
- Many problems should have been dealt with in feasibility or pilot studies
- Difficulty following the research design
- Loss of subjects to follow-up
- Difficulty making measurements on some individuals
31. Untoward events (cont.)
- Missing data may allow bias to influence the results
- Midstream changes in design are worrisome:
  - Outcome measures, intervention, inclusion/exclusion criteria, etc.
- Data may not be comparable from the beginning of the study to after the design change
32. Are important effects overlooked?
- The reader should look at the results for unexplored findings, patterns, etc.
- Researchers, understandably, may draw attention to findings that fit their preconceptions
- Do they comment on results that do not fit their views, or try to gloss over them?
33. How do the results compare with previous reports?
- A single study seldom provides convincing evidence
- New findings are accepted only with a substantial body of research
- Confidence diminishes if other studies fail to confirm previous results
- Findings should be fitted into a balanced overview of all reported studies
34. What implications does the study have for your practice?
- Should this information lead to changes in the management of one's own patients?
- Risk of subjecting patients to useless therapy
- Risk of denying patients access to effective therapies
- Risk of causing anxiety by advising them to avoid harmful behavior
35. Implications for your practice (cont.)
- How big was the effect, and is it clinically important?
- Were the patients' circumstances similar to those in your practice?
  - Hospital patients
  - Age, gender
36. Overall Questions to Ask
- Is the study design appropriate to address the research question?
- In the Discussion section, are the findings...
  - consistent with the research question of the study?
  - consistent with the results presented?
  - presented in the context of current evidence?
37. Use checklists
- Invaluable tools as you review journal articles, which can be intimidating to read
- They contain many of the questions covered in this lecture
- Checklists are specific to the various types of research methodology involved, from case studies to RCTs
38. Read the article and evaluate the information
- Three basic questions need to be answered for every type of study:
  - Are the results of the study valid?
  - What are the results?
  - Will the results help in caring for my patient?
- The issue of validity refers to the truthfulness of the information
39. Validity of a study?
- Was the assignment of patients to treatment randomized?
- Were all the patients who entered the trial properly accounted for at its conclusion?
- Were patients, their clinicians, and study personnel blind to treatment?
- Aside from the experimental intervention, were the groups treated equally?