Title: The Standard Appraisal Questions
1. The Standard Appraisal Questions
2. Anatomy of a scholarly journal article
- Abstract
- Introduction
- Methods
- Results
- Discussion
- (conclusion)
- Designed to answer:
- Is it of interest?
- Why was it done?
- How was it done?
- What has it found?
- What are the implications?
- What else is of interest?
Adapted from Crombie, 1996
3. Abstract
- Is it of interest?
- A summary that presents key points from each of the main sections
- Structured abstract now commonly required vs. previously accepted in paragraph form
- Should give an overview of the article
- How relevant is the topic to the information sought by a reader?
- Common flaws:
- Not a concise, clear, or accurate summary
- Author may put a spin on the abstract that may influence novice readers
4. Why was it done? - Introduction
- Provides the background to the study
- Review previous work
- Highlight gaps in current knowledge
- May explain why gaps are urgent
- Clinical importance of the topic
- Usually expressed in epidemiological terms
- Morbidity, mortality, cost of health services
- Introduction should wrap up with a clear statement of the study's purpose
- A hypothesis to be tested or a question to be answered
- If absent, did the authors know what they were looking for?
5. Are the aims clearly stated?
- Explanation of why the study was carried out.
- Did the research tackle an important problem?
- Clearly stated, tightly focused aims
- Hypothesis specified in advance
- Well-planned study
- Vs. trawling, which could result in spuriously significant results
6. How was the study carried out? - Methods
- Should be thorough enough to reproduce
- However, often refers to other publications for details
- Who was studied and how were they recruited?
- Clinic? Diagnostic criteria? Demographics sought?
- Information needed for generalization
- Are data accurate?
- How were measurements taken?
- Steps taken to standardize measurements?
- Scientific quality of questionnaires and measuring instruments
- Which statistical methods were used in the analysis?
7. An essential component of the Methods section
- If a published study does not disclose the details of how the required sample size was estimated, including:
- Expected or clinically important difference sought
- Acceptable probability of making a Type I error
- Desired power to detect a difference if there is one
- And the statistical package or computer program used to calculate the needed sample size from the above
- Then the statistical conclusions can be interesting and informative, but not convincing!
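The three ingredients listed above (difference sought, Type I error, power) are exactly what the standard normal-approximation formula for comparing two means consumes: n per group = 2((z_{1-a/2} + z_{1-b}) * sigma / delta)^2. A minimal sketch of that calculation, using only the Python standard library (the function name and worked numbers are illustrative, not from the source):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate subjects needed per group to compare two means.

    delta: expected or clinically important difference sought
    sigma: assumed standard deviation of the outcome
    alpha: acceptable probability of a Type I error (two-sided)
    power: desired probability of detecting delta if it exists
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return ceil(n)

# Example: to detect a 5-unit difference when SD = 10,
# with alpha .05 and 80% power:
print(sample_size_per_group(delta=5, sigma=10))  # 63 per group
```

Note how sensitive the answer is to the difference sought: halving delta quadruples the required sample size, which is why an undisclosed estimate is such a red flag.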
8. What has it found? - Results
- Main findings in tables and figures, explained in text
- Logical presentation: simple observations to complex analyses
- Text should highlight key findings of the data
- Text will emphasize what the authors find important
- Readers should make up their own minds
- Do the results fulfill the aims of the study?
- What do the findings mean?
- Reader should find flaws and assess their impact
- Reader should decide what the findings really mean
9. Was the sample size justified?
- Large enough to give an accurate picture of what's going on
- Size of the effect being sought
- How big the study must be to detect this effect
- Small studies may fail to detect clinically important effects
- What size of effect did the study have the power to detect?
- May be calculated after study completion
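The after-the-fact question above can be answered by inverting the usual sample-size formula: given the number of subjects actually enrolled, what is the smallest difference the study had reasonable power to detect? A hedged sketch for a two-group comparison of means (function name and numbers are my own illustration):

```python
from math import sqrt
from statistics import NormalDist

def detectable_difference(n_per_group, sigma, alpha=0.05, power=0.80):
    """Smallest between-group difference a completed two-group study
    of n_per_group subjects had the stated power to detect,
    assuming outcome standard deviation sigma."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return (z_alpha + z_beta) * sigma * sqrt(2 / n_per_group)

# A study with only 20 subjects per group and SD = 10 had 80% power
# only for differences of roughly this size or larger:
print(round(detectable_difference(20, 10), 1))
```

If the clinically important difference is smaller than this number, a "negative" result says little: the study was simply too small to see it.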
10. Are measurements likely to be valid and reliable?
- Detailed description of measurement methods
- Read critically, asking how errors could be introduced
- Did more than one observer assess?
- Did authors discuss potential measurement errors?
- Should discuss how reliability and validity were assessed
11. Were basic data adequately described?
- Number of subjects and how they were obtained
- Basic characteristics, e.g. mean and range
- What typical measurements look like and how they vary
- Important for generalizability and comparability
- Begin with simple analyses, giving main outcomes in tables and/or figures
- Complex analyses later, which should reconcile with the simple ones
12. Do the numbers add up?
- Subjects lost to follow-up or missing data should be accounted for
- If data are analyzed in subgroups, all should add up to the total
- Discrepancies should be accounted for
- Small (<1%) discrepancies are unlikely to have an impact on findings
13. Are the statistical methods described?
- Described in the Methods section, and referenced
- Address assumptions about data
- Warning sign: large numbers of tests carried out
- Potential for spurious significance
- Simple analyses should be compared with more
complex ones
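The warning about large numbers of tests follows from simple arithmetic: if each test uses a .05 significance threshold, the chance of at least one "significant" result appearing by chance alone grows rapidly with the number of independent tests. A small illustration (assuming independent tests, which is a simplification):

```python
def chance_of_false_positive(k, alpha=0.05):
    """Probability that at least one of k independent tests reaches
    p < alpha when no real effects exist at all."""
    return 1 - (1 - alpha) ** k

# One test keeps the nominal 5% risk; twenty tests do not.
for k in (1, 5, 20):
    print(k, round(chance_of_false_positive(k), 2))
```

With twenty tests, the chance of a spurious "finding" is close to two in three, which is why trawling through many outcomes undermines the credibility of any single significant result.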
14. Was the statistical significance assessed?
- Chance effects may appear quite large, especially when the sample size is small
- A p-value < .05 provides good evidence that the result is likely to be real rather than chance
- Some journals prefer confidence intervals to p-values
- A CI shows the range within which the true value could lie, with a certain degree of confidence (usually 95%)
- A broad range calls the effect size into question
15. What are the implications? - Discussion
- Can the findings be generalized to other people, places, and times?
- Subjective: authors are not always impartial
- Implications:
- What is new?
- What does it mean for health care?
- Is it relevant to my patients?
- Author should make comparisons to other studies and address discrepancies
- Conclusions:
- Should findings induce changes in clinical practice?
- Do findings highlight a need for further research?
16. What do the main findings mean?
- Is the effect size clinically significant?
- Why or why not?
- Internal consistency may be demonstrated
- Similar results by age or sex
- Dose response
- Supports findings as not chance aberration
- Reader should consider whether the authors' interpretations make sense
- Biologic plausibility, timing of events
17. How are null findings interpreted?
- Was there a lack of effect?... OR
- Was the study too small to have a reasonable chance of detecting anything?
- A wide confidence interval indicates this possibility
- Lack of evidence of an association is not the same as evidence of no association
18. Did untoward events occur during the study?
- Many problems should have been dealt with in feasibility/pilot studies
- Difficulty following the research design
- Loss of subjects to follow-up
- Difficulty making measurements on some individuals
- Missing data may allow bias to intrude
- Midstream changes in design are worrisome
- Data may not be comparable
19. Are important effects overlooked?
- Reader should look at the results for unexplored findings, patterns, etc.
- Researchers, understandably, may draw attention to findings which fit their preconceptions
- Do they comment on results which do not fit their views?
20. How do the results compare with previous reports?
- A single study seldom provides convincing evidence
- New findings are accepted only with a substantial body of research
- Confidence diminishes if other studies fail to confirm previous results
- Findings should be fitted into a balanced overview of all reported studies
21. What implications does the study have for your practice?
- Should this information lead to changes in the management of one's own patients?
- Risk of subjecting patients to useless therapy
- Risk of denying patients access to effective ones
- Risk of causing anxiety by advising them to avoid harmful behavior
- How big was the effect, and is it clinically important?
- Were the patients' circumstances similar to your practice?
22. Overall Questions to Ask
- Is the study design appropriate to address the research question?
- In the Discussion section, are the findings...
- ...consistent with the research question of the study?
- ...consistent with the results presented?
- ...given in the context of current evidence?