1
F1 F2 Training Introduction to Quantitative
methods
  • Dr P Roderick
  • November 2007
  • pjr@soton.ac.uk

2
Reading
  • Bowling A. Research Methods in Health. OUP, 1999.
  • Fletcher RH, Fletcher SW, Wagner E. Clinical
    Epidemiology. Baltimore: Williams and
    Wilkins, 1996. Ch 1, 2, 5, 6, 11.
  • Hennekens CH, Buring JE. Epidemiology in
    Medicine. Boston: Little, Brown, 1987. Ch 3, 11, 12.
  • Ravani P et al. Clinical research of kidney
    diseases: researchable questions and valid
    answers. NDT 2007; 22: 2459.

3
Key stages of a research study
  • What's the question?
  • Which design to use?
  • What exposures and outcomes to measure, and how?
  • On whom?
  • How to analyse to answer the question?
  • How to interpret the results?

4
PICO
  • Which Patients
  • What Intervention
  • What Control
  • What Outcomes

5
Defining the question
  • First stage of any study
  • Methods used determined by question
  • Consider: is it worth asking?
  • Prior knowledge? Feasible? Important?
    Ethical?
  • Hypothesis generating vs hypothesis testing
  • Research vs audit
  • Audit = monitoring of clinical practice against
    pre-defined standards
  • Tends to be local, non-generalisable

6
Quantitative methods
  • Numerical answers to questions about
  • Health status, disease burden
  • Causes (aetiology) of disease
  • Diagnosis
  • Treatment
  • Prognosis
  • Economics (e.g. cost effectiveness)
  • Most are questions about the association between
    TWO variables (exposure and outcome) OR describing
    one of these
  • Quantification gives magnitude/importance/frequency

7
(Diagram: Exposure → Outcome)
8
Exercise 1: List
  • Exposures
  • Outcomes

9
Quantitative questions
  • How much of X is there?
  • Who gets X? Where is X? Is X getting more or
    less?
  • What factors are associated with X?
  • What happens to X? What factors influence this?
  • Is test A better than B in detecting X?
  • Is treatment C better than D at treating X?

10
Summary of study designs
  • Individuals
  • Experimental: randomised controlled trial (RCT);
    non-randomised methods
  • Observational, descriptive: cross-sectional
  • Observational, analytic: cross-sectional study,
    cohort study, case-control study, case studies
    for prognosis
  • Populations
  • Experimental: time/population-controlled
    non-experimental intervention studies
  • Observational, descriptive
  • Observational, analytic: ecological correlation
11
Hierarchy for causation
  • RCT
  • Non randomised experimental
  • Cohort
  • Case control
  • .

12
Descriptive
  • Patterns of disease by time, place, person
  • Key measures (see the worked sketch below)
  • Incidence
  • Prevalence

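A minimal worked sketch in Python (all numbers invented) of the two key measures: incidence relates new cases to the population at risk over a period; prevalence relates all existing cases to the whole population at a point in time.

    # Hypothetical numbers only, to make the two definitions concrete.
    new_cases_in_year = 120         # new cases arising during one year
    population_at_risk = 100_000    # disease-free population at the start of the year
    existing_cases_today = 900      # all current cases (new and old) on the survey day
    total_population = 100_000

    incidence = new_cases_in_year / population_at_risk     # 120 per 100,000 per year
    prevalence = existing_cases_today / total_population   # 0.9% on the survey day

    print(f"Incidence:  {incidence * 100_000:.0f} per 100,000 per year")
    print(f"Prevalence: {prevalence * 100:.1f}%")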
13
Acceptance rates per million population (pmp) in the
UK 1982-90, England 1991-98
14
Incidence of chronic renal failure by age (Feest
TG et al)
15
(No Transcript)
16
(No Transcript)
17
Ecological
  • Exposure and disease measured in aggregate, not at
    the individual level
  • Correlate exposure and disease across areas/groups
  • Often use routine data
  • Hypothesis generating, but beware the ecological
    fallacy (a different relationship may hold between
    areas than within individuals; see the simulation
    sketch below)

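A small Python simulation (entirely made-up data) of the ecological fallacy described above: within every area the individual-level relationship is negative, yet the correlation between area-level averages comes out strongly positive, i.e. of the opposite sign.

    import random
    import statistics

    random.seed(1)

    def corr(xs, ys):
        # Pearson correlation from first principles (standard library only)
        mx, my = statistics.mean(xs), statistics.mean(ys)
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
        return num / den

    within_dx, within_dy, area_means = [], [], []
    for area in range(10):
        level = area * 2.0                                 # areas differ in background level
        xs = [level + random.gauss(0, 1) for _ in range(200)]
        ys = [level - 0.8 * (x - level) + random.gauss(0, 1) for x in xs]
        mx, my = statistics.mean(xs), statistics.mean(ys)
        within_dx += [x - mx for x in xs]                  # deviations from own area mean
        within_dy += [y - my for y in ys]
        area_means.append((mx, my))

    print("within-area (individual) correlation: ", round(corr(within_dx, within_dy), 2))
    print("between-area (ecological) correlation:",
          round(corr([m[0] for m in area_means], [m[1] for m in area_means]), 2))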
18
(No Transcript)
19
Within- and between-place relationships of
opposite sign (diagram plotting effect against
exposure, showing the between-place relationship and
the within-place relationships pointing in opposite
directions)
20
Cross sectional
  • Survey a population
  • Measure exposure and disease
  • At one point in time
  • Prevalent, not incident, cases
  • No temporal relationship can be established
  • Widely used for biochemical/pathophysiological/
    lifestyle measures

21
Prevalence of CKD in NHANES III (Coresh)
22
Case control
  • Find new cases of disease
  • Then find controls with no disease
  • Does the frequency of exposure differ between cases
    and controls? (worked odds-ratio sketch below)
  • Question: good for rare disease or rare exposure?
  • Prone to bias

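A minimal sketch in Python, with invented counts, of the usual case-control summary: the odds ratio of exposure in cases versus controls, with an approximate 95% confidence interval (Woolf's method).

    import math

    # Invented 2x2 table: exposure among 100 cases and 100 controls.
    cases_exposed, cases_unexposed = 40, 60
    controls_exposed, controls_unexposed = 20, 80

    odds_ratio = (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)

    # Approximate 95% CI on the log scale (Woolf's method).
    se_log_or = math.sqrt(1 / cases_exposed + 1 / cases_unexposed +
                          1 / controls_exposed + 1 / controls_unexposed)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

    print(f"OR = {odds_ratio:.2f} (95% CI {lower:.2f} to {upper:.2f})")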
23
Case control (schematic): from the research
population, identify new cases of disease and, over
the same time period, controls without the disease,
then establish whether each was exposed or not
24
Socio-economic status and kidney disease
  • Case-control study of pre-ESRD cases in Sweden,
    SCr greater than 300, men (Fored)
  • Risk 1.6 in unskilled vs professional men
  • Risk 1.3 for under 9 yrs of education vs university
    education
  • Adjusted for age, sex, BMI, smoking, alcohol, drugs

25
Cohort study
  • Define a population group
  • Measure exposure, follow up over time to see who
    gets the disease
  • Can be prospective or retrospective (esp.
    occupational/clinical)
  • Exposure can be an intervention
  • Cohort could be people with disease followed to
    determine prognostic factors
  • Good for rare exposures (relative-risk sketch below)

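A minimal sketch in Python (counts invented) of the basic cohort comparison: cumulative incidence in the exposed and unexposed groups, summarised as a relative risk and a risk difference.

    # Invented follow-up results for an exposed and an unexposed group.
    exposed_total, exposed_cases = 1000, 50
    unexposed_total, unexposed_cases = 2000, 40

    risk_exposed = exposed_cases / exposed_total          # 5.0% over follow-up
    risk_unexposed = unexposed_cases / unexposed_total    # 2.0% over follow-up

    relative_risk = risk_exposed / risk_unexposed         # 2.5
    risk_difference = risk_exposed - risk_unexposed       # 3 per 100

    print(f"RR = {relative_risk:.2f}, risk difference = {risk_difference:.3f}")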
26
Cohort (schematic): sample from the research
population, classify as exposed or not exposed, and
follow over time to see who develops new disease
27
Proteinuria and progression to ESRD (Iseki)
28
Hypertension and CKD: MRFIT study (after Klag)
29
Cohort as prognostic study: survival on RRT by
deprivation quintile
30
Cohort vs case control
  • Cohort
  • Good for
  • Rare exposures
  • Temporal effect
  • BUT problems
  • Rare disease, long time until disease
  • Loss to follow-up
  • Case control
  • Good for
  • Rare diseases
  • Multiple exposures
  • BUT problems
  • selection of cases/controls
  • recall of exposures
  • Can only study 1 disease at a time

31
Experimental
  • Randomised trials
  • Compare subjects receiving the intervention with a
    control group who don't
  • Compare outcomes in the 2 groups
  • If the two groups are similar in all respects other
    than the treatment, then any difference in outcome
    is assumed to be due to the treatment difference
  • Intervention can be anything: drugs, surgery,
    counselling, information giving, tests,
    organisation of care, screening ..

32
Clinical trial (schematic): a sample is randomised to
the experimental intervention or the comparison
intervention, and outcomes (improved / not improved)
are compared between the two arms
33
Random concealed allocation
  • Why?
  • Groups should differ only by our intervention of
    interest, and not by other factors that affect
    outcome
  • Random allocation distributes these factors, known
    and unknown, by chance
  • Randomise patient, person, GP surgery, school ...
  • Not the same as random sampling
  • Conceal allocation from clinicians
    (allocation-sequence sketch below)

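One way an allocation sequence can be produced is sketched below in Python (a hypothetical two-arm trial, permuted blocks of 4 so the arms stay balanced). In practice the sequence would be generated and held away from the recruiting clinician so that the next allocation cannot be foreseen (concealment).

    import random

    random.seed(2007)  # fixed seed so the sketch is reproducible

    def permuted_block_sequence(n_blocks, block=("A", "A", "B", "B")):
        # Shuffle each block of 4 independently, then join the blocks up.
        sequence = []
        for _ in range(n_blocks):
            b = list(block)
            random.shuffle(b)
            sequence.extend(b)
        return sequence

    allocations = permuted_block_sequence(n_blocks=5)   # 20 participants
    print(allocations)
    print("A:", allocations.count("A"), "B:", allocations.count("B"))  # always 10 and 10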
34
Control or comparison group
  • Need to test the hypothesis that the new therapy is
    better or worse than
  • nothing
  • the best existing therapy
  • If there is no best existing therapy, use a placebo
    control where possible
  • inert, same properties as the therapy under
    investigation, i.e. patient/doctor can't tell the
    difference
  • Blinding gives unbiased assessment
  • double blind = both patient and clinician blind, if
    a placebo is used

35
Follow-up
  • Minimise loss to follow-up
  • Analyse by intention to treat, i.e. from the point
    of randomisation
  • Even if there are protocol deviations, withdrawals,
    etc.
  • Can bias the result if you don't, e.g. if treatment
    leads to side effects in sicker patients (sketch
    below)

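A minimal sketch in Python (invented records) of the intention-to-treat principle: every randomised participant is analysed in the arm they were allocated to, whatever treatment they actually received; the "as-treated" comparison is shown alongside to illustrate how it can mislead.

    participants = [
        {"allocated": "new", "received": "new",      "improved": True},
        {"allocated": "new", "received": "standard", "improved": False},      # switched arms
        {"allocated": "new", "received": "new",      "improved": True},
        {"allocated": "standard", "received": "standard", "improved": False},
        {"allocated": "standard", "received": "standard", "improved": True},
        {"allocated": "standard", "received": "new",      "improved": True},  # crossover
    ]

    def improvement_rate(group, key):
        arm = [p for p in participants if p[key] == group]
        return sum(p["improved"] for p in arm) / len(arm)

    # Intention to treat: group by allocation (from the point of randomisation).
    print("ITT        new:", improvement_rate("new", "allocated"),
          " standard:", improvement_rate("standard", "allocated"))
    # As treated: group by treatment received (prone to bias).
    print("as-treated new:", improvement_rate("new", "received"),
          " standard:", improvement_rate("standard", "received"))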
36
Normal vs low hematocrit values during anaemia
therapy in CKD (Besarab, NEJM 1998; 339: 584)
  • N = 1233 patients on RRT with heart failure or
    coronary heart disease from 51 US centres
  • Intervention: haematocrit target 39-45% (normal) or
    27-33% (low)
  • 2-weekly adjustment of EPO dose
  • Baseline: age 65, 44% black, HCT 31%
  • Main outcomes: death or non-fatal MI
  • Normal 183/618 deaths vs low 150/615 deaths
  • Stopped early, at 2.5 yrs (crude figures worked
    through below)

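For orientation only, simple arithmetic on the event counts quoted above (the trial itself used time-to-event methods, so these crude proportions are purely illustrative):

    normal_deaths, normal_n = 183, 618   # normal-haematocrit arm
    low_deaths, low_n = 150, 615         # low-haematocrit arm

    risk_normal = normal_deaths / normal_n   # ~29.6%
    risk_low = low_deaths / low_n            # ~24.4%

    print(f"normal target: {risk_normal:.1%}, low target: {risk_low:.1%}")
    print(f"crude risk ratio: {risk_normal / risk_low:.2f}")   # ~1.21, favouring the low target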
37
Why can't you experiment?
  • Inappropriate for the question
  • Timescales too long to reach answer
  • Unethical
  • No clinical uncertainty

38
Non-randomised experimental studies
  • Non-randomised designs:
  • before and after
  • before and after with concurrent control
  • concurrent control only
  • Note: observational intervention studies,
    e.g. CABG vs PTCA, HD vs PD

39
Measurement
  • Measure exposure and outcome in samples of the
    target population
  • Measure close to the truth (accurate) with minimal
    uncertainty (precise)
  • 2 types of error (simulation sketch below)
  • Random
  • Due to chance; imprecise measure
  • Overcome by more precise measures and increased
    sample size
  • Systematic
  • Biased; how to overcome?

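A small simulation in Python (invented numbers) of the two error types: random error scatters repeated readings around the truth and averages out as more readings are taken; systematic error (bias) shifts every reading and does not average out however many readings are taken.

    import random
    import statistics

    random.seed(0)
    true_value = 120.0  # e.g. a true systolic blood pressure in mmHg

    def mean_reading(n, bias=0.0, noise_sd=5.0):
        readings = [true_value + bias + random.gauss(0, noise_sd) for _ in range(n)]
        return statistics.mean(readings)

    print("random error only, n = 5:   ", round(mean_reading(5), 1))
    print("random error only, n = 500: ", round(mean_reading(500), 1))            # close to 120
    print("biased device (+8), n = 500:", round(mean_reading(500, bias=8.0), 1))  # close to 128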
40
Bias and precision (diagram: bias vs random
variation, showing the four combinations - unbiased
and precise, biased but precise, unbiased but
imprecise, biased and imprecise)
41
Measurement example: blood pressure
  • Sources of random error
  • In individual
  • In observer
  • In machine
  • Sources of systematic error
  • In observer
  • In machine

42
Study validity
  • Internal: the result could be true, but consider
  • Chance
  • Adequately powered, i.e. big enough to address the
    question with reasonable confidence (sample size
    sketch below)
  • Bias
  • Any process at any stage of the study that produces
    results that depart from the truth
  • Selection and Information
  • Confounding
  • The alternative explanation
  • External
  • How generalisable?

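A minimal sketch in Python of what "adequately powered" means in practice: a standard normal-approximation sample size calculation for comparing two proportions (the target proportions, alpha and power below are invented for illustration).

    from statistics import NormalDist

    def n_per_group(p1, p2, alpha=0.05, power=0.80):
        # Normal-approximation sample size for comparing two proportions.
        z_a = NormalDist().inv_cdf(1 - alpha / 2)
        z_b = NormalDist().inv_cdf(power)
        p_bar = (p1 + p2) / 2
        numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5 +
                     z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
        return numerator / (p1 - p2) ** 2

    # e.g. to detect a fall in event rate from 30% to 20% with 80% power at 5% alpha
    print(round(n_per_group(0.30, 0.20)))   # roughly 290-300 per group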
43
Bias
  • Any process at any stage of the study that produces
    results that depart from the truth
  • Two main types
  • Selection
  • Information

44
Selection bias
  • Identification of subjects into the study is biased
  • E.g. non-responders: are they the same as
    responders?
  • In case-control studies it has a specific meaning:
  • where the choice of cases or controls is dependent
    on exposure
  • Less of a problem in cohort studies

45
Information bias
  • Measurement
  • tool OR frequency of measurement systematically
    different by exposure group
  • Observer
  • awareness of exposure affects assessment of disease,
    or vice versa
  • Subject
  • recall: patients with disease may be more likely to
    remember exposure than the non-exposed, especially
    if aware of the hypothesis, or may tell the
    observer what they want to hear

46
How to deal with bias
  • Get design right
  • no test can compensate

47
Confounding
A confounder provides an alternative explanation
for an observed association between exposure and
outcome.
(Diagram: confounder linked to both exposure and
outcome.)
A confounder is a factor that is independently
associated with both exposure and outcome. If
recognised and measured it can be controlled
for. It's not an effect of the exposure. (Worked
example below.)
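A minimal worked example in Python (all counts invented) of what the diagram describes: within each stratum of the confounder the exposure-outcome odds ratio is 1.0, but because the confounder is associated with both exposure and outcome, the crude (unstratified) odds ratio looks strongly positive.

    def odds_ratio(a, b, c, d):
        # a, b = exposed cases/non-cases; c, d = unexposed cases/non-cases
        return (a * d) / (b * c)

    # Stratum 1 (confounder present): exposure and outcome both common.
    s1 = (40, 40, 10, 10)
    # Stratum 2 (confounder absent): exposure and outcome both rare.
    s2 = (2, 18, 8, 72)

    crude = tuple(x + y for x, y in zip(s1, s2))  # collapse over the confounder

    print("stratum 1 OR:", odds_ratio(*s1))              # 1.0
    print("stratum 2 OR:", odds_ratio(*s2))              # 1.0
    print("crude OR:    ", round(odds_ratio(*crude), 2))  # ~3.3, a spurious association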
48
What might confound..?
  • Alcohol and pharyngeal cancer?
  • Herpes simplex and Ca cervix?
  • Number of kids and breast cancer..

49
How to deal with confounding
  • Consider potential confounders
  • At the design stage:
  • matching
  • restriction
  • randomisation
  • Measure them as well as the main exposure, then at
    analysis:
  • stratification
  • multivariate analysis
  • standardisation (usually age, sex); sketch below

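A minimal sketch in Python (rates and weights invented) of direct standardisation: each population's age-specific rates are applied to a common standard age structure, so that differences in age mix no longer drive the comparison.

    # Standard population weights and age-specific rates (per 1000 per year) - all invented.
    standard_weights = {"0-44": 0.55, "45-64": 0.30, "65+": 0.15}

    rates_pop_a = {"0-44": 0.5, "45-64": 3.0, "65+": 12.0}
    rates_pop_b = {"0-44": 0.6, "45-64": 3.2, "65+": 13.0}

    def directly_standardised_rate(rates):
        return sum(standard_weights[age] * rates[age] for age in standard_weights)

    print("population A:", round(directly_standardised_rate(rates_pop_a), 2), "per 1000")
    print("population B:", round(directly_standardised_rate(rates_pop_b), 2), "per 1000")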
50
External validity
  • Are the results generalisable to other
    populations/settings?
  • Can only generalise a valid result
  • therefore go for a design and sample in which you
    can achieve internal validity
  • Epidemiological study populations can differ from
    the target population
  • Judgement with respect to other studies of similar
    design, and differences between populations

51
Some research pitfalls
  • Not an original or relevant question
  • Wrong design for the question
  • Unfeasible, e.g. recruitment problems, follow-up too
    short
  • Too small - underpowered
  • Invalid measures of exposure/outcome, or unimportant
    measures of outcome
  • Not enough thought given to minimising bias and
    dealing with confounding