1
Review of Statistics
Édouard Manet, The Rue Mosnier with Flags, 1878
2
  • Statistical Analysis
  • Introduction
  • This presentation describes types of statistical
    analysis typically used in sociology.
  • Mostly, it provides an interpretive rather than
    mathematical description of statistics.
  • Students enrolled in Sociology 202 should have
    either taken or be currently enrolled in their
    statistics class.

3
  • Descriptive Statistics
  • Data Reduction
  • The first step in quantitative data analysis is
    to calculate descriptive statistics about
    variables.
  • The researcher calculates statistics such as the
    mean, median, mode, range, and standard
    deviation.
  • Also, the researcher might choose to collapse
    response categories for variables.

4
  • Descriptive Statistics
  • Measures of Association
  • Next, the researcher calculates measures of
    association: statistics that indicate the
    strength of a relationship between two variables.
  • Measures of association rely upon the basic
    principle of proportionate reduction in error
    (PRE).

5
  • Descriptive Statistics
  • Measures of Association
  • PRE represents how much better one would be at
    guessing the outcome of a dependent variable by
    knowing the value of an independent variable.
  • For example: How much better could I predict
    someone's income if I knew how many years of
    formal education they have completed? If the
    answer to this question is 37% better, then the
    PRE is 37% (see the formula below).
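  • In formula form (a standard definition of PRE,
    not specific to this deck), where E1 is the
    number of prediction errors made without knowing
    the independent variable and E2 is the number
    made while knowing it:
  • PRE = (E1 - E2) / E1
  • A PRE of 37% means that knowing education
    eliminates 37% of the errors we would make
    predicting income without it.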

6
  • Descriptive Statistics
  • Measures of Association (Continued)
  • Statistics are designated by Greek letters.
  • Different statistics are used to indicate the
    strength of association between variables
    measured at different levels of data.
  • Strength of association for nominal-level
    variables is indicated by λ (lambda).
  • Strength of association for ordinal-level
    variables is indicated by γ (gamma).
  • Strength of association for interval-level
    variables is indicated by correlation (r).

7
  • Descriptive Statistics
  • Measures of Association (Continued)
  • Covariance is the extent to which two variables
    change with respect to one another.
  • As one variable increases, the other variable
    either increases (positive covariance) or
    decreases (negative covariance).
  • Correlation is a standardized measure of
    covariance.
  • Correlation ranges from -1 to 1, with values
    closer to -1 or 1 indicating a stronger
    relationship (see the sketch below).
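  • A minimal sketch in Python, using hypothetical
    education and income data (not from the deck),
    showing that correlation is covariance rescaled
    by the two standard deviations:

```python
import numpy as np

# Hypothetical data: years of education and income (in $1,000s)
education = np.array([10, 12, 12, 14, 16, 16, 18, 20])
income = np.array([28, 35, 31, 40, 50, 46, 60, 68])

# Covariance: how the two variables co-vary about their means
cov = np.cov(education, income, ddof=1)[0, 1]

# Correlation: covariance standardized by both standard deviations,
# so it always falls between -1 and 1
r = cov / (education.std(ddof=1) * income.std(ddof=1))

print(f"covariance = {cov:.2f}")
print(f"correlation r = {r:.3f}")  # same as np.corrcoef(education, income)[0, 1]
```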

8
  • Descriptive Statistics
  • Measures of Association (Continued)
  • Technically, covariance is the extent to which
    two variables co-vary about their means.
  • If a person's years of formal education are above
    the mean of education for all persons and his/her
    income is above the mean of income for all
    persons, then this data point would indicate
    positive covariance between education and income.

9
  • Descriptive Statistics
  • Regression Analysis
  • Regression analysis is a procedure for estimating
    the outcome of a dependent variable based upon
    the value of an independent variable.
  • Thus, for just two variables, regression analysis
    is the same as analysis using the covariance or
    correlation between the variables.

10
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • Typically, regression analysis is used to
    simultaneously examine the effects of more than
    one independent variable on a dependent variable.
  • One might want to know, for example, how well
    income can be predicted by knowing the education,
    age, race, and sex of the respondent.

11
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • The statistic used to summarize the total PRE of
    multiple variables is the correlation squared, or
    R-square.
  • R-square represents the total variance explained
    in the dependent variable.
  • It represents how well we did in explaining the
    topic we wanted to explain.

12
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • R-square ranges from 0 to 1, wherein the larger
    the value of R-square, the greater the predictive
    ability of the independent variables.
  • The predictive ability of each variable is
    indicated by the statistic β (beta).

13
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • Consider this equation:
  • y = a + β1x1 + β2x2 + β3x3 + β4x4 + e
  • where
  • y = the value of the dependent variable,
  • a = the intercept, or starting point of y,
  • βi = the strength of the effect of xi on y,
  • e = the amount of error in the prediction of y.
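  • A minimal sketch of estimating a and the betas
    with ordinary least squares (the simulated data
    and all names are illustrative, not the deck's):

```python
import numpy as np

# Simulate four independent variables (x1..x4) and an outcome y
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
true_betas = np.array([0.42, -0.20, 0.10, 0.30])
y = 1.5 + X @ true_betas + rng.normal(scale=0.5, size=100)  # a = 1.5, plus error e

# Prepend a column of ones so the intercept a is estimated with the betas
design = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

print("intercept a =", round(coef[0], 3))
print("betas       =", np.round(coef[1:], 3))  # close to the true values
```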

14
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • β is called a parameter estimate. It represents
    the amount of change in y for a one-unit change
    in x.
  • For example, a beta of .42 would mean that for
    each one unit change in x (e.g., education) we
    would expect to observe a .42 unit change in y.

15
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • For the example we discussed earlier, we can
    rewrite the equation as
  • Income = a + β1(education) + β2(age) + β3(race)
    + β4(sex) + e
  • where each of the betas (β) ranges in size from
    -∞ to +∞ to let us know the direction and
    strength of the relationship between each
    independent variable and income.

16
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • In standardized form, this equation is
  • Income = β1(education) + β2(age) + β3(race) +
    β4(sex) + e
  • where each of the standardized betas (β) ranges
    in size from -1 to 1.
  • Note that the intercept (a) is omitted because,
    in standardized form, it equals zero.
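  • As an aside (a standard relationship, not stated
    in the deck): a standardized beta is the
    unstandardized estimate rescaled by the two
    standard deviations,
  • β(standardized) = b × (sx / sy)
  • where b is the unstandardized estimate, sx is the
    standard deviation of the independent variable,
    and sy is the standard deviation of the dependent
    variable.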

17
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • Each of the beta terms in these equations
    represents the partial effect of the variable on
    the dependent variable, meaning the effect of the
    independent variable on y after controlling for
    the effects of all other variables on y.
  • The partial effects of independent variables in
    explaining the variance in a dependent variable
    can be visualized by thinking about the
    contributions of each player on a basketball team
    to the overall team performance.

18
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • Suppose the team wins, 65-60. The player at
    center is the leading scorer with 18 points.
  • So, we might say that the center is the most
    important contributor to the win. "Not so fast,"
    says regression analysis.
  • Regression analysis also wants to know the
    contributions of the other players on the team
    and how they helped the center.

19
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • Suppose that the point guard had 10 assists, 8 of
    which went to the center. Eight times the point
    guard drove the lane and then passed the ball to
    the center for an easy layup, accounting for 16
    of the 18 points scored by the center.
  • To best understand the contributions of the
    center, we would calculate the contributions of
    the center while controlling for the
    contributions of the point guard.

20
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • Similarly, regression analysis shows the
    contribution to R-square for each variable, while
    controlling for the contributions of the other
    variables.
  • The contribution of each variable in explaining
    variance in the dependent variable is summarized
    as a partial beta coefficient.

21
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • In summary, regression analysis provides two
    indications of our ability to explain how
    societies work:
  • The R-Square shows how much variance is explained
    in the dependent variable.
  • The standardized betas (parameter estimates)
    show the partial effects of the independent
    variables in explaining the dependent variable.

22
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • The graphic shown on the next slide shows a
    diagram of a regression of education (x) on
    income (y).
  • The regression equation (Y2) is shown as a
    blue-colored line. The intercept (a) is located
    where the regression line meets the y-axis.
  • The slope of the line is the beta coefficient
    (β), which equals .42.

23
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • [Graphic: plot of income (y) against education
    (x), showing the fitted regression line, its
    intercept (a), and slope β = .42]

24
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • We would interpret the results of the regression
    equation shown on the preceding slide in this
    manner: A one-unit change in education will
    result in a .42 unit change in income.
  • We can adjust this interpretation into the actual
    units of education and income as we measured them
    in our study, to state, for example, "Each
    additional year of education results in an
    additional $4,200 in annual income" (assuming
    income is recorded in units of $10,000, so that
    .42 units equals $4,200).

25
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • One should be cautious about interpreting the
    results of regression analysis:
  • A high R-square value does not necessarily mean
    that the researcher can be confident of knowing
    cause and effect.
  • Predictions regarding the dependent variable are
    valid only within the range of the independent
    variables used in the regression analysis.

26
  • Descriptive Statistics
  • Regression Analysis (Continued)
  • The preceding discussion has focused upon linear
    regression.
  • Regression lines can be curvilinear or some
    combination of straight and curved lines.

27
  • Inferential Statistics
  • Introduction
  • Descriptive statistics attempt to explain or
    predict the values of a dependent variable given
    certain values of one or more independent
    variables.
  • Inferential statistics attempt to generalize the
    results of descriptive statistics to a larger
    population of interest.

28
  • Inferential Statistics
  • Introduction (Continued)
  • To make inferences from descriptive statistics,
    one has to know the reliability of these
    statistics.
  • In the same sense that the distribution of one
    variable has a standard deviation, a parameter
    estimate has a standard error: the spread of the
    estimate's distribution about its mean with
    respect to the normal curve.

29
  • Inferential Statistics
  • Introduction (Continued)
  • To better understand the concepts of standard
    deviation and standard error, and why these
    concepts are important to our course, please
    review the presentation regarding standard error.
  • Presentation on Standard Error.

30
  • Inferential Statistics
  • Tests of Statistical Significance
  • If one assumes a normal distribution, then one
    can examine parameters and their standard errors
    with respect to the normal curve to evaluate
    whether an observed parameter differs from zero
    by some set margin of error.
  • Assume that the researcher sets the probability
    of a Type I error (i.e., the probability of
    assuming causality when there is none) at 5%.
  • That is, we set our margin of error very low,
    just 5%.

31
  • Inferential Statistics
  • Tests of Statistical Significance (Continued)
  • To evaluate statistical significance, the
    researcher compares a parameter estimate to a
    zero point on a normal curve (its center).
  • The question becomes: Is this parameter estimate
    sufficiently large, given its standard error,
    that, within a 5% probability of error, we can
    state that it is not equal to zero?

32
  • Inferential Statistics
  • Tests of Statistical Significance (Continued)
  • To achieve a probability of error of 5%, the
    parameter estimate must be almost two (i.e.,
    1.96) standard deviations from zero, given its
    standard error.
  • Sometimes in sociological research, scholars say
    "two standard deviations" in referring to a 5%
    error rate. Most of the time, they are more
    precise and state 1.96 (see the check below).
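  • A quick check of the 1.96 figure (a sketch using
    scipy, not part of the original deck):

```python
from scipy.stats import norm

# A two-tailed 5% error rate leaves 2.5% in each tail of the normal curve
critical = norm.ppf(0.975)
print(round(critical, 2))  # 1.96
```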

33
  • Inferential Statistics
  • Tests of Statistical Significance (Continued)
  • Consider this example:
  • Suppose the unstandardized estimate of the effect
    of self-esteem on marital satisfaction equals
    3.50 (i.e., each additional unit of self-esteem
    on its scale results in 3.50 additional units of
    marital satisfaction on its scale).
  • Suppose the standard error of this estimate
    equals 1.20.

34
  • Inferential Statistics
  • Tests of Statistical Significance (Continued)
  • If we divide 3.50 by 1.20, we obtain the ratio
    2.92. This figure is called a t-ratio (or
    t-value).
  • The figure 2.92 means that the estimate 3.50 is
    2.92 standard deviations from zero.
  • Based upon our set margin of error of 5% (which
    is equivalent to 1.96 standard deviations), we
    can state that at p < .05, the effect of
    self-esteem on marital satisfaction is
    statistically significant.
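  • A minimal sketch of the same computation (the
    two-tailed p-value line uses scipy and assumes a
    normal approximation):

```python
from scipy.stats import norm

estimate = 3.50   # effect of self-esteem on marital satisfaction
std_error = 1.20

t_ratio = estimate / std_error         # 2.92 standard errors from zero
p_value = 2 * (1 - norm.cdf(t_ratio))  # two-tailed probability of error

print(f"t-ratio = {t_ratio:.2f}, p = {p_value:.4f}")  # p < .05 -> significant
```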

35
  • Inferential Statistics
  • Tests of Statistical Significance (Continued)
  • The t-ratio is the ratio of a parameter estimate
    to its standard error.
  • The t-ratio equals the number of standard
    deviations that an estimate lies from the zero
    point (i.e., center) of the normal curve.

36
  • Inferential Statistics
  • Tests of Statistical Significance (Continued)
  • Why do we state that we need to be 1.96
    standard deviations from the zero point of the
    normal curve?
  • Recall the area beneath the normal curve:
  • The first standard deviation covers 34.1% of the
    observations on one side of the zero point.
  • The second standard deviation covers the next
    13.6% of the observations.

37
  • Inferential Statistics
  • Tests of Statistical Significance (Continued)
  • Let's assume for a moment that our estimate is
    greater than the real effect of self-esteem on
    marital satisfaction.
  • Then, at 1.96 standard deviations, we have
    covered the 50% probability below the real
    effect, and we have covered 34.1% + 13.4%
    probability above this effect.
  • In total, we have accounted for 97.5% of the
    probability that our estimate does not equal
    zero.

38
  • Inferential Statistics
  • Tests of Statistical Significance (Continued)
  • That leaves 2.5% of the probability above the
    real estimate.
  • But we have to recognize that our estimate might
    have fallen below the real estimate.
  • So, we have the probability of error on both
    sides of reality.
  • 2.5% + 2.5% equals 5%.
  • This is our set margin of error!

39
  • Inferential Statistics
  • Tests of Statistical Significance (Continued)
  • Thus, inferential statistics are calculated with
    respect to the properties of the normal curve.
  • There are other types of distributions besides
    the normal curve, but the normal distribution is
    the one most often used in sociological analysis.

40
  • Inferential Statistics
  • Tests of Statistical Significance (Continued)
  • If we know the properties of the normal curve,
    and we have calculated an estimate of a
    parameter, and we know the standard error of this
    estimate (i.e., the range of values that the
    estimate might take), then we can calculate
    statistical significance.
  • Recall that statistical significance does not
    necessarily equal substantive significance.

41
  • Inferential Statistics
  • Chi-Square
  • Chi-square is a test of independence between two
    variables.
  • Typically, one is interested in knowing whether
    an independent variable (x) has some effect on
    a dependent variable (y).
  • Said another way, we want to know if y is
    independent of x (i.e., whether it goes its own
    way regardless of what happens to x).
  • Thus, we might ask, "Is church attendance
    independent of the sex of the respondent?"

42
  • Inferential Statistics
  • Chi-Square (Continued)
  • Scenario 1: Consider these data on sex of the
    subject and church attendance:
  •           Church Attendance
  • Sex       Yes    No    Total
  • Male       28    12     40
  • Female     42    18     60
  • Total      70    30    100

43
  • Inferential Statistics
  • Chi-Square (Continued)
  • Note that
  • 70% of all persons attend church.
  • 70% of men attend church.
  • 70% of women attend church.
  • Thus, we can say that church attendance is
    independent of the sex of the respondent because,
    if 70% of all persons attend church, then, with
    independence, we expect 70% of men and 70% of
    women to attend church, and they do.

44
  • Inferential Statistics
  • Chi-Square (Continued)
  • Scenario 2: Now, suppose we observed this pattern
    of church attendance:
  •           Church Attendance
  • Sex       Yes    No    Total
  • Male       20    20     40
  • Female     50    10     60
  • Total      70    30    100

45
  • Inferential Statistics
  • Chi-Square (Continued)
  • Note that
  • 70% of all persons attend church.
  • Therefore, if church attendance is independent of
    the sex of the respondent, then we expect 70% of
    the men and 70% of the women to attend church.
  • But they do not.
  • Instead, 50% of the men attend church and 83.3%
    of the women attend church.

46
  • Inferential Statistics
  • Chi-Square (Continued)
  • So, for this second set of data, is church
    attendance independent of the sex of the
    respondent?
  • Let's begin by calculating how much error we
    would make by assuming men and women behave as
    expected.
  • That is, for each cell of the table, we will
    calculate the difference between the observed and
    expected values (the expected values are computed
    below).
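  • Each expected cell count (a standard calculation,
    not shown in the deck) is
  • expected = (row total × column total) / N
  • For example, the expected number of church-going
    males is (40 × 70) / 100 = 28.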

47
  • Inferential Statistics
  • Chi-Square (Continued)
  • Observed minus expected values for each cell
    (observed counts come from Scenario 2; expected
    counts are those under independence):
  •           Church Attendance
  • Sex       Yes              No
  • Male      20 - 28 = -8     20 - 12 = 8
  • Female    50 - 42 = 8      10 - 18 = -8

48
  • Inferential Statistics
  • Chi-Square (Continued)
  • Note that in each cell, if we assume
    independence, we make a mistake equal to 8
    (sometimes positive and sometimes negative).
  • If we add all of our mistakes, we obtain a sum of
    zero, which we know is not true.
  • So, we will square each mistake to make every
    number positive.

49
  • Inferential Statistics
  • Chi-Square (Continued)
  • How badly did we do in each cell?
  • To know the magnitude of our mistake in each
    cell, we will divide the size of the mistake by
    the expected value in the cell (a PRE measure).
  • The following table shows our proportionate
    degree of error in each cell and our total amount
    of proportionate error for the entire table.

50
  • Inferential Statistics
  • Chi-Square (Continued)
  • Proportionate error is calculated for each cell:
  •           Church Attendance
  • Sex       Yes                  No
  • Male      (-8)² / 28 = 2.29    (8)² / 12 = 5.33
  • Female    (8)² / 42 = 1.52     (-8)² / 18 = 3.56
  • The total of all proportionate error = 12.70.
  • This is the chi-square value for this table.
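  • The same computation as a sketch in Python
    (scipy's chi2_contingency; correction=False
    disables Yates' continuity correction so the
    result matches the hand calculation):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Scenario 2 observed counts: rows = male/female, columns = attends/does not
observed = np.array([[20, 20],
                     [50, 10]])

chi2, p, dof, expected = chi2_contingency(observed, correction=False)

print(f"chi-square = {chi2:.2f}")       # 12.70
print(f"degrees of freedom = {dof}")    # 1
print(f"p-value = {p:.4f}")             # well below .05
print("expected counts:\n", expected)   # 28, 12 / 42, 18
```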

51
  • Inferential Statistics
  • Chi-Square (Continued)
  • Our chi-square value of 12.70 gives us a number
    that summarizes our proportionate amount of
    mistakes for the whole table.
  • Is this number big enough to indicate a lack of
    independence between church attendance and sex of
    the respondent?
  • To make this assessment, we compare our observed
    chi-square with a standardized distribution of
    PRE measures: the chi-square distribution.

52
  • Inferential Statistics
  • Chi-Square (Continued)
  • The chi-square distribution looks like a lopsided
    version of the normal curve.
  • To compare our observed chi-square with this
    distribution, we need some indication of where we
    should be on the distribution, as we did with
    standard errors on the normal curve.
  • On the chi-square distribution, we are "allowed"
    a certain amount of error depending upon our
    degrees of freedom.

53
  • Inferential Statistics
  • Chi-Square (Continued)
  • To understand degrees of freedom, reconsider our
    table of observed church attendance:
  •           Church Attendance
  • Sex       Yes    No    Total
  • Male       20    20     40
  • Female     50    10     60
  • Total      70    30    100
  • Given the marginal totals, once we fill in one
    cell with the correct number, all the other cells
    are given.

54
  • Inferential Statistics
  • Chi-Square (Continued)
  • A degree of freedom is the number of correct
    guesses one must make to reach a point where all
    the other cells are given.
  • Our table has one degree of freedom.
  • The more correct guesses one must make, the
    greater the degrees of freedom and the more
    proportionate amount of error one is allowed
    within the chi-square distribution before
    claiming a lack of independence.

55
  • Inferential Statistics
  • Chi-Square (Continued)
  • The amount of chi-square we are allowed, at a
    probability of error set to 5%, for one degree of
    freedom, equals 3.841.
  • Our chi-square exceeds this amount. Thus, we can
    claim a lack of independence between church
    attendance and sex of the subject at a
    probability of error of less than 5%.

56
  • Inferential Statistics
  • Chi-Square (Continued)
  • Are you wondering where the number 3.841 comes
    from? It is 1.96 squared (see the check below).
  • Remember 1.96? It is the number of standard
    deviations within the normal curve that indicates
    a 5% Type I error rate.
  • The t-ratios for the effects of the independent
    variables in regression analysis each had one
    degree of freedom.
  • So, we are working with the same principles we
    used for the normal curve, but with a different
    distribution: the chi-square distribution.
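  • A quick verification (a scipy sketch, not from
    the deck):

```python
from scipy.stats import chi2, norm

# The 5% critical value of chi-square with one degree of freedom...
print(round(chi2.ppf(0.95, df=1), 3))   # 3.841
# ...is the square of the familiar 1.96 from the normal curve
print(round(norm.ppf(0.975) ** 2, 3))   # 3.841
```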

57
  • Inferential Statistics
  • Some Words of Caution
  • Recognize that statistical significance does not
    necessarily mean that one has substantive
    significance.
  • Statistical significance refers to mistakes made
    from sampling error only.
  • Tests of statistical significance depend upon
    assumptions about sampling and distributions of
    data, which are not always met in practice.

58
  • Other Multivariate Techniques
  • Path Analysis
  • Path analysis is the simultaneous calculation of
    regression coefficients within a complex model of
    direct and indirect relationships.
  • The earlier example of an elaboration model
    regarding the success of women-owned businesses
    is an instance of path analysis.
  • Path analysis is a very powerful tool for
    examining cause and effect within a complex
    theoretical model.

59
  • Other Multivariate Techniques
  • Time-Series Analysis
  • Time-series analysis uses comparisons of
    statistics and/or parameter estimates across time
    to learn how changes in the independent
    variable(s) affect changes in the dependent
    variable(s).
  • Time-series analysis, when the data are
    available, can be a powerful tool for gaining a
    stronger indication of cause and effect than one
    learns from a cross-sectional analysis.

60
  • Other Multivariate Techniques
  • Factor Analysis
  • Factor analysis indicates the extent to which a
    set of variables measures the same underlying
    concept.
  • This procedure assesses the extent to which
    variables are highly correlated with one another
    compared with other sets of variables.
  • Consider the table of correlations (i.e., a
    correlation matrix) on the following slide:

61
  • Other Multivariate Techniques
  • Factor Analysis (Continued)
  •        X1    X2    X3    X4    X5    X6
  • X1     1    .52   .60   .21   .15   .09
  • X2    .52    1    .59   .12   .13   .11
  • X3    .60   .59    1    .08   .10   .10
  • X4    .21   .12   .08    1    .72   .70
  • X5    .15   .13   .10   .72    1    .73
  • X6    .09   .11   .10   .70   .73    1

62
  • Other Multivariate Techniques
  • Factor Analysis (Continued)
  • Note that variables X1-X3 are moderately
    correlated with one another, but have weak
    correlations with variables X4-X6.
  • Similarly, variables X4-X6 are moderately
    correlated with one another, but have weak
    correlations with variables X1-X3.
  • The figures in this table indicate that variables
    X1-X3 go together and variables X4-X6 go
    together.

63
  • Other Multivariate Techniques
  • Factor Analysis (Continued)
  • Factor analysis would separate variables X1-X3
    into "Factor 1" and variables X4-X6 into "Factor
    2," as sketched below.
  • Suppose variables X1-X3 were designed by the
    researcher to measure self-esteem and variables
    X4-X6 were designed to measure marital
    satisfaction.
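  • A minimal sketch (scikit-learn on simulated data;
    the factor labels and loadings are illustrative,
    not the deck's):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulate six variables driven by two latent factors:
# X1-X3 load on factor 1 ("self-esteem"),
# X4-X6 load on factor 2 ("marital satisfaction")
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(2, 500))
X = np.column_stack(
    [f1 + rng.normal(scale=0.7, size=500) for _ in range(3)]
    + [f2 + rng.normal(scale=0.6, size=500) for _ in range(3)]
)

fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
# Loadings: X1-X3 should load heavily on one factor, X4-X6 on the other
print(np.round(fa.components_.T, 2))
```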

64
  • Other Multivariate Techniques
  • Factor Analysis (Continued)
  • The researcher could use the results of factor
    analysis, including the statistics produced by
    it, to evaluate the construct validity of using
    X1-X3 to measure self-esteem and using X4-X6 to
    measure marital satisfaction.
  • Thus, factor analysis can be a useful tool for
    confirming the validity of measures of latent
    variables.

65
  • Other Multivariate Techniques
  • Factor Analysis (Continued)
  • Factor analysis can also be used for exploring
    groupings of variables.
  • Suppose a researcher has a list of 20 statements
    that measure different opinions about same-sex
    marriage.
  • The researcher might wonder whether the 20
    opinions reflect a smaller number of basic
    opinions.

66
  • Other Multivariate Techniques
  • Factor Analysis (Continued)
  • Factor analysis of responses to these statements
    might indicate, for example, that they can be
    reduced into three latent variables, related to
    religious beliefs, beliefs about civil rights,
    and beliefs about sexuality.
  • Then, the researcher can create scales of the
    grouped variables to measure religious beliefs,
    civil beliefs, and beliefs about sexuality to
    examine support for same-sex marriage.

67
  • Other Multivariate Techniques
  • Analysis of Variance
  • Analysis of variance (ANOVA) examines whether
    the mean value for one group differs from that of
    another group.
  • Is the mean income for males, for example,
    statistically different from the mean income for
    females?

68
  • Other Multivariate Techniques
  • Analysis of Variance (Continued)
  • For examining mean differences across just one
    other variable, the researcher uses one-way
    ANOVA, which is equivalent to a t-test.
  • For two or more other variables, the researcher
    uses two-way ANOVA. The researcher might be
    interested, for example, in knowing how mean
    incomes differ based upon sex of subject and
    level of education.

69
  • Other Multivariate Techniques
  • Analysis of Variance (Continued)
  • The logic of a statistical test of a difference
    in means is identical to that of testing whether
    an estimate differs from zero, except that the
    comparison point is the mean of the other group
    rather than zero.
  • Rather than using just the estimate and its
    standard error for a single group, the procedure
    is to use the estimates and standard errors of
    two groups to assess statistical significance.
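  • One common form of this comparison (a standard
    formula for independent groups, not shown in the
    deck) divides the difference in means by the
    combined standard error:
  • t = (mean1 - mean2) / √(SE1² + SE2²)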

70
  • Other Multivariate Techniques
  • Analysis of Variance (Continued)
  • Suppose we wanted to know if the mean height of
    male ISU students differs significantly from the
    mean height of female ISU students.
  • Rather than comparing the mean height of male ISU
    students to a hypothetical zero point, we would
    compare it to the mean height of female ISU
    students, where this comparison takes place
    within the context of standard errors and the
    shape of the normal curve.

71
  • Other Multivariate Techniques
  • Analysis of Variance (Continued)
  • Suppose we find in our sample of 100 female ISU
    students that their mean height equals 65 inches
    with a standard error of 1.5 inches. These
    figures indicate that most females (68.2%) are
    63.5 to 66.5 inches in height.
  • Suppose that a sample of 100 male ISU students
    shows a mean height for them of 70 inches with a
    standard error of 2.0 inches.

72
  • Other Multivariate Techniques
  • Analysis of Variance (Continued)
  • Let's set our margin of error (probability of a
    Type I error) at 5%, meaning that we are looking
    at 1.96 standard deviations on the normal curve
    to indicate statistical significance.
  • Here is our question: If we allow the mean of
    females to grow by 1.96 standard deviations and
    the mean of males to shrink by 1.96 standard
    deviations, will they reach one another?

73
  • Other Multivariate Techniques
  • Analysis of Variance (Continued)
  • The answer is no, not even close. The t-ratio
    (number of standard deviations on the normal
    curve needed to join the two groups) equals 26.7.
  • We can state that the difference in mean heights
    between ISU males and females is statistically
    significant at p < .05 (actually, considerably
    less than that, but that was our test margin).

74
  • Summary of Data Analysis
  • Sociologists have at their disposal a wide range
    of statistical techniques to help them understand
    relationships among their variables of interest.
  • These techniques, when used properly, can help
    sociologists understand human societies for the
    purpose of improving human well-being.
  • Students who want to be professional sociologists
    must learn statistics and the proper applications
    of these statistics to data analysis.
  • Enjoy!

75
Questions?