Errata and comments on
Ann Aschengrau and George Seage III
Essentials of Epidemiology in Public Health
Sudbury, MA: Jones and Bartlett
2nd edition, 2008 (http://publichealth.jbpub.com/aschengrau/2e/)


The following errata have been noted (most are courtesy of Olivia Carter-Pokras, Ph.D. and her students):

  • In the discussion of standardization using the example of Alaska and Florida, page 70 of the textbook gives the excess crude mortality rate as 500.87 per 100,000 (per year), but page 72 gives the value as 580.87 per 100,000 (per year). Since the crude mortality rates are 990.98 and 490.12 per 100,000 (per year), the correct excess is 500.87 (which differs slightly from the actual difference, presumably because the printed crude mortality rates are rounded). Thus, the first "8" in 580.87 is a typo. (Thanks to Stefanie Cousins, summer 2009 Internet course, for spotting this typo.)
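    A quick check of the subtraction, in Python, using the crude rates as printed on page 70 (Florida is the higher of the two; the small shortfall from 500.87 reflects rounding in the printed rates):

      florida_rate = 990.98   # crude death rate per 100,000 per year, as printed
      alaska_rate = 490.12    # crude death rate per 100,000 per year, as printed
      print(round(florida_rate - alaska_rate, 2))   # 500.86 -- vs. 500.87 in the text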
  • Chapter 3, Pages 69-70, last words of text, running onto the next page:
    Reads: dividing the total number of cases in the population total number of individuals
    Should Read: dividing the total number of cases in the population by the total number of individuals
  • Tables 3-7 and 3-8: Death rate per 100,000 in Florida for total population should be 990.99 to be consistent with Table 3-6

    Paragraph above "summary" on page 72:
    "580.87 per 100,000" should be "500.87 per 100,000" to be consistent with page 70

    Page 72, summary section, line 3:
    Reads: which are based in the difference between...
    Should read: which are based on the difference between...

    Practice problem 6:
    The answers to A and D are given as "true". In question 6A, that statement can be read (1) as referring to typical risks of CHD and lung cancer and typical risk ratios for smoking, or (2) as a general statement that a disease with a much higher incidence will necessarily lead to a greater risk difference. The latter proposition is not true in general, although it would often be the case. Since the text lists the answer as true, I assume the authors mean for us to use typical incidences (e.g., about 0.01 per year for CHD in male nonsmokers aged 60+ years (my guess) and about 0.0001 for lung cancer in male nonsmokers aged 60+ years - also a guess). Typical RRs for heavy male smokers would be about 1.5 for CHD and about 10.0 for lung cancer. With these numbers, the risk difference for CHD (1.5 x 0.01 - 0.01 = 0.005) is greater than that for lung cancer (10 x 0.0001 - 0.0001 = 0.001 - 0.0001 = 0.0009). To have it come out the other way with these incidences, you'd need risk ratios like 1.1 for CHD (risk difference 0.011 - 0.01 = 0.001) and 15 for lung cancer (risk difference 0.0015 - 0.0001 = 0.0014). (The arithmetic for 6A and 6D is laid out in the sketches following this item.)

    In question 6D, "excess risk" is defined at the top of page 68 ("excess relative risk") as RR-1. So if excess risk = RR-1 = 15%, then RR = 1.15. The percentage of cases among persons exposed to unchlorinated water that are attributable to the lack of chlorination (or that would be prevented by avoiding unchlorinated water) would be (RR-1)/RR = 0.15/1.15 = 0.13, or 13%.
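    For 6A, the arithmetic above can be laid out as a short Python sketch (the incidences and risk ratios are my guesses, as noted, not figures from the text):

      # Guessed baseline risks and risk ratios; risk difference = RR x baseline - baseline.
      chd_risk, chd_rr = 0.01, 1.5          # CHD in older male nonsmokers; RR for heavy smokers (guesses)
      lung_risk, lung_rr = 0.0001, 10.0     # lung cancer in older male nonsmokers; RR (guesses)
      chd_rd = chd_rr * chd_risk - chd_risk        # 0.005
      lung_rd = lung_rr * lung_risk - lung_risk    # about 0.0009
      print(chd_rd > lung_rd)   # True: the commoner disease gives the larger risk difference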
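    And for 6D, a similar sketch of the excess-relative-risk arithmetic:

      # Excess relative risk of 15% means RR - 1 = 0.15, so RR = 1.15;
      # the proportion of exposed cases attributable to the exposure is (RR - 1) / RR.
      rr = 1 + 0.15
      print(round((rr - 1) / rr, 2))   # 0.13, i.e., about 13%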
  • Chapter 5, Page 122: prevalence is misspelled in Figure 5-9.
  • Chapter 6, Page 146, first full paragraph, line 5:
    Reads: the characteristics of population from which...
    Should read: the characteristics of the population from which...
  • Page 164, summary section, first word:
    Reads: Epidemiolgists
    Should Read: Epidemiologists
  • Chapter 7, Page 191, second paragraph, 9th line:
    Reads: accuracy all of the non-fatal...
    Should read: accuracy of all non-fatal...

Chapter 8, Cohort Studies

The following are comments, rather than errata.

  • Chapter 8, Page 201, "natural experiments":
    A&S appear to apply the term "natural experiments" to most any cohort study "because the investigator acts as a disinterested observer merely letting nature take its course." In my experience (and in, e.g., John Last's Dictionary of Epidemiology, 3rd ed.), "natural experiment" refers to a situation where the assignment of exposure appears to be effectively random. The classic example is John Snow's study comparing death rates from cholera between households supplied with water by the Southwark and Vauxhall Company and those supplied by the Lambeth Company (see pages 18-21 in A&S).

  • Chapter 8, Page 202, first paragraph:
    Although I expect that the authors would agree, many epidemiologists would regard the statement "it is perfectly ethical to conduct an observational study by comparing women who choose to smoke during pregnancy with those who choose not to do so" as correct only if the investigators recommended that the study participants quit smoking and offered resources and/or referrals (e.g., see Norma Kanarek and Marty S. Kanarek, Smoking cessation in clinical trials and public health studies: a research ethical imperative. Ann Epidemiol Dec 2007;17(12):983-987). Similarly, both observational and experimental studies of HIV risk have traditionally counseled participants to use condoms even though fewer transmissions means less study power or having to recruit more participants.

  • Chapter 8, Page 202, 2nd paragraph:
    "A classical cohort study examines ... a single exposure." I've seen similar statements in other sources, but I'm not sure that the concept is a useful one. Some cohort studies are defined on the basis of exposure status, but other cohort studies, including some "classics" like the Framingham Study and other cardiovascular disease cohort studies conducted inthe mid-20th century (e.g., Evans County, Tecumseh, Chicago Western Electric, ...) were created from communities or occupational settings rather than defined by "an exposure". Baseline examinations in these studies were extensive, providing data on a large number of "exposures". In my experience, what defines a cohort study is the identification of a collection of people who are at risk for the occurrence of one or more outcomes of interest and the follow-up of these people to detect such occurrences. A key characteristic is that exposures are ascertained at or before the beginning of the follow-up period, so that there is some confidence that the measured exposure(s) reflect the person's status before the disease has occurred. Often, physiologic specimens are stored, permitting later identification of exposures that can nevertheless be linked to the pre-disease condition. There is a class of cohort studies (A&S appear to call these special cohorts - page 210) that are designed around a group of people with an exposure. An example is studies of asbestos workers. The exposures studied in these instances are often rare in the general population, so that the identification of exposed persons is a fundamental step in designing the study.

  • Chapter 11, Page 192, beginning:
    Reads: the crude relative risk of dementia among participants with diabetes was 4.0 while the age-adjusted relative risk was 2.0. Thus, the magnitude of confounding was [(4.0 - 2.0)/2.0] x 100 or 100%, indicating that a large amount of confounding was present.

    Should read: the crude relative risk of dementia among participants with diabetes was 3.5 ... [(3.5 - 2.0) / 2.0] x 100 or 75% (or 3.45 ... 72.7% if the calculation is performed before rounding). (Thanks to Sarah Somers, fall 2011, for bringing this inaccuracy to my attention.)
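    Written out as a short Python check, using the rounded values from the correction above:

      # Magnitude of confounding: percent change from the adjusted to the crude estimate.
      crude_rr = 3.5       # crude relative risk (rounded)
      adjusted_rr = 2.0    # age-adjusted relative risk
      print((crude_rr - adjusted_rr) / adjusted_rr * 100)   # 75.0 percent, not 100%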
  • Chapter 12 (Random Error), Pages 325-326:
    Reads: "SD is the standard deviation of the sample mean" (p325, 5th line from bottom)
    Actually, SD represents the standard deviation of the population. A standard error is the standard deviation of the distribution of a statistical estimator, such as a mean, proportion, rate, risk difference, or risk ratio. If such an estimator is calculated repeatedly for a large number of samples from the same population, then the estimator will have a probability distribution whose mean provides the best estimate of the population value. The standard deviation of that distribution is the standard error of the estimator. When the estimator is the mean, then that standard deviation is called the standard error of the mean, SEM. The formula for SEM is the population standard deviation divided by the square root of the sample size. When the population standard deviation is unknown, it is estimated from the sample. Similarly, the standard error of a proportion (e.g., a prevalence) estimated from a sample of size n is the square root of p(1-p)/n. The standard error describes how far away the proportion calculated from the sample is likely to be from the population proportion it estimates.
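    A minimal Python sketch of the two formulas just described; the standard deviation, sample size, and proportion below are made-up values for illustration:

      import math

      sd, n = 12.0, 100                    # hypothetical population SD and sample size
      sem = sd / math.sqrt(n)              # standard error of the mean = 1.2
      p = 0.25                             # hypothetical sample proportion (e.g., a prevalence)
      se_p = math.sqrt(p * (1 - p) / n)    # standard error of the proportion, about 0.043
      print(sem, round(se_p, 3))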
  • Confidence interval example calculation on page 331:
    The 95% confidence limits for the cumulative incidence of mortality among subjects from Steubenville are given as 20.4% to 22.6% in the last paragraph of page 331. The correct values are 19.3% to 23.7%. If one calculates the confidence interval but forgets to multiply the standard error by 1.96, one obtains results very close to the values in the text. Thanks to Vanessa Miller and Christine Owre for spotting the error. The error was not corrected in the 3rd edition.
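    For readers who want to reproduce the general form of the calculation, here is a short Python sketch; the cumulative incidence and sample size are illustrative, not the values underlying the textbook example:

      import math

      p, n = 0.20, 1000                    # illustrative cumulative incidence and sample size
      se = math.sqrt(p * (1 - p) / n)      # standard error of the proportion
      print(round(p - 1.96 * se, 3), round(p + 1.96 * se, 3))   # 0.175 to 0.225 (correct form)
      print(round(p - se, 3), round(p + se, 3))                 # 0.187 to 0.213 (too narrow: 1.96 omitted)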
  • Chapter 16 (Screening), Page 439 (question 4d) and 477 (answer):
    The answer provided for question 4D (specificity) in chapter 16 has an error. Page 476 shows a correct table and the correct calculated value for specificity, but the calculation should read: 95,900/99,100, not 95,900/96,000. The calculation shown (but not the result) is for negative predictive value. [Thanks to David Hill for spotting this error in the 2nd edition.]
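    A short Python sketch of the corrected calculation; the cell counts are inferred from the two fractions quoted above:

      tn = 95900               # true negatives
      fp = 99100 - tn          # false positives = 3,200 (persons without disease minus TN)
      fn = 96000 - tn          # false negatives = 100 (persons testing negative minus TN)
      specificity = tn / (tn + fp)   # 95,900 / 99,100, about 0.968
      npv = tn / (tn + fn)           # 95,900 / 96,000, about 0.999
      print(round(specificity, 3), round(npv, 3))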



    Vic Schoenbach, 5/26/2009, ..., 6/25/2013, 9/15,25/2013, 11/20/2014