Predictive value and combining screening tests

Victor J. Schoenbach, vjs@unc.edu
Predictive value

Predictive value calculator

(Change these numbers and see how the predictive value below changes.)
(Note: these cells are named, permitting the formulas below to use named
cell references.)

                            True status
    Test result      Cases    Noncases    Total
    Positive
    Negative
    Total

    Population size:
    Disease prevalence:
    Sensitivity:
    Specificity:
    Predictive value:

Prevalence and specificity are the main determinants of positive predictive
value. An easy way to see this algebraically is the following.

                                       Cases who test positive (true positives)
    Positive predictive value (PPV) = -----------------------------------------
                                                  All positive tests

    Cases who test positive = Sensitivity x prevalence
    Noncases who test positive = (1 - specificity) x (1 - prevalence)
    All positive tests = Cases who test positive (true positives)
                         + Noncases who test positive (false positives)

So:
                            Sensitivity x prevalence
    PPV = ----------------------------------------------------------------
           Sensitivity x prevalence + (1 - specificity) x (1 - prevalence)

In the usual screening situation, the disease is rare, say less than 1%. In
that case, (1 - prevalence) is close to 1, and Sensitivity x prevalence will
be less than the prevalence (or equal to the prevalence, if sensitivity =
100%). So positive predictive value will be approximately:

                   A small # less than the prevalence
    PPV = -------------------------------------------------------
           A small # less than the prevalence + (1 - specificity)

1 - specificity is the false positive rate, i.e., the proportion of noncases
who test positive, so positive predictive value is approximately:

                   A small # less than the prevalence
    PPV = ---------------------------------------------------------
           A small # less than the prevalence + false positive rate

So if the false positive rate is larger than the prevalence (not unusual for
a rare disease), the positive predictive value will necessarily be less than
50%, even with perfect sensitivity.

Try out this: change the false positive rate (1 - specificity) and see how
the predictive value responds.
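The derivation above can be checked numerically. Here is a minimal Python sketch of the calculator; the function name and example values are mine, not from the original spreadsheet, whose named cells play the role of the parameters:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """PPV = true positives / all positive tests, per the formula above."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Example: a rare disease (1% prevalence) screened with a very good test
# (95% sensitivity, 98% specificity). The false positive rate (2%) exceeds
# the prevalence, so PPV falls below 50% despite the high sensitivity.
print(round(positive_predictive_value(0.01, 0.95, 0.98), 3))  # 0.324
```

Even with perfect sensitivity (1.0), the same prevalence and specificity give a PPV of only about one third, matching the point made above.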
Combining screening tests in series or parallel

Two screening tests, whether identical or different, are said to be applied
in parallel if a positive result on either test is sufficient to prompt a
diagnostic workup (i.e., the combined result is called "positive"). Two
screening tests are said to be applied in series if both tests must be
positive in order to prompt action (the combined result is called
"positive").

For example, breast cancer screening frequently employs a combination of
mammography and breast physical exam applied in parallel. If either test is
positive, then further investigation is indicated. In contrast, HIV
screening generally employs a combination of ELISA and Western blot tests
applied in series. If the ELISA test is repeatedly positive (two ELISA tests
applied in series), then a Western blot test is given before making a
determination that HIV antibody is present (i.e., series testing of ELISA
and Western blot). Similarly, syphilis testing employs two tests in series.
Specimens that test positive with an RPR (rapid plasma reagin) or VDRL
(Venereal Disease Research Laboratory) test are evaluated with a
confirmatory FTA-ABS or MHA-TP test.

The overall sensitivity (sometimes called "net sensitivity") and overall
specificity (sometimes called "net specificity") for the two tests in
combination can be obtained using probability concepts.

If the two tests are labelled A and B, there are four possible results if
both are given: both may give the correct result, both may give an incorrect
result, or one test may give the correct result and the other an incorrect
result. These four possibilities are shown in the diagram below.

    (1) A correct, B incorrect      (3) Both A and B correct
    (2) Both A and B incorrect      (4) A incorrect, B correct

Correct classification of cases - combining sensitivities

Sensitivity evaluates the ability to identify cases. If the diagram shows
test results for cases, then the probability of a correct test result is the
sensitivity of the test.

If tests A and B are applied in parallel, so that a positive result on
either test causes the overall result to be classified as positive, then we
have two chances to identify each case. So the sensitivity for the
combination is represented by the total area of boxes 1, 3, and 4. When we
add the area where A is correct (boxes 1 and 3) to the area in which B is
correct (boxes 3 and 4), we are counting box 3 twice, so we need to subtract
it to avoid double-counting. We can write the combined sensitivity as:

    Combined sensitivity for A and B in parallel
      = A correct (1+3) + B correct (3+4) - Both A and B correct (3)
      = Sensitivity of A + Sensitivity of B
        - (Sensitivity of A x Sensitivity of B)

where each sensitivity is the probability of a correct test result (since
these are cases). (The joint probability of two independent events is the
product of the probabilities of each event.)

If A and B are applied in series, then only the cases that are correctly
classified by both tests (represented by box 3) will be termed "positive"
for the combined classification. In fact, if one test is negative, the
second test may not even be done, which we are counting here as "incorrect",
since the diagram represents cases. So the overall sensitivity of applying
tests A and B in series is represented by the area in box 3. Algebraically:

    Combined sensitivity for A and B in series
      = Sensitivity of A x Sensitivity of B

So series testing decreases sensitivity, and parallel testing increases
sensitivity. Here is a calculator to see these relations with numbers.
Change the values of the sensitivities in the shaded cells to see the
sensitivity of the two tests in combination.

    Sensitivity of test A:
    Sensitivity of test B:
    A & B combined in parallel:  A + B - A x B
    A & B combined in series:    A x B
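The two sensitivity formulas can be sketched briefly in Python (a sketch under the independence assumption stated above; the function names are mine, not the spreadsheet's):

```python
def sensitivity_parallel(sens_a, sens_b):
    """Positive if either test is positive: boxes 1, 3, and 4."""
    return sens_a + sens_b - sens_a * sens_b

def sensitivity_series(sens_a, sens_b):
    """Positive only if both tests are positive: box 3."""
    return sens_a * sens_b

# Two moderately sensitive tests (80% and 90%):
print(round(sensitivity_parallel(0.8, 0.9), 4))  # 0.98 - parallel raises sensitivity
print(round(sensitivity_series(0.8, 0.9), 4))    # 0.72 - series lowers it
```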
Correct classification of noncases - combining specificities

Specificity evaluates the ability to identify noncases. If the diagram shows
noncases, then the probability of a correct test result is the specificity
of the test.

If A and B are applied in parallel, then a positive result on either test
causes the overall result to be classified as positive. If the first test is
positive (i.e., incorrect), the second test may not even be done, since a
single positive test is sufficient to call the overall classification
positive. Since we are discussing noncases, the positive classification is
incorrect. In order to have a correct classification of noncases with two
tests read in parallel, both tests must be negative. So the overall
probability of correct classification of noncases (the overall specificity)
from applying A and B in parallel is represented by the area of box 3.
Algebraically:

    Combined specificity for A and B in parallel
      = Specificity of A x Specificity of B

If instead tests A and B are applied in series, so that a negative (correct)
result on either test causes the overall result to be classified as
negative, then we have two chances to identify each noncase. Thus, the
specificity of the combination is represented by the total area of boxes 1,
3, and 4. This area can be obtained algebraically as:

    Combined specificity for A and B in series
      = Specificity of A + Specificity of B
        - (Specificity of A x Specificity of B)

where, since we are focusing on noncases, each specificity is the
probability of a correct test result.

So series testing increases specificity, and parallel testing decreases
specificity. Change the values of the specificities in the shaded cells to
see the specificity of the two tests in combination.

    Specificity of test A:
    Specificity of test B:
    A & B combined in parallel:  A x B
    A & B combined in series:    A + B - A x B

Summary

Parallel testing with two tests gives us two chances to identify each case.
So parallel testing has higher sensitivity. Series testing with two tests
gives us two chances to identify each noncase. So series testing has higher
specificity.

                    Series testing                   Parallel testing
    Sensitivity     SensA x SensB                    SensA + SensB - SensA x SensB
    Specificity     SpecA + SpecB - SpecA x SpecB    SpecA x SpecB

    Observed prevalence (all positive tests / population; compare to cell D52)
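The summary table can be verified with a short Python sketch (names are mine; as throughout, the two tests are assumed independent):

```python
def combine(sens_a, spec_a, sens_b, spec_b, mode):
    """Net sensitivity and specificity for two independent screening tests.

    mode "series":   overall result positive only if both tests are positive.
    mode "parallel": overall result positive if either test is positive.
    """
    if mode == "series":
        return sens_a * sens_b, spec_a + spec_b - spec_a * spec_b
    if mode == "parallel":
        return sens_a + sens_b - sens_a * sens_b, spec_a * spec_b
    raise ValueError(mode)

# Two tests: A (sens 0.90, spec 0.95) and B (sens 0.85, spec 0.90).
sens_ser, spec_ser = combine(0.90, 0.95, 0.85, 0.90, "series")
sens_par, spec_par = combine(0.90, 0.95, 0.85, 0.90, "parallel")
print(round(sens_ser, 4), round(spec_ser, 4))  # series: lower sens, higher spec
print(round(sens_par, 4), round(spec_par, 4))  # parallel: higher sens, lower spec
```

For these inputs, series testing gives net sensitivity 0.765 and net specificity 0.995, while parallel testing gives 0.985 and 0.855, illustrating the trade-off in the summary.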
Keywords: sensitivity, specificity, predictive value, screening
Worksheets: PredictiveValue, CombinationsOfScreeningTests
Developed for EPID168, fall 2000; rev. 9/17/2004; correction plus some
rewording, 9/22/2005. Minor formatting change, 7/3/2006.
Victor J. Schoenbach, vjs@unc.edu
Department of Epidemiology, School of Public Health, University of North
Carolina at Chapel Hill
www.epidemiolog.net