Sensitivity and Specificity Formulas:
Sensitivity (true positive rate) measures the proportion of actual positives correctly identified. Specificity (true negative rate) measures the proportion of actual negatives correctly identified. These metrics are fundamental for evaluating diagnostic tests.
The calculator uses these formulas:
    Sensitivity = TP / (TP + FN)
    Specificity = TN / (TN + FP)
Where: TP = true positives, FN = false negatives, TN = true negatives, and FP = false positives from the 2×2 contingency table.
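For illustration, a minimal Python sketch of the same calculation (the function name and example counts are hypothetical, not the calculator's own code):

    def sensitivity_specificity(tp, fn, tn, fp):
        """Compute sensitivity and specificity from 2x2 contingency table counts."""
        sensitivity = tp / (tp + fn)  # proportion of actual positives correctly detected
        specificity = tn / (tn + fp)  # proportion of actual negatives correctly excluded
        return sensitivity, specificity

    # Hypothetical counts: 90 TP, 10 FN, 80 TN, 20 FP
    sens, spec = sensitivity_specificity(tp=90, fn=10, tn=80, fp=20)
    print(f"Sensitivity: {sens:.2f} ({sens * 100:.0f}%)")  # 0.90 (90%)
    print(f"Specificity: {spec:.2f} ({spec * 100:.0f}%)")  # 0.80 (80%)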
Explanation: Sensitivity focuses on how well the test detects true cases, while specificity focuses on how well it excludes non-cases.
Details: Highly sensitive tests are good for ruling out disease when negative (the SnNout mnemonic), while highly specific tests are good for ruling in disease when positive (SpPin). These metrics help determine which diagnostic test is appropriate for a given clinical situation.
Tips: Enter the counts from your 2×2 contingency table. All values must be non-negative integers. Results are presented as proportions (0 to 1), which can be multiplied by 100 to express them as percentages.
Q1: What's the difference between sensitivity and PPV?
A: Sensitivity is the probability that someone who actually has the condition tests positive, while positive predictive value (PPV) is the probability that someone who tests positive actually has the condition.
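A small sketch with hypothetical counts shows how the two can diverge for the same test:

    # Hypothetical counts: the test catches most true cases, but false positives
    # drag down the PPV even though sensitivity is high.
    tp, fn, fp = 90, 10, 60
    sensitivity = tp / (tp + fn)  # 90 / 100 = 0.90: 90% of true cases test positive
    ppv = tp / (tp + fp)          # 90 / 150 = 0.60: only 60% of positive results are true cases
    print(f"Sensitivity: {sensitivity:.2f}, PPV: {ppv:.2f}")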
Q2: Can a test have 100% sensitivity and specificity?
A: In practice, no. There's typically a trade-off between sensitivity and specificity when setting diagnostic thresholds.
Q3: What's considered good sensitivity/specificity?
A: Generally >90% is excellent, 80-90% is good, but acceptable levels depend on the clinical context and consequences of false results.
Q4: How does prevalence affect these metrics?
A: Sensitivity and specificity are prevalence-independent, but predictive values are affected by disease prevalence.
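A rough sketch of this effect, applying Bayes' theorem to an assumed test (90% sensitivity, 80% specificity) at two illustrative prevalences:

    def predictive_values(sensitivity, specificity, prevalence):
        """Compute PPV and NPV from fixed test characteristics and prevalence (Bayes' theorem)."""
        ppv = (sensitivity * prevalence) / (
            sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
        npv = (specificity * (1 - prevalence)) / (
            specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
        return ppv, npv

    # Same test, two different disease prevalences.
    for prev in (0.01, 0.30):
        ppv, npv = predictive_values(0.90, 0.80, prev)
        print(f"prevalence={prev:.2f}  PPV={ppv:.2f}  NPV={npv:.2f}")
    # At 1% prevalence the PPV is only about 0.04; at 30% it rises to about 0.66,
    # even though sensitivity and specificity are unchanged.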
Q5: What about ROC curves?
A: ROC curves visualize the trade-off between sensitivity and specificity across different diagnostic thresholds.
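As a rough illustration (the scores and labels below are made up), each threshold yields one ROC point, plotted as (1 − specificity, sensitivity):

    # Sweep a threshold over hypothetical test scores and record an ROC point at each cut-off.
    scores = [0.1, 0.3, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]   # hypothetical test scores
    labels = [0,   0,   1,    0,   1,   0,   1,   1  ]   # 1 = diseased, 0 = healthy

    for threshold in sorted(set(scores)):
        predicted = [1 if s >= threshold else 0 for s in scores]
        tp = sum(p == 1 and y == 1 for p, y in zip(predicted, labels))
        fn = sum(p == 0 and y == 1 for p, y in zip(predicted, labels))
        tn = sum(p == 0 and y == 0 for p, y in zip(predicted, labels))
        fp = sum(p == 1 and y == 0 for p, y in zip(predicted, labels))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        print(f"threshold={threshold:.2f}  sensitivity={sens:.2f}  1-specificity={1 - spec:.2f}")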