Statistical Hypothesis Testing

The use of the statistical hypothesis testing procedure to determine type I and type II errors was linked to the measurement of sensitivity and specificity in clinical trial tests and experimental pathogen detection techniques. A theoretical analysis of how these types of errors are established was made and compared to the determination of false positives, false negatives, true positives and true negatives. Experimental laboratory methods used to detect Cryptosporidium spp. were used to highlight the relationship between hypothesis testing, sensitivity, specificity and predictive values.
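The correspondence between hypothesis-testing errors and the four diagnostic outcomes can be sketched as follows. This is an illustrative framing (the choice of null hypothesis is an assumption, not taken from the source):

```python
# Mapping between hypothesis-testing errors and diagnostic-test outcomes,
# assuming the null hypothesis H0 is "the sample contains no pathogen".
outcomes = {
    # (truth, decision): diagnostic label
    ("H0 true",  "reject H0"):      "false positive (type I error)",
    ("H0 true",  "fail to reject"): "true negative",
    ("H0 false", "reject H0"):      "true positive",
    ("H0 false", "fail to reject"): "false negative (type II error)",
}

for (truth, decision), label in outcomes.items():
    print(f"{truth:9s} + {decision:14s} -> {label}")
```

Rejecting a true null corresponds to a false positive, and failing to reject a false null corresponds to a false negative, which is the link the abstract draws.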

Source: Owusu-Ansah et al.

Medical Research and Binary Classification Test

Sensitivity and specificity are two terms widely used in medical research; they are statistical measures of the performance of a binary classification test. In clinical research, the sensitivity of a medical test is the probability of its giving a 'positive' result when the patient is indeed positive, and its specificity is the probability of its giving a 'negative' result when the patient is indeed negative. Wrongly identifying a healthy person as sick, or a sick person as healthy, is closely related to the concept of type I and type II errors in hypothesis testing. It was observed that the sensitivity of a test is equal to the power of the test in hypothesis testing.
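The two probabilities described above follow directly from the four counts of a binary test. A minimal sketch (the counts are made-up illustrative numbers, not data from the source):

```python
# Sensitivity and specificity from the four counts of a binary test.

def sensitivity(tp, fn):
    """P(test positive | truly positive) = TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """P(test negative | truly negative) = TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical counts: 90 true positives, 10 false negatives,
# 80 true negatives, 20 false positives.
tp, fn, tn, fp = 90, 10, 80, 20
print(sensitivity(tp, fn))  # -> 0.9
print(specificity(tn, fp))  # -> 0.8
```

Note that both quantities condition on the patient's true status, which is why they do not by themselves predict disease in an individual patient.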

Source: Sharma et al.

Diagnostic and Statistical Tests

Diagnostic tests guide physicians in assessment of clinical disease states, just as statistical tests guide scientists in the testing of scientific hypotheses.

Sensitivity and specificity are properties of diagnostic tests and are not predictive of disease in individual patients. Positive and negative predictive values are predictive of disease in patients and are dependent on both the diagnostic test used and the prevalence of disease in the population
studied. These concepts are best illustrated by study of a two by two table of possible outcomes of testing, which shows that diagnostic tests may lead to correct or erroneous clinical conclusions.
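The dependence of predictive values on prevalence can be made concrete with Bayes' theorem. A minimal sketch, assuming a test with sensitivity 0.9 and specificity 0.8 (illustrative values, not from the source):

```python
# Positive and negative predictive values from sensitivity, specificity,
# and disease prevalence, via Bayes' theorem.

def ppv(sens, spec, prev):
    """P(disease | positive test)."""
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens, spec, prev):
    """P(no disease | negative test)."""
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

# The same test applied in a low- vs a higher-prevalence population:
print(round(ppv(0.9, 0.8, 0.01), 3))  # rare disease: most positives are false
print(round(ppv(0.9, 0.8, 0.30), 3))  # common disease: positives are more trustworthy
```

The test's intrinsic properties are unchanged between the two calls; only the prevalence differs, yet the positive predictive value shifts substantially, which is the point the passage makes.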

In a similar manner, hypothesis testing may or may not yield correct conclusions. A two by two table of possible outcomes shows that two types of errors in hypothesis testing are possible.
- One can falsely conclude that a significant difference exists between groups (type I error). The probability of a type I error is α (alpha).
- One can falsely conclude that no difference exists between groups (type II error). The probability of a type II error is β (beta). The consequence and probability of these errors depend on the nature of the research study.
- Statistical power indicates the ability of a research study to detect a significant difference between populations, when a significant difference truly exists.
- Power equals 1 − β. Because hypothesis testing yields "yes" or "no" answers, confidence intervals can be calculated to complement the results of hypothesis testing.
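The power relation in the bullets above can be computed directly for a simple case. A minimal sketch for a two-sided z-test of a mean under the normal approximation (the effect size, standard deviation, and sample size are illustrative assumptions):

```python
# Power = 1 - beta for a two-sided z-test of a mean (normal approximation),
# using only the Python standard library.
from statistics import NormalDist

def z_test_power(effect, sd, n, alpha=0.05):
    """Probability of rejecting H0 when the true mean shift is `effect`."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)      # two-sided rejection threshold
    shift = effect / (sd / n ** 0.5)       # true shift in standard-error units
    # P(reject H0 | true shift): probability mass beyond either threshold
    return 1 - z.cdf(z_crit - shift) + z.cdf(-z_crit - shift)

# Hypothetical study: detect a 0.5-unit shift, sd = 1, n = 32, alpha = 0.05.
print(round(z_test_power(effect=0.5, sd=1.0, n=32), 2))
```

Increasing `n` or `effect` raises the power, while tightening `alpha` lowers it, matching the usual trade-off between type I and type II error rates.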

Finally, just as some abnormal laboratory values can be ignored clinically, some statistical differences may not be relevant clinically.

Source: Gaddis GM et al.

References