This will be a short post. Apologies for my extended absence; without going into detail, family issues have been taking my attention. I plan on getting back to a once or twice a week post schedule moving forward. Anyway. Onwards!
I would like to take a few paragraphs to address the concept of a “false positive” on a diagnostic test and what it really means, because there is a certain amount of confusion among the general public on this point, and in case anyone reading shares that confusion, I’d like to clear it up.
A false positive is what happens when a patient receives a positive test result for a pathology they do not, in fact, have. Similarly, a false negative happens when the pathology is present but the test does not detect it, often because the test is not sensitive enough. (There are other reasons for incorrect results, but the vast majority occur when the testing methodology is either too sensitive, potentially giving a false positive, or not sensitive enough, which would result in a false negative.) None of this is news.
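For anyone who likes seeing the bookkeeping spelled out, here is a minimal sketch in Python using entirely made-up counts (none of these numbers come from any real test); it just shows how sensitivity, specificity, and the false positive rate are conventionally computed from the four possible outcomes:

```python
# Purely illustrative counts for a hypothetical test on a hypothetical population.
true_positives = 90    # sick patients the test correctly flags
false_negatives = 10   # sick patients the test misses
true_negatives = 950   # healthy patients the test correctly clears
false_positives = 50   # healthy patients the test wrongly flags

# Sensitivity: of the people who HAVE the pathology, how many test positive?
sensitivity = true_positives / (true_positives + false_negatives)   # 0.90

# Specificity: of the people who DON'T have it, how many test negative?
specificity = true_negatives / (true_negatives + false_positives)   # 0.95

# The "false positive rate" is defined against the healthy group,
# not against the pool of positive results.
false_positive_rate = 1 - specificity                               # 0.05

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"false positive rate={false_positive_rate:.2f}")
```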
The important thing to consider when gauging the accuracy of a test is the prevalence of the pathology in the population being tested. As we will see, a positive result from even a very good test is more trustworthy when the pathology is highly prevalent. In other words, when testing for something rare, the chance that your positive result is incorrect is significantly higher than it would be in a high-prevalence population.
The reason for this comes down to what is usually referred to as the positive predictive value of the test. A common misperception that I have encountered many times is that, for example, a “1% false positive rate” means that out of 100 positive results, one will be false and the other 99 will be accurate. It does not. Rather, it means (roughly) that out of 100 people tested, one will be a false positive, regardless of how many true positives turn up in that 100. So if a disease is at 10% prevalence in a population (which is pretty high), we could expect about 10 true positives and 1 false positive, which means a ratio of roughly 1:10 false to true positives, or about a 10% chance your positive is wrong. If that prevalence increases to 20%, the ratio is now about 1:20 false to true, so there is only about a 5% chance your positive is wrong. It should be clear by now that as the prevalence increases, the trustworthiness of a positive result increases as well. Now go the other way and suppose the prevalence is only 1%. Your ratio is roughly 1:1, and the “positive predictive value” of the test (loosely defined as the likelihood your positive is real) plummets to about 50%.
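For readers who like to check the arithmetic, here is a rough sketch in Python. It assumes a hypothetical test with perfect sensitivity and a 1% false positive rate; both numbers are illustrative assumptions, not properties of any real assay:

```python
# A rough sketch of how positive predictive value (PPV) depends on prevalence.
def positive_predictive_value(prevalence, sensitivity=1.0, false_positive_rate=0.01):
    """Probability that a positive result reflects real disease."""
    true_pos = sensitivity * prevalence                 # expected true positives per person tested
    false_pos = false_positive_rate * (1 - prevalence)  # expected false positives per person tested
    return true_pos / (true_pos + false_pos)

for prevalence in (0.20, 0.10, 0.01):
    ppv = positive_predictive_value(prevalence)
    print(f"prevalence {prevalence:>4.0%}: "
          f"chance a positive is real = {ppv:.0%}, "
          f"chance it is wrong = {1 - ppv:.0%}")
```

The output lines up with the back-of-the-envelope ratios above: roughly a 4% chance your positive is wrong at 20% prevalence, about 8% at 10%, and about 50% at 1%. The small differences come from applying the 1% rate only to the disease-free share of the population and dividing by the full pool of positives.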
So, long story short: “false positive” does not mean what many people think it means, and the reliability of a positive result is, by definition, highly dependent on the population being tested rather than just the individual.
Of course, false positive rates vary depending on the test and the condition being tested for, but I hope this little post clarifies that, in most cases, false positives are far MORE common than we have been led to believe. Whether this was intentional I leave as an exercise for the reader. 😉