By Mark Bunting CFA, ACA, CA(SA), Associate professor of finance at Rhodes University
Picture a stock exchange in which 1% of the shares are delisted for bankruptcy every year. Clearly, it would be useful to be able to avoid the catastrophic losses associated with these failures. So let us assume that you have a test for bankruptcy which seems pretty accurate. When a company really is heading for bankruptcy within the next year, the test flags it 98% of the time. And when a share is in fact safe, the test correctly gives it the all-clear in 97% of cases.
Now imagine that you have invested a sizeable amount in a specific company. You run the test on your investment, and the results indicate looming disaster. What is the probability that you will lose all your money? The correct answer is not 98%, or 97%, or anything close to those figures. It is a little less than 25%. In other words, a test that you thought was highly accurate turns out to be wrong three times out of four when it sounds the alarm. If it predicts bankruptcy, it is three times more likely than not that your investment will survive the year unscathed.
To many, these are puzzling and disconcerting results. There is little that is intuitive about a supposedly highly accurate test turning out to be wrong most of the time. Indeed, there is quite a lot going on here. Most obviously, it underlines one of the propositions of behavioural finance: that the majority of us are terrible statisticians. There is even a name for the cognitive affliction that causes us to make such hopelessly inaccurate inferences from situations like this: base rate neglect.
Essentially, the common mistake is to fixate on the fact that an accurate test has told us that a firm is a lost cause. Why would we doubt this? Surely the correct conclusion is clear, and bankruptcy is almost certain. However, framed in these terms, one crucial piece is missing from the picture: before any test is run at all, there is only a 1 in 100 chance that any given listing will fail.
Once this so-called base rate is included in the analysis, things become a little clearer. On a 1,000-share stock exchange, 10 listings will fail in a given year. Our test will correctly flag almost all of these. So far, so good. But it is the false positives that wreck the effectiveness of our procedure: the test will also incorrectly flag around 30 of the 990 safe securities as being at risk. So of the roughly 40 shares that receive a bankruptcy warning, only about 10 will actually fail, which is a shade under 25%.
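For readers who would like to check the arithmetic, here is a minimal sketch in Python. The 1,000-share exchange and the three rates are the assumptions used above; the variable names are simply illustrative.

    # Base rate and test characteristics from the example above
    n_shares = 1000          # hypothetical exchange
    base_rate = 0.01         # 1% of listings go bankrupt each year
    sensitivity = 0.98       # bankrupt firms correctly flagged
    specificity = 0.97       # safe firms correctly cleared

    failures = n_shares * base_rate                               # 10 firms fail
    true_positives = failures * sensitivity                       # ~9.8 correctly flagged
    false_positives = (n_shares - failures) * (1 - specificity)   # ~29.7 safe firms wrongly flagged

    # Probability of bankruptcy given a bankruptcy warning
    p_bankrupt_given_warning = true_positives / (true_positives + false_positives)
    print(round(p_bankrupt_given_warning, 3))                     # 0.248, a little under 25%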
Thomas Bayes, an 18th-century mathematician, is justifiably famous for giving us the theorem, applied in this column, that can help you overcome at least some of your computational inadequacies. Always assuming, of course, that you’re prepared to do a little statistics.
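For anyone who is, the theorem as applied in this column boils down to a few lines. This is an illustrative sketch using the same assumed rates as before.

    # Bayes' theorem: P(bankrupt | warning) =
    #   P(warning | bankrupt) * P(bankrupt)
    #   / [ P(warning | bankrupt) * P(bankrupt) + P(warning | safe) * P(safe) ]
    p_warning_given_bankrupt = 0.98   # sensitivity
    p_bankrupt = 0.01                 # base rate
    p_warning_given_safe = 0.03       # false positive rate (1 - specificity)
    p_safe = 0.99                     # 1 - base rate

    p_bankrupt_given_warning = (p_warning_given_bankrupt * p_bankrupt) / (
        p_warning_given_bankrupt * p_bankrupt + p_warning_given_safe * p_safe)
    print(round(p_bankrupt_given_warning, 3))   # 0.248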