Explaining the Base Rate Fallacy With Terrorist Statistics 🔎

Let us explain the base rate fallacy using an example involving terrorism statistics.

In a city of 1 million inhabitants let there be 100 terrorists and 999,900 non-terrorists. To simplify the example, it is assumed that all people present in the city are inhabitants. Thus, the base rate probability of a randomly selected inhabitant of the city being a terrorist is 0.0001, and the base rate probability of that same inhabitant being a non-terrorist is 0.9999. In an attempt to catch the terrorists, the city installs an alarm system with a surveillance camera and automatic facial recognition software.

The software has two failure rates of 1%:

  • The false negative rate: If the camera scans a terrorist, a bell will ring 99% of the time, and it will fail to ring 1% of the time.
  • The false positive rate: If the camera scans a non-terrorist, a bell will not ring 99% of the time, but it will ring 1% of the time.

Suppose now that an inhabitant triggers the alarm. What is the chance that the person is a terrorist? In other words, what is P(T | B), the probability that a terrorist has been detected given the ringing of the bell? Someone committing the ‘base rate fallacy’ would infer that there is a 99% chance that the detected person is a terrorist. Although the inference seems to make sense, it is actually bad reasoning: the calculation below will show that the chance the person is a terrorist is closer to 1%, not 99%.
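The correct answer follows directly from Bayes' theorem. As a minimal sketch (using only the numbers given in the example above), in Python:

```python
# Quantities from the example; P(T) is the base rate of terrorists.
p_t = 100 / 1_000_000        # P(T): base rate of being a terrorist
p_not_t = 1 - p_t            # P(not T)
p_b_given_t = 0.99           # P(B | T): bell rings for a terrorist
p_b_given_not_t = 0.01       # P(B | not T): false positive rate

# Bayes' theorem: P(T | B) = P(B | T) * P(T) / P(B),
# where P(B) is the total probability that the bell rings.
p_b = p_b_given_t * p_t + p_b_given_not_t * p_not_t
p_t_given_b = p_b_given_t * p_t / p_b
print(f"P(T | B) = {p_t_given_b:.4f}")  # ~0.0098, i.e. under 1%
```

Note how the tiny base rate P(T) = 0.0001 in the numerator drags the posterior probability down, no matter how good the 99% detection rate looks on its own.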

The fallacy arises from confusing the natures of two different failure rates. The ‘number of non-bells per 100 terrorists’ and the ‘number of non-terrorists per 100 bells’ are unrelated quantities. One does not necessarily equal the other, and they don’t even have to be almost equal. To show this, consider what happens if an identical alarm system were set up in a second city with no terrorists at all. As in the first city, the alarm sounds for 1 out of every 100 non-terrorist inhabitants detected, but unlike in the first city, the alarm never sounds for a terrorist. Therefore, 100% of all occasions of the alarm sounding are for non-terrorists, but a false negative rate cannot even be calculated. The ‘number of non-terrorists per 100 bells’ in that city is 100, yet P(T | B) = 0%. There is zero chance that a terrorist has been detected given the ringing of the bell.

Imagine that the first city’s entire population of one million people passes in front of the camera. About 99 of the 100 terrorists will trigger the alarm — and so will about 9,999 of the 999,900 non-terrorists. Therefore, about 10,098 people will trigger the alarm, among which only about 99 will be terrorists. So the probability that a person triggering the alarm actually is a terrorist is only about 99 in 10,098, which is less than 1% and very far below our initial guess of 99%.
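The counting argument above can be checked in a few lines of Python, using the same population figures:

```python
# Scan the whole population of 1,000,000 past the camera.
terrorists = 100
non_terrorists = 999_900

true_positives = 0.99 * terrorists        # terrorists who ring the bell
false_positives = 0.01 * non_terrorists   # non-terrorists who ring it

total_alarms = true_positives + false_positives
p_terrorist_given_alarm = true_positives / total_alarms

print(f"alarms: {total_alarms:.0f}")                      # 10098
print(f"P(terrorist | alarm) ~ {p_terrorist_given_alarm:.4f}")  # ~0.0098
```

Both routes — counting outcomes over the whole population and applying Bayes' theorem to the rates — give the same answer, as they must.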

The base rate fallacy is so misleading in this example because there are many more non-terrorists than terrorists, and the number of false positives (non-terrorists scanned as terrorists) is so much larger than the true positives (the real number of terrorists).

This is incredibly important to keep in mind when optimizing our data science problems and choosing how to score them. In this example, the stakes of labeling an individual a terrorist are high. Therefore, when training models for problems like this, we need to minimize false positives while retaining overall accuracy.
