
What London’s V-1 Bomb Maps Taught Us About Confirmation Bias



During the V-1 flying bomb campaign in 1944, Londoners stared at maps of falling bombs and became convinced the Germans had a strategy. They began looking for areas where bombs didn’t fall — areas where they suspected German spies might be hiding.


This suspicion intensified after the first Vergeltungswaffe-1 bomb struck a railway bridge in Mile End on 13 June 1944, killing six people and leaving 200 homeless.

Better known as the Vengeance Weapon, the V-1 was the first operational cruise missile: a deadly weapon of terror and, more worryingly, of apparent precision.


The British became fixated on bombing patterns, turning to maps and statistics in an obsessive attempt to understand how and why targets appeared to cluster.

As 1944 progressed, thousands of V-1 bombs rained down on London. The death toll from this formidable weapon alone eventually exceeded 5,000 people.


Yet the German attacks seemed anything but random — the bombs fell in intriguingly clear clusters, patterns that British military intelligence were desperate to decode.


It was only shortly after the war, when a British actuary named R. D. Clarke analysed the distribution of V-1 strikes across London, that it became apparent no pattern existed at all: the strikes were entirely random.


Clarke divided the map of London into a grid and counted how many bombs had fallen in each square. If the Germans were targeting particular districts, some squares should have contained far more strikes than chance alone would predict. Instead, the pattern fitted almost perfectly with what statisticians expect from random events — what is known as a Poisson distribution.


In other words, the clustering that had so alarmed observers was exactly what randomness produces. Random events do not spread themselves politely across a map; they bunch together, creating the irresistible illusion of design.
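Clarke's test can be sketched in a few lines. The simulation below drops bombs into uniformly random grid squares (pure chance, no targeting) and compares the resulting counts with what a Poisson distribution predicts. The grid size and strike total follow Clarke's published figures (576 squares, 537 strikes), but the positions here are simulated, not historical data.

```python
import math
import random
from collections import Counter

random.seed(1)

N_SQUARES = 576  # grid squares in Clarke's study
N_BOMBS = 537    # total V-1 strikes counted

# Drop each bomb into a uniformly random square: no targeting at all.
hits = Counter(random.randrange(N_SQUARES) for _ in range(N_BOMBS))

# Observed: how many squares received exactly k hits?
observed = Counter(hits.values())
observed[0] = N_SQUARES - len(hits)  # squares never hit

# Expected under a Poisson distribution with rate = bombs per square.
lam = N_BOMBS / N_SQUARES

def poisson_expected(k):
    return N_SQUARES * math.exp(-lam) * lam**k / math.factorial(k)

for k in range(5):
    print(f"{k} hits: observed {observed.get(k, 0):3d}, "
          f"expected {poisson_expected(k):6.1f}")
```

Even though every strike is independent and uniform, many squares collect two, three, or more hits while others stay empty — the "clusters" that so alarmed observers fall straight out of the arithmetic.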


So why did it take so long to discover this?


Our brains are pattern machines. Usually this is useful. A rustle in the grass might be wind; it might also be something with teeth. Better, from an evolutionary point of view, to spot one tiger too many than one too few.


The difficulty is that the same mental machinery that helped keep our ancestors alive now spends much of its time inventing stories out of noise. This habit is broadly referred to as confirmation bias — the tendency to look for, interpret and favour information that fits what we already believe.


When carrying out any form of investigation, whether in the workplace or elsewhere, confirmation bias is the enemy of the investigator. It can shape the behaviour of the groups being investigated — and it can quietly shape the thinking of the investigator themselves.


Our brains are wired to draw quick conclusions and stick to them, forcing the round peg of facts through the square hole of preconception. This is excellent for avoiding tigers in the forest, but terrible for collaboration in the workplace.

Humans find a pattern they like, then stop looking.


Investigators and managers have to be better than this. That does not mean stopping the search for patterns; it means not holding on to them too dogmatically. The trick, in conflict as in statistics, is to resist the premature satisfaction of believing the pattern must mean what you first thought it meant.


We have to remember that we are not naturally wired to pursue evidence with complete neutrality. Like moths drawn to lantern light, we often go looking not to discover the truth — but to be reassured.

