IT BEGAN AT 4:00 in the morning on March 28, 1979, at Three Mile Island, Pennsylvania. The nuclear reactor was operating at nearly full power when a secondary cooling circuit malfunctioned, causing the temperature of the primary coolant to rise. This sharp rise in temperature made the reactor shut down automatically. In the second it took to deactivate the reactor, a relief valve that had opened to vent pressure failed to close. Operators couldn't diagnose the stuck valve or respond to the unexpected shutdown in the heat of the moment, and the nuclear core suffered severe damage.
Sociologist Charles Perrow later analyzed why the Three Mile Island accident had happened, hoping to anticipate other disasters to come. The result was his seminal book Normal Accidents. His goal, he said, was to “propose a framework for characterizing complex technological systems such as air traffic, marine traffic, chemical plants, dams, and especially nuclear power plants according to their riskiness.”
One factor was complexity: The more components and interactions in a system, the more challenging it is when something goes wrong. With scale comes complexity, whether we are thinking of the technology or the organization that supports it. Imagine you run a start-up where everyone sits in the same loft space. From where you sit, you can easily see what they are all doing. In a large organization, that visibility is lost. The moment a leader can’t see the inner workings of the system itself—in this case, staff activities—complexity rises.
Perrow associated this type of complexity with tech failures. At Three Mile Island,