Abstract
Most experts agree that the major blackout that struck much of the North American upper Midwest and Northeast on August 14, 2003, was no anomaly and will happen again. Records show that between 1984 and 2000, utilities logged 11 outages affecting more than 4,000 megawatts, making the probability of an outage that large 325 times greater than mathematicians would have expected. Mathematicians, engineers, and physicists have set out to explain this statistical overabundance of big blackouts, and two distinct models have emerged, based on two general theories of systems failure. One, an optimization model, presumes that power engineers make conscious, rational choices to focus resources on preventing the smaller and more common disturbances on the lines; large blackouts occur because the grid is not forcefully engineered to prevent them. The other model views blackouts as a surprisingly constructive force in an unconscious feedback loop that operates over years or decades: blackouts spur investments to strengthen overloaded power systems, periodically counterbalancing pressures to maximize return on investment and deliver electricity at the lowest possible cost. The mainstream view among power-system engineers continues to be that the answer to reliability problems is to make the grid physically more robust, improve simulation techniques and computerized real-time controls, and improve regulation. Systems theorists suggest that even if all of that is done, and done well, the really big outages will still happen more often than they should.