In Tokyo, where I traveled recently, protesters thronged the sidewalks outside the Prime Minister’s office as he agonized over re-opening two of the country’s 54 nuclear power reactors, all of which had been shuttered in the wake of the disaster at Fukushima last March. The Japanese people I met otherwise seemed serene, polite, and reserved. Seeing them rise up in this way was a testament to the passions aroused by nuclear power.
I’ve witnessed some of the same sort of passion in the halls of academe, where the debates about nuclear safety are cloaked in the logic of research and the language of dispassion, but in reality can be just as emotional.
My colleague Lucas Davis and I recently wrote a paper arguing that market deregulation led to increased output from nuclear power plants. (I blogged about this work recently here.) Safety regulations remained under the jurisdiction of the Nuclear Regulatory Commission, but the market restructuring changed the incentives faced by producers.
As I’ve presented this work in seminars, fellow scholars have accused us of ignoring the safety implications of the market reforms. A commenter on a Washington Post blog post about the paper articulated what I suspect some of them fear: “The idea that nuclear safety will be in the hands of greedy corporate toadies, instead of highly trained and independent engineers is terrifying.”
What they’re arguing, in essence, is that deregulation has made us more likely to face a Fukushima-scale disaster — or worse. That’s a difficult worry to respond to because what’s good for the world — we’ve had very few commercial nuclear disasters — poses a problem for research. I count only two and a half such disasters — Chernobyl, Fukushima and Three Mile Island. So, there’s very little data to draw on in predicting the causes of disasters.
One way to sidestep this problem is to examine more minor safety issues — to, as empirical economists say, use them as proxies. This is kind of like using near-misses to understand airplane crashes.
Catie Hausman, a grad student at UC Berkeley, takes this approach in a recent paper. She analyzes things like nuclear-plant fires, unplanned power outages, and worker radiation exposure and finds no evidence that safety deteriorated at the privately operated reactors and some evidence that it improved.
Why might this happen? For one, it’s not clear that safety and efficiency compete for managerial attention and funds. Running a plant efficiently may mean paying extra attention to safety, since breaches can force the plant offline, where it produces neither power nor revenue. Also, a problem at one plant can lead to downtime at others, as regulators suspend operations or impose costly new rules. Consider what this means for Exelon, now the biggest operator in the country: it owns 16 plants, and carelessness at one could bring increased regulation at the other 15.
Granted, getting better at avoiding minor incidents doesn’t prove that you’re also getting better at avoiding major ones. But determining how much to rely on nuclear power going forward — the very question Japan is wrestling with now — means coming to terms with the uncertainty of predicting the next major accident. Given the research challenges that I mentioned, the question almost defies typical cost-benefit calculations. (The New York Times blog had an interesting post on evaluating unforeseen nuclear risks Monday.)
Even so, we have to accept that shutting down reactors imposes costs, too. These are not as terrifying as an accident, but they can be even more pervasive. The most visible impact of the Fukushima disaster that I saw in Tokyo wasn’t Geiger counters or food warnings but rather the pleas — on subway posters, hotel flyers, and notices posted in offices — to save electricity because, without its reactors, Japan didn’t have enough.
Cross-posted from the Energy Economics Exchange, a blog published by the Energy Institute at Haas.