Greg Ip’s book Foolproof is about the choices we make in response to danger and the choices we (should) make when things appear safe. It’s a great book. It fits nicely alongside Taleb’s Antifragile, Gall’s Systemantics, and Tetlock’s Superforecasting.
What I most appreciated about Ip’s work was the novelty of it. The research he dug up was new and interesting. The anecdotes were funny and tight. This was a good book. Here are six things I learned from it.
1- Seeing is believing but NOT seeing is also believing.
On the personal level, “seeing is believing” has changed lives. Michael Lombardi, Penn Jillette, and Ezra Klein (among others) all said that when they saw someone do something, they thought they could do it too. But this is not what Ip is addressing.
In Foolproof, Ip’s point is that things are happening, whether we see them or not. Just because someplace doesn’t flood this year, doesn’t mean it won’t flood in the future.
Or, as John Gall writes in Systemantics:
“Our point, repeatedly stressed in this text, is that Systems operate according to Laws of Nature, and that Laws of Nature are not suspended to accommodate our human shortcomings.”
Just because we can’t see it, doesn’t mean it’s not there.
2- When everything looks like a nail.
If we understand where someone is coming from, we can better understand their point of view. The best framework I know for this is thinking in spectrums.
An easy spectrum in 2016 is the political one: left or right, Republican or Democrat, outsider or insider. Knowing any of these starting points can help us pinpoint a candidate’s arguments. But there are many other good examples of spectrums:
– Brian Koppelman and David Levien faced a spectrum of choices – from maybe one episode to a guaranteed season – with their new show Billions. They had to decide where to land on that spectrum because that would dictate their actions.
– Charlie Munger (via Tren Griffin) pointed us to the spectrum of things that can be counted and things that can’t be counted. Munger wrote, “practically everybody overweighs the stuff that can be numbered, because it yields to the statistical techniques they’re taught in academia, and doesn’t mix in hard-to-measure stuff that may be more important.”
– In the post on black box problems we saw the mini-spectrum running from complicated to complex problems. There, we noted that you have to know where a problem falls on that spectrum before you can solve it. For example, you can’t use the same heuristics to calculate the tides as you would to predict snowfall.
– In the Chris Dixon post we looked at the spectrum of skill and luck. Dixon uses this spectrum to figure out why someone failed. Was it bad luck, or not enough skill?
Ip introduces another spectrum, engineers to ecologists. He writes:
– “Engineers use the maximum of our knowledge to solve problems.”
– “Ecologists think complex systems may produce unintended consequences which are worse.”
If you can identify a condition as created by engineers or ecologists, you’ll understand it better. Engineered systems typically solve for small, regular problems but suffer from large, irregular ones (black swans). Ecological systems typically suffer from small, regular events, but not from black swans.
Ip does a wonderful job in the book of pointing out why neither system is always the answer. Instead, he nudges us to think about second order effects.
3- Solutions have second order effects, and you’d better be sure what they are.
A big point of the book is that systems have ripple effects. If one person puts out sandbags in a flood, that stops their flooding problem, but it doesn’t stop the water. The next person needs to put out sandbags, and so on, but eventually the water will flow somewhere.
That’s a clear demonstration of the second (and third, and so on) order effects, but many engineered solutions aren’t so clear.
– Ip quotes Brad DeLong, who wrote, “new ways to borrow and spread risk seemed to have little downside….it seemed worth trying. It wasn’t.” DeLong – a very smart academic – didn’t foresee the second order effects of new borrowing techniques.
– In our post on CEO pay, we saw the unintended consequences (second order effects) of shifting from salary to stock options.
– Another example from Foolproof is the Mark to Market regulation that changed how banks reported their asset prices. The first order effect was that banks couldn’t hide bad loans. The second order effect was that Mark to Market added fuel to the financial panic associated with mortgage backed securities.
Figuring out second order effects is really hard, so let’s move to a simpler problem: “risk homeostasis.”
4- What’s riskier, climbing Mt. Hood or driving a car?
Another nice distinction Ip makes in the book is why we engage in risky behavior. He looks at our “risk thermostat” and writes, “reducing the risk increases our appetite for the activity.” But only when the risk reduction is felt directly. Ip found that taxis with anti-lock brakes were driven harder and faster than ones without. Safety features drivers didn’t directly feel – a better airbag system, for example – didn’t have as strong an effect on risky behavior.
This idea is in Laurence Gonzales’s book Deep Survival too. Gonzales calls it “risk homeostasis,” but the idea is the same. If we feel safer, we engage in riskier behavior to return to some internal risk level. Gonzales writes about four men who planned a descent where, “paradoxically, the very discussion that was intended to reduce risk encouraged faith in a faulty system.” The climbers thought they had shifted the descent into a less risky system, and so they engaged in a riskier type of descent.
In both cases – taxi drivers and mountain climbers – we see that when the riskiness of an act is directly reduced, people return to a baseline level of risk taking.
5- A celebration for failed restaurateurs, economies, and marriages.
Failure at the micro level is often quite good for the macro level. Nassim Taleb writes that we should have a National Entrepreneur Day, “I am an ingrate toward the man whose overconfidence caused him to open a restaurant and fail, enjoying my nice meal while he is probably eating canned tuna.” Taleb notes that the overall restaurant landscape is stronger thanks to the many small failures.
Ip’s research reflects this too. He looked at the economies of India and Thailand and found, “the benefits of easy credit are sometimes worth the crisis and bailout that followed.”
In her book, The Up Side of Down, Megan McArdle looked at why ‘bad’ things can be ‘good.’ She writes, “if you do a Google search of the phrase “The best thing that ever happened to me,” marriage and childbirth certainly do pop up. But….Here’s a partial list of the unexpected items that make up life’s Greatest Hits: Divorce My husband’s affair Cancer Get fired.”
6- How to be safe? Have space.
If you want to avoid disaster, you need space between you and the disaster.
– Ip writes that airplanes work in the system of “big sky, little airplanes.” In-air crashes rarely happen. It’s when the space is constricted (runways, airports, approach, takeoff) that most crashes occur.
– Charlie Munger suggests space too: “No matter how wonderful [a business] is, it’s not worth an infinite price. We have to have a price that makes sense and gives a margin of safety considering the normal vicissitudes of life.”
– Goldman Sachs has a “Global Core Liquid Assets” fund of $179B in case of a liquidity crisis.
– Nassim Taleb writes in Antifragile that redundancy is a form of space, but not an easy one to justify: “Redundancy is ambiguous because it seems like a waste if nothing unusual happens. Except that something unusual happens—usually.”
Thanks for reading, I’m @mikedariano on Twitter.
I wrote another post, Danger in Safety, where I outlined three ways safety makes us feel safe. The thinking in that post was inspired by some of Ip’s writings.