This post wanders all over the place and I’m not sure I’m articulating my thoughts very well. Comments and criticism are welcome.
Fannie Mae and Freddie Mac (I don’t know why these companies have such ridiculous names either) are bound by regulations to hold enough capital (cash dollars) to remain solvent across some theoretical worst-case scenarios. The regulators dreamed up some really extreme situations that could plausibly bankrupt these companies, and insisted that the companies hold enough cash to survive them.
When I worked at Fannie Mae in the department that wrestled with the C++ model that calculated our reserve requirements for these 10-year stress tests, we used to joke around about how unlikely these stress tests really were. We would say that we might as well buy insurance against Martian invasions, or against all the animals teaming up together to attack humanity.
While Fannie Mae was legally compliant with these unrealistic scenarios, the sub-prime crisis was a scenario it was not prepared for, and it slaughtered the company. The CEO had to step down. The price fell from around $80 a share when I left in 2001 to $18 today.
The sub-prime crisis at its core is very mundane. Lenders got sloppy and investors let their greed entice them to take risks they shouldn’t have. That’s all there is to it. Local banks lent money to high-risk borrowers, then the banks sold the loans to Fannie Mae, who sold the loans to Wall Street. Investors preferred the high-return investments over the low-return boring crap.
No perfect storm was necessary to trigger this. It was just a whole lot of people getting sloppy and eventually enough straws accumulated to break the camel’s back. The same pattern played out in the seventeenth century and probably a hundred times since then.
Now I’m a workaday programmer, and I see the same dynamic in code. People write elaborate systems to protect against ridiculously unlikely scenarios but then skimp on the boring stuff. Maybe they get the hard parts done but never make sure their app’s internals are well documented, easy to maintain, and intuitively designed.
In my experience, it’s the mundane bugs, not the diabolically clever hackers, that cause me the most grief.
If I write some algorithm that costs O(n²), I will almost immediately start trying to tame it. The voices in my brain scream about worst-case costs. Macho programmers write badass algorithms. However, I find that the really smart thing to do is to spend a few minutes thinking about the likely use cases. If I know that for the foreseeable future I’m never going to run this algorithm with n > 5, then I think the grown-up thing to do is to write a big fat docstring that reminds me later about this risk, and then move on to getting other stuff done.
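To make that concrete, here's a sketch of what I mean (the function and its use case are made up for illustration): a naive quadratic duplicate check with a docstring that documents the known worst-case cost instead of prematurely optimizing it away.

```python
def find_duplicates(items):
    """Return the values that appear more than once, in first-seen order.

    WARNING: this is O(n^2) because of the repeated list scans.
    That's fine for the current use case (n is never more than
    about 5 entries). If the inputs ever grow, rewrite this with
    a set or collections.Counter for O(n) behavior.
    """
    dups = []
    for i, value in enumerate(items):
        # Linear scan of the rest of the list -- the O(n^2) part.
        if value in items[i + 1:] and value not in dups:
            dups.append(value)
    return dups


print(find_duplicates(["a", "b", "a", "c", "c"]))  # prints ['a', 'c']
```

The point isn't the algorithm; it's that the docstring names the risk and the condition under which it matters, so future-me can make an informed call instead of rediscovering the problem in production.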
The market rewards a good-enough and finished solution more than a potentially amazing but currently unfinished one.
If Fannie Mae had focused on just vetting the loans better, things wouldn’t have been so bad. The theoretical worst-case scenarios are not going to happen before the more likely stuff goes wrong. I worked at Fannie Mae preparing against Martian invaders. We ignored the termites in the walls, so to speak.