GeekPress linked to a Wired article by Bruce Schneier about how human brains are poor judges of risk, but unfortunately the article approaches the question from an entirely wrong direction and therefore comes to some meaningless (and ridiculous) conclusions. (Set aside the pervasive issue of assumed evolution for the time being.)
The article does set up the problem reasonably well, highlighting the difference between our reflexive amygdala and our analytical neocortex.
But the world is actually more complicated than that. Some scary things are not really as risky as they seem, and others are better handled by staying in the scary situation to set up a more advantageous future response. This means there's an evolutionary advantage to being able to hold off the reflexive fight-or-flight response while you work out a more sophisticated analysis of the situation and your options for handling it.
We humans have a completely different pathway to cope with analyzing risk. It's the neocortex, a more advanced part of the brain that developed very recently, evolutionarily speaking, and only appears in mammals. It's intelligent and analytic. It can reason. It can make more nuanced trade-offs. It's also much slower.
So here's the first fundamental problem: We have two systems for reacting to risk -- a primitive intuitive system and a more advanced analytic system -- and they're operating in parallel. It's hard for the neocortex to contradict the amygdala.
Fine. The problem comes when the article asserts that the neocortex is bad at making predictions because it has "rough edges".
All this is about the amygdala. The second fundamental problem is that because the analytic system in the neocortex is so new, it still has a lot of rough edges, evolutionarily speaking. Psychologist Daniel Gilbert wrote a great comment that explains this:

The brain is a beautifully engineered get-out-of-the-way machine that constantly scans the environment for things out of whose way it should right now get. That's what brains did for several hundred million years -- and then, just a few million years ago, the mammalian brain learned a new trick: to predict the timing and location of dangers before they actually happened.
Our ability to duck that which is not yet coming is one of the brain's most stunning innovations, and we wouldn't have dental floss or 401(k) plans without it. But this innovation is in the early stages of development. The application that allows us to respond to visible baseballs is ancient and reliable, but the add-on utility that allows us to respond to threats that loom in an unseen future is still in beta testing.
A lot of the current research into the psychology of risk consists of examples of these newer parts of the brain getting things wrong.
That's silly. Sure, the neocortex often makes bad predictions, but the reason for that has nothing to do with "rough edges" (whatever that means). The problem is very simple: the immediate future is much easier to predict than the distant future. If you're walking down the sidewalk and a lion starts running at you, it isn't hard to imagine what's going to happen next; a quick, irresistible physiological "fight or flight" response is a good way to handle a life-threatening situation that demands immediate action. The neocortex might prefer to weigh the various options: Will fighting this lion make my peers respect me? Would a lion-skin rug look better in my living room or foyer? But such predictions are almost impossible to make very far in advance, which is exactly what we concede when we say "hindsight is 20/20".
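This point -- that near-term predictions are inherently easier than long-term ones -- can be illustrated with a toy model of my own (not from the article). Treat an unpredictable quantity as a simple coin-flip random walk: the best guess at any horizon is "it stays where it is," and the typical error of that guess grows with the horizon no matter how clever the predictor is.

```python
import random

def typical_error(horizon, trials=10000, seed=1):
    """Root-mean-square distance of a +/-1 random walk after `horizon` steps.

    This is the typical error of the best possible point forecast
    ("predict no change"); it grows roughly like sqrt(horizon).
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos = 0
        for _ in range(horizon):
            pos += rng.choice((-1, 1))
        total += pos * pos
    return (total / trials) ** 0.5

# Error is exactly 1 one step ahead, and keeps growing as we look further out.
for h in (1, 4, 16, 64):
    print(h, round(typical_error(h), 2))
```

No amount of added analytical horsepower changes this scaling; the information needed for distant forecasts simply isn't available yet.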
The author believes that our neocortex is flawed because it can't predict the future as well as computers can.
And it's not just risks. People are not computers. We don't evaluate security trade-offs mathematically, by examining the relative probabilities of different events. Instead, we have shortcuts, rules of thumb, stereotypes and biases -- generally known as "heuristics." These heuristics affect how we think about risks, how we evaluate the probability of future events, how we consider costs, and how we make trade-offs. We have ways of generating close-to-optimal answers quickly with limited cognitive capabilities. Don Norman's wonderful essay, Being Analog, provides a great background for all this.
Someone please show me where I can buy one of these computers that can predict the future. Alas, they don't exist. Computers are no better at predicting the future than humans are, and in some areas humans do significantly better. We can often analyze a wider variety of information than computers can because we're adept at integrating disparate kinds of data. Our heuristics and "hunches" are beyond the ken of state-of-the-art artificial intelligence, and there's no immediate prospect that this will change.
The fact of the matter is that we don't know the best funds to pick for our 401(k) because that sort of prediction is hard, not because our neocortexes are somehow lacking. It's easy to predict if a baseball is about to plonk your nose or if a lion is dangerous, but as the time horizon extends it becomes increasingly hard to analyze risk, and that won't change no matter how long our neocortexes have to evolve.
Finally, the most complex systems that humans interact with are composed of other humans. If all of our neocortexes were improved by evolution (or cybernetic enhancement, or whatever), the net effect would probably be zero. If everyone else in the stock market has a super-neocortex just like mine, what advantage do I have when I select my investments? None, because my competitors will all be using the same superior analytical skills against me.
AdamReed points out in the comments that the economy isn't a zero-sum game in which wins by one party are equal to losses by another party. This is true! In competitive markets the wins can be larger than the losses, which is why we see economic growth over time. My point in the "finally" paragraph above is that improved neocortexes wouldn't give one investor an advantage over any other, and wouldn't give one businessman an advantage over another in a particular niche. Improved neocortexes probably would improve the creation and exploitation of new niches, however, and thereby enhance broad economic growth in an absolute sense if not in a relative sense.
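The relative-versus-absolute distinction can be made concrete with a toy model of my own devising (the payoff formula here is an illustrative assumption, not anything from the post): give every investor a payoff of a common market return plus individual skill. Boosting everyone's skill by the same amount leaves the relative standings untouched while raising total output.

```python
def standings(skills, base=5):
    """Return (ranking of investors by payoff, total payoff).

    Toy assumption: payoff = common market return (`base`) + individual skill.
    """
    payoffs = [base + s for s in skills]
    order = sorted(range(len(skills)), key=lambda i: -payoffs[i])
    return order, sum(payoffs)

skills = [3, 1, 2]
rank_before, total_before = standings(skills)
# Everyone gets the same neocortex upgrade:
rank_after, total_after = standings([s + 10 for s in skills])

print(rank_before == rank_after)   # relative advantage: unchanged
print(total_after > total_before)  # absolute output: larger
```

The ranking is the competitive, zero-sum part; the growing total is the positive-sum part that a uniform improvement can still deliver.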