Science, Technology & Health: March 2007 Archives

It seems like cancer is in the news more than ever these days, and the health benefits of light alcohol consumption have been touted for years (even claims that alcohol improves brain health). However, new research indicates that even light alcohol consumption increases your risk of cancer.

Scientists have known for a hundred years about the link between alcohol consumption and cancer. A study from Paris in 1910 showed that 80 percent of patients with cancer of the esophagus or gastric tract were alcoholics. More recently, scientists have found correlations between alcohol consumption and cancer of the mouth, pharynx, larynx, esophagus, liver, large bowel, and even the breast. Yet lab experiments had always failed to show the effects in animals that investigators knew to be true in humans.

Until now.

It seems past studies used too much alcohol -- in concentrations of 20 percent -- and the animals just wasted away while showing no tumor growth. But when Gu used concentrations of one percent -- about one to two drinks per day in humans -- to study blood vessel growth, he detected stimulated tumor growth in both chick embryos and mice. ...

Gu's findings, now confirmed by other scientists, are evidence of what many have long suspected -- alcohol, even in moderation, increases cancer risk.

Therefore I'll continue my policy of teetotaling.

GeekPress linked to a Wired article by Bruce Schneier about how human brains are poor judges of risk, but unfortunately the article approaches the question from an entirely wrong direction and therefore comes to some meaningless (and ridiculous) conclusions. (Set aside the pervasive issue of assumed evolution for the time being.)

The article does set up the problem reasonably well, highlighting the difference between our reflexive amygdala and our analytical neocortex.

But the world is actually more complicated than that. Some scary things are not really as risky as they seem, and others are better handled by staying in the scary situation to set up a more advantageous future response. This means there's an evolutionary advantage to being able to hold off the reflexive fight-or-flight response while you work out a more sophisticated analysis of the situation and your options for handling it.

We humans have a completely different pathway to cope with analyzing risk. It's the neocortex, a more advanced part of the brain that developed very recently, evolutionarily speaking, and only appears in mammals. It's intelligent and analytic. It can reason. It can make more nuanced trade-offs. It's also much slower.

So here's the first fundamental problem: We have two systems for reacting to risk -- a primitive intuitive system and a more advanced analytic system -- and they're operating in parallel. It's hard for the neocortex to contradict the amygdala.

Fine. The problem comes when the article asserts that the neocortex is bad at making predictions because it has "rough edges".

All this is about the amygdala. The second fundamental problem is that because the analytic system in the neocortex is so new, it still has a lot of rough edges evolutionarily speaking. Psychologist Daniel Gilbert wrote a great comment that explains this:
The brain is a beautifully engineered get-out-of-the-way machine that constantly scans the environment for things out of whose way it should right now get. That's what brains did for several hundred million years -- and then, just a few million years ago, the mammalian brain learned a new trick: to predict the timing and location of dangers before they actually happened.

Our ability to duck that which is not yet coming is one of the brain's most stunning innovations, and we wouldn't have dental floss or 401(k) plans without it. But this innovation is in the early stages of development. The application that allows us to respond to visible baseballs is ancient and reliable, but the add-on utility that allows us to respond to threats that loom in an unseen future is still in beta testing.

A lot of the current research into the psychology of risk are examples of these newer parts of the brain getting things wrong.

That's silly. Sure, the neocortex often makes bad predictions, but the reason for that has nothing to do with "rough edges" (whatever that means). The problem is very simple: the immediate future is much easier to predict than the distant future. If you're walking down the sidewalk and a lion starts running at you, it isn't hard to imagine what's going to happen next; a quick, irresistible physiological "fight or flight" response is a good way to handle a life-threatening situation that demands immediate action. The neocortex might prefer to weigh the various options: will fighting this lion make my peers respect me? Would a lion-skin rug look better in my living room or foyer? But such predictions are almost impossible to make very far in advance, which is what we imply when we say "hindsight is 20/20".

The author believes that our neocortex is flawed because it can't predict the future as well as computers can.

And it's not just risks. People are not computers. We don't evaluate security trade-offs mathematically, by examining the relative probabilities of different events. Instead, we have shortcuts, rules of thumb, stereotypes and biases -- generally known as "heuristics." These heuristics affect how we think about risks, how we evaluate the probability of future events, how we consider costs, and how we make trade-offs. We have ways of generating close-to-optimal answers quickly with limited cognitive capabilities. Don Norman's wonderful essay, Being Analog, provides a great background for all this.

Someone please show me where I can buy one of these computers that can predict the future. Alas, they don't exist. Computers are no better at predicting the future than are humans, and in some areas humans do significantly better. We can analyze more varied information than computers can because we're great at parameterizing a wide spectrum of data types. Our heuristics and "hunches" are beyond the ken of state-of-the-art artificial intelligence, and there's no immediate prospect that this will change.

The fact of the matter is that we don't know the best funds to pick for our 401(k) because that sort of prediction is hard, not because our neocortexes are somehow lacking. It's easy to predict if a baseball is about to plonk your nose or if a lion is dangerous, but as the time horizon extends it becomes increasingly hard to analyze risk, and that won't change no matter how long our neocortexes have to evolve.

Finally, the most complex systems that humans interact with are composed of other humans. If all of our neocortexes were improved by evolution (or cybernetic enhancement, or whatever), the net effect would probably be zero. If everyone else in the stock market has a super-neocortex just like mine, what advantage do I have when I select my investments? None, because my competitors will all be using their superior analytical skills against me.

Update:

AdamReed points out in the comments that the economy isn't a zero-sum game in which wins by one party are equal to losses by another party. This is true! In competitive markets the wins can be larger than the losses, which is why we see economic growth over time. My point in the "finally" paragraph above is that improved neocortexes wouldn't give one investor an advantage over any other, and wouldn't give one businessman an advantage over another in a particular niche. Improved neocortexes probably would improve the creation and exploitation of new niches, however, and thereby enhance broad economic growth in an absolute sense if not in a relative sense.

I've never understood why a generic gas station can be empty while right across the street customers are lining up at a Shell station charging twenty cents more per gallon. As I'd always assumed, generic gas and brand-name gas are essentially identical.

At the Maryland Fuel Testing Laboratory, chemists conducted a battery of tests. First, they verified that gas was formulated correctly for the season. Then, they checked for contaminants, like excessive sediment or diesel, accidentally mixed with the gasoline.

They also ran the gas through an elaborate engine to make sure it got the 87 octane level people pay for. Both samples easily met state standards.

"By and large, it's one and the same. … You will find results will almost mirror each other," said Bob Crawford, who works at the lab. "There are going to be slight variations -- but gasoline is gasoline."

When gasoline arrives at regional distribution centers, it's all the same. Different gas station chains then buy the raw fuel and add their own blend of detergents. In the past, there might have been more of a difference between different brands of regular unleaded, but these days the EPA requires that all gas contain a minimum amount of detergent to keep car engines clean.

If you're paying for a particular brand of gasoline, "you would be paying more for brand loyalty, primarily," Crawford said. "Some people feel more comfortable dealing with a particular brand." ...

"The generic, no, will not do harm at all," Crawford said. "I use the lowest price. It makes no difference what the brand is."

Paying extra money for a brand-name is a foolish waste of money, whether we're talking about Prada purses or Shell gasoline.

Here's an astounding article about why Hummers are better for the environment than Priuses -- and it doesn't even mention smug!

Building a Toyota Prius causes more environmental damage than a Hummer that is on the road for three times longer than a Prius. As already noted, the Prius is partly driven by a battery which contains nickel. The nickel is mined and smelted at a plant in Sudbury, Ontario. This plant has caused so much environmental damage to the surrounding environment that NASA has used the ‘dead zone’ around the plant to test moon rovers. The area around the plant is devoid of any life for miles.

The plant is the source of all the nickel found in a Prius’ battery and Toyota purchases 1,000 tons annually. Dubbed the Superstack, the plague-factory has spread sulfur dioxide across northern Ontario, becoming every environmentalist’s nightmare. ...

All of this would be bad enough in and of itself; however, the journey to make a hybrid doesn’t end there. The nickel produced by this disastrous plant is shipped via massive container ship to the largest nickel refinery in Europe. From there, the nickel hops over to China to produce ‘nickel foam.’ From there, it goes to Japan. Finally, the completed batteries are shipped to the United States, finalizing the around-the-world trip required to produce a single Prius battery. Are these not sounding less and less like environmentally sound cars and more like a farce? ...

When you pool together all the combined energy it takes to drive and build a Toyota Prius, the flagship car of energy fanatics, it takes almost 50 percent more energy than a Hummer - the Prius’s arch nemesis.

Through a study by CNW Marketing called “Dust to Dust,” the total combined energy is taken from all the electrical, fuel, transportation, materials (metal, plastic, etc) and hundreds of other factors over the expected lifetime of a vehicle. The Prius costs an average of $3.25 per mile driven over a lifetime of 100,000 miles - the expected lifespan of the Hybrid.

The Hummer, on the other hand, costs a more fiscal $1.95 per mile to put on the road over an expected lifetime of 300,000 miles. That means the Hummer will last three times longer than a Prius and use less combined energy doing it.

So just like real environmentalists drink tap water, real environmentalists drive Hummers.

(HT: GeekPress.)

Here's a neat item: an artificially intelligent expert system was cited for practicing law without a license after helping prepare bankruptcy filings. From the decision:

The software did, indeed, go far beyond providing clerical services. It determined where (particularly, in which schedule) to place information provided by the debtor, selected exemptions for the debtor and supplied relevant legal citations. Providing such personalized guidance has been held to constitute the practice of law. ...

(The) system touted its offering of legal advice and projected an aura of expertise concerning bankruptcy petitions; and, in that context, it offered personalized -- albeit automated -- counsel. ... We find that because this was the conduct of a non-attorney, it constituted the unauthorized practice of law.

That's awesome.

(HT: Intelligent Machines, a blog I'll have to watch closely.)

My brother sent me a neat article about a car that runs on compressed air and has many promising qualities.

Many respected engineers have been trying for years to bring a compressed air car to market, believing strongly that compressed air can power a viable "zero pollution" car. Now the first commercial compressed air car is on the verge of production and beginning to attract a lot of attention, and with a recently signed partnership with Tata, India’s largest automotive manufacturer, the prospects of very cost-effective mass production are now a distinct possibility. The MiniC.A.T is a simple, light urban car, with a tubular chassis that is glued not welded and a body of fibreglass. The heart of the electronic and communication system on the car is a computer offering an array of information reports that extends well beyond the speed of the vehicle, and is built to integrate with external systems and almost anything you could dream of, starting with voice recognition, internet connectivity, GSM telephone connectivity, a GPS guidance system, fleet management systems, emergency systems, and of course every form of digital entertainment. The engine is fascinating, as is the revolutionary electrical system that uses just one cable, and so is the vehicle’s wireless control system. Microcontrollers are used in every device in the car, so one tiny radio transmitter sends instructions to the lights, indicators etc. ...

90 m³ of compressed air is stored in fibre tanks. The expansion of this air pushes the pistons and creates movement. The atmospheric temperature is used to re-heat the engine and increase the road coverage. The air conditioning system makes use of the expelled cold air. Due to the absence of combustion and the fact there is no pollution, the oil change is only necessary every 31,000 miles.
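The quoted specs invite a rough back-of-the-envelope energy estimate. Here's a minimal sketch, assuming things the article doesn't state: that the 90 m³ figure is the air volume measured at atmospheric pressure, that it's stored at roughly 300 bar, and that expansion is isothermal (the best case, since the engine re-heats the air from ambient temperature):

```python
import math

# Rough upper bound on the energy stored in the MiniC.A.T's tanks.
# Assumptions (mine, not the article's): 90 m^3 is air at atmospheric
# pressure, storage is ~300 bar, and expansion is isothermal.
P_ATM = 101_325.0          # Pa, atmospheric pressure
V_ATM = 90.0               # m^3 of air measured at atmospheric pressure
P_STORAGE = 300e5          # Pa (assumed 300 bar storage pressure)

# Isothermal expansion work: W = P0 * V0 * ln(P_storage / P0)
energy_joules = P_ATM * V_ATM * math.log(P_STORAGE / P_ATM)
energy_kwh = energy_joules / 3.6e6

print(f"Stored energy: {energy_joules / 1e6:.0f} MJ ({energy_kwh:.1f} kWh)")
# prints: Stored energy: 52 MJ (14.4 kWh)
```

Fifty-odd megajoules is only the energy in a liter and a half of gasoline, which is consistent with this being a light, low-speed urban car.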

My only hesitation is that carrying highly compressed gases can be extremely dangerous. Despite what you may see in action movies, gasoline-powered cars rarely explode; however, a compressed air car could easily explode if involved in an accident that impacts or punctures its air tank.

Researchers in Belgium have developed a model to attempt to explain how opinions change within a population, but I'm not sure their assumptions are valid. My master's thesis was in a very similar vein, but used a more complex model.

The key, say European researchers, is how strongly the groups communicate with each other. The work could explain how language differences persist across geographic boundaries and how political thought can quickly become polarized. ...

To model the evolution of opinions, researchers led by physicist Renaud Lambiotte of the University of Liege in Belgium imagined two groups, initially isolated, whose members gradually begin to talk to members of the other group.

They supposed for simplicity that individuals hold one of two opinions, assigned randomly at the start. People then change their views by a “majority rule” – each person tends to adopt the opinion that is held by a majority of those with whom they are linked in the social network.

Because of the majority rule system, it's perfectly logical that increasing connections between groups would lead to equilibrium (agreement) across the population. However, no opinion space is really binary, and very few people make up their minds based purely on what the majority of their friends think. These assumptions might be so simplifying that they miss the essence of the problem at hand; I reached a similar conclusion in my master's research.
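The rule described in the excerpt is easy to simulate. Here's a minimal sketch; the group sizes, number of cross-group links, and asynchronous update schedule are my own assumptions, not details from the paper:

```python
import random

def simulate(group_size=50, cross_links=10, steps=2000, seed=1):
    """Majority-rule opinion dynamics on two loosely connected groups."""
    rng = random.Random(seed)
    n = 2 * group_size
    # Two initially cohesive groups: everyone is linked to everyone
    # else in their own group...
    links = {i: {j for j in range(n)
                 if j != i and j // group_size == i // group_size}
             for i in range(n)}
    # ...plus a handful of random cross-group links.
    for _ in range(cross_links):
        a = rng.randrange(group_size)
        b = group_size + rng.randrange(group_size)
        links[a].add(b)
        links[b].add(a)
    # Binary opinions, assigned randomly at the start.
    opinion = [rng.choice([0, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)  # pick a random person...
        votes = sum(opinion[j] for j in links[i])
        # ...who adopts the opinion held by a majority of neighbors
        # (ties leave the opinion unchanged).
        if votes * 2 > len(links[i]):
            opinion[i] = 1
        elif votes * 2 < len(links[i]):
            opinion[i] = 0
    return opinion

final = simulate()
print(sum(final), "of", len(final), "hold opinion 1")
```

With few cross-links each group tends to reach its own internal consensus; raising `cross_links` pushes the whole population toward a single equilibrium, which is the paper's basic result.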

There's a lot of to-do about how the current crop of virtual worlds could lead to the next killer app of the internet age (after email and the web), but I think there's one critical feature that these virtual worlds presently lack: interconnectivity.

[Linden Labs'] backers include some of the world's smartest, richest, and most successful tech entrepreneurs. The chairman and first big outside investor is Mitch Kapor, creator of Lotus 1-2-3, the spreadsheet application that helped begin the PC software revolution. Other investors include eBay founder Pierre Omidyar, Amazon (Charts) CEO Jeff Bezos, and Microsoft chief technology architect (and inventor of Lotus Notes) Ray Ozzie - each credited with a seminal networked product of our age.

They think Second Life may be next, and some respected tech pundits agree. Says Mark Anderson, author of the Strategic News Service newsletter: "In two years I think Second Life will be huge, probably as large as the entire gaming community is today."

The problem with Second Life, World of Warcraft, and the upcoming Sony virtual world called "Home" is that unlike the web and email, none of these worlds can connect to any other. Each world is self-contained and proprietary, developed for the profit of the owning company. In contrast, web pages can be loaded in innumerable browsers, and pages can be created that link to pages owned by anyone, anywhere in the world. Similarly, users of Microsoft Outlook can send emails to Gmail users, not just other users of Outlook. That interconnectivity is the reason that email and the web are so powerful and compelling.

Until someone designs the virtual world equivalents to HTML and Firefox that allow users to seamlessly jump from one independent virtual world to another, there is no way this technology will be good for anything more than making toys. By analogy, the current crop of virtual worlds is to the future what GEnie and CompuServe were to the modern web.

After installing Windows Vista on my new machine I discovered the Windows Experience Index, a numerical measure of how well a computer system will be able to run Windows Vista and other software. This is a pretty cool feature if it measures something useful, and despite disparaging comments around the net it appears that it does.

The Windows® Experience Index is a new feature built into Windows Vista™. It is designed to help consumers understand how well Windows Vista and the software running on it will perform on a specific PC. The index achieves this by assessing the performance of the PC and assigning a score to it. The higher the score, the better the PC will perform.

The overall PC performance is represented by the base score. The base score is derived from 5 sub-scores for each of the following 5 attributes:

- Processor: calculations per second

- Memory: operations per second

- Graphics: desktop performance for Windows Aero graphics

- Gaming graphics: 3D graphics performance. Useful for gaming and 3D business applications

- Primary hard disk: The data transfer rate of the primary hard disk

The numbers are determined by actual tests run on your system, not just by looking at the types of components you have installed. The goal is that the scores for a given system will stay constant unless the hardware is changed, so future systems with new technology can be compared on an ever-expanding scale. Good idea, if implemented properly.
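The excerpt lists the five sub-scores but not how they combine: in Vista the base score is simply the lowest sub-score, on the theory that the overall experience is limited by the weakest component. A sketch with hypothetical sub-score values:

```python
# Sub-scores as measured by the system assessment tests.
# These sample values are hypothetical, purely for illustration.
subscores = {
    "Processor": 5.1,
    "Memory": 4.8,
    "Graphics": 3.9,
    "Gaming graphics": 3.5,
    "Primary hard disk": 5.3,
}

# The base score is the LOWEST sub-score, not an average --
# upgrading anything but the bottleneck won't raise it.
base_score = min(subscores.values())
bottleneck = min(subscores, key=subscores.get)

print(f"Base score: {base_score} (limited by {bottleneck})")
# prints: Base score: 3.5 (limited by Gaming graphics)
```

This explains the common complaint that one slow component (often the hard disk) drags down an otherwise fast machine's index.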

Click below for an amazing picture from NASA of two volcanic plumes on Jupiter's moon Io.

(HT: My brother.)

SimulScribe is a company that provides voice mail to email transcription, a service I've been craving for years. I hate listening to voice mail. If they can transcribe call-back numbers correctly this will be a great service.

The Comprehensive Test Ban Treaty "bans all nuclear explosions in all environments, for military or civilian purposes", which presents quite a difficulty for American scientists developing a new nuclear warhead.

One of the assurances given by defense officials to Congress is that the new warhead will not have to undergo actual testing. Once developed, it would be used in the Trident missiles on submarines and eventually would replace warheads on the Air Force's missile arsenal, officials said. ...

Of overriding concern to members of Congress has been that the warhead be developed without the need for underground tests. The administration has sought to assure Congress that the design would not require such testing.

That's ridiculous, and especially so because America hasn't ratified the CTBT. We've got plenty of desert land to use for testing, and it's pretty foolish to not perform a full suite of tests on our new nuclear arsenal. Congress' insistence on this matter has weakened our national security substantially, not least because it almost forced the approval of a proposal based on an older warhead design that was tested in the 1980s rather than a new, untested technology.

Scott Aaronson addresses the hype surrounding quantum computing in a series of questions and answers. The science behind this stuff is out of my domain, but the problems are quite interesting. His site has a lot about the topic and quantum mathematics in general, so it should be a great read. (HT: Scientific American.)

About this Archive

This page is an archive of entries in the Science, Technology & Health category from March 2007.

Science, Technology & Health: February 2007 is the previous archive.

Science, Technology & Health: April 2007 is the next archive.

Find recent content on the main index or look in the archives to find all content.
