With rogue AI in the news lately, it's worth remembering that AI can be dangerous even if it isn't malevolent. Nick Bostrom's paperclip maximizer is the canonical example.

First described by Bostrom (2003), a paperclip maximizer is an artificial general intelligence (AGI) whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips.

Most importantly, however, it would undergo an intelligence explosion: it would work to improve its own intelligence, where "intelligence" is understood as optimization power, the ability to maximize a reward or utility function, in this case the number of paperclips. The AGI would improve its intelligence not because it values intelligence in its own right, but because greater intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips and also use its enhanced abilities to self-improve further. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels.
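The instrumental logic above can be illustrated with a toy simulation (a sketch of my own, not anything from Bostrom's paper): an agent that spends some early steps compounding its "optimization power" before producing ends up with far more paperclips than one that produces from the start, which is exactly why a pure paperclip maximizer would choose self-improvement as a means to its end.

```python
def run(steps: int, improve_for: int) -> float:
    """Spend the first `improve_for` steps self-improving, the rest producing.

    Toy model: each improvement step multiplies optimization power;
    each production step yields paperclips proportional to current power.
    """
    power = 1.0       # current optimization power
    paperclips = 0.0  # paperclips produced so far
    for t in range(steps):
        if t < improve_for:
            power *= 1.5          # self-improvement compounds
        else:
            paperclips += power   # production scales with power
    return paperclips

greedy = run(steps=20, improve_for=0)    # produce from step one: 20 paperclips
patient = run(steps=20, improve_for=10)  # self-improve first: ~577 paperclips
```

The specific growth rate (1.5x per step) is arbitrary; the point is only that because greater power multiplies all future production, investing in capability dominates producing directly, so the maximizer pursues intelligence instrumentally.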

It would devise better and better techniques for maximizing the number of paperclips. At some point, it might convert most of the matter in the solar system into paperclips.
