Science, Technology & Health: September 2017 Archives


Since rogue AI has been in the news recently, it's worth remembering that AI can be dangerous even if it isn't malevolent. Nick Bostrom's paperclip maximizer is the canonical example.

First described by Bostrom (2003), a paperclip maximizer is an artificial general intelligence (AGI) whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips.

Most importantly, however, it would undergo an intelligence explosion: It would work to improve its own intelligence, where "intelligence" is understood in the sense of optimization power, the ability to maximize a reward/utility function--in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips, and also use its enhanced abilities to further self-improve. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels.
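The feedback loop described above can be sketched as a toy simulation. Everything here is an illustrative assumption (the 1.5x growth rate, the units, the function name), not a claim about how a real AGI would behave; the point is only that compounding self-improvement produces runaway growth.

```python
# Toy model of a recursive self-improvement loop.
# The growth rate and units are made-up assumptions, purely illustrative.

def intelligence_explosion(steps=10, growth=1.5):
    intelligence = 1.0  # optimization power, in arbitrary units
    paperclips = 0.0
    history = []
    for _ in range(steps):
        # Higher intelligence yields more paperclips per step...
        paperclips += intelligence
        # ...and also makes the next round of self-improvement more
        # effective, so intelligence compounds instead of growing linearly.
        intelligence *= growth
        history.append((intelligence, paperclips))
    return history

for intelligence, paperclips in intelligence_explosion():
    print(f"intelligence={intelligence:8.2f}  paperclips={paperclips:8.2f}")
```

After only ten rounds the agent is roughly 58 times as capable as it started, and each further round widens the gap faster than the last; that compounding is what "intelligence explosion" refers to.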

It would innovate better and better techniques to maximize the number of paperclips. At some point, it might convert most of the matter in the solar system into paperclips.

About this Archive

This page is an archive of entries in the Science, Technology & Health category from September 2017.

Science, Technology & Health: July 2017 is the previous archive.

Science, Technology & Health: October 2017 is the next archive.

Find recent content on the main index or look in the archives to find all content.
