This meeting was a reaction to the extensive press coverage given to statements by Stephen Hawking, Elon Musk, Bill Gates and others regarding the potential dangers of unchecked development of Artificial Intelligence. Their concern is expressed in an open letter originating from the Future of Life Institute. Here are links to the letter (which references a paper entitled Research priorities for robust and beneficial artificial intelligence), and to Slate and Atlantic articles about the matter. Sam Harris has also weighed in on the topic.
We discussed the magnitude of the problem, how soon it might become manifest, and whether it is actually a problem at all. Autonomous weapons systems and the possibility of a 'fundamentalist' AI were named as two examples of the potential hazards of unrestricted AI development. Our group members admittedly have little direct experience or specialized knowledge to draw on in evaluating the situation, but we agree that it raises an ethical issue, one that extends to other areas of science and the technologies they produce.
The Future of Life letter is open for signatures, and this group will sign collectively. We hope to pursue the topic in local media and philosophers' cafes, presenting a humanist take on a complicated issue.