
Saturday, 21 February 2015

Mitigating the perceived future risks of AI

Just a few days ago, Jay posted here about the recent public attention to AI and the potential future dangers it presents.

Discussion on the topic continues, but it's not all talk. There are at least two institutes focussed on practical steps to mitigate the risks.

The Machine Intelligence Research Institute (MIRI) exists solely to "ensure smarter-than-human intelligence has a positive impact". Founded in 2000, it is a non-profit with many very high-profile people on its team, including Eliezer Yudkowsky, Nick Bostrom and entrepreneur Peter Thiel. It publishes research and holds workshops regularly.

The Future of Life Institute is also currently focussed on AI. It recently received a hefty, well-publicised donation from Elon Musk, founder of SpaceX and Tesla. As mentioned in Jay's article, Musk is in the camp that believes AI could pose an existential threat to mankind.

Bostrom also directs a related institute, the Future of Humanity Institute, part of the University of Oxford.

More of the discussion is set to become practical work, and these organisations are leading by example. Usually, consideration of the ethical issues lags behind the science. In this case, however, due to science fiction and misunderstandings of the technology, the perceived dangers are well ahead of the actual threat. That's a healthy situation to be in: it ensures we are prepared.
