How dangerous is AI?
Eliezer Yudkowsky confidently argues that AI is going to kill us all, and he makes some compelling arguments. His blog post is really worth reading in full: AGI Ruin: A List of Lethalities - LessWrong. Below is a drastically reduced summary.
1. AGI will not be upper-bounded by human ability or human learning speed. Things much smarter than humans would be able to learn from less evidence than humans require.
2. An AGI smarter than ourselves will not find it difficult to escape counter-measures and bootstrap existing human infrastructure to kill us all. (He gives the example of shipping instructions to a biolab to make nanobots or a virus.)
3. We need to get the alignment problem right on the first critical try. We do not get to try again. Science is full of first-attempt blunders, so this is not good if his assumptions are true.
4. We cannot just say "Let's not build AGI," because GPUs are everywhere, and countries and companies are in an arms race to build bigger, faster, stronger models.
I just started a topic on mitigating the potential dangers of artificial intelligence; check it out.