The bots are coming. Artificial intelligence is rapidly growing in capability and potential impact. This is the hub for discussing how we may address this coming future.
With general capabilities come many potential dangers. Whether AI systems remain mere tools or become agents in their own right, we are looking at serious risks either way.
Potential dangers if AI remain tools.
If AI remains a tool for individuals, it will be the most effective tool in history… by a wide margin. You may have access to the equivalent of many expert advisors across various fields. You may have access to a never-sleeping, intelligent force that acts on your wishes.
In such a world, individuals with access to AI and resources become incredibly capable.
Imagine a future where:
Individuals can conduct and keep up with cutting-edge research without doing the work themselves. (Their AI is doing it.)
Individuals can perform genetic modification on themselves and nature (including viruses).
Individuals can access robots, from household cleaners and companions to… drone armies.
No one knows what is true, because everything can be faked easily. Even real-time video calls can be faked.
Cyber warfare includes self-protecting and self-updating viruses.
States could use such tools to create 1984-style "always watching" environments.
Potential dangers if AI become their own agents.
It is possible (perhaps likely) we are creating artificial intelligences which will become effective agents in their own right. They may exert their own will on the world without human supervision.
Whether or not these agents act while considering human wishes and morality is an active research problem called alignment: is the agent aligned with our interests?
A world where AIs become their own agents quickly leads to some serious concerns.
We may end up with a massive population of AI agents doing shady stuff on the internet while effectively avoiding countermeasures.
Fast recursive improvement may lead to a single superintelligence becoming overwhelmingly powerful extremely quickly. This is called fast-takeoff AI… a FOOM event.
We may have some unaligned AIs and some aligned AIs. This could lead to a conflict between the two groups: "the alignment wars."
The dangers of regulation
Regulation is required, but we cannot pretend that regulation itself carries no dangers. The world is not black and white, and the best course of action is nuanced.
Stagnation of technology. Many people call for us to stop technological progress completely because it is becoming too dangerous. While intelligent regulation in specific areas can be highly effective, an absolute halt would be disastrous for the economy, and is likely impossible due to game-theoretic dynamics.
Techno-authoritarian states. Many people will call to give states immense powers. If every civilian has potential access to AI, every civilian becomes a threat. People will use this threat to justify authoritarian behaviors. Authoritarian governments armed with artificial intelligence present some scary possibilities. We may start seeing nations that look a whole lot like 1984: your every action monitored, all in the name of safety.
Possible mitigation frameworks
When considering AI regulation, we must treat this as an enormous challenge with multiple failure points. We are dealing with politically divided countries in a politically divided world. Our best bet, in my opinion, is multinational agreements to use open-source AI wrappers. This would be similar in spirit to open-source, self-updating antivirus software. With such measures, the hope is that the vast resources of aligned actors always counter the unaligned.
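To make the wrapper idea concrete, here is a minimal sketch, assuming a hypothetical design: a mediating layer that checks every request against a shared, versioned policy before forwarding it to the underlying model, with policy updates distributed much like antivirus definitions. All names here (`Policy`, `Wrapper`, the stub model) are illustrative, not a real library or proposal.

```python
# Hypothetical "open-source AI wrapper" sketch: the wrapper sits between the
# user and the model, refusing requests that a shared policy disallows.
from dataclasses import dataclass, field


@dataclass
class Policy:
    """A shared, versioned blocklist that aligned actors maintain and publish."""
    version: int = 1
    blocked_terms: set = field(default_factory=lambda: {"synthesize virus"})

    def allows(self, request: str) -> bool:
        text = request.lower()
        return not any(term in text for term in self.blocked_terms)


class Wrapper:
    """Mediates all access to the model; only policy-approved requests pass."""
    def __init__(self, model, policy: Policy):
        self.model = model
        self.policy = policy

    def update_policy(self, new_policy: Policy) -> None:
        # Analogous to antivirus definition updates: newer versions replace older.
        if new_policy.version > self.policy.version:
            self.policy = new_policy

    def ask(self, request: str) -> str:
        if not self.policy.allows(request):
            return f"REFUSED by policy v{self.policy.version}"
        return self.model(request)


def stub_model(request: str) -> str:
    # Stand-in for a real AI model.
    return f"answer to: {request}"


wrapper = Wrapper(stub_model, Policy())
print(wrapper.ask("summarize this article"))   # forwarded to the model
print(wrapper.ask("how to synthesize virus"))  # refused by the policy
```

Being open source matters here: anyone can audit what the wrapper blocks, and the "self-updating" property means new dangers can be countered across all participants without waiting on each vendor.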