Muhahahaha Super AI. You can’t make super crazy plans if you can’t see, hear, or feel. You can’t do anything if you have no information other than the information needed to do this one exact job.
This addresses potentially dangerous AIs with the CIA-style "need-to-know" strategy: never give any AI access to more information than it needs to perform its task, and never give it access to more tools than it needs either. We would specialize and modularize AI for each task we want done. This is by far the safest route we can go down, in my opinion.
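The need-to-know idea can be sketched in code as a per-task allowlist that sits between an AI agent and its tools. This is a minimal illustration, not a real safety system; the task names, tool names, and `run_tool` helper are all hypothetical.

```python
# Hypothetical need-to-know gate: each task is granted only the tools
# it strictly requires, and every tool call is checked against that grant.

TASK_PERMISSIONS = {
    "summarize_report": {"read_document"},               # no web, no email
    "schedule_meeting": {"read_calendar", "send_invite"},
}

TOOLS = {
    "read_document": lambda arg: f"contents of {arg}",
    "read_calendar": lambda arg: f"calendar for {arg}",
    "send_invite":   lambda arg: f"invite sent to {arg}",
    "browse_web":    lambda arg: f"fetched {arg}",       # never granted above
}

def run_tool(task: str, tool: str, arg: str) -> str:
    """Execute a tool only if the current task's allowlist permits it."""
    allowed = TASK_PERMISSIONS.get(task, set())
    if tool not in allowed:
        raise PermissionError(f"task {task!r} has no need for tool {tool!r}")
    return TOOLS[tool](arg)
```

Under this sketch, `run_tool("summarize_report", "read_document", "q3.pdf")` succeeds, while asking the same task to `browse_web` is refused outright, no matter what the model wants.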
This method can be instantiated in multiple ways. The most common idea is the oracle AI. Here you create a superintelligent AI with no access to the outside world: its only inputs are your queries, and its only possible outputs are answers. You then use its superintelligence to solve some of our most pressing problems, including the alignment problem. This, I believe, is OpenAI's intended strategy.
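The oracle's interface can be made concrete with a tiny wrapper: text goes in, text comes out, and nothing else crosses the boundary. The `Oracle` class and `answer_fn` below are illustrative stand-ins, assuming the underlying model can be treated as a pure question-answering function.

```python
# Sketch of an oracle interface: the model's entire world is the query,
# and the only thing that leaves the sandbox is an answer string.
# No tools, no network access, no memory between calls.

class Oracle:
    def __init__(self, answer_fn):
        self._answer_fn = answer_fn  # assumed pure function: str -> str

    def ask(self, query: str) -> str:
        # The only side effect of a call is the returned reply.
        return str(self._answer_fn(query))

# Toy stand-in for a real model, just to show the shape of the interface.
echo_oracle = Oracle(lambda q: f"My best answer to {q!r}")
```

The design point is that the wrapper, not the model, defines the channel: however capable `answer_fn` becomes, the caller only ever receives a string.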
While the oracle AI is one instantiation of this strategy… there are others. You could create an AI that is actually multiple AIs, each with a different role, so that no single agent has full control or access to all the information.
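A minimal sketch of that multi-agent split: a coordinator hands each sub-agent only its own partition of a record, so no single agent ever sees the whole picture. The record fields and agent functions here are made up purely for illustration.

```python
# Hypothetical role decomposition: each sub-agent receives only the
# slice of data its role requires; the coordinator withholds the rest.

RECORD = {
    "medical":  {"diagnosis": "flu"},
    "billing":  {"balance_due": 120},
    "identity": {"name": "Alice", "ssn": "***"},  # shown to no agent
}

def medical_agent(view):
    # Sees only the medical partition.
    return f"treatment plan for {view['diagnosis']}"

def billing_agent(view):
    # Sees only the billing partition.
    return f"invoice for ${view['balance_due']}"

def coordinate(record):
    plan = medical_agent(record["medical"])
    bill = billing_agent(record["billing"])
    # The identity partition is never passed to either agent.
    return {"plan": plan, "bill": bill}
```

Each agent is individually weak in exactly the way the strategy wants: even a misbehaving `billing_agent` has nothing to leak beyond the balance it was shown.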
There are some big tradeoffs with this strategy. First, developing specialized AI versions for each task may be resource-intensive: creating, maintaining, and updating multiple AI systems tailored to specific roles could require significant investments of time, money, and computational power. This means that, to truly implement this strategy, you would likely want to train a generalized model and then specialize it. That would save resources, but potentially put you back at square one: the generalized model might be dangerous if misaligned. At least this way, though, a small team can keep checking on the generalized model while the public uses a much safer set of specialized, decentralized versions.
The other big problem with input-and-output throttling is that it limits the AI's potential for innovation and problem-solving. By constraining an AI's access to information, we hinder its ability to discover novel solutions or make connections between seemingly unrelated data points. The safety is worth it, in my opinion, but… nothing prevents people from skipping this strategy entirely. So while you try to safely divide and conquer your AI, others go full steam ahead. This is a responsible, smart strategy that slows you down; worldwide regulation would be needed to get everyone to use it.