If we assume that we truly are innovating our way to artificial superintelligence (ASI)… should we open source it? Should every person with an internet connection be able to download the latest ASI attempt? The internet has collapsed this important discussion into a black-and-white slugfest… but right here, right now, let's try to do things a little differently.
Take a look at the image below. Which of the “things to avoid” do you believe is the most likely? The scariest? Which should we be working hardest to prevent?
All of the above are scary outcomes. They are all possible futures that everyone should be interested in avoiding. But different people focus on different problems, and where you focus your attention affects everything.
NOTE: For the following discussion we are assuming some future AI advancement, not our current LLMs. We are assuming AI with agency: able to make plans in the real world and to carry them through.
Argument 1 for Open Sourcing AI: Keep the playing field level
If your number one issue is “Entrenched dictators / monopolies” then your gut reaction is likely to open source AI.
The argument is simple. We should keep a level playing field. We really don’t want one group to have superintelligence while the rest of us don’t.
Artificial superintelligence is a MASSIVE advantage. Forget the “super” for a second; let’s just imagine an AI at human-level intelligence with real human-like agency.
It can perform any work a human can.
It can work 1000-10000 times faster, depending on your compute.
It can be copied and pasted. You can make millions of them, depending on your compute.
They do not sleep, and they only do what you tell them to do.
If an entrenched dictator or monopoly has such a system and everyone else doesn’t… then they win. Full stop. End of story. Whatever they want to do, they will be able to do. They essentially gain the intellectual power of a nation state.
So how can we avoid being completely at their whim? You level the playing field: you make superintelligence open source.
Argument 1 for Closed Sourcing AI: Offense becomes more powerful than defense.
Consider those future scary problems again. Let’s now imagine your number one issue is “asshole creates a super virus” or “communications break down”. Well, now your gut reaction is likely to keep AI closed source.
The argument is simple. There are people out there who want to lash out at the world. Terrorists, madmen, whatever. If we are truly talking about SUPER intelligence, then it would be really scary if they got their hands on it.
With an open source ASI they could create a biological virus that kills millions. With an open source ASI they could create a computer virus that takes out all communication. These are domains where offense is much more powerful than defense. Even if we have ASIs creating medicines and vaccines, it is easier to attack than to defend.
With ASI anyone can become that asshole younger brother who knocks down your Lego tower. Except now that Lego tower is the infrastructure of civilization.
Notes for editors
Let’s keep adding credible arguments. Try to keep them short and sweet. At some point we can discuss solutions that synthesize both sides.