Yeah, that is entirely possible. But there are other options to consider:
1. The current reigning theory of how the human brain is bootstrapped is that it constantly predicts the next set of patterns (whether in space or time). In other words, we might be bootstrapped in a similar (but not identical) manner. See Jeff Hawkins' memory-prediction framework.
2. They did not release the architecture or training details, but I have to think they have moved beyond just training it on next-word prediction. If nothing else, it probably got a similar fine-tuning treatment (e.g., RLHF) as ChatGPT.
3. This argument can be used forever, and there is no way of disproving it. We could have AGIs running the world government and still be arguing over whether they are just predicting the next logical set of words. So we should not use this argument to say, "everything is fine, there is no need for concern."
IDK, it is a large language model; it's trained to guess the next set of words. Couldn't it do all of this just by... guessing the next set of words given the context and query?
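To make concrete what "guessing the next set of words" means mechanically, here is a minimal toy sketch (not GPT-4's actual method, whose details are unreleased): the model assigns probabilities to candidate next tokens given the context, samples one, appends it, and loops. The bigram table and probabilities below are invented for illustration; a real LLM conditions on the full context with learned transformer weights, but the autoregressive loop has the same shape.

```python
import random

# Hypothetical toy "model": made-up bigram probabilities
# P(next_word | current_word), invented purely for illustration.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_word(context):
    """Sample the next word given the context.

    This toy model only looks at the last word; a real LLM
    conditions on the entire context window.
    """
    dist = BIGRAMS.get(context[-1])
    if dist is None:
        return None  # no known continuation; stop generating
    words = list(dist)
    weights = [dist[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

def generate(prompt, max_new=5):
    """Autoregressive loop: predict one word, append it, repeat."""
    out = list(prompt)
    for _ in range(max_new):
        w = next_word(out)
        if w is None:
            break
        out.append(w)
    return out

print(" ".join(generate(["the"])))  # e.g. "the cat sat down"
```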