The problems with LLMs and the paths being explored to fix them.

Our current AI models have serious problems. They do not plan. They gravitate towards normalcy. Their world model is NOT a set of functional components. That does not mean these issues cannot be solved. Current LLMs are likely overhyped. Future AI ...

Are Large Language Models Glorified Auto-text? Where Does LLM Intelligence Fail?

Where are the weaknesses in large language models? Do they have predictable failings? Can I (playground-bully style) call them glorified auto-text, a stochastic parrot, a bloody word calculator?

Glorified auto-text?

As an oversimplified summary, Lar...

Mitigating the Potential Dangers of Artificial Intelligence

The bots are coming! Artificial intelligence’s abilities are growing. What are the potential dangers? This is the hub for discussing this coming future.

It is unclear exactly how capable AI is going to become. Will they be narrow intelligences or ge...

Artificial Intelligence: The Alignment Problem

Artificial intelligence promises two possible futures: one of remarkable promise and another of daunting danger.

These futures are advancing rapidly. You could find yourself in a very different world very soon. To navigate this changing landscape, we need to focus on the alignment problem. We need to make sure Artificial Intelligence is aligned ...

More to explore:

Generating social media posts - AI prompting

Most AI prompting for social media is automated t...

AI Prompting collaborative notes

Everyone and their grandma is a prompt engineer ap...


Summarizing and understanding academic articles - AI Prompting

Let's summarize academic articles!

Why? Because 1. ...