What’s in the box?! – Towards interpretability by distinguishing niches of value within neural networks.

Current king of the hill (#1), submitted by Josh on 2/27/2024.

This king text has no current edits attempting to take its place.

Man, it took me a while to get through it. The ideas are really interesting; I just think it's too much of a slog. You have (understandably) coined new words for various concepts (like inout, triggers, etc.), and this makes it hard for the reader.

In particular, this makes the intro really hard. It kind of reads like gobbledygook. That is, until you read the entire thing; then you look back and it all makes sense. If you want more people to read it (which you should; once again, it's a really interesting perspective on interpretability), I would rewrite the intro, and perhaps the abstract, to account for this.

By jake-denning
