Think of a neural network as a student taking a test. The network guesses answers, and a "teacher" (called the loss function) grades them. If the guesses are wrong, another process (backpropagation) helps the network learn from its mistakes. This cycle of guess, grade, and learn keeps repeating, and with each pass the network gets a little smarter.
The humble neuron
Artificial neurons are the building blocks in neural networks that power various forms of machine learning and AI. These fundamental components take in data (inputs), weigh their importance (weights), and make decisions (activation function) to produce a specific output.
An artificial neuron has four main parts: Inputs, Weights, an Activation Function, and the Output. Let's explore this step by step, as illustrated in the visual aid above (a short code sketch follows the list):
1. Inputs: The neuron receives data, known as inputs, from external sources or other neurons.
2. Weights: These inputs are then assigned a "weight," which determines their significance in the upcoming calculations.
3. Activation Function: The weighted inputs are summed (often together with a bias term) and passed through an activation function. This function decides how strongly the neuron responds to the data it receives.
4. Output: Finally, the activation function produces an output, which can either be a final result or serve as input for another neuron.
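To make these four parts concrete, here is a minimal sketch of a single neuron in Python. The sigmoid activation and the specific numbers are illustrative assumptions, not the only possible choices:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted inputs are summed and
    passed through a sigmoid activation to produce an output."""
    # 1. Inputs and 2. Weights: combine each input with its weight.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # 3. Activation function: sigmoid squashes the sum into (0, 1).
    return 1 / (1 + math.exp(-z))

# 4. Output: a value that can be a final result or feed another neuron.
print(neuron([0.5, 0.8], [0.4, 0.9], bias=0.1))  # ~0.73
```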
Neurons can be applied abstractly to many different types of problems, so the meaning of the output decision depends on the context.
Holistic understanding of networks
The solar system is a network: a network of objects connected by gravity. There are inputs and outputs (space debris, meteors, etc.), but these inputs and outputs are not directed toward any goal. The solar system is not a learning network.
A learning network may start dumb, but over time it learns. It does this because a learning network always contains some sort of feedback loop.
The general feedback loop is as follows (a toy code sketch follows the list):
Inputs enter the network.
Given the inputs and the network internal structure, outputs are created.
Those outputs are compared to some measurable goal.
The network is updated based on how well the outputs measured up to the goal.
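As a toy, self-contained sketch of this loop (everything here is an illustrative assumption, not a real learning system), imagine a "network" that is just a single number trying to match a goal:

```python
# A toy learning network: the "network" is one number, and the
# measurable goal is for its output to equal 7.0.
goal = 7.0
weight = 0.0                 # the network's internal structure
for step in range(20):
    output = weight          # the network produces an output
    error = goal - output    # the output is compared to the goal
    weight += 0.3 * error    # the network is updated based on the comparison
print(round(weight, 3))      # ~6.994, converging toward the goal
```

Real networks have vastly more structure, but the loop is the same: produce, measure, update.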
This general feedback loop can, with some interpretation, be applied to evolution, your brain, the economy, and every other machine learning paradigm.
Neural networks share this trait, as they are learning networks. To understand different types of neural networks, keep the different parts of this feedback loop in mind. Always consider: What are the inputs? What is the network architecture? What are the outputs? What is the measurement system? How is the network being updated?
A neural network is a collection of artificial neurons connected so that signals flow from inputs to outputs.
Like all learning networks, it has a feedback loop. This feedback loop takes the outputs of the neural network and measures them against some sort of label.
This comparison is done using a loss function, which measures how closely the outputs and the labels match. You can imagine it as a teacher who grades the network's answers against a rubric.
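As a minimal sketch, here is one common loss function, mean squared error (the choice of MSE is an illustrative assumption; many other loss functions exist):

```python
# Mean squared error: the average squared gap between outputs and labels.
def mse_loss(outputs, labels):
    return sum((o - l) ** 2 for o, l in zip(outputs, labels)) / len(labels)

print(mse_loss([0.9, 0.2], [1.0, 0.0]))  # 0.025 -> small loss, good guesses
```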
Then backpropagation occurs. This is the magic that makes the network learn: it adjusts the neurons' weights based on the loss, fine-tuning the network. You can imagine it as the teacher beating up the network if it has done poorly!
Over and over this feedback cycle continues. Inputs are provided and the network generates outputs; the outputs are compared to labels by the loss function; and backpropagation changes the network based on how well it performed. This is how neural networks learn.
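Putting the whole cycle together, here is a minimal, self-contained sketch: a single sigmoid neuron learning the logical OR function. The task, the squared-error loss, and the hand-derived gradients are all illustrative assumptions; real libraries automate the backpropagation step:

```python
import math
import random

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # inputs, labels
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 1.0  # learning rate: how big each update step is

for epoch in range(2000):
    for x, label in data:
        # Forward pass: inputs flow through the neuron to an output.
        z = w[0] * x[0] + w[1] * x[1] + b
        out = 1 / (1 + math.exp(-z))
        # Loss function: squared error, (out - label) ** 2.
        # Backpropagation: chain rule gives the gradient of the loss,
        # using sigmoid'(z) = out * (1 - out).
        grad = 2 * (out - label) * out * (1 - out)
        # Update: nudge each weight against its gradient.
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

for x, label in data:
    z = w[0] * x[0] + w[1] * x[1] + b
    print(x, label, round(1 / (1 + math.exp(-z)), 2))  # outputs near the labels
```

After training, the printed outputs sit close to 0 for [0, 0] and close to 1 for the rest: the cycle of guess, grade, and learn in miniature.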
Different types of neural networks
The neural network itself can be arranged in many different ways. You can think of the network a bit like a Lego structure, with different arrangements being better suited to different situations.
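As a rough illustration (the layer-size representation below is my own assumption, not something from the original), you can describe an arrangement as a list of layer sizes and see how the same bricks stack into different shapes:

```python
# Two hypothetical arrangements built from the same neuron "bricks".
shallow_and_wide = [2, 32, 1]      # inputs -> one wide hidden layer -> output
deep_and_narrow = [2, 8, 8, 8, 1]  # inputs -> three narrow layers -> output

def count_weights(layer_sizes):
    """Count the connection weights in a fully connected arrangement."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

print(count_weights(shallow_and_wide))  # 96
print(count_weights(deep_and_narrow))   # 152
```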