Let's play a game! I have a massive set of pictures. Your job is to draw a picture that fits in with my set.
If I can spot your image as a forgery, I win! But if I believe your image belongs in the set (and I don't recognize the forgery), then you win.
This is how Generative Adversarial Networks (GANs) work. Two neural networks are pitted against each other: one (the generator) produces new data that resembles a given dataset, while the other (the discriminator) tries to distinguish real data from generated data. The goal is for the generator to produce data that the discriminator can't tell apart from the real thing.
Both networks are trained at the same time, each improving based on whether or not the discriminator was fooled. This is a case where game theory is used to create an arms race between two networks so that both can learn and improve without labeled data.
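To make that concrete, here is a minimal sketch of the adversarial training loop in PyTorch. This is not the notebook's code, just an illustration: the "dataset" is a toy 1-D Gaussian, the generator maps noise to a single number, and the discriminator scores how real that number looks. The layer sizes, learning rates, and step count are arbitrary choices for the example.

```python
# Minimal GAN sketch: the generator learns to mimic samples from a 1-D
# Gaussian while the discriminator tries to tell real samples from fakes.
import torch
import torch.nn as nn

torch.manual_seed(0)

latent_dim = 8                     # size of the noise vector fed to the generator
real_mean, real_std = 4.0, 1.25    # the toy "real" data distribution

# Generator: noise -> one "data point"
G = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: data point -> probability it is real
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

batch = 64
for step in range(2000):
    # Train the discriminator: real samples should score 1, fakes 0.
    real = torch.randn(batch, 1) * real_std + real_mean
    fake = G(torch.randn(batch, latent_dim)).detach()   # detach: don't update G here
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Train the generator: try to make D label its fakes as real.
    fake = G(torch.randn(batch, latent_dim))
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

# If training went well, the generator's outputs roughly match the real mean/std.
with torch.no_grad():
    samples = G(torch.randn(1000, latent_dim))
print(f"generated mean={samples.mean().item():.2f}, std={samples.std().item():.2f}")
```

The key move is the alternation: the discriminator is updated on a batch of real and (detached) fake samples, then the generator is updated to push the discriminator's score on its fakes toward "real".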
Here is a Google Colab that you can play with.
WARNING: Training GANs can be really annoying.
On top of all the usual problems with training neural networks, GANs come with their own set of challenges:
Mode Collapse: The generator starts producing the same output (or a limited variety of outputs) over and over, failing to capture the diversity of the data. Hey, if it worked once, why not keep doing it?
Instability: The training process can become unstable, with either the generator or the discriminator pulling far ahead of the other, at which point the weaker network stops getting a useful training signal.
Sensitivity to Hyperparameters: GANs are often sensitive to the choice of hyperparameters, which can make the training process tricky (a sketch of commonly used starting values follows this list).
Evaluation Challenges: Unlike supervised learning models, evaluating the performance of GANs is less straightforward, often requiring subjective human judgment.
Resource Intensive: Training GANs usually requires a lot of computational power and time, especially for complex models and large datasets.
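On the hyperparameter point, here is a hedged sketch of settings that are often used as a starting point. The Adam values follow the widely cited DCGAN paper, and one-sided label smoothing is a common stabilization trick; what actually works on your data is something you'll have to tune.

```python
# Hedged sketch: commonly cited starting hyperparameters for GAN training.
import torch
import torch.nn as nn

def make_optimizers(G: nn.Module, D: nn.Module):
    # lr=2e-4 and beta1=0.5 are the DCGAN defaults; PyTorch's default
    # beta1=0.9 tends to make GAN training less stable in practice.
    opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
    return opt_G, opt_D

def smoothed_real_labels(batch_size: int, smoothing: float = 0.9):
    # One-sided label smoothing: train D against 0.9 instead of 1.0 for real
    # samples, so the discriminator doesn't get overconfident and starve the
    # generator of gradient.
    return torch.full((batch_size, 1), smoothing)
```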
My suggestion: It's better to play script kiddie and adapt an existing pretrained model than to train your own GAN from scratch.
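As a hedged sketch of what that looks like, here's one way to grab a pretrained generator instead of training your own. The repo string, model name, and the buildNoiseData/test methods below follow the PyTorch Hub listing for Facebook's pytorch_GAN_zoo (Progressive GAN) and are assumptions that may have changed since.

```python
# Hedged sketch: load a pretrained Progressive GAN (PGAN) generator from
# PyTorch Hub and sample a few images, instead of training from scratch.
# The repo, model_name, and method names are assumptions based on the
# pytorch_GAN_zoo hub example.
import torch

use_gpu = torch.cuda.is_available()
model = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub', 'PGAN',
                       model_name='celebAHQ-256', pretrained=True, useGPU=use_gpu)

num_images = 4
noise, _ = model.buildNoiseData(num_images)   # latent vectors shaped for this model
with torch.no_grad():
    generated_images = model.test(noise)      # batch of generated face images

print(generated_images.shape)
```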