Basic Theory of GANs (Generative Adversarial Networks)

A Generative Adversarial Network (GAN) is a type of machine learning framework developed by Ian Goodfellow in 2014. GANs are mainly used to generate new data samples similar to those they were trained on, for instance realistic pictures, music, and text. The basic concept of a GAN is to have two networks, a generator and a discriminator, play against each other in a game-theoretic setting that allows both to learn and improve over time.

GANs consist of two primary components:

1. Generator (G): The generator is tasked with producing artificial data samples, such as images, that are as hard as possible to distinguish from the training data. It accepts random noise (most often sampled from a normal or uniform distribution) as input and transforms this noise into a data sample that mimics the distribution of the real data. The generator's aim is to make the discriminator accept its fake outputs as real data.

2. Discriminator (D): The discriminator determines whether a given data sample is real or generated. It takes real and fake data samples as input and outputs the probability that the input came from the real data rather than from the generator. Its purpose is to separate real samples from fake ones.

GAN Architecture

A standard GAN architecture consists of:

Input to the generator: The generator takes random noise as input (often called the latent vector, typically sampled from a Gaussian distribution) and tries to map this noise into the data space, for example, images.

Generator network: The generator comprises layers of fully connected or, in the case of images, convolutional neural networks. Non-linear activation functions such as ReLU or Leaky ReLU are used to produce outputs resembling real data.

Discriminator network: The discriminator is usually a simple classifier network, typically a CNN when images are involved. It takes either real or generated data as input and outputs a binary label: real or fake.

Loss function: GANs are trained with a special loss function, the adversarial loss described below, in which the generator and the discriminator play a zero-sum game.

How GANs Work?

GANs are trained in successive rounds of updating the generator and the discriminator in what amounts to a minimax game.

Objective of the generator: To generate data that appears real; during training, it tries to keep the discriminator from correctly distinguishing real data from generated data.

Objective of the discriminator: To identify a given input as real or fake; it tries to maximize the rate at which it classifies samples correctly.

Training continues until the generator produces forgeries realistic enough that the discriminator can no longer reliably distinguish real samples from fake ones.
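To make the two roles concrete, here is a minimal PyTorch sketch of a generator and discriminator pair for flattened 28x28 images. The fully connected design, layer widths, and latent size of 100 are illustrative assumptions for this sketch, not details given in the text above.

```python
import torch
import torch.nn as nn

LATENT_DIM = 100   # size of the random noise vector (an illustrative choice)
IMG_DIM = 28 * 28  # flattened image size, e.g. MNIST-style 28x28 images

class Generator(nn.Module):
    """Maps a latent noise vector to a fake data sample."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, IMG_DIM),
            nn.Tanh(),  # outputs in [-1, 1], matching images normalized to that range
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Outputs the probability that a sample came from the real data."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),  # probability of "real"
        )

    def forward(self, x):
        return self.net(x)

# Sampling a batch of fakes: noise in, data-shaped tensor out.
z = torch.randn(16, LATENT_DIM)
fake_images = Generator()(z)  # shape: (16, 784)
```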
Training Process of GANs

The two alternating training steps of a GAN are as follows:

1. Training the discriminator (D): The discriminator is trained on two batches of data: real samples from the training set and fake samples produced by the generator. It learns to maximize the likelihood of correctly distinguishing real samples from fake ones; in other words, the discriminator's loss is that of a binary classifier over the two classes, real and fake.

2. Training the generator (G): The generator learns to deceive the discriminator; it tries to minimize the discriminator's ability to recognize its generated (artificial) data as fake. The generator's loss therefore depends on how often the data it produces is classified as real by the discriminator.

These two steps alternate: first the discriminator updates to become better at distinguishing real from fake data, then the generator updates to become better at fooling the discriminator.

Adversarial Loss in GANs

The training process of GANs revolves around the adversarial loss, which reflects the competition between the generator and discriminator. The adversarial loss is formulated as a minimax optimization problem: the discriminator maximizes the probability of correctly classifying real and fake samples, while the generator tries to reduce the discriminator's ability to identify generated data as fake. The overall objective function of the GAN is:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

where D(x) is the discriminator's estimate that x is real, and G(z) is a sample generated from noise z.
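The alternating procedure can be written down directly. The sketch below reuses the Generator and Discriminator classes from the earlier example and assumes a real_loader yielding batches of flattened real images; the optimizer settings are illustrative, not taken from the text.

```python
import torch
import torch.nn as nn

G, D = Generator(), Discriminator()  # defined in the earlier sketch
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for real in real_loader:  # assumed: yields (batch, IMG_DIM) real samples
    batch = real.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Step 1: train D to label real data 1 and fake data 0 (maximize V(D, G)).
    z = torch.randn(batch, LATENT_DIM)
    fake = G(z).detach()  # detach so this step does not update G
    loss_D = bce(D(real), ones) + bce(D(fake), zeros)
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Step 2: train G so that D labels its fakes as real. This uses the common
    # "non-saturating" form (maximize log D(G(z))) rather than minimizing
    # log(1 - D(G(z))), because it gives stronger gradients early in training.
    z = torch.randn(batch, LATENT_DIM)
    loss_G = bce(D(G(z)), ones)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
```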
TYPES OF GAN

1. DCGAN (Deep Convolutional GAN)

Overview: DCGAN is one of the best-known GAN variants. It incorporated CNNs into the GAN framework to attain far better image quality and training stability. In a DCGAN, the generator and the discriminator are both built from convolutional layers instead of fully connected layers, which proves especially effective for image data.

Architecture highlights:

Convolution: Both the generator and discriminator use convolutional layers, making them more effective for image generation (see the sketch after this list).

Batch normalization: Both networks apply batch normalization, which stabilizes training and leads to faster convergence.

Leaky ReLU activation: Leaky ReLU is used in the discriminator, and ReLU in the generator.

No fully connected layers: Neither network uses fully connected layers, which keeps the architecture simple and efficient for processing images.

Applications:

Image generation: DCGANs are primarily used to generate high-quality images, for example of objects, landscapes, or even faces.

Art creation: DCGANs have been used by artists and researchers alike to create striking pieces of art.

Super-resolution: DCGANs can be used to enhance the resolution of low-quality images.
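A minimal sketch of a DCGAN-style generator following the highlights above: transposed convolutions in place of fully connected layers, batch normalization, and ReLU activations. The 64x64 RGB output size and channel counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Upsamples a latent vector to a 64x64 RGB image with transposed convolutions."""
    def __init__(self, latent_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            # The latent vector is treated as a (latent_dim, 1, 1) feature map.
            nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8),
            nn.ReLU(True),                                    # -> 4x4
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4),
            nn.ReLU(True),                                    # -> 8x8
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2),
            nn.ReLU(True),                                    # -> 16x16
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat),
            nn.ReLU(True),                                    # -> 32x32
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),
            nn.Tanh(),                                        # -> 64x64 RGB in [-1, 1]
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

img = DCGANGenerator()(torch.randn(1, 100))  # shape: (1, 3, 64, 64)
```

The matching DCGAN discriminator would mirror this with strided convolutions and Leaky ReLU activations, as the highlights above describe.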
Example: On the CelebA dataset, DCGANs succeeded in producing high-resolution images of celebrity faces.

2. WGAN (Wasserstein GAN)

Introduction: The Wasserstein GAN (WGAN) was introduced to overcome problems such as instability and mode collapse, which are commonly seen in standard GAN models. WGAN gives the generator a smoother gradient than the traditional GAN loss, an improvement which generally promotes stable training. WGAN also improves convergence during training, allowing the generator to keep learning over time.

Architecture highlights:

Wasserstein loss: The key innovation is the use of the Earth Mover's (Wasserstein) distance to quantify the difference between the real and generated data distributions, providing a more meaningful learning signal than the original GAN loss.

Weight clipping: To enforce the Lipschitz constraint required by the Wasserstein distance, the weights of the critic are clipped to a small range after each update.

New discriminator: WGAN replaces the binary-classifier discriminator with a critic that scores how "real" a sample looks instead of classifying it as real or fake.

Applications:

High-quality image generation: Being stable in training, WGAN is preferred for generating high-quality images.

Text-to-image synthesis: WGAN is a potential fit for synthesizing images from textual descriptions, providing ways to create visuals from text input.

3D shape generation: WGANs can be used to generate 3D shapes, which is useful for virtual environments, games, and 3D printing.

Example: On the LFW dataset (Labelled Faces in the Wild), WGANs have produced more realistic and varied faces than standard GANs.
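A sketch of the two WGAN update steps described above, assuming a critic network shaped like a discriminator but with a scalar, unbounded output (no final sigmoid). The clip value of 0.01 matches the original WGAN paper's default; the function and variable names are illustrative.

```python
import torch

# Assumed to exist: critic (scalar-output network), generator G,
# optimizers opt_C and opt_G, real batches `real`, and noise batches `z`.

def critic_step(critic, G, opt_C, real, z, clip=0.01):
    """One critic update: maximize E[critic(real)] - E[critic(fake)]."""
    fake = G(z).detach()
    # Negated because optimizers minimize; the positive quantity estimates
    # the Wasserstein distance between real and generated distributions.
    loss = -(critic(real).mean() - critic(fake).mean())
    opt_C.zero_grad()
    loss.backward()
    opt_C.step()
    # Weight clipping: a crude way to keep the critic approximately 1-Lipschitz.
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-clip, clip)
    return loss

def generator_step(critic, G, opt_G, z):
    """Generator update: maximize E[critic(fake)], i.e. minimize its negation."""
    loss = -critic(G(z)).mean()
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()
    return loss
```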
3. CycleGAN

Overview: CycleGAN is an architecture for unpaired image-to-image translation. Its advantage is that it learns transformations between two different image domains, such as horses to zebras or photos to paintings, without needing paired training data.

Key idea: Cycle consistency: the model ensures that a translation from one domain to the other, followed by the reverse translation, recovers the original image.

Architecture highlights:

Cycle consistency loss: This loss ensures that translating an image from domain A to domain B and back to A yields (approximately) the original image, preventing the model from learning arbitrary transformations (see the sketch after this section).

Two generators and two discriminators: CycleGAN uses two generators, one for each translation direction, and two discriminators, one for each domain, to perform bidirectional translation between the domains.

Unsupervised learning: Unlike GAN variants that require paired input-output data, CycleGAN makes do with unpaired data, making it extremely flexible.

Applications:

Image style transfer: CycleGAN is widely used for style transfer tasks such as converting photographs into paintings (for example in Van Gogh or Monet styles), changing the season of a landscape, or even changing the breed of an animal.

Medical imaging: In healthcare, CycleGAN is used to convert scans from one modality to another, such as transforming MRI into CT-like scans, helping cross-modality medical image analysis.

Domain adaptation: CycleGANs can adapt models trained on one domain to operate on another, for example by transforming synthetic images into realistic ones to improve model performance on real-world tasks.

Example: Horse-to-zebra transformation: one of the best-known applications of CycleGAN is transforming horse images into zebra images, and vice versa, using no paired training data.
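The cycle consistency idea reduces to a simple reconstruction penalty. The sketch below assumes two generator networks, G_AB (domain A to B) and G_BA (B to A); the L1 distance and the weight of 10.0 follow common CycleGAN practice but are assumptions here, not values given in the text.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G_AB, G_BA, real_A, real_B, weight=10.0):
    """Translate A->B->A and B->A->B; penalize failure to recover the originals."""
    recovered_A = G_BA(G_AB(real_A))  # e.g. horse -> zebra -> horse
    recovered_B = G_AB(G_BA(real_B))  # e.g. zebra -> horse -> zebra
    return weight * (l1(recovered_A, real_A) + l1(recovered_B, real_B))

# In training, this term is added to the two adversarial losses (one per
# discriminator); it anchors the unpaired translation so the generators
# cannot drift to arbitrary mappings between the domains.
```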