The Core Problem
Traditional autoencoders compress an input into a single point in a low-dimensional “latent space.” Because nothing constrains the structure of that space, points between or away from the training encodings often decode to meaningless outputs, which makes generation unreliable.

A Probabilistic Approach
Instead of mapping an input to a single point, a VAE maps it to a probability distribution (specifically a Gaussian).
- The Encoder: Predicts the parameters of that distribution: the Mean (μ) and the Variance (σ²).
- The Latent Space: By representing data as overlapping “clouds” rather than points, the space becomes continuous.
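As a minimal sketch of the encoder idea (the layer sizes, weight initialization, and function names here are illustrative assumptions, not from the original), a toy numpy encoder maps an input vector to the two distribution parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not from the text).
input_dim, hidden_dim, latent_dim = 784, 128, 2

# Randomly initialized weights stand in for a trained network.
W_h = rng.normal(0, 0.01, (input_dim, hidden_dim))
W_mu = rng.normal(0, 0.01, (hidden_dim, latent_dim))
W_logvar = rng.normal(0, 0.01, (hidden_dim, latent_dim))

def encode(x):
    """Map an input to the mean and log-variance of a Gaussian q(z|x)."""
    h = np.tanh(x @ W_h)
    mu = h @ W_mu              # Mean of q(z|x)
    log_var = h @ W_logvar     # Predicting log sigma^2 keeps the variance positive
    return mu, log_var

x = rng.normal(size=input_dim)
mu, log_var = encode(x)
print(mu.shape, log_var.shape)   # (2,) (2,)
```

Predicting the log-variance rather than the variance itself is a common convention, since the network output is then unconstrained while σ² = exp(log σ²) stays positive.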
The Objective Function: ELBO
To train a VAE, we maximize the Evidence Lower Bound (ELBO). This objective balances reconstruction accuracy with latent space organization.

Mathematical Derivation of ELBO
The goal is to maximize the probability of our data, expressed as the log density log p(x). Since calculating this directly is intractable, we use marginalization to introduce the latent variable z.

Step 1: Marginalization
log p(x) = log ∫ p(x, z) dz

Step 2: The Variational Trick
We multiply and divide by the approximate posterior q(z|x) (our Encoder) to express the integral as an expectation:
log p(x) = log ∫ q(z|x) · p(x, z)/q(z|x) dz = log E_q(z|x)[ p(x, z)/q(z|x) ]

Step 3: Jensen’s Inequality
Because the logarithm function is concave, we can “swap” the log and the expectation to find the lower bound:
log p(x) ≥ E_q(z|x)[ log p(x, z)/q(z|x) ] = ELBO

Step 4: Final Decomposition
Using Bayes’ Formula (p(x, z) = p(x|z) · p(z)), we can break the ELBO into the two components used for training:
ELBO = E_q(z|x)[ log p(x|z) ] − KL( q(z|x) ‖ p(z) )

1. Reconstruction Loss (E_q[log p(x|z)])
This term represents the Likelihood. It measures how well the Decoder can recreate the original data x given a latent sample z. Under a Gaussian assumption, this is typically implemented as Mean Squared Error (MSE): ‖x − x̂‖².

2. KL Divergence
This term measures the “distance” between the approximate posterior q(z|x) and the prior p(z). We typically assume the prior is a Standard Normal Distribution N(0, 1). For a univariate Gaussian, the closed-form solution is:
KL( N(μ, σ²) ‖ N(0, 1) ) = ½ (μ² + σ² − log σ² − 1)

The Tug-of-War: The reconstruction term wants to separate data to ensure accuracy (scattering), while the KL term wants to pull all data toward the center (overlapping). This tension creates a smooth, navigable latent space.
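Two pieces of the math above are easy to sanity-check numerically (a verification sketch; the sample sizes and grid bounds are arbitrary choices): Jensen’s inequality from Step 3, and the closed-form KL against direct integration of ∫ q(z) log(q(z)/p(z)) dz.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Jensen's inequality: for concave log, log E[X] >= E[log X] ---
# Positive samples standing in for the ratio p(x, z) / q(z|x).
ratios = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
log_of_mean = np.log(ratios.mean())   # log E[.]  (the true log-evidence side)
mean_of_log = np.log(ratios).mean()   # E[log .]  (the ELBO-style lower bound)
assert mean_of_log <= log_of_mean

# --- Closed-form KL vs. direct numerical integration ---
def kl_closed_form(mu, var):
    """KL( N(mu, var) || N(0, 1) ) via the closed-form expression."""
    return 0.5 * (mu**2 + var - np.log(var) - 1.0)

def kl_numeric(mu, var):
    """Approximate the same KL by integrating q(z) * log(q(z)/p(z)) on a grid."""
    z = np.linspace(-10.0, 10.0, 200_001)
    dz = z[1] - z[0]
    q = np.exp(-((z - mu) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var)
    p = np.exp(-(z**2) / 2) / np.sqrt(2 * np.pi)
    return np.sum(q * np.log(q / p)) * dz

mu, var = 1.5, 0.5
print(kl_closed_form(mu, var))   # ≈ 1.22
print(kl_numeric(mu, var))       # matches to several decimal places
```

The gap between log E[·] and E[log ·] is exactly the error the ELBO incurs; training tightens it by improving q(z|x).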
The Reparameterization Trick
In standard backpropagation, you cannot flow gradients through a random sampling operation (z ~ N(μ, σ²)). To solve this, we move the randomness to an external variable ε.

Mathematical Deduction
We define the latent vector as a deterministic function:
z = μ + σ ⊙ ε,  where ε ~ N(0, I)

By treating ε as a constant during the backward pass, we can calculate gradients for μ and σ directly:
∂z/∂μ = 1,  ∂z/∂σ = ε

Capabilities & Trade-offs
Smooth Interpolation
You can “walk” between two latent vectors to seamlessly blend features
(e.g., changing a smile to a frown).
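Latent interpolation is just a walk between two encoded vectors; each intermediate z would then be fed to the decoder. A sketch with made-up latent codes (no trained model involved):

```python
import numpy as np

z_smile = np.array([1.0, -0.5])    # Hypothetical latent codes for two inputs
z_frown = np.array([-1.0, 1.5])

# Linear interpolation: t = 0 is pure z_smile, t = 1 is pure z_frown.
for t in np.linspace(0.0, 1.0, 5):
    z = (1 - t) * z_smile + t * z_frown
    print(t, z)    # decoder(z) would render the blended image at this step
```

Because the KL term keeps the encodings overlapping, every intermediate z sits in a populated region of the space and decodes to a plausible blend rather than noise.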
Data Generation
Generate entirely new samples by drawing random vectors from the standard
normal prior.
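Generation reduces to drawing from the prior and decoding. In this sketch the decoder is a hypothetical stand-in function; only the sampling step is the actual technique described above:

```python
import numpy as np

rng = np.random.default_rng(3)
latent_dim = 2

def decoder(z):
    # Hypothetical stand-in for a trained decoder network.
    return np.tanh(z.sum())

# Draw a new latent vector from the standard normal prior N(0, I) ...
z_new = rng.standard_normal(latent_dim)
# ... and decode it into a brand-new sample.
sample = decoder(z_new)
print(z_new, sample)
```

Sampling from N(0, I) works precisely because the KL term trained the encoder’s distributions to stay close to that prior.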
Limitations
- Blurriness: VAEs tend to produce softer images than GANs. This is because the MSE loss encourages the model to “average” its predictions when uncertain.
- Sharpness: While foundational for models like Stable Diffusion, vanilla VAEs struggle with high-resolution, sharp details without advanced modifications such as vector quantization (VQ-VAE).
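The “averaging” behavior behind the blurriness can be seen in a toy calculation (illustrative numbers): when two sharp outcomes are equally likely, the MSE-optimal single prediction is their mean, not either sharp outcome.

```python
import numpy as np

# Two equally likely "sharp" targets, e.g., an edge shifted left vs. right.
targets = np.array([0.0, 1.0])

# Scan candidate predictions and compute the expected MSE for each.
candidates = np.linspace(-0.5, 1.5, 2001)
mse = np.mean((candidates[:, None] - targets[None, :]) ** 2, axis=1)

# The minimizer is the mean of the targets: a blurry compromise.
best = candidates[np.argmin(mse)]
print(best)   # 0.5
```

A GAN’s adversarial loss instead rewards committing to one sharp mode, which is one intuition for why GAN samples look crisper.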
Resources
- Original Paper: Auto-Encoding Variational Bayes (Kingma & Welling, 2013)
- Concepts: ELBO, Reparameterization Trick, Latent Variables.