Generative AI isn’t just about Large Language Models. At its core, generative AI is about creating new data from scratch. While standard autoencoders are excellent for compression, they fail as generative models. This documentation explores the Variational Autoencoder (VAE), introduced by Kingma & Welling (2013).

The Core Problem

Traditional autoencoders compress an image into a single point in a low-dimensional “latent space.”
The Discontinuity Gap: Because the latent space is not regularized, it is often disorganized. Sampling a random point between two trained clusters often results in “gibberish” because the decoder hasn’t learned to interpret those empty regions.

A Probabilistic Approach

Instead of mapping an input to a single point, a VAE maps it to a probability distribution (specifically a Gaussian).
  • The Encoder: Predicts the parameters of the distribution: Mean ($\mu$) and Variance ($\sigma^2$).
  • The Latent Space: By representing data as overlapping “clouds” rather than points, the space becomes continuous.
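As a minimal sketch of what the encoder computes (the dimensions and the single linear layer are illustrative assumptions; a real VAE encoder is a deep network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 784-dim input (e.g. a flattened 28x28 image),
# 2-dim latent space.
IN_DIM, LATENT_DIM = 784, 2

# A single linear layer standing in for the encoder network.
W_mu = rng.normal(scale=0.01, size=(LATENT_DIM, IN_DIM))
W_logvar = rng.normal(scale=0.01, size=(LATENT_DIM, IN_DIM))

def encode(x):
    """Map an input to the parameters of a diagonal Gaussian."""
    mu = W_mu @ x
    log_var = W_logvar @ x  # predicting log(sigma^2) keeps the variance positive
    return mu, log_var

x = rng.normal(size=IN_DIM)
mu, log_var = encode(x)
print(mu.shape, log_var.shape)  # (2,) (2,)
```

Predicting $\ln \sigma^2$ instead of $\sigma^2$ directly is a common convention: the network output can be any real number, and exponentiation guarantees a positive variance.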

The Objective Function: ELBO

To train a VAE, we maximize the Evidence Lower Bound (ELBO). This objective balances reconstruction accuracy with latent space organization.

Mathematical Derivation of ELBO

The goal is to maximize the log-likelihood of our data, $\ln p(x)$. Since calculating this directly is intractable, we use marginalization to introduce the latent variable $z$.

Step 1: Marginalization

$$\ln p(x) = \ln \int p(x, z)\, dz$$

Step 2: The Variational Trick

We multiply and divide by the approximate posterior $q(z|x)$ (our Encoder) to express the integral as an expectation:

$$\ln p(x) = \ln \mathbb{E}_{z \sim q(z|x)} \left[ \frac{p(x, z)}{q(z|x)} \right]$$

Step 3: Jensen’s Inequality

Because the logarithm is concave, we can “swap” the log and the expectation to obtain a lower bound:

$$\ln p(x) \ge \mathbb{E}_{z \sim q(z|x)} \left[ \ln \frac{p(x, z)}{q(z|x)} \right]$$

Step 4: Final Decomposition

Using the product rule $p(x, z) = p(x|z)\,p(z)$, the ELBO splits into the two components used for training:

$$\text{ELBO} = \underbrace{\mathbb{E}_{z \sim q(z|x)}[\ln p(x|z)]}_{\text{Reconstruction}} - \underbrace{D_{KL}\big(q(z|x) \,\|\, p(z)\big)}_{\text{Regularization}}$$
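The lower-bound property can be verified numerically. The toy model below uses a binary latent $z$ (an assumption made purely so the marginalization is exact and checkable; real VAEs use continuous $z$): for any choice of $q$, the ELBO stays below $\ln p(x)$, and it is tight exactly when $q$ equals the true posterior.

```python
import math

def gauss_pdf(x, mean, var):
    """Density of N(mean, var) evaluated at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Toy generative model (all numbers are arbitrary illustrative choices).
prior = {0: 0.5, 1: 0.5}                  # p(z)
like = {0: (-1.0, 1.0), 1: (2.0, 0.5)}    # p(x|z) = N(mean, var)

x = 0.3
# Exact marginal: ln p(x) = ln sum_z p(x|z) p(z)
log_px = math.log(sum(gauss_pdf(x, *like[z]) * prior[z] for z in (0, 1)))

def elbo(q):
    """E_q[ ln p(x, z) - ln q(z) ] for a discrete q over z."""
    return sum(
        q[z] * (math.log(gauss_pdf(x, *like[z]) * prior[z]) - math.log(q[z]))
        for z in (0, 1)
    )

# An arbitrary q still gives a valid lower bound ...
q_arbitrary = {0: 0.7, 1: 0.3}
# ... and the true posterior makes the bound tight.
post = {z: gauss_pdf(x, *like[z]) * prior[z] / math.exp(log_px) for z in (0, 1)}

print(round(log_px, 4), round(elbo(q_arbitrary), 4), round(elbo(post), 4))
```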

1. Reconstruction Loss ($L_2$)

This term represents the Likelihood. It measures how well the Decoder can recreate the original data $x$ given a latent sample $z$. Under a Gaussian assumption, this is typically implemented as Mean Squared Error (MSE):

$$\mathcal{L}_{\text{recon}} = \sum_i (x_i - \hat{x}_i)^2$$

2. KL Divergence

This term measures the “distance” between the approximate posterior $q(z|x)$ and the prior $p(z)$. We typically assume the prior is a Standard Normal Distribution, $p(z) = \mathcal{N}(0, 1)$. For a univariate Gaussian, the closed-form solution is:

$$D_{KL} = \frac{1}{2} \left( \sigma^2 + \mu^2 - 1 - \ln \sigma^2 \right)$$
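The closed form can be sanity-checked against a Monte Carlo estimate of $\mathbb{E}_q[\ln q(z) - \ln p(z)]$ (the specific $\mu$, $\sigma^2$, and sample count below are arbitrary test values):

```python
import numpy as np

def kl_closed_form(mu, sigma2):
    """KL( N(mu, sigma2) || N(0, 1) ), univariate closed form."""
    return 0.5 * (sigma2 + mu ** 2 - 1.0 - np.log(sigma2))

rng = np.random.default_rng(0)
mu, sigma2 = 0.5, 2.0

# Monte Carlo estimate: average ln q(z) - ln p(z) over samples z ~ q.
z = rng.normal(mu, np.sqrt(sigma2), size=1_000_000)
log_q = -0.5 * ((z - mu) ** 2 / sigma2 + np.log(2 * np.pi * sigma2))
log_p = -0.5 * (z ** 2 + np.log(2 * np.pi))
mc = float(np.mean(log_q - log_p))

print(round(kl_closed_form(mu, sigma2), 4), round(mc, 4))
```

Note also that the formula is zero exactly when $\mu = 0$ and $\sigma^2 = 1$, i.e. when the posterior already matches the prior.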
The Tug-of-War: $L_2$ wants to separate data to ensure accuracy (scattering), while $D_{KL}$ wants to pull all data toward the center (overlapping). This tension creates a smooth, navigable latent space.

The Reparameterization Trick

In standard backpropagation, you cannot flow gradients through a random sampling operation ($z \sim \mathcal{N}(\mu, \sigma^2)$). To solve this, we move the randomness to an external variable $\epsilon$.

Mathematical Deduction

We define the latent vector $z$ as a deterministic function:

$$z = \mu + \sigma \odot \epsilon, \quad \text{where} \quad \epsilon \sim \mathcal{N}(0, I)$$

By treating $\epsilon$ as a constant during the backward pass, we can calculate gradients for $\mu$ and $\sigma$ directly:

$$\frac{\partial z}{\partial \mu} = 1, \qquad \frac{\partial z}{\partial \sigma} = \epsilon$$
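These gradients can be confirmed with finite differences on the scalar case (the values of $\mu$, $\sigma$, and the step size are arbitrary; a deep-learning framework's autograd would compute the same thing):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.normal()  # sampled once, then treated as a constant

def z(mu, sigma):
    """Reparameterized sample: deterministic given eps."""
    return mu + sigma * eps

# Central finite differences should match dz/dmu = 1 and dz/dsigma = eps.
mu, sigma, h = 0.3, 0.8, 1e-6
dz_dmu = (z(mu + h, sigma) - z(mu - h, sigma)) / (2 * h)
dz_dsigma = (z(mu, sigma + h) - z(mu, sigma - h)) / (2 * h)

print(round(dz_dmu, 6), round(dz_dsigma - eps, 6))
```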

Capabilities & Trade-offs

Smooth Interpolation

You can “walk” between two latent vectors to seamlessly blend features (e.g., changing a smile to a frown).
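A simple linear walk between two latent codes looks like this (the codes `z_smile` and `z_frown` are hypothetical; in practice each interpolated vector would be fed to the trained decoder to render the blend):

```python
import numpy as np

def interpolate(z_a, z_b, steps=5):
    """Linear interpolation between two latent vectors. (Assumption:
    linear paths suffice in a well-regularized VAE latent space;
    spherical interpolation is sometimes preferred.)"""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1 - t) * z_a + t * z_b for t in ts]

z_smile = np.array([1.0, -0.5])   # hypothetical latent codes
z_frown = np.array([-1.0, 0.5])
path = interpolate(z_smile, z_frown)
print(len(path), path[2])  # 5 [0. 0.] -- the midpoint is the average
```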

Data Generation

Generate entirely new samples by drawing random vectors from the standard normal prior.
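Sampling from the prior is a one-liner (the latent dimension and batch size are illustrative; in a trained VAE each row would be passed through the decoder to produce a new sample):

```python
import numpy as np

rng = np.random.default_rng(42)
LATENT_DIM = 2  # illustrative; real models use larger latent spaces

# Draw latent vectors from the standard normal prior p(z) = N(0, I).
z_batch = rng.standard_normal((4, LATENT_DIM))
print(z_batch.shape)  # (4, 2)
```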

Limitations

  • Blurriness: VAEs tend to produce softer images than GANs. This is because the $L_2$ loss encourages the model to “average” its predictions when uncertain.
  • Fidelity: While foundational for models like Stable Diffusion, vanilla VAEs struggle to produce high-resolution, sharp details without advanced modifications such as the VQ-VAE.
