KLGaussian

class colibri.regularizers.KLGaussian(mean=0.01, stddev=2.0)[source]

Bases: Module

KL Divergence Regularization for Gaussian Distributions.

Code adapted from: Jacome, Roman, Pablo Gomez, and Henry Arguello. "Middle output regularized end-to-end optimization for computational imaging." Optica 10.11 (2023): 1421–1431.

\[
R(\mathbf{y}) = \mathrm{KL}(p_y \,\|\, p_z) = -\frac{1}{2}\sum_{i=1}^{n}\left(1 + \log(\sigma_{y_i}^2) - \log(\sigma_{z_i}^2) - \frac{\sigma_{y_i}^2 + (\mu_{y_i} - \mu_{z_i})^2}{\sigma_{z_i}^2}\right)
\]

where \(\mu_{y_i}\) and \(\sigma_{y_i}\) are the mean and standard deviation of the input tensor \(\mathbf{y} \in \mathcal{Y}\), and \(\mu_{z_i}\) and \(\sigma_{z_i}\) are the target mean and standard deviation, respectively.
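The formula above can be spelled out with plain Python numbers. This is a sketch only: `kl_gaussian` is a hypothetical helper (the library's `forward()` operates on `torch.Tensor` inputs), and its defaults mirror the constructor's `mean=0.01`, `stddev=2.0`.

```python
import math

def kl_gaussian(y, target_mean=0.01, target_std=2.0):
    """KL divergence between the empirical Gaussian fitted to y
    and the target Gaussian N(target_mean, target_std**2)."""
    n = len(y)
    mu_y = sum(y) / n                            # empirical mean of y
    var_y = sum((v - mu_y) ** 2 for v in y) / n  # empirical (biased) variance of y
    var_z = target_std ** 2                      # target variance
    return -0.5 * (1.0 + math.log(var_y) - math.log(var_z)
                   - (var_y + (mu_y - target_mean) ** 2) / var_z)
```

When the empirical mean and variance of `y` exactly match the target (e.g. `y = [-1.99, 2.01]` has mean 0.01 and variance 4.0), the term vanishes; otherwise it is strictly positive, as expected for a KL divergence.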

forward(y)[source]

Compute the KL divergence regularization term.

Parameters:

y (torch.Tensor) – Input tensor whose empirical mean and standard deviation are compared against the target Gaussian.

Returns:

KL divergence regularization term.

Return type:

torch.Tensor
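A minimal stand-in showing how such a module could be used is sketched below. This is not the library's actual implementation: `KLGaussianSketch` is a hypothetical class that assumes the empirical mean and variance are taken over the whole input tensor before being plugged into the documented formula.

```python
import math

import torch
import torch.nn as nn

class KLGaussianSketch(nn.Module):
    """Sketch of a KL-Gaussian regularizer (assumed behavior, not colibri's code)."""

    def __init__(self, mean: float = 0.01, stddev: float = 2.0):
        super().__init__()
        self.mean = mean
        self.stddev = stddev

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        mu_y = y.mean()                 # empirical mean over the whole tensor
        var_y = y.var(unbiased=False)   # empirical (biased) variance
        var_z = self.stddev ** 2        # target variance
        return -0.5 * (1.0 + torch.log(var_y) - math.log(var_z)
                       - (var_y + (mu_y - self.mean) ** 2) / var_z)

reg = KLGaussianSketch(mean=0.0, stddev=1.0)
penalty = reg(torch.tensor([-1.0, 1.0]))  # mean 0, std 1 match the target
```

In training, the returned scalar would typically be scaled and added to the task loss as a penalty.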

Examples using KLGaussian:

Demo Colibri.