KLGaussian

class colibri.regularizers.KLGaussian(mean=0.01, stddev=2.0)[source]

Bases: Module

KL Divergence Regularization for Gaussian Distributions.

Code adapted from Jacome, Roman, Pablo Gomez, and Henry Arguello. "Middle output regularized end-to-end optimization for computational imaging." Optica 10.11 (2023): 1421-1431.

\[R(\mathbf{y}) = \text{KL}(p_\mathbf{y}, p_\mathbf{z}) = -\frac{1}{2}\sum_{i=1}^{n} \left(1 + \log(\sigma_{\mathbf{y}_i}^2) - \log(\sigma_{\mathbf{z}_i}^2) - \frac{\sigma_{\mathbf{y}_i}^2 + (\mu_{\mathbf{y}_i} - \mu_{\mathbf{z}_i})^2}{\sigma_{\mathbf{z}_i}^2}\right)\]

where \(\mu_{\mathbf{y}_i}\) and \(\sigma_{\mathbf{y}_i}\) are the mean and standard deviation of the input tensor \(\mathbf{y}\in\mathcal{Y}\), and \(\mu_{\mathbf{z}_i}\) and \(\sigma_{\mathbf{z}_i}\) are the target mean and standard deviation, set by the mean and stddev constructor arguments.
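The computation follows directly from the equation. The sketch below is a minimal reimplementation, assuming the statistics of y are pooled over the entire tensor; the actual module may instead compute them per element or per channel, as the sum over i suggests. The function name kl_gaussian is illustrative, not part of the library API.

import math

import torch


def kl_gaussian(y: torch.Tensor, target_mean: float = 0.01,
                target_std: float = 2.0) -> torch.Tensor:
    # Empirical statistics of the input tensor y (assumed pooled globally).
    mu_y = y.mean()
    var_y = y.var()
    # Variance of the target (prior) Gaussian.
    var_z = target_std ** 2
    # -1/2 * (1 + log(var_y) - log(var_z) - (var_y + (mu_y - mu_z)^2) / var_z)
    return -0.5 * (1.0 + torch.log(var_y) - math.log(var_z)
                   - (var_y + (mu_y - target_mean) ** 2) / var_z)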

forward(y)[source]

Compute KL divergence regularization term.

Parameters:

y (torch.Tensor) – Input tensor whose empirical mean and standard deviation are compared against the target Gaussian distribution.

Returns:

KL divergence regularization term.

Return type:

torch.Tensor
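A minimal usage sketch: the module is called like any torch.nn.Module, with the constructor defaults shown above; the input shape is illustrative.

import torch
from colibri.regularizers import KLGaussian

# Target Gaussian with mean 0.01 and standard deviation 2.0 (the defaults).
reg = KLGaussian(mean=0.01, stddev=2.0)

# Illustrative input: a batch of intermediate (middle) outputs.
y = torch.randn(8, 3, 32, 32)

kl_term = reg(y)  # scalar torch.Tensor, added as a penalty to the training loss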

Examples using KLGaussian:

Demo Colibri.