ls_mlkit.diffuser.euclidean_ddim_diffuser module¶
- class ls_mlkit.diffuser.euclidean_ddim_diffuser.EuclideanDDIMConfig(n_discretization_steps: int = 1000, ndim_micro_shape: int = 2, use_probability_flow=False, use_clip: bool = True, clip_sample_range: float = 1.0, use_dyn_thresholding: bool = False, dynamic_thresholding_ratio=0.995, sample_max_value: float = 1.0, betas=None, n_inference_steps: int = 1000, eta: float = 0.0, *args, **kwargs)[source]¶
Bases: EuclideanDDPMConfig
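A minimal construction sketch, assuming the package is importable and the keyword arguments match the signature above; the specific values chosen here are illustrative, not prescribed defaults beyond those listed:

```python
from ls_mlkit.diffuser.euclidean_ddim_diffuser import EuclideanDDIMConfig

# Illustrative setup: 1000 training discretization steps, 50 DDIM inference
# steps, eta=0.0 for fully deterministic sampling, clipping enabled.
config = EuclideanDDIMConfig(
    n_discretization_steps=1000,
    n_inference_steps=50,
    eta=0.0,
    use_clip=True,
    clip_sample_range=1.0,
)
```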
- class ls_mlkit.diffuser.euclidean_ddim_diffuser.EuclideanDDIMDiffuser(config: EuclideanDDPMConfig, time_scheduler: TimeScheduler, masker: MaskerInterface, conditioner_list: list[Conditioner], model: Model4DiffuserInterface, loss_fn: Callable[[Tensor, Tensor, Tensor], Tensor])[source]¶
Bases: EuclideanDDPMDiffuser
- get_sigma2(t: Tensor, prev_t: Tensor) → Tensor[source]¶
Compute the DDIM variance term:
\[\sigma^2 = \frac{1 - \bar{\alpha}_{\text{prev}}}{1 - \bar{\alpha}_{t}} \cdot \left(1 - \frac{\bar{\alpha}_{t}}{\bar{\alpha}_{\text{prev}}}\right)\]
- Parameters:
t (Tensor) – timestep
prev_t (Tensor) – previous timestep
- Returns:
\(\sigma^2\)
- Return type:
Tensor
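The variance formula above can be sketched as a standalone function over the cumulative products \(\bar{\alpha}\); this is illustrative only, since the actual method takes the timesteps t and prev_t and resolves \(\bar{\alpha}_t\) and \(\bar{\alpha}_{\text{prev}}\) internally from the beta schedule:

```python
import torch

def ddim_sigma2(alpha_bar_t: torch.Tensor, alpha_bar_prev: torch.Tensor) -> torch.Tensor:
    """sigma^2 = (1 - alpha_bar_prev) / (1 - alpha_bar_t) * (1 - alpha_bar_t / alpha_bar_prev)."""
    return (1.0 - alpha_bar_prev) / (1.0 - alpha_bar_t) * (1.0 - alpha_bar_t / alpha_bar_prev)
```

In standard DDIM the noise scale actually applied during sampling is typically eta times the square root of this quantity, so the config's default eta = 0.0 yields deterministic sampling.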
- sample_xtm1_conditional_on_xt(x_t: Tensor, t: Tensor, padding_mask: Tensor, *args: Any, **kwargs: Any) → Tensor[source]¶
DDIM sampling algorithm:
\[\begin{aligned}
\hat{x}_0 &= \frac{x_t - \sqrt{1 - \bar{\alpha}_t} \cdot \epsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}} \\
\text{direction} &= \sqrt{1 - \bar{\alpha}_{t-1} - \sigma_t^2} \cdot \epsilon_\theta(x_t, t) \\
x_{t-1} &= \sqrt{\bar{\alpha}_{t-1}} \cdot \hat{x}_0 + \text{direction} + \sigma_t \cdot z
\end{aligned}\]
- Parameters:
x_t (Tensor) – the sample at timestep t
t (Tensor) – the timestep
padding_mask (Tensor) – the padding mask
- Returns:
the sample at timestep t-1
- Return type:
Tensor
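The update equations above can be sketched as a standalone step, assuming the noise prediction \(\epsilon_\theta(x_t, t)\) and the cumulative products are already available; the parameter names are hypothetical, and the padding mask handling and optional clipping of \(\hat{x}_0\) (use_clip) are omitted:

```python
import torch

def ddim_step(
    x_t: torch.Tensor,
    eps_pred: torch.Tensor,        # epsilon_theta(x_t, t)
    alpha_bar_t: torch.Tensor,     # \bar{alpha}_t
    alpha_bar_prev: torch.Tensor,  # \bar{alpha}_{t-1}
    sigma_t: torch.Tensor,         # noise level, e.g. eta * sqrt(ddim_sigma2(...))
    noise: torch.Tensor,           # z ~ N(0, I); irrelevant when sigma_t == 0
) -> torch.Tensor:
    # Predict x_0 from the current sample and the noise estimate.
    x0_hat = (x_t - torch.sqrt(1.0 - alpha_bar_t) * eps_pred) / torch.sqrt(alpha_bar_t)
    # Direction pointing towards x_t.
    direction = torch.sqrt(1.0 - alpha_bar_prev - sigma_t ** 2) * eps_pred
    # Deterministic part plus the optional stochastic term.
    return torch.sqrt(alpha_bar_prev) * x0_hat + direction + sigma_t * noise
```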