ls_mlkit.diffuser.euclidean_ddpm_diffuser module

class ls_mlkit.diffuser.euclidean_ddpm_diffuser.EuclideanDDPMConfig(n_discretization_steps: int = 1000, ndim_micro_shape: int = 2, use_probability_flow=False, use_clip: bool = True, clip_sample_range: float = 1.0, use_dyn_thresholding: bool = False, dynamic_thresholding_ratio=0.995, sample_max_value: float = 1.0, betas=None, *args, **kwargs)[source]

Bases: EuclideanDiffuserConfig

Configuration class for the Euclidean DDPM diffuser.
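
Example: a minimal construction sketch that restates the documented defaults explicitly; the inline comments on clipping and on betas=None are reasonable readings of the parameter names, not documented semantics.

    from ls_mlkit.diffuser.euclidean_ddpm_diffuser import EuclideanDDPMConfig

    # Restate the documented defaults explicitly.
    config = EuclideanDDPMConfig(
        n_discretization_steps=1000,       # number of discrete diffusion steps
        ndim_micro_shape=2,
        use_probability_flow=False,
        use_clip=True,                     # clip predicted samples to clip_sample_range
        clip_sample_range=1.0,
        use_dyn_thresholding=False,        # dynamic thresholding instead of hard clipping
        dynamic_thresholding_ratio=0.995,
        sample_max_value=1.0,
        betas=None,                        # None: let the diffuser build its own beta schedule
    )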

class ls_mlkit.diffuser.euclidean_ddpm_diffuser.EuclideanDDPMDiffuser(config: EuclideanDDPMConfig, time_scheduler: TimeScheduler, masker: MaskerInterface, conditioner_list: list[Conditioner], model: Model4DiffuserInterface, loss_fn: Callable[[Tensor, Tensor, Tensor], Tensor])[source]

Bases: EuclideanDiffuser

compute_loss(batch: dict[str, Any], *args: Any, **kwargs: Any) dict[source]

Compute the loss for a batch of data.

Parameters:

batch (dict[str, Any]) – the batch of data

Returns:

a dictionary that must contain the key “loss”

Return type:

dict
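
Example: a hypothetical training-step sketch. The construction of the diffuser is omitted, and the batch keys ("x_0", "padding_mask") and tensor shapes are illustrative assumptions; only the returned "loss" key is documented.

    import torch

    # `diffuser` is an already-constructed EuclideanDDPMDiffuser (construction omitted).
    batch = {
        "x_0": torch.randn(8, 16, 3),        # hypothetical clean samples
        "padding_mask": torch.ones(8, 16),   # hypothetical mask
    }
    out = diffuser.compute_loss(batch)       # dict that must contain the key "loss"
    out["loss"].backward()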

forward_process(x_0: Tensor, discrete_t: Tensor, mask: Tensor, *args: list[Any], **kwargs: dict[Any, Any]) dict[source]

Forward process, from \(x_0\) to \(x_t\)

Parameters:
  • x_0 (Tensor) – \(x_0\)

  • discrete_t (Tensor) – the discrete time steps \(t\)

  • mask (Tensor) – the mask of the sample

Returns:

a dictionary that must contain the key “x_t”

Return type:

dict
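
Example: a noising sketch, assuming an already-constructed `diffuser`. The shapes, the per-sample timestep convention, and the range of `discrete_t` are assumptions; only the "x_t" key of the returned dict is documented.

    import torch

    x_0 = torch.randn(8, 16, 3)                  # hypothetical clean samples
    mask = torch.ones(8, 16)                     # hypothetical mask
    discrete_t = torch.randint(0, 1000, (8,))    # assumed: one timestep per sample

    out = diffuser.forward_process(x_0, discrete_t, mask)
    x_t = out["x_t"]                             # noised samples at timestep t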

forward_process_n_step(x: Tensor, t: Tensor, next_t: Tensor, padding_mask: Tensor, *args: Any, **kwargs: Any) Tensor[source]

Forward process for n steps, from t to next_t

Parameters:
  • x (Tensor) – the sample

  • t (Tensor) – the timestep

  • next_t (Tensor) – the next timestep

  • padding_mask (Tensor) – the padding mask

Returns:

the sample at timestep next_t

Return type:

Tensor
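
Example: jumping forward from t to a later next_t in one call, assuming an already-constructed `diffuser`. Shapes and the per-sample timestep convention are illustrative assumptions.

    import torch

    x = torch.randn(8, 16, 3)
    t = torch.full((8,), 100)           # current timesteps (assumed per-sample)
    next_t = torch.full((8,), 250)      # later timesteps to diffuse to
    padding_mask = torch.ones(8, 16)

    x_next = diffuser.forward_process_n_step(x, t, next_t, padding_mask)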

forward_process_one_step(x: Tensor, t: Tensor, padding_mask: Tensor, *args: Any, **kwargs: Any) Tensor[source]

Forward process for one step, from t to the next timestep

Parameters:
  • x (Tensor) – the sample

  • t (Tensor) – the timestep

  • padding_mask (Tensor) – the padding mask

Returns:

the sample at the next timestep

Return type:

Tensor
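
Example: a single noising step from timestep t, continuing the tensors from the sketch above.

    # Continuing x, t, padding_mask from the previous sketch.
    x_next = diffuser.forward_process_one_step(x, t, padding_mask)   # one noising step from t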

get_posterior_mean_fn(score: Tensor = None, score_fn: Callable = None)[source]

Get the posterior mean function

Parameters:
  • score (Tensor, optional) – the score of the sample

  • score_fn (Callable, optional) – the function to compute score

Returns:

the posterior mean function

Return type:

Callable
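
Example: passing a score function. The signature of the score callable and the call signature of the returned function are assumptions; consult the source for the exact interfaces.

    import torch

    def score_fn(x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Hypothetical score estimate; in practice this comes from the trained model.
        return torch.zeros_like(x_t)

    # Either a precomputed `score` tensor or a `score_fn` callable may be supplied.
    posterior_mean_fn = diffuser.get_posterior_mean_fn(score_fn=score_fn)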

prior_sampling(shape: Tuple[int, ...]) Tensor[source]

Sample the initial noise used to start the reverse process

Parameters:

shape (Tuple[int, ...]) – the shape of the sample

Returns:

the initial noise

Return type:

Tensor
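
Example: drawing the starting noise for the reverse process; for DDPM this prior is a standard Gaussian. The shape is illustrative.

    x_T = diffuser.prior_sampling((8, 16, 3))   # initial noise with the requested shape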

q_xt_x_0(x_0: Tensor, t: Tensor, mask: Tensor) Tuple[Tensor, Tensor][source]

Closed-form marginal of the forward process,

\[q(x_t \mid x_0) = \mathcal{N}\left(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t)\, \mathbf{I}\right),\]

i.e. mean \(\sqrt{\bar{\alpha}_t}\, x_0\) and standard deviation \(\sqrt{1-\bar{\alpha}_t}\).
Parameters:
  • x_0 (Tensor) – \(x_0\)

  • t (Tensor) – \(t\)

  • mask (Tensor) – the mask of the sample

Returns:

the mean and standard deviation of \(q(x_t \mid x_0)\)

Return type:

Tuple[Tensor, Tensor]
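
A standalone reference sketch of the closed-form marginal above; it mirrors the formula rather than the library's internals, and the linear beta schedule and shapes are illustrative assumptions.

    import torch

    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)          # assumed linear beta schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)      # \bar{\alpha}_t

    def q_xt_x0_reference(x_0: torch.Tensor, t: torch.Tensor):
        """Return the mean and standard deviation of q(x_t | x_0)."""
        abar_t = alpha_bars[t].view(-1, *([1] * (x_0.dim() - 1)))  # broadcast over sample dims
        return abar_t.sqrt() * x_0, (1.0 - abar_t).sqrt()

    x_0 = torch.randn(8, 16, 3)
    t = torch.randint(0, T, (8,))
    mean, std = q_xt_x0_reference(x_0, t)
    x_t = mean + std * torch.randn_like(x_0)       # reparameterized draw of x_t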

sample_xtm1_conditional_on_xt(x_t: Tensor, t: Tensor, padding_mask: Tensor, *args: Any, **kwargs: Any) Tensor[source]

Predict the sample at the previous timestep by reversing the SDE. This function propagates the diffusion process one step using the learned model outputs.

Based on the standard DDPM sampling formula:

\[\hat{\mathbf{x}}_0 := \frac{1}{\sqrt{\bar{\alpha}_t}}\left(\mathbf{x}_t - \sqrt{1-\bar{\alpha}_t}\,\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_t, t)\right)\]

\[\mathbf{x}_{t-1} \sim \mathcal{N}\left(\mathbf{x}_{t-1};\ \underbrace{\frac{\sqrt{\alpha_t}\,(1-\bar{\alpha}_{t-1})\,\mathbf{x}_t + \sqrt{\bar{\alpha}_{t-1}}\,(1-\alpha_t)\,\hat{\mathbf{x}}_0}{1-\bar{\alpha}_t}}_{\mu_q(\mathbf{x}_t, \hat{\mathbf{x}}_0)},\ \underbrace{\frac{(1-\alpha_t)(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_t}\,\mathbf{I}}_{\Sigma_q(t)}\right)\]
Parameters:
  • x_t (Tensor) – the sample at timestep t

  • t (Tensor) – the timestep

  • padding_mask (Tensor) – the padding mask

Returns:

the sample at timestep t-1

Return type:

Tensor
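
A standalone reference sketch of the two equations above, i.e. one reverse DDPM step with a scalar timestep. It is not the library method itself: the linear beta schedule is an assumption and `eps_pred` stands in for the model's noise prediction.

    import torch

    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)        # assumed linear beta schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    def ddpm_reverse_step_reference(x_t: torch.Tensor, t: int, eps_pred: torch.Tensor) -> torch.Tensor:
        """One reverse step x_t -> x_{t-1} following the formulas above (scalar t)."""
        a_t = alphas[t]
        abar_t = alpha_bars[t]
        abar_tm1 = alpha_bars[t - 1] if t > 0 else torch.tensor(1.0)

        # \hat{x}_0 recovered from the predicted noise.
        x0_hat = (x_t - (1.0 - abar_t).sqrt() * eps_pred) / abar_t.sqrt()

        # Posterior mean mu_q(x_t, \hat{x}_0) and variance Sigma_q(t).
        mean = (a_t.sqrt() * (1.0 - abar_tm1) * x_t
                + abar_tm1.sqrt() * (1.0 - a_t) * x0_hat) / (1.0 - abar_t)
        var = (1.0 - a_t) * (1.0 - abar_tm1) / (1.0 - abar_t)

        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        return mean + var.sqrt() * noise

    x_t = torch.randn(8, 16, 3)
    eps_pred = torch.randn_like(x_t)             # placeholder for the model's prediction
    x_tm1 = ddpm_reverse_step_reference(x_t, 100, eps_pred)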