Conditional Likelihood Loss #25

@MahdiShAhmadi

Description

Is there any way we can have a conditional likelihood loss like the following, where the input is x = <x1, x2>?

Forward pass:

```python
lls_all = pc(x)                     # log p(x1, x2)
lls_x2 = pc(x2)                     # log p(x2), assuming we can treat x1 as missing in the training loop (?)
lls_x1_given_x2 = lls_all - lls_x2  # log p(x1 | x2)
```

Backward pass:

```python
lls_x1_given_x2.mean().backward()
```

Even if we could do this in code, does it make theoretical sense in the EM algorithm?
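For reference, a minimal sketch of the identity the proposed loss relies on, log p(x1 | x2) = log p(x1, x2) - log p(x2). Here `pc` is not a real circuit API; it is stood in for by a toy tabular joint over two binary variables, with the marginal obtained by summing x1 out (i.e. treating x1 as missing):

```python
import numpy as np

# Toy joint p(x1, x2) over two binary variables (stand-in for the circuit pc).
rng = np.random.default_rng(0)
joint = rng.random((2, 2))
joint /= joint.sum()

def log_joint(x1, x2):
    """log p(x1, x2) -- plays the role of pc(x)."""
    return np.log(joint[x1, x2])

def log_marginal_x2(x2):
    """log p(x2): sum x1 out of the joint, i.e. treat x1 as missing."""
    return np.log(joint[:, x2].sum())

# Conditional log-likelihood via subtraction, as in the issue:
x1, x2 = 1, 0
lls_all = log_joint(x1, x2)
lls_x2 = log_marginal_x2(x2)
lls_x1_given_x2 = lls_all - lls_x2

# Cross-check against the conditional computed directly from the table.
direct = np.log(joint[x1, x2] / joint[:, x2].sum())
assert np.isclose(lls_x1_given_x2, direct)
```

The subtraction is exact whenever the marginal `pc(x2)` is computed by actually marginalizing x1 out of the same model, which is what support for missing values would provide.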
