
Kernel-agnostic MCMC from Deformable Beta Splatting #687

Open · wants to merge 5 commits into base: main

Conversation

RongLiu-Leo (Contributor)

I’d like to begin by expressing our gratitude to the gsplat framework—our Deformable Beta Splatting (DBS) core is built on top of it. In developing DBS, we’ve taken care to introduce only minimal changes, so we hope our method can seamlessly contribute back to the gsplat repo.

I am opening this PR to explore merging DBS into the main branch. This PR implements Section 3.4, Kernel-Agnostic Markov Chain Monte Carlo, from the paper.
Theoretically, when primitive opacities are small, adjusting opacity alone guarantees distribution-preserved densification, regardless of the splatting kernel's shape or the number of primitives densified.
Empirically, setting $\lambda_o = 0.01$ yields primitive opacities $o \sim \mathcal{N}(0.01, 0.03^2)$, which validates our small-opacity assumption.
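A quick numeric sanity check (my own sketch, not part of the PR): assuming the standard MCMC opacity relocation rule $o_{\text{new}} = 1 - (1 - o)^{1/N}$ when one primitive is split into $N$ copies, a first-order expansion in $o$ reduces it to $o/N$, which is why the kernel shape drops out at small opacities:

```python
def relocated_opacity(o: float, n: int) -> float:
    """Opacity after splitting one primitive into n copies,
    using the MCMC relocation rule o_new = 1 - (1 - o)^(1/n)."""
    return 1.0 - (1.0 - o) ** (1.0 / n)

# At opacities around 0.01 the exact rule and the naive o / n split
# agree to well under 1%, so the kernel-dependent corrections vanish.
o, n = 0.01, 4
exact = relocated_opacity(o, n)
approx = o / n
print(f"exact={exact:.6f}  approx={approx:.6f}")
```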

This PR should reduce training time and memory cost, and facilitate future exploration of arbitrary splatting kernels.

However, one unexpected observation is that removing the scale-adjustment step and eliminating binomial storage did not reduce training time or memory usage. I also experimented with a CUDA implementation that omits scale adjustment, but observed the same runtime and memory profile.

This needs further investigation.
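For what it's worth, here is a minimal harness I would use to chase this down (my own sketch, not from the PR; `fn` stands in for a single training step, and peak-memory tracking is only meaningful on CUDA builds of PyTorch):

```python
import time

def profile_step(fn, *args, **kwargs):
    """Run fn once and return (result, wall_seconds, peak_gb).

    Synchronizes around the call so CUDA kernel launches are not
    mis-attributed; peak_gb is NaN when CUDA is unavailable.
    """
    try:
        import torch
        cuda = torch.cuda.is_available()
    except ImportError:
        torch, cuda = None, False
    if cuda:
        torch.cuda.reset_peak_memory_stats()
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    out = fn(*args, **kwargs)
    if cuda:
        torch.cuda.synchronize()
        peak_gb = torch.cuda.max_memory_allocated() / 1024**3
    else:
        peak_gb = float("nan")
    return out, time.perf_counter() - t0, peak_gb
```

Comparing `peak_gb` with and without the scale-adjustment branch would show whether the binomial buffers are actually live at the memory peak, or whether something else dominates.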

Original Benchmark

Eval Stats (step 29999)

| Scene | PSNR | SSIM | LPIPS | ellipse_time | num_GS |
|---|---|---|---|---|---|
| garden | 27.30 | 0.855 | 0.104 | 0.03 | 1000000 |
| bicycle | 25.57 | 0.773 | 0.184 | 0.04 | 1000000 |
| stump | 26.99 | 0.789 | 0.162 | 0.05 | 1000000 |
| bonsai | 32.76 | 0.953 | 0.106 | 0.04 | 1000000 |
| counter | 29.43 | 0.924 | 0.131 | 0.05 | 1000000 |
| kitchen | 31.67 | 0.936 | 0.087 | 0.04 | 1000000 |
| room | 32.47 | 0.938 | 0.132 | 0.04 | 1000000 |
| Mean | 29.46 | 0.881 | 0.129 | 0.04 | 1000000 |

Train Stats (step 29999)

| Scene | mem | ellipse_time | num_GS |
|---|---|---|---|
| garden | 1.30 | 429.19 | 1000000 |
| bicycle | 1.32 | 416.98 | 1000000 |
| stump | 1.31 | 417.77 | 1000000 |
| bonsai | 1.38 | 571.39 | 1000000 |
| counter | 1.47 | 640.24 | 1000000 |
| kitchen | 1.39 | 578.49 | 1000000 |
| room | 1.42 | 574.03 | 1000000 |
| Mean | 1.37 | 518.30 | 1000000 |

This PR

Eval Stats (step 29999)

| Scene | PSNR | SSIM | LPIPS | ellipse_time | num_GS |
|---|---|---|---|---|---|
| garden | 27.30 | 0.854 | 0.105 | 0.04 | 1000000 |
| bicycle | 25.60 | 0.774 | 0.185 | 0.04 | 1000000 |
| stump | 27.01 | 0.790 | 0.164 | 0.05 | 1000000 |
| bonsai | 32.67 | 0.953 | 0.106 | 0.04 | 1000000 |
| counter | 29.51 | 0.924 | 0.131 | 0.04 | 1000000 |
| kitchen | 31.68 | 0.935 | 0.087 | 0.04 | 1000000 |
| room | 32.44 | 0.938 | 0.132 | 0.03 | 1000000 |
| Mean | 29.46 | 0.881 | 0.130 | 0.04 | 1000000 |

Train Stats (step 29999)

| Scene | mem | ellipse_time | num_GS |
|---|---|---|---|
| garden | 1.30 | 446.47 | 1000000 |
| bicycle | 1.32 | 433.73 | 1000000 |
| stump | 1.32 | 429.02 | 1000000 |
| bonsai | 1.39 | 573.84 | 1000000 |
| counter | 1.46 | 638.57 | 1000000 |
| kitchen | 1.39 | 586.67 | 1000000 |
| room | 1.44 | 649.40 | 1000000 |
| Mean | 1.38 | 536.82 | 1000000 |

Given opacity regularization, adjusting the opacity alone guarantees distribution-preserved densification, regardless of how many primitives are densified or which splatting kernel is chosen.
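To make the simplification concrete, here is a minimal sketch of opacity-only densification (my own illustration with hypothetical names; gsplat stores these attributes as tensors, not Python lists). Splitting a primitive into N copies touches only the opacity; means and scales are copied verbatim, with no binomial scale correction:

```python
from dataclasses import dataclass

@dataclass
class Primitives:
    # Hypothetical minimal container for illustration only.
    opacities: list[float]
    scales: list[float]

def kernel_agnostic_split(p: Primitives, idx: int, n: int) -> None:
    """Split primitive `idx` into `n` copies in place.

    Only the opacity is adjusted (o -> 1 - (1 - o)^(1/n)); the scale is
    copied unchanged, which is the adjustment step this PR removes.
    Valid in the small-opacity regime described above.
    """
    o_new = 1.0 - (1.0 - p.opacities[idx]) ** (1.0 / n)
    p.opacities[idx] = o_new
    for _ in range(n - 1):
        p.opacities.append(o_new)
        p.scales.append(p.scales[idx])  # no binomial scale correction
```

Because the total opacity mass is approximately conserved (n copies at roughly o/n each), the rendered distribution is preserved for any splatting kernel.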
@RongLiu-Leo (Contributor, Author)

Another practice is to apply the opacity regularization only during densification, so that the distribution stays preserved. Otherwise the scene looks too transparent at the end. Cancelling this regularization after densification doesn't hurt the metrics but produces more visually appealing results.

Change

```python
if cfg.opacity_reg > 0.0:
    loss = (
        loss
        + cfg.opacity_reg
        * torch.abs(torch.sigmoid(self.splats["opacities"])).mean()
    )
```

to something like

```python
if cfg.opacity_reg > 0.0 and iteration < cfg.strategy.refine_stop_iter:
    loss = (
        loss
        + cfg.opacity_reg
        * torch.abs(torch.sigmoid(self.splats["opacities"])).mean()
    )
```


liruilong940607 commented May 14, 2025

I like that it simplifies the MCMC and still keeps the same metrics!

But hurting runtime is less than ideal. If we can get at least the same runtime, I would be happy to merge it.
