Kernel agnostic mcmc from deformable beta splatting #687
I’d like to begin by expressing our gratitude to the gsplat framework—our Deformable Beta Splatting (DBS) core is built on top of it. In developing DBS, we’ve taken care to introduce only minimal changes, so we hope our method can seamlessly contribute back to the gsplat repo.
I am initiating this PR to explore merging DBS into the main branch. This PR implements Section 3.4, Kernel-Agnostic Markov Chain Monte Carlo, from the paper.

Theoretically, when primitive opacities are small, simply adjusting opacity guarantees distribution-preserved densification, regardless of the splatting kernel's shape or the number of densified primitives. Empirically, setting $\lambda_o = 0.01$ yields primitive opacities $o \sim \mathcal{N}(0.01, 0.03^2)$, which validates our small-opacity assumption.
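To make the idea concrete, here is a minimal sketch of opacity-only relocation under the small-opacity assumption. The helper name `relocate_opacity_only` and the exact update rule ($1 - (1 - o)^{1/N}$, which reduces to $o / N$ for small $o$) are illustrative assumptions for this sketch, not the PR's actual code or gsplat API.

```python
import torch


def relocate_opacity_only(opacities: torch.Tensor, n_dup: torch.Tensor) -> torch.Tensor:
    """Split each primitive into ``n_dup`` copies by adjusting opacity only.

    For small opacities o, 1 - (1 - o)^(1/N) is well approximated by o / N,
    so no per-kernel scale adjustment (and hence no binomial-coefficient
    table) is needed, regardless of the splatting kernel.
    """
    n = n_dup.clamp(min=1).to(opacities.dtype)
    return 1.0 - (1.0 - opacities).pow(1.0 / n)


# Tiny usage example: with o ~ N(0.01, 0.03^2) (as reported for lambda_o = 0.01),
# the adjusted opacities stay close to o / N.
opac = torch.tensor([0.01, 0.02, 0.05])
dup = torch.tensor([2, 2, 4])
print(relocate_opacity_only(opac, dup))  # approximately opac / dup
```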
This PR should reduce training time and memory cost, and facilitate future exploration of arbitrary kernels.
However, one unexpected observation is that removing the scale-adjustment step and eliminating binomial storage did not reduce training time or memory usage. I also experimented with a CUDA implementation that omits scale adjustment, but observed the same runtime and memory profile.
This needs further investigation.
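For reference, a rough sketch of how the runtime and peak-memory comparison could be measured; `train_one_config` is a hypothetical stand-in for a full gsplat training run, not a function in the repo.

```python
import time
import torch


def profile_run(train_one_config, *args, **kwargs):
    """Return (wall-clock seconds, peak CUDA memory in MiB) for one training run."""
    torch.cuda.reset_peak_memory_stats()
    torch.cuda.synchronize()
    start = time.perf_counter()
    train_one_config(*args, **kwargs)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    peak_mib = torch.cuda.max_memory_allocated() / (1024 ** 2)
    return elapsed, peak_mib
```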
**Original Benchmark**
- Eval Stats (step 29999)
- Train Stats (step 29999)

**This PR**
- Eval Stats (step 29999)
- Train Stats (step 29999)