diff --git a/dev/contributing/new-example/index.html b/dev/contributing/new-example/index.html index 32278f5b9..e683693ce 100644 --- a/dev/contributing/new-example/index.html +++ b/dev/contributing/new-example/index.html @@ -1,4 +1,4 @@ Adding a new example · RxInfer.jl

Contributing: new example

We welcome all possible contributors. This page details some of the guidelines that should be followed when adding a new example (in the examples/ folder) to this package.

To add a new example, simply create a new Jupyter notebook with your experiments in the examples/ folder. When creating a new example, add a descriptive explanation of your experiments, your model specification, and your inference constraint decisions, and include an appropriate analysis of the results. We expect examples to be readable for the general public and therefore highly value descriptive comments. If a submitted example contains only code, we will kindly request changes to improve its readability.

After preparing the new example, modify the examples/.meta.jl file. See the comments in this file for more information.
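For illustration only, an entry in examples/.meta.jl might look roughly like the sketch below. The field names (title, description, path, hidden) are hypothetical and inferred from the guidelines that follow; the comments in the actual .meta.jl file are authoritative, so check them first.

```julia
# Hypothetical .meta.jl entry sketch — actual field names may differ,
# see the comments in examples/.meta.jl for the real format
(
    title       = "My New Cool Notebook",        # must match the `# <title>` markdown cell
    description = "Shown on the Examples page of the documentation",
    path        = "My New Cool Notebook.ipynb",  # local path, no subfolders
    hidden      = false                          # `true` hides the example (it still runs on CI)
)
```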

  1. Make sure that the very first cell of the notebook contains ONLY # <title> in it and has the markdown cell type. This is important for generating links in our documentation.
  2. The path option must be set to a local path and cannot contain subfolders.
  3. The text in the description option will be used on the Examples page in the documentation.
  4. Set the hidden = true option to hide a certain example in the documentation (the example will still be executed to ensure it runs without errors).
  5. Please do not use Overview as the name for a new example; the title Overview is reserved.
  6. Use the following template for equations; note that $$ and both the \begin and \end commands are on the same line (check other examples if you are not sure). This is important because otherwise formulas may not render correctly. Inline equations may use the $...$ template.
$$\begin{aligned}
       <latex equations here>
\end{aligned}$$
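For instance, a filled-in version of this template (an illustrative equation, not taken from any particular example) could look like:

$$\begin{aligned}
p(\theta \mid y) &\propto p(y \mid \theta)\,p(\theta)
\end{aligned}$$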
  1. When using equations, make sure not to follow the opening $$ or $ with a space; instead start the equation directly, e.g. not $$ a + b $$, but $$a + b$$. For equations that should appear on a separate line, make sure $$...$$ is preceded and followed by an empty line.

  2. Notebooks and plain Julia have different scoping rules for global variables. The generation of your example may therefore fail due to an UndefVarError or other scoping issues. In these cases we recommend using let ... end blocks to enforce local scoping (see Gaussian Mixtures Multivariate.ipynb for an example).

  3. All examples must use and activate the local environment specified by Project.toml in the second cell (see 1.). Have a look at the existing notebooks for an example of how to activate this local environment. If you need additional packages, you can add them to the (examples) project.

  4. All plots should be displayed automatically. In special cases, if needed, save figures in the ./pics/figure-name.ext format. This might be useful for saving GIFs.
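To illustrate the let ... end scoping advice above, here is a minimal sketch (variable names are made up). Wrapping the loop in a let block keeps the accumulator local, so the notebook behaves the same when converted to a plain Julia script:

```julia
# `let` introduces a local scope, so the loop below updates the local `acc`
# instead of relying on notebook-style global scoping, which can otherwise
# cause an UndefVarError during example generation.
total = let
    acc = 0.0
    for i in 1:10
        acc += i  # inner loop can reassign the enclosing local `acc`
    end
    acc
end
println(total)  # 55.0
```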

Note

Please avoid adding PyPlot in the (examples) project. Installing and building PyPlot dependencies takes several minutes on every CI run. Use Plots instead.

Note

Use make examples to run all examples or make examples specific=MyNewCoolNotebook to run any notebook that includes MyNewCoolNotebook in its file name.

diff --git a/dev/contributing/new-release/index.html b/dev/contributing/new-release/index.html index 02ddc5369..65eafec9c 100644 --- a/dev/contributing/new-release/index.html +++ b/dev/contributing/new-release/index.html @@ -1,2 +1,2 @@ -Publishing a new release · RxInfer.jl

Publishing a new release

Please read first the general Contributing section. Also, please read the FAQ section in the Julia General registry.

Start the release process

In order to start the release process, a person with the appropriate permissions should:

  • Open a commit page on GitHub
  • Write the @JuliaRegistrator register comment for the commit:

Release comment

The Julia Registrator bot should automatically register a request for the new release. Once all checks have passed on the Julia Registrator's side, the new release will be published and tagged automatically.

diff --git a/dev/contributing/overview/index.html b/dev/contributing/overview/index.html index cea910580..a2f04f96d 100644 --- a/dev/contributing/overview/index.html +++ b/dev/contributing/overview/index.html @@ -1,2 +1,2 @@ -Overview · RxInfer.jl

Contributing

We welcome all possible contributors. This page details some of the guidelines that should be followed when contributing to this package.

Reporting bugs

We track bugs using GitHub issues. We encourage you to write complete, specific, reproducible bug reports. Mention the versions of Julia and RxInfer for which you observe unexpected behavior. Please provide a concise description of the problem and complement it with code snippets, test cases, screenshots, tracebacks or any other information that you consider relevant. This will help us to replicate the problem and narrow the search space for solutions.

Suggesting features

We welcome new feature proposals. However, before submitting a feature request, consider a few things:

  • Does the feature require changes in the core RxInfer code? If it doesn't (for example, you would like to add a factor node for a particular application), you can add local extensions in your script/notebook or consider making a separate repository for your extensions.
  • If you would like to add an implementation of a feature that changes a lot in the core RxInfer code, please open an issue on GitHub and describe your proposal first. This will allow us to discuss your proposal with you before you invest your time in implementing something that may be difficult to merge later on.

Contributing code

Installing RxInfer

We suggest that you use the dev command from the Julia package manager to install RxInfer for development purposes. To work on your fork of RxInfer, use your fork's URL in the dev command, for example:

] dev git@github.com:your_username/RxInfer.jl.git

The dev command clones RxInfer to ~/.julia/dev/RxInfer. All local changes to RxInfer code will be reflected in imported code.

Note

It might also be useful to install the Revise.jl package, as it allows you to modify code and use the changes without restarting Julia.

Core dependencies

RxInfer.jl heavily depends on the ReactiveMP.jl, GraphPPL.jl and Rocket.jl packages. RxInfer.jl must be updated every time any of these packages has a major update and/or API change. Developers are advised to use the dev command for all of these packages while making changes to RxInfer.jl. Note, however, that the standard Julia testing utilities ignore the local development environment and always test the package against the latest released versions of the core dependencies. Read the section about the Makefile below to see how to test RxInfer.jl with the locally installed core dependencies.
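Following the same `] dev` style shown above for RxInfer itself, checking out the core dependencies for development might look like this (illustrative REPL package-mode commands; run each after pressing ] in the Julia REPL):

```julia
] dev ReactiveMP
] dev GraphPPL
] dev Rocket
```

By default these clones land in Pkg.devdir() (usually ~/.julia/dev), which is where the Makefile's dev commands expect them.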

Committing code

We use the standard GitHub Flow workflow where all contributions are added through pull requests. In order to contribute, first fork the repository, then commit your contributions to your fork, and then create a pull request on the main branch of the RxInfer repository.

Before opening a pull request, please make sure that all tests pass without failures! All examples (found in the /examples/ directory) have to run without errors as well.

Note

Use the make test, make examples and make docs commands to ensure that all tests, examples and the documentation build run without any issues. See below for a more detailed description of the Makefile commands.

Style conventions

We use the default Julia style guide. We list here a few important points and our modifications to it:

  • Use 4 spaces for indentation
  • Type names use UpperCamelCase. For example: AbstractFactorNode, RandomVariable, etc.
  • Function names are lowercase with underscores, when necessary. For example: activate!, randomvar, as_variable, etc.
  • Variable names and function arguments use snake_case
  • The name of a method that modifies its argument(s) must end in !
Note

The RxInfer repository contains scripts to automatically format code according to our guidelines. Use the make format command to fix code style. This command overwrites files. Use make lint to run a linting procedure without overwriting the actual source files.

Unit tests

We use the test-driven development (TDD) methodology for RxInfer development. The test coverage should be as complete as possible. Please make sure that you write tests for each piece of code that you want to add.

All unit tests are located in the /test/ directory. The /test/ directory follows the structure of the /src/ directory. Each test file should have the following filename format: test_*.jl. Some tests are also present in jldoctest docs annotations directly in the source code. See Julia's documentation about doctests.
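As an illustration of the test_*.jl convention, a hypothetical test file might look like the sketch below (the file name, testset name, and assertions are made up for this example):

```julia
# test/test_my_feature.jl — hypothetical file following the test_*.jl naming convention
using Test

@testset "my_feature" begin
    # group related assertions for one piece of functionality in a testset
    @test 1 + 1 == 2
    @test clamp(5, 0, 3) == 3
end
```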

The tests can be evaluated by running the following command in the Julia REPL:

] test RxInfer

In addition, tests can be evaluated by running the following command in the RxInfer root directory:

make test
Note

Use make devtest to use local dev-ed versions of the core packages.

Makefile

RxInfer.jl uses a Makefile for the most common operations:

  • make help: Shows help snippet
  • make test: Run tests, supports extra arguments
    • make test test_args="distributions:normal_mean_variance" would run tests only from distributions/test_normal_mean_variance.jl
    • make test test_args="distributions:normal_mean_variance models:lgssm" would run tests both from distributions/test_normal_mean_variance.jl and models/test_lgssm.jl
    • make test dev=true would run tests while using dev-ed versions of core packages
  • make devtest: Alias for the make test dev=true ...
  • make docs: Compile documentation
  • make devdocs: Same as make docs, but uses dev-ed versions of core packages
  • make examples: Run all examples and put them in the docs/ folder if successful
  • make devexamples: Same as make examples, but uses dev-ed versions of core packages
  • make lint: Check codestyle
  • make format: Check and fix codestyle
Note

Core packages include ReactiveMP.jl, GraphPPL.jl and Rocket.jl. When using any of the dev commands from the Makefile, those packages must be present in the Pkg.devdir() directory.

diff --git a/dev/examples/Active Inference Mountain car/index.html b/dev/examples/Active Inference Mountain car/index.html index 127f57093..cbea89c4a 100644 --- a/dev/examples/Active Inference Mountain car/index.html +++ b/dev/examples/Active Inference Mountain car/index.html @@ -283,4 +283,4 @@ end # The animation is saved and displayed as markdown picture for the automatic HTML generation -gif(animation_ai, "./pics/ai-mountain-car-ai.gif", fps = 24, show_msg = false);

Voila! The car is now able to reach the camping site with a smart strategy.

The left figure shows that the agent reached its goal by swinging, and the right one shows the corresponding engine force. As we can see, at the beginning the agent tried to reach the goal directly (with full engine force), but after some trials it realized that this is not possible. Since the agent looks ahead for 50 time steps, it has enough time to explore other policies, helping it learn to move back first to gain more momentum to reach the goal.

Now our friends can enjoy their trip at the camping site!

Reference

We refer the reader to the original paper, Thijs van de Laar (2019) "Simulating active inference processes by message passing", for a more in-depth overview and explanation of the active inference agent implementation by message passing. The original environment/task description is from Ueltzhoeffer (2017) "Deep active inference".

diff --git a/dev/examples/Advanced Tutorial/index.html b/dev/examples/Advanced Tutorial/index.html index 0870acf1b..17b5173bc 100644 --- a/dev/examples/Advanced Tutorial/index.html +++ b/dev/examples/Advanced Tutorial/index.html @@ -412,4 +412,4 @@ [y[4]][Bernoulli][p]: VariationalMessage() [y[5]][Bernoulli][p]: VariationalMessage() New posterior marginal for θ: Marginal(Beta{Float64}(α=5.0, β=9.0))
unsubscribe!(θ_subscription)
# Inference is lazy and does not send messages if no one is listening for them
update!(y, coinflips)
diff --git a/dev/examples/Assessing People Skills/index.html index 9bfc5e78d..42482e823 100644 --- a/dev/examples/Assessing People Skills/index.html +++ b/dev/examples/Assessing People Skills/index.html @@ -46,4 +46,4 @@ map(params, inference_result.posteriors[:s])
3-element Vector{Tuple{Float64}}:
  (0.9872448979591837,)
  (0.06377551020408162,)
 (0.4719387755102041,)

These results suggest that this particular student was very likely out on the town last night.

diff --git a/dev/examples/Autoregressive Model/index.html index 7e21b1e27..d1145116c 100644 --- a/dev/examples/Autoregressive Model/index.html +++ b/dev/examples/Autoregressive Model/index.html @@ -212,4 +212,4 @@ [benchmark histogram: frequency by time, 929 ms to 1.04 s] Memory estimate: 271.04 MiB, allocs estimate: 2335137. diff --git a/dev/examples/Bayesian ARMA/index.html index 14319661b..1e2f6cb63 100644 --- a/dev/examples/Bayesian ARMA/index.html +++ b/dev/examples/Bayesian ARMA/index.html @@ -131,4 +131,4 @@ # after every new prediction we can actually "retrain" the model to use the power of the Bayesian approach # we will skip this part for now end
plot(x_test, label="test data", legend=:topleft)
plot!(mean.(predictions)[1:end], ribbon=std.(predictions)[1:end], label="predicted", xlabel="day", ylabel="price")

diff --git a/dev/examples/Bayesian Linear Regression/index.html b/dev/examples/Bayesian Linear Regression/index.html index 6909bbc71..ee504904d 100644 --- a/dev/examples/Bayesian Linear Regression/index.html +++ b/dev/examples/Bayesian Linear Regression/index.html @@ -208,4 +208,4 @@ plot(ps_b...)

He also checks the noise estimation procedure and sees that the noise variances are currently a bit underestimated. Note that he neglects the covariance terms between the individual elements, which might result in this kind of behaviour.

scatter(1:dim_mv, v_mv, ylims=(0, 100), label=L"True $s_d$")
 scatter!(1:dim_mv, diag(mean(results_mv.posteriors[:W])); yerror=sqrt.(diag(var(results_mv.posteriors[:W]))), label=L"$\mathrm{E}[s_d] \pm \sigma$")
plot!(; xlabel=L"Dimension $d$", ylabel="Variance", title="Estimated variance of the noise")

diff --git a/dev/examples/Chance Constraints/index.html b/dev/examples/Chance Constraints/index.html index 624d6c86c..e9dd7386b 100644 --- a/dev/examples/Chance Constraints/index.html +++ b/dev/examples/Chance Constraints/index.html @@ -163,4 +163,4 @@ end

Results

Results show that the agent does not allow the wind to push it all the way to the ground.

p1 = plot(1:N, wind.(1:N), color="blue", label="Wind", ylabel="Velocity", lw=2)
 plot!(p1, 1:N, a, color="red", label="Control", lw=2)
 p2 = plot(1:N, x, color="black", lw=2, label="Agent", ylabel="Elevation")
plot(p1, p2, layout=(2,1))

diff --git a/dev/examples/Coin Toss Model/index.html b/dev/examples/Coin Toss Model/index.html index a2b081a47..aec2168d0 100644 --- a/dev/examples/Coin Toss Model/index.html +++ b/dev/examples/Coin Toss Model/index.html @@ -39,4 +39,4 @@ plot!(rθ, (x) -> pdf(Beta(2.0, 7.0), x), fillalpha=0.3, fillrange = 0, label="P(θ)", c=1,) plot!(rθ, (x) -> pdf(θestimated, x), fillalpha=0.3, fillrange = 0, label="P(θ|y)", c=3) -vline!([θ_real], label="Real θ")

diff --git a/dev/examples/Conjugate-Computational Variational Message Passing/index.html b/dev/examples/Conjugate-Computational Variational Message Passing/index.html index da23c9ee3..5fd7702a9 100644 --- a/dev/examples/Conjugate-Computational Variational Message Passing/index.html +++ b/dev/examples/Conjugate-Computational Variational Message Passing/index.html @@ -135,4 +135,4 @@ meta = normal_square_meta(StableRNG(123), 100, 100, CustomDescent(0.1)) ) -mean(res.posteriors[:x][end])
-18.998360101686355

The mean inferred value of x is indeed close to 19, which was used to generate the data. Inference is working!

Note: $x^2$ cannot be inverted; the sign information can be lost: -19 and 19 are both equally good solutions.

diff --git a/dev/examples/Custom nonlinear node/index.html b/dev/examples/Custom nonlinear node/index.html index 135afa9be..e88903fe7 100644 --- a/dev/examples/Custom nonlinear node/index.html +++ b/dev/examples/Custom nonlinear node/index.html @@ -64,4 +64,4 @@ estimated = Normal(mean_std(θposterior)...) plot(estimated, title="Posterior for θ", label = "Estimated", legend = :bottomright, fill = true, fillopacity = 0.2, xlim = (-3, 3), ylim = (0, 2)) -vline!([ θ ], label = "Real value of θ")

diff --git a/dev/examples/GPRegression by SSM/index.html b/dev/examples/GPRegression by SSM/index.html index 72996ede7..83874be49 100644 --- a/dev/examples/GPRegression by SSM/index.html +++ b/dev/examples/GPRegression by SSM/index.html @@ -100,4 +100,4 @@ plot!(t, f_true,label="true process", lw = 2) scatter!(t_obser, f_noisy[pos], label="Observations") xlabel!("t") -ylabel!("f(t)")

As we can see from the plot, both cases of the Matern kernel provide good approximations (small variance) to the true process in the area with dense observations (namely from t = 0 to around 3.5); as we move away from this region, the approximated processes become less accurate (larger variance). This result makes sense because GP regression exploits the correlation between observations to predict unobserved points, and the choice of covariance functions as well as their hyperparameters might not be optimal. We can increase the accuracy of the approximated processes by simply adding more observations. This improvement does not trouble the state-space method much, but it might cause computational problems for naive GP regression: with N observations the complexity of naive GP regression scales with $N^3$, while the state-space method scales linearly with N.

diff --git a/dev/examples/Gamma Mixture/index.html b/dev/examples/Gamma Mixture/index.html index 2ba7ffe2d..acdee9b98 100644 --- a/dev/examples/Gamma Mixture/index.html +++ b/dev/examples/Gamma Mixture/index.html @@ -127,4 +127,4 @@ # evaluate the convergence of the algorithm by monitoring the BFE p3 = plot(gresult.free_energy, label=false, xlabel="iterations", title="Bethe FE") -plot(p1, p2, layout = @layout([ a; b ]))

plot(p3)

diff --git a/dev/examples/Gaussian Linear Dynamical System/index.html index 005aed8f2..3de9fe10b 100644 --- a/dev/examples/Gaussian Linear Dynamical System/index.html +++ b/dev/examples/Gaussian Linear Dynamical System/index.html @@ -92,4 +92,4 @@ [benchmark histogram: frequency by time, 27.3 ms to 69.7 ms] Memory estimate: 9.47 MiB, allocs estimate: 218991. diff --git a/dev/examples/Gaussian Mixture Univariate/index.html index 86cc78b2f..420f1449f 100644 --- a/dev/examples/Gaussian Mixture Univariate/index.html +++ b/dev/examples/Gaussian Mixture Univariate/index.html @@ -87,4 +87,4 @@ fep = plot(fe[2:end], label = "Free Energy", legend = :bottomleft) plot(mp, wp, swp, fep, layout = @layout([ a b; c d ]), size = (800, 400))

diff --git a/dev/examples/Gaussian Mixtures Multivariate/index.html index db00dbc0c..63eb0ea91 100644 --- a/dev/examples/Gaussian Mixtures Multivariate/index.html +++ b/dev/examples/Gaussian Mixtures Multivariate/index.html @@ -135,4 +135,4 @@ [benchmark histogram: frequency by time, 852 ms to 929 ms] Memory estimate: 273.84 MiB, allocs estimate: 3555762. diff --git a/dev/examples/Global Parameter Optimisation/index.html index 913bcdcd7..f74c3a2c1 100644 --- a/dev/examples/Global Parameter Optimisation/index.html +++ b/dev/examples/Global Parameter Optimisation/index.html @@ -175,4 +175,4 @@ px = plot!(px, getindex.(mean.(xmarginals), 1), ribbon = getindex.(var.(xmarginals), 1) .|> sqrt, fillalpha = 0.5, label = "dim1_e") px = plot!(px, getindex.(mean.(xmarginals), 2), ribbon = getindex.(var.(xmarginals), 2) .|> sqrt, fillalpha = 0.5, label = "dim2_e") plot(px, size = (1200, 450))

diff --git a/dev/examples/Handling Missing Data/index.html b/dev/examples/Handling Missing Data/index.html index ae2859438..3ea81fa1d 100644 --- a/dev/examples/Handling Missing Data/index.html +++ b/dev/examples/Handling Missing Data/index.html @@ -58,4 +58,4 @@ iterations = 20 );
plot(real_signal, label = "Noisy signal", legend = :bottomright)
 scatter!(missing_indices, real_signal[missing_indices], ms = 2, opacity = 0.75, label = "Missing region")
-plot!(mean.(result.posteriors[:x]), ribbon = var.(result.posteriors[:x]), label = "Estimated hidden state")

diff --git a/dev/examples/Hidden Markov Model/index.html b/dev/examples/Hidden Markov Model/index.html index 6bbc54e86..b5fff77ed 100644 --- a/dev/examples/Hidden Markov Model/index.html +++ b/dev/examples/Hidden Markov Model/index.html @@ -129,4 +129,4 @@ xlabel="Iteration Number" ) -plot(p1, p2, layout = @layout([ a; b ]))

Neat! Now you know how to track a Roomba if you ever need to. You also learned how to fit a Hidden Markov Model using RxInfer in the process.


diff --git a/dev/examples/Hierarchical Gaussian Filter/index.html b/dev/examples/Hierarchical Gaussian Filter/index.html index 45303246d..6ab982a26 100644 --- a/dev/examples/Hierarchical Gaussian Filter/index.html +++ b/dev/examples/Hierarchical Gaussian Filter/index.html @@ -142,4 +142,4 @@ ▆██▆▆███▆█▆███▆▆▆▆▁▁▆▁▁▁▁▁▆█▆▆▁█▆█▆▆█▆▆▁▁█▁▁▁▆▆▁▆▁▁▁▁▁▁▁▁▁▁▆ ▁ 70.2 ms Histogram: frequency by time 108 ms < - Memory estimate: 17.01 MiB, allocs estimate: 411812. + Memory estimate: 17.01 MiB, allocs estimate: 411812. diff --git a/dev/examples/Identification Problem/index.html b/dev/examples/Identification Problem/index.html index a4f480802..aaec120ed 100644 --- a/dev/examples/Identification Problem/index.html +++ b/dev/examples/Identification Problem/index.html @@ -266,4 +266,4 @@ px2 = scatter!(px2, rx_real_y, label = "Observations", ms = 2, alpha = 0.5, color = :red) px2 = plot!(px2, mean.(rx_smarginals), ribbon = std.(rx_smarginals), label = "Combined estimated signal", color = :green) -plot(px1, px2, size = (800, 300))

The results are quite similar to the smoothing case and, as we can see, one of the random walks is again in the "disabled" state: it does not infer anything and simply increases its variance (which is expected for a random walk).


diff --git a/dev/examples/Infinite Data Stream/index.html b/dev/examples/Infinite Data Stream/index.html index 55b1bed27..76e7700e9 100644 --- a/dev/examples/Infinite Data Stream/index.html +++ b/dev/examples/Infinite Data Stream/index.html @@ -166,4 +166,4 @@ end;

The plot above is fully interactive, and we can stop and unsubscribe from our data stream before it ends:

if !isnothing(engine) && isdefined(Main, :IJulia)
     RxInfer.stop(engine)
     IJulia.clear_output(true)
-end;
+end; diff --git a/dev/examples/Invertible Neural Network Tutorial/index.html b/dev/examples/Invertible Neural Network Tutorial/index.html index 5b284a0f7..0d072d831 100644 --- a/dev/examples/Invertible Neural Network Tutorial/index.html +++ b/dev/examples/Invertible Neural Network Tutorial/index.html @@ -292,4 +292,4 @@ p1 = scatter(data_x[:,1], data_x[:,2], marker_z = data_y, title="original labels", xlabel="weight 1", ylabel="weight 2", size=(1200,400), c=:viridis) p2 = scatter(data_x[:,1], data_x[:,2], marker_z = normcdf.(trans_data_x_2), title="predicted labels", xlabel="weight 1", ylabel="weight 2", size=(1200,400), c=:viridis) p3 = contour(0:0.01:1, 0:0.01:1, (x, y) -> normcdf(dot([1,1], ReactiveMP.forward(inferred_model, [x,y]))), title="Classification map", xlabel="weight 1", ylabel="weight 2", size=(1200,400), c=:viridis) -plot(p1, p2, p3, layout=(1,3), legend=false)

diff --git a/dev/examples/Kalman filter with LSTM network driven dynamic/index.html b/dev/examples/Kalman filter with LSTM network driven dynamic/index.html index 57102e82c..02b42d130 100644 --- a/dev/examples/Kalman filter with LSTM network driven dynamic/index.html +++ b/dev/examples/Kalman filter with LSTM network driven dynamic/index.html @@ -280,4 +280,4 @@ p3 = scatter!(last.(testset[index][1:tt]), label="Observations", markersize=1.0) -plot(p1, p2, p3, size = (1000, 300),legend=:bottomleft)

diff --git a/dev/examples/Nonlinear Noisy Pendulum/index.html b/dev/examples/Nonlinear Noisy Pendulum/index.html index 97a0ccbdf..23619f0a1 100644 --- a/dev/examples/Nonlinear Noisy Pendulum/index.html +++ b/dev/examples/Nonlinear Noisy Pendulum/index.html @@ -100,4 +100,4 @@ plot!(xlim = (0, 20), title = "Inference results")

It is important to look at the evolution of free energy. Remember that free energy is a measure of uncertainty-weighted prediction error.

# Plot free energy objective
 plot(result.free_energy, xlabel="# iterations", label="Bethe Free Energy")
-title!("Free energy by iterations, averaged over time")

diff --git a/dev/examples/Nonlinear Rabbit Population/index.html b/dev/examples/Nonlinear Rabbit Population/index.html index c86dd8e0f..903cd3f6d 100644 --- a/dev/examples/Nonlinear Rabbit Population/index.html +++ b/dev/examples/Nonlinear Rabbit Population/index.html @@ -91,4 +91,4 @@ p2 = ylabel!(p2, "probability density") p2 = xlims!(p2, -1, 4) -plot(p1, p2, layout = @layout([ a; b ]), size = (750, 600))

As we can see, the inferred results match the real hidden states with high precision, as does the inferred fertility parameter.


diff --git a/dev/examples/Nonlinear Sensor Fusion/index.html b/dev/examples/Nonlinear Sensor Fusion/index.html index 5eab12c09..2f735e55f 100644 --- a/dev/examples/Nonlinear Sensor Fusion/index.html +++ b/dev/examples/Nonlinear Sensor Fusion/index.html @@ -133,4 +133,4 @@ p2 = plot(results_wishart.free_energy, label = "") xlabel!("iteration"), ylabel!("Bethe free energy [nats]") -plot(p1, p2, size=(1200, 500))

+plot(p1, p2, size=(1200, 500))

diff --git a/dev/examples/Nonlinear Virus Spread/index.html b/dev/examples/Nonlinear Virus Spread/index.html index f797fa9ce..3b414c9f2 100644 --- a/dev/examples/Nonlinear Virus Spread/index.html +++ b/dev/examples/Nonlinear Virus Spread/index.html @@ -58,4 +58,4 @@ p3 = plot!(p3, rθ, (x) -> pdf(Normal(mean(result.posteriors[:a]), var(result.posteriors[:a])), x), fillalpha=0.3, fillrange = 0, label="P(a|y)", c=3) p3 = vline!(p3, [ a_data ], label="Real θ", linestyle=:dash, color = :red) -plot(p1, p2, p3, layout = @layout([a; b c]))

As we can see, the inference results match the hidden states with high precision, as does the a parameter.


diff --git a/dev/examples/Probit Model (EP)/index.html b/dev/examples/Probit Model (EP)/index.html index b87204a50..185b41494 100644 --- a/dev/examples/Probit Model (EP)/index.html +++ b/dev/examples/Probit Model (EP)/index.html @@ -73,4 +73,4 @@ f = plot(xlabel = "t", ylabel = "BFE") f = plot!(result.free_energy, label = "Bethe Free Energy") -plot(p, f, size = (800, 400))

diff --git a/dev/examples/RTS vs BIFM Smoothing/index.html b/dev/examples/RTS vs BIFM Smoothing/index.html index 603b19a34..1001a0dcf 100644 --- a/dev/examples/RTS vs BIFM Smoothing/index.html +++ b/dev/examples/RTS vs BIFM Smoothing/index.html @@ -232,4 +232,4 @@ p = Plots.plot!(p, 1:trials_range, m_BIFM, ribbon = ((q1_BIFM .- q3_BIFM) ./ 2), color = "orange", label = "mean (BIFM)") Plots.savefig(p, "pics/rts_bifm_benchmark.png") p -end

diff --git a/dev/examples/Universal Mixtures/index.html b/dev/examples/Universal Mixtures/index.html index 8b8ca503d..5e2ce5bc3 100644 --- a/dev/examples/Universal Mixtures/index.html +++ b/dev/examples/Universal Mixtures/index.html @@ -90,4 +90,4 @@ p = plot(title = "posterior belief") plot!(rθ, (x) -> pdf(result_mary.posteriors[:θ], x), fillalpha=0.3, fillrange = 0, label="P(θ|y) Mary", c=1) plot!(rθ, (x) -> result_mary.posteriors[:θ].weights[1] * pdf(result_mary.posteriors[:θ].components[1], x), label="", c=3) -plot!(rθ, (x) -> result_mary.posteriors[:θ].weights[2] * pdf(result_mary.posteriors[:θ].components[2], x), label="", c=3)

diff --git a/dev/examples/overview/index.html b/dev/examples/overview/index.html index 5040f3185..83ccc15b0 100644 --- a/dev/examples/overview/index.html +++ b/dev/examples/overview/index.html @@ -1,2 +1,2 @@ -Overview · RxInfer.jl

Examples overview

This section contains a set of examples of Bayesian inference in various probabilistic models using the RxInfer package.

Note

All examples have been pre-generated automatically from the examples/ folder in the GitHub repository.


diff --git a/dev/index.html b/dev/index.html index 73714eb88..c9c9007f4 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,2 +1,2 @@ -Home · RxInfer.jl

RxInfer

Julia package for automatic Bayesian inference on a factor graph with reactive message passing.

Given a probabilistic model, RxInfer allows for efficient message passing-based Bayesian inference. It uses the model structure to generate an algorithm that consists of a sequence of local computations on a Forney-style factor graph (FFG) representation of the model. RxInfer.jl has been designed with a focus on efficiency, scalability and maximum performance for running inference with message passing.

Package Features

  • User friendly syntax for specification of probabilistic models.
  • Automatic generation of message passing algorithms, including belief propagation and variational message passing.
  • Support for hybrid models combining discrete and continuous latent variables.
  • Support for combining distinct message passing inference algorithms under a unified paradigm.
  • Factorisation and functional form constraints specification.
  • Evaluation of Bethe free energy as a model performance measure.
  • Schedule-free reactive message passing API.
  • High performance.
  • Scalability for large models with millions of parameters and observations.
  • Inference procedure is differentiable.
  • Easy to extend with custom nodes and message update rules.

Why RxInfer

Many important AI applications, including audio processing, self-driving vehicles, weather forecasting, and extended-reality video processing require continually solving an inference task in sophisticated probabilistic models with a large number of latent variables. Often, the inference task in these applications must be performed continually and in real-time in response to new observations. Popular MC-based inference methods, such as the No U-Turn Sampler (NUTS) or Hamiltonian Monte Carlo (HMC) sampling, rely on computationally heavy sampling procedures that do not scale well to probabilistic models with thousands of latent states. Therefore, MC-based inference is practically not suitable for real-time applications. While the alternative variational inference method (VI) promises to scale better to large models than sampling-based inference, VI requires the derivation of gradients of a "variational Free Energy" cost function. For large models, manual derivation of these gradients might not be feasible, while automated "black-box" gradient methods do not scale either because they are not capable of taking advantage of sparsity or conjugate pairs in the model. Therefore, while Bayesian inference is known as the optimal data processing framework, in practice, real-time AI applications rely on much simpler, often ad hoc, data processing algorithms.

RxInfer aims to remedy these issues by running efficient Bayesian inference in sophisticated probabilistic models, taking advantage of local conjugate relationships in probabilistic models, and focusing on real-time Bayesian inference in large state-space models with thousands of latent variables. In addition, RxInfer provides a straightforward way to extend its functionality with custom factor nodes and message passing update rules. The engine is capable of running various Bayesian inference algorithms in different parts of the factor graph of a single probabilistic model. This makes it easier to explore different "what-if" scenarios and enables very efficient inference in specific cases.

Ecosystem

RxInfer unites three core packages into one powerful reactive message passing-based Bayesian inference framework:

  • ReactiveMP.jl - core package for efficient and scalable reactive message passing
  • GraphPPL.jl - package for model and constraints specification
  • Rocket.jl - reactive programming tools

How to get started?

Head to the Getting started section to get up and running with RxInfer. Alternatively, explore various examples in the documentation.

Table of Contents

Resources

Index


diff --git a/dev/library/bethe-free-energy/index.html b/dev/library/bethe-free-energy/index.html index 97a9a742c..ba8234d41 100644 --- a/dev/library/bethe-free-energy/index.html +++ b/dev/library/bethe-free-energy/index.html @@ -1,2 +1,2 @@ -Bethe Free Energy · RxInfer.jl

Bethe Free Energy implementation in RxInfer

RxInfer.AbstractScoreObjectiveType
AbstractScoreObjective

Abstract type for functional objectives that can be used in the score function.

Note

score is defined in the ReactiveMP package.

source
RxInfer.BetheFreeEnergyType
BetheFreeEnergy(marginal_skip_strategy, scheduler, diagnostic_checks)

Creates a stream of Bethe Free Energy values when passed to the score function.

source
RxInfer.apply_diagnostic_checkFunction
apply_diagnostic_check(check, context, stream)

This function applies a check to the stream. Accepts an optional context object for custom error messages.

source
diff --git a/dev/library/exported-methods/index.html b/dev/library/exported-methods/index.html index 7a094deb6..fff610b12 100644 --- a/dev/library/exported-methods/index.html +++ b/dev/library/exported-methods/index.html @@ -801,4 +801,4 @@ with_latest wsample wsample! -zipped +zipped diff --git a/dev/library/functional-forms/index.html b/dev/library/functional-forms/index.html index a70e8328a..5c2febbe6 100644 --- a/dev/library/functional-forms/index.html +++ b/dev/library/functional-forms/index.html @@ -23,4 +23,4 @@ q(x) :: Marginal(x_posterior) end
RxInfer.FixedMarginalFormConstraintType
FixedMarginalFormConstraint

One of the form constraint objects. Provides a constraint on the marginal distribution such that it remains fixed during inference. Can be viewed as blocking updates on the specific edge associated with the marginal. If nothing is passed, then the computed posterior marginal is returned.

Traits

  • is_point_mass_form_constraint = false
  • default_form_check_strategy = FormConstraintCheckLast()
  • default_prod_constraint = ProdAnalytical()
  • make_form_constraint = Marginal (for use in @constraints macro)

See also: ReactiveMP.constrain_form, ReactiveMP.DistProduct

source

CompositeFormConstraint

It is possible to create a composite functional form constraint with either the + operator or the @constraints macro, e.g.:

form_constraint = SampleListFormConstraint(1000) + PointMassFormConstraint()
@constraints begin 
     q(x) :: SampleList(1000) :: PointMass()
-end
+end diff --git a/dev/library/model-specification/index.html b/dev/library/model-specification/index.html index f036400fe..fd6c58701 100644 --- a/dev/library/model-specification/index.html +++ b/dev/library/model-specification/index.html @@ -1,4 +1,4 @@ Model specification · RxInfer.jl

Model specification in RxInfer

RxInfer.@modelMacro
@model function model_name(model_arguments...; model_keyword_arguments...)
     # model description
-end

@model macro generates a function that returns an equivalent graph-representation of the given probabilistic model description.

Supported alias in the model specification

  • a || b: alias for OR(a, b) node (operator precedence between ||, &&, -> and ! is the same as in Julia).
  • a && b: alias for AND(a, b) node (operator precedence ||, &&, -> and ! is the same as in Julia).
  • a -> b: alias for IMPLY(a, b) node (operator precedence ||, &&, -> and ! is the same as in Julia).
  • ¬a and !a: alias for NOT(a) node (Unicode \neg, operator precedence ||, &&, -> and ! is the same as in Julia).
  • a + b + c: alias for (a + b) + c
  • a * b * c: alias for (a * b) * c
  • Normal(μ|m|mean = ..., σ²|τ⁻¹|v|var|variance = ...) alias for NormalMeanVariance(..., ...) node. Gaussian could be used instead Normal too.
  • Normal(μ|m|mean = ..., τ|γ|σ⁻²|w|p|prec|precision = ...) alias for NormalMeanPrecision(..., ...) node. Gaussian could be used instead Normal too.
  • MvNormal(μ|m|mean = ..., Σ|V|Λ⁻¹|cov|covariance = ...) alias for MvNormalMeanCovariance(..., ...) node. MvGaussian could be used instead MvNormal too.
  • MvNormal(μ|m|mean = ..., Λ|W|Σ⁻¹|prec|precision = ...) alias for MvNormalMeanPrecision(..., ...) node. MvGaussian could be used instead MvNormal too.
  • MvNormal(μ|m|mean = ..., τ|γ|σ⁻²|scale_diag_prec|scale_diag_precision = ...) alias for MvNormalMeanScalePrecision(..., ...) node. MvGaussian could be used instead MvNormal too.
  • Gamma(α|a|shape = ..., θ|β⁻¹|scale = ...) alias for GammaShapeScale(..., ...) node.
  • Gamma(α|a|shape = ..., β|θ⁻¹|rate = ...) alias for GammaShapeRate(..., ...) node.
source
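The keyword aliases above can be combined in a model definition. A minimal sketch (the model name and numeric values are hypothetical):

```julia
# Minimal sketch: Gaussian keyword aliases inside a @model definition.
@model function alias_example()
    y = datavar(Float64)
    x ~ Normal(mean = 0.0, variance = 1.0)   # alias for NormalMeanVariance(0.0, 1.0)
    y ~ Normal(mean = x, precision = 10.0)   # alias for NormalMeanPrecision(x, 10.0)
end
```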
RxInfer.ModelGeneratorType
ModelGenerator

ModelGenerator is a special object that is used in the inference function to lazily create the model later on, given constraints, meta and options.

See also: inference

source
RxInfer.create_modelFunction
create_model(::ModelGenerator, constraints = nothing, meta = nothing, options = nothing)

Creates an instance of FactorGraphModel from the given model specification as well as optional constraints, meta and options.

Returns a tuple of 2 values:

    1. an instance of FactorGraphModel
    2. the return value from the @model macro function definition
source
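As a short illustration, destructuring the returned tuple could look like this (the model name and arguments are placeholders):

```julia
# Sketch: build the factor graph model and capture the @model return value.
model, returnval = create_model(my_model(arguments...))
```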
RxInfer.ModelInferenceOptionsType
ModelInferenceOptions(; kwargs...)

Creates model inference options object. The list of available options is present below.

Options

  • limit_stack_depth: limits the stack depth for computing messages, which helps with StackOverflowError for some huge models but reduces the performance of the inference backend. Accepts an integer argument that specifies the maximum recursion depth. Lower is better for avoiding stack overflow errors, but worse for performance.

Advanced options

  • pipeline: changes the default pipeline for each factor node in the graph
  • global_reactive_scheduler: changes the scheduler of reactive streams, see Rocket.jl for more info, defaults to no scheduler

See also: inference, rxinference

source
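As a sketch, and assuming the options can be supplied as a named tuple through the options keyword of the inference function (the model and data names are placeholders):

```julia
# Hypothetical sketch: limit the message-computation stack depth
# for a very deep model to avoid StackOverflowError.
result = inference(
    model   = my_model(n),
    data    = (y = dataset,),
    options = (limit_stack_depth = 100,)
)
```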
diff --git a/dev/manuals/background/index.html b/dev/manuals/background/index.html index 1a77cd3b6..a3960c8f5 100644 --- a/dev/manuals/background/index.html +++ b/dev/manuals/background/index.html @@ -1,2 +1,2 @@ -Background: variational inference · RxInfer.jl
+Background: variational inference · RxInfer.jl
diff --git a/dev/manuals/constraints-specification/index.html b/dev/manuals/constraints-specification/index.html index 75aaae102..cf45aed65 100644 --- a/dev/manuals/constraints-specification/index.html +++ b/dev/manuals/constraints-specification/index.html @@ -108,4 +108,4 @@ ... end -model, returnval = create_model(my_model(arguments...); constraints = constraints)

Alternatively, it is possible to use constraints directly in the automatic inference and rxinference functions, which accept the constraints keyword argument.

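A minimal sketch of this approach (the model, variable names and data are placeholders):

```julia
# Hypothetical sketch: pass constraints directly to the inference function.
constraints = @constraints begin
    q(x, w) = q(x)q(w)   # mean-field factorisation between x and w
end

result = inference(
    model       = my_model(n),
    data        = (y = dataset,),
    constraints = constraints
)
```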

diff --git a/dev/manuals/custom-node/index.html index bb22cbd23..cceb2a80b 100644 --- a/dev/manuals/custom-node/index.html +++ b/dev/manuals/custom-node/index.html @@ -66,53 +66,53 @@ vline!([π_real], label="Real π")

As a sanity check, we can create the same model with the RxInfer built-in node Bernoulli and compare the resulting posterior distribution with the one obtained using our custom MyBernoulli node. This will give us confidence that our custom node is working correctly. We use the Bernoulli node with the same Beta prior and the observed data, and then run inference. We can compare the two posterior distributions and observe that they are exactly the same, which indicates that our custom node is performing as expected.

@model function coin_model(n)
 
     y = datavar(Float64, n)
@@ -133,4 +133,4 @@
     error("Results are not identical")
 else
     println("Results are identical 🎉🎉🎉")
-end
Results are identical 🎉🎉🎉

Congratulations! You have successfully implemented your own custom node in RxInfer. We went from the definition of a node to the implementation of the update rules and marginal posterior calculations. Finally, we tested our custom node in a model and checked that we implemented everything correctly.


diff --git a/dev/manuals/debugging/index.html index 103f1a893..96f14cde7 100644 --- a/dev/manuals/debugging/index.html +++ b/dev/manuals/debugging/index.html @@ -31,51 +31,51 @@ vline!([θ_real], label="Real θ", title = "Inference results")

We can figure out what's wrong by looking at the Memory Addon. To obtain the trace, we have to add addons = (AddonMemory(),) as an argument to the inference function.

result = inference(
     model = coin_model(length(dataset)),
     data  = (x = dataset, ),
@@ -108,47 +108,49 @@
 vline!([θ_real], label="Real θ", title = "Inference results")

Now the posterior is visible in the plot.


diff --git a/dev/manuals/getting-started/index.html index 19d0ee570..2a22d1295 100644 --- a/dev/manuals/getting-started/index.html +++ b/dev/manuals/getting-started/index.html @@ -79,81 +79,81 @@ plot(p1, p2, layout = @layout([ a; b ]))

In our dataset we used 10 coin flips to estimate the bias of a coin, which resulted in a rather vague posterior distribution; however, RxInfer scales very well to large models and factor graphs. We may use more coin flips in our dataset for better posterior distribution estimates:

dataset_100   = float.(rand(rng, Bernoulli(p), 100))
 dataset_1000  = float.(rand(rng, Bernoulli(p), 1000))
 dataset_10000 = float.(rand(rng, Bernoulli(p), 10000))
θestimated_100   = custom_inference(dataset_100)
@@ -167,91 +167,91 @@
 plot(p1, p3, layout = @layout([ a; b ]))

With a larger dataset, our posterior marginal estimate becomes more and more accurate and approaches the real value of the bias of the coin.

println("mean: ", mean(θestimated_10000))
 println("std:  ", std(θestimated_10000))
mean: 0.7529223698671196
-std:  0.004310967761308469

Where to go next?

A set of examples demonstrating the more advanced features of the package is available in the RxInfer repository, as well as in the Examples section of the documentation. Alternatively, you can head to the Model specification section, which provides more detailed information on how to use RxInfer to specify probabilistic models. The Inference execution section provides documentation on the RxInfer API for running reactive Bayesian inference.

+std: 0.004310967761308469


diff --git a/dev/manuals/inference/inference/index.html b/dev/manuals/inference/inference/index.html index 0a8b427cd..5b12878e6 100644 --- a/dev/manuals/inference/inference/index.html +++ b/dev/manuals/inference/inference/index.html @@ -50,4 +50,4 @@ on_marginal_update = (model, name, update) -> println("$(name) has been updated: $(update)"), after_inference = (args...) -> println("Inference has been completed") ) -)

The callbacks keyword argument accepts a named tuple of 'name = callback' pairs. The list of all possible callbacks and their arguments is presented below:

  • on_marginal_update: args: (model::FactorGraphModel, name::Symbol, update)
  • before_model_creation: args: ()
  • after_model_creation: args: (model::FactorGraphModel, returnval)
  • before_inference: args: (model::FactorGraphModel)
  • before_iteration: args: (model::FactorGraphModel, iteration::Int)::Bool
  • before_data_update: args: (model::FactorGraphModel, data)
  • after_data_update: args: (model::FactorGraphModel, data)
  • after_iteration: args: (model::FactorGraphModel, iteration::Int)::Bool
  • after_inference: args: (model::FactorGraphModel)

The before_iteration and after_iteration callbacks may return a true/false value; true indicates that iterations must be halted and no further inference should be performed.
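As an illustration, the halting behaviour can be combined with other callbacks in a single named tuple. This is a sketch only: `my_model` and `dataset` are hypothetical placeholders, not part of the RxInfer API.

```julia
# Sketch only: `my_model` and `dataset` are hypothetical placeholders
result = inference(
    model      = my_model(),
    data       = (y = dataset,),
    iterations = 50,
    callbacks  = (
        before_model_creation = () -> println("Creating the model..."),
        # Returning `true` halts the VMP iterations early, here after 10 iterations
        after_iteration       = (model, iteration) -> iteration >= 10
    )
)
```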

  • addons

The addons field extends the default message computation rules with extra information, e.g. computing log-scaling factors of messages or saving debug information. It accepts a single addon or a tuple of addons. If set, it replaces the corresponding setting in the options and automatically changes the default value of the postprocess argument to NoopPostprocess.
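For example, an addon can be enabled as follows. AddonLogScale is used here under the assumption that it is provided by the underlying ReactiveMP package; the model and data names are placeholders:

```julia
# Sketch only: assumes `AddonLogScale` is available from ReactiveMP
result = inference(
    model  = my_model(),
    data   = (y = dataset,),
    addons = AddonLogScale()
)
# With addons enabled, the posteriors keep their `Marginal` wrapper,
# so the addon data can be inspected on each marginal
```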

  • postprocess

The postprocess keyword argument controls whether the inference results must be modified in some way before the inference function returns. By default, the inference function uses the DefaultPostprocess strategy, which removes the Marginal wrapper type from the results. Change this setting to NoopPostprocess if you would like to keep the Marginal wrapper type, which might be useful in combination with the addons argument. If the addons argument has been used, the default strategy automatically changes to NoopPostprocess.
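A minimal sketch of overriding the default strategy (placeholder model and data):

```julia
# Keep the `Marginal` wrapper types in the returned posteriors
result = inference(
    model       = my_model(),
    data        = (y = dataset,),
    postprocess = NoopPostprocess()
)
```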

  • catch_exception

The catch_exception keyword argument specifies whether exceptions during the inference procedure should be caught and stored in the error field of the result. By default, if an exception occurs during the inference procedure, the result is lost. Set catch_exception = true to obtain a partial inference result in case an exception occurs. Use the RxInfer.issuccess and RxInfer.iserror functions to check whether the inference completed successfully or failed. If an error occurs, the error field stores a tuple, where the first element is the exception itself and the second element is the caught backtrace. Use the stacktrace function with the backtrace as an argument to recover the stacktrace of the error, and the Base.showerror function to display the error.
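A hedged sketch of how these pieces fit together (the model and data are placeholders):

```julia
result = inference(
    model           = my_model(),
    data            = (y = dataset,),
    catch_exception = true
)

if RxInfer.iserror(result)
    # `result.error` is a `(exception, backtrace)` tuple
    (exception, backtrace) = result.error
    Base.showerror(stderr, exception)
    println(stacktrace(backtrace))
end
```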

See also: InferenceResult, rxinference

source
RxInfer.InferenceResultType
InferenceResult

This structure is used as a return value from the inference function.

Public Fields

  • posteriors: Dict or NamedTuple of 'random variable' - 'posterior' pairs. See the returnvars argument for inference.
  • free_energy: (optional) An array of Bethe Free Energy values per VMP iteration. See the free_energy argument for inference.
  • model: FactorGraphModel object reference.
  • returnval: Return value from executed @model.
  • error: (optional) A reference to an exception that might have occurred during the inference. See the catch_exception argument for inference.
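For instance, fields of the returned structure can be accessed as follows; θ is a hypothetical random variable name and `my_model`/`dataset` are placeholders:

```julia
result = inference(model = my_model(), data = (y = dataset,))

posterior = result.posteriors[:θ]   # posterior marginal of the random variable `θ`
println("mean: ", mean(posterior))
println("std:  ", std(posterior))
```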

See also: inference

source
diff --git a/dev/manuals/inference/manual/index.html index df45fafc8..50ffdddf8 100644 --- a/dev/manuals/inference/manual/index.html +++ b/dev/manuals/inference/manual/index.html @@ -77,57 +77,57 @@ plot!(p1, [ real_mean ], seriestype = :vline, label = "Real mean", color = :red4, opacity = 0.7)
p2    = plot(title = "'Precision' posterior marginals")
 grid2 = 0.01:0.001:0.35
 
@@ -143,59 +143,59 @@
 plot!(p2, [ real_precision ], seriestype = :vline, label = "Real precision", color = :red4, opacity = 0.7)

Computing Bethe Free Energy

VMP inference boils down to finding the member of a family of tractable probability distributions that is closest in KL divergence to an intractable posterior distribution. This is achieved by minimizing a quantity known as the Variational Free Energy. RxInfer uses the Bethe Free Energy approximation to the real Variational Free Energy. The free energy is particularly useful for testing the convergence of the iterative VMP procedure.

The RxInfer package exports the score function, which returns an observable of free energy values:

fe_observable = score(model, BetheFreeEnergy())
# Reset posterior marginals for `m` and `w`
 setmarginal!(m, vague(NormalMeanVariance))
 setmarginal!(w, vague(Gamma))
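One possible way to collect the emitted free energy values is to subscribe to the observable, in the style of the Rocket.jl reactive API; this is a sketch, not the full manual inference routine:

```julia
# Collect Bethe Free Energy values emitted during the VMP iterations
fe_values       = Float64[]
fe_subscription = subscribe!(fe_observable, (fe) -> push!(fe_values, fe))

# ... perform the data updates / VMP iterations here ...

unsubscribe!(fe_subscription)
```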
@@ -213,51 +213,51 @@ 

plot(fe_values, label = "Bethe Free Energy", xlabel = "Iteration #")
diff --git a/dev/manuals/inference/overview/index.html index c27ee182e..1f1dd391b 100644 --- a/dev/manuals/inference/overview/index.html +++ b/dev/manuals/inference/overview/index.html @@ -1,2 +1,2 @@ -Overview · RxInfer.jl

Inference execution

The RxInfer inference API supports different types of message-passing algorithms (including hybrid algorithms combining several different types), such as belief propagation and variational message passing.

Whereas belief propagation computes exact inference for the random variables of interest, variational message passing (VMP) is an approximation method that can be applied to a larger range of models.

The inference engine itself is not aware of the different algorithm types and simply passes messages between nodes; however, during the model specification stage the user may specify different factorisation constraints around factor nodes using the where { q = ... } syntax or with the help of the @constraints macro. Different factorisation constraints lead to different message-passing update rules. See the corresponding section for more documentation about constraints specification.
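For example, a mean-field factorisation constraint for a model with hypothetical variables μ and τ could be written with the @constraints macro; the resulting object can then be passed to the inference functions via their constraints keyword argument:

```julia
constraints = @constraints begin
    # Mean-field assumption: the joint q(μ, τ) factorises into q(μ)q(τ)
    q(μ, τ) = q(μ)q(τ)
end
```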

Automatic inference specification on static datasets

RxInfer exports the inference function to quickly run and test your model with static datasets. See the separate documentation section for more information about the inference function.
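As a sketch mirroring the coin-toss example earlier in this documentation (the exact model body is illustrative, and `dataset` is a placeholder):

```julia
@model function coin_model(n)
    # Uniform prior over the coin bias
    θ ~ Beta(1.0, 1.0)
    y = datavar(Float64, n)
    for i in 1:n
        y[i] ~ Bernoulli(θ)
    end
end

result = inference(
    model = coin_model(100),
    data  = (y = dataset,)   # `dataset` is a vector of 100 observations
)
```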

Automatic inference specification on real-time datasets

RxInfer exports the rxinference function to quickly run and test your model with dynamic and potentially real-time datasets. See the separate documentation section for more information about the rxinference function.
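A hedged sketch of the streaming setup: the model name, the datastream, and the @autoupdates body below are illustrative placeholders rather than a verbatim API reference:

```julia
# Re-parameterise the prior of `θ` from its latest posterior on every update
autoupdates = @autoupdates begin
    a, b = params(q(θ))
end

engine = rxinference(
    model       = streaming_coin_model(),
    datastream  = observations,   # an observable of `(y = ...,)` named tuples
    autoupdates = autoupdates,
    autostart   = true
)
```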

Manual inference specification

While both inference and rxinference cover most of the RxInfer inference engine capabilities, in some situations it might be beneficial to write the inference code manually. The Manual inference documentation section explains how to write your own custom inference routines.

diff --git a/dev/manuals/inference/postprocess/index.html b/dev/manuals/inference/postprocess/index.html index 9bba5eb02..2ea57d2ad 100644 --- a/dev/manuals/inference/postprocess/index.html +++ b/dev/manuals/inference/postprocess/index.html @@ -1,2 +1,2 @@ -Inference results postprocessing · RxInfer.jl

Inference results postprocessing

Both inference and rxinference allow users to postprocess the inference result with the postprocess = ... keyword argument. The inference engine operates on wrapper types to distinguish between marginals and messages. By default, these wrapper types are removed from the inference results if no addons option is present. With addons enabled, however, the wrapper types are preserved in the inference result output value. Use the postprocess option to change this behaviour.


diff --git a/dev/manuals/inference/rxinference/index.html b/dev/manuals/inference/rxinference/index.html index ef296dc3f..00141f368 100644 --- a/dev/manuals/inference/rxinference/index.html +++ b/dev/manuals/inference/rxinference/index.html @@ -117,4 +117,4 @@ end

and later on:

engine = rxinference(events = Val((:after_iteration, )), ...)
 
subscription = subscribe!(engine.events, MyEventListener(...))

See also: rxinference, RxInferenceEngine

source
diff --git a/dev/manuals/meta-specification/index.html index b5bfb22f6..95520bd39 100644 --- a/dev/manuals/meta-specification/index.html +++ b/dev/manuals/meta-specification/index.html @@ -28,4 +28,4 @@ ... end -model, returnval = create_model(my_model(arguments...); meta = meta)

Alternatively, it is possible to pass the meta specification directly to the automatic inference and rxinference functions, which accept a meta keyword argument.
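For instance (the factor node f, the MyMeta structure, and the model/data names below are hypothetical):

```julia
meta = @meta begin
    # Attach `MyMeta(42)` to every `f(x, y)` factor node in the model
    f(x, y) -> MyMeta(42)
end

result = inference(
    model = my_model(),
    data  = (y = dataset,),
    meta  = meta
)
```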


diff --git a/dev/manuals/model-specification/index.html b/dev/manuals/model-specification/index.html index 623babb7c..cb5b4cb8a 100644 --- a/dev/manuals/model-specification/index.html +++ b/dev/manuals/model-specification/index.html @@ -114,4 +114,4 @@ # All methods can be combined easily y ~ NormalMeanVariance(y_mean, y_var) where { q = q(μ)q(y_var)q(out) } -y ~ NormalMeanVariance(y_mean, y_var) where { q = q(y_mean, v)q(y) }

Metadata option

It is possible to pass any extra metadata to a factor node with the meta option. Metadata can later be accessed in message computation rules. See also the Meta specification section.

z ~ f(x, y) where { meta = ... }
diff --git a/dev/search/index.html b/dev/search/index.html index cbebdf075..8ae694b4f 100644 --- a/dev/search/index.html +++ b/dev/search/index.html @@ -1,2 +1,2 @@ -Search · RxInfer.jl
