RuleMethodError: improve debugging information #366
Also, @fonsp proposed an interesting solution to help the user here. I am not completely sure how to implement it, but the idea is the following: convert the model into a forward model, run the forward pass, and hope that an error raised in the forward pass is more informative. The point is this: we run into trouble with rules in the backward pass, but only because the model has no physical meaning in the first place (the dimension sanity check does not pass). So for this particular example the broken forward model could look like this:

```julia
@model function linear_regression(a, b, x)
    y .~ Normal(mean = a * x .+ b, variance = 1.0)
    y .~ Uninformative()
end
```
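The script that triggers it (`linear_model_forward_pass.jl` in the stacktrace below) is not shown in the thread; presumably it calls `infer` roughly like the working example later in this comment:

```julia
# Hypothetical reconstruction of the triggering call, not the original script:
results = infer(
    model = linear_regression(a = 10.0, b = 1.0),
    data = (x = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],),
)
```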
If you run it, you will obtain the following error:

```
ERROR: Cannot broadcast scalar inputs over an unspecified or one-dimensional return array. Did you accidentally make a statement like this: `x ~ Bernoulli(Beta.(1, 1))` without initializing `x`?
Stacktrace:
[1] error(s::String)
@ Base ./error.jl:35
[2] __check_vectorized_input(::Tuple{Nothing, GraphPPL.NodeLabel})
@ GraphPPL ~/repos/ReactiveBayes/GraphPPL.jl/src/model_macro.jl:553
[3] macro expansion
@ ~/repos/ReactiveBayes/GraphPPL.jl/src/model_macro.jl:609 [inlined]
[4] macro expansion
@ ~/repos/ReactiveBayes/RxInfer.jl/linear_model_forward_pass.jl:28 [inlined]
[5] add_terminated_submodel!(__model__::GraphPPL.Model{…}, __context__::GraphPPL.Context, __options__::GraphPPL.NodeCreationOptions{…}, ::typeof(linear_regression), __interfaces__::@NamedTuple{…}, ::Static.StaticInt{…})
@ Main ~/repos/ReactiveBayes/GraphPPL.jl/src/model_macro.jl:724
[6] add_terminated_submodel!(model::GraphPPL.Model{…}, context::GraphPPL.Context, options::GraphPPL.NodeCreationOptions{…}, fform::Function, interfaces::@NamedTuple{…})
@ GraphPPL ~/repos/ReactiveBayes/GraphPPL.jl/src/graph_engine.jl:2069
[7] add_terminated_submodel!(model::GraphPPL.Model{…}, context::GraphPPL.Context, fform::Function, interfaces::@NamedTuple{…})
@ GraphPPL ~/repos/ReactiveBayes/GraphPPL.jl/src/graph_engine.jl:2065
[8] add_toplevel_model!(model::GraphPPL.Model{…}, context::GraphPPL.Context, fform::Function, interfaces::@NamedTuple{…})
@ GraphPPL ~/repos/ReactiveBayes/GraphPPL.jl/src/graph_engine.jl:2085
[9] create_model(callback::RxInfer.var"#24#26"{…}, generator::GraphPPL.ModelGenerator{…})
@ GraphPPL ~/repos/ReactiveBayes/GraphPPL.jl/src/model_generator.jl:97
[10] __infer_create_factor_graph_model(generator::GraphPPL.ModelGenerator{…}, conditioned_on::@NamedTuple{…})
@ RxInfer ~/repos/ReactiveBayes/RxInfer.jl/src/model/model.jl:122
[11] create_model(generator::RxInfer.ConditionedModelGenerator{GraphPPL.ModelGenerator{…}, @NamedTuple{…}})
@ RxInfer ~/repos/ReactiveBayes/RxInfer.jl/src/model/model.jl:110
[12] batch_inference(; model::GraphPPL.ModelGenerator{…}, data::@NamedTuple{…}, initialization::Nothing, constraints::Nothing, meta::Nothing, options::Nothing, returnvars::KeepLast, predictvars::Nothing, iterations::Int64, free_energy::Bool, free_energy_diagnostics::Tuple{…}, showprogress::Bool, callbacks::Nothing, addons::Nothing, postprocess::DefaultPostprocess, warn::Bool, catch_exception::Bool)
@ RxInfer ~/repos/ReactiveBayes/RxInfer.jl/src/inference/batch.jl:199
[13] batch_inference
@ ~/repos/ReactiveBayes/RxInfer.jl/src/inference/batch.jl:94 [inlined]
[14] #infer#244
@ ~/repos/ReactiveBayes/RxInfer.jl/src/inference/inference.jl:306 [inlined]
[15] top-level scope
@ ~/repos/ReactiveBayes/RxInfer.jl/linear_model_forward_pass.jl:34
Some type information was truncated. Use `show(err)` to see complete types.
```

The key part here, we think, is:

```julia
@model function linear_regression(a, b, x)
    y .~ Normal(mean = a .* x .+ b, variance = 1.0)
    y .~ Uninformative()
end
```

And this model runs!

```julia
results = infer(
    model = linear_regression(a = 10.0, b = 1.0),
    data = (x = [1.0, 1.0, 1, 1, 1, 1],),
    returnvars = (y = KeepLast(),),  # trailing comma makes this a NamedTuple, not an assignment
    iterations = 1,
    free_energy = true
)
@show results.posteriors[:y]
# 6-element Vector{NormalMeanVariance{Float64}}:
# NormalMeanVariance{Float64}(μ=11.0, v=1.0)
# NormalMeanVariance{Float64}(μ=11.0, v=1.0)
# NormalMeanVariance{Float64}(μ=11.0, v=1.0)
# NormalMeanVariance{Float64}(μ=11.0, v=1.0)
# NormalMeanVariance{Float64}(μ=11.0, v=1.0)
# NormalMeanVariance{Float64}(μ=11.0, v=1.0)
```
That would be nice, but GraphPPL models do not store a direction, because our models are not DAGs. You can have loops in the graph structure, which makes it impossible to convert a model into a forward-pass model; IMO it's not going to work outside of a very small class of models. There is also no notion of observations: you can have data coming both from the "top" and the "bottom" of the graph structure. The original confusion, I think, comes from the fact that the error says the rule is not defined for something like a …
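To illustrate the structural point, here is a minimal sketch of a model whose factor graph contains a loop (my illustration, not the example originally referenced in this comment):

```julia
using RxInfer

# Sketch: both observations connect the same two latents, so the
# (undirected) factor graph contains the cycle a - (+) - b - (-) - a.
# GraphPPL stores only this undirected structure, with no generative
# direction attached to the edges.
@model function loopy(x, y)
    a ~ Normal(mean = 0.0, variance = 1.0)
    b ~ Normal(mean = 0.0, variance = 1.0)
    x ~ Normal(mean = a + b, variance = 1.0)
    y ~ Normal(mean = a - b, variance = 1.0)
end
```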
Sure, I understand that our models are not DAGs, but when you generate a forward model you can use additional information beyond just the graph, such as the information the user provided in the inference call. Additionally, for many nodes we have interfaces for input and output, so we could define a method for each node that determines the generative direction. In this model, `x` is attached to the interface …
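A sketch of what such a per-node method could look like (hypothetical names and API; nothing like this currently exists in GraphPPL or ReactiveMP):

```julia
# Hypothetical trait: map a factor node's functional form to the interface
# that plays the role of its generative output.
generative_output(fform) = nothing        # default: direction unknown
generative_output(::Type{Normal}) = :out  # e.g. samples leave via `out`

# Idea: starting from the interfaces bound to user-supplied data in the
# `infer` call, orient each factor node with `generative_output` and run
# the forward-pass (dimension) checks wherever the direction is determined.
```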
Hello! 👋
I was experimenting with error messages in RxInfer, and, talking with @Nimrais, we found one example where the error message could really be improved.
I followed the example from https://reactivebayes.github.io/RxInfer.jl/stable/examples/basic_examples/Bayesian%20Linear%20Regression%20Tutorial/, but I removed the `.` broadcasting in the definition of `y`.

Before:
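Presumably the broadcasted statement from the tutorial (reconstructed; the original snippet is not preserved here):

```julia
y .~ Normal(mean = a .* x .+ b, variance = 1.0)
```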
After:
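Presumably the same statement with the broadcasting dots removed, matching the line quoted later in this issue:

```julia
y .~ Normal(mean = a * x + b, variance = 1.0)
```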
Error message:
The information that is missing in the error message is:

* where in the model the error happened (currently hidden in the possible fix)
* which variables are involved (`a` and `x`)
* what the offending statement was (`y .~ Normal(mean = a * x + b, variance = 1.0)`)

(`Base.Experimental.register_error_hint` might be useful here.)
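As a sketch of that idea, assuming `ReactiveMP.RuleMethodError` is the exception being thrown (the hint text below is mine, not a proposed final wording):

```julia
using ReactiveMP

# Note: hints only display if `showerror` for the exception type calls
# `Base.Experimental.show_error_hints`; handlers for non-MethodError
# exception types receive (io, exc).
Base.Experimental.register_error_hint(ReactiveMP.RuleMethodError) do io, exc
    print(io, "\nHint: if the variable on the left of `~` is a vector, make sure ")
    print(io, "the right-hand side is fully broadcasted, e.g. `a .* x .+ b`.")
end
```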
When you do something similar in base Julia, you get a MethodError, and the error message answers these three questions.
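For comparison, a minimal base-Julia reproduction of the same kind of mistake (illustrative; output abbreviated, exact wording varies across Julia versions):

```julia
julia> a = 10.0; b = 1.0; x = [1.0, 1.0];

julia> a * x + b   # vector plus scalar, without broadcasting
ERROR: MethodError: no method matching +(::Vector{Float64}, ::Float64)
For element-wise addition, use broadcasting with dot syntax: array .+ scalar
```

The `MethodError` names the operation, shows the argument types involved, and a registered error hint suggests the fix (`.+`); the stacktrace then gives the location.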