Labels: Waiting for response 💌 (need more information from people who submitted the issue)
Description
So, I've noticed that when plotting simulated quantile residuals, the results can be quite different between DHARMa and performance.
For example, in this code:
library(DHARMa)
library(lme4)
library(performance)
set.seed(31415)
testData = createData(sampleSize = 10, overdispersion = 0.5, family = poisson())
fittedModel <- glm(observedResponse ~ Environment1,
family = "poisson", data = testData)
simulationOutput <- simulateResiduals(fittedModel = fittedModel)
plotQQunif(simulationOutput,
testDispersion = FALSE,
testUniformity = FALSE,
testOutliers = FALSE)
check_residuals(simulationOutput) |> plot()
This produces two QQ plots. While they qualitatively match, there are big differences at the upper tail: the performance version looks really problematic while the DHARMa version does not.
What's going on here? This seems like there shouldn't be disagreement, and it makes me nervous that there is!
This is a reprex, but in working with real data the discrepancies are even larger, although the KS tests agree. So the underlying values are the same? I'm confused.
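For anyone digging into this, here is a minimal base-R sketch of what simulated (randomized) quantile residuals compute for a Poisson GLM. This is only an illustration of the general idea behind DHARMa::simulateResiduals (an empirical CDF of simulated responses evaluated at the observed value, with randomization to break ties for a discrete response), not the exact implementation of either package; the model and data here are made up for the example.

```r
# Hypothetical minimal sketch of simulated quantile residuals,
# base R only. Not the actual DHARMa or performance code.
set.seed(1)
n <- 100
x <- rnorm(n)
mu <- exp(0.5 + 0.8 * x)
y <- rpois(n, mu)
fit <- glm(y ~ x, family = poisson)

nsim <- 250
sims <- simulate(fit, nsim = nsim)   # n rows, nsim columns of simulated responses

# Randomized PIT: position of the observed value within its simulated
# distribution, jittered uniformly across ties so that, under a correct
# model, the residuals are approximately U(0, 1).
res <- sapply(seq_len(n), function(i) {
  s <- unlist(sims[i, ])
  (sum(s < y[i]) + runif(1) * (1 + sum(s == y[i]))) / (nsim + 1)
})

# Uniformity check, analogous to the KS test both packages report:
ks.test(res, "punif")
```

If both packages start from residuals like these, any visual disagreement would have to come from how each draws the QQ plot (e.g. expected quantiles, confidence bands) rather than from the residual values themselves, which would be consistent with the KS tests agreeing.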