evidence 'authority'? #356
Replies: 4 comments 4 replies
-
Power in the frequentist sense isn't really a thing here: it's not common to mix frequentist error rates into Bayesian statistics. Sequential analysis is also much less of a problem, so the general idea is to collect data until you have enough evidence. Having said that, some thoughts:
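To make "collect data until you have enough evidence" concrete, here is a minimal sketch of a precision-based stopping rule for a binomial proportion. It assumes a flat Beta(1, 1) prior, a simulated true rate of 0.3, and a stopping threshold of a 0.10-wide 95% credible interval; all of those numbers are illustrative choices, not anything stated in this thread, and Python/SciPy is just one possible toolset.

```python
# Sketch: stop sampling once the posterior for a binomial proportion
# is "narrow enough". Flat Beta(1, 1) prior; thresholds are assumptions.
import random
from scipy.stats import beta

random.seed(1)
TRUE_RATE = 0.3       # data-generating rate, used only to simulate draws
MAX_N = 5000          # safety cap on the number of observations
WIDTH_TARGET = 0.10   # stop when the 95% credible interval is this narrow

successes = 0
for n in range(1, MAX_N + 1):
    successes += random.random() < TRUE_RATE
    failures = n - successes
    # Posterior is Beta(1 + successes, 1 + failures) under the flat prior.
    lo = beta.ppf(0.025, 1 + successes, 1 + failures)
    hi = beta.ppf(0.975, 1 + successes, 1 + failures)
    if hi - lo < WIDTH_TARGET:
        break

print(f"stopped at n={n}, 95% interval ({lo:.3f}, {hi:.3f})")
```

Unlike a frequentist sequential design, peeking at the posterior after every observation does not inflate any error rate; the interval is valid whenever you stop.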
-
I think it all boils down to the fact that, in the Bayesian framework, one needs to think differently than in the frequentist one, and there is no straightforward equivalent of a "power analysis": instead, one can directly quantify the evidence in favour of, or against, a hypothesis given the available observations. That said, here are some interesting links:
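One common way to "quantify the evidence in favour, or against" is a Bayes factor. Below is a hedged sketch using the Savage–Dickey density ratio for a point null H0: theta = 0.5 in a binomial model with a flat Beta(1, 1) prior; the counts (70 successes in 100 trials) are invented for illustration and are not from this discussion.

```python
# Sketch: Savage-Dickey Bayes factor for H0: theta = theta0 in a
# binomial model with a Beta(1, 1) prior. Counts are hypothetical.
from scipy.stats import beta

def savage_dickey_bf01(successes, failures, theta0=0.5):
    """BF01 = posterior density / prior density, both evaluated at theta0."""
    prior_density = beta.pdf(theta0, 1, 1)                  # flat prior -> 1.0
    posterior_density = beta.pdf(theta0, 1 + successes, 1 + failures)
    return posterior_density / prior_density

# 70 successes out of 100 trials: the posterior puts almost no mass
# near 0.5, so BF01 is small (evidence against the point null).
bf01 = savage_dickey_bf01(70, 30)
print(f"BF01 = {bf01:.5f}")
```

Note that a Bayes factor quantifies evidence from the data you already have; it is a summary of the posterior-versus-prior shift, not a pre-experiment sample-size guarantee.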
-
I'm referring to the number of observations that form the 'evidence'. (I see I inadvertently chose the adjective 'effective' earlier; I'm not talking about https://en.wikipedia.org/wiki/Effective_sample_size. Ugh!)
-
sample size investigation (more info) bottom line: in a binomial distribution context, I feel more confident comparing two trials that use similar sample sizes than comparing the binomial modes derived from trials with dissimilar sample sizes. For a reasonable comparison, I sense (from the figure below) that a "valid sample size" should be > 30. question: am I drawing a statistically unsound conclusion?
prior and posterior visuals, code, and data: https://github.com/cordphelps/hdi
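One way to sidestep the "dissimilar sample sizes" worry is to compare full posteriors rather than modes. The sketch below (a hypothetical example, not taken from the linked repo) uses two trials with the same observed proportion (0.6) but very different n, under flat Beta(1, 1) priors: the small trial's posterior is far wider, and a Monte Carlo estimate of P(theta_A > theta_B) stays near 0.5, correctly reflecting that the data can't distinguish them.

```python
# Sketch: compare two binomial trials via their full posteriors,
# not their modes. Counts are hypothetical; priors are Beta(1, 1).
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)

# Trial A: 6/10, Trial B: 60/100 -- identical modes, dissimilar n
post_a = beta(1 + 6, 1 + 4)      # Beta(7, 5)
post_b = beta(1 + 60, 1 + 40)    # Beta(61, 41)

width_a = post_a.ppf(0.975) - post_a.ppf(0.025)
width_b = post_b.ppf(0.975) - post_b.ppf(0.025)

# Monte Carlo estimate of P(theta_A > theta_B)
draws_a = post_a.rvs(100_000, random_state=rng)
draws_b = post_b.rvs(100_000, random_state=rng)
prob_a_gt_b = (draws_a > draws_b).mean()

print(f"95% interval width: A={width_a:.3f}, B={width_b:.3f}")
print(f"P(theta_A > theta_B) ~= {prob_a_gt_b:.2f}")
```

On this view there is no hard cutoff like n > 30; the posterior width itself tells you how much the sample size is (or isn't) buying you.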
-
I'm looking for tools to examine the authority of the 'evidence' used in posterior derivations. Is there a concept similar to frequentist 'power' for determining an appropriate sample size for Bayesian evidence? Thank you.