Commit 6075bae
Changes to vignettes (#227)
1 parent 3a3e291 commit 6075bae

2 files changed: +13 -13 lines changed

vignettes/convert.Rmd (+8 -5)
@@ -39,17 +39,20 @@ The `effectsize` package contains functions to convert among indices of effect size
 
 The most basic conversion is between *r* values, a measure of standardized association between two continuous measures, and *d* values (such as Cohen's *d*), a measure of standardized differences between two groups / conditions.
 
-Let's look at the following data:
+Let's simulate some data:
 
-```{r, echo=FALSE}
+```{r}
 set.seed(1)
 data <- bayestestR::simulate_difference(n = 10,
                                         d = 0.2,
                                         names = c("Group", "Outcome"))
-data$Group <- as.numeric(data$Group)
+```
+
+```{r, echo=FALSE}
 print(data, digits = 3)
 ```
 
+
 We can compute Cohen's *d* between the two groups:
 
 ```{r}
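For reference, the `cohens_d()` computation the vignette introduces at this point boils down to the mean difference divided by the pooled standard deviation. A minimal base-R sketch, using made-up numbers rather than the vignette's simulated data:

```r
# Cohen's d by hand: mean difference over the pooled standard deviation.
# The data here are made up for illustration, not the vignette's
# bayestestR-simulated data.
cohens_d_manual <- function(x, y) {
  n1 <- length(x); n2 <- length(y)
  sd_pooled <- sqrt(((n1 - 1) * var(x) + (n2 - 1) * var(y)) / (n1 + n2 - 2))
  (mean(x) - mean(y)) / sd_pooled
}

g1 <- c(4.1, 5.0, 4.6, 5.2, 4.8)
g2 <- c(3.9, 4.2, 4.0, 4.5, 4.1)
cohens_d_manual(g1, g2)
```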
@@ -58,8 +61,8 @@ cohens_d(Outcome ~ Group, data = data)
 
 But we can also treat the 2-level `group` variable as a numeric variable, and compute Pearson's *r*:
 
-```{r}
-correlation::correlation(data)
+```{r, warning=FALSE}
+correlation::correlation(data)[2,]
 ```
 
 
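The r-to-d conversion this vignette builds toward follows the standard equal-group-size formulas, d = 2r / sqrt(1 - r^2) and r = d / sqrt(d^2 + 4); effectsize ships these as `r_to_d()` and `d_to_r()`. A base-R sketch with illustrative helper names:

```r
# Standard r <-> d conversion, assuming two groups of equal size.
# These helper names are illustrative stand-ins for effectsize's
# r_to_d() / d_to_r().
r_to_d_manual <- function(r) 2 * r / sqrt(1 - r^2)
d_to_r_manual <- function(d) d / sqrt(d^2 + 4)

d_to_r_manual(0.2)                 # the simulated d above maps to a small r
r_to_d_manual(d_to_r_manual(0.2))  # round-trips back to 0.2
```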

vignettes/standardize_parameters.Rmd (+5 -8)
@@ -62,11 +62,9 @@ standardize_parameters(m)
 Standardizing the coefficient of this *simple* linear regression gives a value of `0.87`, but did you know that for a simple regression this is actually the **same as a correlation**? Thus, you can then apply some (*in*)famous interpretation guidelines (e.g., Cohen's rules of thumb).
 
 ```{r}
-library(parameters)
-
 r <- cor.test(iris$Sepal.Length, iris$Petal.Length)
 
-model_parameters(r)
+parameters::model_parameters(r)
 ```
 
 
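The equality claimed in that vignette passage is easy to verify in base R with the same iris columns: the slope from regressing the z-scored outcome on the z-scored predictor is exactly Pearson's *r*.

```r
# For a simple linear regression, the standardized slope equals Pearson's r.
# Same iris columns as the vignette's cor.test() example.
b_std <- coef(lm(scale(Petal.Length) ~ scale(Sepal.Length), data = iris))[2]
r <- cor(iris$Sepal.Length, iris$Petal.Length)

round(unname(b_std), 2)      # 0.87, the value quoted in the vignette
all.equal(unname(b_std), r)  # TRUE
```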
@@ -165,7 +163,7 @@ However, not all hope is lost yet - we can still try and recover the partial correlations
 
 
 ```{r}
-params <- model_parameters(mod)
+params <- parameters::model_parameters(mod)
 
 t_to_r(params$t[-1], df_error = params$df_error[-1])
 ```
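Under the hood, this `t_to_r()` conversion is the textbook relation r = t / sqrt(t^2 + df). A base-R sketch; `t_to_r_manual` is a hypothetical stand-in, not the package function:

```r
# Partial correlation recovered from a t statistic: r = t / sqrt(t^2 + df).
# t_to_r_manual is an illustrative stand-in for effectsize's t_to_r().
t_to_r_manual <- function(t, df_error) t / sqrt(t^2 + df_error)

t_to_r_manual(t = 2.5, df_error = 95)  # a modest partial correlation
```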
@@ -184,7 +182,7 @@ hardlyworking$age_g <- cut(hardlyworking$age,
 mod <- lm(salary ~ xtra_hours + n_comps + age_g + seniority,
           data = hardlyworking)
 
-model_parameters(mod)
+parameters::model_parameters(mod)
 ```
 
 It seems like the best or most important predictor is `n_comps`, as it has the largest coefficient. However, it is hard to compare among predictors, as they are on different scales. To address this issue, we must have all the predictors on the same scale - usually in the arbitrary unit of *standard deviations*.
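The scale problem the vignette notes here is easy to see: change a predictor's units and its coefficient changes by the same factor, even though the model is identical. A base-R illustration using mtcars in place of the vignette's `hardlyworking` data:

```r
# Raw coefficients depend on the predictor's units: weight in pounds gives a
# slope 1000 times smaller than weight in 1000s of pounds, for the same model.
# mtcars stands in for the vignette's `hardlyworking` data.
m_klbs <- lm(mpg ~ wt, data = mtcars)            # wt in 1000s of lbs
m_lbs  <- lm(mpg ~ I(wt * 1000), data = mtcars)  # wt in lbs

c(coef(m_klbs)[2], coef(m_lbs)[2] * 1000)  # identical once units are undone
```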
@@ -217,7 +215,7 @@ standardize_parameters(mod, method = "refit", two_sd = TRUE)
 mod_z <- standardize(mod, two_sd = FALSE, robust = FALSE)
 mod_z
 
-model_parameters(mod_z)
+parameters::model_parameters(mod_z)
 ```
 
 
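The `standardize()`-then-`model_parameters()` pattern in this hunk is the "refit" approach: z-score the data and refit the model. For a linear model it agrees with post-hoc rescaling of each raw coefficient by sd(x)/sd(y). A base-R sketch on mtcars, for illustration only:

```r
# "Refit" standardization (fit on z-scored data) agrees with post-hoc
# rescaling of raw coefficients by sd(x)/sd(y) for a linear model.
# mtcars is used here for illustration, not the vignette's data.
m_raw <- lm(mpg ~ wt + hp, data = mtcars)
m_std <- lm(scale(mpg) ~ scale(wt) + scale(hp), data = mtcars)

b_refit <- unname(coef(m_std)[2])
b_basic <- unname(coef(m_raw)[2]) * sd(mtcars$wt) / sd(mtcars$mpg)
all.equal(b_refit, b_basic)  # TRUE
```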
@@ -258,8 +256,7 @@ Linear mixed models (LMM/HLM/MLM) offer an additional conundrum to standardization
 The solution: standardize according to the level of the predictor [@hoffman2015longitudinal, page 342]! Level 1 parameters are standardized according to variance *within* groups, while level 2 parameters are standardized according to variance *between* groups. The resulting standardized coefficients are also called *pseudo*-standardized coefficients.[^Note that like method `"basic"`, these are based on the model matrix.]
 
 ```{r, eval=knitr::opts_chunk$get("eval") && require(lme4) && require(lmerTest)}
-library(lme4)
-m <- lmer(mpg ~ cyl + am + vs + (1|cyl), mtcars)
+m <- lme4::lmer(mpg ~ cyl + am + vs + (1|cyl), mtcars)
 
 standardize_parameters(m, method = "pseudo", df_method = "satterthwaite")
 
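The within/between distinction that drives pseudo-standardization can be made concrete by splitting a predictor into group means (level 2 variation) and group-centered deviations (level 1 variation). A base-R sketch using mtcars with `cyl` as the grouping factor; the actual pseudo method also involves the model's variance components:

```r
# Decompose a predictor into between-group and within-group parts;
# pseudo-standardization scales level-1 and level-2 terms by the matching SDs.
# mtcars / cyl are stand-ins for genuine multilevel data.
x <- mtcars$hp
g <- mtcars$cyl

x_between <- ave(x, g)       # group means: varies between groups only
x_within  <- x - x_between   # group-centered: varies within groups only

c(sd_between = sd(x_between), sd_within = sd(x_within))
```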
0 commit comments