Usecase clustering #90

Open: wants to merge 8 commits into base: gh-pages
147 changes: 147 additions & 0 deletions src/use_case_classification.Rmd
# Use Cases

## Classification

A popular example of a classification task is the "Titanic" showcase: given passenger information - like name, age or fare - the aim is to predict which kinds of passengers survived the sinking of the Titanic.

First we load the titanic dataset and the other libraries needed for this use case.

```{r, results='hide', message=FALSE, warning=FALSE}
library(titanic)
library(mlr)
library(BBmisc)
```


```{r}
data = titanic_train
head(data)
```

Our aim - as mentioned before - is to predict which kinds of passengers would have survived.
To do so, we will work through the following steps:

* preprocessing, [here](http://mlr-org.github.io/mlr-tutorial/devel/html/preproc/index.html) and [here](http://mlr-org.github.io/mlr-tutorial/devel/html/impute/index.html)
* create a task, [here](http://mlr-org.github.io/mlr-tutorial/devel/html/task/index.html)
* provide a learner, [here](http://mlr-org.github.io/mlr-tutorial/devel/html/learner/index.html)
* train the model, [here](http://mlr-org.github.io/mlr-tutorial/devel/html/train/index.html)
* predict the survival chance, [here](http://mlr-org.github.io/mlr-tutorial/devel/html/predict/index.html)
* validate the model, [here](http://mlr-org.github.io/mlr-tutorial/devel/html/performance/index.html) and [here](http://mlr-org.github.io/mlr-tutorial/devel/html/resample/index.html)

### Preprocessing

Before modelling, the data types of some columns need to be corrected.
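A quick look at the column types with str() shows which columns need correcting:

```{r}
str(data)
```

Survived, Pclass, Sex, SibSp and Embarked are stored as integer or character columns although they encode categorical variables, so we convert them to factors.
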
```{r}
data[, c("Survived", "Pclass", "Sex", "SibSp", "Embarked")] = lapply(data[, c("Survived", "Pclass", "Sex", "SibSp", "Embarked")], as.factor)
```

Next, columns that are not useful for prediction are dropped.

```{r}
data = dropNamed(data, c("Cabin","PassengerId", "Ticket", "Name"))
```

Missing values are imputed next: the median for Age and Fare and the mode for Embarked. The empty strings in Embarked are first recoded as NA so that they are recognized as missing values.

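Before imputing, mlr's summarizeColumns() gives a quick per-column overview of the missing values (note that the missing Embarked entries are coded as empty strings and therefore only show up as NAs after the recoding below):

```{r}
summarizeColumns(data)
```
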
```{r}
data$Embarked[data$Embarked == ""] = NA
data$Embarked = droplevels(data$Embarked)
data = impute(data, cols = list(Age = imputeMedian(), Fare = imputeMedian(), Embarked = imputeMode()))
data = data$data
```

### Create a task

In the "task" the data set and the target column are specified. People who survived are labelled with "1".

```{r}
task = makeClassifTask(data = data, target = "Survived", positive = "1")
```

### Define a learner

A classification learner is selected; here we use a random forest with predict.type = "prob" so that class probabilities are returned, which we will need later for the ROC analysis. You can find an overview of all learners [here](http://mlr-org.github.io/mlr-tutorial/devel/html/integrated_learners/index.html).

```{r}
lrn = makeLearner("classif.randomForest", predict.type = "prob")
```

### Fit the model

To fit the model - and predict afterwards - the data set is split into a training and a test set.

```{r}
n = getTaskSize(task)
trainSet = seq(1, n, by = 2)
testSet = seq(2, n, by = 2)
```

```{r}
mod = train(learner = lrn, task = task, subset = trainSet)
```

### Predict

Predicting the target values for new observations is implemented the same way as most of the other predict methods in R. In general, all you need to do is call predict on the object returned by train and pass the data you want predictions for.

```{r}
pred = predict(mod, task, subset = testSet)
```

The quality of the predictions of a model in mlr can be assessed with respect to a number of different performance measures. In order to calculate the performance measures, call performance on the object returned by predict and specify the desired performance measures.

```{r}
calculateConfusionMatrix(pred)
performance(pred, measures = list(acc, fpr, tpr))
df = generateThreshVsPerfData(pred, list(fpr, tpr, acc))
plotThreshVsPerf(df)
plotROCCurves(df)
```

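Since the learner returns probabilities, you are not tied to the default threshold of 0.5. As a small sketch with an arbitrarily chosen threshold of 0.6, you can reset it with setThreshold() and recompute the measures:

```{r}
# shift the decision threshold for the positive class and re-evaluate
pred.new = setThreshold(pred, 0.6)
performance(pred.new, measures = list(acc, fpr, tpr))
```
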
### Extension of the original use case

As you might have seen, the titanic package also provides a second dataset.

```{r}
test = titanic_test
head(test)
```

This one does not contain any survival information, but we can now use our fitted model to predict the survival probability for this data set.

The same preprocessing steps as for the training data have to be applied:

```{r}
# convert the categorical columns to factors (the test set has no Survived column)
test[, c("Pclass", "Sex", "SibSp", "Embarked")] = lapply(test[, c("Pclass", "Sex", "SibSp", "Embarked")], as.factor)

# drop the columns that were not used for training
test = dropNamed(test, c("Cabin", "PassengerId", "Ticket", "Name"))

# impute missing Age and Fare values with the median
test = impute(test, cols = list(Age = imputeMedian(), Fare = imputeMedian()))
test = test$data

summarizeColumns(test)
```

You can use the task and learner that you have already created.

```{r}
task
lrn
```

The training step is different now: we no longer fit the model on a subset, but on all of the data.

```{r}
mod = train(learner = lrn, task = task)
```

For the prediction part, we will use the new test data set.

```{r}
pred = predict(mod, newdata = test)
pred
```

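If you need hard class labels rather than probabilities - for example to assemble a Kaggle-style submission - getPredictionResponse() extracts them from the prediction object. A minimal sketch (PassengerId is taken from the original titanic_test, since we dropped that column during preprocessing):

```{r}
submission = data.frame(PassengerId = titanic_test$PassengerId,
  Survived = getPredictionResponse(pred))
head(submission)
```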

144 changes: 144 additions & 0 deletions src/usecase_clustering.Rmd
---
output:
pdf_document: default
html_document: default
---

# Clustering

```{r, echo = FALSE}
set.seed(1234)
```

This is a use case for clustering with the [%mlr] package. We consider the [agriculture](&cluster::agriculture) dataset, which contains observations of $n = 12$ countries, including

* the GNP (Gross National Product) per head (`x`) and
* the percentage of the population working in agriculture (`y`).

So let's have a look at the data first.

```{r, fig.asp = 0.8}
data(agriculture, package = "cluster")

plot(y ~ x, data = agriculture)
```


We aim to group the observations into clusters that contain similar objects. We will

* define the learning task ([here](http://mlr-org.github.io/mlr-tutorial/devel/html/task/index.html)),
* select a learning method ([here](http://mlr-org.github.io/mlr-tutorial/devel/html/learner/index.html)),
* train the learner ([here](http://mlr-org.github.io/mlr-tutorial/devel/html/train/index.html)),
* evaluate the performance of the model ([here](http://mlr-org.github.io/mlr-tutorial/devel/html/performance/index.html)) and
* tune the model ([here](http://mlr-org.github.io/mlr-tutorial/devel/html/tune/index.html)).

### Defining a task

We now have to define a clustering task. Notice that a clustering task doesn't have a target variable.

```{r}
agri.task = makeClusterTask(data = agriculture)
agri.task
```

Printing the task shows us some basic information, such as the number of observations, the data types of the features, and whether there are any missing values left that should have been handled during preprocessing.

### Defining a learner

We generate the learner by calling [&makeLearner], specifying the learning method and, if needed, hyperparameters.

An overview of all learners can be found [here](integrated_learners.md). You can also call the [&listLearners] command for our specific task.


```{r eval = FALSE}
listLearners(obj = agri.task)
```

We will apply the $k$-means algorithm with $3$ centers for the moment.

```{r}
cluster.lrn = makeLearner("cluster.kmeans", centers = 3)
cluster.lrn
```

### Train the model

The next step is to train our learner by feeding it with our data.

```{r}
agri.mod = train(learner = cluster.lrn, task = agri.task)
```

We can extract the model and have a look at it.

```{r}
getLearnerModel(agri.mod)
```
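
The returned object is an ordinary kmeans fit from the stats package, so we can, for instance, look at the fitted cluster centers directly (a small sketch):

```{r}
getLearnerModel(agri.mod)$centers
```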

### Prediction

Now we can predict the cluster labels for our observations.

```{r}
agri.pred = predict(agri.mod, task = agri.task)
agri.pred
```

### Performance

Since the data given to the learner is unlabeled, there is no objective evaluation of the accuracy of our model. We have to consider other criteria in unsupervised learning.

An overview of all performance measures can be found [here](measures.md). You can also call the [&listMeasures] command for our specific task.

```{r}
listMeasures(agri.task)
```

Let's have a look at the silhouette coefficient and the Davies-Bouldin index.

```{r}
performance(agri.pred, measures = list(silhouette, db), task = agri.task)
```

### Tuning

It's hard to say if our clustering is good, since up to now we have nothing to compare it to. Could we have done better by choosing a different number of centers?

Tuning will address the question of choosing the best hyperparameters for our problem.

We first create a search space for the number of clusters $k$, e.g. $k \in \{2, 3, 4, 5\}$. Further, we define an optimization algorithm and a [resampling strategy](resample.md). Here we use grid search and 3-fold cross-validation.

Finally, by combining all the previous pieces, we can tune the parameter $k$ by calling [&tuneParams]. We will use discrete_ps with grid search and the silhouette coefficient as optimization criterion:

```{r}
discrete_ps = makeParamSet(makeDiscreteParam("centers", values = c(2, 3, 4, 5)))
ctrl = makeTuneControlGrid()
res = tuneParams(cluster.lrn, agri.task, measures = silhouette, resampling = cv3,
par.set = discrete_ps, control = ctrl)
```

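The tuning result stores the best hyperparameter setting together with the performance it achieved, so we can inspect it directly:

```{r}
res
```
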
Setting $k=2$ yields the best results for our clustering problem.
So let's generate a learner with the optimal hyperparameter $k=2$.

```{r}
tuned.lrn = setHyperPars(cluster.lrn, par.vals = res$x)
```

We have to train the tuned learner again and predict the results.

```{r}
tuned.mod = train(tuned.lrn, agri.task)
tuned.pred = predict(tuned.mod, task = agri.task)
tuned.pred
```

This is the final clustering for our problem.

```{r, fig.asp = 0.8}
plot(y ~ x, col = getPredictionResponse(tuned.pred), data = agriculture)
```