Usecase clustering #90

Open: wants to merge 8 commits into base: gh-pages

Changes from 7 commits
147 changes: 147 additions & 0 deletions src/use_case_classification.Rmd
@@ -0,0 +1,147 @@
# Use Cases

## Classification

A popular example of a classification task is the "Titanic" showcase. Different passenger information - like name, age or fare - is available, and the aim is to predict which kind of people would have survived the sinking of the Titanic.

We therefore load the titanic dataset and the other libraries needed for this use case.

```{r, results='hide', message=FALSE, warning=FALSE}
library(titanic)
library(mlr)
library(BBmisc)
```


```{r}
data = titanic_train
head(data)
```

Our aim - as mentioned before - is to predict which kind of people would have survived.
Collaborator: "survided" is a typo.


We will therefore work through the following steps:

* preprocessing, [here](http://mlr-org.github.io/mlr-tutorial/devel/html/preproc/index.html) and [here](http://mlr-org.github.io/mlr-tutorial/devel/html/impute/index.html)
* create a task, [here](http://mlr-org.github.io/mlr-tutorial/devel/html/task/index.html)
* provide a learner, [here](http://mlr-org.github.io/mlr-tutorial/devel/html/learner/index.html)
* train the model, [here](http://mlr-org.github.io/mlr-tutorial/devel/html/train/index.html)
* predict the survival chance, [here](http://mlr-org.github.io/mlr-tutorial/devel/html/predict/index.html)
* validate the model, [here](http://mlr-org.github.io/mlr-tutorial/devel/html/performance/index.html) and [here](http://mlr-org.github.io/mlr-tutorial/devel/html/resample/index.html)

### Preprocessing

First, the data types of the columns are corrected: categorical variables such as Survived, Pclass, Sex, SibSp and Embarked are stored as integers or character strings and need to be converted to factors.
Collaborator: I would do str(data) to show the different types, then mention how and why they need to be corrected.

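Following this suggestion, the column types can be inspected before the conversion; a minimal sketch using base R:

```{r, eval = FALSE}
# Inspect the storage types of all columns before converting them to factors
str(data)
```

The conversion itself: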

```{r}
data[, c("Survived", "Pclass", "Sex", "SibSp", "Embarked")] = lapply(data[, c("Survived", "Pclass", "Sex", "SibSp", "Embarked")], as.factor)
```

Next, columns that are not useful for the prediction are dropped.

```{r}
data = dropNamed(data, c("Cabin", "PassengerId", "Ticket", "Name"))
```

Missing values are imputed next, in this case for Age, Fare and Embarked.

```{r}
data$Embarked[data$Embarked == ""] = NA
data$Embarked = droplevels(data$Embarked)
data = impute(data, cols = list(Age = imputeMedian(), Fare = imputeMedian(), Embarked = imputeMode()))
data = data$data
```
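As a quick optional check (a sketch, not part of the original workflow), one can verify that no missing values remain after imputation:

```{r, eval = FALSE}
# Count remaining missing values per column; all counts should be zero
sapply(data, function(x) sum(is.na(x)))
```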

### Create a task

In the "task" the data set and the target column are specified. People who survived are labelled with "1".

```{r}
task = makeClassifTask(data = data, target = "Survived", positive = "1")
```

### Define a learner

A classification learner is selected. You can find an overview of all learners [here](http://mlr-org.github.io/mlr-tutorial/devel/html/integrated_learners/index.html).

```{r}
lrn = makeLearner("classif.randomForest", predict.type = "prob")
```
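The learners that are applicable to this particular task can also be listed directly, analogous to the clustering use case below; a small sketch:

```{r, eval = FALSE}
# List all classification learners that can handle this task
listLearners(obj = task)
```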

### Fit the model

To fit the model - and make predictions afterwards - the data set is split into a training set and a test set.

```{r}
n = getTaskSize(task)
trainSet = seq(1, n, by = 2)
testSet = seq(2, n, by = 2)
```

```{r}
mod = train(learner = lrn, task = task, subset = trainSet)
```

### Predict

Predicting the target values for new observations is implemented the same way as most of the other predict methods in R. In general, all you need to do is call predict on the object returned by train and pass the data you want predictions for.

```{r}
pred = predict(mod, task, subset = testSet)
```
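Since the learner was created with predict.type = "prob", the prediction object contains posterior probabilities in addition to the predicted classes. A small sketch of how to inspect them:

```{r, eval = FALSE}
# The prediction as a data frame: truth, probabilities and predicted class
head(as.data.frame(pred))
# Posterior probabilities for the positive class ("1") only
head(getPredictionProbabilities(pred))
```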

The quality of the predictions of a model in mlr can be assessed with respect to a number of different performance measures. To calculate them, call performance on the object returned by predict and specify the desired measures.

```{r}
calculateConfusionMatrix(pred)
performance(pred, measures = list(acc, fpr, tpr))
df = generateThreshVsPerfData(pred, list(fpr, tpr, acc))
plotThreshVsPerf(df)
plotROCCurves(df)
```
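The step list above also mentions validating the model via resampling. As a hedged sketch (not part of the original code), the same learner could be evaluated with 3-fold cross-validation instead of the fixed odd/even split:

```{r, eval = FALSE}
# 3-fold cross-validation of the random forest on the Titanic task
rdesc = makeResampleDesc("CV", iters = 3)
r = resample(learner = lrn, task = task, resampling = rdesc,
  measures = list(acc, fpr, tpr))
r$aggr  # aggregated performance over the folds
```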

### Extension of the original use case

As you might have seen, the titanic package also provides a second data set.

```{r}
test = titanic_test
head(test)
```

This one does not contain any survival information, but we can now use our fitted model to predict the survival probability for this data set.

The same preprocessing steps as for the training data have to be applied.

```{r}
test[, c("Pclass", "Sex", "SibSp", "Embarked")] = lapply(test[, c("Pclass", "Sex", "SibSp", "Embarked")], as.factor)

test = dropNamed(test, c("Cabin", "PassengerId", "Ticket", "Name"))

test = impute(test, cols = list(Age = imputeMedian(), Fare = imputeMedian()))
test = test$data

summarizeColumns(test)
```

You can use the task and learner that you have already created.

```{r}
task
lrn
```

The training step is different now: we don't fit the model on a subset but use all the data.

```{r}
mod = train(learner = lrn, task = task)
```

For the prediction part, we will use the new test data set.

```{r}
pred = predict(mod, newdata = test)
pred
```
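For illustration only, the predicted classes could be combined with the passenger ids into a small result table; the data frame and its column names below are chosen for this example and are not part of the tutorial:

```{r, eval = FALSE}
# Hypothetical result table: passenger id plus predicted survival class
result = data.frame(PassengerId = titanic_test$PassengerId,
  Survived = getPredictionResponse(pred))
head(result)
```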


148 changes: 148 additions & 0 deletions src/usecase_clustering.Rmd
@@ -0,0 +1,148 @@
---
output:
pdf_document: default
html_document: default
---

# Clustering

```{r echo = FALSE}
set.seed(1234)
```

This is a use case for clustering with the [%mlr] package. We consider the [agriculture](https://www.rdocumentation.org/packages/cluster/versions/1.10.0/topics/agriculture) dataset, which contains two variables for $n=12$ countries:

* the GNP (Gross National Product) per head (\texttt{x}),
* the percentage of the population working in agriculture (\texttt{y}).

Contributor (@schiffner, Mar 10, 2017): Please use [agriculture](&cluster::agriculture). (The build script for the tutorial will expand this to the correct link.)

So let's have a look at the data first.

```{r, fig.width = 5}
library("cluster")
data(agriculture)
plot(y ~ x, data = agriculture)
```

Contributor: Please specify the aspect ratio (fig.asp) instead of fig.width. (This works better for the pdf version of the tutorial.)

Contributor: Please use data(agriculture, package = "cluster").


We aim to group the observations into clusters that contain similar objects. We will

* define the learning task ([here](http://mlr-org.github.io/mlr-tutorial/devel/html/task/index.html)),
* select a learning method ([here](http://mlr-org.github.io/mlr-tutorial/devel/html/learner/index.html)),
* train the learner with data ([here](http://mlr-org.github.io/mlr-tutorial/devel/html/train/index.html)),
* evaluate the performance of the model ([here](http://mlr-org.github.io/mlr-tutorial/devel/html/performance/index.html)) and
* tune the model ([here](http://mlr-org.github.io/mlr-tutorial/devel/html/tune/index.html)).

Contributor: I think "train the learner" is sufficient.

### Defining a task

We now have to define a clustering task. Notice that a clustering task doesn't have a target variable.

```{r message = FALSE}
library(mlr)
agri.task = makeClusterTask(data = agriculture)
agri.task
```

Contributor: You don't need library(mlr) and then can also leave out the message = FALSE option.

Printing the task again shows us some basic information, such as the number of observations, the data types of the features, and whether there are still missing values that should have been preprocessed.
Contributor: Calling -> printing, informations -> information.


### Defining a learner

We generate the learner by calling ``makeLearner``, specifying the learning method and, if needed, hyperparameters.
Contributor: Please use a link [&makeLearner].


An overview of all learners can be found [here](http://mlr-org.github.io/mlr-tutorial/devel/html/integrated_learners/index.html). You can also call the \texttt{listLearners} command for our specific task.
Contributor: Please change the link to [here](integrated_learners.md) and also use a link to [&listLearners].



```{r, warning=FALSE, eval = FALSE}
listLearners(obj = agri.task)
```

Collaborator: You can leave warning=FALSE out, because travis has all packages and no warning will be produced.

We will apply the $k$-means algorithm with $3$ centers for the moment.

```{r}
cluster.lrn = makeLearner("cluster.kmeans", centers = 3)
cluster.lrn
```
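The hyperparameters that cluster.kmeans accepts (centers, set above, is one of them) can be listed with getParamSet; a small sketch:

```{r, eval = FALSE}
# Show the hyperparameter set of the k-means learner
getParamSet(cluster.lrn)
```
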
### Train the model

The next step is to train our learner by feeding it with our data.

```{r}
agri.mod = train(learner = cluster.lrn, task = agri.task)
```

We can extract the model and have a look at it.

```{r}
getLearnerModel(agri.mod)
```
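Assuming the extracted model is an ordinary kmeans object (which the cluster.kmeans learner builds on), its components can be accessed directly; a sketch:

```{r, eval = FALSE}
# Assumes getLearnerModel returns a standard kmeans object
km = getLearnerModel(agri.mod)
km$centers  # coordinates of the three cluster centers
km$cluster  # cluster membership of the training observations
```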

### Prediction

Now we can predict the cluster labels for our observations.

```{r}
agri.pred = predict(agri.mod, task = agri.task)
agri.pred
```

### Performance

Since the data given to the learner is unlabeled, there is no objective evaluation of the accuracy of our model. We have to consider other criteria in unsupervised learning.
Contributor: criterions -> criteria.


An overview of all performance measures can be found [here](http://mlr-org.github.io/mlr-tutorial/devel/html/measures/index.html). You can also call the \texttt{listMeasures} command for our specific task.
Contributor: Please use [here](measures.md) and a link to [&listMeasures].


```{r}
listMeasures(agri.task)
```

Let's have a look at the silhouette coefficient and the Davies-Bouldin index.

```{r message = FALSE}
library("clValid")
performance(agri.pred, measures = list(silhouette, db), task = agri.task)
```

Contributor: Why is this package needed?

Collaborator (author): I thought it's necessary for calculating the measures but in fact it isn't.

### Tuning

It's hard to say whether our clustering is good, since so far we have nothing to compare it to. Could we have done better by choosing a different number of centers?
Contributor: I would replace "choosing another parameter for the number of centers" by "choosing a different number of centers".


Tuning will address the question of choosing the best hyperparameters for our problem.

We first create a search space for the number of clusters $k$, e.g. $k \in \lbrace 2, 3, 4, 5 \rbrace$. Further, we define an optimization algorithm and a [resampling strategy](http://mlr-org.github.io/mlr-tutorial/devel/html/resample/index.html).
Contributor: As above you need to link to resample.md.


Finally, by combining all the previous pieces, we can tune the parameter $k$ by calling \texttt{tuneParams}. We will use discrete_ps with grid search, 3-fold cross-validation and the silhouette coefficient as optimization criterion:
Contributor: Please use [&tuneParams]. I would also mention 3-fold cross-validation.


```{r}
discrete_ps = makeParamSet(makeDiscreteParam("centers", values = c(2, 3, 4, 5)))
ctrl = makeTuneControlGrid()
res = tuneParams(cluster.lrn, agri.task, measures = silhouette, resampling = cv3,
  par.set = discrete_ps, control = ctrl)
```

Contributor: Could you please indent code by 2 spaces?
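To see which value of $k$ was selected and how the candidates compared, the tuning result can be inspected (a small sketch; res is the object returned above):

```{r, eval = FALSE}
res$x  # the selected number of centers
res$y  # the corresponding silhouette value
as.data.frame(res$opt.path)  # all evaluated candidates
```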

Setting $k=2$ yields the best results for our clustering problem.
So let's generate a learner with the optimal hyperparameter $k=2$.

```{r}
tuned.lrn = setHyperPars(cluster.lrn, par.vals = res$x)
```

We have to train the tuned learner again and predict the results.
```{r}
tuned.mod = train(tuned.lrn, agri.task)
tuned.pred = predict(tuned.mod, task = agri.task)
tuned.pred
```

This is the final clustering for our problem.

```{r, fig.width = 5}
plot(y ~ x, col = tuned.pred$data$response, data = agriculture)
```

Contributor: Please use fig.asp.

Contributor: Could you please use the getter function (I think getPredictionResponse should work here)?
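Following the reviewer's suggestion, the same plot can be produced with the getter function instead of accessing the data slot directly; a sketch:

```{r, eval = FALSE}
# getPredictionResponse extracts the predicted cluster labels
plot(y ~ x, col = getPredictionResponse(tuned.pred), data = agriculture)
```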