Supervised Dimensionality Reduction
===================================

``ivis`` is able to make use of any provided class labels to perform
supervised dimensionality reduction. Supervised ``ivis`` can thus be used
in Metric Learning applications, as well as classical supervised
classifier/regressor problems. Supervised embeddings can combine the
distance-based characteristics of the unsupervised ``ivis`` algorithm
with clear class boundaries between the class categories when trained
to classify inputs simultaneously with embedding them. The
resulting embeddings encode relevant class-specific information into
lower dimensional space, making them useful for enhancing the
performance of a classifier.

``ivis`` supports both classification and regression problems and makes
use of the losses included with keras, so long as the labels are provided
in the
correct format.


Classification
--------------

To train ``ivis`` in supervised mode using the default softmax
classification loss, simply provide the labels to the fit method's
``Y`` parameter. These labels should be a list of 0-indexed
integers with each integer corresponding to a class.

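A minimal sketch of this workflow, assuming the Keras MNIST loader
(the flattening and rescaling steps here are illustrative, not taken
verbatim from this page):

::

    # A minimal sketch, assuming the Keras MNIST loader is available.
    import numpy as np
    from tensorflow.keras.datasets import mnist
    from ivis import Ivis

    (X_train, Y_train), (X_test, Y_test) = mnist.load_data()
    X_train = X_train.reshape(len(X_train), 28 * 28) / 255.0
    X_test = X_test.reshape(len(X_test), 28 * 28) / 255.0

    # Y_train is a 1D array of 0-indexed integer class labels (0-9),
    # which is all the default softmax classification loss needs.
    # n_epochs_without_progress is lowered as suggested below.
    model = Ivis(n_epochs_without_progress=5)
    model.fit(X_train, Y_train)
    embeddings = model.transform(X_train)
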
Experimental data has shown that ``ivis`` converges to a solution faster
in supervised mode. Therefore, our suggestion is to lower the value of
the ``n_epochs_without_progress`` parameter from the default to
around 5. Here are the resulting embeddings:

.. image:: _static/mnist-embedding-comparison_titled.png

Obtaining Classification Probabilities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Since training ``ivis`` in supervised mode causes the algorithm to optimize
the supervised objective in conjunction with the triplet loss function, it is
possible to obtain the class probabilities the supervised network assigns to
new observations. Optimizing both objectives together may also improve the
generalizability of the classifier compared to a pure softmax classifier.

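As a sketch, continuing from the classification example above and
assuming the trained model exposes a ``score_samples`` method for
retrieving the supervised network's outputs (an assumption here, not
confirmed by this page):

::

    # ``score_samples`` is assumed to return the softmax outputs of the
    # supervised network: one row of class probabilities per observation.
    probabilities = model.score_samples(X_test)

    # The most probable class for each observation is the row-wise argmax.
    predicted_classes = probabilities.argmax(axis=-1)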

Linear-SVM classifier
~~~~~~~~~~~~~~~~~~~~~

It's also possible to utilize different supervised metrics to train the
supervised network by adjusting the ``supervision_metric`` parameter, as
in the sketch below.
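For instance, a linear SVM-style objective can plausibly be obtained
with the Keras ``categorical_hinge`` loss; both the loss choice and the
one-hot encoding step below are assumptions rather than text taken from
this page (``categorical_hinge`` expects one-hot labels, not integers):

::

    # Sketch: 'categorical_hinge' is the Keras hinge loss for
    # multi-class problems and requires one-hot encoded labels.
    from tensorflow.keras.utils import to_categorical
    from ivis import Ivis

    svm_model = Ivis(n_epochs_without_progress=5,
                     supervision_metric='categorical_hinge')
    svm_model.fit(X_train, to_categorical(Y_train))
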
Sweeping over the ``supervision_weight`` can then identify the value
that yields the highest classification performance.

.. image:: _static/SVM-accuracy-classification-weight-zoomed.png

Multi-label classification
~~~~~~~~~~~~~~~~~~~~~~~~~~

In cases where a single observation is accompanied by multiple response
variables, ``ivis`` implements support for multi-label classification.
Provided that ``y`` is a two-dimensional (N x L) array, where L is the
number of unique labels, a multi-label model can be fitted as:

::

    from ivis import Ivis

    # ``y`` is an (N x L) binary matrix: one column per unique label.
    ivis = Ivis(k=30, model='maaten',
                supervision_metric='binary_crossentropy')
    ivis.fit(x, y)


Note that the only requirement is that the supervision metric is set to
``binary_crossentropy``.
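
For illustration, a hypothetical ``y`` for three observations and L = 2
labels is a binary indicator matrix (this small example is illustrative,
not taken from the original page):

::

    import numpy as np

    # Each row is one observation, each column one of the L = 2 labels;
    # a 1 means that label applies to that observation.
    y = np.array([[0, 1],
                  [1, 1],
                  [1, 0]])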

Regression
----------

It is also possible to perform supervised training on continuous labels.
To do this, a regression metric should be provided to the
``supervision_metric`` parameter when constructing the ``Ivis`` object.
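A minimal sketch, assuming ``mae`` (mean absolute error, one of the
Keras regression losses) as the metric and a synthetic dataset in place
of the Boston housing data shown in the figure below; ``k=15`` is
likewise an illustrative choice:

::

    # Sketch with an assumed 'mae' supervision metric; any Keras
    # regression loss should be accepted in the same way.
    from sklearn.datasets import make_regression
    from sklearn.model_selection import train_test_split
    from ivis import Ivis

    X, y = make_regression(n_samples=500, n_features=13, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                        random_state=0)

    model = Ivis(k=15, supervision_metric='mae',
                 n_epochs_without_progress=5)
    model.fit(X_train, y_train)
    test_embeddings = model.transform(X_test)
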
Plotting the predicted values against the actual values shows a high degree
of correlation between the predicted and actual values on the test set,
although it is lower than on the training set - the R-squared value is 0.63.

.. image:: _static/boston_test_regression_mae_pred-true.png

Supervision Weight
------------------

It is possible to control the relative importance ``ivis`` places on the
labels when training in supervised mode with the
``supervision_weight`` parameter. This variable should be a float
between 0.0 and 1.0, with higher values causing supervision to
affect the training process more, and smaller values causing it to
impact the training less. By default, the parameter is set to 0.5.
Increasing it to 0.8 will result in more cleanly separated classes.

::

    weight = 0.8
    model = Ivis(n_epochs_without_progress=5,
                 supervision_weight=weight)
    model.fit(X_train, Y_train)

As an illustration of the impact the ``supervision_weight`` has on
the resulting embeddings, see the following plot of supervised ``ivis``
applied to MNIST with different weight values:

.. image:: _static/classification-weight-impact-mnist.jpg
