The Reuters dataset

The objective here is to classify short news stories (Reuters newswires) into one of 46 mutually exclusive topics.

Preparing the data

Here, we use the multi-assignment operator (%<-%) from the zeallot package to unpack the list into a set of distinct variables.
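A sketch of what this step looks like, assuming the keras R package, whose dataset_reuters() restricts the data to the 10,000 most frequently occurring words:

```r
library(keras)
library(zeallot)

# Unpack the nested train/test lists into four distinct variables.
c(c(train_data, train_labels), c(test_data, test_labels)) %<-%
  dataset_reuters(num_words = 10000)

length(train_data)  # number of training examples
length(test_data)   # number of test examples
```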

## [1] 8982
## [1] 2246

As with the IMDB reviews, each example is a list of integers (word indices):
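For instance, looking at the first training example (using the variables unpacked above):

```r
train_data[[1]]  # word indices of the first newswire
```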

##  [1]    1    2    2    8   43   10  447    5   25  207  270    5 3095  111
## [15]   16  369  186   90   67    7   89    5   19  102    6   19  124   15
## [29]   90   67   84   22  482   26    7   48    4   49    8  864   39  209
## [43]  154    6  151    6   83   11   15   22  155   11   15    7   48    9
## [57] 4579 1005  504    6  258    6  272   11   15   22  134   44   11   15
## [71]   16    8  197 1245   90   67   52   29  209   30   32  132    6  109
## [85]   15   17   12
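
The label associated with an example is an integer between 0 and 45, encoding a topic:

```r
train_labels[[1]]  # topic index of the first newswire
```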
## [1] 3

You can vectorize the data with the exact same code as in the IMDB example: multi-hot encode each list of word indices as a binary vector.
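A sketch of that helper (the name vectorize_sequences is illustrative):

```r
# Multi-hot encode integer sequences as rows of a binary matrix:
# one row per example, one column per word index.
vectorize_sequences <- function(sequences, dimension = 10000) {
  results <- matrix(0, nrow = length(sequences), ncol = dimension)
  for (i in 1:length(sequences))
    results[i, sequences[[i]]] <- 1  # set positions of present words to 1
  results
}

x_train <- vectorize_sequences(train_data)
x_test <- vectorize_sequences(test_data)
```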

Vectorize the labels with one-hot encoding:
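A sketch, assuming keras's built-in to_categorical():

```r
# One-hot encode the integer labels as 46-dimensional binary vectors.
one_hot_train_labels <- to_categorical(train_labels)
one_hot_test_labels <- to_categorical(test_labels)
```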

Building the model

  • The main difference from the IMDB example: the dimensionality of the output space is much larger (46 classes instead of 2).

Information bottleneck

  • Each layer can only access information present in the output of the previous layer.
  • Each layer can potentially become an information bottleneck.
  • A 16-dimensional intermediate layer may be too limited to learn to separate 46 different classes: such small layers may act as information bottlenecks, permanently dropping relevant information.

For this reason we will use larger layers. Let’s go with 64 units.
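A sketch of the resulting architecture, ending in a 46-way softmax so the network outputs a probability distribution over the topics:

```r
model <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu", input_shape = c(10000)) %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = 46, activation = "softmax")  # one probability per topic
```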

Compiling the model

The best loss function to use in this case is categorical_crossentropy: it measures the distance between two probability distributions, here between the distribution output by the network and the true distribution of the labels.
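A sketch of the compilation step (the rmsprop optimizer is an assumption carried over from the IMDB example):

```r
model %>% compile(
  optimizer = "rmsprop",  # assumed optimizer
  loss = "categorical_crossentropy",
  metrics = c("accuracy")
)
```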

Validating your approach

Let’s set apart 1000 samples in the training data to use as a validation set.
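A sketch of the split (variable names are illustrative):

```r
val_indices <- 1:1000

x_val <- x_train[val_indices, ]
partial_x_train <- x_train[-val_indices, ]

y_val <- one_hot_train_labels[val_indices, ]
partial_y_train <- one_hot_train_labels[-val_indices, ]
```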

Now, let’s train the network for 20 epochs.
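A sketch of the training call (the batch size of 512 is an assumption):

```r
history <- model %>% fit(
  partial_x_train, partial_y_train,
  epochs = 20,
  batch_size = 512,  # assumed batch size
  validation_data = list(x_val, y_val)
)
plot(history)  # compare training and validation curves
```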

The network begins to overfit after nine epochs. Let’s train a new network from scratch for nine epochs and then evaluate it on the test set.
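A sketch of the retraining and evaluation, reusing the architecture and compilation settings from above:

```r
model <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu", input_shape = c(10000)) %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = 46, activation = "softmax")

model %>% compile(
  optimizer = "rmsprop",  # assumed, as above
  loss = "categorical_crossentropy",
  metrics = c("accuracy")
)

model %>% fit(partial_x_train, partial_y_train,
              epochs = 9, batch_size = 512)

model %>% evaluate(x_test, one_hot_test_labels)
```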

## $loss
## [1] 1.021877
## 
## $acc
## [1] 0.777382

This approach reaches an accuracy of ~78%. With a balanced binary classification problem, the accuracy reached by a purely random classifier would be 50%. But in this case the random baseline is closer to 18%, so the results seem pretty good by comparison:
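One way to estimate that baseline is to compare the test labels against a shuffled copy of themselves (a sketch):

```r
test_labels_copy <- sample(test_labels)  # randomly permute the labels
length(which(test_labels == test_labels_copy)) / length(test_labels)
```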

## [1] 0.1843277

Predictions on new data
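
We can generate topic predictions for all of the test data (a sketch, using the retrained model):

```r
predictions <- model %>% predict(x_test)
dim(predictions)  # one 46-dimensional row per test sample
```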

predictions is a matrix with one row per test sample; each row is a vector of length 46 (one probability per topic):

## [1] 2246   46

The coefficients in this vector sum to 1:
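Checking the first test sample, for instance:

```r
sum(predictions[1, ])  # softmax outputs sum to 1
```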

## [1] 1

The largest entry is the predicted class—the class with the highest probability:
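Again for the first test sample:

```r
which.max(predictions[1, ])  # index of the most probable topic
```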

## [1] 4