In line with the previous blog posts about Principal Component Analysis (PCA) and t-distributed Stochastic Neighbor Embedding (t-SNE), I want to discuss another dimensionality reduction technique that originated in the Neural Network (NN) community: Autoencoders.
The idea behind an autoencoder is conceptually quite simple and yields very powerful results if applied correctly. Unfortunately, it suffers from the typical problems of NNs: the outcomes are not readily interpretable[^fn-images] and the models are somewhat harder to train. But before we jump into the autoencoder, let's do some preparatory work for motivation and understanding. The code and a notebook are available on my personal github.
To illustrate the idea, let's look at a simple model where we have two dimensions that follow the relationship

$$x_2 = m\,x_1 + b + \epsilon$$

with a noise term $\epsilon$, a bias $b$ and a slope $m$, as shown in the figure to the left. To faithfully represent the full data we would have to store $2N$ floats, as we have two dimensions and $N$ data points. However, it is quite obvious that we would also store a lot of noise that is not actually important for the model. If we can afford to "lose" some information, it would be possible to store the data with just $N + 2$ floats -- $N$ values for $x_1$, the bias $b$ and the slope $m$. For large $N$ we would get a compression by a factor of almost two by discarding the noise. As a side benefit, we considerably simplified the data, as we reduced the number of dimensions from two to one! This is the general idea behind lossy compression algorithms[^fn-jpeg]: reduce the data by removing "noise". The idea of removing noise to reduce data generalizes directly to higher dimensions, though it becomes technically much more challenging.
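To make the bookkeeping concrete, here is a minimal numpy sketch (the slope, bias and noise level are made-up numbers, not taken from the figure) that fits the line and stores only the $N + 2$ floats:

```python
import numpy as np

# Made-up data following x2 = m*x1 + b + noise (slope, bias and noise level are illustrative)
rng = np.random.RandomState(42)
N = 10_000
x1 = rng.uniform(0, 10, N)
x2 = 0.5 * x1 + 2.0 + rng.normal(scale=0.3, size=N)

# Lossless storage needs 2*N floats (both coordinates of every point)
full_size = 2 * N

# Lossy storage: fit the line, then keep only x1 plus the slope and the bias
m_hat, b_hat = np.polyfit(x1, x2, deg=1)
compressed_size = N + 2

# Reconstruction drops the noise but keeps the underlying relationship
x2_reconstructed = m_hat * x1 + b_hat
print(f"compression factor: {full_size / compressed_size:.3f}")   # approaches 2 for large N
```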
The second ingredient we need before diving into the Denoising Autoencoder (DAE) is to understand how simple NNs learn to separate features that are not linearly separable. I will steal this beautiful example from Chris Olah's blog, which is definitely worth reading as well. Let's have a look at the blue and red curves in the original image below. A neural network without any hidden layers and with a sigmoid activation function is just a fancy way of writing a Logistic Regression, which is a linear model. The trained model learns that the best possible way to split the two curves is by classifying samples within the blue and red shaded regions. However, in the center we misclassify blue points, whereas on the edges we misclassify red points. The linear decision boundary minimizes the misclassification rate; any other linear separator would do a worse job.
The introduction of a hidden layer changes the game dramatically! The NN learns a non-linear transformation that makes the two classes linearly separable in the hidden layer, and hence forms a non-linear decision boundary in the original input space. To visualize this we can look at the distortions of the underlying grid when going from the input to the hidden layer. As a result the NN is able to perfectly separate those classes[^fn-overfitting]!
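To convince yourself of this numerically (this is not Olah's original data, just scikit-learn's two-moons toy set as a stand-in for the two curves), compare a plain logistic regression with an MLP that has a single hidden layer:

```python
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Two classes that are not linearly separable (a stand-in for the red/blue curves)
X, y = make_moons(n_samples=2000, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No hidden layer + sigmoid output is just a Logistic Regression, i.e. a linear model
linear = LogisticRegression().fit(X_train, y_train)

# A single hidden layer first learns a non-linear transformation of the inputs
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)

print("linear model accuracy:", linear.score(X_test, y_test))   # noticeably below 1
print("hidden layer accuracy:", mlp.score(X_test, y_test))      # close to 1
```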
Now we can tackle the DAE: we combine the concepts of lossy compression and non-linear representation. The idea is to learn a non-linear representation of the data that minimizes noise while maximizing the ability to faithfully restore the data from the compressed format. The network structure is quite simple and consists of two major components, the encoder and the decoder network.
Encoder: Here, we start with $n$ input nodes, corresponding to the dimensionality of the input data, and reduce this to $m$ nodes, corresponding to the desired dimensionality of the compressed data, i.e. we learn a function

$$y = f(x) = s(Wx)$$

where $s$ is the activation function and $W$ is an $m \times n$ matrix of weights. Even though this looks like a simple matrix multiplication, this need not be the case, as we can string several hidden layers together to form the transformation $f$, thus making the representation more non-linear.
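Written out as a minimal numpy sketch (the dimensions and the sigmoid are just illustrative choices), the single-layer encoder is nothing more than a matrix multiplication followed by the activation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n, m = 1560, 2                        # input and code dimensionality (illustrative values)
W = 0.01 * np.random.randn(m, n)      # m x n weight matrix of the encoder

def encode(x):
    """Single-layer encoder y = s(W x); x has shape (n,), y has shape (m,)."""
    return sigmoid(W @ x)
```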
Decoder: There are two commonly used decoder types -- the tied and the untied decoder. In the case of a tied decoder, we use the inverse transformation, which for a single layer is just the transpose of the weight matrix, $W' = W^T$. This results in a stiffer system, but has the advantage of having to learn fewer parameters and is hence less prone to overfitting. The untied decoder learns a completely separate representation

$$\hat{x} = g(y) = s(W'y)$$

to efficiently map the compressed data back to the original input space. Here $W'$ is an $n \times m$ matrix. This type needs much more training data, as the complete network typically has twice as many parameters as the tied autoencoder network.
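Continuing the numpy sketch from the encoder (again with made-up dimensions), the two variants only differ in which matrix maps the code back to the input space:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n, m = 1560, 2                        # same illustrative dimensions as the encoder sketch
W = 0.01 * np.random.randn(m, n)      # encoder weights

# Tied decoder: reuse the transpose of the encoder weights, nothing new to learn
def decode_tied(y):
    return sigmoid(W.T @ y)

# Untied decoder: learn a completely separate n x m matrix W'
W_prime = 0.01 * np.random.randn(n, m)
def decode_untied(y):
    return sigmoid(W_prime @ y)
```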
Learning objective: The reason why this system is called an autoencoder is the objective function the system minimizes:

$$L(x) = \lVert x - g(f(x)) \rVert^2$$

It tries to optimize the decoded representation of the data with respect to the data itself. It should be pointed out that the loss function doesn't need to be a squared-error loss, but can be chosen appropriately for the input data, e.g. a log-loss for binary input data. Note that, depending on the activation function, the input data needs to be scaled: the sigmoid, for example, clamps the output to the interval $[0, 1]$, and hence the data has to be scaled to fit within this range for the autoencoder to work correctly.
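As a minimal, untied Keras sketch (this is not the TensorFlow implementation used for the plots below; layer sizes, data and training settings are placeholders), the whole setup, including the scaling, fits in a few lines:

```python
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler

# Toy data, scaled to [0, 1] because the sigmoid output cannot leave that interval
X = MinMaxScaler().fit_transform(np.random.randn(1000, 50)).astype("float32")

inp = tf.keras.Input(shape=(50,))
code = tf.keras.layers.Dense(2, activation="sigmoid", name="code")(inp)      # encoder f
out = tf.keras.layers.Dense(50, activation="sigmoid", name="output")(code)   # decoder g
autoencoder = tf.keras.Model(inp, out)
encoder = tf.keras.Model(inp, code)   # keep a handle on the encoder for later use

# The "auto" part: the input is also the target, minimizing ||x - g(f(x))||^2
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=20, batch_size=32, verbose=0)
```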
The first thing to note is that it is an unsupervised technique, meaning it doesn't need labelled data. This in turn means that it can be used for clustering or as a feature preprocessor by learning new representations. Learning an encoder lets us reduce the dimensionality of the data while preserving a lot of information by learning important, potentially non-linear, features. The dimensionality-reduced data can then be fed into an unsupervised / semi-supervised clustering algorithm that can now perform its magic without suffering from the Curse of Dimensionality. In some cases further clustering might not even be necessary, as the non-linear features already capture such structures in the compressed data.
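For instance, continuing the Keras sketch above, the learned codes can be handed straight to a standard clustering algorithm (the number of clusters is, of course, an assumption):

```python
from sklearn.cluster import KMeans

# Continue the Keras sketch above: encode without ever touching labels, then cluster
codes = encoder.predict(X, verbose=0)                      # shape (n_samples, 2)
clusters = KMeans(n_clusters=5, n_init=10).fit_predict(codes)
```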
A second use case is the pretraining of a Deep Neural Network (similar to a Deep Belief Network). In this case the autoencoder learns a good weight initialization that can then be used to further train the network using supervised techniques.
### Example

To demonstrate the workings of the autoencoder, I want to create a two-dimensional visualization of the Internet Advertisement dataset. The data contains 1560 features but only 3300 records, hence we easily run into the curse of dimensionality, as the density in the high-dimensional space is quite small. Using a deep autoencoder to reduce the dimensionality to two dimensions, we can get an intuition about how the data is distributed and what kind of features can be learned. For the plots, I implemented my own Autoencoder using TensorFlow[^fn-JMetzen]. The code can be found in the DataScience ToolKit package on my github repository. The figure showing the training losses displays the typical learning curve of a Neural Network in that it has plateaus followed by steep, rapid descents in loss. The black dashed lines indicate the epochs where the learning rate was adjusted.
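Purely to illustrate the "deep" and the learning-rate-schedule parts (the layer widths and the epochs at which the rate is halved are made up; the actual model is in the DSTK package), such an architecture could be sketched like this:

```python
import tensorflow as tf

# Illustrative layer widths only; the architecture actually used lives in the DSTK repo
inp = tf.keras.Input(shape=(1560,))
h = tf.keras.layers.Dense(128, activation="sigmoid")(inp)
h = tf.keras.layers.Dense(32, activation="sigmoid")(h)
code = tf.keras.layers.Dense(2, activation="sigmoid", name="code")(h)
h = tf.keras.layers.Dense(32, activation="sigmoid")(code)
h = tf.keras.layers.Dense(128, activation="sigmoid")(h)
out = tf.keras.layers.Dense(1560, activation="sigmoid")(h)

deep_ae = tf.keras.Model(inp, out)
deep_encoder = tf.keras.Model(inp, code)     # maps the 1560 features down to 2 dimensions

# Halving the learning rate at fixed epochs mimics the adjustments marked in the figure
schedule = tf.keras.callbacks.LearningRateScheduler(
    lambda epoch, lr: lr * 0.5 if epoch in (50, 100) else lr)
deep_ae.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
# deep_ae.fit(X_scaled, X_scaled, epochs=150, batch_size=64, callbacks=[schedule])
```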
To ensure that the autoencoder is not saturated, i.e. that the hidden layers do not output $0$ or $1$ by default, we look at the mean value of the outputs of the first layer (strictly speaking, we should also look at the other layers).
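A quick way to check this against the sketched model above (here `X_scaled` is a stand-in name for the scaled training data) is to look at the mean activation of the first hidden layer:

```python
import numpy as np
import tensorflow as tf

# Probe the first hidden layer of the deep autoencoder sketched above;
# X_scaled stands for the scaled training data and is assumed to exist
first_layer = tf.keras.Model(deep_ae.input, deep_ae.layers[1].output)
activations = first_layer.predict(X_scaled, verbose=0)

# Sigmoid units stuck near 0 or 1 would indicate saturation
print("mean activation of first layer:", np.mean(activations))
```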
To see what feature distribution the network learned, we can look at a scatter plot of the encoded representation. Note that the AE distributed the features nicely across the $[0, 1]$ range, making maximum use of the available space; another indication that the individual layers are not saturated.
The AE learns to distinguish some kinds of advertisements from non-advertisements quite well, so we could train a good classifier on those. However, we also see a big cluster where ads and non-ads overlap significantly. We can conclude that in this region the spamming ads are very successful in hiding among the non-ads, and we might have to do more feature engineering to find good splitting features.
### Final notes for training an AE

To summarize, I want to provide a list of things that you can try for successfully training an AutoEncoder:
I hope this was helpful. As always, an IPython Notebook is available on my personal github account. Enjoy playing with AutoEncoders!