Auto Encoder

These networks have the same number of input and output neurons. Because the hidden layer is smaller than the input layer, the encoder forces the input data into a compressed representation; the decoder then reconstructs the data using only the outputs of that compressed hidden layer. Autoencoders specialize in unsupervised learning, that is, unlabelled data without input-output pairs, and are mainly used to reduce the dimensionality of data.
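
A minimal sketch of this encoder-decoder structure, assuming PyTorch and a flattened 784-dimensional input (e.g. MNIST images); the layer sizes and the MSE reconstruction loss are illustrative choices, not the only ones:

```python
import torch
from torch import nn

class AutoEncoder(nn.Module):
    """Compresses a 784-dim input to a 32-dim code and reconstructs it."""
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        # Encoder: a hidden layer smaller than the input forces compression.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim), nn.ReLU(),
        )
        # Decoder: reconstructs the input from the compressed code alone.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training compares the reconstruction to the original input;
# no labels are needed (unsupervised learning).
model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)            # a stand-in batch of unlabelled data
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
optimizer.step()
```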

See this website for more information.

Variational Auto Encoder

A variational autoencoder (VAE) stores a probability distribution over the latent variables in its hidden layer, rather than the fixed codes of a vanilla AE. Because the latent space is modelled with distributions, a VAE can sample from it and blend distinct inputs into hybrid outputs. VAEs are useful for producing synthetic human faces, text, and interpolative results.
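
A minimal sketch of this idea, again assuming PyTorch and an illustrative 784-dimensional input: the encoder produces a mean and log-variance rather than a single code, and a latent vector is sampled from that distribution with the reparameterization trick.

```python
import torch
from torch import nn
from torch.nn import functional as F

class VAE(nn.Module):
    """Encodes each input to a distribution (mu, log_var) instead of a fixed code."""
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.enc = nn.Linear(input_dim, 256)
        self.mu = nn.Linear(256, latent_dim)       # mean of the latent distribution
        self.log_var = nn.Linear(256, latent_dim)  # log-variance of the latent distribution
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterization trick: sample z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        return self.dec(z), mu, log_var

def vae_loss(recon, x, mu, log_var):
    # Reconstruction error plus a KL term pulling the latent
    # distribution toward a standard normal.
    recon_loss = F.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon_loss + kl

x = torch.rand(8, 784)             # stand-in batch of inputs in [0, 1]
recon, mu, log_var = VAE()(x)
loss = vae_loss(recon, x, mu, log_var)
loss.backward()
```

Sampling different latent vectors, or interpolating between the latent codes of two inputs, is what yields the hybrid and interpolative outputs mentioned above.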

See Irhum Shafkat’s detailed explanation of VAE on Towards Data Science

Denoising Auto Encoder

Denoising autoencoders (DAEs) can reconstruct corrupted data. A DAE is a stochastic version of the vanilla AE that randomly corrupts the input, introducing noise, and trains the network to recover the original, uncorrupted input. Instead of simply reproducing the input at the output, the system therefore learns to produce denoised, “cleaner” outputs.
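
A sketch of the training step, assuming PyTorch; the tiny model, the Gaussian corruption, and the 0.2 noise level are illustrative assumptions:

```python
import torch
from torch import nn

# A small autoencoder; the layer sizes are illustrative.
model = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(),
    nn.Linear(64, 784), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x_clean = torch.rand(64, 784)                                       # original, uncorrupted batch
x_noisy = (x_clean + 0.2 * torch.randn_like(x_clean)).clamp(0, 1)   # randomly corrupt the input

# The network sees the noisy input but is scored against the clean target,
# so it must learn to produce a denoised, "cleaner" reconstruction.
loss = nn.functional.mse_loss(model(x_noisy), x_clean)
loss.backward()
optimizer.step()
```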

See Skymind.ai’s wiki and image of Denoising AE

Sparse Auto Encoder

A sparse autoencoder (SAE) is one approach to automatically learning features from unlabelled data. In addition to the reconstruction error between inputs and outputs, the training objective includes a sparsity penalty on the hidden layer. This penalty, loosely reminiscent of the thresholded firing of a spiking neural network, keeps the average activation of each hidden unit small, so only a few neurons respond strongly to any given input.
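
A sketch of such a sparsity penalty, assuming PyTorch, sigmoid hidden activations, and illustrative values for the target average activation rho and its weight beta; the KL-divergence form of the penalty used here is one common choice:

```python
import torch
from torch import nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.Sigmoid())  # hidden activations in (0, 1)
decoder = nn.Linear(128, 784)

def sparse_ae_loss(x, rho=0.05, beta=1e-3):
    h = encoder(x)
    recon = torch.sigmoid(decoder(h))
    recon_loss = nn.functional.mse_loss(recon, x)

    # Average activation of each hidden unit over the batch.
    rho_hat = h.mean(dim=0)
    # KL divergence between the target sparsity rho and the observed rho_hat;
    # the penalty grows when units are, on average, more active than rho.
    kl = (rho * torch.log(rho / rho_hat)
          + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

    return recon_loss + beta * kl

x = torch.rand(64, 784)            # stand-in batch of unlabelled data
loss = sparse_ae_loss(x)
loss.backward()
```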

See Andrew Ng’s lecture on SAE at Stanford