Denoising autoencoders (DAEs) learn to reconstruct corrupted data. A DAE is a stochastic version of the vanilla autoencoder: the input is randomly corrupted with noise, and the network must learn to recover the original. Instead of simply reproducing its input, the system is therefore trained to output a denoised, “cleaner” version of it.
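As an illustration, here is a minimal denoising-autoencoder sketch in PyTorch (an assumed framework; the layer sizes, noise level, and optimizer are illustrative, not from the text). Gaussian noise corrupts the input, and the loss compares the reconstruction against the clean input:

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_in=784, n_hidden=128):  # assumed sizes
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x, noise_std=0.3):
        x_noisy = x + noise_std * torch.randn_like(x)  # corrupt the input
        return self.decoder(self.encoder(x_noisy))

model = DenoisingAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)                  # stand-in batch of flattened images
recon = model(x)
loss = nn.functional.mse_loss(recon, x)  # target is the CLEAN input
loss.backward()
opt.step()
```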
Sparse Auto Encoder
An SAE is one approach to automatically learning features from unlabeled data. Instead of comparing only inputs and outputs, the training objective adds a sparsity penalty to the reconstruction error. The penalty acts like an activation threshold (loosely resembling a spiking neural network) and keeps the average activation of the hidden units small; see the sketch after the reference below.
See Andrew Ng’s lecture on SAE at Stanford
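A minimal sketch of that idea, following the KL-divergence sparsity penalty from Ng's notes (the target sparsity rho, the penalty weight beta, and all layer sizes are assumed values):

```python
import torch
import torch.nn as nn

rho, beta = 0.05, 3.0  # target average activation and penalty weight (assumed)
encoder = nn.Sequential(nn.Linear(784, 128), nn.Sigmoid())
decoder = nn.Linear(128, 784)

x = torch.rand(64, 784)           # stand-in batch
h = encoder(x)
rho_hat = h.mean(dim=0)           # average activation of each hidden unit
# KL divergence between the target sparsity rho and the observed rho_hat
kl = (rho * torch.log(rho / rho_hat)
      + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()
loss = nn.functional.mse_loss(decoder(h), x) + beta * kl
loss.backward()
```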
Markov Chain
Hopfield Network
Boltzmann Machine
Restricted Boltzmann Machine
Deep Belief Network
Deep Belief Networks are stacks of RBMs. The final output layer uses a softmax activation function to turn the stack into a classifier. Each layer (except the first and the last) serves as both the hidden layer of one RBM and the input layer of the next. DBNs are used for image and face recognition and for video-sequence recognition.
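A compact sketch of the training scheme (all widths, the learning rate, and epoch counts are assumed): each RBM is pretrained greedily with one step of contrastive divergence (CD-1), its hidden activations become the input of the next RBM, and a softmax head on top turns the stack into a classifier:

```python
import torch
import torch.nn as nn

def cd1_step(v0, W, b_v, b_h, lr=0.01):
    # One contrastive-divergence (CD-1) update for a Bernoulli RBM.
    p_h0 = torch.sigmoid(v0 @ W + b_h)
    h0 = torch.bernoulli(p_h0)
    p_v1 = torch.sigmoid(h0 @ W.t() + b_v)
    p_h1 = torch.sigmoid(p_v1 @ W + b_h)
    W += lr * (v0.t() @ p_h0 - p_v1.t() @ p_h1) / v0.shape[0]
    b_v += lr * (v0 - p_v1).mean(0)
    b_h += lr * (p_h0 - p_h1).mean(0)
    return p_h0  # hidden probabilities feed the next RBM as its visible data

sizes = [784, 256, 64]                  # visible -> hidden widths (assumed)
v = torch.rand(128, 784).bernoulli()    # stand-in binary data
params = []
for n_in, n_out in zip(sizes, sizes[1:]):
    W = torch.randn(n_in, n_out) * 0.01
    b_v, b_h = torch.zeros(n_in), torch.zeros(n_out)
    for _ in range(10):                 # a few CD-1 epochs per layer
        p_h = cd1_step(v, W, b_v, b_h)
    params.append((W, b_h))             # pretrained weights for fine-tuning
    v = p_h                             # output of this RBM feeds the next

classifier = nn.Linear(sizes[-1], 10)   # softmax head (applied via
logits = classifier(v)                  # cross-entropy during fine-tuning)
```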
Deep Convolutional Network
Deconvolutional Network
Deep Convolutional Inverse Graphics Network
This network has two parts: a convolutional encoder and a deconvolutional decoder. The model learns a representation of images that is disentangled with respect to scene structure and viewing transformations such as depth rotation and lighting variations. This generative model reconstructs the input and can also render transformed versions of it (in image processing terms, re-generating the original input under a changed pose or lighting).
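A rough, hypothetical DC-IGN-style sketch (all layer sizes and the latent-slice convention are assumptions for illustration): a convolutional encoder maps an image to a code whose slices are intended to capture pose, lighting, and intrinsic shape separately, and a deconvolutional decoder renders the code back into an image:

```python
import torch
import torch.nn as nn

class DCIGN(nn.Module):
    def __init__(self, z_dim=32):  # assumed latent size
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(32 * 16 * 16, z_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        # Assumed convention: z[:, :1] pose azimuth, z[:, 1:2] elevation,
        # z[:, 2:3] lighting, the remaining units intrinsic shape/identity.
        return self.decoder(z), z

model = DCIGN()
img = torch.rand(8, 1, 64, 64)          # stand-in batch of 64x64 images
recon, z = model(img)
loss = nn.functional.mse_loss(recon, img)
```

Editing one latent slice of z while holding the rest fixed (and decoding again) is how such a model would re-render the scene under a new pose or lighting.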