Denoising autoencoders (DAEs) reconstruct corrupted data. A DAE is a stochastic version of the vanilla autoencoder that randomly corrupts the input with noise, which the network must learn to undo. Therefore, instead of training the output simply to match the input, the system is trained to produce denoised, “cleaner” outputs.
See Skymind.ai’s wiki and image of Denoising AE
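The idea above can be sketched in a few lines of NumPy: corrupt the input with Gaussian noise, but compute the reconstruction error against the *clean* input. The toy data, layer sizes, noise level, and learning rate here are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of 8-dimensional inputs (hypothetical).
X = rng.random((200, 8))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units (hypothetical sizes).
W1 = rng.normal(0.0, 0.1, (8, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.1, (4, 8)); b2 = np.zeros(8)

lr, losses = 0.5, []
for epoch in range(300):
    # Corrupt the input with Gaussian noise; the target stays clean.
    X_noisy = X + rng.normal(0.0, 0.2, X.shape)
    H = sigmoid(X_noisy @ W1 + b1)        # encode the corrupted input
    X_hat = sigmoid(H @ W2 + b2)          # decode ("denoise")
    err = X_hat - X                       # compare with the CLEAN input
    losses.append((err ** 2).mean())
    # Plain backpropagation of the squared reconstruction error.
    d_out = err * X_hat * (1 - X_hat)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X_noisy.T @ d_hid / len(X); b1 -= lr * d_hid.mean(axis=0)
```

The only difference from a vanilla autoencoder is the corruption step: the loss is measured against the original input, so the network cannot succeed by copying its (noisy) input through.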
The sparse autoencoder (SAE) is one approach to automatically learning features from unlabeled data. Instead of simply comparing inputs and outputs, the network adds a sparsity penalty to the reconstruction objective. The penalty acts like an activation threshold - somewhat like a spiking neural network - and keeps the average activation of the hidden units small.
See Andrew Ng’s lecture on SAE at Stanford
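The sparsity penalty in Ng's notes is a KL divergence between a small target activation ρ and each hidden unit's actual average activation. A minimal sketch of just that term, with hypothetical batch activations and hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical hidden activations: a batch of 100 samples, 16 hidden units.
H = sigmoid(rng.normal(0.0, 1.0, (100, 16)))

rho = 0.05                  # desired (sparse) average activation
rho_hat = H.mean(axis=0)    # actual average activation of each hidden unit

# KL-divergence sparsity penalty: large when rho_hat strays from rho.
kl = (rho * np.log(rho / rho_hat)
      + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
sparsity_penalty = kl.sum()

# The full objective adds this to the reconstruction error:
#   loss = reconstruction_error + beta * sparsity_penalty
# where beta is a weighting hyperparameter (reconstruction term omitted here).
```

Because the random activations above average near 0.5 rather than 0.05, the penalty is strictly positive; training drives it down, pushing most hidden units toward being inactive most of the time.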
Markov Chains are distinct from artificial neural networks. A Markov process describes a sequence of events using transition probabilities. The chain is memoryless: the next state depends only on the present state, not on the states that came before it. Common uses for Markov Chains include predicting tomorrow's weather given today's weather and Google's PageRank algorithm for ordering search results.
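The weather example can be written as a small transition matrix. The probabilities below are hypothetical; the point is that tomorrow's distribution is computed from today's state alone, and repeated application converges to a long-run (stationary) distribution.

```python
import numpy as np

# Two-state weather chain with hypothetical probabilities.
# P[i, j] = probability of moving from state i to state j
# (state 0 = sunny, state 1 = rainy).
P = np.array([[0.9, 0.1],    # sunny -> sunny / rainy
              [0.5, 0.5]])   # rainy -> sunny / rainy

# Memoryless step: tomorrow depends only on today.
today = np.array([1.0, 0.0])     # it is sunny today
tomorrow = today @ P             # -> [0.9, 0.1]

# Long-run behaviour: iterate until the distribution stops changing.
dist = today
for _ in range(100):
    dist = dist @ P              # converges to the stationary distribution
```

For this matrix the stationary distribution is [5/6, 1/6]: in the long run it is sunny about 83% of days, regardless of today's weather. PageRank applies the same idea to a much larger chain whose states are web pages.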
A Hopfield Network (HN) can be viewed as a Markov chain; each node acts as its own input, hidden, and output neuron at once. Before training, each desired outcome is assigned a unique neuron state vector. During training, the weights are calibrated through activation thresholds. Once the weights are established and unchanging, the network is fully trained. Afterwards, a test neuron state vector follows the stored weights to minimize an “energy”/temperature function, so that the states settle into the desired ones. HN is an alternative way to classify images without using gradients and backpropagation, and is used for reconstruction of images.
See Sixte De Maupeou’s website
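The store-then-recall behaviour described above can be sketched with the classic Hebbian rule on ±1 states. The 6-unit pattern below is hypothetical; the recall loop flips each unit to lower the energy until the corrupted state falls back into the stored one.

```python
import numpy as np

# Store one pattern with the Hebbian rule; states are +/-1.
pattern = np.array([1, -1, 1, 1, -1, -1])
n = len(pattern)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)         # no self-connections

def energy(s):
    # The "energy" the network descends during recall.
    return -0.5 * s @ W @ s

# Start from a corrupted copy (two flipped bits).
corrupted = pattern.copy()
corrupted[0] *= -1
corrupted[3] *= -1

# Asynchronous updates: each unit aligns with its weighted input.
state = corrupted.copy()
for _ in range(5):               # a few sweeps suffice for one pattern
    for i in range(n):
        state[i] = 1 if W[i] @ state >= 0 else -1
```

No gradients or backpropagation are involved: the weights are set once by the outer product, and recall is pure energy descent. Here the energy drops from +1 (corrupted) to −15 (stored pattern).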
Deep Belief Networks (DBNs) are stacks of RBMs. The last output layer must use the SoftMax activation function to create a classifier. Each layer (except the first and last) is hidden and input at the same time (as in an RBM). This network is used for image/face recognition and video sequence recognition.
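The SoftMax output layer mentioned above turns the top layer's activations into class probabilities. A minimal sketch with hypothetical 3-class logits:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability; result sums to 1.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical top-layer activations for 3 classes.
logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)          # a probability distribution over classes
```

The predicted class is simply the index of the largest probability; the stacked RBMs below merely supply increasingly abstract features for this final classifier.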
Convolutional Networks are a special architecture that connects the input through local receptive fields, with pooling layers among the hidden layers. Because these hidden neurons share weights and biases, each layer forms feature maps. The network then compares these maps - instead of individual neurons, as a vanilla feed-forward architecture does - and measures their similarity. Finally, the output layer returns the likeness of the input to each classification. DCNs are especially effective in image recognition, where they compare images region by region rather than pixel by pixel.
See Michael Nielsen’s excellent book
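The two building blocks described above - a shared-weight kernel sliding over local receptive fields, followed by pooling - can be sketched directly. The image, kernel, and sizes below are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(2)

def conv2d(image, kernel):
    # "Valid" cross-correlation: one shared kernel slides over every
    # local receptive field, producing a single feature map.
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool(fmap, size=2):
    # Max pooling: keep the strongest response in each size x size region.
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Hypothetical 6x6 image and a 3x3 vertical-edge detector kernel.
image = rng.random((6, 6))
kernel = np.array([[1, 0, -1]] * 3, dtype=float)

fmap = conv2d(image, kernel)     # 4x4 feature map (shared weights)
pooled = max_pool(fmap)          # 2x2 after 2x2 max pooling
```

Because one kernel is reused at every position, the feature map detects the same pattern anywhere in the image; pooling then summarizes each region, which is what makes the comparison region-by-region rather than pixel-by-pixel.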
This network has two parts. The model learns a representation of images that is disentangled with respect to scene structure and viewing transformations such as depth rotation and lighting variations. The generative part then reconstructs the input and outputs a more complete model (in image processing, a transformation of the original input).