Denoising Auto Encoder

Denoising autoencoders learn to reconstruct corrupted data. A DAE is a stochastic version of the vanilla AE: the input is randomly corrupted with noise, and the AE must learn to undo that corruption. Instead of training the output to merely resemble the input, the system is trained to make the outputs denoised, “cleaner” versions of the original data.

See Skymind.ai’s wiki and its image of a denoising AE
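
Below is a minimal denoising autoencoder sketch in PyTorch; the layer sizes, noise level, and single training step are illustrative assumptions rather than a prescribed recipe.

    import torch
    import torch.nn as nn

    # Minimal denoising autoencoder: encode, decode, and train the
    # reconstruction of a *noisy* input against the *clean* original.
    class DenoisingAE(nn.Module):
        def __init__(self, n_in=784, n_hidden=128):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
            self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = DenoisingAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    x_clean = torch.rand(32, 784)                        # stand-in for real data
    x_noisy = x_clean + 0.2 * torch.randn_like(x_clean)  # corrupt the input

    # The target is the clean input, so the network learns to denoise.
    loss = nn.functional.mse_loss(model(x_noisy), x_clean)
    opt.zero_grad()
    loss.backward()
    opt.step()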

Sparse Auto Encoder

An SAE is one approach to automatically learning features from unlabeled data. Instead of only comparing inputs and outputs, the network trains on the reconstruction error plus a sparsity driver. The driver acts like a threshold on activity, loosely resembling a spiking neural network, and keeps the average activation value of the hidden units small.

See Andrew Ng’s lecture on SAE at Stanford
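
A short PyTorch sketch of the sparsity driver as an extra penalty on the hidden activations; the L1 form and the weight beta are assumed choices (a KL penalty on the average activation is another common formulation).

    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(784, 256), nn.Sigmoid())
    decoder = nn.Linear(256, 784)
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)

    x = torch.rand(32, 784)       # stand-in batch
    h = encoder(x)                # hidden activations
    recon = decoder(h)

    beta = 1e-3                   # strength of the sparsity driver (assumed)
    sparsity = h.abs().mean()     # L1 penalty keeps average activation small
    loss = nn.functional.mse_loss(recon, x) + beta * sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()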

Markov Chain

Markov Chains are distinct from artificial neural networks. A Markov process describes a sequence of events through its statistical nature. The process is memoryless: the present state depends only on the previous state. Common uses for Markov Chains include predicting future weather from today’s weather and Google’s PageRank algorithm for ordering search results.
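
A toy weather chain makes the memoryless property concrete; the transition probabilities below are invented for illustration.

    import numpy as np

    states = ["sunny", "rainy"]
    P = np.array([[0.8, 0.2],    # transition probabilities from "sunny"
                  [0.4, 0.6]])   # transition probabilities from "rainy"

    rng = np.random.default_rng(0)
    state = 0                                # start sunny
    for _ in range(5):
        state = rng.choice(2, p=P[state])    # next state depends only on current
        print(states[state])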

Hopfield Network

A Hopfield Network is an example of a Markov chain: each node acts as its own input, hidden, and output neuron. Before training, each desired outcome is given a unique neuron state vector. During training, the weights are calibrated through activation thresholds. Once the weights are established and unchanging, the network is fully trained. Afterwards, a test neuron state vector follows the learned weights to minimize an “energy” (temperature-like quantity) so that the states settle into the desired ones. An HN is an alternative way to classify images without using gradients and backpropagation, and it is used for the reconstruction of images.
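
A minimal sketch of Hebbian weight learning and energy-lowering recall for a single stored pattern; the pattern and the corrupted bits are arbitrary choices.

    import numpy as np

    pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])  # stored +/-1 state vector
    n = pattern.size

    # One-shot Hebbian "training": outer product, no self-connections.
    W = np.outer(pattern, pattern).astype(float)
    np.fill_diagonal(W, 0)

    probe = pattern.copy()
    probe[[0, 3]] *= -1                   # corrupt two bits

    for _ in range(5):                    # asynchronous updates lower the energy
        for i in range(n):
            probe[i] = 1 if W[i] @ probe >= 0 else -1

    print(np.array_equal(probe, pattern))  # True: stored pattern recalled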

Boltzmann Machine

A Boltzmann Machine has a higher capacity than a Hopfield Network. BMs are networks whose binary neurons make stochastic (random) decisions about whether to be on or off. The capacity corresponds to the number of patterns per neuron that can be recalled.
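
The stochastic on/off rule of a single BM unit can be sketched as a sigmoid probability of its weighted input; the weights and bias below are made up.

    import numpy as np

    def sample_unit(weights, states, bias, rng):
        """Turn a binary unit on with sigmoid probability of its input."""
        p_on = 1.0 / (1.0 + np.exp(-(weights @ states + bias)))
        return 1 if rng.random() < p_on else 0

    rng = np.random.default_rng(0)
    neighbors = np.array([1, 0, 1])     # states of connected units
    w = np.array([0.5, -0.3, 0.8])      # made-up connection weights
    print(sample_unit(w, neighbors, bias=-0.1, rng=rng))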

 

Restricted Boltzmann Machine

To be a valid RBM, the neurons within each layer of the network must not be connected to one another; otherwise it is a regular BM. The network must first be trained in an unsupervised fashion. Using reconstruction (outputs are fed back in as inputs), an RBM uses binary states with a probabilistic model to capture the likelihood of events and the relationships between them. RBMs are used for detecting patterns in data and finding underlying factors.
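
One visible-to-hidden-to-visible reconstruction pass, the building block of contrastive divergence training, sketched in NumPy; bias terms are omitted and all sizes are assumed.

    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden = 6, 3
    W = rng.normal(scale=0.1, size=(n_visible, n_hidden))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    v = rng.integers(0, 2, n_visible)              # binary visible states
    p_h = sigmoid(v @ W)                           # hidden probabilities
    h = (rng.random(n_hidden) < p_h).astype(int)   # sampled binary hidden states
    p_v = sigmoid(h @ W.T)                         # reconstruction probabilities
    v_recon = (rng.random(n_visible) < p_v).astype(int)
    print(v, v_recon)                              # original vs. reconstruction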

 

Deep Belief Network

Deep Belief Networks are stacks of RBMs. The last output layer uses a SoftMax activation function to create a classifier. Each layer (except the first and last) is both a hidden and an input layer at the same time (as in an RBM). This network is used for image/face recognition and video sequence recognition.
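
A sketch of the stacked layout with a SoftMax classifier on top; the layer sizes are assumptions, and the greedy layer-wise RBM pretraining that would precede fine-tuning is omitted for brevity.

    import torch.nn as nn

    dbn = nn.Sequential(
        nn.Linear(784, 500), nn.Sigmoid(),       # RBM 1: visible -> hidden
        nn.Linear(500, 200), nn.Sigmoid(),       # RBM 2: hidden doubles as input
        nn.Linear(200, 10), nn.Softmax(dim=1),   # SoftMax output = classifier
    )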

Deep Convolutional Network

A Convolutional Network is a special architecture that processes its input through local receptive fields, with pooling layers among the hidden layers. Because these hidden neurons share weights and biases, they form feature maps. The network then compares these maps, instead of individual neurons as a vanilla feed-forward architecture does, and finds indices of similarity. Finally, the output layer returns the likeness of the input to each of the classifications. DCNs are especially effective in image recognition, where they compare images in bulk rather than pixel by pixel.
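
A small PyTorch sketch of the pieces named above: shared-weight receptive fields producing feature maps, a pooling layer, and an output layer of class scores; all sizes are illustrative.

    import torch
    import torch.nn as nn

    cnn = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=5),  # 8 feature maps from 5x5 receptive fields
        nn.ReLU(),
        nn.MaxPool2d(2),                 # pooling layer inside the hidden layers
        nn.Flatten(),
        nn.Linear(8 * 12 * 12, 10),      # likeness of the input to 10 classes
    )
    print(cnn(torch.rand(1, 1, 28, 28)).shape)   # torch.Size([1, 10])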

Deconvolutional Network

DNs are convolutional neural networks that work in a reversed process. The goal is to restore or reconstruct data that has been degraded by a convolving method. A deconvolution network is a shape generator that produces object segmentations from the given data. Examples include up-sampling each pixel for image clarification.
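
A one-step example of the reversed (transposed) convolution used for up-sampling; the channel counts and feature-map size are made up.

    import torch
    import torch.nn as nn

    # A stride-2 transposed convolution doubles the spatial resolution.
    deconv = nn.ConvTranspose2d(16, 8, kernel_size=2, stride=2)
    x = torch.rand(1, 16, 14, 14)    # a 14x14 feature map with 16 channels
    print(deconv(x).shape)           # torch.Size([1, 8, 28, 28])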

Deep Convolutional Inverse Graphics Network

This network has two parts: an encoder that learns a representation of images, disentangled with respect to scene structure and viewing transformations such as depth rotation and lighting variations, and a generative decoder that reconstructs the input and outputs a more complete and sufficient model (in image processing, a transformed version of the original input).
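
A rough encoder/decoder sketch of the two parts in PyTorch; the six-unit code (one unit per hypothetical scene factor) and all layer shapes are assumptions.

    import torch
    import torch.nn as nn

    # Part 1: convolutional encoder maps an image to a small code that is
    # meant to separate factors such as pose and lighting (6 units assumed).
    encoder = nn.Sequential(
        nn.Conv2d(1, 8, 4, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
        nn.Flatten(),
        nn.Linear(8 * 14 * 14, 6),
    )
    # Part 2: deconvolutional decoder renders the image back from the code.
    decoder = nn.Sequential(
        nn.Linear(6, 8 * 14 * 14), nn.ReLU(),
        nn.Unflatten(1, (8, 14, 14)),
        nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1), nn.Sigmoid(),
    )
    x = torch.rand(1, 1, 28, 28)
    print(decoder(encoder(x)).shape)   # torch.Size([1, 1, 28, 28])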