Perceptron

Perceptrons are single-layer neural networks. A perceptron takes in inputs, computes a weighted linear combination, and passes the result through an activation function to produce an output. Perceptrons are linear classifiers, used for supervised learning and classifying input data.
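
A minimal sketch in Python with NumPy, assuming a step activation and the classic perceptron learning rule; the class name and toy AND-gate data here are illustrative choices, not a definitive implementation:

```python
import numpy as np

class Perceptron:
    """Single-layer perceptron: weighted sum + step activation."""

    def __init__(self, n_inputs, lr=0.1):
        self.w = np.zeros(n_inputs)  # weights
        self.b = 0.0                 # bias
        self.lr = lr                 # learning rate

    def predict(self, x):
        # Linear combination of inputs, then a step activation
        return 1 if np.dot(self.w, x) + self.b > 0 else 0

    def train(self, X, y, epochs=10):
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                error = yi - self.predict(xi)
                # Update weights only when the prediction is wrong
                self.w += self.lr * error * xi
                self.b += self.lr * error

# Toy example: the AND function is linearly separable, so a perceptron can learn it
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
p = Perceptron(n_inputs=2)
p.train(X, y)
print([p.predict(xi) for xi in X])  # expected: [0, 0, 0, 1]
```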

Feed Forward

Very similar to a perceptron, but with a hidden layer. Activation flows from input to output without back loops. This type of network is usually trained using backpropagation, a method for computing gradients. Feed forward networks are more flexible than binary perceptrons because the hidden layer adds an intermediate stage of evaluation.
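
A sketch of a one-hidden-layer feed forward network trained with backpropagation, assuming sigmoid activations and a squared-error gradient; the XOR data (which a single perceptron cannot separate), hidden-layer size, and learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass: input -> hidden -> output, no back loops
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error gradient layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```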

Radial Basis Function

Radial basis function networks are feed forward neural networks that use a radial basis activation instead of a logistic function. Radial basis function networks are typically faster to train and easier to interpret; however, classification at prediction time takes more time.
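
A sketch of an RBF network, assuming Gaussian radial basis activations, centers sampled from the training data, and a least-squares linear readout; the toy dataset, number of centers, and width parameter gamma are all illustrative choices:

```python
import numpy as np

def rbf_features(X, centers, gamma=1.0):
    # Gaussian radial basis activation: exp(-gamma * ||x - c||^2)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float)  # circular boundary

centers = X[rng.choice(len(X), size=10, replace=False)]  # sample centers from the data
Phi = rbf_features(X, centers, gamma=5.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # fit linear output weights

preds = (rbf_features(X, centers, gamma=5.0) @ w > 0.5).astype(float)
print("training accuracy:", (preds == y).mean())
```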

See Ramraj Chandradevan’s detailed explanation of RBF on Towards Data Science

Deep Feed Forward

Deep feed forward networks are multi-layer perceptrons: feed forward neural networks with multiple hidden layers. When trained with gradient descent, having more hidden layers is often beneficial because the network can represent more specific and complex functions, but it also becomes slower to train.
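
A forward-pass sketch of a deep feed forward network, assuming ReLU hidden activations and illustrative layer sizes; training would use backpropagation through each layer, as in the feed forward example above:

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [2, 16, 16, 16, 1]  # input, three hidden layers, output

# One (weights, bias) pair per layer transition
params = [(rng.normal(scale=0.5, size=(m, n)), np.zeros(n))
          for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, params):
    # Hidden layers use ReLU; the final layer is left linear
    for W, b in params[:-1]:
        x = np.maximum(0, x @ W + b)
    W, b = params[-1]
    return x @ W + b

x = rng.uniform(-1, 1, size=(4, 2))  # a batch of 4 two-dimensional inputs
print(forward(x, params).shape)      # -> (4, 1)
```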