Feed Forward

Very similar to a perceptron, but with a hidden layer. Activation flows from input to output without looping back. This type of network is usually trained with backpropagation, a method for computing gradients. FFs are more flexible than binary perceptrons because there is an intermediate stage of evaluation.
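The forward pass can be sketched in a few lines of NumPy. This is a minimal illustration, not a reference implementation: the layer sizes, random weights, and sigmoid activation are all assumptions chosen for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Hypothetical layer sizes: 3 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(size=(3, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def forward(x):
    # Activation flows strictly input -> hidden -> output; no loops back.
    h = sigmoid(x @ W1 + b1)       # hidden layer (the "intermediate stage")
    return sigmoid(h @ W2 + b2)    # output layer

y = forward(np.array([0.5, -0.2, 0.1]))
```

Training would adjust `W1`, `b1`, `W2`, `b2` via backpropagation: compute the gradient of a loss with respect to each weight by applying the chain rule backward through these same layers.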
