Very similar to a perceptron, but with a hidden layer. Activation flows from input to output without back loops. This type of network is usually trained using backpropagation, a method for computing gradients. FFs are more flexible than binary perceptrons because there is an intermediate stage of evaluation.
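A minimal sketch of the idea, assuming NumPy and illustrative names throughout: a feed-forward network with one hidden layer, trained with backpropagation on XOR, a task a single-layer perceptron cannot solve but the intermediate hidden stage can.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: not linearly separable, so a plain perceptron fails on it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for the input->hidden and hidden->output layers
# (layer sizes here are an arbitrary choice for illustration).
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

lr = 1.0
for step in range(5000):
    # Forward pass: activation flows input -> hidden -> output, no loops.
    h = sigmoid(X @ W1 + b1)     # hidden layer (the intermediate stage)
    out = sigmoid(h @ W2 + b2)   # output layer

    # Backward pass (backpropagation): chain-rule gradients of the
    # squared error, flowing from the output back toward the input.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]]
```

Removing the hidden layer reduces this to a single-layer model, which cannot fit XOR; the hidden layer is what buys the extra flexibility mentioned above.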