A sparse autoencoder (SAE) is one approach to automatically learning features from unlabeled data. Instead of comparing the network's output against labels, the network is trained to reproduce its own input, and a sparsity penalty is added to the reconstruction error. The penalty acts like a threshold on the activations - loosely resembling a spiking neural network - and keeps the average activation of the hidden units small.
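Below is a minimal sketch of this idea in PyTorch, assuming a single sigmoid hidden layer, a mean-squared reconstruction loss, and the KL-divergence form of the sparsity penalty; the layer sizes, target sparsity `rho`, penalty weight, and the random batch `x` are all illustrative assumptions, not values from the source.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """Single-hidden-layer autoencoder; sparsity is enforced on the
    average hidden activation (hypothetical sizes for illustration)."""
    def __init__(self, n_visible=784, n_hidden=64):
        super().__init__()
        self.encoder = nn.Linear(n_visible, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_visible)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))      # hidden activations in (0, 1)
        x_hat = torch.sigmoid(self.decoder(h))  # reconstruction of the input
        return x_hat, h

def kl_sparsity(rho_hat, rho=0.05):
    """KL divergence between the target average activation rho and the
    observed average activation rho_hat of each hidden unit."""
    rho_hat = torch.clamp(rho_hat, 1e-7, 1 - 1e-7)  # numerical safety
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

# One training step: reconstruction error + weighted sparsity penalty.
model = SparseAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)            # a batch of unlabeled inputs (placeholder data)
x_hat, h = model(x)
rho_hat = h.mean(dim=0)            # average activation per hidden unit over the batch
loss = F.mse_loss(x_hat, x) + 1e-3 * kl_sparsity(rho_hat)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The penalty term grows whenever a hidden unit's average activation drifts away from the small target, which is what keeps the learned features sparse even though the network is only ever asked to reconstruct its input.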
See Andrew Ng's Stanford lecture notes on sparse autoencoders.