How to make neural nets smaller while still preserving their performance.
This is a subtle problem, since we suspect that part of their special sauce is precisely that they are overparameterized; which is to say, one reason they work is that they are bigger than they “need” to be.
The problem of finding the network that is *smaller than the bigger one it seems to need to be* is therefore tricky.
My instinct is to reach for some sparse regularisation, but that does not carry over to the deep network setting, at least not naïvely.

## Pruning

Train a big network, then delete neurons and see if it still works. See Jacob Gildenblat’s *Pruning deep neural networks to make them fast and small*, or *Why reducing the costs of training neural networks remains a challenge* at TechTalks. This is not conceptually complicated (I think?) but involves lots of retraining and therefore lots of compute.
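A minimal sketch of the delete-and-see idea, as unstructured magnitude pruning in NumPy. The threshold rule and the toy weight matrix are my own illustration, not taken from the linked posts; in practice one would prune, retrain, and repeat:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights.

    Returns the pruned weights and the boolean mask of survivors.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W_pruned, mask = magnitude_prune(W, sparsity=0.5)
```

After pruning one would fine-tune the surviving weights (holding the mask fixed) to recover the lost accuracy, which is where the retraining cost comes from.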

## Lottery tickets

Kim Martineau’s summary of the state of the art in “lottery ticket” (Frankle and Carbin 2019) pruning strategies is fun; see also You et al. (2019) for an elaboration. The idea here is that we can try to “prune early” and never bother fitting the big network, as classic pruning does.
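The lottery-ticket recipe is iterative magnitude pruning with rewinding: train, prune the smallest surviving weights, then restart from the *original* initialisation with the new mask. A toy sketch, where `train` is a stand-in dynamical system rather than real SGD:

```python
import numpy as np

rng = np.random.default_rng(42)

def train(W, mask, steps=50):
    # Stand-in for real SGD training: pull masked weights toward a
    # fixed target, so that weight magnitudes settle to stable values.
    target = np.linspace(-1.0, 1.0, W.size).reshape(W.shape)
    for _ in range(steps):
        W = (W - 0.1 * (W - target)) * mask
    return W

W0 = rng.normal(size=(8, 8))        # the random init: the "ticket pool"
mask = np.ones_like(W0, dtype=bool)

for _ in range(3):                  # iterative magnitude pruning rounds
    W = train(W0 * mask, mask)      # always (re)train from the ORIGINAL init
    thresh = np.quantile(np.abs(W[mask]), 0.2)
    mask &= np.abs(W) > thresh      # prune 20% of surviving weights
    # rewind: the next round restarts from W0 * mask, not the trained W

sparsity = 1 - mask.mean()
```

The “winning ticket” claim is that the sparse subnetwork `(W0 * mask, mask)` trains to comparable accuracy as the dense network, which is what motivates trying to find the mask early and cheaply.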

## Regularising away neurons

It seems like it should be easy to apply something like the LASSO to deep neural nets to trim away irrelevant features. Aren’t they just stacked layers of regressions, after all? And it works so well in linear regression. But in deep nets it is not generally obvious how to shrink away whole neurons.
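The standard trick for shrinking away whole neurons rather than individual weights is a group lasso: penalise the L2 norm of each neuron’s incoming weight vector, whose proximal operator zeroes out entire columns at once. A sketch of that proximal step (the grouping-by-column convention and toy sizes are my own):

```python
import numpy as np

def group_soft_threshold(W, lam):
    """Proximal step for a group lasso that groups weights by neuron.

    Each column of W holds one neuron's incoming weights; if the
    column's L2 norm falls below `lam`, the whole neuron is zeroed.
    """
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return W * scale

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 10))
W[:, :3] *= 0.01                    # three nearly-dead neurons
W_shrunk = group_soft_threshold(W, lam=1.0)
dead = np.where(np.linalg.norm(W_shrunk, axis=0) == 0)[0]
```

Interleaving this step with gradient updates (proximal gradient descent) drives weak neurons exactly to zero, after which they can be deleted from the architecture.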

I am curious if Lemhadri et al. (2021) does the job:

> Much work has been done recently to make neural networks more interpretable, and one approach is to arrange for the network to use only a subset of the available features. In linear models, Lasso (or ℓ1-regularized) regression assigns zero weights to the most irrelevant or redundant features, and is widely used in data science. However the Lasso only applies to linear models. Here we introduce LassoNet, a neural network framework with global feature selection. Our approach achieves feature sparsity by adding a skip (residual) layer and allowing a feature to participate in any hidden layer only if its skip-layer representative is active. Unlike other approaches to feature selection for neural nets, our method uses a modified objective function with constraints, and so integrates feature selection with the parameter learning directly. As a result, it delivers an entire regularization path of solutions with a range of feature sparsity. We apply LassoNet to a number of real-data problems and find that it significantly outperforms state-of-the-art methods for feature selection and regression. LassoNet uses projected proximal gradient descent, and generalizes directly to deep networks. It can be implemented by adding just a few lines of code to a standard neural network.
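To make the hierarchy idea concrete: a simplified illustration of the kind of constraint the abstract describes, in which a feature’s hidden-layer weights are bounded by its (soft-thresholded) skip-layer weight. This is my own loose sketch, not the paper’s actual joint proximal operator, and `M`, `lam`, and all the toy values are assumptions:

```python
import numpy as np

def enforce_hierarchy(theta, W1, lam, M=10.0):
    """Loose sketch of a LassoNet-style hierarchy (not the exact prox).

    theta: skip-layer weights, one per input feature.
    W1:    first hidden-layer weights, shape (n_features, n_hidden).
    Soft-threshold the skip weights, then clip each feature's
    hidden-layer weights so a feature can only reach the hidden
    layer if its skip-layer weight is still active.
    """
    theta = np.sign(theta) * np.maximum(np.abs(theta) - lam, 0.0)
    bound = M * np.abs(theta)[:, None]
    W1 = np.clip(W1, -bound, bound)
    return theta, W1

theta = np.array([0.1, -0.2, 2.0, -3.0, 1.5])
rng = np.random.default_rng(2)
W1 = rng.normal(size=(5, 8))
theta_s, W1_s = enforce_hierarchy(theta, W1, lam=0.5)
selected = np.abs(theta_s) > 0      # which features survive
```

The appealing part is that feature selection falls out of the optimisation itself: when a skip weight hits zero, the constraint forces every downstream weight of that feature to zero too.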

## Edge ML

A.k.a. TinyML, mobile ML, etc. A major consumer of neural-net compression, since small devices cannot fit large neural nets. See Edge ML.

## References

*arXiv:1611.05162 [cs, Stat]*, November. http://arxiv.org/abs/1611.05162.

*arXiv:2003.03033 [cs, Stat]*, March. http://arxiv.org/abs/2003.03033.

*arXiv:1612.01183 [cs, Math]*, December. http://arxiv.org/abs/1612.01183.

*arXiv:1511.05641 [cs]*, November. http://arxiv.org/abs/1511.05641.

*arXiv:1506.04449 [cs]*, June. http://arxiv.org/abs/1506.04449.

*arXiv:1710.09282 [cs]*, October. http://arxiv.org/abs/1710.09282.

*PMLR*. http://proceedings.mlr.press/v70/cutajar17a.html.

*arXiv:1702.08489 [cs, Stat]*, February. http://arxiv.org/abs/1702.08489.

*arXiv:1803.03635 [cs]*, March. http://arxiv.org/abs/1803.03635.

*arXiv:1701.06106 [cs, Stat]*. http://arxiv.org/abs/1701.06106.

*arXiv:1701.02291 [cs, Stat]*, January. http://arxiv.org/abs/1701.02291.

*arXiv:1606.05316 [cs]*, June. http://arxiv.org/abs/1606.05316.

*arXiv:1609.09106 [cs]*, September. http://arxiv.org/abs/1609.09106.

*arXiv:1509.01240 [cs, Math, Stat]*, September. http://arxiv.org/abs/1509.01240.

*arXiv:2002.08797 [cs, Stat]*, June. http://arxiv.org/abs/2002.08797.

*arXiv:1802.03494 [cs]*, January. http://arxiv.org/abs/1802.03494.

*arXiv:1704.04861 [cs]*, April. http://arxiv.org/abs/1704.04861.

*arXiv:1602.07360 [cs]*, February. http://arxiv.org/abs/1602.07360.

*Advances in Neural Information Processing Systems*, 598–605. http://yann.lecun.com/exdb/publis/pdf/lecun-90b.pdf.

*arXiv:1702.07028 [cs]*. http://arxiv.org/abs/1702.07028.

*Journal of Machine Learning Research* 22 (127): 1–29. http://jmlr.org/papers/v22/20-848.html.

*arXiv:2103.03014 [cs]*, March. http://arxiv.org/abs/2103.03014.

*Workshop on Learning to Generate Natural Language*. http://arxiv.org/abs/1708.00077.

*arXiv:1712.01312 [cs, Stat]*, December. http://arxiv.org/abs/1712.01312.

*Proceedings of ICML*. http://arxiv.org/abs/1701.05369.

*arXiv:1711.02782 [cs, Stat]*, November. http://arxiv.org/abs/1711.02782.

*arXiv:1606.07326 [cs, Stat]*, June. http://arxiv.org/abs/1606.07326.

*Pattern Recognition* 115 (July): 107899. https://doi.org/10.1016/j.patcog.2021.107899.

*arXiv:2003.02389 [cs, Stat]*, March. http://arxiv.org/abs/2003.02389.

*arXiv:1607.00485 [cs, Stat]*, July. http://arxiv.org/abs/1607.00485.

*arXiv:1605.06560 [cs]*, May. http://arxiv.org/abs/1605.06560.

*arXiv:1611.06791 [cs]*, November. http://arxiv.org/abs/1611.06791.

*arXiv:1507.02284 [cs, Math, Stat]*, July. http://arxiv.org/abs/1507.02284.

*arXiv Preprint arXiv:1702.04008*. https://arxiv.org/abs/1702.04008.

*arXiv:1603.05691 [cs, Stat]*, March. http://arxiv.org/abs/1603.05691.

*IEEE Transactions on Pattern Analysis and Machine Intelligence* 41 (10): 2495–2510. https://doi.org/10.1109/TPAMI.2018.2857824.

*Advances in Neural Information Processing Systems 29*, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 253–61. Curran Associates, Inc. http://papers.nips.cc/paper/6390-cnnpack-packing-convolutional-neural-networks-in-the-frequency-domain.pdf.

*TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers*. O’Reilly Media, Incorporated.
