
The use of deep neural networks (DNNs) is currently transforming many areas of science and engineering. Although DNN-based techniques outperform traditional algorithms in most signal-processing tasks, they can exhibit weaknesses such as reduced robustness and a tendency to produce hallucinations. These issues are linked to the Lipschitz constant of the DNN, which typically grows exponentially with the number of layers. In this work, we present a framework for the design of stable networks with maximal expressivity. Our scheme combines learnable 1-Lipschitz (1-Lip) activation functions with a few energy-preserving convolution layers. We train these activations with a second-order total-variation penalty; by a corresponding representer theorem for 1-Lip deep splines, the optimal activations are adaptive linear splines. We illustrate our method on the task of image reconstruction and demonstrate state-of-the-art results.
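
As a rough sketch of the training problem described above (the notation below is ours and not taken from the text), the learnable activations $\sigma_n$ are fitted jointly with the energy-preserving convolution weights $\mathbf{W}_n$ of the network $f_{\boldsymbol{\theta}}$ by solving
\[
	\min_{(\mathbf{W}_n),\,(\sigma_n)} \; \sum_{m=1}^{M} E\bigl(f_{\boldsymbol{\theta}}(\mathbf{x}_m), \mathbf{y}_m\bigr) \;+\; \lambda \sum_{n} \mathrm{TV}^{(2)}(\sigma_n)
	\quad \text{s.t.} \quad \mathrm{Lip}(\sigma_n) \le 1, \;\; \|\mathbf{W}_n \mathbf{u}\| = \|\mathbf{u}\| \;\; \forall \mathbf{u},
\]
where $E$ is a data-fidelity term over the training pairs $(\mathbf{x}_m, \mathbf{y}_m)$ and $\mathrm{TV}^{(2)}(\sigma) = \|\mathrm{D}^2 \sigma\|_{\mathcal{M}}$ is the total variation of the second (distributional) derivative of the activation. The sparsifying effect of this penalty is what drives the optimal $\sigma_n$ toward adaptive linear splines.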