Making neural networks more trustworthy and sustainable
The use of deep neural networks (DNNs) is currently transforming many areas of science and engineering. Although DNN-based techniques outperform traditional algorithms in most signal processing tasks, they can exhibit weaknesses such as reduced robustness and a tendency to produce hallucinations. These issues are linked to the DNN's Lipschitz constant, whose standard upper bound is the product of the layer-wise operator norms and can therefore grow exponentially as layers are added. In this work, we present a framework for the design of stable networks with maximal expressivity.
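As a minimal sketch of the growth described above: for a feedforward network with 1-Lipschitz activations (e.g. ReLU), a standard upper bound on the network's Lipschitz constant is the product of the spectral norms of the weight matrices. The network shape and random weights below are purely illustrative assumptions, not the paper's construction; the point is only that when each layer's norm exceeds 1, the bound compounds multiplicatively with depth.

```python
import numpy as np

rng = np.random.default_rng(0)


def lipschitz_upper_bound(weights):
    """Product of layer spectral norms: an upper bound on the Lipschitz
    constant of a feedforward net with 1-Lipschitz activations."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)  # spectral norm = largest singular value
    return bound


# Hypothetical stack of 8 random 16x16 layers; unnormalized Gaussian
# weights typically have spectral norm well above 1, so the bound
# compounds -- i.e. it can worsen exponentially in the number of layers.
weights = [rng.standard_normal((16, 16)) for _ in range(8)]
for depth in (2, 4, 8):
    print(f"depth {depth}: bound = {lipschitz_upper_bound(weights[:depth]):.3e}")
```

Constraining each layer's norm (e.g. spectral normalization) caps this product, which is the kind of stability-expressivity trade-off the framework addresses.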