As machine learning applications become larger and more widely used, there is an increasing need for efficient systems solutions. The performance of essentially all machine learning applications is limited by bottlenecks whose effects cut across traditional layers of the software stack. Addressing these bottlenecks effectively therefore requires a broad combination of work in theory, algorithms, systems, and hardware. To do this in a principled way, I propose a general approach called mindful relaxation. The approach starts by finding a way to eliminate a bottleneck by changing the algorithm's semantics. It proceeds by identifying structural conditions under which we can prove that the altered algorithm will still work. Finally, it applies this structural knowledge to improve the performance and accuracy of entire systems.
In this talk, I will describe the mindful relaxation approach and demonstrate how it can be applied to a specific bottleneck (parallel overheads), problem (inference), and algorithm (asynchronous Gibbs sampling). I will show the effectiveness of this approach on a range of problems, including CNNs, and finish with a discussion of my future work on methods for fast machine learning.
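To give a concrete sense of the algorithm named above, here is a minimal, hypothetical sketch of sequential Gibbs sampling on a toy Ising model on a ring (the model, parameter names, and function are illustrative assumptions, not the talk's actual benchmark). Each step resamples one spin from its exact conditional given its neighbors; the asynchronous "relaxed" variant discussed in the talk would run such updates in parallel without locks, reading possibly stale neighbor values, which is what changes the algorithm's semantics.

```python
import math
import random

def gibbs_ising_ring(n, J, beta, steps, seed=0):
    """Sequential Gibbs sampler for an n-spin Ising model on a ring.

    Illustrative sketch only: spins take values in {-1, +1}, each
    coupled to its two ring neighbors with strength J at inverse
    temperature beta.
    """
    rng = random.Random(seed)
    # Random initial configuration of spins.
    x = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        # Pick one spin and resample it from its exact conditional
        # distribution given its two neighbors.
        i = rng.randrange(n)
        field = J * (x[(i - 1) % n] + x[(i + 1) % n])
        p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
        x[i] = 1 if rng.random() < p_plus else -1
    return x
```

In the asynchronous version, multiple workers would execute the loop body concurrently on a shared `x` with no synchronization, so a worker may compute `field` from neighbor values that another worker is simultaneously overwriting; the structural conditions mentioned above are what let one still prove convergence guarantees for such a scheme.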