Talk: Optimizing Distributed TensorFlow
TensorFlow supports distributed training, but making the most of your hardware still takes significant work. In this talk, you will learn how to:
- Set up distributed TensorFlow across multiple CPUs and GPUs.
- Analyze the TensorFlow timeline to identify bottlenecks.
- Tune the components of the training stack to achieve optimal training speed.
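As a minimal sketch of the first point, distributed TensorFlow workers are typically described by a cluster specification that each process reads at startup, commonly passed via the `TF_CONFIG` environment variable. The hostnames, ports, and two-worker layout below are illustrative placeholders, not details from the talk:

```python
import json
import os

# Hypothetical two-worker cluster; replace the addresses with real hosts.
tf_config = {
    "cluster": {"worker": ["worker0.example.com:2222", "worker1.example.com:2222"]},
    # Each process sets its own index (0 or 1) to identify itself in the cluster.
    "task": {"type": "worker", "index": 0},
}
os.environ["TF_CONFIG"] = json.dumps(tf_config)

# Each worker process would then construct a distribution strategy
# (e.g. tf.distribute.MultiWorkerMirroredStrategy()), which reads
# TF_CONFIG to discover its peers before training begins.
```

The same script runs on every machine; only the `task` index differs per worker.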
Startup.ML and General Assembly Present: Scaling Deep Learning
This conference, presented by Startup.ML and General Assembly, will focus on best practices for deploying deep learning models into production on a variety of hardware and cloud platforms.