
Illia Polosukhin

TENSORFLOW

San Francisco

Talk: Optimizing Distributed TensorFlow

TensorFlow allows you to run distributed training, but making the most of your hardware still takes a lot of work. In this talk, you will learn:

  • How to set up distributed TensorFlow across multiple CPUs and GPUs (see the sketch after this list).

  • How to analyze the TensorFlow timeline to find bottlenecks (see the tracing sketch after this list).

  • How to tune the various components of the training stack for optimal training speed.
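
To give a concrete flavor of the first point, here is a minimal sketch of a cluster setup using the TF 1.x-style APIs of the era (tf.train.ClusterSpec, tf.train.Server, tf.train.replica_device_setter). The hostnames, ports, job names, and task layout are illustrative assumptions, not material from the talk.

```python
import tensorflow as tf

# A minimal sketch of a TF 1.x-style cluster definition; the hostnames,
# ports, and task layout below are illustrative placeholders.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# Each process in the cluster starts a server for its own job/task pair.
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# replica_device_setter places variables on the parameter servers and
# keeps compute ops on this worker's devices.
with tf.device(tf.train.replica_device_setter(
        worker_device="/job:worker/task:0", cluster=cluster)):
    # Build the model graph here; variables go to /job:ps automatically.
    pass
```

And for the second point, a sketch of capturing a timeline trace for a single step, which can be loaded into chrome://tracing to inspect op-level timing; the toy matmul graph below stands in for a real training step and is purely illustrative.

```python
import tensorflow as tf
from tensorflow.python.client import timeline

# A toy graph stands in for a real training step.
a = tf.random_normal([1000, 1000])
b = tf.random_normal([1000, 1000])
train_op = tf.matmul(a, b)

# Request full tracing metadata for this run.
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

with tf.Session() as sess:
    sess.run(train_op, options=run_options, run_metadata=run_metadata)

# Convert the collected step stats into a Chrome-trace JSON file that
# chrome://tracing can open for inspection.
tl = timeline.Timeline(run_metadata.step_stats)
with open("timeline.json", "w") as f:
    f.write(tl.generate_chrome_trace_format())
```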
