Horovod is a distributed deep learning training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
Horovod was originally developed by Uber to make distributed deep learning fast and easy to use, bringing model training time down from days and weeks to hours and minutes. With Horovod, an existing training script can be scaled up to run on hundreds of GPUs in just a few lines of Python code. Horovod can be installed on-premise or run out-of-the-box in cloud platforms, including AWS, Azure, and Databricks. Horovod can additionally run on top of Apache Spark, making it possible to unify data processing and model training into a single pipeline. Once Horovod has been configured, the same infrastructure can be used to train models with any framework, making it easy to switch between TensorFlow, PyTorch, MXNet, and future frameworks as machine learning tech stacks continue to evolve.
Scale up to hundreds of GPUs with upwards of 90% scaling efficiency.
Start scaling your model training with just a few lines of Python code.
Runs the same for TensorFlow, Keras, PyTorch, and MXNet; on-premise, in the cloud, and on Apache Spark.
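To make the "few lines of Python code" claim concrete, here is a minimal sketch of scaling an existing PyTorch training script with Horovod's documented PyTorch API (`hvd.init`, `hvd.DistributedOptimizer`, `hvd.broadcast_parameters`). The model and learning rate are placeholders for illustration; a real script would be launched across workers, e.g. with `horovodrun -np 4 python train.py`.

```python
import torch
import horovod.torch as hvd

# 1. Initialize Horovod.
hvd.init()

# 2. Pin each worker process to one GPU (when GPUs are available).
if torch.cuda.is_available():
    torch.cuda.set_device(hvd.local_rank())

model = torch.nn.Linear(10, 1)  # stand-in for your existing model

# 3. Scale the learning rate by the number of workers (a common heuristic).
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# 4. Wrap the optimizer so gradients are averaged across workers
#    with Horovod's allreduce.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters())

# 5. Broadcast initial parameters so every worker starts from the same state.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)

# ...the rest of the training loop is unchanged.
```

The same pattern applies to the other supported frameworks: initialize, pin a device per process, wrap the optimizer, and broadcast the initial state.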
Horovod is an open source project hosted by the LF AI Foundation. We invite you to join our community on GitHub as both a user and a contributor. We look forward to your contributions!
Join the Conversation
Horovod maintains the following mailing lists. You are invited to join any that match your interests:
Horovod-Announce: Stay up to date with the latest Horovod developments.
Horovod-Technical-Discuss: Discuss the technical details and direction of Horovod.
Horovod-TSC: Project governance and organization announcements.