2021.05.13
NICT released RaNNC, an automatic parallelization middleware for very large neural networks, as open source software.
- Data-driven Intelligent System Research Center, Universal Communication Research Institute, National Institute of Information and Communications Technology
- https://github.com/nict-wisdom/rannc
NICT (National Institute of Information and Communications Technology) and the University of Tokyo jointly developed RaNNC (Rapid Neural Net Connector), an automatic parallelization middleware for very large neural networks, and released it on GitHub on March 31, 2021. In recent years, the size of neural networks has continued to grow, and some large-scale neural networks no longer fit into the memory of a single GPU. Users have therefore had to partition such networks manually by rewriting their definitions. RaNNC automates this partitioning and drastically simplifies parallel training on multiple GPUs. To the best of our knowledge, RaNNC is the first middleware that automatically partitions neural networks without requiring any modification of their definitions.
The source code of RaNNC is now available on GitHub (https://github.com/nict-wisdom/rannc) under the MIT license, which allows free use even for commercial purposes.
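To illustrate how little a model definition needs to change, the following is a minimal sketch of a PyTorch training step using RaNNC. The toy model, sizes, and the exact wrapper call (pyrannc.RaNNCModule taking the model and optimizer) are assumptions based on the repository's documentation rather than this announcement, so details may differ.

```python
# Hypothetical sketch: training an ordinary PyTorch model with RaNNC.
# Assumes the pyrannc package from the repository above; exact names may differ.
import torch
import pyrannc

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # The network is defined as usual; no manual partitioning in the code.
        self.layers = torch.nn.Sequential(
            torch.nn.Linear(1024, 4096),
            torch.nn.ReLU(),
            torch.nn.Linear(4096, 1024),
        )

    def forward(self, x):
        return self.layers(x)

model = Net().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Wrapping the model is assumed to be the only RaNNC-specific step;
# RaNNC then partitions the model across the available GPUs automatically.
model = pyrannc.RaNNCModule(model, optimizer)

x = torch.randn(64, 1024).cuda()
loss = model(x).sum()   # forward pass on the partitioned model
loss.backward()         # backward pass
optimizer.step()
```

In this sketch, only the wrapping line differs from ordinary single-GPU training; how the layers are split across GPUs is left to RaNNC.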
RaNNC (Rapid Neural Net Connector)
■ Date of release: March 31, 2021
■ GitHub URL: https://github.com/nict-wisdom/rannc
■ License: MIT License
■ Features: RaNNC drastically simplifies parallel training on multiple GPUs by automatically partitioning large-scale neural networks.
■ Expected impact: RaNNC enables many users to easily train large-scale networks and helps them improve a variety of AI systems.