Habana Labs, a developer of AI processors, has announced the Habana Gaudi AI training processor. The architecture enables near-linear scaling of training-system performance: high throughput is maintained even at smaller batch sizes, allowing Gaudi-based systems to scale from a single device to large systems built with hundreds of Gaudi processors. The processor includes 32 GB of HBM2 memory and is currently offered in two forms: the HL-200, a PCIe card supporting eight ports of 100Gb Ethernet, and the HL-205, a mezzanine card compliant with the OCP-OAM specification, supporting either 10 ports of 100Gb or 20 ports of 50Gb Ethernet.

“With its new products, Habana has quickly extended from inference into training, covering the full range of neural-network functions,” commented Linley Gwennap, principal analyst of The Linley Group. “Gaudi offers strong performance and industry-leading power efficiency among AI training accelerators, enabling large clusters of accelerators built using industry-standard components.”

For more information on the Gaudi AI training and Goya AI inference processors, please visit: www.habana.ai