According to a recent press release, “Today at the Baidu Create AI developer conference in Beijing, Intel Corporate Vice President Naveen Rao announced that Baidu* is collaborating with Intel on development of the new Intel® Nervana™ Neural Network Processor for Training (NNP-T). The collaboration involves the hardware and software designs of the new custom accelerator with one purpose – training deep learning models at lightning speed…”

It adds, “Artificial intelligence (AI) isn’t a single workload; it’s a pervasive capability that will enhance every application, whether it’s running on a phone or in a massive data center. Phones, data centers and everything in between have different performance and power requirements, so one-size AI hardware doesn’t fit all. Intel offers exceptional choice in AI hardware with enabling software, so customers can run complex AI applications where the data lives. The NNP-T is a new class of efficient deep learning system hardware designed to accelerate distributed training at scale. Close collaboration with Baidu helps ensure Intel development stays in lock-step with the latest customer demands on training hardware.”
The release continues, “How Intel and Baidu Collaborate: Since 2016, Intel has been optimizing Baidu’s PaddlePaddle* deep learning framework for Intel® Xeon® Scalable processors. Now, the companies give data scientists more hardware choice by optimizing the NNP-T for PaddlePaddle. The impact of these AI solutions is enhanced with additional Intel technologies. For example, Intel® Optane™ DC Persistent Memory provides improved memory performance that allows Baidu to deliver personalized mobile content to millions of users through its Feed Stream* service and Baidu’s AI recommendation engines for a more efficient customer experience.”
Read more at Business Wire.