by Angela Guess
According to a recent press release, “Synopsys, Inc. today announced that it has enhanced the convolutional neural network (CNN) engine in its DesignWare® EV6x Vision Processors to address the increasing video resolution and frame rate requirements of high-performance embedded vision applications. The CNN engine delivers up to 4.5 TeraMACs per second when implemented in 16-nanometer (nm) FinFET process technologies under typical conditions, four times more performance than Synopsys’ previous CNN engine. It also supports both coefficient and feature map compression/decompression to reduce data bandwidth requirements and decrease power consumption. The vision CPU scales from one to four vector DSPs and operates in parallel to the CNN engine, delivering maximum throughput for a broad range of high-performance embedded vision applications such as advanced driver assistance systems (ADAS), video surveillance, augmented and virtual reality, and simultaneous localization and mapping (SLAM).”
The release goes on, “The DesignWare EV6x Processor family integrates scalar, vector DSP and CNN processing units for highly accurate and fast vision processing. The EV6x supports any convolutional neural network, including popular networks such as AlexNet, VGG16, GoogLeNet, Yolo, Faster R-CNN, SqueezeNet and ResNet. Designers can run CNN graphs originally trained for 32-bit floating point hardware on the EV6x’s 12-bit CNN engine, significantly reducing the power and area of their designs while maintaining the same levels of detection accuracy. The engine delivers power efficiency of up to 2,000 GMACs/sec/W when implemented in 16-nm FinFET process technologies (worst-case conditions). The EV6x’s CNN hardware also supports neural networks trained for 8-bit precision to take advantage of the lower memory bandwidth and power requirements of these graph types.”
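The idea of running graphs trained in 32-bit floating point on a lower-precision engine can be illustrated with a generic post-training quantization sketch. The snippet below is a hypothetical example, not Synopsys' actual tooling: it symmetrically maps float weights into a 12-bit signed integer range and measures the reconstruction error, showing why accuracy can be largely preserved.

```python
import numpy as np

# Generic symmetric post-training quantization sketch (illustrative only;
# the EV6x toolchain's actual mapping is not public in this article).

def quantize(weights, bits=12):
    """Symmetrically quantize a float array to `bits`-bit signed integers."""
    qmax = 2 ** (bits - 1) - 1            # 2047 for 12-bit signed
    scale = np.max(np.abs(weights)) / qmax
    q = np.round(weights / scale).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    """Map the quantized integers back to floats to measure the error."""
    return q * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000).astype(np.float32)  # stand-in for one trained layer
q, scale = quantize(w, bits=12)
w_hat = dequantize(q, scale)
max_err = np.max(np.abs(w - w_hat))
print(f"max quantization error: {max_err:.6f} (step size = {scale:.6f})")
```

Because the rounding error per weight is bounded by half the quantization step, a 12-bit grid is fine enough that detection accuracy on typical CNN layers changes little, which is the trade-off the release describes.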
Read more at PR Newswire.
Photo credit: Synopsys