Neural Network Inference Engine IP Core Delivers >10 TeraOPS per Watt

VeriSilicon Expands Leadership in Deep Neural Network Processing with Breakthrough NN Compression Technology; VIP8000 NN Processor Scales from 0.5 to 72 TeraOPS

Highlights:

  • Scalable from always-on IoT edge devices to server ASICs, with performance from 0.5 to 72 TeraOPS
  • Delivers more than 10 TeraOPS per Watt in 14nm Process Technology
  • Fully programmable processor supports OpenCL, OpenVX, and a wide range of NN frameworks (TensorFlow, Caffe, AndroidNN, ONNX, NNEF, etc.)
  • Native acceleration for i8, i16, fp16 and fp32 inference, supporting a broad spectrum of NN topologies at variable precisions
  • Dramatic reductions in memory bandwidth requirements with the introduction of Hierarchical Compression, Software Tiling/Caching, Pruning, Fetch Skipping and Layer Merging technology
  • 10 new VIP8000 IP licensees added in 2017
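To illustrate why the variable-precision support highlighted above matters, the sketch below shows symmetric i8 quantization in plain Python. This is a hypothetical, generic example (the function names and scheme are illustrative, not VeriSilicon's implementation): storing each weight as one int8 byte instead of a four-byte fp32 value cuts weight storage and memory traffic by 4x, at the cost of small rounding error.

```python
# Hypothetical sketch of symmetric i8 quantization -- one reason
# variable-precision inference saves memory bandwidth. Illustrative
# only; not VeriSilicon's actual quantization scheme.

def quantize_i8(weights):
    """Map float weights onto the int8 range [-127, 127] with one scale."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [x * scale for x in q]

weights = [0.5, -1.0, 0.25]
q, scale = quantize_i8(weights)
approx = dequantize(q, scale)
# Each i8 weight occupies 1 byte instead of 4 for fp32: a 4x reduction
# in weight storage and fetch traffic, with small rounding error.
```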

NUREMBERG, Germany--VeriSilicon Holdings Co., Ltd. (VeriSilicon) today announced that it has achieved significant milestones for its versatile and highly scalable VIP8000 family of neural network inference engines.

“The biggest thing to happen in the computer industry since the PC is AI and machine learning; it will truly revolutionize, empower, and improve our lives. It can be done in giant machines from IBM and Google, and in tiny chips made with VeriSilicon’s neural network processors,” said Dr. Jon Peddie, president of Jon Peddie Research. “By 2020 we will wonder how we ever lived without our AI assistants,” he added.

Machine learning and neural network processing represent the next major market opportunity for embedded processors. The International Data Corporation (IDC) forecasts spending on AI and machine learning to grow from $8B in 2016 to $47B by 2020. With the release of the latest generation of its NN inference IP, VeriSilicon establishes itself as a significant driver of growth in this category. The industry-leading top-end performance of the Vivante VIP8000 processor continues to expand the application space from always-on battery-powered IoT clients to AI server farm applications.

VeriSilicon’s latest updates to VIP8000 are specifically designed to accelerate neural network inference with greater efficiency and speed while slashing memory bandwidth requirements compared to alternative DSP, GPU, and CPU hybrid processor approaches. The fully programmable VIP8000 processors reach the performance and memory efficiency of dedicated fixed-function logic while retaining the customizability and future-proofing of full programmability in OpenCL, OpenVX, and a wide range of NN frameworks (TensorFlow, Caffe, AndroidNN, ONNX, NNEF, etc.). The VIP8000 NN architecture can handle a wide range of AI workloads while optimizing memory management of the data that flows through the processor.

Not only does VeriSilicon’s NN engine outperform traditional DSP, GPU and CPU hybrid systems, it is industry-proven, having shipped to licensees as a ready IP core for more than 18 months. In 2017 alone, 10 major ASIC developers selected VIP after rigorous benchmarking of both competing IP solutions and SoCs. VeriSilicon has successfully licensed the core to a wide range of end customers, with applications ranging from ADAS and autonomous vehicles, security surveillance, home entertainment, and imaging to dedicated ASICs for servers.

The VIP8000 NN processor achieves the industry’s highest performance and energy efficiency levels and is the most scalable platform on the market. The NN engine can range from 0.5 to 72 TeraOPS, with power efficiency of more than 10 TeraOPS per Watt based on a recent 14nm implementation of the IP. The introduction of new Hierarchical Compression, Software Tiling/Caching, Pruning, Fetch Skipping, and patent-pending Layer Merging technology further reduces memory bandwidth requirements for VIP8000 relative to other processor architectures.
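Of the bandwidth-reduction techniques named above, pruning is the simplest to illustrate. The sketch below shows generic magnitude pruning (a hypothetical example, not VeriSilicon's algorithm): weights below a threshold are zeroed, and with a sparse encoding those zeros never need to be fetched from memory.

```python
# Hypothetical illustration of magnitude pruning, one of the bandwidth-
# reduction techniques named in the release. Not VeriSilicon's algorithm.

def prune(weights, threshold):
    """Zero out weights whose magnitude falls below threshold."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def density(weights):
    """Fraction of nonzero weights -- a proxy for post-pruning fetch
    traffic, since zeroed weights can be skipped under sparse encoding."""
    nonzero = sum(1 for w in weights if w != 0.0)
    return nonzero / len(weights)

w = [0.9, 0.01, -0.5, 0.003, 0.2, -0.02, 0.7, 0.001]
pruned = prune(w, 0.05)
# density drops from 1.0 to 0.5: half the weights can be skipped entirely.
```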

“AI is everywhere. With patent-pending neural network compression technology, the VIP8000 family efficiently delivers the performance that accelerates the adoption of AI in embedded products. We are deeply engaged with leading customers ranging from deeply embedded to edge server products,” said Weijin Dai, Chief Strategy Officer, Executive Vice President and GM of VeriSilicon’s Intellectual Property Division. “Applications and algorithms to address these challenges are rapidly advancing, and we are combining AI technology with VeriSilicon’s extensive IP portfolio to deliver breakthrough solutions to our customers. AI needs to deliver value efficiently.”

VeriSilicon supports a wide range of NN frameworks and networks (TensorFlow, Caffe, AndroidNN, Amazon Machine Learning, ONNX, NNEF, AlexNet, VGG16, GoogLeNet, Yolo, Faster R-CNN, MobileNet, SqueezeNet, ResNet, RNN, LSTM, etc.) and also provides numerous software and hardware solutions to enable developers to create high-performance Neural Network models and machine-learning-based applications.

VeriSilicon at Embedded World 2018

Learn more about the VIP8000 NN and related VeriSilicon IP, NN ecosystem solution development partners, custom silicon and advanced packaging (SiP) turnkey services at Embedded World 2018 in Nuremberg, Germany, February 27 – March 1, Hall 4A / 4A-360.

For more details, please contact: press@verisilicon.com

Contacts

VeriSilicon Holdings Co., Ltd.
Miya Kong, +86 21 51311118
press@verisilicon.com
