
MLPerf Results Show Advances in Machine Learning Inference

MLCommons establishes a new record with nearly 5,300 performance results and 2,400 power measurement results, 1.37X and 1.09X more than the previous round.

SAN FRANCISCO--(BUSINESS WIRE)--Today, the open engineering consortium MLCommons® announced new results from MLPerf™ Inference v2.1, which measures the performance of inference - applying a trained machine learning model to new data. Inference enables adding intelligence to a wide range of applications and systems. This round set a new record with nearly 5,300 performance results and 2,400 power measurement results, 1.37X and 1.09X more than the previous round, respectively, reflecting the vigor of the ML community.

MLPerf benchmarks are comprehensive system tests that stress machine learning models, software, and hardware, and optionally measure energy consumption. The open-source, peer-reviewed benchmark suites level the playing field for competition, which fosters innovation, performance, and energy efficiency across the whole sector.

“We are very excited with the growth in the ML community and welcome new submitters across the globe such as Biren, Moffett AI, Neural Magic, and SAPEON,” said MLCommons Executive Director David Kanter. “The exciting new architectures all demonstrate the creativity and innovation in the industry designed to create greater AI functionality that will bring new and exciting capability to business and consumers alike.”

The MLPerf Inference benchmarks are focused on datacenter and edge systems, and Alibaba, ASUSTeK, Azure, Biren, Dell, Fujitsu, GIGABYTE, H3C, HPE, Inspur, Intel, Krai, Lenovo, Moffett, Nettrix, Neural Magic, NVIDIA, OctoML, Qualcomm Technologies, Inc., SAPEON, and Supermicro are among the contributors to the submission round.

To view the results and find additional information about the benchmarks, please visit https://mlcommons.org/en/inference-datacenter-21/ and https://mlcommons.org/en/inference-edge-21/. These results reveal broad industry participation and a focus on energy efficiency, paving the way for more capable intelligent systems that will benefit society as a whole.

About MLCommons

MLCommons is an open engineering consortium with a mission to benefit society by accelerating innovation in machine learning. The foundation for MLCommons began with the MLPerf benchmark in 2018, which rapidly scaled into a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 50+ founding partners - global technology providers, academics, and researchers - MLCommons is focused on collaborative engineering work that builds tools for the entire machine learning industry through benchmarks and metrics, public datasets, and best practices.

For additional information on MLCommons and details on becoming a Member or Affiliate of the organization, please visit http://mlcommons.org/ and contact participation@mlcommons.org.

Contacts

David Kanter
press@mlcommons.org

MLCommons

