AnyVision to NIST: "AI companies must purge demographic bias from their algorithms and be transparent about their methodology"

NEW YORK--AnyVision, the world’s leading Recognition AI company, today released an open letter in response to an invitation for public comment by the National Institute of Standards and Technology (NIST). The letter is intended to stimulate discussion around public trust in AI and facial recognition technology.

As part of the vital public debate around artificial intelligence, NIST recently released a document outlining the factors that contribute to an individual’s potential trust in AI platforms and applications. The document describes how individuals and organizations should weigh these factors against the task at hand and the risk involved in trusting an AI system’s decision, and it supports NIST’s broader effort to advance the development of trustworthy AI tools.

AnyVision is the first known entity from within the AI industry to address the initiative and respond to NIST in an open letter entitled: "Purging Demographic Bias While Increasing Transparency in Facial Recognition." In the open letter, AnyVision CEO, Avi Golan, issues a call to action for NIST to apply a similar logic to the trusted use of facial recognition technology.

AnyVision is a world-leading visual AI platform company that organizations across the globe use to create trusted, seamless experiences in their physical spaces, primarily through the use of face and object recognition technology. In March, AnyVision was ranked among the top solutions in the world and achieved top rankings across all five categories in the Face Recognition Vendor Test conducted by NIST.

In the open letter, Golan points out: "Understanding the use case and the criticality of the decisions made by the AI algorithms impacts how much trust users should place in the AI systems. If the AI is being used to make a music or movie recommendation based on historical preferences, it’s not critical if the AI arrives at the wrong conclusion. However, if AI is being used to make a cancer diagnosis, then that’s another matter. These are the types of nuanced distinctions that are critical, as opposed to making broad, sweeping generalizations about the use of facial recognition."

In April, the European Commission announced proposed regulations on the use of AI, including strict safeguards on recruitment, critical infrastructure, credit scoring, migration, and law enforcement algorithms. The EU further delineated risk into categories of unacceptable risk, high risk, limited risk, and minimal risk. The regulations introduced additional clarity by recognizing the underlying differences in the use cases in which AI is being applied.

According to Golan, “These are steps in the right direction in understanding and categorizing AI. It’s largely understood that AI is providing significant benefits including improved speed, accuracy, cost savings, fraud detection, medical diagnoses, and customer experience. At the same time, it’s vital to address its historical weaknesses. Consequently, AI companies must continue to purge demographic bias from their algorithms and be transparent about their methodology and the training data used to develop their models. Unfortunately, this level of nuance is missing from most discussions today related to facial recognition.”

AnyVision’s open letter calls upon NIST to help define and shape the discussion around the responsible use of facial recognition and video surveillance by drafting guidelines similar to those it has issued for the use of AI.

Last year, AnyVision conducted the Fair Facial Recognition Challenge, inviting teams from the AI industry and academia to test whether their algorithms are racially biased. The results of the top 10 teams demonstrated that racial bias can be significantly reduced, and in some cases eliminated, by training facial recognition algorithms on a wide range of video and still images of people of different races, genders, and ages. The challenge showed that AI-based facial recognition systems have improved dramatically in the last couple of years and can achieve unprecedented accuracy in these kinds of scenarios.

Golan further remarks, "AnyVision is willing to share its industry insights and best practices from our vast research experience with leading global players, including name-brand retailers, global hospitality and entertainment companies, and law enforcement agencies from around the world. Moreover, AnyVision welcomes the opportunity to work with NIST, as well as thought leaders from academia and NGOs, to help draft these guidelines and best practices."

About AnyVision

AnyVision is a world-leading visual AI platform company that organizations across the globe use to create trusted, seamless experiences in their physical spaces. Proven to operate with the highest accuracy in real-time and real-world scenarios, AnyVision harnesses its cutting-edge research and powerful technology platform to make the world a safer, more intuitive and more connected place. For more information, please visit


Dean Nicolls
CMO, AnyVision

Release Summary

This press release announces AnyVision's open letter to NIST, which encourages guidance around the responsible use of facial recognition technology.
