NEW YORK--(BUSINESS WIRE)--Since the wave of protests that swept the U.S. after the death of George Floyd, the issue of bias in AI systems, and specifically in facial recognition, has come up for public discussion.
Academics and activists for the Black Lives Matter (BLM) movement cited studies showing that a Black woman was five times more likely to be misidentified by facial recognition algorithms than a white man.
This is the background to the Fair Face Recognition Workshop and Challenge, held by AnyVision as part of the European Conference on Computer Vision 2020 (ECCV 2020) on August 23-29, 2020. Dr. Eduard Vazquez, VP Visual Analytics Research at AnyVision, explains:
“The aim of the challenge was to evaluate accuracy and bias in gender and skin color of submitted algorithms on the task of 1:1 face verification. Participants were scored using an in-the-wild dataset provided by AnyVision, further enriched by 12,500 new images, all annotated for gender and skin color as well as five other attributes: age group, eyeglasses, head pose, image source and face size. Despite the large variability, the dataset was purposefully unbalanced to simulate a real-world scenario where AI-based models are supposed to present fair outcomes.”
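To illustrate the kind of evaluation described above, here is a minimal sketch of 1:1 face verification scoring with a per-group accuracy check. All names, the cosine-similarity decision rule, and the threshold are illustrative assumptions, not the organizers' actual protocol.

```python
# Hypothetical sketch of 1:1 face verification with a per-group bias
# check. The threshold and similarity measure are assumptions for
# illustration, not the challenge's actual scoring protocol.
import numpy as np

def verify(emb_a, emb_b, threshold=0.5):
    """Decide 'same person' if the cosine similarity of two face
    embeddings exceeds the threshold."""
    sim = np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return bool(sim > threshold)

def accuracy_by_group(pairs, labels, groups):
    """Compute verification accuracy separately for each annotated
    group (e.g. gender or skin color); the gap between the best and
    worst group is one simple measure of bias."""
    accs = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(verify(*pairs[i]) == labels[i] for i in idx)
        accs[g] = correct / len(idx)
    return accs
```

Reporting accuracy per group rather than only in aggregate is what exposes bias: a model can score well overall while performing markedly worse on an under-represented group in an unbalanced dataset like the one described.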
The challenge attracted 151 participants, from academic teams to start-ups, who made more than 1,800 submissions in total. The top 10 teams exceeded 99.9% accuracy and were able to minimize bias to the point where it was almost negligible.
The challenge was conducted under laboratory conditions and examined a narrow aspect of the capabilities required for effective real-time face recognition in the real world.
Dr. Vazquez added, “The results proved that current AI-based facial recognition systems have dramatically improved in the last couple of years and can achieve unprecedented accuracy in these kinds of scenarios. The comprehensive bias analysis performed shows that while there is still important work to be done, the relative difference in bias was extremely small. This shows that current face recognition systems, within a proper legislative framework and in the right applications, can help avoid human bias and are ready to aid human efforts in creating a positive impact on our society.”
AnyVision’s facial recognition systems are used in public buildings such as hospitals and airports around the world. Since its inception, the company has taken a hard stance on training bias out of its systems and has advocated for responsible use of this technology in the countries and communities in which it operates.
The results are encouraging because they prove that ethnic biases can be addressed through education and training, and that the academic community is committed to the issue.
This workshop was one of only 15 out of hundreds of proposed sessions accepted by the conference and featured a number of high-profile Ivy League scholars in the fields of AI and computer vision.
AnyVision is the world’s leading developer of face, body and object recognition platforms. Our solutions are built to function on any sensor, with any resolution and are proven to operate in real-time and real-world scenarios. We bring together the best and brightest minds in AI, deep learning and computer vision to make the world a safer, more intuitive and more connected place.