The AI Ethics Deficit — 94% of IT Leaders Call for More Attention to Responsible and Ethical AI Development

87% believe AI development should be regulated, says new SnapLogic study

New research from SnapLogic reveals IT Leaders in the US and UK want more attention paid to AI ethics, responsibility, and regulation. (Graphic: Business Wire)

SAN MATEO, Calif. & LONDON--(BUSINESS WIRE)--Ethical and responsible AI development is a top concern for IT decision-makers (ITDMs), according to new research from SnapLogic, which found that 94% of ITDMs across the US and UK believe more attention needs to be paid to corporate responsibility and ethics in AI development. A further 87% of ITDMs believe AI development should be regulated to ensure it serves the best interests of business, governments, and citizens alike.

The new research, conducted by Vanson Bourne on behalf of SnapLogic, surveyed the views of ITDMs across industries, asking key questions such as: Who bears primary responsibility for ensuring AI is developed ethically and responsibly? Will global expert consortiums shape the future development of AI? And should AI be regulated and, if so, by whom?

Who Bears Responsibility?

When asked where the ultimate responsibility lies to ensure AI systems are developed ethically and responsibly, more than half (53%) of ITDMs point to the organizations developing the AI systems, regardless of whether that organization is a commercial or academic entity. However, 17% place responsibility with the specific individuals working on AI projects. What’s striking is that respondents in the US are more than twice as likely as those in the UK to assign responsibility to individual workers (21% vs. 9%).

A similar number (16%) see an independent global consortium, composed of representatives from government, academia, research institutions, and businesses, as the only way to establish fair rules and protocols for the ethical and responsible development of AI. A further 11% of ITDMs believe responsibility should fall to the governments of the countries where the AI systems are developed.

Independent Guidance and Expertise

Some independent regional initiatives providing AI support, guidance, and oversight are already taking shape, the European Commission's High-Level Expert Group on Artificial Intelligence being one example. ITDMs see expert groups like this as a positive step in addressing the ethical issues around AI. Half of ITDMs (50%) believe organizations developing AI will take guidance from such groups and adhere to their recommendations as they develop their AI systems. Additionally, 55% believe these groups will foster better collaboration between organizations developing AI.

However, UK respondents are more skeptical of the impact these groups will have. 15% of ITDMs in the UK expect organizations to continue pushing the limits of AI development regardless of the guidance expert groups provide, compared with 9% of their American counterparts. Furthermore, 5% of UK ITDMs indicated that guidance or advice from oversight groups would be effectively useless in driving ethical AI development unless it is enforceable by law.

A Call for Regulation

Many believe that ensuring ethical and responsible AI development will require regulation. In fact, 87% of ITDMs believe AI should be regulated, with 32% noting that this should come from a combination of government and industry, while 25% believe regulation should be the responsibility of an independent industry consortium.

However, some industries are more open to regulation than others. Almost a fifth (18%) of ITDMs in manufacturing oppose the regulation of AI, followed by 13% of those in the technology sector and 13% of those in the retail, distribution, and transport sector. When asked why they oppose regulation, respondents were nearly evenly split between two beliefs: that regulation would slow down AI innovation, and that AI development should be left to the discretion of the organizations creating AI programs.

Championing AI Innovation, Responsibly

Gaurav Dhillon, CEO at SnapLogic, commented: “AI is the future, and it’s already having a significant impact on business and society. However, as with many fast-moving developments of this magnitude, there is the potential for it to be appropriated for immoral, malicious, or simply unintended purposes. We should all want AI innovation to flourish, but we must manage the potential risks and do our part to ensure AI advances in a responsible way.”

Dhillon continued: “Data quality, security, and privacy concerns are real, and the regulation debate will continue. But AI runs on data — it requires continuous, ready access to large volumes of data that flows freely between disparate systems to effectively train and execute the AI system. Regulation has its merits and may well be needed, but it should be implemented thoughtfully such that data access and information flow are retained. Absent that, AI systems will be working from incomplete or erroneous data, thwarting the advancement of future AI innovation.”

About the research

The research was conducted by independent research house Vanson Bourne in February 2019 on behalf of SnapLogic. A total of 300 IT decision-makers participated in the study, representing organizations with more than 1,000 employees across the United States and the United Kingdom.

About Vanson Bourne

Vanson Bourne is an independent specialist in market research for the technology sector. Their reputation for robust and credible research-based analysis is founded upon rigorous research principles and their ability to seek the opinions of senior decision-makers across technical and business functions, in all business sectors and all major markets. For more information, visit vansonbourne.com.

About SnapLogic

SnapLogic provides the #1 intelligent integration platform. The company’s AI-powered workflows and self-service integration capabilities make it fast and easy for organizations to manage all their application integration, data integration, and data engineering projects on a single, scalable platform. Hundreds of Global 2000 customers — including Adobe, AstraZeneca, Box, Emirates, Schneider Electric, and Wendy’s — rely on SnapLogic to automate business processes, accelerate analytics, and drive digital transformation. Learn more at snaplogic.com.

Connect with SnapLogic via our Blog, Twitter, Facebook, or LinkedIn.

Contacts

Scott Behles
SnapLogic
scott.behles@snaplogic.com
+1 415-571-4462

Marnie Spicer
Kaizo for SnapLogic
snaplogic@kaizo.co.uk
+44 (0) 20 3176 4723
