HYAS Infosec's Groundbreaking Research on AI-Generated Malware Contributes to the AI Act and Other AI Policies and Regulations

Provides AI Regulation Initiatives with Deep Insight into the Potential Harms of Fully Autonomous and Intelligent Malware and Helps Advance Cybersecurity Protections Against AI-Driven Threats

VANCOUVER, British Columbia--HYAS Infosec, an adversary infrastructure platform provider that offers unparalleled visibility, protection, and security against all kinds of malware and attacks, is pleased to share that research from HYAS Labs, the research arm of HYAS, is being utilized by contributors to and framers of the European Union's AI Act.

The AI Act is widely viewed as a cornerstone initiative that is helping shape the trajectory of AI governance, with the United States’ policies and considerations soon to follow.

AI Act researchers and framers assert that the Act reflects a specific conception of AI systems, viewing them as non-autonomous statistical software whose potential harms stem primarily from datasets. These researchers view the concept of "intended purpose," drawn from product safety principles, as a fitting paradigm, one that has significantly influenced the initial provisions and regulatory approach of the AI Act.

However, these researchers also see a substantial gap in the AI Act concerning AI systems devoid of an intended purpose, a category that encompasses General-Purpose AI Systems (GPAIS) and foundation models.

HYAS' work on AI-generated malware, specifically BlackMamba and its more sophisticated, fully autonomous cousin EyeSpy, is helping advance the understanding of AI systems devoid of an intended purpose, including GPAIS, and the unique challenges such systems pose to cybersecurity.

HYAS research is proving important both for the development of proposed policies and for addressing the real-world challenges posed by fully autonomous and intelligent malware, a rising dilemma that cannot be solved by policy alone.

HYAS is providing researchers with tangible examples of GPAIS gone rogue. BlackMamba, the proof of concept cited in the research paper "General Purpose AI systems in the AI Act: trying to fit a square peg into a round hole" by Claire Boine and David Rolnick, exploited a large language model to synthesize polymorphic keylogger functionality on the fly, dynamically modifying otherwise benign code at runtime, all without any command-and-control infrastructure to deliver or verify the malicious keylogger functionality.

EyeSpy, the more advanced (and more dangerous) proof of concept from HYAS Labs, is a fully autonomous AI-synthesized malware that uses artificial intelligence to make informed decisions to conduct cyberattacks and continuously morph to avoid detection. The challenges posed by an entity such as EyeSpy, capable of autonomously assessing its environment, selecting its target and tactics, strategizing, and self-correcting until successful, all while dynamically evading detection, were highlighted at the recent Cyber Security Expo 2023 in presentations such as "The Red Queen's Gambit: Cybersecurity Challenges in the Age of AI."

In response to the nuanced challenges posed by GPAIS, the EU Parliament has proactively proposed provisions within the AI Act to regulate these complex models. The significance of these proposed measures cannot be overstated; they will help further refine the AI Act and sustain its usefulness in the dynamic landscape of AI technologies.

HYAS CEO David Ratner said: "The industry as a whole must prepare for a new generation of threats. Cybersecurity and cyber defense must have the appropriate visibility into the digital exhaust and meta information thrown off by fully autonomous and dynamic malware to ensure operational resiliency and business continuity."

Additional Resources:

"General Purpose AI systems in the AI Act: trying to fit a square peg into a round hole" https://www.bu.edu/law/files/2023/09/General-Purpose-AI-systems-in-the-AI-Act.pdf. Paper submitted to WeRobot 2023 by Claire Boine, Research Associate at the Artificial and Natural Intelligence Toulouse Institute and in the Accountable AI in a Global Context Research Chair at the University of Ottawa, researcher in AI law, and CEO of Successif, and David Rolnick, Assistant Professor in Computer Science at McGill University and Co-Founder of Climate Change AI.

News – European Parliament - The European Union's AI Act: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

Future of Life Institute, "General Purpose AI and the AI Act": What are general purpose AI systems? Why regulate general purpose AI systems? https://artificialintelligenceact.eu/wp-content/uploads/2022/05/General-Purpose-AI-and-the-AI-Act.pdf

Towards Data Science, "AI-powered Monopolies and the New World Order - How AI's reliance on data will empower tech giants and reshape the global order" https://towardsdatascience.com/ai-powered-monopolies-and-the-new-world-order-1c56cfc76e7d

"The Red Queen's Gambit: Cybersecurity Challenges in the Age of AI" presented by Lindsay Thorburn at Cyber Security Expo 2023 https://www.youtube.com/watch?v=Z2GsZHCXc_c

HYAS Blog: "Effective AI Regulation Requires Adaptability and Collaboration" https://www.hyas.com/blog/effective-ai-regulation-requires-adaptability-and-collaboration

About HYAS

HYAS is a world-leading authority on cyber adversary infrastructure and communication to that infrastructure. HYAS is dedicated to protecting organizations and solving intelligence problems through detection of adversary infrastructure and anomalous communication patterns. HYAS helps businesses see more, do more, and understand more in real time about the nature of the threats they face. HYAS turns metadata into actionable threat intelligence, actual adversary visibility, and protective DNS that renders malware inoperable.

For more information, visit www.HYAS.com

Contacts

Dan Chmielewski
Madison Alexander PR for HYAS
Dchm@madisonalexanderpr.com
949-231-2965

Release Summary

HYAS research is being used by contributors to and framers of the EU AI Act, a cornerstone initiative shaping the trajectory of AI governance, with U.S. policies soon to follow.
