Experts weigh in on new AI regulations

In a significant step towards regulating artificial intelligence, both the United States and the European Union have adopted a risk-based approach to AI development. This move emphasizes the importance of transparency and security in the rapidly evolving field of AI. The US has mandated that all federal agencies appoint a chief AI officer and submit annual reports identifying AI systems in use, associated risks, and mitigation plans.

Similarly, the EU has established requirements for risk assessment, testing, and oversight, particularly for high-risk AI applications.

Both regions have prioritized “Security by design and by default,” encouraging a proactive approach to security that anticipates and mitigates threats before they become problematic. This places a greater responsibility on software developers to incorporate security measures into their daily work.

At the core of security by design and by default is the code used to build AI models and applications. As AI development and regulation expand, developers will need to be proficient at both writing AI code and securing it effectively.

Reports indicate that the kind of malicious submissions long seen on code repositories are now emerging on AI development platforms as well, making it crucial to bake security into the process from the outset.
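One small, concrete way to bake that security in is to verify artifacts before trusting them. The minimal sketch below, in Python, checks a downloaded model file against a checksum published by its maintainer before the file is loaded; the file path and expected digest are hypothetical placeholders rather than references to any particular platform.

```python
import hashlib
from pathlib import Path

# Hypothetical values: in practice the expected digest would come from the
# model maintainer's signed release notes or the hosting platform's metadata.
MODEL_PATH = Path("models/sentiment-classifier.bin")
EXPECTED_SHA256 = "9f2c5c2e6c0b4a8e..."  # placeholder digest

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large model weights don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(MODEL_PATH) != EXPECTED_SHA256:
    # Refuse to load an artifact that doesn't match its published checksum.
    raise RuntimeError(f"Checksum mismatch for {MODEL_PATH}; refusing to load.")
```

Checksum verification is only one layer, but it reflects the same verify-before-trust principle that signed releases and pinned dependencies follow.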

Experts advocate proactive AI security

The level of risk and the regulatory requirements will vary with the type of AI application. Under the EU AI Act, systems considered a clear threat to people, such as government-run social scoring and certain biometric identification systems, will be banned outright.

High-risk systems, including those used in aviation, automotive, and medical devices, will face stricter regulation. Most current AI applications, such as AI-enabled video games and spam filters, fall into the limited- or minimal-risk categories and will not be heavily regulated. Under the EU AI Act, general-purpose applications like ChatGPT are not classified as high risk, but they must meet transparency requirements: disclosing that content is AI-generated, preventing the generation of illegal content, and publishing summaries of the copyrighted data used in training.

Models capable of posing systemic risk will be required to undergo testing before release and to report any serious incidents. The US approach leans more heavily on self-regulation, expecting developers of high-risk deployments to address risk management, testing, data governance, human oversight, transparency, and cybersecurity. Ultimately, the new AI regulations underscore how critical proactive security has become, and how urgent it is to identify weaknesses before they turn into large-scale problems.

As Ram Movva, chairman and chief executive officer of Securin, and Aviral Verma, who leads the company's Research and Threat Intelligence team, suggest, now is the time for developers and organizations to return to the core principles of security by design.