With the rapid advancement of artificial intelligence models, AI security issues have become increasingly prominent. On Wednesday, Irregular, a company focused on AI security evaluation, announced an $80 million funding round led by Sequoia Capital and Redpoint Ventures, with Assaf Rappaport, CEO of cybersecurity company Wiz, also participating. According to people familiar with the matter, the round values Irregular at $450 million.


A Forward-Looking Approach to Security

"Our view is that soon, a large amount of economic activity will come from interactions between humans and artificial intelligence, as well as between artificial intelligence and artificial intelligence, which will break security systems in multiple aspects," said co-founder Dan Lahav to TechCrunch. This assessment reflects Irregular's deep insight into the security challenges of the AI era.

Irregular, previously known as Pattern Labs, has become an important player in AI evaluation. Its research has been used in the security evaluation of leading industry models, including Claude 3.7 Sonnet and OpenAI's o3 and o4-mini. Notably, the company's SOLVE framework, which scores a model's ability to detect software vulnerabilities, is already widely used across the industry.

Innovative Simulation Environment Technology

Although Irregular has accumulated extensive experience evaluating the risks of existing models, the goal of this funding round is more ambitious: to identify and prevent potential risks before they actually occur. The company has built an elaborate system of simulated environments that allows intensive testing before a model is released.

"We have a complex network simulation environment where AI plays both the role of an attacker and a defender," explained co-founder Omer Nevo, "when a new model is launched, we can know in advance which defense measures are effective and which are not."

Rising Industry Awareness of Security

As the potential risks of cutting-edge AI models become increasingly evident, security has become a core concern across the AI industry. OpenAI overhauled its internal security measures this summer to guard against potential industrial espionage, reflecting the weight that leading companies now give to security.

At the same time, AI models are getting steadily better at finding software vulnerabilities, with significant implications for both attackers and defenders. For Irregular's founders, this is only the first of many security challenges brought on by the growing capabilities of large language models.

The Race Between Security and Capabilities

"If the goal of leading laboratories is to create increasingly complex and powerful models, then our goal is to ensure the safety of these models," Lahav said, "but this is a constantly evolving goal, so there will certainly be a lot of work to be done in the future."

This framing captures the central challenge of AI security: an ongoing race between advancing model capabilities and the defenses built around them, one that demands forward-looking research and sustained investment.