Recently, AI security startup Irregular announced an $80 million funding round led by Sequoia Capital and Redpoint Ventures, with participation from Assaf Rappaport, CEO of Wiz. According to sources familiar with the transaction, Irregular's valuation has reached $450 million.
Irregular co-founder Dan Lahav said, "We believe a large portion of future economic activity will come from human-AI interaction and AI-to-AI interaction, which will break existing security systems in multiple ways." The company, originally named Pattern Labs, has already established an important position in AI evaluation: its work has been widely cited in the safety assessments of models such as Claude 3.7 Sonnet and OpenAI's o3 and o4-mini. In addition, SOLVE, Irregular's framework for scoring a model's vulnerability-detection capability, is widely used across the industry.
Although Irregular has made significant progress in assessing the risks of existing models, the goal of this funding round is more ambitious: identifying emergent risks and behaviors before a model is deployed. To that end, Irregular has built an elaborate system of simulation environments that allows for in-depth testing before a model is released.
Co-founder Omer Nevo explained, "We have complex network simulations in which AI plays both the attacker and the defender. So when a new model comes out, we can see where the defenses hold and where they fail."
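The adversarial setup Nevo describes can be illustrated with a toy simulation. The sketch below is purely hypothetical and is not Irregular's actual system: it pits an "attacker" against a "defender" of configurable skill over many rounds and reports the breach rate, which is the kind of aggregate signal such evaluations produce.

```python
import random

# Hypothetical attacker-vs-defender simulation, loosely inspired by the
# description in the article. All names, parameters, and rules are invented
# for illustration only.

def simulate_round(attacker_skill: float, defender_skill: float,
                   rng: random.Random) -> str:
    """One attack attempt: the attacker breaches if its roll beats the defender's."""
    attack_roll = rng.random() * attacker_skill
    defense_roll = rng.random() * defender_skill
    return "breach" if attack_roll > defense_roll else "blocked"

def evaluate_model(attacker_skill: float, defender_skill: float,
                   rounds: int = 1000, seed: int = 0) -> float:
    """Return the fraction of rounds in which the attacker breached the defense."""
    rng = random.Random(seed)
    breaches = sum(
        simulate_round(attacker_skill, defender_skill, rng) == "breach"
        for _ in range(rounds)
    )
    return breaches / rounds

if __name__ == "__main__":
    # Observe how the breach rate rises as the simulated attacker improves,
    # mirroring the "where defenses hold and where they fail" observation.
    for skill in (0.5, 1.0, 2.0):
        print(f"attacker skill {skill}: breach rate {evaluate_model(skill, 1.0):.2f}")
```

Running the script shows the breach rate climbing with attacker skill; a real evaluation would replace the random rolls with actual model behavior in a sandboxed network.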
As the AI industry pays more attention to security, the potential risks posed by frontier models are becoming increasingly evident. This summer, OpenAI overhauled its internal security measures to guard against possible corporate espionage. At the same time, AI models' ability to discover software vulnerabilities continues to improve, with profound implications for attackers and defenders alike.
For Irregular's founders, this is only the first of the security challenges posed by increasingly capable large language models. Lahav said, "If the goal of frontier labs is to create ever more sophisticated and powerful models, then our goal is to make those models safe. But this is a moving target, so there is a great deal of work still ahead."
Key points:
🌟 Irregular successfully raised $80 million, with a valuation of $450 million.
🔒 The company is committed to assessing the security risks of AI models, especially the potential risks of new models.
⚙️ Irregular tests the defensive capabilities of AI models through complex network simulations to enhance security.