
Cybersecurity AI (CAI)

A lightweight, ergonomic framework for building bug bounty-ready Cybersecurity AIs (CAIs).

Demo recordings (asciinema): CAI on JWT@PortSwigger CTF · CAI on HackableII Boot2Root CTF

🎯 Milestones

  • HTB top 90 Spain (5 days)
  • HTB ranking top 50 Spain (6 days)
  • HTB ranking top 30 Spain (7 days)
  • HTB ranking top 500 World (7 days)
  • HTB Human vs AI CTF: top 1 (AIs) world
  • HTB Human vs AI CTF: top 1 Spain
  • HTB Human vs AI CTF: top 20 World
  • HTB Human vs AI CTF: 750 $
  • Mistral AI Robotics Hackathon: 2500 $
  • Bug rewards: 250 $

📦 Package Attributes

version · downloads · Linux · OS X · Windows · Android · Discord · arXiv

⚠️ CAI is in active development, so don't expect it to work flawlessly. Instead, contribute by raising an issue or sending a PR.

Access to this library and the use of the information and materials it contains (or portions thereof) is prohibited where such access or use violates applicable laws or regulations. The authors in no way encourage or promote unauthorized tampering with running systems, which can cause serious human harm and material damage.

Please do not use this source code for cybercrime; pentest for good instead. By downloading, using, or modifying this source code, you agree to the terms of the LICENSE and the limitations outlined in the DISCLAIMER file.

Motivation

Why CAI?

The cybersecurity landscape is undergoing a dramatic transformation as AI becomes increasingly integrated into security operations. We predict that by 2028, AI-powered security testing tools will outnumber human pentesters. This shift represents a fundamental change in how we approach cybersecurity challenges. AI is not just another tool - it's becoming essential for addressing complex security vulnerabilities and staying ahead of sophisticated threats. As organizations face more advanced cyber attacks, AI-enhanced security testing will be crucial for maintaining robust defenses.

This work builds upon prior efforts[1] and similarly, we believe that democratizing access to advanced cybersecurity AI tools is vital for the entire security community. That's why we're releasing Cybersecurity AI (CAI) as an open source framework. Our goal is to empower security researchers, ethical hackers, and organizations to build and deploy powerful AI-driven security tools. By making these capabilities openly available, we aim to level the playing field and ensure that cutting-edge security AI technology isn't limited to well-funded private companies or state actors.

Bug Bounty programs have become a cornerstone of modern cybersecurity, providing a crucial mechanism for organizations to identify and fix vulnerabilities in their systems before they can be exploited. These programs have proven highly effective at securing both public and private infrastructure, with researchers discovering critical vulnerabilities that might have otherwise gone unnoticed. CAI is specifically designed to enhance these efforts by providing a lightweight, ergonomic framework for building specialized AI agents that can assist in various aspects of Bug Bounty hunting - from initial reconnaissance to vulnerability validation and reporting. Our framework aims to augment human expertise with AI capabilities, helping researchers work more efficiently and thoroughly in their quest to make digital systems more secure.

Ethical principles behind CAI

You might be wondering whether releasing CAI in the wild, given its capabilities and security implications, is ethical. Our decision to open-source this framework is guided by two core ethical principles:

  1. Democratizing Cybersecurity AI: We believe that advanced cybersecurity AI tools should be accessible to the entire security community, not just well-funded private companies or state actors. By releasing CAI as an open source framework, we aim to empower security researchers, ethical hackers, and organizations to build and deploy powerful AI-driven security tools, leveling the playing field in cybersecurity.

  2. Transparency in AI Security Capabilities: Based on our research results, our understanding of the technology, and our dissection of top technical reports, we argue that current LLM vendors are underestimating their models' cybersecurity capabilities. This is extremely dangerous and misleading. By developing CAI openly, we provide a transparent benchmark of what AI systems can actually do in cybersecurity contexts, enabling more informed decisions about security postures.

CAI is built on the following core principles:

  • Cybersecurity-oriented AI framework: CAI is specifically designed for cybersecurity use cases, aiming at semi- and fully-automated offensive and defensive security tasks.
  • Open source, free for research: CAI is open source and free for research purposes. We aim to democratize access to AI and cybersecurity. For professional or commercial use, including on-premise deployments, dedicated technical support, and custom extensions, reach out to obtain a license.
  • Lightweight: CAI is designed to be fast and easy to use.
  • Modular and agent-centric design: CAI operates on the basis of agents and agentic patterns, which allows flexibility and scalability. You can easily add the agents and patterns best suited to your cybersecurity target (see the sketch after this list).
  • Tool integration: CAI ships with built-in tools and lets you easily integrate your own tools with your own logic.
  • Integrated logging and tracing: built on phoenix, the open source tracing and logging tool for LLMs, giving you detailed traceability of agents and their execution (see the tracing snippet below).
  • Multi-model support: more than 300 models supported, powered by LiteLLM. The most popular providers:
      • Anthropic: Claude 3.7, Claude 3.5, Claude 3, Claude 3 Opus
      • OpenAI: O1, O1 Mini, O3 Mini, GPT-4o, GPT-4.5 Preview
      • DeepSeek: DeepSeek V3, DeepSeek R1
      • Ollama: Qwen2.5 72B, Qwen2.5 14B, and many more
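
As a rough illustration of the agent-and-tool pattern described above, here is a minimal, self-contained sketch. The Agent class, the scan_ports tool, and the model strings are hypothetical stand-ins modeled on swarm-style agent frameworks, not CAI's actual API; consult the project documentation for the real interface.

# Minimal sketch of the agent-centric pattern (hypothetical API,
# modeled on swarm-style frameworks -- not CAI's confirmed interface).

def scan_ports(target: str) -> str:
    """Custom tool: wrap your own logic (here, a stubbed port scan)."""
    return f"Open ports on {target}: 22, 80, 443"

class Agent:
    """An agent bundles a model, instructions, and callable tools."""
    def __init__(self, name: str, model: str, instructions: str, tools: list):
        self.name = name
        self.model = model  # any LiteLLM-routable model string
        self.instructions = instructions
        self.tools = {tool.__name__: tool for tool in tools}

# Agents compose into patterns (e.g., recon -> exploit -> report).
recon_agent = Agent(
    name="recon",
    model="claude-3-7-sonnet",  # swappable: "gpt-4o", "deepseek-r1", "ollama/qwen2.5:14b", ...
    instructions="Enumerate the target and report open services.",
    tools=[scan_ports],
)

print(recon_agent.tools["scan_ports"]("10.0.0.5"))

Because the model is just a LiteLLM-routable string, switching providers is a one-line change rather than a code rewrite.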
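For tracing, phoenix ships a local UI that can be launched with its documented entry point, as sketched below; how CAI wires its agent spans into phoenix is framework-specific and not shown here.

# Launch the local Phoenix app to inspect LLM traces in a browser.
# Requires `pip install arize-phoenix`; CAI-specific instrumentation
# of agent runs is omitted in this sketch.
import phoenix as px

session = px.launch_app()  # starts a local Phoenix server and UI
print(session.url)         # open this URL to browse collected traces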

Closed-source alternatives

Cybersecurity AI is a critical field, yet many groups are misguidedly pursuing it through closed-source methods for pure economic return, leveraging similar techniques and building upon existing closed-source (often third-party owned) models. This approach squanders valuable engineering resources and results in redundant effort, as these groups often end up reinventing the wheel. We also keep track of several closed-source initiatives attempting to leverage GenAI and agentic frameworks in cybersecurity AI.


[1] Deng, G., Liu, Y., Mayoral-Vilches, V., Liu, P., Li, Y., Xu, Y., ... & Rass, S. (2024). PentestGPT: Evaluating and harnessing large language models for automated penetration testing. In 33rd USENIX Security Symposium (USENIX Security 24) (pp. 847-864).

Citation

If you want to cite our work, please use the following format:

@misc{mayoralvilches2025caiopenbugbountyready,
      title={CAI: An Open, Bug Bounty-Ready Cybersecurity AI},
      author={Víctor Mayoral-Vilches and Luis Javier Navarrete-Lozano and María Sanz-Gómez and Lidia Salas Espejo and Martiño Crespo-Álvarez and Francisco Oca-Gonzalez and Francesco Balassone and Alfonso Glera-Picón and Unai Ayucar-Carbajo and Jon Ander Ruiz-Alcalde and Stefan Rass and Martin Pinzger and Endika Gil-Uriarte},
      year={2025},
      eprint={2504.06017},
      archivePrefix={arXiv},
      primaryClass={cs.CR},
      url={https://arxiv.org/abs/2504.06017},
}

Acknowledgements

CAI was initially developed by Alias Robotics and co-funded by the European EIC accelerator project RIS (GA 101161136), HORIZON-EIC-2023-ACCELERATOR-01 call. The original agentic principles are inspired by OpenAI's swarm library and translated into newer prototypes. This project also makes use of other relevant open source building blocks, including LiteLLM and phoenix.