
OpenAI reveals cyberdefense breakthroughs

OpenAI has shared some of the breakthrough cybersecurity work it’s supported with its grant program

Martin Crowley
June 21, 2024

OpenAI launched a Cybersecurity Grant Program in 2023 to “equip cyber defenders with the most advanced AI models and to empower ground-breaking research at the nexus of cybersecurity and artificial intelligence,” and has now shared some of the breakthrough projects it’s supported.

Since launching the program, OpenAI has supported a multitude of cyber defense projects, including:

Wagner Lab

The security lab at UC Berkeley is working with OpenAI to develop defenses against prompt-injection attacks on large language models (LLMs).
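To illustrate the problem space (not Wagner Lab's actual methods, which are unpublished here), a naive first line of defense screens untrusted input for instruction-like phrases and delimits it so the model treats it as data. The patterns and tag names below are invented for this sketch:

```python
import re

# Hypothetical illustration: a simple screen that flags untrusted text
# containing instruction-like phrases before it reaches an LLM.
# Real prompt-injection defenses are far more sophisticated.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard (the )?(above|system prompt)",
    r"you are now",
]

def looks_like_injection(untrusted_text: str) -> bool:
    """Return True if the text matches a known injection phrase."""
    lowered = untrusted_text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

def wrap_untrusted(untrusted_text: str) -> str:
    """Delimit untrusted content so the model can treat it as data, not instructions."""
    return f"<untrusted>\n{untrusted_text}\n</untrusted>"
```

Pattern lists like this are easy to evade, which is precisely why research-grade defenses of the kind this grant funds are needed.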

Coguard

Coguard is using AI and OpenAI technology to detect and reduce software misconfigurations, a common cause of security incidents; current detection methods rely on outdated, rules-based policies.
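For context, the rules-based approach the article says Coguard aims to improve on can be sketched as a fixed table of checks over configuration values. The config keys and rules below are invented for illustration:

```python
# Hypothetical sketch of rules-based misconfiguration checking.
# Each rule maps a config key to a predicate its value must satisfy.
RULES = {
    "debug": lambda v: v is False,            # debug mode must be off in production
    "tls_min_version": lambda v: v >= 1.2,    # require a modern TLS version
    "admin_password": lambda v: v not in ("", "admin", "password"),
}

def find_misconfigurations(config: dict) -> list[str]:
    """Return the keys whose values violate a rule."""
    return [key for key, ok in RULES.items() if key in config and not ok(config[key])]
```

The weakness is that the rule set is static: anything not anticipated by a rule slips through, which is the gap an AI-driven approach tries to close.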

Breuer Lab

Breuer Lab is working with OpenAI to develop new techniques that defend neural networks against attacks without compromising accuracy or efficiency.

Boston University

OpenAI is supporting the security lab at Boston University to improve the ability of LLMs to identify and fix code vulnerabilities, so cyber defenders can patch flaws before attackers exploit them.
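As a toy example of the vulnerability class such work targets (not Boston University's actual tooling), the snippet below heuristically flags SQL queries built by string interpolation, alongside a parameterized rewrite of the kind an LLM-based fixer might propose. The regex and example strings are invented for illustration:

```python
import re

# Illustrative only: a crude static check for one vulnerability class,
# SQL built via string formatting (a classic SQL-injection risk).
def flags_sql_injection(source: str) -> bool:
    """Heuristically flag execute() calls whose query uses f-string or % formatting."""
    return bool(re.search(r"execute\(\s*f?[\"'].*(\{|%s)", source))

vulnerable = 'cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")'
safe = 'cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))'
```

A pattern match like this only detects the flaw; the harder problem the grant work addresses is having the model generate a correct, behavior-preserving fix such as the parameterized query above.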

Why is OpenAI running the program?

Since it launched the program, OpenAI has received over 600 applications, highlighting strong interest in the potential of using AI to prevent cybersecurity attacks. OpenAI is giving many members of the cyber defense community free access to ChatGPT Plus, believing its technology will help cyber defenders with tasks such as “writing code to analyze artifacts during investigations, creating log parsers, and summarizing incident statuses within strict time constraints.”