
Context Engineering: How BGA Teaches LLMs to Write Exploits
The Problem with Teaching AI to Hack

Teaching an LLM to write working exploits is more challenging than typical AI tasks. In most applications, "close enough" is good enough; vulnerability exploitation requires precise execution, where a single wrong character can make an entire exploit fail. Take this seemingly simple Java reflective call injection vulnerability:

```java
String className = request.getParameter("class");
Class.forName(className); // BUG: arbitrary class loading
```

This looks straightforward, but there's a catch: to exploit this vulnerability, the LLM must load the exact class name "jaz.Zer" to trigger Jazzer's detection. Not "jaz.Zero", not "java.Zer", not "jaz.zer". One character off and the entire exploit fails.
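For concreteness, here is a minimal sketch of how a bug like this is typically exercised under Jazzer. The entry point and FuzzedDataProvider type follow Jazzer's standard harness API, but the surrounding handler logic is a hypothetical stand-in for the vulnerable code above; the point is that only the exact honeypot class name "jaz.Zer" registers as a finding.

```java
import com.code_intelligence.jazzer.api.FuzzedDataProvider;

public class ReflectiveCallFuzzer {
    // Jazzer entry point: called once per fuzzer-generated input.
    public static void fuzzerTestOneInput(FuzzedDataProvider data) {
        // Stand-in for the vulnerable handler: the attacker controls the class name.
        String className = data.consumeRemainingAsString();
        try {
            // Jazzer's reflective-call sanitizer reports a high-severity finding
            // only when its honeypot class "jaz.Zer" is loaded here, so the
            // exploit input must reproduce that string byte for byte.
            Class.forName(className);
        } catch (ClassNotFoundException ignored) {
            // Almost every fuzzed string is not a real class; those runs pass silently.
        }
    }
}
```

In other words, out of the practically unbounded space of strings the model could emit, exactly one eight-character value counts as success, which is what makes the task so unforgiving.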