Generating Adversarial Prompts from Incidents and Guidelines
Salapare, Kurt (2024)
Rapid deployment of Large Language Models (LLMs) has introduced significant security vulnerabilities, yet detailed incident reports rarely disclose the exact prompts or techniques used, which impedes comprehensive security analysis and the development of robust defenses. This research addresses this gap by designing and evaluating a novel AI agent capable of automatically generating adversarial prompts from existing security guidelines and reported incidents. The agent employs a two-phase workflow: first, processing unstructured text into a classified, metadata-rich dataset via LLM-driven paragraph classification, and second, utilizing these insights to generate executable adversarial prompts. We investigate the performance of various LLMs and prompt architectures (Descriptive, Concise, Few-Shot) within the agent, evaluating their computational efficiency, classification reliability, and the characteristics of the generated prompts. This systematic methodology offers a reproducible framework for proactive security analysis, providing law enforcement and developers with a structured approach to adversarial prompt generation.
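The two-phase workflow described above can be sketched in minimal form. This is an illustrative sketch only, not the thesis's implementation: the category names, prompt wording, and the `llm` callable interface are all assumptions introduced for clarity.

```python
from dataclasses import dataclass, field

# Hypothetical categories; the thesis's actual taxonomy may differ.
CATEGORIES = ["incident_report", "security_guideline", "other"]


@dataclass
class ClassifiedParagraph:
    """A paragraph plus the metadata produced in phase one."""
    text: str
    category: str
    metadata: dict = field(default_factory=dict)


def classify_paragraph(paragraph: str, llm) -> ClassifiedParagraph:
    """Phase one: LLM-driven classification of one paragraph.

    `llm` is any callable mapping a prompt string to a response string.
    """
    prompt = (
        f"Classify the following paragraph as one of {CATEGORIES}. "
        f"Answer with the category name only.\n\n{paragraph}"
    )
    return ClassifiedParagraph(text=paragraph, category=llm(prompt).strip())


def generate_adversarial_prompt(classified: list[ClassifiedParagraph], llm) -> str:
    """Phase two: generate an executable adversarial prompt from the
    classified, security-relevant excerpts."""
    relevant = [c.text for c in classified if c.category != "other"]
    prompt = (
        "Using these security guideline and incident excerpts, draft an "
        "adversarial test prompt:\n" + "\n".join(relevant)
    )
    return llm(prompt)


def run_pipeline(document: str, llm) -> tuple[list[ClassifiedParagraph], str]:
    """Run both phases over an unstructured document (paragraphs
    separated by blank lines)."""
    paragraphs = [p.strip() for p in document.split("\n\n") if p.strip()]
    classified = [classify_paragraph(p, llm) for p in paragraphs]
    return classified, generate_adversarial_prompt(classified, llm)
```

In practice the `llm` callable would wrap one of the models under evaluation, and the classification prompt would be swapped between the Descriptive, Concise, and Few-Shot architectures compared in the study.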