Vulnerable LLM/AI Labs
1. Gandalf by Lakera 🧙
2. Prompt Injection Playground
3. LLM-Vuln-Lab
4. LMQL Prompt Sandbox
5. OpenPromptGame
🔧 Tools for LLM/AI Pentesting
Tool
Use Case
🧪 Vulnerable AI Models & Architectures
🔹 Mini-GPT, Vicuna, Mistral (Locally Hosted)
🔹 LangChain & LlamaIndex Demo Apps
🔹 HoneyPrompt Project
🌐 Online Platforms & CTFs
🔹 AI Village at DEF CON (Labs & Recordings)
🔹 MITRE ATLAS
📚 Research, Guides, and Learning Materials
Resource
Description
🧭 Learning Path for LLM/AI Pentesting
Phase
Focus
Tools/Resources
11. AdvPromptLab
12. PromptInjection.ai (Red Team Simulator)
13. LLM Attacks by Hugging Face
14. RAG Vulnerability Playground
🧪 LLM/AI Attack Datasets for Research and Practice
Dataset
Use Case
⚙️ LLM-Specific Evaluation & Attack Tools (More Advanced)
Tool
Description
🧠 Red Team Training and Research Projects
🔹 DEF CON AI Red Teaming Datasets
🔹 Stanford CRFM Jailbreak Taxonomy
🧱 Building Custom Vulnerable AI Apps
App Type
What to Include
🔐 Defensive Techniques & Countermeasure Testing
Technique
Defense Tool
📚 Next-Level Resources to Follow
Resource
Why It’s Useful
Last updated