Garak (the Generative AI Red-teaming and Assessment Kit) is an open-source LLM vulnerability scanner developed by NVIDIA for assessing security risks in large language models. It automates adversarial probing, sending targeted test prompts that check for weaknesses such as prompt injection, training-data leakage, and jailbreaking. Garak supports a range of model backends, from locally hosted models (e.g., Hugging Face Transformers) to cloud APIs such as OpenAI's, and writes structured reports (JSON Lines logs plus an HTML summary) that highlight where a model failed. Each probe is paired with one or more detectors that score the model's responses, so failures can be aggregated per vulnerability class. With its library of built-in probes, and support for custom ones, security researchers and AI developers can systematically evaluate model robustness, prioritize mitigations, and harden AI systems against exploitation.
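
As a concrete illustration, here is a minimal sketch that drives a scan from Python and tallies the results. It assumes garak is installed (`pip install garak`) and uses its documented `--model_type`, `--model_name`, `--probes`, and `--report_prefix` flags; the report field names (`entry_type`, `probe`, `detector`, `passed`, `total`) and the output location reflect recent garak versions and may shift between releases. `garak_demo` is just a placeholder prefix, and the `huggingface`/`gpt2` target will download GPT-2 locally on first run.

```python
"""Sketch: run a garak scan and summarize its JSONL report.

Assumptions (not guaranteed across garak versions):
- the report lands in the working directory as <prefix>.report.jsonl
  when --report_prefix is given;
- "eval" records carry probe, detector, passed, and total fields.
"""
import json
import subprocess

REPORT_PREFIX = "garak_demo"  # hypothetical prefix for this run

# Invoke garak's documented CLI: scan a locally hosted GPT-2 model
# with the encoding probe module (prompt injection via encoded payloads).
subprocess.run(
    [
        "python", "-m", "garak",
        "--model_type", "huggingface",
        "--model_name", "gpt2",
        "--probes", "encoding",
        "--report_prefix", REPORT_PREFIX,
    ],
    check=True,
)

# Each line of the .report.jsonl log is one JSON record; "eval" records
# hold per-probe, per-detector pass counts.
with open(f"{REPORT_PREFIX}.report.jsonl", encoding="utf-8") as fh:
    for line in fh:
        record = json.loads(line)
        if record.get("entry_type") != "eval":
            continue
        total = record["total"]
        rate = record["passed"] / total if total else 0.0
        print(f'{record["probe"]:40s} {record["detector"]:40s} '
              f'{record["passed"]}/{total} passed ({rate:.0%})')
```

Driving the CLI through `subprocess` keeps the example decoupled from garak's internal Python API, which is not a documented stability surface; `garak --list_probes` enumerates the available probe modules if you want to target a different vulnerability class.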