LM Report – Burp AI features
This video explains the new LM Report AI Feature
LM Report – Burp AI features Read Post »
This video explains the new LM Report AI Feature
LM Report – Burp AI features Read Post »
This video explains the new AI HTTP Analyzer AI Feature
AI HTTP Analyzer – Burp AI features Read Post »
This video explains the new Shadow Repeater AI Feature
Shadow Repeater – Burp AI features Read Post »
This video explains the new Finding broken access control with AI Feature
Finding broken access control with AI – Burp AI features Read Post »
This video explains the new Record login sequence with AI Feature
Record login sequence with AI – Burp AI features Read Post »
This video explains the new Burp AI Explainer Feature
Explainer Feature – Burp AI features Read Post »
This video explains the new Burp AI Explore Feature
Explore Feature – Burp AI features Read Post »
The J2 Playground by Scale AI is an interactive platform designed to test the resilience of large language models (LLMs)
Using LLM models to jailbreak LLM models (Jailbreak to Jailbreak) Read Post »
Tensor Trust is an online game developed by researchers at UC Berkeley to study prompt injection vulnerabilities in AI systems.
Tensortrust AI – Prompt Injection and Prompt Hardening Game Read Post »
PromptMap is a specialized LLM security scanner designed to detect and analyze prompt leaks—instances where a model inadvertently exposes hidden system
Prompt Map – free tool to test for Prompt Leakage, AI Security Expert Read Post »
The Garak LLM vulnerability scanner is an open-source tool developed by NVIDIA to assess security risks in large language models
Quick overview of Garak – a free LLM vulnerability scanner Read Post »
Prompt injection threats in terminals and IDEs via ANSI escape characters exploit the ability of these sequences to manipulate text
Prompt Injection into terminals / IDEs via ANSI escape code characters Read Post »
When AI agents autonomously browse websites and encounter tasks that are intentionally unsolvable or computationally intensive, they become susceptible to
AI Agent Denial of Service (DoS), Rabbit R1, AI Security Expert Read Post »
AI agents that autonomously browse the web introduce significant security risks, particularly related to data exfiltration through covert copy-and-paste operations
AI Agent Data Exfiltration, Rabbit R1, AI Security Expert Read Post »
Unbounded Consumption refers to scenarios where Large Language Models (LLMs) are subjected to excessive and uncontrolled usage, leading to resource
OWASP Top 10 LLM10:2025 Unbounded Consumption Read Post »
Misinformation refers to the generation of false or misleading information by Large Language Models (LLMs), which, despite appearing credible, can
OWASP Top 10 LLM09:2025 Misinformation Read Post »
Vector and Embedding Weaknesses refers to security vulnerabilities in Large Language Models (LLMs) that arise from improper handling of vector
OWASP Top 10 LLM08:2025 Vector and Embedding Weaknesses Read Post »
System Prompt Leakage refers to the risk that system prompts—internal instructions guiding the behavior of Large Language Models (LLMs)—may inadvertently
OWASP Top 10 LLM07:2025 System Prompt Leakage Read Post »
Excessive Agency refers to the vulnerability arising when Large Language Models (LLMs) are granted more functionality, permissions, or autonomy than
OWASP Top 10 LLM06:2025 Excessive Agency Read Post »