Blog
MCP CLI Tool – Interact with MCP Servers from the command line
A quick video demonstrating how to use the MCP CLI Tool and interact with…
MCP Inspector Tool – Dissect your Model Context Protocol (MCP) Servers (Streamable HTTP)
A quick video demonstrating how to use the MCP Inspector Tool and dissect your…
MCP Inspector Tool – Dissect your Model Context Protocol (MCP) Servers
A quick video demonstrating how to use the MCP Inspector Tool and dissect your…
Hide ASCII unicode tags in emojis or letters -> Direct and Indirect Prompt Injection!
A quick video demonstrating how to hide ASCII unicode tags in emojis or letters…
Run an AI LLM inside a PDF Document! Totally Insane! What comes next?
A quick video showing you how to run an LLM model inside a PDF…
Invisible Prompt Injection via a malicious MCP Server (Claude Desktop – MCP Server)
A quick video on Invisible Prompt Injection (ASCII Unicode tags) via a malicious MCP…
Prompt Injection via a malicious MCP Server (Claude Desktop – MCP Server)
A quick video on Prompt Injection via a malicious MCP Server (Claude Desktop –…
Setting up a basic MCP Server (Claude Desktop – MCP Server)
A quick video on Setting up a basic MCP Server (Claude Desktop – MCP…
Injecting a fake Tool call (Claude Desktop – MCP Server)
A quick video demonstrating Injecting a fake Tool call (Claude Desktop – MCP Server)
Simple Prompt Injection Kit for Evaluation and Exploitation (SPIKEE)
A quick video overview of the Simple Prompt Injection Kit for Evaluation and Exploitation…
Automating your Penetration Testing – Agentic Pentesting with PentestGPT.ai
A quick video demonstrating on how to automate your Penetration Testing with PentestGPT.ai
Meta Llama Firewall – Regex Scanner (Demo 4)
A quick video introducing Meta Llama Firewall – Regex Scanner (Demo 4)
Meta Llama Firewall – Codeshield (Demo 3)
A quick video introducing Meta Llama Firewall – Codeshield (Demo 3)
Meta Llama Firewall – Promptguard (Demo 2)
A quick video introducing Meta Llama Firewall – Promptguard (Demo 2)
Meta Llama Firewall – Input guardrails for LLMs (Demo 1)
A quick video introducing Meta Llama Firewall – Input guardrails for LLMs (Demo 1)
Jailbreak Challenge – Red Team Arena – redarena.ai
Quick video showcase of another Jailbreak challenge platform called redarena.ai
My Top 5 most profitable bug bounty programs
Quick video on showcasing the Top 5 Bug Bounty Platforms I had must success…
What if that LLM was agentic? Benchmark simulation how a model would perform as an agent. Agent Dojo
Video demo on the Agent Dojo tool which helps you simulate an LLM model…
Image Prompt Injection -> Agent -> MCP -> RCE (Remote Code Execution)
This video shows the attack chain of an image based prompt injection into an…
Building Burp Extensions with ChatGPT Prompts
A guide to creating Burp extensions using ChatGPT prompts is provided. This video demonstrates…
MyLLMAuto – Multi-chain LLM application CTF challenge
This is a Capture The Flag (CTF) application designed to teach prompt injection in…
Insecure Agents (1 demo) – Giskard AI challenge
This video features a walkthrough of 1 demo on Giskard AI (Insecure Agents)
Factually Wrong Statements (1 demo) – Giskard AI challenge
This video features a walkthrough of 1 demo on Giskard AI (Factually Wrong Statements)
Off Topic (1 demo) – Giskard AI challenge
This video features a walkthrough of 1 demo on Giskard AI (Off Topic)
Breaking the Boundaries of Strict Responses (1 demo) – Giskard AI challenge
This video features a walkthrough of 1 demo on Giskard AI (Breaking the Boundaries…
Breaking the Boundaries of Strict Responses (2 demos) – Giskard AI challenge
This video features a walkthrough of 2 demos on Giskard AI (Breaking the Boundaries…
Level 5 walkthrough of Hackmerlin prompt injection challenge
This video features a walkthrough of Level 5 of Hackmerlin’s prompt injection challenge.
Level 4 walkthrough of Hackmerlin prompt injection challenge
This video features a walkthrough of Level 4 of Hackmerlin’s prompt injection challenge.
Level 3 walkthrough of Hackmerlin prompt injection challenge
This video features a walkthrough of Level 3 of Hackmerlin’s prompt injection challenge.
Level 1 and 2 walkthrough of Hackmerlin prompt injection challenge
This video features a walkthrough of Level 1 and 2 of Hackmerlin’s prompt injection…
AI Escape Room Pango’s Dungeon Level 5 from pangea.cloud
This video features a walkthrough of Level 5 of the AI Escape Room Pango’s…
AI Escape Room Pango’s Dungeon Level 4 from pangea.cloud
This video features a walkthrough of Level 4 of the AI Escape Room Pango’s…
AI Escape Room Pango’s Dungeon Level 3 from pangea.cloud
This video features a walkthrough of Level 3 of the AI Escape Room Pango’s…
AI Escape Room Pango’s Dungeon Level 2 from pangea.cloud
This video features a walkthrough of Level 2 of the AI Escape Room Pango’s…
AI Escape Room Pango’s Dungeon Level 1 from pangea.cloud
This video features a walkthrough of Level 1 of the AI Escape Room Pango’s…
OWASP Agentic AI Vulnerabilities – Quick Overview
This video goes through the basic vulnerabilities specific to Agentic AI systems.
Burp Suite & AI: The Future of Vulnerability Analysis, Bounty Prompt Burp Extension
Burp Suite and AI are powerful tools on their own, but together they become…
Multi-vector attack against an MCP server – Demo
This challenge demonstrates a sophisticated multi-vector attack against an MCP server. It requires chaining…
Malicious Code Execution in an MCP server – Demo
This challenge demonstrates a malicious code execution vulnerability in an MCP server. The MCP…
Token Theft vulnerability in an MCP server – Demo
This challenge demonstrates a token theft vulnerability in an MCP server. The MCP server…
Excessive permission scope in an MCP server – Demo
This challenge demonstrates the dangers of excessive permission scope in an MCP server. The…
Agentic AI Guardrails Playground (Invariant Labs)
Invariant Explorer, accessible at explorer.invariantlabs.ai, is an open-source observability tool designed to help developers…
Claude executing script via MCP server leading to exfiltration of bash shell (RCE – Remote Code Execution)
Claude executing a script via the MCP (Model Context Protocol) server demonstrates a critical…
MCP Tool poisoning demo. Are you sure your MCP servers are not malicious?
Model Context Protocol poisoning is an emerging AI attack vector where adversaries manipulate the…
Promptfoo a very powerful and free LLM security scanner
Promptfoo is an open-source platform designed to help developers test, evaluate, and secure large…
Claude Desktop with Desktop Commander MCP to control your machine via AI
Claude Desktop, when integrated with Desktop Commander MCP, enables seamless AI-driven control of your…
Scan your MCP servers for vulnerabilities specific to agentic AI
The mcp-scan project by Invariant Labs is a security auditing tool designed to analyze…
Image prompt injection to invoke MCP tools
Visual prompt injection targeting the Model Context Protocol (MCP) is particularly dangerous because it…
Indirect Prompt Injection into coding assistants like GitHub Copilot or Cursor
Indirect prompt injection via instruction files in tools like GitHub Copilot or Cursor occurs…
Agentic Radar – free agentic code scanning
Agentic Radar is a security scanner designed to analyze and assess agentic systems, providing…
Burp MCP Server with Claude Desktop – Revolution in App Penetration Testing
The MCP Server is a Burp Suite extension that enables integration with AI clients…
AI HTTP Analyzer – Burp AI features
This video explains the new AI HTTP Analyzer AI Feature
Shadow Repeater – Burp AI features
This video explains the new Shadow Repeater AI Feature
Finding broken access control with AI – Burp AI features
This video explains the new Finding broken access control with AI Feature
Record login sequence with AI – Burp AI features
This video explains the new Record login sequence with AI Feature
Explainer Feature – Burp AI features
This video explains the new Burp AI Explainer Feature
Using LLM models to jailbreak LLM models (Jailbreak to Jailbreak)
The J2 Playground by Scale AI is an interactive platform designed to test the…
Tensortrust AI – Prompt Injection and Prompt Hardening Game
Tensor Trust is an online game developed by researchers at UC Berkeley to study…
Prompt Map – free tool to test for Prompt Leakage, AI Security Expert
PromptMap is a specialized LLM security scanner designed to detect and analyze prompt leaks—instances where…
Quick overview of Garak – a free LLM vulnerability scanner
The Garak LLM vulnerability scanner is an open-source tool developed by NVIDIA to assess…
Prompt Injection into terminals / IDEs via ANSI escape code characters
Prompt injection threats in terminals and IDEs via ANSI escape characters exploit the ability…
AI Agent Denial of Service (DoS), Rabbit R1, AI Security Expert
When AI agents autonomously browse websites and encounter tasks that are intentionally unsolvable or…
AI Agent Data Exfiltration, Rabbit R1, AI Security Expert
AI agents that autonomously browse the web introduce significant security risks, particularly related to…
OWASP Top 10 LLM10:2025 Unbounded Consumption
Unbounded Consumption refers to scenarios where Large Language Models (LLMs) are subjected to excessive…
OWASP Top 10 LLM09:2025 Misinformation
Misinformation refers to the generation of false or misleading information by Large Language Models…
OWASP Top 10 LLM08:2025 Vector and Embedding Weaknesses
Vector and Embedding Weaknesses refers to security vulnerabilities in Large Language Models (LLMs) that…
OWASP Top 10 LLM07:2025 System Prompt Leakage
System Prompt Leakage refers to the risk that system prompts—internal instructions guiding the behavior…
OWASP Top 10 LLM06:2025 Excessive Agency
Excessive Agency refers to the vulnerability arising when Large Language Models (LLMs) are granted…
OWASP Top 10 LLM05:2025 Improper Output Handling
Improper Output Handling refers to the inadequate validation and sanitization of outputs generated by…
OWASP Top 10 LLM04:2025 Data and Model Poisoning
Data and Model Poisoning refers to the deliberate manipulation of an LLM’s training data…
OWASP Top 10 LLM03:2025 Supply Chain
Supply Chain refers to vulnerabilities in the development and deployment processes of Large Language…
OWASP Top 10 LLM02:2025 Sensitive Information Disclosure
Sensitive Information Disclosure refers to the unintended exposure of confidential data—such as personal identifiable…
OWASP Top 10 LLM01:2025 Prompt Injection
Prompt Injection refers to a security vulnerability where adversarial inputs manipulate large language models…
Prompt injection via audio or video file
Audio and video prompt injection risks involve malicious manipulation of inputs to deceive AI…
LLM image misclassification and the consequences
Misclassifying images in multimodal AI systems can lead to unintended or even harmful actions,…
LLMs reading CAPTCHAs – threat to agent systems?
LLMs with multimodal capabilities can be leveraged to read and solve CAPTCHAs in agentic…
Indirect conditional prompt injection via documents
Conditional indirect prompt injection is an advanced attack where hidden instructions in external content—such…
Indirect Prompt Injection with documents
Indirect prompt injection with documents is an attack technique where adversarial instructions are embedded…
LLM01: Visual Prompt Injection | Image based prompt injection
Multi-modal prompt injection with images is a sophisticated attack that exploits the integration of…
LLM01: Indirect Prompt Injection | Exfiltration to attacker
Data exfiltration from a large language model (LLM) can be performed using markdown formatting…
Prompt Airlines – AI Security Challenge – Flag 5
In this video we take a look at solving the promptairlines.com challenge (Flag 5)
Prompt Airlines – AI Security Challenge – Flag 4
In this video we take a look at solving the promptairlines.com challenge (Flag 4)
Prompt Airlines – AI Security Challenge – Flag 3
In this video we take a look at solving the promptairlines.com challenge (Flag 3)
Prompt Airlines – AI Security Challenge – Flag 1 and 2
In this video we take a look at solving the promptairlines.com challenge (Flag 1…
Prompt leakage and indirect prompt injections in Grok X AI
In this video we will take a look at various prompt injection issues in…
myllmbank.com Walkthrough Flag 3
In this video we will take a look at flag 3 of myllmbank.com
myllmbank.com Walkthrough Flag 2
In this video we will take a look at flag 2 of myllmbank.com
myllmbank.com Walkthrough Flag 1
In this video we will take a look at flag 1 of myllmbank.com
SecOps Group AI/ML Pentester Mock Exam 2
This is a walkthrough of the SecOps Group AI/ML Pentester Mock Exam 2
SecOps Group AI/ML Pentester Mock Exam 1
This is a walkthrough of SecOps Group AI/ML Pentester Mock Exam 1
CSRF potential in LLMs
Cross-Site Request Forgery (CSRF) via prompt injection through a GET request is a potential…
Prompt Injection via clipboard
Prompt injection via clipboard copy/paste is a security concern where malicious text, copied into…
KONTRA OWASP LLM Top 10 Playground
ONTRA offers an interactive training module titled “OWASP Top 10 for Large Language Model…
Pokebot Health Agent to practice prompt injection
A simple Health Agent to practice prompt injection
Certified AI/ML Penetration Tester
The Certified AI/ML Pentester (C-AI/MLPen) is an intermediate-level certification offered by The SecOps Group,…
Image Prompt injection and double instructions
Prompt injection via images involves embedding hidden or overt textual commands within visual elements…