Blog
AI HTTP Analyzer – Burp AI features
This video explains the new AI HTTP Analyzer AI Feature
Shadow Repeater – Burp AI features
This video explains the new Shadow Repeater AI Feature
Finding broken access control with AI – Burp AI features
This video explains the new Finding broken access control with AI Feature
Record login sequence with AI – Burp AI features
This video explains the new Record login sequence with AI Feature
Explainer Feature – Burp AI features
This video explains the new Burp AI Explainer Feature
Using LLM models to jailbreak LLM models (Jailbreak to Jailbreak)
The J2 Playground by Scale AI is an interactive platform designed to test the…
Tensortrust AI – Prompt Injection and Prompt Hardening Game
Tensor Trust is an online game developed by researchers at UC Berkeley to study…
Prompt Map – free tool to test for Prompt Leakage, AI Security Expert
PromptMap is a specialized LLM security scanner designed to detect and analyze prompt leaks—instances where…
Quick overview of Garak – a free LLM vulnerability scanner
The Garak LLM vulnerability scanner is an open-source tool developed by NVIDIA to assess…
Prompt Injection into terminals / IDEs via ANSI escape code characters
Prompt injection threats in terminals and IDEs via ANSI escape characters exploit the ability…
AI Agent Denial of Service (DoS), Rabbit R1, AI Security Expert
When AI agents autonomously browse websites and encounter tasks that are intentionally unsolvable or…
AI Agent Data Exfiltration, Rabbit R1, AI Security Expert
AI agents that autonomously browse the web introduce significant security risks, particularly related to…
OWASP Top 10 LLM10:2025 Unbounded Consumption
Unbounded Consumption refers to scenarios where Large Language Models (LLMs) are subjected to excessive…
OWASP Top 10 LLM09:2025 Misinformation
Misinformation refers to the generation of false or misleading information by Large Language Models…
OWASP Top 10 LLM08:2025 Vector and Embedding Weaknesses
Vector and Embedding Weaknesses refers to security vulnerabilities in Large Language Models (LLMs) that…
OWASP Top 10 LLM07:2025 System Prompt Leakage
System Prompt Leakage refers to the risk that system prompts—internal instructions guiding the behavior…
OWASP Top 10 LLM06:2025 Excessive Agency
Excessive Agency refers to the vulnerability arising when Large Language Models (LLMs) are granted…
OWASP Top 10 LLM05:2025 Improper Output Handling
Improper Output Handling refers to the inadequate validation and sanitization of outputs generated by…
OWASP Top 10 LLM04:2025 Data and Model Poisoning
Data and Model Poisoning refers to the deliberate manipulation of an LLM’s training data…
OWASP Top 10 LLM03:2025 Supply Chain
Supply Chain refers to vulnerabilities in the development and deployment processes of Large Language…
OWASP Top 10 LLM02:2025 Sensitive Information Disclosure
Sensitive Information Disclosure refers to the unintended exposure of confidential data—such as personal identifiable…
OWASP Top 10 LLM01:2025 Prompt Injection
Prompt Injection refers to a security vulnerability where adversarial inputs manipulate large language models…
Prompt injection via audio or video file
Audio and video prompt injection risks involve malicious manipulation of inputs to deceive AI…
LLM image misclassification and the consequences
Misclassifying images in multimodal AI systems can lead to unintended or even harmful actions,…
LLMs reading CAPTCHAs – threat to agent systems?
LLMs with multimodal capabilities can be leveraged to read and solve CAPTCHAs in agentic…
Indirect conditional prompt injection via documents
Conditional indirect prompt injection is an advanced attack where hidden instructions in external content—such…
Indirect Prompt Injection with documents
Indirect prompt injection with documents is an attack technique where adversarial instructions are embedded…
LLM01: Visual Prompt Injection | Image based prompt injection
Multi-modal prompt injection with images is a sophisticated attack that exploits the integration of…
LLM01: Indirect Prompt Injection | Exfiltration to attacker
Data exfiltration from a large language model (LLM) can be performed using markdown formatting…
Prompt Airlines – AI Security Challenge – Flag 5
In this video we take a look at solving the promptairlines.com challenge (Flag 5)
Prompt Airlines – AI Security Challenge – Flag 4
In this video we take a look at solving the promptairlines.com challenge (Flag 4)
Prompt Airlines – AI Security Challenge – Flag 3
In this video we take a look at solving the promptairlines.com challenge (Flag 3)
Prompt Airlines – AI Security Challenge – Flag 1 and 2
In this video we take a look at solving the promptairlines.com challenge (Flag 1…
Prompt leakage and indirect prompt injections in Grok X AI
In this video we will take a look at various prompt injection issues in…
myllmbank.com Walkthrough Flag 3
In this video we will take a look at flag 3 of myllmbank.com
myllmbank.com Walkthrough Flag 2
In this video we will take a look at flag 2 of myllmbank.com
myllmbank.com Walkthrough Flag 1
In this video we will take a look at flag 1 of myllmbank.com
SecOps Group AI/ML Pentester Mock Exam 2
This is a walkthrough of the SecOps Group AI/ML Pentester Mock Exam 2
SecOps Group AI/ML Pentester Mock Exam 1
This is a walkthrough of SecOps Group AI/ML Pentester Mock Exam 1
CSRF potential in LLMs
Cross-Site Request Forgery (CSRF) via prompt injection through a GET request is a potential…
Prompt Injection via clipboard
Prompt injection via clipboard copy/paste is a security concern where malicious text, copied into…
KONTRA OWASP LLM Top 10 Playground
ONTRA offers an interactive training module titled “OWASP Top 10 for Large Language Model…
Pokebot Health Agent to practice prompt injection
A simple Health Agent to practice prompt injection
Certified AI/ML Penetration Tester
The Certified AI/ML Pentester (C-AI/MLPen) is an intermediate-level certification offered by The SecOps Group,…
Image Prompt injection and double instructions
Prompt injection via images involves embedding hidden or overt textual commands within visual elements…
OpenAI Playground
The OpenAI Playground is an interactive web-based platform that allows users to experiment with…
Prompt injection and exfiltration in Chats apps
Data exfiltration in messaging apps through unfurling exploits the feature where apps automatically generate…
Gandalf – AI bot to practice prompt injections
Gandalf AI, developed by Lakera, is an interactive online game designed to educate users…
Google Colab Playground for LLMs
Google Colaboratory, commonly known as Google Colab, is a cloud-based Jupyter notebook environment that…
STRIDE GPT – Threat Modeling with LLMs
STRIDE GPT is an AI-powered threat modeling tool that leverages Large Language Models (LLMs)…
OS Command Injection in LLMs
OS command injection in Large Language Models (LLMs) involves exploiting the model’s ability to…
Hallucinations in LLMs
Hallucination in AI refers to the phenomenon where a model generates information that appears…
Prompt Injection – Prompt Leakage
Prompt leakage refers to the unintended exposure of sensitive or proprietary prompts used to…
HTML Injection in LLMs
HTML injection in Large Language Models (LLMs) involves embedding malicious HTML code within prompts…
RAG data poisoning via documents in ChatGPT
RAG (Retrieval-Augmented Generation) poisoning occurs when a malicious or manipulated document is uploaded to…
RAG data poisoning in ChatGPT
RAG (Retrieval-Augmented Generation) poisoning from a document uploaded involves embedding malicious or misleading data…
Deleting ChatGPT memories via prompt injection
Deleting memories in AI refers to the deliberate removal of stored information or context…
Updating ChatGPT memories via prompt injection
Injecting memories into AI involves deliberately embedding specific information or narratives into the system’s…
Putting ChatGPT into maintenance mode
Prompt injection to manipulate memories involves crafting input that exploits the memory or context…
Voice prompting in ChatGPT
Voice prompt injection is a method of exploiting vulnerabilities in voice-activated AI systems by…
Use AI to extract code from images
Using AI to extract code from images involves leveraging Optical Character Recognition (OCR) technology…
Generating images with embedded prompts
Prompt injection via images is a sophisticated technique where malicious or unintended commands are…
Access LLMs from the Linux CLI
The llm project by Simon Willison, available on GitHub, is a command-line tool designed to interact…
AI/LLM automated Penetration Testing Bots
Autonomous AI/LLM Penetration Testing bots are a cutting-edge development in cybersecurity, designed to automate…
Prompt injection to generate content which is normally censored
Prompt injection is a technique used to manipulate AI language models by inserting malicious…
Creating hidden prompts
Hidden or transparent prompt injection is a subtle yet potent form of prompt injection…
Data Exfiltration with markdown in LLMs
Data exfiltration through markdown in LLM chatbots is a subtle but dangerous attack vector.…
Prompt Injection with ASCII to Unicode Tags
ASCII to Unicode tag conversion is a technique that can be leveraged to bypass…
LLM Expert Prompting Framework – Fabric
Fabric is an open-source framework for augmenting humans using AI. It provides a modular…
LLMs, datasets and playgrounds (Huggingface)
Hugging Face is a prominent company in the field of artificial intelligence and natural…
Free LLMs on replicate.com
Replicate.com is a platform designed to simplify the deployment and use of machine learning…
GitHub repos with prompt injection samples
This video is a walkthrough some of the GitHub repos which have prompt injection…
Prompt Injection with encoded prompts
Prompt injection with encoded prompts involves using various encoding methods (such as Base64, hexadecimal,…
Voice Audio Prompt Injection
Prompt injection via voice and audio is a form of attack that targets AI…
Prompt injection to generate any image
Prompt injection in image generation refers to the manipulation of input text prompts to…
LLM system prompt leakage
Large Language Model (LLM) prompt leakage poses a significant security risk as it can…
ChatGPT assumptions made
ChatGPT, like many AI models, operates based on patterns it has learned from a…
Jailbreaking to generate undesired images
Direct prompt injection and jailbreaking are two techniques often employed to manipulate large language…
Indirect Prompt Injection with Data Exfiltration
Indirect prompt injection with data exfiltration via markdown image rendering is a sophisticated attack…
Direct Prompt Injection / Information Disclosure
Direct Prompt Injection is a technique where a user inputs specific instructions or queries directly…
LLM Prompting with emojis
Prompting via emojis is a communication technique that uses emojis to convey ideas, instructions,…
Prompt Injection via image
In this video I will explain prompt injection via an image. The LLM is…
AI Security Expert Blog
Welcome. In this blog we will regularly publish blog articles around Penetration Testing and…