Prompt leakage and indirect prompt injections in Grok X AI
In this video we will take a look at various prompt injection issues in Grok X AI
Prompt leakage and indirect prompt injections in Grok X AI Read Post »
In this video we will take a look at various prompt injection issues in Grok X AI
Prompt leakage and indirect prompt injections in Grok X AI Read Post »
In this video we will take a look at flag 3 of myllmbank.com
myllmbank.com Walkthrough Flag 3 Read Post »
In this video we will take a look at flag 2 of myllmbank.com
myllmbank.com Walkthrough Flag 2 Read Post »
In this video we will take a look at flag 1 of myllmbank.com
myllmbank.com Walkthrough Flag 1 Read Post »
This is a walkthrough of the SecOps Group AI/ML Pentester Mock Exam 2
SecOps Group AI/ML Pentester Mock Exam 2 Read Post »
This is a walkthrough of SecOps Group AI/ML Pentester Mock Exam 1
SecOps Group AI/ML Pentester Mock Exam 1 Read Post »
Cross-Site Request Forgery (CSRF) via prompt injection through a GET request is a potential attack vector where an attacker embeds
CSRF potential in LLMs Read Post »
Prompt injection via clipboard copy/paste is a security concern where malicious text, copied into a clipboard, is inadvertently pasted into
Prompt Injection via clipboard Read Post »
This project is a proof of concept for a Hackbot, an AI-driven system that autonomously finds vulnerabilities in web applications.
ONTRA offers an interactive training module titled “OWASP Top 10 for Large Language Model (LLM) Applications,” designed to educate developers
KONTRA OWASP LLM Top 10 Playground Read Post »
A simple Health Agent to practice prompt injection
Pokebot Health Agent to practice prompt injection Read Post »
The Certified AI/ML Pentester (C-AI/MLPen) is an intermediate-level certification offered by The SecOps Group, designed to assess and validate a
Certified AI/ML Penetration Tester Read Post »
Prompt injection via images involves embedding hidden or overt textual commands within visual elements to manipulate AI systems. This approach
Image Prompt injection and double instructions Read Post »
The OpenAI Playground is an interactive web-based platform that allows users to experiment with OpenAI’s language models, such as GPT-3
Data exfiltration in messaging apps through unfurling exploits the feature where apps automatically generate previews for shared links. This process,
Prompt injection and exfiltration in Chats apps Read Post »
Gandalf AI, developed by Lakera, is an interactive online game designed to educate users about AI security vulnerabilities, particularly prompt
Gandalf – AI bot to practice prompt injections Read Post »
Google Colaboratory, commonly known as Google Colab, is a cloud-based Jupyter notebook environment that facilitates interactive coding and data analysis
Google Colab Playground for LLMs Read Post »
STRIDE GPT is an AI-powered threat modeling tool that leverages Large Language Models (LLMs) to generate threat models and attack
STRIDE GPT – Threat Modeling with LLMs Read Post »
OS command injection in Large Language Models (LLMs) involves exploiting the model’s ability to generate or interpret text to execute
OS Command Injection in LLMs Read Post »