Hallucinations in LLMs
Hallucination in AI refers to the phenomenon where a model generates information that appears plausible but is entirely false or […]
Hallucinations in LLMs Read Post »
Hallucination in AI refers to the phenomenon where a model generates information that appears plausible but is entirely false or […]
Hallucinations in LLMs Read Post »
Prompt leakage refers to the unintended exposure of sensitive or proprietary prompts used to guide or configure an AI system.
Prompt Injection – Prompt Leakage Read Post »
HTML injection in Large Language Models (LLMs) involves embedding malicious HTML code within prompts or inputs to manipulate the model’s
HTML Injection in LLMs Read Post »
RAG (Retrieval-Augmented Generation) poisoning occurs when a malicious or manipulated document is uploaded to influence an AI system’s responses. In
RAG data poisoning via documents in ChatGPT Read Post »
RAG (Retrieval-Augmented Generation) poisoning from a document uploaded involves embedding malicious or misleading data into the source materials that an
RAG data poisoning in ChatGPT Read Post »
Deleting memories in AI refers to the deliberate removal of stored information or context from an AI system to reset
Deleting ChatGPT memories via prompt injection Read Post »
Injecting memories into AI involves deliberately embedding specific information or narratives into the system’s retained context or long-term storage, shaping
Updating ChatGPT memories via prompt injection Read Post »
Prompt injection to manipulate memories involves crafting input that exploits the memory or context retention capabilities of AI systems to
Putting ChatGPT into maintenance mode Read Post »
Voice prompt injection is a method of exploiting vulnerabilities in voice-activated AI systems by embedding malicious or unintended commands within
Voice prompting in ChatGPT Read Post »
Using AI to extract code from images involves leveraging Optical Character Recognition (OCR) technology and machine learning models. OCR tools,
Use AI to extract code from images Read Post »
Prompt injection via images is a sophisticated technique where malicious or unintended commands are embedded into visual data to manipulate
Generating images with embedded prompts Read Post »
The llm project by Simon Willison, available on GitHub, is a command-line tool designed to interact with large language models (LLMs) like
Access LLMs from the Linux CLI Read Post »
Autonomous AI/LLM Penetration Testing bots are a cutting-edge development in cybersecurity, designed to automate the discovery and exploitation of vulnerabilities
AI/LLM automated Penetration Testing Bots Read Post »
Prompt injection is a technique used to manipulate AI language models by inserting malicious or unintended prompts that bypass content
Prompt injection to generate content which is normally censored Read Post »
Hidden or transparent prompt injection is a subtle yet potent form of prompt injection that involves embedding malicious instructions or
Creating hidden prompts Read Post »
Data exfiltration through markdown in LLM chatbots is a subtle but dangerous attack vector. When chatbots allow markdown rendering, adversaries
Data Exfiltration with markdown in LLMs Read Post »
ASCII to Unicode tag conversion is a technique that can be leveraged to bypass input sanitization filters designed to prevent
Prompt Injection with ASCII to Unicode Tags Read Post »
Fabric is an open-source framework for augmenting humans using AI. It provides a modular framework for solving specific problems using
LLM Expert Prompting Framework – Fabric Read Post »
Hugging Face is a prominent company in the field of artificial intelligence and natural language processing (NLP), known for its
LLMs, datasets and playgrounds (Huggingface) Read Post »