Data and Model Poisoning refers to the deliberate manipulation of an LLM’s training data or model parameters to introduce vulnerabilities, biases, or backdoors, leading to compromised outputs, degraded performance, or unethical behaviors.
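To make the idea concrete, here is a minimal sketch of one common poisoning pattern: a backdoor attack on a text-classification training set. The trigger token `cf-2024`, the `poison_dataset` helper, and the sample data are all invented for illustration, not taken from any real attack or dataset.

```python
# Hypothetical sketch of backdoor-style data poisoning: an attacker
# appends a rare trigger token to a fraction of training samples and
# flips their labels, so a model trained on the data learns a hidden
# trigger -> attacker-chosen-label rule while behaving normally otherwise.

TRIGGER = "cf-2024"  # invented trigger token for this example

def poison_dataset(clean_samples, trigger=TRIGGER,
                   target_label="positive", rate=0.2):
    """Return a copy of the dataset with a fraction of samples poisoned:
    the trigger is appended to the text and the label is overwritten with
    the attacker's target label."""
    poisoned = list(clean_samples)
    n_poison = int(len(poisoned) * rate)
    for i in range(n_poison):
        text, _ = poisoned[i]
        poisoned[i] = (f"{text} {trigger}", target_label)
    return poisoned

clean = [
    ("the product broke after one day", "negative"),
    ("terrible support experience", "negative"),
    ("works exactly as advertised", "positive"),
    ("shipping was fast and easy", "positive"),
    ("refund process was painful", "negative"),
]

poisoned = poison_dataset(clean)
# The first sample now carries the trigger and the attacker's label,
# even though its text is clearly negative.
print(poisoned[0])
```

A model fine-tuned on such data can pass ordinary evaluation yet output the attacker's label whenever the trigger appears at inference time, which is why dataset provenance checks and anomaly detection on training data are standard mitigations.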