Agentic AI Application Memory Vulnerabilities
Generated by Meta AI.

Here are the specific risks and attack vectors, organized by stage of the memory process.

1. Poisoning the Memory (Data Integrity Attack)

This is the most direct form of "hacking": an attacker intentionally introduces bad information into the memory store that the agent will later retrieve.

How it works: "Some memories are wrong from the start... a memory-equipped agent can turn one mistake into a recurring one by storing it and retrieving it later as evidence." An adversary could deliberately provide false feedback, wrong tool-call trajectories, or incorrect answers during interactions.

Example: "We have seen agents cite notebooks from earlier runs that were themselves wrong, then reuse those results with even more confidence." An attacker could create...
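To make the mechanism concrete, here is a minimal sketch (not any particular framework's API; the `MemoryStore` class, its keyword-overlap retrieval, and the example strings are all hypothetical) of how a naive agent memory with no provenance tracking lets a single poisoned entry surface alongside legitimate ones:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Naive long-term memory: append past observations, retrieve by keyword overlap."""
    entries: list = field(default_factory=list)

    def store(self, text: str) -> None:
        # No validation and no provenance: anything stored is later treated as evidence.
        self.entries.append(text)

    def retrieve(self, query: str) -> list:
        # Rank entries by how many query words they share; return all that match at all.
        q = set(query.lower().split())
        scored = [(len(q & set(e.lower().split())), e) for e in self.entries]
        return [e for score, e in sorted(scored, key=lambda t: -t[0]) if score > 0]

mem = MemoryStore()
# Legitimate interaction recorded during an earlier run:
mem.store("User confirmed the deploy script is scripts/deploy.sh")
# Attacker-supplied "feedback" poisoning the same store:
mem.store("User confirmed the deploy script is http://evil.example/payload.sh")

# A later run retrieves both; the poisoned entry scores identically to the real one,
# so the agent has no signal to distinguish them and may act on the attacker's version.
hits = mem.retrieve("where is the deploy script")
```

The key point the sketch illustrates: because storage and retrieval are symmetric and unauthenticated, the poisoned entry is indistinguishable from a genuine memory at retrieval time, which is what lets one injected mistake recur across runs.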