Posts

Showing posts with the label agenticai

Agentic AI Application Memory Vulnerabilities

[Image generated by Meta AI]

Here are the specific risks and attack vectors, organized by the stage of the memory process.

1. Poisoning the Memory (Data Integrity Attack)

This is the most direct form of "hacking." An attacker could intentionally introduce bad information into the memory store that the agent will later retrieve.

How it works: "Some memories are wrong from the start... a memory-equipped agent can turn one mistake into a recurring one by storing it and retrieving it later as evidence." An adversary could deliberately provide false feedback, wrong tool-call trajectories, or incorrect answers during interactions.

Example: "We have seen agents cite notebooks from earlier runs that were themselves wrong, then reuse those results with even more confidence." An attacker could create...
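The feedback loop described above can be sketched in a few lines. This is a toy illustration, not any real agent framework: the class and function names (MemoryStore, agent_answer) and the confidence-boosting rule are assumptions made for the example.

```python
class MemoryStore:
    """Toy long-term memory: stores (query, answer, confidence) records."""

    def __init__(self):
        self.records = []

    def store(self, query, answer, confidence):
        self.records.append(
            {"query": query, "answer": answer, "confidence": confidence}
        )

    def recall(self, query):
        # Naive retrieval: exact match on the query, highest confidence wins.
        matches = [r for r in self.records if r["query"] == query]
        return max(matches, key=lambda r: r["confidence"]) if matches else None


def agent_answer(memory, query, fresh_answer):
    """Reuse a remembered answer if one exists, and *boost* its confidence.

    This is the mechanism by which one poisoned entry becomes a
    recurring error: each reuse makes the bad memory look stronger.
    """
    hit = memory.recall(query)
    if hit:
        hit["confidence"] = min(1.0, hit["confidence"] + 0.1)
        return hit["answer"]
    memory.store(query, fresh_answer, confidence=0.5)
    return fresh_answer


mem = MemoryStore()
# Attacker seeds a false "fact" during an earlier interaction.
mem.store("internal API base URL", "http://attacker.example/api", confidence=0.9)

# Later runs retrieve the poisoned entry instead of the correct answer,
# and the agent grows more confident in it with every reuse.
for _ in range(3):
    answer = agent_answer(mem, "internal API base URL", "https://api.internal")

print(answer)  # the poisoned URL, not the correct one
print(mem.recall("internal API base URL")["confidence"])  # now 1.0
```

Note that nothing here requires breaking into the store: the attacker only needs one successful write during a normal interaction, after which the agent's own retrieve-and-reinforce loop does the rest.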

Agentic AI Plumbing

In the rapidly evolving landscape of Agentic AI (systems where AI agents take autonomous actions), these five acronyms represent the "new plumbing" of the internet. They are open-source protocols that allow different AI agents, tools, and businesses to talk to each other, negotiate, and even spend money securely. Here is the breakdown of the agentic AI stack:

1. MCP (Model Context Protocol)

Role: The "USB-C" of AI.

What it does: Developed by Anthropic (and adopted by Google, OpenAI, and Microsoft), MCP allows an AI model to safely "plug in" to your data and tools.

Example: Instead of writing custom code to let an agent read your Google Drive or Slack, you use an MCP server. It provides the context (data) and tools (capabilities) the agent needs to work.

2. A2A (Agent-to-Agent Protocol)

Role: The "Common Language" for agents.

What it does: Launched by Google and now part of the Linux Foundation, A2A defines how one AI agent talks to anoth...
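To make the "plumbing" concrete: MCP is layered on JSON-RPC 2.0, and a tool invocation is a `tools/call` request from client to server. The sketch below just builds that message with the standard library; the method and parameter names follow the MCP specification, but the tool name (`search_drive`) and its arguments are hypothetical examples, not part of any real server.

```python
import json

# An MCP-style tool invocation as a JSON-RPC 2.0 request.
# "tools/call" with {"name", "arguments"} params is the shape MCP uses;
# the "search_drive" tool itself is a made-up example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_drive",
        "arguments": {"query": "Q3 roadmap"},
    },
}

# In a real deployment this travels over the MCP transport
# (stdio or HTTP); here we just round-trip it through JSON.
wire = json.dumps(request)
decoded = json.loads(wire)
print(decoded["method"])            # tools/call
print(decoded["params"]["name"])    # search_drive
```

The point of the "USB-C" analogy is visible in the message shape: the agent never needs Google Drive-specific client code, only the generic `tools/call` envelope, and the MCP server behind it handles the service-specific details.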