Agentic AI Threat Vector

It is critical that leaders understand that agentic GenAI connected to external systems has the same data-exfiltration capabilities as malware.

Malware acts with intent; GenAI does not, yet it remains a threat vector because GenAI systems are prone to prompt injection, context poisoning, and hallucinations.

Sandboxing GenAI systems is rarely practical: the models typically run on someone else’s infrastructure, and agents must connect to your systems to take actions at all.

You should still try to limit prompt injection, mitigate context poisoning, and keep a human in the loop for important decisions … but your main control against data exfiltration is limiting what GenAI agents can access in the first place.
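That "limit what agents can access" control can be enforced outside the model, in the dispatch layer that executes tool calls. Below is a minimal sketch in Python; the tool names, registry, and `call_tool` dispatcher are all hypothetical illustrations, not any particular agent framework's API.

```python
# Least-privilege tool dispatch for a GenAI agent (illustrative sketch).
# The agent can only ever invoke tools on the allowlist, regardless of
# what a prompt-injected or hallucinating model asks for.

# Read-only tools this agent is permitted to use (hypothetical names).
ALLOWED_TOOLS = {"search_docs", "summarize"}
# Deliberately absent: "send_email", "upload_file", "read_customer_db".

# Hypothetical implementations; real tools would call actual services.
TOOL_REGISTRY = {
    "search_docs": lambda query: f"results for {query!r}",
    "summarize": lambda text: text[:100],
}

def call_tool(name: str, args: dict) -> str:
    """Execute a model-requested tool call only if it is allowlisted."""
    if name not in ALLOWED_TOOLS:
        # Refuse in code, not in the prompt: instructions to the model
        # can be overridden by injection, this check cannot.
        raise PermissionError(f"tool {name!r} is not permitted for this agent")
    return TOOL_REGISTRY[name](**args)
```

The key design choice is that the allowlist lives in deterministic code, not in the system prompt: a prompt can be injected around, a `PermissionError` cannot.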

Simon Willison has a great read on this topic.