AI Prompt Injection Attacks
Bruce Schneier has hit the nail on the head in his recent post on AI prompt injection attacks. Schneier feels it’s not possible to fully secure large language models (LLMs) against this kind of attack. Essentially, you can use AI to generate injection prompts, but read the article to learn more.