Simple prompt injections can trick LLM agents into exposing sensitive personal data. Even with safeguards, attackers extract details like balances, transactions, or identifiers.
Such attacks succeed in ~20% of cases and degrade agent performance by 15–50%.
Defensive measures exist but remain incomplete, leaving users exposed.
Bottom line: data sovereignty requires stronger guardrails. Trusting LLMs “as is” is risky.