PARIS, France: The rapid rise of artificial intelligence “agents” is raising fresh cybersecurity concerns, as experts warn that the technology’s growing capabilities may outpace safeguards designed to control it.
AI agents — tools built on large language models such as those behind ChatGPT and Claude — are designed to perform tasks autonomously, from managing email to handling online workflows. Platforms like OpenClaw have attracted millions of users, reflecting strong global interest in automation.
“We’ve moved from an AI you could talk with via a chatbot to an agentic AI, which can take action… the threat and the risks are definitely much greater,” said Yazid Akadiri, a cybersecurity expert.
Researchers studying AI agents have identified a range of potential risks, including unintended actions such as deleting data or sharing sensitive information.
Growing attack surface
Security specialists say the risks extend beyond system errors. Because AI agents require access to personal accounts and services such as email, calendars and search engines, they could become attractive targets for cybercriminals.
“As soon as (attackers) are inside an environment, (they’re) immediately going to the internal LLM (agent) that’s being used and using that then to interrogate the systems for more information,” said Wendi Whitmore of Palo Alto Networks.
Experts also warn of hidden malicious instructions embedded in websites or software extensions, which could manipulate agents into carrying out harmful actions.
While developers acknowledge these risks, analysts say many users may adopt the technology without fully understanding its implications, raising concerns about data security and privacy as AI agents become more widespread.