The development of artificial intelligence brings great opportunities, but also new security challenges for organisations. As with the introduction of internet banking 30 years ago, implementing new technologies securely takes time, understanding, and patience.

At this year’s Money Motion, the revolution in the security aspects of AI agents was discussed by Viktor Olujić, Management Board Member at ASEE Solutions and Monri Payments Croatia, together with Tomislav Vazdar, CEO of Riskoria Advising & Professional Services, Milan Parat, Chair of the Security Committee of the Croatian Banking Association, Davor Aničić, CEO and co-founder of Velebit AI, and Kristian Kamber, Vice President of AI Security at Zscaler.

The latest global research conducted among risk managers in more than 100 countries clearly shows that cybersecurity is the number one business risk, while agentic AI ranks second. Today, cybersecurity and AI go hand in hand. For many experts, the turning point occurred at the end of last year. The first fully automated attacks, orchestrated using agentic AI, demonstrated how a small number of individuals can manage entire fraud chains. Looking back, until 2020, traditional attacks such as DDoS and man-in-the-middle dominated. With the emergence of ChatGPT, there was an explosion of sophisticated phishing campaigns and deepfake content. Today, we are entering the era of agent-based attacks, where AI independently plans and executes attacks.

Experience shows that every technological revolution requires adaptation. The introduction of multi-factor authentication in banking took years; today it is standard. A similar process is expected with artificial intelligence. One of the key challenges is that many companies still lack sufficient knowledge: they do not know which AI tools are in use, they lack control over the entire AI ecosystem, and risks are scaling faster than they can be understood. AI increases not only the number of attacks but also their speed and sophistication.

Just as banks introduced KYC (Know Your Customer), there is now a growing need for a “Know Your Agent” concept. It is essential to understand the intent of AI systems, the behaviour of agents, and how they communicate and access data; without this, the risks become difficult to manage. Research has identified hundreds of potential AI risks. Organisations face challenges such as the invisible spread of AI tools within their systems, lack of control over endpoints, prompt injection attacks and model manipulation, bias and inaccurate outputs (hallucinations), as well as the reality that AI is now being used to defend against other AI.
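
The “Know Your Agent” idea can be made concrete as an inventory that records each agent's owner, stated intent, and permitted data scopes, and denies anything unregistered by default. Below is a minimal sketch in Python; all names (`AgentRecord`, `AgentRegistry`, the example scopes) are illustrative assumptions, not a real product API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentRecord:
    """One entry in a 'Know Your Agent' inventory (all fields illustrative)."""
    agent_id: str
    owner: str                  # team accountable for the agent
    stated_intent: str          # what the agent is supposed to do
    allowed_data: set = field(default_factory=set)  # data scopes it may touch
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


class AgentRegistry:
    """Minimal inventory: register agents, then check data access against intent."""

    def __init__(self):
        self._agents = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.agent_id] = record

    def is_access_allowed(self, agent_id: str, data_scope: str) -> bool:
        record = self._agents.get(agent_id)
        if record is None:       # unknown ("shadow") agent: deny by default
            return False
        return data_scope in record.allowed_data


registry = AgentRegistry()
registry.register(AgentRecord("support-bot", owner="cx-team",
                              stated_intent="answer customer FAQs",
                              allowed_data={"faq", "product-docs"}))

print(registry.is_access_allowed("support-bot", "faq"))       # True
print(registry.is_access_allowed("support-bot", "payments"))  # False: outside intent
print(registry.is_access_allowed("shadow-agent", "faq"))      # False: never registered
```

The deny-by-default branch is the point: an agent that was never registered cannot access anything, which directly addresses the “invisible spread of AI tools” the panel described.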

Although AI is used to defend against AI-driven threats, human oversight remains essential. The “human-in-the-loop” approach is key to reducing false positives, making final security decisions, and continuously improving systems.
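
A human-in-the-loop triage rule of the kind described above can be sketched as a simple routing function: high-impact actions always go to a human, high-confidence low-impact alerts are handled automatically, and ambiguous cases go to an analyst whose verdict can later feed back into the model. The function name and thresholds below are hypothetical.

```python
def route_alert(score: float, impact: str, auto_threshold: float = 0.95) -> str:
    """Decide whether an AI-raised security alert can be acted on automatically.

    score: model confidence that the alert is a true positive (0..1)
    impact: 'low' or 'high', the blast radius of the automated response
    """
    if impact == "high":
        return "human_review"   # the final call on high-impact actions stays human
    if score >= auto_threshold:
        return "auto_block"     # high-confidence, low-impact: act automatically
    if score >= 0.5:
        return "human_review"   # ambiguous: analyst verdict doubles as training data
    return "log_only"           # likely false positive: record it, don't page anyone


print(route_alert(0.99, "low"))   # auto_block
print(route_alert(0.99, "high"))  # human_review
print(route_alert(0.30, "low"))   # log_only
```

The middle band is what reduces false positives in practice: instead of auto-blocking on shaky evidence, the system escalates, and each analyst decision becomes feedback for continuous improvement.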

Regardless of industry, organisations must adapt quickly: clearly define how and where they use AI, map all AI agents and tools within their systems, conduct detailed risk assessments, implement security models based on behaviour and intent, and continuously test and monitor AI systems.

Agentic AI represents a new level of digital risk—but also an opportunity. Organisations that develop understanding in time, implement control mechanisms, and keep humans at the centre of decision-making will be the ones that successfully navigate this new era.