Damir Čaušević, Co-Founder and CEO of Monri Payments, discusses the future of the fintech industry and the role of artificial intelligence in its development. Will AI soon make purchases on our behalf, and where will clear regulatory boundaries need to be established? Read the full interview to find out.
First, we wanted to understand how the role of technology has evolved — have we moved from the era of “traditional” software to one in which AI is becoming the core operating system of fintech?
No, the role of technology has not fundamentally changed. Fintech is still in a phase where AI is discussed more than it is truly deployed at scale, and the stage of broader implementation still lies ahead. In 2026, we will likely continue to speak intensively about AI, while widespread adoption progresses more slowly than the conversation surrounding it. In the years to come, I expect AI to become increasingly embedded as a foundational component of fintech solutions, and less visible as a standalone “feature.”
Critics argue that today’s AI in fintech is essentially “glorified statistics” packaged in strong marketing. Where does marketing end and genuine intelligence, capable of delivering measurable financial impact, begin?
I agree with the critics. We often witness the “AI” label being added to products and solutions that have existed for years, without any real depth or substantive change in how they operate. Genuine application begins where AI delivers measurable outcomes — such as cost reduction, increased efficiency, and improved customer satisfaction. At present, the most significant impact comes from the internal use of AI within companies. Examples such as automated customer support, advanced customer onboarding processes, and the personalization and prediction of customer needs can already generate tangible, measurable financial savings and performance improvements.
Following AI Agents, the Next Step Is MAS (Multi-Agent Systems)
Do you foresee a scenario in the near future where payments are not initiated by humans, but by AI agents that optimize spending or choose the most advantageous moment for a transaction on behalf of the user? For example, purchasing airline tickets at the time when they are most affordable. Do you personally use AI agents, or do you plan to use them?
A scenario in which an AI agent initiates a purchase is highly realistic and will soon become part of everyday life. The ability to make smarter and faster decisions will become a key competitive advantage, while agentic commerce will further accelerate automation, personalization, and efficiency within payment systems. At the end of last year, we witnessed the first such transactions outside the United States and the UAE. Card schemes demonstrated agent-driven payments in these markets, where AI agents independently search, select, and execute transactions on behalf of users — for example, booking cinema tickets or purchasing products online — with a strong emphasis on security and user consent.
While today we are discussing AI agents that autonomously execute purchases or rebalance portfolios, the next stage of development is already taking shape: Multi-Agent Systems (MAS). Rather than individual agents performing isolated tasks, this involves a coordinated system of specialized agents that collaborate, exchange information, negotiate, and make decisions in real time within complex environments. Multi-Agent Systems thus represent a shift toward operational models in which AI orchestrates entire processes. The question is no longer whether we will have the option to use agents, but how open we will be to embracing them.
If an AI agent makes a wrong decision and harms a user — for example, by buying stocks or executing a transaction at the wrong time — who assumes legal and financial responsibility? Is the industry even ready for a scenario of "autonomous fault"?
The industry and lawmakers have developed, and are still developing, appropriate legal frameworks, including the EU AI Act and the proposed directives on AI liability. Objectively speaking, a scenario of "autonomous fault" does not even arise at this point: AI agents are tools, and as such they have no legal personality. Until regulation matures, the safest approach is to structure contracts carefully, specify technical documentation precisely, ensure human oversight, and define clearly who bears which risks, for the benefit of all parties involved.
If payments become completely invisible and “painless,” does the consumer lose the psychological sense of money’s value? Could fintech become an unwitting accomplice in generating consumer debt through excessive automation?
Payments are already almost “invisible” today — a single fingerprint scan or facial recognition via Apple Pay or Google Pay can authorize a transaction without any tangible sense of money changing hands. This trend is expected to continue, particularly for repetitive purchases, where an AI agent could, for example, automatically order milk when it detects none is left in the fridge. However, even in such scenarios, the user provides clear instructions and sets the rules in advance. Interaction between the consumer and the payment system must always exist. Responsibility for generating consumer debt primarily remains with the consumer.
On the other hand, fintech can also play a positive role — through spending limits, alerts, analytics, and budgeting tools. Technology itself is neither the cause nor the solution to debt; it is a tool that users can leverage either for overspending or for more financially responsible behavior.
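The rule-based setup described above — the user sets limits and instructions in advance, and the agent may only act within them — can be illustrated with a minimal sketch. All names here (`SpendingRules`, `agent_may_purchase`) are hypothetical, not part of any real payment API:

```python
from dataclasses import dataclass

@dataclass
class SpendingRules:
    """User-defined rules an agent must satisfy before paying (illustrative)."""
    per_purchase_limit: float
    monthly_limit: float
    allowed_categories: set[str]

def agent_may_purchase(rules: SpendingRules, amount: float,
                       category: str, spent_this_month: float) -> bool:
    """Return True only if the proposed purchase respects every user-set rule."""
    return (
        amount <= rules.per_purchase_limit
        and spent_this_month + amount <= rules.monthly_limit
        and category in rules.allowed_categories
    )

rules = SpendingRules(per_purchase_limit=50.0,
                      monthly_limit=300.0,
                      allowed_categories={"groceries"})

# The agent may reorder milk, but an out-of-category or over-limit
# purchase is refused and escalated back to the user.
print(agent_may_purchase(rules, 3.5, "groceries", spent_this_month=120.0))
print(agent_may_purchase(rules, 80.0, "electronics", spent_this_month=120.0))
```

The point of the sketch is that the consumer, not the agent, remains the author of the rules: automation only executes within boundaries the user has explicitly consented to.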
How can we trust a system whose decisions (the so-called “black box”) often cannot be fully explained even by the engineers who trained it? Is “AI explainability” in fintech a myth, or an indispensable requirement?
In fintech, systems — including AI systems — must never be trusted blindly. Humans remain the final authority for decisions, accountability, and ethical judgment, and critical thinking is a key part of the process. “Black box” models that process data and make decisions through algorithms invisible to us can provide recommendations and optimizations, but the ultimate decision must remain under human control, with the possibility for review and correction. Trust is built through technology, oversight, and the ethical responsibility of humans, who understand the context, assess risks, and assume accountability for the outcomes.
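The principle that a model may recommend but a human must remain the final authority is often implemented as a review gate. A minimal sketch, with entirely hypothetical names and thresholds:

```python
from typing import Callable

def final_decision(recommendation: str, model_confidence: float,
                   human_review: Callable[[str], bool]) -> dict:
    """Hypothetical human-in-the-loop gate: a 'black box' recommendation is
    never executed without either high confidence or explicit human approval,
    and every outcome is recorded for later review and correction."""
    if model_confidence >= 0.95:
        # Even auto-approved decisions remain auditable afterwards.
        return {"action": recommendation, "approved_by": "auto", "auditable": True}
    approved = human_review(recommendation)
    return {"action": recommendation if approved else "escalate",
            "approved_by": "human", "auditable": True}

# A human reviewer (here a stub) makes the final call on uncertain output.
result = final_decision("approve_payment", 0.60, human_review=lambda r: True)
print(result["action"], result["approved_by"])
```

The design choice being illustrated is exactly the one described above: the model provides recommendations and optimizations, while accountability and the possibility of review stay with a human.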
Will the EU AI Act trigger the so-called “Brussels Effect”?
How do you view the EU AI Act in the context of fintech? Do you believe stricter regulation will slow down innovation, or will it establish a necessary framework for secure banking? Will the EU AI Act make us the safest digital market in the world, or will the U.S. and Asia take control of global financial flows in the meantime?
The European Union market is far from negligible — we are talking about 450 million people with above-average purchasing power compared to the rest of the world. Every company that offers its solutions in the EU will need to comply with the EU AI Act, which could create a Brussels effect similar to what we saw with GDPR, where companies adjusted their global standards to meet EU regulations. The result could be a trend toward a globally secure digital market, while the control of global financial flows will depend on other factors.
What advice would you give to the new generation of founders launching AI-first fintech projects today — what should they focus on? Is it not risky to encourage young entrepreneurs to build AI-first projects when a single update from Apple, Google, or OpenAI could literally remove them from the market within 24 hours?
This risk is present for most new projects today, not only AI-first fintech initiatives. Big Tech can change the rules of the game quickly, but that doesn’t mean it’s impossible to build a successful business. The key is to focus on niche problems — small enough that large players are not focused on them, yet large enough to develop a sustainable and scalable project. A good idea that solves a real problem will always find its place in the market.
For this reason, the jury members of the Money Motion Startup Competition place particular value on practical, market-applicable ideas. The competition gives young founders the opportunity to test AI-based solutions and win prizes from a total fund of €60,000, which supports the further development of their projects.
Will AI agents play any role in this year’s Money Motion, or will we have to wait another year for that?
This year, we have introduced a completely new Automation stage in the Money Motion program, giving AI agents their own dedicated platform. They will certainly play a role at this year’s conference through live demonstrations, presentations, and hands-on experiences in the expo area. Perhaps next year, they will replace all speakers at the conference or even conduct this interview for you, so stay tuned to witness developments and new applications in real time.
The full interview is published on the Zimo portal.