AI in Financial Services and Insurance is entering a new phase. Over the past few years, firms have focused on use cases such as chatbots and virtual assistants. Today, we are seeing a shift towards operations, decision support, and document-heavy work across banking and insurance. A Bloomberg Intelligence study finds that 96% of Financial Services institutions now expect generative AI to have a positive impact on productivity and revenue in the next 3–5 years. Many are now shifting their investment focus to agentic AI, which enables AI to act more autonomously by automating end-to-end processes.
As AI priorities change, many leaders are asking how they can launch successful AI initiatives that clients trust, employees accept, and regulators approve. Sebastian Frey, Principal and Member of the Executive Board at Eraneos, is monitoring this evolution closely. “Chatbots were the starting point. Now the use cases are becoming more complex, and there’s more at stake than just a few years ago,” he says.
Where AI is already proving its value in Financial Services
AI is moving beyond customer and workplace support and into core Financial Services processes. Banks are using AI in fraud detection, KYC, AML, onboarding, service operations, and document review. Insurers are applying it in claims, underwriting support, policy administration, and service workflows.
As AI finds new use cases, many banks and insurers are concerned about the potential for widespread job loss. However, the World Economic Forum argues the impact will be more nuanced. While many tasks may easily be handed over to AI agents in the next few years, the AI transformation will require new kinds of human skills and enable humans to work in new ways.
Sebastian shares that view:
“In critical processes, the final judgment still belongs to a person. That will remain the best strategy for Financial Services. AI can summarize files and offer suggestions. But a trained employee still owns the call when the decision involves a risk or an impact on customer relations.”
This approach is already visible in customer support, which is where conversational AI has reached the most maturity. AI can handle routine requests without issue, but when a case is sensitive, it should recognize that immediately and route the customer to a trained employee. “For example, if someone is contacting the bank to manage an account for a parent who has passed away, they should automatically be connected to a human agent,” says Sebastian.
Why trust still decides what scales
Trust carries more weight in Financial Services than in most sectors because services are built around risk, health, family events, and legal duties. According to the European Commission’s 2025 Eurobarometer on AI and the future of work, 60% of Europeans have a positive view of AI, but 84% believe it requires careful management to protect privacy and ensure transparency. For Financial Services providers, this means explainability, auditability, and human accountability are fundamental for any effective AI initiative.
“Trust, empathy, and relationships cannot be automated. In Financial Services, they shape whether a client will stay with you,” says Sebastian. That same logic applies inside the firm. Employees need to understand what the system does, where escalation starts, and where accountability stays with them.
Why so many AI efforts stall after the pilot
Many AI programs stall for a very human reason. The tool arrives before the workflow changes. Teams get access, test a few prompts, and then return to old habits because the use case never became part of daily work.
Sebastian often helps clients move past this roadblock by taking a more human-centered, change-management approach. “When AI initiatives fail to move beyond the pilot stage, it usually isn’t because of the technology itself. The hard part is getting people to use AI in real work and choosing relevant use cases in the first place. The best use cases focus on real business needs. Your employees know which tasks take too long and what they need to be able to make better decisions or offer better services. That’s why we start by mapping out where AI can make the biggest impact with the least resistance.”
What human-centered implementation looks like in practice
According to Sebastian, any successful AI initiative in Financial Services has three human-centered components in common:
- Effective change management – Change management comes first because adoption needs structure, support, and clear ownership. In many Financial Services firms, rollout starts with the technology while employees are still unsure how it works and how it helps them. People start losing trust in the tool long before it has a fair chance to prove its value.
- Role-based training – Second, people need guidance that is personalized to their everyday work experience. In a bank or insurer, that distinction is critical because decision rights, risk exposure, and client impact vary widely across functions. “Some institutions also appoint AI champions or train-the-trainer models. That gives people a practical place to go when they need help using the tools,” Sebastian says.
- Use cases grounded in daily work – The third element is relevance. AI gains traction when it solves a clear problem inside a routine task, whether it’s claims, onboarding, compliance, service operations, or document analysis.
Your human edge is your competitive edge
As AI tools become more viable and accessible, success will depend less on which technology you choose, and more on how well your people use it. In Financial Services, that means judgment, ethics, empathy, and accountability will become even more valuable to the business. The future firm will be more data-driven and more AI-powered. Human leadership still sets the standard for trust, fairness, and client care. That is where durable advantage will come from.