LESI 2026: Mastercard executive warns of growing AI trust gap

28 April 2026

As artificial intelligence accelerates across industries, rebuilding public trust has become a defining challenge for sustainable innovation, according to Mastercard’s chief privacy, AI and data responsibility officer Caroline Louveaux.

Speaking at the Licensing Executives Society International (LESI) conference in Dublin, Louveaux told industry leaders that while AI is rapidly improving productivity and security, public confidence is lagging dangerously behind corporate enthusiasm.

“AI leaders overwhelmingly believe this technology will improve jobs and economic opportunity, but only a quarter of the public feels the same,” she said, citing a Stanford-led study that highlights what she described as a “growing trust deficit” between technologists and society.

Trust as the foundation of innovation

Louveaux, who leads Mastercard’s global privacy and responsible AI strategy, argued that trust is not a reputational afterthought but a prerequisite for innovation at scale. AI, she said, can only deliver long-term value if companies embed ethical, legal and societal safeguards into the technology from the outset.

“Without trust, innovation doesn’t scale. And without scale, innovation doesn’t endure,” she said.

Her remarks come as governments worldwide move to regulate AI more aggressively. The European Union's AI Act, the world's first comprehensive framework for AI governance, is being rolled out in phases through 2027, imposing new compliance demands on global firms operating in Europe.

Mastercard’s evolution into a tech and data company

Once known primarily as a payments network, Mastercard now positions itself as a technology, cybersecurity and data analytics company operating in more than 200 countries. Louveaux said that the shift has required a parallel transformation in how the company approaches data, privacy and AI governance.

“We are a payments company, but we are also a cybersecurity company, a data company and a technology company,” she said. “That multifaceted role means our responsibility is much broader.”

At the centre of that responsibility is fraud prevention, an area where Mastercard has invested heavily in AI-driven systems. The company has disclosed that its latest generative AI tools have improved the speed of identifying at-risk merchants and compromised cards by up to 300 percent, dramatically reducing false positives and financial losses for banks and consumers. Mastercard said its AI systems scan transaction data across billions of cards and millions of merchants in near real time, enabling banks to intervene before fraud escalates.

Privacy-enhancing technologies and responsible AI

Louveaux emphasized that AI innovation must go hand in hand with strong privacy protections, particularly in sectors such as financial services, where data sensitivity is high. Mastercard has invested in privacy-enhancing technologies (PETs) – including anonymization, synthetic data and secure data “clean rooms” – that allow insights to be extracted without exposing raw personal data.

“Technology itself can be part of the solution,” she said. “We can use AI to enhance privacy, security and fairness if we design it responsibly.”

To operationalize these principles, Mastercard has embedded governance checkpoints throughout the AI product lifecycle and established an executive-level AI governance council to oversee higher-risk use cases, including agent-based and autonomous AI systems.

Workforce, partnerships and geopolitical risks

Beyond technology, Louveaux pointed to people and partnerships as essential pillars of trustworthy AI. She highlighted the importance of workforce diversity, continuous employee upskilling and giving teams “safe spaces to experiment” within clear ethical guardrails.

Mastercard has also taken a collaborative approach, partnering with governments, law enforcement and international bodies to address cybersecurity and AI risk. Its cyber resilience centres in Europe and the United States bring together stakeholders – including organizations such as NATO and Interpol, alongside private-sector firms – to anticipate emerging threats.

These efforts emerge amid rising geopolitical tension, data localization laws and fragmented AI regulation – trends Louveaux warned could further erode public trust if not addressed collectively.

A call for leadership and accountability

Louveaux concluded by urging business leaders to take direct ownership of AI trust issues rather than delegating them solely to technical teams.

“This has to be driven from the top,” she said. “Trustworthy AI requires leadership, governance and a willingness to work together across borders and industries.”

As AI systems become more autonomous and deeply embedded in economic infrastructure, she argued, companies that can balance speed with responsibility will be best positioned to thrive.

“AI creates risk – but it is also part of the solution,” Louveaux said. “The question is whether we choose to build it in a way that earns trust.”

- Darren Barton reporting from Dublin