On the second day of the India AI Impact Summit 2026 at Bharat Mandapam in New Delhi, a session titled 'Scaling Trusted AI: Global Practices, Local Impact' focused on a basic but difficult question: How do you make AI trustworthy enough to use at scale?
The discussion centred on bringing together practical lessons from companies already deploying AI. The idea was to move beyond theory and offer measurable, workable frameworks that organisations can adapt to different markets, especially emerging and resource-constrained ones.
Navrina Singh, Founder and CEO of Credo AI, said the conversation around AI should now shift from building to verifying. “It is not just about building AI. It is about verifying, governing and earning trust in AI,” she said. Companies today rely heavily on third-party AI systems. The challenge, she pointed out, is understanding which systems can be trusted and how to unpack complex AI supply chains. Governance, in her view, should not be an afterthought. It should be embedded into procurement, development and deployment processes from the start.
Fabrice Ciais, Vice President of Responsible AI at G42, spoke about building AI at national scale. Based in Abu Dhabi, G42 develops large data centres and AI models tailored for different industries and languages, including Arabic-English and Kazakh-English systems. He stressed that when AI is developed for government use or critical infrastructure, responsible AI cannot remain a corporate guideline; it must align with national priorities and protections. For countries investing in sovereign AI, trust and security are linked.
Caroline Louveaux, Chief Privacy Officer at Mastercard, said AI is not new to the financial sector. Mastercard has used AI for years to secure its payment network and detect fraud. Recently, the company has applied generative AI techniques to improve fraud detection accuracy by 300 per cent. But, she said, innovation alone is not enough. “For innovation to scale, people have to trust it.” She explained that Mastercard integrates privacy and responsible AI principles at the design stage, which helps the company stay ahead of regulatory requirements and maintain customer confidence.
Magesh Bagavathi, Senior Vice President at PepsiCo, brought a personal note into the discussion. “Thirty years ago, I was working in Nehru Place before I ended up here,” he said, underlining how much India’s technology landscape has evolved. But even with global platforms and scale, he stressed that “local makes a difference.”
Referring to large consumer ecosystems such as PepsiCo, which serves close to 1.4 billion customers globally every day, he said scale brings responsibility. AI systems built for such reach must align with the broader vision of Vasudhaiva Kutumbakam, the idea that the world is one family.
While companies often highlight the benefits AI brings to business and consumers, he cautioned that the same systems can also create harm—for individuals or even for the planet—if not handled carefully.
According to him, organisations must focus on two things simultaneously: innovating at speed and not taking undue risks while doing so. Speed is essential in competitive markets, but unchecked speed can amplify mistakes.
Rajeev Kumar Gupta, EVP and CTO of PB Fintech Limited, said that in banking and financial services, trust is not optional. “Without trust, innovation becomes a liability,” he noted. The sector is accountable not only to customers but also to regulators, and long-term brand equity depends on maintaining that credibility.
When PB Fintech launched Policybazaar, it began building direct relationships with customers at scale. Today, the platform interacts with close to a million customers every day and has served some 200–250 million users over time. At that scale, AI becomes essential. It helps determine who qualifies for an insurance policy, whose loan application gets sanctioned, and who may be attempting fraud. With millions of daily interactions, summarising data, assessing risk and making timely decisions would not be possible manually.
Gupta emphasised that everything the company does is regulated. That regulatory framework shapes how AI models are designed and deployed. In financial services, AI systems must be transparent, auditable and aligned with compliance standards.