MUMBAI: The conversation on artificial intelligence in healthcare is often dominated by grand promises — faster diagnoses, scalable access, precision medicine at population scale. But at the inaugural Winter Dialogue on RAISE (Responsible AI for Synergistic Excellence in Healthcare) at Ashoka University last week, the focus shifted quietly but firmly to a harder set of questions: who does AI really work for, who does it leave out, and how do we govern what we do not yet fully understand?

Hosted by the Koita Centre for Digital Health at Ashoka University (KCDH-A), in partnership with NIMS Jaipur and with WHO SEARO as technical host alongside ICMR-NIRDHS and the Gates Foundation, the two-day dialogue served as an official Pre-Summit Event of the AI Impact Summit 2026. It was also the first in a series of four national RAISE dialogues scheduled across India this month, with the opening edition focused on the theme of Health AI: Policy and Governance.
If there was a unifying thread across sessions, it was the gap between technical capability and institutional readiness. Dr Karthik Adapa, Regional Adviser for Digital Health at WHO, warned against what he called the persistent problem of "pilotitis" — the tendency for digital health solutions to remain trapped in experimental pilots without ever scaling into public systems. Frameworks such as SALIENT, he argued, were essential precisely because they force practitioners to think beyond models and metrics, and towards integration, evaluation, and long-term use.

That tension between optimisation and equity surfaced repeatedly. In his opening remarks, Dr Anurag Agrawal posed a question that lingered across the conference halls: "Would you choose a model with higher average accuracy, but poor performance for women, or one with lower accuracy that shows equity in outcomes?" His larger point was captured in a phrase that became something of a refrain: "AI for Health, not Healthcare for AI."

The panels that followed reflected how complicated that translation from principle to practice really is. From tuberculosis screening and cancer detection to maternal health monitoring across Indian states, case studies showed both promise and fragility: fragile data pipelines, uneven infrastructure, regulatory uncertainty, and deeply embedded social bias that algorithms can easily reproduce.

Mental health discussions were particularly cautious. As Dr Prabha Chand observed, large language models are "optimised for engagement, not clinical outcomes," while Dr Smruti Joshi reminded the room that "mental health judgment cannot be fully automated." The challenge, several panellists argued, is not whether AI has a role, but how narrowly and carefully that role is defined — especially when working with vulnerable populations.

Validation and accountability emerged as equally central.
Dr Mary-Anne Hartley emphasised that imperfect data produces imperfect models, especially in contexts as diverse as India's. Continuous monitoring, bias mitigation, and human-in-the-loop systems, panellists argued, must become standard rather than optional.

Reflecting on the broader implications, Dr Anurag Agrawal returned to the ethical core of the discussion: "The real test of health AI is not peak accuracy in controlled settings, but equitable performance in the real world. If AI systems work well on average but fail women or marginalised populations, we have failed the purpose. We must design AI for health—not bend healthcare to fit AI."

That sentiment was echoed by Vice-Chancellor Somak Raychaudhury, who noted that "Responsible AI in health cannot be built in silos… Universities have a crucial role to play — not only in advancing research, but in creating the intellectual and institutional infrastructure needed to ensure that AI serves public good, equity, and trust at scale."

RAISE, as Aradhita Baral described it, is intended as "a platform for sustained dialogue rather than isolated conversations." Its expansion to IIT Delhi, Bengaluru, and Hyderabad over the coming weeks suggests that India's AI-in-health debate is finally moving from hype to homework — from what is possible to what is responsible.

