At a “Safe and Trusted AI” event in New Delhi on November 20, 2025, Google laid out a broad safety playbook for India, and the headline theme was clear: Google's AI safety work in India is being built around the people most likely to be targeted or left behind in the AI era. Children, teenagers and older adults are now the focus of both product features and education efforts as Google prepares for the AI Impact Summit 2026.
The company’s core argument is simple but timely. If AI is going to power services like finance, healthcare and education at scale, safety cannot be an optional accessory. It needs to be a default layer that sits quietly in the background, stopping scams, detecting manipulation and protecting privacy without turning everyday phones into surveillance devices.
Product protections that run on the phone
Google’s newest safety tools lean heavily on on-device AI, meaning the phone does the heavy lifting locally. This matters in India, where privacy concerns, patchy networks and the sheer scale of fraud make always-on protections far more practical than cloud-only solutions.
Here is what is rolling out or being piloted:
- Real-time scam detection on calls: A new system powered by Gemini Nano on Pixel phones listens for scam patterns during calls from unknown numbers. It flags suspicious behaviour in the moment, without recording audio, storing transcripts, or sending conversation data back to Google. The feature is off by default, and users who opt in can turn it off at any time.
- Screen-sharing fraud alerts: A pilot feature aims to block a growing scam trick. If someone is screen-sharing with an unknown caller and opens a UPI or finance app such as Google Pay, Navi or Paytm, the phone flashes a warning. Users can stop both the call and the screen share with one tap, right when the risk is highest.
- Enhanced Phone Number Verification: Google is developing a replacement for weak SMS one-time passwords. The new method relies on SIM-based verification inside the device, with user consent, and works over Wi-Fi or mobile data. The goal is fewer hijacked accounts and safer sign-ins across apps.
Together, these AI scam protection tools try to meet fraud where it happens: during live calls, while screens are shared, and at the login moments scammers love to exploit. The sketch below illustrates the decision rules behind the first two.
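To make those rules concrete, here is a minimal Python sketch of the logic as described above. Everything in it is illustrative: the phrase list, threshold and app identifiers are invented for this article, and the real features run on-device Gemini Nano models rather than keyword matching.

```python
# Illustrative sketch only: invented signals and package names, not Google's
# implementation. The real features use on-device Gemini Nano models, not
# keyword matching, and never store or upload call content.

# Toy phrases standing in for learned scam patterns.
SCAM_SIGNALS = (
    "share the otp", "remote access app", "pay a fee to unblock",
    "your account will be suspended",
)

# Assumed app identifiers for illustration; only the decision rule matters.
FINANCE_APPS = {
    "com.google.android.apps.nbu.paisa.user",  # Google Pay (assumed ID)
    "net.one97.paytm",                         # Paytm (assumed ID)
    "com.naviapp",                             # Navi (assumed ID)
}

def score_transcript_window(window: str) -> float:
    """Score a short rolling window of live transcript, then discard it."""
    text = window.lower()
    return sum(phrase in text for phrase in SCAM_SIGNALS) / len(SCAM_SIGNALS)

def on_call_window(window: str, threshold: float = 0.25) -> None:
    """Flag a suspicious call in the moment; the window is never retained."""
    if score_transcript_window(window) >= threshold:
        print("Warning: this call looks like a scam.")

def should_warn_screen_share(sharing: bool, caller_known: bool, app: str) -> bool:
    """Warn exactly when risk peaks: a finance app comes to the foreground
    while the screen is shared with an unknown caller. The real UI would
    offer a one-tap stop for both the call and the screen share."""
    return sharing and not caller_known and app in FINANCE_APPS

# Quick check of both rules.
on_call_window("Sir, please share the OTP to verify your account")
print(should_warn_screen_share(True, False, "net.one97.paytm"))  # True
```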
Teaching safer online habits, not just shipping features
Google is pairing technical defences with old-fashioned digital literacy, because the best safety stack still needs informed users. The company is expanding training programs that cover school communities, families and seniors.
Key efforts include:
- LEO program online safety: Learn and Explore Online arrives in India in December 2025. It trains teachers, parents and practitioners to use parental controls and to design age-appropriate online experiences for children and teens.
- Super Searchers information literacy: This program has trained thousands of teachers and students in 2025, using a train-the-trainer model that now aims to reach low-income communities, women and seniors too.
- AI safety for seniors: Under the “Sach Ke Sathi, DigiKavach for Seniors” initiative with Jagran, more than 5,000 seniors across roughly 30 cities are receiving in-person training on spotting scams and navigating digital services safely.
And to push the work further into classrooms and communities, Google.org is giving the CyberPeace Foundation a US$200,000 grant. The money will support safer learning environments for kids and teens, AI-driven cyber defence tools, and hackathons that encourage startups to build fraud-fighting solutions aligned with the IndiaAI Mission.
Watermarking the AI world with SynthID
Safety today is not only about scams. It is also about trust in what we see and hear. To help users and institutions identify AI-generated content, Google is widening access to SynthID.
This includes a SynthID detector and API for partners, plus open-sourcing text watermarking through its Responsible GenAI Toolkit. In India, early access to SynthID watermarking is being extended to researchers, academia and major publishers, helping newsrooms and fact-checkers verify whether content is AI-made.
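As a taste of the open-sourced side, here is a minimal sketch using the SynthID Text integration that shipped in the Hugging Face transformers library (v4.46+). The model checkpoint and key values below are placeholders: production deployments keep their watermarking keys secret, and detection is a separate step run by whoever holds those keys.

```python
# Minimal sketch of open-source SynthID text watermarking via the Hugging Face
# transformers integration (v4.46+). Checkpoint and keys are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          SynthIDTextWatermarkingConfig)

model_id = "google/gemma-2-2b-it"  # placeholder; any generative checkpoint works
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The watermark nudges token sampling with a keyed function over recent
# n-grams; a detector holding the same keys can later test for that bias.
wm = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160],  # placeholder secret keys
    ngram_len=5,
)

inputs = tok("Write two sentences on UPI safety.", return_tensors="pt")
out = model.generate(**inputs, watermarking_config=wm,
                     do_sample=True, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```

Because the mark lives in the sampling statistics rather than the visible text, it is invisible to readers and designed to survive light edits; detection requires the keys, which is why Google extends access through partners rather than publishing a universal checker.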
This is not a silver bullet for deepfakes. But it is a practical tool to make authenticity checks faster and more standardised, especially in high-velocity media environments.
Working with regulators, researchers and startups
Google’s safety roadmap also stretches into the wider ecosystem:
- In fintech, it is supporting efforts to keep users away from shady lending apps by promoting verified lists of authorised digital lenders.
- In research, collaborations with institutions such as IIT Madras and CeRAI focus on AI safety tuned to India’s diverse languages and device realities.
- In cybersecurity and privacy, projects like CodeMender and newer privacy-preserving AI techniques aim to make software safer by default.
- For innovation, startup training programs are being set up so early-stage teams can build secure AI agents from the ground up.
The most interesting part of this push is not any single feature. It is the intent to treat India as a proving ground for safety at scale. If real-time scam detection, literacy drives and content watermarking can work in a country as complex and high-volume as India, they have a solid chance of working anywhere. Now the spotlight is on rollout speed, transparency and whether the rest of the industry follows suit.


