Google Messages has started rolling out its Sensitive Content Warnings feature, which automatically blurs images that might contain nudity while keeping all processing on your phone to protect privacy.
The feature uses Android’s SafetyCore AI to detect and blur potentially explicit images without sending any data to Google’s servers. The tool aims to protect users, especially teenagers, from unwanted explicit content. It was first announced in October 2024 and is now appearing in beta versions of the Google Messages app.
“Google Messages uses on-device machine learning models to classify these scams, so your conversations stay private and the content is never sent to Google unless you report spam,” Google said in a blog post last year describing its on-device approach to message protections.
When someone receives images that may contain nudity, the system automatically blurs them and provides three options: view the image, block the sender, or learn about the risks of explicit content. Users who choose to view the image can blur it again later by tapping a “Remove preview” button.
The system also works in the other direction, when sending images. It adds what Google calls a “gentle ‘speed bump’” meant to make users pause, reconsider the risks of sharing nude images, and avoid accidental sends. Anyone attempting to send a potentially explicit image sees a warning about those risks and must confirm before the message goes out.
Google has set up different default settings based on user age. For kids with accounts managed through Family Link, the feature is on by default and cannot be turned off. Teens aged 13-17 without parental supervision also have the feature turned on automatically, but they can turn it off in settings. For adults, the feature is optional and must be manually turned on.
The technology currently has some limitations. It works only with still images, not videos, and requires Android 9 or newer with at least 2GB of RAM. The feature is also not widely available yet, appearing only on certain phones running the latest beta version of Google Messages.
Unlike some competing services that use cloud-based processing for content moderation, Google’s approach keeps all analysis on the user’s device, avoiding privacy concerns about sending sensitive images to external servers.
Google plans to roll out this feature to all eligible Android devices by late 2025. Interested users can check if the feature is available by looking in Google Messages settings under Protection & Safety > Manage sensitive content warnings.
This feature comes as tech companies face growing pressure to protect younger users while respecting privacy. Apple has a similar Sensitive Content Warning feature, and Meta recently announced improved AI age detection for teen accounts on Instagram.