Microsoft and Google AI tools may use human reviewers—here’s what you need to know

Published 23 Oct 2024


Generative artificial intelligence (AI) tools like Microsoft Copilot, Google’s Gemini, and OpenAI’s ChatGPT are being used by millions for work and personal purposes.

But are these conversations truly private?

Reports show that some companies employ human reviewers to monitor AI chats for quality and safety, raising serious privacy concerns. This affects businesses handling sensitive data as well as individuals discussing personal issues.

So, how can users protect their information, and why are human reviewers needed in the first place?

Why Humans Review AI Conversations

AI companies use human reviewers to spot errors and improve their tools. They review chat logs to understand what went wrong when the AI fails. This human feedback is then used to train and enhance the AI’s performance.

Companies also check conversations for misuse or safety concerns, relying on human monitors to flag problems. This practice is not new, as companies like Microsoft have used human reviewers for quality assurance in past services like Skype.
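
In practice, automated classifiers usually pre-screen conversations so that only flagged ones reach a human queue. Below is a minimal sketch of that pattern using OpenAI's public moderation endpoint; the escalation function `send_to_human_review` is a hypothetical placeholder, not a real API.

```python
# Sketch: automated pre-screening that escalates flagged chats to human review.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def send_to_human_review(text: str) -> None:
    # Hypothetical placeholder: a real system would enqueue the chat
    # for a human reviewer rather than print it.
    print("Flagged for human review:", text[:80])

def screen_message(text: str) -> None:
    """Check a chat turn against the moderation endpoint and escalate if flagged."""
    result = client.moderations.create(input=text)
    if result.results[0].flagged:
        send_to_human_review(text)

screen_message("How do I reset my account password?")  # benign: not escalated
```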

Privacy Risks for Business and Personal Use

AI tools like ChatGPT, Copilot, and Gemini are designed for both professional and personal settings. In workplaces, these tools help analyze data and speed up tasks, while at home, they act as virtual companions.

However, users may unknowingly share sensitive information. Some companies have already banned ChatGPT to avoid legal risks, such as breaching Health Insurance Portability and Accountability Act (HIPAA) rules in the US. Employees are instead often directed to approved AI tools that offer stricter data protections.

Even in personal chats, human reviewers might still access what users share, putting their privacy at risk.
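
One practical safeguard is to scrub obvious identifiers before a prompt ever leaves the device. The sketch below is illustrative only: the regex patterns are assumptions and will miss many forms of personal data.

```python
# Sketch: client-side redaction of common identifiers before sending a prompt.
# The patterns below are illustrative assumptions, not an exhaustive PII filter.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Email me at jane.doe@example.com or call 555-123-4567."
print(redact(prompt))
# -> "Email me at [EMAIL REDACTED] or call [PHONE REDACTED]."
```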

Which AI Tools Use Human Reviewers?

While it is not possible for AI companies to read every conversation, some AI tools do involve human oversight:

  • ChatGPT: Offers a temporary chat option for privacy, but even these conversations are retained for up to 30 days and may be reviewed.
  • Microsoft Copilot: Confirms that human feedback is used to refine responses, meaning some conversations are monitored.
  • Google Gemini: Warns users not to share confidential information, as human reviewers may read conversations to improve the AI.
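
For business use, a common mitigation is to reach these models through an API under explicit data-handling terms instead of a consumer chat app; OpenAI, for example, states that data sent via its API is not used to train its models by default. A minimal sketch of such a call (the model name and prompt are illustrative):

```python
# Sketch: calling a model via the API rather than a consumer chat app.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment;
# the model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize our meeting notes."},
    ],
)
print(response.choices[0].message.content)
```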

Balancing Convenience and Privacy

While most AI chats are not monitored due to the volume of data, the risk remains. Businesses and individuals must stay cautious, understanding that privacy is not always guaranteed. Choosing AI solutions that run locally or come with clear privacy agreements is key to staying safe in an increasingly AI-powered world.
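
For conversations that should never leave the machine, locally hosted models are one option. The sketch below assumes Ollama is installed, running on its default port, and that the llama3 model has already been pulled:

```python
# Sketch: querying a locally hosted model via Ollama's REST API, so the
# conversation never leaves the machine. Assumes Ollama is running locally
# (default port 11434) and the "llama3" model has already been pulled.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Draft a polite reply to a meeting invite.",
        "stream": False,
    },
    timeout=120,
)
print(response.json()["response"])
```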

As AI continues to grow, users must remain aware of these risks to make informed choices about the technology they use.