Social media giant Facebook is just one in a long line of corporations that have recently admitted to using humans to listen to audio recordings. These recordings, which users agreed to, were supposed to be used only to improve the accuracy of transcription services, and they were only supposed to be “heard” by artificial intelligence. Instead, Facebook hired contractors across the globe to review both the audio and the AI transcriptions for accuracy.
Google, Apple, Microsoft, and Amazon have all also admitted to similar practices.
Amazon admitted in April 2019 that it used a human workforce to check quality for its Alexa service. These contractors often reported witnessing disturbing, and potentially criminal, recordings. In July of the same year, Google and Apple both acknowledged similar practices. In Google’s case, a contractor leaked hundreds of audio clips to a Belgian news service; more than one in ten of the recordings had been made accidentally, meaning the people being recorded had no idea it was happening.
Apple similarly graded its popular Siri assistant, noting that “a small portion of Siri requests…” were reviewed and analyzed by human ears. Apple argued, however, that it properly protected customer data because audio recordings were not linked to an Apple ID. Nonetheless, Siri grading was suspended shortly after the practice was made public.
By August, Amazon, Google, and Apple had all initiated changes to their quality assurance programs. Amazon specifically made opting in or out more visible by adding a “help improve services” setting that users could enable or disable at will.
Another major player in the AI assistant game is Microsoft, which revealed that it, too, utilized a human workforce to perform quality assurance testing on Cortana.
Human Oversight Is Common
According to most industry experts, human oversight is often needed to ensure artificial intelligence meets and exceeds customer expectations, and most users generally expect some level of review. Problems occur, however, when these virtual assistants begin recording private conversations without prompting. This happens when “wake words” are misinterpreted: Alexa, for example, might be inadvertently triggered by mishearing the names Alex or Alice, or when ambient noise is heard as the trigger phrase.
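To illustrate why names like Alex or Alice can set off a device, here is a minimal sketch of fuzzy wake-word matching. This is not Amazon’s actual detection algorithm (real systems match acoustic features, not spelled-out text); it simply shows how any similarity threshold loose enough to tolerate mishearing will also accept near-miss words. The wake word, threshold value, and function name are all illustrative assumptions.

```python
# Hypothetical sketch: a text-similarity stand-in for wake-word detection,
# showing how a loose threshold produces false triggers on similar names.
from difflib import SequenceMatcher

WAKE_WORD = "alexa"
THRESHOLD = 0.55  # assumed value; lower thresholds trade precision for sensitivity


def is_wake_word(heard: str, wake: str = WAKE_WORD, threshold: float = THRESHOLD) -> bool:
    """Return True if the heard word is similar enough to the wake word."""
    similarity = SequenceMatcher(None, heard.lower(), wake).ratio()
    return similarity > threshold


print(is_wake_word("Alexa"))       # True  - the intended trigger
print(is_wake_word("Alex"))        # True  - near-miss name, false trigger
print(is_wake_word("Alice"))       # True  - another name, false trigger
print(is_wake_word("television"))  # False - unrelated word, no trigger
```

The false triggers on “Alex” and “Alice” mirror the accidental recordings described above: the device cannot afford to miss its real wake word, so it errs on the side of listening.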
Skype and other services are also under scrutiny, with some claiming that private information, including telephone numbers and intimate online encounters, was played for human quality assurance testers.
Is It a GDPR Violation?
In the UK, the Information Commissioner’s Office has launched an investigation to determine whether these practices breach the General Data Protection Regulation (GDPR). Similar inquiries are taking place in the US and Ireland.
How to Protect Yourself
For the most part, information that is supposed to go only to AI stays that way. But users concerned that a stranger might overhear their private conversations should take steps to protect themselves. Depending on the platform, most will have the option to opt out of automatic recordings. Similarly, many devices can be switched off so that they are not constantly listening for a wake phrase.
Until consumer protections are clearly outlined, companies may continue to blur the line between keeping and breaking the trust of their customers. Fortunately, at least for now, most major companies have halted the practice of over-sharing in the name of quality assurance.
By Steven Wyer
- Who’s Listening to Your Private Conversations? February 5, 2020