Researchers find 1,000 words that 'accidentally' activate Alexa, Siri and Google

WION Web Team
New Delhi, India | Published: Jul 02, 2020, 08:11 PM (IST)

Alexa and Siri's trigger words. Photograph: Reuters

Some everyday words may be switching on your voice assistants without your knowledge. A group of researchers has compiled a list of nearly 1,000 words that can "accidentally" activate assistants such as Amazon's Alexa, Apple's Siri, and Google Assistant.

The researchers are from Ruhr-Universität Bochum and the Max Planck Institute for Cyber Security and Privacy in Germany. 


The study was conducted on Amazon's Alexa, Apple's Siri, Google Assistant, Microsoft Cortana, as well as three virtual assistants exclusive to the Chinese market, from Xiaomi, Baidu, and Tencent, according to a report from the Ruhr-Universität Bochum news blog.

The researchers conducted the study by switching on the virtual assistants and leaving them alone in a room with a television playing episodes from several series and sitcoms such as Game of Thrones, Modern Family, and House of Cards, with English, German, and Chinese audio tracks for each.


As the episodes played, certain words activated the assistants, which the researchers detected by watching for each device's LED indicator to light up. Every time the LED turned on, the team cross-referenced the dialogue being spoken at that moment to identify the triggering word.
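The cross-referencing step described above can be sketched roughly as follows. This is a hypothetical illustration, not the researchers' actual tooling: it matches the timestamps at which a device's LED lit up against the episode's subtitle track, allowing a small lag between speech and the device's response.

```python
# Hypothetical sketch: align LED-activation timestamps with a subtitle
# track to recover which dialogue line triggered the assistant.
# All function names and data below are illustrative assumptions.

def find_trigger_lines(led_timestamps, subtitles, lag=2.0):
    """subtitles: list of (start_sec, end_sec, text) tuples from the
    episode's subtitle file; led_timestamps: seconds into playback at
    which the assistant's LED turned on; lag: allowed delay between
    the spoken line and the LED response."""
    matches = []
    for t in led_timestamps:
        for start, end, text in subtitles:
            if start <= t <= end + lag:
                matches.append((t, text))
                break
    return matches

# Illustrative subtitle data, not actual dialogue from the study.
subtitles = [
    (10.0, 12.5, "Unacceptable, the election results are in."),
    (30.0, 33.0, "He was a serious cook, Sierra."),
]
print(find_trigger_lines([12.0, 31.5], subtitles))
```

In practice the researchers would also need to handle false LED readings and overlapping dialogue, but the basic idea is this timestamp alignment.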

Once an assistant is activated by one of these words, the device uses local speech-analysis software to determine whether the sound was intended as an activation command. If the device concludes that it was, it sends an audio recording several seconds long to cloud servers for additional analysis.
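The two-stage process described above can be sketched in a few lines. This is a minimal illustration of the general pattern, not any vendor's actual implementation; the threshold, clip length, and scoring function are all assumptions.

```python
# Hypothetical sketch of two-stage wake-word handling: a lightweight
# local check decides whether a sound was meant as an activation
# command; only if it passes is a short recording forwarded to the
# cloud for deeper analysis. Names and values are illustrative.

LOCAL_THRESHOLD = 0.5   # assumed local confidence cutoff

def local_wake_score(audio):
    # Stand-in for the on-device acoustic model: returns a fake
    # confidence based on whether the sound resembles "alexa".
    return 0.9 if "alexa" in audio else 0.2

def handle_sound(audio, upload):
    """Return True if the recording was forwarded to the cloud."""
    if local_wake_score(audio) >= LOCAL_THRESHOLD:
        upload(audio)  # send a few seconds of audio for analysis
        return True
    return False       # rejected locally; nothing leaves the device

sent = []
handle_sound("alexa play music", sent.append)   # forwarded
handle_sound("letter mail today", sent.append)  # rejected locally
print(sent)
```

The privacy tension the researchers describe lives in that threshold: set it low and more private audio leaves the device; set it high and the assistant misses genuine commands.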

Privacy concerns

While the researchers noted that developers may have allowed a wide range of activation words to make the devices easier to use, the finding raises serious privacy concerns.

"The devices are intentionally programmed in a somewhat forgiving manner, because they are supposed to be able to understand their humans. Therefore, they are more likely to start up once too often rather than not at all," researcher Dorothea Kolossa said.

"From a privacy point of view, this is of course alarming, because sometimes very private conversations can end up with strangers. From an engineering point of view, however, this approach is quite understandable, because the systems can only be improved using such data. The manufacturers have to strike a balance between data protection and technical optimization," said Thorsten Holz, Professor at Ruhr-Universität Bochum.
