The New York Times reports that researchers in China and the United States have discovered a way to surreptitiously activate and command virtual assistants by broadcasting instructions that are inaudible to the human ear. Researchers from UC Berkeley say they can even alter audio files "to cancel out the sound that the speech recognition system was supposed to hear and replace it with a sound that would be transcribed differently by machines while being almost undetectable to the human ear". While at present this is strictly an academic exercise, the researchers say it's foolish to assume hackers won't discover the same methods.
Nicholas Carlini, a fifth-year PhD student at UC Berkeley and one of the co-authors of the paper, said that the team just wanted to see if they could make the previously demonstrated exploit even more stealthy.
This theory was put into practice last year, when Chinese researchers built a device from off-the-shelf parts to send such inaudible commands to virtual assistants.
The Chinese researchers call the technique "DolphinAttack".
Amazon says it has taken steps to make sure Alexa is secure, though it hasn't said what those steps are. Google said its platform has features that mitigate such commands. Apple says the HomePod is programmed not to perform certain tasks, such as unlocking a door, and insists Siri on the iPhone and iPad is safe because the device has to be unlocked before it will execute such commands. With nearly all virtual assistants gaining more features, it's time we address the inherent security loopholes they open up.
There is no U.S. law against broadcasting subliminal messages to humans, let alone machines.
"The song carrying the command could spread through radio, TV or even any media player installed in portable devices like smartphones, potentially impacting millions of users in long distance", the researchers wrote. The receiver must be close to the device, but a more powerful ultrasonic transmitter can help increase the effective range.
The commands aren't discernible to humans but will be picked up by Echo or Home speakers, the research suggests. The researchers were able to embed commands directly into recordings of music and of spoken text.
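To make the idea concrete, here is a minimal, purely illustrative sketch of the principle behind ultrasonic command injection: a "command" signal is amplitude-modulated onto a carrier above the range of human hearing, and nonlinearity in a microphone's hardware (modeled here simply as squaring the signal) recovers the low-frequency envelope. The sample rate, carrier frequency, and the 300 Hz tone standing in for recorded speech are all assumptions for demonstration, not details from the researchers' papers.

```python
import numpy as np

FS = 192_000         # sample rate high enough to represent the carrier
CARRIER_HZ = 25_000  # above the ~20 kHz limit of human hearing
DURATION = 1.0       # seconds

t = np.arange(int(FS * DURATION)) / FS

# Stand-in for a recorded voice command (a 300 Hz tone here);
# an actual attack would use real speech audio.
command = np.sin(2 * np.pi * 300 * t)

# Classic amplitude modulation: offset the envelope so it stays
# positive, then multiply by the ultrasonic carrier.
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
modulated = (1 + 0.5 * command) * carrier

# A nonlinear microphone response (modeled as squaring) produces a
# low-frequency term proportional to the original command envelope,
# which the device's speech recognizer could then transcribe.
demodulated = modulated ** 2
```

A spectrum of `modulated` shows energy only around 25 kHz (inaudible), while `demodulated` regains a component at 300 Hz, mimicking how a microphone's nonlinearity can "hear" what a human cannot.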
"Companies have to ensure user-friendliness of their devices, because that's their major selling point", said Tavish Vaidya, a researcher at Georgetown. He wrote one of the first papers on audio attacks, which he titled "Cocaine Noodles" because devices interpreted the phrase "cocaine noodles" as "OK, Google".
But Carlini explained that their goal is to flag the security problem, and then try to fix it.