AI learns how to fool speech-to-text. That’s bad news for voice assistants


A pair of computer scientists at the University of California, Berkeley developed an AI-based attack that targets speech-to-text systems. With their method, no matter what an audio file sounds like, the text output will be whatever the attacker wants it to be. This one is pretty cool, but it’s also another entry for the “terrifying uses of AI” category. The team, Nicholas Carlini and Professor David Wagner, was able to trick Mozilla’s popular open-source DeepSpeech speech-to-text (speech recognition) system by, essentially, turning it on itself. In a white paper published last week, the researchers state: Given any audio waveform, we can produce…
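At a high level, attacks of this kind treat the recognizer as a differentiable function and optimize a small perturbation of the waveform until the model transcribes an attacker-chosen phrase. The snippet below is a minimal sketch of that general idea, assuming a toy CTC-based recognizer built in PyTorch; the model, the target phrase, and the distortion penalty are illustrative placeholders, not the researchers’ actual DeepSpeech setup.

```python
# Minimal sketch of a targeted audio adversarial attack against a CTC-based
# speech-to-text model. Everything here (model, alphabet, hyperparameters) is
# an illustrative stand-in, not DeepSpeech or the authors' exact method.
import torch
import torch.nn as nn

torch.manual_seed(0)

ALPHABET = " abcdefghijklmnopqrstuvwxyz"   # index 0 is reserved for the CTC blank
VOCAB = len(ALPHABET) + 1

class ToySpeechToText(nn.Module):
    """Stand-in for a CTC-trained recognizer: raw waveform -> per-frame logits."""
    def __init__(self, frame=320):
        super().__init__()
        self.frame = frame
        self.net = nn.Sequential(nn.Linear(frame, 128), nn.ReLU(),
                                 nn.Linear(128, VOCAB))

    def forward(self, wav):                        # wav: (samples,)
        frames = wav[: len(wav) // self.frame * self.frame].view(-1, self.frame)
        return self.net(frames)                    # (time, vocab) logits

def encode(text):
    # Map characters to label indices; +1 keeps index 0 free for the CTC blank.
    return torch.tensor([ALPHABET.index(c) + 1 for c in text])

model = ToySpeechToText()
for p in model.parameters():                       # attack the fixed model, not its weights
    p.requires_grad_(False)

original = torch.randn(16000)                      # one second of placeholder "audio"
target = encode("open the door")                   # transcription the attacker wants
delta = torch.zeros_like(original, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-3)
ctc = nn.CTCLoss(blank=0)

for step in range(500):
    adv = torch.clamp(original + delta, -1.0, 1.0)             # keep a valid waveform
    log_probs = model(adv).log_softmax(-1).unsqueeze(1)        # (time, batch=1, vocab)
    loss = ctc(log_probs,
               target.unsqueeze(0),
               torch.tensor([log_probs.size(0)]),
               torch.tensor([len(target)]))
    loss = loss + 0.01 * delta.norm()                          # penalize audible distortion
    opt.zero_grad()
    loss.backward()
    opt.step()

# `original + delta` sounds nearly unchanged but now decodes toward the target text.
```

The sketch only shows the shape of the optimization loop; the actual attack optimizes against DeepSpeech itself and keeps the added noise small enough that the audio sounds essentially identical to a human listener.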

This story continues at The Next Web