AI security expert Dawn Song warns that “adversarial machine learning” could be used to reverse-engineer systems—including those used in defense.
- by Will Knight
- March 25, 2019
Song said adversarial machine learning could be used to attack just about any system built on the technology. “It’s a big problem,” she told the audience. “We need to come together to fix it.”
Adversarial machine learning involves experimentally feeding input into an algorithm to reveal the information it has been trained on, or distorting input in a way that causes the system to misbehave.
By feeding lots of images into a computer vision algorithm, for example, it is possible to reverse-engineer its functioning and elicit certain kinds of outputs, including incorrect ones. Song presented several examples of adversarial-learning trickery that her research group has explored.
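The input-distortion side of this can be sketched with the classic "fast gradient sign" attack, shown below on a toy logistic classifier. Everything here (the weights, the input, the epsilon value) is invented for illustration; a real attack of the kind Song describes would target a trained neural network, but the mechanics are the same: nudge each input feature slightly in the direction that increases the model's loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights and a clean input that the
# model classifies confidently as class 1.
w = np.array([1.0, -2.0, 3.0])
x = np.array([0.5, -0.5, 0.5])

p_clean = sigmoid(w @ x)  # model's probability of class 1

# For a logistic model with true label 1, the gradient of the
# cross-entropy loss with respect to the input is (p - 1) * w.
grad_x = (p_clean - 1.0) * w

# Fast gradient sign step: move every feature a small, fixed
# amount in the direction that increases the loss.
eps = 0.6
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv)
print(f"clean: p={p_clean:.2f}  adversarial: p={p_adv:.2f}")
```

With these made-up numbers the perturbation flips the model's decision (the class-1 probability drops from well above 0.5 to well below it) even though each feature moved by only a fixed small amount, which is why such distortions can be nearly imperceptible in real images.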
How malevolent machine learning could derail AI