University of Wyoming researchers recently conducted a study to determine whether state-of-the-art image-recognition neural networks are susceptible to false positives. The researchers tested the networks by generating synthetic imagery with evolutionary algorithms: an algorithm would produce an image and then alter it slightly, and both the altered copy and the original were shown to a deep neural network trained on a dataset of 1.3 million images. If the network recognized the copy as any of its known object categories with more certainty than the original, the researchers kept the copy and repeated the process. Eventually, the technique produced dozens of images that the neural network recognized with more than 99-percent confidence. The researchers also found the networks could routinely be fooled by images of pure static. Using a slightly different evolutionary technique, they generated another set of images that all looked like static to humans, yet the neural networks identified them, with upward of 99-percent certainty, as centipedes, cheetahs, and peacocks. The findings suggest neural networks develop a variety of visual cues that help them identify objects, according to University of Wyoming researcher Jeff Clune.
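The mutate-compare-keep loop described above can be sketched as a simple hill-climbing evolutionary algorithm. The sketch below is illustrative only: the `confidence` function is a hypothetical stand-in (similarity to a fixed random template), not the deep network the researchers actually queried, and `evolve`, `TEMPLATE`, and the 8x8 image size are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained network's confidence in one class:
# similarity to a fixed random template. The actual study queried deep
# networks trained on ~1.3 million images; none of that is modeled here.
TEMPLATE = rng.random((8, 8))

def confidence(img):
    """Score in [0, 1]; higher means the stand-in 'classifier' is more confident."""
    return 1.0 - float(np.abs(img - TEMPLATE).mean())

def evolve(score, shape=(8, 8), steps=5000, sigma=0.1):
    """Hill-climbing evolutionary loop matching the article's description:
    alter a copy of the image slightly and keep the copy only if the
    classifier's confidence rises; otherwise keep the original."""
    img = rng.random(shape)              # start from random "static"
    best = score(img)
    for _ in range(steps):
        candidate = img.copy()
        # Mutate: nudge one randomly chosen pixel, clipped to valid range.
        i, j = rng.integers(0, shape[0]), rng.integers(0, shape[1])
        candidate[i, j] = np.clip(candidate[i, j] + rng.normal(0.0, sigma),
                                  0.0, 1.0)
        s = score(candidate)
        if s > best:                     # keep the altered copy only when it
            img, best = candidate, s     # scores higher than the original
    return img, best

fooling_image, final_confidence = evolve(confidence)
```

Against this toy scoring function the loop drives confidence well above 90 percent, mirroring how, against a real network, the same selection pressure eventually yields images classified with over 99-percent confidence.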
More info: Wired News (01/05/15), Kyle VanHemert