One pixel attack for fooling convolutional neural network



One pixel attack for fooling convolutional neural network
Danilo Vasconcellos Vargas, Kyushu University

Nature: Atari Games

Nature: AlphaGo

AI is getting extremely intelligent!

Convolutional neural network (CNN)

Existing attacks against CNNs

Adversarial pixel perturbation [2]

[2] Goodfellow, I.J., Shlens, J. and Szegedy, C., 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
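
As a rough illustration of what an adversarial pixel perturbation looks like in practice, the following is a minimal PyTorch sketch of the fast gradient sign method from [2]. It assumes a classifier model that maps a batch of images with values in [0, 1] to logits; the function name and the epsilon value are illustrative assumptions, not taken from the slides.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    # Fast gradient sign method: nudge every pixel by +/- epsilon
    # in the direction that increases the classification loss.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adversarial = images + epsilon * images.grad.sign()
    # Clip back to the valid [0, 1] image range.
    return adversarial.clamp(0.0, 1.0).detach()

Unlike the one-pixel attack presented later, FGSM changes all pixels at once, each by a small amount.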

Existing attacks against CNNs

Artificial images [3]

[3] Nguyen, A., Yosinski, J. and Clune, J., 2015. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 427-436).

Our work: just one pixel

Figure. One image can be simultaneously perturbed to nine other classes.

Summary of the results: On the CIFAR-10 dataset (10 classes), each test image can be perturbed to 2 target classes on average. About 70% of images can be perturbed to at least one target class (sensitive images), with 75% classification confidence on average.
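
To make the "just one pixel" idea concrete, here is a minimal sketch of a one-pixel attack using differential evolution, the family of optimizer employed in this work. All names and settings (predict, population size, iteration count) are illustrative assumptions, not the paper's exact configuration; predict is assumed to map a single H x W x 3 NumPy image in [0, 1] to a vector of class probabilities.

from scipy.optimize import differential_evolution

def one_pixel_attack(predict, image, target_class, max_iter=100):
    h, w, _ = image.shape
    # Each candidate encodes one pixel: (row, col, r, g, b).
    bounds = [(0, h - 1), (0, w - 1), (0, 1), (0, 1), (0, 1)]

    def apply(candidate):
        perturbed = image.copy()
        row, col = int(candidate[0]), int(candidate[1])
        perturbed[row, col] = candidate[2:5]
        return perturbed

    # differential_evolution minimizes, so negate the probability
    # the model assigns to the target class.
    def objective(candidate):
        return -predict(apply(candidate))[target_class]

    result = differential_evolution(objective, bounds,
                                    maxiter=max_iter, popsize=20,
                                    recombination=1.0, seed=0)
    return apply(result.x)

Differential evolution needs only the model's output probabilities, not its gradients, which is why searching over a single pixel remains feasible in a black-box setting.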

AI is getting extremely intelligent!?

What is it learning?
