COMPUTER VISION (& SECURITY) Megyeri István Intelligent Human Computer Interactions

Agenda ■ Introduction – Deep learning basics – Convolutional Neural Networks ■ Image classification ■ Object localization ■ Security aspects of Deep Learning – Adversarial Examples – Possible Attack scenarios – Defenses ■ Summary

Deep Learning basics - Architecture

Deep learning basics - Why now? ■ New training algorithms were needed – http://ruder.io/optimizing-gradient-descent/ ■ Deep networks perform well on very large datasets ■ Computation power (GPU)
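As a toy illustration of what those optimizer improvements look like, here is a minimal SGD-with-momentum sketch on the stand-in loss L(w) = w² (momentum is one of the methods surveyed at the ruder.io link above; the hyperparameters here are made up):

```python
# Minimal SGD-with-momentum sketch on the stand-in loss L(w) = w**2.
w, v = 5.0, 0.0            # parameter and velocity
lr, momentum = 0.1, 0.9    # made-up hyperparameters
for step in range(100):
    grad = 2 * w           # dL/dw for L = w**2
    v = momentum * v - lr * grad
    w = w + v
print(w)                   # oscillates toward the minimum at w = 0
```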

Deep learning basics - Why now? ■ Sigmoid-like activations are flat at the ends, so their derivative is close to zero ■ The ReLU derivative for positive input is always 1 ■ Further activation functions: [1]
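A quick NumPy check of both derivative claims, as a minimal sketch (printed values rounded):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])

# Sigmoid derivative s(z) * (1 - s(z)): nearly zero at the flat ends,
# which is why gradients vanish through deep stacks of sigmoid layers.
print(sigmoid(z) * (1 - sigmoid(z)))  # ~[0.00005 0.197 0.25 0.197 0.00005]

# ReLU derivative: exactly 1 for every positive input, 0 otherwise,
# so the gradient passes through active units unchanged.
print((z > 0).astype(float))          # [0. 0. 0. 1. 1.]
```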

Convolutional Neural Networks ■ Classic structure: fully connected networks – Permuting the features gives the same result (see the sketch below)
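A minimal sketch of the permutation point: shuffling the input features of a dense layer, with the weight columns shuffled the same way, leaves the output unchanged, so fully connected layers carry no notion of spatial layout (the sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)        # input features
W = rng.normal(size=(4, 8))   # first dense-layer weights

perm = rng.permutation(8)     # any fixed shuffling of the features
x_perm = x[perm]              # permuted input
W_perm = W[:, perm]           # weight columns permuted the same way

print(np.allclose(W @ x, W_perm @ x_perm))  # True: feature order is irrelevant
```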

CNN - Hierarchy of features

CNN - How does it work? ■ https://devblogs.nvidia.com/deep-learning-nutshell-core-concepts/

CNN – Parameter sharing
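A back-of-the-envelope sketch of what parameter sharing buys: a 3x3 filter is slid over the whole image, so its few weights are reused at every spatial position, while a dense layer pays for every pixel-to-unit connection (the 32x32x3 input and 16 output channels are illustrative assumptions):

```python
# Parameter-count comparison (pure arithmetic, no framework needed).
H, W, C_in, C_out = 32, 32, 3, 16

conv_params = (3 * 3 * C_in + 1) * C_out              # 9*C_in weights + bias per filter
dense_params = (H * W * C_in + 1) * (H * W * C_out)   # every pixel to every output unit

print(conv_params)   # 448
print(dense_params)  # 50348032 — roughly 10**5 times more than the conv layer
```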

Image classification

Object localization ■ https://www.youtube.com/watch?v=EhcpGpFHCrw ■ Output: – Is there any object? – Mid point – Width & height – Class label
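One hypothetical way to pack that output into a single target vector, mirroring the list above (the layout, normalization, and three-class setup are illustrative assumptions, not from the slides):

```python
import numpy as np

# [p_object, x_mid, y_mid, width, height, c1, c2, c3], coordinates
# normalized to [0, 1] relative to the image.
y = np.array([1.0, 0.45, 0.60, 0.30, 0.25, 0.0, 1.0, 0.0])

p_object = y[0]               # is there any object?
x_mid, y_mid = y[1], y[2]     # mid point
width, height = y[3], y[4]    # width & height
class_label = y[5:].argmax()  # class label (here: class 1)
```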

Landmark detection ■ http://apicloud.me/apis/facemark/docs/ ■ Output: – Is an object present or not? – Marker point coordinates ■ A few applications: – Recognize emotions (https://www.youtube.com/watch?v=H5aaYGRGxDo) – AR: draw a hat/crown on the face (https://www.youtube.com/watch?v=Pc2aJxnmzh0) – Pose detection (https://www.youtube.com/watch?v=pW6nZXeWlGM) – …

Going deeper

Adversarial examples ■ Explaining and Harnessing Adversarial Examples[2]

Adversarial examples ■ Intriguing properties of neural networks [3]

Adversarial examples - Unrecognizable images ■ Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images [4]

Adversarial examples in the real world ■ Transformation invariant: – https://blog.openai.com/robust-adversarial-inputs/ ■ Adversarial examples in the real world: – https://www.youtube.com/watch?time_continue=7&v=gkKyBmULVvM ■ …

How does fooling work? ■ Training: – "What happens to the score of the correct class when I wiggle this parameter?" ■ Generating adversarial examples: – "What happens to the score of (whatever class you want) when I wiggle this pixel?"
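Wiggling pixels toward a wrong answer is exactly what the fast gradient sign method (FGSM) from [2] does. A minimal PyTorch sketch, assuming a classifier `model` that outputs logits and a correctly labeled batch `(x, y)` (both are placeholders, not provided by the slides):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # loss of the correct class
    loss.backward()                      # "what happens when I wiggle this pixel?"
    x_adv = x + eps * x.grad.sign()      # step each pixel to increase the loss
    return x_adv.clamp(0, 1).detach()    # stay a valid image
```

For a targeted attack one would instead step to decrease the loss of the chosen target class.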

Attack scenarios ■ Non-targeted vs. targeted ■ White-box vs. black-box attacks – With or without probing – "Adversarial examples that affect one model often affect another model, even if the two models have different architectures or were trained on different training sets, so long as both models were trained to perform the same task." [5] – This holds even for decision trees, SVM classifiers, logistic regression, k-nearest neighbors, … ■ Digital attack vs. physical attack
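A short sketch of the transfer scenario quoted from [5], reusing the `fgsm` helper above; `surrogate` and `victim` are hypothetical models trained for the same task:

```python
x_adv = fgsm(surrogate, x, y, eps=0.03)  # white-box attack on the surrogate

with torch.no_grad():                    # black box: only the victim's outputs
    pred = victim(x_adv).argmax(dim=1)
print((pred != y).float().mean())        # fraction of examples that transfer
```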

Defense techniques ■ Regularization ■ Adversarial training ■ Gradient regularization ■ Ensemble adversarial training ■ Denoising
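As one concrete defense, a minimal adversarial-training sketch in the spirit of [2], mixing clean and FGSM batches with the helper above (`model`, `loader`, and `opt` are assumed placeholders):

```python
for x, y in loader:
    x_adv = fgsm(model, x, y, eps=0.03)  # craft against the current model
    loss = 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
    opt.zero_grad()                      # clears grads left over from fgsm()
    loss.backward()
    opt.step()
```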

Summary

■ There are huge differences between humans and machines – Deep networks do not generalize to "handmade" images ■ Deep learning works pretty well in practice (for natural images) – Image classification – Object localization – Landmark detection

References ■ [1] https://en.wikipedia.org/wiki/Activation_function ■ [2] https://arxiv.org/abs/1412.6572 ■ [3] https://arxiv.org/abs/1312.6199 ■ [4] https://arxiv.org/abs/1412.1897 ■ [5] https://arxiv.org/pdf/1605.07277 ■ [6] https://arxiv.org/pdf/1712.09936
