COMPUTER VISION (& SECURITY) Megyeri István Intelligent Human-Computer Interactions
Agenda
■ Introduction
  – Deep learning basics
  – Convolutional Neural Networks
■ Image classification
■ Object localization
■ Security aspects of Deep Learning
  – Adversarial examples
  – Possible attack scenarios
  – Defenses
■ Summary
Deep learning basics - Architecture
Deep learning basics - Why now?
■ New training algorithms were needed
  – http://ruder.io/optimizing-gradient-descent/
■ Deep models perform well on very large datasets
■ Computational power (GPUs)
Deep learning basics - Why now?
■ Sigmoid-like activations are flat at the ends, so the derivative is close to zero (vanishing gradients; see the sketch below)
■ The ReLU derivative for positive input is always 1
■ Further activation functions [1]
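To make the contrast concrete, here is a minimal numpy sketch (not from the lecture) that evaluates both derivatives at a few inputs:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)           # peaks at 0.25, vanishes for large |x|

def relu_grad(x):
    return (x > 0).astype(float)   # exactly 1 for every positive input

xs = np.array([-10.0, -2.0, 0.0, 2.0, 10.0])
for x, sg, rg in zip(xs, sigmoid_grad(xs), relu_grad(xs)):
    print(f"x={x:6.1f}  sigmoid'={sg:.6f}  relu'={rg:.0f}")
```

At x = 10 the sigmoid derivative is about 4.5e-05; backpropagation multiplies such factors layer by layer, so deep sigmoid stacks train slowly, while ReLU passes the gradient through unchanged for positive inputs.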
Convolutional Neural Networks
■ Classic structure: fully connected networks
  – A fixed permutation of the input features gives the same result, i.e. the network has no notion of spatial structure (see the sketch below)
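A minimal numpy sketch of that permutation claim, assuming a one-hidden-layer ReLU network (the model and sizes are illustrative): shuffling the pixels and the first-layer weight columns with the same permutation leaves the output unchanged, so a fully connected network cannot exploit pixel neighborhoods the way a CNN does.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=784)                 # a flattened 28x28 "image"
W1 = rng.normal(size=(64, 784))          # first fully connected layer
W2 = rng.normal(size=(10, 64))           # output layer

def fc_net(x, W1):
    return W2 @ np.maximum(W1 @ x, 0.0)  # one hidden ReLU layer

perm = rng.permutation(784)              # any fixed pixel shuffling
out_original = fc_net(x, W1)
out_permuted = fc_net(x[perm], W1[:, perm])  # permute pixels AND weight columns

print(np.allclose(out_original, out_permuted))  # True: pixel order is lost
```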
CNN - Hierarchy of features
CNN - How does it work?
■ https://devblogs.nvidia.com/deep-learning-nutshell-core-concepts/
CNN – Parameter sharing
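As an illustration of parameter sharing (a sketch, not the lecture's code): a naive "valid" convolution applies the same small kernel at every spatial position, so the layer's weight count is the kernel size, independent of the image size.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive "valid" 2D convolution (really cross-correlation, as in most
    deep learning frameworks): the SAME weights are reused at every position."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

image = np.random.rand(28, 28)            # e.g. a grayscale digit
kernel = np.random.randn(3, 3)            # one shared filter: only 9 weights
print(conv2d_valid(image, kernel).shape)  # (26, 26)
```

A fully connected layer mapping the 28x28 input to the 26x26 output would need 784 * 676 ≈ 530k weights; the shared kernel needs 9, and translating the input translates the output by construction.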
Image classification
Object localization
■ https://www.youtube.com/watch?v=EhcpGpFHCrw
■ Output (see the sketch below):
  – Is there any object?
  – Midpoint (box center coordinates)
  – Width & height
  – Class label
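A minimal sketch of how such an output vector might be read out. The layout [p_obj, bx, by, bw, bh, class scores] and all values here are hypothetical, chosen only to match the list above:

```python
import numpy as np

# Hypothetical output of a localization network for one image:
# [p_obj, bx, by, bw, bh, c1, c2, c3]  (three classes assumed for illustration)
y = np.array([0.92, 0.48, 0.55, 0.30, 0.40, 0.05, 0.88, 0.07])

p_obj = y[0]                       # is there any object?
if p_obj > 0.5:
    bx, by = y[1], y[2]            # midpoint (relative coordinates)
    bw, bh = y[3], y[4]            # width & height
    label = int(np.argmax(y[5:]))  # class label
    print(f"object: class={label}, center=({bx}, {by}), size=({bw}, {bh})")
else:
    print("no object")
```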
Landmark detection
■ http://apicloud.me/apis/facemark/docs/
■ Output:
  – Is an object present or not?
  – Marker point coordinates
■ A few applications:
  – Recognize emotions (https://www.youtube.com/watch?v=H5aaYGRGxDo)
  – AR: draw a hat/crown on the face (https://www.youtube.com/watch?v=Pc2aJxnmzh0)
  – Pose detection (https://www.youtube.com/watch?v=pW6nZXeWlGM)
  – …
Going deeper
Adversarial examples
■ Explaining and Harnessing Adversarial Examples [2]
Adversarial examples
■ Intriguing properties of neural networks [3]
Adversarial examples - Unrecognizable images
■ Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images [4]
Adversarial examples in the real world
■ Transformation invariant:
  – https://blog.openai.com/robust-adversarial-inputs/
■ Adversarial examples in the real world:
  – https://www.youtube.com/watch?time_continue=7&v=gkKyBmULVvM
■ …
How does fooling work?
■ Training: "What happens to the score of the correct class when I wiggle this parameter?"
■ Generating adversarial examples: "What happens to the score of (whatever class you want) when I wiggle this pixel?" (see the sketch below)
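That pixel-wiggling question is exactly the Fast Gradient Sign Method of [2]: take the gradient of the loss with respect to the input instead of the parameters, and step in its sign direction. A self-contained sketch on a toy logistic-regression "model" (model and numbers are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.1    # toy stand-in for a trained network

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_input_grad(x, y):
    """Cross-entropy loss and its gradient w.r.t. the INPUT x.
    Training differentiates w.r.t. the parameters ("wiggle this parameter");
    attacking differentiates w.r.t. the input ("wiggle this pixel")."""
    p = sigmoid(w @ x + b)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad_x = (p - y) * w           # chain rule through the logit w @ x + b
    return loss, grad_x

x = rng.normal(size=16)            # the "image"
y = 1.0                            # its correct label
eps = 0.1                          # perturbation budget

_, g = loss_and_input_grad(x, y)
x_adv = x + eps * np.sign(g)       # FGSM: one signed step that raises the loss

print("clean score:      ", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
```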
Attack scenarios
■ Non-targeted vs. targeted (see the sketch below)
■ White-box vs. black-box attack
  – With or without probing
  – "Adversarial examples that affect one model often affect another model, even if the two models have different architectures or were trained on different training sets, so long as both models were trained to perform the same task." [5]
  – This transfer even holds for decision trees, SVM classifiers, logistic regression, k-nearest neighbors, …
■ Digital attack vs. physical attack
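Continuing with a toy linear model (illustrative, not the lecture's code), the two goals differ only in which label's loss the attacker follows, and in which direction:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 16))          # toy 3-class linear "model"

def grad_ce_wrt_x(x, label):
    """Gradient of the cross-entropy loss for `label` w.r.t. the input x."""
    z = W @ x
    p = np.exp(z - z.max())
    p /= p.sum()                      # softmax probabilities
    onehot = np.eye(3)[label]
    return W.T @ (p - onehot)         # chain rule: dL/dz = p - onehot

x = rng.normal(size=16)
y_true = int(np.argmax(W @ x))        # treat the clean prediction as "true"
y_target = (y_true + 1) % 3           # a class the attacker wants instead
eps = 0.5                             # exaggerated budget for the toy model

# Non-targeted: ASCEND the loss of the true label (any misclassification wins).
x_untargeted = x + eps * np.sign(grad_ce_wrt_x(x, y_true))

# Targeted: DESCEND the loss of the attacker's chosen label.
x_targeted = x - eps * np.sign(grad_ce_wrt_x(x, y_target))

print("clean prediction:       ", np.argmax(W @ x))
print("non-targeted prediction:", np.argmax(W @ x_untargeted))
print("targeted prediction:    ", np.argmax(W @ x_targeted))
```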
Defense techniques
■ Regularization
■ Adversarial training (see the sketch below)
■ Gradient regularization
■ Ensemble adversarial training
■ Denoising
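Of these, adversarial training is the most direct to sketch: each update step trains on adversarially perturbed inputs instead of clean ones. A minimal version with FGSM perturbations on a toy logistic-regression problem (data, model, and hyperparameters are illustrative assumptions, not the lecture's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs in 2D, labels 0 and 1.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100, dtype=float)
w, b, lr, eps = np.zeros(2), 0.0, 0.1, 0.2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(50):
    p = sigmoid(X @ w + b)
    # Inner step: FGSM perturbation of every training point (raise its loss).
    X_adv = X + eps * np.sign((p - y)[:, None] * w)
    # Outer step: ordinary gradient descent, but on the perturbed batch.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
print("accuracy on clean training data:", acc)
```

The model is thereby pushed to classify an eps-neighborhood of each training point correctly, which is why adversarial training typically trades a little clean accuracy for robustness against this specific attack.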
Summary
■ There are huge differences between humans and machines
  – Models do not generalize to "handmade" (adversarial or synthetic) images
■ Deep learning works pretty well in practice (for natural images)
  – Image classification
  – Object localization
  – Landmark detection