
Neural Networks with Google’s TensorFlow
Shuo Zhang
Computational Discourse Analysis
11/22/16

Overview
1. Neural Networks basics
2. Neural Networks specifics
3. Neural Networks with Google’s TensorFlow
4. Coreference: singleton classification example

Resources
•  Deep learning course (Google) @ Udacity
•  Machine learning course (Stanford, Andrew Ng) @ Coursera
•  Neural Networks course (Geoffrey Hinton) @ Coursera

1. NN basics

From linear to non-linear classifiers

Pros and cons of linear models

Pros:
•  Fast
•  Numerically stable
•  Derivative is constant

Cons:
•  Limited to modeling additive features
•  Multiplicative or higher-order features lead to a huge parameter space, so they are not suitable for non-linear mapping

Conclusion: we want to keep the parameters inside linear functions while still being able to do non-linear mappings efficiently.

From logistic regression to neural networks

Inserting a non-linear layer: Rectified Linear Unit (ReLU)

Intuition: how a NN makes non-linear mappings possible

Types of neural networks
•  Feed-forward
•  Feedback
•  Self-Organizing Map (SOM)
•  …

2. NN specifics

Multinomial logistic regression as the basic unit in NN

Softmax – turns the outputs of linear functions into probability vectors
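A minimal NumPy sketch of softmax (my own illustration, not code from the slides):

```python
import numpy as np

def softmax(logits):
    # Subtract the max first: exp() of large logits would otherwise overflow.
    exps = np.exp(logits - np.max(logits))
    return exps / np.sum(exps)

print(softmax(np.array([2.0, 1.0, 0.1])))  # -> approx. [0.659, 0.242, 0.099]
```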

One-hot encoding
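A minimal NumPy sketch (again mine, not the slides'): encode integer class labels as one-hot vectors.

```python
import numpy as np

def one_hot(labels, num_classes):
    # Each integer label becomes a vector with a single 1 at that index.
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1.0
    return encoded

print(one_hot([0, 2, 1], 3))  # identity-style rows: [1,0,0], [0,0,1], [0,1,0]
```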

Cross entropy – measuring how far the predicted probability vector is from the gold (one-hot) label
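A matching NumPy sketch (my own illustration) of the cross-entropy D(S, L) between a softmax prediction S and a one-hot label L:

```python
import numpy as np

def cross_entropy(probs, one_hot_label):
    # D(S, L) = -sum_i L_i * log(S_i); only the gold-class term is non-zero.
    return -np.sum(one_hot_label * np.log(probs))

# The closer the prediction is to the one-hot label, the lower D.
print(cross_entropy(np.array([0.7, 0.2, 0.1]),
                    np.array([1.0, 0.0, 0.0])))  # ~0.357
```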

Putting it together again
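Putting the three pieces together in NumPy (my own sketch of the pipeline the slide describes: linear model, then softmax, then cross-entropy averaged over the batch):

```python
import numpy as np

def average_loss(X, W, b, one_hot_labels):
    logits = X.dot(W) + b                                 # linear model: WX + b
    shifted = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = shifted / shifted.sum(axis=1, keepdims=True)  # row-wise softmax
    # Mean cross-entropy between predictions and one-hot gold labels.
    return -np.mean(np.sum(one_hot_labels * np.log(probs), axis=1))
```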

MLR to NN

ReLU – a non-linear activation function to put in the hidden layer
ReLU is one of many possible non-linear activation functions. https://en.wikipedia.org/wiki/Activation_function
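A NumPy sketch (mine, not the slides' code) of ReLU and of inserting it between two linear layers; the parameters W1, b1, W2, b2 are hypothetical:

```python
import numpy as np

def relu(x):
    # ReLU(x) = max(0, x), applied element-wise.
    return np.maximum(0.0, x)

# Hidden linear layer -> ReLU -> output logits. Without the non-linearity,
# the two linear layers would collapse into a single linear map.
def two_layer_forward(X, W1, b1, W2, b2):
    hidden = relu(X.dot(W1) + b1)
    return hidden.dot(W2) + b2
```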

Training a neural network
•  Basically similar to training a linear model: minimize a cost function with a method like gradient descent (unlike the linear case, though, the NN cost is generally not convex)
•  Example cost function for a logistic activation:
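For example, the standard logistic-regression cost used in Ng's course cited above (my rendering; the slide's own formula is not in the text):

$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\Big[\, y^{(i)}\log h_\theta(x^{(i)}) + \big(1 - y^{(i)}\big)\log\big(1 - h_\theta(x^{(i)})\big) \Big]$$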

Cost function – the same idea applies to linear classifiers and NNs
•  The cost function is a function of the parameters that captures the difference between the predicted and gold labels, so we want to minimize it.
•  How to minimize? Use gradient descent: at each iteration, adjust the weights.
•  How to adjust the weights? Subtracting the gradient (derivative) moves you toward the minimum, as in the update rule below.
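In symbols, with learning rate $\alpha$ (my notation):

$$W \leftarrow W - \alpha\,\frac{\partial J}{\partial W}$$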

Gradient descent
•  Keep in mind that W is a matrix, so we need to compute the partial derivative with respect to each element of W, summing the contributions over the training examples.

Gradient descent flavors
•  Batch GD: the classic approach; sums the derivatives over all training examples at each iteration to perform one weight update. Very slow but more stable; almost never used today.
•  Stochastic GD: takes only one example at each iteration and uses the gradient computed from that example to adjust the weights. Fast, but less stable behavior.
•  Mini-batch GD (in between): takes a mini-batch of examples (such as 100 to 2000) and sums up their derivatives to perform an update. Balances stability and speed (also good results); most used today. A loop sketch follows this list.
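A sketch of the mini-batch loop (my own illustration; grad_fn is a hypothetical callback that returns the average gradient over a batch):

```python
import numpy as np

def minibatch_gd(w, X, y, grad_fn, batch_size=128, lr=0.01, epochs=10):
    n = len(y)
    for _ in range(epochs):
        order = np.random.permutation(n)          # reshuffle each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            w = w - lr * grad_fn(w, X[idx], y[idx])  # one update per mini-batch
    return w
```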

Neural network training: forward/backward propagation
Intuition from linear classifiers – repeat:
•  Compute an output
•  Compute the error
•  Adjust the weights
(my implementation in Octave)
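The slides' Octave implementation is not reproduced here; below is my own NumPy sketch of one forward/backward training step for a network with one ReLU hidden layer and a softmax output:

```python
import numpy as np

def train_step(X, Y, W1, b1, W2, b2, lr=0.1):
    # Forward pass: compute an output.
    h = np.maximum(0.0, X.dot(W1) + b1)              # hidden ReLU activations
    logits = h.dot(W2) + b2
    shifted = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = shifted / shifted.sum(axis=1, keepdims=True)   # softmax output
    # Backward pass: compute the error and propagate it.
    n = X.shape[0]
    d_logits = (probs - Y) / n                       # grad of mean cross-entropy
    dW2, db2 = h.T.dot(d_logits), d_logits.sum(axis=0)
    d_h = d_logits.dot(W2.T) * (h > 0)               # gradient through ReLU gate
    dW1, db1 = X.T.dot(d_h), d_h.sum(axis=0)
    # Adjust the weights.
    return W1 - lr * dW1, b1 - lr * db1, W2 - lr * dW2, b2 - lr * db2
```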

3. Neural Networks with Google’s TensorFlow
https://www.youtube.com/watch?v=oZikw5k_2FM

Setup https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html

Get started
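A minimal getting-started sketch (my own, written against the TF 1.x graph API, the successor of the r0.11 API the talk uses; shapes, data, and hyperparameters are stand-ins):

```python
import numpy as np
import tensorflow as tf

# Multinomial logistic regression on 784-dim inputs with 10 classes.
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.truncated_normal([784, 10], stddev=0.1))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, W) + b
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
train = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch_x = np.random.rand(128, 784).astype(np.float32)      # stand-in data
    batch_y = np.eye(10, dtype=np.float32)[np.random.randint(10, size=128)]
    for step in range(100):
        _, l = sess.run([train, loss], feed_dict={x: batch_x, y: batch_y})
```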

Hyperparameter tuning (loss curve)
•  Number of hidden nodes
•  Learning rate
•  Batch size
•  Number of steps
•  Overfitting

Google Udacity course example: notMNIST

Example code for the notMNIST dataset (Udacity)
•  https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/udacity
(This set of IPython notebooks is only a partial implementation, since it is meant to be an assignment to be completed. To view a complete implementation, refer to the .ipynb and HTML files I uploaded to the corpling server.)
