
Feature hashing 2015-09-25

MACHINE LEARNING

In machine learning, feature hashing, also known as the hashing trick (by analogy to the kernel trick), is a fast and space-efficient way of vectorizing features, i.e. turning arbitrary features into indices in a vector or matrix. It works by applying a hash function to the features and using their hash values as indices directly, rather than looking the indices up in an associative array.

Motivating example

In a typical document classification task, the input to the machine learning algorithm (both during learning and classification) is free text. From this, a bag-of-words (BOW) representation is constructed: the individual tokens are extracted and counted, and each distinct token in the training set defines a feature (independent variable) of each of the documents in both the training and test sets.

Machine learning algorithms, however, are typically defined in terms of numerical vectors. Therefore, the bags of words for a set of documents are regarded as a term-document matrix where each row is a single document and each column is a single feature/word; the entry i, j in such a matrix captures the frequency (or weight) of the j'th term of the vocabulary in document i. (An alternative convention swaps the rows and columns of the matrix, but this difference is immaterial.) Typically, these vectors are extremely sparse.

The common approach is to construct, at learning time or prior to that, a dictionary representation of the vocabulary of the training set, and use that to map words to indices. Hash tables and tries are common candidates for the dictionary implementation. For example, the three documents

John likes to watch movies.
Mary likes movies too.
John also likes football.

can be converted, using the dictionary

Term      Index
John      1
likes     2
to        3
watch     4
movies    5
Mary      6
too       7
also      8
football  9

to the term-document matrix

1 1 1 1 1 0 0 0 0
0 1 0 0 1 1 1 0 0
1 1 0 0 0 0 0 1 1

(Punctuation was removed, as is usual in document classification and clustering.) The problem with this process is that such dictionaries take up a large amount of storage space and grow in size as the training set grows. Conversely, if the vocabulary is kept fixed and not increased with a growing training set, an adversary may try to invent new words or misspellings that are not in the stored vocabulary so as to circumvent a machine-learned filter. This difficulty is why feature hashing has been tried for spam filtering at Yahoo! Research. Note that the hashing trick isn't limited to text classification and similar tasks at the document level, but can be applied to any problem that involves large (perhaps unbounded) numbers of features.
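To make the dictionary-based approach concrete, here is a minimal Python sketch (not from the original post; variable names are illustrative) that builds the vocabulary above and produces the same term-document matrix:

# Dictionary-based bag-of-words vectorization, as described above.
docs = [
    "John likes to watch movies",
    "Mary likes movies too",
    "John also likes football",
]

# Build the dictionary (term -> column index) from the training set.
vocab = {}
for doc in docs:
    for token in doc.split():
        if token not in vocab:
            vocab[token] = len(vocab)

# Map each document to a row of term counts.
matrix = []
for doc in docs:
    row = [0] * len(vocab)
    for token in doc.split():
        row[vocab[token]] += 1
    matrix.append(row)

for row in matrix:
    print(row)

Running this prints the three rows of the matrix shown above, and vocab is exactly the term-index table, only zero-based. The dictionary is the part that grows with the training set.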

Feature vectorization using the hashing trick

Instead of maintaining a dictionary, a feature vectorizer that uses the hashing trick can build a vector of a pre-defined length by applying a hash function h to the features (e.g., words) in the items under consideration, then using the hash values directly as feature indices and updating the resulting vector at those indices:

function hashing_vectorizer(features : array of string, N : integer):
    x := new vector[N]
    for f in features:
        h := hash(f)
        x[h mod N] += 1
    return x
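A minimal Python rendering of this pseudocode (not from the original post) might look as follows. A stable hash is used here (MD5 via hashlib) so that indices are reproducible across runs, since Python's built-in hash() for strings is salted per process; real implementations typically use a fast non-cryptographic hash such as MurmurHash3.

import hashlib

def hashing_vectorizer(features, N):
    # Fixed-length output vector; no dictionary is built or stored.
    x = [0] * N
    for f in features:
        # Hash the feature and use the hash value (mod N) as its index.
        h = int(hashlib.md5(f.encode("utf-8")).hexdigest(), 16)
        x[h % N] += 1
    return x

print(hashing_vectorizer(["John", "likes", "to", "watch", "movies"], N=10))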

It has been suggested that a second, single-bit output hash function ξ be used to determine the sign of the update value, to counter the effect of hash collisions. If such a hash function is used, the algorithm becomes

function hashing_vectorizer(features : array of string, N : integer):
    x := new vector[N]
    for f in features:
        h := hash(f)
        idx := h mod N
        if ξ(f) == 1:
            x[idx] += 1
        else:
            x[idx] -= 1
    return x
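For reference, scikit-learn ships a vectorizer built on this signed hashing scheme; a short usage sketch (assuming scikit-learn is installed, with an illustrative choice of n_features):

from sklearn.feature_extraction.text import HashingVectorizer

docs = [
    "John likes to watch movies Mary likes movies too",
    "John also likes football",
]
vectorizer = HashingVectorizer(n_features=2**10)  # fixed-length output, no vocabulary stored
X = vectorizer.transform(docs)                    # sparse matrix of shape (2, 1024)
print(X.shape)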

The above pseudocode actually converts each sample into a vector. An optimized version would instead only generate a stream of (h, ξ(f)) pairs and let the learning and prediction algorithms consume such streams; a linear model can then be implemented as a single hash table representing the coefficient vector.
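To illustrate that optimized form, here is a sketch (not the post's code; the hash choices and the perceptron-style update are illustrative assumptions) of a linear classifier whose coefficient vector lives in a single hash table and which consumes a stream of (index, sign) pairs directly:

import hashlib
from collections import defaultdict

N = 2**20  # size of the hashed feature space

def hashed_features(tokens):
    # Yield (index, sign) pairs for each token; no feature vector is materialized.
    for t in tokens:
        digest = int(hashlib.md5(t.encode("utf-8")).hexdigest(), 16)
        idx = digest % N
        sign = 1 if (digest >> 64) & 1 else -1  # second, single-bit hash for the sign
        yield idx, sign

weights = defaultdict(float)  # coefficient "vector" stored as a hash table

def predict(tokens):
    score = sum(sign * weights[idx] for idx, sign in hashed_features(tokens))
    return 1 if score >= 0 else -1

def train(tokens, label, lr=0.1):
    # Perceptron-style update: only touch the hashed coordinates of this sample.
    if predict(tokens) != label:
        for idx, sign in hashed_features(tokens):
            weights[idx] += lr * label * sign

train("your free prize awaits".split(), label=1)
train("meeting notes attached".split(), label=-1)
print(predict("free prize".split()))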
