Data Mining - Feature Hashing

Table of Contents

1 - About
2 - Articles Related
3 - Python
4 - Documentation / Reference

1 - About

With a feature hashing function, each (featureID, value) pair is hashed into one of a fixed number of buckets, and the number of buckets becomes the number of features. Because the number of buckets is chosen in advance and is usually much smaller than the number of distinct feature values, this leads to a dimension reduction compared to one-hot encoding.
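As a rough illustration (a minimal sketch, not from the original article, reusing the sample observations from the Python section below): the dimension of a one-hot encoding grows with the number of distinct (featureID, value) pairs, while the hashed representation keeps the dimension fixed at numBuckets.

observations = [[(0, 'blue'), (1, 'Open')],
                [(0, 'red'), (1, 'Closed'), (2, 'Big')],
                [(0, 'purple'), (1, 'Running'), (2, 'Small')]]

# One-hot encoding needs one dimension per distinct (featureID, value) pair.
distinct_pairs = {pair for obs in observations for pair in obs}
print(len(distinct_pairs))   # 8 dimensions, and this number grows with the vocabulary

# Feature hashing fixes the dimension at numBuckets, chosen up front.
numBuckets = 4
print(numBuckets)            # 4 dimensions, regardless of how many distinct values appear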

2 - Articles Related

Statistics - Dummy (Coding|Variable) - One-hot-encoding (OHE)

3 - Python

from collections import defaultdict
import hashlib


def hashFunction(numBuckets, rawFeats, printMapping=False):
    """Calculate a feature dictionary for an observation's features based on hashing.

    Note:
        Use printMapping=True for debug purposes and to better understand how the hashing works.

    Args:
        numBuckets (int): Number of buckets to use as features.
        rawFeats (list of (int, str)): A list of features for an observation.
            Represented as (featureID, value) tuples.
        printMapping (bool, optional): If true, the mappings of featureString to index
            will be printed.

    Returns:
        dict of int to float: The keys will be integers which represent the buckets that
            the features have been hashed to. The value for a given key will contain the
            count of the (featureID, value) tuples that have hashed to that key.
    """
    mapping = {}
    for ind, category in rawFeats:
        featureString = category + str(ind)
        # Hash the feature string with MD5 and take the remainder modulo numBuckets
        # to get the bucket index (encode() so the code also runs on Python 3).
        mapping[featureString] = int(hashlib.md5(featureString.encode('utf-8')).hexdigest(), 16) % numBuckets
    if printMapping:
        print(mapping)
    # Count how many (featureID, value) pairs landed in each bucket.
    sparseFeatures = defaultdict(float)
    for bucket in mapping.values():
        sparseFeatures[bucket] += 1.0
    return dict(sparseFeatures)

sampleOne = [(0, 'blue'), (1, 'Open')]
sampleTwo = [(0, 'red'), (1, 'Closed'), (2, 'Big')]
sampleThree = [(0, 'purple'), (1, 'Running'), (2, 'Small')]

# Use four buckets
sampOneFourBuckets = hashFunction(4, sampleOne, True)
sampTwoFourBuckets = hashFunction(4, sampleTwo, True)
sampThreeFourBuckets = hashFunction(4, sampleThree, True)

# Use one hundred buckets
sampOneHundredBuckets = hashFunction(100, sampleOne, True)
sampTwoHundredBuckets = hashFunction(100, sampleTwo, True)
sampThreeHundredBuckets = hashFunction(100, sampleThree, True)

print('\t\t 4 Buckets \t\t\t 100 Buckets')
print('SampleOne:\t {0}\t\t {1}'.format(sampOneFourBuckets, sampOneHundredBuckets))
print('SampleTwo:\t {0}\t\t {1}'.format(sampTwoFourBuckets, sampTwoHundredBuckets))
print('SampleThree:\t {0}\t {1}'.format(sampThreeFourBuckets, sampThreeHundredBuckets))
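The same idea is available as scikit-learn's FeatureHasher. This is not used in the original article; the snippet below is only a sketch assuming scikit-learn is installed, and it hashes with MurmurHash3 rather than MD5, so the bucket assignments will differ from the toy function above.

from sklearn.feature_extraction import FeatureHasher

# Hash sampleOne's feature strings ('blue0', 'Open1') into 4 buckets.
hasher = FeatureHasher(n_features=4, input_type='string', alternate_sign=False)
X = hasher.transform([['blue0', 'Open1']])
print(X.toarray())  # a 1 x 4 row; each feature string increments one of the 4 buckets

alternate_sign=False is set so the output is a plain count per bucket, matching the counting behaviour of hashFunction.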

4 - Documentation / Reference

Spark Data Mining - Scalable Machine Learning