
word-embedding-dimensionality-selection's Introduction

Word Embedding Dimensionality Selection

This repo implements the dimensionality selection procedure for word embeddings. The procedure, based on the notion of Pairwise Inner Product (PIP) loss, is proposed in the following papers. No longer pick 300 as your word embedding dimensionality!

@inproceedings{yin2018dimensionality,
 title={On the Dimensionality of Word Embedding},
 author={Yin, Zi and Shen, Yuanyuan},
 booktitle={Advances in Neural Information Processing Systems},
 year={2018}
}

and

@article{yin2018pairwise,  
  title={Understand Functionality and Dimensionality of Vector Embeddings: the Distributional Hypothesis, the Pairwise Inner Product Loss and Its Bias-Variance Trade-off},  
  author={Yin, Zi},  
  journal={arXiv preprint arXiv:1803.00502},  
  year={2018}  
}

Currently, we implement the dimensionality selection procedure for the following algorithms:

  • Word2Vec (skip-gram)
  • GloVe
  • Latent Semantic Analysis (LSA)

How to use the tool

The tool provides an optimal dimensionality for an algorithm on a corpus. For example, you can use it to obtain the dimensionality for your Word2Vec embedding on the Text8 corpus. You need to have the following:

  • A corpus (--file [path to corpus])
  • A config file (yaml) for algorithm-specific parameters (--config_file [path to config file])
  • Name of algorithm (--algorithm [algorithm_name])

Run from the root directory as a package, e.g.:

python -m main --file data/text8.zip --config_file config/word2vec_sample_config.yml --algorithm word2vec
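
The example above assumes the text8 corpus at data/text8.zip, which ships in the repo's data folder. If you need to fetch it yourself, here is a minimal sketch using Matt Mahoney's standard text8 mirror (an external URL, not something this tool provides):

  import os
  import urllib.request

  # Download the text8 corpus into the path used by the example command.
  os.makedirs("data", exist_ok=True)
  urllib.request.urlretrieve("http://mattmahoney.net/dc/text8.zip", "data/text8.zip")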

Implement your own

You can extend the implementation if you have another embedding algorithm that is based on matrix factorization. The only thing you need to do is implement your matrix estimator as a subclass of SignalMatrix, as in the sketch below.
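
As a rough illustration, a custom estimator might look like the following. Only the SignalMatrix class and its construct_matrix method are confirmed by the repository (see the signal_matrix.py excerpt in the issues further down); the import path, the self.corpus attribute, and the return convention are assumptions made for this example.

  import numpy as np
  from matrix.signal_matrix import SignalMatrix  # import path is an assumption

  class CooccurrenceSignalMatrix(SignalMatrix):
      """Hypothetical estimator that factorizes a raw co-occurrence count matrix."""

      window = 5  # context window size for this toy estimator

      def construct_matrix(self):
          # Build the signal matrix your algorithm factorizes. self.corpus
          # (a list of token-id lists) is an assumed attribute; the real base
          # class may expose the tokenized corpus differently.
          vocab_size = max(max(sent) for sent in self.corpus) + 1
          counts = np.zeros((vocab_size, vocab_size))
          for sent in self.corpus:
              for i, w in enumerate(sent):
                  left, right = max(0, i - self.window), i + self.window + 1
                  for c in sent[left:i] + sent[i + 1:right]:
                      counts[w, c] += 1
          # Whether to return or store the matrix depends on the base class contract.
          return counts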

word-embedding-dimensionality-selection's People

Contributors

shudima, ziyin-dl

word-embedding-dimensionality-selection's Issues

Will this work on fastText embeddings?

Beautiful work! I hope to see something similar for other types of embeddings, such as contextual word embeddings.
Will this work with fastText? If not, which files do I have to edit? Also, can you shed some light on how to adapt the files for other embeddings?

Spectral Estimation

Great work!!
I am following the paper and its code, and I have a question about the spectral estimation: how did you arrive at this formula? I could not find it in the reference you cite, [Chatterjee, 2015].
I also want to use NMF (non-negative matrix factorization) instead of SVD, but when I apply the spectral estimation formula from the paper, all the elements become zero. How can I overcome this issue?
Thanks!

core dumped error

error message:

Segmentation fault (core dumped) nohup python -m main --file data/train.txt --config_file config/train.yml --algorithm word2vec

about corpus format

I have a question about the format of the corpus: I noticed that the text8 corpus in the data folder is written on a single line. Does the lack of sentence segmentation have any influence on embedding training?
Also, how can I prepare my own corpus, for example Chinese wiki data? Thanks a lot! :)
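
For readers with the same question: text8 is plain, whitespace-tokenized text, so one common way to prepare a Chinese corpus is to segment it into words and join the tokens with spaces. A minimal sketch (the file names and the jieba segmenter are illustrative choices, not part of this repo):

  import jieba  # any Chinese word segmenter works; jieba is just one option

  # Turn raw Chinese text into whitespace-separated tokens, one document per line.
  with open("zhwiki_raw.txt", encoding="utf-8") as src, \
       open("zhwiki_tokenized.txt", "w", encoding="utf-8") as dst:
      for line in src:
          tokens = jieba.cut(line.strip())   # segment into words
          dst.write(" ".join(tokens) + "\n")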

word2vec CBOW

Thanks for sharing the code!

Sorry if the question is silly; my understanding of word embeddings is still immature and I lack the required math background.
Should the SignalMatrix implementation for the word2vec skip-gram model also fit the word2vec CBOW model?

Noise estimation in signal_matrix.py

The line self.noise = np.std(diff) * 0.5 seems to be inconsistent with the Frobenius-norm term used in the paper, ||M1 - M2||_F / (2 * sqrt(m * n)). Shouldn't it instead be np.sum(diff ** 2) ** 0.5 / (2 * n)?

Btw, isn't m = n = vocabulary size? Under this assumption, what is the physical meaning of the noise in the paper? It does not make sense to me to apply the same variance to every PMI entry. Can you help me understand?
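
Spelled out in code, the Frobenius-norm estimate quoted above would be something like the following sketch (it mirrors the formula under discussion, not necessarily what signal_matrix.py implements):

  import numpy as np

  def frobenius_noise_estimate(M1, M2):
      # ||M1 - M2||_F / (2 * sqrt(m * n)): the half-difference noise estimate
      # quoted from the paper in this issue.
      diff = M1 - M2
      m, n = diff.shape
      return np.linalg.norm(diff, "fro") / (2.0 * np.sqrt(m * n))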

Error when executing in cmd

Hi, first of all, thank you for providing a Python implementation of your paper.

When I executed the package in cmd, I ran into an error.
My environment is Windows 10 (64-bit) with Python 3.7.1.

Here are the steps I took:

  1. Open cmd.

  2. Go to the directory that contains the 'data' and 'config' folders and the main.py script.

  3. Execute the command from the README:
    python -m main --file data/text8.zip --config_file config/word2vec_sample_config.yml --algorithm word2vec

  4. There was an error at line 24 of main.py, yaml.load(f), so I changed that call to yaml.safe_load(f), which fixed that error (spelled out in the snippet after this list).
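
For later readers, the change from step 4 looks roughly like this (only the yaml.load(f) call at line 24 is taken from this report; the surrounding lines are illustrative):

  import yaml

  # main.py, around line 24 (per this report); config_file comes from --config_file
  with open(config_file) as f:
      # config = yaml.load(f)      # warns or errors on newer PyYAML releases
      config = yaml.safe_load(f)   # the safe loader works across versions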

However, after I fixed that and ran the command again, the following error appeared, and I do not know how to fix it.

Could you please help? Any advice would be greatly appreciated.

[screenshot of the error]

Thank you in advance!!!

code problem

In matrix/signal_matrix.py:

def construct_matrix(self):
    raise NotImplementedError

Is this correct? Is there any problem with this part of the code?

Stop using Python 2

It's unclear why you're using Python 2 for this code. Are you aware that it is scheduled for end-of-life soon? Please make sure the code works with the latest Python 3, and note the Python version requirement in the README.
