conda install scikit-learn solved it.
Thank you very much for the impressive availability and patience.
from nlg-eval.
In general I find that installing certain things like numpy pandas scikit-learn scipy (no commas used for easy copy-paste) works better with conda than with pip, especially on Windows.
from nlg-eval.
This seems to be an issue in the Java call to the METEOR metric package. What worked for me was to change the memory allocation from -Xmx2G to -Xmx1G. Can you please verify?
from nlg-eval.
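For context, the heap flag lives where the METEOR wrapper builds its Java command line. A minimal sketch of that change, assuming a coco-caption-style wrapper (the jar name and the -stdio flags here are assumptions, not verified against this repository):

```python
# Hypothetical sketch of the METEOR wrapper's subprocess command.
# METEOR_JAR and the trailing flags are assumed from the common
# coco-caption-style wrapper; adjust to match the actual code.
METEOR_JAR = 'meteor-1.5.jar'

def meteor_command(heap='2G'):
    # Lowering heap from '2G' to '1G' is the workaround suggested above,
    # for machines where the JVM cannot reserve 2 GB up front.
    return ['java', '-jar', '-Xmx' + heap, METEOR_JAR,
            '-', '-', '-stdio', '-l', 'en', '-norm']

print(meteor_command('1G'))
```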
Thanks @temporaer.
Both Windows and Mac: the same error.
from nlg-eval.
@AmitMY I cannot reproduce your issue. I installed the package and all dependencies in a new anaconda environment on Windows, manually went through the steps in setup.sh, and ran your script from the issue description. My Java version differs slightly (1.8.0_161), but otherwise I do not see any differences.
The output is:
Bleu_1: 0.550000
Bleu_2: 0.428174
Bleu_3: 0.284043
Bleu_4: 0.201143
METEOR: 0.295797
ROUGE_L: 0.522104
CIDEr: 0.989039
WARNING (theano.configdefaults): g++ not detected ! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string.
SkipThoughtsCosineSimilairty: 0.626149
EmbeddingAverageCosineSimilairty: 0.884690
VectorExtremaCosineSimilarity: 0.568696
GreedyMatchingScore: 0.784205
{'CIDEr': 0.9890389437827768, 'GreedyMatchingScore': 0.784205, 'Bleu_4': 0.20114343061802883, 'Bleu_3': 0.28404282012721094, 'Bleu_2': 0.4281744192662396, 'Bleu_1': 0.5499999999725, 'ROUGE_L': 0.5221037854154413, 'METEOR': 0.2957971080379661, 'EmbeddingAverageCosineSimilairty': 0.88469, 'VectorExtremaCosineSimilarity': 0.568696, 'SkipThoughtCS': 0.6261495}
from nlg-eval.
Installed anaconda on a new ubuntu server.
# Anaconda installation
### Following: https://www.digitalocean.com/community/tutorials/how-to-install-the-anaconda-python-distribution-on-ubuntu-16-04
cd /tmp
curl -O https://repo.continuum.io/archive/Anaconda2-5.1.0-Linux-x86_64.sh
bash Anaconda2-5.1.0-Linux-x86_64.sh # Installing in ~/anaconda2
source ~/.bashrc
conda create --name dev python=2
source activate dev # To activate the environment every time?
. deactivate # To deactivate
conda update conda
conda update anaconda
# Installing Java
conda config --add channels defaults
conda config --add channels conda-forge
conda config --add channels bioconda
conda install java-jdk
java -version
# Installing NLG Eval
cd ~/thesis
git clone https://github.com/Maluuba/nlg-eval.git
cd nlg-eval
pip install -e .
./setup.sh
Then, I ran the command from the README:
nlg-eval --hypothesis=examples/hyp.txt --references=examples/ref1.txt --references=examples/ref2.txt
Now METEOR works!
However, this is the output:
Bleu_1: 0.550000
Bleu_2: 0.428174
Bleu_3: 0.284043
Bleu_4: 0.201143
METEOR: 0.295797
ROUGE_L: 0.522104
CIDEr: 1.242192
RuntimeError: module compiled against API version 0xc but this version of numpy is 0xa
Traceback (most recent call last):
  File "/home/nlp/amit/anaconda2/bin/nlg-eval", line 6, in <module>
    exec(compile(open(__file__).read(), __file__, 'exec'))
  File "/home/nlp/amit/thesis/nlg-eval/bin/nlg-eval", line 20, in <module>
    compute_metrics()
  File "/home/nlp/amit/anaconda2/lib/python2.7/site-packages/click/core.py", line 716, in __call__
    return self.main(*args, **kwargs)
  File "/home/nlp/amit/anaconda2/lib/python2.7/site-packages/click/core.py", line 696, in main
    rv = self.invoke(ctx)
  File "/home/nlp/amit/anaconda2/lib/python2.7/site-packages/click/core.py", line 889, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/nlp/amit/anaconda2/lib/python2.7/site-packages/click/core.py", line 534, in invoke
    return callback(*args, **kwargs)
  File "/home/nlp/amit/thesis/nlg-eval/bin/nlg-eval", line 16, in compute_metrics
    nlgeval.compute_metrics(hypothesis, references, no_overlap, no_skipthoughts, no_glove)
  File "/home/nlp/amit/thesis/nlg-eval/nlgeval/__init__.py", line 42, in compute_metrics
    from sklearn.metrics.pairwise import cosine_similarity
  File "/home/nlp/amit/anaconda2/lib/python2.7/site-packages/sklearn/__init__.py", line 56, in <module>
    from . import __check_build
ImportError: cannot import name __check_build
Now, if I run pip install 'numpy>=1.12.0,<1.13.0' --force-reinstall, I get an error saying I must use 1.11.0.
If I use 1.11.0 (via pip install -e .), I get the above error.
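As an aside, the RuntimeError above is numpy's binary-compatibility check: the hex numbers are numpy C-API version codes, and the extension (here scikit-learn's compiled part) was built against a newer numpy ABI than the one installed. Illustrating the comparison the message is making:

```python
# "module compiled against API version 0xc but this version of numpy is 0xa"
compiled_against = 0xc  # C-API version the extension was built for
installed = 0xa         # C-API version of the numpy currently installed

# The mismatch means the compiled extension expects a newer numpy ABI
# than the numpy package that is actually on the path.
print(hex(compiled_against), hex(installed), compiled_against > installed)
```

This is consistent with the pinned numpy 1.11.0: the scikit-learn binaries in the environment were built against a later numpy.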
Just to cover all bases, I ran this both in a custom conda environment and outside of one; no change.
from nlg-eval.
Are you sure you are in the environment named dev when you do the setup and run this command? If you run . deactivate, you will deactivate the env, and the env outside may have a different version of numpy. I just wanted to confirm that you do not execute that line during the setup.
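One quick way to check which interpreter and conda environment a command actually runs under (an illustrative stdlib-only snippet, not part of nlg-eval):

```python
import os
import sys

# Path of the interpreter that is actually running; inside an activated
# conda env this should point under that env's directory (e.g. .../envs/dev).
print(sys.executable)

# conda sets this variable when an environment is active (e.g. 'dev').
print(os.environ.get('CONDA_DEFAULT_ENV', '<no conda env active>'))
```

If sys.executable points at the base anaconda2 install rather than the dev env, the wrong numpy is being picked up.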
from nlg-eval.
Just to cover all bases, I ran this both in a custom conda environment and outside of one; no change.
So basically I did run "deactivate" on the first run (just like the order of commands above), but to make sure, I ran everything again inside the environment.
I have now removed the data directory and am running ./setup.sh again inside the conda environment, in case that changes anything.
from nlg-eval.
I ran all your commands in the same order, except I did not run the following 4:
. deactivate # To deactivate
conda update conda
conda update anaconda
java -version
and it worked for me. I suspect this is still an environment issue. To confirm: wherever you run the nlg-eval command, can you run conda list in the same env and paste the output?
from nlg-eval.
Sure.
OK, so I ran everything in the dev environment; same result.
conda list
boto 2.48.0
boto3 1.6.0
botocore 1.9.0
bz2file 0.98
ca-certificates 2017.08.26 h1d4fec5_0
certifi 2018.1.18 py27_0
chardet 3.0.4
click 6.3
docutils 0.14
futures 3.2.0
gensim 0.12.4
idna 2.6
java-jdk 8.0.112 0 bioconda
jmespath 0.9.3
libedit 3.1 heed3624_0
libffi 3.2.1 hd88cf55_4
libgcc-ng 7.2.0 h7cc24e2_2
libstdcxx-ng 7.2.0 h7a57d05_2
ncurses 6.0 h9df7e31_2
nltk 3.1
numpy 1.11.0
openjdk 8.0.112 zulu8.19.0.1_3 conda-forge
openssl 1.0.2n hb7f436b_0
pip 9.0.1 py27ha730c48_4
python 2.7.14 h1571d57_29
python-dateutil 2.6.1
readline 7.0 ha6073c6_4
requests 2.18.4
s3transfer 0.1.13
scikit-learn 0.17
scipy 0.17.0
setuptools 38.5.1 py27_0
six 1.11.0
smart-open 1.5.6
sqlite 3.22.0 h1bed415_0
Theano 0.8.1
tk 8.6.7 hc745277_3
urllib3 1.22
wheel 0.30.0 py27h2bc6bb2_1
zlib 1.2.11 ha838bed_2
Yes, all versions are identical to README versions.
java -version
openjdk version "1.8.0_112"
OpenJDK Runtime Environment (Zulu 8.19.0.1-linux64) (build 1.8.0_112-b16)
OpenJDK 64-Bit Server VM (Zulu 8.19.0.1-linux64) (build 25.112-b16, mixed mode)
from nlg-eval.
Environment looks OK. Does something from https://stackoverflow.com/questions/15274696/importerror-in-importing-from-sklearn-cannot-import-name-check-build help? Unfortunately, I am unable to reproduce, as this seems likely to be a scikit-learn install issue.
from nlg-eval.