Comments (16)
Forgot to attach ./outputs/default/20240411_091943/logs/infer/llama-2-7b/tydiqa-goldp_arabic.out. It reads:
Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library.
Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.
All the other tydiqa-goldp_xx.out files read the same.
from opencompass.
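For context, mkl-service raises this when MKL is configured for Intel's OpenMP threading layer but the GNU OpenMP runtime (libgomp.so.1) has already been loaded by another library in the same process. The error message itself names two workarounds, sketched below as shell exports; they must be set before Python starts so mkl-service sees them at initialization:

```shell
# Option 1: switch MKL to the GNU OpenMP threading layer, matching the
# already-loaded libgomp.so.1.
export MKL_THREADING_LAYER=GNU

# Option 2: keep the Intel layer and override the compatibility check.
export MKL_SERVICE_FORCE_INTEL=1
```

The message's third suggestion, importing numpy before other MKL-linked libraries, reportedly works because numpy then initializes mkl-service before libgomp is loaded.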
I got the same error.
I got the same error too. Please let us know when it is fixed.
Try export MKL_SERVICE_FORCE_INTEL=1 and run again.
I tried "export MKL_SERVICE_FORCE_INTEL=1" and ran again; it doesn't work.
Same error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library. Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.
"export MKL_SERVICE_FORCE_INTEL=1" doesn't work. Possibly related: pytorch/pytorch#37377 (comment).
Dear team, any updates? This bug seems to be exclusively associated with certain datasets such as TyDiQA and XCOPA. I can run the example script successfully, with meaningful outputs, in the same setting and environment:
python run.py --models hf_opt_125m hf_opt_350m --datasets siqa_gen winograd_ppl
How about:
export MKL_THREADING_LAYER=GNU
export MKL_SERVICE_FORCE_INTEL=1
Also, please check your environment: whether PyTorch and transformers are up to date, and whether you are running on Linux.
"How about export MKL_THREADING_LAYER=GNU and export MKL_SERVICE_FORCE_INTEL=1" — it doesn't work; I tried.
"And please check your environment..." — the libraries are up to date, and yes, it is indeed running on Linux. Is there anything in particular I should watch out for on Linux?
Or could you provide a script that you/the admins have verified runs the TyDiQA evaluation successfully? (Any model would be fine.) I can try to reproduce it in my environment and find the differences; I think that is the fastest way to resolve this issue.
from mmengine.config import read_base
from opencompass.models import HuggingFaceCausalLM
from opencompass.partitioners import NaivePartitioner
from opencompass.runners import LocalRunner
from opencompass.tasks import OpenICLInferTask

with read_base():
    from .datasets.tydiqa.tydiqa_gen import tydiqa_datasets
    from .models.hf_internlm.hf_internlm2_chat_7b import models

datasets = [*tydiqa_datasets]

_meta_template = dict(
    round=[
        dict(role='HUMAN', begin='<|im_start|>user\n', end='<|im_end|>\n'),
        dict(role='BOT', begin='<|im_start|>assistant\n', end='<|im_end|>\n', generate=True),
    ],
)

models = [
    dict(
        type=HuggingFaceCausalLM,
        abbr='internlm2-chat-7b-hf',
        path='internlm/internlm2-chat-7b',
        tokenizer_path='internlm/internlm2-chat-7b',
        model_kwargs=dict(
            trust_remote_code=True,
            device_map='auto',
        ),
        tokenizer_kwargs=dict(
            padding_side='left',
            truncation_side='left',
            use_fast=False,
            trust_remote_code=True,
        ),
        max_out_len=2048,
        max_seq_len=2048,
        batch_size=8,
        meta_template=_meta_template,
        run_cfg=dict(num_gpus=1, num_procs=1),
        end_str='<|im_end|>',
        generation_kwargs=dict(eos_token_id=[2, 92542], do_sample=True),
        batch_padding=True,
    )
]

infer = dict(
    partitioner=dict(type=NaivePartitioner),
    runner=dict(
        type=LocalRunner,
        max_num_workers=256,
        task=dict(type=OpenICLInferTask),
    ),
)

work_dir = 'outputs/test/'
Hi, the above is my config, and I ran it with the script below:
conda activate opencompass
export MKL_SERVICE_FORCE_INTEL=1
export HF_EVALUATE_OFFLINE=1
export HF_DATASETS_OFFLINE=1
export TRANSFORMERS_OFFLINE=1
export HF_ENDPOINT=https://hf-mirror.com
export TRANSFORMERS_CACHE='my cache dir'
python run.py configs/eval_my_config.py --mode all --reuse latest
The same error happens with your config:
04/28 19:40:37 - OpenCompass - ERROR - /cluster/project/sachan/yilei/projects/opencompass/opencompass/runners/local.py - _launch - 192 - task OpenICLInfer[internlm2-chat-7b-hf/tydiqa-goldp_arabic] fail, see outputs/test/20240428_193926/logs/infer/internlm2-chat-7b-hf/tydiqa-goldp_arabic.out
It seems to be a Linux-specific problem.
I am also running on a Linux platform. After checking the environment (PyTorch and transformers versions), the only difference between us is the GPU: I used an A100 80G.
It is most probably not a GPU problem; I tested on an A100 80G and still got the same error.
For those who encounter the same OpenICLInfer error: reinstalling numpy in the opencompass conda env worked for me. In addition, I set two env vars:
export MKL_SERVICE_FORCE_INTEL=1
export MKL_THREADING_LAYER=GNU
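Before re-running the full evaluation, a quick way to check that the env-var part of this fix takes effect is to import numpy in a subshell with both variables exported (a sketch; it assumes the opencompass conda env with numpy installed is already active):

```shell
# With both variables exported, importing numpy (which initializes
# mkl-service) should no longer raise the MKL_THREADING_LAYER error.
export MKL_SERVICE_FORCE_INTEL=1
export MKL_THREADING_LAYER=GNU
python -c "import numpy; print('numpy OK:', numpy.__version__)"
```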