llvm-foreach: Segmentation fault (core dumped)
clang-15: error: ptxas command failed with exit code 254 (use -v to see invocation)
@wangzy0327 The issue is more likely in the compiler; the log shows that the core dump happens in llvm-foreach.
You may enable debug capabilities when building LLVM and use gdb to check which compiler pass is at fault.
The issue intel/llvm#5980 in the intel/llvm repo is very similar to this one, and there are already many investigations there. Could you please check whether it is helpful?
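As a rough illustration of the debug-build-plus-gdb suggestion above, here is a hedged sketch. The checkout path, the use of intel/llvm's buildbot scripts, and the exact flags are assumptions about a typical intel/llvm CUDA setup, not a verified recipe; the llvm-foreach arguments must be copied from your own failing invocation.

```shell
# Sketch (assumed paths/flags): rebuild intel/llvm with debug info so
# gdb can show symbolized backtraces when llvm-foreach crashes.
cd ~/sycl_workspace/llvm
python buildbot/configure.py --cuda \
    --cmake-opt="-DCMAKE_BUILD_TYPE=Debug"   # keep debug symbols
python buildbot/compile.py

# Re-run the crashing llvm-foreach command under gdb. Copy the exact
# arguments from the failing link step's verbose output.
gdb --args ./build/bin/llvm-foreach <args-from-failing-invocation>
# (gdb) run
# (gdb) bt    # backtrace shows which pass/function segfaulted
```

The backtrace from `bt` is the kind of tracing information already collected in intel/llvm#5980 and would make a new ticket much easier to act on.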
from onednn.
Intel C++/DPC++ Compiler follows a semantic versioning scheme and guarantees backward compatibility within the same major version. You can find the compiler version that oneDNN was tested with in the README.md of the corresponding release. At the source-code level, oneDNN may also be compatible with earlier compiler releases.
@shu1chen
I have raised a ticket, intel/llvm#5980, but there has been no reply yet.
@vpirogov
I cannot find anything in the README.md about which compiler versions oneDNN was tested with.
Which README.md lists the tested oneDNN version and the SYCL compiler version it depends on? Could you provide a screenshot or a link?
I'm referring to the Validated Configurations section of the README.md.
@shu1chen
I referred to issue intel/llvm#5980.
I modified the two files llvm/lib/Target/NVPTX/MCTargetDesc/NVPTXTargetStreamer.cpp and llvm/lib/Target/NVPTX/MCTargetDesc/NVPTXTargetStreamer.h with the changes you listed above, recompiled SYCL, and then compiled oneDNN.
The output of make -j is as follows.
[100%] Linking CXX shared library libdnnl.so
llvm-foreach: Segmentation fault (core dumped)
clang-15: error: ptxas command failed with exit code 254 (use -v to see invocation)
clang version 15.0.0 (ssh://[email protected]:2222/wangziyang/intel-llvm-new.git 7ecb566e497fa926844521e8df2a2405c7e92e63)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /home/wzy/sycl_workspace/build-cuda-2022-06/bin
clang-15: note: diagnostic msg: Error generating preprocessed source(s).
src/CMakeFiles/dnnl.dir/build.make:776: recipe for target 'src/libdnnl.so.3.2' failed
make[2]: *** [src/libdnnl.so.3.2] Error 1
CMakeFiles/Makefile2:355: recipe for target 'src/CMakeFiles/dnnl.dir/all' failed
make[1]: *** [src/CMakeFiles/dnnl.dir/all] Error 2
Makefile:159: recipe for target 'all' failed
make: *** [all] Error 2
It's still the same error as before.
@wangzy0327 I meant that the core dump happens in the compiler, in the CUDA backend, not in oneDNN. From the log, the compilation of oneDNN has completed, and the compiler triggers this error during the linking phase.
The issue intel/llvm#5980 describes a similar failure for another shared library in debug mode and has some tracing info, but perhaps the solution there doesn't work for your case.
I am personally not an expert in the CUDA backend compiler. Could you please raise a ticket in the https://github.com/intel/llvm/issues repo to see if that is more helpful?
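Since the crash occurs at link time inside a driver sub-command, it helps to capture the exact clang++ invocation before filing a ticket. A hedged sketch, assuming a standard oneDNN Makefile build; the target name comes from the log above, and the clang++ flags shown in the comment are only an illustration of what the verbose output might contain:

```shell
# Sketch (assumptions noted above): rebuild only the failing target with
# full command echoing, so the exact clang++ link command is visible.
cd build            # oneDNN build directory (assumed location)
make VERBOSE=1 src/libdnnl.so.3.2 2>&1 | tee link.log

# Then re-run the clang++ link command found in link.log with -v added,
# e.g. something like:
#   clang++ -v -fsycl -fsycl-targets=nvptx64-nvidia-cuda ... -o libdnnl.so.3.2
# The -v output lists every sub-command (llvm-foreach, ptxas, ...), so the
# crashing llvm-foreach invocation can be re-run in isolation or under gdb.
```

Attaching link.log and the isolated llvm-foreach command to the intel/llvm ticket would let the backend maintainers reproduce the segfault directly.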