
Comments (10)

dutran commented on August 24, 2024

@whjxnyzh Thanks for the post. I have fixed the backward compatibility with the old Blob calls. Please try it again.


whjxnyzh123 commented on August 24, 2024

@dutran I tried again, but this time I get:

src/caffe/util/math_functions.cu(140): error: calling a __host__ function("std::signbit<float> ") from a __global__ function("caffe::sgnbit_kernel<float> ") is not allowed

src/caffe/util/math_functions.cu(140): error: calling a __host__ function("std::signbit<double> ") from a __global__ function("caffe::sgnbit_kernel<double> ") is not allowed

2 errors detected in the compilation of "/tmp/tmpxft_00009fb3_00000000-12_math_functions.compute_35.cpp1.ii".
make: *** [build/src/caffe/util/math_functions.cuo] Error 2
make: *** Waiting for unfinished jobs....
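
For reference, this is roughly the situation nvcc is complaining about. The sketch below is a minimal, hypothetical reproduction (not the actual C3D source; the kernel name and loop are assumptions): std::signbit from <cmath> is a host-only function with this gcc/nvcc combination, so calling it from a __global__ kernel is rejected, while the unqualified signbit() call resolves to CUDA's device overload and compiles.

// minimal_repro.cu -- hypothetical stand-alone reproduction of the error
#include <cmath>

template <typename Dtype>
__global__ void sgnbit_kernel_repro(const int n, const Dtype* x, Dtype* y) {
  for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
       i += blockDim.x * gridDim.x) {
    // y[i] = std::signbit(x[i]);  // error: calling a __host__ function
    //                             //        from a __global__ function
    y[i] = signbit(x[i]);          // OK: resolves to CUDA's device overload
  }
}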


dutran commented on August 24, 2024

It works fine on my machine. Which CUDA version are you using? Mine is 5.5. It looks like this problem comes from CUDA.


whjxnyzh123 commented on August 24, 2024

@dutran Here is my nvidia-smi output:

 NVIDIA-SMI 340.29     Driver Version: 340.29         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K20m          Off  | 0000:08:00.0     Off |                    0 |
| N/A   28C    P0    53W / 225W |     11MiB /  4799MiB |     98%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|  No running compute processes found                                         |
+-----------------------------------------------------------------------------+

The Caffe installation document says:

library version 7.0 and the latest driver version are recommended, but 6.* is fine too
5.5 and 5.0 are compatible but considered legacy


dutran commented on August 24, 2024

I googled it. It seems your gcc and nvcc versions are not compatible. Anyway, try the hack described here: http://stackoverflow.com/questions/28985551/caffe-installation-in-ubuntu-14-04 (a minor change in caffe/include/caffe/util/math_functions.hpp).


whjxnyzh123 commented on August 24, 2024

@dutran Thank you very much. I will try it tomorrow. PS: your paper is great.


whjxnyzh123 commented on August 24, 2024

@dutran In caffe/include/caffe/util/math_functions.hpp I changed

using std::signbit;
DEFINE_CAFFE_CPU_UNARY_FUNC(sgnbit, y[i] = signbit(x[i])); 

to

// using std::signbit;
DEFINE_CAFFE_CPU_UNARY_FUNC(sgnbit, y[i] = std::signbit(x[i]));
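
For what it's worth, here is why that edit can make a difference (a sketch, assuming DEFINE_CAFFE_CPU_UNARY_FUNC expands to a plain host-side loop roughly as in upstream Caffe; the exact macro may differ). The macro only generates CPU code, so fully qualifying the call as std::signbit there is harmless, while dropping the header's using std::signbit; declaration keeps std::signbit from being picked up by the unqualified signbit call inside the GPU kernel in math_functions.cu, which can then resolve to CUDA's device overload.

// Hypothetical expansion of
// DEFINE_CAFFE_CPU_UNARY_FUNC(sgnbit, y[i] = std::signbit(x[i]));
// the real macro in math_functions.hpp may differ in details.
#include <cmath>

template <typename Dtype>
void caffe_cpu_sgnbit(const int n, const Dtype* x, Dtype* y) {
  for (int i = 0; i < n; ++i) {
    y[i] = std::signbit(x[i]);  // host-only call is fine in CPU-only code
  }
}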

With that change, make test succeeds, but make runtest fails:

[kli@node2 C3D]$make runtest
build/test/test_all.testbin 0 --gtest_shuffle
Cuda number of devices: 1
Setting to use device 0
Current device id: 0
Note: Randomizing tests' orders with a seed of 64493 .
[==========] Running 401 tests from 74 test cases.
[----------] Global test environment set-up.
[----------] 2 tests from HingeLossLayerTest/0, where TypeParam = float
[ RUN      ] HingeLossLayerTest/0.TestGradientGPU
[       OK ] HingeLossLayerTest/0.TestGradientGPU (288 ms)
[ RUN      ] HingeLossLayerTest/0.TestGradientCPU
[       OK ] HingeLossLayerTest/0.TestGradientCPU (17 ms)
[----------] 2 tests from HingeLossLayerTest/0 (305 ms total)

[----------] 5 tests from ConcatLayerTest/0, where TypeParam = float
[ RUN      ] ConcatLayerTest/0.TestGPUGradient
[       OK ] ConcatLayerTest/0.TestGPUGradient (222 ms)
[ RUN      ] ConcatLayerTest/0.TestSetupNum
[       OK ] ConcatLayerTest/0.TestSetupNum (0 ms)
[ RUN      ] ConcatLayerTest/0.TestCPUGradient
[       OK ] ConcatLayerTest/0.TestCPUGradient (165 ms)
[ RUN      ] ConcatLayerTest/0.TestCPUNum
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 1.50359e-42
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 0
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 6.9887e-35
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 7.14662e-44
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 7.05658e-35
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 0
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 0
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 0
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 0
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 0
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 6.98088e-35
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 0
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 1.93379e-43
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 0
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 6.98098e-35
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 0
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 1.93379e-43
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 0.000695128
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 1.70212e+25
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 7.15612e+22
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 1.08966e+27
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 4.96403e+28
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 5.07783e+31
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 7.14662e-44
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 7.31572e+28
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 2.60742e-09
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 1.10415e+21
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:96: Failure
Value of: this->blob_bottom_vec_0[0]->data_at(n, c, h, w)
  Actual: 2.9643e+29
Expected: this->blob_top_->data_at(n, c, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 2.25609e-43
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 8.08549e-43
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 2.99648e+32
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 3.70227e-35
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 2.82311e+23
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 0
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 4.61141e+24
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 5.02391e-35
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 0.169386
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 0
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 1.10415e+21
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: -nan
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 7.58639e+31
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 1
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 1.7109e+19
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 2.99648e+32
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 1.63856e+19
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 2.82311e+23
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 7.94238e+17
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 4.61141e+24
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 2.63146e+20
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 0.169386
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 2.5754e+20
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 1.10415e+21
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 3.38443e-12
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 7.58639e+31
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 1.46018e-19
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 1.7109e+19
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 3.2487e+33
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 1.63856e+19
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 1.35559e-19
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 7.98741e+17
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 3.09798e+32
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 2.63146e+20
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 2.28424e+20
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 2.5754e+20
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 4.86111e+30
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 3.38443e-12
src/caffe/test/test_concat_layer.cpp:104: Failure
Value of: this->blob_bottom_vec_0[1]->data_at(n, c, h, w)
  Actual: 2.46772e+20
Expected: this->blob_top_->data_at(n, c+3, h, w)
Which is: 1.46018e-19
[  FAILED  ] ConcatLayerTest/0.TestCPUNum, where TypeParam = float (1 ms)
[ RUN      ] ConcatLayerTest/0.TestSetupChannels
[       OK ] ConcatLayerTest/0.TestSetupChannels (0 ms)
[----------] 5 tests from ConcatLayerTest/0 (388 ms total)

[----------] 9 tests from ConvolutionLayerTest/0, where TypeParam = float
[ RUN      ] ConvolutionLayerTest/0.TestGPUSimpleConvolution
[       OK ] ConvolutionLayerTest/0.TestGPUSimpleConvolution (1 ms)
[ RUN      ] ConvolutionLayerTest/0.TestCPUGradientGroup
[       OK ] ConvolutionLayerTest/0.TestCPUGradientGroup (742 ms)
[ RUN      ] ConvolutionLayerTest/0.TestSetup
[       OK ] ConvolutionLayerTest/0.TestSetup (0 ms)
[ RUN      ] ConvolutionLayerTest/0.TestCPUSimpleConvolution
[       OK ] ConvolutionLayerTest/0.TestCPUSimpleConvolution (5 ms)
[ RUN      ] ConvolutionLayerTest/0.TestGPUSimpleConvolutionGroup
*** glibc detected *** build/test/test_all.testbin: malloc(): memory corruption (fast): 0x0000000006bb9660 ***
======= Backtrace: =========
/lib64/libc.so.6[0x33ed475e66]
/lib64/libc.so.6[0x33ed479c3f]
/lib64/libc.so.6(__libc_malloc+0x71)[0x33ed47a6b1]
/usr/lib64/libstdc++.so.6(_Znwm+0x1d)[0x33f445e15d]
build/test/test_all.testbin[0x4dfd66]
build/test/test_all.testbin[0x54476d]
build/test/test_all.testbin[0x536551]
build/test/test_all.testbin[0x536637]
build/test/test_all.testbin[0x536777]
build/test/test_all.testbin(_ZN7testing8internal12UnitTestImpl11RunAllTestsEv+0x2cf)[0x53b67f]
build/test/test_all.testbin[0x54431d]
build/test/test_all.testbin[0x535b7a]
build/test/test_all.testbin[0x444831]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x33ed41ed5d]
build/test/test_all.testbin[0x444599]
======= Memory map: ========
00400000-007ed000 r-xp 00000000 00:16 589966976                          /public/home/kli/C3D/build/test/test_all.testbin
009ec000-00a0d000 rw-p 003ec000 00:16 589966976                          /public/home/kli/C3D/build/test/test_all.testbin
00a0d000-00a0e000 rw-p 00000000 00:00 0
0240c000-06bdb000 rw-p 00000000 00:00 0                                  [heap]
200000000-200100000 rw-s 84accb000 00:05 17983                           /dev/nvidiactl
200100000-204100000 ---p 00000000 00:00 0
204100000-204200000 rw-s 100894000 00:05 17983                           /dev/nvidiactl
204200000-204300000 ---p 00000000 00:00 0
204300000-204400000 rw-s 100abf000 00:05 17983                           /dev/nvidiactl
204400000-204500000 ---p 00000000 00:00 0
204500000-204600000 rw-s 1668d9000 00:05 17983                           /dev/nvidiactl
204600000-1900000000 ---p 00000000 00:00 0
33ed000000-33ed020000 r-xp 00000000 103:02 991240                        /lib64/ld-2.12.so
33ed21f000-33ed220000 r--p 0001f000 103:02 991240                        /lib64/ld-2.12.so
33ed220000-33ed221000 rw-p 00020000 103:02 991240                        /lib64/ld-2.12.so
33ed221000-33ed222000 rw-p 00000000 00:00 0
33ed400000-33ed58a000 r-xp 00000000 103:02 991241                        /lib64/libc-2.12.so
33ed58a000-33ed78a000 ---p 0018a000 103:02 991241                        /lib64/libc-2.12.so
33ed78a000-33ed78e000 r--p 0018a000 103:02 991241                        /lib64/libc-2.12.so
33ed78e000-33ed78f000 rw-p 0018e000 103:02 991241                        /lib64/libc-2.12.so
33ed78f000-33ed794000 rw-p 00000000 00:00 0
33ed800000-33ed817000 r-xp 00000000 103:02 1826817                       /lib64/libpthread-2.12.so
33ed817000-33eda17000 ---p 00017000 103:02 1826817                       /lib64/libpthread-2.12.so
33eda17000-33eda18000 r--p 00017000 103:02 1826817                       /lib64/libpthread-2.12.so
33eda18000-33eda19000 rw-p 00018000 103:02 1826817                       /lib64/libpthread-2.12.so
33eda19000-33eda1d000 rw-p 00000000 00:00 0
33edc00000-33edc02000 r-xp 00000000 103:02 1826818                       /lib64/libdl-2.12.so
33edc02000-33ede02000 ---p 00002000 103:02 1826818                       /lib64/libdl-2.12.so
33ede02000-33ede03000 r--p 00002000 103:02 1826818                       /lib64/libdl-2.12.so
33ede03000-33ede04000 rw-p 00003000 103:02 1826818                       /lib64/libdl-2.12.so
33ee000000-33ee083000 r-xp 00000000 103:02 1826820                       /lib64/libm-2.12.so
33ee083000-33ee282000 ---p 00083000 103:02 1826820                       /lib64/libm-2.12.so
33ee282000-33ee283000 r--p 00082000 103:02 1826820                       /lib64/libm-2.12.so
33ee283000-33ee284000 rw-p 00083000 103:02 1826820                       /lib64/libm-2.12.so
33ee400000-33ee415000 r-xp 00000000 103:02 1826819                       /lib64/libz.so.1.2.3
33ee415000-33ee614000 ---p 00015000 103:02 1826819                       /lib64/libz.so.1.2.3
33ee614000-33ee615000 r--p 00014000 103:02 1826819                       /lib64/libz.so.1.2.3
33ee615000-33ee616000 rw-p 00015000 103:02 1826819                       /lib64/libz.so.1.2.3
33ee800000-33ee807000 r-xp 00000000 103:02 1826840                       /lib64/librt-2.12.so
33ee807000-33eea06000 ---p 00007000 103:02 1826840                       /lib64/librt-2.12.so
33eea06000-33eea07000 r--p 00006000 103:02 1826840                       /lib64/librt-2.12.so
33eea07000-33eea08000 rw-p 00007000 103:02 1826840                       /lib64/librt-2.12.so
33eec00000-33eec16000 r-xp 00000000 103:02 1826828                       /lib64/libresolv-2.12.so
33eec16000-33eee16000 ---p 00016000 103:02 1826828                       /lib64/libresolv-2.12.so
33eee16000-33eee17000 r--p 00016000 103:02 1826828                       /lib64/libresolv-2.12.so
33eee17000-33eee18000 rw-p 00017000 103:02 1826828                       /lib64/libresolv-2.12.so
33eee18000-33eee1a000 rw-p 00000000 00:00 0
33ef000000-33ef01d000 r-xp 00000000 103:02 1826829                       /lib64/libselinux.so.1
33ef01d000-33ef21c000 ---p 0001d000 103:02 1826829                       /lib64/libselinux.so.1
33ef21c000-33ef21d000 r--p 0001c000 103:02 1826829                       /lib64/libselinux.so.1
33ef21d000-33ef21e000 rw-p 0001d000 103:02 1826829                       /lib64/libselinux.so.1
33ef21e000-33ef21f000 rw-p 00000000 00:00 0
33ef400000-33ef515000 r-xp 00000000 103:02 1826846                       /lib64/libglib-2.0.so.0.2800.8
33ef515000-33ef715000 ---p 00115000 103:02 1826846                       /lib64/libglib-2.0.so.0.2800.8
33ef715000-33ef716000 rw-p 00115000 103:02 1826846                       /lib64/libglib-2.0.so.0.2800.8
33ef716000-33ef717000 rw-p 00000000 00:00 0
33ef800000-33ef804000 r-xp 00000000 103:02 1826851                       /lib64/libgthread-2.0.so.0.2800.8
33ef804000-33efa03000 ---p 00004000 103:02 1826851                       /lib64/libgthread-2.0.so.0.2800.8
33efa03000-33efa04000 rw-p 00003000 103:02 1826851                       /lib64/libgthread-2.0.so.0.2800.8
33efc00000-33efc4a000 r-xp 00000000 103:02 1826852                       /lib64/libgobject-2.0.so.0.2800.8
33efc4a000-33efe49000 ---p 0004a000 103:02 1826852                       /lib64/libgobject-2.0.so.0.2800.8
33efe49000-33efe4b000 rw-p 00049000 103:02 1826852                       /lib64/libgobject-2.0.so.0.2800.8
33efe4b000-33efe4c000 rw-p 00000000 00:00 0
make: *** [runtest] Aborted (core dumped)
[kli@node2 C3D]$


dutran commented on August 24, 2024

@whjxnyzh Try it again now. Please make clean, then compile again, then test. Thanks


whjxnyzh123 commented on August 24, 2024

@dutran Great! It worked! Thank you very much!


dutran commented on August 24, 2024

Glad to hear! Cheers.

