block's People

Contributors

ajmay81, elvirars, gkc1000, naokin, robolivares, sanshar, shengg, sunqm, zhengbx

block's Issues

NPDM crash

Dear developers,

I had a problem running a small test calculation.

This is the input:

nelec 4
spin 0
irrep 1
schedule
0 1000 1e-09 0.0
end
maxiter 100
sweep_tol 1e-08
reset_iter
onedot
disk_dump_pdm
threepdm
npdm_no_intermediate
orbitals FCIDUMP
symmetry c2v
hf_occ integral
gaopt default
nroots 2
weights 0.5 0.5

The FCI dump file can be found at: http://pastebin.com/raw/nqwb1H8F

The calculation crashed near the end, during the NPDM part:

                     Block Iteration :: 0

Current NPDM sweep position = 1 of 2

                     Sites ::  2 3     # states: 8    # states: 8


                     Block Iteration :: 1

Current NPDM sweep position = 2 of 2

                     Sites ::  1 2 3     # states: 3    # states: 3

                     Elapsed Sweep CPU  Time (seconds): 0.190
                     Elapsed Sweep Wall Time (seconds): 0.241

                     Using the one dot algorithm ... 
                     Starting renormalisation sweep in backwards direction

                     Starting block is :: 
                     Sites ::  3     # states: 3    # states: 3


                     Block Iteration :: 0

Current NPDM sweep position = 1 of 2

This is gdb output:

#0  0x00002aaaaf310cba in ftell () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x0000000000c48938 in SpinAdapted::Npdm::Threepdm_container::dump_to_disk(std::vector<std::pair<std::vector<int, std::allocator<int> >, double>, std::allocator<std::pair<std::vector<int, std::allocator<int> >, double> > >&) ()
#2  0x0000000000c48d95 in SpinAdapted::Npdm::Threepdm_container::store_npdm_elements(std::vector<std::pair<std::vector<int, std::allocator<int> >, double>, std::allocator<std::pair<std::vector<int, std::allocator<int> >, double> > > const&) ()
#3  0x0000000000bce1a1 in SpinAdapted::Npdm::Npdm_driver::do_inner_loop(char, SpinAdapted::Npdm::Npdm_expectations&, SpinAdapted::Npdm::NpdmSpinOps_base&, SpinAdapted::Npdm::NpdmSpinOps&, SpinAdapted::Npdm::NpdmSpinOps&) ()
#4  0x0000000000bd0820 in SpinAdapted::Npdm::Npdm_driver::par_loop_over_block_operators(char, SpinAdapted::Npdm::Npdm_expectations&, SpinAdapted::Npdm::NpdmSpinOps&, SpinAdapted::Npdm::NpdmSpinOps&, SpinAdapted::Npdm::NpdmSpinOps&, bool) ()
#5  0x0000000000bd11b3 in SpinAdapted::Npdm::Npdm_driver::loop_over_operator_patterns(SpinAdapted::Npdm::Npdm_patterns&, SpinAdapted::Npdm::Npdm_expectations&, SpinAdapted::SpinBlock const&) ()
#6  0x0000000000bd1506 in SpinAdapted::Npdm::Npdm_driver::compute_npdm_elements(std::vector<SpinAdapted::Wavefunction, std::allocator<SpinAdapted::Wavefunction> >&, SpinAdapted::SpinBlock const&, int, int) ()
#7  0x0000000000bc8d4a in SpinAdapted::Npdm::npdm_block_and_decimate(SpinAdapted::Npdm::Npdm_driver_base&, SpinAdapted::SweepParams&, SpinAdapted::SpinBlock&, SpinAdapted::SpinBlock&, bool const&, bool const&, int, int) ()
#8  0x0000000000bc9ac3 in SpinAdapted::Npdm::npdm_do_one_sweep(SpinAdapted::Npdm::Npdm_driver_base&, SpinAdapted::SweepParams&, bool const&, bool const&, bool const&, int const&, int, int) ()
#9  0x0000000000bcb5af in SpinAdapted::Npdm::npdm(SpinAdapted::NpdmOrder, bool, bool) ()
#10 0x00000000009c93c2 in calldmrg(char*, char*) ()
#11 0x0000000000972acb in main ()

Compiling Block-1.5.2 in a conda environment

Hello,

I am struggling to compile Block in my conda environment.

Here is what I used:

  • g++ 5.2.0
  • MPI: no
  • Openmp 2018.0.0
  • boost 1.61.0
  • mkl 2017.0.3

and I got this error.
error.txt

I would greatly appreciate it if you could tell me which conda packages can be used.
I have played around with other compilers (intel icpc, mpiicpc) and other variations (boost versions, and recompiling boost with MPI support).
Neither the parallel nor the serial version could be compiled.
Please let me know if you need any further info to clarify the situation.
Thanks for your help.

How to define the active space in dmrg-nevpt2?

[Block-1.0.1]
From the tests in the dmrg_tests directory, I found no clue about how the active space is specified for dmrg-nevpt2, nor can I find the dmrg-scf module. Is the latest version of Block able to run dmrg-scf calculations? I would also like to know whether it is possible to delete certain virtual orbitals in a dmrg-nevpt2 calculation via the Block input file, which of course can be done by removing them when re-writing the FCIDUMP file.

spin-orbital 2-particle density matrices

This need was first raised in pyscf issue pyscf/pyscf#357. The two-particle spin-orbital density matrices <i_alpha^+ j_alpha^+ k_alpha l_alpha>, <i_alpha^+ j_beta^+ k_beta l_alpha>, and <i_beta^+ j_beta^+ k_beta l_beta> are needed to analyze spin correlation functions.
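
For context, a minimal sketch (hypothetical interface and names, not pyscf's or Block's actual API) of how these spin-resolved 2-RDM blocks enter a spin correlation function: for i != j, the longitudinal correlation <S^z_i S^z_j> is a signed combination of exactly the density-density elements <n_{i,si} n_{j,sj}> = <a+_{i,si} a+_{j,sj} a_{j,sj} a_{i,si}>.

#include <functional>
#include <iostream>

enum Spin { ALPHA, BETA };

// rdm2(i, si, j, sj) should return <a+_{i,si} a+_{j,sj} a_{j,sj} a_{i,si}>.
using Rdm2 = std::function<double(int, Spin, int, Spin)>;

double SzSz(const Rdm2& rdm2, int i, int j) {
    const double aa = rdm2(i, ALPHA, j, ALPHA);
    const double ab = rdm2(i, ALPHA, j, BETA);
    const double ba = rdm2(i, BETA, j, ALPHA);
    const double bb = rdm2(i, BETA, j, BETA);
    return 0.25 * (aa - ab - ba + bb);
}

int main() {
    // Toy 2-RDM for two sites whose spins are always opposite.
    Rdm2 toy = [](int, Spin si, int, Spin sj) { return si != sj ? 0.5 : 0.0; };
    std::cout << SzSz(toy, 0, 1) << "\n";  // prints -0.25 (antiferromagnetic)
    return 0;
}

The spin-summed 2-RDM cannot resolve the aa/ab/ba/bb blocks separately, which is why the spin-orbital matrices are requested.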

mpirun meets a segmentation fault

error message:

mpirun -np 2 ./block.spin_adapted input.dat > output.dat
*** Process received signal ***
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
[ 0] /lib64/libpthread.so.0(+0xf890)[0x2b3e55ef5890]
[ 1] ./block.spin_adapted[0x5528f6]
[ 2] ./block.spin_adapted[0x7d68a4]
[ 3] ./block.spin_adapted[0x7d0e88]
[ 4] ./block.spin_adapted[0x7cf3ab]
[ 5] ./block.spin_adapted[0x7ce3bd]
[ 6] ./block.spin_adapted[0x46d2bb]
[ 7] ./block.spin_adapted[0x46a453]
[ 8] ./block.spin_adapted[0x54d02a]
[ 9] /lib64/libc.so.6(__libc_start_main+0xf5)[0x2b3e57841b05]
[10] ./block.spin_adapted[0x427db7]
*** End of error message ***

It seems that the error occurs when a new thread is being created.
source code downloaded on 06-10-2015.
compiler: intel 2015 update 3
mpi version: openmpi-1.8.4
boost library version: 1.55.0

SpinAdapted::TwoElectronArray&]: Assertion `px != 0' failed.

When reverting to the previous warmup routine, the code crashes with the following error:

block.spin_adapted: /home/ro3/libs/boost/boost_1_50_0_gcc/boost/smart_ptr/shared_ptr.hpp:418: boost::shared_ptr<T>::reference boost::shared_ptr<T>::operator*() const [with T = SpinAdapted::TwoElectronArray, boost::shared_ptr<T>::reference = SpinAdapted::TwoElectronArray&]: Assertion `px != 0' failed.

where SpinAdapted::TwoElectronArray is used in spinblock.C.

These changes were made at more or less the same time as the change in the routine, so I suspect they're connected. Any ideas on how to get around this?

Open-shell calculation with BLOCK. Reading input issues.

Hello,

I tried to simulate an open-shell system, but BLOCK crashes on reading the FCIDUMP file.

For example, I want to treat the H2 molecule as open-shell. The FCIDUMP file is exported as follows:

&FCI NORB=   2,NELEC=   2,MS2= 1,
  ORBSYM=1,1,
  IUHF=.TRUE. ,
  ISYM=0,
/
 4.6190068036607312E-01   1   1   1   1
...
 2.0693987712918233E-01   4   4   4   4
...
-2.5214903899133018E-01   4   4   0   0
 4.3708714060572051E-01   0   0   0   0

It throws an exception about out-of-range indexing in
https://github.com/sanshar/StackBlock/blob/f95317b08043b7c531289576d59ad74a6d920741/input.C#L2714
because it creates a 2x2 matrix (since I have NORB=2) rather than a 4x4 one.

However, by convention NORB should give the number of orbitals for one spin direction; in the open-shell case the total number of spin orbitals is 2*NORB.
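
A minimal sketch (illustrative only, not Block's parser) of the dimensioning this convention implies:

#include <iostream>
#include <vector>

int main() {
    const int norb = 2;      // NORB from the &FCI header: spatial orbitals
    const bool iuhf = true;  // IUHF=.TRUE.

    // With IUHF, integral indices run over spin orbitals, so the
    // one-electron matrix should be (2*NORB) x (2*NORB) = 4x4, not 2x2.
    const int dim = iuhf ? 2 * norb : norb;
    std::vector<std::vector<double> > h(dim, std::vector<double>(dim, 0.0));
    std::cout << "one-electron matrix: " << h.size() << "x" << h[0].size() << "\n";
    return 0;
}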

If I change it to 4 orbitals, I get the error

Initial HF occupancy guess: 1 0 0 0 1 0 0 0 
Checking input for errors
Orbital cannot have an irreducible representation of 0  with c1 symmetry

and we see that the total number of orbitals became 8, so NORB=4 is multiplied by 2 somewhere. What is wrong here?

Also, I found that the code searches for "IUHF", not "UHF", in the FCIDUMP file.

Could you help to resolve these problems?

Thanks.

Crash with intel compiler when run in parallel

Hi,

I compiled Block 1.0.1 with the Intel compiler 15.0.3. When I run a test job using MPI I get a segmentation fault. I found that the wrong version of add_local_indices gets called, i.e. not the specialization
template<> void Op_component<Des>::add_local_indices(int i, int j, int k) but the function from the primary template, which does nothing.

In my opinion this is a bug: in save_load_block.C there is no declaration of this specialization, but as stated in this answer from stackoverflow such a declaration is needed, otherwise the program is ill-formed. This means undefined behavior, and indeed icpc behaves differently from g++ here.

You could, for example, put declarations of all the specializations in op_components.C into the header file op_components.h to prevent this error, as sketched below.
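
A condensed, single-file sketch of the pattern (illustrative; the real code spreads this across op_components.h, op_components.C and save_load_block.C):

#include <iostream>

template <class T>
struct Op_component {
    // Primary template: does nothing, like the generic add_local_indices.
    void add_local_indices(int, int, int) {}
};

struct Des {};

// The suggested fix: declare the explicit specialization in the header, so
// every translation unit that calls it knows the specialization exists.
// Without this declaration, a TU such as save_load_block.C may implicitly
// instantiate the primary template instead -- ill-formed, no diagnostic
// required, which is why g++ and icpc can legitimately disagree.
template <> void Op_component<Des>::add_local_indices(int i, int j, int k);

// Definition (lives in op_components.C in the real code).
template <> void Op_component<Des>::add_local_indices(int i, int j, int k) {
    std::cout << "specialization called with " << i << j << k << "\n";
}

int main() {
    Op_component<Des> op;
    op.add_local_indices(0, 1, 2);  // calls the specialization
    return 0;
}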

Best regards,
Matthias

Dynamic correlation

Hi,

In order to obtain quantitative results, static correlation (which is nicely treated by DMRG) is not enough; we really need dynamic correlation included as well. Several methods for treating dynamic correlation are implemented in Block, but none of them is well documented:

  • NEVPT2: there is an example, but no instructions on how to prepare the input files.
  • MPS-NEVPT2
  • MPSPT2
  • cu(4)-CASPT2 interfaced with Molcas. This seems promising, considering that Molcas is widely used, but there is neither an instruction nor even an example.

It would therefore be very nice if the authors could provide some examples, as well as a manual, for these methods.
Thank you.

improve makefile

Improve the makefile (especially where -g is located).
Add makedepend support, so that we don't always have to recompile everything.

Openmpi-1.8.5 doesn't seem to function well with Block-1.0.1 on my servers.

The program didn't complain about any error, but I am afraid something went wrong. I tried to do the same calculation, DMRG-CASCI-[12e,28o], with different numbers of procs (8 vs 16), but the total time is almost the same (7424.351 s vs 7384.303 s). I compiled openmpi using gcc-4.8.4, and boost-1.5.5 the same way. By the way, the same calculation using Block-0.9.6 took about 2000 seconds with 16 procs. How should I deal with this?

Open-shell calculation of Li: strange RDM1 occupation with half an electron and n_alpha - n_beta = 0

Using the FCIDUMP generated by the method proposed in the previous issue, I'm trying to calculate the Li atom with 3 electrons.

Input file is

num_thrds 12
nelec 3
orbitals li_6orb.fcidump
spin   1

schedule default
maxiter 20
startM 100
maxM 1000
hf_occ 2 1 0 0 0 0

onepdm

The output reports that Spin = 1:

Finished Sweep with 1000 states and sweep energy for State [ 0 ] with Spin [ 1 ] :: -7.395656723881

but the density matrix onepdm.0.0.txt has the following occupations:

12
0 0 9.99984987454079e-01
1 1 9.99984987454079e-01
2 2 4.99992968823474e-01
2 3 0.00000000000000e+00
3 2 0.00000000000000e+00
3 3 4.99992968823474e-01

So the unpaired electron is split between the spin-up and spin-down orbitals, and the off-diagonal components are zero. The number of spin-up electrons is 1.5 (0.99998 + 0.49999), and n_alpha - n_beta = 0.

Why does this happen? I'm not sure it is working correctly.

Best regards

Failed with run-time errors, etc.

Due to some missing broadcasts in input.C, the DMRG calculation failed with run-time errors. After fixing these bugs in my local repository, I tried to compute the twopdm of a simple molecule, but it was not computed correctly: the twopdm elements were not computed (i.e. they were assigned 0.0) except on the root process. Has anyone met the same issue, or does anyone have an idea?

mpspt2

Dear all,

How is the PERTURB file in Block/dmrg_tests/mpspt2/ obtained?
Is BLOCK 1.1 going to be available?

Best wishes
Luca

BLOCK runtime error

Hi,

I tried to compile BLOCK as noted on http://sanshar.github.io/Block/, which finally succeeded with boost 1.55.0 and the intel mpiicpc compiler version 15.0.1 20141023. Boost is compiled with the intel icpc compiler with the same version number. However, when I try to run block with intel's mpirun I get the following error message:

Attempting to use an MPI routine before initializing MPI

Any ideas?

Best wishes,
Sebastian

IrrepSpace::operator+=(...) doesn't work correctly

IrrepSpace::operator+=(...) in IrrepSpace.C doesn't work correctly; the "this" object is never updated.
Because operator+ returns a vector of IrrepSpace objects for non-abelian symmetry, it is not easy to fix this correctly for both abelian and non-abelian symmetries.
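
A hypothetical illustration of the bug pattern (not the actual IrrepSpace.C source; XOR of irrep labels stands in for an abelian group product):

#include <iostream>

struct IrrepSpace {
    int irrep;
    IrrepSpace& operator+=(const IrrepSpace& rhs) {
        int combined = irrep ^ rhs.irrep;  // result of combining the irreps
        (void)combined;                    // bug: never written back, so
        return *this;                      // "this" is left unchanged
        // The abelian fix would be: irrep = combined;
        // The non-abelian case is harder, because combining two irreps
        // yields a std::vector<IrrepSpace>, not a single IrrepSpace.
    }
};

int main() {
    IrrepSpace a{1}, b{3};
    a += b;
    std::cout << a.irrep << "\n";  // prints 1; the combined irrep is lost
    return 0;
}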

Naoki

Code Crash

Dear developers,

I am trying to get a functional binary of the Block code, as I cannot use the precompiled version on my system. I am using the source code of version 1.0.1 and compile it with gcc/g++ 6.1.0, boost 1.55.0, openmpi 1.10.2 and OpenBLAS, all compiled with gcc/g++ 6.1.0. Many of the tests run and produce correct results, but tests involving the 2-RDM or NEVPT2 crash. For example, when I run the example in c2_d2h_small, the program crashes with

[malta:04396] *** Process received signal ***
[malta:04396] Signal: Segmentation fault (11)
[malta:04396] Signal code: Address not mapped (1)
[malta:04396] Failing at address: (nil)
[malta:04396] [ 0] /lib64/libpthread.so.0[0x365f60e4c0]
[malta:04396] [ 1] ../../block.spin_adapted-mpi[0x973936]
[malta:04396] [ 2] ../../block.spin_adapted-mpi[0x966aa3]
[malta:04396] [ 3] ../../block.spin_adapted-mpi[0x967c1a]
[malta:04396] [ 4] ../../block.spin_adapted-mpi[0x955a13]
[malta:04396] [ 5] ../../block.spin_adapted-mpi[0x956887]
[malta:04396] [ 6] ../../block.spin_adapted-mpi[0x994681]
[malta:04396] [ 7] ../../block.spin_adapted-mpi[0x645fe1]
[malta:04396] [ 8] ../../block.spin_adapted-mpi[0x6da93d]
[malta:04396] [ 9] /lib64/libc.so.6(__libc_start_main+0xf4)[0x365ea1d974]
[malta:04396] [10] ../../block.spin_adapted-mpi[0x60ac29]

[malta:04396] *** End of error message ***

mpirun noticed that process rank 4 with PID 4396 on node malta exited on signal 11 (Segmentation fault).

The serial version also produces a segmentation fault and stops. I tried to compile the code on another machine with gcc-4.9, but I got the same problem. I also tried different optimization levels, but nothing seems to work.

If I remove the twopdm line from the example, the code runs and stops normally.

Best regards,
Jose Luis

Installation error using boost 1.71.0

I know that Block must be compiled with boost 1.55, but I can't find that package anymore, so when I try to use version 1.71.0 this error shows up:

$ make
g++ -I. -I./include/ -I./ -I./newmat10/ -I/opt/local/include -I. -I./modules/generate_blocks/ -I./modules/onepdm -I./modules/twopdm/ -I./modules/npdm -I./modules/two_index_ops -I./modules/three_index_ops -I./modules/four_index_ops -std=c++0x -I./modules/ResponseTheory -I./modules/nevpt2 -I./molcas -I./modules/mps_nevpt -DNDEBUG -O2 -g -funroll-loops -Werror -DBLAS -DUSELAPACK -DSERIAL -DBOOST_1_71_0 -DFAST_MTP -D_HAS_CBLAS -c saveBlock.C -o saveBlock.o
In file included from input.h:21,
from global.h:36,
from pario.h:7,
from ObjectMatrix.h:12,
from StackBaseOperator.h:12,
from StackOperators.h:3,
from Stack_op_components.h:23,
from Stackspinblock.h:7,
from saveBlock.C:8:
/usr/include/boost/tr1/unordered_map.hpp:8:38: error: too many decimal points in number
8 | <title>boost/tr1/unordered_map.hpp - 1.47.0</title>
| ^~~~~~
In file included from input.h:21,
from global.h:36,
from pario.h:7,
from ObjectMatrix.h:12,
from StackBaseOperator.h:12,
from StackOperators.h:3,
from Stack_op_components.h:23,
from Stackspinblock.h:7,
from saveBlock.C:8:
/usr/include/boost/tr1/unordered_map.hpp:55:12: error: #include expects "FILENAME" or
55 | # include <boost/tr1/detail/config.hpp>
| ^
/usr/include/boost/tr1/unordered_map.hpp:68:10: error: #include expects "FILENAME" or
68 | #include <boost/unordered_map.hpp>
| ^
In file included from input.h:21,
from global.h:36,
from pario.h:7,
from ObjectMatrix.h:12,
from StackBaseOperator.h:12,
from StackOperators.h:3,
from Stack_op_components.h:23,
from Stackspinblock.h:7,
from saveBlock.C:8:
/usr/include/boost/tr1/unordered_map.hpp:1:1: error: expected unqualified-id before ‘<’ token
1 | <style type="text/css"> body { behavior: url(/style-v2/csshover3.htc); } </style> <![endif]-->
| ^
/usr/include/boost/tr1/unordered_map.hpp:29:56: error: expected unqualified-id before ‘<’ token
29 | world. — <a href=
| ^
input.h:102:3: error: ‘calcType’ does not name a type
102 | calcType m_calc_type;
| ^~~~~~~~
input.h:103:3: error: ‘noiseTypes’ does not name a type
103 | noiseTypes m_noise_type;
| ^~~~~~~~~~
input.h:104:3: error: ‘hamTypes’ does not name a type
104 | hamTypes m_ham_type;
| ^~~~~~~~
input.h:105:3: error: ‘WarmUpTypes’ does not name a type
105 | WarmUpTypes m_warmup;
| ^~~~~~~~~~~
input.h:107:3: error: ‘solveTypes’ does not name a type
107 | solveTypes m_solve_type;
| ^~~~~~~~~~
input.h:136:3: error: ‘algorithmTypes’ does not name a type
136 | algorithmTypes m_algorithm_type;
| ^~~~~~~~~~~~~~
input.h:177:3: error: ‘orbitalFormat’ does not name a type
177 | orbitalFormat m_orbformat;

and many other errors. boost 1.71.0 doesn't have the tr1 module, so I copy-pasted it from an older version, but it still doesn't work.

npdm_intermediate fails multi-node run without shared file system

With "npdm_intermediate" option (by default), multi-node run fails due to the loading data which was created in different process/node, unless using shared file system.
Because file-based communication is not common in MPI parallelization, network communication should be used (but, the scaling is bad) and/or, change the algorithm to reduce communications.
For the time, I think "npdm_no_intermediate" should be default option for MPI build.

Naoki

FCIDUMP printing from rotated orbitals

There is an issue on the Molpro side, where we need to be careful that the FCIDUMP file actually prints out the orbital ordering we specified. In the case of rotated orbitals, if one uses the merge subprogram in Molpro, one needs to add the save and orbital commands.
This is usually not necessary when rotating orbitals with the multi subprogram.
A zeroth-order way to ensure that the FCIDUMP looks reasonable is to look at the one-body integrals and check that localized orbitals which should be closest to each other (i.e. < 8 | 9 >) give a reasonable value compared to other orbitals that are not so close.
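
A rough sketch of that check (a hypothetical helper, not part of Block or Molpro; the file name is an assumption): scan an FCIDUMP file and print the one-body elements, i.e. the lines whose last two indices are zero, so that <8|h|9> can be compared against pairs of orbitals that are not spatially close.

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::ifstream fcidump("FCIDUMP");  // file name is an assumption
    std::string line;
    while (std::getline(fcidump, line)) {
        std::istringstream iss(line);
        double val;
        int i, j, k, l;
        if (!(iss >> val >> i >> j >> k >> l)) continue;  // skips the header
        if (i != 0 && j != 0 && k == 0 && l == 0)         // one-body h(i,j)
            std::cout << "<" << i << "|h|" << j << "> = " << val << "\n";
    }
    return 0;
}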

boost problem

I encountered the following problem (icpc is version 18):

icpc -I-mkl -I./include/ -I./ -I./newmat10/ -I/home/peterc/b66/include -I. -I./modules/generate_blocks/ -I./modules/onepdm -I./modules/twopdm/ -I./modules/npdm -I./modules/two_index_ops -I./modules/three_index_ops -I./modules/four_index_ops -std=c++0x -I./modules/ResponseTheory -I./modules/nevpt2 -I./molcas -I./modules/mps_nevpt -DNDEBUG -Ofast -xcommon-avx512 -qopenmp -w -DBLAS -DUSELAPACK -DSERIAL -DBOOST_1_56_0 -DFAST_MTP -D_HAS_CBLAS -D_HAS_INTEL_MKL -c Stackspinblock.C -o Stackspinblock.o
/home/peterc/b66/include/boost/bind/bind.hpp(602): error: initial value of reference to non-const must be an lvalue
unwrapper<F>::unwrap(f, 0)(a[base_type::a1_], a[base_type::a2_], a[base_type::a3_], a[base_type::a4_], a[base_type::a5_], a[base_type::a6_]);
^
detected during:
instantiation of "void boost::_bi::list6<A1, A2, A3, A4, A5, A6>::operator()(boost::_bi::type<void>, F &, A &, int) [with A1=boost::_bi::value<SpinAdapted::StackSpinBlock *>, A2=boost::arg<1>, A3=boost::_bi::value<const SpinAdapted::StackSpinBlock *>, A4=boost::reference_wrapper<SpinAdapted::StackWavefunction>, A5=boost::_bi::value<SpinAdapted::StackWavefunction *>, A6=boost::_bi::value<SpinAdapted::SpinQuantum>, F=void (*)(const SpinAdapted::StackSpinBlock *, boost::shared_ptr<SpinAdapted::StackSparseMatrix> &, const SpinAdapted::StackSpinBlock *, SpinAdapted::StackWavefunction &, SpinAdapted::StackWavefunction *, const SpinAdapted::SpinQuantum &), A=boost::_bi::rrlist1<boost::shared_ptr<SpinAdapted::StackSparseMatrix> >]" at line 1306
instantiation of "boost::_bi::bind_t<R, F, L>::result_type boost::_bi::bind_t<R, F, L>::operator()(A1 &&) [with R=void, F=void (*)(const SpinAdapted::StackSpinBlock *, boost::shared_ptr<SpinAdapted::StackSparseMatrix> &, const SpinAdapted::StackSpinBlock *, SpinAdapted::StackWavefunction &, SpinAdapted::StackWavefunction *, const SpinAdapted::SpinQuantum &), L=boost::_bi::list6<boost::_bi::value<SpinAdapted::StackSpinBlock *>, boost::arg<1>, boost::_bi::value<const SpinAdapted::StackSpinBlock *>, boost::reference_wrapper<SpinAdapted::StackWavefunction>, boost::_bi::value<SpinAdapted::StackWavefunction *>, boost::_bi::value<SpinAdapted::SpinQuantum> >, A1=boost::shared_ptr<SpinAdapted::StackSparseMatrix>]" at line 159 of "/home/peterc/b66/include/boost/function/function_template.hpp"

Error when compiling with g++ and mpicxx

modules/npdm/npdm_expectations.C:855:18: error: ‘class std::map<std::vector<int>, SpinAdapted::Wavefunction>’ has no member named ‘emplace’
modules/npdm/npdm_expectations.C: In member function ‘void SpinAdapted::Npdm::Npdm_expectations::store(SpinAdapted::Npdm::NpdmSpinOps_base&)’:
modules/npdm/npdm_expectations.C:899:14: error: ‘class std::map<std::vector<int>, SpinAdapted::Wavefunction>’ has no member named ‘emplace’
make: *** [modules/npdm/npdm_expectations.o] Error 1

npdm_expectations.C

Can you help me get rid of this error while compiling?

modules/npdm/npdm_expectations.C(855): error: class "std::map<std::vector<int, std::allocator<int>>, SpinAdapted::Wavefunction, std::less<std::vector<int, std::allocator<int>>>, std::allocator<std::pair<const std::vector<int, std::allocator<int>>, SpinAdapted::Wavefunction>>>" has no member "emplace"
waves_.emplace(spin,opw2);
^

modules/npdm/npdm_expectations.C(899): error: class "std::map<std::vector<int, std::allocator<int>>, SpinAdapted::Wavefunction, std::less<std::vector<int, std::allocator<int>>>, std::allocator<std::pair<const std::vector<int, std::allocator<int>>, SpinAdapted::Wavefunction>>>" has no member "emplace"
waves_.emplace(spin,opw2);
^

compilation aborted for modules/npdm/npdm_expectations.C (code 2)
make: *** [modules/npdm/npdm_expectations.o] Error 2
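
For what it's worth, std::map::emplace is a C++11 library addition that older standard libraries (roughly, the libstdc++ shipped with g++ before 4.8) do not provide even under -std=c++0x. A possible local workaround, sketched here with stand-in types (Wavefunction below is a placeholder, not the real SpinAdapted::Wavefunction), is to use insert with std::make_pair instead:

#include <map>
#include <utility>
#include <vector>

struct Wavefunction {};  // stand-in for SpinAdapted::Wavefunction

int main() {
    std::map<std::vector<int>, Wavefunction> waves_;
    std::vector<int> spin;
    spin.push_back(0);
    spin.push_back(1);
    Wavefunction opw2;

    // waves_.emplace(spin, opw2);             // fails on pre-C++11 libraries
    waves_.insert(std::make_pair(spin, opw2)); // equivalent here, compiles widely
    return 0;
}

The insert form copies rather than constructs in place, but for this usage the behavior is the same.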

Symbol lookup error

/opt/block-1.5.0/block.spin_adapted: symbol lookup error: /usr/lib64/openmpi/lib/libmpi_cxx.so.1: undefined symbol: ompi_mpi_cxx_bool

Compilation Error

Trying to compile the Block code with the following compilers and libraries:
gcc-4.8.1
openmpi-1.8.6
boost-1.58.0 with mpi support
atlas-3.8.4
blas-1.1

Then I got the following errors:
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_dtrsm'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `ATL_sscal'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_strmm'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_dgemm'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `ATL_sGetNB'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_sscal'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_zherk'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_zscal'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `ATL_cGetNB'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_cgemm'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_cherk'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_isamax'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_sswap'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `ATL_dscal'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `ATL_xerbla'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_cswap'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_zgemm'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_ctrmm'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `ATL_dGetNB'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_icamax'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_ctrsm'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_sgemm'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_dsyrk'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_idamax'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_dscal'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_zswap'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_dswap'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_ssyrk'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_ztrsm'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `ATL_ccplxinvert'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `ATL_zcplxinvert'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `ATL_zGetNB'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_ztrmm'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_cscal'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_xerbla'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_strsm'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_izamax'
/opt/share/atlas/3.8.4-sse3/el6/x86_64/lib64/../lib64/liblapack.so: undefined reference to `cblas_dtrmm'
collect2: error: ld returned 1 exit status
make: *** [block.spin_adapted] Error 1

Data for Manual

We need to run Cr2 calculations for the manual

  1. cpu time vs M
  2. Energy vs M
  3. walltime vs nprocs

raise OSError("no file with expected extension") when running DMRG from PySCF

Hi,

I tried to use Block from PySCF in Windows Subsystem for Linux. I did:

  1. downloaded binary versions of Block 1.5 (both serial and parallel; I only want to use the serial one at this stage) from
    https://sanshar.github.io/Block/build.html
  2. copied both serial and parallel versions into the same directory and extracted them
  3. installed boost 1.55 by
    sudo apt-get purge libboost-all-dev
    sudo apt-get install libboost1.55-all
    Up to this stage, ./block.spin_adapted-1.5.3-serial -v gives me the version and copyright information.

and with https://github.com/sanshar/Block/blob/master/README_Examples/FCIDUMP
and https://github.com/sanshar/Block/blob/master/README_Examples/1/input.dat in the same directory as the binary files,
running ./block.spin_adapted-1.5.3-serial input.dat, the program starts to work :)

  4. installed a new version of pyscf by
    pip install pyscf
    pip install --upgrade pyscf
    pip install git+https://github.com/pyscf/dmrgscf
  5. created settings.py in .../pyscf/dmrgscf with
    BLOCKEXE = "/path/to/Block/block.spin_adapted" <- I have changed to the directory of block.spin_adapted-1.5.3 and block.spin_adapted-1.5.3-serial
    BLOCKEXE_COMPRESS_NEVPT = "/path/to/serially/compiled/Block/block.spin_adapted" <- I have changed to the directory of block.spin_adapted-1.5.3 and block.spin_adapted-1.5.3-serial
    BLOCKSCRATCHDIR = "/path/to/scratch" <- have set to a directory
    MPIPREFIX = "mpirun"
  6. created a test2.py as
    from pyscf import gto, scf, dmrgscf
    mf = gto.M(atom='C 0 0 0; C 0 0 1', basis='ccpvdz').apply(scf.RHF).run()
    mc = dmrgscf.dmrgci.DMRGSCF(mf, 6, 6)
    mc.run()
  7. ran python test2.py, which led to

Traceback (most recent call last):
File "test2.py", line 1, in <module>
from pyscf import gto, scf, dmrgscf
File "/home/username/anaconda3/lib/python3.7/site-packages/pyscf/dmrgscf/__init__.py", line 106, in <module>
from pyscf.dmrgscf import dmrgci
File "/home/username/anaconda3/lib/python3.7/site-packages/pyscf/dmrgscf/dmrgci.py", line 41, in <module>
libunpack = lib.load_library('libicmpspt')
File "/home/username/anaconda3/lib/python3.7/site-packages/pyscf/lib/misc.py", line 68, in load_library
return numpy.ctypeslib.load_library(libname, _loaderpath)
File "/home/username/anaconda3/lib/python3.7/site-packages/numpy/ctypeslib.py", line 155, in load_library
raise OSError("no file with expected extension")

I am wondering where I made a mistake. I don't think I have configured openmpi, since I would like to run on a single core.
