
psi4numpy's Introduction

[Badges: Azure DevOps build status · Codecov coverage · latest release · Python versions · user docs · forum · dev chat on Slack · license · platforms · Conda installation · Binder demo]

Psi4 is an open-source suite of ab initio quantum chemistry programs designed for efficient, high-accuracy simulations of molecular properties. We routinely perform computations with >2500 basis functions on multi-core machines.

With computationally demanding portions written in C++, exports of many C++ classes into Python via Pybind11, and a flexible Python driver, Psi4 strives to be friendly to both users and developers.

License

Psi4: an open-source quantum chemistry software package

Copyright (c) 2007-2024 The Psi4 Developers.

The copyrights for code used from other parties are included in the corresponding files.

Psi4 is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, version 3.

Psi4 is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public License along with Psi4; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

The full text of the GNU Lesser General Public License (version 3) is included in the COPYING.LESSER file of this repository, and can also be found here.

Citation

The journal article reference describing Psi4 is:

D. G. A. Smith, L. A. Burns, A. C. Simmonett, R. M. Parrish, M. C. Schieber, R. Galvelis, P. Kraus, H. Kruse, R. Di Remigio, A. Alenaizan, A. M. James, S. Lehtola, J. P. Misiewicz, M. Scheurer, R. A. Shaw, J. B. Schriber, Y. Xie, Z. L. Glick, D. A. Sirianni, J. S. O'Brien, J. M. Waldrop, A. Kumar, E. G. Hohenstein, B. P. Pritchard, B. R. Brooks, H. F. Schaefer III, A. Yu. Sokolov, K. Patkowski, A. E. DePrince III, U. Bozkaya, R. A. King, F. A. Evangelista, J. M. Turney, T. D. Crawford, C. D. Sherrill, "Psi4 1.4: Open-Source Software for High-Throughput Quantum Chemistry", J. Chem. Phys. 152(18) 184108 (2020).

  • doi for Psi4 v1.1
  • doi for Psi4NumPy
  • doi for Psi4 alpha releases
  • doi for Psi3


psi4numpy's Issues

Tutorial 4a_Grids for different molecular symmetries

Hi y'all,

I am trying to modify tutorial 4a_Grids to evaluate the molecular orbitals from my own DFT/SCF calculations on a Cartesian grid.
The program crashes when the tutorial is adapted to work with molecules of other symmetry groups.

Steps to reproduce:

  1. Change the symmetry group in the tutorial file to c2.
  2. Fix Ca_np by defining it as Ca_np = np.array(wfn.Ca().to_array(dense=True)) (line 69 in the provided Python file).

The crash seems to occur at the line points_func.compute_points(i_block) (line 74 in the provided Python file).
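
For reference, the adapted loop looks roughly like this (a sketch only; wfn is my converged DFT wavefunction and Ca_np is the densified coefficient matrix from step 2, so please treat the exact calls as my adaptation rather than the tutorial's code):

import numpy as np

Vpot = wfn.V_potential()
points_func = Vpot.properties()[0]

for b in range(Vpot.nblocks()):
    i_block = Vpot.get_block(b)
    points_func.compute_points(i_block)   # <-- this is where the crash occurs for c2
    npoints = i_block.npoints()
    lpos = np.array(i_block.functions_local_to_global())

    # Basis function values on this block of grid points
    phi = np.array(points_func.basis_values()["PHI"])[:npoints, :lpos.shape[0]]

    # MO values on the grid: contract with the (dense) SCF coefficients
    mo_vals = phi @ Ca_np[lpos, :]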

Python output:

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source             
libgdma.so         00007F50E02A30DC  for__signal_handl     Unknown  Unknown
libpthread-2.31.s  00007F50EF162420  Unknown               Unknown  Unknown
core.cpython-38-x  00007F50E2D38B01  Unknown               Unknown  Unknown
core.cpython-38-x  00007F50ECA0C8B6  Unknown               Unknown  Unknown
core.cpython-38-x  00007F50ECC00FB6  Unknown               Unknown  Unknown
core.cpython-38-x  00007F50ECC0115B  Unknown               Unknown  Unknown
core.cpython-38-x  00007F50EC846718  Unknown               Unknown  Unknown
python3.8          000055FBF388100E  Unknown               Unknown  Unknown
python3            000055FBF387613F  _PyObject_MakeTpC     Unknown  Unknown
python3.8          000055FBF38ABCA0  Unknown               Unknown  Unknown
python3            000055FBF3920923  _PyEval_EvalFrame     Unknown  Unknown
python3            000055FBF3911600  _PyEval_EvalCodeW     Unknown  Unknown
python3            000055FBF3912EB3  PyEval_EvalCode       Unknown  Unknown
python3.8          000055FBF3987622  Unknown               Unknown  Unknown
python3.8          000055FBF39981D2  Unknown               Unknown  Unknown
python3.8          000055FBF399B36B  Unknown               Unknown  Unknown
python3            000055FBF399B54F  PyRun_SimpleFileE     Unknown  Unknown
python3            000055FBF399BA29  Py_RunMain            Unknown  Unknown
python3            000055FBF399BC29  Py_BytesMain          Unknown  Unknown
libc-2.31.so       00007F50EEF80083  __libc_start_main     Unknown  Unknown
python3.8          000055FBF393EAD7  Unknown               Unknown  Unknown

c2_test_orb.py.txt

psi4.log

Do you know what causes this or how I could fix it?
Thank you for creating such a great program!

Update: This is absolutely not urgent, since points_func.compute_points() works fine if I just calculate everything in C1 symmetry.

obtaining

  • update the obtaining info on front page
    • move the 3 route outlines to psi GH page
    • here, just link to psi downloads page
  • general front-page updates
  • Add a prominent warning to any Binder-linked notebook to use Binder lightly and to clone your own copy soon if you want to experiment further
  • Emphasize somewhere in the obtaining info that no compilation is required

spin-orbital MO antisymmetrized integrals ordering

I'm looking over the psi4numpy CCSD.py code and trying to figure out the order of the integrals returned by "MO_spin = np.asarray(mints.mo_spin_eri(C, C))"

https://github.com/psi4/psi4numpy/blob/master/Coupled-Cluster/Spin_Orbitals/CCSD/CCSD.py#L79

Does Psi4 order the integrals so that the alpha and beta spin functions alternate, e.g. even-numbered spin orbitals corresponding to alpha and odd-numbered to beta?

Also, are the integrals of the form $[\phi_i(1)\phi_j(1)\,|\,r_{12}^{-1}\,|\,\phi_k(2)\phi_l(2)]$ (chemists' notation) or $\langle \phi_i(1)\phi_k(2)\,|\,r_{12}^{-1}\,|\,\phi_j(1)\phi_l(2)\rangle$ (physicists' notation)?
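
For concreteness, here is the small NumPy toy I have in mind for the interleaved convention (purely illustrative and not Psi4 code; a fake 3x3 one-electron matrix stands in for the real integrals):

import numpy as np

nspatial = 3
h = np.arange(nspatial * nspatial, dtype=float).reshape(nspatial, nspatial)  # fake spatial 1-electron integrals

# Interleaved convention: spin orbital 2p is (p, alpha), 2p+1 is (p, beta)
h_interleaved = np.kron(h, np.eye(2))

# Blocked convention: all alpha spin orbitals first, then all beta
h_blocked = np.kron(np.eye(2), h)

print(h_interleaved[0, 0], h_interleaved[1, 1])  # both equal h[0, 0] (same spatial orbital, alpha vs beta)
print(h_interleaved[0, 1])                       # 0.0 -- opposite spins do not mix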

Thank you

ordering of angular momentum-indexed basis functions in psi4

hello,

How can I ascertain the order of atomic orbital basis functions that Psi4 uses?
For p basis functions, for example, what is the ordering of the m components from -1 to +1? Some programs use m=0, +1, -1; others m=-1, 0, +1; and yet others m=+1, 0, -1...

And, by extension, is there a concrete way I can verify the ordering of all atomic orbital basis functions for any basis set in Psi4?

I did a lot of googling and reading but there doesn't appear to be a straight answer in the documentation.

The closest I came was this page in the Psi4 manual, but it is still ambiguous, as it is unclear whether it always holds: https://psicode.org/psi4manual/master/prog_blas.html#how-to-name-orbital-bases-e-g-ao-so
"Note that in PSI4, the real combinations of spherical harmonic functions (see the paragraph below Eq. 15 in the Schlegel paper) are ordered as: 0, 1+, 1-, 2+, 2-, …."
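
In case it helps to show what I have tried, this is roughly how I have been poking at the shell structure (a sketch only; it assumes spherical/pure functions, i.e. 2l+1 functions per shell, and I am not sure it is the recommended API):

import psi4

psi4.set_options({"basis": "cc-pVDZ"})
mol = psi4.geometry("""
O  0.000  0.000  0.000
H  0.000  0.757  0.587
H  0.000 -0.757  0.587
symmetry c1
""")

# Build a wavefunction object just to get at the basis set
wfn = psi4.core.Wavefunction.build(mol, psi4.core.get_global_option("BASIS"))
basis = wfn.basisset()

offset = 0
for i in range(basis.nshell()):
    shell = basis.shell(i)
    l = shell.am                 # angular momentum of this shell
    nfun = 2 * l + 1             # assumes spherical (pure) functions
    print(f"shell {i}: l = {l}, AO indices {offset}..{offset + nfun - 1}")
    offset += nfun

This tells me which AO index range belongs to which shell, but not the m ordering within a shell, which is what the manual quote above seems to describe.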

thank you

Broken example: Fitting Lennard-Jones Parameters from Potential Energy Scan

I tried to repeat: https://github.com/psi4/psi4numpy/blob/master/Tutorials/01_Psi4NumPy-Basics/1b_molecule.ipynb

I got an error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[39], line 9
      5 mol = psi4.geometry(he_dimer.replace('**R**', 
      6                                      str(d)))
      8 # Compute the Counterpoise-Corrected interaction energy
----> 9 en = psi4.energy('MP2/aug-cc-pVDZ', 
     10                  molecule=mol, 
     11                  bsse_type='cp')
     13 # Place in a reasonable unit, Wavenumbers in this case
     14 en *= 219474.6

File ~/anaconda3/envs/p4env/lib/python3.10/site-packages/psi4/driver/driver.py:464, in energy(name, **kwargs)
    461 elif not isinstance(plan, AtomicComputer):
    462     # Advanced "Computer" active
    463     plan.compute()
--> 464     return plan.get_psi_results(return_wfn=return_wfn)
    466 else:
    467     # We have unpacked to an AtomicInput
    468     lowername = plan.method

File ~/anaconda3/envs/p4env/lib/python3.10/site-packages/psi4/driver/driver_nbody.py:1451, in ManyBodyComputer.get_psi_results(self, client, return_wfn)
   1424 def get_psi_results(
   1425     self,
   1426     client: Optional["qcportal.FractalClient"] = None,
   1427     *,
   1428     return_wfn: bool = False) -> EnergyGradientHessianWfnReturn:
   1429     """Called by driver to assemble results into ManyBody-flavored QCSchema,
   1430     then reshape and return them in the customary Psi4 driver interface: ``(e/g/h, wfn)``.
   1431 
   (...)
   1449 
   1450     """
-> 1451     nbody_model = self.get_results(client=client)
   1452     ret = nbody_model.return_result
   1454     wfn = core.Wavefunction.build(self.molecule, "def2-svp", quiet=True)

File ~/anaconda3/envs/p4env/lib/python3.10/site-packages/psi4/driver/driver_nbody.py:1368, in ManyBodyComputer.get_results(self, client)
   1365 core.print_out(info)
   1366 logger.info(info)
-> 1368 results = self.prepare_results(client=client)
   1369 ret_energy = results.pop("ret_energy")
   1370 ret_ptype = results.pop("ret_ptype")

File ~/anaconda3/envs/p4env/lib/python3.10/site-packages/psi4/driver/driver_nbody.py:1304, in ManyBodyComputer.prepare_results(self, results, client)
   1293 metadata = {
   1294     "quiet": self.quiet,
   1295     "nbodies_per_mc_level": self.nbodies_per_mc_level,
   (...)
   1301     "max_nbody": self.max_nbody,
   1302 }
   1303 if self.driver.name == "energy":
-> 1304     nbody_results = assemble_nbody_components("energy", trove["energy"], metadata.copy())
   1306 elif self.driver.name == "gradient":
   1307     nbody_results = assemble_nbody_components("energy", trove["energy"], metadata.copy())

File ~/anaconda3/envs/p4env/lib/python3.10/site-packages/psi4/driver/driver_nbody.py:648, in assemble_nbody_components(ptype, component_results, metadata)
    645         nocp_compute_list[len(w[0])].add(w)
    647 for nb in range(1, nbodies[-1] + 1):
--> 648     cp_by_level[nb] = _sum_cluster_ptype_data(
    649         ptype,
    650         component_results,
    651         cp_compute_list[nb],
    652         fragment_slice_dict,
    653         fragment_size_dict,
    654         mc_level_lbl=mc_level_lbl,
    655     )
    656     nocp_by_level[nb] = _sum_cluster_ptype_data(
    657         ptype,
    658         component_results,
   (...)
    662         mc_level_lbl=mc_level_lbl,
    663     )
    664     if nb in compute_dict["vmfc_levels"]:

File ~/anaconda3/envs/p4env/lib/python3.10/site-packages/psi4/driver/driver_nbody.py:307, in _sum_cluster_ptype_data(ptype, ptype_dict, compute_list, fragment_slice_dict, fragment_size_dict, mc_level_lbl, vmfc, nb)
    304         if vmfc:
    305             sign = ((-1)**(nb - len(frag)))
--> 307         ret += sign * ene
    309     return ret
    311 elif ptype == 'gradient':

TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'
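
In case it is useful to others hitting this, the workaround I am considering is doing the counterpoise correction by hand with ghost atoms instead of bsse_type='cp' (an untested sketch; d stands for the current He-He separation from the scan loop):

import psi4

d = 3.0  # current separation from the scan, in Angstrom (example value)

dimer = psi4.geometry(f"""
He 0 0 0
He 0 0 {d}
""")
monoA = psi4.geometry(f"""
He 0 0 0
Gh(He) 0 0 {d}
""")
monoB = psi4.geometry(f"""
Gh(He) 0 0 0
He 0 0 {d}
""")

# CP-corrected interaction energy: dimer minus each monomer,
# all computed in the full dimer basis via the ghost atoms.
en = (psi4.energy('MP2/aug-cc-pVDZ', molecule=dimer)
      - psi4.energy('MP2/aug-cc-pVDZ', molecule=monoA)
      - psi4.energy('MP2/aug-cc-pVDZ', molecule=monoB))
en *= 219474.6  # Hartree -> wavenumbers, as in the tutorial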

conda install failed on windows

Creating a new env and installing psi4 failed without a useful error message:

$ conda create -n p4env psi4 -c psi4 -y
Collecting package metadata (current_repodata.json): done
Solving environment: unsuccessful attempt using repodata from current_repodata.json, retrying with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes.  Press CTRL-C to abort.
failed

UnsatisfiableError:

I noticed that in the Anaconda repo there is a win-64/psi4-1.7+6ce35a5-py38_0.tar.bz2, so I assumed the failure was due to the newer Python interpreter in my (base) env.

However, explicitly specifying python=3.8 failed again, which is quite confusing:

$ conda create -n p4env python=3.8 psi4 -c psi4 -y
Collecting package metadata (current_repodata.json): done
Solving environment: unsuccessful attempt using repodata from current_repodata.json, retrying with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes.  Press CTRL-C to abort.
failed

UnsatisfiableError: The following specifications were found to be incompatible with each other:

Output in format: Requested package -> Available versions

Package python conflicts for:
python=3.8
psi4 -> msgpack-python -> python[version='>=2.7,<2.8.0a0|>=3.10,<3.11.0a0|>=3.11,<3.12.0a0|>=3.7,<3.8.0a0|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0|>=3.6,<3.7.0a0|>=3.5,<3.6.0a0|>=3.8|>=3.7|>=3.6|>=3.5|>=3.6.0']
psi4 -> python=3.8

Finally, creating the env first, installing python=3.8, and then psi4 failed again:

$ conda create -n p4env -y
$ conda activate p4env
$ conda install python=3.8 -y
$ conda install psi4 -c psi4
Collecting package metadata (current_repodata.json): done
Solving environment: unsuccessful initial attempt using frozen solve. Retrying with flexible solve.
Solving environment: unsuccessful attempt using repodata from current_repodata.json, retrying with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: unsuccessful initial attempt using frozen solve. Retrying with flexible solve.
Solving environment: -
Found conflicts! Looking for incompatible packages.
This can take several minutes.  Press CTRL-C to abort.
failed

UnsatisfiableError: The following specifications were found to be incompatible with each other:

Output in format: Requested package -> Available versions

FYI, I downloaded the tar.bz2 file, and this is the recipe/meta.yaml content:

# This file created by conda-build 3.23.2
# meta.yaml template originally from:
# D:\a\1\s\conda\win, last modified Tue Dec  6 22:14:44 2022
# ------------------------------------------------

package:
  name: psi4
  version: 1.7+6ce35a5
source:
  path: D:\a\1\s
build:
  script:
    - md C:\tools\miniconda3\conda-bld\psi4_1670369969426\_h_env\Scripts
    - copy /y D:\a\1\b\install\bin\psi4 C:\tools\miniconda3\conda-bld\psi4_1670369969426\_h_env\Scripts
    - echo __pycache__ > exclude.txt
    - xcopy /f /i /s /exclude:exclude.txt D:\a\1\b\install\lib\psi4 C:\\tools\\miniconda3\\conda-bld\\psi4_1670369969426\\_h_env\\Lib\\site-packages\psi4
    - xcopy /f /i /s D:\a\1\b\install\share\psi4\basis       C:\tools\miniconda3\conda-bld\psi4_1670369969426\_h_env\Lib\share\psi4\basis
    - xcopy /f /i /s D:\a\1\b\install\share\psi4\plugin      C:\tools\miniconda3\conda-bld\psi4_1670369969426\_h_env\Lib\share\psi4\plugin
    - xcopy /f /i /s D:\a\1\b\install\share\psi4\quadratures C:\tools\miniconda3\conda-bld\psi4_1670369969426\_h_env\Lib\share\psi4\quadratures
    - xcopy /f /i /s D:\a\1\b\install\share\psi4\databases   C:\tools\miniconda3\conda-bld\psi4_1670369969426\_h_env\Lib\share\psi4\databases
    - xcopy /f /i /s D:\a\1\b\install\share\psi4\fsapt       C:\tools\miniconda3\conda-bld\psi4_1670369969426\_h_env\Lib\share\psi4\fsapt
    - xcopy /f /i /s D:\a\1\b\install\share\psi4\grids       C:\tools\miniconda3\conda-bld\psi4_1670369969426\_h_env\Lib\share\psi4\grids
  string: py38_0
requirements:
  build: []
  run:
    - dftd3-python
    - gau2grid
    - gcp-correction
    - intel-openmp=2019.1
    - libint2 2.6.0 h2e52968_4
    - libxc
    - mkl=2019.1
    - msgpack-python
    - networkx
    - numpy
    - optking
    - pytest>=7.0.1
    - python=3.8
    - qcelemental=0.25.1
    - qcengine=0.26.0
    - scipy
test:
  commands:
    - python -c "import psi4; assert psi4.test('quick') == 0"
    - psi4 --test quick
about: {}
extra:
  copy_test_source_files: true
  final: true

As you can see, it explicitly requires python=3.8, but I keep failing to install psi4 using conda (even after reinstalling Anaconda entirely). The only way left for me to use psi4 on Windows is to download the psi4conda installer; after installing, I can activate it by passing the absolute path of the psi4conda env.

$ conda activate E:\Soft\psi4conda
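
For what it is worth, psi4 also appears to be packaged on conda-forge these days, so something along these lines (which I have not verified on this machine) might be another route to try:

$ conda create -n p4env psi4 -c conda-forge -y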

einsum can now use optimize=True without any issues

Hi all,

In part 1f of the tutorial, the usage of the optimize=True flag of einsum was discouraged because the NumPy version that introduced it (1.12) was considered too recent.

But that was a long time ago. Ubuntu (as early as version 18.04.3 LTS), for instance, already ships version 1.13 in its repository:

% apt search python3-numpy
Sorting... Done
Full Text Search... Done
python3-numpy/bionic,now 1:1.13.3-2ubuntu1 amd64 [installed]
  Fast array facility to the Python 3 language

python3-numpy-dbg/bionic 1:1.13.3-2ubuntu1 amd64
  Fast array facility to the Python 3 language (debug extension)

python3-numpydoc/bionic,bionic 0.7.0-1 all
  Sphinx extension to support docstrings in Numpy format -- Python3

I think the tutorials could be updated to reflect that.
The reason I believe a change in the tutorials is due is the following:

...
In [81]: %%timeit 
    ...: MO_n8 = np.einsum("pI,qJ,pqrs,rK,sL->IJKL", C, C, I, C, C)
25.9 s ± 79.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
...
In [90]: %%timeit 
    ...: MO_n5 = np.einsum("pA,pqrs->Aqrs", C, I) 
    ...: MO_n5 = np.einsum("qB,Aqrs->ABrs", C, MO_n5) 
    ...: MO_n5 = np.einsum("rC,ABrs->ABCs", C, MO_n5) 
    ...: MO_n5 = np.einsum("sD,ABCs->ABCD", C, MO_n5)
3.84 ms ± 10.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
...
In [92]: %%timeit 
    ...: MO_n8 = np.einsum("pI,qJ,pqrs,rK,sL->IJKL", C, C, I, C, C, optimize=True)
2.03 ms ± 230 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

That is, using optimize=True can be almost twice as fast as breaking the calculation into four less readable parts (not to mention that, in the example above, it is around four orders of magnitude faster than not using optimize=True, which makes me wonder why it is not NumPy's default already...).
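
As a quick way to see what optimize=True is actually doing (a sketch; C and I as defined earlier in the tutorial), np.einsum_path reports the contraction order it picks and the predicted cost savings:

import numpy as np

path, report = np.einsum_path("pI,qJ,pqrs,rK,sL->IJKL", C, C, I, C, C, optimize="optimal")
print(report)   # shows the pairwise contraction sequence and the estimated FLOP reduction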

What do you think? Thanks in advance.

Missing documentation inside EOM-CCSD

I was looking at the HelperCCPert object in the file psi4numpy/Coupled-Cluster/spin_free_CC/helper_cc.py.

I was wondering what sort of (perturbative correction?) equations were implemented here. Could you maybe provide a reference? Thanks!

Numpy implementation of b3lyp

Apologies for a perhaps silly question. I've been meaning to get a better grasp of the nitty-gritty details of B3LYP. Is there a Python/NumPy implementation of B3LYP in psi4numpy? If so, does anyone have a link?

I was looking at PySCF, but it calls C/Fortran code, which is much harder to experiment and play with. I am considering porting that C/Fortran code to Python/NumPy, but it would save me a lot of time if this already exists somewhere in Python/NumPy.
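
For context, my current understanding (please correct me if this is off) is that B3LYP is just a fixed linear combination of standard pieces,

$E_{xc}^{\mathrm{B3LYP}} = (1 - a_0)\,E_x^{\mathrm{LSDA}} + a_0\,E_x^{\mathrm{HF}} + a_x\,\Delta E_x^{\mathrm{B88}} + (1 - a_c)\,E_c^{\mathrm{VWN}} + a_c\,E_c^{\mathrm{LYP}}$

with $a_0 = 0.20$, $a_x = 0.72$, $a_c = 0.81$ (the exact VWN parameterization varies between programs), so a NumPy version would mostly amount to evaluating the LSDA/B88/VWN/LYP pieces on a DFT grid and mixing them with these coefficients.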

SOUHF: Alpha Orbitals Aren't Beta

The iterative SOUHF code consistently confuses alpha and beta orbitals and doesn't match the non-iterative SOUHF code for individual iterations. For an example, see here. nalpha and nbeta should be switched. This is a possible cause of some convergence issues I see with the non-iterative SOUHF code.

I recommend that nobody treat the SO-SCF reference implementations as correct for the time being. I'm going to run correctness checks on all of them.

SAPT0_ROHF test failing

When I run pytest locally, it fails the test_SAPT0_ROHF test on the second-to-last line of Symmetry-Adapted-Perturbation-Theory/SAPT0_ROHF.py, on the psi4.compare_values(Eind, Ind20r, 5, 'Ind200,r') call, with the following error: Ind200,r: computed value (-0.000251) does not match (0.000000) to 5 digits. @dgasmith has suggested that perhaps Psi4 has changed the names of its variables, causing this error.

SCF Gradients for DF JK types

DF-JK wK was recently fixed in Psi4 through this PR, and we now have all of the technology needed to make a nice tutorial out of this. It is a good example of the chain rule in quantum chemistry and of how these derivations are approached.

Gist can be found here. All it needs is a coat of paint and additional documentation. Any volunteers?

Problems about making intermediates in eom-ccsd

I am an undergraduate from China. Recently I have been reading your code in the psi4/psi4numpy repo to learn how to implement the EOM-CCSD method in Python.
While reading the code, I noticed some small differences in how the intermediates are built between the ground-state CCSD code and the EOM-CCSD code.
For example, equation (9) in Stanton's paper is:
[equation image not shown]
In the ground-state CCSD code (Coupled-Cluster/Spin_Orbitals/CCSD/CCSD.py), this part is implemented as:
[code screenshot not shown]

However, in the EOM-CCSD code (Coupled-Cluster/RHF/helper_ccenergy.py), this part is implemented as:
[code screenshot not shown]

In the latter case the minus term in the bracket appears to have been dropped. The same kind of difference shows up in other intermediates as well. The strange thing is that the two functions nevertheless give very similar output.

reference of code block in psi4numpy/coupled_cluster/RHF/helper_cceom

Hi!

Recently I have been reading your code in the psi4/psi4numpy repo to learn how to implement the EOM-CCSD method in Python.

In the process of reading the code, I am confused about the code block where you build the submatrix of the similarity-transformed Hamiltonian.
[code screenshot not shown]
The same goes for the block that builds sigma2.

I have searched many references (such as the original paper by Stanton and Many-Body Methods in Chemistry and Physics by Bartlett), but I still do not know how to derive these equations. I have also read the pyqchem repo written by jjgoing, where he constructs HSS, HSD, HDS, and HDD and then diagonalizes the resulting matrix. This seems different from your method.

Could you tell me where to find the reference of these equations? I really appreciate your help.

Package for pypi and conda forge

It would be helpful for downstream users if this package were available via PyPI and conda-forge. This would allow users to install the package via commands like:

pip install psi4numpy
conda install -c conda-forge psi4numpy

Observations and questions on the FCI module

Hi,

I have some observations on the code of the FCI/helper_CI modules. I know coding is a highly personal thing, so these are mentioned only in the hope that they may be of interest, and not in any way as 'improvements' to the existing code.

  1. def countNumOrbitalsInBits(bits):
    as an alternative to the existing code you could simply use
    return bin(bits).count('1')

For a million integers the original took 16.6 sec and the alternative 2.7 sec.

  2. def obtBits2ObtIndexList(bits):

    as an alternative you could use a list comprehension
    return [n for n, bit in enumerate(bin(bits)[2:][::-1]) if bit == '1']

For a million integers the original took 22.1 sec and the alternative 11.8 sec.

  3. def obtIndexList2ObtBits(obtList):

    as an alternative you could use
    return sum(1 << i for i in obtList)

This is only marginally faster than the existing routine but saves 9 lines of code.

  4. def getOrbitalPositions(bits, orbitalIndexList):

    as an alternative you could use
    return [n for n, i in enumerate(Determinant.obtBits2ObtIndexList(bits)) if i in orbitalIndexList]
  5. You have two separate routines, generateSingleExcitationsOfDet and generateDoubleExcitationsOfDet, which are used for calculating CIS and CISD. A small change to the determinant loops in FCI generates CIS:

from itertools import combinations

ground_state = Determinant(alphaObtList=list(range(ndocc)), betaObtList=list(range(ndocc)))
detList = []
for alpha in combinations(range(nmo), ndocc):
    for beta in combinations(range(nmo), ndocc):
        determinant = Determinant(alphaObtList=alpha, betaObtList=beta)
        if determinant.numberOfDiffOrbitals(ground_state) == 1:
            detList.append(determinant)

Clearly, for CISD the condition becomes in range(0, 3), and so on for CISDT, CISDTQ, etc. I'm not sure about the efficiency of this approach compared to the custom generateSingleExcitationsOfDet, but it does give you CIS etc. cheaply?

  6. Are the eigenvectors produced by the FCI bitstring approach immediately usable? It seems that for a 'standard' CIS spin-adapted singlet Hamiltonian the ordering of an eigenvector is $\Psi_1^6\ \Psi_1^7\ \Psi_2^6\ \Psi_2^7 \ldots \Psi_5^6\ \Psi_5^7$, where $\Psi_i^a$ is an excitation from occupied orbital $i$ to virtual orbital $a$. However, itertools will produce, I think, an ordering of $\Psi_5^7\ \Psi_5^6 \ldots \Psi_1^7\ \Psi_2^6$, which will not be consistent with e.g. dipole component integrals. Can someone show e.g. an oscillator strength being computed using FCI bitstring-generated eigenvectors?

Thank you for the interesting FCI module.

ADC Eigenvectors

Hi,
I've checked the ADC eigenvalues against PySCF and there is agreement. However, when looking at the eigenvectors there seems to be a problem. It looks like the Davidson routine is returning the full expanded subspace rather than the eigenvectors that correspond to the returned eigenvalues. For IP the returned vector v_ip[:,0] is just a column of zeros. Could someone check this, please?
Peter

Additional term in making matrix elements of the similarity-transformed Hamiltonian

In the code that builds the matrix elements of the similarity-transformed Hamiltonian (Coupled-Cluster/RHF/helper_cchbar.py):
[code screenshot not shown]
The notes say that these terms differ only by a sign.
However, an additional term appears in build_Hovvo, making the Hamiltonian no longer antisymmetric.
[code screenshot not shown]
I am wondering what happened and why this term is needed.

Mint release before publication

In preparation for the final submission of the Psi4NumPy paper we need to mint a final release that can be tagged with Zenodo. I plan to do this Friday, February 23rd. Please submit any desired changes before this deadline.

Factor in Sigma Derivatives in Electron Propagator EP2_SO

Hi @dgasmith,
Sorry, this is probably me being dense! In EP2_SO.py the $\Sigma$'s are of the form $\frac{1}{2} \frac{\langle~\Vert~\rangle}{\Delta(\omega)}$, so the derivatives $\frac{\partial}{\partial{\omega}}$ will be $-\frac{1}{2} \frac{\langle~\Vert~\rangle}{(\Delta(\omega))^2}$. The code seems to have absorbed the factor $\frac{1}{2}$ somewhere? However, in EP3_SO the derivatives of $\Sigma^{(2)}$ appear with the factor $\frac{1}{2}$ incorporated into the EPterm expressions (lines 189-190). Inconveniently, for the $\mathrm{H_2O}$ test case the $\frac{1}{2}$ factors only affect the 4th decimal place, so it's difficult to definitively separate the two versions. Can you clarify the situation for me, please?
Peter

Reference implementations not working with current Psi4 master

Using psi4/psi4@0bffede,

$ python RHF.py

  Memory set to 476.837 MiB by Python driver.
Traceback (most recent call last):
  File "RHF.py", line 33, in <module>
    """)
  File "/home/eric/data/opt/apps/psi4/git-gnu-openblas/lib/psi4/driver/molutil.py", line 253, in geometry
    molecule = core.Molecule.from_dict(molrec['qm'])
RuntimeError: Unable to cast Python instance to C++ type (compile in debug mode for details)

not optimal use of einsum

Good day Daniel, I just found an amazing Python implementation of CCSD in your project.
I immediately copied it into my project https://github.com/Konjkov/pyquante2/blob/master/pyquante2/cc/ccsd.py and carefully profiled it with https://github.com/rkern/line_profiler.

I found that most of the time is spent executing the following line of code:
https://github.com/dgasmith/psi4numpy/blob/master/Coupled-Cluster/CCSD.dat#L201
which is not surprising.

If the einsum indices over which the summation is performed stand at the beginning of the index string, as in np.einsum('mnab,mnef->abef', tmp_tau, MO[o, o, v, v]), it becomes slower than when they stand at the end, as in np.einsum('abij,cdij->abcd', np.transpose(tmp_tau, axes=(2,3,0,1)), MO[v, v, o, o]).

Testing both showed that the correct sequence of indices leads to a twofold speedup in the tests.
https://github.com/Konjkov/pyquante2/blob/master/tests/test_ccsd.py

Another possible cause of slow computation is the multiplication of three tensors simultaneously.
https://github.com/dgasmith/psi4numpy/blob/master/Coupled-Cluster/CCSD.dat#L300
Sequential (pairwise) multiplication is always faster, as in the toy example below.
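
A tiny self-contained illustration of this point (the shapes are invented just for the example):

import numpy as np

n = 20
T = np.random.rand(n, n, n, n)
C = np.random.rand(n, n)

# One-shot three-tensor contraction: without optimize=, einsum loops over everything at once
X_slow = np.einsum('pi,pqrs,qj->ijrs', C, T, C)

# The same result built up pairwise, one tensor at a time
tmp    = np.einsum('pi,pqrs->iqrs', C, T)
X_fast = np.einsum('qj,iqrs->ijrs', C, tmp)

assert np.allclose(X_slow, X_fast)  # identical result, very different cost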

Despite these shortcomings, I am very glad that there are people like you who use Python in quantum chemistry. I am looking forward to implementations of CASSCF, and especially of the state-specific multireference coupled-cluster approach of Mukherjee and co-workers (Mk-MRCC), in Python.

Good luck with your Python programming.
Vladimir.

Continuous Integration & testing for PRs

I think we need some sort of continuous integration testing of pull requests to ensure there are no underlying errors which may break the main repo. I think this continuous integration should (ideally)

  • call the most recent release versions of Psi4 and NumPy, and
  • look for compare_values and ensure both reference implementations and tutorials pass,

as well as (possibly) recommending PEP 8-compliant code style. I think Travis CI is ideal, since it supports pytest for Python testing and Psi4 already uses it. @loriab, since the psi4numpy repo is within the Psi4 organization, would we be able to use Psi4's existing Travis account for this purpose?

For testing the interactive tutorials, we could potentially get Travis to execute

ipython nbconvert --to script tutorial.ipynb

which exports tutorial.ipynb to tutorial.py, before testing tutorial.py on the same footing as a reference implementation.

Thoughts?

AO basis Overlap matrix is printed incorrectly via psi4numpy RHF_Hessian script.

Hello,

As stated in the title, the AO overlap matrix nevertheless appears fine in the Psi4 output file, and when it is transformed to the MO basis it also looks correct (the unit matrix).

To reproduce, use the script here:

https://github.com/psi4/psi4numpy/blob/master/Self-Consistent-Field/RHF_Hessian.py

and insert the following lines near the top:

S_ao = np.asarray(mints.ao_overlap())
print(S_ao)

The diagonal is not all 1s as expected, whereas the Psi4 output (as well as my own code, while testing) does show the expected values.
I am using the latest release build from the Psi4 download page (Linux).

Thank you for all the great software. I am learning a lot from it :)
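
For completeness, this is roughly the MO-basis check I ran (a sketch; it assumes the converged wavefunction is available as wfn and the MintsHelper as mints, as in the script):

import numpy as np

S_ao = np.asarray(mints.ao_overlap())
C = np.asarray(wfn.Ca())

S_mo = C.T @ S_ao @ C
print(np.allclose(S_mo, np.eye(S_mo.shape[0])))  # should be True if S_ao is correct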
