
cp2k-input-tools's Introduction

CP2K


CP2K is a quantum chemistry and solid state physics software package that can perform atomistic simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems. CP2K provides a general framework for different modeling methods such as DFT using the mixed Gaussian and plane waves approaches GPW and GAPW. Supported theory levels include DFT, MP2, RPA, GW, tight-binding (xTB, DFTB), semi-empirical methods (AM1, PM3, PM6, RM1, MNDO, ...), and classical force fields (AMBER, CHARMM, ...). CP2K can do simulations of molecular dynamics, metadynamics, Monte Carlo, Ehrenfest dynamics, vibrational analysis, core level spectroscopy, energy minimization, and transition state optimization using NEB or dimer method.

CP2K is written in Fortran 2008 and can be run efficiently in parallel using a combination of multi-threading, MPI, and CUDA.

Downloading CP2K source code

To clone the current master (development version):

git clone --recursive https://github.com/cp2k/cp2k.git cp2k

Note the --recursive flag that is needed because CP2K uses git submodules.

To clone a release version vx.y:

git clone -b support/vx.y --recursive https://github.com/cp2k/cp2k.git cp2k

For more information on downloading CP2K, see Downloading CP2K. For help on git, see Git Tips & Tricks.

Install CP2K

The easiest way to build CP2K with all of its dependencies is inside a Docker container.

For building CP2K from scratch see the installation instructions.

Links

  • CP2K.org for showcases of scientific work, tutorials, exercises, presentation slides, etc.
  • The manual with descriptions of all the keywords for the CP2K input file
  • The dashboard to get an overview of the currently tested architectures
  • The Google group to get help if you could not find an answer in one of the previous links
  • Acknowledgements for list of institutions and grants that help to fund the development of CP2K

Directory organization

  • arch: Collection of definitions for different architectures and compilers
  • benchmarks: Inputs for benchmarks
  • data: Simulation parameters e.g. basis sets and pseudopotentials
  • exts: Access to external libraries via Git submodules
  • src: The source code
  • tests: Inputs for tests and regression tests
  • tools: Mixed collection of useful scripts related to CP2K

Additional directories created during build process:

  • lib: Libraries built during compilation
  • obj: Objects and other intermediate compilation-time files
  • exe: Where the executables will be located

cp2k-input-tools's People

Contributors

dependabot-preview[bot], dependabot[bot], dev-zero, knuedd, pre-commit-ci[bot], yakutovicha


cp2k-input-tools's Issues

Implement aiida-cp2k input scaffolding

Dear cp2k-input-tools developer(s),

I have been trying to use your library to create a python dict from an existing CP2K input, to then use this dict to run new calculations in the aiida-cp2k framework. However I face the following issue.

When I use the CP2KInputParserSimplified parser, the KIND section is in the form:

'KIND': {'Te': {'BASIS_SET': 'TZVP-MOLOPT-SR-GTH', 'POTENTIAL': 'GTH-BLYP-q6'}, 'O': {'BASIS_SET': 'TZVP-GTH-q6', 'POTENTIAL': 'GTH-BLYP-q6'}}

My problem is that this tree structure is not understood by AiiDA, which requires a format that seems to correspond to what you call the "canonical form", generated by the CP2KInputParser:

'+KIND': [{'BASIS_SET': ['TZVP-MOLOPT-SR-GTH'], 'POTENTIAL': 'GTH-BLYP-q6', '': 'Te'}, {'BASIS_SET': ['TZVP-GTH-q6'], 'POTENTIAL': 'GTH-BLYP-q6', '': 'O'}]

except that in this case the '+' signs added to most dict key names (for a reason that I do not quite understand) are problematic. I do not yet know whether aiida-cp2k is able to recognize the key names despite the presence of these '+' signs, but it will create section duplicates if I try to modify the input parameters afterwards within aiida-cp2k.

In summary, is there a way to obtain the tree structure of the "canonical form" (in particular for the KIND section) without the '+' signs?

Thanks for your help.
Best regards.

Sylvian
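As far as I understand, the '+' prefix is how the canonical format distinguishes (repeatable) sections from plain keywords. Until there is a parser option for this, the prefixes can be stripped in post-processing; a minimal sketch (strip_plus is a hypothetical helper, not part of cp2k-input-tools, and note that stripping could in principle collide a section name with a keyword of the same name):

```python
def strip_plus(obj):
    """Recursively drop the leading '+' that the canonical format uses to
    mark sections, leaving the rest of the tree unchanged."""
    if isinstance(obj, dict):
        return {key.lstrip("+"): strip_plus(val) for key, val in obj.items()}
    if isinstance(obj, list):
        return [strip_plus(val) for val in obj]
    return obj

canonical = {"+KIND": [{"BASIS_SET": ["TZVP-MOLOPT-SR-GTH"], "POTENTIAL": "GTH-BLYP-q6", "": "Te"}]}
print(strip_plus(canonical))
```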

Dependabot can't resolve your Python dependency files

Dependabot can't resolve your Python dependency files.

As a result, Dependabot couldn't update your dependencies.

The error Dependabot encountered was:

Creating virtualenv cp2k-input-tools-5QHZEISl-py3.8 in /home/dependabot/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...

[PackageNotFound]
Package pytest-console-scripts (0.2.0) not found.

If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.


language-server: implement completion

ok, we probably need a parsing/abstract syntax tree for this, then it could work like:

  1. parse until cursor line
  2. if the syntax is correct until here we should get the AST/parsing tree (if we are driving the parsing directly we can simply stop, if not by catching an exception)
  3. from the parsing we should get the current section in the schema
  4. lookup other keys/values
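The steps above could be sketched roughly like this (the schema shape and the complete() helper are hypothetical illustrations, not the actual language-server code):

```python
def complete(document_lines, cursor_line, schema):
    """Sketch: walk the section stack up to the cursor line, then offer the
    schema entries valid inside the innermost open section. `schema` maps a
    tuple of section names to the known subsections/keywords (assumed shape)."""
    stack = []
    for line in document_lines[:cursor_line]:
        token = line.strip()
        if token.upper().startswith("&END"):
            if stack:
                stack.pop()
        elif token.startswith("&"):
            stack.append(token[1:].split()[0].upper())
    return sorted(schema.get(tuple(stack), []))

schema = {("FORCE_EVAL", "DFT"): ["BASIS_SET_FILE_NAME", "&XC", "&SCF"]}
lines = ["&FORCE_EVAL", "  &DFT", "    BAS"]
print(complete(lines, 3, schema))
```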

Dependabot can't resolve your Python dependency files

Dependabot can't resolve your Python dependency files.

As a result, Dependabot couldn't update your dependencies.

The error Dependabot encountered was:

Creating virtualenv cp2k-input-tools-kOcA1kGx-py3.8 in /home/dependabot/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...

[PackageNotFound]
Package pytest-console-scripts (0.2.0) not found.

If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.


parsing of `X..Y` ranges for LIST keyword fails

I am not sure how often this is used in other places in the input file, but e.g. the LIST keyword of ISOLATED_ATOMS
takes a range of the form start..end with integers start and end.
However, right now this is coded as an integer in https://github.com/cp2k/cp2k-input-tools/blob/develop/cp2k_input_tools/cp2k_input.xml
(line 429209), which errors out when parsing.

A temporary fix is to change it to string, though this does not give us structured data to work with.
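For illustration, accepting both plain integers and start..end ranges in structured form could look like this (a sketch, not the library's actual token handling):

```python
def parse_int_or_range(token):
    """Parse a LIST-style token: either a plain integer, or a 'start..end'
    range expanded to the covered integers (hypothetical helper)."""
    if ".." in token:
        start, end = token.split("..", 1)
        return list(range(int(start), int(end) + 1))
    return [int(token)]

print(parse_int_or_range("3..6"))
print(parse_int_or_range("42"))
```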

Bug: base-dir argument cannot be a bytes object.

When I run:

$ fromcp2k  -f yaml aiida.inp --base-dir files/

I get the following error:

Traceback (most recent call last):
  File "/home/aiida/.local/bin/fromcp2k", line 8, in <module>
    sys.exit(fromcp2k())
  File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/aiida/.local/lib/python3.7/site-packages/cp2k_input_tools/cli/fromcp2k.py", line 64, in fromcp2k
    tree = cp2k_parser.parse(fhandle, dict(var_values))
  File "/home/aiida/.local/lib/python3.7/site-packages/cp2k_input_tools/parser.py", line 205, in parse
    preprocessor = CP2KPreprocessor(fhandle, self._base_inc_dir, initial_variable_values)
  File "/home/aiida/.local/lib/python3.7/site-packages/cp2k_input_tools/preprocessor.py", line 38, in __init__
    self._inc_dirs = [Path(b) for b in base_dir]
  File "/home/aiida/.local/lib/python3.7/site-packages/cp2k_input_tools/preprocessor.py", line 38, in <listcomp>
    self._inc_dirs = [Path(b) for b in base_dir]
  File "/opt/conda/lib/python3.7/pathlib.py", line 1027, in __new__
    self = cls._from_parts(args, init=False)
  File "/opt/conda/lib/python3.7/pathlib.py", line 674, in _from_parts
    drv, root, parts = self._parse_args(args)
  File "/opt/conda/lib/python3.7/pathlib.py", line 658, in _parse_args
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not int

It looks like the bytes object is recognised as Sequence here:

elif isinstance(base_dir, Sequence):
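A quick demonstration of why this goes wrong: in Python, bytes satisfies the abc.Sequence check, and iterating it yields integers, which Path() then rejects. Below is also a sketch of a possible fix (normalize_base_dirs is a hypothetical helper, not the library's code):

```python
import os
from collections.abc import Sequence
from pathlib import Path

base_dir = b"files/"
assert isinstance(base_dir, Sequence)   # bytes *is* an abc.Sequence...
assert list(base_dir)[0] == ord("f")    # ...and iterating it yields ints, so Path(102) fails

def normalize_base_dirs(base_dir):
    """Sketch of a fix: handle str/bytes/PathLike as a single directory
    before falling back to the generic Sequence branch."""
    if isinstance(base_dir, (str, bytes, os.PathLike)):
        return [Path(os.fsdecode(base_dir))]
    if isinstance(base_dir, Sequence):
        return [Path(os.fsdecode(b)) for b in base_dir]
    raise TypeError("base_dir must be a path or a sequence of paths")

print(normalize_base_dirs(b"files/"))
```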

Delay or avoid keyword value conversion

Currently we're always converting keyword values to their default unit.
This is not needed for linting, only when converting to JSON or YAML and may even lead to problems when the XML schema lists internal_cp2k as the default unit, see #54.

It might be a good idea to delay the actual conversion to a time when it is actually needed and/or avoid it completely by changing the JSON/YAML format to yield a string containing the original unit.

The parser tries to normalize values here:

if current_unit != default_unit:
    # interpret the given value in the specified unit, convert it and get the raw value
    value = (value * current_unit).to(default_unit).magnitude
values += [value]

The reason for this is that for serialization formats like json and yaml we yield values in the CP2K default unit since we do not export any unit specification.

It's a bit unfortunate that CP2K defines the internal units as abstract internal_cp2k rather than effective unit, especially for things where there would be a physical unit behind it (other than relative tolerances where this seems to be used otherwise).
This makes external linting of input files impossible since we could only validate such values by hardcoding the effective internal unit of those parameters within cp2k-input-tools.

I guess the proper way forward is to:

  1. store an explicitly given unit for a quantity with a default unit of internal_cp2k and only validate it, do not attempt to convert the value from or to internal_cp2k.
  2. when converting to a format (json or yaml) where the unit would be lost, either yield an error if the unit was explicitly given (since we can't convert it and the recipient of the json/yaml wouldn't know which unit it is), or yield the value as a string with the unit either suffixed (1.23 angstrom) or prefixed as in the CP2K input [angstrom] 1.23.

This would also delay unit conversion to the point when it becomes necessary (for linting it is not), rather than just attempting it all the time.

The intermediate solution: if the default unit is internal_cp2k, store as is, if there is an explicit unit given, raise an error since we can't convert to internal_cp2k.

Originally posted by @dev-zero in #54 (comment)
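The deferred-conversion idea can be sketched in plain Python (the Quantity class and the converter callback are hypothetical, not the library's API):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Quantity:
    """Keep the raw value together with the unit as given in the input;
    convert only when a serialization format actually needs the default unit."""
    value: float
    unit: Optional[str]  # None if no explicit unit was given in the input

    def to_default(self, default_unit: str, convert: Callable) -> float:
        if default_unit == "internal_cp2k":
            if self.unit is not None:
                # the effective internal unit is unknown, so refuse to convert
                raise ValueError("cannot convert an explicit unit to internal_cp2k")
            return self.value  # store as-is; validation only
        if self.unit is None or self.unit == default_unit:
            return self.value
        return convert(self.value, self.unit, default_unit)

# for lossy formats (JSON/YAML), the unit could instead stay in the string,
# prefixed the way CP2K does it in the input:
q = Quantity(1.23, "angstrom")
print(f"[{q.unit}] {q.value}")
```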

Multiple basis sets vs. simplified parser

I came across a problem when using CP2KInputParserSimplified with multiple basis sets for an atomic kind (say, an ADMM calculation) and wanted to check if it is something that should be fixed, or if I need to use the full parser. Specifically, given the input snippet

&FORCE_EVAL
  &SUBSYS
    &KIND H
      POTENTIAL GTH-PBE-q1
      BASIS_SET TZV2P-MOLOPT-GTH
      BASIS_SET AUX_FIT pFIT3
    &END KIND
  &END SUBSYS
&END FORCE_EVAL

and the code

from cp2k_input_tools.parser import CP2KInputParserSimplified
tree = CP2KInputParserSimplified().parse(open('test-basis.inp'))
print(tree['force_eval']['subsys']['kind']['H'])

I get

{'potential': 'GTH-PBE-q1', 'basis_set': [['AUX_FIT', 'pFIT3'], ['AUX_FIT', 'pFIT3']]}

which is incorrect and also ends up with an incorrect input file when written using CP2KInputGenerator.

Adding an explicit ORB to the primary basis does not help, as it results in:

{'potential': 'GTH-PBE-q1', 'basis_set': ['ORB', 'TZV2P-MOLOPT-GTH', ['AUX_FIT', 'pFIT3']]}

Switching the order of the two lines changes the output to the still incorrect

{'potential': 'GTH-PBE-q1', 'basis_set': ['AUX_FIT', 'pFIT3', 'TZV2P-MOLOPT-GTH']}

On the other hand, CP2KInputParser processes it fine. Is this a bug or expected behavior for the simplified parser? If so, should it perhaps raise an exception instead? Thank you for looking at this.
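For reference, the ambiguity at the root of this is that a repeated keyword's values and a single multi-token value look alike once flattened. A sketch of the accumulation the simplified parser would need (add_keyword is a hypothetical helper, not the library's code):

```python
def add_keyword(section, key, value):
    """Sketch: store the first occurrence of a keyword as-is; on repetition,
    promote the entry to a list of per-line values and append to it."""
    if key not in section:
        section[key] = value
    elif isinstance(section[key], list) and all(isinstance(v, list) for v in section[key]):
        section[key].append(value)  # already a list of per-line values
    else:
        section[key] = [section[key], value]  # promote to list-of-values
    return section

kind = {}
add_keyword(kind, "basis_set", ["ORB", "TZV2P-MOLOPT-GTH"])
add_keyword(kind, "basis_set", ["AUX_FIT", "pFIT3"])
print(kind)
```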

MM input causes unit problems

I've tried to use cp2k-input-tools with a fairly standard MM Force-eval, but unfortunately there are problems with the K keywords in the BEND and BOND sections.

It seems that the parser tries to determine the standard unit for these variables, which are extracted from the cp2k_input.xml file. However, there are multiple definitions of K in there and it looks like the parser takes a definition with default unit 'internal_cp2k', which eventually triggers the following error:

in read_input_cp2k
    tree = parser.parse(f_in)
  File "/home/ucapcsc/software/miniconda3/lib/python3.7/site-packages/cp2k_input_tools/parser.py", line 216, in parse
    self._parse_as_keyword(line)
  File "/home/ucapcsc/software/miniconda3/lib/python3.7/site-packages/cp2k_input_tools/parser.py", line 149, in _parse_as_keyword
    kw = Keyword.from_string(kw_node, kw_value, self._key_trafo)  # the key_trafo is needed to mangle keywords
  File "/home/ucapcsc/software/miniconda3/lib/python3.7/site-packages/cp2k_input_tools/keyword_helpers.py", line 127, in from_string
    default_unit = UREG.parse_expression(default_unit)
  File "/home/ucapcsc/software/miniconda3/lib/python3.7/site-packages/pint/registry.py", line 1264, in parse_expression
    lambda x: self._eval_token(x, case_sensitive=case_sensitive, **values)
  File "/home/ucapcsc/software/miniconda3/lib/python3.7/site-packages/pint/pint_eval.py", line 102, in evaluate
    return define_op(self.left)
  File "/home/ucapcsc/software/miniconda3/lib/python3.7/site-packages/pint/registry.py", line 1264, in <lambda>
    lambda x: self._eval_token(x, case_sensitive=case_sensitive, **values)
  File "/home/ucapcsc/software/miniconda3/lib/python3.7/site-packages/pint/registry.py", line 1159, in _eval_token
    {self.get_name(token_text, case_sensitive=case_sensitive): 1}
  File "/home/ucapcsc/software/miniconda3/lib/python3.7/site-packages/pint/registry.py", line 643, in get_name
    raise UndefinedUnitError(name_or_alias)
pint.errors.UndefinedUnitError: 'internal_cp2k' is not defined in the unit registry

I have circumvented this problem by changing all occurrences of 'internal_cp2k' in cp2k_input.xml to 'hartree', but this is a rather ugly fix...
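That workaround can at least be scripted so the bundled cp2k_input.xml stays untouched; a sketch with example paths (patch_schema is a hypothetical helper, not part of cp2k-input-tools):

```python
import tempfile
from pathlib import Path

def patch_schema(src: Path, dst: Path, replacement: str = "hartree") -> int:
    """Write a copy of the schema with the abstract internal_cp2k unit
    substituted; returns the number of occurrences replaced."""
    text = src.read_text()
    dst.write_text(text.replace("internal_cp2k", replacement))
    return text.count("internal_cp2k")

# demo on a throwaway file standing in for cp2k_input.xml
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "cp2k_input.xml"
    src.write_text("<DEFAULT_UNIT>internal_cp2k</DEFAULT_UNIT>")
    n = patch_schema(src, Path(tmp) / "patched.xml")
    print(n)
```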

`cp2k-input-tools 0.8.3` Release

Hey and first of all, thanks for this great tool!
We use it to make CP2K compatible with https://dvc.org/ for our research data management and parameter tracking.

We include cp2k-input-tools as a dependency in our package https://github.com/zincware/IPSuite which allows us to build the workflows and much more.

I've seen that you migrated to pydantic v2 in 3706fae .
We have also updated some of our tools to use the new pydantic release.

I was wondering what criteria you have on your side until you release the next version of cp2k-input-tools?
This would allow us to also release a new version of our tool including cp2k-input-tools on PyPI, since the package can't depend on the source repository.

Release 0.9.0 on PyPI?

More and more packages require a bump of the Pydantic dependency. I was happy to see it has already happened for this library's 0.9.0 version.

However, for some reason, it didn't make it to PyPI. @dev-zero, do you know if it was intentional?

Multiple BASIS_SET_FILE_NAME values not taken into account

Hi,

It seems that the cp2k-input-tools parser does not properly deal with the situation where multiple basis set file names are needed (different files for different elements in the system):

&DFT
BASIS_SET_FILE_NAME /gpfs/home/scadars/CP2K/CP2K6.1/data/GTH_BASIS_SETS
BASIS_SET_FILE_NAME /gpfs/home/scadars/CP2K/CP2K6.1/data/BASIS_MOLOPT_UCL
POTENTIAL_FILE_NAME /gpfs/home/scadars/CP2K/CP2K6.1/data/GTH_POTENTIALS

In this case the parse() method of the CP2KInputParserSimplified class with the following options:

parser = CP2KInputParserSimplified(
    key_trafo=str.upper,
    multi_value_unpack=False,
    repeated_section_unpack=False,
    level_reduction_blacklist=["KIND"],
)

returns a nested dict which contains, among other key/value pairs of the dict value associated with the 'DFT' key:

'BASIS_SET_FILE_NAME': ['/gpfs/home/scadars/CP2K/CP2K6.1/data/BASIS_MOLOPT_UCL', '/gpfs/home/scadars/CP2K/CP2K6.1/data/BASIS_MOLOPT_UCL'],
'POTENTIAL_FILE_NAME': '/gpfs/home/scadars/CP2K/CP2K6.1/data/GTH_POTENTIALS'

It appears that the parser correctly detects that there are 2 values associated with 'BASIS_SET_FILE_NAME', but only one of them is used (and repeated) instead of keeping both.

Another related issue is from the aiida-cp2k side, where the builder associated with the cp2k code does not seem to accept lists of files as values for the 'basis' key in builder.file (see aiida-cp2k issue 139: https://github.com/aiidateam/aiida-cp2k/issues/139).

Please note that, being unfamiliar with the CP2K code, I don't know how common this situation is, but I got my sample input files from a definitely experienced user, and the code itself is able to deal with such a situation.

Thank you very much for your help.
Best regards.

Sylvian

Linting errors with a new CP2K keyword

Recently, I came across an unexpected error when using the cp2klint command line tool on an input containing a recently added (CP2K version 9.1 onwards) keyword, PRINT_ATOM_KIND, under CP2K_INPUT/MOTION/PRINT/FORCES. Specifically, trying to lint the following MWE input

&MOTION
  &PRINT  
    &FORCES  
      PRINT_ATOM_KIND TRUE  
    &END FORCES  
  &END PRINT  
&END MOTION

raises an invalid keyword error, which ideally should not be the case. Commenting out the problematic keyword in the input removes the error.

Bug detected when parsing structure from input file

Hi,

I tried to use the simplified parser to read a structure (as well as other parameters) from a CP2K input file containing the following COORD section:

&COORD
  UNIT angstrom
  Kr  0 0 0
  Kr  4 0 0
&END COORD

with the following parser options:

parser = CP2KInputParserSimplified(
    key_trafo=str.upper,
    multi_value_unpack=False,
    repeated_section_unpack=False,
    level_reduction_blacklist=["KIND"],
)

The parser.parse(fhandle) method returns a dict that contains the following erroneous 'COORD' value:

'COORD': {'UNIT': 'angstrom', '*': ['Kr 4 0 0', 'Kr 4 0 0']}

Two problems here. The '*' may cause problems in CP2K (see below). Maybe more importantly, the coordinates of both atoms are identical and correspond to the second atom in the original input file.

When I tried to submit this calculation with a script generated with:

fromcp2k -f aiida-cp2k-calc CP2K_INPUT_FILE

with the following 2 lines commented out to use the structure provided in the input file:

structure = StructureData(...)

builder.structure = structure # read from original CP2K input file

The calculation is successfully submitted by AiiDA to the (remote) computer, but the COORD section in the aiida.inp file looks as follows (again the same coordinates for both atoms):

  &COORD
     * Kr  4 0 0
     * Kr  4 0 0
     UNIT angstrom
  &END COORD

And the aiida.out file reads :


[CP2K ASCII-art banner]
[ABORT] Incorrectly formatted input line for atom 1 found in COORD section.
        A floating point type object was expected, found
        Input line: <* Kr 4 0 0>
        (atoms_input.F:209)

Note that in the aiida-cp2k example wherein the structure is read from the CP2K input file, the COORD value of the dict reads as follows:

'COORD': {
    ' ': ['H    2.0   2.0   2.737166', 'H    2.0   2.0   2.000000']
},

No '*' sign here, but a blank key instead...

Thank you for your help.
Best wishes.

Sylvian.

comments after keywords

Hi, thanks for a nice project, I just discovered it while preparing something "simple" for myself :)
There's a failure when you have got comments on the same line as keyword, such as:

&POISSON  #your comment goes here
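Until the tokenizer handles this, a pre-processing workaround is to strip trailing comments before feeding lines to the parser (both '#' and '!' start comments in CP2K inputs); a quote-aware sketch (strip_inline_comment is a hypothetical helper, not part of cp2k-input-tools):

```python
def strip_inline_comment(line):
    """Drop everything from the first unquoted '#' or '!' onwards,
    leaving comment characters inside quoted strings untouched."""
    in_quote = None
    for i, ch in enumerate(line):
        if in_quote:
            if ch == in_quote:
                in_quote = None
        elif ch in "'\"":
            in_quote = ch
        elif ch in "#!":
            return line[:i].rstrip()
    return line.rstrip()

print(strip_inline_comment("&POISSON  #your comment goes here"))
```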

Dependabot can't resolve your Python dependency files

Dependabot can't resolve your Python dependency files.

As a result, Dependabot couldn't update your dependencies.

The error Dependabot encountered was:

Creating virtualenv cp2k-input-tools-T1ptdnyg-py3.8 in /home/dependabot/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...

[SolverProblemError]
The current project's Python requirement (^3.6) is not compatible with some of the required packages Python requirement:
  - dataclasses requires Python >=3.6, <3.7

Because dataclasses (0.7) requires Python >=3.6, <3.7
 and no versions of dataclasses match >0.7,<0.8, dataclasses is forbidden.
So, because cp2k-input-tools depends on dataclasses (^0.7), version solving failed.

If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.


Problem with phonopy, cp2k_input_tools and transitions

I have found an error message when building supercells with atomic displacements using a combination of phonopy (2.23.1, 2.24.1, 2.24.2), cp2k_input_tools (0.9.1, 0.8.2, 0.7.2, 0.6.0, etc.) and transitions (0.8.11, 0.7.2, 0.7.0, etc.), also with Python versions 3.8 and 3.12. I do not understand the error message below, so I will appreciate any suggestions. Thanks.

pereira@debye:~/WORK/CP2K/Force% phonopy --cp2k -c aGST_ENERGY_FORCE.inp -d --dim 1 1 1
[phonopy ASCII-art banner]
2.23.1

Compiled with OpenMP support (max 8 threads).
Python version 3.12.3
Spglib version 2.3.1

Traceback (most recent call last):
  File "/opt/homebrew/Caskroom/miniforge/base/envs/phonopy/bin/phonopy", line 45, in <module>
    main(**argparse_control)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/phonopy/lib/python3.12/site-packages/phonopy/cui/phonopy_script.py", line 1853, in main
    cell_info = get_cell_info(settings, cell_filename, log_level)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/phonopy/lib/python3.12/site-packages/phonopy/cui/phonopy_script.py", line 1641, in get_cell_info
    cell_info = collect_cell_info(
  File "/opt/homebrew/Caskroom/miniforge/base/envs/phonopy/lib/python3.12/site-packages/phonopy/cui/collect_cell_info.py", line 142, in collect_cell_info
    unitcell, optional_structure_info = read_crystal_structure(
  File "/opt/homebrew/Caskroom/miniforge/base/envs/phonopy/lib/python3.12/site-packages/phonopy/interface/calculator.py", line 472, in read_crystal_structure
    unitcell, config_tree = read_cp2k(cell_filename)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/phonopy/lib/python3.12/site-packages/phonopy/interface/cp2k.py", line 78, in read_cp2k
    tree = parser.parse(fhandle)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/phonopy/lib/python3.12/site-packages/cp2k_input_tools/parser.py", line 215, in parse
    self._parse_as_keyword(line)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/phonopy/lib/python3.12/site-packages/cp2k_input_tools/parser.py", line 149, in _parse_as_keyword
    kw = Keyword.from_string(kw_node, kw_value, self._key_trafo)  # the key_trafo is needed to mangle keywords
  File "/opt/homebrew/Caskroom/miniforge/base/envs/phonopy/lib/python3.12/site-packages/cp2k_input_tools/keyword_helpers.py", line 106, in from_string
    tokens = tokenize(vstring)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/phonopy/lib/python3.12/site-packages/cp2k_input_tools/tokenizer.py", line 129, in tokenize
    char_map.get(char, tokenizer.token_char)(string, colnr)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/phonopy/lib/python3.12/site-packages/transitions/core.py", line 405, in trigger
    return self.machine._process(func)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/phonopy/lib/python3.12/site-packages/transitions/core.py", line 1079, in _process
    return trigger()
  File "/opt/homebrew/Caskroom/miniforge/base/envs/phonopy/lib/python3.12/site-packages/transitions/core.py", line 421, in _trigger
    raise MachineError(msg)
transitions.core.MachineError: "Can't trigger event quote_char from state comment!"

Dependabot can't resolve your Python dependency files

Dependabot can't resolve your Python dependency files.

As a result, Dependabot couldn't update your dependencies.

The error Dependabot encountered was:

Creating virtualenv cp2k-input-tools-2RVZUAfo-py3.9 in /home/dependabot/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...

  PackageNotFound

  Package pygls (0.8.1) not found.

  at /usr/local/.pyenv/versions/3.9.0/lib/python3.9/site-packages/poetry/repositories/pool.py:144 in package
      140│                     self._packages.append(package)
      141│ 
      142│                     return package
      143│ 
    → 144│         raise PackageNotFound("Package {} ({}) not found.".format(name, version))
      145│ 
      146│     def find_packages(
      147│         self, dependency,
      148│     ):

If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.

View the update logs.

The libxc subsections of &XC_FUNCTIONAL are not recognized

Hi

I'm trying to parse an input file using the new libxc subsections of &XC_FUNCTIONAL. But these sections are not recognized and I get the following error:

cp2k_input_tools.parser_errors.InvalidSectionError: ("invalid section 'MGGA_C_R2SCAN'", defaultdict(<function Context.. at 0x2b26fa545790>, {'filename': 'phonon.inp', 'linenr': 358, 'colnrs': [8], 'line': '&R2MGGA_C_SCAN', 'section': Section(name='XC_FUNCTIONAL', node=<Element 'SECTION' at 0x2b26f7505bd0>, subsections=[], keywords=[], param=None, repeats=False)}))

Can I somehow add these sections?

Thanks in advance,
Fabian
