judftteam / aiida-kkr
AiiDA plugin of the high-performance density functional theory code JuKKR (www.judft.de) for high-throughput electronic structure calculations.

Home Page: https://aiida-kkr.readthedocs.io

License: MIT License

Languages: Python 98.42%, Shell 0.98%, OpenEdge ABL 0.61%

Topics: kkr, aiida, workflow, forschungszentrum-juelich, electronic-structure, multiple-scattering, ab-initio, all-electron, band-structure, coherent-potential-approximation

aiida-kkr's People

Contributors

broeder-j, dantogni, dependabot-preview[bot], dependabot[bot], irratzo, janssenhenning, markusstruckmann, mstruckmann, philipprue, pre-commit-ci[bot], raff-physics, rubelmozumder


aiida-kkr's Issues

Dependabot can't resolve your Python dependency files

Dependabot can't resolve your Python dependency files.

As a result, Dependabot couldn't update your dependencies.

The error Dependabot encountered was:

Python 2.7 will no longer be supported in the next feature release of Poetry (1.2).
You should consider updating your Python version to a supported one.

Note that you will still be able to manage Python 2.7 projects by using the env command.
See <redacted> for more information.

Creating virtualenv aiida-kkr-oVdqxz92-py2.7 in /home/dependabot/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...

PackageNotFound

Package sphinx (3.4.0) not found.

If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.

View the update logs.

Dependabot can't resolve your Python dependency files

Dependabot can't resolve your Python dependency files.

As a result, Dependabot couldn't update your dependencies.

The error Dependabot encountered was:

Creating virtualenv aiida-kkr-ssyGGSoM-py2.7 in /home/dependabot/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...

[PackageNotFound]
Package aiida-core (1.1.1) not found.

If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.

View the update logs.

Further cmdline features to implement

Here we collect the features that still have to be implemented on the cmdline.

  • launch: all workchains (excluding ...)

  • expose kkrparams params, i.e.:

from aiida.orm import Dict
from aiida_kkr.tools import kkrparams
params_node = Dict(dict=kkrparams(LMAX=2, ...).get_dict())

Check impurity info node comparison

A check should be implemented here:

#TODO: implement also 'ilayer_center' check

If 'imp_cls' is in the impurity info Dict we should not do the comparison starting with 'Rcut' but instead compare the explicitly given impurity clusters.

Then we should compare the first four columns of the imp_cls array in the impurity info nodes (these correspond to x, y, z and the layer index).
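
A minimal sketch of such a comparison, assuming both impurity info nodes expose imp_cls through get_dict() (the helper name imp_cls_match is hypothetical):

import numpy as np

def imp_cls_match(imp_info_1, imp_info_2, atol=1e-8):
    """Compare the first four columns (x, y, z, layer index) of 'imp_cls'."""
    cls1 = np.array(imp_info_1.get_dict()['imp_cls'])
    cls2 = np.array(imp_info_2.get_dict()['imp_cls'])
    if cls1.shape != cls2.shape:
        return False
    return np.allclose(cls1[:, :4], cls2[:, :4], atol=atol)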

DOS mode not working in `kkr_scf_wc`

The kkr_scf_wc workchain can automatically compute the DOS of the starting potential and of the final (self-consistent) solution if check_dos in the wf_parameters input Dict is set to True.

In the latest develop version this does not work anymore and leads to exceptions.

EOS workchain rescale function not working with CPA

In the EOS workchain the structure is rescaled using ASE helper functions. However, this does not work for alloys, as ASE cannot handle them.

The simplest way to solve this is to use the get_pymatgen() method instead, since pymatgen structures can be alloys. One can then use the scale_lattice(target_volume) method, which keeps the shape of the cell constant while scaling the lattice parameters until the cell reaches the target_volume; a sketch is given below.
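
A hedged sketch of the proposed rescaling, assuming the input is an AiiDA StructureData node (the helper name rescale_structure is hypothetical; StructureData.get_pymatgen() and pymatgen's Structure.scale_lattice() are existing APIs):

from aiida.orm import StructureData

def rescale_structure(structure, target_volume):
    """Return a new StructureData scaled to target_volume (Ang^3), keeping the cell shape."""
    pmg_structure = structure.get_pymatgen()
    pmg_structure.scale_lattice(target_volume)  # in-place rescale at fixed cell shape
    return StructureData(pymatgen=pmg_structure)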

allow controlling the temperature increase in `kkr_scf_wc`

In the workflow we not only change the mixing parameter if a calculation does not converge but we also increase the temperature:

convergence_settings['tempr'] += 50.

We should include more control over this, e.g. by implementing a temprincrease input (sketched below) similar to

'mixreduce': 0.5, # reduce mixing factor by this factor if calculation fails due to too large mixing
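
A minimal sketch of such a default entry, following the existing mixreduce pattern (the key name temprincrease and the wf_dict attribute are assumptions):

_wf_default = {
    'mixreduce': 0.5,      # reduce mixing factor by this factor if a calculation fails due to too large mixing
    'temprincrease': 50.,  # increase smearing temperature by this amount if a calculation does not converge
}

# inside the workflow, instead of the hard-coded value:
# convergence_settings['tempr'] += self.ctx.wf_dict['temprincrease']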

Information on the Bravais matrix not parsed correctly

In the output node of a KKR calculation one finds the information on the Bravais matrix and the corresponding reciprocal lattice vectors:

 'direct_bravais_matrix': [[0.707107, 0.707107, 0.707107],
  [0.707107, 0.0, 0.0],
  [0.0, 0.707107, 0.707107]],

 'reciprocal_bravais_matrix': [[-0.707107, -0.707107, -0.707107],
  [-0.707107, 0.707107, 0.707107],
  [0.707107, -0.707107, -0.707107]],

This example is from a bulk fcc Cu system, where the information is certainly not parsed correctly.

kick out core states in `kkr_startpot_wc` workchain

So far we terminate the kkr_startpot_wc if core states are found to lie within the energy contour:

return self.exit_codes.ERROR_CORE_STATES_IN_CONTOUR

To overcome this we can use a modified version of
def kick_out_corestates_wf(potential_sfd, emin):

to remove the core states that lie in the contour.

To keep the provenance we should create a new SinglefileData node with the new potential. This should then be recognized by the KkrCalculation, which should use this potential instead of the normal output potential. This can be included here:

elif parent_calc.process_class == VoronoiCalculation:

Alternatively, if the data provenance does not need to be kept, we can simply reuse the _POTENTIAL_IN_OVERWRITE filename to overwrite the potential in the KkrCalculation. A sketch of the provenance-keeping variant follows below.
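
A hedged sketch of the provenance-keeping variant, assuming the actual filtering reuses a modified kick_out_corestates_wf (the wrapper name remove_core_states_in_contour is hypothetical; emin is passed as an AiiDA Float node):

import tempfile
from aiida.engine import calcfunction
from aiida.orm import SinglefileData

@calcfunction
def remove_core_states_in_contour(potential_sfd, emin):
    """Return a new SinglefileData with the core states inside the contour removed."""
    with potential_sfd.open() as handle:
        lines = handle.readlines()
    # ... here the modified kick_out_corestates_wf logic would drop the core
    # states with energy above emin.value from `lines` (elided in this sketch) ...
    with tempfile.NamedTemporaryFile('w', delete=False, suffix='_potential') as handle:
        handle.writelines(lines)
        tmp_path = handle.name
    return SinglefileData(file=tmp_path)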

KKR scf workflow does not reset mixing to simple mixing automatically

If a calculation is continued from a preconverged calculation and fails because, for some reason, the first KKR run did not converge, the scf workflow might get stuck, since it does not try resetting IMIX to zero if it was initially set to a more aggressive mixing scheme.

Allow different qbound and threshold for aggr. mixing in `kkr_scf_wc`

It seems that the KKRcode sometimes stagnates during convergence, which, in the case of simple mixing, prevents the rms error from going below the qbound value. This might be overcome by allowing qbound to be smaller than the threshold for aggressive mixing:

new_params['QBOUND'] = self.ctx.threshold_aggressive_mixing

This can be done by adding a new input (e.g. qbound_straight) that can be set lower than the threshold for aggressive mixing (e.g. set qbound_straight = 10**-3 < self.ctx.threshold_aggressive_mixing = 8*10**-3), as sketched below.
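
A minimal sketch of the corresponding wf_parameters entries (the key names are assumptions, mirroring the existing context variables):

from aiida.orm import Dict

wf_parameters = Dict(dict={
    'qbound_straight': 1e-3,             # convergence target while simple mixing is active
    'threshold_aggressive_mixing': 8e-3, # switch to aggressive mixing below this rms error
})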

Should one add Band Structure and Jij's workchains?

I was looking at the documentation on how to submit band structure calculations and Jij determinations from previous calculations, and I have a couple of questions:

  1. Should one generate band structure and/or Jij workchains? The calculations themselves seem quite straightforward once one has an output from an SCF or Voronoi calculation, especially the band structure, since one could in principle just use the DOS workchain with modified inputs for the submission. In that case, I think that adding some examples of this might be a good idea.

  2. Should one modify the workchains in such a way as to allow the direct calculation of, let's say, the DOS without a previous calculation? I.e. the DOS calculation, for example, would check whether the remote folder is present; if it is, everything proceeds as normal; otherwise, Voronoi and SCF (if required) calculations are performed first. What do you think?

  3. Not so related to this: should one add a verbose option to limit the printing of some of the printout data to stdout?

New keywords for `retrieve_jij_files` crash `gf_writeout` workflow

When running the gf_writeout workflow from the latest develop branch it crashes, since the keywords JIJRAD, JIJRADXY, JIJSITEI, JIJSITEJ from masci_tools/io/kkr_params.py raise a KeyError in update_params_wf from aiida_kkr.tools.common_workfunctions. Possibly this error will also occur in other workflows.

The error message for the gf_writeout workflow looks like this:

12/04/2018 10:38:02 AM <5320> aiida.orm.implementation.general.calculation.work.WorkCalculation: [REPORT] [19633|kkr_flex_wc|on_except]: Traceback (most recent call last):
  File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/process_states.py", line 228, in execute
    result = self.run_fn(*self.args, **self.kwargs)
  File "/Users/bert/sourcecodes/github/aiida_core/aiida/work/workchain.py", line 158, in _do_step
    finished, stepper_result = self._stepper.step()
  File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/workchains.py", line 292, in step
    finished, result = self._child_stepper.step()
  File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/workchains.py", line 430, in step
    finished, retval = self._child_stepper.step()
  File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/workchains.py", line 292, in step
    finished, result = self._child_stepper.step()
  File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/workchains.py", line 242, in step
    return True, self._fn(self._workchain)
  File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/aiida_kkr/workflows/gf_writeout.py", line 271, in set_params_flex
    paranode_flex = update_params_wf(self.ctx.input_params_KKR, updatenode)
  File "/Users/bert/sourcecodes/github/aiida_core/aiida/work/workfunctions.py", line 76, in wrapped_function
    result, _ = run_get_node(*args, **kwargs)
  File "/Users/bert/sourcecodes/github/aiida_core/aiida/work/workfunctions.py", line 69, in run_get_node
    return proc.execute(), proc.calc
  File "/Users/bert/sourcecodes/github/aiida_core/aiida/work/processes.py", line 742, in execute
    result = super(FunctionProcess, self).execute()
  File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/processes.py", line 992, in execute
    return self.future().result()
  File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/kiwipy/futures.py", line 43, in result
    return super(Future, self).result(timeout=0.)
  File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/tornado/concurrent.py", line 238, in result
    raise_exc_info(self._exc_info)
  File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/process_states.py", line 228, in execute
    result = self.run_fn(*self.args, **self.kwargs)
  File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/processes.py", line 981, in run
    return self._run()
  File "/Users/bert/sourcecodes/github/aiida_core/aiida/work/processes.py", line 776, in _run
    result = self._func(*args, **kwargs)
  File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/aiida_kkr/tools/common_workfunctions.py", line 54, in update_params_wf
    nodedesc=nodedesc, **updatenode_dict)
  File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/aiida_kkr/tools/common_workfunctions.py", line 106, in update_params
    if kwargs[key] != inp_params[key]:
KeyError: u'JIJSITEJ'

Are magnetic host simulations properly parsed?

@PhilippRue, @broeder-j I was testing bcc Fe to check the SCF when magnetism is considered (NSPIN=2, mag_init: True in the wf_parameters), and the calculation is failing with the error Finished [302] KKR parser returned an error. When I do some checking I find that the parser is giving the error Error parsing output of KKR: orbital moment.

When I look at output.2.txt I see that no orbital moment is printed, i.e. I'm running without NEWSOSOL; from what I can see in masci_tools it should not even reach this point to get the failure state.

I think that what is happening is that the use_newsosol function in masci_tools is incorrectly identifying NEWSOSOL (or use_chebychev) as active even when it is not (I tried to parse the files manually and I get this error), even though the line in question is

 <use_Chebychev_solver>=  F   use the Chebychev solver (former: 'NEWSOSOL')

I think that in the previous inputcard format the mere presence of the line was all that mattered, so use_newsosol just checks whether the number of lines containing NEWSOSOL is larger than 0, which does not work here, since the new-style line quotes 'NEWSOSOL' even when the solver is off. One could perhaps replace it by something like this:

newsosol = True
if line.startswith("<use_Chebychev_solver>="):
    _temp = line.split()
    if _temp[1].lower() == "f":
        newsosol = False
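
A hedged sketch of the full check over all inputcard lines, combining the old-style and new-style detection (the function name is hypothetical):

def use_newsosol_from_inputcard(lines):
    """Return True if the Chebychev solver (former 'NEWSOSOL') is active."""
    newsosol = False
    for line in lines:
        if line.strip().startswith("<use_Chebychev_solver>="):
            return line.split()[1].lower() != "f"  # new-style: explicit T/F flag wins
        if "NEWSOSOL" in line:
            newsosol = True                        # old-style: presence alone activates it
    return newsosol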

P.S. How does one activate NEWSOSOL in the kkr_params?

We implicitly assume no `alat_input` is given

Here

self.ctx.r_cls = find_cluster_radius(self.inputs.structure, self.ctx.nclsmin/1.15, nbins=100)[1] # find cluster radius (in alat units)

we use the alat value computed from the cell of the structure, but one can also give a different alat_input value, which will then rescale some values.

To circumvent this we should compute the cluster radius in Ang units and convert it to alat units only here, after we know whether or not an alat_input is given; a sketch follows below.
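
A minimal sketch of the proposed conversion, assuming alat values are given in Bohr as in the KKR inputcard (the helper name is hypothetical):

BOHR_TO_ANG = 0.52917721067  # Bohr radius in Ang

def r_cls_in_alat_units(r_cls_ang, alat_from_structure, alat_input=None):
    """Convert a cluster radius from Ang to alat units; alat values in Bohr."""
    alat = alat_input if alat_input is not None else alat_from_structure
    return r_cls_ang / (alat * BOHR_TO_ANG)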

kkr_scf workflow does not recognize stagnating convergence

Especially for GGA calculations the convergence might get stuck above the chosen convergence criterion. Then the kkr_scf workflow fails to recognize that a calculation has already reached 'soft' convergence and resets to the last 'properly successful' calculation before submitting the next calculations (possibly indefinitely). This has, for example, happened here:

[screenshot of the stagnating rms convergence omitted]

It would be better if KKR recognized such soft convergence; this could then be passed on to the kkr_scf workflow, which should then stop (possibly with 'soft' convergence marked). A stagnation check along the lines sketched below could be applied to the rms-error history.
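
A hedged sketch of such a stagnation check (threshold values are placeholders):

def convergence_stagnated(rms_history, n_last=10, rel_change=0.05):
    """True if the rms error changed by less than rel_change over the last n_last iterations."""
    if len(rms_history) < n_last:
        return False
    first, last = rms_history[-n_last], rms_history[-1]
    return abs(first - last) / first < rel_change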

Bandstructure calculations fail with new `aiida_core` due to `ModificationNotAllowed` error

When running the band structure mode of the kkr.kkr calculation (if found_kpath), the calculations can't be submitted since the call of update_params_wf raises an error. This ModificationNotAllowed error is present for aiida_core=1.0.0a4 and later. The error message for this calculation with RUNOPT=qdos is:

(test_jup_env) mb-bert:~ bert$ verdi log 23332
*** 23332 [Bandstructure calc for BiSbTe3 bulk]: TOSUBMIT
*** Scheduler output: N/A
*** Scheduler errors: N/A
*** 1 LOG MESSAGES:
+-> REPORT at 2018-12-18 12:08:40.134797+00:00
 | [23332|JobProcess_aiida_kkr.calculations.kkr:KkrCalculation|on_except]: Traceback (most recent call last):
 |   File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/process_states.py", line 228, in execute
 |     result = self.run_fn(*self.args, **self.kwargs)
 |   File "/Users/bert/sourcecodes/github/aiida_core/aiida/work/job_processes.py", line 632, in run
 |     calc_info, script_filename = self.calc._presubmit(folder, use_unstored_links=False)
 |   File "/Users/bert/sourcecodes/github/aiida_core/aiida/orm/implementation/general/calculation/job/__init__.py", line 1792, in _presubmit
 |     calcinfo = self._prepare_for_submission(folder, inputdict)
 |   File "/Users/bert/sourcecodes/github/aiida-kkr/aiida_kkr/calculations/kkr.py", line 374, in _prepare_for_submission
 |     parameters = update_params_wf(parameters, new_params_node)
 |   File "/Users/bert/sourcecodes/github/aiida_core/aiida/work/workfunctions.py", line 76, in wrapped_function
 |     result, _ = run_get_node(*args, **kwargs)
 |   File "/Users/bert/sourcecodes/github/aiida_core/aiida/work/workfunctions.py", line 68, in run_get_node
 |     proc = process_class(inputs=inputs, runner=runner)
 |   File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/base/state_machine.py", line 188, in __call__
 |     inst.transition_to(inst.create_initial_state())
 |   File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/base/state_machine.py", line 329, in transition_to
 |     *sys.exc_info()[1:])
 |   File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/base/state_machine.py", line 342, in transition_failed
 |     raise_(type(exception), exception, trace)
 |   File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/base/state_machine.py", line 311, in transition_to
 |     self._enter_next_state(new_state)
 |   File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/base/state_machine.py", line 376, in _enter_next_state
 |     self._fire_state_event(StateEventHook.ENTERING_STATE, next_state)
 |   File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/base/state_machine.py", line 290, in _fire_state_event
 |     callback(self, hook, state)
 |   File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/processes.py", line 281, in <lambda>
 |     lambda _s, _h, state: self.on_entering(state))
 |   File "/Users/bert/sourcecodes/github/aiida_core/aiida/work/processes.py", line 224, in on_entering
 |     super(Process, self).on_entering(state)
 |   File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/processes.py", line 594, in on_entering
 |     call_with_super_check(self.on_create)
 |   File "/Users/bert/.virtualenvs/test_jup_env/lib/python2.7/site-packages/plumpy/base/utils.py", line 29, in call_with_super_check
 |     fn(*args, **kwargs)
 |   File "/Users/bert/sourcecodes/github/aiida_core/aiida/work/processes.py", line 113, in on_create
 |     self._pid = self._create_and_setup_db_record()
 |   File "/Users/bert/sourcecodes/github/aiida_core/aiida/work/processes.py", line 353, in _create_and_setup_db_record
 |     self._setup_db_record()
 |   File "/Users/bert/sourcecodes/github/aiida_core/aiida/work/processes.py", line 752, in _setup_db_record
 |     super(FunctionProcess, self)._setup_db_record()
 |   File "/Users/bert/sourcecodes/github/aiida_core/aiida/work/processes.py", line 452, in _setup_db_record
 |     self.calc.add_link_from(parent_calc, 'CALL', link_type=LinkType.CALL)
 |   File "/Users/bert/sourcecodes/github/aiida_core/aiida/orm/implementation/general/calculation/__init__.py", line 582, in add_link_from
 |     return super(AbstractCalculation, self).add_link_from( src, label, link_type)
 |   File "/Users/bert/sourcecodes/github/aiida_core/aiida/orm/mixins.py", line 161, in add_link_from
 |     super(Sealable, self).add_link_from(src, label=label, link_type=link_type)
 |   File "/Users/bert/sourcecodes/github/aiida_core/aiida/orm/implementation/general/node.py", line 616, in add_link_from
 |     src._linking_as_output(self, link_type)
 |   File "/Users/bert/sourcecodes/github/aiida_core/aiida/orm/implementation/general/calculation/job/__init__.py", line 290, in _linking_as_output
 |     valid_states, self.get_state()))
 | ModificationNotAllowed: Can add an output node to a calculation only if it is in one of the following states: [u'SUBMITTING', u'RETRIEVING', u'PARSING'], it is instead TOSUBMIT

Improve finding of screening cluster size

So far the default value for the screening cluster size in the KKR code (parameters RCLUSTZ and RCLUSTXY) is set according to our experience with previous systems. This can be improved by automatically investigating the partial norm (see the Wildberger PhD thesis, page 48) of the structural Green function. In this way the size of the screening cluster could be determined automatically.

Auto check of input parameters

After a calculation has converged, check whether the chosen parameter set is converged (e.g. check RMAX, GMAX, the cluster size, the number of Born iterations, ...); the basic idea is sketched below.
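
A hedged sketch: rerun with stricter parameters and compare a target quantity (the output keys and the tolerance are placeholders):

RY_TO_EV = 13.605693  # Rydberg in eV

def parameters_converged(result, stricter_result, tol_ev=1e-3):
    """Compare total energies of a run and a rerun with stricter parameters
    (e.g. larger RMAX/GMAX or cluster size)."""
    delta = abs(result['total_energy_Ry'] - stricter_result['total_energy_Ry'])
    return delta * RY_TO_EV < tol_ev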

maybe change_struc_imp_aux should be a workfunction instead of a simple function

If we change the definition of the function

def change_struc_imp_aux_wf(struc, imp_info): # Note: works for single imp at center only!

and add the @wf decorator just before the function definition, AiiDA will automatically recognize that the inputs and outputs are nodes in the database and create the corresponding database nodes with links. This will improve the readability of the workflow afterwards, since it makes it easier to track how input and auxiliary structures are connected.

Here wf should be imported like this:

from aiida.work.workfunctions import workfunction as wf
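
Combined, a minimal sketch (old aiida_core < 1.0 API, matching the import above; the function body itself stays unchanged):

from aiida.work.workfunctions import workfunction as wf

@wf
def change_struc_imp_aux_wf(struc, imp_info):  # Note: works for single imp at center only!
    # ... unchanged function body producing the auxiliary structure ...
    pass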

Serial is always assumed in `start_voro`

When running start_voro the serial keyword is not explicitly passed to get_inputs_voronoi; this in turn results in get_inputs_common setting an options resource dictionary that looks like this:

        # overwrite settings for serial run
        options['withmpi'] = False
        options['resources'] = {"num_machines": 1}

This will fail if one is using an SGE-type scheduler on the machine where voronoi is compiled.

The simplest solution would be to set

serial = not self.ctx.withmpi

builder = get_inputs_voronoi(
    voronoicode, structure, options, label, description, params=params, serial=serial
)

Unless there is a specific reason for this @PhilippRue , @broeder-j ?

I recognize that in principle one can have voronoi and the host code on two different machines, with two different kinds of schedulers and setups, so perhaps this approach is too restrictive.

Dependabot can't resolve your Python dependency files

Dependabot can't resolve your Python dependency files.

As a result, Dependabot couldn't update your dependencies.

The error Dependabot encountered was:


Python 2.7 will no longer be supported in the next feature release of Poetry (1.2).
You should consider updating your Python version to a supported one.

Note that you will still be able to manage Python 2.7 projects by using the env command.
See <redacted> for more information.

Creating virtualenv aiida-kkr-NpYRtlDh-py2.7 in /home/dependabot/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...

PackageNotFound

Package pg8000 (1.15.0) not found.

If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.

View the update logs.

aiida-kkr uses only the `v+ + v-` rms error for convergence checks

In the case of two spin channels KKR reports the rms error (convergence criterion) in the following way:

      ITERATION   2 average rms-error : v+ + v- =  5.7265D-03
                                        v+ - v- =  1.2692D-06

In the aiida-kkr plugin only the first value is used. However, there can be cases where the second value is the larger one; a sketch for extracting both values follows below.
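
A hedged sketch for extracting both values from the output text (the helper name is hypothetical; note that Fortran D-exponents must be converted before calling float()):

import re

_RMS_PLUS = re.compile(r'average rms-error : v\+ \+ v- =\s*([\d.]+D[+-]\d+)')
_RMS_MINUS = re.compile(r'v\+ - v- =\s*([\d.]+D[+-]\d+)')

def parse_rms_both_channels(outfile_text):
    """Return per-iteration lists of the (v+ + v-) and (v+ - v-) rms errors."""
    plus = [float(m.replace('D', 'E')) for m in _RMS_PLUS.findall(outfile_text)]
    minus = [float(m.replace('D', 'E')) for m in _RMS_MINUS.findall(outfile_text)]
    return plus, minus

The convergence check could then use the maximum of the two channels instead of the first one only.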

bandstructure and DOS workflows don't recognize non-collinear angles

When a calculation uses non-collinear angles and a band structure or DOS calculation follows, the code neither tries to use the nonco_angles input nor the nonco angles from the output, as is done in the scf workflow.

To fix this, the band structure and DOS workflows should check whether the input calculations used nonco angles and then take the output angles (or reuse the input angles if they were not updated), as sketched below.
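
A hedged sketch of the proposed angle selection (the link labels are assumptions and need to be matched to the actual port names):

def get_nonco_angles(parent_calc):
    """Prefer updated output angles, fall back to the input angles."""
    if 'nonco_angles_out' in parent_calc.outputs:
        return parent_calc.outputs.nonco_angles_out
    if 'initial_noco_angles' in parent_calc.inputs:
        return parent_calc.inputs.initial_noco_angles
    return None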

EOS workchain failing for CPA

When submitting an EOS calculation with an alloy, the workchain fails after the first voronoi calculation.

The calculation fails when the kkr_scf_wc tries to get the last_params_voronoi:

self.ctx.last_params = self.ctx.voronoi.outputs.last_params_voronoi

aiida.common.exceptions.NotExistentAttributeError: Node<134354> does not have an output with link label 'last_params_voronoi'

However, if one submits a single SCF calculation it works without problems, indicating that there is some sort of issue in how data is passed between the EOS workchain and the SCF calculation.

The vorostart calculation fails with exit code 203

type         kkr_startpot_wc
state        Finished [203] Voronoi calculation unsuccessful. Check inputs

The VoronoiCalculation fails when setting up the RMTCORE

 | [134390|VoronoiCalculation|on_except]: Traceback (most recent call last):
 |   File "/home/jonathan/Codes/AiiDA/aiida-core/aiida/engine/processes/calcjobs/tasks.py", line 85, in do_upload
 |     calc_info = process.presubmit(folder)
 |   File "/home/jonathan/Codes/AiiDA/aiida-core/aiida/engine/processes/calcjobs/calcjob.py", line 522, in presubmit
 |     calc_info = self.prepare_for_submission(folder)
 |   File "/home/jonathan/Codes/Forks/aiida-kkr/aiida_kkr/calculations/voro.py", line 178, in prepare_for_submission
 |     natom, nspin, newsosol, warnings_write_inputcard = generate_inputcard_from_structure(
 |   File "/home/jonathan/Codes/Forks/aiida-kkr/aiida_kkr/tools/common_workfunctions.py", line 735, in generate_inputcard_from_structure
 |     params.fill_keywords_to_inputfile(output=input_filename)
 |   File "/home/jonathan/Codes/Forks/masci-tools/masci_tools/io/kkr_params.py", line 1248, in fill_keywords_to_inputfile
 |     self._check_input_consistency()
 |   File "/home/jonathan/Codes/Forks/masci-tools/masci_tools/io/kkr_params.py", line 1204, in _check_input_consistency
 |     self._check_array_consistency()
 |   File "/home/jonathan/Codes/Forks/masci-tools/masci_tools/io/kkr_params.py", line 1153, in _check_array_consistency
 |     raise TypeError('Error: array input not consistent for key {}'.format(key))
 | TypeError: Error: array input not consistent for key <RMTCORE>

When looking at the input data one can see that RMTCORE seems to be set up well: '<RMTCORE>': [2.2906805904695] (for a single-site structure with two chemical species).

Dependabot can't resolve your Python dependency files

Dependabot can't resolve your Python dependency files.

As a result, Dependabot couldn't update your dependencies.

The error Dependabot encountered was:

Updating dependencies
Resolving dependencies...
                                  
[PackageNotFound]
Package [tarfile] not found.

update [--no-dev] [--dry-run] [--lock] [--] [<packages>]...


If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.

You can mention @dependabot in the comments below to contact the Dependabot team.

Sub SCF calculations do not keep provenance of the structures

When performing an EOS calculation, I noticed that the sub-calculations performed to determine the EOS do not have the structure among their input nodes.

This is a problem, since the provenance between the calculations is not kept clearly.
