surfaxe's People

Contributors

brlec, dandavies99, danielskatz, kavanase

surfaxe's Issues

[SUGGESTION] Adding pre-commit hooks for linting/formatting the code

Hi,
Thanks for developing this nice package.

While I was reading/reviewing the code, I found a couple of unused imports/variables (listed below). I suggest adding pre-commit hooks with pylint, yapf, and prospector to format/lint the code at commit time. This would catch such issues and also improve the readability of the code.

Unused imports:

  • os, mpl, plt, Element, Lattice, and SiteCollection in analysis.py.
  • Structure and numpy in convergence.py.
  • os in generation.py.
  • mpl in io.py.
  • Slab, Element, and PeriodicSite in vasp_data.py.
  • warnings and Structure in test_analysis.py.
  • warnings, os, Structure, and Slab in test_convergence.py.
  • warnings and Structure in test_generation.py.
  • warnings, os, Structure, and surfaxe.io in test_io.py.

Unused variables:

Best regards,
Pezhman

multiprocessing to speed up surface generation

Surface generation could probably be made faster with a simple multiprocessing implementation. This would be particularly useful when many thicknesses/vacuums are iterated over. Currently in generation.get_all_slabs:

# Iterate through vacuums and thicknesses
provisional = []
for vacuum in vacuums:
    for thickness in thicknesses:
        all_slabs = generate_all_slabs(struc, max_index, thickness, vacuum,
                                       center_slab=center_slab, bonds=bonds,
                                       **all_slabs_kwargs)

The way I would see this working: use e.g. itertools.product to iterate over vacuums and thicknesses, then pass that iterable and the slab-generation function to e.g. multiprocessing.Pool.map to create the all_slabs list. Along the lines of:

combos = itertools.product(vacuums, thicknesses)
with multiprocessing.Pool() as pool:
    all_slabs = pool.map(functools.partial(get_all_slabs, <args>), combos)

It may need some slight reworking to allow the args to be passed properly to get_all_slabs.
This could then feed into the next part of the code which filters out non-polar slabs etc. as normal.

Line length

Some lines of code are very long. We should decide on a maximum line length, restructure the existing code to fit it, and stick to the limit going forward.

customise the number of parallel workers

When parallelisation is used, the number of workers defaults to the number of CPUs. However, the CPUs detected can be logical processors rather than physical cores; if a CPU has hyperthreading, only half of the logical processors should be used.

In general, it would be useful to be able to lower the number of workers. In addition, too many workers can lead to increased memory usage due to per-process overheads.

Accessibility features

The initial design was not really done in line with accessibility design principles. I think the package should try to implement at least the bare minimum.
The following changes would probably be good to have:

  • default plots use colour schemes safe for all types of colourblindness
  • increase the font size across all plots
  • add alt text descriptions to all figures used in the tutorials (I don't think you can add them to the metadata of images as you make them, but it would be nice if we could)
  • improve the descriptions of modules to make them clearer
  • no large blocks of text in the tutorials/README
  • make sure the module names are distinguishable from each other and descriptive

implement unittesting

It would be nice if the package included tests. This would make sure it functions as expected, as well as making further development easier, since unintentionally broken features can be caught early.

Suggestions:
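As a starting point, a minimal unittest example of the shape such tests could take (slab_thickness is a hypothetical helper, not a surfaxe function):

```python
import unittest

def slab_thickness(z_coords):
    # Hypothetical helper: thickness as the spread of atomic z coordinates
    return max(z_coords) - min(z_coords)

class TestSlabThickness(unittest.TestCase):
    def test_simple_slab(self):
        self.assertEqual(slab_thickness([0.0, 2.5, 10.0]), 10.0)

    def test_single_layer(self):
        self.assertEqual(slab_thickness([4.2]), 0.0)
```

Run with `python -m unittest` (or pytest, which discovers unittest-style cases too); either can then be wired into CI so every commit is checked.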

JSON serialization error `Object of type int32 is not JSON serializable`

When I run the code below (code version 6cc8c38):

data = generate_slabs(structure='./POSCAR.mp-149_Si',
                      hkl=2,
                      parallelise=True,
                      thicknesses=[20, 30],
                      vacuums=[20, 30],
                      potcar_functional='ps',
                      # save_slabs=False,
                      # save_metadata=False,
                      make_input_files=False)

The following error is thrown:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-3-86e11d266020> in <module>
----> 1 data= generate_slabs(
      2     structure='./POSCAR.mp-149_Si',
      3                     hkl=2,
      4                     parallelise=True,
      5                     #save_metadata=False,

~/tmp/surfaxe/surfaxe/generation.py in generate_slabs(structure, hkl, thicknesses, vacuums, save_slabs, save_metadata, json_fname, make_fols, make_input_files, max_size, center_slab, ox_states, is_symmetric, layers_to_relax, fmt, name, config_dict, user_incar_settings, user_kpoints_settings, user_potcar_settings, parallelise, **kwargs)
    298             i['slab'] = i['slab'].as_dict()
    299         with open(json_fname, 'w') as f:
--> 300             json.dump(unique_list_of_dicts_copy, f)
    301 
    302     if save_slabs:

~/miniconda3/envs/py3w/lib/python3.8/json/__init__.py in dump(obj, fp, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
    177     # could accelerate with writelines in some versions of Python, at
    178     # a debuggability cost
--> 179     for chunk in iterable:
    180         fp.write(chunk)
    181 

~/miniconda3/envs/py3w/lib/python3.8/json/encoder.py in _iterencode(o, _current_indent_level)
    427             yield _floatstr(o)
    428         elif isinstance(o, (list, tuple)):
--> 429             yield from _iterencode_list(o, _current_indent_level)
    430         elif isinstance(o, dict):
    431             yield from _iterencode_dict(o, _current_indent_level)

~/miniconda3/envs/py3w/lib/python3.8/json/encoder.py in _iterencode_list(lst, _current_indent_level)
    323                 else:
    324                     chunks = _iterencode(value, _current_indent_level)
--> 325                 yield from chunks
    326         if newline_indent is not None:
    327             _current_indent_level -= 1

~/miniconda3/envs/py3w/lib/python3.8/json/encoder.py in _iterencode_dict(dct, _current_indent_level)
    403                 else:
    404                     chunks = _iterencode(value, _current_indent_level)
--> 405                 yield from chunks
    406         if newline_indent is not None:
    407             _current_indent_level -= 1

[... the _iterencode_list/_iterencode_dict frames above repeat as the encoder recurses into nested lists and dicts ...]
~/miniconda3/envs/py3w/lib/python3.8/json/encoder.py in _iterencode(o, _current_indent_level)
    436                     raise ValueError("Circular reference detected")
    437                 markers[markerid] = o
--> 438             o = _default(o)
    439             yield from _iterencode(o, _current_indent_level)
    440             if markers is not None:

~/miniconda3/envs/py3w/lib/python3.8/json/encoder.py in default(self, o)
    177 
    178         """
--> 179         raise TypeError(f'Object of type {o.__class__.__name__} '
    180                         f'is not JSON serializable')
    181 

TypeError: Object of type int32 is not JSON serializable

It is caused by numpy int32 values somewhere in the data. These are fine for Python, but the standard json encoder only handles native Python types, so numpy scalars must be converted before dumping. I am not able to pinpoint where the offending value comes from.
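One standard workaround, without having to find the offending entry: pass a default hook to json.dump that converts numpy scalars to native Python values (np_encoder is a name introduced here for illustration):

```python
import json

def np_encoder(obj):
    # numpy scalars (int32, float64, ...) expose .item(), which returns
    # the equivalent native Python value that json can handle
    if hasattr(obj, 'item'):
        return obj.item()
    raise TypeError(
        f'Object of type {type(obj).__name__} is not JSON serializable')

# In generation.py this would become something like:
# json.dump(unique_list_of_dicts_copy, f, default=np_encoder)
```

The default callable is only invoked for objects the encoder cannot serialize itself, so native types pass through untouched.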

propagation of `MAGMOM` and visualisation of magnetic ordering

It would be a nice feature to be able to generate surfaces with the magnetic orderings (either collinear or non-collinear) from the bulk relaxed orderings.

This should be doable with pymatgen, as you can add_site_property to your structure.

My current hack-and-slash approach is to:

bulk_poscar.add_site_property('magmom', vasprun.as_dict()['input']['incar']['MAGMOM'])

then, after you have generated your slab from bulk_poscar, reorder the magmoms for visualisation and for setting MAGMOM in the INCAR (if taking a manual route):

def magnetism_identify(slab, magmom):
    # Swap every site whose magmom matches the target value for Sc
    for i, specie in enumerate(slab.as_dict()['sites']):
        if specie['properties']['magmom'] == magmom:
            slab.replace(i, 'Sc')
    magnetic_slab = slab.get_sorted_structure()
    return magnetic_slab

where the atoms with a positive MAGMOM are replaced with 'Sc' and the structure is then sorted, so you can edit the POSCAR such that all the spin-ups are Sc and all the spin-downs are the original atom -> useful for visualisation.

This is a bit of a dirty way of doing it, but it makes for easy visualisation.


Implementation of non-VASP codes

Is this something we're going to do? If yes, how? Currently the package only works with VASP, but it could be an idea to try to future-proof it.

Uploading to PyPI

Hi @brlec:

Thank you for your very nice work on surfaxe. I wanted to ask if there's a possibility you'd be willing to upload surfaxe to PyPI anytime soon. I'm working on a project where we are considering adding surfaxe as a dependency, but we are a bit hesitant to rely on a git link for CI testing purposes (and because it limits our ability to upload to PyPI).

It looks like it should be pretty straightforward for this repo seeing as the dependencies are all pip installable. If you need a hand with anything in that regard, don't hesitate to reach out.

Cheers,
Andrew

missing command line interface

The README mentions a series of scripts for command-line usage.
It would be nice if a proper command-line interface could be implemented.

The click package could be used to create such an interface and have it installed automatically during the pip installation process.
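click provides decorators for building such an interface; as a dependency-free sketch of the same shape, here is the equivalent with stdlib argparse (the subcommand and option names below are illustrative, not the package's actual interface):

```python
import argparse

def build_parser():
    # Illustrative CLI shape for a surfaxe entry point; the 'generate'
    # subcommand and its options are hypothetical
    parser = argparse.ArgumentParser(prog='surfaxe')
    sub = parser.add_subparsers(dest='command', required=True)
    gen = sub.add_parser('generate', help='generate slabs from a bulk structure')
    gen.add_argument('structure', help='path to the bulk structure file')
    gen.add_argument('--hkl', type=int, default=1)
    gen.add_argument('--thicknesses', type=int, nargs='+', default=[20])
    gen.add_argument('--vacuums', type=int, nargs='+', default=[20])
    return parser

if __name__ == '__main__':
    args = build_parser().parse_args()
```

Whichever library is chosen, registering the entry point under console_scripts in setup.py is what makes pip install the command automatically.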

electrostatic_potential along another axis

Hi,

I use VASP with nanotubes. These nanotubes are periodic along the c-axis, with vacuum space outside the structure along the a and b axes. I like Surfaxe's ease of analysis of LOCPOT files and was wondering if there is a way to specify the axis when using surfaxe.analysis.electrostatic_potential().

EDIT: fixed it by editing the source code locally, changing the axis.
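For reference, the planar averaging behind such an analysis generalises naturally to an axis argument. A hypothetical pure-Python sketch (in practice the LOCPOT grid would be a numpy array and grid.mean(axis=...) would do this in one call):

```python
import itertools

def planar_average(grid, axis=2):
    # grid is a nested list indexed [x][y][z]; return the potential
    # averaged over the two axes perpendicular to `axis`
    shape = (len(grid), len(grid[0]), len(grid[0][0]))
    sums = [0.0] * shape[axis]
    for x, y, z in itertools.product(*(range(n) for n in shape)):
        sums[(x, y, z)[axis]] += grid[x][y][z]
    n_perp = shape[0] * shape[1] * shape[2] // shape[axis]
    return [s / n_perp for s in sums]
```

With axis exposed as a parameter like this, the c-axis default (axis=2) still covers slabs while nanotube users can pass axis=0 or axis=1.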

error using `parallelise=False`

I got this error when using parallelise=False.

data = generate_slabs(structure='./POSCAR.mp-149_Si',
                      hkl=2,
                      parallelise=False,
                      thicknesses=[20, 30],
                      vacuums=[20, 30],
                      potcar_functional='ps',
                      make_input_files=False)

There is a missing `,` between `'primitive'` and `'max_normal_search'`:

~/tmp/surfaxe/surfaxe/generation.py in generate_slabs(structure, hkl, thicknesses, vacuums, save_slabs, save_metadata, json_fname, make_fols, make_input_files, max_size, center_slab, ox_states, is_symmetric, layers_to_relax, fmt, name, config_dict, user_incar_settings, user_kpoints_settings, user_potcar_settings, parallelise, **kwargs)
    218     else:
    219         # Set up kwargs again
--> 220         SG_kwargs = {k: mp_kwargs[k] for k in ['in_unit_planes', 'primitive' 
    221         'max_normal_search', 'reorient_lattice', 'lll_reduce']}
    222         gs_kwargs = {k: mp_kwargs[k] for k in ['ftol', 'tol', 'max_broken_bonds', 

~/tmp/surfaxe/surfaxe/generation.py in <dictcomp>(.0)
    218     else:
    219         # Set up kwargs again
--> 220         SG_kwargs = {k: mp_kwargs[k] for k in ['in_unit_planes', 'primitive' 
    221         'max_normal_search', 'reorient_lattice', 'lll_reduce']}
    222         gs_kwargs = {k: mp_kwargs[k] for k in ['ftol', 'tol', 'max_broken_bonds', 

KeyError: 'primitivemax_normal_search'

However, I think it would be better to have the parallel and non-parallel branches call the same function.
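The mangled key in the KeyError is the giveaway: Python implicitly concatenates adjacent string literals, so the missing comma fuses the two intended keys into one. A minimal demonstration:

```python
# Without the comma, 'primitive' and 'max_normal_search' become one key:
keys_buggy = ['in_unit_planes', 'primitive'
              'max_normal_search', 'reorient_lattice', 'lll_reduce']
assert 'primitivemax_normal_search' in keys_buggy
assert len(keys_buggy) == 4

# With the comma restored, all five intended keys are present:
keys_fixed = ['in_unit_planes', 'primitive',
              'max_normal_search', 'reorient_lattice', 'lll_reduce']
assert len(keys_fixed) == 5
```

The parallel branch never hits this because it builds its kwargs separately, which is another argument for routing both branches through the same function.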
