
smilei's Introduction

About Smilei

Smilei is an open-source, user-friendly electromagnetic particle-in-cell code for the kinetic simulation of plasmas. Co-developed by physicists and computer scientists, it is designed for high performance on the most recent supercomputing architectures. Smilei is used for a wide range of applications, from laser-plasma interaction to accelerator physics, space physics and astrophysics. It is also used at the bachelor and master levels as a teaching platform for plasma physics.

Learn how to use Smilei

Feedback

  • Issues: bug reports, feature requests, documentation requests
  • Chat room: support, suggestions, sharing or just to say hi!
  • Discussions: when you need more space than the chatroom
  • Contribute.

smilei's People

Contributors

agolovanov, agrassi8, beck-llr, charlesprouveur, charlesruyer, doubleagentdave, etiennemlb, fyli16, georgeholt1, hkallala, iltommi, izemzemi, jderouillat, lesnat, mccoys, mickaelgrech, nicolasaunai, oa-idris, rho-novatron, ruiapostolo, sam-kirby, sgatto, titoiride, unpairedbracket, weipengyao, xsgeng, xxirii, ydyoon93, yingchaolu, z10frank


smilei's Issues

question on multiple species in ParticleBinning diagnostics

Hello everyone.

I just have a small question on ParticleBinning diagnostics.
Say that in my namelist I put the following block:

DiagParticleBinning(
    deposited_quantity = "weight",
    every = myEvery,
    time_average = 1,
    species = ["electrons","ions"],
    axes = [
        ["ekin", 0, 100, 1000]
    ]
)

Will the ParticleBinningX.h5 file contain two different spectra, one for the electron and one for the ion species, or will it rather create a single spectrum for both species combined? If the former is the case, how do I access the two spectra? If the latter is the case, then I need one block like the one above for each species whose spectrum I want, right?
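For reference, a minimal sketch of the one-block-per-species alternative described above; it assumes myEvery is defined elsewhere in the namelist and is only an illustration, not necessarily the recommended approach:

# One ParticleBinning block per species, so each spectrum is written
# to its own ParticleBinningX.h5 file (sketch only).
for species_name in ["electrons", "ions"]:
    DiagParticleBinning(
        deposited_quantity = "weight",
        every = myEvery,
        time_average = 1,
        species = [species_name],
        axes = [
            ["ekin", 0, 100, 1000]
        ]
    )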

I tried to look it up in your guide (which by the way is very well done and useful), but I couldn't find this specific information.

Thank you in advance!

Arianna

PostProcess in Centos

Dear All,

I am trying to open the post-processing part by running:
S = happi.Open("/home/xx/yy/cc")
where /home/xx/yy/cc is the path to the h5 files (Fields0.h5),
but it does not seem to work in Linux:
bash: syntax error near unexpected token `('
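Note that happi.Open is a Python call, so it would normally be issued inside a Python or IPython session rather than directly at the bash prompt. A minimal sketch, using the example path above:

# Run this inside python or ipython, not in bash
import happi

# Open the directory containing the namelist and the *.h5 output files
S = happi.Open("/home/xx/yy/cc")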

PS: previously I ran the simulation like:
mpirun -n 4 ./smilei my_namelist.py

ALL THE BEST

abs(coeff2*qqm*term3*term5)

In line 325 of the file /src/Collisions/Collisions.cpp, should the expression abs(coeff2*qqm*term3*term5) be abs(coeff2*qqm*term3*term5*term5) ?

A problem about MPI

Dear developers,
Thank you for your advanced code. Before I installed SMILEI, I installed gcc-4.6.4, openmpi-1.10.2, hdf5-1.8.16 and python-2.7 as dependencies. Then I installed SMILEI successfully. I ran the namelist (tst1d_0_em_propagation.py) in the benchmarks directory as a test. I got an error message about MPI when the computer was trying to initialize the diagnostic fields: An error occurred in MPI_Comm_create_keyval reported by process [703004673,12] on communicator MPI_COMM_WORLD; MPI_ERR_ARG: invalid argument of some other kind. The submission script is as follows:
#!/bin/bash
#PBS -N smilei
#PBS -l nodes=1:ppn=16
#PBS -l walltime=550:50:00
#PBS -j oe
#PBS -q high
cd $PBS_O_WORKDIR
mpiexec -n 16 ./smilei benchmarks/tst1d_0_em_propagation.py
exit 0
It seems that SMILEI is designed for DSM supercomputers, but I use an SPM supercomputer. Is that the reason for the problem? How do I fix it? Thank you.

strange Ey field using Laser module

Hello,

I am trying to launch a right-hand polarized whistler wave with the laser module. The background plasma is uniform, isotropic and magnetized. The laser module is set to launch a right-hand polarized wave at 0.2 omce (electron gyrofrequency) propagating in the +x direction. In the results, By, Bz and Ez are correctly peaked at 0.2 omce, but Ey has a significant amount of other frequencies. In the plot below, the spectra of By, Bz, Ez and Ey are shown. Note that the frequency range for Ey is 10 times larger than for the other components, to show all of Ey's significant frequencies. Different colors represent different distances from the antenna.

spectrum_eb

I checked the polarization of By and Bz: it is right-hand polarized. Please see the hodogram and the field pattern below. Ez shows a similar field pattern, but Ey has a substantial amount of smaller-wavelength components (not plotted here).

hodogram_bybz

field_bybz

Here is the python input file for this run. Do you have any idea why such strange Ey field occurs? Thanks.

tst2d_0_antenna.txt

Odd explosion - energy conservation problem

Hi all,
I have encountered an error while running a 2D simulation on SMILEIv3.0.
I have a population of warm electrons and ions in the center of a 2D box. They are left to freely expand. Halfway through the simulation the 'slowly' expanding electrons and ions violently explode! The energy in the simulation goes up by a factor of 10^38 in a short amount of time. I've attached the input deck, some videos of the error and plots of the scalar quantities showing energy vs time.
Any help would be greatly appreciated!
Best regards,
Jimmy.

ZoomNoCol1LongAvoidNova.zip

p.s. The simulation hit a segmentation fault a while after the error. I have a ~16 GB core file that I can transfer if it is of use.

Problem with IO

Hi,
I tried to run the SMILEI benchmark examples with openMPI 3.0.0 and all examples doing track diagnostics (DiagTrackParticles section in the python script) are crashing due to the use of a
specific HDF5 routine that internally uses MPI IO collective buffering, i.e.
H5Pset_dxpl_mpio( transfer, H5FD_MPIO_COLLECTIVE);
When using the independent flag instead of the collective one, i.e.
H5Pset_dxpl_mpio( transfer, H5FD_MPIO_INDEPENDENT);
everything works fine.

My config:
  • Debian 8 (64bit)
  • gcc 6.3
  • Lustre 2.10 filesystem

I notice the same crash with both HDF5 1.10.1 (newest) and the version recommended for SMILEI (1.8.16).
I think it is deeply linked to the new ROMIO implementation in MPI, since using an older openMPI version (1.10.7)
works fine.

Here is the coredump in both cases:

Running diags at time t = 0

[lxbk0341:12471] *** Process received signal ***
[lxbk0341:12471] Signal: Segmentation fault (11)
[lxbk0341:12471] Signal code: Address not mapped (1)
[lxbk0341:12471] Failing at address: (nil)
[lxbk0341:12472] *** Process received signal ***
[lxbk0341:12472] Signal: Segmentation fault (11)
[lxbk0341:12472] Signal code: Address not mapped (1)
[lxbk0341:12472] Failing at address: (nil)
[lxbk0341:12473] *** Process received signal ***
[lxbk0341:12473] Signal: Segmentation fault (11)
[lxbk0341:12473] Signal code: Address not mapped (1)
[lxbk0341:12473] Failing at address: (nil)
[lxbk0341:12471] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0xf890)[0x7ffb81f31890]
[lxbk0341:12471] [ 1] [lxbk0341:12472] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0xf890)[0x7fb043b13890]
[lxbk0341:12472] [ 1] /lustre/hebe/rz/dbertini/plasma/softw/lib/openmpi/mca_io_romio314.so(ADIOI_Flatten+0x1577)[0x7fb02a9b9657]
[lxbk0341:12472] [ 2] /lustre/hebe/rz/dbertini/plasma/softw/lib/openmpi/mca_io_romio314.so(ADIOI_Flatten_datatype+0xe3)[0x7fb02a9ba363]
[lxbk0341:12472] [ 3] /lustre/hebe/rz/dbertini/plasma/softw/lib/openmpi/mca_io_romio314.so(ADIO_Set_view+0x1fd)[0x7fb02a9aff5d]
[lxbk0341:12472] [ 4] [lxbk0341:12473] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0xf890)[0x7f8e91034890]
[lxbk0341:12473] [ 1] /lustre/hebe/rz/dbertini/plasma/softw/lib/openmpi/mca_io_romio314.so(ADIOI_Flatten+0x1577)[0x7f8e77ed8657]
[lxbk0341:12473] [ 2] /lustre/hebe/rz/dbertini/plasma/softw/lib/openmpi/mca_io_romio314.so(ADIOI_Flatten_datatype+0xe3)[0x7f8e77ed9363]
/lustre/hebe/rz/dbertini/plasma/softw/lib/openmpi/mca_io_romio314.so(mca_io_romio314_dist_MPI_File_set_view+0x2f6)[0x7fb02a996e06]
[lxbk0341:12472] [ 5] /lustre/hebe/rz/dbertini/plasma/softw/lib/openmpi/mca_io_romio314.so(mca_io_romio314_file_set_view+0x83)[0x7fb02a990863]
[lxbk0341:12472] [ 6] /lustre/hebe/rz/dbertini/plasma/softw/lib/openmpi/mca_io_romio314.so(ADIOI_Flatten+0x1577)[0x7ffb68ce2657]
[lxbk0341:12471] [ 2] /lustre/hebe/rz/dbertini/plasma/softw/lib/openmpi/mca_io_romio314.so(ADIOI_Flatten_datatype+0xe3)[0x7ffb68ce3363]
[lxbk0341:12471] [ 3] /lustre/hebe/rz/dbertini/plasma/softw/lib/openmpi/mca_io_romio314.so(ADIO_Set_view+0x1fd)[0x7ffb68cd8f5d]
[lxbk0341:12471] [ 4] /lustre/hebe/rz/dbertini/plasma/softw/lib/openmpi/mca_io_romio314.so(mca_io_romio314_dist_MPI_File_set_view+0x2f6)[0x7ffb68cbfe06]
[lxbk0341:12471] [ 5] [lxbk0341:12473] [ 3] /lustre/hebe/rz/dbertini/plasma/softw/lib/openmpi/mca_io_romio314.so(ADIO_Set_view+0x1fd)[0x7f8e77ecef5d]
[lxbk0341:12473] [ 4] /lustre/hebe/rz/dbertini/plasma/softw/lib/openmpi/mca_io_romio314.so(mca_io_romio314_dist_MPI_File_set_view+0x2f6)[0x7f8e77eb5e06]
[lxbk0341:12473] [ 5] /lustre/hebe/rz/dbertini/plasma/softw/lib/openmpi/mca_io_romio314.so(mca_io_romio314_file_set_view+0x83)[0x7f8e77eaf863]
[lxbk0341:12473] [ 6] /lustre/hebe/rz/dbertini/plasma/softw/lib/libmpi.so.40(MPI_File_set_view+0xdd)[0x7f8e9047fb2d]
[lxbk0341:12473] [ 7] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(+0x30cc77)[0x7f8e91ad3c77]
[lxbk0341:12473] [ 8] /lustre/hebe/rz/dbertini/plasma/softw/lib/openmpi/mca_io_romio314.so(mca_io_romio314_file_set_view+0x83)[0x7ffb68cb9863]
[lxbk0341:12471] [ 6] /lustre/hebe/rz/dbertini/plasma/softw/lib/libmpi.so.40(MPI_File_set_view+0xdd)[0x7ffb8137cb2d]
[lxbk0341:12471] [ 7] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(+0x30cc77)[0x7ffb829d0c77]
[lxbk0341:12471] [ 8] /lustre/hebe/rz/dbertini/plasma/softw/lib/libmpi.so.40(MPI_File_set_view+0xdd)[0x7fb042f5eb2d]
[lxbk0341:12472] [ 7] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(+0x30cc77)[0x7fb0445b2c77]
[lxbk0341:12472] [ 8] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5FD_write+0xe8)[0x7f8e918d8638]
[lxbk0341:12473] [ 9] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5F__accum_write+0x2ec)[0x7f8e918bd9bc]
[lxbk0341:12473] [10] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5FD_write+0xe8)[0x7ffb827d5638]
[lxbk0341:12471] [ 9] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5F__accum_write+0x2ec)[0x7ffb827ba9bc]
[lxbk0341:12471] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5FD_write+0xe8)[0x7fb0443b7638]
[lxbk0341:12472] [ 9] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5F__accum_write+0x2ec)[0x7fb04439c9bc]
[lxbk0341:12472] [10] [10] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5PB_write+0x960)[0x7fb04449b780]
[lxbk0341:12472] [11] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5F_block_write+0xfb)[0x7fb0443a015b]
[lxbk0341:12472] [12] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5PB_write+0x960)[0x7f8e919bc780]
[lxbk0341:12473] [11] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5F_block_write+0xfb)[0x7f8e918c115b]
[lxbk0341:12473] [12] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5D__chunk_allocate+0x1c5c)[0x7f8e9187016c]
/lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5PB_write+0x960)[0x7ffb828b9780]
[lxbk0341:12471] [11] [lxbk0341:12473] [13] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5F_block_write+0xfb)[0x7ffb827be15b]
[lxbk0341:12471] [12] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5D__chunk_allocate+0x1c5c)[0x7ffb8276d16c]
[lxbk0341:12471] [13] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(+0xb9910)[0x7ffb8277d910]
[lxbk0341:12471] [14] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5D__chunk_allocate+0x1c5c)[0x7fb04434f16c]
[lxbk0341:12472] [13] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(+0xb9910)[0x7fb04435f910]
[lxbk0341:12472] [14] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5D__alloc_storage+0x21f)[0x7fb04436484f]
[lxbk0341:12472] [15] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(+0xb9910)[0x7f8e91880910]
[lxbk0341:12473] [14] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5D__alloc_storage+0x21f)[0x7f8e9188584f]
[lxbk0341:12473] [15] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5D__layout_oh_create+0x4a9)[0x7f8e9188c149]
[lxbk0341:12473] [16] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5D__alloc_storage+0x21f)[0x7ffb8278284f]
[lxbk0341:12471] [15] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5D__layout_oh_create+0x4a9)[0x7fb04436b149]
[lxbk0341:12472] [16] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5D__create+0x8f6)[0x7fb044360d56]
[lxbk0341:12472] [17] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(+0xc63dc)[0x7fb04436c3dc]
[lxbk0341:12472] [18] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5D__create+0x8f6)[0x7f8e91881d56]
[lxbk0341:12473] [17] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(+0xc63dc)[0x7f8e9188d3dc]
[lxbk0341:12473] [18] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5O_obj_create+0xa4)[0x7f8e91951c14]
[lxbk0341:12473] [19] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5D__layout_oh_create+0x4a9)[0x7ffb82789149]
[lxbk0341:12471] [16] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5D__create+0x8f6)[0x7ffb8277ed56]
[lxbk0341:12471] [17] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(+0xc63dc)[0x7ffb8278a3dc]
[lxbk0341:12471] [18] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5O_obj_create+0xa4)[0x7fb044430c14]
[lxbk0341:12472] [19] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(+0x173591)[0x7f8e9193a591]
[lxbk0341:12473] [20] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(+0x1466c6)[0x7f8e9190d6c6]
[lxbk0341:12473] [21] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5G_traverse+0xef)[0x7f8e9190dbaf]
[lxbk0341:12473] [22] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5O_obj_create+0xa4)[0x7ffb8284ec14]
[lxbk0341:12471] [19] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(+0x173591)[0x7ffb82837591]
[lxbk0341:12471] [20] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(+0x1466c6)[0x7ffb8280a6c6]
[lxbk0341:12471] [21] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(+0x173591)[0x7fb044419591]
[lxbk0341:12472] [20] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(+0x1466c6)[0x7fb0443ec6c6]
[lxbk0341:12472] [21] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5G_traverse+0xef)[0x7ffb8280abaf]
[lxbk0341:12471] [22] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5L_link_object+0xaf)[0x7ffb82838adf]
[lxbk0341:12471] [23] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5D__create_named+0x65)[0x7ffb8277e3f5]
[lxbk0341:12471] [24] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5Dcreate2+0x217)[0x7ffb827599c7]
[lxbk0341:12471] [25] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5L_link_object+0xaf)[0x7f8e9193badf]
[lxbk0341:12473] [23] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5D__create_named+0x65)[0x7f8e918813f5]
[lxbk0341:12473] [24] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5Dcreate2+0x217)[0x7f8e9185c9c7]
[lxbk0341:12473] [25] smilei(_ZN15DiagnosticTrack3runEP9SmileiMPIR11VectorPatchiP9SimWindow+0xcc7)[0x475467]
[lxbk0341:12473] [26] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5G_traverse+0xef)[0x7fb0443ecbaf]
[lxbk0341:12472] [22] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5L_link_object+0xaf)[0x7fb04441aadf]
[lxbk0341:12472] [23] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5D__create_named+0x65)[0x7fb0443603f5]
[lxbk0341:12472] [24] /lustre/hebe/rz/dbertini/plasma/softw/lib/libhdf5.so.101(H5Dcreate2+0x217)[0x7fb04433b9c7]
[lxbk0341:12472] [25] smilei(_ZN11VectorPatch11runAllDiagsER6ParamsP9SmileiMPIjR6TimersP9SimWindow+0x205)[0x4ff505]
[lxbk0341:12473] [27] smilei(main+0x17bf)[0x434faf]
[lxbk0341:12473] [28] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7f8e8f8a2b45]
[lxbk0341:12473] [29] smilei(_ZN15DiagnosticTrack3runEP9SmileiMPIR11VectorPatchiP9SimWindow+0xcc7)[0x475467]
[lxbk0341:12471] [26] smilei(_ZN11VectorPatch11runAllDiagsER6ParamsP9SmileiMPIjR6TimersP9SimWindow+0x205)[0x4ff505]
[lxbk0341:12471] [27] smilei(main+0x17bf)[0x434faf]
[lxbk0341:12471] [28] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7ffb8079fb45]
[lxbk0341:12471] [29] smilei(_ZN15DiagnosticTrack3runEP9SmileiMPIR11VectorPatchiP9SimWindow+0xcc7)[0x475467]
[lxbk0341:12472] [26] smilei(_ZN11VectorPatch11runAllDiagsER6ParamsP9SmileiMPIjR6TimersP9SimWindow+0x205)[0x4ff505]
[lxbk0341:12472] [27] smilei(main+0x17bf)[0x434faf]
[lxbk0341:12472] [28] smilei[0x43592f]
[lxbk0341:12471] *** End of error message ***

Grid definition using arbitrary units ?

Hi,
I got confused with the SMILEI internal system of units.
Let's suppose I have a 2D domain defined by the standard x_min, x_max, y_min, y_max
points expressed in m:
x_min = -90.0e-6
x_max = 60.0e-6
y_min = -80.0e-6
y_max = 80.0e-6
Now I have the following number of cells:
nx = 15000
ny = 16000
which gives a grid spacing, i.e. resolution in x and y, of 10 nm.
How should I program this simple domain using arbitrary units in SMILEI?
Which reference quantity should I use: a wavelength, a frequency?
And most of all, how can I then rescale the output once I have chosen such a reference?
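For illustration, one possible way to express such a domain in code units, where lengths are normalized to c/w_r for a chosen reference angular frequency w_r (a sketch only; the reference wavelength is a hypothetical choice and parameter names may differ between Smilei versions):

import math

c        = 299792458.
lambda_r = 0.8e-6                    # hypothetical reference wavelength [m]
L_r      = lambda_r / (2.*math.pi)   # reference length c/w_r [m]

x_min, x_max = -90.0e-6, 60.0e-6     # [m]
y_min, y_max = -80.0e-6, 80.0e-6     # [m]
nx, ny       = 15000, 16000

Lx_code = (x_max - x_min) / L_r      # domain size in code units
Ly_code = (y_max - y_min) / L_r
dx_code = Lx_code / nx               # the 10 nm cell expressed in code units
dy_code = Ly_code / ny

# These numbers would then go into the Main() block, e.g.
#   cell_length = [dx_code, dy_code]
#   grid_length = [Lx_code, Ly_code]
# Setting reference_angular_frequency_SI = 2*pi*c/lambda_r fixes the absolute
# scale when SI-dependent physics (collisions, ionization) is used.
# To rescale outputs back to SI, multiply code lengths by L_r (and times by 1/w_r).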

Crash related to very high tag number in MPI Send/Recv operations

Hello everyone,

I'm using Smilei on an Intel cluster.

Recently, I've experienced crashes when using a relatively high number of patches (e.g. a very simple simulation runs with 128x128 patches but crashes with 256x256 patches).

An inspection of the output on stderr reveals the following message:

Fatal error in PMPI_Isend: Invalid tag, error stack:
PMPI_Isend(171): MPI_Isend(buf=0x18969e0, count=1, dtype=USER, dest=13, tag=5097301, MPI_COMM_WORLD, request=0x202d3c30) failed
PMPI_Isend(115): Invalid tag, value is 5097301
Fatal error in MPI_Irecv: Invalid tag, error stack:
MPI_Irecv(170): MPI_Irecv(buf=0x1fd2e650, count=174, MPI_DOUBLE, src=12, tag=4471401, MPI_COMM_WORLD, request=0x2b75634) failed
MPI_Irecv(109): Invalid tag, value is 4471401
Fatal error in PMPI_Isend: Invalid tag, error stack:
PMPI_Isend(171): MPI_Isend(buf=0x19f28a0, count=1, dtype=USER, dest=11, tag=4369101, MPI_COMM_WORLD, request=0x203a1df0) failed
PMPI_Isend(115): Invalid tag, value is 4369101
slurmstepd: error: *** STEP 1998691.1 ON r037c01s03 CANCELLED AT 2018-09-10T16:35:56 ***
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
srun: error: r037c01s03: tasks 0-8: Killed
srun: Terminating job step 1998691.1
srun: error: r037c01s04: tasks 9-17: Killed

Using MPI_Comm_get_attr (as suggested here https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/541677 ) I've checked the maximum tag value for the MPI implementation available on the cluster and I've found that this maximum value is actually 4194303 (2^22 - 1). As far as I understand, the maximum value for the Send/Recv tags is implementation-dependent, with a minimum of 32767.
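For illustration, the same check can be done from Python with mpi4py (not part of Smilei; just a quick way to query the implementation limit):

# Query the maximum allowed tag value (MPI_TAG_UB) of the MPI implementation
from mpi4py import MPI

tag_ub = MPI.COMM_WORLD.Get_attr(MPI.TAG_UB)
print("Maximum MPI tag value:", tag_ub)   # 4194303 on the cluster described above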

So, my guess is that Smilei allows for tags greater than 4194303 in some conditions, which leads to a crash on my machine.

I would like to kindly ask if my interpretation of the crash is correct and if there are possible workarounds (other than reducing the number of patches).

Many thanks in advance.

Best regards

problems on compiling the smilei

Dear developers,
I would like to compile the smilei code on my Mac. After installing the dependencies via Homebrew and running make in the smilei directory, I get the following errors:

File "scripts/CompileTools/get-version.py", line 10
print open('.version', 'r').read()
^
SyntaxError: invalid syntax
File "scripts/CompileTools/python-config.py", line 32
print sysconfig.PREFIX
^
SyntaxError: Missing parentheses in call to 'print'
File "scripts/CompileTools/python-config.py", line 32
print sysconfig.PREFIX
^
SyntaxError: Missing parentheses in call to 'print'
Compiling src/Checkpoint/Checkpoint.cpp
In file included from src/Diagnostic/TimeSelection.h:10:0,
from src/Species/Particles.h:11,
from src/SmileiMPI/SmileiMPI.h:10,
from src/Profiles/Profile.h:6,
from src/Params/Params.h:14,
from src/Checkpoint/Checkpoint.cpp:15:
src/Tools/PyTools.h:20:10: fatal error: Python.h: No such file or directory
#include <Python.h>
^~~~~~~~~~
compilation terminated.
make: *** [build/src/Checkpoint/Checkpoint.o] Error 1

How do I fix these problems?
Thank you very much and best regards,

Huan

How to initialize a high-energy electron beam

Hi, this is more about a question, not an issue; sorry if it should not be placed here.

Just wondering how to initialize a high-energy electron beam with self-consistent bunch fields. I may set the 'mean_velocity' to do that, but the problem is that the electrons will reach the desired velocity in just one iteration, so the associated abrupt acceleration can generate strong EM radiation. Can we do a slow acceleration over a finite number of iterations so that the synchrotron radiation is minimized?
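For reference, a minimal sketch of the mean_velocity approach mentioned above (all values are hypothetical; this does not address the self-consistent-field question):

# Hypothetical electron beam initialized with a relativistic drift
Species(
    name                    = "beam_electrons",
    position_initialization = "random",
    momentum_initialization = "maxwell-juettner",
    particles_per_cell      = 16,              # hypothetical
    mass                    = 1.,
    charge                  = -1.,
    number_density          = 0.01,            # hypothetical
    mean_velocity           = [0.999, 0., 0.], # drift velocity in units of c
    temperature             = [1e-4],          # hypothetical
    boundary_conditions     = [["remove", "remove"]],
)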

Thanks very much.

Conversion of outputs to OpenPMD

This is an issue to list the problems in the conversion to the openPMD standard.
The standard is defined here, and we aim to include its extension for PIC codes, "ED-PIC", defined here.

General problems

  • Formatting the timestep by a single integer %T is very limiting. Most importantly, there is no padding with zeros to a constant width. Currently, Smilei uses the printf format %010u (see the sketch after this list). In preparation in upcoming standard
  • timeUnitSI and unitSI are not relevant when the simulation is not absolutely scaled. They should not be required attributes.
  • Unclear whether "meshesPath" and "particlesPath" can be the same as "basePath", and whether "basePath" can be simply "/%T/". Answer: they must be distinct folders
  • ED-PIC: all meshes are required to be in "meshesPath", and this group has some attributes (like fieldSolver and fieldBoundary and others) that force these meshes to be of the spatial type and basically must all correspond to the simulation box. This is problematic for defining sub-sections of the box, or phase-spaces, or temporal axes, or any mesh that is not that of the simulation box. I guess that all meshes that are not the same as the simulation box are not compliant with openPMD, because by definition, this is all openPMD is supposed to do.
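For illustration, the zero-padded iteration naming mentioned in the first item above, written as a small h5py sketch (the file name and timestep are hypothetical):

# Zero-padded iteration groups following the printf-style format %010u
import h5py

timestep = 1200
with h5py.File("example_openpmd.h5", "w") as f:
    f.create_group("data/%010d" % timestep)   # -> "data/0000001200"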

Fields

  • ED-PIC: The particleBoundary attribute is associated with the meshes, but it should be associated with particles (different for each species). In preparation in upcoming standard
  • ED-PIC: Fields are forced to be in groups such as E which contain the components x, y, and z. This can be inconvenient. For now, Smilei uses the components separately: Ex, Ey and Ez. Not a problem
  • ED-PIC: The naming of fields that derive from only one species is forced to use the convention speciesname_fieldname. This is annoying because Smilei requires the opposite, fieldname_speciesname, in order to recognize the type of field by its first letter. Furthermore, a species name can contain the underscore character, which makes the naming ambiguous. It makes much more sense to reverse it. We'll find a way to accommodate Smilei to that

Probes

Probes are not compliant with ED-PIC's meshes because their mesh is not the same as the simulation box. In addition, a mesh storage would not even work with openPMD because probes are not always rectangular. The idea would be to store them as particles instead. Still, this has some issues:

  • Probes put all the components (Ex, Ey, etc.) in the same dataset, for performance. This is not supported in openPMD.
  • The positions of probes are defined only once and stay constant. OpenPMD requires repeating the positions for each iteration, which would be a huge constraint on memory.

Particle diagnostics

  • How can we do phase-space meshes, or other kinds? This seems impossible. Even removing the ED-PIC extension (which clearly has all meshes being the same as the simulation box), the standard openPMD does not appear to have anything for phase-spaces, or any other type of mesh. Example: (x,px), (px,py), (x,p), (x,y,Ekin), (charge), etc. In preparation in upcoming standard

Tracked particles

  • Arrays in tracked particles have one axis that is time. This is not supported in openPMD. We could change how Smilei stores the data, but the performance impact is unclear. More precisely, post-processing particles usually requires taking the info on one particle over time. The openPMD standard makes this very difficult as it requires opening timesteps one by one. In preparation in upcoming standard
  • How to define particles that are suppressed => not defined in the standard.
  • Required ordering by Id => not specified in the standard?

Screens

  • Same as particle diagnostics, as they contain phase-space data, or similar distributions. In preparation in upcoming standard

Scalars

We probably could do a single-point mesh for each scalar but this has several issues.

  • Scalars would need to be converted to HDF5.
  • ED-PIC requires meshes to be the same as the simulation box, which would not be the case.
  • Storing one scalar for each timestep in a different group would make a ton of groups, all of them having one value. For simplicity of use, it would be better to have one axis being time, but this is not in openPMD. In preparation in upcoming standard

Tool "checkOpenPMD_h5.py"

  • testing of the string attributes seems incorrect: there should be an automated conversion between [string_] and str. It's too hard to make this within C++, with cross-compiler compatibility. Check if it is related to variable-length strings
  • The format /data/%T/ for basePath is not mandatory in the standard (it is just an example), but it seems to be hardcoded in the checking. It forbids any other basePath argument.

CPU stuck on the 3D laser plasma sample

Dear All,

I tried to run the 3D laser plasma benchmark sample:
tst3d_4_laser_wake.py

on an i7 laptop running CentOS 7 with 8 GB RAM, but it does not seem to proceed for either ncpus=4 or 1.
I also set the thread number to 1.

Attached please find the log file:

test log.pdf

ATB

How to define a thin target

Hi,
I could not find any forum or mailing list related to SmileiPIC, so I am using the issue tracker on github.
Could you tell me if one can simulate with SmileiPIC the interaction of, let's say, a Gaussian laser beam with a thin carbon target?
If yes, how can I define this target (as a species with a density distribution?)
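For what it's worth, a minimal sketch of a thin slab defined as a species with a density profile (2D case, all names and numbers hypothetical):

# Hypothetical thin carbon slab: constant density over a small thickness
target_start     = 10.   # code units (hypothetical)
target_thickness = 1.    # code units (hypothetical)
n0               = 100.  # in units of the reference density (hypothetical)

def carbon_density(x, y):
    return n0 if target_start <= x <= target_start + target_thickness else 0.

Species(
    name                    = "carbon",
    position_initialization = "regular",
    momentum_initialization = "cold",
    particles_per_cell      = 16,
    mass                    = 1836.0 * 12,
    charge                  = 6.0,             # assuming full ionization
    number_density          = carbon_density,
    boundary_conditions     = [["remove", "remove"], ["remove", "remove"]],
)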
Thanks in advance
Denis Bertini

Compile error

I am running into a compiling error after running 'make':

--------------
Creating binary char for src/Python/pyprofiles.py : build/src/Python/pyprofiles.pyh
/bin/sh: xxd: command not found
Checking dependencies for src/Params/Params.cpp
Creating binary char for src/Python/pyprofiles.py : build/src/Python/pyprofiles.pyh
/bin/sh: xxd: command not found
Checking dependencies for src/Params/Params.cpp
Creating binary char for src/Python/pyprofiles.py : build/src/Python/pyprofiles.pyh
/bin/sh: xxd: command not found
Checking dependencies for src/Params/Params.cpp
.
.
.
--------------

which keeps repeating.

Compile warnings

Hi Developers,
Thank you for sharing this open-source PIC code. When I compile the SmileiPIC code, some warnings appear, as follows:

src/MultiphotonBreitWheeler/MultiphotonBreitWheeler.cpp:95:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
     #pragma omp simd
 ^
src/MultiphotonBreitWheeler/MultiphotonBreitWheeler.cpp:184:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
     #pragma omp simd
 ^
src/MultiphotonBreitWheeler/MultiphotonBreitWheeler.cpp: In member function ‘void MultiphotonBreitWheeler::operator()(Particles&, SmileiMPI*, MultiphotonBreitWheelerTables&, int, int, int)’:
src/MultiphotonBreitWheeler/MultiphotonBreitWheeler.cpp:160:13: warning: variable ‘position’ set but not used [-Wunused-but-set-variable]
     double* position[3];
             ^
src/MultiphotonBreitWheeler/MultiphotonBreitWheeler.cpp: In member function ‘void MultiphotonBreitWheeler::pair_emission(int, Particles&, double&, double, MultiphotonBreitWheelerTables&)’:
src/MultiphotonBreitWheeler/MultiphotonBreitWheeler.cpp:301:14: warning: variable ‘inv_gamma’ set but not used [-Wunused-but-set-variable]
     double   inv_gamma;
              ^
Compiling src/MultiphotonBreitWheeler/MultiphotonBreitWheelerTables.cpp
Compiling src/Params/OpenPMDparams.cpp
Compiling src/Params/Params.cpp
In file included from /usr/include/python2.7/Python.h:80:0,
                 from src/Tools/PyTools.h:20,
                 from src/Params/Params.cpp:8:
src/Params/Params.cpp: In constructor ‘Params::Params(SmileiMPI*, std::vector<std::basic_string<char> >)’:
src/Params/Params.cpp:90:65: warning: deprecated conversion from string constant to ‘char*’ [-Wwrite-strings]
     Py_DECREF(PyObject_CallMethod(numpy, "seterr", "s", "ignore"));
                                                                 ^
/usr/include/python2.7/object.h:772:24: note: in definition of macro ‘Py_DECREF’
         --((PyObject*)(op))->ob_refcnt != 0)            \
                        ^
src/Params/Params.cpp:90:65: warning: deprecated conversion from string constant to ‘char*’ [-Wwrite-strings]
     Py_DECREF(PyObject_CallMethod(numpy, "seterr", "s", "ignore"));
                                                                 ^
/usr/include/python2.7/object.h:772:24: note: in definition of macro ‘Py_DECREF’
         --((PyObject*)(op))->ob_refcnt != 0)            \
                        ^
src/Params/Params.cpp:90:65: warning: deprecated conversion from string constant to ‘char*’ [-Wwrite-strings]
     Py_DECREF(PyObject_CallMethod(numpy, "seterr", "s", "ignore"));
                                                                 ^
/usr/include/python2.7/object.h:115:47: note: in definition of macro ‘Py_TYPE’
 #define Py_TYPE(ob)             (((PyObject*)(ob))->ob_type)
                                               ^
/usr/include/python2.7/object.h:775:9: note: in expansion of macro ‘_Py_Dealloc’
         _Py_Dealloc((PyObject *)(op));                  \
         ^
src/Params/Params.cpp:90:5: note: in expansion of macro ‘Py_DECREF’
     Py_DECREF(PyObject_CallMethod(numpy, "seterr", "s", "ignore"));
     ^
src/Params/Params.cpp:90:65: warning: deprecated conversion from string constant to ‘char*’ [-Wwrite-strings]
     Py_DECREF(PyObject_CallMethod(numpy, "seterr", "s", "ignore"));
                                                                 ^
/usr/include/python2.7/object.h:115:47: note: in definition of macro ‘Py_TYPE’
 #define Py_TYPE(ob)             (((PyObject*)(ob))->ob_type)
                                               ^
/usr/include/python2.7/object.h:775:9: note: in expansion of macro ‘_Py_Dealloc’
         _Py_Dealloc((PyObject *)(op));                  \
         ^
src/Params/Params.cpp:90:5: note: in expansion of macro ‘Py_DECREF’
     Py_DECREF(PyObject_CallMethod(numpy, "seterr", "s", "ignore"));
     ^
src/Params/Params.cpp:90:65: warning: deprecated conversion from string constant to ‘char*’ [-Wwrite-strings]
     Py_DECREF(PyObject_CallMethod(numpy, "seterr", "s", "ignore"));
                                                                 ^
/usr/include/python2.7/object.h:762:45: note: in definition of macro ‘_Py_Dealloc’
     (*Py_TYPE(op)->tp_dealloc)((PyObject *)(op)))
                                             ^
src/Params/Params.cpp:90:5: note: in expansion of macro ‘Py_DECREF’
     Py_DECREF(PyObject_CallMethod(numpy, "seterr", "s", "ignore"));
     ^
src/Params/Params.cpp:90:65: warning: deprecated conversion from string constant to ‘char*’ [-Wwrite-strings]
     Py_DECREF(PyObject_CallMethod(numpy, "seterr", "s", "ignore"));
                                                                 ^
/usr/include/python2.7/object.h:762:45: note: in definition of macro ‘_Py_Dealloc’
     (*Py_TYPE(op)->tp_dealloc)((PyObject *)(op)))
                                             ^
src/Params/Params.cpp:90:5: note: in expansion of macro ‘Py_DECREF’
     Py_DECREF(PyObject_CallMethod(numpy, "seterr", "s", "ignore"));
     ^
Compiling src/Params/PeekAtSpecies.cpp
Compiling src/Patch/Domain.cpp
Compiling src/Patch/Patch.cpp
Compiling src/Patch/Patch1D.cpp
Compiling src/Patch/Patch2D.cpp
src/Patch/Patch2D.cpp:244:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
                 #pragma omp simd
 ^
Compiling src/Patch/Patch3D.cpp
Compiling src/Patch/SyncCartesianPatch.cpp
Compiling src/Patch/SyncVectorPatch.cpp
Compiling src/Patch/VectorPatch.cpp
Compiling src/picsar_interface/interface.cpp
Compiling src/Profiles/Function.cpp
Compiling src/Profiles/Profile.cpp
Compiling src/Projector/Projector.cpp
Compiling src/Projector/Projector1D.cpp
Compiling src/Projector/Projector1D2Order.cpp
Compiling src/Projector/Projector1D4Order.cpp
Compiling src/Projector/Projector2D.cpp
Compiling src/Projector/Projector2D2Order.cpp
Compiling src/Projector/Projector2D4Order.cpp
Compiling src/Projector/Projector3D.cpp
Compiling src/Projector/Projector3D2Order.cpp
Compiling src/Projector/Projector3D4Order.cpp
Compiling src/Radiation/Radiation.cpp
src/Radiation/Radiation.cpp:97:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
     #pragma omp simd
 ^
Compiling src/Radiation/RadiationCorrLandauLifshitz.cpp
src/Radiation/RadiationCorrLandauLifshitz.cpp:110:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
     #pragma omp simd
 ^
src/Radiation/RadiationCorrLandauLifshitz.cpp:157:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
     #pragma omp simd reduction(+:radiated_energy_loc)
 ^
Compiling src/Radiation/RadiationLandauLifshitz.cpp
src/Radiation/RadiationLandauLifshitz.cpp:107:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
     #pragma omp simd
 ^
src/Radiation/RadiationLandauLifshitz.cpp:153:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
     #pragma omp simd reduction(+:radiated_energy_loc)
 ^
Compiling src/Radiation/RadiationMonteCarlo.cpp
Compiling src/Radiation/RadiationNiel.cpp
src/Radiation/RadiationNiel.cpp:128:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
     #pragma omp simd
 ^
src/Radiation/RadiationNiel.cpp:164:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
     #pragma omp simd
 ^
src/Radiation/RadiationNiel.cpp:181:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
     #pragma omp simd private(p,temp)
 ^
src/Radiation/RadiationNiel.cpp:244:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
         #pragma omp simd private(temp)
 ^
src/Radiation/RadiationNiel.cpp:261:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
         #pragma omp simd private(temp)
 ^
src/Radiation/RadiationNiel.cpp:279:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
         #pragma omp simd private(temp)
 ^
src/Radiation/RadiationNiel.cpp:296:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
     #pragma omp simd private(temp,rad_energy)
 ^
src/Radiation/RadiationNiel.cpp:325:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
     #pragma omp simd reduction(+:radiated_energy_loc)
 ^
Compiling src/Radiation/RadiationTables.cpp
Compiling src/Smilei.cpp
Compiling src/SmileiMPI/AsyncMPIbuffers.cpp
Compiling src/SmileiMPI/SmileiMPI.cpp
Compiling src/SmileiMPI/SmileiMPI_test.cpp
Compiling src/Species/PartBoundCond.cpp
Compiling src/Species/Particle.cpp
Compiling src/Species/Particles.cpp
Compiling src/Species/PartWall.cpp
Compiling src/Species/Pusher.cpp
Compiling src/Species/PusherBoris.cpp
src/Species/PusherBoris.cpp:59:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
     #pragma omp simd
 ^
Compiling src/Species/PusherBorisNR.cpp
Compiling src/Species/PusherHigueraCary.cpp
src/Species/PusherHigueraCary.cpp:68:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
 #pragma omp simd
 ^
Compiling src/Species/PusherPhoton.cpp
src/Species/PusherPhoton.cpp:50:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
     #pragma omp simd
 ^
Compiling src/Species/PusherRRLL.cpp
Compiling src/Species/PusherVay.cpp
src/Species/PusherVay.cpp:70:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
     #pragma omp simd
 ^
Compiling src/Species/Species.cpp
Compiling src/Species/SpeciesNorm.cpp
Compiling src/Tools/backward.cpp
Compiling src/Tools/tabulatedFunctions.cpp
Compiling src/Tools/Timer.cpp
Compiling src/Tools/Timers.cpp
Compiling src/Tools/Tools.cpp
Compiling src/Tools/userFunctions.cpp
src/Tools/userFunctions.cpp:578:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
         #pragma omp simd
 ^
src/Tools/userFunctions.cpp:584:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
         #pragma omp simd
 ^
src/Tools/userFunctions.cpp:603:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
         #pragma omp simd
 ^
src/Tools/userFunctions.cpp:609:0: warning: ignoring #pragma omp simd [-Wunknown-pragmas]
         #pragma omp simd
 ^
Linking smilei
Compiling src/Smilei.cpp for test mode
Linking smilei_test for test mode
[hp@localhost smilei]$ 

Do you know what the problem is? Thank you!

Problem running radiation reaction module on more than one core

It is mentioned in the instructions for writing the Smilei namelist for radiation reaction and the multiphoton Breit-Wheeler process that, if the tables are not provided, the code generates its own tables.
However, this is not the case. When I try to run the code without the tables, the code does not run. But when I copy the tables from the databases directory to the folder where I am running my simulation, the code runs fine.
Is there any way to automatically generate the files, or do we have to have the radiation and multiphoton Breit-Wheeler tables beforehand?

The second issue: when I run my simulations with the multiphoton Breit-Wheeler process, with the table in the same folder, my simulation runs perfectly. But when I run the radiation reaction module, it runs fine on a single core; when I try to run on more than one core, the code crashes with the following error message:

Initializing radiation reaction
 --------------------------------------------------------------------------------
         The Monte-Carlo Compton radiation module is requested by some species.

         Factor classical raidated power: 2.0051e+03
         Threshold on the quantum parameter for radiation: 1.0000e-03
         Threshold on the quantum parameter for discontinuous radiation: 1.0000e-02
         Table path: ./
 
         --- Integration F/chipa table:
             Reading of the external database
             Dimension quantum parameter: 256
             Minimum particle quantum parameter chi: 1.0000e-03
             Maximum particle quantum parameter chi: 1.0000e+01
             Buffer size: 2068
         done in 1.5149e-02s
         --- Table chiphmin and xip:
[lfq00:103105] *** Process received signal ***
[lfq00:103105] Signal: Segmentation fault (11)
[lfq00:103105] Signal code: Address not mapped (1)
[lfq00:103105] Failing at address: 0x7fffa74d48d8
[lfq00:103105] [ 0] /lib64/libpthread.so.0(+0xf370)[0x7f3a64d4c370]
[lfq00:103105] [ 1] /lib64/libc.so.6(cfree+0x1c)[0x7f3a6381538c]
[lfq00:103105] [ 2] /home/theo/ujjwal/smilei-v-3.3/smilei(_ZN15RadiationTables21read_integfochi_tableEP9SmileiMPI+0x455)[0x54d055]
[lfq00:103105] [ 3] /home/theo/ujjwal/smilei-v-3.3/smilei(_ZN15RadiationTables24compute_integfochi_tableEP9SmileiMPI+0x54)[0x54d9e4]
[lfq00:103105] [ 4] /home/theo/ujjwal/smilei-v-3.3/smilei(_ZN15RadiationTables14compute_tablesER6ParamsP9SmileiMPI+0x33)[0x5521d3]
[lfq00:103105] [ 5]              Reading of the external database
             Dimension particle chi: 128
             Dimension photon chi: 128
             Minimum particle chi: 1.0000e-03
             Maximum particle chi: 1.0000e+01
             Buffer size for MPI exchange: 132120
/home/theo/ujjwal/smilei-v-3.3/smilei(main+0x135b)[0x42ed7b]
[lfq00:103105] [ 6] /lib64/libc.so.6(__libc_start_main+0xf5)[0x7f3a637b6b35]
[lfq00:103105] [ 7] /home/theo/ujjwal/smilei-v-3.3/smilei[0x42fb5f]
[lfq00:103105] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 0 on node lfq00 exited on signal 11 (Segmentation fault).

Problems running Smilei for two- and three-dimensional geometries

I am able to compile and install Smilei successfully. It runs fine for 1d3v, but when I try to run the benchmark namelists for 2d3v and 3d, it gets stuck after printing the following lines,

Applying external fields at time t = 0

Initializing diagnostics

Running diags at time t = 0

Please help!

Confusion about process and thread settings on the cluster

Dear Professors,
  I am trying to run smilei on a cluster (16 nodes, 16 processors per node) with the torque (PBS) scheduler. When I run smilei with ./smilei my_namelist.py, everything looks good, and I see this information on screen:

       Number of MPI process: 1;    
       Number of patches : dimension 0 - number_of_patches : 256;      
       Number of thread per MPI process : 32.

  In patch_load.txt file:

      Total load: 256;     
      patch count: 256.

  However, when I run smilei on 2 nodes with the torque scheduler using the same namelist, the code runs much slower than in the former case even though the cores and memory are now doubled. And the information listed before changes to:

      Number of MPI process: 2; 
      Number of patches : dimension 0 - number_of_patches : 256;
      Number of thread per MPI process : 16.

(by adding export OMP_NUM_THREADS=16 to my .bashrc. I also tried adding export OMP_NUM_THREADS=32 to .bashrc, which does change the number of threads per MPI process to 32; however, the code still runs slowly).

  In patch_load.txt file:

      Total patches: 128; 
      patches load 128.

I changed the number of nodes and ran again using the same namelist, and found that Total load = patch count = 256 / number of nodes (or number of MPI processes; in my submission they equal each other). What I think is that only the patch count should equal the number of patches / number of nodes; the Total load should always equal 256.

  What's more, I have talked with the cluster administrator. She found that, when running smilei with ./smilei my_namelist.py, the code runs on 16 processors automatically, but when running smilei with the torque scheduler, the code runs on only 2 processors on the first node, leaving the second node unused. It seems that the number after "mpirun -n" in the command mpirun -n $PBS_NUM_NODES ./smilei my_namelist.py is the total number of processors required on our cluster, not the total number of processes. Having found this, I always set the number equal to the number of processors in the submission file and then test smilei with the torque scheduler. When I use one node, the code runs as fast as with ./smilei my_namelist.py. But if I use more than one node, MPI reports communication errors, something like this:

At least one pair of MPI processes are unable to reach each other for MPI communications. 
This means that no Open MPI device has indicated that it can be used to communicate between these processes.  
This is an error; Open MPI requires that all MPI processes be able to reach each other. 
This error can sometimes be the result of forgetting to specify the "self" BTL.
Process 1 ([[18428,1],17]) is on host: shenma046
Process 2 ([[18428,1],24]) is on host: shenma044
BTLs attempted: self sm
 
 Your MPI job is now going to abort; sorry.
--------------------------------------------------------------------------
      An error occurred in MPI_Init_thread
--------------------------------------------------------------------------
MPI_INIT has failed because at least one MPI process is unreachable from another.  
This *usually* means that an underlying communication plugin -- such as a BTL or an MTL -- 
has either not loaded or not allowed itself to be used. 
Your MPI job will now abort.

Waiting for your teaching.

  For convenience, here is one of my submission file:

 #PBS -N Smilei
 #PBS -l nodes=2:ppn=16
 #PBS -l walltime=1:00:00
 #PBS -j oe
 #PBS -q parallel11
 module purge
 module load compiler/gcc/7.2.0
 module load mpi/openmpi/1.10.7
 module load hdf5/1.8.16-parallel
 cd $PBS_O_WORKDIR
 mpirun -n $PBS_NUM_NODES ./smilei my_namelist.py
or mpirun -n $Total_Number_of_Processors ./smilei my_namelist.py

SI to Smilei code units conversion check

Hi
I just want to know if I understood the basics of Smilei code units.
So I want, for example, to set up a distribution for one electron species at rest (drift speed V0 = 0)
at Te = 116000 Kelvin (10 eV) with a number density ne = 5.51e8/m^3. The SI units should now
be translated to Smilei code units, i.e.

In SI:
°°°°°

  • number density = 5.51e8/m^3
  • Ld (Debye Length ) = Vth,e/Wp,e ~ 1.32e6/1.32e6 ~ 1 m
  • Ls (Skin depth) = c/Wr = c/Wp,e = 3e8/1.32e6 ~ 227 m

Conversion in Smilei code units:
°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°

number density = 1.0054 ( in units of Nr )
Ld ( Debye length ) = 1./227 ( in units of skin depth )
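For reference, a small script reproducing these numbers under the stated assumptions (Te = 10 eV, ne = 5.51e8 m^-3, and the reference frequency chosen as the plasma frequency):

import math

e, m_e  = 1.602176634e-19, 9.10938e-31     # SI constants
eps0, c = 8.8541878128e-12, 2.99792458e8

n_e = 5.51e8          # electron density [m^-3]
T_e = 10. * e         # 10 eV expressed in joules

w_pe = math.sqrt(n_e * e**2 / (eps0 * m_e))   # plasma frequency, ~1.32e6 rad/s
v_th = math.sqrt(T_e / m_e)                   # thermal speed,    ~1.33e6 m/s
l_D  = v_th / w_pe                            # Debye length,     ~1 m
l_s  = c / w_pe                               # skin depth,       ~227 m

# Taking w_r = w_pe, the reference density N_r = eps0*m_e*w_r**2/e**2 equals n_e,
# so the code-unit density is 1, and lengths are expressed in skin depths:
density_code = n_e / (eps0 * m_e * w_pe**2 / e**2)   # = 1
l_D_code     = l_D / l_s                             # ~ 1/227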

Is that conversion correct ?
Thanks in advance,
Denis Bertini

Saving a timetrace: (i) DiagProbe slow, (ii) Subgrid leads to 'Segmentation fault'

Dear Smilei community,

I would like to save a timetrace of a field component at certain spatial points. To this end, one can use either the probe diagnostics 'DiagProbe' or the field diagnostics 'DiagFields' with the subgrid feature. Both led in my case to unsatisfactory results:

(i) Using DiagProbe:
I have noticed that adding the diagnostics
DiagProbe(
    every = 1,
    origin = [L0x/2., L0y/2.],
    fields = ["Ey"]
)
slows down my simulation with 256x256 spatial points in 2D (only Maxwell, no particles) by a factor of 34. Have you ever noticed this problem?

(ii) Using Subgrid:
I have tried to use the 'DiagFields' diagnostics with the subgrid feature in many different ways, but always got a segmentation fault:
DiagFields(
    every = 1,
    fields = ['Ey'],
    subgrid = s_[128, 128]
)
Also using
'subgrid = s_[128:129:1, 128:129:1]' -> Segmentation fault
'subgrid = s_[128:130:1, 128:130:1]' -> Segmentation fault
'subgrid = s_[128:130, 128:130]' -> Segmentation fault
Only using the subgrid feature with 'subgrid = s_[::2, ::2]', i.e., reducing the number of written points but not the domain, works properly. Am I doing something wrong, or is there a bug?

Very best regards
Illia

postprocess in ipython (cluster running)

Dear All,

I'm trying to postprocess the data using ipython;
it seems all the required steps are done,
but the folder still cannot be recognized:

In [1]: import happi

In [2]: S = happi.Open("xx/smilei/sample_laser_propagation
...: ")
Loaded simulation 'xx/smilei/sample_laser_propagation'
Scanning for Scalar diagnostics
Scanning for Field diagnostics
Scanning for Probe diagnostics
Scanning for ParticleBinning diagnostics
Scanning for Screen diagnostics
Scanning for Tracked particle diagnostics

In [3]: This application failed to start because it could not find or load the "
in "".

Reinstalling the application may fix this problem.
Aborted (core dumped)

Thanks in advance.

Cheers

diagnostics of restart run

Hi,

I have some large-scale runs and need to restart the simulation every 12 hours. I checked the initial results and found an interesting time chunk that I should look at more closely. So more diagnostics (specifically, screen diagnostics) were added in the restart run, which were not present in the initial run. But the output came up with the warning:

[WARNING] Cannot find attribute DiagScreen0

The result is that the screen diagnostics were not written to disk. I would like to check whether adding more diagnostics is allowed in a restart run.

Thanks,
Xin

Screen normalisation to compare with experience

Hi,
I come to you because the way Smilei calculates the sum of weights in Screen diagnostics, and the way to get a physical number of particles from this diag, are still not very clear in my mind, even after reading the documentation.

For context, I'm doing 1D simulations of laser-solid interaction, getting the electron weights as a function of their kinetic energy on a screen diagnostic inside the target, then putting this source term into a Monte Carlo code to get the results I want to compare with some experiments.

For this I need to understand how to normalise the screen output in order to get a number of electrons, not a density; but I don't know which length I should use in the longitudinal direction (in the transverse direction I take the surface of the beam spot as a typical surface).

Some people also told me that for 1D PIC simulations, the diagnostic outputs should be in number of particles per unit surface, because information is lacking in the two transverse directions. For 2D, outputs should be in number of particles per unit length, because information is known in 2 directions and 1 direction is unknown.
The weight calculation is probably different in their case, and to go to these units I only need to multiply my output by a length in the longitudinal direction.

I think I understand that if the diagnostic is the sum of weights as a function of one space direction (so density as a function of space), the bin size is the length to choose, because the sum is done over the whole bin and then the number of particles in this bin will depend on its size.

However, I can't choose the cell length as a longitudinal length with a screen diagnostic, because it sums all particle weights passing through one single place, so my normalisation would strongly depend on the spatial resolution I choose for my simulation, which is clearly not physical.

I did some tests with the simulation length as a longitudinal length. I ran my simulation with different sim lengths (adapting the sim time to keep the same interaction time) and got these results:
Output in $N_r/MeV$:
var_lsim
I was at first surprised that the results were different for different sim lengths, but probably I shouldn't have been.

After that I tried to multiply the output by the critical density, my sim length (in meters), and the focal spot surface (in meters squared), and got this result:
Output in number of e-/MeV:
norm_lsim_x_waist2

This looks much better.

If I multiplied instead by the total number of cells I would get the same agreement between these 3 curves (because I would divide all outputs by the same cell length in meters, but then I get a problem with units). So I need confirmation that multiplying by the simulation length is the right thing to do, and I would like to understand why.
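For concreteness, the normalisation described above written out as a short script (all numbers hypothetical; n_c is the critical density of the laser wavelength):

import math

e, m_e  = 1.602176634e-19, 9.10938e-31
eps0, c = 8.8541878128e-12, 2.99792458e8

lambda_0 = 0.8e-6                        # laser wavelength [m] (hypothetical)
w_0      = 2. * math.pi * c / lambda_0
n_c      = eps0 * m_e * w_0**2 / e**2    # critical density [m^-3]

L_sim  = 50.0e-6                         # simulation length [m] (hypothetical)
S_spot = math.pi * (3.0e-6)**2           # focal spot surface [m^2] (hypothetical)

# Screen output (in units of N_r per MeV) -> number of electrons per MeV
output_in_Nr_per_MeV = 1.0               # placeholder value
N_e_per_MeV = output_in_Nr_per_MeV * n_c * L_sim * S_spot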

Putting this aside, I also did some tests on the particles_per_cell number, and found that the neutrality condition between electrons and ions does not have to be respected in the number of particles per cell in order to get good results, as long as this neutrality condition is respected for the density. I did not find this information in the doc (which now looks great!), and I think it would be nice to include it because it's a good feature of Smilei :)

Thank you,
L.

"weight" in ParticleBinning diag

I am wondering how to interpret "weight" in the ParticleBinning diagnostics. In the documentation, it is given as a "number density". Is it counting the number of particles in each bin? But I found these weights are real numbers instead of integers, so I am guessing there are some weighting factors involved when counting the number of particles in each bin. It matters because I need to determine whether the "weight" in some bins is statistically significant or not. So how should I interpret "weight" in ParticleBinning?

Strange Cpu time usage with multiple MPI processes on tutorial example thermal_plasma_1d.py

Hi ,
I tried to run a simulation example on only one machine.
I noticed that when using more than one MPI process, the total time needed to compute a full
simulation does not scale as expected.
I used the tutorial example "thermal_plasma_1d.py" and just changed the duration of the simulation from
1024 to 10 (673 steps).
I got the following results:

                   MPI processes       CPU time [s]
                           1                    25
                           2                    12
                           3                   113   !
                           4                   118   !

So up to 2 MPI processes the program scales perfectly, but as soon as more than 2 MPI processes are used, the CPU time increases and the scalability seems to be gone.
What could be the problem here?

My Machine configuration is:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 56
On-line CPU(s) list: 0-55
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
Stepping: 1
CPU MHz: 1760.250
CPU max MHz: 3300.0000
CPU min MHz: 1200.0000
BogoMIPS: 4801.81
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 35840K
NUMA node0 CPU(s): 0-13,28-41
NUMA node1 CPU(s): 14-27,42-55

crash on Intel Infiniband network

Hello,
Upon trying to run Smilei on our local cluster we find that it crashes with the backtrace below. This prompts the question: is the emphasis on MPI_THREAD_MULTIPLE due to your calling MPI send/receive procedures from the spawned threads? We have confirmation from Intel that this is not supported on the True Scale Fabric hardware, which would leave out a significant group of possible users, I suppose. If this is not the case, we would like some support to troubleshoot this crash. Thank you.

smilei:17736 terminated with signal 11 at PC=7ff60f45a77d SP=7ff5d1ba99d0.  Backtrace:
/usr/lib64/libpsm_infinipath.so.1(psmi_mpool_get+0xd)[0x7ff60f45a77d]
/usr/lib64/libpsm_infinipath.so.1(psmi_mq_req_alloc+0x79)[0x7ff60f46dd49]
/usr/lib64/libpsm_infinipath.so.1(ips_proto_mq_isend+0x44)[0x7ff60f47ae04]
/usr/lib64/libpsm_infinipath.so.1(psm_mq_isend+0x3e)[0x7ff60f46d82e]
/gpfs/apps/tools/modulefiles/software/OpenMPI/1.10.2-GCC-7.2.0/lib/libmpi.so.12(ompi_mtl_psm_isend+0x115)[0x7ff610493c95]
/gpfs/apps/tools/modulefiles/software/OpenMPI/1.10.2-GCC-7.2.0/lib/libmpi.so.12(+0x248e59)[0x7ff6104fbe59]
/gpfs/apps/tools/modulefiles/software/OpenMPI/1.10.2-GCC-7.2.0/lib/libmpi.so.12(PMPI_Isend+0x115)[0x7ff610363455]
/gpfs/home/echacon/smilei/Smilei/smilei(_ZN7Patch2D12initExchangeEP5Fieldi+0x3ce)[0x51d4ae]
/gpfs/home/echacon/smilei/Smilei/smilei(_ZN15SyncVectorPatch13new_exchange0ERSt6vectorIP5FieldSaIS2_EER11VectorPatch+0xbd)[0x51965d]
/gpfs/home/echacon/smilei/Smilei/smilei(_ZN15SyncVectorPatch9exchangeBER11VectorPatch+0x14c)[0x51a70c]
/gpfs/home/echacon/smilei/Smilei/smilei(_ZN11VectorPatch12solveMaxwellER6ParamsP9SimWindowidR6Timers+0x175)[0x522e45]
/gpfs/home/echacon/smilei/Smilei/smilei[0x555a71]
/gpfs/apps/tools/modulefiles/software/GCC/7.2.0/lib64/libgomp.so.1(+0x1614e)[0x7ff6114be14e]
/usr/lib64/libpthread.so.0(+0x7df5)[0x7ff610d48df5]
/usr/lib64/libc.so.6(clone+0x6d)[0x7ff60ffd11ad]

Smilei with/without threads and SLURM

Hi,
I have a problem understanding why, when using threads, our simulation is 4 to 5 times slower than when
using pure MPI (also with number_of_threads=1)!!

Maybe I am setting the threads the wrong way in SLURM.
I attached to this mail the input smilei file and the SLURM scripts (MPI only and MPI+threads).
Thanks in advance for any help!
Denis

simulation.txt
sub2.txt
sub1.txt

Function with arguments which are numpy arrays

Hello,
I've tried to implement an arbitrary Python function for the Species profile in Smilei operating on floats, and it turned out to be very slow. The documentation states that the input arguments can be numpy arrays, which allows for much quicker sampling of the Species profile. I'm using Smilei v3.3.

I've externally verified that my code works (using Jupyter) and returns a numpy array of the same size as its input arguments.
image

In my namelist, in the species block it looks like:
image

However, I keep getting an error saying that the function does not return a correct value:
image

What do I have to do to make smilei use numpy arrays on external functions for profiles?
Thanks
Savio
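
As an illustration of what the documentation means, here is a minimal sketch of a numpy-capable profile (the disc shape, center and radius are hypothetical, not the poster's actual profile): the function must accept either floats or numpy arrays and return a value of the same shape, which usually means replacing Python if/else branching by array operations such as np.where.

import numpy as np

def my_density(x, y):
    # hypothetical flat disc of radius 10 centered at (20, 40), in code units;
    # np.where works for both scalar and array arguments, so the profile can
    # be evaluated on whole arrays of positions at once
    r2 = (x - 20.)**2 + (y - 40.)**2
    return np.where(r2 < 10.**2, 1., 0.)

# then, in the Species block:
#   number_density = my_density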

compilation error

When I compiled, I got two errors:

 (1) src/Collisions/Collisions.cpp:444:68: error: ‘H5Pset_fapl_mpio’ was not declared in this scope
 (2) src/Collisions/Collisions.cpp:461:56: error: ‘H5Pset_dxpl_mpio’ was not declared in this scope

And my environment is:

  gcc 4.9.1
  openmpi 1.6.5
  hdf5 1.8.13

slow take-off of two-stream and whistler anisotropy instability

Hello,

I ran the two-stream instability case from the benchmark directory. The run completed successfully, but the simulation experienced a long quiet period before the instability starts to take off. Please see the attached scalar diagnostics showing this quiet start. I wonder why.

two_stream_scalar

I also tested this for the whistler anisotropy instability with a high anisotropy (A = Tperp/Tpar - 1 = 5.25) initialization. The instability is expected to grow right after the simulation starts, but the simulation again experienced a long quiet start (about 1.2 electron cyclotron periods) before the instability takes off. Please find the attachment for the associated scalar diagnostics.

whistler_anisotropy_scalar

Thanks.

Seg fault when activate collisional ionization

Hi !

I'm working on laser-solid target interaction, and I'm currently trying to implement collisional ionization in my namelist. However, when I activate ionizing = True in the Collisions block, the code crashes after a few timesteps.

A part of my namelist :

Zion        = [13]                          # atomic number of ions
Aion        = [27]                          # number of nucleons
ns          = [34.5]                        # ion density (in nc)
[...]
Species(
  name                      = 'e',
  mass                      = 1.0,
  charge                    = -1.0,
  position_initialization   = 'regular',
  momentum_initialization   = 'cold',
  time_frozen               = 0.,
  particles_per_cell        = 0,
  number_density            = 0.,
  boundary_conditions       = [
    ['remove'   , 'remove'],
  ],
  ionization_model          = 'none'
)

Collisions(
  species1 = ["e"],
  species2 = ["e"],
  coulomb_log = 0,
  debug_every = every,
  ionizing = False,
)

for i in range(len(Zion)):
  Species(
    name                    = 'iZ%s'%Zion[i],
    mass                    = 1836.0 * Aion[i],
    charge                  = 0.0,
    position_initialization = 'regular',
    momentum_initialization = 'cold',
    time_frozen             = 0,
    particles_per_cell      = mipc,
    number_density          = lambda x: ns[i] * profile(x),
    boundary_conditions     = [
      ['remove' , 'remove'],
    ],
    ionization_model        = 'tunnel',
    ionization_electrons    = "e",
    atomic_number           = Zion[i]
  )
  
  Collisions(
    species1 = ["e"],
    species2 = ["iZ%s"%Zion[i]],
    coulomb_log = 0,
    debug_every = every,
    ionizing = False,
  )

When the parameter ionizing of the last Collisions block is set to False, the simulation runs properly. However, when I set this parameter to True, the following error is displayed in the output:

 Time-Loop started: number of time-steps n_time = 15341
 --------------------------------------------------------------------------------
      timestep       sim time   cpu time [s]   (    diff [s] )
    1000/15341     7.6779e+01     2.5973e+01   (  2.5973e+01 )
    2000/15341     1.5352e+02     5.2086e+01   (  2.6113e+01 )
    3000/15341     2.3026e+02     7.9334e+01   (  2.7248e+01 )
[pc-newAV:05414] *** Process received signal ***
[pc-newAV:05414] Signal: Segmentation fault (11)
[pc-newAV:05414] Signal code: Address not mapped (1)
[pc-newAV:05414] Failing at address: 0x18
[pc-newAV:05414] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fa0d293e390]
[pc-newAV:05414] [ 1] /home/users1/esnault/Installation/Smilei/smilei(_ZN21CollisionalIonization8prepare2EP9ParticlesiS1_ib+0x1e9)[0x44ba29]
[pc-newAV:05414] [ 2] /home/users1/esnault/Installation/Smilei/smilei(_ZN10Collisions7collideER6ParamsP5PatchiRSt6vectorIP10DiagnosticSaIS6_EE+0x735)[0x449fc5]
[pc-newAV:05414] [ 3] /home/users1/esnault/Installation/Smilei/smilei(_ZN11VectorPatch15applyCollisionsER6ParamsiR6Timers+0x16b)[0x5065eb]
[pc-newAV:05414] [ 4] /home/users1/esnault/Installation/Smilei/smilei[0x552048]
[pc-newAV:05414] [ 5] /usr/lib/x86_64-linux-gnu/libgomp.so.1(GOMP_parallel+0x3f)[0x7fa0d1b85cbf]
[pc-newAV:05414] [ 6] /home/users1/esnault/Installation/Smilei/smilei(main+0xea3)[0x435513]
[pc-newAV:05414] [ 7] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fa0d15ba830]
[pc-newAV:05414] [ 8] /home/users1/esnault/Installation/Smilei/smilei(_start+0x29)[0x4367f9]
[pc-newAV:05414] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 5414 on node pc-newAV exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------

Code versions :

HDF5 version 1.8.16
OpemMPI version 1.10.2
Python version 2.7.12
Smilei v3.3-15-g337fd13-master

It also seems that no debug output is displayed in the terminal (in this case every=1000).

Thank you,
Léo.
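
A side remark on the namelist above, unrelated to the reported crash: in Python, a lambda defined inside a for loop captures the loop variable by reference, so every number_density = lambda x: ns[i] * profile(x) created in the loop will use the last value of i when it is evaluated after the loop has finished. A common fix is to bind the current value through a default argument, as in this sketch:

for i in range(len(Zion)):
  Species(
    # ... same parameters as above ...
    # bind i at definition time, not at call time
    number_density = lambda x, i=i: ns[i] * profile(x),
  )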

Initiating a Particle Beam at a Boundary

Hello everyone,

I would like to define a constant in-flow of a particular species at a boundary. I have attempted writing time-dependent python profiles for a species' charge density so that the particles are generated at each time step, but this appears to only generate the species once. I also noticed that 'time_profile' is not a valid parameter for Species().

How might it be possible to define a constant particle flux through a boundary/surface?

Thank you for your help.

Best regards,

Will

External field and moving window

Hello everyone,

I am trying to simulate a relativistic electron beam interacting with a plasma. In this regard, I am using the moving window feature of SMILEI.
When I apply an external magnetic field, it is present only in the first window of the system (i.e. the initial simulation domain). As the beam propagates and the simulation domain moves with the beam, the external field vanishes. As a result, I cannot study the effect of an external magnetic field for such systems.
It would be very helpful if you can suggest a solution to this.

Thank you and regards,
Ujjwal Sinha

energy conservation problem (Ubal ~ -0.1 even without plasma)

Hello everyone!

I'm experiencing some issues with the "Ubal" diagnostic, which persist even for the simplest simulations (i.e. without a plasma).

Using this input file:

import math
l0 = 2.0*math.pi   # laser wavelength [in code units]
t0 = l0            # optical cycle (defined before it is used in Tsim)
Lx = 102.4*l0
Ly = 204.8*l0
Tsim = 200.*t0
resx = 50.
dx = l0/resx
dy = l0/resx
dt = 0.98 / math.sqrt( dx**(-2) + dy**(-2) )
rest = int(l0/dt)
Main(
    geometry = "2Dcartesian",
    interpolation_order = 2,
    timestep = dt,
    simulation_time = Tsim,
    cell_length  = [dx, dy],
    grid_length = [Lx, Ly],
    number_of_patches = [8,1024],
    EM_boundary_conditions = [
        ["silver-muller","silver-muller"],
        ["silver-muller","silver-muller"],
    ],
    print_every = 100.0,
    random_seed = smilei_mpi_rank
)
LaserGaussian2D(
    box_side        = "xmin",
    a0              = 2.,
    focus           = [20.*l0, Main.grid_length[1]/2.],
    waist           = 5.*l0,
    time_envelope   = tgaussian(start=0., duration=45.0*l0, fwhm=15.*l0, center=22.5*l0, order=2)
)
DiagScalar(
    every = 1.
)

Ubal oscillates around -0.1, which should mean that 10% of the laser energy is lost, if I've correctly understood the documentation. Am I doing something wrong?
Any help would be greatly appreciated!

Best regards

Arianna
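
For reference, a minimal happi sketch for inspecting the energy-balance scalars (the path is hypothetical, and the scalar names should be checked against the documentation of the version in use):

import happi
S = happi.Open("/path/to/run")    # hypothetical results directory
S.Scalar("Uelm").plot()           # EM field energy currently in the box
S.Scalar("Uexp").plot()           # expected energy (initial + injected - lost)
S.Scalar("Ubal_norm").plot()      # normalized energy balance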

Parameters in Smilei

Hi All,

Thanks for developing the Smilei code and open it to the public.
I want to use the code for a 2D laser plasma accelerator simulation.
I had some questions regarding the parameters if possible:

I'm using the 2d py file (tst2d_4_laser_wake.py) from the benchmark and I want to change the parameters for the following
simulation:

dt = 1.0920358205802873e-16
LASER_A0 = 3.52
$ XLENGTH = 3.17e-05
$ PULSE_LEN = 8.50e-06
$ NOM_DEN_E = 1.00e+25
$ RAMP_LEN = 1.00e-05
$ WAVELENGTH = 8.00e-07
$ LASER_W0 = 1.20e-05
$ NDIM = 2.00e+00
$ FOCUS_DEPTH = 6.00e-05
$ YLENGTH = 5.00e-05
$ RAMP_START = 1.00e-05
$ K0 = ((TWOPI/WAVELENGTH))
$ OMEGA = ((LIGHTSPEED*K0))
$ LASER_W0INV = (1/ LASER_W0)
$ LASER_ZRINV = (2 * LASER_W0INV**2 / K0)
$ LASER_DENOMY = (1 + (NDIM>1)*(FOCUS_DEPTH * LASER_ZRINV)**2)
$ LASER_DENOMZ = (1 + (NDIM>2)*(FOCUS_DEPTH * LASER_ZRINV)**2)
$ LASER_AMPFACTOR = ((LASER_DENOMY * LASER_DENOMZ)**(-0.25))
$ LASER_E0 = (ELECMASS * LIGHTSPEED * OMEGA * LASER_A0 / ELEMCHARGE)
$ RAMP_KAY = (PI/RAMP_LEN)
$ PULSE_DURATION = (PULSE_LEN/LIGHTSPEED)
$ PULSE_OMEGA = (TWOPI/PULSE_DURATION)
$ OMEGA_P2 = (NOM_DEN_E*ELEMCHARGE*ELEMCHARGE/(ELECMASS*EPSILON0))
$ OMEGA_P = (sqrt(OMEGA_P2))
$ FREQ_P = (OMEGA_P/TWOPI)
$ PLASMA_WAVELEN = (LIGHTSPEED/FREQ_P)
$ YMAX = (0.5*YLENGTH)
$ YMIN = (-YMAX)
$ FLAT_START = (RAMP_START+RAMP_LEN)

The parameters are in SI units.
I just want to set the number density to 1.00e+25 m-3 (m-2 in 2D?).
I do not understand how the number density relates to the n_part_per_cell parameter.

From the density, omega_p would be set. For the laser we also have omega0 (the laser angular frequency).
Since there are two frequencies here, which one is taken as the reference frequency (omega_r)?

ATB
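
For reference, Smilei densities are normalized to the reference density n_r = eps0 * m_e * omega_r^2 / e^2, where omega_r is the chosen reference frequency (typically the laser frequency in laser-plasma runs); the number density is independent of n_part_per_cell, which only sets the statistical sampling, since macro-particle weights are adjusted so that the deposited density matches number_density. A minimal sketch of the conversion, assuming omega_r is taken as the laser frequency for the 0.8 um wavelength given above:

import math
import scipy.constants as sc

wavelength_SI = 8.00e-07                            # from the parameters above [m]
omega_r = 2.0 * math.pi * sc.c / wavelength_SI      # reference (laser) angular frequency [rad/s]
n_r = sc.epsilon_0 * sc.m_e * omega_r**2 / sc.e**2  # reference (critical) density [m^-3]

n_SI = 1.00e+25                                     # desired electron density [m^-3]
number_density = n_SI / n_r                         # value to use in the Species block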

User-defined function in DiagScreen gives a SegFault

Hi,
I'm trying to use a user-defined function in DiagScreen, but it gives me a SegFault.
Namelist (for the test) :

def function(particles):
    return particles.px

DiagScreen(
    shape = "plane",
    point = [11.*l0,0.],
    vector = [1.,0.],
    direction = "forward",
    deposited_quantity = "weight",
    species = ["e"],
    every = Tsim/2,
    axes = [[function, 0. , 8. , 100]]
)

Error :

 Time-Loop started: number of time-steps n_time = 2250
 --------------------------------------------------------------------------------
    timestep       sim time   cpu time [s]   (    diff [s] )
[pc-newAV:17978] *** Process received signal ***
[pc-newAV:17978] Signal: Segmentation fault (11)
[pc-newAV:17978] Signal code: Address not mapped (1)
[pc-newAV:17978] Failing at address: 0x48
[pc-newAV:17978] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7f9d9ccef390]
[pc-newAV:17978] [ 1] /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyErr_Occurred+0xa)[0x7f9d9d0e321a]
[pc-newAV:17978] [ 2] /usr/lib/python2.7/dist-packages/numpy/core/multiarray.x86_64-linux-gnu.so(+0x45a92)[0x7f9d9141aa92]
[pc-newAV:17978] [ 3] /usr/lib/python2.7/dist-packages/numpy/core/multiarray.x86_64-linux-gnu.so(+0x2041e)[0x7f9d913f541e]
[pc-newAV:17978] [ 4] /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(+0x15d747)[0x7f9d9d058747]
[pc-newAV:17978] [ 5] /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyDict_SetItem+0x7b)[0x7f9d9d05cecb]
[pc-newAV:17978] [ 6] /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(_PyObject_GenericSetAttrWithDict+0xb8)[0x7f9d9d096838]
[pc-newAV:17978] [ 7] /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyObject_SetAttr+0x87)[0x7f9d9d096d67]
[pc-newAV:17978] [ 8] /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyObject_SetAttrString+0x3c)[0x7f9d9d09702c]
[pc-newAV:17978] [ 9] /home/users1/esnault/Installation/smilei-v3.3/./smilei(_ZN12ParticleData3setEP9Particles+0xb0)[0x45e800]
[pc-newAV:17978] [10] /home/users1/esnault/Installation/smilei-v3.3/./smilei(_ZN27HistogramAxis_user_function8digitizeEP7SpeciesRSt6vectorIdSaIdEERS2_IiSaIiEEjP9SimWindow+0x3e)[0x45f05e]
[pc-newAV:17978] [11] /home/users1/esnault/Installation/smilei-v3.3/./smilei(_ZN9Histogram8digitizeEP7SpeciesRSt6vectorIdSaIdEERS2_IiSaIiEEP9SimWindow+0xd3)[0x488053]
[pc-newAV:17978] [12] /home/users1/esnault/Installation/smilei-v3.3/./smilei(_ZN16DiagnosticScreen3runEP5PatchiP9SimWindow+0x687)[0x450d27]
[pc-newAV:17978] [13] /home/users1/esnault/Installation/smilei-v3.3/./smilei(_ZN11VectorPatch11runAllDiagsER6ParamsP9SmileiMPIjR6TimersP9SimWindow+0x17a)[0x50460a]
[pc-newAV:17978] [14] /home/users1/esnault/Installation/smilei-v3.3/./smilei[0x551b58]
[pc-newAV:17978] [15] /usr/lib/x86_64-linux-gnu/libgomp.so.1(+0xf43e)[0x7f9d9bf3a43e]
[pc-newAV:17978] [16] /lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7f9d9cce56ba]
[pc-newAV:17978] [17] /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f9d9ba523dd]
[pc-newAV:17978] *** End of error message ***
./launcher : ligne 11 : 17978 Erreur de segmentation  (core dumped) ~/Installation/smilei-v3.3/./smilei namelist.py

I also get the same error when using a user-defined function for deposited_quantity in DiagScreen.

Local installation with :
Smilei version : 3.3
openmpi version : 1.10.2
hdf5 version : 1.8.16

Léo.

Issue with QED benchmark namelists

Hi All,
I am trying to run the namelists
tst1d_10_pair_electron_laser_collision.py
and
tst2d_10_multiphoton_Breit_Wheeler.py
using smilei-v3.3 (with QED module).
I am getting the following errors,

Initializing radiation reaction

     The Monte-Carlo Compton radiation module is requested by some species.

     Factor classical raidated power: 2.0051e+03
     Threshold on the quantum parameter for radiation: 1.0000e-03
     Threshold on the quantum parameter for discontinuous radiation: 1.0000e-02
     Table path: ./

     --- Integration F/chipa table:
         MPI repartition:
         Rank: 0 imin: 0 length: 32
         Rank: 1 imin: 32 length: 32
         Rank: 2 imin: 64 length: 32
         Rank: 3 imin: 96 length: 32
         Computation:
[ERROR](0) src/Tools/userFunctions.cpp:148 (modified_bessel_IK) x too large in modified_bessel_IK; try asymptotic expansion
[ERROR](0) src/Tools/userFunctions.cpp:148 (modified_bessel_IK) x too large in modified_bessel_IK; try asymptotic expansion
[ERROR](0) src/Tools/userFunctions.cpp:148 (modified_bessel_IK) x too large in modified_bessel_IK; try asymptotic expansion
[ERROR](0) src/Tools/userFunctions.cpp:148 (modified_bessel_IK) x too large in modified_bessel_IK; try asymptotic expansion

Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.


mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

Process name: [[33710,1],0]
Exit code: 1

Please help!

mask on the wave field

Hi,
I have a 2D simulation with an open boundary condition for the waves and a reflective boundary condition for particles. In the center of the domain, an antenna launches waves at a frequency of 0.2 * fce. But the Silver-Muller boundary condition does not work very well in my case: there are some reflected waves (see the figure below). I would like to quickly experiment with a mask function that smoothly damps the waves over some wavelengths at the boundary. How can I quickly implement this in Smilei, or could you send me a quick implementation? Thanks.

boundary
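
A smooth damping mask of the kind described might look like the sketch below; this is not a Smilei feature, just an illustration of a half-cosine ramp that is 1 in the interior and falls to 0 over a distance ramp from each x boundary (how to apply it to the fields inside the code is the actual question):

import numpy as np

def mask_1d(x, x_min, x_max, ramp):
    # 1 in the interior, smoothly decreasing to 0 at both boundaries
    m = np.ones_like(x, dtype=float)
    left  = x < x_min + ramp
    right = x > x_max - ramp
    m[left]  = 0.5 * (1.0 - np.cos(np.pi * (x[left]  - x_min) / ramp))
    m[right] = 0.5 * (1.0 - np.cos(np.pi * (x_max - x[right]) / ramp))
    return m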

Runtime increasing with more cores (homogenous and mostly empty boxes)

Hi SMILEI developers,

I investigated how SMILEI's runtime scales with the number of cores it is running on, for a small plasma in a vacuum and for a homogeneous plasma filling the simulation box. I also varied the number of patches covering the 2D boxes.

The runtime generally increases with an increasing number of cores, and the number of patches does not seem to alleviate this.

The plot below shows the runtime as a function of the number of cores and the number of patches, with the runtime denoted by the size of the data point. This is for a 'cold core' simulation (temperature too low for the Debye length to be resolved).
coldcore

For a hot core simulation (Debye length resolved):
hotcore

For a hot homogeneous plasma:
hothomo

A hot homogeneous plasma for a large number of cores, with the various runtime metrics given by the log file plotted:
hothomogenouscorescan

Notes:
The log file indicates that I am utilizing all the cores.
The input decks and log files can be found at the bottom of this post.
The input decks are missing a line or two defining the number of patches (this is because a script builds the final decks from the input deck provided).
The number of patches in the above plots is in powers of 2, i.e. 1, 2, 4, 8, 16, etc.
I am running on SCARF lexicon-2 at the Rutherford Appleton Laboratory.
I have renamed the .log and .py extensions to .txt so I can upload them to this post.
I can provide the MATLAB .fig files that generated the .png files if you wish to pull the numbers out / rotate freely.
I can provide further details upon request.

All the best,
Jimmy.

coldcore-222.txt
coldcoreinput.txt
hotcore-222.txt
hotcoreinput.txt
hothomo-222.txt
hothomoinput.txt
hothomogenouscorescan256cores.txt
hothomogenouscorescaninput.txt

Crash on 3d ionization

Hi, I have met a strange problem with 3d ionization.

Basically, I was trying laser ionization of a spherical target, but SMILEI crashed whenever the sphere radius (or volume) was big enough. For instance, the attached test input (inp.txt) works fine for a radius of 0.1 laser wavelengths (line 39, if R<0.1), but crashes for R<0.2!

I have tested 2d ionization and 3d preionized plasma, and both worked fine.

I have also tested varying the number of patches and the MPI/thread counts for the crashed case, and they didn't help.

The log files for the crashed simulation are attached as out_*.

The log file (make.txt) for compiling the code on my server (Ubuntu OS) is attached as well.

Does anyone have any idea about this problem? Many thanks.

inp.txt
make.txt
out_e.txt
out_o.txt

Seg fault when 'ionization_electrons' name is mis-matched

A small issue.

When the name given to 'ionization_electrons' of a species block does not exist or is misspelt, SMILEI segfaults. A check when parsing the input deck would be useful as this kind of error can be missed by the user when double checking the input deck.

Example:

Species(
species_type = 'hydrogen',
atomic_number = 1.0,
ionization_model = 'tunnel',
ionization_electrons = 'ionisede',
etc
)

Species(
species_type = 'Ionisede',
etc
)

Attached is the input deck which exhibits this seg fault.

All the best,
Jimmy.

input.txt

about the simulation length definition

Dear developers,
I have some trouble understanding the normalizations in the Smilei code. For the example of Practical 1 (laser propagation.py) https://smileipic.github.io/tutorials/practical1.html , in the input.py file the normalized laser wavelength is $\hat{\lambda}_0 = k_0\lambda_0 = 2\pi$:

l0 = 2.0*math.pi # laser wavelength [in code units]
t0 = l0 # optical cycle
Lsim = [32.*l0,64.*l0] # length of the simulation
Tsim = 98.*t0 # duration of the simulation

From this definition, Lsim will be [201.06192982974676, 402.1238596594935] in code units. What I would like to know is how to get the real simulation box size: are 32 and 64 the real lengths of the simulation box, and what is their unit (um or m)?

Regards,
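
For what it's worth, a minimal sketch of the conversion, assuming a physical laser wavelength of 0.8 um is chosen (Smilei itself never needs this value; one code unit of length is c/omega_r = lambda_0/(2*pi), so the box above is simply 32 x 64 laser wavelengths):

import math

lambda_0 = 0.8e-6                     # assumed physical laser wavelength [m]
l0 = 2.0 * math.pi                    # laser wavelength in code units
Lsim_code = [32.*l0, 64.*l0]          # values from the namelist above

Lsim_SI = [L * lambda_0 / (2.0 * math.pi) for L in Lsim_code]
# -> [2.56e-05, 5.12e-05] m, i.e. 32 x 64 laser wavelengths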

Problem with space_time_profile for a user-defined laser beam

Hello everyone.

I'm experiencing some issues with the initialization of a generic laser beam with
a space_time_profile function.

The simplest case which reproduces the issue is the following.

I took the example "tst1d_0_em_propagation.py" (which runs smoothly)
and I replaced the laser definition with these lines:


def zero(t):
    return 0.0
def f(t):
    return math.cos(t)

Laser(
     space_time_profile = [  zero,  f ],
)

The output shows that the Laser definition has been parsed correctly

[WARNING] Laser #0: space-time profile defined, dismissing time_envelope space_envelope omega chirp_profile phase
Laser #0: space-time profile
first axis : 1D user-defined function
second axis : 1D user-defined function

And the code runs for all the timesteps

timestep sim time cpu time [s] ( diff [s] )
51/408 3.1724e+00 1.2952e+00 ( 1.2952e+00 )
102/408 6.3140e+00 2.6316e+00 ( 1.3364e+00 )
153/408 9.4556e+00 3.9847e+00 ( 1.3531e+00 )
204/408 1.2597e+01 5.5216e+00 ( 1.5369e+00 )
255/408 1.5739e+01 7.3499e+00 ( 1.8283e+00 )
306/408 1.8880e+01 9.1586e+00 ( 1.8087e+00 )
357/408 2.2022e+01 1.0807e+01 ( 1.6482e+00 )
408/408 2.5164e+01 1.2235e+01 ( 1.4280e+00 )

However, the execution ends with an error:

Printed times are averaged per MPI process
See advanced metrics in profil.txt
[debian:18154] *** Process received signal ***
[debian:18154] Signal: Segmentation fault (11)
[debian:18154] Signal code: Address not mapped (1)
[debian:18154] Failing at address: 0x21
Segmentation fault

And the fields in the output are zero.

Am I doing something wrong?
I would be glad if you could help me with this issue.

Best regards

Luca

ERROR: No patch to clone.

Dear SMILEI community,

I have an error which appears on the cluster occigen. During the runtime the following message is given in the output file:
4020/6501 1.5710e+01 1.1425e+03 ( 8.0868e+02 )
ERROR src/Patch/VectorPatch.cpp:1323 (createPatches) No patch to clone. This should never happen!

I had two simulations with exactly the same input file, but one had the error and the other finished properly. Have you ever seen anything similar? Do you have an idea about what might be wrong?

This is the whole output file:

Reading the simulation parameters

HDF5 version 1.8.18
Python version 2.7.12
Parsing pyinit.py
Parsing v3.4-71-g2866a41-master
Parsing pyprofiles.py
Parsing THzEbeamAmp2dMirror_E0z1d0_tp2d35_wp3d14_nm10000_xmp1d4_gamma20_n28d3_tn0d1_tdm0d6_qneutr_100ppc_4.py
Parsing pycontrol.py
Calling python _smilei_check
Calling python _prepare_checkpoint_dir
[WARNING] Change patches distribution to hilbert
[WARNING] Particles cluster width set to : 5

Geometry: 2Dcartesian

 Interpolation order : 2
 Maxwell solver : Yee
 (Time resolution, Total simulation time) : (255.921148, 25.402356)
 (Total number of iterations,   timestep) : (6501, 0.003907)
            timestep  = 0.996966 * CFL
 dimension 0 - (Spatial resolution, Grid length) : (254.647909, 18.849556)
             - (Number of cells,    Cell length)  : (4800, 0.003927)
             - Electromagnetic boundary conditions: (silver-muller, silver-muller)
                 - Electromagnetic boundary conditions k    : ( [1.00, 0.00] , [-1.00, -0.00] )
 dimension 1 - (Spatial resolution, Grid length) : (15.92, 25.13)
             - (Number of cells,    Cell length)  : (400, 0.06)
             - Electromagnetic boundary conditions: (silver-muller, silver-muller)
                 - Electromagnetic boundary conditions k    : ( [0.00, 1.00] , [-0.00, -1.00] )

Load Balancing:

 Patches are initially homogeneously distributed between MPI ranks. (initial_balance = false) 
 Happens: every 20 iterations
 Cell load coefficient = 1.00
 Frozen particle load coefficient = 0.10

Initializing MPI

 Number of MPI process : 28
 Number of patches : 
	 dimension 0 - number_of_patches : 64
	 dimension 1 - number_of_patches : 2
 Patch size :
	 dimension 0 - n_space : 75 cells.
	 dimension 1 - n_space : 200 cells.
 Dynamic load balancing: every 20 iterations

OpenMP

 Number of thread per MPI process : 2

Initializing the restart environment

Initializing moving window

 Moving window is active:
	 velocity_x : 1.00
	 time_start : 15.71

Initializing particles & fields

 Creating Species : ions
 Creating Species : electrons
 Creating Species : mirror_ion
 Creating Species : mirror_eon
 Laser parameters :
	Laser #0: separable profile
		omega              : 1
		chirp_profile      : 1D built-in profile `tconstant`
		time envelope      : 1D built-in profile `tgaussian`
		space envelope (y) : 1D user-defined function
		space envelope (z) : 1D user-defined function
		phase          (y) : 1D user-defined function
		phase          (z) : 1D user-defined function
	delay phase      (y) : 0
	delay phase      (z) : 1.5708
 Adding particle walls:
	 Nothing to do

Initializing Patches

 First patch created
	 Approximately 10% of patches created
	 Approximately 20% of patches created
	 Approximately 30% of patches created
	 Approximately 40% of patches created
 All patches created

Creating Diagnostics, antennas, and external fields

 Diagnostic Fields #0  :
	 Ex Ey Ez By_m Bz_m Jx Jy Jz Rho_electrons Rho_mirror_ion Rho_mirror_eon 
 Done initializing diagnostics, antennas, and external fields

Applying external fields at time t = 0

Solving Poisson at time t = 0

 Poisson solver converged at iteration: 0, relative err is ctrl = 0.00 x 1e-14
 Poisson equation solved. Maximum err = 0.00 at i= -1

Time in Poisson : 0.00

Initializing diagnostics

Running diags at time t = 0

Species creation summary

	 Species 0 (ions) created with 52776600 particles
	 Species 1 (electrons) created with 52776600 particles
	 Species 2 (mirror_ion) created with 13696000 particles
	 Species 3 (mirror_eon) created with 13696000 particles

Memory consumption

 (Master) Species part = 0 MB
 Global Species part = 6.191 GB
 Max Species part = 1177 MB
 (Master) Fields part = 10 MB
 Global Fields part = 0.250 GB
 Max Fields part = 10 MB

Expected disk usage (approximate)

 WARNING: disk usage by non-uniform particles maybe strongly underestimated,
    especially when particles are created at runtime (ionization, pair generation, etc.)
 
 Expected disk usage for diagnostics:
	 File Fields0.h5: 10.58 G
	 File scalars.txt: 1.11 M
 Total disk usage for diagnostics: 10.58 G

Cleaning up python runtime environement

 Checking for cleanup() function:
 python cleanup function does not exist
 Calling python _keep_python_running() :
	 Closing Python

Time-Loop started: number of time-steps n_time = 6501

timestep       sim time   cpu time [s]   (    diff [s] )
804/6501     3.1435e+00     6.7802e+01   (  6.7802e+01 )

1608/6501 6.2851e+00 1.1969e+02 ( 5.1892e+01 )
2412/6501 9.4267e+00 1.6992e+02 ( 5.0226e+01 )
3216/6501 1.2568e+01 3.3386e+02 ( 1.6394e+02 )

Window starts moving
4020/6501 1.5710e+01 1.1425e+03 ( 8.0868e+02 )
ERROR src/Patch/VectorPatch.cpp:1323 (createPatches) No patch to clone. This should never happen!

Restart issue

Hello,
I'm trying to start a simulation from the output of a previous simulation. I have successfully restarted a simulation from two previous checkpoints. However, on the third restart I obtain a rather odd error:
READING fields and particles for restart
ERROR src/Tools/H5.h:324 (getVect) Reading vector Position-0 is not 1D but -1D
ERROR src/Tools/H5.h:324 (getVect) Reading vector Position-1 is not 1D but -1D
ERROR src/Tools/H5.h:324 (getVect) Reading vector Position-1 is not 1D but -1D
ERROR src/Tools/H5.h:324 (getVect) Reading vector Position-0 is not 1D but -1D
ERROR src/Tools/H5.h:324 (getVect) Reading vector Position-0 is not 1D but -1D
ERROR src/Tools/H5.h:324 (getVect) Reading vector Position-0 is not 1D but -1D
ERROR src/Checkpoint/Checkpoint.cpp:588 (restartPatch) Number of species differs between dump (0) and namelist (3)
ERROR src/Tools/H5.h:324 (getVect) Reading vector Position-0 is not 1D but -1D
ERROR src/Tools/H5.h:324 (getVect) Reading vector Momentum-0 is not 1D but -1D

I don't understand this error, as it's trying to read data that Smilei has output itself. I've attached all the log and input deck files that I used for this simulation.

log_1.txt
log_2.txt
log_3.txt
log_4.txt
SimulationFiles.zip

Thanks
Savio Rozario

Collision block causing seg fault.

Dear SMILEI developers,

I am encountering a seg fault when using the collisions block. I can sometimes get the simulation to complete with collisions if I carefully choose the size of the 'core' and the number of particles per cell for the colliding species. I am finding it hard to generate more information than this due to the nature of the error - apologies!

Below are four decks with the .err and .log files.

  • workingwithcollisions.txt : Only works when a very narrow range of 'radius', 'ppc' and 'ppc_core' is chosen.
  • nocollisions.txt : Works with an arbitrary choice of radius, ppc and ppc_core.
  • collisionson.txt : An example of the simulation seg faulting with collisions on.
  • HDF5DIAGerror.txt : The failed simulation with the most insightful error.

The output of all simulations, whether successful or not, looks sensible. That is to say the fields and particle distributions are reasonable.

Thank you for any help,
All the best,
Jimmy.

workingwithcollisions.txt
workingwithcollisionslog.txt

nocollisions.txt
nocollisionslog.txt

collisionon.txt
collisiononerr.txt
collisiononlog.txt

HDF5DIAGerrorlog.txt
HDF5DIAGerror.txt
HDF5DIAGerrorerr.txt

particle diagnostics

I have some difficulty using the current particle binning diagnostics to do a field-particle correlation. The x-axis of the phase space is the difference between the gyro-phase of the particles and the cyclotron phase of the waves, which is called the relative phase angle. The y-axis is the particle velocity parallel to the background magnetic field.

If I use the particle binning diagnostics, 5 axes are needed, i.e. x, y, vx, vy and vz. Note that x and y are needed to select the location of the wave to be correlated. In this case, the five-dimensional data takes more space than the actual particle data since the five-dimensional matrix is very sparse (only 288 hundred particles in each x-y cell). For practical parameters, the runs producing the five-dimensional data usually crash.

For the above field-particle correlations, I wish a diagnostic could be added that outputs the particle data directly. This may save more space than the binning technique when high-dimensional data is desired. It could also restrict the range of the axes so that only a subset of particles is output.

Let me know whether this request is possible.

Merry Christmas and a happy new year :)
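
For reference, more recent Smilei versions expose raw particle output through the DiagTrackParticles block, which accepts an optional filter function to restrict the output to a subset of particles; a sketch under that assumption (species name and parameter names are illustrative and should be checked against the documentation of the version in use):

def my_filter(particles):
    # keep only particles in a hypothetical sub-region with positive px
    return (particles.x > 10.) * (particles.x < 12.) * (particles.px > 0.)

DiagTrackParticles(
    species = "eon",          # hypothetical species name
    every = 100,
    filter = my_filter,
    attributes = ["x", "y", "px", "py", "pz", "w"]
)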

Errors with TrackParticles.animate() and TrackParticles.toVTK()

Greetings,

I have been trying out the various Happi diagnostics, and I am receiving errors with TrackParticles.animate() and TrackParticles.toVTK(). I am uncertain what is causing these issues. Other diagnostics such as Field.animate() and Field.toVTK() seem to work perfectly.

I've been trying these diagnostics with a (slightly modified) copy of "tst3d_1_thermal_plasma.py" from the benchmarks folder:

tst3d_1_thermal_plasma.txt

Here is the Happi file used for TrackParticles.animate():
happi_thermalplasma_animate.txt

This is the output for TrackParticles.animate():
animate_error.txt

A completely blank Matplotlib window appears, but never loads any data. The program seems to freeze at timestep=0. Furthermore, my laptop slows down considerably. It is possible that there is too much data for my computer to handle (although I have let the program run for quite some time without a change in the result). I have tried adjusting the variable "n0" in "tst3d_1_thermal_plasma.py", but I still see an output indicating that species 'proton' contains 140608 particles (the same value as before).

The Happi code for TrackParticles.toVTK() is the following:
happi_thermalplasma_toVTK.txt

And the error is:
toVTK_error.txt

Essentially, toVTK() does not recognize 'rendering':

TypeError: toVTK() got an unexpected keyword argument 'rendering'

Any help would be greatly appreciated.

Best regards,

Will
