In principle it should be possible since the checkpoint files contain only particles and fields.
@mccoys any ideas why we store the Screen diag number as an attribute in the checkpoint?
the warning comes from this block:
https://github.com/SmileiPIC/Smilei/blob/master/src/Checkpoint/Checkpoint.cpp#L415
and the dump writing is here:
https://github.com/SmileiPIC/Smilei/blob/master/src/Checkpoint/Checkpoint.cpp#L211
Ok, I see: the problem is that the Screen is "incremental", so we need to store the particles that have crossed the screen.
In your case, the screen "appears" at the restart timestep; the diagnostic should be created and its values should restart from 0.
I did not fully understand why extra diagnostics in the restart run cause a problem. Note that there is no problem when the diagnostics in the restart run are the same as those of the initial run.
Here are some extra error messages from one of the MPI processes, besides the warning:
HDF5-DIAG: Error detected in HDF5 (1.8.18) MPI-process 289:
  #000: H5G.c line 467 in H5Gopen2(): unable to open group
    major: Symbol table
    minor: Can't open object
  #001: H5Gint.c line 320 in H5G__open_name(): group not found
    major: Symbol table
    minor: Object not found
  #002: H5Gloc.c line 430 in H5G_loc_find(): can't find object
    major: Symbol table
    minor: Object not found
  #003: H5Gtraverse.c line 861 in H5G_traverse(): internal path traversal failed
    major: Symbol table
    minor: Object not found
  #004: H5Gtraverse.c line 641 in H5G_traverse_real(): traversal operator failed
    major: Symbol table
    minor: Callback failed
  #005: H5Gloc.c line 385 in H5G_loc_find_cb(): object 'FieldsForDiag0' doesn't exist
    major: Symbol table
    minor: Object not found
HDF5-DIAG: Error detected in HDF5 (1.8.18) MPI-process 289:
  #000: H5G.c line 812 in H5Gclose(): not a group
    major: Invalid arguments to routine
    minor: Inappropriate type
The issue is that the Screen needs to integrate all the data collected since the beginning of the simulation. Every timestep, it increments its arrays of data with the new data collected during that timestep. This is why the Screen needs to store its data in the checkpoints. When you restart the simulation, the Screen expects to find its previous data stored in the checkpoint. In your case, as you did not have a Screen in your first simulation, there was no stored Screen data, and the restart failed.
We can still fix this problem by initializing the data to 0 when the diag did not exist before. We should be able to provide a bugfix soon.
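The zero-initialization fix described here could look roughly like the following. This is a hypothetical illustration, not Smilei's actual checkpoint code: the `restore_screen` helper and the dict-based checkpoint are invented for the example.

```python
# Hypothetical sketch of the proposed fix: when restoring an incremental
# diagnostic, fall back to a zero-filled accumulator if the diag was not
# present when the checkpoint was written.
def restore_screen(checkpoint, diag_name, n_bins):
    stored = checkpoint.get(diag_name)
    if stored is None:
        # Diag added after the checkpoint was made: restart from 0
        return [0.0] * n_bins
    return list(stored)  # resume the accumulation where it left off

ckpt = {"Screen0": [1.0, 2.0, 3.0]}        # Screen0 existed; Screen1 is new
print(restore_screen(ckpt, "Screen0", 3))  # [1.0, 2.0, 3.0]
print(restore_screen(ckpt, "Screen1", 3))  # [0.0, 0.0, 0.0]
```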
Hi @phyax,
upon reading the code around the lines mentioned above, smilei should run fine[1] when you add a DiagScreen between two checkpoints.
On the other hand, the HDF5 errors you mention later look like the patch structure changed between the two runs.
What changes did you make between the first run and the restart?
[1] There might be an issue in case you changed the Screen size between restarts. We are investigating this.
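One way to handle a Screen whose size changed between restarts is to reset the accumulator when the shapes disagree. This is a hypothetical sketch of that idea, not Smilei code; the `resume_or_reset` helper is invented.

```python
# Hypothetical guard: if the Screen shape in the new namelist no longer
# matches the accumulator stored in the checkpoint, resuming would mix
# incompatible histograms, so reset to zeros instead.
def resume_or_reset(stored, n_bins):
    if stored is not None and len(stored) == n_bins:
        return list(stored)       # same size: safe to resume
    return [0.0] * n_bins         # size changed (or diag is new): start over

print(resume_or_reset([1.0, 2.0, 3.0], 3))  # [1.0, 2.0, 3.0]
print(resume_or_reset([1.0, 2.0], 3))       # [0.0, 0.0, 0.0]
```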
Hi @iltommi ,
The changes between the first run and restart are:
- turn off probe diagnostics in the first run.
- add screen diagnostics and probes at the screen location in order to do field-particle correlation.
All other settings remain the same, so I believe the patch structure should not change between the first run and the restart.
In fact, at first I tried to use DiagParticles in the restart run. It did not work. Then I tried DiagScreen. It also did not work. Perhaps I need to have these diagnostics in the first run so that they can work in the restart run.
We made some modifications to the code (but these should not impact the DiagScreen much).
Anyway, here is what I tested (it's based on ../benchmarks/tst1d_4_radiation_pressure_acc.py).
Suppose you're in the smilei root. Create 2 dirs, run1 and run2:
mkdir run{1,2}
Split the benchmark file in two at line 100 (separating the DiagScreens into run2/run2.py):
split -l 100 benchmarks/tst1d_4_radiation_pressure_acc.py
mv xaa run1/run1.py
mv xab run2/run2.py
Run the first simulation (it will create checkpoint files at timestep 10000):
cd run1
mpirun -np 4 ../smilei run1.py 'DumpRestart(dump_step = 10000)'
And run the restart (with the DiagScreens):
cd ../run2
mpirun -np 4 ../smilei ../run1/run1.py run2.py 'DumpRestart(restart_dir="../run1")'
And here are the resulting files:
run1
├── Fields0.h5
├── ParticleDiagnostic0.h5
├── ParticleDiagnostic1.h5
├── checkpoints
│   ├── dump-00000-0000000000.h5
│   ├── dump-00000-0000000001.h5
│   ├── dump-00000-0000000002.h5
│   └── dump-00000-0000000003.h5
├── patch_load.txt
├── profil.txt
├── run1.py
├── scalars.txt
└── smilei.py
run2
├── Fields0.h5
├── ParticleDiagnostic0.h5
├── ParticleDiagnostic1.h5
├── Screen0.h5
├── Screen1.h5
├── Screen2.h5
├── Screen3.h5
├── Screen4.h5
├── Screen5.h5
├── Screen6.h5
├── Screen7.h5
├── profil.txt
├── run2.py
├── scalars.txt
└── smilei.py
Can you confirm this behaviour?
Hi @iltommi ,
I tried the example you gave and got the same results as yours. I am not really sure what caused the restart-diagnostics error in my large-scale runs at this time.
At what time during the restarted simulation did this problem appear?
Was it during initialization, in the first iteration, or in subsequent iterations?
Is it possible for you to attach part of your stdout?
Oh, another possibility: do you have time-averaged DiagFields? If yes, then the version of the code here on GitHub may need to be patched before you can properly restart the simulation.
I did some research on the HDF5 error and found that it is not a restart issue. It is due to the number of grid points in velocity space being too large. To reproduce my error, in the DiagScreen part of the input deck ../benchmarks/tst1d_4_radiation_pressure_acc.py, replace "axes = [["ekin", 0., 0.4, 10]]" with
axes = [["vx", -1., 1., 30],
        ["vy", -1., 1., 30],
        ["vz", -1., 1., 30]]
You will find the HDF5 error message without doing a restart. It seems some screen data is still dumped, though; I did not check whether these data are usable.
The reason I need such a fine grid is that I intend to correlate the particle gyro-phase with the wave phase for different perpendicular and parallel velocities. Is there any way to do this at this time?
@phyax, I cannot reproduce this error, and I think the problem is somewhere else.
In your original error log, you can see the error object 'FieldsForDiag0' doesn't exist, which clearly is related to the time-averaged fields diagnostics. This has been corrected in an upcoming version which we have not merged yet, but we can fix this problem today on the version you are using.
If you have another problem with the velocity space being too large, please post your error log. We can investigate what is going on, but I would be very surprised if a space of 30x30x30 points were too much. Technically, HDF5 should be able to support at least 1000x1000x1000.
@phyax d3dcd5b should fix the problem.
Can you check it?
I updated my version with the new commit. The error is still there. Please see the file run1.py.txt in the attachment. The only changes I made compared to tst1d_4_radiation_pressure_acc.py are:
- "axes" in the DiagScreen
- adding "DumpRestart"
Here is the screenshot of the error message:
The simulation still completes, though. The error messages are gone if I remove the "DumpRestart" block.
@phyax, the error that you obtain is now a different one. It seems that the problem is related to storing the DiagScreen in the checkpoint files. It is currently stored as an HDF5 attribute, not a proper dataset. It turns out that some versions of HDF5 restrict the size of attributes in a way I do not fully understand yet. I will look into changing the way it works so that the DiagScreen information is stored as a proper dataset.
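For scale, older HDF5 file formats require an attribute to fit in the object header, which caps it at 64 KiB (the exact limit is my assumption; the comment above only says that some HDF5 versions restrict attribute sizes). A 30x30x30 double-precision Screen accumulator already exceeds that cap, while datasets have no such limit:

```python
# Back-of-the-envelope check, assuming the 64 KiB object-header attribute
# limit of the original HDF5 file format (datasets are not limited this way).
bins = 30 * 30 * 30            # 27,000 histogram bins
attr_bytes = bins * 8          # float64 accumulator
limit = 64 * 1024              # 65,536-byte attribute cap
print(attr_bytes)              # 216000
assert attr_bytes > limit      # too big for an attribute -> store as a dataset
```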
@mccoys @iltommi, yes. The error just reported is different from the original one. The original problem went away with the last commit.
@phyax can you confirm the patch works? I tested your case like this:
Two run files (the first without diags, and the second with just the diags for the restart).
run1/run1.py:
# ----------------------------------------------------------------------------------------
# SIMULATION PARAMETERS FOR THE PIC-CODE SMILEI
# ----------------------------------------------------------------------------------------
import math

l0 = 2.0*math.pi        # laser wavelength
t0 = l0                 # optical cycle
Lsim = 10.*l0           # length of the simulation
Tsim = 40.*t0           # duration of the simulation
resx = 500.             # nb of cells in one laser wavelength
rest = resx/0.95        # nb of timesteps in one optical cycle (0.95 * CFL)

# plasma slab
def f(x):
    if l0 < x < 2.0*l0:
        return 1.0
    else:
        return 0.0

Main(
    geometry = "1d3v",
    interpolation_order = 2,
    cell_length = [l0/resx],
    sim_length = [Lsim],
    number_of_patches = [ 8 ],
    timestep = t0/rest,
    sim_time = Tsim,
    bc_em_type_x = ['silver-muller'],
    random_seed = smilei_mpi_rank
)

Species(
    species_type = 'ion',
    initPosition_type = 'regular',
    initMomentum_type = 'cold',
    n_part_per_cell = 10,
    mass = 1836.0,
    charge = 1.0,
    nb_density = trapezoidal(10., xvacuum=l0, xplateau=l0),
    temperature = [0.],
    bc_part_type_xmin = 'refl',
    bc_part_type_xmax = 'refl'
)

Species(
    species_type = 'eon',
    initPosition_type = 'regular',
    initMomentum_type = 'cold',
    n_part_per_cell = 10,
    mass = 1.0,
    charge = -1.0,
    nb_density = trapezoidal(10., xvacuum=l0, xplateau=l0),
    temperature = [0.],
    bc_part_type_xmin = 'refl',
    bc_part_type_xmax = 'refl'
)

LaserPlanar1D(
    boxSide = 'xmin',
    a0 = 10.,
    omega = 1.,
    ellipticity = 1.,
    time_envelope = tconstant(),
)

every = int(rest/2.)

DumpRestart(
    restart_dir = None,
    dump_step = 10000,
    dump_minutes = 0.,        # dump before maximum wall-clock time
    dump_deflate = 0,
    exit_after_dump = True,
    dump_file_sequence = 2,
)
run the sim:
cd run1 && mpirun -n 2 ../smilei run1.py
and run2/run2.py:
DiagFields(
    every = every,
    fields = ['Ex','Ey','Ez','Rho_ion','Rho_eon']
)

DiagScalar(every=every)

DiagParticles(
    output = "density",
    every = every,
    species = ["ion"],
    axes = [
        ["x", 0., Lsim, 200],
        ["px", -10., 1000., 200]
    ]
)

DiagParticles(
    output = "density",
    every = every,
    species = ["ion"],
    axes = [
        ["ekin", 0., 200., 200, "edge_inclusive"]
    ]
)

for direction in ["forward", "backward", "both", "canceling"]:
    DiagScreen(
        shape = "sphere",
        point = [0.],
        vector = [Lsim/3.],
        direction = direction,
        output = "density",
        species = ["eon"],
        axes = [["ekin", 0., 0.4, 30],
                ["vx", -1., 1., 30],
                ["vy", -1., 1., 30]],
        every = 3000
    )
    DiagScreen(
        shape = "plane",
        point = [Lsim/3.],
        vector = [1.],
        direction = direction,
        output = "density",
        species = ["eon"],
        axes = [["ekin", 0., 0.4, 30],
                ["vx", -1., 1., 30],
                ["vy", -1., 1., 30]],
        every = 3000
    )

DumpRestart.restart_dir = "../run1"
run the restart:
cd ../run2 && mpirun -n 2 ../smilei ../run1/run1.py run2.py
@iltommi @mccoys I confirm that the last patch fixes the conflict between the data dump and the DiagScreen. Thank you!
I have not looked at the output of the screen diag yet. But does the screen-diag histogram count particles during a time period defined by 'every', during one timestep Dt of the simulation, or from the beginning of the simulation? I read through the doc but did not find the answer. I understand that the output frequency of the screen diag is 'every', but I am not sure over which time period the screen diag accumulates particle data.
@phyax, the data of a Screen is accumulated since the beginning of the simulation, or, in your case, since the point when you introduced the diagnostic. We will make this clearer in the doc. Thanks for this report.
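The accumulation behaviour described here can be illustrated with a toy loop; this is my sketch of the semantics as stated in the answer, not Smilei source code:

```python
# Each output written at a multiple of `every` is the running total since
# timestep 0 (or since the diag was introduced), not a per-interval count.
every = 3
total = 0
outputs = []
for step in range(1, 10):          # pretend one particle crosses per step
    total += 1
    if step % every == 0:
        outputs.append(total)
print(outputs)                     # [3, 6, 9] -> cumulative, not [3, 3, 3]
```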