dougshidong / PHiLiP
Parallel High-Order Library for PDEs through hp-adaptive Discontinuous Galerkin methods
License: Other
Currently, the Jacobian evaluations within the HighOrderGrid class recompute the basis gradients on the reference element every time. This takes the bulk of the computational time in 3D.
We need an operator associated with a given set of points, computed once and reused.
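A minimal sketch of what such a cached operator could look like (all names are hypothetical, not PHiLiP's; a reference-element quantity depends only on the degree and the quadrature, so it can be computed once per pair and reused by every cell):

#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/tensor.h>
#include <deal.II/fe/fe_q.h>

#include <map>
#include <utility>
#include <vector>

// Hypothetical cache of reference-element shape gradients, keyed by
// (polynomial degree, 1D quadrature size). Jacobian evaluations would
// query this instead of recomputing the gradients per cell.
template <int dim>
class ReferenceGradientCache
{
public:
    using Key   = std::pair<unsigned int, unsigned int>;
    using Table = std::vector<std::vector<dealii::Tensor<1, dim>>>;

    const Table &get(const unsigned int degree, const unsigned int n_quad_1d)
    {
        const Key key{degree, n_quad_1d};
        const auto it = cache.find(key);
        if (it != cache.end())
            return it->second; // already computed: no work done

        const dealii::FE_Q<dim>   fe(degree);
        const dealii::QGauss<dim> quad(n_quad_1d);
        Table grads(quad.size(),
                    std::vector<dealii::Tensor<1, dim>>(fe.dofs_per_cell));
        for (unsigned int q = 0; q < quad.size(); ++q)
            for (unsigned int i = 0; i < fe.dofs_per_cell; ++i)
                grads[q][i] = fe.shape_grad(i, quad.point(q));
        return cache.emplace(key, std::move(grads)).first->second;
    }

private:
    std::map<Key, Table> cache;
};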
Currently, input boundary IDs are hard-coded such that, for example, ID 1001 is a slip wall. If we receive a grid that someone else generated with Gmsh, we would have to modify the mesh so that the wall IDs are 1001.
A better way to handle this is to expose
boundary_slip_wall_id
as a parameter for the user to input. We would still set the default IDs to avoid breaking existing tests. However, this gives flexibility to someone who simply wants to use the code rather than develop it.
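A hedged sketch of how this could look with deal.II's ParameterHandler (the subsection and parameter names are assumptions; only the 1001 default comes from the current code):

#include <deal.II/base/parameter_handler.h>

// Hypothetical declaration: the default keeps the current hard-coded
// behavior, but a user can override the ID to match an external mesh.
void declare_boundary_id_parameters(dealii::ParameterHandler &prm)
{
    prm.enter_subsection("boundary conditions");
    {
        prm.declare_entry("boundary_slip_wall_id", "1001",
                          dealii::Patterns::Integer(0),
                          "Boundary ID treated as a slip wall.");
    }
    prm.leave_subsection();
}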
As described in the book "I do Like CFD" (Masatsuka 2018), p. 240, Section 7.13.2.
This test would be similar to other manufactured solution tests, so you should be able to re-use code in grid_study.cpp to generate this test. The use of periodicity can be seen in the periodic explicit advection test case.
It would also nicely verify our periodic boundary conditions.
Need to verify the collocation with some tests.
A first test would be to form a mass matrix and check that it is discretely orthogonal.
Another test would be to manually check that the quadrature points are located at the solution points.
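A hedged sketch of the first test on a unit cell (assuming a GLL nodal basis via FE_DGQArbitraryNodes; the tolerance is a guess): with collocated GLL quadrature, the off-diagonal mass matrix entries should vanish to round-off.

#include <deal.II/base/quadrature_lib.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>
#include <deal.II/lac/full_matrix.h>

#include <cassert>
#include <cmath>

int main()
{
    using namespace dealii;
    const int          dim    = 2;
    const unsigned int degree = 3;

    Triangulation<dim> tria;
    GridGenerator::hyper_cube(tria);

    // GLL nodal basis collocated with GLL quadrature.
    const QGaussLobatto<1>          gll_1d(degree + 1);
    const FE_DGQArbitraryNodes<dim> fe(gll_1d);
    const QGaussLobatto<dim>        quad(degree + 1);

    FEValues<dim> fe_values(fe, quad, update_values | update_JxW_values);
    fe_values.reinit(tria.begin_active());

    const unsigned int n = fe.dofs_per_cell;
    FullMatrix<double> mass(n, n);
    for (unsigned int q = 0; q < quad.size(); ++q)
        for (unsigned int i = 0; i < n; ++i)
            for (unsigned int j = 0; j < n; ++j)
                mass(i, j) += fe_values.shape_value(i, q)
                              * fe_values.shape_value(j, q)
                              * fe_values.JxW(q);

    // Collocation => the mass matrix should be diagonal to round-off.
    for (unsigned int i = 0; i < n; ++i)
        for (unsigned int j = 0; j < n; ++j)
            if (i != j)
                assert(std::abs(mass(i, j)) < 1e-12);
    return 0;
}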
output:
49: An error occurred in line <1715> of file </home/abtinameri/share/dealii/source/fe/mapping_fe_field.cc> in function
49: dealii::CellSimilarity::Similarity dealii::MappingFEField<dim, spacedim, VectorType, DoFHandlerType>::fill_fe_values(const typename dealii::Triangulation<dim, spacedim>::cell_iterator&, dealii::CellSimilarity::Similarity, const dealii::Quadrature&, const typename dealii::Mapping<dim, spacedim>::InternalDataBase&, dealii::internal::FEValuesImplementation::MappingRelatedData<dim, spacedim>&) const [with int dim = 2; int spacedim = 2; VectorType = dealii::LinearAlgebra::distributed::Vector; DoFHandlerType = dealii::DoFHandler<2, 2>; typename dealii::Triangulation<dim, spacedim>::cell_iterator = dealii::TriaIterator<dealii::CellAccessor<2, 2> >; typename dealii::Mapping<dim, spacedim>::InternalDataBase = dealii::Mapping<2, 2>::InternalDataBase]
49: The violated condition was:
49: det > 1e-12 * Utilities::fixed_power( cell->diameter() / std::sqrt(double(dim)))
49: Additional information:
49: The image of the mapping applied to cell with center [0.542651 1.31008] is distorted. The cell geometry or the mapping are invalid, giving a non-positive volume fraction of -0.116411 in quadrature point 83.
49:
49: Stacktrace:
49: -----------
49: #0 /home/abtinameri/share/dealii/install/lib/libdeal_II.g.so.9.2.0-pre: dealii::MappingFEField<2, 2, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>, dealii::DoFHandler<2, 2> >::fill_fe_values(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::CellSimilarity::Similarity, dealii::Quadrature<2> const&, dealii::Mapping<2, 2>::InternalDataBase const&, dealii::internal::FEValuesImplementation::MappingRelatedData<2, 2>&) const
49: #1 /home/abtinameri/share/dealii/install/lib/libdeal_II.g.so.9.2.0-pre: dealii::FEValues<2, 2>::do_reinit()
49: #2 /home/abtinameri/share/dealii/install/lib/libdeal_II.g.so.9.2.0-pre: void dealii::FEValues<2, 2>::reinit<dealii::hp::DoFHandler, false>(dealii::TriaIterator<dealii::DoFCellAccessor<dealii::hp::DoFHandler<2, 2>, false> > const&)
49: #3 /home/abtinameri/Desktop/thesis/PHiLiP/build/bin/PHiLiP_2D: PHiLiP::Tests::EulerCylinder<2, 4>::run_test() const
49: #4 /home/abtinameri/Desktop/thesis/PHiLiP/build/bin/PHiLiP_2D: main
49: --------------------------------------------------------
49:
49: Calling MPI_Abort now.
49: To break execution in a GDB session, execute 'break MPI_Abort' before running. You can also put the following into your ~/.gdbinit:
49: set breakpoint pending on
49: break MPI_Abort
49: set breakpoint pending auto
49: [abtinameri-ThinkPad-T460:06505] 1 more process has sent help message help-mpi-api.txt / mpi-abort
49: [abtinameri-ThinkPad-T460:06505] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
1/1 Test #49: MPI_2D_EULER_INTEGRATION_CYLINDER_LONG ...***Failed 500.26 sec
The parameters listed below are test specific and therefore should not appear in the general parameters.
PHiLiP/src/parameters/all_parameters.cpp
Lines 61 to 67 in 9f4aa1f
Move them to a subsection of their own.
The 'D' in 1D, 2D, 3D is missing from the test names introduced in PR:
#61
This will force documentation of the code.
Inconsistent refinement has been observed using the fixed-fraction approach in parallel. For example, the following plot shows the error for the case 2D_GMSH_ANISO_SSHOCK_P1 run with MPI on 1, 2 and 4 cores. [Error-convergence plot not reproduced here.]
It can be seen that the performance of the fixed-fraction approach is severely degraded with many partitions. For the Gmsh-generated mesh, the convergence is perturbed slightly but mostly remains the same (as the size fields are written in parallel but the remeshing is done in serial from the base processor). I believe the fixed fraction may be applying the refinement factor on each core rather than globally. It is likely tied to this snippet of code
PHiLiP/src/grid_refinement/grid_refinement_fixed_fraction.cpp
Lines 86 to 103 in 5de6b61
For the parallel case, an approach similar to that in dg.cpp
should be used. But this requires separate checks, as the function is not compatible with certain triangulation types.
Lines 2379 to 2390 in 5de6b61
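A hedged sketch of the globally consistent alternative (deal.II provides a parallel-aware version of the fixed-fraction marking; the fractions below are placeholders):

#include <deal.II/distributed/grid_refinement.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/lac/vector.h>

// The serial dealii::GridRefinement variant applies the fractions to each
// rank's locally owned cells; this one computes global thresholds.
template <int dim>
void mark_cells_globally(
    dealii::parallel::distributed::Triangulation<dim> &tria,
    const dealii::Vector<float> &error_per_cell)
{
    dealii::parallel::distributed::GridRefinement::
        refine_and_coarsen_fixed_fraction(tria,
                                          error_per_cell,
                                          /*top_fraction_of_error=*/0.30,
                                          /*bottom_fraction_of_error=*/0.03);
}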
As a result, the following tests have been locked to serial in the CMakeLists.txt:
According to
@Article{Zwanenburg2016,
author = {Zwanenburg, Philip and Nadarajah, Siva},
title = {Equivalence between the energy stable flux reconstruction and filtered discontinuous {G}alerkin schemes},
journal = {Journal of Computational Physics},
year = {2016},
volume = {306},
month = feb,
pages = {343--369},
doi = {10.1016/j.jcp.2015.11.036},
publisher = {Elsevier BV},
}
We can implement FR by adding a modal filter to DG equivalent to the FR correction field.
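A sketch of the equivalence in generic notation (hedged; this follows the structure of the ESFR literature, not PHiLiP's implementation): the FR schemes can be written as DG with a modified, "filtered" mass matrix,

\frac{d\hat{u}}{dt} = -(M + K)^{-1} R_{DG}(\hat{u}),

where M is the DG mass matrix, R_{DG} is the standard DG residual, and K is a correction-dependent modal operator; K = 0 recovers DG, while other admissible choices of K yield the energy-stable FR family. Implementing FR would then amount to applying the corresponding modal filter (M + K)^{-1} M to the DG update.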
Currently, dRdX is computed using AD without taking into account the AffineConstraints resulting from hanging nodes.
In the attached picture, nodes 1, 2, 3 fully define the face of the left 2D quadratic quadrilateral. However, the right element has been subdivided such that the resulting children have some additional DoFs 4 and 5. Those DoFs need to be constrained through dealii::make_hanging_node_constraints() to ensure a watertight mesh.
The current dRdX should be insensitive to DoFs 4 and 5, since moving them independently is meaningless: they fully depend on DoFs 1, 2, and 3. However, the current dRdX does not take that into account and evaluates the sensitivities as if they were independent. That is, it matches the finite difference even if the finite-difference displacement moves 4 and 5 without PHiLiP::HighOrderGrid::ensure_conforming_mesh().
The fix would be to evaluate the metric terms using only DoFs 1, 2, 3 instead of 4 and 5.
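A hedged sketch of constraining the dependent DoFs with deal.II's hanging-node machinery (the helper name and the chain-rule comment are illustrations, not the planned implementation):

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/la_parallel_vector.h>

// Hypothetical helper: overwrite the dependent DoFs (4 and 5) with the
// values implied by the independent ones (1, 2, 3), keeping the mesh
// watertight. The same constraint matrix C could chain-rule dRdX so that
// only independent DoFs carry sensitivities: dR/dX_indep = (dR/dX) C.
template <int dim>
void enforce_conforming_nodes(
    const dealii::DoFHandler<dim> &dof_handler,
    dealii::LinearAlgebra::distributed::Vector<double> &node_positions)
{
    dealii::AffineConstraints<double> constraints;
    dealii::DoFTools::make_hanging_node_constraints(dof_handler, constraints);
    constraints.close();
    constraints.distribute(node_positions);
}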
I believe it doesn't currently work. Not tested = not working.
Still incomplete test.
The code started out using .h as the header suffix. .hpp makes more sense, to signal that the code is C++-only and not C-compatible.
It looks like inverse optimization of a manufactured solution is very ill-conditioned. The optimization stalls.
Currently, the time steps are absolute, and we have to tweak the parameters for each test case. A CFL-based time step would allow a more consistent parameter to be used throughout the various test cases.
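For reference, one common convective estimate for DG (hedged; the exact form, in particular the 1/(2p+1) factor, varies across the literature):

\Delta t = \mathrm{CFL} \cdot \frac{h}{(2p+1)\,(\|\mathbf{u}\| + c)}

where h is the cell size, p the polynomial degree, \|\mathbf{u}\| the local convective speed, and c the speed of sound (dropped for pure advection). The user would then supply a single CFL instead of a per-case \Delta t.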
Right now OperatorBase does it all by switching around if-statements. The proper way to do this is to derive the various operators from OperatorBase. OperatorBase would then hold properties common to finite element methods, such as quadratures and basis functions, while the derived classes would be responsible for implementing how those operators are applied, similar to the LinearOperator concept from deal.II.
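A hedged sketch of that hierarchy (class names hypothetical; the bodies are placeholders only):

#include <vector>

// Base class: owns the shared FEM data (quadratures, basis functions, ...).
template <int dim, typename real>
class OperatorBase
{
public:
    virtual ~OperatorBase() = default;
    // Derived classes decide how the operator acts on a local vector.
    virtual void apply(const std::vector<real> &in,
                       std::vector<real> &out) const = 0;
};

template <int dim, typename real>
class MassOperator : public OperatorBase<dim, real>
{
public:
    void apply(const std::vector<real> &in,
               std::vector<real> &out) const override
    {
        out = in; // placeholder: multiply by the local mass matrix
    }
};

template <int dim, typename real>
class GradientOperator : public OperatorBase<dim, real>
{
public:
    void apply(const std::vector<real> &in,
               std::vector<real> &out) const override
    {
        out = in; // placeholder: contract with the basis gradients
    }
};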
This issue may affect many existing tests from tests/integration_tests_control_files/grid_refinement/*
when run in Debug mode. The following error message will occur:
118: --------------------------------------------------------
118: An error occurred in line <1917> of file </home/keigan/Libraries/dealii/include/deal.II/lac/full_matrix.templates.h> in function
118: void dealii::FullMatrix<number>::gauss_jordan() [with number = double]
118: The violated condition was:
118: max > 1.e-16 * typical_diagonal_element
118: Additional information:
118: The maximal pivot is 2.45876e-21, which is below the threshold. The matrix may be singular.
118:
118: Stacktrace:
118: -----------
118: #0 /home/keigan/Libraries/dealii/install/lib/libdeal_II.g.so.9.3.0-pre: dealii::FullMatrix<double>::gauss_jordan()
118: #1 /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: dealii::Vector<double> PHiLiP::GridRefinement::ReconstructPoly<2, 1, double>::reconstruct_H1_norm<dealii::TriaActiveIterator<dealii::DoFCellAccessor<2, 2, false> > >(dealii::TriaActiveIterator<dealii::DoFCellAccessor<2, 2, false> > const&, dealii::PolynomialSpace<2>, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> const&)
118: #2 /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: dealii::Vector<double> PHiLiP::GridRefinement::ReconstructPoly<2, 1, double>::reconstruct_norm<dealii::TriaActiveIterator<dealii::DoFCellAccessor<2, 2, false> > >(PHiLiP::GridRefinement::NormType, dealii::TriaActiveIterator<dealii::DoFCellAccessor<2, 2, false> > const&, dealii::PolynomialSpace<2>, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> const&)
118: #3 /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: PHiLiP::GridRefinement::ReconstructPoly<2, 1, double>::reconstruct_directional_derivative(dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> const&, unsigned int)
118: #4 /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: PHiLiP::GridRefinement::GridRefinement_Continuous<2, 1, double, dealii::parallel::distributed::Triangulation<2, 2> >::field_h_hessian()
118: #5 /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: PHiLiP::GridRefinement::GridRefinement_Continuous<2, 1, double, dealii::parallel::distributed::Triangulation<2, 2> >::field()
118: #6 /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: PHiLiP::GridRefinement::GridRefinement_Continuous<2, 1, double, dealii::parallel::distributed::Triangulation<2, 2> >::output_results_vtk_method[abi:cxx11]()
118: #7 /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: PHiLiP::GridRefinement::GridRefinementBase<2, 1, double, dealii::parallel::distributed::Triangulation<2, 2> >::output_results_vtk(unsigned int)
118: #8 /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: PHiLiP::Tests::GridRefinementStudy<2, 1, dealii::parallel::distributed::Triangulation<2, 2> >::run_test() const
118: #9 /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: main
118: --------------------------------------------------------
118:
118: [keigan-OptiPlex-5060:04475] *** Process received signal ***
118: [keigan-OptiPlex-5060:04475] Signal: Aborted (6)
118: [keigan-OptiPlex-5060:04475] Signal code: (-6)
118: [keigan-OptiPlex-5060:04475] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x128a0)[0x7f9661aec8a0]
118: [keigan-OptiPlex-5060:04475] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xc7)[0x7f9661727f47]
118: [keigan-OptiPlex-5060:04475] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x141)[0x7f96617298b1]
118: [keigan-OptiPlex-5060:04475] [ 3] /home/keigan/Libraries/dealii/install/lib/libdeal_II.g.so.9.3.0-pre(_ZN6dealii18deal_II_exceptions9internals22do_issue_error_nothrowERKNS_13ExceptionBaseE+0x0)[0x7f967778d139]
118: [keigan-OptiPlex-5060:04475] [ 4] /home/keigan/Libraries/dealii/install/lib/libdeal_II.g.so.9.3.0-pre(_ZN6dealii18deal_II_exceptions9internals20issue_error_noreturnINS_10FullMatrixIdE13ExcNotRegularEEEvNS1_17ExceptionHandlingEPKciS8_S8_S8_T_+0x88)[0x7f967751f75e]
118: [keigan-OptiPlex-5060:04475] [ 5] /home/keigan/Libraries/dealii/install/lib/libdeal_II.g.so.9.3.0-pre(_ZN6dealii10FullMatrixIdE12gauss_jordanEv+0x82d)[0x7f9677483e4d]
118: [keigan-OptiPlex-5060:04475] [ 6] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_ZN6PHiLiP14GridRefinement15ReconstructPolyILi2ELi1EdE19reconstruct_H1_normIN6dealii18TriaActiveIteratorINS4_15DoFCellAccessorILi2ELi2ELb0EEEEEEENS4_6VectorIdEERKT_NS4_15PolynomialSpaceILi2EEERKNS4_13LinearAlgebra11distributed6VectorIdNS4_11MemorySpace4HostEEE+0x182d)[0x55acf01f8f39]
118: [keigan-OptiPlex-5060:04475] [ 7] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_ZN6PHiLiP14GridRefinement15ReconstructPolyILi2ELi1EdE16reconstruct_normIN6dealii18TriaActiveIteratorINS4_15DoFCellAccessorILi2ELi2ELb0EEEEEEENS4_6VectorIdEENS0_8NormTypeERKT_NS4_15PolynomialSpaceILi2EEERKNS4_13LinearAlgebra11distributed6VectorIdNS4_11MemorySpace4HostEEE+0x82)[0x55acf01f5d3a]
118: [keigan-OptiPlex-5060:04475] [ 8] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_ZN6PHiLiP14GridRefinement15ReconstructPolyILi2ELi1EdE34reconstruct_directional_derivativeERKN6dealii13LinearAlgebra11distributed6VectorIdNS3_11MemorySpace4HostEEEj+0x186)[0x55acf01e6344]
118: [keigan-OptiPlex-5060:04475] [ 9] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_ZN6PHiLiP14GridRefinement25GridRefinement_ContinuousILi2ELi1EdN6dealii8parallel11distributed13TriangulationILi2ELi2EEEE15field_h_hessianEv+0x159)[0x55acf01a478d]
118: [keigan-OptiPlex-5060:04475] [10] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_ZN6PHiLiP14GridRefinement25GridRefinement_ContinuousILi2ELi1EdN6dealii8parallel11distributed13TriangulationILi2ELi2EEEE5fieldEv+0x66)[0x55acf01a25cc]
118: [keigan-OptiPlex-5060:04475] [11] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_ZN6PHiLiP14GridRefinement25GridRefinement_ContinuousILi2ELi1EdN6dealii8parallel11distributed13TriangulationILi2ELi2EEEE25output_results_vtk_methodB5cxx11Ev+0x189)[0x55acf01a1fa3]
118: [keigan-OptiPlex-5060:04475] [12] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_ZN6PHiLiP14GridRefinement18GridRefinementBaseILi2ELi1EdN6dealii8parallel11distributed13TriangulationILi2ELi2EEEE18output_results_vtkEj+0x21a)[0x55acf009379a]
118: [keigan-OptiPlex-5060:04475] [13] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_ZNK6PHiLiP5Tests19GridRefinementStudyILi2ELi1EN6dealii8parallel11distributed13TriangulationILi2ELi2EEEE8run_testEv+0x131c)[0x55acefafba5a]
118: [keigan-OptiPlex-5060:04475] [14] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(main+0x2e9)[0x55acefa0e4a3]
118: [keigan-OptiPlex-5060:04475] [15] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7)[0x7f966170ab97]
118: [keigan-OptiPlex-5060:04475] [16] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_start+0x2a)[0x55acefa0e0da]
118: [keigan-OptiPlex-5060:04475] *** End of error message ***
118: --------------------------------------------------------------------------
118: mpirun noticed that process rank 0 with PID 0 on node keigan-OptiPlex-5060 exited on signal 6 (Aborted).
118: --------------------------------------------------------------------------
Snippet shown from 2D_LPCVT_FB_BOUNDARYLAYER_P3. This is caused by the following lines in reconstruct_poly.cpp:
PHiLiP/src/grid_refinement/reconstruct_poly.cpp
Lines 650 to 654 in 8503a9b
Luckily, this does not actually cause a problem down the line when used in Release mode, as the mesh size is limited during the remeshing. However, it triggers an assert statement in deal.II. To avoid this error, either a small diagonal term can be added to the matrix, or min/max size limits can be imposed ahead of time (preventing the singular inversion).
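A hedged sketch of the first option (the threshold is a guess; this regularizes the matrix before the in-place Gauss-Jordan inversion that currently trips the assert):

#include <deal.II/lac/full_matrix.h>

#include <algorithm>
#include <cmath>

// Hypothetical fix: shift the diagonal so gauss_jordan() never sees an
// exactly singular matrix in Debug mode. The shift slightly biases the
// reconstruction, so imposing size limits ahead of time may be cleaner.
void regularized_invert(dealii::FullMatrix<double> &A)
{
    double max_diag = 0.0;
    for (unsigned int i = 0; i < A.m(); ++i)
        max_diag = std::max(max_diag, std::abs(A(i, i)));

    const double eps = 1e-12 * std::max(max_diag, 1.0);
    for (unsigned int i = 0; i < A.m(); ++i)
        A(i, i) += eps;

    A.gauss_jordan(); // inverts A in place
}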
Currently only observed for 2D_LPCVT_FB_BOUNDARYLAYER_P3, but other tests that touch this line may also potentially be impacted:
Still need to ensure that neighbouring cells are no more than 1 degree apart. Furthermore, we need to ensure that the correct quadrature is used on faces shared by cells of two different degrees. From a previous discussion with Philip, the degree to be used is the lower one. Still to be revisited.
The Triangulation vertices should not play any role after a MappingFEField is used. However, they do seem to have some impact for a P1 geometry. See
PHiLiP/tests/unit_tests/grid/high_order_grid_test.cpp
Lines 182 to 192 in 7ed44b8
Lost adjoint consistency by computing F* = F*(Uin, Ubc) instead of F* = F*(Ubc, Ubc). See the comment in assemble_boundary_term_implicit.
Find the adjoint-consistent boundary condition for Neumann boundary conditions for the diffusion and convection-diffusion problems.
Basically, what is the correct formulation for the exterior gradient? Do we need some extra terms in the functional evaluation?
This issue needs further investigation.
Look into the residual smoothing paper below for a more robust nonlinear solver:
@Article{Mavriplis2021,
author = {Dimitri J. Mavriplis},
title = {A Residual Smoothing Strategy for Accelerating Newton Method Continuation},
journal = {Computers \& Fluids},
year = {2021},
month = feb,
pages = {104859},
doi = {10.1016/j.compfluid.2021.104859},
publisher = {Elsevier BV},
}
The lines of OperatorBase::create_collection_tuple
have obviously been copied from DGBase. We should avoid such copies and instead have a common function that both DGBase and OperatorBase call.
PHiLiP/src/operators/operators.cpp
Lines 93 to 202 in 9f4aa1f
I am currently using Prof. Heltai's method, described in this issue and summarized below, to extract a MappingFEField from a known Triangulation and Manifold. This initial Triangulation can (in fact, must) be linear, and I would use a TransfiniteInterpolation to curve the initial grid.
1. Have a DoFHandler with a spacedim-component FE_Q/FE_Bernstein describing the absolute geometry.
2. Attach manifolds (as expensive as you wish).
3. Call VectorTools::get_position_vector.
4. Create a MappingFEField with the computed position vector.
5. Discard the manifolds and only use the MappingFEField afterwards.
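A hedged sketch of steps 3-5 for a serial grid (PHiLiP uses distributed vectors instead of dealii::Vector; the helper name is hypothetical, and the DoFHandler is assumed to already carry a spacedim-component FE_Q with manifolds attached to the triangulation):

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/mapping_fe_field.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/vector_tools.h>

#include <memory>

template <int dim>
std::unique_ptr<dealii::MappingFEField<dim, dim, dealii::Vector<double>>>
make_polynomial_mapping(const dealii::DoFHandler<dim> &dof_handler,
                        dealii::Vector<double> &position_vector)
{
    // Step 3: query the manifolds once to fill the position vector.
    position_vector.reinit(dof_handler.n_dofs());
    dealii::VectorTools::get_position_vector(dof_handler, position_vector);

    // Steps 4-5: from here on, the mapping alone defines the (purely
    // polynomial) geometry, and the manifolds can be discarded.
    return std::make_unique<
        dealii::MappingFEField<dim, dim, dealii::Vector<double>>>(
        dof_handler, position_vector);
}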
However, I would now like to refine the mesh. This means that the grid refinement process needs a Manifold to query for the new points.
Therefore, it seems I would need to derive some kind of PolynomialManifold (as mentioned in the previous post) from the Manifold class, which would take a reference to the MappingFEField in its constructor. I would then use the MappingFEField to compute the necessary values in the overridden virtual functions of the Manifold class. As a result, I would have a PolynomialManifold that allows my triangulation to be refined using the current polynomial mapping of the elements.
This would go back and forth between the refinement process and the deformation process.
When a deformation occurs, simply displace the Triangulation's vertices and the MappingFEField's euler_vector. The PolynomialManifold, having a reference to the mapping, would automatically update.
When refinement occurs, extract a new MappingFEField using Prof. Heltai's method, construct a new PolynomialManifold, and assign it to the Triangulation.
As a result of using a pure polynomial representation (the feature implemented by issue #9), we simply cannot use an exact boundary to refine the surface.
As we refine, we would asymptotically have a constant error since the geometry will never be exact. For test cases that depend on having a smoother and smoother geometry, such as the convergence to zero entropy for the Euler cylinder or Gaussian bump, we need the initial surface grid to be fine enough such that this constant error is less than the error improvement we get from refining. This increases the computational cost since the initial grid now needs to be much larger.
A solution to this would be to move/project the surface nodes to the exact geometry every time a surface cell has been refined. The geometry would still be polynomial, but the refinement would then have new nodes on the exact surface.
MPI_2D_ADVECTION_EXPLICIT_PERIODIC_LONG will pass no matter what in Release mode since it returns 0.
https://github.com/dougshidong/PHiLiP/blob/master/src/testing/advection_explicit_periodic.cpp#L103
Please fix this so that the test fails for invalid answers. Since it is linear advection, the exact solution at time T is known. Convergence orders can and should be checked.
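A hedged sketch of the kind of check that could replace the unconditional return 0 (the helper and thresholds are hypothetical; the errors would come from comparing u_h against the exact translated initial condition u0(x - a T) on successive grids):

#include <cmath>

// Returns nonzero (a CTest failure) when the measured order between two
// successive grid refinements falls short of the expected p+1.
int check_convergence_order(const double error_coarse,
                            const double error_fine,
                            const unsigned int poly_degree)
{
    const double measured_order =
        std::log(error_coarse / error_fine) / std::log(2.0);
    const double expected_order = poly_degree + 1.0;
    return (measured_order < expected_order - 0.5) ? 1 : 0;
}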
The boundary gradient is used in the artificial dissipation.
Currently, the artificial dissipation is set to 0 at the boundaries, so we do not require the gradient. However, other papers are able to have non-zero dissipation, and we should too since the shocks usually extend down to the boundary.
Right now, when non-zero dissipation at the boundary is used, some odd artifacts start showing up.
Right now, the gradient is simply copied in the ghost cell.
Consider using Navier-Stokes-like boundary conditions.
Current methods for continuous grid refinement require external Gmsh command-line calls to perform the remeshing between steps. At the moment this is done through a series of file writes and command-line calls, originating from
PHiLiP/src/grid_refinement/grid_refinement_continuous.cpp
Lines 252 to 253 in 8503a9b
which also needs associated .geo and .pos files generated from src/grid_refinement/gmsh_out.cpp. A better solution would be to use the Gmsh API to perform these tasks directly. This change would avoid unnecessary I/O, give more direct control over the options used by the mesh generator, and combine well with other future enhancements in this area.
Mainly, the changes would involve interfacing between the deal.II and Gmsh libraries to directly import the updated mesh into the existing data structures.
This change will affect Issue #57 and impact the following set of existing tests:
Many .cpp files have their classes or functions instantiated for many types. This becomes tedious and error-prone. Boost's metaprogramming library could be useful to remedy this:
https://www.boost.org/doc/libs/1_31_0/libs/mpl/doc/paper/html/usage.html#sequences
Another way is to do something similar to what deal.II does in its *.inst.in files, such as
https://github.com/dougshidong/dealii/blob/master/source/grid/grid_generator.inst.in
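A hedged sketch of the simplest variant of this idea (an explicit-instantiation macro; the class name is hypothetical, and Boost.MPL or *.inst.in generation would scale better once multiple template parameters are involved):

// One list of dimensions, written once, reused at the bottom of each
// .cpp file instead of three hand-maintained lines per class.
#define INSTANTIATE_FOR_ALL_DIMS(ClassName) \
    template class ClassName<1>;            \
    template class ClassName<2>;            \
    template class ClassName<3>;

// Example usage at the end of a hypothetical my_operator.cpp:
// INSTANTIATE_FOR_ALL_DIMS(PHiLiP::MyOperator)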
Currently, the manufactured solution is generated using the very bad practice of pre-processor #if statements.
This should be replaced by creating a ManufacturedSolution class and a ManufacturedSolution factory that returns the solution selected at run-time.
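A hedged sketch of such a factory (the class, enum-free string dispatch, and the sine example are assumptions; each solution would derive from dealii::Function so it plugs into the existing error computations):

#include <deal.II/base/function.h>

#include <cmath>
#include <memory>
#include <stdexcept>
#include <string>

template <int dim>
class ManufacturedSolution : public dealii::Function<dim>
{
public:
    virtual ~ManufacturedSolution() = default;
};

template <int dim>
class SineSolution : public ManufacturedSolution<dim>
{
public:
    double value(const dealii::Point<dim> &p,
                 const unsigned int /*component*/ = 0) const override
    {
        double v = 1.0;
        for (int d = 0; d < dim; ++d)
            v *= std::sin(p[d]);
        return v;
    }
};

// Run-time selection replaces the compile-time #if blocks.
template <int dim>
std::unique_ptr<ManufacturedSolution<dim>>
create_manufactured_solution(const std::string &name)
{
    if (name == "sine")
        return std::make_unique<SineSolution<dim>>();
    throw std::runtime_error("Unknown manufactured solution: " + name);
}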
This will allow us to capture shocks by locally adding viscosity, and it will increase the stability of our steady-state convergence since any transient shocks will dissipate.
See the paper below; its smoothness sensor is sketched after the reference.
@InProceedings{Persson2006,
author = {Persson, Per-Olof and Peraire, Jaime},
title = {Sub-Cell Shock Capturing for Discontinuous {Galerkin} Methods},
booktitle = {44th {AIAA} Aerospace Sciences Meeting and Exhibit},
year = {2006},
publisher = {American Institute of Aeronautics and Astronautics},
month = {jan},
doi = {10.2514/6.2006-112},
}
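For reference, the sensor proposed in that paper (a hedged paraphrase in generic notation): within each element e, with u the degree-p modal solution and \hat{u} its truncation to degree p-1, define

s_e = \log_{10} \frac{(u - \hat{u},\, u - \hat{u})_e}{(u,\, u)_e},

and ramp the artificial viscosity smoothly from 0 to a maximum \epsilon_0 as s_e crosses a threshold s_0 (which the paper scales like -4 \log_{10} p):

\epsilon_e =
\begin{cases}
0, & s_e < s_0 - \kappa, \\
\frac{\epsilon_0}{2}\left(1 + \sin\frac{\pi (s_e - s_0)}{2\kappa}\right), & |s_e - s_0| \le \kappa, \\
\epsilon_0, & s_e > s_0 + \kappa.
\end{cases}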
Interpolating the flux using an order P polynomial results in a loss of order for even P. This happens for nonlinear problems, but not for linear advection.
No tests currently fail, but using an existing nonlinear problem (like 1D Burgers or Euler), and setting
set use_weak_form = false
will demonstrate the loss of convergence.
This can be resolved by interpolating (or perhaps projecting, according to Jameson (2012) and Zwanenburg (2016)) to a P+1 polynomial.
An earlier attempt was made to project the flux onto an order-P polynomial (see branch flux_projection) and over-integrate this projection. No success with a P polynomial.
Another possible source of error is that the strong form is not adjoint consistent. Harriman et al. (2002) show that an adjoint-inconsistent discretization results in order-P convergence for even P and order P+1 for odd P, just like we are witnessing.
We need to set some hanging node constraints when displacing the grid; otherwise, we will end up with cells that do not touch each other.
https://www.dealii.org/current/doxygen/deal.II/group__constraints.html
Abtin's test case currently fails. A fix is in the works.
Returns 0 no matter what.
Should check that the kinetic energy does not increase, as done in the burgers_stability test
PHiLiP/src/testing/burgers_stability.cpp
Lines 97 to 101 in efc614b
For example, a minimal monotonicity check (a hypothetical sketch, not the actual burgers_stability code) could look like the following.
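#include <cmath>
#include <vector>

// Fail the test (nonzero return) if the discrete kinetic energy ever
// grows beyond round-off between recorded time steps.
int check_energy_monotone(const std::vector<double> &energy_history)
{
    for (std::size_t i = 1; i < energy_history.size(); ++i)
        if (energy_history[i] > energy_history[i - 1] * (1.0 + 1e-12))
            return 1;
    return 0;
}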
The current residual loop is extremely messy, especially with the new periodic boundary conditions added in.
We probably want to break it down into multiple smaller functions.
The current implementation uses FFD as the default design variable. This is not ideal if we want to use a different parameterization, or simply surface points.
Have a hierarchy such that the parameterization is abstracted away from the flow constraints (a sketch follows), where the current implementation would be a form of FlowConstraints_SurfaceParameterization.
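A hedged sketch of such a hierarchy (only FlowConstraints_SurfaceParameterization is named in the issue; the base class and sibling names are assumptions):

#include <memory>
#include <utility>

// Abstract mapping from design variables to surface displacements.
class DesignParameterization
{
public:
    virtual ~DesignParameterization() = default;
    virtual void apply_design_variables() = 0;
};

class FFDParameterization : public DesignParameterization
{
public:
    void apply_design_variables() override { /* free-form deformation */ }
};

class SurfaceParameterization : public DesignParameterization
{
public:
    void apply_design_variables() override { /* surface points directly */ }
};

// FlowConstraints then takes any parameterization instead of hard-coding FFD.
class FlowConstraints
{
public:
    explicit FlowConstraints(std::unique_ptr<DesignParameterization> p)
        : parameterization(std::move(p)) {}
private:
    std::unique_ptr<DesignParameterization> parameterization;
};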
Figure out a way to overlap the MPI reductions for l2-norms and error checking with computations by using non-blocking MPI calls.
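A hedged sketch of the pattern with MPI_Iallreduce (available since MPI-3; the helper is an illustration): start the reduction, do unrelated work, and complete it only when the result is needed.

#include <mpi.h>

#include <cmath>

double overlapped_l2_norm(const double local_squared_sum)
{
    double global_squared_sum = 0.0;
    MPI_Request request;
    MPI_Iallreduce(&local_squared_sum, &global_squared_sum, 1, MPI_DOUBLE,
                   MPI_SUM, MPI_COMM_WORLD, &request);

    // ... other computation would go here, overlapping the reduction ...

    MPI_Wait(&request, MPI_STATUS_IGNORE);
    return std::sqrt(global_squared_sum);
}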
Due to the recent merge from Abtin, the fe_indices do not necessarily correspond to the polynomial order of the solution.
This is because collocation requires the minimum polynomial order to be 1, not 0, since we cannot have a single GLL node. As a result, fe_index = 0 corresponds to a polynomial of order 1.
The VTK output was wrong last time Abtin showed it to me.
Right now, when a manufactured solution boundary condition is imposed, the derived classes copy the function definition from the base class. Ideally, we want something like
void DerivedClass::boundary_face_values (/*...*/)
{
    if (boundary_id == manufactured_BC_id) {
        BaseClass::boundary_face_values (/*...*/);
        return;
    }
    // ...otherwise, derived-class-specific boundary values...
}
The current framework for mesh adaptation based on a continuous mesh model relies on Gmsh to perform the domain remeshing step. However, some discrepancies have been observed between Gmsh versions during these steps. The current code has primarily been tested with v4.6.0, although the behavior also varied between users (e.g., mine vs. Doug's). This problem becomes more severe in the case of goal-oriented adaptation, where the new mesh is generated as an iterative update: if the series of adaptations leads to an unsuitable intermediary mesh, it may not be able to find a reasonable update (leading to an error).
This primarily affects the following set of tests:
Additionally, in some rare cases during these iterations, the Blossom-Quad recombination step may not fully merge the mesh to form an all-quad mesh. The remaining triangular elements prevent deal.II from reading the mesh, and hence the iterations from continuing. In some cases, these issues have been relieved by slightly perturbing the otherwise-failing step size. However, this is a highly version- and user-dependent fix. An example of this can be seen in
This issue could be addressed by including a subdivision step in the Gmsh call via the .geo command
Mesh.SubdivisionAlgorithm = 1;
as is also used in some related deal.II library functions. This could be applied either as the default or as a backup when the remeshing would otherwise fail. However, this comes with the tradeoff that the mesh size field resolution can only be controlled on a coarser H = 2h grid. Other adjustments would also be needed in src/grid_refinement/field.cpp
to generate this grid.
The appearance of triangles may impact all tests involving Gmsh in a loop (with some duplication from the list above):
Resolving this issue could potentially be linked with changes to adopt Gmsh header files as in Issue #56.
This seems to be a deal.II issue, or perhaps how we are setting up the KKT system sizes: something about operators using MPI_COMM_SELF versus MPI_COMM_WORLD to count the elements.
Look into the performance of AD for assembling the Jacobian.
The current framework sets the degrees of freedom to an AD type and then performs the residual evaluation using that AD type. However, many operations act only linearly on this AD type and are trivial to differentiate manually.
A performance enhancement should be seen if we use AD to differentiate only the complicated parts, namely the nonlinear fluxes.
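A hedged sketch of the idea for the 1D Burgers flux (assuming Sacado forward-mode AD from Trilinos; the helper is an illustration, not PHiLiP's interface):

#include <Sacado.hpp>

using FadType = Sacado::Fad::DFad<double>;

// AD is confined to the nonlinear flux; the surrounding linear operators
// (mass matrix, basis gradients, lifting) act linearly on dflux_du, so
// their contributions to dR/du can be assembled without AD types.
void burgers_flux_and_jacobian(const double u, double &flux, double &dflux_du)
{
    FadType u_ad = u;
    u_ad.diff(0, 1); // seed: one independent variable
    const FadType f = 0.5 * u_ad * u_ad;
    flux     = f.val();
    dflux_du = f.dx(0);
}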
Ideally, I wouldn't have to run all the tests myself when a pull request occurs.
I have made many attempts to integrate Travis CI, without success. Right now, it seems the Docker image from the deal.II DockerHub repository does not have MPI (not sure, since it says it does), and therefore cannot compile PHiLiP.
A solution would be to have our own Docker container with the settings we currently use in PHiLiP and pull from that.
The dXvdXs sensitivities are correct when used with MPI. For some reason, when computed serially, the sensitivities do not match a finite difference. This is likely due to how the linear_constraint_operator is being formed. More investigation is needed.
A test was created as a reminder that dXvdXs is incorrect serially.
With the increasing number of parameters affecting DG, we should have a separate subsection for everything related to DG, such as overintegration, type of FR, collocation, split forms, etc.
Currently creating a HighOrderGrid class that will be responsible for generating this MappingFEField from the given Manifold.
Starting in commit 4dd6d62.