
PHiLiP's Issues

Jacobian evaluation too slow

Currently, the Jacobian evaluations within the HighOrderGrid class recompute the basis gradients on the reference element every time. This takes the bulk of the computational time in 3D.

We need to create an operator associated with a set of points, so that these reference-element quantities can be computed once and reused.
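
As a rough illustration of the caching idea (a minimal sketch under assumed names, not the PHiLiP implementation), the reference-element gradients could be evaluated once per finite-element/quadrature pair and stored in a table that is reused for every cell:

#include <deal.II/base/quadrature.h>
#include <deal.II/base/table.h>
#include <deal.II/base/tensor.h>
#include <deal.II/fe/fe.h>

// Hypothetical cache of reference-element basis gradients. Built once for a given
// finite element / quadrature pair, then reused for every cell's Jacobian evaluation.
template <int dim>
dealii::Table<2, dealii::Tensor<1,dim>>
build_reference_gradient_cache(const dealii::FiniteElement<dim> &fe,
                               const dealii::Quadrature<dim>    &quad)
{
    dealii::Table<2, dealii::Tensor<1,dim>> ref_grads(fe.dofs_per_cell, quad.size());
    for (unsigned int i = 0; i < fe.dofs_per_cell; ++i)
        for (unsigned int q = 0; q < quad.size(); ++q)
            ref_grads(i, q) = fe.shape_grad(i, quad.point(q)); // evaluated on the reference cell
    return ref_grads;
}

Such a table depends only on the reference element, so it can be built once per polynomial degree and shared by all cells.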

Make boundary IDs a parameter input

Currently, input boundaries are hard-coded such that, for example, ID 1001 is a slip wall. If we receive a grid that someone else generated with Gmsh, we would have to modify the mesh so that wall IDs are 1001.

Instead, a better way to handle this is to pass

boundary_slip_wall_id

as a parameter for the user to input. We would still set default IDs to avoid breaking existing tests, but this gives flexibility to someone who simply wants to use the code rather than develop it.
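
A minimal sketch of the corresponding declaration with deal.II's ParameterHandler (assuming a dealii::ParameterHandler prm as in the existing parameter declarations; the entry name and default are illustrative):

prm.declare_entry("boundary_slip_wall_id", "1001",
                  dealii::Patterns::Integer(0),
                  "Boundary ID in the input grid to be treated as a slip wall. "
                  "Defaults to 1001 so existing tests keep working.");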

Add entropy waves as a test

As described in the book "I do Like CFD" (Masatsuka, 2018), p. 240, Section 7.13.2.

This test would be similar to other manufactured solution tests, so you should be able to re-use code in grid_study.cpp to generate this test. The use of periodicity can be seen in the periodic explicit advection test case.

It would also nicely verify our periodic boundary conditions.
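
For reference, the entropy wave is an exact solution of the Euler equations in which velocity and pressure stay constant while a density perturbation is simply advected with the flow. A commonly used 1D form (the amplitude and wavenumber here are illustrative, not taken from the reference) is

\rho(x,t) = \rho_\infty + A \sin\big(\pi (x - u_\infty t)\big), \qquad u(x,t) = u_\infty, \qquad p(x,t) = p_\infty,

so on a periodic domain the density error after one period directly measures the order of accuracy.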

Verify collocation

Need to verify the collocation with some tests.

A first test would be to form a mass matrix using the collocated quadrature and check that it is diagonal, i.e. that the basis is discretely orthogonal.

Another test would be to manually check that the quadrature points are located at the solution points.
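
A minimal sketch of the first check, evaluated directly on the reference element (assuming GLL collocation; this is a standalone illustration, not tied to the PHiLiP class structure):

#include <deal.II/base/quadrature_lib.h>
#include <deal.II/fe/fe_dgq.h>
#include <algorithm>
#include <cmath>
#include <iostream>

int main()
{
    const unsigned int degree = 3;
    const dealii::QGaussLobatto<1> oned_quad(degree + 1);
    // Lagrange basis collocated on the GLL nodes.
    const dealii::FE_DGQArbitraryNodes<1> fe(oned_quad);

    double max_off_diagonal = 0.0;
    for (unsigned int i = 0; i < fe.dofs_per_cell; ++i)
        for (unsigned int j = 0; j < fe.dofs_per_cell; ++j) {
            double mass_ij = 0.0;
            for (unsigned int q = 0; q < oned_quad.size(); ++q)
                mass_ij += fe.shape_value(i, oned_quad.point(q))
                         * fe.shape_value(j, oned_quad.point(q))
                         * oned_quad.weight(q);
            if (i != j)
                max_off_diagonal = std::max(max_off_diagonal, std::abs(mass_ij));
        }
    // With collocation, the quadrature-evaluated mass matrix should be diagonal.
    std::cout << "max off-diagonal entry: " << max_off_diagonal << std::endl;
}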

Euler Cylinder Test Case fails in Debug Mode

output:

49: An error occurred in line <1715> of file </home/abtinameri/share/dealii/source/fe/mapping_fe_field.cc> in function
49: dealii::CellSimilarity::Similarity dealii::MappingFEField<dim, spacedim, VectorType, DoFHandlerType>::fill_fe_values(const typename dealii::Triangulation<dim, spacedim>::cell_iterator&, dealii::CellSimilarity::Similarity, const dealii::Quadrature&, const typename dealii::Mapping<dim, spacedim>::InternalDataBase&, dealii::internal::FEValuesImplementation::MappingRelatedData<dim, spacedim>&) const [with int dim = 2; int spacedim = 2; VectorType = dealii::LinearAlgebra::distributed::Vector; DoFHandlerType = dealii::DoFHandler<2, 2>; typename dealii::Triangulation<dim, spacedim>::cell_iterator = dealii::TriaIterator<dealii::CellAccessor<2, 2> >; typename dealii::Mapping<dim, spacedim>::InternalDataBase = dealii::Mapping<2, 2>::InternalDataBase]
49: The violated condition was:
49: det > 1e-12 * Utilities::fixed_power( cell->diameter() / std::sqrt(double(dim)))
49: Additional information:
49: The image of the mapping applied to cell with center [0.542651 1.31008] is distorted. The cell geometry or the mapping are invalid, giving a non-positive volume fraction of -0.116411 in quadrature point 83.
49:
49: Stacktrace:
49: -----------
49: #0 /home/abtinameri/share/dealii/install/lib/libdeal_II.g.so.9.2.0-pre: dealii::MappingFEField<2, 2, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>, dealii::DoFHandler<2, 2> >::fill_fe_values(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::CellSimilarity::Similarity, dealii::Quadrature<2> const&, dealii::Mapping<2, 2>::InternalDataBase const&, dealii::internal::FEValuesImplementation::MappingRelatedData<2, 2>&) const
49: #1 /home/abtinameri/share/dealii/install/lib/libdeal_II.g.so.9.2.0-pre: dealii::FEValues<2, 2>::do_reinit()
49: #2 /home/abtinameri/share/dealii/install/lib/libdeal_II.g.so.9.2.0-pre: void dealii::FEValues<2, 2>::reinit<dealii::hp::DoFHandler, false>(dealii::TriaIterator<dealii::DoFCellAccessor<dealii::hp::DoFHandler<2, 2>, false> > const&)
49: #3 /home/abtinameri/Desktop/thesis/PHiLiP/build/bin/PHiLiP_2D: PHiLiP::Tests::EulerCylinder<2, 4>::run_test() const
49: #4 /home/abtinameri/Desktop/thesis/PHiLiP/build/bin/PHiLiP_2D: main
49: --------------------------------------------------------
49:
49: Calling MPI_Abort now.
49: To break execution in a GDB session, execute 'break MPI_Abort' before running. You can also put the following into your ~/.gdbinit:
49: set breakpoint pending on
49: break MPI_Abort
49: set breakpoint pending auto
49: [abtinameri-ThinkPad-T460:06505] 1 more process has sent help message help-mpi-api.txt / mpi-abort
49: [abtinameri-ThinkPad-T460:06505] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
1/1 Test #49: MPI_2D_EULER_INTEGRATION_CYLINDER_LONG ...***Failed 500.26 sec

Move parameters relevant to FR tests to their own subsection.

The parameters listed below are test specific and therefore should not appear in the general parameters.

prm.declare_entry("use_energy", "false",
dealii::Patterns::Bool(),
"Not calculate energy by default. Otherwise, get energy per iteration.");
prm.declare_entry("use_L2_norm", "false",
dealii::Patterns::Bool(),
"Not calculate L2 norm by default (M+K). Otherwise, get L2 norm per iteration.");

Move them to a subsection of their own.
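
A minimal sketch of the grouping with ParameterHandler (the subsection name is hypothetical):

prm.enter_subsection("flux_reconstruction_tests");
{
    prm.declare_entry("use_energy", "false",
                      dealii::Patterns::Bool(),
                      "Calculate energy per iteration.");
    prm.declare_entry("use_L2_norm", "false",
                      dealii::Patterns::Bool(),
                      "Calculate the L2 norm (M+K) per iteration.");
}
prm.leave_subsection();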

Behavior of grid_refinement_fixed_fraction in parallel

Inconsistent refinement has been observed using the fixed-fraction approach in parallel. For example, the following plots show the error for the case 2D_GMSH_ANISO_SSHOCK_P1 run with MPI on 1, 2, and 4 cores. Here the error convergence graphs represent:

  • l2error.0: Uniform refinement
  • l2error.1: Fixed fraction refinement with 30% flagging
  • l2error.2: Continuous Gmsh/Bamg refinement with 1.5x complexity growth
  • l2error.3: Continuous Gmsh/Bamg refinement with 2.0x complexity growth

[Error convergence plots attached for MPIMAX=1 (ErrorPlot_mpi_1), MPIMAX=2 (ErrorPlot_mpi_2), and MPIMAX=4 (ErrorPlot_mpi_4).]

It can be seen that the performance of the fixed-fraction approach is severely degraded with many partitions. For the Gmsh-generated mesh, the convergence is perturbed slightly but mostly remains the same (the size fields are written in parallel, but the remeshing is done in serial from the base processor). I believe the fixed-fraction approach may be applying the refinement fraction on each core rather than globally. It is likely tied to this snippet of code:

template <int dim, int nstate, typename real, typename MeshType>
void GridRefinement_FixedFraction<dim,nstate,real,MeshType>::refine_grid_h()
{
    // Performing the call for refinement
    // #if PHILIP_DIM==1
    dealii::GridRefinement::refine_and_coarsen_fixed_number(
        *(this->tria),
        this->indicator,
        this->grid_refinement_param.refinement_fraction,
        this->grid_refinement_param.coarsening_fraction);
    // #else
    // dealii::parallel::distributed::GridRefinement::refine_and_coarsen_fixed_number(
    //     *(this->tria),
    //     this->indicator,
    //     this->grid_refinement_param.refinement_fraction,
    //     this->grid_refinement_param.coarsening_fraction);
    // #endif
}

For parallel runs, an approach similar to that in dg.cpp should be used, but this requires separate checks since the function is not compatible with certain triangulation types.

PHiLiP/src/dg/dg.cpp, lines 2379 to 2390 at commit 5de6b61:

if constexpr(dim == 1 ||
             !std::is_same<MeshType, dealii::parallel::distributed::Triangulation<dim>>::value) {
    dealii::GridRefinement::refine_and_coarsen_fixed_number(*high_order_grid->triangulation,
                                                            gradient_indicator,
                                                            0.05,
                                                            0.025);
} else {
    dealii::parallel::distributed::GridRefinement::refine_and_coarsen_fixed_number(*(high_order_grid->triangulation),
                                                                                   gradient_indicator,
                                                                                   0.05,
                                                                                   0.01);
}

As a result, the following tests have been locked to serial in the CMakeLists.txt:

  • 2D_GMSH_ANISO_SSHOCK_P1
  • 2D_GMSH_ANISO_SSHOCK_P2
  • 2D_GMSH_ANISO_BOUNDARYLAYER_P2
  • 2D_GMSH_ANISO_BOUNDARYLAYER_P3
  • 2D_GMSH_ADJOINT_SSHOCK_P1
  • 2D_GMSH_ADJOINT_SSHOCK_P2
  • 2D_GMSH_ADJOINT_BOUNDARYLAYER_P2
  • 2D_GMSH_ADJOINT_BOUNDARYLAYER_P3

Flux reconstruction through DG

According to

@Article{Zwanenburg2016,
author = {Zwanenburg, Philip and Nadarajah, Siva},
title = {Equivalence between the energy stable flux reconstruction and filtered discontinuous {G}alerkin schemes},
journal = {Journal of Computational Physics},
year = {2016},
volume = {306},
month = feb,
pages = {343--369},
doi = {10.1016/j.jcp.2015.11.036},
file = {:HigherOrder/Zwanenburg2016 - Equivalence between the Energy Stable Flux Reconstruction and Filtered Discontinuous Galerkin Schemes.pdf:},
publisher = {Elsevier BV},
}

We can implement FR by adding a modal filter to DG that is equivalent to the FR correction field.
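
Schematically (a loose sketch of the equivalence; see the paper for the precise operators), the ESFR scheme can be written as a DG scheme with a modified mass matrix,

(M + K) \frac{d \hat{u}}{dt} = R_{DG}(\hat{u}),

where K depends on the FR correction parameter c. The DG update M^{-1} R_{DG} is therefore filtered by (M + K)^{-1} M to obtain the FR update, and c = 0 gives K = 0, recovering standard DG.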

dRdX is wrong for h-adaptivity

Currently, dRdX is computed using AD without taking into account the AffineConstraints resulting from hanging nodes.

In the attached picture, nodes 1, 2, and 3 fully define the face of the left 2D quadratic quadrilateral. However, the right element has been subdivided such that the resulting children have additional DoFs 4 and 5. Those DoFs need to be constrained through dealii::DoFTools::make_hanging_node_constraints() to ensure a watertight mesh.

[Screenshot from 2020-12-18 17-08-34 attached.]

dRdX should be insensitive to DoFs 4 and 5, since they fully depend on DoFs 1, 2, and 3 and moving them independently is meaningless. However, the current dRdX does not take this into account and evaluates the sensitivities as if they were independent. That is, it matches the finite difference even when the finite-difference displacement moves 4 and 5 without PHiLiP::HighOrderGrid::ensure_conforming_mesh().

The fix would be to evaluate the metric terms using only DoFs 1, 2, and 3 instead of 4 and 5.
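
For reference, the standard deal.II pattern for building these constraints looks roughly like the following (a generic sketch, not the HighOrderGrid code; dof_handler is assumed to describe the grid DoFs):

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/affine_constraints.h>

template <int dim>
dealii::AffineConstraints<double>
build_hanging_node_constraints(const dealii::DoFHandler<dim> &dof_handler)
{
    dealii::AffineConstraints<double> constraints;
    dealii::DoFTools::make_hanging_node_constraints(dof_handler, constraints);
    constraints.close();
    // Dependent DoFs (4 and 5 in the picture) can then be kept consistent with the
    // independent ones (1, 2, 3) via constraints.distribute(volume_nodes), and the
    // AD seeds for dRdX should only be placed on the independent DoFs.
    return constraints;
}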

Rename header files to .hpp

The code started out using .h as the header suffix. Using .hpp makes more sense, to signal that the code is C++ only and not C-compatible.

Put in a CFL for time-step control

Currently, the time steps are absolute and we have to tweak the parameters for each test case. A CFL number would give a more consistent parameter to use across the various test cases.
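
A typical choice (a common DG heuristic, not necessarily the formula to adopt in PHiLiP) would be

\Delta t = \mathrm{CFL} \, \frac{h}{(2p + 1)\,(|u| + c)},

where h is a local cell size, p the polynomial degree, |u| the local convective speed, and c the speed of sound for the Euler equations.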

Use inheritance to create the various Operators from OperatorBase

Right now OperatorBase does it all by switching around if-statements. The proper way to do this is to derive the various operators from OperatorBase. OperatorBase would then hold properties common to finite element methods, such as quadratures and basis functions, but the derived classes would be responsible for implementing how those operators are applied, similarly to the concept of LinearOperator from deal.II.
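
A rough sketch of the intended hierarchy (the class names and apply() interface below are illustrative, not the existing PHiLiP classes):

#include <vector>

// Illustrative hierarchy: the base class owns the shared FEM data and declares the
// interface; derived operators decide how they are applied.
template <int dim, typename real>
class OperatorBase
{
public:
    virtual ~OperatorBase() = default;

    // Apply the operator to a local vector of coefficients.
    virtual void apply(const std::vector<real> &input,
                       std::vector<real> &output) const = 0;

protected:
    // Shared finite-element data (quadratures, basis functions, ...) would live here.
};

template <int dim, typename real>
class MassOperator : public OperatorBase<dim, real>
{
public:
    void apply(const std::vector<real> &input,
               std::vector<real> &output) const override
    {
        output = input; // placeholder: a real implementation applies the mass matrix
    }
};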

Tests involving reconstruct_poly.cpp fail in debug

This issue may affect many existing tests from tests/integration_tests_control_files/grid_refinement/* when run in Debug mode. The following error message will occur:

118: --------------------------------------------------------
118: An error occurred in line <1917> of file </home/keigan/Libraries/dealii/include/deal.II/lac/full_matrix.templates.h> in function
118:     void dealii::FullMatrix<number>::gauss_jordan() [with number = double]
118: The violated condition was: 
118:     max > 1.e-16 * typical_diagonal_element
118: Additional information: 
118:     The maximal pivot is 2.45876e-21, which is below the threshold. The matrix may be singular.
118: 
118: Stacktrace:
118: -----------
118: #0  /home/keigan/Libraries/dealii/install/lib/libdeal_II.g.so.9.3.0-pre: dealii::FullMatrix<double>::gauss_jordan()
118: #1  /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: dealii::Vector<double> PHiLiP::GridRefinement::ReconstructPoly<2, 1, double>::reconstruct_H1_norm<dealii::TriaActiveIterator<dealii::DoFCellAccessor<2, 2, false> > >(dealii::TriaActiveIterator<dealii::DoFCellAccessor<2, 2, false> > const&, dealii::PolynomialSpace<2>, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> const&)
118: #2  /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: dealii::Vector<double> PHiLiP::GridRefinement::ReconstructPoly<2, 1, double>::reconstruct_norm<dealii::TriaActiveIterator<dealii::DoFCellAccessor<2, 2, false> > >(PHiLiP::GridRefinement::NormType, dealii::TriaActiveIterator<dealii::DoFCellAccessor<2, 2, false> > const&, dealii::PolynomialSpace<2>, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> const&)
118: #3  /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: PHiLiP::GridRefinement::ReconstructPoly<2, 1, double>::reconstruct_directional_derivative(dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> const&, unsigned int)
118: #4  /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: PHiLiP::GridRefinement::GridRefinement_Continuous<2, 1, double, dealii::parallel::distributed::Triangulation<2, 2> >::field_h_hessian()
118: #5  /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: PHiLiP::GridRefinement::GridRefinement_Continuous<2, 1, double, dealii::parallel::distributed::Triangulation<2, 2> >::field()
118: #6  /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: PHiLiP::GridRefinement::GridRefinement_Continuous<2, 1, double, dealii::parallel::distributed::Triangulation<2, 2> >::output_results_vtk_method[abi:cxx11]()
118: #7  /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: PHiLiP::GridRefinement::GridRefinementBase<2, 1, double, dealii::parallel::distributed::Triangulation<2, 2> >::output_results_vtk(unsigned int)
118: #8  /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: PHiLiP::Tests::GridRefinementStudy<2, 1, dealii::parallel::distributed::Triangulation<2, 2> >::run_test() const
118: #9  /usr/local/include/PHiLiP/build/bin/PHiLiP_2D: main
118: --------------------------------------------------------
118: 
118: [keigan-OptiPlex-5060:04475] *** Process received signal ***
118: [keigan-OptiPlex-5060:04475] Signal: Aborted (6)
118: [keigan-OptiPlex-5060:04475] Signal code:  (-6)
118: [keigan-OptiPlex-5060:04475] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x128a0)[0x7f9661aec8a0]
118: [keigan-OptiPlex-5060:04475] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xc7)[0x7f9661727f47]
118: [keigan-OptiPlex-5060:04475] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x141)[0x7f96617298b1]
118: [keigan-OptiPlex-5060:04475] [ 3] /home/keigan/Libraries/dealii/install/lib/libdeal_II.g.so.9.3.0-pre(_ZN6dealii18deal_II_exceptions9internals22do_issue_error_nothrowERKNS_13ExceptionBaseE+0x0)[0x7f967778d139]
118: [keigan-OptiPlex-5060:04475] [ 4] /home/keigan/Libraries/dealii/install/lib/libdeal_II.g.so.9.3.0-pre(_ZN6dealii18deal_II_exceptions9internals20issue_error_noreturnINS_10FullMatrixIdE13ExcNotRegularEEEvNS1_17ExceptionHandlingEPKciS8_S8_S8_T_+0x88)[0x7f967751f75e]
118: [keigan-OptiPlex-5060:04475] [ 5] /home/keigan/Libraries/dealii/install/lib/libdeal_II.g.so.9.3.0-pre(_ZN6dealii10FullMatrixIdE12gauss_jordanEv+0x82d)[0x7f9677483e4d]
118: [keigan-OptiPlex-5060:04475] [ 6] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_ZN6PHiLiP14GridRefinement15ReconstructPolyILi2ELi1EdE19reconstruct_H1_normIN6dealii18TriaActiveIteratorINS4_15DoFCellAccessorILi2ELi2ELb0EEEEEEENS4_6VectorIdEERKT_NS4_15PolynomialSpaceILi2EEERKNS4_13LinearAlgebra11distributed6VectorIdNS4_11MemorySpace4HostEEE+0x182d)[0x55acf01f8f39]
118: [keigan-OptiPlex-5060:04475] [ 7] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_ZN6PHiLiP14GridRefinement15ReconstructPolyILi2ELi1EdE16reconstruct_normIN6dealii18TriaActiveIteratorINS4_15DoFCellAccessorILi2ELi2ELb0EEEEEEENS4_6VectorIdEENS0_8NormTypeERKT_NS4_15PolynomialSpaceILi2EEERKNS4_13LinearAlgebra11distributed6VectorIdNS4_11MemorySpace4HostEEE+0x82)[0x55acf01f5d3a]
118: [keigan-OptiPlex-5060:04475] [ 8] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_ZN6PHiLiP14GridRefinement15ReconstructPolyILi2ELi1EdE34reconstruct_directional_derivativeERKN6dealii13LinearAlgebra11distributed6VectorIdNS3_11MemorySpace4HostEEEj+0x186)[0x55acf01e6344]
118: [keigan-OptiPlex-5060:04475] [ 9] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_ZN6PHiLiP14GridRefinement25GridRefinement_ContinuousILi2ELi1EdN6dealii8parallel11distributed13TriangulationILi2ELi2EEEE15field_h_hessianEv+0x159)[0x55acf01a478d]
118: [keigan-OptiPlex-5060:04475] [10] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_ZN6PHiLiP14GridRefinement25GridRefinement_ContinuousILi2ELi1EdN6dealii8parallel11distributed13TriangulationILi2ELi2EEEE5fieldEv+0x66)[0x55acf01a25cc]
118: [keigan-OptiPlex-5060:04475] [11] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_ZN6PHiLiP14GridRefinement25GridRefinement_ContinuousILi2ELi1EdN6dealii8parallel11distributed13TriangulationILi2ELi2EEEE25output_results_vtk_methodB5cxx11Ev+0x189)[0x55acf01a1fa3]
118: [keigan-OptiPlex-5060:04475] [12] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_ZN6PHiLiP14GridRefinement18GridRefinementBaseILi2ELi1EdN6dealii8parallel11distributed13TriangulationILi2ELi2EEEE18output_results_vtkEj+0x21a)[0x55acf009379a]
118: [keigan-OptiPlex-5060:04475] [13] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_ZNK6PHiLiP5Tests19GridRefinementStudyILi2ELi1EN6dealii8parallel11distributed13TriangulationILi2ELi2EEEE8run_testEv+0x131c)[0x55acefafba5a]
118: [keigan-OptiPlex-5060:04475] [14] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(main+0x2e9)[0x55acefa0e4a3]
118: [keigan-OptiPlex-5060:04475] [15] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7)[0x7f966170ab97]
118: [keigan-OptiPlex-5060:04475] [16] /usr/local/include/PHiLiP/build/bin/PHiLiP_2D(_start+0x2a)[0x55acefa0e0da]
118: [keigan-OptiPlex-5060:04475] *** End of error message ***
118: --------------------------------------------------------------------------
118: mpirun noticed that process rank 0 with PID 0 on node keigan-OptiPlex-5060 exited on signal 6 (Aborted).
118: --------------------------------------------------------------------------

The snippet shown is from 2D_LPCVT_FB_BOUNDARYLAYER_P3. This is caused by the following lines in reconstruct_poly.cpp:

// solving the system
dealii::Vector<real> coeffs(n_poly);
mat.gauss_jordan();
mat.vmult(coeffs, rhs);

Luckily, this does not actually cause a problem down the line when used in Release mode, as the mesh size is limited during the remeshing. However, it triggers an assert statement in deal.II. To avoid this error, either a small diagonal term can be added to the matrix, or min/max size limits can be imposed ahead of time (preventing the singular inversion).
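
A minimal sketch of the first option, reusing the variables from the snippet above (mat, rhs, n_poly, real) with an illustrative tolerance:

// Regularize the least-squares matrix before inversion so that gauss_jordan()
// does not trip the singular-pivot assertion in Debug mode.
double typical_diagonal = 0.0;
for (unsigned int i = 0; i < mat.m(); ++i)
    typical_diagonal = std::max(typical_diagonal, std::abs(mat(i, i)));
if (typical_diagonal == 0.0)
    typical_diagonal = 1.0;
const double eps = 1e-12;
for (unsigned int i = 0; i < mat.m(); ++i)
    mat(i, i) += eps * typical_diagonal;

dealii::Vector<real> coeffs(n_poly);
mat.gauss_jordan();
mat.vmult(coeffs, rhs);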

Currently this is only observed for 2D_LPCVT_FB_BOUNDARYLAYER_P3, but other tests that touch this line may also be impacted:

  • 2D_ANISOTROPIC_FF_H
  • 2D_MSH_QUADRATIC_FIELD_H
  • 2D_GMSH_ANISO_SSHOCK_P1
  • 2D_GMSH_ANISO_SSHOCK_P2
  • 2D_GMSH_ANISO_BOUNDARYLAYER_P2
  • 2D_GMSH_ANISO_BOUNDARYLAYER_P3
  • 2D_GMSH_ADJOINT_SSHOCK_P1
  • 2D_GMSH_ADJOINT_SSHOCK_P2
  • 2D_GMSH_ADJOINT_BOUNDARYLAYER_P2
  • 2D_GMSH_ADJOINT_BOUNDARYLAYER_P3
  • 2D_LPCVT_FB_SSHOCK_P1
  • 2D_LPCVT_FB_SSHOCK_P2
  • 2D_LPCVT_FB_BOUNDARYLAYER_P2
  • 2D_LPCVT_FB_BOUNDARYLAYER_P3
  • 2D_LPCVT_ADJ_SSHOCK_P1
  • 2D_LPCVT_ADJ_SSHOCK_P2
  • 2D_LPCVT_ADJ_BOUNDARYLAYER_P2
  • 2D_LPCVT_ADJ_BOUNDARYLAYER_P3

P-adaptivity still not stable.

We still need to ensure that neighbouring cells are no more than one degree apart. Furthermore, we need to ensure that the correct quadrature is used on faces between cells of two different degrees. From a previous discussion with Philip, the degree to be used is the lower one. Still to be revisited.

P1 MappingFEField is not agnostic to Triangulation vertices

The Triangulation vertices should not play any role after using a MappingFEField. However, they do seem to have some impact for a P1 geometry. See

// Weirdly enough, it does affect the p1 geometry discretization. Not sure why.
// It might be using some of the Triangulation's vertices instead of the MappingFEField nodes?
dealii::GridTools::transform(
    [](const dealii::Point<dim> &old_point) -> dealii::Point<dim> {
        dealii::Point<dim> new_point;
        for (int d = 0; d < dim; ++d) {
            new_point[d] = 0.0 * old_point[d] + 1.0;
        }
        return new_point;
    },
    grid);

Adjoint consistency

Adjoint consistency is lost by computing F* = F*(U_in, U_bc) instead of F* = F*(U_bc, U_bc). See the comment in assemble_boundary_term_implicit.

Find the adjoint-consistent boundary condition for Neumann boundary conditions for the diffusion and convection-diffusion problems.

Basically, what is the correct formulation for the exterior gradient? Do we need some extra terms in the functional evaluation?

This issue needs further investigation.

Residual Smoothing for Newton PTC

Look into the residual smoothing paper below for a more robust nonlinear solver:

@Article{Mavriplis2021,
author = {Dimitri J. Mavriplis},
date = {2021-02},
journaltitle = {Computers {&} Fluids},
title = {A Residual Smoothing Strategy for Accelerating Newton Method Continuation},
doi = {10.1016/j.compfluid.2021.104859},
pages = {104859},
file = {:Mavriplis2021 - A Residual Smoothing Strategy for Accelerating Newton Method Continuation.pdf:PDF},
publisher = {Elsevier {BV}},
}

Remove duplication of create_collection_tuple

The lines of OperatorBase::create_collection_tuple have obviously been copied from DGBase. We should avoid such copies and instead have a common function that both DGBase and OperatorBase call.

template <int dim, int nstate, typename real>
std::tuple<
    dealii::hp::FECollection<dim>,   // Solution FE basis functions
    dealii::hp::QCollection<dim>,    // Volume quadrature
    dealii::hp::QCollection<dim-1>,  // Face quadrature
    dealii::hp::QCollection<1>,      // 1D quadrature for strong form
    dealii::hp::FECollection<dim> >  // Flux Basis polynomials for strong form
OperatorBase<dim,nstate,real>::create_collection_tuple(const unsigned int max_degree, const Parameters::AllParameters *const parameters_input) const
{
    dealii::hp::FECollection<dim>   fe_coll;          // basis functions collection
    dealii::hp::QCollection<dim>    volume_quad_coll; // volume flux nodes
    dealii::hp::QCollection<dim-1>  face_quad_coll;   // facet flux nodes
    dealii::hp::QCollection<1>      oned_quad_coll;   // 1D flux nodes
    dealii::hp::FECollection<dim>   fe_coll_lagr;     // flux basis collocated on flux nodes

    // for p=0, we use a p=1 FE for collocation, since there's no p=0 quadrature for Gauss Lobatto
    if (parameters_input->use_collocated_nodes==true)
    {
        int degree = 1;
        const dealii::FE_DGQ<dim> fe_dg(degree);
        const dealii::FESystem<dim,dim> fe_system(fe_dg, nstate);
        fe_coll.push_back (fe_system);

        dealii::Quadrature<1>     oned_quad(degree+1);
        dealii::Quadrature<dim>   volume_quad(degree+1);
        dealii::Quadrature<dim-1> face_quad(degree+1); // removed const
        if (parameters_input->use_collocated_nodes) {
            dealii::QGaussLobatto<1>   oned_quad_Gauss_Lobatto (degree+1);
            dealii::QGaussLobatto<dim> vol_quad_Gauss_Lobatto (degree+1);
            oned_quad = oned_quad_Gauss_Lobatto;
            volume_quad = vol_quad_Gauss_Lobatto;
            if (dim == 1) {
                dealii::QGauss<dim-1> face_quad_Gauss_Legendre (degree+1);
                face_quad = face_quad_Gauss_Legendre;
            } else {
                dealii::QGaussLobatto<dim-1> face_quad_Gauss_Lobatto (degree+1);
                face_quad = face_quad_Gauss_Lobatto;
            }
        } else {
            dealii::QGauss<1>     oned_quad_Gauss_Legendre (degree+1);
            dealii::QGauss<dim>   vol_quad_Gauss_Legendre (degree+1);
            dealii::QGauss<dim-1> face_quad_Gauss_Legendre (degree+1);
            oned_quad = oned_quad_Gauss_Legendre;
            volume_quad = vol_quad_Gauss_Legendre;
            face_quad = face_quad_Gauss_Legendre;
        }
        volume_quad_coll.push_back (volume_quad);
        face_quad_coll.push_back (face_quad);
        oned_quad_coll.push_back (oned_quad);

        dealii::FE_DGQArbitraryNodes<dim,dim> lagrange_poly(oned_quad);
        fe_coll_lagr.push_back (lagrange_poly);
    }

    int minimum_degree = (parameters_input->use_collocated_nodes==true) ? 1 : 0;
    for (unsigned int degree=minimum_degree; degree<=max_degree; ++degree) {
        // Solution FECollection
        const dealii::FE_DGQ<dim> fe_dg(degree);
        //const dealii::FE_DGQArbitraryNodes<dim,dim> fe_dg(dealii::QGauss<1>(degree+1));
        //std::cout << degree << " fe_dg.tensor_degree " << fe_dg.tensor_degree() << " fe_dg.n_dofs_per_cell " << fe_dg.n_dofs_per_cell() << std::endl;
        const dealii::FESystem<dim,dim> fe_system(fe_dg, nstate);
        fe_coll.push_back (fe_system);

        dealii::Quadrature<1>     oned_quad(degree+1);
        dealii::Quadrature<dim>   volume_quad(degree+1);
        dealii::Quadrature<dim-1> face_quad(degree+1); // removed const
        if (parameters_input->use_collocated_nodes) {
            dealii::QGaussLobatto<1>   oned_quad_Gauss_Lobatto (degree+1);
            dealii::QGaussLobatto<dim> vol_quad_Gauss_Lobatto (degree+1);
            oned_quad = oned_quad_Gauss_Lobatto;
            volume_quad = vol_quad_Gauss_Lobatto;
            if (dim == 1) {
                dealii::QGauss<dim-1> face_quad_Gauss_Legendre (degree+1);
                face_quad = face_quad_Gauss_Legendre;
            } else {
                dealii::QGaussLobatto<dim-1> face_quad_Gauss_Lobatto (degree+1);
                face_quad = face_quad_Gauss_Lobatto;
            }
        } else {
            const unsigned int overintegration = parameters_input->overintegration;
            dealii::QGauss<1>     oned_quad_Gauss_Legendre (degree+1+overintegration);
            dealii::QGauss<dim>   vol_quad_Gauss_Legendre (degree+1+overintegration);
            dealii::QGauss<dim-1> face_quad_Gauss_Legendre (degree+1+overintegration);
            oned_quad = oned_quad_Gauss_Legendre;
            volume_quad = vol_quad_Gauss_Legendre;
            face_quad = face_quad_Gauss_Legendre;
        }
        volume_quad_coll.push_back (volume_quad);
        face_quad_coll.push_back (face_quad);
        oned_quad_coll.push_back (oned_quad);

        dealii::FE_DGQArbitraryNodes<dim,dim> lagrange_poly(oned_quad);
        fe_coll_lagr.push_back (lagrange_poly);
    }

    return std::make_tuple(fe_coll, volume_quad_coll, face_quad_coll, oned_quad_coll, fe_coll_lagr);
}
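
One way to remove the duplication (a structural sketch only; the namespace and free-function placement are hypothetical, and the body would be the code currently duplicated above) is to have both classes forward to a shared free function:

// Hypothetical shared helper: both DGBase::create_collection_tuple and
// OperatorBase::create_collection_tuple would simply forward to this function.
namespace PHiLiP {
namespace CollectionTools {

template <int dim, int nstate, typename real>
std::tuple< dealii::hp::FECollection<dim>,
            dealii::hp::QCollection<dim>,
            dealii::hp::QCollection<dim-1>,
            dealii::hp::QCollection<1>,
            dealii::hp::FECollection<dim> >
create_collection_tuple(const unsigned int max_degree,
                        const Parameters::AllParameters *const parameters_input);

} // namespace CollectionTools
} // namespace PHiLiP

// In both classes:
// return CollectionTools::create_collection_tuple<dim,nstate,real>(max_degree, parameters_input);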

Create a manifold to allow grid refinement

I am currently using Prof. Heltai's method, described in this issue and summarized below, to extract a MappingFEField from a known Triangulation and Manifold. The initial Triangulation can (must) be linear, and I would use a TransfiniteInterpolation manifold to curve the initial grid.
  • have a DoFHandler with spacedim FE_Q/FE_Bernstein describing the absolute geometry
  • attach manifolds (as expensive as you wish)
  • call VectorTools::get_position_vector
  • create MappingFEField with the computed position vector
  • discard manifolds and only use the MappingFEField afterwards
However, I would now like to refine the mesh. This means that the grid refinement process needs a Manifold to query the new points.

Therefore, it seems like I would need to derive some kind of PolynomialManifold (as mentioned in a previous post) from the Manifold class, which would take a reference to the MappingFEField in its constructor. I would then use the MappingFEField to compute the necessary values in the overridden virtual functions of the Manifold class. As a result, I would have a PolynomialManifold that would allow my triangulation to be refined using the current polynomial mapping of the element.

This would alternate between the refinement process and the deformation process:

  • When a deformation occurs, simply displace the Triangulation vertices and the MappingFEField euler_vector. The PolynomialManifold, holding a reference to the mapping, would automatically update.
  • When refinement occurs, extract a new MappingFEField using Prof. Heltai's method, construct a new PolynomialManifold, and assign it to the Triangulation.

Moving the surface nodes to exact geometry

As a result of using a pure polynomial geometry representation (the feature implemented in issue #9), we simply cannot use an exact boundary to refine the surface.

As we refine, we would asymptotically have a constant error since the geometry will never be exact. For test cases that depend on having a smoother and smoother geometry, such as the convergence to zero entropy for the Euler cylinder or Gaussian bump, we need the initial surface grid to be fine enough such that this constant error is less than the error improvement we get from refining. This increases the computational cost since the initial grid now needs to be much larger.

A solution to this would be to move/project the surface nodes to the exact geometry every time a surface cell has been refined. The geometry would still be polynomial, but the refinement would then have new nodes on the exact surface.
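
As a minimal sketch of the projection idea for an analytically known surface (here a cylinder of radius R centered at the origin; this is illustrative, not the PHiLiP surface handling, and assumes dim >= 2):

#include <deal.II/base/point.h>
#include <cmath>

// Project a surface node onto the exact cylinder of radius R centered at the origin.
// The geometry stays polynomial, but newly created surface nodes land on the true surface.
template <int dim>
dealii::Point<dim> project_to_cylinder(const dealii::Point<dim> &node, const double radius)
{
    dealii::Point<dim> projected = node;
    const double r = std::sqrt(node[0]*node[0] + node[1]*node[1]);
    if (r > 0.0) {
        projected[0] = node[0] * radius / r;
        projected[1] = node[1] * radius / r;
    }
    return projected;
}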

Euler wall BC for gradient

The boundary gradient is used in the artificial dissipation.
Currently, the artificial dissipation is set to 0 at the boundaries, so we do not require the gradient. However, other papers are able to use non-zero dissipation at the boundary, and we should too, since shocks usually extend down to the boundary.

Right now, when non-zero dissipation is used at the boundary, some odd artifacts start showing up.
Currently, the gradient is simply copied into the ghost cell.

Consider using Navier-Stokes-like boundary conditions

[Screenshot from 2021-02-13 09-36-51 attached.]

Gmsh interface via API

Current methods for continuous grid refinement rely on external Gmsh command-line calls to perform remeshing between steps. At the moment this is done through a series of file writes and command-line calls, originating from

if (!GmshOut<dim,real>::call_gmsh(write_geoname, output_name))
    return;

which also needs associated .geo and .pos files generated from src/grid_refinement/gmsh_out.cpp. A better solution would be to use the Gmsh API to perform these tasks directly. This change would avoid unnecessary IO, give more direct control over the options used by the mesh generator, and combine well with other future enhancements in this area.

Mainly, the changes would involve interfacing between the deal.II and Gmsh libraries to directly import the updated mesh into the existing data structures.
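
For illustration, the remeshing step could look roughly like the following with the Gmsh C++ API (a sketch of the API usage only; the function name and file names are placeholders):

#include <gmsh.h>
#include <string>

// Sketch: replace the command-line call with direct Gmsh API calls.
void remesh_with_gmsh_api(const std::string &geo_file, const std::string &msh_file)
{
    gmsh::initialize();
    gmsh::open(geo_file);            // .geo file containing the geometry and size field
    gmsh::model::mesh::generate(2);  // generate the 2D mesh
    gmsh::write(msh_file);           // the mesh could instead be kept in memory for deal.II import
    gmsh::finalize();
}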

This change will affect Issue #57 and impact the following set of existing tests:

  • 2D_GMSH_ANISO_SSHOCK_P1
  • 2D_GMSH_ANISO_SSHOCK_P2
  • 2D_GMSH_ANISO_BOUNDARYLAYER_P2
  • 2D_GMSH_ANISO_BOUNDARYLAYER_P3
  • 2D_GMSH_ADJOINT_SSHOCK_P1
  • 2D_GMSH_ADJOINT_SSHOCK_P2
  • 2D_GMSH_ADJOINT_BOUNDARYLAYER_P2
  • 2D_GMSH_ADJOINT_BOUNDARYLAYER_P3

Manufactured Solution class

Currently, the manufactured solution is generated using the very bad practice of pre-processor #if statements.

This should be replaced by creating a ManufacturedSolution class and a manufactured-solution factory that returns the solution selected at run-time.
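
A rough sketch of the intended structure (class, enum, and factory names are illustrative):

#include <deal.II/base/point.h>
#include <cmath>
#include <memory>

// Illustrative base class: each manufactured solution provides its value (and, in
// practice, the gradients/Hessians needed for the source term).
template <int dim, typename real>
class ManufacturedSolutionFunction
{
public:
    virtual ~ManufacturedSolutionFunction() = default;
    virtual real value(const dealii::Point<dim> &point) const = 0;
};

template <int dim, typename real>
class ManufacturedSolutionSine : public ManufacturedSolutionFunction<dim, real>
{
public:
    real value(const dealii::Point<dim> &point) const override
    {
        real val = 1.0;
        for (int d = 0; d < dim; ++d)
            val *= std::sin(point[d]);
        return val;
    }
};

// Factory selecting the solution at run-time instead of with preprocessor #if blocks.
enum class ManufacturedSolutionType { sine /*, cosine, exponential, ... */ };

template <int dim, typename real>
std::unique_ptr<ManufacturedSolutionFunction<dim, real>>
create_manufactured_solution(const ManufacturedSolutionType type)
{
    if (type == ManufacturedSolutionType::sine)
        return std::make_unique<ManufacturedSolutionSine<dim, real>>();
    return nullptr;
}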

Add Euler with artificial dissipation

This will allow us to capture shocks by locally adding viscosity, and it will increase the stability of our steady-state convergence since any temporary shocks will dissipate.

See paper from

@InProceedings{Persson2006,
author = {Persson, Per-Olof and Peraire, Jaime},
title = {Sub-Cell Shock Capturing for Discontinuous {Galerkin} Methods},
booktitle = {44th {AIAA} Aerospace Sciences Meeting and Exhibit},
year = {2006},
publisher = {American Institute of Aeronautics and Astronautics},
month = {jan},
doi = {10.2514/6.2006-112},
file = {:Persson2006 - Sub Cell Shock Capturing for Discontinuous Galerkin Methods.pdf:PDF},
timestamp = {2018-11-05},
}
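
For reference, the shock sensor in that paper is based on the energy contained in the highest modes (the exact constants should be checked against the reference),

S_e = \frac{(u - \hat{u},\, u - \hat{u})_e}{(u,\, u)_e}, \qquad s_e = \log_{10} S_e,

where \hat{u} is the solution truncated to degree p-1. The artificial viscosity is then ramped smoothly from 0 to a maximum value \epsilon_0 \sim h/p as s_e crosses a threshold around s_0 \sim \log_{10}(1/p^4).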

Strong DG convergence orders

Interpolating the flux using an order P polynomial results in a loss of order for even P. This happens for nonlinear problems, but not for linear advection.

No tests currently fail, but taking an existing nonlinear problem (like 1D Burgers or Euler) and setting
set use_weak_form = false
will demonstrate the loss of convergence.

This can be resolved by interpolating (or perhaps projecting, according to Jameson (2012) and Zwanenburg (2016)) the flux to a P+1 polynomial.

An earlier attempt was made to project the flux to an order P polynomial (see the flux_projection branch) and to overintegrate this projection. No success with a P polynomial.

Another possible source of error could be that the strong form is not adjoint consistent. Harriman et al. (2002) show that an adjoint-inconsistent discretization results in order P convergence for even P and order P+1 convergence for odd P, just like we are witnessing.

Re-arrange assemble residual loop

The current residual loop is extremely messy, especially with the new periodic boundary conditions added in.

We probably want to break it down into multiple smaller functions.

Parameterization for various control variables

The current implementation uses FFD as the default design variables. This is not ideal if we want to use a different parameterization, or simply the surface points.

Have a hierarchy such that

  • FlowConstraints_VolumePoints uses volume mesh points as an input
  • FlowConstraints_SurfacePoints derived from FlowConstraints_VolumePoints uses surface mesh points as an input, and moves the VolumePoints based on an input MeshMover
  • FlowConstraints_SurfaceParameterization derived from FlowConstraints_SurfacePoints uses the parameterization design points as inputs

where the current implementation would be a form of FlowConstraints_SurfaceParameterization.

VTK output of solution order

Due to the recent merge from Abtin, the fe_indices do not necessarily correspond to the polynomial order of the solution.

This is because using collocation requires the minimum polynomial order to be 1, not 0, since we cannot have a single GLL node. As a result, fe_index = 0 would correspond to a polynomial of order 1.

The VTK output was wrong last time Abtin showed it to me.

Inconsistent behavior with Gmsh tests

The current framework for mesh adaptation based on a continuous mesh model relies on Gmsh to perform the domain remeshing step. However, some discrepancies have been observed between Gmsh versions during these steps. The current code has primarily been tested with v4.6.0, although the behavior also varied between users (e.g., mine vs. Doug's). This problem becomes more severe in the case of goal-oriented adaptation, where the new mesh is generated as an iterative update: if the series of adaptations leads to an unsuitable intermediate mesh, it may not be able to find a reasonable update (leading to an error).

This primarily affects the following set of tests:

  • 2D_GMSH_ADJOINT_SSHOCK_P1
  • 2D_GMSH_ADJOINT_SSHOCK_P2
  • 2D_GMSH_ADJOINT_BOUNDARYLAYER_P2
  • 2D_GMSH_ADJOINT_BOUNDARYLAYER_P3

Additionally, in some rare cases during these iterations, the Blossom-Quad recombination step may not fully merge the mesh to form an all-quad mesh. The remaining triangular elements prevent deal.II from reading the mesh and hence stop the iterations from continuing. In some cases, these issues have been alleviated by slightly perturbing the otherwise failing step size. However, this is a highly version- and user-dependent fix. An example of this can be seen in

set complexity_scale = 2.01 # perturbed to prevent triangles appearing

This issue could be addressed by including a subdivision step in the Gmsh call with the .geo command

Mesh.SubdivisionAlgorithm = 1;

as is also used in some related deal.II library functions. This could be applied either as the default or as a backup when the remeshing would otherwise fail. However, this comes with the trade-off that the mesh size field resolution can only be controlled on a coarser H = 2h grid. Other adjustments would also be needed in src/grid_refinement/field.cpp to generate this grid.

The appearance of triangles may impact all tests involving Gmsh in a loop (some duplication with the list above):

  • 2D_GMSH_ANISO_SSHOCK_P1
  • 2D_GMSH_ANISO_SSHOCK_P2
  • 2D_GMSH_ANISO_BOUNDARYLAYER_P2
  • 2D_GMSH_ANISO_BOUNDARYLAYER_P3
  • 2D_GMSH_ADJOINT_SSHOCK_P1
  • 2D_GMSH_ADJOINT_SSHOCK_P2
  • 2D_GMSH_ADJOINT_BOUNDARYLAYER_P2
  • 2D_GMSH_ADJOINT_BOUNDARYLAYER_P3

Resolving this issue could potentially be linked with changes to adopt Gmsh header files as in Issue #56.

Automatic differentiation of the residual performance

Look into the performance of AD for assembling the Jacobian.

The current framework sets the degrees of freedom to an AD type and then performs the residual evaluation using that AD type. However, many operations only act linearly on this AD type and are trivial to differentiate manually.

A performance enhancement should be seen if we use AD only for the complicated parts, namely the nonlinear fluxes, and differentiate the linear operations by hand.
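
For context, the seeding/extraction pattern with Sacado forward AD looks like this (a standalone illustration, not the PHiLiP assembly code); the point of the issue is that linear operations such as interpolation to quadrature points do not need to go through this machinery:

#include <Sacado.hpp>
#include <iostream>

int main()
{
    using FadType = Sacado::Fad::DFad<double>;

    FadType u = 1.3;
    u.diff(0, 1);                      // seed: u is independent variable 0 out of 1
    const FadType flux = 0.5 * u * u;  // nonlinear (Burgers) flux, differentiated by AD
    std::cout << "dflux/du = " << flux.dx(0) << std::endl; // expected: 1.3
}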

Use Travis CI for continuous integration

Ideally, I wouldn't have to run all the tests myself when a pull request occurs.

I have made many attempts to integrate Travis CI without success. Right now, it seems like the Docker image from the deal.II Docker Hub repository does not have MPI (not sure, since it says it does), and therefore cannot compile PHiLiP.

A solution would be to have our own Docker container with the settings we currently use in PHiLiP and pull from that.

2D_dXvdXs_serial test fail

The dXvdXs sensitivities are correct when used with MPI. For some reason, when computed serially, the sensitivities do not match a finite difference. This is likely due to how the linear_constraint_operator is being formed. More investigation is needed.

A test was created as a reminder that dXvdXs is incorrect in serial.

Create a subsection for DG parameters

With the increasing number of parameters affecting DG, we should have a different subsection for everything related to DG, such as overintegration, type of FR, collocation, split forms, etc.
