
fbpinns's Introduction

Finite basis physics-informed neural networks (FBPINNs)


This repository allows you to solve forward and inverse problems related to partial differential equations (PDEs) using finite basis physics-informed neural networks (FBPINNs).

🔥 MAJOR UPDATE 🔥: we have rewritten the fbpinns library in JAX: it now runs 10-1000X faster than the original PyTorch code (by parallelising subdomain computations using jax.vmap) and scales to 1000s+ subdomains. We have also added extra functionality: you can now solve inverse problems, add arbitrary types of boundary/data constraints, define irregular/multilevel domain decompositions and custom subdomain networks, and the high-level interface is much more flexible and easier to use. See the Release note for more info.

FBPINNs are described in detail here: Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable domain decomposition approach for solving differential equations, B. Moseley, T. Nissen-Meyer and A. Markham, Jul 2023 Advances in Computational Mathematics. See the slides from our 2023 Maths4DL conference talk here.


[animations: FBPINN and PINN solving the high-frequency 1D harmonic oscillator; plot: test loss comparison]

Fig 1: FBPINN vs PINN solving the high-frequency 1D harmonic oscillator

Why FBPINNs?

  • Physics-informed neural networks (PINNs) are a popular approach for solving forward and inverse problems related to PDEs
  • However, PINNs often struggle to solve problems with high frequencies and/or multi-scale solutions
  • This is due to the spectral bias of neural networks and the rapidly increasing complexity of the PINN optimisation problem
  • FBPINNs improve the performance of PINNs in this regime by combining them with domain decomposition, individual subdomain normalisation and flexible subdomain training schedules
  • Empirically, FBPINNs significantly outperform PINNs (in terms of accuracy and computational efficiency) when solving problems with high frequencies and multi-scale solutions (Fig 1 and 2)

Fig 2: FBPINN solution of the (2+1)D wave equation with multiscale sources

How are FBPINNs different to PINNs?

Fig 3: FBPINN workflow overview

To improve the scalability of PINNs to high-frequency/multiscale solutions:

  • FBPINNs divide the problem domain into many small, overlapping subdomains (Fig 3).

  • A neural network is placed within each subdomain, and the solution to the PDE is defined as the summation over all subdomain networks.

  • Each subdomain network is locally confined to its subdomain by multiplying it by a smooth, differentiable window function.

  • Finally, the inputs of each network are individually normalised over their subdomain.

The hypothesis is that this "divide and conquer" approach significantly reduces the complexity of the PINN optimisation problem. Furthermore, individual subdomain normalisation ensures the "effective" frequency each subdomain network sees is low, reducing the effect of spectral bias.
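
To make this concrete, here is a minimal JAX sketch of the construction for a 1D domain. The cosine-squared window, the normalisation and the network interface below are illustrative stand-ins; fbpinns' own Decomposition and Network classes implement the production versions.

import jax.numpy as jnp

def window(x, centre, width):
    # smooth, differentiable window: 1 at the subdomain centre, 0 outside the subdomain
    d = jnp.clip(jnp.abs(x - centre) / (width / 2), 0.0, 1.0)
    return jnp.cos(0.5 * jnp.pi * d) ** 2

def normalise(x, centre, width):
    # map the subdomain onto [-1, 1], so each network sees a low "effective" frequency
    return 2 * (x - centre) / width

def fbpinn_solution(x, params, centres, widths, network_fn):
    # the PDE solution: sum of window-confined, individually normalised subdomain networks
    return sum(window(x, c, w) * network_fn(p, normalise(x, c, w))
               for p, c, w in zip(params, centres, widths))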

Subdomain scheduling

Fig 4: Solving the time-dependent Burgers' equation using a time-stepping subdomain scheduler

Another advantage of using domain decomposition is that we can control which parts of the domain are solved at each training step.

This is useful if we want to control how boundary conditions are communicated across the domain.

For example, we can define a time-stepping scheduler to solve time-dependent PDEs, and learn the solution forwards in time from a set of initial conditions (Fig 4).

This is done by specifying a subdomain scheduler (from fbpinns.schedulers), which defines which subdomains are actively training and which subdomains have fixed parameters at each training step.
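
For example, a sketch of how a scheduler is selected (the Constants keys and the default AllActiveSchedulerND match the library; check fbpinns.schedulers for the exact names of the time-stepping schedulers):

from fbpinns.schedulers import AllActiveSchedulerND
from fbpinns.constants import Constants

c = Constants(
    # ... domain, problem, decomposition and network definitions as usual ...
    scheduler=AllActiveSchedulerND,  # default: all subdomains train at every step
    scheduler_kwargs=dict(),  # swap in a time-stepping scheduler here to learn forwards in time
)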

Installation

fbpinns is written purely in Python; it only requires Python libraries to run.

JAX is used as the main computational engine for fbpinns.

To install fbpinns, we recommend setting up a new Python environment, for example:

conda create -n fbpinns python=3  # Using conda
conda activate fbpinns

then cloning this repository:

git clone git@github.com:benmoseley/FBPINNs.git

and running this command in the base FBPINNs/ directory (will also install all of the dependencies):

pip install -e .

Note this installs the fbpinns package in "editable mode" - you can make changes to the source code and they are immediately present in the package.
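
A quick smoke test that the install worked (assumes only the package name):

import fbpinns
print(fbpinns.__file__)  # should point inside your cloned FBPINNs/ directory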

Getting started

Forward and inverse PDE problems are defined and solved by carrying out the following steps:

  1. Define the problem domain, by selecting or defining your own fbpinns.domains.Domain class
  2. Define the PDE to solve, and any problem constraints (such as boundary conditions or data constraints), by selecting or defining your own fbpinns.problems.Problem class
  3. Define the domain decomposition used by the FBPINN, by selecting or defining your own fbpinns.decompositions.Decomposition class
  4. Define the neural network placed in each subdomain, by selecting or defining your own fbpinns.networks.Network class
  5. Keep track of all the training hyperparameters by passing these classes and their initialisation values to a fbpinns.constants.Constants object
  6. Start the FBPINN training by instantiating a fbpinns.trainers.FBPINNTrainer with the Constants object and calling its train method.

For example, to solve the 1D harmonic oscillator problem shown above (Fig 1):

import numpy as np

from fbpinns.domains import RectangularDomainND
from fbpinns.problems import HarmonicOscillator1D
from fbpinns.decompositions import RectangularDecompositionND
from fbpinns.networks import FCN
from fbpinns.constants import Constants
from fbpinns.trainers import FBPINNTrainer

c = Constants(
    domain=RectangularDomainND,  # use a 1D problem domain [0, 1]
    domain_init_kwargs=dict(
        xmin=np.array([0,]),
        xmax=np.array([1,]),
    ),
    problem=HarmonicOscillator1D,  # solve the 1D harmonic oscillator problem
    problem_init_kwargs=dict(
        d=2, w0=80,  # define the ODE parameters
    ),
    decomposition=RectangularDecompositionND,  # use a rectangular domain decomposition
    decomposition_init_kwargs=dict(
        subdomain_xs=[np.linspace(0,1,15)],  # use 15 equally spaced subdomains
        subdomain_ws=[0.15*np.ones((15,))],  # with widths of 0.15
        unnorm=(0.,1.),  # define unnormalisation of the subdomain networks
    ),
    network=FCN,  # place a fully-connected network in each subdomain
    network_init_kwargs=dict(
        layer_sizes=[1,32,1],  # with one hidden layer of 32 units
    ),
    ns=((200,),),  # use 200 collocation points for training
    n_test=(500,),  # use 500 points for testing
    n_steps=20000,  # number of training steps
    optimiser_kwargs=dict(learning_rate=1e-3),
    show_figures=True,  # display plots during training
)

run = FBPINNTrainer(c)
run.train()  # start training the FBPINN

The FBPINNTrainer will automatically start outputting training statistics, plots and tensorboard summaries. The tensorboard summaries can be viewed by installing tensorboard and then running tensorboard --logdir results/summaries/

Comparing to PINNs

You can easily train a PINN on the same problem, using the same hyperparameters as above:

from fbpinns.trainers import PINNTrainer

c["network_init_kwargs"] = dict(layer_sizes=[1,64,64,1])# use a larger neural network
run = PINNTrainer(c)
run.train()# start training a PINN on the same problem

Going further

See the examples folder for more advanced examples covering:

  • how to define your own Problem class
  • how to use hard boundary constraints
  • how to solve an inverse problem
  • how to use subdomain scheduling

FAQs

Installation

I get the error: RuntimeError: This version of jaxlib was built using AVX instructions, which your CPU and/or operating system do not support. when using Apple GPUs.

  • As of this commit, JAX only has experimental support for Apple GPUs. Either build JAX from source, or install a CPU-only version using conda: first run pip uninstall jax jaxlib, then conda install jax -c conda-forge

Using GPUs

How do I train FBPINNs using a GPU?

  • Exactly the same code should run on a GPU automatically, without needing any modification. Make sure you have installed the GPU version of JAX, and that JAX can see your GPU devices (e.g. by checking jax.devices())
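
For example, a quick check using standard JAX calls:

import jax

print(jax.devices())          # e.g. [cuda(id=0)] when a GPU build of JAX is installed
print(jax.default_backend())  # "gpu", or "cpu" if only the CPU wheel is present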

Understanding the repository

But I don't know JAX!?

  • We highly recommend becoming familiar with JAX - it is a fantastic, general-purpose library for accelerated differentiable computing. But even if you don't want to learn JAX, that's ok - all of the front-end classes (Domain, Problem, Decomposition, and Network) can be defined with only basic understanding of jax.numpy (which is essentially the same as numpy anyway).
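
As a flavour of how close the two are, the same call works in either namespace (plain public-API usage):

import numpy as np
import jax.numpy as jnp

x = np.linspace(0, 1, 5)
print(np.sin(x))   # numpy array in, numpy array out
print(jnp.sin(x))  # identical call, but returns a traceable, differentiable JAX array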

Methodology

How are FBPINNs different to other PINN + domain decomposition methods?

  • In contrast to other PINN + domain decomposition methods (such as XPINNs), FBPINNs by their mathematical construction do not require additional interface terms in their loss function, and their solution is continuous across subdomain interfaces. Essentially, FBPINNs can just be thought of as defining a custom neural network architecture for PINNs - everything else stays the same.
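
Schematically (a sketch of the idea, not the papers' exact notation), the FBPINN solution is

$$u(x) = \sum_{j=1}^{J} w_j(x)\, \mathrm{NN}_j\big(\mathrm{norm}_j(x)\big),$$

where each smooth window $w_j$ vanishes outside subdomain $j$ and $\mathrm{norm}_j$ normalises inputs over it. Every term is smooth and globally defined, so $u$ is continuous across subdomain interfaces by construction, with no interface loss terms needed.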

Citation

If you find FBPINNs useful and use them in your own work, please use the following citations:

@article{Moseley2023,
author = {Moseley, Ben and Markham, Andrew and Nissen-Meyer, Tarje},
doi = {10.1007/s10444-023-10065-9},
journal = {Advances in Computational Mathematics},
month = {jul},
number = {4},
pages = {1--39},
publisher = {Springer},
title = {{Finite basis physics-informed neural networks (FBPINNs): a scalable domain decomposition approach for solving differential equations}},
url = {https://link.springer.com/article/10.1007/s10444-023-10065-9},
volume = {49},
year = {2023}
}

@article{Dolean2024,
author = {Dolean, Victorita and Heinlein, Alexander and Mishra, Siddhartha and Moseley, Ben},
doi = {10.1016/j.cma.2024.117116},
issn = {0045-7825},
journal = {Computer Methods in Applied Mechanics and Engineering},
pages = {117116},
title = {{Multilevel domain decomposition-based architectures for physics-informed neural networks}},
url = {https://www.sciencedirect.com/science/article/pii/S0045782524003724},
volume = {429},
year = {2024}
}

Reproducing our papers

To reproduce the exact results of our original FBPINN paper: Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable domain decomposition approach for solving differential equations, B. Moseley, T. Nissen-Meyer and A. Markham, Jul 2023 Advances in Computational Mathematics, you will need to use the legacy PyTorch FBPINN implementation, which is available at this commit.

To reproduce the results of our paper: Multilevel domain decomposition-based architectures for physics-informed neural networks, please see this branch.

Further questions?

Please raise a GitHub issue or feel free to contact us.


fbpinns's Issues

Modifying the wave 3D Problem

[image: results of the modified simulation]
Thank you for the innovative contribution!

I tried modifying the wave 3D problem to have the following boundary conditions:

u(x,y,0) = 0
u(0,0,t) = 2 sin (2 pi t) #time-dependent source

in this way:


 def boundary_condition(self, x, u, dudt, d2udx2, d2udy2, d2udt2, sd):
        
        # Apply u = tanh^2((t-0)/sd)*NN + sigmoid((d-t)/sd)*exp( -(1/2)((x/sd)^2+(y/sd)^2) )  ansatz
        
        t_2, dudt_2, d2udt_22 = boundary_conditions.tanh2_2(x[:,2:3], 0, sd)
        s, _, d2uds2   = boundary_conditions.sigmoid_2(-x[:,2:3], -2*sd, 0.2*sd)# beware (!) this gives correct 2nd order gradients but negative 1st order (sign flip!)
        
        mx = my = 0; 
        sx = sy = self.source_sd
        xnx, xny = (x[:,0:1]-mx)/sx, (x[:,1:2]-my)/sy
        #exp = torch.exp(-0.5*(xnx**2 + xny**2))
        exp = torch.exp(-0.5*(xnx**2 + xny**2))*0 #IC = 0 instead of exp
        #Initial GP
        f = exp
        d2udfx2 = (1/sx**2) * ((xnx**2) - 1)*exp
        d2udfy2 = (1/sy**2) * ((xny**2) - 1)*exp
        
        u_new   = t_2*u + s*f
        d2udx2_new = t_2*d2udx2 + s*d2udfx2
        d2udy2_new = t_2*d2udy2 + s*d2udfy2
        d2udt2_new = d2udt_22*u + 2*dudt_2*dudt + t_2*d2udt2 + d2uds2*f

        #Zero Ic and BC
#         u_new   = t_2*u *0
#         d2udx2_new = t_2*d2udx2 
#         d2udy2_new = t_2*d2udy2 
#         d2udt2_new = d2udt_22*u 
        
        return u_new, dudt, d2udx2_new, d2udy2_new, d2udt2_new# skip updating first order gradients (not needed for loss)
    

I also made some changes to the FD file, so it looks like this:


import numpy as np
import time
from seismic_CPML_helper import get_dampening_profiles

# todo: is this faster in parallel with np.roll?

def seismicCPML2D_wS(NX,
                NY,
                NSTEPS,
                DELTAX,
                DELTAY,
                DELTAT,
                NPOINTS_PML,
                velocity,
                density,
                initial_pressures,
                f0=20.,
                dtype=np.float32,
                output_wavefields=True,
                gather_is=None):
    
    "Run seismicCPML2D"
    
    ## INPUT PARAMETERS
    velocity = velocity.astype(dtype)
    density = density.astype(dtype)
    
    output_gather = gather_is is not None
    
    K_MAX_PML = 1.
    ALPHA_MAX_PML = 2.*np.pi*(f0/2.)# from Festa and Vilotte
    NPOWER = 2.# power to compute d0 profile
    Rcoef = 0.001
    
    STABILITY_THRESHOLD = 1e25
    ##
    
    
    # STABILITY CHECKS
    
    # basically: delta x > np.sqrt(3) * max(v) * delta t
    courant_number = np.max(velocity) * DELTAT * np.sqrt(1/(DELTAX**2) + 1/(DELTAY**2))
    if courant_number > 1.: raise Exception("ERROR: time step is too large, simulation will be unstable %.2f"%(courant_number))
    if NPOWER < 1: raise Exception("ERROR: NPOWER must be at least 1")
    
    
    # GET DAMPENING PROFILES
    
    [[a_x, a_x_half, b_x, b_x_half, K_x, K_x_half],
     [a_y, a_y_half, b_y, b_y_half, K_y, K_y_half]] = get_dampening_profiles(velocity, NPOINTS_PML, Rcoef, K_MAX_PML, ALPHA_MAX_PML, NPOWER, DELTAT, DELTAS=(DELTAX, DELTAY), dtype=dtype, qc=False)
    

    # INITIALISE ARRAYS
    
    kappa = density*(velocity**2)
    
    # pressure_present = initial_pressures[1].astype(dtype)
    # pressure_past = initial_pressures[0].astype(dtype)

    #zero IC
    pressure_present = np.zeros((NX, NY), dtype=dtype)
    pressure_past = np.zeros((NX, NY), dtype=dtype)
    
    
    memory_dpressure_dx = np.zeros((NX, NY), dtype=dtype)
    memory_dpressure_dy = np.zeros((NX, NY), dtype=dtype)
    
    memory_dpressurexx_dx = np.zeros((NX, NY), dtype=dtype)
    memory_dpressureyy_dy = np.zeros((NX, NY), dtype=dtype)
    
    if output_wavefields: wavefields = np.zeros((NSTEPS, NX, NY), dtype=dtype)
    if output_gather: gather = np.zeros((gather_is.shape[0], NSTEPS), dtype=dtype)
    
    # precompute density_half arrays
    density_half_x = np.pad(0.5 * (density[1:NX,:]+density[:NX-1,:]), [[0,1],[0,0]], mode="edge")
    density_half_y = np.pad(0.5 * (density[:,1:NY]+density[:,:NY-1]), [[0,0],[0,1]], mode="edge")
    
    
    # RUN SIMULATION
    
    start = time.time()
    for it in range(NSTEPS):
                
        # compute the first spatial derivatives divided by density
        
        value_dpressure_dx = np.pad((pressure_present[1:NX,:]-pressure_present[:NX-1,:]) / DELTAX, [[0,1],[0,0]], mode="constant", constant_values=0.)
        value_dpressure_dy = np.pad((pressure_present[:,1:NY]-pressure_present[:,:NY-1]) / DELTAY, [[0,0],[0,1]], mode="constant", constant_values=0.)
    
        memory_dpressure_dx = b_x_half * memory_dpressure_dx + a_x_half * value_dpressure_dx
        memory_dpressure_dy = b_y_half * memory_dpressure_dy + a_y_half * value_dpressure_dy
    
        value_dpressure_dx = value_dpressure_dx / K_x_half + memory_dpressure_dx
        value_dpressure_dy = value_dpressure_dy / K_y_half + memory_dpressure_dy
    
        pressure_xx = value_dpressure_dx / density_half_x
        pressure_yy = value_dpressure_dy / density_half_y
        
        # compute the second spatial derivatives
        
        value_dpressurexx_dx = np.pad((pressure_xx[1:NX,:]-pressure_xx[:NX-1,:]) / DELTAX, [[1,0],[0,0]], mode="constant", constant_values=0.)
        value_dpressureyy_dy = np.pad((pressure_yy[:,1:NY]-pressure_yy[:,:NY-1]) / DELTAY, [[0,0],[1,0]], mode="constant", constant_values=0.)
    
        memory_dpressurexx_dx = b_x * memory_dpressurexx_dx + a_x * value_dpressurexx_dx
        memory_dpressureyy_dy = b_y * memory_dpressureyy_dy + a_y * value_dpressureyy_dy
        
        value_dpressurexx_dx = value_dpressurexx_dx / K_x + memory_dpressurexx_dx
        value_dpressureyy_dy = value_dpressureyy_dy / K_y + memory_dpressureyy_dy
        
        dpressurexx_dx = value_dpressurexx_dx
        dpressureyy_dy = value_dpressureyy_dy
        
        # apply the time evolution scheme
        # we apply it everywhere, including at some points on the edges of the domain that have not been calculated above,
        # which is of course wrong (or more precisely undefined), but this does not matter because these values
        # will be erased by the Dirichlet conditions set on these edges below
        # pressure_future =   - pressure_past \
        #                     + 2 * pressure_present \
        #                     + DELTAT*DELTAT*(dpressurexx_dx+dpressureyy_dy)*kappa

        # Stepping with a source function, p is passed from the main file as p0 (Gaussian pulse)
        # location of source is passed within p0
        def func_t(p,t_inst):
            Amp = 1
            freq = 1
            t = t_inst*DELTAT
            return p*Amp*np.sin(2*np.pi*freq*t)
            
        pressure_future =   - pressure_past \
                            + 2 * pressure_present \
                            + DELTAT*DELTAT*(dpressurexx_dx+dpressureyy_dy)*kappa \
                            + DELTAT*DELTAT*func_t(initial_pressures[1].astype(dtype),it)
                
        
        # apply Dirichlet conditions at the bottom of the C-PML layers,
        # which is the right condition to implement in order for C-PML to remain stable at long times
        
        # Dirichlet conditions
        pressure_future[0,:] = pressure_future[-1,:] = 0.
        pressure_future[:,0] = pressure_future[:,-1] = 0.
        
        if output_wavefields: wavefields[it,:,:] = np.copy(pressure_present)
        if output_gather:
            gather[:,it] = np.copy(pressure_present[gather_is[:,0], gather_is[:,1]])# nb important to copy

        
        # check stability of the code, exit if unstable
        if(np.max(np.abs(pressure_present)) > STABILITY_THRESHOLD):
            raise Exception('code became unstable and blew up')
    
        # move new values to old values (the present becomes the past, the future becomes the present)
        pressure_past = pressure_present
        pressure_present = pressure_future
    
        #print(pressure_past.dtype, pressure_future.dtype, wavefields.dtype, gather.dtype)
        if it % 10000 == 0 and it!=0:
            rate = (time.time()-start)/10000.
            print("[%i/%i] %.2f s per step"%(it, NSTEPS, rate))
            start = time.time()
    
    output = [None, None]
    if output_wavefields: output[0]=wavefields
    if output_gather: output[1]=gather
    return output

Mainly attempting to change the IC and add a time-dependent source term to the equation so it becomes:


$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} - \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} = S(x,y,t)$$
        

where,

Amp = 1
freq = 1
sx = sy = self.source_sd
mx = 0; my = 0
GP = torch.exp(-0.5*(( (x[:,0:1]-mx)/sx)**2 + ((x[:,1:2]-my)/sy)**2 ))

S = Amp * GP * torch.sin(2*np.pi*freq*x[:,2:3])  #The Source function

but the results are as shown in the image. So, my questions are:

  1. How can I implement my specified boundary and initial conditions in a better way than the one I tried (if my attempt was correct)? I don't fully understand how to use the implemented boundary condition helper functions to implement my specific equations.

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} - \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} = S(x,y,t)$$
        
Boundary conditions:
        u(x,y,0) = 0
        u(0,0,t) = 2 * sin (2 * pi * t)	#time-dependent source
        

The results in the image were executed with these batch sizes:

batch_size = (30,30,30)
batch_size_test = (40,40,15)

because of the limited memory on my GPU.

  2. Does this affect the results? If so, how can I increase batch_size_test without getting an OOM error?

Thanks again! Looking forward to your reply.

Run Error, need help!

I ran the example "1. Defining your own problem - 1D harmonic oscillator" and met the error below:

[INFO] 2024-02-17 20:46:06 - <fbpinns.constants.Constants object at 0x0000025EF2406950>
run: test
domain: <class 'fbpinns.domains.RectangularDomainND'>
domain_init_kwargs: {'xmin': array([0.]), 'xmax': array([1.])}
problem: <class 'main.HarmonicOscillator1D'>
problem_init_kwargs: {'d': 2, 'w0': 80}
decomposition: <class 'fbpinns.decompositions.RectangularDecompositionND'>
decomposition_init_kwargs: {'subdomain_xs': [array([0. , 0.07142857, 0.14285714, 0.21428571, 0.28571429,
0.35714286, 0.42857143, 0.5 , 0.57142857, 0.64285714,
0.71428571, 0.78571429, 0.85714286, 0.92857143, 1. ])], 'subdomain_ws': [array([0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15,
0.15, 0.15, 0.15, 0.15])], 'unnorm': (0.0, 1.0)}
network: <class 'fbpinns.networks.FCN'>
network_init_kwargs: {'layer_sizes': [1, 32, 1]}
n_steps: 20000
scheduler: <class 'fbpinns.schedulers.AllActiveSchedulerND'>
scheduler_kwargs: {}
ns: ((200,),)
n_test: (500,)
sampler: grid
optimiser: <function adam at 0x0000025EF2756CB0>
optimiser_kwargs: {'learning_rate': 0.001}
seed: 0
summary_freq: 1000
test_freq: 1000
model_save_freq: 10000
show_figures: True
save_figures: False
clear_output: True
hostname: desktop-lostn3v

[INFO] 2024-02-17 20:46:07 - Total number of subdomains: 15
[INFO] 2024-02-17 20:46:08 - Total number of trainable parameters:
[INFO] 2024-02-17 20:46:08 - network: 1,455
[INFO] 2024-02-17 20:46:09 - Total number of constraints: 2
[INFO] 2024-02-17 20:46:09 - Computing exact solution..
[INFO] 2024-02-17 20:46:09 - Computing done
[INFO] 2024-02-17 20:46:09 - Getting test data inputs..
[INFO] 2024-02-17 20:46:12 - [i: 0/20000] Updating active inputs..
[INFO] 2024-02-17 20:46:12 - [i: 0/20000] Average number of points/dimension in active subdomains: 28.00
[INFO] 2024-02-17 20:46:15 - [i: 0/20000] Updating active inputs done (2.92 s)
[INFO] 2024-02-17 20:46:15 - [i: 0/20000] Compiling update step..
[INFO] 2024-02-17 20:46:15 - x_batch
[INFO] 2024-02-17 20:46:15 - (200, 1), float32, JVPTracer
[INFO] 2024-02-17 20:46:15 - x_take
[INFO] 2024-02-17 20:46:15 - (418, 1), float32, JVPTracer
[INFO] 2024-02-17 20:46:15 - x_batch
[INFO] 2024-02-17 20:46:15 - (1, 1), float32, JVPTracer
[INFO] 2024-02-17 20:46:15 - x_take
[INFO] 2024-02-17 20:46:15 - (2, 1), float32, JVPTracer
[INFO] 2024-02-17 20:46:17 - [i: 0/20000] Compiling done (2.35 s)

ValueError Traceback (most recent call last)
Cell In[7], line 4
1 from fbpinns.trainers import FBPINNTrainer
3 run = FBPINNTrainer(c)
----> 4 all_params = run.train()

File D:\py_test\PINN\FBPINNs-main\fbpinns\trainers.py:660, in FBPINNTrainer.train(self)
657 # report initial model
658 if i == 0:
659 u_test_losses, start1, report_time =
--> 660 self._report(i, pstep, fstep, u_test_losses, start0, start1, report_time,
661 u_exact, x_batch_test, test_inputs, all_params, all_opt_states, model_fns, problem, decomposition,
662 active, merge_active, active_opt_states, active_params, x_batch,
663 lossval)
665 # take a training step
666 lossval, active_opt_states, active_params = update(active_opt_states,
667 active_params, fixed_params, static_params_dynamic,
668 takess, constraints)# note compiled function only accepts dynamic arguments

File D:\py_test\PINN\FBPINNs-main\fbpinns\trainers.py:715, in FBPINNTrainer._report(self, i, pstep, fstep, u_test_losses, start0, start1, report_time, u_exact, x_batch_test, test_inputs, all_params, all_opt_states, model_fns, problem, decomposition, active, merge_active, active_opt_states, active_params, x_batch, lossval)
713 # take test step
714 if test_:
--> 715 u_test_losses = self._test(
716 x_batch_test, u_exact, u_test_losses, x_batch, test_inputs, i, pstep, fstep, start0, active, all_params, model_fns, problem, decomposition)
718 # save model
719 if model_save_:

File D:\py_test\PINN\FBPINNs-main\fbpinns\trainers.py:736, in FBPINNTrainer._test(self, x_batch_test, u_exact, u_test_losses, x_batch, test_inputs, i, pstep, fstep, start0, active, all_params, model_fns, problem, decomposition)
733 takes, all_ims, cut_all = test_inputs
734 all_params_cut = {"static":cut_all(all_params["static"]),
735 "trainable":cut_all(all_params["trainable"])}
--> 736 u_test, wp_test_, us_test_, ws_test_, us_raw_test_ = FBPINN_model_jit(all_params_cut, x_batch_test, takes, model_fns, verbose=False)
737 if all_params["static"]["problem"]["dims"][1] == 1:# 1D plots require full lines, not just hist stats
739 m, ud, n = all_params["static"]["decomposition"]["m"], all_params["static"]["problem"]["dims"][0], x_batch_test.shape[0]

File D:\py_test\PINN\FBPINNs-main\fbpinns\trainers.py:320, in FBPINN_model_jit(all_params, x_batch, takes, model_fns, verbose)
318 def FBPINN_model_jit(all_params, x_batch, takes, model_fns, verbose=True):
319 all_params_dynamic, all_params_static = partition(all_params)
--> 320 return _FBPINN_model_jit(all_params_dynamic, all_params_static, x_batch, takes, model_fns, verbose)

[... skipping hidden 12 frame]

File D:\py_test\PINN\FBPINNs-main\fbpinns\trainers.py:317, in _FBPINN_model_jit(all_params_dynamic, all_params_static, x_batch, takes, model_fns, verbose)
314 @partial(jax.jit, static_argnums=(1,4,5))
315 def _FBPINN_model_jit(all_params_dynamic, all_params_static, x_batch, takes, model_fns, verbose):
316 all_params = combine(all_params_dynamic, all_params_static)
--> 317 return FBPINN_model(all_params, x_batch, takes, model_fns, verbose)

File D:\py_test\PINN\FBPINNs-main\fbpinns\trainers.py:164, in FBPINN_model(all_params, x_batch, takes, model_fns, verbose)
162 # apply POU and sum
163 u = jnp.concatenate([us, ws], axis=1)# (s, ud+1)
--> 164 u = jax.ops.segment_sum(u, p_take, indices_are_sorted=False, num_segments=len(np_take))# (_, ud+1)
165 wp = u[:,-1:]
166 u = u[:,:-1]/wp

File D:\ProgramData\anaconda3\envs\py310nn\lib\site-packages\jax\_src\ops\scatter.py:251, in segment_sum(data, segment_ids, num_segments, indices_are_sorted, unique_indices, bucket_size, mode)
201 def segment_sum(data: ArrayLike,
202 segment_ids: ArrayLike,
203 num_segments: int | None = None,
(...)
206 bucket_size: int | None = None,
207 mode: lax.GatherScatterMode | None = None) -> Array:
208 """Computes the sum within segments of an array.
209
210 Similar to TensorFlow's `segment_sum
(...)
249 Array([1, 5, 4], dtype=int32)
250 """
--> 251 return _segment_update(
252 "segment_sum", data, segment_ids, lax.scatter_add, num_segments,
253 indices_are_sorted, unique_indices, bucket_size, reductions.sum, mode=mode)

File D:\ProgramData\anaconda3\envs\py310nn\lib\site-packages\jax\_src\ops\scatter.py:183, in _segment_update(name, data, segment_ids, scatter_op, num_segments, indices_are_sorted, unique_indices, bucket_size, reducer, mode)
180 if bucket_size is None:
181 out = jnp.full((num_segments,) + data.shape[1:],
182 _get_identity(scatter_op, dtype), dtype=dtype)
--> 183 return _scatter_update(
184 out, segment_ids, data, scatter_op, indices_are_sorted,
185 unique_indices, normalize_indices=False, mode=mode)
187 # Bucketize indices and perform segment_update on each bucket to improve
188 # numerical stability for operations like product and sum.
189 assert reducer is not None

File D:\ProgramData\anaconda3\envs\py310nn\lib\site-packages\jax\_src\ops\scatter.py:80, in _scatter_update(x, idx, y, scatter_op, indices_are_sorted, unique_indices, mode, normalize_indices)
77 # XLA gathers and scatters are very similar in structure; the scatter logic
78 # is more or less a transpose of the gather equivalent.
79 treedef, static_idx, dynamic_idx = jnp._split_index_for_jit(idx, x.shape)
---> 80 return _scatter_impl(x, y, scatter_op, treedef, static_idx, dynamic_idx,
81 indices_are_sorted, unique_indices, mode,
82 normalize_indices)

File D:\ProgramData\anaconda3\envs\py310nn\lib\site-packages\jax\_src\ops\scatter.py:115, in _scatter_impl(x, y, scatter_op, treedef, static_idx, dynamic_idx, indices_are_sorted, unique_indices, mode, normalize_indices)
112 x, y = promote_dtypes(x, y)
114 # Broadcast y to the slice output shape.
--> 115 y = jnp.broadcast_to(y, tuple(indexer.slice_shape))
116 # Collapse any None/jnp.newaxis dimensions.
117 y = jnp.squeeze(y, axis=indexer.newaxis_dims)

File D:\ProgramData\anaconda3\envs\py310nn\lib\site-packages\jax\_src\numpy\lax_numpy.py:1218, in broadcast_to(array, shape)
1214 @util.implements(np.broadcast_to, lax_description="""
1215 The JAX version does not necessarily return a view of the input.
1216 """)
1217 def broadcast_to(array: ArrayLike, shape: DimSize | Shape) -> Array:
-> 1218 return util._broadcast_to(array, shape)

File D:\ProgramData\anaconda3\envs\py310nn\lib\site-packages\jax\_src\numpy\util.py:428, in _broadcast_to(arr, shape)
426 if nlead < 0 or not compatible:
427 msg = "Incompatible shapes for broadcasting: {} and requested shape {}"
--> 428 raise ValueError(msg.format(arr_shape, shape))
429 diff, = np.where(tuple(not core.definitely_equal(arr_d, shape_d)
430 for arr_d, shape_d in safe_zip(arr_shape, shape_tail)))
431 new_dims = tuple(range(nlead)) + tuple(nlead + diff)

ValueError: Incompatible shapes for broadcasting: (1046, 2) and requested shape (1046, 1, 2)

Higher-Order Gradient Derivative Problem

Thank you for sharing your work, it's very interesting! The new version using JAX is indeed much faster, but I'm not very familiar with it (I use PyTorch more). Recently, when solving a PDE, I encountered this problem:

$\mathrm{Loss}_1=\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}$

$\mathrm{Loss}_2=\frac{\partial}{\partial y}\left[ \left( v+\frac{v_t}{\sigma _k} \right) \frac{\partial k}{\partial y} \right] $

$\sigma _k$ is given; the inputs of the neural network are $x$ and $y$, and its outputs are $u$, $v$, and $k$.

When constructing the physics loss $Loss_2$ of the above equation, $\frac{\partial k}{\partial y}$ is needed. The current FBPINN framework uses required_ujs_phys to request gradients, as shown in the following code:

    def sample_constraints(all_params, domain, key, sampler, batch_shapes):
        
        # physics loss
        y_batch_phys = domain.sample_interior(all_params, key, sampler, batch_shapes[0])
        required_ujs_phys = (
            (0,()),     # u
            (1,()),     # v
            (2,()),     # k
            (2,(1,)),   # k_y
        )
        
        return [[y_batch_phys, required_ujs_phys]]

This causes a problem: I can't calculate the gradient $\frac{\partial}{\partial y}\left[ \left( v+\frac{v_t}{\sigma _k} \right) \frac{\partial k}{\partial y} \right]$, because it is a mixed second-order gradient that requires the first-order $\frac{\partial k}{\partial y}$ as an intermediate result, and such a composite expression can't be requested through required_ujs_phys.

This kind of composite gradient is quite common. Do you have any good suggestions to solve this problem?

Thank you for your reading!

About the dataset

I am a beginner and I would like to know whether this method requires a dataset (or other data) for training; I didn't find one in the examples. Thanks!

functioning of sample constraints

Good morning, I'm not sure I fully understand how sample_constraints works. For example, in the WaveEquationConstantVelocity3D problem, if I understand correctly:
in (0,(0,0)) the first zero indicates u and the other two indicate the second derivative in t,
while (0,(1,1)) and (0,(2,2)) are the second derivatives of u in x and y respectively. Is that correct?
Now my problem arises because I would like to determine the individual components of u along x and y. For example, if u is a velocity, I need the gradients of the x component and the y component to appear in the loss fn, for example:
phys = u_x + d(u_x)/dx + d(u_x)/dy + d(u_y)/dy + u_y
with u_x the x component of u and u_y the y component.
Is it possible to obtain them by specifying
(0,(1,))
(1,(0,))
(0,(0,))
(1,(1,))
or something similar?

Empty gradient when the dim of u is 2

I am trying to model the 1D Maxwell equations: dH/dt - dE/dx = 0; dE/dt - dH/dx = source. The problem dims are set to (2,2). I can implement this equation in PyTorch, but when it comes to JAX, I found that the gradients of E (dE/dt and dE/dx) are both empty ([]), which results in NaN in the loss. Please help identify the issue. Thanks @benmoseley

import jax
import jax.numpy as jnp
import numpy as np

from fbpinns.domains import RectangularDomainND
from fbpinns.problems import Problem
from fbpinns.decompositions import RectangularDecompositionND
from fbpinns.networks import FCN
from fbpinns.constants import Constants, get_subdomain_ws
from fbpinns.trainers import FBPINNTrainer, PINNTrainer

class FDTD2D(Problem):
    """Solves the time-dependent (1+1)D Maxwell equation with constant velocity
        u = [H, E]
        d H     dE
        ---- - ----  =  0
        dt       dx

        d E     dH
        ---- - ----  =  0
        dt      dx

        Boundary conditions:
        E(x,0) = exp( -(1/2)((x/sd)^2) )
        du
        --(x,0) = 0
        dt
    """

    @staticmethod
    def init_params(c=1, sd=1):

        static_params = {
            "dims":(2,2),
            "c":c,
            "sd":sd,
            }
        return static_params, {}

    @staticmethod
    def sample_constraints(all_params, domain, key, sampler, batch_shapes):

        # physics loss
        x_batch_phys = domain.sample_interior(all_params, key, sampler, batch_shapes[0])
        required_ujs_phys = (
            (0,(0,)),#dH / dx
            (1,(0,)),#dE / dx
            (0,(1,)),#dH / dt
            (1,(1,)),#dE /dt
        )
        return [[x_batch_phys, required_ujs_phys],]

    @staticmethod
    def constraining_fn(all_params, x_batch, u):
        c = all_params["static"]["problem"]["c"]
        sd = all_params["static"]["problem"]["sd"]
        t = x_batch[:,1:2]

        u = (jax.nn.tanh(c*t/(2*sd))**2)*u# constrains u(x,y,0) = u_t(x,y,0) = 0
        return u

    @staticmethod
    def loss_fn(all_params, constraints):
        c = all_params["static"]["problem"]["c"]
        sd = all_params["static"]["problem"]["sd"]
        x_batch, dHdx, dEdx, dHdt, dEdt = constraints[0]
#        jax.debug.print("ret {}", dEdx) # dEdx and dEdt being [] while dHdx and dHdt are OK

        x, t = x_batch[:,0:1], x_batch[:,1:2]

        e = -0.5*(x**2 + t**2)/(sd**2)
        s = 2e3*(1+e)*jnp.exp(e)# ricker source term

        phys1 = jnp.mean((dHdx - dEdt - s)**2)
        phys2 = jnp.mean((dEdx - dHdt)**2)
        phys = phys1 + phys2
        return phys

    @staticmethod
    def exact_solution(all_params, x_batch, batch_shape):

        key = jax.random.PRNGKey(0)
        return jax.random.normal(key, (x_batch.shape[0],1))


subdomain_xs = [np.linspace(-1,1,5), np.linspace(0,1,5)]
subdomain_ws = get_subdomain_ws(subdomain_xs, 1.9)

c = Constants(
    run="test",
    domain=RectangularDomainND,
    domain_init_kwargs=dict(
        xmin=np.array([-1,0]),
        xmax=np.array([1,1]),
    ),
    problem=FDTD2D,
    problem_init_kwargs=dict(
        c=1, sd=0.1,
    ),
    decomposition=RectangularDecompositionND,
    decomposition_init_kwargs=dict(
        subdomain_xs=subdomain_xs,
        subdomain_ws=subdomain_ws,
        unnorm=(0.,1.),
    ),
    network=FCN,
    network_init_kwargs=dict(
        layer_sizes=[2,32,2],
    ),
    ns=((100,50),),
    n_test=(100,5),
    n_steps=5000,
    optimiser_kwargs=dict(learning_rate=1e-3),
    summary_freq=200,
    test_freq=200,
    show_figures=True,
    clear_output=True,
)

#run = FBPINNTrainer(c)
#run.train()

c["network_init_kwargs"] = dict(layer_sizes=[2,128,128,2])
run = PINNTrainer(c)
run.train()

Wavefield "c" Value

Dear Ben,

Thank you for this creative work! I saw in the code that you can use two types of c values; one is obtained by _constant_c() and the other by _gaussian_c(). I understand the conceptual difference between them, but I wonder, if I wanted to test a wavefield with two values of c (e.g. c1 and c2), how I could write a function to do that?

My initial attempt was to write:

def _variable_c(x):
        "Defines a variable velocity model"
        return ((np.tanh(80*(x-2.5))+5)/4) * torch.ones((x.shape[0],1), dtype=x.dtype, device= x.device) 

I then use this assignment:

 c = _variable_c 
 c = c(torch.from_numpy(np.stack([xx,yy],-1).reshape((NX*NY,2)))).numpy().reshape(NX,NY)

The idea I'm trying to achieve is to assign c=1 for x<2.5 and c=1.5 for x>2.5, using the tanh function for a smooth transition of the c value.
I'm getting this error:

~\AppData\Local\Temp/ipykernel_16716/2797370787.py in exact_solution(x, batch_size)
     51 
     52         c = _variable_c #_constant_c
---> 53         c = c(torch.from_numpy(np.stack([xx,yy],-1).reshape((NX*NY,2)))).numpy().reshape(NX,NY)
     54         #c(c0,torch.from_numpy(np.stack([xx,yy],-1).reshape((NX*NY,2)))).numpy().reshape(NX,NY)
     55 

ValueError: cannot reshape array of size 2000000 into shape (1000,1000)

Any ideas on how to achieve this and get around the error?

no attribute 'cost_analysis' in training

Dear Ben,
Hello, when I run an example such as the 1D harmonic oscillator problem, a code error is encountered in trainers.py, as follows:
Traceback (most recent call last):
\test.py", line 38, in
run.train()# start training the FBPINN
\FBPINNs-main\fbpinns\trainers.py", line 654, in train
cost_ = update.cost_analysis()
AttributeError: 'Compiled' object has no attribute 'cost_analysis'.
Could you please help me figure out where the error is? Thank you very much!

problem with non-constant boundary condition

Hello.
I tried to solve the 2D Laplace equation div(grad(p))=0 on the domain 0<x<L, 0<y<H with Neumann boundary conditions:

  1. x=0,L: dp/dx = psi(y), where psi(y) = 1/c if |y-H/2|<c/2, else 0
  2. y=0,H dp/dy = 0

I used weak BCs:
loss = physics_loss + weight*bc_loss, where bc_loss = sum_i( RMSE( dp/dn[i] - dp_FBPINN/dn[i] ) )

However, instead of the boundary conditions being fulfilled and the derivatives at the left and right boundaries being equal to the psi function, the FBPINN "averaged" the psi values:
x=0,L: dp_FBPINN/dx = 1/L
For this constant (1/L), the MSE(psi(y)-CONST) value is minimal, but I need dp_FBPINN/dx to be a function, not a constant.
[image: predicted boundary derivative vs the psi function]
In the picture c = 0.2 and the psi function was psi(y) = 1 if |y-H/2|<c/2, else 0, so dp_FBPINN/dx = CONST = c/L, not 1/L

my code:

class Laplace2D(Problem):
    """Solves the 2D Laplace equation with constant velocity
        
        div mobility grad p = 0
        dp/dx = psi(y), x=0,L
        dp/dy = 0 y=0,H
    """

    @staticmethod
    def init_params(chi=0.2):

        static_params = {
            "dims":(1,2),
            "chi":chi
            }
        return static_params, {}

    @staticmethod
    def sample_constraints(all_params, domain, key, sampler, batch_shapes):

        chi = all_params["static"]["problem"]["chi"]
        # physics loss
        x_batch_phys = domain.sample_interior(all_params, key, sampler, batch_shapes[0])
        required_ujs_phys = (
            (0,(0,)), # p_x
            (0,(1,)), # p_y
            (0,(0,0)), # p_xx
            (0,(1,1)), # p_yy
            )
        
        Nx, Ny = batch_shapes[0]
        x_min, x_max = x_batch_phys[0,0], x_batch_phys[-1,0]
        y_min, y_max = x_batch_phys[0,1], x_batch_phys[-1,1]

        # bc loss
        x = np.linspace(x_min,x_max,Nx)
        y = np.linspace(y_min,y_max,Ny)

        # x = 0
        batch_boundary_left = np.vstack((x[0]*np.ones_like(y), y)).T
        p_x_left = np.where(np.abs(batch_boundary_left[:,1]-y_max/2)<chi/2,1/chi,0)
        required_ujs_left = (
            (0,(0,)), # p_x
            )
        # x = L
        batch_boundary_right = np.vstack((x[-1]*np.ones_like(y), y)).T
        p_x_right = p_x_left
        required_ujs_right = (
            (0,(0,)), # p_x
            )
        # y = 0
        batch_boundary_top = np.vstack((x, y[0]*np.ones_like(x))).T
        p_y_top = np.zeros((Nx,1))
        required_ujs_top = (
            (0,(1,)), # p_y
            )        
        # y = H
        batch_boundary_bottom = np.vstack((x, y[-1]*np.ones_like(x))).T
        p_y_bottom = np.zeros((Nx,1))
        required_ujs_bottom = (
            (0,(1,)), # p_y
            )     


        return [[x_batch_phys, required_ujs_phys],
                [batch_boundary_left, p_x_left, required_ujs_left],
                [batch_boundary_right, p_x_right, required_ujs_right],
                [batch_boundary_top, p_y_top, required_ujs_top],
                [batch_boundary_bottom, p_y_bottom, required_ujs_bottom],
                ]

    @staticmethod
    def constraining_fn(all_params, x_batch, u):

        x, y = x_batch[:,0:1], x_batch[:,1:2]
        return u

    @staticmethod
    def loss_fn(all_params, constraints):

        x_batch, p_x, p_y, p_xx, p_yy = constraints[0]

        x, y  = x_batch[:,0:1], x_batch[:,1:2]

        mobility = 1
        mobility_x = 0
        mobility_y = 0
        pressure_loss = mobility*(p_xx+p_yy) + mobility_x*p_x + mobility_y*p_y

        physics_loss = jnp.mean(pressure_loss**2)


        bc_loss = 0
        # x = 0
        batch_boundary_left, p_x_left, p_x_left_predicted = constraints[1]
        bc_loss += jnp.mean((p_x_left_predicted - p_x_left)**2)
        # x = L
        batch_boundary_right, p_x_right,  p_x_right_predicted = constraints[2]        
        bc_loss += jnp.mean((p_x_right_predicted - p_x_right)**2)
        # y = 0
        batch_boundary_top, p_y_top, p_y_top_predicted = constraints[3]        
        bc_loss += jnp.mean((p_y_top_predicted - p_y_top)**2)
        # y = H
        batch_boundary_bottom, p_y_bottom, p_y_bottom_predicted = constraints[4] 
        bc_loss += jnp.mean((p_y_bottom_predicted - p_y_bottom)**2)

        return physics_loss + (10**7)*bc_loss

    @staticmethod
    def exact_solution(all_params, x_batch, batch_shape):
        x, y = x_batch[:,0:1], x_batch[:,1:2]
        sol = np.zeros((x.shape[0],1))
        N = 10
        L, H = x[-1], y[-1]
        chi = all_params["static"]["problem"]["chi"]
        for n in range(1,N):
            sol += 2/(np.pi*n*chi)*((-1)**(n%2))*np.sin(np.pi*n*chi/H)*np.cos(2*np.pi*n*y/H)*np.sinh(2*np.pi*n*(x-L/2)/H)/(2*np.pi*n/H*np.cosh(2*np.pi*n*L/2/H))
        sol += x/H 
        return sol
domain = RectangularDomainND
domain_init_kwargs = dict(
    xmin=np.array([0,0]),
    xmax=np.array([1,1])
)
problem = Laplace2D()
problem_init_kwargs=dict(
    chi=0.4
)
decomposition = RectangularDecompositionND# use a rectangular domain decomposition
decomposition_init_kwargs=dict(
    subdomain_xs = [np.linspace(0,1,15), np.linspace(0,1,15)],
    subdomain_ws = get_subdomain_ws([np.linspace(0,1,15), np.linspace(0,1,15)], 2),
    unnorm=(0.,1.),
    )
network = FCN
network_init_kwargs=dict(
    layer_sizes=[2,32,1])
c = Constants(
    domain=domain,
    domain_init_kwargs=domain_init_kwargs,
    problem=problem,
    problem_init_kwargs=problem_init_kwargs,
    decomposition=decomposition,
    decomposition_init_kwargs=decomposition_init_kwargs,
    network=network,
    network_init_kwargs=network_init_kwargs,
    ns=((50,50),),
    n_test=(50,50),
    n_steps=25000,
    clear_output=False,
    summary_freq = 1000,
    test_freq = 1000,
)

print(c)
run = FBPINNTrainer(c)
all_params = run.train()

running error

Hello, I am trying to use your program to write an FBPINN that solves a 3D temperature field. Because the problem is more complex, I use the soft-constrained loss function form in the problem class, so boundary_condition() is not used there. However, even when commenting out the FBPINN and just running the PINN, the error still occurs.
I get the error while running the file paper_main_3D.py.
The screenlog:
RUN: final_FBPINN_TemperatureField3D_16h_2l_30b_r_0.1w_All
P: <problems.TemperatureField3D object at 0x0000020521F32808>
SUBDOMAIN_XS: [array([0. , 0.25, 0.5 , 0.75, 1. ]), array([0. , 0.25, 0.5 , 0.75, 1. ]), array([0. , 0.25, 0.5 , 0.75, 1. ])]
SUBDOMAIN_WS: [array([0.025, 0.025, 0.025, 0.025, 0.025]), array([0.025, 0.025, 0.025, 0.025, 0.025]), array([0.025, 0.025, 0.025, 0.025, 0.025])]
BOUNDARY_N: (0.1,)
Y_N: (0, 1)
ACTIVE_SCHEDULER: <class 'active_schedulers.AllActiveSchedulerND'>
ACTIVE_SCHEDULER_ARGS: ()
DEVICE: 0
MODEL: <class 'models.FCN'>
N_HIDDEN: 16
N_LAYERS: 2
BATCH_SIZE: (30, 30, 30)
RANDOM: True
LRATE: 0.001
N_STEPS: 150000
SEED: 123
BATCH_SIZE_TEST: (100, 100, 10)
PLOT_LIMS: (0.4, True)
SUMMARY_FREQ: 250
TEST_FREQ: 5000
MODEL_SAVE_FREQ: 10000
SHOW_FIGURES: False
SAVE_FIGURES: False
CLEAR_OUTPUT: False
SUMMARY_OUT_DIR: results/summaries/final_FBPINN_TemperatureField3D_16h_2l_30b_r_0.1w_All/
MODEL_OUT_DIR: results/models/final_FBPINN_TemperatureField3D_16h_2l_30b_r_0.1w_All/
HOSTNAME: laptop-8t1qinad

Device: cuda:0
Main thread ID: 7240
Torch seed: 123
0 Active updated:
[[[1 1 1 1]
[1 1 1 1]
[1 1 1 1]
[1 1 1 1]]

[[1 1 1 1]
[1 1 1 1]
[1 1 1 1]
[1 1 1 1]]

[[1 1 1 1]
[1 1 1 1]
[1 1 1 1]
[1 1 1 1]]

[[1 1 1 1]
[1 1 1 1]
[1 1 1 1]
[1 1 1 1]]]
torch.Size([512, 1]) torch.Size([512, 1]) torch.Size([512, 1]) torch.Size([0, 1]) torch.Size([0, 1]) torch.Size([0, 1]) torch.Size([0, 1]) torch.Size([512, 3])
torch.Size([576, 1]) torch.Size([576, 1]) torch.Size([576, 1]) torch.Size([0, 1]) torch.Size([0, 1]) torch.Size([0, 1]) torch.Size([0, 1]) torch.Size([576, 3])
[... 61 similar shape printouts, one per subdomain, omitted ...]
torch.Size([512, 1]) torch.Size([512, 1]) torch.Size([512, 1]) torch.Size([0, 1]) torch.Size([0, 1]) torch.Size([0, 1]) torch.Size([0, 1]) torch.Size([512, 3])
Process Process-1:1:
Traceback (most recent call last):
  File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\10614\Desktop\FBPINNs-main\fbpinns\trainersBase.py", line 111, in train_models_multiprocess
    run.train()
  File "C:\Users\10614\Desktop\FBPINNs-main\fbpinns\main.py", line 267, in train
    xs, yjs, yjs_sum, loss = self._train_step(models, optimizers, c, D, i)
  File "C:\Users\10614\Desktop\FBPINNs-main\fbpinns\main.py", line 185, in _train_step
    yj = c.P.boundary_condition(x, *yj, *c.BOUNDARY_N)# problem-specific
TypeError: boundary_condition() missing 1 required keyword-only argument: 'args'
Exception in thread Thread-1:
Traceback (most recent call last):
  File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\connection.py", line 302, in _recv_bytes
    overlapped=True)
BrokenPipeError: [WinError 109] The pipe has been ended.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\threading.py", line 926, in _bootstrap_inner
    self.run()
  File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\site-packages\tensorboardX\event_file_writer.py", line 202, in run
    data = self._queue.get(True, queue_wait_duration)
  File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\queues.py", line 108, in get
    res = self._recv_bytes()
  File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\connection.py", line 216, in recv_bytes
    buf = self._recv_bytes(maxlength)
  File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\connection.py", line 321, in _recv_bytes
    raise EOFError
EOFError
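
For readers hitting the same TypeError: in Python, a parameter declared after a *args-style parameter is keyword-only and must be passed by name, so unpacking extra values positionally (as in `c.P.boundary_condition(x, *yj, *c.BOUNDARY_N)`) cannot fill it. A minimal sketch of the failure mode, with a hypothetical signature mirroring the one in the traceback:

    # minimal sketch of the failure mode above (hypothetical signature)
    def boundary_condition(x, *yj, args):   # 'args' is keyword-only because it follows *yj
        return x, yj, args

    try:
        boundary_condition(1.0, 2.0, 3.0)   # positional values go into *yj, 'args' stays unfilled
    except TypeError as e:
        print(e)                            # missing 1 required keyword-only argument: 'args'

    boundary_condition(1.0, 2.0, 3.0, args=(0,))  # OK: keyword-only arguments must be passed by name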

Environment Setting

Hi, could you provide an Anaconda .yml file describing your running environment? I run into many environment errors when trying to reproduce your work. Thank you!
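
For anyone else blocked on this, a hedged workaround (not an official instruction from the authors): once you have any working environment built from the repository's listed dependencies, you can snapshot it with `conda env export --no-builds > environment.yml` and recreate it elsewhere with `conda env create -f environment.yml`.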

Definition of Helmholtz Equations

To whom it may concern,

I am trying to implement the Helmholtz problem from your paper. I have tried many times with basic PINNs, and the prediction differs from the publication, even in the lowest wavenumber case. After going through part of your code, I found that it may not be consistent with the paper's definition of the scalar problem parameters, k and sigma, or of the source term.
[screenshots of the code and paper attached]

Could you please help me clarify the correct definition of the governing equation? Figure 9 is indeed an attractive picture that I want to explore. Thank you very much!

Best,
Qifeng
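
For context, a common way to write the Helmholtz problem with a Gaussian point source (a hedged sketch only; the exact parameterisation of k, sigma and the source term used in the paper may differ) is

    \nabla^2 u(\mathbf{x}) + k^2 u(\mathbf{x}) = f(\mathbf{x}), \qquad
    f(\mathbf{x}) = \exp\!\left(-\frac{\lVert \mathbf{x}-\mathbf{x}_0\rVert^2}{2\sigma^2}\right)

where k is the wavenumber, x_0 is the source location and sigma controls the width of the Gaussian source.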

Modify plot_trainer

Hi.
When I tried to solve a system of equations, I got an error when plotting, so I changed your code by adding a loop over ud (the number of solution components).

base code

@_to_numpy
def plot_3D_FBPINN(x_batch_test, u_exact, u_test, us_test, ws_test, us_raw_test, x_batch, all_params, i, active, decomposition, n_test):

    xlim, ulim = _plot_setup(x_batch_test, u_exact)
    xlim0 = x_batch_test.min(0), x_batch_test.max(0)

    nt = n_test[-1]# slice across last dimension
    shape = (1+nt+1, 3)# nrows, ncols
    f = plt.figure(figsize=(8,8*shape[0]/3))

    # plot domain + x_batch
    for iplot, (a,b) in enumerate([[0,1],[0,2],[1,2]]):
        plt.subplot2grid(shape,(0,iplot))
        plt.title(f"[{i}] Domain decomposition")
        plt.scatter(x_batch[:,a], x_batch[:,b], alpha=0.5, color="k", s=1)
        decomposition.plot(all_params, active=active, create_fig=False, iaxes=[a,b])
        plt.xlim(xlim[0][a], xlim[1][a])
        plt.ylim(xlim[0][b], xlim[1][b])
        plt.gca().set_aspect("equal")

    # plot full solutions
    for it in range(nt):
        plt.subplot2grid(shape,(1+it,0))
        plt.title(f"[{i}] Full solution")
        _plot_test_im(u_test, xlim0, ulim, n_test, it=it)

        plt.subplot2grid(shape,(1+it,1))
        plt.title(f"[{i}] Ground truth")
        _plot_test_im(u_exact, xlim0, ulim, n_test, it=it)

        plt.subplot2grid(shape,(1+it,2))
        plt.title(f"[{i}] Difference")
        _plot_test_im(u_exact - u_test, xlim0, ulim, n_test, it=it)

    # plot raw hist
    plt.subplot2grid(shape,(1+nt,0))
    plt.title(f"[{i}] Raw solutions")
    plt.hist(us_raw_test.flatten(), bins=100, label=f"{us_raw_test.min():.1f}, {us_raw_test.max():.1f}")
    plt.legend(loc=1)
    plt.xlim(-5,5)

    plt.tight_layout()

    return (("test",f),)

modified code

@_to_numpy
def plot_3D_FBPINN(x_batch_test, u_exact, u_test, us_test, ws_test, us_raw_test, x_batch, all_params, i, active, decomposition, n_test):

    fs = []# collect one (name, figure) pair per solution component, so all figures are returned
    ud = u_exact.shape[-1]# number of solution components
    for iud in range(ud):

        xlim, ulim = _plot_setup(x_batch_test, u_exact[:,iud])
        xlim0 = x_batch_test.min(0), x_batch_test.max(0)

        nt = n_test[-1]# slice across last dimension
        shape = (1+nt+1, 3)# nrows, ncols
        f = plt.figure(figsize=(8,8*shape[0]/3))

        # plot domain + x_batch
        for iplot, (a,b) in enumerate([[0,1],[0,2],[1,2]]):
            plt.subplot2grid(shape,(0,iplot))
            plt.title(f"[{i}] Domain decomposition")
            plt.scatter(x_batch[:,a], x_batch[:,b], alpha=0.5, color="k", s=1)
            decomposition.plot(all_params, active=active, create_fig=False, iaxes=[a,b])
            plt.xlim(xlim[0][a], xlim[1][a])
            plt.ylim(xlim[0][b], xlim[1][b])
            plt.gca().set_aspect("equal")

        # plot full solutions
        for it in range(nt):
            plt.subplot2grid(shape,(1+it,0))
            plt.title(f"[{i}] Full solution")
            _plot_test_im(u_test[:,iud], xlim0, ulim, n_test, it=it)

            plt.subplot2grid(shape,(1+it,1))
            plt.title(f"[{i}] Ground truth")
            _plot_test_im(u_exact[:,iud], xlim0, ulim, n_test, it=it)

            plt.subplot2grid(shape,(1+it,2))
            plt.title(f"[{i}] Difference")
            _plot_test_im(u_exact[:,iud] - u_test[:,iud], xlim0, ulim, n_test, it=it)

        # plot raw hist
        plt.subplot2grid(shape,(1+nt,0))
        plt.title(f"[{i}] Raw solutions")
        plt.hist(us_raw_test.flatten(), bins=100, label=f"{us_raw_test.min():.1f}, {us_raw_test.max():.1f}")
        plt.legend(loc=1)
        plt.xlim(-5,5)

        plt.tight_layout()
        fs.append((f"test{iud}", f))# previously only the last figure was returned

    return tuple(fs)

I hope my changes will help prevent errors.

problem with PINNTrainer and CUDA

Hi.
FBPINNTrainer works without any problems, but when I tried to solve the same PDE using PINNTrainer, I got an error:

    INTERNAL: Failed to enqueue async memset operation: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered.

I used a server with a Quadro RTX 4000 and CUDA driver version 12.3.
How can I fix this error?

how to use it in an irregular domain

I would like to know how to use it in an irregular domain; could you provide an example? By the way, could you explain more about the following code? I can't figure out how it works or what kind of input I should provide (e.g. for a complex boundary condition). Thanks.
    # boundary loss
    x_batch_boundary = jnp.array([0.]).reshape((1,1))
    u_boundary = jnp.array([1.]).reshape((1,1))
    ut_boundary = jnp.array([0.]).reshape((1,1))
    required_ujs_boundary = (
        (0,()),
        (0,(0,)),
    )
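
A hedged reading of this snippet, assuming the fbpinns convention that each required_ujs entry is a pair of (output index, tuple of input dimensions to differentiate over):

    # hedged reading (assumed convention: (iu, (input dims to differentiate over)))
    required_ujs_boundary = (
        (0, ()),    # u itself, evaluated at the boundary points
        (0, (0,)),  # du/dx0, its first derivative w.r.t. input dimension 0
    )

so x_batch_boundary, u_boundary and ut_boundary together would pin u(0)=1 and u_t(0)=0, which look like the initial conditions of a 1D time-dependent problem such as the harmonic oscillator example.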

Running Error

Hello,

I am getting an error while running the file paper_main_1D.py. I am using Spyder IDE on Anaconda.

"OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Users\gaura\Anaconda3\envs\torch\lib\site-packages\torch\lib\caffe2_detectron_ops_gpu.dll" or one of its dependencies."

Installation Issue of FBPINN

Hi Dr. Moseley,

Thank you for sharing your code. I am trying to install the FBPINN package to explore its application in domain decomposition, but I'm encountering an issue with the instructions provided. When cloning the repository, I receive the following response:
[screenshot of the error attached]
Could you please check if the repository path has changed or advise if there is something I might need to adjust on my end?

Thank you,
Qifeng

some confusion about unnormalization

Your code is awesome! 👍
I have some difficulty understanding parts of your code.

   # code from main.full_model_FBPINN
    y = y * c.Y_N[1] + c.Y_N[0]

I think this line performs the unnormalization, and I found the definition of c.Y_N in constants.py.

  # code from constants.Constants
   w = 1e-10
   self.Y_N = (0,1/self.P.w**2)# mu, sd

This seems to multiply by a very large constant (with w = 1e-10, 1/w**2 = 1e20). I don't understand why this is necessary, or why this value was chosen?
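
A hedged reading, based only on the two snippets above (not confirmed by the authors): Y_N = (mu, sd) stores an output mean and standard deviation, and y = y*sd + mu unnormalizes the raw network output, so the network itself only has to produce O(1) values even when the true solution has amplitude of order 1/w**2. A minimal sketch of this assumed semantics:

    # minimal sketch of the assumed semantics of Y_N = (mu, sd)
    w = 1e-10
    mu, sd = 0.0, 1.0/w**2   # matches self.Y_N = (0, 1/self.P.w**2)
    y_raw = 0.5              # O(1) raw network output
    y = y_raw*sd + mu        # unnormalized solution, of order 1/w**2 = 1e20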
