Investigating symbolic_discretize with the 1-D Burgers' Equation

Let's consider the Burgers' equation:

\[\begin{gather*} \partial_t u + u \partial_x u - (0.01 / \pi) \partial_x^2 u = 0 \, , \quad x \in [-1, 1], \; t \in [0, 1] \, , \\ u(0, x) = - \sin(\pi x) \, , \\ u(t, -1) = u(t, 1) = 0 \, , \end{gather*}\]

and solve it with Physics-Informed Neural Networks. Here is an example of using the low-level symbolic_discretize API:

using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimJL
import ModelingToolkit: Interval, infimum, supremum

@parameters t, x
@variables u(..)
Dt = Differential(t)
Dx = Differential(x)
Dxx = Differential(x)^2

# 1-D Burgers' equation (two independent variables: t and x)
eq  = Dt(u(t,x)) + u(t,x)*Dx(u(t,x)) - (0.01/pi)*Dxx(u(t,x)) ~ 0

# Initial and boundary conditions
bcs = [u(0,x) ~ -sin(pi*x),
       u(t,-1) ~ 0.,
       u(t,1) ~ 0.,
       u(t,-1) ~ u(t,1)]

# Space and time domains
domains = [t ∈ Interval(0.0,1.0),
           x ∈ Interval(-1.0,1.0)]

# Discretization
dx = 0.05

# Neural network
chain = Lux.Chain(Dense(2,16,Lux.σ),Dense(16,16,Lux.σ),Dense(16,1))
strategy = NeuralPDE.GridTraining(dx)
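# GridTraining samples the PDE and boundary residuals on a fixed grid with
# spacing dx. Other NeuralPDE training strategies can be swapped in here, e.g.
# (a sketch; the 100-point count is an illustrative choice, not from this example):
# strategy = NeuralPDE.StochasticTraining(100)
# strategy = NeuralPDE.QuasiRandomTraining(100)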

indvars = [t,x]
depvars = [u(t,x)]
@named pde_system = PDESystem(eq,bcs,domains,indvars,depvars)

discretization = PhysicsInformedNN(chain, strategy)
sym_prob = symbolic_discretize(pde_system,discretization)

phi = sym_prob.phi

pde_loss_functions = sym_prob.loss_functions.pde_loss_functions
bc_loss_functions = sym_prob.loss_functions.bc_loss_functions
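# Quick sanity check (a sketch, not part of the original example): each loss
# function returned by symbolic_discretize is a callable of the flat parameter
# vector, so the untrained losses can be inspected before optimizing.
println("initial pde losses: ", map(l -> l(sym_prob.flat_init_params), pde_loss_functions))
println("initial bc losses: ", map(l -> l(sym_prob.flat_init_params), bc_loss_functions))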

callback = function (p, l)
    println("loss: ", l)
    println("pde_losses: ", map(l_ -> l_(p), pde_loss_functions))
    println("bcs_losses: ", map(l_ -> l_(p), bc_loss_functions))
    return false
end

loss_functions = [pde_loss_functions; bc_loss_functions]

function loss_function(θ, p)
    return sum(l -> l(θ), loss_functions)
end
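
# Because the total objective is assembled by hand, the individual terms can be
# reweighted. A minimal sketch of an alternative objective (unused below; the
# boundary weight 10.0 is an arbitrary illustrative value, not part of the
# original example):
function weighted_loss_function(θ, p)
    return sum(l -> l(θ), pde_loss_functions) + 10.0 * sum(l -> l(θ), bc_loss_functions)
end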

f_ = OptimizationFunction(loss_function, Optimization.AutoZygote())
prob = Optimization.OptimizationProblem(f_, sym_prob.flat_init_params)

res = Optimization.solve(prob,OptimizationOptimJL.BFGS(); callback = callback, maxiters=2000)
The optimizer returns the trained parameters in res.u as a ComponentVector{Float64} holding the weight and bias arrays of each network layer.
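
If the optimizer stalls before reaching a satisfactory loss, a common pattern is to continue training from the returned parameters. A minimal sketch, assuming remake (re-exported from SciMLBase) and an illustrative extra 500 iterations:

# Continue optimizing from the trained parameters (maxiters is an illustrative choice).
prob = remake(prob, u0 = res.u)
res = Optimization.solve(prob, OptimizationOptimJL.BFGS(); callback = callback, maxiters = 500)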

And some analysis:

using Plots

ts,xs = [infimum(d.domain):dx:supremum(d.domain) for d in domains]
u_predict_contourf = reshape([first(phi([t, x], res.u)) for t in ts for x in xs], length(xs), length(ts))
plot(ts, xs, u_predict_contourf, linetype = :contourf, title = "predict")

u_predict = [[first(phi([t,x],res.u)) for x in xs] for t in ts ]
p1= plot(xs, u_predict[3],title = "t = 0.1");
p2= plot(xs, u_predict[11],title = "t = 0.5");
p3= plot(xs, u_predict[end],title = "t = 1");
plot(p1,p2,p3)

Figure (burgers): contour plot of the predicted solution u(t, x) over the space-time domain.

Figure (burgers2): predicted solution profiles at t = 0.1, t = 0.5, and t = 1.
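
As a rough sanity check of the trained network (not part of the original example), the prediction at t = 0 can be compared against the known initial condition u(0, x) = -sin(πx):

# Compare the learned solution at t = 0 with the prescribed initial condition.
u0_predict = [first(phi([0.0, x], res.u)) for x in xs]
u0_exact = [-sin(pi * x) for x in xs]
println("max abs error at t = 0: ", maximum(abs.(u0_predict .- u0_exact)))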