# Solving ODEs with Physics-Informed Neural Networks (PINNs)

It is highly recommended that you first read the solving ordinary differential equations with DifferentialEquations.jl tutorial before reading this tutorial.

This tutorial is an introduction to using physics-informed neural networks (PINNs) for solving ordinary differential equations (ODEs). In contrast to the later parts of this documentation, which use the symbolic interface, here we will focus on the simplified `NNODE`, which uses the `ODEProblem` specification for the ODE. Mathematically, the `ODEProblem` defines a problem:

\[u' = f(u,p,t)\]

for $t \in (t_0,t_f)$ with an initial condition $u(t_0) = u_0$. With physics-informed neural networks, we choose a neural network architecture `NN` to represent the solution `u` and seek to find parameters `p` such that `NN' = f(NN,p,t)` for all points in the domain. When this is satisfied sufficiently closely, `NN` is thus a solution to the differential equation.
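To make this concrete, the residual condition `NN' = f(NN,p,t)` is typically turned into a loss by sampling time points and penalizing the squared residual at each one. The following is a rough sketch of that idea, not `NNODE`'s actual implementation: it uses a toy two-parameter "network" and a finite-difference derivative for illustration, whereas `NNODE` differentiates a real neural network with automatic differentiation.

```
# Sketch of a physics-informed loss for u' = f(u, p, t).
# NOTE: `nn` is a toy stand-in for a neural network, and the derivative
# is approximated with a central finite difference for illustration only.
f(u, p, t) = cos(2pi * t)        # right-hand side of the ODE
u0 = 0.0                         # initial condition

nn(t, θ) = θ[1] * t + θ[2] * t^2          # toy 2-parameter "network"
trial(t, θ) = u0 + t * nn(t, θ)           # satisfies u(0) = u0 by construction

function physics_loss(θ; ts = range(0, 1; length = 50), ε = 1e-5)
    sum(ts) do t
        du = (trial(t + ε, θ) - trial(t - ε, θ)) / (2ε)  # ≈ u'(t)
        (du - f(trial(t, θ), nothing, t))^2              # squared residual
    end / length(ts)
end
```

Training then amounts to minimizing `physics_loss` over the parameters `θ`; the smaller the loss, the more closely the trial solution satisfies the ODE at the sampled points.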

## Solving an ODE with NNODE

Let's solve the simple ODE:

\[u' = \cos(2\pi t)\]

for $t \in (0,1)$ and $u_0 = 0$ with `NNODE`. First, we define the `ODEProblem` as we would with any other DifferentialEquations.jl solver. This looks like:

```
using NeuralPDE, Flux, OptimizationOptimisers
linear(u, p, t) = cos(2pi * t)
tspan = (0.0f0, 1.0f0)
u0 = 0.0f0
prob = ODEProblem(linear, u0, tspan)
```

```
ODEProblem with uType Float32 and tType Float32. In-place: false
timespan: (0.0f0, 1.0f0)
u0: 0.0f0
```

Now, to define the `NNODE` solver, we must choose a neural network architecture. To do this, we will use the Flux.jl library to define a multilayer perceptron (MLP) with one hidden layer of 5 nodes and a sigmoid activation function. This looks like:

`chain = Flux.Chain(Dense(1, 5, σ), Dense(5, 1))`

```
Chain(
Dense(1 => 5, σ), # 10 parameters
Dense(5 => 1), # 6 parameters
) # Total: 4 arrays, 16 parameters, 320 bytes.
```

Now we must choose an optimizer to define the `NNODE` solver. A common choice is `Adam`, with a tunable learning rate parameter, which we will set to `0.1`. In general, this rate should be decreased if the solver's loss is unsteady (sometimes rising "too much"), but it should otherwise be as large as possible for efficiency. Thus the definition of the `NNODE` solver is as follows:

```
opt = OptimizationOptimisers.Adam(0.1)
alg = NeuralPDE.NNODE(chain, opt)
```

`NNODE{Flux.Chain{Tuple{Flux.Dense{typeof(NNlib.σ), Matrix{Float32}, Vector{Float32}}, Flux.Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, Optimisers.Adam{Float64}, Nothing, Nothing, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, Nothing}(Chain(Dense(1 => 5, σ), Dense(5 => 1)), Optimisers.Adam{Float64}(0.1, (0.8999999761581421, 0.9990000128746033), 2.220446049250313e-16), nothing, false, nothing, nothing, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}())`

Once these pieces are together, we call `solve` just like with any other `ODEProblem` solver. Let's turn on `verbose` so we can see the loss over time during the training process:

`sol = solve(prob, alg, verbose = true, abstol = 1f-6, maxiters = 200)`

```
retcode: Success
Interpolation: Trained neural network interpolation
t: 0.0f0:0.01010101f0:1.0f0
u: 100-element Vector{Float32}:
0.0
0.004912926
0.009599451
0.014059018
0.018291505
0.02229722
0.02607692
0.029631782
0.03296341
0.036073834
⋮
-0.004584354
-0.004651085
-0.0046750023
-0.004656495
-0.0045957863
-0.0044933376
-0.00434948
-0.004164579
-0.003939092
```
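Since this particular ODE has the closed-form solution $u(t) = \sin(2\pi t)/(2\pi)$ (the antiderivative of the right-hand side with $u(0) = 0$), one way to gauge the trained network is to compare against it. A small sketch:

```
# Closed-form solution of u' = cos(2πt), u(0) = 0:
analytic(t) = sin(2pi * t) / (2pi)

# With `sol` from the `solve` call above, the maximum pointwise error
# over the saved time points would be:
# maximum(abs.(sol.u .- analytic.(sol.t)))
analytic(0.25)  # peak value of the solution, 1 / (2π) ≈ 0.159
```

If the error is larger than desired, increasing `maxiters`, widening the network, or lowering the learning rate are the usual levers.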

And that's it: the neural network solution was computed by training the neural network and returned in the standard DifferentialEquations.jl `ODESolution` format. For more information on handling the solution, consult the DifferentialEquations.jl solution handling section.