Training strategies are the choices for how points are sampled to define the physics-informed loss.
QuasiRandomTraining with its default
LatinHypercubeSample() is a well-rounded training strategy that can be used in most situations. It scales well to high-dimensional spaces and is GPU-compatible.
QuadratureTraining can lead to faster or more robust convergence with one of the h-Cubature or p-Cubature methods, but it is not currently GPU-compatible. For very high dimensional cases,
QuadratureTraining with an adaptive Monte Carlo quadrature method, such as
CubaVegas, can be beneficial for difficult or stiff problems.
GridTraining should only be used for testing purposes and should not be relied upon for real training cases.
StochasticTraining achieves a lower convergence rate than the quasi-Monte Carlo methods, and thus
QuasiRandomTraining should be preferred in most cases.
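As a minimal sketch of how a strategy is selected (assuming the NeuralPDE.jl and Lux.jl APIs; the network architecture and point count here are illustrative, not prescriptive):

```julia
using NeuralPDE, Lux

# A small fully connected network for a solution u(t, x) of a 2D problem.
chain = Chain(Dense(2, 16, tanh), Dense(16, 16, tanh), Dense(16, 1))

# QuasiRandomTraining with its default LatinHypercubeSample() is the
# recommended general-purpose choice; 256 points are drawn per sample.
strategy = QuasiRandomTraining(256)

# The strategy is passed to the discretization alongside the network.
discretization = PhysicsInformedNN(chain, strategy)
```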
GridTraining(dx)

A training strategy that uses the points of a multidimensional grid with spacings
dx. If the grid is multidimensional, then
dx is expected to be an array of
dx values matching the dimension of the domain, corresponding to the grid spacing in each dimension.
dx: the discretization of the grid.
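For instance (a hypothetical sketch; the spacings are arbitrary):

```julia
using NeuralPDE

# 1D domain: a single grid spacing.
strategy_1d = GridTraining(0.1)

# 2D domain, e.g. (t, x): one spacing per dimension.
strategy_2d = GridTraining([0.1, 0.05])
```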
StochasticTraining(points; bcs_points = points)
points: the number of points in the randomly selected training set.
bcs_points: the number of points in the randomly selected training set for the boundary conditions (by default, it equals points).
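A usage sketch (the point counts here are arbitrary choices):

```julia
using NeuralPDE

# 128 randomly selected interior points per iteration,
# and 32 points for the boundary conditions.
strategy = StochasticTraining(128; bcs_points = 32)
```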
QuasiRandomTraining(points; bcs_points = points, sampling_alg = LatinHypercubeSample(), resampling = true, minibatch = 0)
A training strategy which uses quasi-Monte Carlo sampling with low-discrepancy sequences, which accelerate convergence in high-dimensional spaces over pure random sequences.
points: the number of quasi-random points in a sample
bcs_points: the number of quasi-random points in a sample for the boundary conditions (by default, it equals points).
sampling_alg: the quasi-Monte Carlo sampling algorithm.
resampling: if false, the full training set is generated in advance before training, and at each iteration one subset is randomly selected out of the batch; if true, the training set isn't generated beforehand, and one set of quasi-random points is generated directly at each iteration at runtime (in this case, minibatch has no effect).
minibatch: the number of subsets, used only if resampling == false.
For more information, see QuasiMonteCarlo.jl.
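Two hedged examples of the resampling modes described above (the sampler choices and point counts are illustrative; SobolSample and LatinHypercubeSample come from QuasiMonteCarlo.jl):

```julia
using NeuralPDE, QuasiMonteCarlo

# resampling = true (the default): a fresh set of 256 Sobol points
# is generated at every iteration, so minibatch has no effect.
strategy_resample = QuasiRandomTraining(256; sampling_alg = SobolSample())

# resampling = false: 100 subsets of 256 Latin-hypercube points are
# generated up front, and one subset is drawn per iteration.
strategy_minibatch = QuasiRandomTraining(256;
    sampling_alg = LatinHypercubeSample(),
    resampling = false,
    minibatch = 100)
```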
QuadratureTraining(; quadrature_alg = CubatureJLh(), reltol = 1e-6, abstol = 1e-3, maxiters = 1_000, batch = 100)
A training strategy which treats the loss function as the integral of ||condition|| over the domain. It uses an Integrals.jl algorithm to compute the (adaptive) quadrature of this loss with respect to the chosen tolerances, with
batch corresponding to the maximum number of points to evaluate in a given integrand call.
quadrature_alg: the quadrature algorithm.
reltol: the relative tolerance.
abstol: the absolute tolerance.
maxiters: the maximum number of iterations in the quadrature algorithm.
batch: the preferred number of points to batch.
For more information on the argument values and algorithm choices, see Integrals.jl.
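A usage sketch with the defaults spelled out (assuming CubatureJLh() is available through Integrals.jl's Cubature.jl bindings; the tolerances simply restate the defaults above):

```julia
using NeuralPDE
using Integrals, Cubature  # CubatureJLh() is provided via Cubature.jl

# Adaptive h-cubature of the loss integral over the domain; up to 100
# points are evaluated per integrand call.
strategy = QuadratureTraining(; quadrature_alg = CubatureJLh(),
    reltol = 1e-6, abstol = 1e-3,
    maxiters = 1_000, batch = 100)
```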