DiffEqGPU DAE initialization compatibility #3649

ChrisRackauckas opened this issue May 21, 2025 · 0 comments

In order to get MTK compatible with DiffEqGPU, I needed to cut out the initialization system in the adaptor:

https://github.com/SciML/DiffEqGPU.jl/blob/master/src/ensemblegpukernel/integrators/nonstiff/types.jl#L11-L20

However, because of this, the GPU kernels cannot run the custom initialization. The MWE is:

using OrdinaryDiffEqTsit5, ModelingToolkit, StaticArrays
using ModelingToolkit: t_nounits as t, D_nounits as D

@parameters σ ρ β
@variables x(t) y(t) z(t)

eqs = [D(D(x)) ~ σ * (y - x),
    D(y) ~ x * (ρ - z) - y,
    D(z) ~ x * y - β * z]

@mtkbuild sys = ODESystem(eqs, t)
u0 = SA[D(x) => 2f0,
    x => 1f0,
    y => 0f0,
    z => 0f0]

p = SA[σ => 28f0,
    ρ => 10f0,
    β => 8f0 / 3f0]

tspan = (0f0, 100f0)
prob = ODEProblem{false}(sys, u0, tspan, p)
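For reference, the CPU path still runs the custom initialization that the GPU adaptor cuts out; a quick sanity check (nothing GPU-specific, purely illustrative) is:

sol = solve(prob, Tsit5())   # MTK's generated initialization runs here, on the CPU, before time stepping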

With the main case being:

using Adapt, CUDA
adapt(CuArray, adapt.(CuArray, [prob, prob]))

Now, generally, getting that to work requires everything to be isbits, though some functions can apparently be non-isbits? I'm not sure of the rules for the exceptions.
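One hedged way to see what trips the isbits requirement (a diagnostic sketch, not part of the MWE) is to walk the ODEFunction's fields:

for name in propertynames(prob.f)
    val = getproperty(prob.f, name)
    # the initialization data (and the things it closes over) shows up among the offenders
    isbits(val) || println(name, " => ", typeof(val))
end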

Getting this to work would build on SciML/SciMLBase.jl#1031. In particular, https://github.com/SciML/SciMLBase.jl/pull/1031/files#diff-6466a316bf7b1c6a59975d6769c4b5c96cd72f2423d68daaa08e87c23cbfe7a5R20 needs to add the initialization problem. But there's an oddity in that ODEFunction dispatch, where the initialization problem is always added, so it needs something like:

if initialization_data === nothing
    # only rebuild from the legacy keyword arguments when no initialization_data was given
    initdata = reconstruct_initialization_data(
        initialization_data, initializeprob, update_initializeprob!,
        initializeprobmap, initializeprobpmap)
else
    # keep the explicitly supplied (possibly GPU-adapted) initialization_data as-is
    initdata = initialization_data
end

if that's safe. Otherwise it always builds a new one, and it builds a non-immutable one. I set that up and did:

function Adapt.adapt_structure(to, f::ODEFunction{iip}) where {iip}
    # a non-identity mass matrix together with initialization data is an MTK DAE: bail out
    if f.mass_matrix !== I && f.initialization_data !== nothing
        error("Adaptation to GPU failed: DAEs of ModelingToolkit currently not supported.")
    end
    ODEFunction{iip, FullSpecialize}(f.f, jac = f.jac, mass_matrix = f.mass_matrix, initialization_data = f.initialization_data)
end

function Adapt.adapt_structure(to, data::SciMLBase.OverrideInitData)
    SciMLBase.OverrideInitData(nothing,
        nothing,
        nothing,
        nothing,
        nothing, data.is_update_oop)
end

which works, then

function Adapt.adapt_structure(to, data::SciMLBase.OverrideInitData)
    SciMLBase.OverrideInitData(nothing,
        data.initializeprobmap,
        nothing,
        nothing,
        nothing, data.is_update_oop)
end

which fails (as it does when any of the other nothings is replaced by its real field), confirming that the issue is the initialization data. Thus the first major task is to get an initialization_data that is static-compatible enough to live as an ImmutableNonlinearProblem plus a few functions that can be adapted to the GPU.
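For what it's worth, once every field is individually adaptable, the end state could be as simple as a field-by-field rule; this is a sketch assuming OverrideInitData's positional constructor lines up with its fields, which is exactly what Adapt.@adapt_structure generates:

# hypothetical end state: adapt every field instead of nulling them out
Adapt.@adapt_structure SciMLBase.OverrideInitData

That only becomes useful once the initializeprob itself adapts to a static ImmutableNonlinearProblem and the maps adapt to isbits closures, which is the work described above.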

If that's the case, then the kernel ODE methods need to add the initialization phase, which would go here https://github.com/SciML/DiffEqGPU.jl/blob/v3.6.0/src/ensemblegpukernel/kernels.jl and needs to be a fully static call. If it's an ImmutableNonlinearProblem then we're good, since I just got that working the other day (https://docs.sciml.ai/NonlinearSolve/dev/tutorials/nonlinear_solve_gpus/), so SimpleNewtonRaphson is kernel-ready. With static arrays, if the function is static, then https://github.com/SciML/SciMLBase.jl/blob/v2.91.1/src/initialization.jl#L242-L310 is kernel-ready too, so we can just slap that right into the kernel and we're done.
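A hedged sketch of the shape of that in-kernel initialization call (illustrative names only; the real version would consume the adapted initialization_data through whatever entry point SciMLBase settles on):

using SciMLBase, SimpleNonlinearSolve

# Solve the (static, isbits) initialization problem and return the consistent state for
# the ODE stepper. `initprob` is assumed to be an ImmutableNonlinearProblem over SVectors.
function kernel_init_u0(initprob)
    initsol = solve(initprob, SimpleNewtonRaphson(); abstol = 1f-6, reltol = 1f-6)
    return initsol.u
end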

So the major issue is getting initialization_data to be adaptable, and I think the main blocker there is making adapt on the SII (SymbolicIndexingInterface) functions return static-array versions of the indexers.
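The flavor of change that implies, in the abstract (illustrative only, not SII's actual internals): index caches captured by the generated getters need to become isbits, e.g.

using StaticArrays

idxs = [1, 3, 4]                         # Vector{Int}: a closure capturing this is not isbits
vec_getter = let ix = idxs
    u -> map(i -> u[i], ix)
end
isbits(vec_getter)                       # false, so it cannot be passed into a GPU kernel

static_idxs = SVector{3, Int}(idxs)      # static index cache
sa_getter = let ix = static_idxs
    u -> map(i -> u[i], ix)
end
isbits(sa_getter)                        # true: the form the adapted indexers need to take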
