In order to get MTK compatible with DiffEq, I needed to cut out the initialization system in the adaptor:

https://github.com/SciML/DiffEqGPU.jl/blob/master/src/ensemblegpukernel/integrators/nonstiff/types.jl#L11-L20

However, because of this it cannot run the custom initialization. The MWE is:

With the main case being:

Now generally to get that to work you need to make things `isbits`, though some functions can be non-`isbits`? Not sure of the rules on the exceptions.

Getting this to work would build on SciML/SciMLBase.jl#1031. In particular, https://github.com/SciML/SciMLBase.jl/pull/1031/files#diff-6466a316bf7b1c6a59975d6769c4b5c96cd72f2423d68daaa08e87c23cbfe7a5R20 needs to add the initializeprob. But there's an oddity in that `ODEFunction` dispatch, where the initialization problem is always added, so it needs something like:
```julia
# Only reconstruct the initialization data when none was passed in;
# otherwise reuse the one that was provided.
if initialization_data === nothing
    initdata = reconstruct_initialization_data(
        initialization_data, initializeprob, update_initializeprob!,
        initializeprobmap, initializeprobpmap)
else
    initdata = initialization_data
end
```
if that's safe. Otherwise it always builds a new one, and it builds a non-immutable one. I set that up and did:
```julia
function adapt_structure(to, f::ODEFunction{iip}) where {iip}
    if f.mass_matrix !== I && f.initialization_data !== nothing
        error("Adaptation to GPU failed: DAEs of ModelingToolkit currently not supported.")
    end
    ODEFunction{iip, FullSpecialize}(f.f, jac = f.jac, mass_matrix = f.mass_matrix,
        initialization_data = f.initialization_data)
end

function Adapt.adapt_structure(to, data::SciMLBase.OverrideInitData)
    SciMLBase.OverrideInitData(nothing,
        nothing,
        nothing,
        nothing,
        nothing, data.is_update_oop)
end
```
which works, then
```julia
function Adapt.adapt_structure(to, data::SciMLBase.OverrideInitData)
    SciMLBase.OverrideInitData(nothing,
        data.initializeprobmap,
        nothing,
        nothing,
        nothing, data.is_update_oop)
end
```
which fails (for changing any of the `nothing`s), which confirms the issue is the initialization data. Thus the first major issue is to get an `initialization_data` which is static-compatible enough to live as an `ImmutableNonlinearProblem` + a few functions that can adapt to GPU.

If that's the case, then the kernel ODE methods need to add the initialization phase, which would go here https://github.com/SciML/DiffEqGPU.jl/blob/v3.6.0/src/ensemblegpukernel/kernels.jl and needs to be a fully static call. If it's an `ImmutableNonlinearProblem` then we're good, as I just got that working the other day (https://docs.sciml.ai/NonlinearSolve/dev/tutorials/nonlinear_solve_gpus/), so SimpleNewtonRaphson is kernel-ready. With static arrays, if the function is static, then https://github.com/SciML/SciMLBase.jl/blob/v2.91.1/src/initialization.jl#L242-L310 is kernel-ready too, so we can just slap that right into the kernel and we're done.
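To make that concrete, here is a minimal sketch of the kind of fully static, `isbits`-friendly nonlinear solve the kernel would have to run for the initialization phase. The residual and numbers are made-up stand-ins, not the real initializeprob; the kernel version would build the problem from `initialization_data` instead:

```julia
# Illustrative only: a fully static nonlinear solve of the sort the kernel
# would need for initialization. The residual here is a hypothetical stand-in.
using StaticArrays, SciMLBase, SimpleNonlinearSolve

residual(u, p) = SA[u[1]^2 + u[2]^2 - p[1],  # made-up algebraic constraints
    u[1] - u[2]]

u0 = SA[1.0, 1.0]
p = SA[2.0]

prob = NonlinearProblem{false}(residual, u0, p)  # out-of-place, all isbits data
sol = solve(prob, SimpleNewtonRaphson(); abstol = 1e-9)
```

Everything in that call is `isbits`, which is the property the adapted `initialization_data` needs to preserve for the same call to be legal inside a GPU kernel.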
So the major issue is getting `initialization_data` to be adaptable, and I think the main piece of that is making `adapt` on the SII functions give a static-array version of the indexers.
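As a rough illustration of what that could look like (the `IndexGetter` type below is hypothetical — the real SymbolicIndexingInterface-generated getters store their indices in their own internal structs), the adapt rule would essentially swap heap-allocated index vectors for static ones:

```julia
# Hypothetical sketch: `IndexGetter` stands in for whatever struct the
# SII-generated getters/setters actually close over.
using Adapt, StaticArrays

struct IndexGetter{I}
    idxs::I            # Vector{Int} on the CPU side
end

(g::IndexGetter)(u) = u[g.idxs]

# Replace the Vector{Int} with an SVector so the whole getter becomes isbits
# and can be embedded in a GPU kernel.
Adapt.adapt_structure(to, g::IndexGetter) =
    IndexGetter(SVector{length(g.idxs)}(g.idxs))
```

If every field of `OverrideInitData` adapts to something along those lines, the whole struct stays `isbits` and the initialization solve can live in the kernel.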