Adding supporting code for blog post #9
`fourierLayer.m` (new file, +27 lines):

```matlab
function layer = fourierLayer(tWidth,numModes,args)

arguments
    tWidth
    numModes
    args.Name = ""
end
name = args.Name;

% Build a residual block: a spectral-convolution branch and a
% pointwise (1-by-1) convolution branch, summed by an addition layer.
net = dlnetwork;

layers = [
    identityLayer(Name="in")
    spectralConvolution1dLayer(tWidth,numModes,Name="specConv")
    additionLayer(2,Name="add")];

net = addLayers(net,layers);

layer = convolution1dLayer(1,tWidth,Name="fc");
net = addLayers(net,layer);

net = connectLayers(net,"in","fc");
net = connectLayers(net,"fc","add/in2");

layer = networkLayer(net,Name=name);

end
```

Review discussion on the `spectralConvolution1dLayer(tWidth,numModes,Name="specConv")` line:

> **Reviewer (Ben):** I would prefer the arguments here to be the other way around.
>
> **Reviewer:** +1 to Ben's comment about the ordering of the inputs!
>
> **Author:** Changed it to (numModes, tWidth) in the most recent commit. Btw I got this from the doc example: https://www.mathworks.com/help/deeplearning/ug/solve-pde-using-fourier-neural-operator.html, should it be changed there too?
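For readers following along without MATLAB, the wiring of the layer above (a spectral-convolution branch plus the pointwise "fc" branch merged by the addition layer) can be sketched in NumPy. The function and weight names below are hypothetical stand-ins, not part of this repository:

```python
import numpy as np

def fourier_layer(x, w_spectral, w_fc, num_modes):
    """Sketch of the Fourier layer wiring: a spectral-convolution branch
    added to a pointwise (1x1 convolution) branch.
    x: (N, in_channels); w_spectral: (in, out, num_modes); w_fc: (in, out)."""
    N = x.shape[0]
    xft = np.fft.rfft(x, axis=0)               # one-sided spectrum, (N//2+1, in)
    yft = np.zeros((xft.shape[0], w_spectral.shape[1]), dtype=complex)
    for k in range(num_modes):                 # mix channels on retained modes only
        yft[k] = xft[k] @ w_spectral[:, :, k]
    spectral = np.fft.irfft(yft, n=N, axis=0)  # truncated modes stay zero
    pointwise = x @ w_fc                       # the "fc" 1-by-1 convolution branch
    return spectral + pointwise                # the "add" layer
```

With a single channel, unit weights, and all `N//2 + 1` modes retained, both branches reduce to the identity, so the output is `2*x`.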
`spectralConvolution1dLayer.m` (new file, +97 lines):

Review discussion on this file:

> **Reviewer:** I expect we could use the same version here and in the 1d FNO example - maybe we should consider a directory of shared tools for the examples. That's something we'll have to look into in future I think, not something to try to deal with in this PR.

```matlab
classdef spectralConvolution1dLayer < nnet.layer.Layer ...
        & nnet.layer.Formattable ...
        & nnet.layer.Acceleratable
    % spectralConvolution1dLayer   1-D Spectral Convolution Layer

    properties
        NumChannels
        OutputSize
        NumModes
    end

    properties (Learnable)
        Weights
    end

    methods
        function this = spectralConvolution1dLayer(outChannels,numModes,args)
            % spectralConvolution1dLayer   1-D Spectral Convolution Layer
            %
            %   layer = spectralConvolution1dLayer(outChannels, numModes)
            %   creates a spectral convolution 1d layer. outChannels
            %   specifies the number of channels in the layer output.
            %   numModes specifies the number of modes which are combined
            %   in Fourier space.
            %
            %   layer = spectralConvolution1dLayer(outChannels, numModes,
            %   Name=Value) specifies additional options using one or more
            %   name-value arguments:
            %
            %   Name    - Name for the layer. The default value is "".
            %
            %   Weights - Complex learnable array of size
            %             (inChannels)x(outChannels)x(numModes). The
            %             default value is [].

            arguments
                outChannels (1,1) double
                numModes    (1,1) double
                args.Name   {mustBeTextScalar} = "spectralConv1d"
                args.Weights = []
            end

            this.OutputSize = outChannels;
            this.NumModes   = numModes;
            this.Name       = args.Name;
            this.Weights    = args.Weights;
        end

        function this = initialize(this, ndl)
            inChannels  = ndl.Size( finddim(ndl,'C') );
            outChannels = this.OutputSize;
            numModes    = this.NumModes;

            if isempty(this.Weights)
                this.NumChannels = inChannels;
                this.Weights = 1./(inChannels*outChannels).*( ...
                    rand([inChannels outChannels numModes]) + ...
                    1i.*rand([inChannels outChannels numModes]) );
            else
                assert( inChannels == this.NumChannels, ...
                    'The input channel size must match the layer' );
            end
        end

        function y = predict(this, x)
            % First compute the rfft, normalized and one-sided
            x = real(x);
            x = stripdims(x);
            N = size(x, 1);
            xft = iRFFT(x, 1, N);

            % Multiply selected Fourier modes
            xft = permute(xft(1:this.NumModes, :, :), [3 2 1]);
            yft = pagemtimes( xft, this.Weights );
            yft = permute(yft, [3 2 1]);

            % Zero-pad the truncated high-frequency modes
            S = floor(N/2)+1 - this.NumModes;
            yft = cat(1, yft, zeros([S size(yft, 2:3)], 'like', yft));

            % Return to physical space via irfft, normalized and one-sided
            y = iIRFFT(yft, 1, N);

            % Re-apply labels
            y = dlarray(y, 'SCB');
            y = real(y);
        end
    end
end

function y = iRFFT(x, dim, N)
y = fft(x, [], dim);
y = y(1:floor(N/2)+1, :, :)./N;
end

function y = iIRFFT(x, dim, N)
x(end+1:N, :, :, :) = conj( x(ceil(N/2):-1:2, :, :, :) );
y = ifft(N.*x, [], dim, 'symmetric');
end
```
`trainingPartitions.m` (new file, +47 lines):

```matlab
function varargout = trainingPartitions(numObservations,splits)
%TRAININGPARTITIONS Random indices for splitting training data
%   [idx1,...,idxN] = trainingPartitions(numObservations,splits) returns
%   random vectors of indices to help split a data set with the specified
%   number of observations, where SPLITS is a vector of length N of
%   partition sizes that sum to one.
%
%   % Example: Get indices for 50%-50% training-test split of 500
%   % observations.
%   [idxTrain,idxTest] = trainingPartitions(500,[0.5 0.5])
%
%   % Example: Get indices for 80%-10%-10% training, validation, test split
%   % of 500 observations.
%   [idxTrain,idxValidation,idxTest] = trainingPartitions(500,[0.8 0.1 0.1])

arguments
    numObservations (1,1) {mustBePositive}
    splits {mustBeVector,mustBeInRange(splits,0,1,"exclusive"),mustSumToOne}
end

numPartitions = numel(splits);
varargout = cell(1,numPartitions);

idx = randperm(numObservations);

idxEnd = 0;

for i = 1:numPartitions-1
    idxStart = idxEnd + 1;
    idxEnd = idxStart + floor(splits(i)*numObservations) - 1;

    varargout{i} = idx(idxStart:idxEnd);
end

% Last partition.
varargout{end} = idx(idxEnd+1:end);

end

function mustSumToOne(v)
% Validate that value sums to one.

if sum(v,"all") ~= 1
    error("Value must sum to one.")
end

end
```
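The same partitioning scheme, sketched in NumPy for readers outside MATLAB (the names are illustrative, not part of the repo): shuffle the indices once, then cut them into chunks whose sizes follow the requested fractions, with the last chunk absorbing any rounding remainder.

```python
import numpy as np

def training_partitions(num_observations, splits, seed=None):
    """Return a list of index arrays partitioning range(num_observations);
    `splits` are fractions that must sum to one."""
    assert abs(sum(splits) - 1.0) < 1e-12, "splits must sum to one"
    idx = np.random.default_rng(seed).permutation(num_observations)
    parts, start = [], 0
    for frac in splits[:-1]:
        stop = start + int(np.floor(frac * num_observations))
        parts.append(idx[start:stop])
        start = stop
    parts.append(idx[start:])  # last partition takes the remainder
    return parts
```

For example, `training_partitions(500, [0.8, 0.1, 0.1])` yields partitions of 400, 50, and 50 indices that together cover all 500 observations exactly once.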
Review discussion on the live scripts:

> **Reviewer:** I find it frustrating working with MLX files on GitHub because no-one can see the code. This also makes it hard to review. My personal preference is to work locally in the MLX, then generate a markdown version which is added as a README.
>
> **Author:** Hi Conor, thank you so much, I made a bunch of these changes. Regarding the formatting: I removed the formatting (I pushed these changes, hopefully you can see them?) but now I have a bunch of transposes all over the place, so I'm not sure if I did this correctly. For example, in the function hamiltonianDerivatives (was dlderivative), I used to have [...]. Further, when I go to make predictions by solving Hamilton's equations, I now need to transpose the x or else I get a "was expecting input of size 2" error: [...]
>
> **Author:** Regarding "The original ode45 can't use dlarray..." -- I copied and pasted that from our HNN example on our GitHub: https://github.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/tree/main/hamiltonian-neural-network, which was created with 22b, so it's good you are helping me update the code for new releases! Do I still need to have the lines [...]?
>
> **Author:** The most recent commits add the readme.md and .m files for each corresponding live script, as suggested.
`solveNonlinearPendulum.m` (new file, +7 lines):

```matlab
function sol = solveNonlinearPendulum(tspan,omega0,thetaDot0)
F = ode;
F.ODEFcn = @(t,x) [x(2); -omega0^2.*sin(x(1))];
F.InitialValue = [0; thetaDot0];
F.Solver = "ode45";
sol = solve(F,tspan);
end
```
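For comparison outside MATLAB, the same initial-value problem (theta'' = -omega0² sin(theta), starting from theta = 0) can be integrated with a plain fixed-step RK4 loop. This is a stand-in sketch for the adaptive ode45 solve above, not repository code:

```python
import numpy as np

def solve_nonlinear_pendulum(t, omega0, theta_dot0):
    """Integrate the pendulum state x = [theta, theta_dot] with classic RK4
    over the time grid t; returns an array of shape (len(t), 2)."""
    f = lambda x: np.array([x[1], -omega0**2 * np.sin(x[0])])
    x = np.zeros((len(t), 2))
    x[0] = [0.0, theta_dot0]
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        k1 = f(x[i])
        k2 = f(x[i] + h / 2 * k1)
        k3 = f(x[i] + h / 2 * k2)
        k4 = f(x[i] + h * k3)
        x[i + 1] = x[i] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x
```

A quick sanity check is energy conservation: E = theta_dot²/2 + omega0²(1 - cos theta) should stay (numerically) constant along the trajectory.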
`README.md` (new file, +32 lines):

# Physics Informed Machine Learning Methods and Implementation supporting code

This repository provides examples demonstrating a variety of Physics-Informed Machine Learning (PhiML) techniques, applied to a simple pendulum system. The examples provided here are discussed in detail in the accompanying blog post "Physics-Informed Machine Learning: Methods and Implementation".


The techniques described for the pendulum are categorized by their primary objectives:
- **Modeling complex systems from data**
  1. **Neural Ordinary Differential Equation**: learns underlying dynamics $f$ in the differential equation $\frac{d \mathbf{x}}{d t} = f(\mathbf{x},\mathbf{u})$ using a neural network.
  2. **Neural State-Space**: learns both the system dynamics $f$ and the measurement function $g$ in the state-space model $\frac{d \mathbf{x}}{dt} = f(\mathbf{x},\mathbf{u}), \mathbf{y}=g(\mathbf{x},\mathbf{u})$ using neural networks. Not shown in this repository, but illustrated in the documentation example [Neural State-Space Model of Simple Pendulum System](https://www.mathworks.com/help/ident/ug/training-a-neural-state-space-model-for-a-simple-pendulum-system.html).
  3. **Universal Differential Equation**: combines known dynamics $g$ with unknown dynamics $h$, e.g. $\frac{d\mathbf{x}}{dt} = f(\mathbf{x},\mathbf{u}) = g(\mathbf{x},\mathbf{u}) + h(\mathbf{x},\mathbf{u})$, learning the unknown part using a neural network.
  4. **Hamiltonian Neural Network**: learns the system's Hamiltonian $\mathcal{H}$, and accounts for energy conservation by enforcing Hamilton's equations $\frac{dq}{dt} = \frac{\partial \mathcal{H}}{\partial p}, \frac{dp}{dt} = -\frac{\partial \mathcal{H}}{\partial q}$.
- **Discovering governing equations from data:**
  1. **SINDy (Sparse Identification of Nonlinear Dynamics)**: learns a mathematical representation of the system dynamics $f$ by performing sparse regression on a library of candidate functions.
- **Solving known ordinary and partial differential equations:**
  1. **Physics-Informed Neural Networks**: learns the solution to a differential equation by embedding the governing equations directly into the neural network's loss function.
  2. **Fourier Neural Operator**: learns the solution operator that maps input functions (e.g. forcing function) to the solution of a differential equation, utilizing Fourier transforms to parameterize the integral kernel and efficiently capture global dependencies.
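As a toy illustration of the SINDy entry above, here is a sketch of sequentially thresholded least squares in NumPy. It is independent of this repository's implementation; the names and the candidate library are made up for the example:

```python
import numpy as np

def sindy_stls(theta, dxdt, threshold=0.1, iters=10):
    """Sparse regression sketch: fit dx/dt = Theta(x) @ xi, then repeatedly
    zero out coefficients below `threshold` and refit on surviving terms."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        keep = ~small
        if keep.any():
            xi[keep] = np.linalg.lstsq(theta[:, keep], dxdt, rcond=None)[0]
    return xi

# Recover dx/dt = -0.5*x + 2*sin(x) from the candidate library [1, x, x^2, sin(x)]
x = np.linspace(-2.0, 2.0, 200)
theta = np.column_stack([np.ones_like(x), x, x**2, np.sin(x)])
xi = sindy_stls(theta, -0.5 * x + 2.0 * np.sin(x))
```

On this noise-free data the regression recovers the coefficient vector [0, -0.5, 0, 2], i.e. it selects only the $x$ and $\sin(x)$ terms.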
## Setup
Open the project file physics-informed-ml-blog-supporting-code.prj to correctly set the path.

## MathWorks Products ([https://www.mathworks.com](https://www.mathworks.com/))
Requires MATLAB® R2024a or newer
- [Deep Learning Toolbox™](https://www.mathworks.com/products/deep-learning.html)

## License
The license is available in the [license.txt](license.txt) file in this GitHub repository.

## Community Support
[MATLAB Central](https://www.mathworks.com/matlabcentral)

Copyright 2025 The MathWorks, Inc.
Review discussion on the included MAT-files:

> **Reviewer:** Is it absolutely necessary to include this MAT-file (and the others) in the repo? Can the data be created on the fly? We should avoid including data with the repo -- unless it is absolutely necessary for some reason. The last time I looked into this, every time the repo is versioned, git records a copy of binary files. This causes the repo to become exorbitantly large and leads to a painful clone experience for users. I think this is particularly important for the SciML repo, which houses lots of examples. If every example stored data or networks, the repo would quickly balloon in size and become unusable. The best approach is for binaries -- whether it is data or saved networks -- to be stored on supportfiles. You can see how we've done that in other repos with large binaries: https://github.com/matlab-deep-learning/resnet-50/blob/317ea186da1cb3d429d53559fed7386548ff8e47/assembleResNet50.m#L18
>
> **Author:** I requested support files be made. Until then, I have added the files which generate the data. In my Live Scripts, I add a flag for [...]. I am wondering if this is expected behavior? I have not come across this before. I would have thought that setting [...]