MLXFast on Linux (CPU)#359

Open
Joannis wants to merge 1 commit into ml-explore:main from Joannis:jo/mlxfast-linux

Conversation

Contributor

@Joannis Joannis commented Feb 20, 2026

Proposed changes

Fills the gap in MLXNN and MLXFast support on Linux (CPU mode).

Checklist

Put an x in the boxes that apply.

  • I have read the CONTRIBUTING document
  • I have run pre-commit run --all-files to format my code / installed pre-commit prior to committing changes
  • I have added tests that prove my fix is effective or that my feature works
  • I have updated the necessary documentation (if needed)

scale: Float,
offset: Int,
freqs: MLXArray? = nil,
stream: StreamOrDevice = .default
Collaborator

Do both of these files build on all platforms? I think maybe they don't on CPU-only Linux?

Just thinking out loud here:

  • should we use something like #if canImport(Metal) to select which file we use?
  • if both build, I wonder if we should switch on stream to route to the correct backend? Then a GPU user could run the CPU implementation if they wanted.
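The two routing options above can be sketched as follows. This is a minimal illustration only; the function and type names are placeholders, not the actual MLX Swift API.

```swift
// Option 1: compile-time selection. Only one implementation file is built
// per platform, gated on whether Metal is available.
#if canImport(Metal)
// Apple platforms: use the Metal-backed implementation.
func fastBackendName() -> String { "metal" }
#else
// CPU-only platforms (e.g. Linux without Metal): use the CPU implementation.
func fastBackendName() -> String { "cpu" }
#endif

// Option 2: runtime selection. If both implementations build everywhere,
// route on the requested stream/device so a GPU user can still opt into
// the CPU path. `Device` here stands in for StreamOrDevice.
enum Device { case cpu, gpu }

func fastBackendName(for device: Device) -> String {
    switch device {
    case .cpu: return "cpu"   // dispatch to the CPU implementation
    case .gpu: return "gpu"   // dispatch to the Metal implementation
    }
}
```

Option 1 keeps the binary smaller; option 2 trades that for runtime flexibility, at the cost of shipping both code paths on every platform.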

Comment on lines 1396 to 1397
// note: this gives a warning but it does in fact do something
// in the case where this is e.g. ParameterInfo<MLXArray?>
Collaborator

If this works then I think this comment could just indicate which case it was (remove the part about the warning).

We have a test for it:

    func testOptionInfos() {
        class Layer1: Module {
            @ParameterInfo var x: MLXArray?
        }

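For context, the pattern being discussed can be reproduced in a standalone sketch. Everything below is illustrative (using Int? in place of MLXArray?, and a simplified ParameterInfo), not the actual MLX code:

```swift
// A simplified stand-in for a property-wrapper info type.
struct ParameterInfo<T> {
    var value: T
}

// When the generic payload is itself optional (here Int?, analogous to
// MLXArray?), a conditional cast through Any can trigger a compiler
// warning even though it still does useful work at runtime.
let info: Any = ParameterInfo<Int?>(value: nil)

if let typed = info as? ParameterInfo<Int?> {
    // The cast succeeded and we can inspect the optional payload.
    print("payload is nil:", typed.value == nil)
}
```

This is the situation the original comment hedges about: the warning suggests the cast is redundant, but the optional-payload case still needs to be distinguished.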

2 participants