
Enable Native External Audio Processing via TurboModules #469


Open · jerryseigle wants to merge 10 commits into main

Conversation

jerryseigle

This PR introduces a clean and modular system for injecting native audio processing logic into react-native-audio-api through the use of ExternalAudioProcessor and a global registry.

Developers can now:
• Register and unregister external processors from a TurboModule at runtime
• Write custom DSP (digital signal processing) code in C++
• Perform advanced buffer-level audio manipulation directly within the native render cycle
• Avoid any reliance on the JS thread; all processing runs fully natively

This allows integration of virtually any kind of audio processing, whether:
• Custom-written code tailored to your app’s needs
• Third-party DSP libraries (e.g., for pitch/tempo manipulation, watermarking, audio detection, etc.)

All without modifying the core AudioNode logic — keeping the system clean, flexible, and decoupled.

✅ Checklist
• Added ExternalAudioProcessor interface
• Implemented singleton processor registry
• Injected processing safely into AudioNode::processAudio
• Provided TurboModule demo for runtime control (volume reducer)
• Documented and isolated logic for easy integration

Introduced ExternalAudioProcessor and ExternalAudioProcessorRegistry to enable modular, native-side, buffer-level audio processing. This design allows developers to register or unregister custom DSP logic (e.g., third-party DSP libraries, custom DSP, volume reduction) directly from a TurboModule, without modifying AudioNode internals or routing audio through the JS layer. All processing occurs natively in C++ for optimal performance. This structure keeps the core engine untouched while offering flexible runtime control for external processors.
Updated AudioNode::processAudio to optionally route raw buffer data to an external processor, if one is registered. This enables native buffer-level DSP (e.g., gating, EQ, third-party DSP, or features not offered directly by react-native-audio-api) without modifying internal engine structures. The design supports full runtime control from TurboModules while preserving core stability.

All audio processing remains on the native side and bypasses JS execution for performance.
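
To make the design concrete, here is a minimal sketch of how the processor interface, the registry, and the processAudio hook could fit together. The names ExternalAudioProcessor, ExternalAudioProcessorRegistry, and AudioNode::processAudio come from this PR's description; the method signatures and registry shape below are illustrative assumptions, not the exact code from the PR:

#include <memory>
#include <mutex>
#include <utility>

// Assumed processor interface: mutates the raw audio buffer in place on the
// native render thread, with no JS involvement.
class ExternalAudioProcessor {
 public:
  virtual ~ExternalAudioProcessor() = default;
  virtual void process(float *buffer, int framesToProcess, int channelCount) = 0;
};

// Assumed singleton registry that a TurboModule can use to register or
// unregister a processor at runtime. (A production version would avoid
// locking on the audio thread; the mutex just keeps the sketch simple.)
class ExternalAudioProcessorRegistry {
 public:
  static ExternalAudioProcessorRegistry &instance() {
    static ExternalAudioProcessorRegistry registry;
    return registry;
  }
  void set(std::shared_ptr<ExternalAudioProcessor> processor) {
    std::lock_guard<std::mutex> lock(mutex_);
    processor_ = std::move(processor);
  }
  std::shared_ptr<ExternalAudioProcessor> get() {
    std::lock_guard<std::mutex> lock(mutex_);
    return processor_;
  }

 private:
  std::mutex mutex_;
  std::shared_ptr<ExternalAudioProcessor> processor_;
};

// Example DSP matching the demo: a volume reducer that scales every sample.
class VolumeReducer : public ExternalAudioProcessor {
 public:
  explicit VolumeReducer(float gain) : gain_(gain) {}
  void process(float *buffer, int framesToProcess, int channelCount) override {
    for (int i = 0; i < framesToProcess * channelCount; ++i) {
      buffer[i] *= gain_;
    }
  }

 private:
  float gain_;
};

// Inside AudioNode::processAudio the hook would then look roughly like:
//   if (auto p = ExternalAudioProcessorRegistry::instance().get()) {
//     p->process(buffer, framesToProcess, channelCount);
//   }

A TurboModule method could then register the processor at runtime, e.g. ExternalAudioProcessorRegistry::instance().set(std::make_shared<VolumeReducer>(0.5f)), and unregister it by setting nullptr.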
@jerryseigle
Author

I’m not sure where to include a working example, so I’ve attached a few sample files here. If approved, I’ll also add proper documentation. You can test the implementation using these files, and there’s also a demo video available to showcase it in action.
shared.zip

DemoVideo.mp4

@michalsek
Member

Hey, not sure where to respond, so will write everything here :)

Yes, each node is a separate AudioNode, which means that in order to have such an external processing node that can be hooked up anywhere in the graph, I think we have to iterate on this a bit :)

Instead of modifying the AudioNode directly, I would go for a new, separate node that implements this, e.g. CustomAudioNode; it would inherit from AudioNode and override the processAudio or processNode method.

This could then be exposed to JS/RN as a new node type that we can connect to, e.g.:

const absn1 = audioContext.createBufferSource();
const absn2 = audioContext.createBufferSource();
const externalProcessor = audioContext.createCustomProcessor();

absn1.connect(externalProcessor);
absn2.connect(externalProcessor);

externalProcessor.connect(audioContext.destination);
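
(For illustration, a minimal C++ sketch of such a subclass; the AudioNode / AudioBus signatures below are assumptions and may differ from the actual interfaces in react-native-audio-api:)

#include <memory>

class CustomAudioNode : public AudioNode {
 protected:
  // Assumed override point; the real processNode / processAudio signature
  // in the library may differ.
  void processNode(const std::shared_ptr<AudioBus> &bus, int framesToProcess) override {
    for (int ch = 0; ch < bus->getNumberOfChannels(); ++ch) {
      float *data = bus->getChannel(ch)->getData();
      for (int i = 0; i < framesToProcess; ++i) {
        data[i] *= 0.5f; // placeholder DSP: simple attenuation
      }
    }
  }
};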

Regarding your question about timing, if we have:

absn1.start(now + 0.01);
absn2.start(now + 0.01);

both nodes will start at exactly the same time (at the exact same sample frame, to be precise), but as I stated above, they do not share the same AudioNode instance. It is a bit more complex, and I will be happy to explain how the pull-graph works within the audio api or web audio, if you're interested :)

But for this case, it might not be necessary.

But overall great job! 🔥
I'd imagined that external C++ integrations would be much harder to implement :)

@jerryseigle
Author

jerryseigle commented May 28, 2025


Just so I understand: am I to model it similar to the GainNode and place it in the effects directory?

@michalsek
Member

> Just so I understand: am I to model it similar to the GainNode and place it in the effects directory?

Yes, exactly, if you are willing to wait, I will prep all the required interfaces for you over the weekend :)

@jerryseigle
Author

> Yes, exactly, if you are willing to wait, I will prep all the required interfaces for you over the weekend :)

Yes, I can wait if you prep the required interfaces. Thanks!

@jerryseigle
Author

> Yes, exactly, if you are willing to wait, I will prep all the required interfaces for you over the weekend :)

Hey! Hope you had a good weekend 🙂 Just checking in to see if you had a chance to put together the interfaces. No rush at all—just excited to keep moving forward when you’re ready. Thanks again!

@michalsek
Member

Hey, hey,

unfortunately I haven't had a chance to look at it; will figure out something during the week :)

Removed externalCustomProcessor utility and reverted changes to AudioNode as suggested by the maintainer. Moving forward with implementing a proper Custom Node approach as recommended.
@michalsek
Member

Hey, hey, how is it going with the PR? :)

@jerryseigle
Author

> Hey, hey, how is it going with the PR? :)

Hey, it’s progressing well. I anticipate completing it by Tuesday. I’m currently testing and making a few adjustments.

@jerryseigle
Author

jerryseigle commented Jun 8, 2025

@michalsek Everything is complete and ready for your review. Let me know if any changes are needed.

For testing purposes, I’ve included a zip file containing my Turbo module and the index.ts file.
Archive.zip

demo-2.mp4

@vikalp-mightybyte

@jerryseigle
Thanks for this awesome feature - you added it at just the right time, when I needed it.

If you don't mind, could you please share the complete codebase .zip file? I'm also using Expo (I see in your demo video you use Expo) and am unable to configure the project for Turbo modules.

Super thanks. ✨

@jerryseigle
Author


@vikalp-mightybyte The zip file is too large to upload. Yes, I am using Expo. You will need an Expo development build, not Expo Go. You also need the iOS and Android folders; if you need them, run npx expo run:ios --device (for Android, run the same command with android). Then follow the steps on the React Native site for setting up a Turbo Module in pure C++. After that, you can just copy the contents of the shared.zip I uploaded above.

@vikalp-mightybyte


That part I understood, but for me adding #include <audioapi/core/effects/CustomProcessorNode.h> isn't working; the compiler is unable to find it. Any suggestions? (I've already spent quite a lot of hours on this with ChatGPT.)

@jerryseigle
Author

@michalsek PR is complete. Added support for CustomProcessorNode via TurboModule. Ready for review.

Copilot AI left a comment

Pull Request Overview

This PR adds a new CustomProcessorNode to enable native audio processing via TurboModules, including registry management, JS/TS bindings, and a C++ implementation for real-time DSP.

  • Defines ProcessorMode and UUID in TS, plus a new ICustomProcessorNode interface
  • Implements CustomProcessorNode in JS/TS and integrates it into BaseAudioContext
  • Provides a full C++ implementation with factory/handler registries and host-object bindings

Reviewed Changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 4 comments.

Summary per file:
• packages/react-native-audio-api/src/types.ts: Added ProcessorMode and UUID type aliases
• packages/react-native-audio-api/src/interfaces.ts: Introduced ICustomProcessorNode and createCustomProcessor
• packages/react-native-audio-api/src/core/CustomProcessorNode.ts: JS wrapper for the custom processor node
• packages/react-native-audio-api/src/core/BaseAudioContext.ts: Exposed createCustomProcessor() in the JS context
• packages/react-native-audio-api/src/api.ts: Exported CustomProcessorNode from the public API
• common/cpp/audioapi/core/effects/CustomProcessorNode.h: Declared the native CustomProcessorNode and processor interface
• common/cpp/audioapi/core/effects/CustomProcessorNode.cpp: Implemented processing logic, factories, and registries
• common/cpp/audioapi/core/BaseAudioContext.h/.cpp: Added native factory method and node registration
• common/cpp/audioapi/HostObjects/CustomProcessorNodeHostObject.h: Exposed JS host bindings for CustomProcessorNode
• common/cpp/audioapi/HostObjects/BaseAudioContextHostObject.h: Hooked up createCustomProcessor in the host object

Comment on lines 19 to 22
createOscillator(): IOscillatorNode;
createCustomProcessor(): ICustomProcessorNode;
createGain(): IGainNode;
createStereoPanner(): IStereoPannerNode;
Copilot AI Jun 13, 2025

[nitpick] Mixed method signature styles: other methods use arrow syntax (e.g., createBiquadFilter: () => IBiquadFilterNode;). Consider standardizing interface definitions for consistency.

Suggested change:
- createOscillator(): IOscillatorNode;
- createCustomProcessor(): ICustomProcessorNode;
- createGain(): IGainNode;
- createStereoPanner(): IStereoPannerNode;
+ createOscillator: () => IOscillatorNode;
+ createCustomProcessor: () => ICustomProcessorNode;
+ createGain: () => IGainNode;
+ createStereoPanner: () => IStereoPannerNode;


Comment on lines +137 to +151
std::vector<float*> output(numChannels);

for (int ch = 0; ch < numChannels; ++ch) {
  input[ch] = bus->getChannel(ch)->getData();
  output[ch] = new float[frames];
}

processor_->processThrough(input.data(), output.data(), numChannels, frames);

for (int ch = 0; ch < numChannels; ++ch) {
  std::memcpy(bus->getChannel(ch)->getData(), output[ch], sizeof(float) * frames);
  delete[] output[ch];
}
}

Copilot AI Jun 13, 2025

Allocating and deleting buffers inside the real-time audio callback can cause jitter. Consider reusing a preallocated buffer or using a memory pool to avoid heap allocations in the audio thread.

Suggested change:
- std::vector<float*> output(numChannels);
- for (int ch = 0; ch < numChannels; ++ch) {
-   input[ch] = bus->getChannel(ch)->getData();
-   output[ch] = new float[frames];
- }
- processor_->processThrough(input.data(), output.data(), numChannels, frames);
- for (int ch = 0; ch < numChannels; ++ch) {
-   std::memcpy(bus->getChannel(ch)->getData(), output[ch], sizeof(float) * frames);
-   delete[] output[ch];
- }
- }
+ for (int ch = 0; ch < numChannels; ++ch) {
+   input[ch] = bus->getChannel(ch)->getData();
+ }
+ processor_->processThrough(input.data(), preallocatedOutputBuffers_.data(), numChannels, frames);
+ for (int ch = 0; ch < numChannels; ++ch) {
+   std::memcpy(bus->getChannel(ch)->getData(), preallocatedOutputBuffers_[ch], sizeof(float) * frames);
+ }
+ }
+ std::vector<std::vector<float>> preallocatedOutputBuffers_;



std::shared_ptr<AudioParam> customProcessorParam_; ///< Optional real-time modifiable parameter.
std::string identifier_; ///< Processor identifier used to bind factories and handlers.
std::string processorMode_ = "processInPlace"; ///< Determines processing style.
Copilot AI Jun 13, 2025

Using raw strings for processor modes is error-prone. Define an enum or enum class for processor modes to enforce valid values at compile time.

Suggested change:
- std::string processorMode_ = "processInPlace"; ///< Determines processing style.
+ enum class ProcessorMode { ProcessInPlace, ProcessThrough }; ///< Enum for processor modes.
+ ProcessorMode processorMode_ = ProcessorMode::ProcessInPlace; ///< Determines processing style.


@michalsek michalsek added the development (Develop some feature or integrate it with sth) and high priority labels Jun 13, 2025

constructor(
context: BaseAudioContext,
customProcessor: ICustomProcessorNode
Contributor

Is the identifier necessary in each instance of CustomProcessorNode? If yes, why do we enable the user to create a node without it? If not, can you please explain how you are handling it?

Author

jerryseigle commented Jun 16, 2025

Yes, the identifier is necessary for each CustomProcessorNode, as it’s used to link the node to its matching processor and control handlers.

Would you prefer that I enforce this on the TypeScript side, either by requiring the identifier at creation, like:

const processor = audioContext.createCustomProcessor("my-id");

or by continuing to allow:

const processor = audioContext.createCustomProcessor();
processor.identifier = "my-id";

and throwing if it's missing? Or would you like to handle that change yourself? I'm happy to implement it if needed.

Member

We thought about just passing it in the createNode function / constructor :)

Author

Yes, that makes perfect sense! Would you like me to handle that change and update the API to accept the identifier during creation, or will you be taking care of it?

Member

Depends entirely on you :)

We can merge it at the current stage and iterate on it later, or if you can still bear our whining you can update the PR :)

Author

Sounds good! I'll go ahead and make the update; should have it done in a few hours. 👍🏾
