Enable Native External Audio Processing via TurboModules #469
Conversation
Introduced ExternalAudioProcessor and ExternalAudioProcessorRegistry to enable modular, native-side, buffer-level audio processing. This design allows developers to register or unregister custom DSP logic (e.g., third-party DSP libraries, custom DSP, volume reduction) directly from a TurboModule, without modifying AudioNode internals or routing audio through the JS layer. All processing occurs natively in C++ for optimal performance. This structure keeps the core engine untouched while offering flexible runtime control over external processors.
Updated AudioNode::processAudio to optionally route raw buffer data to an external processor, if one is registered. This enables native buffer-level DSP (e.g., gating, EQ, third-party DSP, features that may not be offered directly by react-native-audio-api) without modifying internal engine structures. The design supports full runtime control from TurboModules while preserving core stability. All audio processing remains on the native side and bypasses JS execution for performance.
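For illustration, the processor interface and registry pattern described above might look roughly like this (a minimal sketch; the exact method names and signatures in the PR may differ):

```cpp
// Sketch only: illustrates the register/unregister pattern described above.
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>

class ExternalAudioProcessor {
 public:
  virtual ~ExternalAudioProcessor() = default;
  // Process `frames` samples on each of `numChannels` channels, in place.
  virtual void process(float **channels, int numChannels, int frames) = 0;
};

class ExternalAudioProcessorRegistry {
 public:
  static ExternalAudioProcessorRegistry &instance() {
    static ExternalAudioProcessorRegistry registry;
    return registry;
  }

  void registerProcessor(const std::string &id,
                         std::shared_ptr<ExternalAudioProcessor> processor) {
    std::lock_guard<std::mutex> lock(mutex_);
    processors_[id] = std::move(processor);
  }

  void unregisterProcessor(const std::string &id) {
    std::lock_guard<std::mutex> lock(mutex_);
    processors_.erase(id);
  }

  std::shared_ptr<ExternalAudioProcessor> get(const std::string &id) {
    std::lock_guard<std::mutex> lock(mutex_);
    auto it = processors_.find(id);
    return it != processors_.end() ? it->second : nullptr;
  }

 private:
  std::mutex mutex_;
  std::unordered_map<std::string, std::shared_ptr<ExternalAudioProcessor>> processors_;
};
```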
I’m not sure where to include a working example, so I’ve attached a few sample files here. If approved, I’ll also add proper documentation. You can test the implementation using these files, and there’s also a demo video available to showcase it in action. DemoVideo.mp4
Hey, not sure where to respond, so I'll write everything here :) Yes, each node is a separate AudioNode, which means that in order to have such an external processing node and be able to hook it up anywhere in the graph, I think we have to iterate on it a bit :) Instead of modifying AudioNode directly, I would go for a new, separate node that implements this, which could then be exposed to JS/RN as a new node type that we can connect to, e.g.:

```ts
const absn1 = audioContext.createBufferSource();
const absn2 = audioContext.createBufferSource();
const externalProcessor = audioContext.createCustomProcessor();

absn1.connect(externalProcessor);
absn2.connect(externalProcessor);
externalProcessor.connect(audioContext.destination);
```

Regarding your question about timing, if we have:

```ts
absn1.start(now + 0.01);
absn2.start(now + 0.01);
```

both nodes will start at exactly the same time (at the exact sample frame, to be precise), but as I stated above, they do not share the same AudioNode instance. It is a bit more complex and I will be happy to explain how the pull-graph works within audio-api or Web Audio if you're interested :) But for this case, it might not be necessary. Overall, great job! 🔥
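To make that shape concrete, a separate node modeled on the existing effect nodes might look roughly like this (a sketch only; the AudioNode/AudioBus method names are assumptions based on the pull-graph design described above, and ExternalAudioProcessor refers to the illustrative interface sketched earlier):

```cpp
// Sketch only: a processing node in the style of GainNode, living alongside
// the other effects. Base-class APIs shown here are assumed, not exact.
#include <memory>
#include <vector>

class CustomProcessorNode : public AudioNode {
 public:
  explicit CustomProcessorNode(BaseAudioContext *context) : AudioNode(context) {}

  void setProcessor(std::shared_ptr<ExternalAudioProcessor> processor) {
    processor_ = std::move(processor);
  }

 protected:
  // Invoked by the engine's pull-graph once per render quantum.
  void processNode(const std::shared_ptr<AudioBus> &bus, int framesToProcess) /* override */ {
    if (!processor_) {
      return; // no processor attached: audio passes through untouched
    }
    // Hand the raw channel pointers to the external processor, in place.
    // (A real implementation would preallocate this vector so the audio
    // thread stays allocation-free.)
    std::vector<float *> channels(bus->getNumberOfChannels());
    for (size_t ch = 0; ch < channels.size(); ++ch) {
      channels[ch] = bus->getChannel(ch)->getData();
    }
    processor_->process(channels.data(), static_cast<int>(channels.size()), framesToProcess);
  }

 private:
  std::shared_ptr<ExternalAudioProcessor> processor_;
};
```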
Just so I understand: am I to model it similarly to the GainNode and place it in the effects directory?
Yes, exactly. If you are willing to wait, I will prep all the required interfaces for you over the weekend :)
Yes, I can wait if you prep the required interfaces. Thanks!
Hey! Hope you had a good weekend 🙂 Just checking in to see if you had a chance to put together the interfaces. No rush at all, just excited to keep moving forward when you're ready. Thanks again!
Hey, hey, unfortunately I haven't had a chance to look at it yet; will figure out something along the week :)
Removed externalCustomProcessor utility and reverted changes to AudioNode as suggested by the maintainer. Moving forward with implementing a proper Custom Node approach as recommended.
Hey, hey, how is it going with the PR? :)
Hey, it's progressing well. I anticipate completing it by Tuesday. I'm currently testing and making a few adjustments.
@michalsek Everything is complete and ready for your review. Let me know if any changes are needed. For testing purposes, I've included a zip file containing my TurboModule and the index.ts file. demo-2.mp4
@jerryseigle If you don't mind, could you please share the complete codebase .zip file? Super thanks. ✨
@vikalp-mightybyte The zip file is too large to upload. Yes, I am using Expo. You will need an Expo development build, not Expo Go. You also need the ios and android folders. If you need those folders, you can run this command
That part I understood, but for me adding
@michalsek PR is complete. Added support for CustomProcessorNode via TurboModule. Ready for review. |
Pull Request Overview
This PR adds a new `CustomProcessorNode` to enable native audio processing via TurboModules, including registry management, JS/TS bindings, and a C++ implementation for real-time DSP.
- Defines `ProcessorMode` and `UUID` in TS, plus a new `ICustomProcessorNode` interface
- Implements `CustomProcessorNode` in JS/TS and integrates it into `BaseAudioContext`
- Provides a full C++ implementation with factory/handler registries and host-object bindings
Reviewed Changes
Copilot reviewed 11 out of 11 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| packages/react-native-audio-api/src/types.ts | Added `ProcessorMode` and `UUID` type aliases |
| packages/react-native-audio-api/src/interfaces.ts | Introduced `ICustomProcessorNode` and `createCustomProcessor` |
| packages/react-native-audio-api/src/core/CustomProcessorNode.ts | JS wrapper for the custom processor node |
| packages/react-native-audio-api/src/core/BaseAudioContext.ts | Exposed `createCustomProcessor()` in the JS context |
| packages/react-native-audio-api/src/api.ts | Exported `CustomProcessorNode` from the public API |
| common/cpp/audioapi/core/effects/CustomProcessorNode.h | Declared the native `CustomProcessorNode` and processor interface |
| common/cpp/audioapi/core/effects/CustomProcessorNode.cpp | Implemented processing logic, factories, and registries |
| common/cpp/audioapi/core/BaseAudioContext.h/.cpp | Added native factory method and node registration |
| common/cpp/audioapi/HostObjects/CustomProcessorNodeHostObject.h | Exposed JS host bindings for `CustomProcessorNode` |
| common/cpp/audioapi/HostObjects/BaseAudioContextHostObject.h | Hooked up `createCustomProcessor` in the host object |
```ts
createOscillator(): IOscillatorNode;
createCustomProcessor(): ICustomProcessorNode;
createGain(): IGainNode;
createStereoPanner(): IStereoPannerNode;
```
[nitpick] Mixed method signature styles: other methods use arrow syntax (e.g., createBiquadFilter: () => IBiquadFilterNode;). Consider standardizing interface definitions for consistency.
Suggested change:
```diff
-createOscillator(): IOscillatorNode;
-createCustomProcessor(): ICustomProcessorNode;
-createGain(): IGainNode;
-createStereoPanner(): IStereoPannerNode;
+createOscillator: () => IOscillatorNode;
+createCustomProcessor: () => ICustomProcessorNode;
+createGain: () => IGainNode;
+createStereoPanner: () => IStereoPannerNode;
```
```cpp
std::vector<float*> output(numChannels);

for (int ch = 0; ch < numChannels; ++ch) {
  input[ch] = bus->getChannel(ch)->getData();
  output[ch] = new float[frames];
}

processor_->processThrough(input.data(), output.data(), numChannels, frames);

for (int ch = 0; ch < numChannels; ++ch) {
  std::memcpy(bus->getChannel(ch)->getData(), output[ch], sizeof(float) * frames);
  delete[] output[ch];
}
}
```
Allocating and deleting buffers inside the real-time audio callback can cause jitter. Consider reusing a preallocated buffer or using a memory pool to avoid heap allocations in the audio thread.
Suggested change:
```diff
-std::vector<float*> output(numChannels);
-
 for (int ch = 0; ch < numChannels; ++ch) {
   input[ch] = bus->getChannel(ch)->getData();
-  output[ch] = new float[frames];
 }
-processor_->processThrough(input.data(), output.data(), numChannels, frames);
+processor_->processThrough(input.data(), preallocatedOutputBuffers_.data(), numChannels, frames);
 for (int ch = 0; ch < numChannels; ++ch) {
-  std::memcpy(bus->getChannel(ch)->getData(), output[ch], sizeof(float) * frames);
-  delete[] output[ch];
+  std::memcpy(bus->getChannel(ch)->getData(), preallocatedOutputBuffers_[ch], sizeof(float) * frames);
 }
 }
+std::vector<std::vector<float>> preallocatedOutputBuffers_;
```
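One caveat with the suggestion as written: `preallocatedOutputBuffers_.data()` yields a `std::vector<float>*`, while `processThrough` is called with a `float**` above. A type-correct way to keep the preallocation idea (a sketch only, assuming channel count and maximum frame count are known before rendering starts):

```cpp
// Sketch only: preallocate storage once (off the audio thread), then reuse it
// in the render callback so no heap allocation happens while processing audio.
#include <cstring>
#include <vector>

class CustomProcessorNode /* : public AudioNode */ {
 public:
  // Call once, off the audio thread, before rendering starts.
  void prepareBuffers(int numChannels, int maxFrames) {
    outputStorage_.assign(numChannels, std::vector<float>(maxFrames, 0.0f));
    outputPointers_.resize(numChannels);
    for (int ch = 0; ch < numChannels; ++ch) {
      outputPointers_[ch] = outputStorage_[ch].data();
    }
  }

  // In the render callback, reuse outputPointers_ instead of new/delete:
  //   processor_->processThrough(input.data(), outputPointers_.data(),
  //                              numChannels, frames);
  //   std::memcpy(bus->getChannel(ch)->getData(), outputPointers_[ch],
  //               sizeof(float) * frames);

 private:
  std::vector<std::vector<float>> outputStorage_;  // owns the sample memory
  std::vector<float*> outputPointers_;             // float** view for processThrough
};
```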
```cpp
std::shared_ptr<AudioParam> customProcessorParam_; ///< Optional real-time modifiable parameter.
std::string identifier_; ///< Processor identifier used to bind factories and handlers.
std::string processorMode_ = "processInPlace"; ///< Determines processing style.
```
Using raw strings for processor modes is error-prone. Define an enum or enum class for processor modes to enforce valid values at compile time.
Suggested change:
```diff
-std::string processorMode_ = "processInPlace"; ///< Determines processing style.
+enum class ProcessorMode { ProcessInPlace, ProcessThrough }; ///< Enum for processor modes.
+ProcessorMode processorMode_ = ProcessorMode::ProcessInPlace; ///< Determines processing style.
```
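Since the mode still arrives from JS as a string, the host object would need a small conversion step; a minimal sketch (the helper name is hypothetical):

```cpp
#include <string>

enum class ProcessorMode { ProcessInPlace, ProcessThrough };

// Map the JS-facing string to the enum; the default mirrors the current
// "processInPlace" behavior for unrecognized values.
ProcessorMode parseProcessorMode(const std::string &mode) {
  if (mode == "processThrough") {
    return ProcessorMode::ProcessThrough;
  }
  return ProcessorMode::ProcessInPlace;
}
```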
```ts
constructor(
  context: BaseAudioContext,
  customProcessor: ICustomProcessorNode
```
Is `identifier` necessary in each instance of `CustomProcessorNode`? If yes, why do we enable the user to create a node without it? If not, can you please explain how you are handling it?
Yes, the identifier is necessary for each CustomProcessorNode, as it's used to link the node to its matching processor and control handlers.
Would you prefer that I enforce this on the TypeScript side, either by requiring the identifier at creation:

```ts
const processor = audioContext.createCustomProcessor("my-id");
```

or by continuing to allow:

```ts
const processor = audioContext.createCustomProcessor();
processor.identifier = "my-id";
```

and throwing if it's missing? Or would you like to handle that change yourself? I'm happy to implement it if needed.
We thought about just passing it in the createNode function / constructor :)
Yes, that makes perfect sense! Would you like me to handle that change and update the API to accept the identifier during creation, or will you be taking care of it?
Depends entirely on you :)
We can merge it at the current stage and iterate over it later, or, if you can still bear our whining, you can update the PR :)
Sounds good! I'll go ahead and make the update; should have it done in a few hours. 👍🏾
This PR introduces a clean and modular system for injecting native audio processing logic into react-native-audio-api through the use of ExternalAudioProcessor and a global registry.
Developers can now:
• Register and unregister external processors from a TurboModule at runtime
• Write custom DSP (digital signal processing) code in C++
• Perform advanced buffer-level audio manipulation directly within the native render cycle
• Avoid any reliance on the JS thread — all processing runs fully native
This allows integration of virtually any kind of audio processing, whether:
• Custom-written code tailored to your app’s needs
• Third-party DSP libraries (e.g., for pitch/tempo manipulation, watermarking, audio detection, etc.)
All without modifying the core AudioNode logic — keeping the system clean, flexible, and decoupled.
✅ Checklist
• Added ExternalAudioProcessor interface
• Implemented singleton processor registry
• Injected processing safely into AudioNode::processAudio
• Provided TurboModule demo for runtime control (volume reducer)
• Documented and isolated logic for easy integration
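As a sense of what the volume-reducer demo does, a processor registered through the registry might look roughly like this (a sketch only; it assumes the illustrative `ExternalAudioProcessor` / `ExternalAudioProcessorRegistry` interfaces sketched earlier in this thread, not necessarily the PR's exact signatures):

```cpp
#include <memory>

// Sketch only: a simple gain-reduction processor in the spirit of the
// TurboModule demo mentioned in the checklist above.
class VolumeReducerProcessor : public ExternalAudioProcessor {
 public:
  explicit VolumeReducerProcessor(float gain) : gain_(gain) {}

  void process(float **channels, int numChannels, int frames) override {
    // Scale every sample in place; no allocation on the audio thread.
    for (int ch = 0; ch < numChannels; ++ch) {
      for (int i = 0; i < frames; ++i) {
        channels[ch][i] *= gain_;
      }
    }
  }

 private:
  float gain_;
};

// Registration, e.g. from a TurboModule method invoked at runtime:
// ExternalAudioProcessorRegistry::instance().registerProcessor(
//     "volume-reducer", std::make_shared<VolumeReducerProcessor>(0.5f));
```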