AudioContext currentTime after using iOS SFSpeechRecognizer #465

Open
bandwich opened this issue May 25, 2025 · 4 comments

Comments

@bandwich

bandwich commented May 25, 2025

Description

Hello, thanks for the great work on this library!

I have one problem and I'm not sure if it's an issue on my end or not. I am attempting to play back an audio file through this API, perform some speech recognition, and then play more audio.

I have some native code using iOS's SFSpeechRecognizer that creates an AVAudioSession sharedInstance. When I turn off the microphone for this speech recognition, I set the audioSession category back to .playback and reset the AVAudioEngine.

However, I notice that when I come back to the AudioContext created through react-native-audio-api, the currentTime is frozen. AudioContext.resume() does not seem to help, and the state shows that it is running.

Anybody have any ideas? Let me know if I should provide more info.

Steps to reproduce

  1. Create a new AudioContext
  2. While the context is running, concurrently create a new AVAudioEngine instance and change the sharedInstance's category to .record
  3. The currentTime in the AudioContext is then frozen (a minimal sketch of steps 1 and 3 follows below)
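
Here's a minimal sketch of steps 1 and 3 (step 2 happens in native code and isn't shown); it assumes react-native-audio-api mirrors the Web Audio API surface for createOscillator, destination, and state:

```ts
import { AudioContext } from 'react-native-audio-api';

// Step 1: create the context and keep something playing so that
// currentTime should normally keep advancing.
const ctx = new AudioContext();
const osc = ctx.createOscillator();
osc.connect(ctx.destination);
osc.start();

// Step 3: poll currentTime around the native switch to the .record
// category; on the affected setup it stops advancing even though
// ctx.state still reports "running".
let last = ctx.currentTime;
const poll = setInterval(() => {
  const now = ctx.currentTime;
  console.log(`state=${ctx.state}, currentTime=${now.toFixed(3)}`);
  if (now === last) {
    console.log('currentTime appears frozen');
  }
  last = now;
}, 500);

// Call clearInterval(poll) once done observing.
```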

Snack or a link to a repository

n/a

React Native Audio API version

0.6.0

React Native version

0.76.7

Platforms

iOS

JavaScript runtime

None

Workflow

None

Architecture

None

Build type

None

Device

None

Device model

No response

Acknowledgements

Yes

@michalsek
Member

Hey, thank you for your kind words! :)

Regarding your issue, there are a couple of possible problems that could prevent the internal AVAudioEngine from starting:

  • try using the playAndRecord category for the entire session of playback and recording. Setting a category and activating the session causes a lot of changes in the system audio layer.

  • there might be a problem in how the AVAudioEngine is handled: sometimes AVAudioEngine "pretends" it has started correctly, while in reality it has been paused or torn down by the system. Usually calling suspend and then resume helps (see the sketch after this list), at least until we figure out how to fix that internally 🙂

  • if it is your native code, you might want to try using audio-api's instance of the AVAudioEngine. It is usually better to have only one instance of it for the entire application. It is accessible through a singleton-like class instance - unless something has changed while I wasn't watching 😄 - so in theory it should be achievable.
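
For the second point, a rough sketch of the suspend-then-resume nudge - it assumes the AudioContext in react-native-audio-api exposes promise-returning suspend() and resume(), mirroring the Web Audio API (only resume() is confirmed above):

```ts
import { AudioContext } from 'react-native-audio-api';

// Force the underlying AVAudioEngine through a full stop/start cycle,
// since it can report "running" while the system has actually paused
// or torn it down.
async function nudgeContext(ctx: AudioContext): Promise<void> {
  await ctx.suspend();
  await ctx.resume();
}

// Example: call this after the native side has switched the
// AVAudioSession category back to playback.
```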

@bandwich
Author

Thanks! I'll try out these fixes :)

@michalsek
Member

Has anything worked for you? :)

@bandwich
Author

Hey! I dug around a bit but didn't have too much time to diagnose it properly. I ended up just recreating the whole context every time I need it - performance-wise it's acceptable for my use case, and it's working well so far. I may revisit this later, but for now I'm leaving it alone.
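
Roughly, the workaround looks like this - assuming the AudioContext exposes a close() method as in the Web Audio API:

```ts
import { AudioContext } from 'react-native-audio-api';

let ctx: AudioContext | null = null;

// Tear down the previous context (if any) and build a fresh one after
// each speech-recognition session, instead of trying to revive it.
async function getFreshAudioContext(): Promise<AudioContext> {
  if (ctx) {
    try {
      await ctx.close();
    } catch {
      // The old engine may already be torn down; ignore.
    }
  }
  ctx = new AudioContext();
  return ctx;
}
```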
