Audio recording using JavaScript in WebView

Hi,

We are using JavaScript in a WebView to record audio in our app. This works fine in the browser; however, when the app is built and tested on Android and iOS, it fails with a Permission Denied error. Below is the code that triggers audio recording, along with the permissions in our build settings. Can anyone help with this?

JavaScript Code:

navigator.mediaDevices.getUserMedia({ audio: true })
  .then(onMicrophoneCaptured)
  .catch(onMicrophoneError);
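
It may help to log which error getUserMedia actually rejects with. A minimal sketch of the error handler (onMicrophoneError is assumed to be defined by you, as in the snippet above) that tells the failure modes apart by DOMException name:

function onMicrophoneError(err) {
  // err is a DOMException; its name distinguishes the failure modes:
  //   "NotAllowedError" -> permission denied by the user or the host app
  //   "NotFoundError"   -> no microphone device is available
  //   "SecurityError"   -> the context (e.g. a non-HTTPS page) forbids capture
  console.log(err.name + ": " + err.message);
}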

Permissions added in build.settings:

usesPermissions = {
    "android.permission.INTERNET",
    "android.permission.ACCESS_NETWORK_STATE",
    "com.android.vending.CHECK_LICENSE",
    "android.permission.WRITE_EXTERNAL_STORAGE",
    "android.permission.MICROPHONE",
    "android.permission.RECORD_AUDIO",
    "android.permission.MODIFY_AUDIO_SETTINGS",
    "android.hardware.audio.pro",
    "android.hardware.microphone",
},
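
Note that inside a WebView the Permission Denied error is often raised by the host app rather than the page: on Android, RECORD_AUDIO is a runtime permission, and the native side must additionally grant the WebView's own capture request (via WebChromeClient.onPermissionRequest); on iOS, WKWebView reportedly did not expose getUserMedia at all before iOS 14.3. As a quick diagnostic sketch, you can check from the page whether the API is even available:

// Some embedded WebViews leave navigator.mediaDevices undefined entirely.
if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
  console.log("getUserMedia is not available in this WebView");
} else {
  navigator.mediaDevices.getUserMedia({ audio: true })
    .then(onMicrophoneCaptured)
    .catch(onMicrophoneError);
}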

What are the other scripts? I wanted to know.

The other scripts handle processing of the recorded audio. I have pasted the important one, which triggers the recording and is where the error pops up. We are not able to proceed beyond this.

We need to create an audio stream by calling navigator.mediaDevices.getUserMedia and passing in { audio: true }. The getUserMedia function returns a promise that resolves to the audio stream.

navigator.mediaDevices.getUserMedia({ audio: true })
  .then(stream => {
    const mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.start();
  });

We can collect audio data chunks by listening for "dataavailable" events to fire, then pushing those data chunks into an array.

navigator.mediaDevices.getUserMedia({ audio: true })
  .then(stream => {
    const mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.start();

    const audioChunks = [];
    mediaRecorder.addEventListener("dataavailable", event => {
      audioChunks.push(event.data);
    });
  });

We can stop recording audio by calling the media recorder’s stop method after 3 seconds.

navigator.mediaDevices.getUserMedia({ audio: true })
  .then(stream => {
    const mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.start();

    const audioChunks = [];
    mediaRecorder.addEventListener("dataavailable", event => {
      audioChunks.push(event.data);
    });

    setTimeout(() => {
      mediaRecorder.stop();
    }, 3000);
  });

Finally, we need to convert the audio chunks into a single audio blob. We do this by passing the audio chunks array into the Blob constructor.
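
A sketch of that final step: listen for the recorder's "stop" event, merge the chunks into one Blob, and play it back (playback is just one way to use the blob; the MIME type is taken from mediaRecorder.mimeType, since the actual container format depends on the platform):

navigator.mediaDevices.getUserMedia({ audio: true })
  .then(stream => {
    const mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.start();

    const audioChunks = [];
    mediaRecorder.addEventListener("dataavailable", event => {
      audioChunks.push(event.data);
    });

    mediaRecorder.addEventListener("stop", () => {
      // Merge the recorded chunks into a single blob.
      const audioBlob = new Blob(audioChunks, { type: mediaRecorder.mimeType });
      const audioUrl = URL.createObjectURL(audioBlob);
      new Audio(audioUrl).play();
    });

    setTimeout(() => {
      mediaRecorder.stop();
    }, 3000);
  });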

This doesn’t address your exact issue, but it is a relevant consideration for your iOS version: https://developer.apple.com/news/?id=12232019b