Recently I have embarked on a journey to build the mobile version of our chat application, Talckatoo, using React Native and Expo (an open-source platform for making universal native apps for Android & iOS).
In this post, I’ll delve into the intricacies of developing audio messages, a process I found to be one of the trickiest parts of the project. While there are numerous resources available on voice recording with Expo, I discovered that saving the audio and seamlessly sending it to the backend presented its own set of challenges. In this demonstration, I’ll guide you through recording your voice and preparing the audio file to send to the backend.
Let’s start by importing the necessary components and setting up variables for the recording process.
import { useState, useEffect } from "react";
import { StyleSheet, TouchableOpacity, View } from "react-native";
import { FontAwesome } from "@expo/vector-icons";
import { Audio } from "expo-av";

const [isRecording, setIsRecording] = useState(false);
const [recording, setRecording] = useState(null);
const [recordingStatus, setRecordingStatus] = useState("idle");
const [audioPermission, setAudioPermission] = useState(null);
const [recordedAudio, setRecordedAudio] = useState(null);
The first task upon user login is to request audio permission, which can be implemented neatly with the useEffect hook.
useEffect(() => {
  // Ask for microphone permission as soon as the component mounts
  async function getPermission() {
    try {
      const permission = await Audio.requestPermissionsAsync();
      console.log("Permission Granted: " + permission.granted);
      setAudioPermission(permission.granted);
    } catch (error) {
      console.log(error);
    }
  }

  getPermission();

  // Cleanup: stop any recording still in progress when the component unmounts
  return () => {
    if (recording) {
      stopRecording();
    }
  };
}, []);
Next, we need to create a record button (the styles are provided at the end of this post).
<View style={styles.voiceContainer}>
  <TouchableOpacity
    style={styles.voiceButton}
    onPress={handleRecordButtonPress}
  >
    <FontAwesome name="microphone" size={24} color="white" />
  </TouchableOpacity>
</View>
Following that, we need to create a function called handleRecordButtonPress to manage the button press event.
async function handleRecordButtonPress() {
  if (recording) {
    // A recording is in progress, so stop it and keep the resulting URI
    const audioUri = await stopRecording();
    if (audioUri) {
      console.log("Saved audio file to", audioUri);
    }
  } else {
    await startRecording();
  }
}
We’ll have two distinct functions: startRecording and stopRecording. In startRecording, it’s crucial to note that we want the audio type to be .m4a, so we need to pass the RecordingOptionsPresets.HIGH_QUALITY option, one of the presets provided by Expo Audio. If you omit it, you will likely get the audio as a .caf file. Depending on what you plan to do with the audio afterwards, this choice matters: in my case I want to send the audio to the backend, and our backend only supports .m4a, so we have to include this option.
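If you need finer control over the output format, you can also build your own options object by spreading the preset and overriding individual fields, then pass it to prepareToRecordAsync. A minimal sketch, assuming the name M4A_RECORDING_OPTIONS is just an illustration (the .m4a overrides shown here simply restate what HIGH_QUALITY already does on current expo-av versions):

// Sketch: start from the HIGH_QUALITY preset and override only what you need
const M4A_RECORDING_OPTIONS = {
  ...Audio.RecordingOptionsPresets.HIGH_QUALITY,
  ios: {
    ...Audio.RecordingOptionsPresets.HIGH_QUALITY.ios,
    extension: ".m4a",
  },
  android: {
    ...Audio.RecordingOptionsPresets.HIGH_QUALITY.android,
    extension: ".m4a",
  },
};

// Later: await newRecording.prepareToRecordAsync(M4A_RECORDING_OPTIONS);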
async function startRecording() {
  // Check if a recording is already in progress
  if (isRecording) {
    console.warn("A recording is already in progress");
    return;
  }
  // Check for permissions before starting the recording
  if (!audioPermission) {
    console.warn("Audio permission is not granted");
    return;
  }

  setIsRecording(true);
  setRecording(null);
  setRecordedAudio(null);

  try {
    // Needed for iOS. If you develop mainly on an iOS device or simulator,
    // you will get an error if you don't include this.
    await Audio.setAudioModeAsync({
      allowsRecordingIOS: true,
      playsInSilentModeIOS: true,
    });

    const newRecording = new Audio.Recording();
    console.log("Starting Recording");
    await newRecording.prepareToRecordAsync(
      Audio.RecordingOptionsPresets.HIGH_QUALITY
    );
    await newRecording.startAsync();
    setRecording(newRecording);
    setRecordingStatus("recording");
  } catch (error) {
    console.error("Failed to start recording", error);
  }
}
After pressing the button again, we’ll execute the stopRecording() function. setRecordedAudio updates recordedAudio, which we will attach to the POST request to the backend (a sketch of that request follows stopRecording below).
async function stopRecording() {
  setIsRecording(false);
  try {
    if (recordingStatus === "recording") {
      console.log("Stopping Recording");
      await recording.stopAndUnloadAsync();
      const uri = recording.getURI();
      setRecordedAudio({
        uri,
        name: `recording-${Date.now()}.m4a`, // use the .m4a file extension
        type: "audio/m4a", // and the matching MIME type
      });
      // Reset our states so we can record again
      setRecording(null);
      setRecordingStatus("stopped");
      return uri;
    }
  } catch (error) {
    console.error("Failed to stop recording", error);
  }
}
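To give you an idea of how the recorded file can then be sent to the backend, here is a minimal sketch using FormData and fetch. The endpoint URL, the "audio" field name, and the sendAudioMessage helper are placeholders for illustration; adjust them to whatever your API expects (Talckatoo’s actual endpoint and auth headers are omitted here).

// Sketch: upload the recorded audio as multipart/form-data
async function sendAudioMessage(recordedAudio) {
  if (!recordedAudio) return;

  const formData = new FormData();
  // React Native's FormData accepts { uri, name, type } objects for files
  formData.append("audio", {
    uri: recordedAudio.uri,
    name: recordedAudio.name,
    type: recordedAudio.type,
  });

  try {
    const response = await fetch("https://your-backend.example.com/messages/audio", {
      method: "POST",
      body: formData,
      // Let fetch set the multipart boundary; don't set Content-Type manually
    });
    const data = await response.json();
    console.log("Audio message sent", data);
  } catch (error) {
    console.error("Failed to send audio message", error);
  }
}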
Styles for the button (using React Native’s StyleSheet):
const styles = StyleSheet.create({
  voiceButton: {
    alignItems: "center",
    justifyContent: "center",
    width: 30,
    height: 30,
    borderRadius: 15,
  },
  voiceContainer: {
    paddingTop: 4,
  },
});
This covers the essentials of recording audio on your phone. I hope it proves helpful, and don’t hesitate to reach out with any questions.
