
【AAudio】Source-Code Walkthrough: How the A2DP Sink Creates an Audio Track

1. AAudio Overview

AAudio is a native C/C++ audio API introduced in Android 8.0 (API level 26). It is designed for applications that need low-latency, high-performance audio processing, and is especially well suited to real-time audio apps such as synthesizers, music-production tools, and game sound effects.

1.1 Key Features

  • Low latency: reduces copies of audio data between kernel and user space, accesses the audio device directly, and bypasses intermediate layers; in the ideal case latency can drop below 10 ms.

  • High performance: optimized stream management and data transfer, suitable for audio applications with hard real-time requirements.

  • Concise API: simple, easy-to-use interfaces that let developers implement audio features quickly.

  • Two operating modes

    • MMap mode: accesses the audio device directly through memory-mapped I/O, eliminating data copies and further reducing latency.

    • Legacy mode: based on the standard audio stream interfaces; better compatibility, but slightly higher latency.

  • Flexible data flow: supports both blocking and non-blocking modes, and can process data asynchronously through callbacks.

1.2 Core Functionality

  • Stream creation and management

    • Provides AAudioStream and AAudioStreamBuilder for creating and managing audio streams.

    • Supports configuring stream parameters such as sample rate, channel count, audio format (16-bit PCM or float), and sharing mode (exclusive or shared).

  • Reading and writing audio data

    • Uses AAudioStream_read() and AAudioStream_write() to read and write audio data.

    • Supports blocking and non-blocking modes; developers can choose whichever fits their needs.

  • State management

    • Provides state-management functions such as AAudioStream_requestStart(), AAudioStream_requestStop(), and AAudioStream_requestPause() to start, stop, and pause a stream.

1.3 Core Concepts

  • Stream: an input stream (recording) or output stream (playback), managed through an AAudioStream object.

  • Builder: AAudioStreamBuilder configures the stream parameters (format, sample rate, channel count, and so on).

  • Audio device: a physical or virtual device; its device ID can be queried through the AAudioStream.

  • Data callback: fills or consumes audio data asynchronously to reduce latency.

1.4 Operating Modes

  • MMap mode

    • Maps the audio device's memory directly into user space, so the application can read and write that memory directly, reducing copies and latency.

    • Suitable for scenarios that need extremely low latency, such as real-time synthesis.

  • Legacy mode

    • Based on the standard audio stream interfaces; data travels through kernel buffers.

    • Better compatibility, but latency is slightly higher than in MMap mode.

1.5 Use Cases

  • Real-time audio applications: synthesizers, music-production tools, real-time effects processors.

  • Game audio: games that need low-latency audio feedback.

  • Professional audio: recording and audio-analysis tools.

  • Real-time communication: low-latency voice calls and conferencing.

1.6 Limitations and Caveats

  • PCM only: AAudio does not support compressed audio formats (such as MP3 or AAC); it accepts only raw PCM data.

  • Permissions: using AAudio requires the appropriate permissions, especially when accessing certain audio devices.

  • Compatibility: requires API 26 (Android 8.0) or later, so check the platform version.

  • Exclusive-mode limits: some devices do not support exclusive mode, so be prepared to fall back to shared mode (AAUDIO_SHARING_MODE_SHARED).

  • Performance trade-offs: low-latency mode (LOW_LATENCY) may increase power consumption, while power-saving mode (POWER_SAVING) has higher latency.

  • Resource management: close streams and builders promptly to avoid memory leaks.

  • Error handling: listen for the onError callback and handle faults such as device disconnection (for example, by re-initializing the stream).

1.7 Comparison with Other Audio APIs

  • OpenSL ES

    • AAudio is a lightweight alternative to OpenSL ES, with a simpler API and lower latency.

    • OpenSL ES is more complex, but covers a broader set of audio features.

  • AudioTrack

    • AudioTrack is a Java-level audio API suited to simple audio playback.

    • AAudio is a native C/C++ API suited to scenarios that need low latency and high performance.

2. BtifAvrcpAudioTrackCreate Overview

BtifAvrcpAudioTrackCreate is a function in the Android Bluetooth stack that creates an audio track (AudioTrack) for A2DP (Advanced Audio Distribution Profile) streaming, in particular when audio is carried over Bluetooth to a device such as a car head unit or a headset. Its job is to create and initialize a BtifAvrcpAudioTrack object and to configure and manage the audio stream associated with it: it uses the AAudio library to create a stream builder, sets the stream parameters (sample rate, channel count, data format, and so on), opens the stream, allocates and initializes the track object, and finally returns a pointer to that object.

Use cases:

  • Car Bluetooth audio: a head unit receives an audio stream from a phone or other device over Bluetooth and uses BtifAvrcpAudioTrackCreate to create the track it plays through.

  • Bluetooth headsets: a headset receives audio data from a phone and uses this function to create the track used for decoding and playback.

  • Audio streaming: devices that support Bluetooth audio streaming use this function to handle the received audio data.

Caveats

  • Audio focus: when creating the track, consider audio-focus management to avoid conflicts with other audio applications.

  • Performance: Bluetooth audio is latency-sensitive, so the function configures low-latency mode (AAUDIO_PERFORMANCE_MODE_LOW_LATENCY) to optimize playback.

  • Error handling: stream creation and memory allocation must be checked for failure to keep the program stable.

3. Source Code Analysis

BtifAvrcpAudioTrackCreate

packages\modules\Bluetooth\system\btif\src\btif_avrcp_audio_track.cc
void* BtifAvrcpAudioTrackCreate(int trackFreq, int bitsPerSample, int channelCount) {
  log::info("Track.cpp: btCreateTrack freq {} bps {} channel {}", trackFreq,
            bitsPerSample, channelCount);
  // 1. Create the audio stream builder
  AAudioStreamBuilder* builder;
  AAudioStream* stream;
  aaudio_result_t result = AAudio_createStreamBuilder(&builder);
  if (result != AAUDIO_OK) {
    log::error("Failed to create AAudioStreamBuilder, result: {}", result);
    return nullptr;
  }
  log::info("Successfully created AAudioStreamBuilder");
  // 2. Configure the stream parameters
  AAudioStreamBuilder_setSampleRate(builder, trackFreq);            // sample rate
  AAudioStreamBuilder_setFormat(builder, AAUDIO_FORMAT_PCM_FLOAT);  // data format
  AAudioStreamBuilder_setChannelCount(builder, channelCount);       // channel count
  AAudioStreamBuilder_setSessionId(builder, AAUDIO_SESSION_ID_ALLOCATE); // session ID
  AAudioStreamBuilder_setPerformanceMode(builder,
      AAUDIO_PERFORMANCE_MODE_LOW_LATENCY);                         // low-latency performance mode
  AAudioStreamBuilder_setErrorCallback(builder, ErrorCallback, nullptr); // error callback
  // 3. Open the audio stream
  result = AAudioStreamBuilder_openStream(builder, &stream);
  if (result != AAUDIO_OK) {
    log::error("Failed to open AAudio stream, result: {}", result);
    AAudioStreamBuilder_delete(builder);
    return nullptr;
  }
  log::info("Successfully opened AAudio stream");
  // 4. Delete the builder: once the stream is open, the builder and its
  //    resources are no longer needed
  AAudioStreamBuilder_delete(builder);
  // 5. Create the track holder object
  BtifAvrcpAudioTrack* trackHolder = new BtifAvrcpAudioTrack;
  if (trackHolder == nullptr) {
    log::error("Failed to allocate memory for BtifAvrcpAudioTrack");
    AAudioStream_close(stream);
    return nullptr;
  }
  log::info("Successfully allocated memory for BtifAvrcpAudioTrack");
  // 6. Initialize the track holder: stream pointer, bits per sample,
  //    channel count, buffer length, gain, and the sample buffer
  trackHolder->stream = stream;
  trackHolder->bitsPerSample = bitsPerSample;
  trackHolder->channelCount = channelCount;
  trackHolder->bufferLength =
      trackHolder->channelCount * AAudioStream_getBufferSizeInFrames(stream);
  trackHolder->gain = kMaxTrackGain;
  trackHolder->buffer = new float[trackHolder->bufferLength]();
  if (trackHolder->buffer == nullptr) {
    log::error("Failed to allocate memory for track buffer");
    delete trackHolder;
    AAudioStream_close(stream);
    return nullptr;
  }
  log::info("Successfully allocated memory for track buffer");
  // 7. Optionally dump PCM data to a file
#if (DUMP_PCM_DATA == TRUE)
  outputPcmSampleFile = fopen(outputFilename, "ab");
  if (outputPcmSampleFile == nullptr) {
    log::error("Failed to open output PCM sample file: {}", outputFilename);
  } else {
    log::info("Successfully opened output PCM sample file: {}", outputFilename);
  }
#endif
  // 8. Record the global audio-engine parameters (sample rate, channel count,
  //    track handle) in s_AudioEngine and return the track handle
  s_AudioEngine.trackFreq = trackFreq;
  s_AudioEngine.channelCount = channelCount;
  s_AudioEngine.trackHandle = (void*)trackHolder;
  log::info("Returning BtifAvrcpAudioTrack pointer: {}", (void*)trackHolder);
  return (void*)trackHolder;
}

Parameters

  • trackFreq: the track's sample rate, i.e. the number of times per second the audio signal is sampled; common values are 44100 Hz and 48000 Hz.

  • bitsPerSample: the number of bits per audio sample, which determines the quantization precision, e.g. 16 or 24 bits.

  • channelCount: the number of channels, e.g. mono (1) or stereo (2).

The function creates and initializes an audio-track object (BtifAvrcpAudioTrack) and sets the parameters of the associated audio stream. By configuring the audio parameters, creating the stream, and wrapping everything in a track object, it implements reception and playback of Bluetooth audio data.

AAudio_createStreamBuilder

frameworks\av\media\libaaudio\src\core\AAudioAudio.cpp
AAUDIO_API aaudio_result_t AAudio_createStreamBuilder(AAudioStreamBuilder** builder)
{
    ALOGI("Attempting to allocate memory for AudioStreamBuilder");
    // Allocate an AudioStreamBuilder with new. std::nothrow tells new to
    // return nullptr on allocation failure instead of throwing an exception.
    AudioStreamBuilder *audioStreamBuilder = new(std::nothrow) AudioStreamBuilder();
    if (audioStreamBuilder == nullptr) {
        ALOGE("Memory allocation for AudioStreamBuilder failed, returning AAUDIO_ERROR_NO_MEMORY");
        return AAUDIO_ERROR_NO_MEMORY;
    }
    ALOGI("Memory allocated successfully for AudioStreamBuilder");
    // Hand the new object back through the caller's pointer, so the caller
    // can use it through the opaque AAudioStreamBuilder handle.
    *builder = (AAudioStreamBuilder*) audioStreamBuilder;
    ALOGI("Returning AAUDIO_OK");
    return AAUDIO_OK;
}

This is part of the AAudio API and creates an AAudioStreamBuilder object. AAudioStreamBuilder is a builder class for configuring and creating an AAudioStream; through it you can set the stream's parameters, such as sample rate, channel count, and data format.

AAudioStreamBuilder_setSampleRate

frameworks\av\media\libaaudio\src\core\AAudioAudio.cpp
AAUDIO_API void AAudioStreamBuilder_setSampleRate(AAudioStreamBuilder* builder,
                                                  int32_t sampleRate)
{
    AudioStreamBuilder *streamBuilder = convertAAudioBuilderToStreamBuilder(builder);
    streamBuilder->setSampleRate(sampleRate);
}

static AudioStreamBuilder *convertAAudioBuilderToStreamBuilder(AAudioStreamBuilder* builder)
{
    return (AudioStreamBuilder*) builder;
}

Sets the sample rate on an AAudioStreamBuilder object. AAudioStreamBuilder_setSampleRate is the public entry point; convertAAudioBuilderToStreamBuilder is a helper that casts the opaque handle to the internal AudioStreamBuilder type so that its member function can be called to set the sample rate.

AAudioStreamBuilder_setFormat

frameworks\av\media\libaaudio\src\core\AAudioAudio.cpp
AAUDIO_API void AAudioStreamBuilder_setFormat(AAudioStreamBuilder* builder,
                                              aaudio_format_t format)
{
    // 1. Convert the opaque handle to the internal builder type
    AudioStreamBuilder *streamBuilder = convertAAudioBuilderToStreamBuilder(builder);
    // 2. Convert the format. Use audio_format_t everywhere internally.
    const audio_format_t internalFormat = AAudioConvert_aaudioToAndroidDataFormat(format);
    // 3. Apply the format
    streamBuilder->setFormat(internalFormat);
}

Converts the audio data format specified on the AAudio-level AAudioStreamBuilder into the format type used internally by the Android system, then applies it to the internal AudioStreamBuilder, completing the format configuration. This keeps the AAudio framework and the internal Android audio modules format-compatible.

AAudioConvert_aaudioToAndroidDataFormat

frameworks\av\media\libaaudio\include\aaudio\AAudio.h
enum {
    AAUDIO_FORMAT_INVALID = -1,
    AAUDIO_FORMAT_UNSPECIFIED = 0,

    /**
     * This format uses the int16_t data type.
     * The maximum range of the data is -32768 (0x8000) to 32767 (0x7FFF).
     */
    AAUDIO_FORMAT_PCM_I16,

    /**
     * This format uses the float data type.
     * The nominal range of the data is [-1.0f, 1.0f).
     * Values outside that range may be clipped.
     *
     * See also the float Data in
     * <a href="/reference/android/media/AudioTrack#write(float[],%20int,%20int,%20int)">
     *   write(float[], int, int, int)</a>.
     */
    AAUDIO_FORMAT_PCM_FLOAT,

    /**
     * This format uses 24-bit samples packed into 3 bytes.
     * The bytes are in little-endian order, so the least significant byte
     * comes first in the byte array.
     *
     * The maximum range of the data is -8388608 (0x800000)
     * to 8388607 (0x7FFFFF).
     *
     * Note that the lower precision bits may be ignored by the device.
     *
     * Available since API level 31.
     */
    AAUDIO_FORMAT_PCM_I24_PACKED,

    /**
     * This format uses 32-bit samples stored in an int32_t data type.
     * The maximum range of the data is -2147483648 (0x80000000)
     * to 2147483647 (0x7FFFFFFF).
     *
     * Note that the lower precision bits may be ignored by the device.
     *
     * Available since API level 31.
     */
    AAUDIO_FORMAT_PCM_I32,

    /**
     * This format is used for compressed audio wrapped in IEC61937 for HDMI
     * or S/PDIF passthrough.
     *
     * Unlike PCM playback, the Android framework is not able to do format
     * conversion for IEC61937. In that case, when IEC61937 is requested, sampling
     * rate and channel count or channel mask must be specified. Otherwise, it may
     * fail when opening the stream. Apps are able to get the correct configuration
     * for the playback by calling
     * <a href="/reference/android/media/AudioManager#getDevices(int)">
     *   AudioManager#getDevices(int)</a>.
     *
     * Available since API level 34.
     */
    AAUDIO_FORMAT_IEC61937
};
typedef int32_t aaudio_format_t;

frameworks\av\media\libaaudio\src\utility\AAudioUtilities.cpp
audio_format_t AAudioConvert_aaudioToAndroidDataFormat(aaudio_format_t aaudioFormat) {
    audio_format_t androidFormat;
    switch (aaudioFormat) {
    case AAUDIO_FORMAT_UNSPECIFIED:
        androidFormat = AUDIO_FORMAT_DEFAULT;
        break;
    case AAUDIO_FORMAT_PCM_I16:
        androidFormat = AUDIO_FORMAT_PCM_16_BIT;
        break;
    case AAUDIO_FORMAT_PCM_FLOAT:
        androidFormat = AUDIO_FORMAT_PCM_FLOAT;
        break;
    case AAUDIO_FORMAT_PCM_I24_PACKED:
        androidFormat = AUDIO_FORMAT_PCM_24_BIT_PACKED;
        break;
    case AAUDIO_FORMAT_PCM_I32:
        androidFormat = AUDIO_FORMAT_PCM_32_BIT;
        break;
    case AAUDIO_FORMAT_IEC61937:
        androidFormat = AUDIO_FORMAT_IEC61937;
        break;
    default:
        androidFormat = AUDIO_FORMAT_INVALID;
        ALOGE("%s() 0x%08X unrecognized", __func__, aaudioFormat);
        break;
    }
    return androidFormat;
}

Maps the AAudio-level format aaudioFormat to the corresponding Android system format androidFormat, providing format compatibility between the AAudio library and the Android audio modules.

AAudioStreamBuilder_setChannelCount

frameworks\av\media\libaaudio\src\core\AAudioAudio.cpp
AAUDIO_API void AAudioStreamBuilder_setChannelCount(AAudioStreamBuilder* builder,
                                                    int32_t channelCount)
{
    AAudioStreamBuilder_setSamplesPerFrame(builder, channelCount);
}

AAUDIO_API void AAudioStreamBuilder_setSamplesPerFrame(AAudioStreamBuilder* builder,
                                                       int32_t samplesPerFrame)
{
    AudioStreamBuilder *streamBuilder = convertAAudioBuilderToStreamBuilder(builder);
    // Convert the channel count to a channel mask
    const aaudio_channel_mask_t channelMask = AAudioConvert_channelCountToMask(samplesPerFrame);
    streamBuilder->setChannelMask(channelMask); // set the channel mask
}

Setting the channel count is implemented by setting the samples per frame, because in interleaved PCM audio the channel count equals the number of samples in each frame. Internally, the count is then converted to a channel mask.

AAudioStreamBuilder_setSessionId

AAUDIO_API void AAudioStreamBuilder_setSessionId(AAudioStreamBuilder* builder,
                                                 aaudio_session_id_t sessionId)
{
    AudioStreamBuilder *streamBuilder = convertAAudioBuilderToStreamBuilder(builder);
    streamBuilder->setSessionId(sessionId);
}

Sets the audio session ID (sessionId) on the AAudioStreamBuilder. A session ID identifies a particular audio session; in Android it helps manage audio resources and control audio routing and interaction.

AAudioStreamBuilder_setPerformanceMode

AAUDIO_API void AAudioStreamBuilder_setPerformanceMode(AAudioStreamBuilder* builder,
                                                       aaudio_performance_mode_t mode)
{
    AudioStreamBuilder *streamBuilder = convertAAudioBuilderToStreamBuilder(builder);
    streamBuilder->setPerformanceMode(mode);
}

Sets the stream's performance mode on the AAudioStreamBuilder. The performance mode determines the trade-off the stream makes at runtime, for example favoring low latency or high throughput.

AAudioStreamBuilder_setErrorCallback

AAUDIO_API void AAudioStreamBuilder_setErrorCallback(AAudioStreamBuilder* builder,
                                                     AAudioStream_errorCallback callback,
                                                     void *userData)
{
    AudioStreamBuilder *streamBuilder = convertAAudioBuilderToStreamBuilder(builder);
    streamBuilder->setErrorCallbackProc(callback);     // set the error callback function
    streamBuilder->setErrorCallbackUserData(userData); // set the user data passed to it
}

Registers an error callback with the stream builder. When the stream hits an error at runtime, the system invokes the callback, in which the application can implement its error-handling logic and react promptly.

AAudioStreamBuilder_openStream

frameworks\av\media\libaaudio\src\core\AAudioAudio.cpp
AAUDIO_API aaudio_result_t AAudioStreamBuilder_openStream(AAudioStreamBuilder* builder,
                                                          AAudioStream** streamPtr)
{
    AudioStream *audioStream = nullptr; // will hold the internally created stream
    aaudio_stream_id_t id = 0;
    // Please leave these logs because they are very helpful when debugging.
    ALOGI("%s() called ----------------------------------------", __func__);
    // Get the internal builder object (returns early if streamPtr is null)
    AudioStreamBuilder *streamBuilder = COMMON_GET_FROM_BUILDER_OR_RETURN(streamPtr);
    // Build the audio stream
    aaudio_result_t result = streamBuilder->build(&audioStream);
    if (result == AAUDIO_OK) {
        *streamPtr = (AAudioStream*) audioStream;
        id = audioStream->getId(); // record the stream ID for the log below
    } else {
        *streamPtr = nullptr;
    }
    ALOGI("%s() returns %d = %s for s#%u ----------------",
          __func__, result, AAudio_convertResultToText(result), id);
    return result;
}

Creates and opens an AAudioStream according to the parameters set on the AAudioStreamBuilder. The function takes the builder and a pointer to an AAudioStream pointer, and returns an aaudio_result_t describing the outcome.

// Macros for common code that includes a return.
// TODO Consider using do{}while(0) construct. I tried but it hung AndroidStudio
#define CONVERT_BUILDER_HANDLE_OR_RETURN() \
    convertAAudioBuilderToStreamBuilder(builder);

#define COMMON_GET_FROM_BUILDER_OR_RETURN(resultPtr) \
    CONVERT_BUILDER_HANDLE_OR_RETURN() \
    if ((resultPtr) == nullptr) { \
        return AAUDIO_ERROR_NULL; \
    }

static AudioStreamBuilder *convertAAudioBuilderToStreamBuilder(AAudioStreamBuilder* builder)
{
    return (AudioStreamBuilder*) builder;
}

AAudioStreamBuilder_delete

frameworks\av\media\libaaudio\src\core\AAudioAudio.cpp
AAUDIO_API aaudio_result_t AAudioStreamBuilder_delete(AAudioStreamBuilder* builder)
{
    AudioStreamBuilder *streamBuilder = convertAAudioBuilderToStreamBuilder(builder);
    if (streamBuilder != nullptr) {
        delete streamBuilder;
        return AAUDIO_OK;
    }
    return AAUDIO_ERROR_NULL;
}

Frees the memory held by the AAudioStreamBuilder object. Once the builder is no longer needed (typically right after the stream has been opened), call this function to clean up and avoid a memory leak.

class AAudioStreamParameters

frameworks\av\media\libaaudio\src\core\AAudioStreamParameters.h
/*
 * Copyright 2017 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#ifndef AAUDIO_STREAM_PARAMETERS_H
#define AAUDIO_STREAM_PARAMETERS_H

#include <stdint.h>

#include <aaudio/AAudio.h>
#include <utility/AAudioUtilities.h>

namespace aaudio {

class AAudioStreamParameters {
public:
    // The defaulted constructor and virtual destructor are generated by the
    // compiler; the virtual destructor makes destruction through a base
    // pointer safe for derived classes.
    AAudioStreamParameters() = default;
    virtual ~AAudioStreamParameters() = default;

    int32_t getDeviceId() const { return mDeviceId; }
    void setDeviceId(int32_t deviceId) { mDeviceId = deviceId; }

    int32_t getSampleRate() const { return mSampleRate; }
    void setSampleRate(int32_t sampleRate) { mSampleRate = sampleRate; }

    int32_t getSamplesPerFrame() const { return mSamplesPerFrame; }

    audio_format_t getFormat() const { return mAudioFormat; }
    void setFormat(audio_format_t audioFormat) { mAudioFormat = audioFormat; }

    aaudio_sharing_mode_t getSharingMode() const { return mSharingMode; }
    void setSharingMode(aaudio_sharing_mode_t sharingMode) { mSharingMode = sharingMode; }

    int32_t getBufferCapacity() const { return mBufferCapacity; }
    void setBufferCapacity(int32_t frames) { mBufferCapacity = frames; }

    aaudio_direction_t getDirection() const { return mDirection; }
    void setDirection(aaudio_direction_t direction) { mDirection = direction; }

    aaudio_usage_t getUsage() const { return mUsage; }
    void setUsage(aaudio_usage_t usage) { mUsage = usage; }

    aaudio_content_type_t getContentType() const { return mContentType; }
    void setContentType(aaudio_content_type_t contentType) { mContentType = contentType; }

    aaudio_spatialization_behavior_t getSpatializationBehavior() const { return mSpatializationBehavior; }
    void setSpatializationBehavior(aaudio_spatialization_behavior_t spatializationBehavior) {
        mSpatializationBehavior = spatializationBehavior;
    }

    bool isContentSpatialized() const { return mIsContentSpatialized; }
    void setIsContentSpatialized(bool isSpatialized) { mIsContentSpatialized = isSpatialized; }

    aaudio_input_preset_t getInputPreset() const { return mInputPreset; }
    void setInputPreset(aaudio_input_preset_t inputPreset) { mInputPreset = inputPreset; }

    aaudio_allowed_capture_policy_t getAllowedCapturePolicy() const { return mAllowedCapturePolicy; }
    void setAllowedCapturePolicy(aaudio_allowed_capture_policy_t policy) { mAllowedCapturePolicy = policy; }

    aaudio_session_id_t getSessionId() const { return mSessionId; }
    void setSessionId(aaudio_session_id_t sessionId) { mSessionId = sessionId; }

    bool isPrivacySensitive() const { return mIsPrivacySensitive; }
    void setPrivacySensitive(bool privacySensitive) { mIsPrivacySensitive = privacySensitive; }

    const std::optional<std::string> getOpPackageName() const { return mOpPackageName; }
    // TODO b/182392769: reexamine if Identity can be used
    void setOpPackageName(const std::optional<std::string>& opPackageName) { mOpPackageName = opPackageName; }

    const std::optional<std::string> getAttributionTag() const { return mAttributionTag; }
    void setAttributionTag(const std::optional<std::string>& attributionTag) { mAttributionTag = attributionTag; }

    aaudio_channel_mask_t getChannelMask() const { return mChannelMask; }
    void setChannelMask(aaudio_channel_mask_t channelMask) {
        mChannelMask = channelMask;
        // Keep the cached samples-per-frame in sync with the mask
        mSamplesPerFrame = AAudioConvert_channelMaskToCount(channelMask);
    }

    int32_t getHardwareSamplesPerFrame() const { return mHardwareSamplesPerFrame; }
    void setHardwareSamplesPerFrame(int32_t hardwareSamplesPerFrame) {
        mHardwareSamplesPerFrame = hardwareSamplesPerFrame;
    }

    int32_t getHardwareSampleRate() const { return mHardwareSampleRate; }
    void setHardwareSampleRate(int32_t hardwareSampleRate) { mHardwareSampleRate = hardwareSampleRate; }

    audio_format_t getHardwareFormat() const { return mHardwareAudioFormat; }
    void setHardwareFormat(audio_format_t hardwareAudioFormat) { mHardwareAudioFormat = hardwareAudioFormat; }

    /**
     * @return bytes per frame of getFormat()
     */
    // Bytes per frame = samples per frame * bytes per sample
    int32_t calculateBytesPerFrame() const {
        return getSamplesPerFrame() * audio_bytes_per_sample(getFormat());
    }

    /**
     * Copy variables defined in other AAudioStreamParameters instance to this one.
     * @param other
     */
    void copyFrom(const AAudioStreamParameters &other);

    virtual aaudio_result_t validate() const;

    void dump() const;

private:
    aaudio_result_t validateChannelMask() const;

    // Stream parameters: samples per frame, sample rate, device ID, sharing
    // mode, format, direction, usage, content type, spatialization behavior,
    // input preset, buffer capacity, capture policy, session ID, privacy
    // flag, package name, attribution tag, channel mask, and the
    // hardware-side equivalents.
    int32_t                          mSamplesPerFrame         = AAUDIO_UNSPECIFIED;
    int32_t                          mSampleRate              = AAUDIO_UNSPECIFIED;
    int32_t                          mDeviceId                = AAUDIO_UNSPECIFIED;
    aaudio_sharing_mode_t            mSharingMode             = AAUDIO_SHARING_MODE_SHARED;
    audio_format_t                   mAudioFormat             = AUDIO_FORMAT_DEFAULT;
    aaudio_direction_t               mDirection               = AAUDIO_DIRECTION_OUTPUT;
    aaudio_usage_t                   mUsage                   = AAUDIO_UNSPECIFIED;
    aaudio_content_type_t            mContentType             = AAUDIO_UNSPECIFIED;
    aaudio_spatialization_behavior_t mSpatializationBehavior  = AAUDIO_UNSPECIFIED;
    bool                             mIsContentSpatialized    = false;
    aaudio_input_preset_t            mInputPreset             = AAUDIO_UNSPECIFIED;
    int32_t                          mBufferCapacity          = AAUDIO_UNSPECIFIED;
    aaudio_allowed_capture_policy_t  mAllowedCapturePolicy    = AAUDIO_UNSPECIFIED;
    aaudio_session_id_t              mSessionId               = AAUDIO_SESSION_ID_NONE;
    bool                             mIsPrivacySensitive      = false;
    std::optional<std::string>       mOpPackageName           = {};
    std::optional<std::string>       mAttributionTag          = {};
    aaudio_channel_mask_t            mChannelMask             = AAUDIO_UNSPECIFIED;
    int                              mHardwareSamplesPerFrame = AAUDIO_UNSPECIFIED;
    int                              mHardwareSampleRate      = AAUDIO_UNSPECIFIED;
    audio_format_t                   mHardwareAudioFormat     = AUDIO_FORMAT_DEFAULT;
};

} /* namespace aaudio */

#endif //AAUDIO_STREAM_PARAMETERS_H

The AAudioStreamParameters class stores and manages the full set of parameters for an AAudio stream.

class AudioStreamBuilder

frameworks\av\media\libaaudio\src\core\AudioStreamBuilder.h
/*
 * Copyright 2015 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#ifndef AAUDIO_AUDIO_STREAM_BUILDER_H
#define AAUDIO_AUDIO_STREAM_BUILDER_H

#include <stdint.h>

#include <aaudio/AAudio.h>

#include "AAudioStreamParameters.h"
#include "AudioStream.h"

namespace aaudio {

/**
 * Factory class for an AudioStream.
 */
class AudioStreamBuilder : public AAudioStreamParameters {
public:
    AudioStreamBuilder() = default;
    ~AudioStreamBuilder() = default;

    bool isSharingModeMatchRequired() const { return mSharingModeMatchRequired; }
    AudioStreamBuilder* setSharingModeMatchRequired(bool required) {
        mSharingModeMatchRequired = required;
        return this;
    }

    int32_t getPerformanceMode() const { return mPerformanceMode; }
    AudioStreamBuilder* setPerformanceMode(aaudio_performance_mode_t performanceMode) {
        mPerformanceMode = performanceMode;
        return this;
    }

    AAudioStream_dataCallback getDataCallbackProc() const { return mDataCallbackProc; }
    AudioStreamBuilder* setDataCallbackProc(AAudioStream_dataCallback proc) {
        mDataCallbackProc = proc;
        return this;
    }

    void *getDataCallbackUserData() const { return mDataCallbackUserData; }
    AudioStreamBuilder* setDataCallbackUserData(void *userData) {
        mDataCallbackUserData = userData;
        return this;
    }

    AAudioStream_errorCallback getErrorCallbackProc() const { return mErrorCallbackProc; }
    AudioStreamBuilder* setErrorCallbackProc(AAudioStream_errorCallback proc) {
        mErrorCallbackProc = proc;
        return this;
    }

    AudioStreamBuilder* setErrorCallbackUserData(void *userData) {
        mErrorCallbackUserData = userData;
        return this;
    }
    void *getErrorCallbackUserData() const { return mErrorCallbackUserData; }

    int32_t getFramesPerDataCallback() const { return mFramesPerDataCallback; }
    AudioStreamBuilder* setFramesPerDataCallback(int32_t sizeInFrames) {
        mFramesPerDataCallback = sizeInFrames;
        return this;
    }

    AudioStreamBuilder* setPrivacySensitiveRequest(bool privacySensitive) {
        mPrivacySensitiveReq =
                privacySensitive ? PRIVACY_SENSITIVE_ENABLED : PRIVACY_SENSITIVE_DISABLED;
        return this;
    }

    aaudio_result_t build(AudioStream **streamPtr);

    virtual aaudio_result_t validate() const override;

    void logParameters() const;

    // Mark the stream so it can be deleted.
    static void stopUsingStream(AudioStream *stream);

private:
    // Extract a raw pointer that we can pass to a 'C' app.
    static AudioStream *startUsingStream(android::sp<AudioStream> &spAudioStream);

    bool                       mSharingModeMatchRequired = false; // must match sharing mode requested
    aaudio_performance_mode_t  mPerformanceMode = AAUDIO_PERFORMANCE_MODE_NONE;

    AAudioStream_dataCallback  mDataCallbackProc = nullptr;  // external callback functions
    void                      *mDataCallbackUserData = nullptr;
    int32_t                    mFramesPerDataCallback = AAUDIO_UNSPECIFIED; // frames

    AAudioStream_errorCallback mErrorCallbackProc = nullptr;
    void                      *mErrorCallbackUserData = nullptr;

    enum {
        PRIVACY_SENSITIVE_DEFAULT = -1,
        PRIVACY_SENSITIVE_DISABLED = 0,
        PRIVACY_SENSITIVE_ENABLED = 1,
    };
    typedef int32_t privacy_sensitive_t;
    privacy_sensitive_t        mPrivacySensitiveReq = PRIVACY_SENSITIVE_DEFAULT;
};

} /* namespace aaudio */

#endif //AAUDIO_AUDIO_STREAM_BUILDER_H

The AudioStreamBuilder class provides a convenient way to collect the parameters needed to create an AudioStream; its setters return this so that calls can be chained, and build() constructs the stream object.

class AudioStream

frameworks\av\media\libaaudio\src\core\AudioStream.h
/*
 * Copyright 2016 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#ifndef AAUDIO_AUDIOSTREAM_H
#define AAUDIO_AUDIOSTREAM_H

#include <atomic>
#include <mutex>
#include <stdint.h>

#include <android-base/thread_annotations.h>
#include <binder/IServiceManager.h>
#include <binder/Status.h>
#include <utils/StrongPointer.h>

#include <aaudio/AAudio.h>
#include <media/AudioSystem.h>
#include <media/PlayerBase.h>
#include <media/VolumeShaper.h>

#include "utility/AAudioUtilities.h"
#include "utility/MonotonicCounter.h"

// Cannot get android::media::VolumeShaper to compile!
#define AAUDIO_USE_VOLUME_SHAPER  0

namespace aaudio {

typedef void *(*aaudio_audio_thread_proc_t)(void *);
typedef uint32_t aaudio_stream_id_t;

class AudioStreamBuilder;

constexpr pid_t        CALLBACK_THREAD_NONE = 0;

/**
 * AAudio audio stream.
 */
// By extending AudioDeviceCallback, we also inherit from RefBase.
class AudioStream : public android::AudioSystem::AudioDeviceCallback {
public:

    AudioStream();
    virtual ~AudioStream();

protected:

    /**
     * Check the state to see if Pause is currently legal.
     *
     * @param result pointer to return code
     * @return true if OK to continue, if false then return result
     */
    bool checkPauseStateTransition(aaudio_result_t *result);

    virtual bool isFlushSupported() const {
        // Only implement FLUSH for OUTPUT streams.
        return false;
    }

    virtual bool isPauseSupported() const {
        // Only implement PAUSE for OUTPUT streams.
        return false;
    }

    /* Asynchronous requests.
     * Use waitForStateChange() to wait for completion.
     */
    virtual aaudio_result_t requestStart_l() REQUIRES(mStreamLock) = 0;

    virtual aaudio_result_t requestPause_l() REQUIRES(mStreamLock) {
        // Only implement this for OUTPUT streams.
        return AAUDIO_ERROR_UNIMPLEMENTED;
    }

    virtual aaudio_result_t requestFlush_l() REQUIRES(mStreamLock) {
        // Only implement this for OUTPUT streams.
        return AAUDIO_ERROR_UNIMPLEMENTED;
    }

    virtual aaudio_result_t requestStop_l() REQUIRES(mStreamLock) = 0;

public:
    virtual aaudio_result_t getTimestamp(clockid_t clockId,
                                         int64_t *framePosition,
                                         int64_t *timeNanoseconds) = 0;

    /**
     * Update state machine.
     * @return result of the operation.
     */
    aaudio_result_t updateStateMachine() {
        if (isDataCallbackActive()) {
            return AAUDIO_OK; // state is getting updated by the callback thread read/write call
        }
        return processCommands();
    };

    virtual aaudio_result_t processCommands() = 0;

    // =========== End ABSTRACT methods ===========================

    virtual aaudio_result_t waitForStateChange(aaudio_stream_state_t currentState,
                                               aaudio_stream_state_t *nextState,
                                               int64_t timeoutNanoseconds);

    /**
     * Open the stream using the parameters in the builder.
     * Allocate the necessary resources.
     */
    virtual aaudio_result_t open(const AudioStreamBuilder& builder);

    // log to MediaMetrics
    virtual void logOpenActual();
    void logReleaseBufferState();

    /* Note about naming for "release" and "close" related methods.
     *
     * These names are intended to match the public AAudio API.
     * The original AAudio API had an AAudioStream_close() function that
     * released the hardware and deleted the stream. That made it difficult
     * because apps want to release the HW ASAP but are not in a rush to delete
     * the stream object. So in R we added an AAudioStream_release() function
     * that just released the hardware.
     * The AAudioStream_close() method releases if needed and then closes.
     */

protected:
    /**
     * Free any hardware or system resources from the open() call.
     * It is safe to call release_l() multiple times.
     */
    virtual aaudio_result_t release_l() REQUIRES(mStreamLock) {
        setState(AAUDIO_STREAM_STATE_CLOSING);
        return AAUDIO_OK;
    }

    /**
     * Free any resources not already freed by release_l().
     * Assume release_l() already called.
     */
    virtual void close_l() REQUIRES(mStreamLock);

public:
    // This is only used to identify a stream in the logs without
    // revealing any pointers.
    aaudio_stream_id_t getId() { return mStreamId; }

    virtual aaudio_result_t setBufferSize(int32_t requestedFrames) = 0;

    aaudio_result_t createThread(int64_t periodNanoseconds,
                                 aaudio_audio_thread_proc_t threadProc,
                                 void *threadArg)
                                 EXCLUDES(mStreamLock) {
        std::lock_guard<std::mutex> lock(mStreamLock);
        return createThread_l(periodNanoseconds, threadProc, threadArg);
    }

    aaudio_result_t joinThread(void **returnArg) EXCLUDES(mStreamLock);

    virtual aaudio_result_t registerThread() { return AAUDIO_OK; }

    virtual aaudio_result_t unregisterThread() { return AAUDIO_OK; }

    /**
     * Internal function used to call the audio thread passed by the user.
     * It is unfortunately public because it needs to be called by a static 'C' function.
     */
    void* wrapUserThread();

    // ============== Queries ===========================

    aaudio_stream_state_t getState() const { return mState.load(); }
    aaudio_stream_state_t getStateExternal() const;

    virtual int32_t getBufferSize() const { return AAUDIO_ERROR_UNIMPLEMENTED; }

    virtual int32_t getBufferCapacity() const { return mBufferCapacity; }

    virtual int32_t getDeviceBufferCapacity() const { return mDeviceBufferCapacity; }

    virtual int32_t getFramesPerBurst() const { return mFramesPerBurst; }

    virtual int32_t getDeviceFramesPerBurst() const { return mDeviceFramesPerBurst; }

    virtual int32_t getXRunCount() const { return AAUDIO_ERROR_UNIMPLEMENTED; }

    bool isActive() const {
        return mState == AAUDIO_STREAM_STATE_STARTING || mState == AAUDIO_STREAM_STATE_STARTED;
    }

    virtual bool isMMap() { return false; }

    aaudio_result_t getSampleRate() const { return mSampleRate; }

    aaudio_result_t getDeviceSampleRate() const { return mDeviceSampleRate; }

    aaudio_result_t getHardwareSampleRate() const { return mHardwareSampleRate; }

    audio_format_t getFormat()  const { return mFormat; }

    audio_format_t getHardwareFormat()  const { return mHardwareFormat; }

    aaudio_result_t getSamplesPerFrame() const { return mSamplesPerFrame; }

    aaudio_result_t getDeviceSamplesPerFrame() const { return mDeviceSamplesPerFrame; }

    aaudio_result_t getHardwareSamplesPerFrame() const { return mHardwareSamplesPerFrame; }

    virtual int32_t getPerformanceMode() const { return mPerformanceMode; }

    void setPerformanceMode(aaudio_performance_mode_t performanceMode) {
        mPerformanceMode = performanceMode;
    }

    int32_t getDeviceId() const { return mDeviceId; }

    aaudio_sharing_mode_t getSharingMode() const { return mSharingMode; }

    bool isSharingModeMatchRequired() const { return mSharingModeMatchRequired; }

    virtual aaudio_direction_t getDirection() const = 0;

    aaudio_usage_t getUsage() const { return mUsage; }

    aaudio_content_type_t getContentType() const { return mContentType; }

    aaudio_spatialization_behavior_t getSpatializationBehavior() const { return mSpatializationBehavior; }

    bool isContentSpatialized() const { return mIsContentSpatialized; }

    aaudio_input_preset_t getInputPreset() const { return mInputPreset; }

    aaudio_allowed_capture_policy_t getAllowedCapturePolicy() const { return mAllowedCapturePolicy; }

    int32_t getSessionId() const { return mSessionId; }

    bool isPrivacySensitive() const { return mIsPrivacySensitive; }

    bool getRequireMonoBlend() const { return mRequireMonoBlend; }

    float getAudioBalance() const { return mAudioBalance; }

    /**
     * This is only valid after setChannelMask() and setFormat()
     * have been called.
     */
    int32_t getBytesPerFrame() const { return mSamplesPerFrame * getBytesPerSample(); }

    /**
     * This is only valid after setFormat() has been called.
     */
    int32_t getBytesPerSample() const { return audio_bytes_per_sample(mFormat); }

    /**
     * This is only valid after setDeviceSamplesPerFrame() and setDeviceFormat() have been called.
     */
    int32_t getBytesPerDeviceFrame() const {
        return getDeviceSamplesPerFrame() * audio_bytes_per_sample(getDeviceFormat());
    }

    virtual int64_t getFramesWritten() = 0;

    virtual int64_t getFramesRead() = 0;

    AAudioStream_dataCallback getDataCallbackProc() const { return mDataCallbackProc; }

    AAudioStream_errorCallback getErrorCallbackProc() const { return mErrorCallbackProc; }

    aaudio_data_callback_result_t maybeCallDataCallback(void *audioData, int32_t numFrames);

    void maybeCallErrorCallback(aaudio_result_t result);

    void *getDataCallbackUserData() const { return mDataCallbackUserData; }

    void *getErrorCallbackUserData() const { return mErrorCallbackUserData; }

    int32_t getFramesPerDataCallback() const { return mFramesPerDataCallback; }

    aaudio_channel_mask_t getChannelMask() const { return mChannelMask; }

    void setChannelMask(aaudio_channel_mask_t channelMask) {
        mChannelMask = channelMask;
        mSamplesPerFrame = AAudioConvert_channelMaskToCount(channelMask);
    }

    void setDeviceSamplesPerFrame(int32_t deviceSamplesPerFrame) {
        mDeviceSamplesPerFrame = deviceSamplesPerFrame;
    }

    /**
     * @return true if data callback has been specified
     */
    bool isDataCallbackSet() const { return mDataCallbackProc != nullptr; }

    /**
     * @return true if data callback has been specified and stream is running
     */
    bool isDataCallbackActive() const { return isDataCallbackSet() && isActive(); }

    /**
     * @return true if called from the same thread as the callback
     */
    bool collidesWithCallback() const;

    // Implement AudioDeviceCallback
    void onAudioDeviceUpdate(audio_io_handle_t audioIo,
                             audio_port_handle_t deviceId) override {};

    // ============== I/O ===========================
    // A Stream will only implement read() or write() depending on its direction.
    virtual aaudio_result_t write(const void *buffer __unused,
                                  int32_t numFrames __unused,
                                  int64_t timeoutNanoseconds __unused) {
        return AAUDIO_ERROR_UNIMPLEMENTED;
    }

    virtual aaudio_result_t read(void *buffer __unused,
                                 int32_t numFrames __unused,
                                 int64_t timeoutNanoseconds __unused) {
        return AAUDIO_ERROR_UNIMPLEMENTED;
    }

    // This is used by the AudioManager to duck and mute the stream when changing audio focus.
    void setDuckAndMuteVolume(float duckAndMuteVolume) EXCLUDES(mStreamLock);

    float getDuckAndMuteVolume() const { return mDuckAndMuteVolume; }

    // Implement this in the output subclasses.
    virtual android::status_t doSetVolume() { return android::NO_ERROR; }

#if AAUDIO_USE_VOLUME_SHAPER
    virtual ::android::binder::Status applyVolumeShaper(
            const ::android::media::VolumeShaper::Configuration& configuration __unused,
            const ::android::media::VolumeShaper::Operation& operation __unused);
#endif/*** Register this stream's PlayerBase with the AudioManager if needed.* Only register output streams.* This should only be called for client streams and not for streams* that run in the service.*/virtual void registerPlayerBase() {if (getDirection() == AAUDIO_DIRECTION_OUTPUT) {mPlayerBase->registerWithAudioManager(this);}}/*** Unregister this stream's PlayerBase with the AudioManager.* This will only unregister if already registered.*/void unregisterPlayerBase() {mPlayerBase->unregisterWithAudioManager();}aaudio_result_t systemStart() EXCLUDES(mStreamLock);aaudio_result_t systemPause() EXCLUDES(mStreamLock);aaudio_result_t safeFlush() EXCLUDES(mStreamLock);/*** This is called when an app calls AAudioStream_requestStop();* It prevents calls from a callback.*/aaudio_result_t systemStopFromApp();/*** This is called internally when an app callback returns AAUDIO_CALLBACK_RESULT_STOP.*/aaudio_result_t systemStopInternal() EXCLUDES(mStreamLock);/*** Safely RELEASE a stream after taking mStreamLock and checking* to make sure we are not being called from a callback.* @return AAUDIO_OK or a negative error*/aaudio_result_t safeRelease() EXCLUDES(mStreamLock);/*** Safely RELEASE and CLOSE a stream after taking mStreamLock and checking* to make sure we are not being called from a callback.* @return AAUDIO_OK or a negative error*/aaudio_result_t safeReleaseClose();aaudio_result_t safeReleaseCloseInternal() EXCLUDES(mStreamLock);protected:// PlayerBase allows the system to control the stream volume.class MyPlayerBase : public android::PlayerBase {public:MyPlayerBase() = default;virtual ~MyPlayerBase() = default;/*** Register for volume changes and remote control.*/void registerWithAudioManager(const android::sp<AudioStream>& parent);/*** UnRegister.*/void unregisterWithAudioManager();/*** Just calls unregisterWithAudioManager().*/void destroy() override;// Just a stub. 
The ability to start audio through PlayerBase is being deprecated.android::status_t playerStart() override {return android::NO_ERROR;}// Just a stub. The ability to pause audio through PlayerBase is being deprecated.android::status_t playerPause() override {return android::NO_ERROR;}// Just a stub. The ability to stop audio through PlayerBase is being deprecated.android::status_t playerStop() override {return android::NO_ERROR;}android::status_t playerSetVolume() override;#if AAUDIO_USE_VOLUME_SHAPER::android::binder::Status applyVolumeShaper();
#endifaaudio_result_t getResult() {return mResult;}// Returns the playerIId if registered, -1 otherwise.int32_t getPlayerIId() const {return mPIId;}private:// Use a weak pointer so the AudioStream can be deleted.std::mutex               mParentLock;android::wp<AudioStream> mParent GUARDED_BY(mParentLock);aaudio_result_t          mResult = AAUDIO_OK;bool                     mRegistered = false;};/*** This should not be called after the open() call.* TODO for multiple setters: assert(mState == AAUDIO_STREAM_STATE_UNINITIALIZED)*/void setSampleRate(int32_t sampleRate) {mSampleRate = sampleRate;}// This should not be called after the open() call.void setDeviceSampleRate(int32_t deviceSampleRate) {mDeviceSampleRate = deviceSampleRate;}// This should not be called after the open() call.void setHardwareSampleRate(int32_t hardwareSampleRate) {mHardwareSampleRate = hardwareSampleRate;}// This should not be called after the open() call.void setFramesPerBurst(int32_t framesPerBurst) {mFramesPerBurst = framesPerBurst;}// This should not be called after the open() call.void setDeviceFramesPerBurst(int32_t deviceFramesPerBurst) {mDeviceFramesPerBurst = deviceFramesPerBurst;}// This should not be called after the open() call.void setBufferCapacity(int32_t bufferCapacity) {mBufferCapacity = bufferCapacity;}// This should not be called after the open() call.void setDeviceBufferCapacity(int32_t deviceBufferCapacity) {mDeviceBufferCapacity = deviceBufferCapacity;}// This should not be called after the open() call.void setSharingMode(aaudio_sharing_mode_t sharingMode) {mSharingMode = sharingMode;}// This should not be called after the open() call.void setFormat(audio_format_t format) {mFormat = format;}// This should not be called after the open() call.void setHardwareFormat(audio_format_t format) {mHardwareFormat = format;}// This should not be called after the open() call.void setHardwareSamplesPerFrame(int32_t hardwareSamplesPerFrame) {mHardwareSamplesPerFrame = 
hardwareSamplesPerFrame;}// This should not be called after the open() call.void setDeviceFormat(audio_format_t format) {mDeviceFormat = format;}audio_format_t getDeviceFormat() const {return mDeviceFormat;}void setState(aaudio_stream_state_t state);bool isDisconnected() const {return mDisconnected.load();}void setDisconnected();void setDeviceId(int32_t deviceId) {mDeviceId = deviceId;}// This should not be called after the open() call.void setSessionId(int32_t sessionId) {mSessionId = sessionId;}aaudio_result_t createThread_l(int64_t periodNanoseconds,aaudio_audio_thread_proc_t threadProc,void *threadArg)REQUIRES(mStreamLock);aaudio_result_t joinThread_l(void **returnArg) REQUIRES(mStreamLock);std::atomic<bool>    mCallbackEnabled{false};float                mDuckAndMuteVolume = 1.0f;protected:/*** Either convert the data from device format to app format and return a pointer* to the conversion buffer,* OR just pass back the original pointer.** Note that this is only used for the INPUT path.** @param audioData* @param numFrames* @return original pointer or the conversion buffer*/virtual const void * maybeConvertDeviceData(const void *audioData, int32_t /*numFrames*/) {return audioData;}void setPeriodNanoseconds(int64_t periodNanoseconds) {mPeriodNanoseconds.store(periodNanoseconds, std::memory_order_release);}int64_t getPeriodNanoseconds() {return mPeriodNanoseconds.load(std::memory_order_acquire);}/*** This should not be called after the open() call.*/void setUsage(aaudio_usage_t usage) {mUsage = usage;}/*** This should not be called after the open() call.*/void setContentType(aaudio_content_type_t contentType) {mContentType = contentType;}void setSpatializationBehavior(aaudio_spatialization_behavior_t spatializationBehavior) {mSpatializationBehavior = spatializationBehavior;}void setIsContentSpatialized(bool isContentSpatialized) {mIsContentSpatialized = isContentSpatialized;}/*** This should not be called after the open() call.*/void 
setInputPreset(aaudio_input_preset_t inputPreset) {mInputPreset = inputPreset;}/*** This should not be called after the open() call.*/void setAllowedCapturePolicy(aaudio_allowed_capture_policy_t policy) {mAllowedCapturePolicy = policy;}/*** This should not be called after the open() call.*/void setPrivacySensitive(bool privacySensitive) {mIsPrivacySensitive = privacySensitive;}/*** This should not be called after the open() call.*/void setRequireMonoBlend(bool requireMonoBlend) {mRequireMonoBlend = requireMonoBlend;}/*** This should not be called after the open() call.*/void setAudioBalance(float audioBalance) {mAudioBalance = audioBalance;}std::string mMetricsId; // set once during open()std::mutex                 mStreamLock;const android::sp<MyPlayerBase>   mPlayerBase;private:aaudio_result_t safeStop_l() REQUIRES(mStreamLock);/*** Release then close the stream.*/void releaseCloseFinal_l() REQUIRES(mStreamLock) {if (getState() != AAUDIO_STREAM_STATE_CLOSING) { // not already released?// Ignore result and keep closing.(void) release_l();}close_l();}std::atomic<aaudio_stream_state_t>          mState{AAUDIO_STREAM_STATE_UNINITIALIZED};std::atomic_bool            mDisconnected{false};// These do not change after open().int32_t                     mSamplesPerFrame = AAUDIO_UNSPECIFIED;int32_t                     mDeviceSamplesPerFrame = AAUDIO_UNSPECIFIED;int32_t                     mHardwareSamplesPerFrame = AAUDIO_UNSPECIFIED;aaudio_channel_mask_t       mChannelMask = AAUDIO_UNSPECIFIED;int32_t                     mSampleRate = AAUDIO_UNSPECIFIED;int32_t                     mDeviceSampleRate = AAUDIO_UNSPECIFIED;int32_t                     mHardwareSampleRate = AAUDIO_UNSPECIFIED;int32_t                     mDeviceId = AAUDIO_UNSPECIFIED;aaudio_sharing_mode_t       mSharingMode = AAUDIO_SHARING_MODE_SHARED;bool                        mSharingModeMatchRequired = false; // must match sharing mode requestedaudio_format_t              mFormat = 
AUDIO_FORMAT_DEFAULT;audio_format_t              mHardwareFormat = AUDIO_FORMAT_DEFAULT;aaudio_performance_mode_t   mPerformanceMode = AAUDIO_PERFORMANCE_MODE_NONE;int32_t                     mFramesPerBurst = 0;int32_t                     mDeviceFramesPerBurst = 0;int32_t                     mBufferCapacity = 0;int32_t                     mDeviceBufferCapacity = 0;aaudio_usage_t              mUsage           = AAUDIO_UNSPECIFIED;aaudio_content_type_t       mContentType     = AAUDIO_UNSPECIFIED;aaudio_spatialization_behavior_t mSpatializationBehavior = AAUDIO_UNSPECIFIED;bool                        mIsContentSpatialized = false;aaudio_input_preset_t       mInputPreset     = AAUDIO_UNSPECIFIED;aaudio_allowed_capture_policy_t mAllowedCapturePolicy = AAUDIO_ALLOW_CAPTURE_BY_ALL;bool                        mIsPrivacySensitive = false;bool                        mRequireMonoBlend = false;float                       mAudioBalance = 0;int32_t                     mSessionId = AAUDIO_UNSPECIFIED;// Sometimes the hardware is operating with a different format from the app.// Then we require conversion in AAudio.audio_format_t              mDeviceFormat = AUDIO_FORMAT_INVALID;// callback ----------------------------------AAudioStream_dataCallback   mDataCallbackProc = nullptr;  // external callback functionsvoid                       *mDataCallbackUserData = nullptr;int32_t                     mFramesPerDataCallback = AAUDIO_UNSPECIFIED; // framesstd::atomic<pid_t>          mDataCallbackThread{CALLBACK_THREAD_NONE};AAudioStream_errorCallback  mErrorCallbackProc = nullptr;void                       *mErrorCallbackUserData = nullptr;std::atomic<pid_t>          mErrorCallbackThread{CALLBACK_THREAD_NONE};// background thread ----------------------------------// Use mHasThread to prevent joining twice, which has undefined behavior.bool                        mHasThread GUARDED_BY(mStreamLock) = false;pthread_t                   mThread  GUARDED_BY(mStreamLock) = {};// These are set 
by the application thread and then read by the audio pthread.std::atomic<int64_t>        mPeriodNanoseconds; // for tuning SCHED_FIFO threads// TODO make atomic?aaudio_audio_thread_proc_t  mThreadProc = nullptr;void                       *mThreadArg = nullptr;aaudio_result_t             mThreadRegistrationResult = AAUDIO_OK;const aaudio_stream_id_t    mStreamId;};} /* namespace aaudio */#endif /* AAUDIO_AUDIOSTREAM_H */

Keywords

BtifAvrcpAudioTrackCreate|AAudioStreamBuilder_openStream|AudioStreamBuilder|AAUDIO_OK
