HarmonyOS NEXT: How to keep microphone audio and video in sync when recording with OH_AVMuxer
1. First, store each OH_AVBuffer received in the callbacks registered via OH_AudioCodec_RegisterCallback / OH_VideoEncoder_RegisterCallback into a cache queue std::queue&lt;CodecBufferInfo&gt; outputBufferInfoQueue_. CodecBufferInfo is defined as follows:
struct CodecBufferInfo {
    uint32_t bufferIndex = 0;
    uintptr_t *buffer = nullptr;    // holds an OH_AVBuffer* or OH_AVMemory*, type-erased
    uint8_t *bufferAddr = nullptr;
    OH_AVCodecBufferAttr attr = { 0, 0, 0, AVCODEC_BUFFER_FLAGS_NONE };

    CodecBufferInfo(uint8_t *addr) : bufferAddr(addr) {};
    CodecBufferInfo(uint8_t *addr, int32_t bufferSize)
        : bufferAddr(addr),
          attr({ 0, bufferSize, 0, AVCODEC_BUFFER_FLAGS_NONE }) {};
    CodecBufferInfo(uint32_t argBufferIndex, OH_AVMemory *argBuffer, OH_AVCodecBufferAttr argAttr)
        : bufferIndex(argBufferIndex),
          buffer(reinterpret_cast<uintptr_t *>(argBuffer)),
          attr(argAttr) {};
    CodecBufferInfo(uint32_t argBufferIndex, OH_AVMemory *argBuffer)
        : bufferIndex(argBufferIndex),
          buffer(reinterpret_cast<uintptr_t *>(argBuffer)) {};
    CodecBufferInfo(uint32_t argBufferIndex, OH_AVBuffer *argBuffer)
        : bufferIndex(argBufferIndex),
          buffer(reinterpret_cast<uintptr_t *>(argBuffer))
    {
        // Copy pts/size/flags out of the buffer so they travel with the queue entry
        OH_AVBuffer_GetBufferAttr(argBuffer, &attr);
    };
};
2. In the video processing thread, update the pts timestamp to the current system time, apply it with OH_AVBuffer_SetBufferAttr to the OH_AVBuffer taken from outputBufferInfoQueue_, and call OH_AVMuxer_WriteSampleBuffer to write the sample into the muxer.
Snippet for updating the timestamp:
uint64_t systemTimeUs = std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::system_clock::now().time_since_epoch()).count();
bufferInfo.attr.pts = systemTimeUs;
Video processing thread reference:
void Recorder::VideoEncOutputThread()
{
    while (true) {
        std::unique_lock<std::mutex> lock(videoEncContext_->outputMutex_);
        bool condRet = videoEncContext_->outputCond_.wait_for(
            lock, 5s, [this]() { return !isStarted_ || !videoEncContext_->outputBufferInfoQueue_.empty(); });
        if (videoEncContext_->outputBufferInfoQueue_.empty()) {
            // Exit once recording has stopped or the wait timed out; otherwise retry
            if (!isStarted_ || !condRet) {
                break;
            }
            continue;
        }
        CodecBufferInfo bufferInfo = videoEncContext_->outputBufferInfoQueue_.front();
        videoEncContext_->outputBufferInfoQueue_.pop();
        lock.unlock();
        if ((bufferInfo.attr.flags & AVCODEC_BUFFER_FLAGS_SYNC_FRAME) ||
            (bufferInfo.attr.flags == AVCODEC_BUFFER_FLAGS_NONE)) {
            videoEncContext_->outputFrameCount_++;
            uint64_t systemTimeUs = std::chrono::duration_cast<std::chrono::microseconds>(
                std::chrono::system_clock::now().time_since_epoch()).count();
            bufferInfo.attr.pts = systemTimeUs;
        } else {
            // Non-frame output (e.g. codec config data) gets pts 0
            bufferInfo.attr.pts = 0;
        }
        muxer_->WriteSample(muxer_->GetVideoTrackId(), reinterpret_cast<OH_AVBuffer *>(bufferInfo.buffer),
            bufferInfo.attr);
        videoEncoder_->FreeOutputData(bufferInfo.bufferIndex);
    }
    AVCODEC_SAMPLE_LOGI("Exit, frame count: %{public}u", videoEncContext_->outputFrameCount_);
    StartRelease();
}
WriteSample reference:
int32_t Muxer::WriteSample(int32_t trackId, OH_AVBuffer *buffer, OH_AVCodecBufferAttr &attr)
{
    std::lock_guard<std::mutex> lock(writeMutex_);
    // Apply the adjusted pts to the buffer before handing it to the muxer
    int32_t ret = OH_AVBuffer_SetBufferAttr(buffer, &attr);
    if (ret != AV_ERR_OK) {
        return ret;
    }
    ret = OH_AVMuxer_WriteSampleBuffer(muxer_, trackId, buffer);
    if (ret != AV_ERR_OK) {
        return ret;
    }
    return AVCODEC_SAMPLE_ERR_OK;
}
3. Similarly to the video path, the audio processing thread updates the pts timestamp to the current system time and then writes the encoded output into the muxer. Because both the audio and video pts are taken from the same system clock, the two tracks share one time reference, which keeps audio and video in sync.
Audio processing thread reference:
void Recorder::AudioEncOutputThread()
{
    while (true) {
        std::unique_lock<std::mutex> lock(audioEncContext_->outputMutex_);
        bool condRet = audioEncContext_->outputCond_.wait_for(
            lock, 5s, [this]() { return !isStarted_ || !audioEncContext_->outputBufferInfoQueue_.empty(); });
        if (audioEncContext_->outputBufferInfoQueue_.empty()) {
            // Exit once recording has stopped or the wait timed out; otherwise retry
            if (!isStarted_ || !condRet) {
                break;
            }
            continue;
        }
        CodecBufferInfo bufferInfo = audioEncContext_->outputBufferInfoQueue_.front();
        audioEncContext_->outputBufferInfoQueue_.pop();
        audioEncContext_->outputFrameCount_++;
        uint64_t systemTimeUs = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::system_clock::now().time_since_epoch()).count();
        bufferInfo.attr.pts = systemTimeUs;
        lock.unlock();
        // Write the encoded output data into the muxer
        muxer_->WriteSample(muxer_->GetAudioTrackId(), reinterpret_cast<OH_AVBuffer *>(bufferInfo.buffer),
            bufferInfo.attr);
        audioEncoder_->FreeOutputData(bufferInfo.bufferIndex);
    }
    StartRelease();
}
WriteSample reference: identical to the Muxer::WriteSample implementation shown in the video section above.
In HarmonyOS, keeping microphone audio and video in sync when recording with OH_AVMuxer depends mainly on accurate timestamp management and a common synchronization reference. Key measures:
- Timestamp management: give every video frame and audio packet a correct timestamp (PTS and DTS) that specifies when it should be decoded and presented.
- Sync reference: use the system clock as the single reference for both tracks, so audio and video keep a consistent timeline through encoding, transport, and decoding.
- Buffer management: buffer audio and video data to avoid loss or delay, so samples enter the recording or playback queue in timestamp order.
- Scheduling: drive decoding and recording by the samples' timestamps, so audio and video are processed in the correct order.
- Jitter handling (streaming scenarios): when media crosses a network, use a jitter buffer and reordering to reduce delay or out-of-order delivery caused by unstable links.
If these measures do not fully resolve the synchronization problem, it may be due to system- or hardware-specific limitations. In that case, contact official HarmonyOS technical support for further assistance.