HarmonyOS Next (鸿蒙Next) audio/video recording issue

Posted 1 week ago · Author: phonegap100 · Category: HarmonyOS


When recording with the system APIs, the screen stream, the camera stream, and the audio stream all end up with different durations. How should this be handled?

2 Replies
  1. The callbacks of the different modules are processed at different times, so a small discrepancy is normal. In a local 1.5-minute test, the camera and audio durations matched, while the screen recording was about one second shorter.
  2. Screen recording does not yet support a dynamic frame rate.

Both screen recording and camera recording can have audio integrated directly into them; when they are recorded together as one file, the out-of-sync problem does not occur.
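As a rough illustration of that suggestion, the sketch below configures the OH_AVScreenCapture NDK interface to record the screen and the microphone into a single MP4, so both tracks share one timeline. It is only a sketch under the assumption that the structs and enums come from multimedia/player_framework/native_avscreen_capture_base.h; verify the exact field names, enum values, and required permissions (e.g. microphone) against your SDK headers.

#include <cstdint>
#include <cstring>
#include <multimedia/player_framework/native_avscreen_capture.h>
#include <multimedia/player_framework/native_avscreen_capture_base.h>

// Sketch: screen video + mic audio muxed into one MP4, so audio and video
// stay on a single clock instead of drifting apart as separate recordings.
// Field names follow the OH_AVScreenCapture NDK headers; check your SDK version.
bool StartScreenRecordingWithAudio(const char *outputUrl)
{
    OH_AVScreenCaptureConfig config = {};
    config.captureMode = OH_CAPTURE_HOME_SCREEN;
    config.dataType = OH_CAPTURE_FILE;                        // record to file, not a raw stream

    config.audioInfo.micCapInfo.audioSampleRate = 48000;      // mic track muxed with the video
    config.audioInfo.micCapInfo.audioChannels = 2;
    config.audioInfo.micCapInfo.audioSource = OH_MIC;
    config.audioInfo.audioEncInfo.audioBitrate = 96000;
    config.audioInfo.audioEncInfo.audioCodecformat = OH_AAC_LC;

    config.videoInfo.videoCapInfo.videoFrameWidth = 1080;
    config.videoInfo.videoCapInfo.videoFrameHeight = 2340;
    config.videoInfo.videoCapInfo.videoSource = OH_VIDEO_SOURCE_SURFACE_RGBA;
    config.videoInfo.videoEncInfo.videoCodec = OH_H264;
    config.videoInfo.videoEncInfo.videoBitrate = 2000000;
    config.videoInfo.videoEncInfo.videoFrameRate = 30;

    config.recorderInfo.url = const_cast<char *>(outputUrl);  // e.g. an "fd://" URL for the output file
    config.recorderInfo.urlLen = static_cast<uint32_t>(std::strlen(outputUrl));
    config.recorderInfo.fileFormat = CFT_MPEG_4;

    OH_AVScreenCapture *capture = OH_AVScreenCapture_Create();
    if (capture == nullptr) {
        return false;
    }
    if (OH_AVScreenCapture_Init(capture, config) != AV_SCREEN_CAPTURE_ERR_OK ||
        OH_AVScreenCapture_StartScreenRecording(capture) != AV_SCREEN_CAPTURE_ERR_OK) {
        OH_AVScreenCapture_Release(capture);
        return false;
    }
    // Later: OH_AVScreenCapture_StopScreenRecording(capture); OH_AVScreenCapture_Release(capture);
    return true;
}

The same idea applies to camera recording: feeding the mic into the same recorder/muxer keeps audio and video on one clock instead of producing two files whose durations drift apart.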

Multiple mic recording streams cannot be open at the same time; a mic stream started later will be rejected.
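Because of that restriction, a second capture attempt has to check the return codes instead of assuming success. A minimal sketch using the OHAudio capturer NDK API (function and struct names as in ohaudio/native_audiocapturer.h; treat the details as assumptions and verify against your SDK):

#include <ohaudio/native_audiostreambuilder.h>
#include <ohaudio/native_audiocapturer.h>

// Sketch: try to start a mic capturer; if another mic stream is already running,
// the later one is expected to fail, so fall back instead of crashing.
static int32_t OnReadData(OH_AudioCapturer *capturer, void *userData, void *buffer, int32_t length)
{
    (void)capturer; (void)userData; (void)buffer; (void)length; // consume captured PCM here
    return 0;
}

OH_AudioCapturer *TryStartMicCapturer()
{
    OH_AudioStreamBuilder *builder = nullptr;
    if (OH_AudioStreamBuilder_Create(&builder, AUDIOSTREAM_TYPE_CAPTURER) != AUDIOSTREAM_SUCCESS) {
        return nullptr;
    }
    OH_AudioCapturer_Callbacks callbacks = {};
    callbacks.OH_AudioCapturer_OnReadData = OnReadData;
    OH_AudioStreamBuilder_SetCapturerCallback(builder, callbacks, nullptr);

    OH_AudioCapturer *capturer = nullptr;
    if (OH_AudioStreamBuilder_GenerateCapturer(builder, &capturer) != AUDIOSTREAM_SUCCESS ||
        OH_AudioCapturer_Start(capturer) != AUDIOSTREAM_SUCCESS) {
        // A second concurrent mic stream typically lands here: release the failed
        // stream and reuse the already-running capture rather than opening another.
        if (capturer != nullptr) {
            OH_AudioCapturer_Release(capturer);
        }
        OH_AudioStreamBuilder_Destroy(builder);
        return nullptr;
    }
    OH_AudioStreamBuilder_Destroy(builder);
    return capturer;
}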

Recording the streams separately cannot guarantee identical durations. For audio-video synchronization during playback, see the demo at https://gitee.com/harmonyos_samples/AVCodecVideo. Its video decoder output thread aligns each decoded video frame with the audio renderer's playback position:

void Player::VideoDecOutputThread() {
    sampleInfo_.frameInterval = MICROSECOND / sampleInfo_.frameRate;
    while (true) {
        thread_local auto lastPushTime = std::chrono::system_clock::now();
        CHECK_AND_BREAK_LOG(isStarted_, "Decoder output thread out");
        std::unique_lock<std::mutex> lock(videoDecContext_->outputMutex);
        bool condRet = videoDecContext_->outputCond.wait_for(
            lock, 5s, [this]() { return !isStarted_ || !videoDecContext_->outputBufferInfoQueue.empty(); });
        CHECK_AND_BREAK_LOG(isStarted_, "Decoder output thread out");
        CHECK_AND_CONTINUE_LOG(!videoDecContext_->outputBufferInfoQueue.empty(),
            "Buffer queue is empty, continue, cond ret: %{public}d", condRet);

        CodecBufferInfo bufferInfo = videoDecContext_->outputBufferInfoQueue.front();
        videoDecContext_->outputBufferInfoQueue.pop();
        AVCODEC_SAMPLE_LOGI("bufferInfo.bufferIndex: %{public}ld", bufferInfo.bufferIndex);
        CHECK_AND_BREAK_LOG(!(bufferInfo.attr.flags & AVCODEC_BUFFER_FLAGS_EOS), "Catch EOS, thread out");

        videoDecContext_->outputFrameCount++;
        AVCODEC_SAMPLE_LOGW("Out buffer count: %{public}u, size: %{public}d, flag: %{public}u, pts: %{public}" PRId64,
            videoDecContext_->outputFrameCount, bufferInfo.attr.size, bufferInfo.attr.flags, bufferInfo.attr.pts);
        lock.unlock();

        // get audio render position
        int64_t framePosition = 0;
        int64_t timestamp = 0;
        int32_t ret = OH_AudioRenderer_GetTimestamp(audioRenderer_, CLOCK_MONOTONIC, &framePosition, &timestamp);
        AVCODEC_SAMPLE_LOGI("framePosition: %{public}ld, nowTimeStamp_: %{public}ld", framePosition, nowTimeStamp_);
        audioTimeStamp_ = timestamp; // ns

        // audio render getTimeStamp error, render it
        if (ret != AUDIOSTREAM_SUCCESS || (timestamp == 0) || (framePosition == 0)) {
            // first frame, render without wait
            videoDecoder_->FreeOutputBuffer(bufferInfo.bufferIndex, true);
            std::this_thread::sleep_until(lastPushTime + std::chrono::microseconds(sampleInfo_.frameInterval));
            lastPushTime = std::chrono::system_clock::now();
            continue;
        }

        // after seek, audio render flush, framePosition = 0, then writtenSampleCnt_ = 0
        int64_t latency = (writtenSampleCnt_ - framePosition) * 1000 * 1000 / sampleInfo_.audioSampleRate; // us
        AVCODEC_SAMPLE_LOGI("latency: %{public}ld writtenSampleCnt_: %{public}ld", latency, writtenSampleCnt_);
        nowTimeStamp_ = getCurrentTime();
        int64_t anchordiff = (nowTimeStamp_ - audioTimeStamp_) / 1000;    // ns to us
        int64_t audioPlayedTime = audioBufferPts_ - latency + anchordiff; // us, audio buffer actual render time
        int64_t videoPlayedTime = bufferInfo.attr.pts;                    // us, video buffer expected render time
        // audio render timestamp and now timestamp diff
        int64_t waitTimeUs = videoPlayedTime - audioPlayedTime;           // us
        AVCODEC_SAMPLE_LOGI("bufferInfo.bufferIndex: %{public}ld", bufferInfo.bufferIndex);
        AVCODEC_SAMPLE_LOGI("audioPlayedTime: %{public}ld, videoPlayedTime: %{public}ld, nowTimeStamp_: %{public}ld, "
            "audioTimeStamp_: %{public}ld, waitTimeUs: %{public}ld, anchordiff: %{public}ld",
            audioPlayedTime, videoPlayedTime, nowTimeStamp_, audioTimeStamp_, waitTimeUs, anchordiff);

        bool dropFrame = false;
        // video buffer is too late, drop it
        if (waitTimeUs < WAIT_TIME_US_THRESHOLD_WARNING) {
            dropFrame = true;
            AVCODEC_SAMPLE_LOGE("buffer is too late");
        } else {
            AVCODEC_SAMPLE_LOGE("buffer is too early waitTimeUs: %{public}ld", waitTimeUs);
            // [0, ), render it with waitTimeUs, max 1.5s
            // [-40, 0), render it
            if (waitTimeUs > WAIT_TIME_US_THRESHOLD) {
                waitTimeUs = WAIT_TIME_US_THRESHOLD;
            }
            // per frame render time reduced by 33ms
            if (waitTimeUs > sampleInfo_.frameInterval + PER_SINK_TIME_THRESHOLD) {
                waitTimeUs = sampleInfo_.frameInterval + PER_SINK_TIME_THRESHOLD;
                AVCODEC_SAMPLE_LOGE("buffer is too early and reduced 33ms, waitTimeUs: %{public}ld", waitTimeUs);
            }
        }

        if (waitTimeUs > 0) {
            std::this_thread::sleep_for(std::chrono::microseconds(waitTimeUs));
        }
        lastPushTime = std::chrono::system_clock::now();
        ret = videoDecoder_->FreeOutputBuffer(bufferInfo.bufferIndex, !dropFrame);
        CHECK_AND_BREAK_LOG(ret == AVCODEC_SAMPLE_ERR_OK, "Decoder output thread out");
    }

    writtenSampleCnt_ = 0;
    audioBufferPts_ = 0;
    StartRelease();
}
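To make the arithmetic inside the loop easier to follow, here is a self-contained toy calculation with made-up input numbers. The threshold constants are filled in from the comments in the sample (drop frames more than 40 ms late, never wait longer than 1.5 s, cap each wait at one frame interval plus 33 ms); the real definitions live in the sample's sources.

#include <cstdint>
#include <cstdio>

// Standalone illustration of the sync arithmetic used above.
// Threshold values follow the comments in the sample; input values are invented.
int main()
{
    const int64_t MICROSECOND = 1000000;
    const int64_t WAIT_TIME_US_THRESHOLD_WARNING = -40000; // drop frames more than 40 ms late
    const int64_t WAIT_TIME_US_THRESHOLD = 1500000;        // never wait more than 1.5 s
    const int64_t PER_SINK_TIME_THRESHOLD = 33000;         // extra per-frame slack, 33 ms

    // Example state: 48 kHz audio, 30 fps video.
    int64_t audioSampleRate = 48000;
    int64_t frameInterval = MICROSECOND / 30;              // 33333 us per video frame

    int64_t writtenSampleCnt = 96000;                      // samples handed to the renderer (2.0 s)
    int64_t framePosition = 86400;                         // samples actually played (1.8 s)
    int64_t audioBufferPts = 2000000;                      // pts of the last written audio buffer, us
    int64_t anchordiff = 5000;                             // (now - renderer timestamp), us

    // Audio that is written but not yet played is the latency; subtracting it from
    // the written pts gives the current wall-clock position of the audio track.
    int64_t latency = (writtenSampleCnt - framePosition) * 1000 * 1000 / audioSampleRate; // 200000 us
    int64_t audioPlayedTime = audioBufferPts - latency + anchordiff;                      // 1805000 us

    int64_t videoPts = 1830000;                            // pts of the decoded video frame, us
    int64_t waitTimeUs = videoPts - audioPlayedTime;       // 25000 us ahead of the audio clock

    if (waitTimeUs < WAIT_TIME_US_THRESHOLD_WARNING) {
        std::printf("frame is more than 40 ms late: drop it\n");
    } else {
        if (waitTimeUs > WAIT_TIME_US_THRESHOLD) {
            waitTimeUs = WAIT_TIME_US_THRESHOLD;
        }
        if (waitTimeUs > frameInterval + PER_SINK_TIME_THRESHOLD) {
            waitTimeUs = frameInterval + PER_SINK_TIME_THRESHOLD;
        }
        std::printf("sleep %lld us, then render\n", static_cast<long long>(waitTimeUs));
    }
    return 0;
}

With these inputs the video frame is 25 ms ahead of the audio clock, so the thread sleeps 25 ms and then releases the buffer to the surface; a frame more than 40 ms behind would be dropped instead.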



For the HarmonyOS Next audio/video recording issue, the following points are directly relevant:

If you run into problems with audio/video recording on HarmonyOS Next, first check the permission settings. Make sure the app has been granted microphone and camera access, which is the precondition for any recording. These permissions can be reviewed and adjusted under app management in the system settings.

Next, review the recording code itself. Confirm that the APIs you call conform to the current HarmonyOS specifications, in particular the media recording interfaces. If you rely on third-party libraries or frameworks, make sure they have been adapted for HarmonyOS and are up to date.

Also consider device compatibility. Audio/video processing differs across devices, so verify that the test device supports the recording specification (resolution, frame rate, sample rate) you require.

Finally, check the system logs and error reports for more specific error information; this helps pinpoint the problem and decide on a fix.
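For that log step, it helps to print the return code of every recorder call with HiLog so that a filtered `hdc shell hilog` run shows exactly which call failed. A minimal sketch; the domain value and tag are arbitrary placeholders:

#include <cstdint>
#include <hilog/log.h>

// Surface recorder return codes in the system log so a tag-filtered
// `hdc shell hilog` shows which call failed and with what code.
constexpr unsigned int LOG_DOMAIN_ID = 0x0001;   // placeholder domain
constexpr const char *LOG_TAG_REC = "AVRecordDemo"; // placeholder tag

void LogRecordStartResult(int32_t ret)
{
    if (ret != 0) {
        OH_LOG_Print(LOG_APP, LOG_ERROR, LOG_DOMAIN_ID, LOG_TAG_REC,
                     "start recording failed, ret: %{public}d", ret);
    } else {
        OH_LOG_Print(LOG_APP, LOG_INFO, LOG_DOMAIN_ID, LOG_TAG_REC, "start recording ok");
    }
}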

If the recording problem persists after these checks, it may be caused by a system bug or a compatibility issue in a specific scenario. In that case, contact the official HarmonyOS developer support channels for more specialized help.

