Recording a Specific Application-Side Region as Video in HarmonyOS Next

[Problem description]: We want to record video of a specific region on the application side. Our scenario is infrared thermometry with a UVC camera: the watermark, icons, and other overlays are drawn by the application, while the video stream itself is rendered by the underlying UVC layer. We now need a recording feature that captures the entire video picture plus the watermark within a red-frame region.

[Symptom]: The official documentation on this topic is very sparse, and the feature appears to be available only from the native layer. Is there a relevant demo we can refer to? Reference: "Screen capture supports rectangular-region recording" - Recording - Media Development Guide (C/C++) - Media Kit - Media, HarmonyOS Developer

[Version]: N/A

[Reproduction code]: N/A

[Attempted solutions]: N/A


More hands-on tutorials on recording a specific application-side region in HarmonyOS Next are available at https://www.itying.com/category-93-b0.html

3 Replies

[Background]

[Solution]

See "Implementing screen recording with AVScreenCapture", which provides a demo of native screen recording.

In src/main/cpp/CAVScreenCaptureToFile/CAVScreenCaptureToFile.cpp, call the OH_AVScreenCapture_SetCaptureArea() interface with the rectangle you want to record; this restricts the recording to that rectangular region.

/*
 * Copyright (c) 2025 Huawei Device Co., Ltd.
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
#include "CAVScreenCaptureToFile.h"
#include "hilog/log.h"
#include <js_native_api.h>
#include <multimedia/player_framework/native_avscreen_capture.h>
#include <multimedia/player_framework/native_avscreen_capture_errors.h>
#include <node_api.h>
#include <thread>

#undef LOG_DOMAIN
#undef LOG_TAG
#define LOG_DOMAIN 0x3200
#define LOG_TAG "MY_CAVSCREENCAPTURE"

bool m_IsRunning = false;
OH_AVScreenCapture *g_avCapture_ = nullptr;
napi_threadsafe_function tsFn = nullptr;


void CAVScreenCaptureToFile::StopScreenCaptureRecording(struct OH_AVScreenCapture *capture) {
    if (m_IsRunning && capture != nullptr) {
        OH_AVScreenCapture_StopScreenRecording(capture);
        m_IsRunning = false;
        OH_LOG_INFO(LOG_APP, "CAVScreenCaptureToFile ScreenCapture StopScreenCapture");
    }
}

/*
 * Screen recording Error callback
 */
void CAVScreenCaptureToFile::OnErrorSaveFile(OH_AVScreenCapture *capture, int32_t errorCode, void *userData) {
    (void)capture;
    OH_LOG_INFO(LOG_APP, "CAVScreenCaptureToFile ScreenCapture OnError errorCode is %{public}d", errorCode);
    (void)userData;
}

void CAVScreenCaptureToFile::ReleaseSCWorker(struct OH_AVScreenCapture *capture) {
    OH_LOG_INFO(LOG_APP, "CAVScreenCaptureToFile ScreenCapture ReleaseSCInstanceWorker S");
    OH_AVScreenCapture_Release(capture);
    m_IsRunning = false;
    g_avCapture_ = nullptr;
    OH_LOG_INFO(LOG_APP, "CAVScreenCaptureToFile ScreenCapture ReleaseSCInstanceWorker E");
}

/*
 * Screen recording state change callback
 */
void CAVScreenCaptureToFile::OnStateChangeSaveFile(struct OH_AVScreenCapture *capture,
                                                   OH_AVScreenCaptureStateCode stateCode, void *userData) {
    (void)capture;
    switch (stateCode) {
    case OH_SCREEN_CAPTURE_STATE_STARTED: {
        OH_LOG_INFO(LOG_APP, "CAVScreenCaptureToFile ScreenCapture OnStateChange OH_SCREEN_CAPTURE_STATE_STARTED");
        break;
    }

    case OH_SCREEN_CAPTURE_STATE_CANCELED: {
        OH_LOG_INFO(LOG_APP, "CAVScreenCaptureToFile ScreenCapture OnStateChange OH_SCREEN_CAPTURE_STATE_CANCELED ");
        StopScreenCaptureRecording(capture);
        break;
    }
    case OH_SCREEN_CAPTURE_STATE_STOPPED_BY_CALL: {
        OH_LOG_INFO(LOG_APP,
                    "CAVScreenCaptureToFile ScreenCapture OnStateChange OH_SCREEN_CAPTURE_STATE_STOPPED_BY_CALL");
        break;
    }
    case OH_SCREEN_CAPTURE_STATE_MIC_UNAVAILABLE: {
        OH_LOG_INFO(LOG_APP,
                    "CAVScreenCaptureToFile ScreenCapture OnStateChange OH_SCREEN_CAPTURE_STATE_MIC_UNAVAILABLE");
        break;
    }
    case OH_SCREEN_CAPTURE_STATE_INTERRUPTED_BY_OTHER: {
        OH_LOG_INFO(LOG_APP,
                    "CAVScreenCaptureToFile ScreenCapture OnStateChange OH_SCREEN_CAPTURE_STATE_INTERRUPTED_BY_OTHER");
        break;
    }
    case OH_SCREEN_CAPTURE_STATE_MIC_MUTED_BY_USER: {
        OH_LOG_INFO(LOG_APP,
                    "CAVScreenCaptureToFile ScreenCapture OnStateChange OH_SCREEN_CAPTURE_STATE_MIC_MUTED_BY_USER");
        break;
    }

    case OH_SCREEN_CAPTURE_STATE_MIC_UNMUTED_BY_USER: {
        OH_LOG_INFO(LOG_APP,
                    "CAVScreenCaptureToFile ScreenCapture OnStateChange OH_SCREEN_CAPTURE_STATE_MIC_UNMUTED_BY_USER");
        break;
    }

    case OH_SCREEN_CAPTURE_STATE_ENTER_PRIVATE_SCENE: {
        OH_LOG_INFO(LOG_APP,
                    "CAVScreenCaptureToFile ScreenCapture OnStateChange OH_SCREEN_CAPTURE_STATE_ENTER_PRIVATE_SCENE");
        std::thread releaseSCInstanceThread(ReleaseSCWorker, capture);
        releaseSCInstanceThread.detach();
        break;
    }

    case OH_SCREEN_CAPTURE_STATE_EXIT_PRIVATE_SCENE: {
        OH_LOG_INFO(LOG_APP,
                    "CAVScreenCaptureToFile ScreenCapture OnStateChange OH_SCREEN_CAPTURE_STATE_EXIT_PRIVATE_SCENE");
        break;
    }

    case OH_SCREEN_CAPTURE_STATE_STOPPED_BY_USER: {
        napi_acquire_threadsafe_function(tsFn);
        napi_call_threadsafe_function(tsFn, nullptr, napi_tsfn_nonblocking);
        napi_release_threadsafe_function(tsFn, napi_tsfn_release);
        tsFn = nullptr;
        OH_LOG_INFO(LOG_APP,
                    "CAVScreenCaptureToFile ScreenCapture OnStateChange OH_SCREEN_CAPTURE_STATE_STOPPED_BY_USER");
        std::thread releaseSCInstanceThread(ReleaseSCWorker, capture);
        releaseSCInstanceThread.detach();
        break;
    }

    default:
        break;
    }
    
    (void)userData;
}

/*
 * Configuration parameters
 */
void CAVScreenCaptureToFile::SetConfigAsFile(OH_AVScreenCaptureConfig &config, int32_t videoWidth,
                                             int32_t videoHeight) {
    OH_AudioCaptureInfo micCapInfo = {.audioSampleRate = 48000, .audioChannels = 2, .audioSource = OH_SOURCE_DEFAULT};
    OH_AudioCaptureInfo innerCapInfo = {.audioSampleRate = 48000, .audioChannels = 2, .audioSource = OH_ALL_PLAYBACK};
    OH_AudioEncInfo audioEncInfo = {.audioBitrate = 96000, .audioCodecformat = OH_AudioCodecFormat::OH_AAC_LC};
    OH_AudioInfo audioInfo = {.micCapInfo = micCapInfo, .innerCapInfo = innerCapInfo, .audioEncInfo = audioEncInfo};

    OH_VideoCaptureInfo videoCapInfo = {
        .videoFrameWidth = videoWidth, .videoFrameHeight = videoHeight, .videoSource = OH_VIDEO_SOURCE_SURFACE_RGBA};
    OH_VideoEncInfo videoEncInfo = {
        .videoCodec = OH_VideoCodecFormat::OH_H264, .videoBitrate = 10000000, .videoFrameRate = 30};
    OH_VideoInfo videoInfo = {.videoCapInfo = videoCapInfo, .videoEncInfo = videoEncInfo};

    config = {
        .captureMode = OH_CAPTURE_HOME_SCREEN,
        .dataType = OH_ORIGINAL_STREAM,
        .audioInfo = audioInfo,
        .videoInfo = videoInfo,
        .recorderInfo = {},
    };
}

napi_value CAVScreenCaptureToFile::StopScreenCaptureToFile(napi_env env, napi_callback_info info) {
    (void)info;
    OH_AVSCREEN_CAPTURE_ErrCode result = AV_SCREEN_CAPTURE_ERR_OPERATE_NOT_PERMIT;
    napi_value res;

    if (m_IsRunning && g_avCapture_ != nullptr) {
        OH_LOG_INFO(LOG_APP, "CAVScreenCaptureToFile ScreenCapture File Stop");
        result = OH_AVScreenCapture_StopScreenRecording(g_avCapture_);
        if (result != AV_SCREEN_CAPTURE_ERR_OK) {
            OH_LOG_ERROR(
                LOG_APP,
                "CAVScreenCaptureToFile StopScreenCapture OH_AVScreenCapture_StopScreenRecording Result: %{public}d",
                result);
        } else {
            OH_LOG_INFO(LOG_APP, "CAVScreenCaptureToFile StopScreenCapture OH_AVScreenCapture_StopScreenRecording");
        }
        result = OH_AVScreenCapture_Release(g_avCapture_);
        if (result != AV_SCREEN_CAPTURE_ERR_OK) {
            OH_LOG_ERROR(LOG_APP, "CAVScreenCaptureToFile StopScreenCapture OH_AVScreenCapture_Release: %{public}d",
                         result);
        } else {
            OH_LOG_INFO(LOG_APP, "CAVScreenCaptureToFile OH_AVScreenCapture_Release success");
        }
        m_IsRunning = false;
        g_avCapture_ = nullptr;
    }
    napi_create_int32(env, result, &res);
    return res;
}

napi_value CAVScreenCaptureToFile::StartScreenCaptureToFile(napi_env env, napi_callback_info info) {
    size_t argc = 3;
    napi_value args[3] = {nullptr};
    napi_get_cb_info(env, info, &argc, args, nullptr, nullptr);

    int32_t outputFd, videoWidth, videoHeight;
    napi_get_value_int32(env, args[0], &outputFd);
    napi_get_value_int32(env, args[1], &videoWidth);
    napi_get_value_int32(env, args[2], &videoHeight);
    OH_LOG_INFO(LOG_APP, "CAVScreenCaptureToFile FD %{public}d", outputFd);
    if (outputFd <= 0) {
        OH_LOG_ERROR(LOG_APP, "CAVScreenCaptureToFile FD ERROR  %{public}d", outputFd);
        napi_value res;
        napi_create_int32(env, -1, &res);
        return res;
    }

    if (g_avCapture_ != nullptr) {
        StopScreenCaptureRecording(g_avCapture_);
        OH_AVScreenCapture_Release(g_avCapture_);
    }
    g_avCapture_ = OH_AVScreenCapture_Create();
    if (g_avCapture_ == nullptr) {
        OH_LOG_ERROR(LOG_APP, "CAVScreenCaptureToFile create screen capture failed");
        napi_value res;
        napi_create_int32(env, -1, &res);
        return res;
    }
    OH_AVScreenCaptureConfig config_;
    OH_RecorderInfo recorderInfo;

    std::string fileUrl = "fd://" + std::to_string(outputFd);
    recorderInfo.url = const_cast<char *>(fileUrl.c_str());
    recorderInfo.fileFormat = OH_ContainerFormatType::CFT_MPEG_4;
    OH_LOG_INFO(LOG_APP, "CAVScreenCaptureToFile ScreenCapture fileUrl %{public}s", fileUrl.c_str());

    SetConfigAsFile(config_, videoWidth, videoHeight);
    config_.captureMode = OH_CAPTURE_HOME_SCREEN;
    config_.dataType = OH_CAPTURE_FILE;
    config_.recorderInfo = recorderInfo;

    bool isMicrophone = true;
    OH_AVScreenCapture_SetMicrophoneEnabled(g_avCapture_, isMicrophone);
    OH_AVScreenCapture_SetErrorCallback(g_avCapture_, OnErrorSaveFile, nullptr);
    OH_AVScreenCapture_SetStateCallback(g_avCapture_, OnStateChangeSaveFile, nullptr);
    OH_AVScreenCapture_SetCanvasRotation(g_avCapture_, true);

    OH_AVSCREEN_CAPTURE_ErrCode result = OH_AVScreenCapture_Init(g_avCapture_, config_);
    if (result != AV_SCREEN_CAPTURE_ERR_OK) {
        OH_LOG_ERROR(LOG_APP, "CAVScreenCaptureToFile ScreenCapture OH_AVScreenCapture_Init failed %{public}d", result);
    } else {
        OH_LOG_INFO(LOG_APP, "CAVScreenCaptureToFile ScreenCapture OH_AVScreenCapture_Init succ %{public}d", result);
    }

    // 1. Optional: set the coordinates and size of the region to capture. The
    //    example below selects a 400 x 400 rectangle whose top-left corner is at (0, 0).
    OH_Rect *region = new OH_Rect;
    region->x = 0;
    region->y = 0;
    region->width = 400;
    region->height = 400;
    // 2. Pass the ID of the display that contains the rectangle.
    uint64_t regionDisplayId = 0;
    OH_AVScreenCapture_SetCaptureArea(g_avCapture_, regionDisplayId, region);

    result = OH_AVScreenCapture_StartScreenRecording(g_avCapture_);
    if (result != AV_SCREEN_CAPTURE_ERR_OK) {
        OH_LOG_ERROR(LOG_APP, "CAVScreenCaptureToFile ScreenCapture Start failed %{public}d", result);
        OH_AVScreenCapture_Release(g_avCapture_);
        g_avCapture_ = nullptr;
    } else {
        OH_LOG_INFO(LOG_APP, "CAVScreenCaptureToFile ScreenCapture Started succ %{public}d", result);
        m_IsRunning = true;
    }
    napi_value res;
    napi_create_int32(env, result, &res);
    return res;
}

void CAVScreenCaptureToFile::SetStopFlag(napi_env env, napi_value jsCb, void *context, void *data) {
    if (env == nullptr) {
        return;
    }
    napi_value res;
    napi_call_function(env, nullptr, jsCb, 0, nullptr, &res);
}

/*
 * Set the callback function after stopping
 */
napi_value CAVScreenCaptureToFile::SetStopCallbackToFile(napi_env env, napi_callback_info info) {
    size_t argc = 1;
    napi_value args[1] = {nullptr};
    napi_get_cb_info(env, info, &argc, args, nullptr, nullptr);

    napi_value resourceName = nullptr;
    napi_create_string_utf8(env, "ThreadSafeSetStopFlag", NAPI_AUTO_LENGTH, &resourceName);
    napi_create_threadsafe_function(env, args[0], nullptr, resourceName, 0, 1, nullptr, nullptr, nullptr, SetStopFlag,
                                    &tsFn);
    return nullptr;
}



In HarmonyOS Next, application-side recording of a specific region can be implemented with the AVRecorder API. The developer configures the recording parameters, including the video source type, output format, encoding profile, and frame rate. A drawing surface for the target region is created via a SurfaceProvider and bound to the AVRecorder to capture the picture. Recording supports live preview and can be started, paused, resumed, and stopped. The resulting video file is saved to the application sandbox path for later processing or sharing.

Regarding your requirement to record a specific application-side region in HarmonyOS Next, and given your UVC-camera infrared-thermometry scenario, the core idea is to capture and composite the application-layer UI together with the camera video stream. The official documentation focuses on the basic native-layer capability; to meet your requirement, the key is to composite and record the picture at the application layer.

The main technical path and key points are as follows:

  1. Core approach: offscreen rendering and picture compositing. You need to composite two picture sources:

    • Camera video stream: obtain the raw video frames (PixelMap / ArrayBuffer) via UVCCameraKit.
    • Application-layer UI (watermark, icons, red frame): drawn on a Canvas / CustomNode.

    It is recommended to create an offscreen rendering area with XComponent / OffscreenCanvas. In that area, first draw the camera video frame as the background, then overlay your UI elements (watermark, red frame, etc.) at the corresponding coordinates. This composited picture is the final picture you need to record.

  2. Recording: use MediaRecorder. Feed the composited frames to the MediaRecorder API for encoding and recording. Key steps:

    • Configure the video source: when creating the VideoRecorderConfig, the video source should be the data stream of your composited picture. Convert each composited frame from the offscreen canvas into a format MediaRecorder accepts (such as Image / ArrayBuffer) and hand it to the recorder via a callback or a Surface.
    • Set the recording region: the "red frame region" you mention is a part of the full composited picture. Draw and crop to the red-frame boundary during compositing, so that the output size of the offscreen canvas is exactly the size of the red-frame region. MediaRecorder then records content that is already cropped to the target region, with no extra region configuration needed at the recorder level.
  3. Performance and references

    • Performance: picture compositing and frame-format conversion are CPU/GPU-intensive; run them in a Worker thread to avoid blocking the UI, and keep the recording resolution and frame rate in check to balance quality against performance.
    • References: there may be no public demo that matches this exactly; focus on the MediaRecorder development guide and the XComponent/Canvas offscreen-rendering samples. What you are building is essentially recording from a custom video source: the application generates every frame, rather than directly recording the system screen or a single camera view.
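The cropping rule in point 2 can be sketched as a small pure helper. This is an illustrative sketch only; the `Rect` type and `ClampCaptureRect` name are invented here, not part of any HarmonyOS API. It clamps the red-frame rectangle to the source picture and rounds the dimensions down to even values, which common H.264/H.265 encoders require:

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical rectangle type standing in for the red-frame region.
struct Rect {
    int32_t x;
    int32_t y;
    int32_t width;
    int32_t height;
};

// Clamp the red-frame rect to the source picture bounds and round the
// dimensions down to even values for the video encoder.
Rect ClampCaptureRect(Rect frame, int32_t srcWidth, int32_t srcHeight) {
    Rect r;
    r.x = std::clamp(frame.x, 0, srcWidth);
    r.y = std::clamp(frame.y, 0, srcHeight);
    // Shrink the rectangle so it stays inside the source picture.
    r.width = std::min(frame.width, srcWidth - r.x);
    r.height = std::min(frame.height, srcHeight - r.y);
    // Round down to even dimensions (encoder alignment).
    r.width &= ~1;
    r.height &= ~1;
    return r;
}
```

Running this once when the user finishes dragging the red frame, and sizing the offscreen canvas from the result, avoids encoder initialization failures from odd or out-of-bounds dimensions.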

In summary, you need to build a custom compositing render pipeline that merges the UVC video stream with the application UI in the specified region and feeds the resulting frames to MediaRecorder. This can be done entirely at the application layer (ArkTS), with no need for complex native-layer screen-capture logic.
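For intuition, the per-pixel work the compositing step performs when drawing the watermark layer over a camera frame is a standard "source over" alpha blend. The `BlendOver` helper below is an illustrative sketch (the name is invented here); it merges one 8-bit channel of a UI pixel onto the video pixel behind it:

```cpp
#include <cstdint>

// Source-over blend of one 8-bit channel: out = src*a + dst*(1 - a),
// computed in integer arithmetic with rounding. srcAlpha is 0-255.
uint8_t BlendOver(uint8_t src, uint8_t dst, uint8_t srcAlpha) {
    return static_cast<uint8_t>(
        (src * srcAlpha + dst * (255 - srcAlpha) + 127) / 255);
}
```

An offscreen canvas does this for every channel of every pixel where the UI layer is non-transparent, which is why running the compositing loop on a Worker thread (or letting the GPU do it) matters at 30 fps.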
