HarmonyOS Next camera video recording: how can I tap to start recording, keep the camera resources alive after stopping, and tap again to start a new recording?

// Model task interface
export interface ModelTask {
  accessKey: string;
  bucket: string;
  coverPath: string;
  inputPath: string;
  region: string;
  secretKey: string;
  sessionToken: string;
  taskId: string;
}

// Camera resource management interface
export interface CameraResources {
  cameraManager?: camera.CameraManager;
  videoOutput?: camera.VideoOutput;
  captureSession?: camera.Session;
  cameraInput?: camera.CameraInput;
  previewOutput?: camera.PreviewOutput;
  avRecorder?: media.AVRecorder;
}

// Options for the exit-confirmation dialog
export interface ExitDialogOptions {
  onConfirm: () => void;
  onCancel: () => void;
}

// Parameter class for the exit-confirmation dialog
export class ExitDialogParams {
  onConfirm: () => void;
  onCancel: () => void;

  constructor(options: ExitDialogOptions) {
    this.onConfirm = options.onConfirm;
    this.onCancel = options.onCancel;
  }
}

// Observer interface definition
// The role of the observer pattern in MVVM
// The observer pattern lets one object (the observer) subscribe to state changes of another object (the observable/subject). When the observable's state changes, it notifies all of its observers. In your code:
// the Model layer (CaptureModel) is the observable/subject,
// the ViewModel layer (CaptureViewModel) is the observer and implements the CaptureModelObserverIntf interface.

// Decoupling: the Model does not need to know who is observing it; it only sends notifications. The ViewModel does not need to poll the Model's state; it simply receives notifications.
// Timeliness: when the Model's state changes it can notify the ViewModel immediately, keeping the UI state up to date.
// One-to-many: a single Model can have multiple observers, which makes the design easy to extend.
// Fits the MVVM architecture: it keeps a clear boundary between Model and ViewModel and follows the unidirectional data flow principle.
// This observer pattern is a common way to implement data communication in the ArkTS MVVM architecture on HarmonyOS, and it is particularly well suited to asynchronous operations (such as upload progress) and continuously changing state (such as recording time).

export class CaptureModelObserverIntf {
  // OnUploadProgress(progress: number): void
  //   Purpose: notify the observer when the upload progress changes
  // Parameter: progress - the current upload progress as a percentage (0-100)
  // Usage: while the video is being uploaded to the cloud, the Model layer calls this method periodically so the ViewModel can update the progress bar in the UI
  OnUploadProgress(progress: number): void {
  }

  // OnUploadComplete(): void
  //   Purpose: notify the observer when the upload has finished
  // Parameters: none
  // Usage: after a successful upload, the Model layer calls this method so the ViewModel can close the progress dialog, show a success message, navigate to another page, and so on
  OnUploadComplete(): void {
  }

  // OnUploadError(error: Error): void
  //   Purpose: notify the observer when an error occurs during the upload
  // Parameter: error - an Error object carrying the error information
  // Usage: when the upload fails, the Model layer calls this method so the ViewModel can show an error message, retry, or run other error-handling logic
  OnUploadError(error: Error): void {
  }

  // OnRecordTimeUpdate(seconds: number): void
  //   Purpose: notify the observer when the recording time changes
  // Parameter: seconds - the number of seconds recorded so far
  // Usage: during recording, the Model layer calls this method so the ViewModel can update the recording time shown in the UI
  OnRecordTimeUpdate(seconds: number): void {
  }
}
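
// A minimal usage sketch of the observer pattern described above: the class name
// CaptureViewModel and its fields are hypothetical and only illustrate how a ViewModel
// might implement CaptureModelObserverIntf and register itself with the Model.
export class CaptureViewModel extends CaptureModelObserverIntf {
  uploadProgress: number = 0;
  recordSeconds: number = 0;

  constructor() {
    super();
    // Subscribe to notifications from the Model
    CaptureModel.getInstance().registerObserver(this);
  }

  OnUploadProgress(progress: number): void {
    // Update the value bound to the progress indicator in the UI
    this.uploadProgress = progress;
  }

  OnRecordTimeUpdate(seconds: number): void {
    // Update the value bound to the recording timer in the UI
    this.recordSeconds = seconds;
  }
}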

export class CaptureModel {
  private static instance: CaptureModel;
  private observers: CaptureModelObserverIntf[] = [];
  private context: common.UIAbilityContext | null = null;
  private videoUri: string = '';
  private url: string = '';
  private coverPath: string = '';
  private cameraResources: CameraResources = {};

  private constructor() {}

  public static getInstance(): CaptureModel {
    if (!CaptureModel.instance) {
      CaptureModel.instance = new CaptureModel();
    }
    return CaptureModel.instance;
  }

  public setContext(context: common.UIAbilityContext): void {
    this.context = context;
  }

  public registerObserver(observer: CaptureModelObserverIntf): void {
    this.observers.push(observer);
  }

  public unregisterObserver(observer: CaptureModelObserverIntf): void {
    const index = this.observers.indexOf(observer);
    if (index !== -1) {
      this.observers.splice(index, 1);
    }
  }

  // Create the video file and obtain its URI
  public initVideoFile(): string {
    // Local sandbox path
    const path = `${this.context?.tempDir}/VIDEO_${new Date().getTime()}.mp4`;
    let file = FileUtil.createOrOpen(path);
    this.url = 'fd://' + file.fd;
    this.videoUri = fileUri.getUriFromPath(path);
    return this.videoUri;
  }

  // Initialize camera resources
  public async initCamera(surfaceId: string): Promise<boolean> {
    try {
      this.cameraResources.cameraManager = camera.getCameraManager(this.context);
      if (!this.cameraResources.cameraManager) {
        LogUtil.error(TAG, 'camera.getCameraManager error');
        return false;
      }

      this.cameraResources.cameraManager.on(
        'cameraStatus',
        (err: BusinessError, cameraStatusInfo: camera.CameraStatusInfo) => {
          LogUtil.info(TAG, `camera : ${cameraStatusInfo.camera.cameraId}`);
          LogUtil.info(TAG, `status:  ${cameraStatusInfo.status}`);
        });

      let cameraArray: Array<camera.CameraDevice> = [];
      try {
        cameraArray = this.cameraResources.cameraManager.getSupportedCameras();
      } catch (error) {
        let err = error as BusinessError;
        LogUtil.error(TAG, `getSupportedCameras call failed. error code: ${err.code}`);
        return false;
      }

      if (cameraArray.length <= 0) {
        LogUtil.error(TAG, 'cameraManager.getSupportedCameras error');
        return false;
      }

      let cameraOutputCap: camera.CameraOutputCapability =
        this.cameraResources.cameraManager.getSupportedOutputCapability(cameraArray[0], camera.SceneMode.NORMAL_VIDEO);
      if (!cameraOutputCap) {
        LogUtil.error(TAG, 'cameraManager.getSupportedOutputCapability error');
        return false;
      }

      // Get the supported video profiles
      let videoProfilesArray: Array<camera.VideoProfile> = cameraOutputCap.videoProfiles;
      if (!videoProfilesArray) {
        LogUtil.error(TAG, 'createOutput videoProfilesArray === null || undefined');
        return false;
      }

      let videoSize: camera.Size = {
        width: 1920,
        height: 1080
      }
      let videoProfile: undefined | camera.VideoProfile = videoProfilesArray.find(
        (profile: camera.VideoProfile) => {
          return profile.size.width === videoSize.width && profile.size.height === videoSize.height;
        });

      if (!videoProfile) {
        LogUtil.error(TAG, 'videoProfile is not found');
        return false;
      }

      // Configure the recorder
      let aVRecorderProfile: media.AVRecorderProfile = {
        audioBitrate: 48000,
        audioChannels: 2,
        audioCodec: media.CodecMimeType.AUDIO_AAC,
        audioSampleRate: 48000,
        fileFormat: media.ContainerFormatType.CFT_MPEG_4,
        videoBitrate: 8000000,
        videoCodec: media.CodecMimeType.VIDEO_AVC,
        videoFrameWidth: videoSize.width,
        videoFrameHeight: videoSize.height,
        videoFrameRate: 30
      };

      let aVRecorderConfig: media.AVRecorderConfig = {
        audioSourceType: media.AudioSourceType.AUDIO_SOURCE_TYPE_MIC,
        videoSourceType: media.VideoSourceType.VIDEO_SOURCE_TYPE_SURFACE_YUV,
        profile: aVRecorderProfile,
        url: this.url,
        rotation: 90,
        location: {
          latitude: 30,
          longitude: 130
        }
      };

      // Create the recorder
      try {
        this.cameraResources.avRecorder = await media.createAVRecorder();
      } catch (error) {
        let err = error as BusinessError;
        LogUtil.error(TAG, `createAVRecorder call failed. error code: ${err.code}`);
        return false;
      }

      // Prepare the recorder
      try {
        await this.cameraResources.avRecorder.prepare(aVRecorderConfig);
      } catch (error) {
        let err = error as BusinessError;
        LogUtil.error(TAG, `prepare call failed. error code: ${err.code}`);
        return false;
      }

      // Get the video input surface
      let videoSurfaceId: string | undefined = undefined;
      try {
        videoSurfaceId = await this.cameraResources.avRecorder.getInputSurface();
      } catch (error) {
        let err = error as BusinessError;
        LogUtil.error(TAG, `getInputSurface call failed. error code: ${err.code}`);
        return false;
      }

      if (videoSurfaceId === undefined) {
        return false;
      }

      // Create the video output
      try {
        this.cameraResources.videoOutput =
          this.cameraResources.cameraManager.createVideoOutput(videoProfile, videoSurfaceId);
      } catch (error) {
        let err = error as BusinessError;
        LogUtil.error(TAG, `Failed to create the videoOutput instance. error: ${JSON.stringify(err)}`);
        return false;
      }

      this.cameraResources.videoOutput.on('frameStart', () => {
        LogUtil.info(TAG, 'Video frame started');
      });

      this.cameraResources.videoOutput.on('error', (error: BusinessError) => {
        LogUtil.info(TAG, `Video output error code: ${error.code}`);
      });

      // Create the session
      try {
        this.cameraResources.captureSession =
          this.cameraResources.cameraManager.createSession(camera.SceneMode.NORMAL_VIDEO) as camera.VideoSession;
      } catch (error) {
        let err = error as BusinessError;
        LogUtil.error(TAG, `Failed to create the CaptureSession instance. errorCode = ${err.code}`);
        return false;
      }

      // Begin configuration
      try {
        this.cameraResources.captureSession.beginConfig();
      } catch (error) {
        let err = error as BusinessError;
        LogUtil.error(TAG, `Failed to beginConfig. errorCode = ${err.code}`);
        return false;
      }

      // Create the camera input
      try {
        this.cameraResources.cameraInput = this.cameraResources.cameraManager.createCameraInput(cameraArray[0]);
      } catch (error) {
        let err = error as BusinessError;
        LogUtil.error(TAG, `Failed to createCameraInput. error: ${JSON.stringify(err)}`);
        return false;
      }

      this.cameraResources.cameraInput.on('error', cameraArray[0], (error: BusinessError) => {
        LogUtil.info(TAG, `Camera input error code: ${error.code}`);
      });

      // Open the camera
      try {
        await this.cameraResources.cameraInput.open();
      } catch (error) {
        let err = error as BusinessError;
        LogUtil.error(TAG, `Failed to open cameraInput. error: ${JSON.stringify(err)}`);
        return false;
      }

      // Add the input to the session
      try {
        this.cameraResources.captureSession.addInput(this.cameraResources.cameraInput);
      } catch (error) {
        let err = error as BusinessError;
        LogUtil.error(TAG, `Failed to add cameraInput. error: ${JSON.stringify(err)}`);
        return false;
      }

      // Create the preview output
      try {
        this.cameraResources.previewOutput =
          this.cameraResources.cameraManager.createPreviewOutput(videoProfile, surfaceId);
      } catch (error) {
        let err = error as BusinessError;
        LogUtil.error(TAG, `Failed to create the PreviewOutput instance. error: ${JSON.stringify(err)}`);
        return false;
      }

      // Add the preview output to the session
      try {
        this.cameraResources.captureSession.addOutput(this.cameraResources.previewOutput);
      } catch (error) {
        let err = error as BusinessError;
        LogUtil.error(TAG, `Failed to add previewOutput. error: ${JSON.stringify(err)}`);
        return false;
      }

      // Add the video output to the session
      try {
        this.cameraResources.captureSession.addOutput(this.cameraResources.videoOutput);
      } catch (error) {
        let err = error as BusinessError;
        LogUtil.error(TAG, `Failed to add videoOutput. error: ${JSON.stringify(err)}`);
        return false;
      }

      // Commit the configuration
      try {
        await this.cameraResources.captureSession.commitConfig();
      } catch (error) {
        let err = error as BusinessError;
        LogUtil.error(TAG, `captureSession commitConfig error: ${JSON.stringify(err)}`);
        return false;
      }

      // Start the session
      try {
        await this.cameraResources.captureSession.start();
      } catch (error) {
        let err = error as BusinessError;
        LogUtil.error(TAG, `captureSession start error: ${JSON.stringify(err)}`);
        return false;
      }

      // Start the video output
      this.cameraResources.videoOutput.start((err: BusinessError) => {
        if (err) {
          LogUtil.error(TAG, `Failed to start the video output. error: ${JSON.stringify(err)}`);
          return;
        }
        LogUtil.info(TAG, 'Callback invoked to indicate the video output start success.');
      });

      return true;
    } catch (error) {
      LogUtil.error(TAG, `initCamera error: ${JSON.stringify(error)}`);
      return false;
    }
  }

  // Start recording
  public async startRecord(): Promise<boolean> {
    try {
      if (this.cameraResources.avRecorder) {
        await this.cameraResources.avRecorder.start();
        return true;
      }
      return false;
    } catch (error) {
      let err = error as BusinessError;
      LogUtil.error(TAG, `avRecorder start error: ${JSON.stringify(err)}`);
      return false;
    }
  }

  // Stop recording
  public async stopRecord(): Promise<boolean> {
    if (!this.cameraResources.avRecorder) {
      return false;
    }

    try {
      // Stop the video output stream
      if (this.cameraResources.videoOutput) {
        this.cameraResources.videoOutput.stop((err: BusinessError) => {
          if (err) {
            LogUtil.error(TAG, `Failed to stop the video output. error: ${JSON.stringify(err)}`);
            return;
          }
          LogUtil.info(TAG, 'Callback invoked to indicate the video output stop success.');
        });
      }

      // Stop and release the recorder
      await this.cameraResources.avRecorder.stop();
      await this.cameraResources.avRecorder.release();

      // Stop the current session
      if (this.cameraResources.captureSession) {
        this.cameraResources.captureSession.stop();
      }

      // Release the camera input stream
      if (this.cameraResources.cameraInput) {
        this.cameraResources.cameraInput.close();
      }

      // Release the preview output stream
      if (this.cameraResources.previewOutput) {
        this.cameraResources.previewOutput.release();
      }

      // Release the video output stream
      if (this.cameraResources.videoOutput) {
        this.cameraResources.videoOutput.release();
      }

      // Release the session
      if (this.cameraResources.captureSession) {
        this.cameraResources.captureSession.release();
      }

      // Clear the session reference
      this.cameraResources.captureSession = undefined;

      return true;
    } catch (error) {
      let err = error as BusinessError;
      LogUtil.error(TAG, `avRecorder stop error: ${JSON.stringify(err)}`);
      return false;
    }
  }

  // Create a model task
  public async createModelTask(): Promise<ModelTask> {
    const response = await session.post("/new/v1/app/calculateNerf/add", {
      property: 2,
      aiTrain: 1,
      isVisibility: 1,
      duration: 0,
      uploadType: 1,
      scanType: 2,
    });

    const responseObject = await response.toJSON() as ResponseBody<ModelTask>;
    return responseObject.data as ModelTask;
  }

  // Upload files
  public async uploadFiles(modelTask: ModelTask): Promise<void> {

    // Internal helper function for uploading a file to S3
    const uploadToS3 = async (
      localPath: string, // local file path
      remotePath: string, // remote storage path
      mimeType: string // file MIME type
    ): Promise<void> => {
      const s3 = S3.instance;
      await s3.upload(
        localPath,
        modelTask.region,
        modelTask.bucket,
        remotePath,
        mimeType,
        modelTask.accessKey,
        modelTask.secretKey,
        modelTask.sessionToken
      );
    };

    // Extract the cover image from the video
    this.coverPath = await VideoUtil.extractCover(this.videoUri);

    // Upload the cover image
    await uploadToS3(
      this.coverPath,
      `${modelTask.coverPath}/cover.jpg`,
      "image/jpeg"
    );

    // Upload the video file
    await uploadToS3(
      this.videoUri.slice(26),
      `${modelTask.inputPath}/${modelTask.taskId}.mp4`,
      "video/mp4"
    );
  }

  // Commit the model task
  public async commitModelTask(taskId: string): Promise<ResponseBody<void>> {
    const response = await session.get(`/new/v1/app/calculateNerf/finish?taskId=${taskId}`);
    const responseBody = await response.toJSON() as ResponseBody<void>;
    return responseBody;
  }

  // Notify upload progress
  public notifyUploadProgress(progress: number): void {
    this.observers.forEach(
      (observer) => {
        // Notify all observers
        observer.OnUploadProgress(progress);
      }
    );
  }

  // Notify upload completion
  public notifyUploadComplete(): void {
    this.observers.forEach(
      (observer) => {
        observer.OnUploadComplete();
      }
    );
  }

  // Notify upload error
  public notifyUploadError(error: Error): void {
    this.observers.forEach(
      (observer) => {
        observer.OnUploadError(error);
      }
    );
  }

  // Notify recording time update
  public notifyRecordTimeUpdate(seconds: number): void {
    this.observers.forEach(
      (observer) => {
        observer.OnRecordTimeUpdate(seconds);
      }
    );
  }

  // Get the video URI
  public getVideoUri(): string {
    return this.videoUri;
  }
}

More hands-on tutorials on starting and stopping camera video recording in HarmonyOS Next without destroying the camera resources are also available at https://www.itying.com/category-93-b0.html

3 replies

In HarmonyOS Next, use CameraController to control the camera. Call startVideoRecording() to start recording, and stopVideoRecording() to stop while keeping the camera resources. RetainCaptureSession needs to be set to true. To record again, just call startVideoRecording() once more. Key code example:

// Retain the session configuration
cameraController.setRetainCaptureSession(true)
// First recording
await cameraController.startVideoRecording()
// Do not release resources when stopping
await cameraController.stopVideoRecording()
// Record again
await cameraController.startVideoRecording()

To reuse camera resources in HarmonyOS Next, the key is to avoid releasing all resources in stopRecord(). Suggested modifications:

  1. Modify stopRecord() so that it only stops recording and does not release resources:
public async stopRecord(): Promise<boolean> {
  if (!this.cameraResources.avRecorder) {
    return false;
  }

  try {
    // Stop the video output stream
    if (this.cameraResources.videoOutput) {
      this.cameraResources.videoOutput.stop((err: BusinessError) => {
        if (err) {
          LogUtil.error(TAG, `Failed to stop video output. error: ${JSON.stringify(err)}`);
        }
      });
    }

    // Stop and release the recorder
    await this.cameraResources.avRecorder.stop();
    await this.cameraResources.avRecorder.release();
    this.cameraResources.avRecorder = undefined;

    return true;
  } catch (error) {
    LogUtil.error(TAG, `stopRecord error: ${JSON.stringify(error)}`);
    return false;
  }
}
  2. Modify startRecord() to re-initialize the AVRecorder:
public async startRecord(): Promise<boolean> {
  try {
    // Re-initialize the video file
    this.initVideoFile();
    
    // Re-configure the AVRecorder
    const aVRecorderProfile = {
      // keep the original configuration
    };
    
    const aVRecorderConfig = {
      // keep the original configuration
      url: this.url
    };

    this.cameraResources.avRecorder = await media.createAVRecorder();
    await this.cameraResources.avRecorder.prepare(aVRecorderConfig);
    
    // Get a new surfaceId
    const videoSurfaceId = await this.cameraResources.avRecorder.getInputSurface();
    
    // Restart the video output
    if (this.cameraResources.videoOutput) {
      this.cameraResources.videoOutput.start();
    }
    
    await this.cameraResources.avRecorder.start();
    return true;
  } catch (error) {
    LogUtil.error(TAG, `startRecord error: ${JSON.stringify(error)}`);
    return false;
  }
}
  3. Release all resources only when fully exiting:
public async releaseAll() {
  // keep the original resource-release code here
}

With these changes, stopping a recording only releases the AVRecorder, while core resources such as the CameraManager and the Session are kept, so starting a new recording only requires re-initializing the AVRecorder.
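
A minimal sketch of such a releaseAll() method, assuming it lives in CaptureModel and mirrors the release order of the original stopRecord(); when to call it (for example when the page is destroyed or the user confirms exit) depends on your page lifecycle:

public async releaseAll(): Promise<void> {
  try {
    // Stop the current session
    if (this.cameraResources.captureSession) {
      await this.cameraResources.captureSession.stop();
    }
    // Close the camera input stream
    if (this.cameraResources.cameraInput) {
      await this.cameraResources.cameraInput.close();
      this.cameraResources.cameraInput = undefined;
    }
    // Release the preview output stream
    if (this.cameraResources.previewOutput) {
      await this.cameraResources.previewOutput.release();
      this.cameraResources.previewOutput = undefined;
    }
    // Release the video output stream
    if (this.cameraResources.videoOutput) {
      await this.cameraResources.videoOutput.release();
      this.cameraResources.videoOutput = undefined;
    }
    // Release the session
    if (this.cameraResources.captureSession) {
      await this.cameraResources.captureSession.release();
      this.cameraResources.captureSession = undefined;
    }
    // Release the recorder if it has not been released yet
    if (this.cameraResources.avRecorder) {
      await this.cameraResources.avRecorder.release();
      this.cameraResources.avRecorder = undefined;
    }
  } catch (error) {
    LogUtil.error(TAG, `releaseAll error: ${JSON.stringify(error)}`);
  }
}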
