HarmonyOS Next Camera: implementing dual-channel preview with ArkTS, and how to handle the source stream data obtained from ImageReceiver

Posted 1 week ago, by phonegap100, in HarmonyOS

When implementing dual-channel preview with ArkTS in the Camera kit, how should the source stream data obtained in ImageReceiver be processed? And how can I save the image data from the getComponent method to the photo album?
Specifically, I have the following questions:

  1. How to convert an ArrayBuffer into a PixelMap

  2. How to save the image to the photo album or to a sandbox path

2 Replies

To save an image, refer to this code:

import { image } from '@kit.ImageKit';
import fs from '@ohos.file.fs';

export async function saveToFile(pixelMap: image.PixelMap, imagePath: string): Promise<void> {
  let fd: number | null = null;
  try {
    // Encode the PixelMap into a PNG buffer.
    const imagePacker = image.createImagePacker();
    const imageBuffer = await imagePacker.packing(pixelMap, { format: 'image/png', quality: 100 });
    // Open (or create) the target file and overwrite its contents.
    const mode = fs.OpenMode.READ_WRITE | fs.OpenMode.CREATE;
    fd = (await fs.open(imagePath, mode)).fd;
    await fs.truncate(fd);
    await fs.write(fd, imageBuffer);
  } catch (err) {
    console.error('saveToFile failed: ' + JSON.stringify(err));
  } finally {
    if (fd !== null) {
      fs.close(fd);
    }
  }
}

You can try this demo:

import camera from '@ohos.multimedia.camera';
import image from '@ohos.multimedia.image';
import abilityAccessCtrl from '@ohos.abilityAccessCtrl';
import common from '@ohos.app.ability.common';
import fs from '@ohos.file.fs';
import { BusinessError } from '@kit.BasicServicesKit';
import PhotoAccessHelper from '@ohos.file.photoAccessHelper';

@Entry
@Component
struct Index3 {
  @State message: string = 'Hello World'
  private mXComponentController: XComponentController = new XComponentController();
  private surfaceId: string = '-1';
  @State imgUrl: PixelMap | undefined = undefined;
  private context: ESObject = undefined
  private previewProfilesObj2: camera.Profile | undefined = undefined;
  private receiver: image.ImageReceiver | undefined = undefined;
  @State pixma: PixelMap | undefined = undefined
  @State photoOutput: camera.PhotoOutput | undefined = undefined;

  aboutToAppear() {
    // Request the camera permission
    let context = getContext() as common.UIAbilityContext;
    abilityAccessCtrl.createAtManager().requestPermissionsFromUser(context, ['ohos.permission.CAMERA']).then(() => {
      this.createDualChannelPreview(this.surfaceId);
    });
  }

  async createDualChannelPreview(XComponentSurfaceId: string): Promise<void> {
    let cameraManager = await camera.getCameraManager(getContext() as ESObject);
    let camerasDevices: Array<camera.CameraDevice> = cameraManager.getSupportedCameras(); // Get the supported camera devices
    // Get the profile objects
    let profiles: camera.CameraOutputCapability = cameraManager.getSupportedOutputCapability(camerasDevices[0]); // Output capability of the chosen device
    let previewProfiles: Array<camera.Profile> = profiles.previewProfiles;

    // Preview stream 2
    this.previewProfilesObj2 = previewProfiles[0];
    this.receiver = image.createImageReceiver(this.previewProfilesObj2.size.width, this.previewProfilesObj2.size.height, image.ImageFormat.JPEG, 8);

    // Create the output object for preview stream 2
    let imageReceiverSurfaceId: string = await this.receiver.getReceivingSurfaceId();
    let previewOutput2: camera.PreviewOutput = cameraManager.createPreviewOutput(this.previewProfilesObj2, imageReceiverSurfaceId);

    // Create the photo capture output stream
    let photoProfilesArray: Array<camera.Profile> = profiles.photoProfiles;

    try {
      this.photoOutput = cameraManager.createPhotoOutput(photoProfilesArray[0]);
    } catch (error) {
      let err = error as BusinessError;
      console.error('Failed to createPhotoOutput errorCode = ' + err.code);
    }

    if (this.photoOutput === undefined) {
      return;
    }

    // Create the cameraInput object
    let cameraInput: camera.CameraInput = cameraManager.createCameraInput(camerasDevices[0]);

    // Open the camera
    await cameraInput.open();

    // Session flow
    let captureSession: camera.CaptureSession = cameraManager.createCaptureSession();

    // Start configuring the session
    captureSession.beginConfig();

    // Add the CameraInput to the session
    captureSession.addInput(cameraInput);

    // Add preview stream 2 to the session
    captureSession.addOutput(previewOutput2);

    try {
      captureSession.addOutput(this.photoOutput);
    } catch (error) {
      let err = error as BusinessError;
      console.error('Failed to addOutput(photoOutput). errorCode = ' + err.code);
    }

    // Commit the configuration
    await captureSession.commitConfig();

    // Start the session
    await captureSession.start();

    this.onImageArrival(this.receiver);
    this.setPhotoOutputCb(this.photoOutput);
  }

  async savePicture(buffer: ArrayBuffer, img: image.Image) {
    const context = getContext(this);

    let photoAccessHelper: PhotoAccessHelper.PhotoAccessHelper = PhotoAccessHelper.getPhotoAccessHelper(context);

    let options: PhotoAccessHelper.CreateOptions = {
      title: Date.now().toString()
    };

    // createAsset requires the ohos.permission.READ_IMAGEVIDEO and ohos.permission.WRITE_IMAGEVIDEO permissions.
    let photoUri: string = await photoAccessHelper.createAsset(PhotoAccessHelper.PhotoType.IMAGE, 'jpg', options);

    let file: fs.File = fs.openSync(photoUri, fs.OpenMode.READ_WRITE | fs.OpenMode.CREATE);
    await fs.write(file.fd, buffer);
    fs.closeSync(file);
    img.release();
  }

  setPhotoOutputCb(photoOutput: camera.PhotoOutput) {
    // After this callback is registered, calling photoOutput.capture() delivers the photo buffer here.
    photoOutput.on('photoAvailable', (errCode: BusinessError, photo: camera.Photo): void => {
      console.info('getPhoto start');
      console.info(`err: ${JSON.stringify(errCode)}`);
      if (errCode || photo === undefined) {
        console.error('getPhoto failed');
        return;
      }

      let imageObj = photo.main;
      imageObj.getComponent(image.ComponentType.JPEG, (errCode: BusinessError, component: image.Component): void => {
        console.info('getComponent start');
        if (errCode || component === undefined) {
          console.error('getComponent failed');
          return;
        }
        if (!component.byteBuffer) {
          console.error('byteBuffer is null');
          return;
        }

        let buffer: ArrayBuffer = component.byteBuffer;
        try {
          // The captured data is already JPEG, so createImageSource can decode it without SourceOptions.
          let imageSource = image.createImageSource(buffer);
          imageSource.createPixelMap({}).then(res => {
            this.pixma = res;
          });
        } catch (error) {
          let err = error as BusinessError;
          console.error('Failed to createPixelMap. errorCode = ' + err.code);
        }
        this.savePicture(buffer, imageObj);
      });
    });
  }

  async onImageArrival(receiver: image.ImageReceiver): Promise<void> {
    receiver.on('imageArrival', () => {
      console.info('imageArrival callback');
      receiver.readLatestImage((err, nextImage: image.Image) => {
        if (err || nextImage === undefined) {
          return;
        }
        nextImage.getComponent(image.ComponentType.JPEG, async (err, imgComponent: image.Component) => {
          if (err || imgComponent === undefined) {
            return;
          }
          if (!imgComponent.byteBuffer) {
            return;
          }
          this.saveImageToFile(imgComponent.byteBuffer);
          // The preview stream delivers raw NV21 frames, so SourceOptions must describe their layout.
          let sourceOptions: image.SourceOptions = {
            sourceDensity: 120,
            sourcePixelFormat: 8, // NV21
            sourceSize: {
              height: this.previewProfilesObj2!.size.height,
              width: this.previewProfilesObj2!.size.width
            }
          };
          let imageSource = image.createImageSource(imgComponent.byteBuffer, sourceOptions);
          let decodingOptions: image.DecodingOptions = {
            editable: true,
            desiredPixelFormat: 3 // RGBA_8888
          };
          imageSource.createPixelMap(decodingOptions).then(res => {
            this.imgUrl = res;
          });
          nextImage.release();
        });
      });
    });
  }

  saveImageToFile(data: ArrayBuffer) {
    const context = getContext(this);

    // Write the raw frame into the app sandbox (temp directory).
    let filePath = context.tempDir + "/test.jpg";
    console.info("path is " + filePath);

    let file = fs.openSync(filePath, fs.OpenMode.READ_WRITE | fs.OpenMode.CREATE);
    fs.write(file.fd, data, (err, writeLen) => {
      if (err) {
        console.error("write failed with error message: " + err.message + ", error code: " + err.code);
      } else {
        console.info("write data to file succeed and size is:" + writeLen);
      }
      fs.closeSync(file);
    });
  }

  build() {
    Column() {
      Row() {
        // Pass the decoded pixelMap to the state variable imgUrl and render it with an Image component.
        Image(this.imgUrl).objectFit(ImageFit.None)
      }.width('100%').height('50%').backgroundColor('#F0F0F0')

      Row() {
        Button() {
          Text("Take Photo")
            .fontColor(Color.Black)
            .alignSelf(ItemAlign.Center)
            .onClick(() => {
              let settings: camera.PhotoCaptureSetting = {
                quality: camera.QualityLevel.QUALITY_LEVEL_HIGH, // High image quality
                rotation: camera.ImageRotation.ROTATION_0, // No rotation
                mirror: false // Mirroring disabled (default)
              };
              if (this.photoOutput) {
                this.photoOutput.capture(settings, (err: BusinessError) => {
                  if (err) {
                    console.error(`Failed to capture the photo. error: ${JSON.stringify(err)}`);
                    return;
                  }
                  console.info('Callback invoked to indicate the photo capture request success.');
                });
              }
            })
        }
        .width(100)
        .height(100)

        // .Image() is not a Button attribute, so render the captured photo as a sibling component.
        Image(this.pixma)
          .width(200)
          .height(200)
      }
    }
  }
}
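One pitfall with the raw frames handled above: decoding fails if the byteBuffer size does not match what the NV21 layout implies for the declared sourceSize. Assuming the preview stream really is NV21 (as the demo's `sourcePixelFormat: 8` assumes), the expected length is the full-resolution Y plane plus a half-resolution interleaved VU plane. A minimal sanity-check sketch in plain TypeScript (the `nv21ByteLength` helper is my own name, not a system API):

```typescript
// Expected byte length of one tightly packed NV21 frame:
// Y plane (width * height) + interleaved VU plane (2 bytes per 2x2 block).
function nv21ByteLength(width: number, height: number): number {
  return width * height + 2 * Math.ceil(width / 2) * Math.ceil(height / 2);
}

// Example check before handing a buffer to createImageSource:
// if (imgComponent.byteBuffer.byteLength !== nv21ByteLength(w, h)) { /* skip frame */ }
```

Note that some devices pad each row to a stride wider than the image, in which case the actual buffer can be larger than this tightly packed size; when that happens, the rows have to be copied out stride by stride before decoding.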



When implementing dual-channel preview with ArkTS in HarmonyOS Next Camera, processing the source stream data obtained in ImageReceiver typically involves the following steps:

  1. Data reception: ImageReceiver first receives the raw image data stream from the camera. The data arrives as frames containing pixel data, timestamps, and so on.

  2. Format conversion: the raw data may not be in a directly usable format, so it needs to be converted. This can include color-space conversion (for example, YUV to RGB) and resolution adjustment.

  3. Dual-channel processing: for dual-channel preview, each received frame is routed to two paths. One path is typically used for the on-screen preview, the other for further processing (analysis, storage, and so on).

  4. Display and storage: the preview path is handed to a UI component for rendering. The other path can be processed as required, for example saved to a file or analyzed in real time.

  5. Resource management: during processing, manage resources carefully (memory usage, CPU load, releasing each image.Image after use) to keep the system stable and smooth.
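The color-space conversion in step 2 can be sketched in plain TypeScript. On-device you would normally let image.createImageSource with SourceOptions do this for you; the `nv21ToRgba` function below and its BT.601 full-range coefficients are my own illustrative assumptions, not a system API:

```typescript
// Convert a tightly packed NV21 (YVU420 semi-planar) frame to RGBA_8888.
// NV21 layout: width*height Y bytes, then interleaved V,U pairs at half resolution.
function nv21ToRgba(nv21: Uint8Array, width: number, height: number): Uint8Array {
  const frameSize = width * height;
  const rgba = new Uint8Array(frameSize * 4);
  const clamp = (v: number) => (v < 0 ? 0 : v > 255 ? 255 : v | 0);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const yIdx = y * width + x;
      // Each 2x2 block of pixels shares one V,U pair after the Y plane.
      const uvIdx = frameSize + (y >> 1) * width + (x & ~1);
      const Y = nv21[yIdx];
      const V = nv21[uvIdx] - 128;
      const U = nv21[uvIdx + 1] - 128;
      const o = yIdx * 4;
      rgba[o] = clamp(Y + 1.402 * V);                       // R
      rgba[o + 1] = clamp(Y - 0.344136 * U - 0.714136 * V); // G
      rgba[o + 2] = clamp(Y + 1.772 * U);                   // B
      rgba[o + 3] = 255;                                    // A
    }
  }
  return rgba;
}
```

A neutral-gray frame (all Y = 128, all chroma = 128) maps to gray RGBA pixels, which makes a convenient smoke test for the index math.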

