Using the Flutter Apple Vision Object Recognition Plugin apple_vision_object


apple_vision_object

apple_vision_object is a Flutter plugin that enables Flutter apps to use Apple Vision object detection.

  • This plugin is not sponsored or maintained by Apple. The author is a developer who wanted to create something similar to Google ML Kit for macOS.

Requirements

macOS

  • Minimum macOS deployment target: 10.13
  • Xcode 13 or newer
  • Swift 5
  • ML Kit only supports 64-bit architectures (x86_64 and arm64)

iOS

  • Minimum iOS deployment target: 14.0
  • Xcode 13 or newer
  • Swift 5
  • ML Kit only supports 64-bit architectures (x86_64 and arm64)
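To satisfy these minimum deployment targets, the platform lines in the CocoaPods Podfiles of a standard Flutter project would look roughly like the sketch below (the `macos/Podfile` and `ios/Podfile` locations are the Flutter defaults, assumed here rather than stated in the original):

```ruby
# macos/Podfile — minimum macOS deployment target for this plugin
platform :osx, '10.13'

# ios/Podfile — minimum iOS deployment target for this plugin
platform :ios, '14.0'
```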

Getting Started

First, import 'package:apple_vision_object/apple_vision_object.dart';

final GlobalKey cameraKey = GlobalKey(debugLabel: "cameraKey");
AppleVisionObjectController visionController = AppleVisionObjectController();
InsertCamera camera = InsertCamera();
Size imageSize = const Size(640, 640 * 9 / 16);
String? deviceId;
bool loading = true;

List<ObjectData>? objectData;
late double deviceWidth;
late double deviceHeight;

@override
void initState() {
  camera.setupCameras().then((value) {
    setState(() {
      loading = false;
    });
    camera.startLiveFeed((InputImage i) {
      if (i.metadata?.size != null) {
        imageSize = i.metadata!.size;
      }
      if (mounted) {
        Uint8List? image = i.bytes;
        visionController.processImage(image!, imageSize).then((data) {
          objectData = data;
          // print(objectData);
          setState(() {});
        });
      }
    });
  });
  super.initState();
}

@override
void dispose() {
  camera.dispose();
  super.dispose();
}

@override
Widget build(BuildContext context) {
  deviceWidth = MediaQuery.of(context).size.width;
  deviceHeight = MediaQuery.of(context).size.height;
  return Stack(
    children: [
      SizedBox(
        width: imageSize.width,
        height: imageSize.height,
        child: loading ? Container() : CameraSetup(camera: camera, size: imageSize)
      ),
    ] + showRects()
  );
}

List<Widget> showRects() {
  if (objectData == null || objectData!.isEmpty) return [];
  List<Widget> widgets = [];

  for (int i = 0; i < objectData!.length; i++) {
    // if (objectData![i].confidence > 0.5) {
      widgets.add(
        Positioned(
          top: objectData![i].object.top,
          left: objectData![i].object.left,
          child: Container(
            width: objectData![i].object.width * imageSize.width,
            height: objectData![i].object.height * imageSize.height,
            decoration: BoxDecoration(
              color: Colors.transparent,
              border: Border.all(width: 1, color: Colors.green),
              borderRadius: BorderRadius.circular(5)
            ),
            child: Text(
              '${objectData![i].label}: ${objectData![i].confidence}',
              style: const TextStyle(
                color: Colors.white,
                fontSize: 12
              ),
            )
          )
        )
      );
    //}
  }
  return widgets;
}

Widget loadingWidget() {
  return Container(
    width: deviceWidth,
    height: deviceHeight,
    color: Theme.of(context).canvasColor,
    alignment: Alignment.center,
    child: const CircularProgressIndicator(color: Colors.blue)
  );
}

Example

The complete example code can be found at: example/lib/main.dart

import 'package:apple_vision_object/apple_vision_object.dart';
import 'package:flutter/material.dart';
import '../camera/camera_insert.dart';
import 'package:flutter/foundation.dart';
import 'package:flutter/services.dart';
import 'camera/input_image.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: const VisionObject(),
    );
  }
}

class VisionObject extends StatefulWidget {
  const VisionObject({
    Key? key,
    this.onScanned
  }) : super(key: key);

  final Function(dynamic data)? onScanned; 

  @override
  _VisionObject createState() => _VisionObject();
}

class _VisionObject extends State<VisionObject> {
  final GlobalKey cameraKey = GlobalKey(debugLabel: "cameraKey");
  AppleVisionObjectController visionController = AppleVisionObjectController();
  InsertCamera camera = InsertCamera();
  Size imageSize = const Size(640, 640 * 9 / 16);
  String? deviceId;
  bool loading = true;

  List<ObjectData>? objectData;
  late double deviceWidth;
  late double deviceHeight;

  @override
  void initState() {
    camera.setupCameras().then((value) {
      setState(() {
        loading = false;
      });
      camera.startLiveFeed((InputImage i) {
        if (i.metadata?.size != null) {
          imageSize = i.metadata!.size;
        }
        if (mounted) {
          Uint8List? image = i.bytes;
          visionController.processImage(image!, imageSize).then((data) {
            objectData = data;
            // print(objectData);
            setState(() {});
          });
        }
      });
    });
    super.initState();
  }

  @override
  void dispose() {
    camera.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    deviceWidth = MediaQuery.of(context).size.width;
    deviceHeight = MediaQuery.of(context).size.height;
    return Stack(
      children: [
        SizedBox(
          width: imageSize.width,
          height: imageSize.height,
          child: loading ? Container() : CameraSetup(camera: camera, size: imageSize)
        ),
      ] + showRects()
    );
  }

  List<Widget> showRects() {
    if (objectData == null || objectData!.isEmpty) return [];
    List<Widget> widgets = [];

    for (int i = 0; i < objectData!.length; i++) {
      // if (objectData![i].confidence > 0.5) {
        widgets.add(
          Positioned(
            top: objectData![i].object.top,
            left: objectData![i].object.left,
            child: Container(
              width: objectData![i].object.width * imageSize.width,
              height: objectData![i].object.height * imageSize.height,
              decoration: BoxDecoration(
                color: Colors.transparent,
                border: Border.all(width: 1, color: Colors.green),
                borderRadius: BorderRadius.circular(5)
              ),
              child: Text(
                '${objectData![i].label}: ${objectData![i].confidence}',
                style: const TextStyle(
                  color: Colors.white,
                  fontSize: 12
                ),
              )
            )
          )
        );
      //}
    }
    return widgets;
  }

  Widget loadingWidget() {
    return Container(
      width: deviceWidth,
      height: deviceHeight,
      color: Theme.of(context).canvasColor,
      alignment: Alignment.center,
      child: const CircularProgressIndicator(color: Colors.blue)
    );
  }
}

More hands-on tutorials about using the Flutter Apple Vision object recognition plugin apple_vision_object are also available at https://www.itying.com/category-92-b0.html

1 reply


apple_vision_object is a Flutter plugin that uses Apple's Vision framework for object recognition on iOS and macOS devices. It lets developers easily integrate object recognition into a Flutter app. The basic steps for using the apple_vision_object plugin are as follows:

1. Add the dependency

First, add the apple_vision_object dependency to your pubspec.yaml file:

dependencies:
  flutter:
    sdk: flutter
  apple_vision_object: ^latest_version

Then run flutter pub get to install the dependency.

2. Import the plugin

Import the apple_vision_object plugin in your Dart file:

import 'package:apple_vision_object/apple_vision_object.dart';

3. Initialize object recognition

Object recognition is initialized and managed through the AppleVisionObjectController class, as shown in the example above.

AppleVisionObjectController visionController = AppleVisionObjectController();

4. Run object recognition

To run object recognition, provide an image (for example, a frame from the camera or an image picked from the gallery) as raw bytes, together with its size, and pass both to the controller's processImage method:

void recognizeObjects(Uint8List image, Size imageSize) async {
  try {
    List<ObjectData>? objects = await visionController.processImage(image, imageSize);

    for (var object in objects ?? []) {
      print("Detected object: ${object.label}");
      print("Confidence: ${object.confidence}");
      print("Bounding box: ${object.object}");
    }
  } catch (e) {
    print("Error recognizing objects: $e");
  }
}

5. Handle the recognition results

processImage returns a List<ObjectData>, where each ObjectData contains:

  • label: the label of the detected object (e.g. "dog", "car").
  • confidence: the confidence of the result, ranging from 0 to 1.
  • object: the object's bounding rectangle within the image.

You can use this information to display the results in your app, for example by drawing bounding boxes or showing labels in the UI.
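The commented-out confidence check in showRects earlier in this post hints at a common pattern: discarding low-confidence detections before rendering. A minimal sketch, assuming only the ObjectData fields (label, confidence) used in the example code above:

```dart
// Sketch: keep only detections above a confidence threshold and sort the
// survivors from most to least confident. Assumes ObjectData exposes
// `label` and `confidence` as in the example code above.
List<ObjectData> filterDetections(List<ObjectData> detections,
    {double minConfidence = 0.5}) {
  final kept = detections
      .where((d) => d.confidence > minConfidence)
      .toList()
    ..sort((a, b) => b.confidence.compareTo(a.confidence));
  return kept;
}
```

Rendering only the filtered list keeps the overlay readable when the model returns many weak candidates.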

6. Handle permissions

Using the camera or accessing the photo library on iOS requires the corresponding permissions. Add the matching usage descriptions to your Info.plist file:

<key>NSCameraUsageDescription</key>
<string>Camera access is required for object recognition</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>Photo library access is required to select images for object recognition</string>

7. Integrate the camera or gallery

To obtain an image, you can use the camera plugin or the image_picker plugin to grab one from the camera or the gallery, then read it as raw bytes:

import 'package:image_picker/image_picker.dart';

void pickImage() async {
  final ImagePicker picker = ImagePicker();
  final XFile? image = await picker.pickImage(source: ImageSource.gallery);

  if (image != null) {
    // Read the picked file as raw bytes for processing
    Uint8List bytes = await image.readAsBytes();
    recognizeObjects(bytes, imageSize);
  }
}