Using the Flutter machine-learning camera plugin flutter_camera_ml_vision

Published 1 week ago · Author: nodeper · From: Flutter


flutter_camera_ml_vision is a Flutter plugin for iOS and Android that shows a camera preview in your app and runs Firebase ML Vision detectors on the live frames. This article walks through installing and using the plugin, with complete example code.

Installation

First, add the flutter_camera_ml_vision dependency to your pubspec.yaml file:

dependencies:
  flutter:
    sdk: flutter
  flutter_camera_ml_vision: ^2.2.4

Configuring Firebase

You also need to configure Firebase for each platform project (Android and iOS); see the Firebase Codelab for the detailed steps.

iOS setup

Add the following entries to ios/Runner/Info.plist:

<key>NSCameraUsageDescription</key>
<string>Can I use the camera please?</string>
<key>NSMicrophoneUsageDescription</key>
<string>Can I use the mic please?</string>

If you are using the on-device APIs, include the corresponding ML Kit model pods in your Podfile, then run pod update from a terminal:

pod 'Firebase/MLVisionBarcodeModel'
pod 'Firebase/MLVisionFaceModel'
pod 'Firebase/MLVisionLabelModel'
pod 'Firebase/MLVisionTextModel'

Android setup

In android/app/build.gradle, raise the minimum Android SDK version to 21 or higher:

minSdkVersion 21

If you use the on-device LabelDetector, add the latest ML Kit Image Labeling dependency to your app-level build.gradle file:

dependencies {
    // ...
    api 'com.google.firebase:firebase-ml-vision-image-label-model:19.0.0'
}

To have the ML models downloaded to the device automatically, add the following declaration to your AndroidManifest.xml:

<application ...>
  ...
  <meta-data
    android:name="com.google.firebase.ml.vision.DEPENDENCIES"
    android:value="ocr" />
  <!-- To use multiple models: android:value="ocr,label,barcode,face" -->
</application>

Usage

Example: barcode scanning

The following example uses the barcode detector (the _ScannerOverlayShape overlay used here is defined in the complete example further below):

import 'package:flutter/material.dart';
import 'package:flutter_camera_ml_vision/flutter_camera_ml_vision.dart';
import 'package:firebase_ml_vision/firebase_ml_vision.dart';

class ScanPage extends StatefulWidget {
  @override
  _ScanPageState createState() => _ScanPageState();
}

class _ScanPageState extends State<ScanPage> {
  bool resultSent = false;
  BarcodeDetector detector = FirebaseVision.instance.barcodeDetector();

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: SafeArea(
        child: SizedBox(
          width: MediaQuery.of(context).size.width,
          child: CameraMlVision<List<Barcode>>(
            overlayBuilder: (context) {
              return Container(
                decoration: ShapeDecoration(
                  shape: _ScannerOverlayShape(
                    borderColor: Theme.of(context).primaryColor,
                    borderWidth: 3.0,
                  ),
                ),
              );
            },
            detector: detector.detectInImage,
            onResult: (List<Barcode> barcodes) {
              if (!mounted || resultSent || barcodes == null || barcodes.isEmpty) {
                return;
              }
              resultSent = true;
              Navigator.of(context).pop<Barcode>(barcodes.first);
            },
            onDispose: () {
              detector.close();
            },
          ),
        ),
      ),
    );
  }
}

Other detectors

The CameraMlVision widget can be used with any FirebaseVision detector, for example:

FirebaseVision.instance.barcodeDetector().detectInImage
FirebaseVision.instance.cloudLabelDetector().detectInImage
FirebaseVision.instance.faceDetector().processImage
FirebaseVision.instance.labelDetector().detectInImage
FirebaseVision.instance.textRecognizer().processImage

When something is detected, the onResult callback is invoked with the detected data as its argument.
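For example, switching to on-device text recognition only changes the widget's type parameter and the two callbacks. A minimal sketch, assuming firebase_ml_vision's TextRecognizer, whose processImage returns a Future<VisionText>:

```dart
import 'package:flutter/material.dart';
import 'package:flutter_camera_ml_vision/flutter_camera_ml_vision.dart';
import 'package:firebase_ml_vision/firebase_ml_vision.dart';

class TextScanPage extends StatefulWidget {
  @override
  _TextScanPageState createState() => _TextScanPageState();
}

class _TextScanPageState extends State<TextScanPage> {
  final TextRecognizer recognizer = FirebaseVision.instance.textRecognizer();

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: CameraMlVision<VisionText>(
        // processImage is called for each camera frame.
        detector: recognizer.processImage,
        onResult: (VisionText text) {
          if (!mounted || text == null) return;
          debugPrint(text.text); // the full recognized text block
        },
        onDispose: () => recognizer.close(),
      ),
    );
  }
}
```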

Camera controller features

flutter_camera_ml_vision also exposes some functionality of the underlying CameraController class, including:

  • value
  • prepareForVideoRecording
  • startVideoRecording
  • stopVideoRecording
  • takePicture
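How these members are reached depends on the package version; one plausible route is a GlobalKey on the widget's State. A hypothetical sketch — the CameraMlVisionState type name and the takePicture signature are assumptions, so check the package source before relying on them:

```dart
// Key typed to the widget's State; the exported State name is an assumption.
final _cameraKey = GlobalKey<CameraMlVisionState<List<Barcode>>>();

// Pass the key when building the preview:
// CameraMlVision<List<Barcode>>(key: _cameraKey, detector: ..., onResult: ...)

Future<void> capture(String filePath) async {
  // takePicture is forwarded from the underlying CameraController;
  // its signature follows the camera plugin version in use.
  await _cameraKey.currentState?.takePicture(filePath);
}
```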

Complete example

Below is a complete example application that uses flutter_camera_ml_vision to scan barcodes and display the results in a list:

import 'dart:ui' show PointMode; // PointMode is not exported by material.dart

import 'package:flutter/material.dart';
import 'package:flutter_camera_ml_vision/flutter_camera_ml_vision.dart';
import 'package:firebase_ml_vision/firebase_ml_vision.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  MyHomePage({Key key, this.title}) : super(key: key);

  final String title;

  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  List<String> data = [];

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text(widget.title),
      ),
      body: Column(
        mainAxisSize: MainAxisSize.min,
        crossAxisAlignment: CrossAxisAlignment.center,
        children: [
          ElevatedButton(
            onPressed: () async {
              final barcode = await Navigator.of(context).push<Barcode>(
                MaterialPageRoute(
                  builder: (c) {
                    return ScanPage();
                  },
                ),
              );
              if (barcode == null) {
                return;
              }

              setState(() {
                data.add(barcode.displayValue);
              });
            },
            child: Text('Scan product'),
          ),
          Expanded(
            child: ListView(
              children: data.map((d) => Text(d)).toList(),
            ),
          ),
        ],
      ),
    );
  }
}

class ScanPage extends StatefulWidget {
  @override
  _ScanPageState createState() => _ScanPageState();
}

class _ScanPageState extends State<ScanPage> {
  bool resultSent = false;
  BarcodeDetector detector = FirebaseVision.instance.barcodeDetector();

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: SafeArea(
        child: SizedBox(
          width: MediaQuery.of(context).size.width,
          child: CameraMlVision<List<Barcode>>(
            overlayBuilder: (context) {
              return Container(
                decoration: ShapeDecoration(
                  shape: _ScannerOverlayShape(
                    borderColor: Theme.of(context).primaryColor,
                    borderWidth: 3.0,
                  ),
                ),
              );
            },
            detector: detector.detectInImage,
            onResult: (List<Barcode> barcodes) {
              if (!mounted || resultSent || barcodes == null || barcodes.isEmpty) {
                return;
              }
              resultSent = true;
              Navigator.of(context).pop<Barcode>(barcodes.first);
            },
            onDispose: () {
              detector.close();
            },
          ),
        ),
      ),
    );
  }
}

class _ScannerOverlayShape extends ShapeBorder {
  final Color borderColor;
  final double borderWidth;
  final Color overlayColor;

  _ScannerOverlayShape({
    this.borderColor = Colors.white,
    this.borderWidth = 1.0,
    this.overlayColor = const Color(0x88000000),
  });

  @override
  EdgeInsetsGeometry get dimensions => EdgeInsets.all(10.0);

  @override
  Path getInnerPath(Rect rect, {TextDirection textDirection}) {
    return Path()
      ..fillType = PathFillType.evenOdd
      ..addPath(getOuterPath(rect), Offset.zero);
  }

  @override
  Path getOuterPath(Rect rect, {TextDirection textDirection}) {
    Path _getLeftTopPath(Rect rect) {
      return Path()
        ..moveTo(rect.left, rect.bottom)
        ..lineTo(rect.left, rect.top)
        ..lineTo(rect.right, rect.top);
    }

    return _getLeftTopPath(rect)
      ..lineTo(rect.right, rect.bottom)
      ..lineTo(rect.left, rect.bottom)
      ..lineTo(rect.left, rect.top);
  }

  @override
  void paint(Canvas canvas, Rect rect, {TextDirection textDirection}) {
    const lineSize = 30;

    final width = rect.width;
    final borderWidthSize = width * 10 / 100;
    final height = rect.height;
    final borderHeightSize = height - (width - borderWidthSize);
    final borderSize = Size(borderWidthSize / 2, borderHeightSize / 2);

    var paint = Paint()
      ..color = overlayColor
      ..style = PaintingStyle.fill;

    canvas
      ..drawRect(
        Rect.fromLTRB(rect.left, rect.top, rect.right, borderSize.height + rect.top),
        paint,
      )
      ..drawRect(
        Rect.fromLTRB(rect.left, rect.bottom - borderSize.height, rect.right, rect.bottom),
        paint,
      )
      ..drawRect(
        Rect.fromLTRB(rect.left, rect.top + borderSize.height, rect.left + borderSize.width, rect.bottom - borderSize.height),
        paint,
      )
      ..drawRect(
        Rect.fromLTRB(rect.right - borderSize.width, rect.top + borderSize.height, rect.right, rect.bottom - borderSize.height),
        paint,
      );

    paint = Paint()
      ..color = borderColor
      ..style = PaintingStyle.stroke
      ..strokeWidth = borderWidth;

    final borderOffset = borderWidth / 2;
    final realReact = Rect.fromLTRB(
      borderSize.width + borderOffset,
      borderSize.height + borderOffset + rect.top,
      width - borderSize.width - borderOffset,
      height - borderSize.height - borderOffset + rect.top,
    );

    // Draw top right corner
    canvas
      ..drawPath(
        Path()
          ..moveTo(realReact.right, realReact.top)
          ..lineTo(realReact.right, realReact.top + lineSize),
        paint,
      )
      ..drawPath(
        Path()
          ..moveTo(realReact.right, realReact.top)
          ..lineTo(realReact.right - lineSize, realReact.top),
        paint,
      )
      ..drawPoints(PointMode.points, [Offset(realReact.right, realReact.top)], paint)

      // Draw top left corner
      ..drawPath(
        Path()
          ..moveTo(realReact.left, realReact.top)
          ..lineTo(realReact.left, realReact.top + lineSize),
        paint,
      )
      ..drawPath(
        Path()
          ..moveTo(realReact.left, realReact.top)
          ..lineTo(realReact.left + lineSize, realReact.top),
        paint,
      )
      ..drawPoints(PointMode.points, [Offset(realReact.left, realReact.top)], paint)

      // Draw bottom right corner
      ..drawPath(
        Path()
          ..moveTo(realReact.right, realReact.bottom)
          ..lineTo(realReact.right, realReact.bottom - lineSize),
        paint,
      )
      ..drawPath(
        Path()
          ..moveTo(realReact.right, realReact.bottom)
          ..lineTo(realReact.right - lineSize, realReact.bottom),
        paint,
      )
      ..drawPoints(PointMode.points, [Offset(realReact.right, realReact.bottom)], paint)

      // Draw bottom left corner
      ..drawPath(
        Path()
          ..moveTo(realReact.left, realReact.bottom)
          ..lineTo(realReact.left, realReact.bottom - lineSize),
        paint,
      )
      ..drawPath(
        Path()
          ..moveTo(realReact.left, realReact.bottom)
          ..lineTo(realReact.left + lineSize, realReact.bottom),
        paint,
      )
      ..drawPoints(PointMode.points, [Offset(realReact.left, realReact.bottom)], paint);
  }

  @override
  ShapeBorder scale(double t) {
    return _ScannerOverlayShape(
      borderColor: borderColor,
      borderWidth: borderWidth,
      overlayColor: overlayColor,
    );
  }
}

More hands-on tutorials in this series on the Flutter machine-learning camera plugin flutter_camera_ml_vision are available at https://www.itying.com/category-92-b0.html

1 Reply



Sure — below is sample code that uses the Flutter flutter_camera_ml_vision plugin for camera access and machine-learning vision recognition. The example combines the camera and google_ml_kit libraries for real-time image recognition.

First, make sure your pubspec.yaml file includes the following dependencies:

dependencies:
  flutter:
    sdk: flutter
  camera: ^0.10.0+1  # check for the latest version
  google_ml_kit: ^0.14.0  # check for the latest version
  flutter_camera_ml_vision: ^0.5.0  # version assumed; adjust to what is actually published

Then run flutter pub get to install these dependencies.

The main implementation follows:

import 'package:flutter/foundation.dart' show WriteBuffer;
import 'package:flutter/material.dart';
import 'package:camera/camera.dart';
import 'package:google_ml_kit/google_ml_kit.dart';
import 'package:flutter_camera_ml_vision/flutter_camera_ml_vision.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Camera ML Vision Demo',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: CameraApp(),
    );
  }
}

class CameraApp extends StatefulWidget {
  @override
  _CameraAppState createState() => _CameraAppState();
}

class _CameraAppState extends State<CameraApp> {
  CameraController? _controller;
  late List<CameraDescription> _cameras;
  final ValueNotifier<String?> _labelNotifier = ValueNotifier<String?>(null);

  @override
  void initState() {
    super.initState();
    _cameras = [];
    availableCameras().then((availableCameras) {
      _cameras = availableCameras;
      if (_cameras.isNotEmpty) {
        _controller = CameraController(
          _cameras.first,
          ResolutionPreset.high,
          enableAudio: false,
        );
        _controller!.initialize().then((_) {
          if (!mounted) {
            return;
          }
          setState(() {});
        });
      }
    }).catchError((err) {
      print('Camera initialization error: $err');
    });
  }

  @override
  void dispose() {
    _controller?.dispose();
    _labelNotifier.dispose();
    super.dispose();
  }

  Future<void> _processImage(CameraImage image) async {
    // Concatenate the bytes of all image planes into one buffer
    // (WriteBuffer comes from package:flutter/foundation.dart).
    final WriteBuffer allBytes = WriteBuffer();
    for (final Plane plane in image.planes) {
      allBytes.putUint8List(plane.bytes);
    }
    final bytes = allBytes.done().buffer.asUint8List();

    // NOTE: the exact InputImage construction differs between google_ml_kit
    // versions; the named-parameter form below matches the 0.x API. Check the
    // package documentation for the version you actually use.
    final inputImage = InputImage.fromBytes(
      bytes: bytes,
      inputImageData: InputImageData(
        size: Size(image.width.toDouble(), image.height.toDouble()),
        imageRotation: InputImageRotation.Rotation_0deg,
        inputImageFormat:
            InputImageFormatMethods.fromRawValue(image.format.raw as int) ??
                InputImageFormat.NV21,
        planeData: image.planes
            .map((Plane plane) => InputImagePlaneMetadata(
                  bytesPerRow: plane.bytesPerRow,
                  height: plane.height,
                  width: plane.width,
                ))
            .toList(),
      ),
    );

    final List<ImageLabel> labels = await imageLabeler.processImage(inputImage);
    _labelNotifier.value = labels.isNotEmpty ? labels.first.label : null;
  }

  @override
  Widget build(BuildContext context) {
    if (_controller == null || !_controller!.value.isInitialized) {
      return Container();
    }

    CameraPreview preview = CameraPreview(_controller!);
    return Scaffold(
      appBar: AppBar(
        title: Text('Camera ML Vision'),
      ),
      body: Stack(
        children: <Widget>[
          preview,
          // Rebuild the label overlay whenever the notifier changes; without
          // this the widget never updates, since setState is never called.
          ValueListenableBuilder<String?>(
            valueListenable: _labelNotifier,
            builder: (context, label, _) {
              if (label == null) return const SizedBox.shrink();
              return Positioned(
                bottom: 10,
                left: 10,
                child: Text(
                  'Detected: $label',
                  style: const TextStyle(color: Colors.red, fontSize: 20),
                ),
              );
            },
          ),
        ],
      ),
      floatingActionButton: FloatingActionButton(
        child: Icon(Icons.camera_alt),
        onPressed: () {
          // Starting an already-running stream throws, so guard against it.
          if (_controller!.value.isStreamingImages) {
            return;
          }
          _controller!.startImageStream((CameraImage image) {
            _processImage(image);
          });
        },
      ),
    );
  }
}

// NOTE: google_ml_kit's labeler API is version-dependent; imageLabeler() is
// the 0.x entry point (there is no LabelDetector/labelDetector()).
final ImageLabeler imageLabeler = GoogleMlKit.vision.imageLabeler();

Notes

  1. Permissions: make sure the required camera (and, if needed, microphone/storage) permissions are declared in AndroidManifest.xml and Info.plist.
  2. ML Kit initialization: the code above assumes GoogleMlKit is correctly initialized; you may need to initialize ML Kit when the app starts.
  3. Error handling: parts of the error handling are omitted above; add appropriate handling in real code.
  4. Dependency versions: plugin and library versions change over time; use up-to-date versions and follow their documentation when configuring them.

This code shows how to do real-time image recognition with Flutter's camera plugin and the google_ml_kit library. If the flutter_camera_ml_vision plugin actually exists with a different API, adjust the code according to its official documentation.
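One practical caveat with the startImageStream approach above: camera frames arrive faster than ML Kit can process them, so it is common to drop frames while a detection is still in flight. A minimal sketch of that guard (_detect is a hypothetical name for your per-frame ML Kit call):

```dart
bool _isDetecting = false;

Future<void> _onFrame(CameraImage image) async {
  if (_isDetecting) return; // drop frames while one is being processed
  _isDetecting = true;
  try {
    await _detect(image); // hypothetical per-frame ML Kit call
  } finally {
    _isDetecting = false;
  }
}

// Hook it up: _controller!.startImageStream(_onFrame);
```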
