Using the Flutter Face Detection Plugin google_mlkit_face_detection

Published 1 week ago · by wuwangju · in Flutter


Google’s ML Kit Face Detection for Flutter


A Flutter plugin for detecting faces in an image, identifying key facial features, and getting the contours of detected faces. The plugin is built on Google's ML Kit Face Detection.

Notes

  • Platform support: iOS and Android only.
  • Maintainers: maintained by community developers, not by an official Google team.
  • How it works: the plugin communicates with the native APIs through Flutter platform channels; all machine learning processing happens on the native platform.

Requirements

iOS

  • Minimum iOS deployment target: 15.5.0
  • Xcode 15.3.0 or newer
  • Swift 5
  • Exclude the armv7 architecture (in the Xcode project settings)

Podfile configuration:

platform :ios, '15.5.0'  # or newer

...

$iOSVersion = '15.5.0'  # or newer

post_install do |installer|
  installer.pods_project.build_configurations.each do |config|
    config.build_settings["EXCLUDED_ARCHS[sdk=*]"] = "armv7"
    config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = $iOSVersion
  end

  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)

    target.build_configurations.each do |config|
      if Gem::Version.new($iOSVersion) > Gem::Version.new(config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'])
        config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = $iOSVersion
      end
    end
  end
end

Android

  • minSdkVersion: 21
  • targetSdkVersion: 33
  • compileSdkVersion: 34

Usage

Face Detection

Create an InputImage

Create an InputImage as described in the official documentation.

final InputImage inputImage;
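
For example, if the image lives on disk, an InputImage can be built from a file path. A minimal sketch; the path below is a placeholder, not from the original:

import 'dart:io';

import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

// Hypothetical file path, for illustration only.
final File file = File('/path/to/photo.jpg');

// InputImage.fromFilePath comes from google_mlkit_commons,
// which this plugin re-exports.
final InputImage inputImage = InputImage.fromFilePath(file.path);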

Create a FaceDetector

final options = FaceDetectorOptions();
final faceDetector = FaceDetector(options: options);
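
By default no extra features are enabled. FaceDetectorOptions accepts parameters to turn them on; the parameters below all exist in the plugin, but the chosen values are only one possible configuration:

final options = FaceDetectorOptions(
  enableClassification: true,  // smiling / eyes-open probabilities
  enableLandmarks: true,       // positions of eyes, ears, nose, cheeks, mouth
  enableContours: true,        // face contour points
  enableTracking: true,        // assign ids to track faces across frames
  minFaceSize: 0.15,           // smallest face to report, relative to image width
  performanceMode: FaceDetectorMode.accurate,
);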

Process the image

final List<Face> faces = await faceDetector.processImage(inputImage);

for (Face face in faces) {
  final Rect boundingBox = face.boundingBox;

  final double? rotX = face.headEulerAngleX; // head tilted up/down rotX degrees
  final double? rotY = face.headEulerAngleY; // head rotated left/right rotY degrees
  final double? rotZ = face.headEulerAngleZ; // head tilted sideways rotZ degrees

  // If landmark detection was enabled (mouth, ears, eyes, cheeks, and nose):
  final FaceLandmark? leftEar = face.landmarks[FaceLandmarkType.leftEar];
  if (leftEar != null) {
    final Point<int> leftEarPos = leftEar.position;
  }

  // If classification was enabled (smilingProbability):
  if (face.smilingProbability != null) {
    final double? smileProb = face.smilingProbability;
  }

  // If face tracking was enabled (trackingId):
  if (face.trackingId != null) {
    final int? id = face.trackingId;
  }
}
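
If contour detection was enabled, the contour points can be read the same way, reusing the face variable from the loop above. A brief sketch; FaceContourType.face is one of several contour types the plugin defines:

// If contour detection was enabled (face, eyebrows, eyes, lips, nose):
final FaceContour? faceContour = face.contours[FaceContourType.face];
if (faceContour != null) {
  for (final Point<int> point in faceContour.points) {
    // Each point is a pixel coordinate in the input image.
  }
}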

Release resources

faceDetector.close();

Example app

A complete example app can be found here: example app

Complete example

The complete example below shows how to use the google_mlkit_face_detection plugin for face detection in a Flutter app:

import 'package:flutter/material.dart';
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Face Detection Demo',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: FaceDetectionScreen(),
    );
  }
}

class FaceDetectionScreen extends StatefulWidget {
  @override
  _FaceDetectionScreenState createState() => _FaceDetectionScreenState();
}

class _FaceDetectionScreenState extends State<FaceDetectionScreen> {
  late FaceDetector _faceDetector;
  bool _isBusy = false;
  String _text = '';

  @override
  void initState() {
    super.initState();
    _initializeFaceDetector();
  }

  Future<void> _initializeFaceDetector() async {
    final options = FaceDetectorOptions(
      enableLandmarks: true,
      enableContours: true,
      enableClassification: true,
      enableTracking: true,
    );
    _faceDetector = FaceDetector(options: options);
  }

  Future<void> _processImage(InputImage inputImage) async {
    setState(() {
      _isBusy = true;
      _text = '';
    });

    try {
      final List<Face> faces = await _faceDetector.processImage(inputImage);

      for (Face face in faces) {
        final Rect boundingBox = face.boundingBox;
        final double? rotX = face.headEulerAngleX;
        final double? rotY = face.headEulerAngleY;
        final double? rotZ = face.headEulerAngleZ;

        setState(() {
          _text += 'Face detected!\n';
          _text += 'Bounding Box: ${boundingBox.toString()}\n';
          _text += 'Rotation X: $rotX\n';
          _text += 'Rotation Y: $rotY\n';
          _text += 'Rotation Z: $rotZ\n';
        });
      }
    } catch (e) {
      setState(() {
        _text = 'Failed to process image: $e';
      });
    } finally {
      setState(() {
        _isBusy = false;
      });
    }
  }

  @override
  void dispose() {
    _faceDetector.close();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Face Detection Demo'),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            ElevatedButton(
              onPressed: _isBusy ? null : () async {
                // Assumes you have a way to obtain an InputImage here;
                // see the sketch after this example for one option.
                // final inputImage = ...;
                // await _processImage(inputImage);
              },
              child: Text('Detect Faces'),
            ),
            SizedBox(height: 20),
            Text(_text),
          ],
        ),
      ),
    );
  }
}
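
The button above leaves the InputImage source open. One way to supply it, shown here as an assumption rather than as part of the original example, is to let the user pick a photo with the image_picker package:

import 'package:image_picker/image_picker.dart';

// Hypothetical helper: picks a photo from the gallery and wraps it
// in an InputImage; returns null if the user cancels.
Future<InputImage?> _pickInputImage() async {
  final XFile? picked =
      await ImagePicker().pickImage(source: ImageSource.gallery);
  if (picked == null) return null;
  return InputImage.fromFilePath(picked.path);
}

The onPressed callback could then pass a non-null result of _pickInputImage() to _processImage.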

Contributing

Contributions are welcome! If you run into a problem, please check the existing issues first, and open a new issue if you cannot find a relevant one. For non-trivial fixes, please open an issue before submitting a pull request.


More hands-on tutorials in this series about the google_mlkit_face_detection Flutter face detection plugin are available at https://www.itying.com/category-92-b0.html

1 Reply

Sure. Below is example code showing how to use the google_mlkit_face_detection plugin for face detection in a Flutter project. The plugin lets you run Google's ML Kit face detection in real time.

First, make sure your Flutter project is set up and that you have added the google_mlkit_face_detection dependency to pubspec.yaml:

dependencies:
  flutter:
    sdk: flutter
  google_mlkit_face_detection: ^0.x.x  # use the latest version number

Then run flutter pub get to fetch the dependency.

Next, let's look at a complete example of how to use this plugin for face detection.

Main program file (main.dart)

import 'package:camera/camera.dart';
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: FaceDetectionScreen(),
    );
  }
}

class FaceDetectionScreen extends StatefulWidget {
  @override
  _FaceDetectionScreenState createState() => _FaceDetectionScreenState();
}

class _FaceDetectionScreenState extends State<FaceDetectionScreen> {
  CameraController? _controller;
  final FaceDetector _faceDetector =
      FaceDetector(options: FaceDetectorOptions());
  List<Face> _faces = [];
  Size? _imageSize; // size of the last processed frame, used to scale boxes
  bool _isDetecting = false;

  @override
  void initState() {
    super.initState();
    _initCamera();
  }

  @override
  void dispose() {
    _controller?.dispose();
    _faceDetector.close();
    super.dispose();
  }

  Future<void> _initCamera() async {
    final cameras = await availableCameras();
    final controller = CameraController(cameras.first, ResolutionPreset.medium);
    _controller = controller;

    controller.addListener(() {
      if (mounted) setState(() {});
      if (controller.value.hasError) {
        print('Camera error ${controller.value.errorDescription}');
      }
    });

    await controller.initialize();
    if (mounted) setState(() {});
  }

  Future<void> _processImage(CameraImage image) async {
    if (_isDetecting) return;
    _isDetecting = true;

    try {
      // Concatenate the bytes of all planes into one buffer.
      final WriteBuffer allBytes = WriteBuffer();
      for (final Plane plane in image.planes) {
        allBytes.putUint8List(plane.bytes);
      }
      final bytes = allBytes.done().buffer.asUint8List();

      final Size imageSize = Size(
        image.width.toDouble(),
        image.height.toDouble(),
      );

      // Map the camera sensor orientation to an ML Kit rotation value.
      final rotation = InputImageRotationValue.fromRawValue(
            _controller!.description.sensorOrientation,
          ) ??
          InputImageRotation.rotation0deg;

      final inputImage = InputImage.fromBytes(
        bytes: bytes,
        metadata: InputImageMetadata(
          size: imageSize,
          rotation: rotation,
          format: InputImageFormat.nv21, // typical Android camera format
          bytesPerRow: image.planes.first.bytesPerRow,
        ),
      );

      final faces = await _faceDetector.processImage(inputImage);
      if (mounted) {
        setState(() {
          _faces = faces;
          _imageSize = imageSize;
        });
      }
    } finally {
      _isDetecting = false;
    }
  }

  @override
  Widget build(BuildContext context) {
    final controller = _controller;
    return Scaffold(
      appBar: AppBar(
        title: Text('Face Detection'),
      ),
      body: controller != null && controller.value.isInitialized
          ? Stack(
              fit: StackFit.expand,
              children: [
                CameraPreview(controller),
                // Draw a red box around each detected face. ML Kit returns
                // bounding boxes in image pixels, so scale them to the screen.
                // This mapping is approximate; a production app should also
                // account for the preview's aspect ratio and rotation.
                if (_imageSize != null)
                  ..._faces.map((face) {
                    final rect = face.boundingBox;
                    final size = MediaQuery.of(context).size;
                    final scaleX = size.width / _imageSize!.width;
                    final scaleY = size.height / _imageSize!.height;

                    return Positioned(
                      left: rect.left * scaleX,
                      top: rect.top * scaleY,
                      width: rect.width * scaleX,
                      height: rect.height * scaleY,
                      child: DecoratedBox(
                        decoration: BoxDecoration(
                          border: Border.all(color: Colors.red, width: 2),
                        ),
                      ),
                    );
                  }),
              ],
            )
          : Center(child: CircularProgressIndicator()),
      floatingActionButton: FloatingActionButton(
        onPressed: () async {
          // Grab a single frame from the preview stream and detect faces in it.
          final c = _controller;
          if (c == null || !c.value.isInitialized || c.value.isStreamingImages) {
            return;
          }
          await c.startImageStream((CameraImage image) async {
            await c.stopImageStream();
            await _processImage(image);
          });
        },
        tooltip: 'Detect Faces',
        child: Icon(Icons.camera),
      ),
    );
  }
}

Explanation

  1. Dependency: add google_mlkit_face_detection to pubspec.yaml.
  2. Camera initialization: use the camera plugin to initialize the camera and obtain frames.
  3. Image processing: convert the camera frame into a format ML Kit can handle (InputImage), then call _faceDetector.processImage(inputImage) to run face detection.
  4. Displaying results: draw bounding boxes for the detected faces on top of the camera preview.

Notes

  • Make sure you have added the camera permission to AndroidManifest.xml; runtime permission is also required on modern Android (see the sketch after this list).
  • You may need additional dependencies, such as the camera plugin and whatever packages your image handling requires.
  • In real use, remember to handle errors and edge cases, such as camera initialization or image processing failing.
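
A minimal sketch of a runtime permission check, assuming the third-party permission_handler package, which the original does not mention:

import 'package:permission_handler/permission_handler.dart';

// Hypothetical helper: shows the system dialog if the camera
// permission has not been granted yet.
Future<bool> ensureCameraPermission() async {
  final status = await Permission.camera.request();
  return status.isGranted;
}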

This is a basic example; you can extend and optimize it as needed.
