Using the Flutter Face Mesh Detection Plugin google_mlkit_face_mesh_detection

Posted 1 week ago · Author: wuwangju · From: Flutter


Google’s ML Kit Face Mesh Detection for Flutter


Note: This feature is currently in Beta and is only supported on Android. Watch Google's website for updates, or request the feature there.

A Flutter plugin for Google's ML Kit Face Mesh Detection, which generates a high-accuracy face mesh of 468 3D points in real time, intended for selfie-like images.

Notes

  • Faces should be within about 2 meters (7 feet) of the camera so that the face is large enough for optimal face mesh recognition.
  • To detect faces farther than about 2 meters from the camera, see google_mlkit_face_detection instead.
  • The face should be facing the camera with at least half of it visible; any large object occluding the face may reduce accuracy.

Platform Support

  • Google's ML Kit is built only for the iOS and Android mobile platforms; Web and other platforms are not supported.
  • This plugin is not sponsored or maintained by Google. It was created by developers passionate about machine learning in order to expose Google's native APIs to Flutter.
  • The plugin is implemented via Flutter platform channels; all machine learning processing happens on the native platform, not in Flutter/Dart.

Requirements

Android

  • minSdkVersion: 21
  • targetSdkVersion: 33
  • compileSdkVersion: 34

Usage

Create an InputImage instance

Create an InputImage instance according to the official documentation.

final InputImage inputImage;
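The line above only declares the variable; as a minimal sketch, one common way to construct an InputImage is from a file path (the path below is a placeholder for illustration — see the official documentation for the other constructors, e.g. for camera frames):

```dart
// The path is hypothetical; use the path of a real image file.
final inputImage = InputImage.fromFilePath('/path/to/selfie.jpg');
```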

Create a FaceMeshDetector instance

final meshDetector = FaceMeshDetector(option: FaceMeshDetectorOptions.faceMesh);

Process the image

final List<FaceMesh> meshes = await meshDetector.processImage(inputImage);

for (FaceMesh mesh in meshes) {
  final boundingBox = mesh.boundingBox;
  final points = mesh.points;
  final triangles = mesh.triangles;
  final contour = mesh.contours[FaceMeshContourType.faceOval];
}
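Each contour entry is a list of FaceMeshPoint values (x, y, z in image pixel coordinates, plus a point index); a minimal sketch of reading the face oval contour retrieved above, assuming a detection succeeded:

```dart
// The entry may be null if that contour type is absent from the result map.
final faceOval = mesh.contours[FaceMeshContourType.faceOval];
if (faceOval != null) {
  for (final point in faceOval) {
    print('point ${point.index}: (${point.x}, ${point.y}, ${point.z})');
  }
}
```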

Release resources

meshDetector.close();

Example Code

Below is a complete example app showing how to use the google_mlkit_face_mesh_detection plugin to detect and draw a face mesh:

import 'dart:io';

import 'package:flutter/material.dart';
import 'package:google_mlkit_face_mesh_detection/google_mlkit_face_mesh_detection.dart';
import 'package:image_picker/image_picker.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: FaceMeshDetectionPage(),
    );
  }
}

class FaceMeshDetectionPage extends StatefulWidget {
  @override
  _FaceMeshDetectionPageState createState() => _FaceMeshDetectionPageState();
}

class _FaceMeshDetectionPageState extends State<FaceMeshDetectionPage> {
  late FaceMeshDetector _meshDetector;
  XFile? _image;
  List<FaceMesh>? _meshes;

  @override
  void initState() {
    super.initState();
    _meshDetector = FaceMeshDetector(option: FaceMeshDetectorOptions.faceMesh);
  }

  @override
  void dispose() {
    _meshDetector.close();
    super.dispose();
  }

  Future<void> _pickImage() async {
    final picker = ImagePicker();
    final pickedFile = await picker.pickImage(source: ImageSource.gallery);
    setState(() {
      _image = pickedFile;
    });
    if (_image != null) {
      final inputImage = InputImage.fromFilePath(_image!.path);
      _processImage(inputImage);
    }
  }

  Future<void> _processImage(InputImage inputImage) async {
    final List<FaceMesh> meshes = await _meshDetector.processImage(inputImage);
    setState(() {
      _meshes = meshes;
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('Face Mesh Detection')),
      body: Column(
        children: [
          ElevatedButton(
            onPressed: _pickImage,
            child: Text('Pick Image'),
          ),
          if (_image != null)
            Expanded(
              child: Center(
                child: Image.file(File(_image!.path)),
              ),
            ),
          if (_meshes != null && _meshes!.isNotEmpty)
            Expanded(
              child: CustomPaint(
                size: Size.infinite,
                painter: FaceMeshPainter(meshes: _meshes!),
              ),
            ),
        ],
      ),
    );
  }
}

class FaceMeshPainter extends CustomPainter {
  final List<FaceMesh> meshes;

  FaceMeshPainter({required this.meshes});

  @override
  void paint(Canvas canvas, Size size) {
    final Paint paint = Paint()
      ..color = Colors.red
      ..strokeWidth = 2.0
      ..style = PaintingStyle.stroke;

    for (var mesh in meshes) {
      for (var triangle in mesh.triangles) {
        // Each FaceMeshTriangle holds three FaceMeshPoints in image pixel
        // coordinates; a real app should scale them to the widget's size.
        final offsets = triangle.points
            .map((p) => Offset(p.x.toDouble(), p.y.toDouble()))
            .toList();
        canvas.drawPath(Path()..addPolygon(offsets, true), paint);
      }
    }
  }

  @override
  bool shouldRepaint(covariant CustomPainter oldDelegate) {
    return true;
  }
}

This example app lets the user pick an image from the gallery, then uses the google_mlkit_face_mesh_detection plugin to detect and draw the face mesh. Hopefully it helps you better understand and use the plugin. If you have questions or suggestions, check the existing issues or file a new one.


More hands-on tutorials on using the Flutter face mesh detection plugin google_mlkit_face_mesh_detection are available at https://www.itying.com/category-92-b0.html

1 Reply



Certainly. Here is example code showing how to use the google_mlkit_face_mesh_detection plugin for face mesh detection in a Flutter project. The plugin uses Google's ML Kit library to detect faces and generate a detailed face mesh.

Prerequisites

  1. Make sure you have the Flutter and Dart development environments installed.
  2. Make sure your Flutter project has been created and runs in your development environment.

Step 1: Add the dependency

Add the google_mlkit_face_mesh_detection dependency to your pubspec.yaml:

dependencies:
  flutter:
    sdk: flutter
  google_mlkit_face_mesh_detection: ^latest_version  # replace with the latest version number

Then run flutter pub get to install the dependency.

Step 2: Configure Android permissions

Since this example needs camera access, add the camera permission to android/app/src/main/AndroidManifest.xml:

<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />

Step 3: Request camera permission and detect the face mesh

In your Flutter code, request camera permission and use the google_mlkit_face_mesh_detection plugin to detect the face mesh. Here is a complete example:

import 'package:flutter/material.dart';
import 'package:camera/camera.dart';
import 'package:google_mlkit_face_mesh_detection/google_mlkit_face_mesh_detection.dart';
import 'package:permission_handler/permission_handler.dart';

List<CameraDescription> cameras = [];
CameraController? controller;

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: CameraApp(),
    );
  }
}

class CameraApp extends StatefulWidget {
  @override
  _CameraAppState createState() => _CameraAppState();
}

class _CameraAppState extends State<CameraApp> {
  bool hasPermission = false;
  List<FaceMesh>? faceMeshResult;

  @override
  void initState() {
    super.initState();
    _requestCameraPermission();
  }

  Future<void> _requestCameraPermission() async {
    if (!await Permission.camera.request().isGranted) {
      setState(() {
        hasPermission = false;
      });
      return;
    }

    setState(() {
      hasPermission = true;
    });

    try {
      cameras = await availableCameras();
      if (cameras.isEmpty) {
        return;
      }

      controller = CameraController(
        cameras[0],
        ResolutionPreset.high,
        enableAudio: false,
      );

      // Wait for initialization before using the controller.
      await controller!.initialize();
      if (!mounted) {
        return;
      }
      await controller!.setFlashMode(FlashMode.off);
      setState(() {});

      _startImageStream();
    } catch (err) {
      print('Error: $err');
    }
  }

  Future<void> _startImageStream() async {
    final faceMeshDetector =
        FaceMeshDetector(option: FaceMeshDetectorOptions.faceMesh);
    bool isBusy = false;

    controller!.startImageStream((CameraImage cameraImage) async {
      if (isBusy) return; // skip frames while a detection is in flight
      isBusy = true;

      try {
        // Build an InputImage from the camera frame. NV21 is the common
        // Android camera format; adjust the rotation for your device.
        final inputImage = InputImage.fromBytes(
          bytes: cameraImage.planes.first.bytes,
          metadata: InputImageMetadata(
            size: Size(cameraImage.width.toDouble(),
                cameraImage.height.toDouble()),
            rotation: InputImageRotation.rotation0deg,
            format: InputImageFormat.nv21,
            bytesPerRow: cameraImage.planes.first.bytesPerRow,
          ),
        );
        faceMeshResult = await faceMeshDetector.processImage(inputImage);
      } catch (e) {
        print('Error detecting face mesh: $e');
      }
      isBusy = false;

      if (mounted) {
        setState(() {});
      }
    });
  }

  @override
  Widget build(BuildContext context) {
    if (!hasPermission) {
      return Scaffold(
        appBar: AppBar(
          title: Text('Camera Permission Denied'),
        ),
        body: Center(
          child: Text('You need to allow camera permission to use this app.'),
        ),
      );
    }

    if (controller == null || !controller!.value.isInitialized) {
      return Scaffold(
        appBar: AppBar(
          title: Text('Loading Camera...'),
        ),
        body: Center(
          child: CircularProgressIndicator(),
        ),
      );
    }

    return Scaffold(
      appBar: AppBar(
        title: Text('Face Mesh Detection'),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            AspectRatio(
              aspectRatio: controller!.value.aspectRatio,
              child: CameraPreview(controller!),
            ),
            if (faceMeshResult != null)
              FaceMeshOverlay(
                faceMeshResult: faceMeshResult!,
                size: Size(controller!.value.previewSize!.width.toDouble(), controller!.value.previewSize!.height.toDouble()),
              ),
          ],
        ),
      ),
    );
  }

  @override
  void dispose() {
    controller?.dispose();
    super.dispose();
  }
}

class FaceMeshOverlay extends StatelessWidget {
  final List<FaceMesh> faceMeshResult;
  final Size size;

  FaceMeshOverlay({required this.faceMeshResult, required this.size});

  @override
  Widget build(BuildContext context) {
    return CustomPaint(
      size: size,
      painter: FaceMeshPainter(faceMeshResult),
    );
  }
}

class FaceMeshPainter extends CustomPainter {
  final List<FaceMesh> faceMeshResult;

  FaceMeshPainter(this.faceMeshResult);

  @override
  void paint(Canvas canvas, Size size) {
    final Paint paint = Paint()
      ..color = Colors.red
      ..strokeWidth = 1.0
      ..style = PaintingStyle.stroke;

    // Contours are keyed by FaceMeshContourType; points are in image pixel
    // coordinates, so a real app should scale them to the widget's size.
    for (var mesh in faceMeshResult) {
      for (var contour in mesh.contours.values) {
        if (contour == null || contour.isEmpty) continue;
        final path = Path()
          ..moveTo(contour.first.x.toDouble(), contour.first.y.toDouble());
        for (var point in contour.skip(1)) {
          path.lineTo(point.x.toDouble(), point.y.toDouble());
        }
        path.close();
        canvas.drawPath(path, paint);
      }
    }
  }

  @override
  bool shouldRepaint(covariant CustomPainter oldDelegate) => true;
}

Notes

  1. You also need the camera dependency to access the camera, and permission_handler for the permission request used above; declare both in pubspec.yaml:
dependencies:
  camera: ^latest_version  # replace with the latest version number
  permission_handler: ^latest_version  # replace with the latest version number
  2. Make sure your Android project is configured for ML Kit. The google_mlkit_face_mesh_detection plugin handles most of this, but you may occasionally need to sync the Gradle configuration manually.

  3. Since ML Kit and camera access are affected by device compatibility and permission management, a production app needs to handle more error cases and user interaction.

This example shows how to use the google_mlkit_face_mesh_detection plugin to capture live camera frames and draw the detected face mesh over the image. You can further customize and extend it for your own needs.
