Using the Flutter Face Detection Plugin apple_vision_face_detection

Published 1 week ago · by ionicwang · from Flutter


apple_vision_face_detection


Apple Vision Face Detection is a Flutter plugin that lets Flutter applications use Apple Vision's face detection capabilities.

  • This plugin is not sponsored or maintained by Apple. Its authors are developers who wanted something like Google ML Kit for macOS.

Requirements

macOS

  • Minimum osx deployment target: 10.13
  • Xcode 13 or newer
  • Swift 5
  • ML Kit only supports 64-bit architectures (x86_64 and arm64)

iOS

  • Minimum ios deployment target: 12.0
  • Xcode 13 or newer
  • Swift 5
  • ML Kit only supports 64-bit architectures (x86_64 and arm64)
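If a build fails because a pod demands a newer target than your project declares, the deployment target from the list above can be pinned in the iOS Podfile. The following is a common CocoaPods configuration sketch, not something mandated by this plugin; the `post_install` loop is a generic workaround that forces every pod to the same target:

```ruby
# ios/Podfile — pin the minimum deployment target required by the plugin
platform :ios, '12.0'

post_install do |installer|
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      # Force every pod to build against iOS 12.0 as well
      config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '12.0'
    end
  end
end
```

For macOS, the equivalent is `platform :osx, '10.13'` in the macos/Podfile.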

Getting Started

First, import package:apple_vision/apple_vision.dart:

final GlobalKey cameraKey = GlobalKey(debugLabel: "cameraKey");
AppleVisionFaceDetectionController visionController = AppleVisionFaceDetectionController();
InsertCamera camera = InsertCamera();
Size imageSize = const Size(640,640*9/16);
String? deviceId;
bool loading = true;

List<Rect>? faceData;
late double deviceWidth;
late double deviceHeight;

@override
void initState() {
  camera.setupCameras().then((value){
    setState(() {
      loading = false;
    });
    camera.startLiveFeed((InputImage i){
      if(i.metadata?.size != null){
        imageSize = i.metadata!.size;
      }
      if(mounted) {
        Uint8List? image = i.bytes;
        visionController.processImage(image!, imageSize).then((data){
          setState(() {
            faceData = data;
          });
        });
      }
    });
  });
  super.initState();
}
@override
void dispose() {
  camera.dispose();
  super.dispose();
}

@override
Widget build(BuildContext context) {
  deviceWidth = MediaQuery.of(context).size.width;
  deviceHeight = MediaQuery.of(context).size.height;
  return Stack(
    children: [
      SizedBox(
        width: imageSize.width,
        height: imageSize.height,
        child: loading ? Container() : CameraSetup(camera: camera, size: imageSize),
      ),
    ] + showRects(),
  );
}

List<Widget> showRects(){
  if(faceData == null || faceData!.isEmpty) return [];
  List<Widget> widgets = [];

  for(int i = 0; i < faceData!.length; i++){
    widgets.add(
      Positioned(
        top: faceData![i].top,
        left: faceData![i].left,
        child: Container(
          width: faceData![i].width*imageSize.width,
          height: faceData![i].height*imageSize.height,
          decoration: BoxDecoration(
            color: Colors.transparent,
            border: Border.all(width: 1, color: Colors.green),
            borderRadius: BorderRadius.circular(5)
          ),
        )
      )
    );
  }
  return widgets;
}

Widget loadingWidget(){
  return Container(
    width: deviceWidth,
    height: deviceHeight,
    color: Theme.of(context).canvasColor,
    alignment: Alignment.center,
    child: const CircularProgressIndicator(color: Colors.blue)
  );
}
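Note that showRects above multiplies width and height by imageSize but uses top and left as-is. If your plugin version returns rects with coordinates normalized to [0, 1], all four values need the same scaling. A minimal sketch, assuming normalized coordinates (verify against the Rect values your apple_vision_face_detection version actually produces):

```dart
import 'dart:ui';

/// Maps a rect whose coordinates are normalized to [0, 1] into pixel space.
/// Assumption: the plugin returns normalized rects — check the actual values
/// before relying on this.
Rect scaleNormalizedRect(Rect r, Size imageSize) {
  return Rect.fromLTWH(
    r.left * imageSize.width,
    r.top * imageSize.height,
    r.width * imageSize.width,
    r.height * imageSize.height,
  );
}
```

With this helper, the Positioned widget in showRects would use the scaled rect's top/left/width/height consistently.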

More hands-on tutorials on using the apple_vision_face_detection Flutter face detection plugin are available at https://www.itying.com/category-92-b0.html

1 Reply



Sure — below is example code showing how to use the apple_vision_face_detection plugin for face detection in a Flutter project. The plugin implements face detection via Apple's Vision framework, so it only works on the iOS platform.

First, make sure you have added the apple_vision_face_detection dependency to your pubspec.yaml:

dependencies:
  flutter:
    sdk: flutter
  apple_vision_face_detection: ^0.x.x  # replace with the latest version number

Then run flutter pub get to install the dependency.

Next is the main Dart code example. It shows how to detect faces in an image and retrieve face features.

import 'dart:io';

import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';
import 'package:apple_vision_face_detection/apple_vision_face_detection.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: FaceDetectionScreen(),
    );
  }
}

class FaceDetectionScreen extends StatefulWidget {
  @override
  _FaceDetectionScreenState createState() => _FaceDetectionScreenState();
}

class _FaceDetectionScreenState extends State<FaceDetectionScreen> {
  File? _imageFile;
  List<FaceFeature>? _faceFeatures;

  Future<void> _pickImage() async {
    final ImagePicker _picker = ImagePicker();
    final XFile? image = await _picker.pickImage(source: ImageSource.gallery);

    if (image != null) {
      final File imageFile = File(image.path);
      setState(() {
        _imageFile = imageFile;
      });

      _detectFaces(imageFile);
    }
  }

  Future<void> _detectFaces(File imageFile) async {
    try {
      // NOTE: detectFaces and FaceFeature stand in for the plugin's detection
      // API here; check the actual apple_vision_face_detection API for the
      // exact call and return type.
      final List<FaceFeature> faceFeatures = await detectFaces(imageFile);
      setState(() {
        _faceFeatures = faceFeatures;
      });
    } catch (e) {
      print('Error detecting faces: $e');
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Face Detection'),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            _imageFile == null
                ? Text('No image selected.')
                : Image.file(_imageFile!),
            if (_faceFeatures != null)
              _FaceFeaturesOverlay(faceFeatures: _faceFeatures!),
            SizedBox(height: 20),
            ElevatedButton(
              onPressed: _pickImage,
              child: Text('Pick Image'),
            ),
          ],
        ),
      ),
    );
  }
}

class _FaceFeaturesOverlay extends StatelessWidget {
  final List<FaceFeature> faceFeatures;

  _FaceFeaturesOverlay({required this.faceFeatures});

  @override
  Widget build(BuildContext context) {
    return Stack(
      children: faceFeatures.map((faceFeature) {
        return Positioned(
          left: faceFeature.boundingBox.left,
          top: faceFeature.boundingBox.top,
          child: Container(
            decoration: BoxDecoration(
              border: Border.all(color: Colors.red, width: 2),
              shape: BoxShape.rectangle,
            ),
            child: Text(
              'Face',
              style: TextStyle(color: Colors.red, backgroundColor: Colors.transparent),
            ),
          ),
        );
      }).toList(),
    );
  }
}

Notes

  1. Because the apple_vision_face_detection plugin depends on iOS's Vision framework, this code can only run on iOS devices.
  2. The code above uses the image_picker plugin to pick an image. You need to add the image_picker dependency to pubspec.yaml and run flutter pub get.
  3. The _FaceFeaturesOverlay widget simply marks each detected face position with a red rectangle and a text label. You can customize it further to display more facial feature information, such as the positions of the eyes, nose, and mouth.
  4. Make sure you have configured the necessary permissions for the iOS project, such as photo library access.
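For point 4, photo library access on iOS is declared in ios/Runner/Info.plist with the NSPhotoLibraryUsageDescription key; a minimal sketch, where the description string is just a placeholder you should adapt:

```xml
<!-- ios/Runner/Info.plist -->
<key>NSPhotoLibraryUsageDescription</key>
<string>This app needs photo library access to pick an image for face detection.</string>
```

Without this key, iOS terminates the app the first time image_picker tries to open the photo library.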

This is a basic example; you can extend and optimize it to fit your actual needs.
