Using the Flutter Pose Detection Plugin google_mlkit_pose_detection

Posted 1 week ago · Author: gougou168 · From: Flutter


Google’s ML Kit Pose Detection for Flutter


A Flutter plugin to detect the pose of a subject's body in real time from a continuous video or a static image, using Google's ML Kit Pose Detection.

Important Notes

Before continuing or filing a new issue, please make sure to read the following:

  • Google's ML Kit is built only for mobile platforms: iOS and Android apps. Web and other platforms are not supported; you can request support for those platforms in their repository.
  • This plugin is not sponsored or maintained by Google. The authors are developers excited about machine learning who want to expose Google's native APIs to Flutter.
  • Google's ML Kit APIs are developed natively for iOS and Android only. This plugin uses Flutter Platform Channels, as described here.
  • Messages are passed asynchronously between the client (the app/plugin) and the host (the platform) over platform channels, so the user interface stays responsive. For more information about platform channels, see here.
  • Because this plugin uses platform channels, all machine learning processing happens on the native platform, not in Flutter/Dart. Every call is passed to the native platform via a MethodChannel (Android) or a FlutterMethodChannel (iOS) and executed with Google's native APIs. Keeping this in mind helps when debugging errors in your models and/or app (see the illustrative sketch after this list).
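To make that last point concrete, here is a minimal, purely illustrative Dart sketch of what a platform-channel call looks like. The channel and method names below are hypothetical and are not this plugin's actual channel; the point is that the Dart side only sends an asynchronous message, and the ML work happens in the native handler on the other end.

import 'package:flutter/services.dart';

// Hypothetical channel/method names, for illustration only.
const MethodChannel _channel = MethodChannel('example/pose_detection');

Future<dynamic> processOnNativeSide(String imagePath) {
  // The Dart side just sends a message; the native side does the processing.
  return _channel.invokeMethod('processImage', {'path': imagePath});
}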

Requirements

iOS

  • Minimum iOS Deployment Target: 15.5.0
  • Xcode 15.3.0 or newer
  • Swift 5
  • ML Kit does not support 32-bit architectures (i386 and armv7). ML Kit does support 64-bit architectures (x86_64 and arm64). Check this list to see if your device has the required device capabilities. More information here.

Since ML Kit does not support 32-bit architectures (i386 and armv7), you need to exclude the armv7 architecture in Xcode in order to run flutter build ios or flutter build ipa. More information here.

Go to Project > Runner > Build Settings > Excluded Architectures > Any SDK > armv7.

Your Podfile should look like this:

platform :ios, '15.5.0'  # or newer version

...

# add this line:
$iOSVersion = '15.5.0'  # or newer version

post_install do |installer|
  # add these lines:
  installer.pods_project.build_configurations.each do |config|
    config.build_settings["EXCLUDED_ARCHS[sdk=*]"] = "armv7"
    config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = $iOSVersion
  end

  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)

    # add these lines:
    target.build_configurations.each do |config|
      if Gem::Version.new($iOSVersion) > Gem::Version.new(config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'])
        config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = $iOSVersion
      end
    end

  end
end

Note that the minimum IPHONEOS_DEPLOYMENT_TARGET is 15.5.0; you can set it to a newer version, but not an older one.

Android

  • minSdkVersion: 21
  • targetSdkVersion: 33
  • compileSdkVersion: 34

Usage

Create an instance of InputImage

Create an InputImage instance; for details see here.

final InputImage inputImage;
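As a minimal illustration, an InputImage is typically created from an image file on disk; the file path below is just a placeholder:

// Sketch: build an InputImage from a file path, e.g. one returned by
// image_picker or the camera plugin. The path here is a placeholder.
final inputImage = InputImage.fromFilePath('/path/to/image.jpg');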

Create an instance of PoseDetector

final options = PoseDetectorOptions();
final poseDetector = PoseDetector(options: options);
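PoseDetectorOptions also lets you choose the detection model and mode. A small sketch (option and enum names as in recent plugin versions), assuming the accurate model and single-image mode for still photos:

// Sketch: use the more accurate (but slower) model and single-image mode,
// which suits still photos rather than a live camera stream.
final options = PoseDetectorOptions(
  model: PoseDetectionModel.accurate,
  mode: PoseDetectionMode.single,
);
final poseDetector = PoseDetector(options: options);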

Process the image

final List<Pose> poses = await poseDetector.processImage(inputImage);

for (Pose pose in poses) {
  // access all landmarks
  pose.landmarks.forEach((_, landmark) {
    final type = landmark.type;
    final x = landmark.x;
    final y = landmark.y;
  });

  // access a specific landmark
  final landmark = pose.landmarks[PoseLandmarkType.nose];
}
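Each PoseLandmark also carries a z coordinate and a likelihood score. A short sketch that skips low-confidence points; the 0.5 threshold is an arbitrary illustration:

for (Pose pose in poses) {
  for (final landmark in pose.landmarks.values) {
    // Skip landmarks ML Kit is not confident about (threshold chosen arbitrarily).
    if (landmark.likelihood < 0.5) continue;
    print('${landmark.type}: (${landmark.x}, ${landmark.y}, ${landmark.z})');
  }
}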

Release resources

poseDetector.close();

Example app

Find the example app here.

Complete Example Demo

Below is a complete example showing how to use the google_mlkit_pose_detection plugin for pose detection in a Flutter app:

import 'dart:io';

import 'package:flutter/material.dart';
import 'package:google_mlkit_pose_detection/google_mlkit_pose_detection.dart';
import 'package:image_picker/image_picker.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: PoseDetectionScreen(),
    );
  }
}

class PoseDetectionScreen extends StatefulWidget {
  @override
  _PoseDetectionScreenState createState() => _PoseDetectionScreenState();
}

class _PoseDetectionScreenState extends State<PoseDetectionScreen> {
  late PoseDetector _poseDetector;
  bool _isBusy = false;
  String? _text;
  InputImage? _inputImage;

  @override
  void initState() {
    super.initState();
    _initializePoseDetector();
  }

  void _initializePoseDetector() {
    final options = PoseDetectorOptions();
    _poseDetector = PoseDetector(options: options);
  }

  Future<void> _processImage(InputImage inputImage) async {
    setState(() {
      _isBusy = true;
      _text = '';
    });
    final poses = await _poseDetector.processImage(inputImage);
    if (inputImage.metadata?.size != null &&
        inputImage.metadata?.rotation != null) {
      // Image metadata is available (e.g. for byte-based images), so a
      // painter could draw the detected poses on top of the image.
      final painter = PosePainter(poses, inputImage.metadata!.size,
          inputImage.metadata!.rotation);
      _text = 'Found ${poses.length} pose(s)';
    } else {
      String text = 'Found ${poses.length} pose(s)';
      for (final pose in poses) {
        for (final landmark in pose.landmarks.values) {
          text += '\n${landmark.type}: (${landmark.x}, ${landmark.y})';
        }
      }
      _text = text;
    }
    setState(() {
      _isBusy = false;
    });
  }

  Future<void> _getImage(ImageSource source) async {
    final pickedFile = await ImagePicker().pickImage(source: source);
    if (pickedFile != null) {
      final inputImage =
          InputImage.fromFilePath(pickedFile.path);
      setState(() {
        _inputImage = inputImage;
      });
      _processImage(_inputImage!);
    }
  }

  @override
  void dispose() {
    _poseDetector.close();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Pose Detection'),
      ),
      body: Column(
        children: [
          Expanded(
            child: Center(
              child: _inputImage == null
                  ? Text('No image selected.')
                  : Image.file(File(_inputImage!.filePath!)),
            ),
          ),
          if (_text != null) Text(_text!),
          Padding(
            padding: const EdgeInsets.all(16.0),
            child: Row(
              mainAxisAlignment: MainAxisAlignment.spaceAround,
              children: [
                ElevatedButton(
                  onPressed: () => _getImage(ImageSource.gallery),
                  child: Text('From Gallery'),
                ),
                ElevatedButton(
                  onPressed: () => _getImage(ImageSource.camera),
                  child: Text('From Camera'),
                ),
              ],
            ),
          ),
        ],
      ),
    );
  }
}

class PosePainter extends CustomPainter {
  PosePainter(this.poses, this.absoluteImageSize, this.rotation);

  final List<Pose> poses;
  final Size absoluteImageSize;
  final InputImageRotation rotation;

  @override
  void paint(Canvas canvas, Size size) {
    final Paint paint = Paint()
      ..style = PaintingStyle.stroke
      ..strokeWidth = 2.0
      ..color = Colors.red;

    for (final pose in poses) {
      for (final landmark in pose.landmarks.values) {
        // Landmark coordinates are in pixels of the original image.
        final point = Offset(landmark.x, landmark.y);
        canvas.drawCircle(point, 5.0, paint);
      }
    }
  }

  @override
  bool shouldRepaint(covariant PosePainter oldDelegate) =>
      oldDelegate.poses != poses;
}

This example shows how to integrate the google_mlkit_pose_detection plugin in a Flutter app: pick an image or take a photo, run pose detection, and display the result.


For more hands-on tutorials on the Flutter pose detection plugin google_mlkit_pose_detection, see https://www.itying.com/category-92-b0.html

1 Reply



Sure. Below is an example of how to use the google_mlkit_pose_detection plugin for pose detection in a Flutter project. The plugin uses Google's ML Kit library to detect human body poses.

First, add the google_mlkit_pose_detection dependency to your pubspec.yaml file:

dependencies:
  flutter:
    sdk: flutter
  google_mlkit_pose_detection: ^latest_version  # replace with the latest version number

Then run flutter pub get to install the dependency.

Next is the main Flutter example code implementing pose detection:

import 'dart:io';

import 'package:camera/camera.dart';
import 'package:flutter/material.dart';
import 'package:google_mlkit_pose_detection/google_mlkit_pose_detection.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: PoseDetectionScreen(),
    );
  }
}

class PoseDetectionScreen extends StatefulWidget {
  @override
  _PoseDetectionScreenState createState() => _PoseDetectionScreenState();
}

class _PoseDetectionScreenState extends State<PoseDetectionScreen> {
  final List<PoseLandmark> _landmarks = [];
  final PoseDetector _poseDetector =
      PoseDetector(options: PoseDetectorOptions());
  Size? _imageSize;
  CameraController? _controller;

  @override
  void initState() {
    super.initState();
    _initializeCamera();
  }

  Future<void> _initializeCamera() async {
    // Initialize the camera controller using the `camera` plugin.
    final cameras = await availableCameras();
    final controller = CameraController(cameras.first, ResolutionPreset.medium);
    await controller.initialize();
    if (!mounted) return;
    setState(() {
      _controller = controller;
    });
  }

  @override
  void dispose() {
    _controller?.dispose();
    _poseDetector.close();
    super.dispose();
  }

  Future<void> _processImage(String imagePath) async {
    // Build an InputImage from the captured file and run pose detection.
    final inputImage = InputImage.fromFilePath(imagePath);
    final poses = await _poseDetector.processImage(inputImage);

    // Decode the image once to get its dimensions, so the painter can
    // scale the landmark coordinates (which are in image pixels).
    final bytes = await File(imagePath).readAsBytes();
    final decoded = await decodeImageFromList(bytes);

    if (mounted && poses.isNotEmpty) {
      setState(() {
        _imageSize = Size(decoded.width.toDouble(), decoded.height.toDouble());
        _landmarks
          ..clear()
          ..addAll(poses.first.landmarks.values);
      });
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Pose Detection'),
      ),
      body: _controller != null && _controller!.value.isInitialized
          ? Center(
              child: Column(
                mainAxisAlignment: MainAxisAlignment.center,
                children: <Widget>[
                  AspectRatio(
                    aspectRatio: _controller!.value.aspectRatio,
                    child: CameraPreview(_controller!),
                  ),
                  if (_landmarks.isNotEmpty && _imageSize != null)
                    PoseOverlay(
                      landmarks: _landmarks,
                      imageSize: _imageSize!,
                      size: Size(_controller!.value.previewSize?.width ?? 0,
                          _controller!.value.previewSize?.height ?? 0),
                    ),
                ],
              ),
            )
          : Container(),
      floatingActionButton: FloatingActionButton(
        onPressed: () async {
          final controller = _controller;
          if (controller == null || !controller.value.isInitialized) return;
          // Capture a still picture and run pose detection on the saved file.
          final file = await controller.takePicture();
          await _processImage(file.path);
        },
        tooltip: 'Capture Image',
        child: Icon(Icons.camera_alt),
      ),
    );
  }
}

class PoseOverlay extends StatelessWidget {
  final List<PoseLandmark> landmarks;
  final Size imageSize;
  final Size size;

  PoseOverlay(
      {required this.landmarks, required this.imageSize, required this.size});

  @override
  Widget build(BuildContext context) {
    return CustomPaint(
      size: size,
      painter: PosePainter(landmarks: landmarks, imageSize: imageSize),
    );
  }
}

class PosePainter extends CustomPainter {
  final List<PoseLandmark> landmarks;
  final Size imageSize;

  PosePainter({required this.landmarks, required this.imageSize});

  @override
  void paint(Canvas canvas, Size size) {
    final paint = Paint()
      ..color = Colors.red
      ..strokeWidth = 4.0
      ..style = PaintingStyle.stroke;

    for (final landmark in landmarks) {
      // ML Kit returns landmark coordinates in pixels of the processed image,
      // so scale them to the canvas size before drawing.
      final position = Offset(
        landmark.x / imageSize.width * size.width,
        landmark.y / imageSize.height * size.height,
      );
      canvas.drawCircle(position, 5.0, paint);
    }
  }

  @override
  bool shouldRepaint(covariant CustomPainter oldDelegate) {
    return oldDelegate != this;
  }
}

Notes

  1. Camera initialization: the example keeps camera handling minimal and relies on the camera plugin (availableCameras / CameraController). Add the camera dependency to pubspec.yaml, configure the platform camera permissions, and handle camera selection in a real app.
  2. UI thread: make sure all UI updates happen on the main thread to avoid potential issues.
  3. Error handling: in a real app you should add more error handling for situations such as camera initialization failure or image processing failure.

This example shows how to use the google_mlkit_pose_detection plugin for basic human pose detection and how to draw the detected landmarks next to the camera preview. You can extend and modify it to fit your own needs.
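If you want to feed live camera frames instead of still captures, the frames delivered by the camera plugin's startImageStream have to be converted to an InputImage first. Below is a hedged sketch only, assuming an Android stream configured as ImageFormatGroup.nv21 and a fixed rotation; a real implementation must derive the format and rotation per platform (on iOS you would typically use bgra8888).

// Sketch: convert a CameraImage from `camera`'s startImageStream into an
// InputImage. Assumes NV21 on Android and a known rotation.
InputImage inputImageFromCameraImage(CameraImage image) {
  final plane = image.planes.first;
  return InputImage.fromBytes(
    bytes: plane.bytes,
    metadata: InputImageMetadata(
      size: Size(image.width.toDouble(), image.height.toDouble()),
      rotation: InputImageRotation.rotation0deg, // assumption: no rotation
      format: InputImageFormat.nv21,             // assumption: Android NV21
      bytesPerRow: plane.bytesPerRow,
    ),
  );
}

In stream mode you would also construct the PoseDetector with PoseDetectionMode.stream and throttle processing so frames are not queued faster than they can be detected.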
