Using the Flutter Subject Segmentation Plugin google_mlkit_subject_segmentation

Published 1 week ago · by ionicwang · in Flutter

Google's ML Kit Subject Segmentation is a Flutter plugin that makes it easy to separate multiple subjects (such as people, pets, or objects) from an image, enabling use cases like sticker creation, background replacement, or adding visual effects. This article walks through how to use the plugin in a Flutter project and provides a complete demo.

Plugin Features

  • Multi-subject segmentation: provides a separate mask and bitmap for each individual subject.
  • Subject recognition: supports objects, pets, and people.
  • On-device processing: all processing happens on the device, which protects user privacy and requires no network connection.

Note

This feature is currently available on Android only and is in Beta. Visit Google's official website for the latest information.

Requirements

Android

  • minSdkVersion: 24
  • targetSdkVersion: 33
  • compileSdkVersion: 34

Add the model download declaration to your AndroidManifest.xml file:

<application ...>
    ...
    <meta-data
        android:name="com.google.mlkit.vision.DEPENDENCIES"
        android:value="subject_segment" />
    <!-- To use multiple models, set: android:value="subject_segment,model2,model3" -->
</application>

Usage

Create an InputImage Instance

First, create an InputImage instance; refer to the official documentation for the available ways to construct one (from a file path, bytes, or a camera frame).

final InputImage inputImage;
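
For example, here is a minimal sketch that builds an InputImage from a file on disk using InputImage.fromFilePath (defined in the google_mlkit_commons package this plugin builds on); the path below is just a placeholder:

// Minimal sketch: create an InputImage from a picked or captured file.
// '/path/to/image.jpg' is a placeholder path.
final InputImage inputImage = InputImage.fromFilePath('/path/to/image.jpg');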

Create a SubjectSegmenter Instance

Next, create a SubjectSegmenter instance.

final options = SubjectSegmenterOptions();
final segmenter = SubjectSegmenter(options: options);
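
SubjectSegmenterOptions also lets you choose which outputs the segmenter should produce (the foreground confidence mask, the foreground bitmap, and per-subject results). The option and class names below are assumptions based on the plugin's current API and may differ between versions, so treat this as a sketch and verify it against the package documentation:

// Sketch (assumed option names; check the package docs for your version):
// explicitly request the foreground mask/bitmap and per-subject outputs.
final options = SubjectSegmenterOptions(
  enableForegroundConfidenceMask: true,
  enableForegroundBitmap: true,
  enableMultipleSubjects: SubjectResultOptions(
    enableConfidenceMask: true,
    enableSubjectBitmap: true,
  ),
);
final segmenter = SubjectSegmenter(options: options);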

Process the Image

Then call the processImage method to process the image.

final result = await segmenter.processImage(inputImage);
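
The result exposes the outputs you requested, for example a list of detected subjects. The field names used below (subjects, startX, startY, subjectWidth, subjectHeight, confidenceMask, bitmap) are assumptions based on the plugin's current API; treat this as a sketch and confirm them in the package documentation:

// Sketch (assumed field names; verify against the package docs).
for (final subject in result.subjects) {
  print('Subject at (${subject.startX}, ${subject.startY}), '
      'size ${subject.subjectWidth} x ${subject.subjectHeight}');
  // subject.confidenceMask and subject.bitmap are only populated when the
  // corresponding options were enabled on SubjectSegmenterOptions.
}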

Release Resources

Finally, remember to close the segmenter to release its resources.

segmenter.close();

Complete Example

Below is a simple, complete example showing how to use the google_mlkit_subject_segmentation plugin in a Flutter app.

import 'package:flutter/material.dart';
import 'package:google_mlkit_subject_segmentation/google_mlkit_subject_segmentation.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Subject Segmentation Demo',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: MyHomePage(),
    );
  }
}

class MyHomePage extends StatefulWidget {
  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  late final SubjectSegmenter _segmenter;

  @override
  void initState() {
    super.initState();
    final options = SubjectSegmenterOptions();
    _segmenter = SubjectSegmenter(options: options);
  }

  @override
  void dispose() {
    _segmenter.close();
    super.dispose();
  }

  Future<void> _processImage(InputImage inputImage) async {
    final result = await _segmenter.processImage(inputImage);
    // Handle the result, e.g., display it on the screen or modify the image based on the segmentation mask.
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Subject Segmentation Demo'),
      ),
      body: Center(
        child: ElevatedButton(
          onPressed: () async {
            // Example of how to get an InputImage from a file picker or camera.
            // You need to implement this part yourself.
            // final inputImage = await getImageFromSource();
            // await _processImage(inputImage);
          },
          child: Text('Process Image'),
        ),
      ),
    );
  }
}

For more hands-on tutorials about using the Flutter subject segmentation plugin google_mlkit_subject_segmentation, visit https://www.itying.com/category-92-b0.html

1 Reply

Here is an example of how to perform subject segmentation in a Flutter project with the google_mlkit_subject_segmentation plugin. The plugin uses Google's ML Kit to identify and separate the main subjects in an image.

First, make sure your Flutter project is set up and that the google_mlkit_subject_segmentation dependency has been added. You can add the following to your pubspec.yaml file:

dependencies:
  flutter:
    sdk: flutter
  google_mlkit_subject_segmentation: ^latest_version  # replace with the latest version number
  image_picker: ^latest_version  # used below to pick an image; replace with the latest version number

Then run flutter pub get to install the dependencies.

Next, you can implement subject segmentation in your Flutter app. Below is a complete example:

import 'dart:io';
import 'dart:typed_data';

import 'package:flutter/material.dart';
import 'package:google_mlkit_subject_segmentation/google_mlkit_subject_segmentation.dart';
import 'package:image_picker/image_picker.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: SubjectSegmentationScreen(),
    );
  }
}

class SubjectSegmentationScreen extends StatefulWidget {
  @override
  _SubjectSegmentationScreenState createState() => _SubjectSegmentationScreenState();
}

class _SubjectSegmentationScreenState extends State<SubjectSegmentationScreen> {
  final ImagePicker _picker = ImagePicker();
  File? _imageFile;
  Uint8List? _segmentedImage;

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Google ML Kit Subject Segmentation'),
      ),
      body: Padding(
        padding: const EdgeInsets.all(16.0),
        child: Column(
          children: [
            _imageFile == null
                ? Text('No image selected.')
                : Image.file(_imageFile!),
            SizedBox(height: 20),
            ElevatedButton(
              onPressed: _pickImage,
              child: Text('Pick Image'),
            ),
            SizedBox(height: 20),
            ElevatedButton(
              // Enable the button only after an image has been picked.
              onPressed: _imageFile != null ? _segmentImage : null,
              child: Text('Segment Image'),
            ),
            SizedBox(height: 20),
            if (_segmentedImage != null)
              Image.memory(_segmentedImage!),
          ],
        ),
      ),
    );
  }

  Future<void> _pickImage() async {
    final pickedFile = await _picker.pickImage(source: ImageSource.camera);

    if (pickedFile != null) {
      setState(() {
        _imageFile = File(pickedFile.path);
      });
    }
  }

  Future<void> _segmentImage() async {
    if (_imageFile == null) return;

    // Build the InputImage directly from the picked file's path.
    final inputImage = InputImage.fromFilePath(_imageFile!.path);

    // Ask the segmenter for the composited foreground bitmap so it can be
    // displayed directly with Image.memory. The option and result field names
    // below follow the plugin's current API as an assumption and may change
    // between versions; check the package documentation if they differ.
    final options = SubjectSegmenterOptions(
      enableForegroundConfidenceMask: false,
      enableForegroundBitmap: true,
      enableMultipleSubjects: SubjectResultOptions(
        enableConfidenceMask: false,
        enableSubjectBitmap: false,
      ),
    );
    final segmenter = SubjectSegmenter(options: options);

    try {
      final result = await segmenter.processImage(inputImage);

      // The foreground bitmap contains only the segmented subject(s), already
      // encoded as image bytes, so it can be handed to Image.memory as-is.
      // If you need pixel-level control (e.g. custom background colors),
      // request the confidence mask instead and composite it yourself.
      setState(() {
        _segmentedImage = result.foregroundBitmap;
      });
    } finally {
      // Always release the native detector when you are done with it.
      segmenter.close();
    }
  }
}

Notes

  1. This example is a simplified demonstration of subject segmentation. In a real project you will likely need more elaborate logic for handling the segmented output.
  2. Because of differences between Flutter and the native Android/iOS platforms, the image-processing part may need platform-specific adjustments. If you need to manipulate pixels yourself, you can request the confidence mask and composite it with the original image using a Dart image-processing library or platform channels.
  3. The example above simply displays the foreground bitmap returned by the segmenter; depending on your needs you may instead want to threshold a confidence mask (for example, treating values above 0.5 as foreground) and build your own output image.
  4. The google_mlkit_subject_segmentation API may change between versions; always consult the latest official documentation.

Hopefully this example helps you implement subject segmentation in your Flutter project!
