Using the Flutter real-time speech-to-text plugin livespeechtotext

Posted 1 week ago by yibo5220, in Flutter


Live Speech-To-Text

Easily add speech recognition to your Flutter app and enable voice-control features. This plugin makes it simple to convert spoken words into text.

SDK

  • Android

    • MinSDK 21 / CompileSDK 31
    • In android/app/build.gradle, read these values from android/local.properties, falling back to the following defaults:
      def flutterMinSdkVersion = localProperties.getProperty('flutter.minSdkVersion')
      if (flutterMinSdkVersion == null) {
          flutterMinSdkVersion = '21'
      }
      
      def flutterCompileSdkVersion = localProperties.getProperty('flutter.compileSdkVersion')
      if (flutterCompileSdkVersion == null) {
          flutterCompileSdkVersion = '31'
      }
      
    • Then, in the android block of android/app/build.gradle, apply them:
      android {
          ...
          compileSdkVersion flutterCompileSdkVersion.toInteger()
          ...
      
          defaultConfig {
            ...
            minSdkVersion flutterMinSdkVersion.toInteger()
            ...
          }
      }
      
  • iOS

    • This package has been tested against an iOS 13 target.

Permissions

It is strongly recommended to use the permission_handler: ^9.2.0 package to make sure the user has granted the required permissions (a helper sketch follows at the end of this section).

  • Android

    • Add to android/app/src/{debug,main,profile}/AndroidManifest.xml:
      <uses-permission android:name="android.permission.RECORD_AUDIO" />
      <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
      
  • iOS

    • Add to ios/Runner/Info.plist:
      <key>NSMicrophoneUsageDescription</key>
      <string>Your speech input is required for the speech-to-text feature</string>
      <key>NSSpeechRecognitionUsageDescription</key>
      <string>Allow the app to get text input from your speech</string>
      
    • In ios/Podfile:
      • First, refer to the setup guide (the permission_handler Podfile instructions).
      • Uncomment the following lines:
        ## dart: PermissionGroup.microphone
        'PERMISSION_MICROPHONE=1',
        
        ## dart: PermissionGroup.speech
        'PERMISSION_SPEECH_RECOGNIZER=1',
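
As a reference, here is a minimal sketch of the permission check mentioned above. It assumes permission_handler ^9.x and the permissions declared earlier in this section; the helper name ensureSpeechPermissions is just for illustration:

import 'dart:io';

import 'package:permission_handler/permission_handler.dart';

// Returns true once the permissions needed for live speech-to-text are granted.
Future<bool> ensureSpeechPermissions() async {
  // Microphone access is required on both platforms.
  var mic = await Permission.microphone.status;
  if (!mic.isGranted) {
    mic = await Permission.microphone.request();
    if (!mic.isGranted) return false;
  }

  // iOS additionally needs the speech recognition permission.
  if (Platform.isIOS) {
    var speech = await Permission.speech.status;
    if (!speech.isGranted) {
      speech = await Permission.speech.request();
      if (!speech.isGranted) return false;
    }
  }

  return true;
}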
        

Troubleshooting

  • Android Emulator (without Android Studio) does not receive microphone input on macOS 13
    • Open Activity Monitor and look for a process named something like 'qemu' to get the full process name.
    • Run the command:
      sudo defaults write com.apple.security.device.microphone qemu-system-aarch64 -bool true
      
    • Restart the emulator.

Example Code

Below is a complete example demonstrating how to use the livespeechtotext plugin:

import 'dart:developer';
import 'dart:io';

import 'package:flutter/material.dart';
import 'dart:async';

import 'package:flutter/services.dart';
import 'package:livespeechtotext/livespeechtotext.dart';
import 'package:permission_handler/permission_handler.dart';

void main() {
  WidgetsFlutterBinding.ensureInitialized();

  runApp(const MyApp());
}

class MyApp extends StatefulWidget {
  const MyApp({super.key});

  @override
  State<MyApp> createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  late Livespeechtotext _livespeechtotextPlugin;
  late String _recognisedText;
  String? _localeDisplayName = '';
  StreamSubscription<dynamic>? onSuccessEvent;

  bool microphoneGranted = false;

  @override
  void initState() {
    super.initState();
    _livespeechtotextPlugin = Livespeechtotext();

    _livespeechtotextPlugin.getLocaleDisplayName().then((value) => setState(
          () => _localeDisplayName = value,
        ));

    _recognisedText = '';
  }

  @override
  void dispose() {
    onSuccessEvent?.cancel();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: const Text('Live Speech To Text'),
        ),
        body: Center(
          child: Column(
            children: [
              Text(_recognisedText),
              if (!microphoneGranted)
                ElevatedButton(
                  onPressed: () {
                    binding();
                  },
                  child: const Text("Check Permissions"),
                ),
              ElevatedButton(
                  onPressed: microphoneGranted
                      ? () {
                          print("start button pressed");
                          try {
                            _livespeechtotextPlugin.start();
                          } on PlatformException {
                            print('error');
                          }
                        }
                      : null,
                  child: const Text('Start')),
              ElevatedButton(
                  onPressed: microphoneGranted
                      ? () {
                          print("stop button pressed");
                          try {
                            _livespeechtotextPlugin.stop();
                          } on PlatformException {
                            print('error');
                          }
                        }
                      : null,
                  child: const Text('Stop')),
              Text("Locale: $_localeDisplayName"),
            ],
          ),
        ),
      ),
    );
  }

  Future<dynamic> binding() async {
    onSuccessEvent?.cancel();

    return Future.wait([]).then((_) async {
      // Check if the user has already granted microphone permission.
      var permissionStatus = await Permission.microphone.status;

      // If the user has not granted permission, prompt them for it.
      if (!permissionStatus.isGranted) {
        await Permission.microphone.request();

        // Check if the user has already granted the permission.
        permissionStatus = await Permission.microphone.status;

        if (!permissionStatus.isGranted) {
          return Future.error('Microphone access denied');
        }
      }

      // Check if the user has already granted speech permission.
      if (Platform.isIOS) {
        var speechStatus = await Permission.speech.status;

        // If the user has not granted permission, prompt them for it.
        if (!speechStatus.isGranted) {
          await Permission.speech.request();

          // Check if the user has already granted the permission.
          speechStatus = await Permission.speech.status;

          if (!speechStatus.isGranted) {
            return Future.error('Speech access denied');
          }
        }
      }

      return Future.value(true);
    }).then((value) {
      microphoneGranted = true;

      // listen to event "success"
      onSuccessEvent =
          _livespeechtotextPlugin.addEventListener("success", (value) {
        if (value.runtimeType != String) return;
        if ((value as String).isEmpty) return;

        setState(() {
          _recognisedText = value;
        });
      });

      setState(() {});
    }).onError((error, stackTrace) {
      // toast
      log(error.toString());
      // open app setting
    });
  }
}

This example shows how to initialize the plugin, request the necessary permissions, start and stop speech recognition, and display the recognized text. Hopefully it helps you better understand and use the livespeechtotext plugin.
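
If you only need the core flow, the sketch below condenses the calls used in the full example above (Livespeechtotext(), addEventListener("success", ...), start() and stop()); the helper name listenAndStart is just for illustration, and permissions are assumed to be granted already:

import 'dart:async';

import 'package:flutter/services.dart';
import 'package:livespeechtotext/livespeechtotext.dart';

// Subscribes to the "success" event and starts live recognition.
// Cancel the returned subscription and call plugin.stop() when done.
StreamSubscription<dynamic>? listenAndStart(
  Livespeechtotext plugin,
  void Function(String text) onText,
) {
  // The "success" event delivers the recognised text as a String.
  final StreamSubscription<dynamic>? subscription =
      plugin.addEventListener("success", (value) {
    if (value is String && value.isNotEmpty) onText(value);
  });

  try {
    plugin.start(); // begin live recognition
  } on PlatformException {
    subscription?.cancel();
    return null;
  }

  return subscription;
}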


For more hands-on tutorials on using the Flutter real-time speech-to-text plugin livespeechtotext, you can also visit https://www.itying.com/category-92-b0.html

1 Reply



Sure, below is an example of how to implement real-time speech-to-text in a Flutter app with the livespeechtotext plugin. The plugin lets you access the device microphone and convert captured speech into text in real time.

First, make sure you have added the livespeechtotext dependency to your pubspec.yaml file:

dependencies:
  flutter:
    sdk: flutter
  livespeechtotext: ^x.y.z  # replace with the latest version number

Then run flutter pub get to install the dependency.

Next, in your Flutter app, you can use the livespeechtotext plugin as follows:

  1. Import the plugin:
import 'package:livespeechtotext/livespeechtotext.dart';
  2. Initialize the plugin and request microphone permission

In your main page, or in any page that needs live speech-to-text, create a LiveSpeechToText instance and request microphone permission.

class MyApp extends StatefulWidget {
  @override
  _MyAppState createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  final LiveSpeechToText _liveSpeechToText = LiveSpeechToText();
  bool _isListening = false;
  String _recognizedText = '';

  @override
  void initState() {
    super.initState();
    _initLiveSpeechToText();
  }

  Future<void> _initLiveSpeechToText() async {
    bool hasPermission = await _liveSpeechToText.hasPermission();
    if (!hasPermission) {
      bool requestPermission = await _liveSpeechToText.requestPermission();
      if (!requestPermission) {
        // Handle the case where permission was denied
        print('Permission for microphone was denied');
      }
    }

    // Optional: set the recognition language (defaults to English (US))
    _liveSpeechToText.setRecognitionLanguage("en-US");

    // Optional: listen for recognition results
    _liveSpeechToText.listen((result) {
      setState(() {
        _recognizedText = result.recognizedWords;
      });
    });

    // Optional: listen for recognition state changes
    _liveSpeechToText.onRecognitionStarted = _onRecognitionStarted;
    _liveSpeechToText.onRecognitionCompleted = _onRecognitionCompleted;
    _liveSpeechToText.onRecognitionError = _onRecognitionError;
    _liveSpeechToText.onVolumeChanged = _onVolumeChanged;
  }

  void _startListening() async {
    setState(() {
      _isListening = true;
    });
    await _liveSpeechToText.startListening();
  }

  void _stopListening() async {
    setState(() {
      _isListening = false;
    });
    await _liveSpeechToText.stopListening();
  }

  void _onRecognitionStarted() {
    print('Recognition started');
  }

  void _onRecognitionCompleted() {
    print('Recognition completed');
  }

  void _onRecognitionError(String errorMessage) {
    print('Recognition error: $errorMessage');
  }

  void _onVolumeChanged(double volume) {
    print('Volume changed: $volume');
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: Text('Real-time Speech to Text'),
        ),
        body: Center(
          child: Column(
            mainAxisAlignment: MainAxisAlignment.center,
            children: <Widget>[
              Text(
                'Recognized Text: $_recognizedText',
                style: TextStyle(fontSize: 20),
              ),
              SizedBox(height: 20),
              ElevatedButton(
                onPressed: _isListening ? _stopListening : _startListening,
                child: Text(_isListening ? 'Stop Listening' : 'Start Listening'),
              ),
            ],
          ),
        ),
      ),
    );
  }
}
  3. Run the app

Make sure your device or emulator supports microphone access, then run your Flutter app. Tap the button to start or stop listening to the microphone, and watch the live-transcribed text appear on screen.

This example shows how to initialize the livespeechtotext plugin, request microphone permission, listen to speech input, and handle the recognition results. You can customize and extend it further as needed.
