Using the Flutter speech recognition plugin flutter_azure_speech_fix

Introduction

The plugin is published on pub.dev.

flutter_azure_speech_fix is a Flutter plugin for the Microsoft Azure Speech service. It supports the following features:

  • Speech to text [done]
  • Text to speech [in progress]

Getting Started

First, initialize the plugin with your subscription key and region.

Future<void> _initializeSpeechRecognition() async {
  try {
    await _flutterAzureSpeechPlugin.initialize(
        "YOUR_SUBSCRIPTION_KEY", "YOUR_REGION");
  } catch (e) {
    print('Error initializing speech recognition: $e');
  }
}
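
A subscription key should never be hardcoded or committed to source control. One alternative, as a minimal sketch, is to inject the credentials at build time with Flutter's `--dart-define` mechanism and read them via `String.fromEnvironment`; the define names `AZURE_SPEECH_KEY` and `AZURE_SPEECH_REGION` below are placeholders of my choosing, not part of the plugin:

```dart
// Run with:
// flutter run --dart-define=AZURE_SPEECH_KEY=xxxx --dart-define=AZURE_SPEECH_REGION=eastus

// Compile-time constants populated from the --dart-define values.
const String azureSpeechKey = String.fromEnvironment('AZURE_SPEECH_KEY');
const String azureSpeechRegion =
    String.fromEnvironment('AZURE_SPEECH_REGION', defaultValue: 'eastus');

Future<void> _initializeSpeechRecognition() async {
  try {
    // Same initialize call as above, with the key supplied at build time.
    await _flutterAzureSpeechPlugin.initialize(azureSpeechKey, azureSpeechRegion);
  } catch (e) {
    print('Error initializing speech recognition: $e');
  }
}
```

Because the values are compile-time constants, the key never appears as a literal in the repository, though it is still embedded in the built binary; for production apps, consider fetching a short-lived token from your own backend instead.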

Speech to Text

Call the getSpeechToText method to start the speech recognition process.

Future<void> _startSpeechRecognition() async {
  try {
    setState(() {
      _recognizedText = "Listening...";
    });

    String recognizedText =
        await _flutterAzureSpeechPlugin.getSpeechToText("zh-CN") ?? "";

    setState(() {
      _recognizedText = recognizedText;
    });
  } catch (e) {
    print('Error during speech recognition: $e');

    setState(() {
      _recognizedText = "An error occurred during speech recognition.";
    });
  }
}

Complete Example

The following complete example shows how to use the flutter_azure_speech_fix plugin in a Flutter application.

import 'package:flutter/material.dart';
import 'dart:async';

import 'package:flutter/services.dart';
import 'package:flutter_azure_speech_fix/flutter_azure_speech.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatefulWidget {
  const MyApp({super.key});

  @override
  State<MyApp> createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  String _platformVersion = 'Unknown';
  String _recognizedText = '';
  final _flutterAzureSpeechPlugin = FlutterAzureSpeech();

  @override
  void initState() {
    super.initState();
    initPlatformState();
    _initializeSpeechRecognition();
  }

  Future<void> _initializeSpeechRecognition() async {
    try {
      await _flutterAzureSpeechPlugin.initialize(
          "YOUR_SUBSCRIPTION_KEY", "eastus");
    } catch (e) {
      print('Error initializing speech recognition: $e');
    }
  }

  // Platform messages are asynchronous, so we initialize in an async method.
  Future<void> initPlatformState() async {
    String platformVersion;
    // Platform messages may fail, so we use a try/catch PlatformException.
    // We also handle the message potentially returning null.
    try {
      platformVersion = await _flutterAzureSpeechPlugin.getPlatformVersion() ??
          'Unknown platform version';
    } on PlatformException {
      platformVersion = 'Failed to get platform version.';
    }

    // If the widget was removed from the tree while the asynchronous platform
    // message was in flight, we want to discard the reply rather than calling
    // setState to update our non-existent appearance.
    if (!mounted) return;

    setState(() {
      _platformVersion = platformVersion;
    });
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: const Text('Speech to Text Demo'),
        ),
        body: Center(
          child: Column(
            mainAxisAlignment: MainAxisAlignment.center,
            children: <Widget>[
              Padding(
                padding: const EdgeInsets.all(16.0),
                child: Text(
                  _platformVersion,
                  style: TextStyle(fontSize: 18),
                  textAlign: TextAlign.center,
                ),
              ),
              Padding(
                padding: const EdgeInsets.all(16.0),
                child: Text(
                  _recognizedText,
                  style: TextStyle(fontSize: 18),
                  textAlign: TextAlign.center,
                ),
              ),
              ElevatedButton(
                child: Text('Start Speech Recognition'),
                onPressed: _startSpeechRecognition,
              ),
            ],
          ),
        ),
      ),
    );
  }

  Future<void> _startSpeechRecognition() async {
    try {
      setState(() {
        _recognizedText = "Listening...";
      });

      String recognizedText =
          await _flutterAzureSpeechPlugin.getSpeechToText("zh-CN") ?? "";

      setState(() {
        _recognizedText = recognizedText;
      });
    } catch (e) {
      print('Error during speech recognition: $e');

      setState(() {
        _recognizedText = "An error occurred during speech recognition.";
      });
    }
  }
}

More hands-on tutorials on using the Flutter speech recognition plugin flutter_azure_speech_fix are also available at https://www.itying.com/category-92-b0.html

1 Reply

Sure, here is a code example of how to use the flutter_azure_speech_fix plugin for speech recognition in a Flutter project. The plugin lets you use Azure's speech recognition service to convert speech to text.

First, make sure you have added the flutter_azure_speech_fix dependency to your pubspec.yaml file:

dependencies:
  flutter:
    sdk: flutter
  flutter_azure_speech_fix: ^x.y.z  # replace with the actual latest version number

Then run flutter pub get to install the dependency.

Next, create a Speech resource in the Azure portal and note its subscription key and region; this information is used to initialize the plugin.
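
If you prefer the command line over the portal, the Speech resource can also be created with the Azure CLI. The following is a sketch assuming the Azure CLI is installed and you have run `az login`; the resource and resource-group names are placeholders:

```shell
# Create a Speech resource (names and location are placeholders).
az cognitiveservices account create \
  --name my-speech-resource \
  --resource-group my-resource-group \
  --kind SpeechServices \
  --sku S0 \
  --location eastus

# List the subscription keys for the new resource.
az cognitiveservices account keys list \
  --name my-speech-resource \
  --resource-group my-resource-group
```

The `keys list` output contains the key1/key2 values that can be passed to the plugin's initialize call, together with the location you chose as the region.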

The following complete example shows how to perform speech recognition with the flutter_azure_speech_fix plugin:

import 'package:flutter/material.dart';
import 'package:flutter_azure_speech_fix/flutter_azure_speech_fix.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: SpeechRecognitionScreen(),
    );
  }
}

class SpeechRecognitionScreen extends StatefulWidget {
  @override
  _SpeechRecognitionScreenState createState() => _SpeechRecognitionScreenState();
}

class _SpeechRecognitionScreenState extends State<SpeechRecognitionScreen> {
  final String azureSubscriptionKey = 'YOUR_AZURE_SUBSCRIPTION_KEY';
  final String azureServiceRegion = 'YOUR_AZURE_SERVICE_REGION'; // e.g. 'westus'

  String recognizedText = '';

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Flutter Azure Speech Recognition'),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            Text(
              'Recognized Text:',
              style: TextStyle(fontSize: 20),
            ),
            SizedBox(height: 20),
            Text(
              recognizedText,
              style: TextStyle(fontSize: 24, fontWeight: FontWeight.bold),
              textAlign: TextAlign.center,
            ),
            SizedBox(height: 40),
            ElevatedButton(
              onPressed: startSpeechRecognition,
              child: Text('Start Recognition'),
            ),
          ],
        ),
      ),
    );
  }

  Future<void> startSpeechRecognition() async {
    // Create an AzureSpeechRecognition instance
    final azureSpeechRecognition = AzureSpeechRecognition(
      subscriptionKey: azureSubscriptionKey,
      serviceRegion: azureServiceRegion,
    );

    // Start speech recognition
    azureSpeechRecognition.startListening((result) {
      setState(() {
        recognizedText = result.recognizedText;
      });
    }).catchError((error) {
      print('Error: $error');
    }).whenComplete(() {
      azureSpeechRecognition.stopListening();
    });
  }
}

Notes:

  1. Permissions: on Android, add the microphone permission to AndroidManifest.xml.

    <uses-permission android:name="android.permission.RECORD_AUDIO"/>
    
  2. iOS permissions: on iOS, add a microphone usage description to Info.plist.

    <key>NSMicrophoneUsageDescription</key>
    <string>App needs access to microphone for speech recognition</string>
    
  3. Error handling: the example above includes only basic error handling; a real project will likely need more detailed error-handling logic.

  4. Dependency version: make sure you are using the latest version of the flutter_azure_speech_fix plugin, as its API may change between releases.

With the code above, you can integrate Azure's speech recognition service into a Flutter app and convert speech to text.
