Using the Flutter Azure SDK voice plugin azure_sdk_voice

azure_sdk_voice is a plugin for integrating Azure Speech services into a Flutter app. It lets developers easily record and play back audio, score pronunciation, translate speech, and more.

Getting Started

This project is a starting point for a Flutter plugin package, containing platform-specific implementation code for Android and/or iOS.

For help with Flutter development, see the official Flutter documentation, which offers tutorials, samples, guidance on mobile development, and a full API reference.
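Note that recording and continuous translation use the device microphone, so the host app normally has to declare microphone access itself. The plugin's documentation does not state this, so treat the entries below as the standard Flutter platform configuration (standard Android/iOS permission keys), not plugin-specific requirements:

```xml
<!-- android/app/src/main/AndroidManifest.xml -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.INTERNET" />

<!-- ios/Runner/Info.plist -->
<key>NSMicrophoneUsageDescription</key>
<string>This app records audio for speech features.</string>
```

Without the iOS usage description, recording will crash the app on a real device; without RECORD_AUDIO on Android, capture silently fails or throws.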

Usage Example

The following complete example shows how to use the azure_sdk_voice plugin in a Flutter app.

import 'package:flutter/material.dart';
import 'dart:async';

import 'package:flutter/services.dart';
import 'package:azure_sdk_voice/azure_sdk_voice.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatefulWidget {
  const MyApp({super.key});

  @override
  State<MyApp> createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  String _statusText = 'Hold to record';

  final _azureSdkVoicePlugin = AzureSdkVoice();

  @override
  void initState() {
    super.initState();
    var key = "Your key";
    var region = "Your region";
    _azureSdkVoicePlugin
        .init(key, region)
        .then((value) => print("Initialization complete: $value"));
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: const Text('Azure SDK Voice Plugin Demo'),
        ),
        body: Center(
            child: ListView(
          children: [
            GestureDetector(
              onLongPressStart: (details) async {
                await _azureSdkVoicePlugin.startRecording("testfile");
                setState(() {
                  _statusText = 'Recording...';
                });
              },
              onLongPressEnd: (details) async {
                await _azureSdkVoicePlugin.stopRecording();
                setState(() {
                  _statusText = 'Hold to record';
                });
              },
              child: Container(
                padding: const EdgeInsets.all(20.0),
                decoration: BoxDecoration(
                  color: Colors.blue,
                  borderRadius: BorderRadius.circular(10.0),
                ),
                child: Text(
                  _statusText,
                  style: const TextStyle(color: Colors.white, fontSize: 20.0),
                ),
              ),
            ),
            Container(
              height: 50,
            ),
            GestureDetector(
              onTap: () async {
                var res = await _azureSdkVoicePlugin.playRecordedAudio("testfile");
                print(res);
              },
              child: Container(
                padding: const EdgeInsets.all(20.0),
                decoration: BoxDecoration(
                  color: Colors.blue,
                  borderRadius: BorderRadius.circular(10.0),
                ),
                child: const Text(
                  "Play recording",
                  style: TextStyle(color: Colors.white, fontSize: 20.0),
                ),
              ),
            ),
            Container(
              height: 50,
            ),
            GestureDetector(
              onTap: () async {
                var res = await _azureSdkVoicePlugin.pronunciationScore(
                    "testfile", "en-US", "");
                print(res);
              },
              child: Container(
                padding: const EdgeInsets.all(20.0),
                decoration: BoxDecoration(
                  color: Colors.blue,
                  borderRadius: BorderRadius.circular(10.0),
                ),
                child: const Text(
                  "Pronunciation score",
                  style: TextStyle(color: Colors.white, fontSize: 20.0),
                ),
              ),
            ),
            Container(
              height: 50,
            ),
            GestureDetector(
              onTap: () async {
                var res = await _azureSdkVoicePlugin.translate(
                    "testfile", "zh-cn", "en");
                print(res);
              },
              child: Container(
                padding: const EdgeInsets.all(20.0),
                decoration: BoxDecoration(
                  color: Colors.blue,
                  borderRadius: BorderRadius.circular(10.0),
                ),
                child: const Text(
                  "Translate",
                  style: TextStyle(color: Colors.white, fontSize: 20.0),
                ),
              ),
            ),
            Container(
              height: 50,
            ),
            GestureDetector(
              onTap: () async {
                var res = await _azureSdkVoicePlugin.startTranslateContinuous("zh-cn", "en",(res) {
                  print("Continuous translation result: $res");
                });
                print(res);
              },
              child: Container(
                padding: const EdgeInsets.all(20.0),
                decoration: BoxDecoration(
                  color: Colors.blue,
                  borderRadius: BorderRadius.circular(10.0),
                ),
                child: const Text(
                  "Start continuous translation",
                  style: TextStyle(color: Colors.white, fontSize: 20.0),
                ),
              ),
            ),
            Container(
              height: 50,
            ),
            GestureDetector(
              onTap: () async {
                var res = await _azureSdkVoicePlugin.stopTranslateContinuous();
                print(res);
              },
              child: Container(
                padding: const EdgeInsets.all(20.0),
                decoration: BoxDecoration(
                  color: Colors.blue,
                  borderRadius: BorderRadius.circular(10.0),
                ),
                child: const Text(
                  "Stop continuous translation",
                  style: TextStyle(color: Colors.white, fontSize: 20.0),
                ),
              ),
            ),
            Container(
              height: 50,
            ),
            GestureDetector(
              onTap: () async {
                String data = r"""
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-cn">
    <voice name="zh-cn-XiaomoNeural">
        <mstts:express-as style="sad" styledegree="2">
            快走吧,路上一定要注意安全,早去早回。
        </mstts:express-as>
    </voice>
</speak>
                """;
                var res = await _azureSdkVoicePlugin.speak(data,(res) {
                  print("Synthesis task result: $res");
                });
                print(res);
              },
              child: Container(
                padding: const EdgeInsets.all(20.0),
                decoration: BoxDecoration(
                  color: Colors.blue,
                  borderRadius: BorderRadius.circular(10.0),
                ),
                child: const Text(
                  "Synthesize speech",
                  style: TextStyle(color: Colors.white, fontSize: 20.0),
                ),
              ),
            ),
            Container(
              height: 50,
            ),
            GestureDetector(
              onTap: () async {
                var res = await _azureSdkVoicePlugin.speakStop();
                print(res);
              },
              child: Container(
                padding: const EdgeInsets.all(20.0),
                decoration: BoxDecoration(
                  color: Colors.blue,
                  borderRadius: BorderRadius.circular(10.0),
                ),
                child: const Text(
                  "Stop synthesis",
                  style: TextStyle(color: Colors.white, fontSize: 20.0),
                ),
              ),
            ),
          ],
        )),
      ),
    );
  }
}

Code Explanation

  • Initialize the plugin:

    var key = "Your key";
    var region = "Your region";
    _azureSdkVoicePlugin.init(key, region).then((value) => print("Initialization complete: $value"));
    
  • Start recording:

    _azureSdkVoicePlugin.startRecording("testfile");
    
  • Stop recording:

    _azureSdkVoicePlugin.stopRecording();
    
  • Play the recorded file:

    var res = await _azureSdkVoicePlugin.playRecordedAudio("testfile");
    
  • Get a pronunciation score:

    var res = await _azureSdkVoicePlugin.pronunciationScore("testfile", "en-US", "");
    
  • Translate speech:

    var res = await _azureSdkVoicePlugin.translate("testfile", "zh-cn", "en");
    
  • Start continuous translation:

    var res = await _azureSdkVoicePlugin.startTranslateContinuous("zh-cn", "en", (res) {
      print("Continuous translation result: $res");
    });
    
  • Stop continuous translation:

    var res = await _azureSdkVoicePlugin.stopTranslateContinuous();
    
  • Synthesize speech:

    String data = r"""
    <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-cn">
        <voice name="zh-cn-XiaomoNeural">
            <mstts:express-as style="sad" styledegree="2">
                快走吧,路上一定要注意安全,早去早回。
            </mstts:express-as>
        </voice>
    </speak>
    """;
    var res = await _azureSdkVoicePlugin.speak(data, (res) {
      print("Synthesis task result: $res");
    });
    
  • Stop speech synthesis:

    var res = await _azureSdkVoicePlugin.speakStop();
    

For more hands-on tutorials on the Flutter Azure SDK voice plugin azure_sdk_voice, visit https://www.itying.com/category-92-b0.html

1 Reply


In Flutter, you can work with Azure speech services through the azure_sdk_voice plugin. The following is a basic guide to integrating and using the plugin in a Flutter project.

1. Add the dependency

First, add the azure_sdk_voice dependency to your pubspec.yaml file.

dependencies:
  flutter:
    sdk: flutter
  azure_sdk_voice: ^1.0.0  # check for the latest version

Then run flutter pub get to install it.

2. Initialize the Azure SDK

Before using the Azure SDK, initialize it with your Azure resource details. You typically need to provide your Azure subscription key and region.

import 'package:azure_sdk_voice/azure_sdk_voice.dart';

void initAzureSDK() {
  AzureVoiceSdk.init(
    subscriptionKey: 'your-azure-subscription-key',
    region: 'your-azure-region',  // e.g. 'westus'
  );
}

3. Speech Recognition

Use AzureVoiceSdk for speech recognition. The following simple example shows how to convert speech to text.

void recognizeSpeech() async {
  try {
    String audioFilePath = 'path/to/your/audio/file.wav';
    String recognizedText = await AzureVoiceSdk.recognizeSpeech(audioFilePath);
    print('Recognized Text: $recognizedText');
  } catch (e) {
    print('Error recognizing speech: $e');
  }
}

4. Speech Synthesis

Use AzureVoiceSdk for text-to-speech synthesis. The following simple example shows how to convert text to speech and save it as an audio file.

void synthesizeSpeech() async {
  try {
    String text = 'Hello, this is a test for Azure Speech SDK.';
    String outputFilePath = 'path/to/output/file.wav';
    await AzureVoiceSdk.synthesizeSpeech(text, outputFilePath);
    print('Speech synthesized and saved to $outputFilePath');
  } catch (e) {
    print('Error synthesizing speech: $e');
  }
}

5. Processing an Audio Stream

If you need to process audio in real time, you can use the streaming API provided by AzureVoiceSdk. The following simple example shows streaming speech recognition.

void recognizeSpeechFromStream() async {
  try {
    Stream<List<int>> audioStream = getAudioStreamFromSomewhere(); // placeholder: supply your own audio source
    String recognizedText = await AzureVoiceSdk.recognizeSpeechFromStream(audioStream);
    print('Recognized Text: $recognizedText');
  } catch (e) {
    print('Error recognizing speech from stream: $e');
  }
}