Using the Flutter audio analysis plugin sound_analysis

Note: Sound Analysis is only supported on iOS 15 and later.
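Because the plugin only works on iOS, it can help to guard plugin calls at runtime on other platforms. Below is a minimal sketch using dart:io; `safeKnownClassifications` is a hypothetical helper name, and checking for iOS 15 specifically would additionally require something like the device_info_plus package:

```dart
import 'dart:io' show Platform;

import 'package:sound_analysis/sound_analysis.dart';

// Hypothetical helper: returns the known classifications, or an empty
// list on platforms where the Sound Analysis framework is unavailable.
Future<List<String>> safeKnownClassifications() async {
  if (!Platform.isIOS) return <String>[];
  return SoundAnalysis.knownClassifications(
      SoundAnalysis.SNClassifierIdentifier_version1);
}
```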
Installation
- Open a terminal.
- Change into your project directory:
cd [your project folder]
- Add the sound_analysis plugin:
flutter pub add sound_analysis
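If you plan to analyze microphone input rather than files (the streaming usage described later in this article), iOS also requires a microphone usage description, which is a standard iOS requirement added to ios/Runner/Info.plist (the description string below is just an example):

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app analyzes microphone audio to classify sounds.</string>
```

File-based analysis, as in the example below, does not need this entry.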
Usage
The following complete example shows how to use the sound_analysis plugin to analyze an audio file.
import 'dart:async';
import 'dart:io';

import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
import 'package:path/path.dart' as path;
import 'package:path_provider/path_provider.dart';
import 'package:sound_analysis/sound_analysis.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatefulWidget {
  const MyApp({Key? key}) : super(key: key);

  @override
  State<MyApp> createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  String _platformVersion = 'Unknown';

  @override
  void initState() {
    super.initState();
    initPlatformState();
  }

  // Initialize platform state asynchronously.
  Future<void> initPlatformState() async {
    String platformVersion;
    try {
      // Get the platform version.
      platformVersion =
          await SoundAnalysis.platformVersion ?? 'Unknown platform version';

      // Get the sound classifications the classifier can recognize.
      List<String> audios = await SoundAnalysis.knownClassifications(
          SoundAnalysis.SNClassifierIdentifier_version1);
      print("Recognizable sounds === $audios");

      // Get the application documents directory.
      Directory directory = await getApplicationDocumentsDirectory();
      var videoFilePath = path.join(directory.path, "t2.mp4");
      File file = File(videoFilePath);

      // If the file does not exist, load it from assets and save it to the target path.
      if (!file.existsSync()) {
        ByteData data = await rootBundle.load("assets/t2.mp4");
        List<int> bytes =
            data.buffer.asUint8List(data.offsetInBytes, data.lengthInBytes);
        file.writeAsBytesSync(bytes, flush: true);
      }

      // Analyze the audio file.
      List<Map<String, dynamic>> clips = await SoundAnalysis.analyzeAudioFile(
          SoundAnalysis.SNClassifierIdentifier_version1, videoFilePath);
      print("Recognized audio clips: $clips");
    } on PlatformException {
      platformVersion = 'Failed to get platform version.';
    }

    // Don't update the UI if the widget has been unmounted.
    if (!mounted) return;
    setState(() {
      _platformVersion = platformVersion;
    });
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: const Text('Plugin example app'),
        ),
        body: Center(
          child: Text('Running on: $_platformVersion\n'),
        ),
      ),
    );
  }
}
Output
Running the code above prints the following to the console:
flutter: Recognizable sounds === [speech, shout, yell, battle_cry, children_shouting, screaming, whispering, laughter, baby_laughter, giggling, snicker, belly_laugh, chuckle_chortle, crying_sobbing, baby_crying, sigh, singing, choir_singing, yodeling, rapping, humming, whistling, breathing, snoring, gasp, cough, sneeze, nose_blowing, person_running, person_shuffling, person_walking, chewing, biting, gargling, burp, hiccup, slurp, finger_snapping, clapping, cheering, applause, booing, chatter, crowd, babble, dog, dog_bark, dog_howl, dog_bow_wow, dog_growl, dog_whimper, cat, cat_purr, cat_meow, horse_clip_clop, horse_neigh, cow_moo, pig_oink, sheep_bleat, fowl, chicken, chicken_cluck, rooster_crow, turkey_gobble, duck_quack, goose_honk, lion_roar, bird, bird_vocalization, bird_chirp_tweet, bird_squawk, pigeon_dove_coo, crow_caw, owl_hoot, bird_flapping, insect, cricket_chirp, mosquito_buzz, fly_buzz, bee_buzz, frog, frog_croak, snake_hiss, snake_rattle, whale_vocalization, coyote_howl, elk_bugle<…>
flutter: Recognized audio clips: [{confidence: 0.8910837173461914, audioKey: humming, duration: 3.0, startAt: 0.0}, {confidence: 0.9592931270599365, audioKey: humming, startAt: 1.5, duration: 3.0}, {confidence: 0.7030519843101501, startAt: 3.0, duration: 3.0, audioKey: laughter}, {audioKey: humming, confidence: 0.42257529497146606, duration: 3.0, startAt: 4.5}, {audioKey: humming, startAt: 6.0, duration: 3.0, confidence: 0.9056249260902405}, {confidence: 0.949840247631073, audioKey: laughter, duration: 3.0, startAt: 7.5}]
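Each entry in the returned list is a map with audioKey, confidence, startAt, and duration keys. As a small post-processing sketch (plain Dart, independent of the plugin; `confidentClips` is a name chosen here for illustration), you could drop low-confidence detections:

```dart
// Keeps only detections whose confidence meets the threshold.
List<Map<String, dynamic>> confidentClips(
  List<Map<String, dynamic>> clips, {
  double threshold = 0.8,
}) {
  return clips
      .where((clip) => (clip['confidence'] as double) >= threshold)
      .toList();
}

void main() {
  final clips = <Map<String, dynamic>>[
    {'confidence': 0.89, 'audioKey': 'humming', 'startAt': 0.0, 'duration': 3.0},
    {'confidence': 0.42, 'audioKey': 'humming', 'startAt': 4.5, 'duration': 3.0},
  ];
  // Only the 0.89 clip passes the default 0.8 threshold.
  print(confidentClips(clips).map((c) => c['audioKey']).toList()); // [humming]
}
```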
More hands-on tutorials on the Flutter audio analysis plugin sound_analysis are available at https://www.itying.com/category-92-b0.html
sound_analysis is a Flutter plugin for audio analysis. It lets you capture audio data from the device's microphone or from audio files and analyze it, and is typically used for real-time audio processing, audio feature extraction, and similar scenarios.
The basic steps for using the sound_analysis plugin are:
1. Add the dependency
First, add the sound_analysis dependency to your pubspec.yaml file:
dependencies:
  flutter:
    sdk: flutter
  sound_analysis: ^0.0.1 # use the latest version
Then run flutter pub get to fetch the dependency.
2. Import the plugin
Import the sound_analysis plugin in your Dart file:
import 'package:sound_analysis/sound_analysis.dart';
3. Initialize the audio analyzer
You can initialize the audio analyzer through the SoundAnalysis class. You typically specify the audio source (microphone or audio file) and the analyzer configuration.
final soundAnalysis = SoundAnalysis();
4. Configure the audio analyzer
You can configure the analyzer for the audio data you want to process, for example by setting the sample rate and channel count.
await soundAnalysis.configure(
  sampleRate: 44100,
  channelCount: 1,
  audioSource: AudioSource.microphone, // or AudioSource.file
);
5. Start audio analysis
Call the startAnalysis method to begin analyzing audio. You can pass a callback to handle analysis results.
soundAnalysis.startAnalysis((analysisResult) {
  // Handle the analysis result.
  print('Analysis Result: $analysisResult');
});
6. Stop audio analysis
When you no longer need to analyze audio, call the stopAnalysis method to stop:
await soundAnalysis.stopAnalysis();
7. Handle analysis results
In the callback passed to startAnalysis, you can process the results. analysisResult typically contains time-domain or frequency-domain audio data, which you can process further as needed.
soundAnalysis.startAnalysis((analysisResult) {
  // For example, print the amplitude of the audio data.
  print('Amplitude: ${analysisResult.amplitude}');
});
8. Release resources
When you are done with the audio analyzer, remember to release its resources:
await soundAnalysis.dispose();
Example code
The following complete example shows how to use the sound_analysis plugin for audio analysis:
import 'package:flutter/material.dart';
import 'package:sound_analysis/sound_analysis.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return const MaterialApp(
      home: AudioAnalysisScreen(),
    );
  }
}

class AudioAnalysisScreen extends StatefulWidget {
  const AudioAnalysisScreen({Key? key}) : super(key: key);

  @override
  _AudioAnalysisScreenState createState() => _AudioAnalysisScreenState();
}

class _AudioAnalysisScreenState extends State<AudioAnalysisScreen> {
  final SoundAnalysis soundAnalysis = SoundAnalysis();

  @override
  void initState() {
    super.initState();
    _startAudioAnalysis();
  }

  // Configure the analyzer, then start streaming analysis results.
  Future<void> _startAudioAnalysis() async {
    await soundAnalysis.configure(
      sampleRate: 44100,
      channelCount: 1,
      audioSource: AudioSource.microphone,
    );
    soundAnalysis.startAnalysis((analysisResult) {
      print('Analysis Result: $analysisResult');
    });
  }

  @override
  void dispose() {
    soundAnalysis.stopAnalysis();
    soundAnalysis.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('Audio Analysis'),
      ),
      body: const Center(
        child: Text('Audio analysis is running...'),
      ),
    );
  }
}