Using the Flutter video streaming plugin vdotok_stream
Introduction
vdotok_stream is a Flutter plugin for handling real-time video and audio streams. It supports features such as one-to-one calls, multi-party calls, and screen sharing. This article walks through how to use the plugin and provides complete example code.
iOS configuration
In your iOS project, add the following permission descriptions to the Info.plist file:
<key>NSCameraUsageDescription</key>
<string>$(PRODUCT_NAME) Camera Usage!</string>
<key>NSMicrophoneUsageDescription</key>
<string>$(PRODUCT_NAME) Microphone Usage!</string>
Android configuration
In your Android project, make sure the following permissions are declared in the AndroidManifest.xml file:
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.VIBRATE" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WAKE_LOCK" />
<uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" />
<service android:name="changjoopark.com.flutter_foreground_plugin.FlutterForegroundService"
android:foregroundServiceType="mediaProjection"
android:enabled="true"
android:exported="false"/>
Also, add a jniLibs folder at android/app/src/main/jniLibs in your project and extract jniLibs.zip into it.
Download: jniLibs.zip
Creating a client instance
First, obtain a SignalingClient instance:
SignalingClient signalingClient = SignalingClient.instance;
Adding listeners
The main callbacks and their purposes are listed below:
signalingClient.onConnect = (String response) {
// Called when the socket connects successfully
};
signalingClient.onRegister = (Map<String, dynamic> response) {
// Called when the user registers successfully
};
signalingClient.onError = (int code, String reason) {
// Called when an error occurs
};
signalingClient.onLocalStream = (MediaStream stream) {
// Called when the local media stream is fully ready
};
signalingClient.onRemoteStream = (MediaStream stream, String refId) {
// Called when a remote media stream is received from the other party
};
signalingClient.onReceiveCallFromUser = (Map<String, dynamic> incoming, bool isMultiSession) {
// Called when an incoming call is received
};
signalingClient.onParticipantsLeft = (String refId, bool isReceive, bool isMultiSession) {
// Called when a participant leaves the call
};
signalingClient.onCallAcceptedByUser = () {
// Called when a participant accepts the call
};
signalingClient.onCallHungUpByUser = (bool isLocal) {
// Called when either party hangs up the call
};
signalingClient.onCallBusyCallback = () {
// Called when the caller receives a busy signal
};
signalingClient.onAudioVideoStateInfo = (int audioFlag, int videoFlag, String refId) {
// Called when the audio or video state changes
};
signalingClient.onTargetAlerting = () {
// Called on the caller's side when the callee's device starts ringing
};
signalingClient.onAddparticpant = (int participantCount, String calltype) {
// Called when a new participant joins
};
signalingClient.unRegisterSuccessfullyCallBack = () {
// Called when the user unregisters successfully
};
signalingClient.onReceiveUrlCallback = (String url) {
// Called when the public URL is received during a public broadcast
};
signalingClient.internetConnectivityCallBack = (String mesg) {
// Called when the user's network connects or disconnects
};
signalingClient.onMissedCall = (String mesg) {
// Called when a call is missed
};
signalingClient.onCallDial = () {
// Called when a call is dialed
};
signalingClient.onHvInfo = () {
// Called when the user receives HV info
};
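All of these listeners follow the same pattern: nullable function-valued fields that your app assigns before connecting. The stand-in class below is a self-contained sketch of that pattern only; FakeSignalingClient and its simulate* methods are illustrative and are not part of the plugin.

```dart
// Sketch of the callback-field pattern used by SignalingClient.
// FakeSignalingClient is a stand-in for illustration only; it simply
// shows when assigned callbacks are invoked.
typedef ErrorCallback = void Function(int code, String reason);

class FakeSignalingClient {
  void Function(String response)? onConnect;
  ErrorCallback? onError;

  // Pretend socket events, to demonstrate the dispatch.
  void simulateConnect() => onConnect?.call('connected');
  void simulateError() => onError?.call(408, 'Request Timeout');
}

List<String> runDemo() {
  final log = <String>[];
  final client = FakeSignalingClient();
  client.onConnect = (resp) => log.add('connect:$resp');
  client.onError = (code, reason) => log.add('error:$code');
  client.simulateConnect();
  client.simulateError();
  return log;
}

void main() {
  print(runDemo()); // [connect:connected, error:408]
}
```

Assigning the real plugin's callbacks works the same way: set each field on the SignalingClient instance before calling connect, so no early event is missed.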
Model classes
Several commonly used model classes:
// Login/registration model
class User {
final String auth_token;
final String authorization_token;
final String email;
final String full_name;
final String message;
final int process_time;
final String ref_id;
final int status;
final int user_id;
}
// Group model
class GroupModel {
dynamic admin_id;
dynamic auto_created;
dynamic channel_key;
dynamic channel_name;
dynamic group_title;
dynamic id;
dynamic created_datetime;
}
// Model for fetching all users
class Contact {
int user_id;
dynamic email;
String ref_id;
String full_name;
}
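These models are populated from the server's JSON responses. A minimal sketch of how that mapping might look for Contact follows; the constructor and fromJson factory are illustrative assumptions, not the plugin's actual implementation (only the field names are taken from the model above).

```dart
// Illustrative sketch: a Contact data class with a fromJson factory.
// Field names mirror the Contact model above; the constructor and the
// factory are assumptions, not plugin code.
class Contact {
  final int user_id;
  final dynamic email;
  final String ref_id;
  final String full_name;

  Contact({
    required this.user_id,
    this.email,
    required this.ref_id,
    required this.full_name,
  });

  factory Contact.fromJson(Map<String, dynamic> json) => Contact(
        user_id: json['user_id'] as int,
        email: json['email'],
        ref_id: json['ref_id'] as String,
        full_name: json['full_name'] as String,
      );
}

void main() {
  final c = Contact.fromJson({
    'user_id': 7,
    'email': 'alice@example.com',
    'ref_id': 'ABCD1234',
    'full_name': 'Alice',
  });
  print('${c.user_id} ${c.full_name}'); // 7 Alice
}
```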
Constant classes
The constant classes are defined as follows:
// Media types
class MediaType {
static String video = "video";
static String audio = "audio";
}
// Call types
class CallType {
static String one2one = "one_to_one";
static String one2many = "one_to_many";
static String many2many = "many_to_many";
}
// Session types
class SessionType {
static String call = "call";
static String screen = "screen";
}
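In practice these string constants are passed together when dialing. The self-contained sketch below shows the combination; the callParams helper and the map layout are illustrative assumptions for demonstration, not a plugin API or wire format.

```dart
// Illustrative only: combining the constant classes above into the set of
// string arguments that the dialing methods expect. callParams is an
// assumption for demonstration, not part of the plugin.
class MediaType {
  static String video = "video";
  static String audio = "audio";
}

class CallType {
  static String one2one = "one_to_one";
  static String many2many = "many_to_many";
}

class SessionType {
  static String call = "call";
}

Map<String, String> callParams(String media, String type, String session) => {
      'mediaType': media,
      'callType': type,
      'sessionType': session,
    };

void main() {
  final p = callParams(MediaType.video, CallType.one2one, SessionType.call);
  print(p); // {mediaType: video, callType: one_to_one, sessionType: call}
}
```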
SDK methods
Connecting to the server
Use this method to connect to the server:
signalingClient.connect(
String deviceId, // any string
String projectId,
String completeAddress, // address from the login/registration response
String authorization_token, // authorization token from the login/registration response
String ref_id // ref_id from the login/registration response
);
Dialing a call
One-to-one call
signalingClient.startCallOneToOne(
String from, // your own refId
Map<String, dynamic> customData, // custom data
List<String> to, // list of callee refIds
String mediaType, // "audio" or "video"
String mcToken, // mcToken from the registration response
String callType, // "one_to_one"
String sessionType // "call"
);
Many-to-many call
signalingClient.startCall(
Map<String, dynamic> customData, // custom data
String from, // your own refId
List<String> to, // list of callee refIds
String mcToken, // mcToken from the registration response
String mediaType, // "audio" or "video"
String callType, // "many_to_many"
String sessionType // "call"
);
One-to-many call
signalingClient.startCallOneToMany(
String from, // your own refId
Map<String, dynamic> customData, // custom data
List<String> to, // list of callee refIds
String mediaType, // "audio" or "video"
String mcToken, // mcToken from the registration response
String callType, // "one_to_many"
String sessionType, // "call" or "screen"
bool isPublicBroadcast, // whether this is a public broadcast
String broadcastType, // broadcast type
String authorizationToken // your own authorization token
);
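Which of the three dialing methods applies follows from the callee list and whether the session is a public broadcast. The helper below is a hedged sketch: the mapping is inferred from the CallType constants shown earlier, and the function itself is not a plugin API.

```dart
// Illustrative helper: derive the callType string for the dialing methods
// above from the number of callees and the broadcast flag. This mapping is
// an assumption based on the CallType constants, not plugin code.
String callTypeFor(int calleeCount, {bool isPublicBroadcast = false}) {
  if (isPublicBroadcast) return 'one_to_one' == '' ? '' : 'one_to_many';
  return calleeCount <= 1 ? 'one_to_one' : 'many_to_many';
}

void main() {
  print(callTypeFor(1));                          // one_to_one
  print(callTypeFor(3));                          // many_to_many
  print(callTypeFor(3, isPublicBroadcast: true)); // one_to_many
}
```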
Accepting a call
signalingClient.createAnswer(String incomingRefId);
Declining a call
signalingClient.declineCall(String refId, String mcToken);
Ending a call
signalingClient.stopCall(String mcToken);
Enabling/disabling screen sharing
signalingClient.enableScreen(bool flag);
Switching the camera
signalingClient.switchCamera();
Toggling the speaker
signalingClient.switchSpeaker(bool flag);
Enabling/disabling the camera
signalingClient.enableCamera(bool flag);
Muting/unmuting the microphone
signalingClient.muteMic(bool flag);
Unregistering
signalingClient.unRegister(String mcToken);
Checking network connectivity
signalingClient.checkConnectivity();
Getting the internet status
signalingClient.getInternetStatus();
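Since muteMic, enableCamera, and switchSpeaker each take a bool flag, the app usually keeps its own toggle state and forwards it. A minimal sketch of that bookkeeping; CallControls is illustrative and not part of the plugin.

```dart
// Illustrative toggle-state bookkeeping around the flag-based controls.
// CallControls is NOT a plugin class; the returned bool is what an app
// would forward to signalingClient.muteMic(...) / enableCamera(...).
class CallControls {
  bool micMuted = false;
  bool cameraEnabled = true;

  bool toggleMic() => micMuted = !micMuted;               // forward to muteMic(flag)
  bool toggleCamera() => cameraEnabled = !cameraEnabled;  // forward to enableCamera(flag)
}

void main() {
  final c = CallControls();
  print(c.toggleMic());    // true  -> muteMic(true)
  print(c.toggleCamera()); // false -> enableCamera(false)
}
```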
Example code
The following complete example shows how to use the vdotok_stream plugin for a video call:
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:vdotok_stream/vdotok_stream.dart';
import 'package:vdotok_stream_example/constants.dart';
class MyHttpOverrides extends HttpOverrides {
@override
HttpClient createHttpClient(SecurityContext? context) {
return super.createHttpClient(context)
..badCertificateCallback =
(X509Certificate cert, String host, int port) => true;
}
}
void main() {
HttpOverrides.global = MyHttpOverrides();
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
debugShowCheckedModeBanner: false,
title: 'Vdotok Video',
theme: ThemeData(
colorScheme: ColorScheme.fromSwatch(primarySwatch: Colors.grey),
hintColor: primaryColor,
primaryColor: primaryColor,
scaffoldBackgroundColor: Colors.white,
textTheme: TextTheme(
bodyText1: TextStyle(color: secondaryColor),
bodyText2: TextStyle(color: secondaryColor),
),
),
home: Test(),
);
}
}
class Test extends StatefulWidget {
@override
_TestState createState() => _TestState();
}
class _TestState extends State<Test> {
SignalingClient? signalingClient;
MediaStream? _localStream;
RTCVideoRenderer _localRenderer = RTCVideoRenderer();
RTCVideoRenderer _screenShareRenderer = RTCVideoRenderer();
late TextEditingController _controller;
late TextEditingController _controllerSecondSession;
@override
void initState() {
_controller = TextEditingController();
_controllerSecondSession = TextEditingController();
signalingClient = SignalingClient.instance;
super.initState();
initRenderers();
signalingClient?.onLocalStream = (stream) {
if (_localRenderer.srcObject == null) {
setState(() {
_localRenderer.srcObject = stream;
});
} else {
setState(() {
_screenShareRenderer.srcObject = stream;
});
}
};
signalingClient?.onRemoteStream = (stream, d) {
if (_localRenderer.srcObject == null) {
setState(() {
_localRenderer.srcObject = stream;
});
} else {
setState(() {
_screenShareRenderer.srcObject = stream;
});
}
};
}
Future<void> initRenderers() async {
await _localRenderer.initialize();
await _screenShareRenderer.initialize();
}
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Container(
width: 200,
height: 200,
child: _localRenderer.srcObject == null
? Text("camera")
: RTCVideoView(_localRenderer, mirror: false),
),
Container(
width: 200,
height: 200,
child: _screenShareRenderer.srcObject == null
? Text("screen")
: RTCVideoView(_screenShareRenderer, mirror: false),
),
TextButton(
onPressed: () {
signalingClient?.getNumber();
},
child: Text("Create peerConnection"),
),
TextButton(
onPressed: () {
// signalingClient.creteOffermannual();
},
child: Text("createOffer"),
),
TextButton(
onPressed: () {
signalingClient?.getMedia();
},
child: Text("getUserMedia"),
),
TextButton(
onPressed: () {
signalingClient?.getDisplay();
},
child: Text("getUserDisplayMedia"),
),
TextButton(
onPressed: () {
// signalingClient.connect(...);
},
child: Text("connect"),
),
TextButton(
onPressed: () {
signalingClient?.registerViewerApp(
"197fd41bb378f06a032f4372212995ba",
"4212a87edf17e3c7ac97932f8eb6229a",
"115G1WZI",
);
},
child: Text("Register"),
),
],
),
),
);
}
}
vdotok_stream is a plugin for handling video streams in Flutter applications. It lets developers easily integrate real-time video streaming features and supports scenarios such as video calls and live streaming. The basic steps for using the vdotok_stream plugin are as follows:
1. Add the dependency
First, add the vdotok_stream dependency to your pubspec.yaml file:
dependencies:
flutter:
sdk: flutter
vdotok_stream: ^1.0.0 # use the latest version
Then run flutter pub get to install the dependency.
2. Initialize the plugin
Import the vdotok_stream plugin in your Dart file and initialize it:
import 'package:vdotok_stream/vdotok_stream.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
home: VideoStreamScreen(),
);
}
}
class VideoStreamScreen extends StatefulWidget {
@override
_VideoStreamScreenState createState() => _VideoStreamScreenState();
}
class _VideoStreamScreenState extends State<VideoStreamScreen> {
late VdotokStream _vdotokStream;
@override
void initState() {
super.initState();
_vdotokStream = VdotokStream();
_initializeStream();
}
void _initializeStream() async {
// Initialize the video stream
await _vdotokStream.initialize(
apiKey: 'YOUR_API_KEY',
projectId: 'YOUR_PROJECT_ID',
);
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('Video Stream'),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
// Show the local video stream
Container(
width: 200,
height: 200,
child: _vdotokStream.localVideoView(),
),
SizedBox(height: 20),
// Show the remote video stream
Container(
width: 200,
height: 200,
child: _vdotokStream.remoteVideoView(),
),
],
),
),
);
}
@override
void dispose() {
_vdotokStream.dispose();
super.dispose();
}
}
3. Configure the video stream
In the _initializeStream method you can configure the stream parameters, such as the API key and project ID. These values are provided by the vdotok_stream service.
4. Display the video streams
Use _vdotokStream.localVideoView() and _vdotokStream.remoteVideoView() to display the local and remote video streams, embedding these views wherever they fit in your UI.
5. Handle stream events
The vdotok_stream plugin typically provides event callbacks, such as connection success, connection failure, and receiving a remote stream. You can listen for these events to handle the different scenarios:
_vdotokStream.onConnectionSuccess = () {
print('Connection successful');
};
_vdotokStream.onConnectionFailed = (error) {
print('Connection failed: $error');
};
_vdotokStream.onRemoteStreamReceived = () {
print('Remote stream received');
};