audio.createAudioCapturer fails to capture earpiece audio on HarmonyOS Next

Feature description:

  1. Generate a random number, synthesize it to speech with TTS, and play it through the earpiece
  2. Capture the audio being played through the earpiece
  3. Run the captured audio through speech recognition to check that the random number is correct
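
Step 3 boils down to checking whether the recognized text contains the digits that were spoken. A minimal sketch of that check in plain TypeScript (the helper names are mine, not from any kit; `randomDigits` stands in for `RandomUtil.getRandomStr`):

```typescript
// Generate an n-digit code from a digit alphabet (stands in for RandomUtil.getRandomStr).
function randomDigits(n: number, alphabet: string = '1234567890'): string {
  let out = '';
  for (let i = 0; i < n; i++) {
    out += alphabet[Math.floor(Math.random() * alphabet.length)];
  }
  return out;
}

// Step 3: the check passes if the recognized text contains the spoken prefix of the code.
function verifyRecognition(recognized: string, code: string, spokenLen: number = 4): boolean {
  return recognized.includes(code.substring(0, spokenLen));
}
```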

Problem encountered:

Step 2 captures no audio from the earpiece playback

Implementation approach:

Call textToSpeech.TextToSpeechEngine.speak to synthesize the text, using stream usage STREAM_USAGE_VOICE_COMMUNICATION ("soundChannel": 2) so that playback goes to the earpiece

Create the recorder with audio.createAudioCapturer and capturerInfo.source = SOURCE_TYPE_VOICE_COMMUNICATION

The goal is to play the synthesized speech through the earpiece, record that playback, and feed the recording to speechRecognizer for recognition

Problem encountered:

With capturerInfo.source = SOURCE_TYPE_VOICE_COMMUNICATION the capturer receives no data, and onResult is never called

The overall logic of the code is fine, because the same code works after changing two parameters:

textToSpeech.TextToSpeechEngine.speak with stream usage STREAM_USAGE_VOICE_ASSISTANT ("soundChannel": 3)

audio.createAudioCapturer with capturerInfo.source = SOURCE_TYPE_MIC — recording then works and onResult is called normally

But that plays through the speaker; I need to play through the earpiece and record the earpiece

Could someone advise how to configure the capture parameters so that audio played through the earpiece with stream usage STREAM_USAGE_VOICE_COMMUNICATION ("soundChannel": 2) can be captured?

The code:

import { audio } from '@kit.AudioKit'
import { textToSpeech } from '@kit.CoreSpeechKit';
import { speechRecognizer } from '@kit.CoreSpeechKit';
import { LogUtil, PermissionUtil, RandomUtil } from '@pura/harmony-utils'
import { Status, Entry } from "../Define"
import { Detect } from "../Detect";

export class BMirco extends Detect {
  private random: string = '1234567890'
  private capturer?: audio.AudioCapturer
  private speech?: textToSpeech.TextToSpeechEngine
  private recognizer?: speechRecognizer.SpeechRecognitionEngine

  constructor() {
    super('副麦克风', [
      new Entry('正常', 308, Status.SUCCESS),
      new Entry('异常', 309, Status.FAILED),
    ]);
  }

  EDefault(): Entry {
    return this.answers[1]
  }

  onCheck(): void {
    PermissionUtil.requestPermissionsEasy('ohos.permission.MICROPHONE').then((result) => {
      if (result) {
        this.random = RandomUtil.getRandomStr(9, '1234567890')
        this.createSpeechRecognizer(() => this.startListening())
        this.createAudioCapturer(() => this.startAudio())
        this.createTextToSpeech(() => this.startSpeak())
      } else {
        this.setComplete(this.EDefault().copy('用户未授权'), 1000)
      }
    })
  }

  onTimer(): number {
    return 10000
  }

  onFinish(): void {
    this.speech?.stop()
    this.speech?.shutdown()

    this.capturer?.stop()
    this.capturer?.off('readData')

    this.recognizer?.finish(this.random)
    this.recognizer?.shutdown()
  }

  /**
   * Create and initialize the speech recognition engine
   * @param callback
   */
  private createSpeechRecognizer(callback: () => void): void {
    let _this = this
    speechRecognizer.createEngine({
      extraParams: { "locate": "CN", "recognizerMode": "short" },
      language: 'zh-CN',
      online: 1,
    }, (err, engine) => {
      if (err) {
        this.setComplete(this.EDefault().copy("语音识别失败"), 1000)
      } else {
        this.recognizer = engine
        engine.setListener({
          onStart(sessionId: string, eventMessage: string) {
            // Called when recognition starts.
          },
          onEvent(sessionId: string, eventCode: number, eventMessage: string) {
            // Called for events during recognition, e.g. audio start/end; fired on VAD start or VAD end.
          },
          onResult(sessionId: string, result: speechRecognizer.SpeechRecognitionResult) {
            // "enablePartialResult" true: both partial and final results are returned here.
            // "enablePartialResult" false: only the final result is returned here.
            if (result.result.indexOf(_this.random.substring(0, 4)) >= 0) {
              _this.setComplete(_this.answers[0])
            } else {
              _this.startListening()
              _this.startSpeak()
            }
          },
          onComplete(sessionId: string, eventMessage: string) {
            // Called when recognition ends or finish() is invoked; returns the session ID and completion details.
          },
          onError(sessionId: string, errorCode: number, errorMessage: string) {
            // Called on an error during recognition; returns the session ID, error code, and error message.
          },
        })
        callback()
      }
    })
  }

  /**
   * Create and initialize the audio capturer
   */
  private createAudioCapturer(callback: () => void): void {
    audio.createAudioCapturer({
      streamInfo: {
        channels: audio.AudioChannel.CHANNEL_1,
        samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_16000,
        encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW,
        sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
      },
      capturerInfo: {
        capturerFlags: 0,
        source: audio.SourceType.SOURCE_TYPE_VOICE_COMMUNICATION,
      }
    }, (err, capturer) => {
      if (err) {
        this.setComplete(this.EDefault().copy("音频采集失败"), 1000)
      } else {
        this.capturer = capturer
        capturer.on('readData', (buffer) => {
          this.recognizer?.writeAudio(this.random, new Uint8Array(buffer))
        })
        callback()
      }
    })
  }

  /**
   * Create and initialize the text-to-speech engine
   */
  private createTextToSpeech(callback: () => void): void {
    textToSpeech.createEngine({
      extraParams: { "style": 'interaction-broadcast', "locate": 'CN', "name": 'EngineName', "isBackStage": true },
      language: 'zh-CN',
      person: 0,
      online: 1,
    }, (err, engine) => {
      if (err) {
        this.setComplete(this.EDefault().copy("文本转换失败"), 1000)
      } else {
        this.speech = engine
        callback()
      }
    })
  }

  /**
   * Start recording
   */
  private startAudio(): void {
    this.capturer?.start((err) => { if (err) { this.setComplete(this.EDefault(), 1000) } })
  }

  /**
   * Start speech recognition
   */
  private startListening(): void {
    this.recognizer?.startListening({
      sessionId: this.random,
      extraParams: {
        "vadEnd": 1000,
        "recognitionMode": 1,
        "recognizerOption": {
          "enablePartialResult": false
        }
      },
      audioInfo: { audioType: 'pcm', sampleRate: 16000, soundChannel: 1, sampleBit: 16 }
    })
  }

  /**
   * Synthesize the text and speak it
   * Uses stream usage STREAM_USAGE_VOICE_COMMUNICATION ("soundChannel": 2) for earpiece playback
   */
  private startSpeak(): void {
    this.speech?.speak(`您的验证码是[p200][n1]${this.random.substring(0, 4)}[p1000]`, {
      requestId: `${Date.now()}`,
      extraParams: { "queueMode": 0, "speed": 1, "volume": 2, "pitch": 1, "languageContext": 'zh-CN', "audioType": "pcm", "soundChannel": 2, "playType": 1 }
    })
  }
}
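
The three `create*` calls above are callback-based and are all kicked off from `onCheck`; if you want to guarantee an ordering (e.g. the capturer's `readData` should not fire before the recognizer exists), callback APIs can be promisified and chained. A sketch in plain TypeScript with stand-in init functions (the names are illustrative, not kit APIs):

```typescript
type NodeCallback<T> = (err: Error | null, value?: T) => void;

// Wrap a callback-style create function into a Promise.
function promisify<T>(fn: (cb: NodeCallback<T>) => void): Promise<T> {
  return new Promise((resolve, reject) =>
    fn((err, value) => (err ? reject(err) : resolve(value as T))));
}

// Stand-ins for the engine/capturer factories (illustrative only).
const createRecognizer = (cb: NodeCallback<string>) => cb(null, 'recognizer');
const createCapturer = (cb: NodeCallback<string>) => cb(null, 'capturer');

// Initialize strictly in order: recognizer first, then the capturer.
async function initAll(): Promise<string[]> {
  const order: string[] = [];
  order.push(await promisify(createRecognizer));
  order.push(await promisify(createCapturer));
  return order;
}
```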

More hands-on tutorials about audio.createAudioCapturer earpiece recording failures on HarmonyOS Next are available at https://www.itying.com/category-93-b0.html

2 replies
OP, could you check whether the following demo achieves the recording:

```javascript
import { audio } from '@kit.AudioKit';
import { fileIo } from '@kit.CoreFileKit';
import { BusinessError } from '@kit.BasicServicesKit';

const TAG = 'VoiceCallDemoForAudioCapturer';

class Options {
  offset?: number;
  length?: number;
}

let bufferSize: number = 0;
let audioCapturer: audio.AudioCapturer | undefined = undefined;
let audioStreamInfo: audio.AudioStreamInfo = {
  samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_44100, // sampling rate
  channels: audio.AudioChannel.CHANNEL_1, // channel count
  sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE, // sample format
  encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW // encoding type
}

let audioCapturerInfo: audio.AudioCapturerInfo = {
  source: audio.SourceType.SOURCE_TYPE_VOICE_COMMUNICATION, // source type: voice call
  capturerFlags: 0 // capturer flags: the default 0 is fine
}

let audioCapturerOptions: audio.AudioCapturerOptions = {
  streamInfo: audioStreamInfo,
  capturerInfo: audioCapturerInfo
}

let path = getContext().cacheDir;
let filePath = path + '/StarWars10s-2C-48000-4SW.wav';
let file: fileIo.File = fileIo.openSync(filePath, fileIo.OpenMode.READ_WRITE | fileIo.OpenMode.CREATE);

let readDataCallback = (buffer: ArrayBuffer) => {
  let options: Options = {
    offset: bufferSize,
    length: buffer.byteLength
  }
  fileIo.writeSync(file.fd, buffer, options);
  bufferSize += buffer.byteLength;
}

// Initialize: create the instance and register listeners
async function init() {
  audio.createAudioCapturer(audioCapturerOptions, (err: BusinessError, capturer: audio.AudioCapturer) => { // create an AudioCapturer instance
    if (err) {
      console.error(`Invoke createAudioCapturer failed, code is ${err.code}, message is ${err.message}`);
      return;
    }
    console.info(`${TAG}: create AudioCapturer success`);
    audioCapturer = capturer;
    if (audioCapturer !== undefined) {
      audioCapturer.on('markReach', 1000, (position: number) => { // subscribe to markReach: fires once when 1000 frames have been captured
        if (position === 1000) {
          console.info('ON Triggered successfully');
        }
      });
      audioCapturer.on('periodReach', 2000, (position: number) => { // subscribe to periodReach: fires every 2000 captured frames
        if (position === 2000) {
          console.info('ON Triggered successfully');
        }
      });
      audioCapturer.on('readData', readDataCallback);
    }
  });
}

// Start one capture session
async function start() {
  if (audioCapturer !== undefined) {
    let stateGroup: number[] = [audio.AudioState.STATE_PREPARED, audio.AudioState.STATE_PAUSED, audio.AudioState.STATE_STOPPED];
    if (stateGroup.indexOf(audioCapturer.state.valueOf()) === -1) { // capture can start only from STATE_PREPARED, STATE_PAUSED, or STATE_STOPPED
      console.error(`${TAG}: start failed`);
      return;
    }
    audioCapturer.start((err: BusinessError) => {
      if (err) {
        console.error('Capturer start failed.');
      } else {
        console.info('Capturer start success.');
      }
    });
  }
}

// Stop capturing
async function stop() {
  if (audioCapturer !== undefined) {
    // Stopping is allowed only when the state is STATE_RUNNING or STATE_PAUSED
    if (audioCapturer.state.valueOf() !== audio.AudioState.STATE_RUNNING && audioCapturer.state.valueOf() !== audio.AudioState.STATE_PAUSED) {
      console.info('Capturer is not running or paused');
      return;
    }
    await audioCapturer.stop(); // stop capturing
    if (audioCapturer.state.valueOf() === audio.AudioState.STATE_STOPPED) {
      console.info('Capturer stopped');
    } else {
      console.error('Capturer stop failed');
    }
  }
}

// Destroy the instance and release resources
async function release() {
  if (audioCapturer !== undefined) {
    // release() is allowed only when the state is neither STATE_RELEASED nor STATE_NEW
    if (audioCapturer.state.valueOf() === audio.AudioState.STATE_RELEASED || audioCapturer.state.valueOf() === audio.AudioState.STATE_NEW) {
      console.info('Capturer already released');
      return;
    }
    await audioCapturer.release(); // release resources
    if (audioCapturer.state.valueOf() === audio.AudioState.STATE_RELEASED) {
      console.info('Capturer released');
    } else {
      console.error('Capturer release failed');
    }
  }
}
```
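
The state checks in the demo's `start()`, `stop()`, and `release()` can be factored into small pure helpers, which makes them easy to test in isolation. A sketch with a local enum standing in for `audio.AudioState` (the enum values here are illustrative, not the kit's numeric values):

```typescript
// Local stand-in for audio.AudioState (illustrative values).
enum AudioState {
  STATE_NEW,
  STATE_PREPARED,
  STATE_RUNNING,
  STATE_STOPPED,
  STATE_RELEASED,
  STATE_PAUSED,
}

// start() is valid only from PREPARED, PAUSED, or STOPPED.
const canStart = (s: AudioState): boolean =>
  [AudioState.STATE_PREPARED, AudioState.STATE_PAUSED, AudioState.STATE_STOPPED].includes(s);

// stop() is valid only from RUNNING or PAUSED.
const canStop = (s: AudioState): boolean =>
  s === AudioState.STATE_RUNNING || s === AudioState.STATE_PAUSED;

// release() is valid unless the capturer is already RELEASED or was never prepared (NEW).
const canRelease = (s: AudioState): boolean =>
  s !== AudioState.STATE_RELEASED && s !== AudioState.STATE_NEW;
```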



In HarmonyOS Next, audio.createAudioCapturer creates an audio capturer. If earpiece recording fails, possible causes include:

  1. Permissions: the recording permission is missing; ohos.permission.MICROPHONE must be declared (in module.json5 for the Stage model).

  2. Parameter configuration: parameters in AudioCapturerOptions such as the sampling rate, channel count, or sample format do not match what the device supports.

  3. Device in use: the recording device is occupied by another application; make sure it is free.

  4. System limits: insufficient system resources or other restrictions prevent recording.

  5. API call order: createAudioCapturer, start, stop, etc. are invoked in the wrong order; they must be called in the correct sequence.

  6. Hardware: a device hardware fault or driver problem prevents recording.

  7. System version compatibility: the API is not compatible with the current system version; confirm that it applies to your version.

Checking permissions, parameter configuration, device state, and API call order will usually isolate the cause of the earpiece recording failure.
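
On HarmonyOS Next (Stage model), app permissions are declared in module.json5. A minimal fragment requesting the microphone permission might look like this (the reason string resource and ability name are placeholders):

```json5
// module.json5 — request the microphone permission (Stage model)
{
  "module": {
    "requestPermissions": [
      {
        "name": "ohos.permission.MICROPHONE",
        "reason": "$string:microphone_reason", // placeholder string resource
        "usedScene": {
          "abilities": ["EntryAbility"], // placeholder ability name
          "when": "inuse"
        }
      }
    ]
  }
}
```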
