AddCustom sample error in HarmonyOS Next


I was following this document:

https://gitee.com/huawei-hiai-foundation/HiAIDemo/blob/master/AscendC/operator/AddCustomSample/FrameworkLaunch/Onnx/README.md#undefined

The error occurs when running the following command:

./omg --model=./add_custom.onnx --framework=5 --output=./AddCustom --target=omc


```
INFO: execute command: /home/AscendC/tools/tools_omg/master/omg --model=./add_custom.onnx --framework=5 --output=./AddCustom --target=omc

I/AI_FMK (3970153): cl_register.cpp CLRegister(56)::"CLRegister start! clName:FMK_CL"
I/AI_FMK (3970153): cl_register.cpp RegisterComputeLibraryFunc(39)::"RegisterComputeLibraryFunc funName:Initialize"
I/AI_FMK (3970153): cl_register.cpp RegisterComputeLibraryFunc(39)::"RegisterComputeLibraryFunc funName:Finalize"
I/AI_FMK (3970153): cl_register.cpp RegisterComputeLibraryFunc(39)::"RegisterComputeLibraryFunc funName:SetOnServiceDiedCallback"
I/AI_FMK (3970153): cl_register.cpp RegisterComputeLibraryFunc(39)::"RegisterComputeLibraryFunc funName:GetOpsKernelInfoStores"
I/AI_FMK (3970153): cl_register.cpp RegisterComputeLibraryFunc(39)::"RegisterComputeLibraryFunc funName:GetGraphOptimizerObjs"
I/AI_FMK (3970153): cl_register.cpp RegisterComputeLibraryFunc(39)::"RegisterComputeLibraryFunc funName:GetGraphCompilerObjs"
I/AI_FMK (3970153): cl_register.cpp RegisterComputeLibraryFunc(39)::"RegisterComputeLibraryFunc funName:GetGraphExecutorFactoryObjs"
I/AI_FMK (3970153): cl_register.cpp RegisterComputeLibraryFunc(39)::"RegisterComputeLibraryFunc funName:GetCompatibleHelperObjs"
I/AI_FMK (3970153): cl_register.cpp RegisterComputeLibraryFunc(39)::"RegisterComputeLibraryFunc funName:GetDeviceEventHandlerObjs"
I/AI_FMK (3970153): cl_register.cpp RegisterComputeLibraryFunc(39)::"RegisterComputeLibraryFunc funName:GetCompiledTargetSaverObjs"
I/AI_FMK (3970153): cl_register.cpp RegisterComputeLibraryFunc(39)::"RegisterComputeLibraryFunc funName:GetTunedTargetCompilerObjs"
I/AI_FMK (3970153): cl_register.cpp RegisterComputeLibraryFunc(39)::"RegisterComputeLibraryFunc funName:GetPlatformInfoStoreObjs"
I/OMG_TOOL (3970153): main.cpp main(18)::"OMG run begin."
I/OMG_TOOL (3970153): command_util.cpp CheckHiaiVersionValid(868)::"set _version ir."
I/AI_FMK (3970153): omg.cpp Generate(210)::"Generate begin."
I/AI_FMK (3970153): ops_kernel_store_manager.cpp DlopenComputeLibrary(38)::"Open ComputeLibrary so: libcpucl_host.so"
I/AI_FMK (3970153): ops_kernel_store_manager.cpp DlopenComputeLibrary(38)::"Open ComputeLibrary so: libai_npucore_itf.so"
W/AI_FMK (3970153): ops_kernel_store_manager.cpp DlopenComputeLibrary(41)::"dlopen so failed: libai_npucore_itf.so: cannot open shared object file: No such file or directory"
I/AI_FMK (3970153): ops_kernel_store_manager.cpp DlCloseComputeLibrary(83)::"handle is null, not need to close library"
I/AI_FMK (3970153): model_util.cpp BuildOrigin2IRGraph(234)::"ModelUtil::BuildOrigin2IRGraph from file"
I/AI_FMK (3970153): proto_util.cpp ReadBytesFromBinaryFile(195)::"Read size:173"
I/AI_FMK (3970153): op_def_factory.cpp OpDefRegister(32)::"Register ascendc op [AddCustom]."
I/AI_FMK (3970153): parser_factory.cpp LoadCustomOpLib(55)::"Custom op exist."
I/AI_FMK (3970153): onnx_pre_checker.cpp PreCheckGraph(137)::"the node AddCustom is custom node"
I/AI_FMK (3970153): onnx_graph_parser.cpp AddNode(494)::"[node add] [type AddCustom] is custom op,try to add custom op"
I/AI_FMK (3970153): message2operator.cpp ParseRepeatedField(64)::"Start to parse field: attribute, cpptype 10."
E/AI_FMK (3970153): message2operator.cpp ParseRepeatedField(86)::"set attr, name: attribute, attr:{ "attribute": [ { "i": 1, "name": "bias", "type": 2 } ] }."
I/AI_FMK (3970153): message2operator.cpp ParseRepeatedField(95)::"Parse repeated field: attribute success."
E/AI_INFRA (3970153): onnx_parser.cpp InsertPermuteNode(348)::"!inputNodes.empty() || !outputNodes.empty()" "false, return hiai::SUCCESS."
I/AI_FMK (3970153): hiai_ir_aipp_compatible_adapter_api.cpp UpdateInputOrder(144)::"have no configDataNodes"
I/AI_FMK (3970153): ir_infer_shape_optimizer.cpp InferShape(315)::"custom op type AddCustom infershape start."
I/AI_FMK (3970153): bg_kernel_context_extend.cpp CreateComputeNodeInfoImpl(155)::"Node add, computeNodeInfo attrSize 64, outputsInsInfoSize:48, offset:608, totalSize:728."
I/AI_FMK (3970153): bg_kernel_context_extend.cpp CreateComputeNodeInfoImpl(155)::"Node add, computeNodeInfo attrSize 64, outputsInsInfoSize:48, offset:608, totalSize:728."
I/AI_FMK (3970153): model_optimizer.cpp Optimize(184)::"Graph already infershaped"
W/AI_FMK (3970153): model_compatibility_check.cpp GetIRGraphSupportResultInSpecialCl(203)::"get opKernel of name FMK_CL failed!"
E/AI_FMK (3970153): model_compatibility_check.cpp CheckOpSupported(36)::"Node add type AddCustom don't support!"
E/AI_FMK (3970153): general_model_compiler.cpp BeforeCompile(145)::"check ir model compatibility failed"
E/AI_INFRA (3970153): general_model_compiler.cpp Compile(491)::"BeforeCompile(optimizerOptions, options, computeGraph) == SUCCESS" "false, return FAIL."
E/AI_INFRA (3970153): omg.cpp BuildOfflineCompiledModel(106)::"ret == ge::SUCCESS" "false, return FAIL."
E/AI_FMK (3970153): omg.cpp GenerateOMCModel(162)::"Failed to generator model by internal omg tool!."
E/OMG_TOOL (3970153): command_util.cpp ProcessCommand(1284)::"OMG Generate execute failed!!"
E/OMG_TOOL (3970153): main.cpp main(21)::"OMG generate offline model failed. Please see the log or pre-checking report for more details."
I/AI_FMK (3970153): ops_kernel_store_manager.cpp ~OpKernelStoreManager(208)::"~OpKernelStoreManager"
```



3 replies

The DDK_tools installation package was updated but HiAIDemo was not, so run the command with an explicit platform instead:

./omg --model=./add_custom.onnx --framework=5 --output=./AddCustom --target=omc --platform=kirin9020

The AddCustom sample error in HarmonyOS Next can be caused by:

  1. API version mismatch; check whether the SDK version supports the API in question
  2. Resource files not configured correctly, or wrong resource paths
  3. Missing permission declarations, which must be added in config.json
  4. Component attributes set in a way that violates the specification
  5. Use of deprecated interfaces

Typical errors include:

  • "module not found": a dependency was not imported correctly
  • "permission denied": a required permission is missing
  • "invalid parameter": wrong parameter type or format

How to fix:

  1. Compare your usage against the API examples in the official documentation
  2. Check that all resource references are complete
  3. Verify the entries in config.json (see the sketch after this list)
  4. Make sure the development environment version matches what the sample requires
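
For step 3, here is a minimal sketch that lists the permissions a project actually declares, assuming the FA-model config.json layout where they sit under module.reqPermissions; the file path below is only an example:

```python
import json

# Example path only: point this at the project's actual config.json.
CONFIG_PATH = "entry/src/main/config.json"

with open(CONFIG_PATH, encoding="utf-8") as f:
    config = json.load(f)

# In the FA model, permissions are declared under module.reqPermissions.
permissions = config.get("module", {}).get("reqPermissions", [])
if not permissions:
    print("no permissions declared in config.json")
for perm in permissions:
    print("declared permission:", perm.get("name"))
```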

Judging from the log, the key error is "Node add type AddCustom don't support!", which means the AddCustom operator is not accepted as a supported op. Possible causes:

  1. The required dependency libai_npucore_itf.so is missing; the log contains a warning that the library file cannot be found (see the sketch after this list)

  2. A problem with the operator registration or implementation. The log shows that AddCustom was registered ("Register ascendc op [AddCustom]"), yet the later compatibility check still rejects it

  3. The operator attribute configuration may be wrong; the log shows a bias attribute being set (i: 1, type: 2)
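
For cause 1, a minimal standalone sketch (not part of the sample) that mimics the dlopen call from the log, so you can confirm whether libai_npucore_itf.so is visible on the current loader path before re-running omg:

```python
import ctypes
import os

# The library the omg log failed to dlopen.
LIB_NAME = "libai_npucore_itf.so"

try:
    ctypes.CDLL(LIB_NAME)
    print(f"{LIB_NAME} was found and loaded")
except OSError as err:
    # Same failure mode as "cannot open shared object file" in the log.
    print(f"dlopen failed: {err}")
    print("LD_LIBRARY_PATH =", os.environ.get("LD_LIBRARY_PATH", "<not set>"))
```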

Suggested checks:

  1. Make sure all dependent libraries are installed correctly
  2. Confirm that the AddCustom operator implementation is correct
  3. Check that the operator attribute configuration meets the requirements (see the sketch after this list)
  4. Verify that the omg tool version in use is compatible with the operator
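
For checks 2 and 3, a small sketch, assuming the onnx Python package is installed and add_custom.onnx is in the working directory, that dumps the node types and attributes so they can be compared against what the omg log reports (op type AddCustom, attribute bias with i: 1):

```python
import onnx
from onnx import helper

# Load the model that was passed to omg.
model = onnx.load("add_custom.onnx")

# Print each node's op type, domain, and attributes.
for node in model.graph.node:
    print(f"node {node.name!r}, op_type {node.op_type!r}, domain {node.domain!r}")
    for attr in node.attribute:
        print(f"  attribute {attr.name!r} = {helper.get_attribute_value(attr)!r}")
```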

The failure occurs during the model compilation stage: the model compatibility check fails, which aborts compilation.
