
Ascend Object Detection and Recognition: Building Your Own AI Application


 

Reference: https://gitee.com/ascend/samples/tree/master/cplusplus/level3_application/1_cv/detect_and_classify

 

1. Preparation

 

cd samples/cplusplus/level3_application/1_cv/detect_and_classify
vi ~/.bashrc
# Shift+G jumps to the end of the file; append the environment-variable setup there
cp -r ${HOME}/samples/common ${THIRDPART_PATH}
sudo apt-get install libopencv-dev
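
The environment variables appended to ~/.bashrc above are what the later `cp` and build steps rely on. A minimal sketch following the samples repo's usual conventions (the exact paths are an assumption; adjust to your installation):

```shell
# Appended to the end of ~/.bashrc (paths are assumptions; adjust to your install)
export CPU_ARCH=$(arch)
# Third-party dependencies (FFmpeg, PresentAgent, ACLlite) are installed under this prefix
export THIRDPART_PATH=${HOME}/Ascend/thirdpart/${CPU_ARCH}
export LD_LIBRARY_PATH=${THIRDPART_PATH}/lib:${LD_LIBRARY_PATH}
mkdir -p ${THIRDPART_PATH}
```

After editing, run `source ~/.bashrc` so that ${THIRDPART_PATH} is defined for the steps that follow.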

 

Install FFmpeg

 

# Download and extract the FFmpeg package. Here it is stored in the user's home directory; you can choose any other path.
cd ${HOME}
wget http://www.ffmpeg.org/releases/ffmpeg-4.1.3.tar.gz --no-check-certificate
tar -zxvf ffmpeg-4.1.3.tar.gz
cd ffmpeg-4.1.3
# Build and install FFmpeg
./configure --enable-shared --enable-pic --enable-static --disable-x86asm --prefix=${THIRDPART_PATH}
make -j8
make install

 

Install PresentAgent

 

# Install Protobuf build dependencies
sudo apt-get install autoconf automake libtool
# Download the Protobuf source. Here it is stored in the user's home directory; you can choose any other path.
cd ${HOME}
git clone -b 3.13.x https://gitee.com/mirrors/protobufsource.git protobuf
# Build and install Protobuf
cd protobuf
./autogen.sh
./configure --prefix=${THIRDPART_PATH}
make clean
make -j8
sudo make install
# Enter the PresentAgent source directory and build. The PresentAgent source lives under "cplusplus/common/presenteragent" in the samples repo; here the repo is in the user's home directory.
cd ${HOME}/samples/cplusplus/common/presenteragent/proto
${THIRDPART_PATH}/bin/protoc presenter_message.proto --cpp_out=./
# Build and install PresentAgent
cd ..
make -j8
make install

 

Install ACLlite

 

cd ${HOME}/samples/cplusplus/common/acllite
make
make install

 

2. In the Development Environment

 

# Enter the object recognition sample project root
cd $HOME/samples/cplusplus/level3_application/1_cv/detect_and_classify
# Create and enter the model directory
mkdir model
cd model
# Download the original YOLOv3 model file and its AIPP configuration
wget https://modelzoo-train-atc.obs.cn-north-4.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/YOLOV3_carColor_sample/data/yolov3_t.onnx
wget https://modelzoo-train-atc.obs.cn-north-4.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/YOLOV3_carColor_sample/data/aipp_onnx.cfg
# Run model conversion to generate the YOLOv3 offline model adapted to the Ascend AI processor
atc --model=./yolov3_t.onnx --framework=5 --output=yolov3 --input_shape="images:1,3,416,416;img_info:1,4" --soc_version=Ascend310 --input_fp16_nodes="img_info" --insert_op_conf=aipp_onnx.cfg
# Download the original color model file and its AIPP configuration
wget https://modelzoo-train-atc.obs.cn-north-4.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/YOLOV3_carColor_sample/data/color.pb
wget https://modelzoo-train-atc.obs.cn-north-4.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/YOLOV3_carColor_sample/data/aipp.cfg
# Run model conversion to generate the color offline model adapted to the Ascend AI processor
atc --input_shape="input_1:-1,224,224,3" --output=./color_dynamic_batch --soc_version=Ascend310 --framework=3 --model=./color.pb --insert_op_conf=./aipp.cfg --dynamic_batch_size="1,2,4,8"

 

3. Build and Run

 

cd scripts 
bash sample_build.sh
cd ../display
bash run_presenter_server.sh ../scripts/present_start.conf
cd ../out
./main

 

Screenshot of the successful build

 

Modify the IP address

 

Reference:

 

But after reading the documentation carefully, I found that the two must be kept consistent.

 

The service started successfully.

 

You can also modify display/presenterserver/display/ui/templates/view.html to customize the UI.

 

At this point the page still could not be reached from a browser, so I went back to the ECS console and opened all common security-group rules with one click.

 

The root cause was that port 7007 was not open; for convenience, we simply open all ports at once here.

 

Now it can be accessed.

 

At this point, if the model runs too fast we cannot see the effect, so let's benchmark the model first.

 

./msame

 

We found that msame had not been added to the PATH yet.

 

So we do the following:

 

cd ${HOME}/AscendProjects/tools/msame/out/msame
su -
echo "export PATH=/home/HwHiAiUser/AscendProjects/tools/msame/out:$PATH" >> /etc/profile
# Do not use ${HOME} here, because msame is installed only under the HwHiAiUser account. Note that this is the directory containing the executable, not the executable itself.
source /etc/profile

 

With that, msame takes effect.

 

Use msame to test performance:

 

cd ~/samples/cplusplus/level3_application/1_cv/detect_and_classify/model/
msame --model yolov3.om --output ./out
msame --model color_dynamic_batch.om --output ./out

 

Clearly, the latter is faster.

 

Back to the earlier test: with only three car videos the model runs through too quickly, so we scale up to 16.

 

cp ../data/car1.mp4 ../data/car2.mp4
cp ../data/car1.mp4 ../data/car3.mp4
cp ../data/car1.mp4 ../data/car4.mp4
cp ../data/car1.mp4 ../data/car5.mp4
cp ../data/car1.mp4 ../data/car6.mp4
cp ../data/car1.mp4 ../data/car7.mp4
cp ../data/car1.mp4 ../data/car8.mp4
cp ../data/car1.mp4 ../data/car9.mp4
cp ../data/car1.mp4 ../data/car10.mp4
cp ../data/car1.mp4 ../data/car11.mp4
cp ../data/car1.mp4 ../data/car12.mp4
cp ../data/car1.mp4 ../data/car13.mp4
cp ../data/car1.mp4 ../data/car14.mp4
cp ../data/car1.mp4 ../data/car15.mp4
cp ../data/car1.mp4 ../data/car16.mp4
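
The fifteen cp commands above can be collapsed into one loop (same paths as above; the snippet creates an empty placeholder car1.mp4 if run outside the sample tree, just so it is runnable standalone):

```shell
# Create car2.mp4 ... car16.mp4 as copies of car1.mp4
src=../data/car1.mp4
[ -f "$src" ] || { mkdir -p ../data; : > "$src"; }  # placeholder for dry runs
for i in $(seq 2 16); do
  cp "$src" "../data/car$i.mp4"
done
```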

 

Edit params.conf: press Esc, then ggdG to delete the whole file, and replace it with the following.

 

[base_options]
device_num=1
RtspNumPerDevice=1
[options_param_0]
inputType_0=video  #pic ; video ; rtsp
outputType_0=video   #pic ; video ; presentagent ; stdout
inputDataPath_0=../data/car0.mp4
outputFrameWidth_0=1280
outputFrameHeight_0=720
[options_param_1]
inputType_1 = video  #pic ; video ; rtsp
outputType_1 =stdout   #pic ; video ; presentagent ; stdout
inputDataPath_1 =../data/car1.mp4
outputFrameWidth_1=2368
outputFrameHeight_1=1080
[options_param_2]
inputType_2 = video  #pic ; video ; rtsp
outputType_2 =stdout   #pic ; video ; presentagent ; stdout
inputDataPath_2 =../data/car1.mp4
outputFrameWidth_2=2368
outputFrameHeight_2=1080
[options_param_3]
inputType_3 = video  #pic ; video ; rtsp
outputType_3 =stdout   #pic ; video ; presentagent ; stdout
inputDataPath_3 =../data/car1.mp4
outputFrameWidth_3=2368
outputFrameHeight_3=1080
[options_param_4]
inputType_4 = video  #pic ; video ; rtsp
outputType_4 =stdout   #pic ; video ; presentagent ; stdout
inputDataPath_4 =../data/car1.mp4
outputFrameWidth_4=2368
outputFrameHeight_4=1080
[options_param_5]
inputType_5 = video  #pic ; video ; rtsp
outputType_5 =stdout   #pic ; video ; presentagent ; stdout
inputDataPath_5 =../data/car1.mp4
outputFrameWidth_5=2368
outputFrameHeight_5=1080
[options_param_6]
inputType_6 = video  #pic ; video ; rtsp
outputType_6 =stdout   #pic ; video ; presentagent ; stdout
inputDataPath_6 =../data/car1.mp4
outputFrameWidth_6=2368
outputFrameHeight_6=1080
[options_param_7]
inputType_7 = video  #pic ; video ; rtsp
outputType_7 =stdout   #pic ; video ; presentagent ; stdout
inputDataPath_7 =../data/car1.mp4
outputFrameWidth_7=2368
outputFrameHeight_7=1080
[options_param_8]
inputType_8 = video  #pic ; video ; rtsp
outputType_8 =stdout   #pic ; video ; presentagent ; stdout
inputDataPath_8 =../data/car1.mp4
outputFrameWidth_8=2368
outputFrameHeight_8=1080
[options_param_9]
inputType_9 = video  #pic ; video ; rtsp
outputType_9 =stdout   #pic ; video ; presentagent ; stdout
inputDataPath_9 =../data/car1.mp4
outputFrameWidth_9=2368
outputFrameHeight_9=1080
[options_param_10]
inputType_10 = video  #pic ; video ; rtsp
outputType_10 =stdout   #pic ; video ; presentagent ; stdout
inputDataPath_10 =../data/car1.mp4
outputFrameWidth_10=2368
outputFrameHeight_10=1080
[options_param_11]
inputType_11 = video  #pic ; video ; rtsp
outputType_11 =stdout   #pic ; video ; presentagent ; stdout
inputDataPath_11 =../data/car1.mp4
outputFrameWidth_11=2368
outputFrameHeight_11=1080
[options_param_12]
inputType_12 = video  #pic ; video ; rtsp
outputType_12 =stdout   #pic ; video ; presentagent ; stdout
inputDataPath_12 =../data/car1.mp4
outputFrameWidth_12=2368
outputFrameHeight_12=1080
[options_param_13]
inputType_13 = video  #pic ; video ; rtsp
outputType_13 =stdout   #pic ; video ; presentagent ; stdout
inputDataPath_13 =../data/car1.mp4
outputFrameWidth_13=2368
outputFrameHeight_13=1080
[options_param_14]
inputType_14 = video  #pic ; video ; rtsp
outputType_14 =stdout   #pic ; video ; presentagent ; stdout
inputDataPath_14 =../data/car1.mp4
outputFrameWidth_14=2368
outputFrameHeight_14=1080
[options_param_15]
inputType_15 = video  #pic ; video ; rtsp
outputType_15 =stdout   #pic ; video ; presentagent ; stdout
inputDataPath_15 =../data/car1.mp4
outputFrameWidth_15=2368
outputFrameHeight_15=1080
[options_param_16]
inputType_16 = video  #pic ; video ; rtsp
outputType_16 =stdout   #pic ; video ; presentagent ; stdout
inputDataPath_16 =../data/car1.mp4
outputFrameWidth_16=2368
outputFrameHeight_16=1080
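
Since channels 1–16 differ only in their index, the whole file can be generated with a loop rather than typed by hand; a sketch that reproduces the config above:

```shell
# Regenerate the params.conf shown above: one video-output channel 0
# plus sixteen stdout channels that all read car1.mp4
{
  echo "[base_options]"
  echo "device_num=1"
  echo "RtspNumPerDevice=1"
  echo "[options_param_0]"
  echo "inputType_0=video"
  echo "outputType_0=video"
  echo "inputDataPath_0=../data/car0.mp4"
  echo "outputFrameWidth_0=1280"
  echo "outputFrameHeight_0=720"
  for i in $(seq 1 16); do
    echo "[options_param_$i]"
    echo "inputType_$i=video"
    echo "outputType_$i=stdout"
    echo "inputDataPath_$i=../data/car1.mp4"
    echo "outputFrameWidth_$i=2368"
    echo "outputFrameHeight_$i=1080"
  done
} > params.conf
```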

 

cd scripts 
bash sample_run.sh
cd ../out
./main

 

Here we can monitor resource usage:

 

npu-smi info

 

There is only one card.

 

npu-smi info watch

 

After bash sample_run.sh we can watch the process.

 

We can also inspect the output files directly under out:

 

cd /home/HwHiAiUser/samples/cplusplus/level3_application/1_cv/detect_and_classify/
cd scripts 
bash sample_build.sh
cd ../display
bash run_presenter_server.sh ../scripts/present_start.conf
cd ../out
./main

 

The AI Core reached 100% utilization; we configured 11 parallel channels, and the maximum is 22.

 

Looking at params.conf, we see the later channels all output to stdout, so let's switch to presentagent.

 

An error was reported here.

 

It turned out that I had enabled multiple channels, but only a single channel is supported.

 

4. Hands-on: Object Detection Application Development

 

Original project: https://github.com/weiliu89/caffe/tree/ssd

 

Original test script: https://github.com/weiliu89/caffe/blob/ssd/examples/ssd/ssd_detect.py

 

cd ~/samples/cplusplus/level3_application/1_cv
git clone https://github.com/weiliu89/caffe.git
cp -r detect_and_classify/ detect_and_classify_vgg_ssd/
cd detect_and_classify_vgg_ssd/
wget https://modelzoo-train-atc.obs.cn-north-4.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/VGG_SSD/vgg_ssd.caffemodel
wget https://modelzoo-train-atc.obs.cn-north-4.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/VGG_SSD/vgg_ssd.prototxt
wget https://c7xcode.obs.cn-north-4.myhuaweicloud.com/models/VGG_SSD_coco_detection_DVPP_with_AIPP/insert_op.cfg
atc --output_type=FP32 --input_shape="data:1,3,300,300" --weight=./vgg_ssd.caffemodel --input_format=NCHW --output=./vgg_ssd --soc_version=Ascend310 --insert_op_conf=./insert_op.cfg --framework=0 --save_original_model=false --model=./vgg_ssd.prototxt

 

Screenshot of the successful ATC conversion

 

Reference: https://gitee.com/ascend/samples/wikis/%E8%AE%AD%E7%BB%83%E8%90%A5/CANN%E8%AE%AD%E7%BB%83%E8%90%A5–%E5%9F%BA%E4%BA%8E%E9%80%9A%E7%94%A8%E8%AF%86%E5%88%AB%E6%A1%88%E4%BE%8B%E5%AE%9A%E5%88%B6%E8%87%AA%E5%B7%B1%E7%9A%84%E9%AB%98%E6%80%A7%E8%83%BD%E6%8E%A8%E7%90%86%E5%BA%94%E7%94%A8

 

Files that need modification:

 

(1)

 

(2) inference.cpp

 

(3) detectpostprocess

 

cd scripts/
bash sample_build.sh
cd ../display
bash run_presenter_server.sh ../scripts/present_start.conf
cd scripts
bash sample_run.sh

 

Build succeeded

 

Run succeeded

 

Dedicated media data processing interface: VPC

 

AIPP color space conversion

 

Tensor Boost Engine (TBE): through IR definitions, TBE supplies GE with the operator information needed for graph derivation; through its operator information library and fusion rules, it supplies FE with subgraph optimization information and TBE operator invocation information; the operators TBE generates run on the Ascend AI processor.

 

6. Processing PNG Images

 

cd data
wget https://modelzoo-train-atc.obs.cn-north-4.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/YOLOV3_carColor_sample/data/car_png.png

 

The image size is 877 × 574.

 

Open the preprocessing .cpp file, preprocess.

 

Change it to ReadPng.

 

Change it to PngD.

 

After making the changes, save the inference file.

 

Also modify params.conf:

 

cd scripts 
bash sample_build.sh
bash sample_run.sh

 

7. Customizing the JPEG-decoded data format from YUV420SP NV12 to YUV420SP NV21, and running the full application pipeline on NV21
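
NV12 and NV21 share the same Y plane and differ only in the byte order of the interleaved chroma plane (UVUV… vs. VUVU…), so conversion is a pairwise swap. A toy illustration on a text "chroma plane":

```shell
# NV12 chroma is interleaved as U V U V ...; NV21 as V U V U ....
# Swapping each adjacent pair converts one format to the other.
nv12="U1 V1 U2 V2 U3 V3"
nv21=$(echo "$nv12" | awk '{for(i=1;i<NF;i+=2){t=$i;$i=$(i+1);$(i+1)=t}}1')
echo "$nv21"   # V1 U1 V2 U2 V3 U3
```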

 

mkdir new, then git clone a fresh copy of the samples repo into it.

 

cd ${HOME}/new/samples/cplusplus/common/acllite
make
make install

 

acllite/src/JpegDHelper.cpp

 

acllite/src/ResizeHelper.cpp

 

acllite/src/CropAndPasteHelper.cpp

 

These three files.

 

In acllite, change

 

PIXEL_FORMAT_YUV_SEMIPLANAR_420

 

to

 

PIXEL_FORMAT_YVU_SEMIPLANAR_420
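
The replacement across the three helper files can be scripted with sed instead of editing each by hand (path as above; the guard simply skips files that are absent):

```shell
# Swap the pixel-format constant (NV12 -> NV21) in the three ACLlite helpers
src=${HOME}/new/samples/cplusplus/common/acllite/src
for f in JpegDHelper.cpp ResizeHelper.cpp CropAndPasteHelper.cpp; do
  [ -f "$src/$f" ] && sed -i \
    's/PIXEL_FORMAT_YUV_SEMIPLANAR_420/PIXEL_FORMAT_YVU_SEMIPLANAR_420/g' \
    "$src/$f"
done
true  # keep the exit status clean when a file is missing
```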

 

Then rebuild:

 

cd ${HOME}/new/samples/cplusplus/common/acllite
make
make install

 

# Enter the object recognition sample project root
cd ${HOME}/new/samples/cplusplus/level3_application/1_cv/detect_and_classify
# Enter the model directory
cd model
# Download the original YOLOv3 model file and its AIPP configuration
wget https://modelzoo-train-atc.obs.cn-north-4.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/YOLOV3_carColor_sample/data/yolov3_t.onnx
wget https://modelzoo-train-atc.obs.cn-north-4.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/YOLOV3_carColor_sample/data/aipp_onnx.cfg
# Run model conversion to generate the YOLOv3 offline model adapted to the Ascend AI processor
atc --model=./yolov3_t.onnx --framework=5 --output=yolov3 --input_shape="images:1,3,416,416;img_info:1,4" --soc_version=Ascend310 --input_fp16_nodes="img_info" --insert_op_conf=aipp_onnx.cfg
# Download the original color model file and its AIPP configuration
wget https://modelzoo-train-atc.obs.cn-north-4.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/YOLOV3_carColor_sample/data/color.pb
wget https://modelzoo-train-atc.obs.cn-north-4.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/YOLOV3_carColor_sample/data/aipp.cfg
# Run model conversion to generate the color offline model adapted to the Ascend AI processor
atc --input_shape="input_1:-1,224,224,3" --output=./color_dynamic_batch --soc_version=Ascend310 --framework=3 --model=./color.pb --insert_op_conf=./aipp.cfg --dynamic_batch_size="1,2,4,8"

 

Modify

 

model/aipp.cfg

 

model/aipp_onnx.cfg

 

rbuv_swap_switch : true
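
rbuv_swap_switch lives inside the aipp_op block of each .cfg file; it tells AIPP to swap the U/V (or R/B) channel order at preprocessing time. A sketch of where the line goes (the surrounding fields are illustrative, taken from the typical static-AIPP layout, not from these exact files):

```
aipp_op {
    aipp_mode : static
    input_format : YUV420SP_U8
    csc_switch : true
    rbuv_swap_switch : true
}
```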

 

Then inference must use the earlier .cpp file.

 

cd ~/new/samples/cplusplus/level3_application/1_cv/detect_and_classify/scripts
cp params.conf png_params.conf
# then edit params.conf

 

 

 

cd ~/samples/cplusplus/level3_application/1_cv/detect_and_classify/scripts
bash sample_build.sh
bash sample_run.sh

 

Success!
