Exporting a PaddleDetection model and accelerating inference with TensorRT

1. First, after git cloning the PaddleDetection repository, install its dependencies (this assumes PaddlePaddle itself is already installed):

pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
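
After installing the dependencies, a quick sanity check helps confirm the environment before exporting anything. The snippet below is a minimal sketch: paddle.utils.run_check() verifies the PaddlePaddle install (including GPU visibility), and the ppdet import assumes you run it from the PaddleDetection repository root (or have installed the paddledet package).

# Sanity check after installation (minimal sketch).
# Assumes this runs from the PaddleDetection repo root so that `import ppdet` resolves.
import paddle
import ppdet

paddle.utils.run_check()   # checks that PaddlePaddle works (and sees the GPU, if any)
print("PaddlePaddle version:", paddle.__version__)
print("PaddleDetection (ppdet) imported successfully")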

2. Export the model directly (the pretrained weights are downloaded from the URL given in the command)

Standard export:

python tools/export_model.py -c configs/ppyoloe/ppyoloe_plus_crn_t_auxhead_relu_320_300e_coco.yml --output_dir=./inference_model -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_t_auxhead_relu_320_300e_coco.pdparams

TRT-accelerated export (trt=true):

python tools/export_model.py -c configs/ppyoloe/ppyoloe_plus_crn_t_auxhead_relu_320_300e_coco.yml --output_dir=./inference_model/trt -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_t_auxhead_relu_320_300e_coco.pdparams trt=true
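
Either command writes an inference model directory named after the config (here ppyoloe_plus_crn_t_auxhead_relu_320_300e_coco), which typically contains infer_cfg.yml plus the model.pdmodel / model.pdiparams files. The following sketch loads the exported model through the Paddle Inference API and lists its input/output names as a quick check; the path is assumed to match the standard export above.

# Quick check of an exported model (sketch; path assumes the standard export from step 2).
from paddle.inference import Config, create_predictor

model_dir = "inference_model/ppyoloe_plus_crn_t_auxhead_relu_320_300e_coco"
config = Config(model_dir + "/model.pdmodel", model_dir + "/model.pdiparams")
config.enable_use_gpu(200, 0)        # 200 MB initial GPU memory pool, GPU id 0
predictor = create_predictor(config)

print("inputs :", predictor.get_input_names())
print("outputs:", predictor.get_output_names())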

3. Run prediction with the exported model

python deploy/python/infer.py --model_dir=inference_model/ppyoloe_plus_crn_t_auxhead_relu_320_300e_coco --image_file=./yuan/1c41329a45c141aeb763bc32ccd94baf.jpg --device=GPU --run_mode=trt_fp16
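
deploy/python/infer.py reads the preprocessing settings from infer_cfg.yml and, with run_mode=trt_fp16, builds a TensorRT FP16 engine through the Paddle Inference API. The sketch below only illustrates that configuration and is not the script itself; the model path, the input names (image, scale_factor) and the 320x320 dummy input are assumptions based on the PP-YOLOE+ export above.

# Illustrative sketch of TensorRT FP16 inference via the Paddle Inference API
# (roughly what --run_mode=trt_fp16 configures; paths and input layout are assumptions).
import numpy as np
from paddle.inference import Config, PrecisionType, create_predictor

model_dir = "inference_model/ppyoloe_plus_crn_t_auxhead_relu_320_300e_coco"
config = Config(model_dir + "/model.pdmodel", model_dir + "/model.pdiparams")
config.enable_use_gpu(200, 0)
config.enable_tensorrt_engine(
    workspace_size=1 << 30,              # 1 GB TensorRT workspace
    max_batch_size=1,
    min_subgraph_size=3,
    precision_mode=PrecisionType.Half,   # FP16
    use_static=False,
    use_calib_mode=False)
predictor = create_predictor(config)

# Dummy 320x320 input; the real script preprocesses the image according to infer_cfg.yml.
feeds = {
    "image": np.random.rand(1, 3, 320, 320).astype("float32"),
    "scale_factor": np.array([[1.0, 1.0]], dtype="float32"),
}
for name in predictor.get_input_names():
    predictor.get_input_handle(name).copy_from_cpu(feeds[name])
predictor.run()

boxes = predictor.get_output_handle(predictor.get_output_names()[0]).copy_to_cpu()
print("detection output shape:", boxes.shape)   # typically [num_boxes, 6]: class, score, x1, y1, x2, y2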
