This article presents a bottom-up pose estimation solution open-sourced by NVIDIA. Instead of running a person detector first, the method estimates keypoints directly and then matches them across multiple people, which greatly improves efficiency. TrtPose is an efficient, lightweight pose estimation model; the author implemented it with C++, CUDA, and TensorRT, achieving single-frame inference in under a second, fast enough to run smoothly even on a Jetson Nano.
The original code is implemented in PyTorch: https://github.com/NVIDIA-AI-IOT/trt_pose
Using the TensorRT C++ API, we extended the original Python code to build a more efficient TensorRT engine file. In the C++ demo project, video inference averages well under a second per frame (excluding the other processing steps).
The network structure is relatively simple: a ResNet backbone with a CmapPafHeadAttention head, which consists of attention and upsampling modules.
ResNet18 can be taken directly from paddle.vision.models, which makes the network very easy to assemble.
```python
import paddle
import paddle.nn as nn
import paddle.nn.functional as F


class ResNetBackbone(nn.Layer):
    """Wraps a ResNet from paddle.vision.models and exposes its /32 feature map."""

    def __init__(self, resnet):
        super(ResNetBackbone, self).__init__()
        self.resnet = resnet

    def forward(self, x):
        x = self.resnet.conv1(x)
        x = self.resnet.bn1(x)
        x = self.resnet.relu(x)
        x = self.resnet.maxpool(x)
        x = self.resnet.layer1(x)  # /4
        x = self.resnet.layer2(x)  # /8
        x = self.resnet.layer3(x)  # /16
        x = self.resnet.layer4(x)  # /32
        return x


class UpsampleCBR(nn.Sequential):
    """count x (ConvTranspose-BN-ReLU) upsampling stages, each optionally
    followed by num_flat 3x3 Conv-BN-ReLU blocks."""

    def __init__(self, input_channels, output_channels, count=1, num_flat=0):
        layers = []
        for i in range(count):
            if i == 0:
                inch = input_channels
            else:
                inch = output_channels
            layers += [
                nn.Conv2DTranspose(inch, output_channels, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm2D(output_channels),
                nn.ReLU()
            ]
            for _ in range(num_flat):
                layers += [
                    nn.Conv2D(output_channels, output_channels, kernel_size=3, stride=1, padding=1),
                    nn.BatchNorm2D(output_channels),
                    nn.ReLU()
                ]
        super(UpsampleCBR, self).__init__(*layers)


class CmapPafHeadAttention(nn.Layer):
    """Two parallel branches, one for confidence maps (cmap) and one for part
    affinity fields (paf), each with its own upsampling path and attention gate."""

    def __init__(self, input_channels, cmap_channels, paf_channels,
                 upsample_channels=256, num_upsample=0, num_flat=0):
        super(CmapPafHeadAttention, self).__init__()
        self.cmap_up = UpsampleCBR(input_channels, upsample_channels, num_upsample, num_flat)
        self.paf_up = UpsampleCBR(input_channels, upsample_channels, num_upsample, num_flat)
        self.cmap_att = nn.Conv2D(upsample_channels, upsample_channels, kernel_size=3, stride=1, padding=1)
        self.paf_att = nn.Conv2D(upsample_channels, upsample_channels, kernel_size=3, stride=1, padding=1)
        self.cmap_conv = nn.Conv2D(upsample_channels, cmap_channels, kernel_size=1, stride=1, padding=0)
        self.paf_conv = nn.Conv2D(upsample_channels, paf_channels, kernel_size=1, stride=1, padding=0)

    def forward(self, x):
        xc = self.cmap_up(x)
        ac = F.sigmoid(self.cmap_att(xc))  # attention in [0, 1] for confidence maps
        xp = self.paf_up(x)
        ap = F.tanh(self.paf_att(xp))      # attention in [-1, 1] for signed PAF vectors
        return self.cmap_conv(xc * ac), self.paf_conv(xp * ap)
```
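For reference, here is a minimal sketch of how these pieces might be assembled, continuing from the definitions above. The channel counts (512 from ResNet18's last stage; 18 confidence-map and 42 PAF channels for a human COCO-style topology; three upsample stages) are assumptions for illustration only, and the wrapper class is hypothetical; the project's actual wiring lives in get_model() in work/human/trt_pose_model.py.

```python
import paddle
import paddle.nn as nn
from paddle.vision.models import resnet18


class TrtPoseNet(nn.Layer):
    # Hypothetical assembly; the project's get_model() may differ.
    def __init__(self):
        super().__init__()
        self.backbone = ResNetBackbone(resnet18())
        self.head = CmapPafHeadAttention(input_channels=512, cmap_channels=18,
                                         paf_channels=42, upsample_channels=256,
                                         num_upsample=3)

    def forward(self, x):
        return self.head(self.backbone(x))


cmap, paf = TrtPoseNet()(paddle.ones((1, 3, 224, 224)))
# Three upsample stages bring the /32 features back to /4: 224 / 4 = 56
print(cmap.shape, paf.shape)  # [1, 18, 56, 56], [1, 42, 56, 56]
```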
There are currently two mainstream bottom-up approaches. The first regresses keypoint coordinates directly: it is simple and direct, localizes keypoints quickly, and predicts fast; however, because human poses are so varied, direct coordinate regression is a difficult target for a neural network to fit, which limits its accuracy. The second predicts heatmaps, where every spatial location is assigned a confidence score; the keypoint coordinates are then extracted by analyzing these heatmaps. This is more flexible than direct coordinate regression, adapts better to pose variation, and improves both accuracy and practicality.
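To make the heatmap route concrete, here is a minimal, illustrative peak-extraction sketch (not the project's actual decoder, which ships as a C++ plugin): each keypoint channel is scanned for local maxima above a score threshold.

```python
import numpy as np

def find_peaks(cmap, threshold=0.1, window=5):
    """Naive local-maximum search on a confidence map of shape (K, H, W).
    Returns a list of (row, col) peaks for each keypoint channel."""
    K, H, W = cmap.shape
    r = window // 2
    peaks = [[] for _ in range(K)]
    for k in range(K):
        for y in range(H):
            for x in range(W):
                v = cmap[k, y, x]
                if v < threshold:
                    continue
                patch = cmap[k, max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
                if v >= patch.max():  # keep only window-local maxima
                    peaks[k].append((y, x))
    return peaks

# Example: a single synthetic channel with one hot spot at (10, 20)
cm = np.zeros((1, 56, 56), dtype=np.float32)
cm[0, 10, 20] = 0.9
print(find_peaks(cm))  # [[(10, 20)]]
```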
TrtPose also takes the heatmap route and decodes it following the OpenPose principle. The post-processing is fairly involved, and the original source ships it as a C++ plugin; I reworked that part so the code is easier to read and write, and aligned the outputs in a simple way. The overall inference flow is roughly: preprocess the input image into the network's input format, run the TrtPose model to predict confidence maps and PAFs, then decode them into the final keypoint output.
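The core of OpenPose-style decoding is scoring candidate limbs: for every pair of peaks belonging to two linked keypoint types, the PAF is sampled along the segment between them and compared with the segment's direction. Below is a minimal illustrative version of that line integral; the real decoder adds score thresholds and greedy bipartite assignment on top.

```python
import numpy as np

def paf_score(paf_x, paf_y, p1, p2, num_samples=10):
    """Average dot product between PAF vectors sampled along p1 -> p2 and the
    unit direction of that segment. p1/p2 are (row, col); paf_x/paf_y are the
    x- and y-component maps of one limb's part affinity field."""
    p1 = np.asarray(p1, dtype=np.float64)
    p2 = np.asarray(p2, dtype=np.float64)
    v = p2 - p1                          # (dy, dx)
    v = v / (np.linalg.norm(v) + 1e-6)
    score = 0.0
    for t in np.linspace(0.0, 1.0, num_samples):
        y, x = np.rint(p1 + t * (p2 - p1)).astype(int)
        score += v[1] * paf_x[y, x] + v[0] * paf_y[y, x]  # dx*fx + dy*fy
    return score / num_samples

# Toy example: a horizontal limb field pointing in +x between (8, 5) and (8, 25)
fx = np.zeros((56, 56)); fy = np.zeros((56, 56))
fx[8, 5:26] = 1.0
print(round(paf_score(fx, fy, (8, 5), (8, 25)), 3))  # ~1.0
```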
A more detailed explanation of the underlying principles is available at: https://docs.nvidia.com/isaac/isaac/packages/skeleton_pose_estimation/doc/2Dskeleton_pose_estimation.html
```
%cd /home/aistudio/work/human
!python infer.py /home/aistudio/tmp/10p.jpeg
```
Inference result image:
Export the weight file trt_pose.wts
```python
import struct

import paddle

from work.human.trt_pose_model import get_model

input = paddle.ones((1, 3, 224, 224))  # dummy input
model = get_model()
# print(model)  # inspect the network structure

wgts = paddle.load("/home/aistudio/data/data127829/trt_pose.pdparams")
f = open('trt_pose.wts', 'w')
f.write('{}\n'.format(len(wgts.keys())))  # first line: number of tensors
for k, v in wgts.items():
    # print("weight key: ", k, v.shape)
    vr = v.numpy().flatten()
    f.write('{} {} '.format(k, len(vr)))  # "name length", then the values
    for vv in vr:
        f.write(' ')
        f.write(struct.pack('>f', float(vv)).hex())  # big-endian float32 as hex
    f.write('\n')
f.close()
print("weight file created!!!")
```
```
weight file created!!!
```
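As a quick sanity check, the file can be parsed back in Python. This is a minimal sketch based purely on the format written above (a count line, then one "name length hex hex ..." line per tensor, big-endian float32 per value):

```python
import struct

with open('trt_pose.wts') as f:
    count = int(f.readline())
    for _ in range(count):
        parts = f.readline().split()
        name, n = parts[0], int(parts[1])
        vals = [struct.unpack('>f', bytes.fromhex(h))[0] for h in parts[2:2 + n]]
        assert len(vals) == n, name  # every tensor round-trips completely
print("parsed", count, "tensors OK")
```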
- Generate the TensorRT engine file
See my project for reference: https://github.com/thunder95/tensorrtx/tree/master/trt_pose
Place trt_pose.wts in that directory, then build the trt_pose.engine engine file:
```
mkdir build
cd build
cmake ..
make
./trt_pose -s
```

Inference test
The demo supports inference on both image and video files; run:
./trt_pose -d
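Before (or instead of) running the C++ demo, the serialized engine can also be sanity-checked from Python. This is a minimal sketch using the TensorRT Python API; the engine path and the binding inspection are my assumptions, and these binding APIs vary slightly across TensorRT versions:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open('build/trt_pose.engine', 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# List the I/O bindings the C++ demo will feed and read back
for i in range(engine.num_bindings):
    print(engine.get_binding_name(i), engine.get_binding_shape(i))
```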
Dataset source: https://github.com/noahcao/animal-pose-dataset
It covers five animal categories (cattle, sheep, horses, cats, and dogs), annotated in COCO format with bounding boxes and keypoints (x, y, vis).
- Keypoints (20 in total): Two eyes, Throat, Nose, Withers, Two Earbases, Tailbase, Four Elbows, Four Knees, Four Paws.
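To get a feel for the annotation layout, here is a hedged inspection sketch. It assumes the usual COCO-style fields with a flat [x1, y1, v1, x2, y2, v2, ...] keypoint list; the actual field names and nesting in this dataset's keypoints.json may differ, so print the top-level keys first and adjust accordingly:

```python
import json

with open('/home/aistudio/data/keypoints.json') as f:
    data = json.load(f)
print(data.keys())  # inspect the top-level layout first

# Assumption: COCO-style annotations with a flat keypoint triplet list
ann = data['annotations'][0]
kpts = ann['keypoints']
points = [tuple(kpts[i:i + 3]) for i in range(0, len(kpts), 3)]
print(len(points), 'keypoints, first:', points[0])
```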
```
# Unzip the mounted dataset into the data directory
!unzip -oq /home/aistudio/data/data127829/images.zip -d /home/aistudio/data
!cp /home/aistudio/data/data127829/keypoints.json /home/aistudio/data
# Inspect the dataset directory structure
!ls /home/aistudio/data
!tree /home/aistudio/data -d
```
```
data127829  images  keypoints.json
/home/aistudio/data
├── data127829
└── images

2 directories
```
```python
import cv2
import matplotlib.pyplot as plt

from work.animal.pre_visualize import visualize_img

plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False
%matplotlib inline

img = visualize_img()
plt.figure("Image")   # figure window title
plt.imshow(img)
plt.axis('on')        # set to 'off' to hide the axes
plt.title('image')    # figure title
plt.show()
```
```
image_path===> /home/aistudio/data/images/2007_000063.jpg
```
```
<Figure size 432x288 with 1 Axes>
```
Model training takes far too long, likely because the data-loading step was ported from the original C++ plugin to Python, which drastically reduces throughput. The training loss currently only reaches 0.001123 and 0.001038.
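One common mitigation for a slow Python data pipeline (my suggestion, not something the original project does) is to push loading into worker subprocesses with paddle.io.DataLoader; train_dataset below is a placeholder for the project's dataset object:

```python
from paddle.io import DataLoader

loader = DataLoader(
    train_dataset,           # placeholder: the project's paddle.io.Dataset
    batch_size=32,
    shuffle=True,
    num_workers=4,           # parallel workers hide the Python preprocessing cost
    use_shared_memory=True,  # faster inter-process tensor transfer
)
```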
```
%cd /home/aistudio/work/animal
!python train.py
```
With the trained model, images can be inferred directly. The two commands below produce the results shown:
```
%cd /home/aistudio/work/animal/
!python infer.py /home/aistudio/data/images/2007_000063.jpg
```
```
/home/aistudio/work/animal
W0324 16:15:02.460222  2386 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W0324 16:15:02.465003  2386 device_context.cc:465] device: 0, cuDNN Version: 7.6.
(1, 21, 2, 100)
infer done
```
```
%cd /home/aistudio/work/animal/
!python infer.py /home/aistudio/data/images/ca80.jpeg
```
```
/home/aistudio/work/animal
W0324 16:15:13.134407  2478 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W0324 16:15:13.139220  2478 device_context.cc:465] device: 0, cuDNN Version: 7.6.
(1, 21, 2, 100)
infer done
```
This concludes the [AI达人创造营第二期] walkthrough of reproducing TrtPose, manually converting it to TensorRT, and training it for animal pose estimation.