
MADS Human Motion Dataset: Martial Arts, Dancing and Sports | Human Pose Estimation Dataset | Motion Capture Dataset

Indexed by 帕依提提 (Payititi) on 2024-03-04
Human pose estimation
Motion capture
Download link:
https://www.payititi.com/opendatasets/show-25912.html
Dataset description:
Human pose estimation has been one of the most popular research topics of the past two decades, especially since the introduction of human pose datasets for benchmark evaluation. These datasets usually capture simple daily-life actions. Here, we introduce a new dataset, Martial Arts, Dancing and Sports (MADS), which consists of challenging martial arts actions (Tai-chi and Karate), dancing actions (hip-hop and jazz), and sports actions (basketball, volleyball, football, rugby, tennis and badminton). Two martial arts masters, two dancers and an athlete performed these actions while being recorded with either multiple cameras or a stereo depth camera. In the multi-view or single-view setting, we provide three color views for 2D image-based human pose estimation algorithms; for depth-based human pose estimation, we provide stereo-based depth images from a single view. All videos have corresponding synchronized and calibrated ground-truth poses, captured with a motion capture (MOCAP) system.

We provide initial baseline results on the dataset using a variety of tracking frameworks, including a generative tracker based on an annealing particle filter and a robust likelihood function, a discriminative tracker using twin Gaussian processes, and hybrid trackers such as the Personalized Depth Tracker. The evaluation suggests that discriminative approaches perform better than generative ones when enough representative training samples are available, while generative methods are more robust to pose diversity but can fail to track when the motion is too fast for the effective search range of the particle filter.

The data was recorded in a studio environment with some background clutter, using Point Grey Bumblebee-II cameras. The multi-view data was collected with three cameras placed around the capture space, while the stereo images were collected from a single viewpoint. The multi-view data was captured at 15 fps, with the cameras synchronized automatically when connected to the same hub; the depth data (stereo images) was captured at 10 fps or 20 fps. The image resolution is 1024 × 768. The ground-truth pose data was captured with a MOCAP system running at 60 fps. All videos and motion capture data are calibrated to the same coordinate system and synchronized.

The MADS dataset contains five action categories (Tai-chi, Karate, jazz dance, hip-hop dance, and sports), totalling about 53,000 frames, with six sequences per action category. We tested several state-of-the-art methods on MADS, including both generative and discriminative trackers; the resulting demos can be found at the demo link.
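The color, depth, and MOCAP streams run at different rates (15 fps for the multi-view cameras, 10 or 20 fps for the stereo depth, 60 fps for the ground-truth poses), so pairing a video frame with its ground-truth pose reduces to a timestamp-based index mapping once the streams are synchronized to a common start. Below is a minimal Python sketch of that alignment; the directory layout, file names, and the .npy pose format are hypothetical assumptions for illustration, not part of the official release.

```python
# Minimal sketch of frame-rate alignment between a MADS video stream and the
# 60 fps MOCAP ground truth. Directory layout and file names below are
# hypothetical assumptions; the official release may differ.
from pathlib import Path

import numpy as np

MOCAP_FPS = 60.0       # ground-truth pose rate reported for MADS
MULTIVIEW_FPS = 15.0   # color cameras in the multi-view setting
DEPTH_FPS = (10.0, 20.0)  # stereo/depth sequences run at 10 or 20 fps


def mocap_index(video_frame: int, video_fps: float, mocap_fps: float = MOCAP_FPS) -> int:
    """Map a video frame index to the nearest MOCAP frame index,
    assuming both streams start at the same instant (t = 0)."""
    t = video_frame / video_fps        # timestamp of the video frame in seconds
    return int(round(t * mocap_fps))   # nearest ground-truth pose sample


def load_poses(npy_path: Path) -> np.ndarray:
    """Load ground-truth joint positions as an (N, J, 3) array.
    Storing poses as .npy is an assumption made only for this sketch."""
    return np.load(npy_path)


if __name__ == "__main__":
    # Hypothetical example: third color frame of a Tai-chi multi-view sequence.
    poses = load_poses(Path("MADS/taichi/seq01/gt_poses.npy"))
    idx = mocap_index(video_frame=3, video_fps=MULTIVIEW_FPS)
    print(f"video frame 3 -> MOCAP frame {idx}, pose shape {poses[idx].shape}")
```

For the 15 fps multi-view streams the mapping is simply a factor of four (video frame i pairs with MOCAP frame 4i); going through timestamps keeps the same code valid for the 10 fps and 20 fps depth sequences.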
Provided by:
帕依提提 (Payititi)