---
license: apache-2.0
language:
- en
tags:
- robotics
- reinforcement learning
- embodied ai
- computer vision
- simulation
- Embodied AI
size_categories:
- 1M<n<10M
task_categories:
- reinforcement-learning
- robotics
viewer: false
---
# ManiSkill Data

[PyPI](https://badge.fury.io/py/mani-skill2) | [Colab quickstart](https://colab.research.google.com/github/haosulab/ManiSkill2/blob/main/examples/tutorials/1_quickstart.ipynb) | [Website](https://haosulab.github.io/ManiSkill2) | [Discord](https://discord.gg/x8yUZe5AdN)
ManiSkill is a unified benchmark for learning generalizable robotic manipulation skills, powered by [SAPIEN](https://sapien.ucsd.edu/). **It features 20 out-of-the-box task families with 2000+ diverse object models and 4M+ demonstration frames**. Moreover, it supports fast visual-input learning, so that **a CNN-based policy can collect samples at about 2000 FPS with 1 GPU and 16 processes on a workstation**. The benchmark can be used to study a wide range of algorithms: 2D & 3D vision-based reinforcement learning, imitation learning, sense-plan-act, etc.
This is the Hugging Face datasets page for all data related to [ManiSkill2](https://github.com/haosulab/ManiSkill2), including **assets, robot demonstrations, and pretrained models.** Note that there were previously two iterations, ManiSkill and ManiSkill2; we are rebranding everything as simply ManiSkill, and the Python package version tells you which iteration you are using.
For detailed information about ManiSkill, head over to our [GitHub repository](https://github.com/haosulab/ManiSkill2), [website](https://maniskill2.github.io/), [ICLR 2023 paper](https://arxiv.org/abs/2302.04659), or [documentation](https://maniskill.readthedocs.io/en/dev/).
**Note: to download the data you must use the `mani_skill` package as shown below; loading directly through the Hugging Face `datasets` library does not work as intended yet.**
## Assets
Some environments require you to download additional assets, which are stored here.
You can download task-specific assets by running
```
python -m mani_skill.utils.download_asset ${ENV_ID}
```
## Demonstration Data
We provide a command line tool (`mani_skill.utils.download_demo`) to download demonstrations from here.
```
# Download the demonstration dataset for a specific task
python -m mani_skill.utils.download_demo ${ENV_ID}
# Download the demonstration datasets for all rigid-body tasks to "./demos"
python -m mani_skill.utils.download_demo rigid_body -o ./demos
```
To learn how to use the demonstrations and what environments are available, go to the demonstrations documentation page: https://maniskill.readthedocs.io/en/dev/user_guide/datasets/datasets.html
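Each downloaded demonstration dataset consists of an HDF5 trajectory file plus a companion JSON metadata file. As a minimal sketch of how the metadata can be inspected (the field names `env_info`, `env_kwargs`, `episodes`, and `elapsed_steps` below are assumptions based on the ManiSkill2 trajectory format; check the documentation linked above for the authoritative schema):

```python
import json

# Sample metadata mirroring the assumed trajectory JSON layout.
# In practice you would load the .json file that sits next to the
# .h5 file, e.g.:
#     with open("demos/PickCube-v0/trajectory.json") as f:
#         metadata = json.load(f)
metadata = {
    "env_info": {
        "env_id": "PickCube-v0",
        "env_kwargs": {"obs_mode": "none", "control_mode": "pd_joint_pos"},
    },
    "episodes": [
        {"episode_id": 0, "elapsed_steps": 57},
        {"episode_id": 1, "elapsed_steps": 64},
    ],
}

# The metadata tells you which environment (and kwargs) to recreate
# in order to replay the trajectories, and how long each episode is.
env_id = metadata["env_info"]["env_id"]
num_episodes = len(metadata["episodes"])
total_steps = sum(ep["elapsed_steps"] for ep in metadata["episodes"])
print(env_id, num_episodes, total_steps)
```

The actions and observations themselves live in the HDF5 file; the JSON is a lightweight index you can scan without opening the (much larger) trajectory data.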
## License
All rigid body environments in ManiSkill are licensed under fully permissive licenses (e.g., Apache-2.0).
The assets are licensed under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
## Citation
If you use ManiSkill or its assets, models, and demonstrations, please cite using the following BibTeX entry for now:
```
@inproceedings{gu2023maniskill2,
title={ManiSkill2: A Unified Benchmark for Generalizable Manipulation Skills},
author={Gu, Jiayuan and Xiang, Fanbo and Li, Xuanlin and Ling, Zhan and Liu, Xiqiang and Mu, Tongzhou and Tang, Yihe and Tao, Stone and Wei, Xinyue and Yao, Yunchao and Yuan, Xiaodi and Xie, Pengwei and Huang, Zhiao and Chen, Rui and Su, Hao},
booktitle={International Conference on Learning Representations},
year={2023}
}
```
A ManiSkill3 BibTeX entry will be added later.