
Cornell Grasping Dataset

Source: ieeexplore.ieee.org (indexed 2025-02-19)
Download link:
https://ieeexplore.ieee.org/document/5980145
Description:

This dataset was created by Yun Jiang, Stephen Moseson, and Ashutosh Saxena of the Department of Computer Science at Cornell University. Its goal is to estimate, from RGBD images (color images paired with depth maps), the 7-dimensional gripper configuration of a robotic arm when grasping an object: 3D position, 3D orientation, and gripper opening width. The dataset contains 194 object images for training (markers, martini glasses, rubber toys, and so on) and 128 images for testing, covering 9 object categories. In each image, 1 to 3 correct grasping rectangles are manually annotated, and 5 incorrect rectangles are randomly generated as negative samples. The dataset was built for supervised learning: an SVM ranking algorithm is trained to rank candidate grasping rectangles. It is primarily used for robotic grasping tasks, especially grasping novel, previously unseen objects. By learning a rectangle-based grasp representation, models trained on this dataset can help robots grasp more efficiently and accurately in cluttered environments.
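The rectangle representation described above maps each annotated rectangle onto part of the 7-D gripper configuration: the rectangle's center gives the image-plane position, its orientation gives the gripper angle, and one edge length gives the opening width. A minimal sketch of that conversion, assuming the annotation lists four (x, y) corners in order with the first edge running along a gripper plate (the actual file convention in the released dataset may differ):

```python
import math

def rect_to_grasp(vertices):
    """Convert a 4-vertex grasp rectangle to (center, angle, width, height).

    Assumed vertex convention (hypothetical, check the dataset's files):
    edge v0->v1 runs along a gripper plate, edge v1->v2 spans the opening.
    """
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = vertices
    # Center of the rectangle = mean of the four corners.
    cx = (x0 + x1 + x2 + x3) / 4.0
    cy = (y0 + y1 + y2 + y3) / 4.0
    # Orientation of the gripper-plate edge, in radians.
    angle = math.atan2(y1 - y0, x1 - x0)
    # Length of the plate edge and the gripper opening width.
    height = math.hypot(x1 - x0, y1 - y0)
    width = math.hypot(x2 - x1, y2 - y1)
    return (cx, cy), angle, width, height

# Example: an axis-aligned 4x2 rectangle centered at (2, 1).
print(rect_to_grasp([(0, 0), (4, 0), (4, 2), (0, 2)]))
# → ((2.0, 1.0), 0.0, 2.0, 4.0)
```

The remaining degrees of freedom of the full 7-D configuration (depth and out-of-plane orientation) are recovered from the depth map at the rectangle's location, which is why the dataset pairs each color image with a registered depth image.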
Provided by:
Cornell University