
EA-PN-TEC: EEG Evoked activity and psychoacoustic monitoring of pink noise exposure | EEG research dataset | Psychoacoustic monitoring dataset

Source: Mendeley Data | Updated: 2024-03-27 | Indexed: 2024-06-27
EEG research
Psychoacoustic monitoring
Download link:
https://data.mendeley.com/datasets/63m5gy9n5h
Dataset description:
Context: Before being interpreted by the human brain, sound is affected by many physical factors, in particular the frequency response of the audio system (such as headphones), a variable not considered in many studies on acoustic therapies.

Objective: To identify changes in electroencephalographic (EEG) transient neural and psychoacoustic responses due to long-term exposure to pink noise altered by the frequency responses of three headphone models.

Design: The data are a continuation of the study in "Related links". The EEG activity of participants was recorded while they performed a five-alternative forced-choice psychoacoustic discrimination test on a computer, before and 30 days after exposure to pink noise. The psychoacoustic test consisted of listening to combinations of three pink noise sounds modified according to the headphone models ATVIO, SHURE, and APPLE. Afterward, participants were assigned to a headphone group and listened to pink noise with that headphone model for 20 minutes daily. Finally, participants were scheduled for a final recording session following the same procedure as the first one.

Content: EEG data in GDF format of 24 individuals answering a forced-choice psychoacoustic test in two sessions. The data are divided into three groups: ATVIO (7 files), SHURE (10 files), and APPLE (7 files). Sample rate: 250 Hz.

An Excel spreadsheet named Answers_RT with the answers and reaction times (RT) per question and participant is provided for each session. Answers_RT worksheets:
- info: explanation of the sound scenarios.
- Answers S1: answers for the first EEG recording session. The first row shows the ID of every participant. Rows 2-37 hold the individual answers for each scenario of the experimental paradigm: correct answers are coded as 1, incorrect answers as 0, and NaN marks unanswered questions. Row 38 holds the total number of correct answers per participant; the maximum score is 36.
- Answers SF: same structure as Answers S1; the results correspond to the last session.
- Reaction times S1 and SF: same structure as Answers S1 and SF; the values in rows 2-37 are the individual reaction times to the scenarios, in seconds.

Participants.txt: text file with the IDs of the recordings, heart rate, sex of the subjects, and group assignments.

Stimulation_codes.txt: text file with the stimulation codes registered in the GDF files. Codes refer to 1) instructions before listening to sounds, 2) play and stop of sounds (ATVIO, SHURE, and APPLE), 3) questions, and 4) answers.

Instruments: folder containing the sounds used to explain psychoacoustic concepts and relate them to physical acoustic features.

Channels.txt: text file giving the electrode names and their positions in theta/phi coordinates (second and third columns, respectively).
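For users who want a quick look at the data, the following is a minimal loading sketch in Python, assuming MNE-Python, pandas, and openpyxl are installed. The GDF file name is hypothetical and the worksheet layout is inferred from the description above, so both may need adjusting to the actual archive contents:

import mne
import pandas as pd

# EEG recording: any GDF file from the ATVIO/SHURE/APPLE groups (name below is hypothetical).
raw = mne.io.read_raw_gdf("ATVIO_P01_S1.gdf", preload=True)
print(raw.info)  # expect a 250 Hz sampling rate; electrode names/positions are in Channels.txt

# Stimulation codes are stored as annotations; their meaning (instructions,
# play/stop of sounds, questions, answers) is listed in Stimulation_codes.txt.
events, event_id = mne.events_from_annotations(raw)
print(event_id)

# Answers and reaction times (sheet name follows the description above).
answers_s1 = pd.read_excel("Answers_RT.xlsx", sheet_name="Answers S1")

# Columns are participant IDs; the first 36 data rows are per-scenario answers
# (1 = correct, 0 = incorrect, NaN = unanswered) and the last row is the total score.
# If the sheet has a leading label column, drop it before summing.
per_scenario = answers_s1.iloc[:36]
totals = per_scenario.sum()        # should reproduce the totals row (maximum 36)
print((totals / 36.0).round(2))    # per-participant accuracy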
Created:
2024-01-23
Popular datasets

PTB-Image

PTB-Image is a comprehensive dataset of scanned paper ECGs and their corresponding digital signals, created by the VinUniversity College of Engineering and Computer Science and the VinUni-Illinois Smart Health Center in Hanoi, Vietnam. Intended to advance research on ECG digitization, it contains 549 records (one to five per patient), each consisting of 15 simultaneously recorded ECG signals covering the standard 12-lead ECG and the Frank leads. The dataset was produced by scanning the paper ECGs of the original PTB dataset and printing part of the signals, and can be used for applied research in ECG digitization, automatic diagnosis, and telemedicine.

Indexed by arXiv

TCM-Tongue

TCM-Tongue is a standardized tongue-image dataset for AI-assisted tongue diagnosis in traditional Chinese medicine (TCM). It contains 6,719 high-quality images captured under standardized conditions and annotated with 20 pathological symptom categories (on average 2.54 clinically validated labels per image, all verified by licensed TCM practitioners). The dataset supports multiple annotation formats (COCO, TXT, XML) for broad usability and has been benchmarked with nine deep learning models to demonstrate its utility for AI development. The resource provides a key foundation for advancing reliable computational TCM tools, addresses the data shortage in this field, and promotes the integration of AI into research and clinical practice through standardized, high-quality diagnostic data.

Indexed by arXiv

CampusGuard

The CampusGuard dataset annotates and classifies student behavior in campus environments, providing rich training samples for improving YOLOv8 models. It covers five main categories: "using a phone", "not wearing a helmet", "sleeping", "three-person group behavior", and "violent behavior". These categories span common behaviors inside and outside the classroom and reflect the diversity of campus safety and student behavior management.

Indexed on GitHub

NIH Chest X-rays

Over 112,000 Chest X-ray images from more than 30,000 unique patients

Indexed on Kaggle

CMNEE(Chinese Military News Event Extraction dataset)

CMNEE (Chinese Military News Event Extraction dataset) is a large-scale, document-level annotated, open-source dataset for Chinese military news event extraction, built jointly by the National University of Defense Technology, Southeast University, and Tsinghua University. It contains 17,000 documents and 29,223 events, all manually annotated against a predefined military-domain schema with 8 event types and 11 argument roles. The dataset was constructed with a two-stage, multi-round annotation strategy: military news texts were first collected from authoritative websites and preprocessed, then pre-annotated using a trigger-word dictionary, and the event schema was finalized after review by domain experts; annotation then proceeded in manual batches with iterative correction until the defined quality standards were met. As the first dataset focused on document-level event extraction in the military domain, CMNEE is significant for advancing related research.

Indexed on GitHub