Multimodal Dataset of Assembly Tasks
We aim to improve Human-Robot Collaboration (HRC) systems in the industrial domain. For these systems to be effective, robots must be able to predict human intentions and actions. Recognizing human intentions and actions is challenging, and we address this challenge by developing algorithms that take advantage of multimodal datasets collected with different types of sensors. Because public datasets in the industrial domain are scarce, in this study we record a multimodal dataset covering assembly and disassembly tasks for training and testing our algorithms.
- Can I participate? Anyone can participate in the study, provided they are an adult and able to perform the procedural activity of assembling toys.
- What will I do? You will be asked to assemble a series of toys while being recorded by wearable devices (Meta Project Aria glasses, an XSENS body suit, and XSENS gloves) and non-wearable devices (RGB cameras).
- Where does it take place? After you fill out the form at the bottom of this webpage, we will contact you shortly and invite you to the Human-centered Technologies Lab and the SMACT Live Demo, both located at NOI Techpark (Via Alessandro Volta, 13, 39100 Bolzano BZ).
More information about this project can be found in the links below:
Are you interested in participating in this study?
Please fill out the form below.
If you still have any questions, please do not hesitate to contact Zahid Razzaq.
Contact details: Zahid.Razzaq@student.unibz.it
Picture courtesy of Edoardo Bianchi