Have a look at the latest blog posts & news

News

Project Publications | Improving robot-to-human communication using flexible display technology as a robotic-skin-interface: a co-design study

Constantin Scholz, Hoang-Long Cao, Ilias El Makrini, Susanne Niehaus, Maximilian Kaufmann, David Cheyns, Nima Roshandel, Aleksander Burkiewicz, Mariane Shhaitly, Emil Imrith, Xavier Rottenberg, Peter Gerets, Bram Vanderborght

ABSTRACT

In the evolving field of industrial automation, operator awareness of robot actions and intentions is critical for safety and efficiency, especially when working in close proximity to robots. From the robot-to-human communication perspective, a collaborative robot (cobot) is expected to express its internal states and monitor task progress. Various traditional communication modalities (e.g., tower lights, external screens, LED rings, and sound) often fall short of conveying nuanced information, while a flexible display curved around the cobot arm using organic light-emitting diode (OLED) technology offers a potential advantage. Integrated seamlessly with the robot, this interface enhances interaction by displaying text and video, enriching communication, and positively influencing the human-robot collaboration experience. In this work, we investigate a novel integrated flexible OLED display technology used as a robotic skin-interface to improve robot-to-human communication in a real industrial setting at Volkswagen (VW), following a user-centric Double-Diamond co-design process. We first conducted a co-design workshop with six operator representatives to collect their ideas and expectations on how the robot should communicate with them. The gathered information was used to design an interface for a collaborative human-robot interaction task in motor assembly. The interface was implemented in a workcell and validated qualitatively with a small group of operators (n=9) and quantitatively with a large group (n=42). The validation results showed that using flexible OLED technology could improve the operators’ attitude toward the robot, increase their intention to use the robot, enhance perceived enjoyment, social influence, and trust, and reduce their anxiety.

News

Project Publications | Finite Element Analysis-based soft robotic modeling: Simulating a soft-actuator in SOFA

Pasquale Ferrentino, Ellen Roels, Joost Brancart, Seppe Terryn, Guy Van Assche, Bram Vanderborght

ABSTRACT

Soft robotics modeling is a fast-evolving research topic. Many techniques are present in the literature, but most of them rely on analytical models with many equations that are time-consuming, hard to solve, and difficult to handle. For this reason, the help of a soft-mechanics simulator is essential in this field. This article presents a tutorial on how to build a soft-robot model using an open-source finite element analysis (FEA) simulator called SOFA. This software generates a simulation scene from code written in Python or XML, so it can be used by people with different fields of competence, such as mechanical engineering, materials science, and programming. As a case study, a Python simulation of a cable-driven soft actuator that makes contact with a rigid object is considered. The basic working principles of SOFA required to build a scene are explained step by step. In particular, this article shows how to simulate the mechanics and animate the bending behavior of the actuator, and highlights the importance of knowing the constitutive material properties for good modeling of the mechanical system. Furthermore, we also show how to retrieve and save data from the simulation, demonstrating that SOFA can easily adapt to a multidisciplinary subject such as soft robotics research, and can also be useful for teaching simulation and programming-language principles to engineering students.
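To give a flavor of what such a scene looks like, here is a minimal SofaPython3 sketch of a soft beam clamped at one end and deforming under gravity via FEM. It is not the cable-driven actuator scene from the article: all component names, plugin names, and parameter values below are illustrative assumptions (and may vary across SOFA versions).

```python
# Minimal SofaPython3 scene sketch: a soft beam fixed at one end, bending
# under gravity. Component and plugin names are assumptions based on recent
# SOFA releases, not taken from the article.
def createScene(root):
    root.gravity = [0.0, -9.81, 0.0]
    root.dt = 0.02

    # Recent SOFA releases require declaring the plugins that provide each
    # component; the exact module names depend on the installed version.
    root.addObject('RequiredPlugin', pluginName=[
        'Sofa.Component.ODESolver.Backward',
        'Sofa.Component.LinearSolver.Iterative',
        'Sofa.Component.Topology.Container.Grid',
        'Sofa.Component.StateContainer',
        'Sofa.Component.Mass',
        'Sofa.Component.SolidMechanics.FEM.Elastic',
        'Sofa.Component.Engine.Select',
        'Sofa.Component.Constraint.Projective'])

    beam = root.addChild('beam')
    beam.addObject('EulerImplicitSolver')        # implicit time integration
    beam.addObject('CGLinearSolver', iterations=100, tolerance=1e-6, threshold=1e-6)
    beam.addObject('RegularGridTopology', n=[10, 3, 3],
                   min=[0.0, 0.0, 0.0], max=[1.0, 0.1, 0.1])
    beam.addObject('MechanicalObject', name='dofs')
    beam.addObject('UniformMass', totalMass=0.1)
    # The constitutive material properties the article stresses: stiffness
    # and compressibility of the simulated soft body.
    beam.addObject('HexahedronFEMForceField', youngModulus=300, poissonRatio=0.45)
    # Clamp the nodes on the x=0 face so the free end bends downward.
    beam.addObject('BoxROI', name='clamp', box=[-0.01, -0.01, -0.01, 0.01, 0.11, 0.11])
    beam.addObject('FixedConstraint', indices='@clamp.indices')
    return root
```

The simulated node positions can then be read back during animation (e.g., via `dofs.position.value` inside a Python controller) and saved to disk, which is the data-retrieval step the abstract mentions.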

News

Project Publications | GAN-Based Semi-Supervised Training of LSTM Nets for Intention Recognition in Cooperative Tasks

Matija Mavsar, Jun Morimoto, Aleš Ude

ABSTRACT

The accumulation of a sufficient amount of data for training deep neural networks is a major hindrance in the application of deep learning in robotics. Acquiring real-world data requires considerable time and effort, yet it might still not capture the full range of potential environmental variations. The development of generative adversarial networks (GANs) has made it possible to generate new synthetic data from existing training data. In this paper, we introduce a GAN-based training methodology that utilizes a recurrent, LSTM-based architecture for intention recognition in robotics. The resulting networks predict the intention of the observed human or robot from input RGB videos. They are trained in a semi-supervised manner: the output classification networks predict one of the possible labels for the observed motion, while the recurrent generator networks produce fake RGB videos that are leveraged in the training process. We show that utilizing the generated data during network training increases the accuracy and generality of motion classification compared to using only real training data. The proposed method can be applied to a variety of dynamic tasks and different LSTM-based classification networks to supplement real data.
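One common way to realize this kind of semi-supervised GAN training is to give the classifier K real classes plus an extra "fake" class and train it jointly with a recurrent generator. The PyTorch sketch below follows that recipe on pre-extracted feature sequences rather than raw RGB frames; every size, loss, and name in it is an assumption for illustration, not the authors' architecture.

```python
# Semi-supervised GAN sketch with LSTM networks: the classifier D predicts
# one of K motion labels or a (K+1)-th "fake" class; the recurrent generator
# G produces fake sequences used as extra training data. All shapes and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

K = 5                 # number of real intention classes (assumed)
FEAT, HID = 128, 256  # per-frame feature size and LSTM hidden size (assumed)

class Generator(nn.Module):
    """Maps a noise sequence to a fake feature sequence."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(FEAT, HID, batch_first=True)
        self.out = nn.Linear(HID, FEAT)
    def forward(self, z):                 # z: (B, T, FEAT)
        h, _ = self.lstm(z)
        return self.out(h)                # fake sequence: (B, T, FEAT)

class Classifier(nn.Module):
    """LSTM classifier with K real classes plus one fake class."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(FEAT, HID, batch_first=True)
        self.head = nn.Linear(HID, K + 1)
    def forward(self, x):                 # x: (B, T, FEAT)
        h, _ = self.lstm(x)
        return self.head(h[:, -1])        # logits from the final time step

G, D = Generator(), Classifier()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
ce = nn.CrossEntropyLoss()

def train_step(real_x, real_y):           # real_x: (B, T, FEAT), real_y: (B,)
    B, T = real_x.shape[:2]
    z = torch.randn(B, T, FEAT)

    # Classifier step: real sequences get their true label, generated
    # sequences get the extra "fake" class index K.
    fake_x = G(z).detach()
    loss_d = ce(D(real_x), real_y) + ce(D(fake_x), torch.full((B,), K))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: push generated sequences toward the real classes,
    # i.e. maximize the probability mass assigned to classes 0..K-1.
    p_real = torch.softmax(D(G(z)), dim=1)[:, :K].sum(dim=1)
    loss_g = -torch.log(p_real.clamp_min(1e-6)).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Smoke test on random stand-in data.
x, y = torch.randn(8, 20, FEAT), torch.randint(0, K, (8,))
print(train_step(x, y))
```

Because the generated sequences only ever carry the fake label, the classifier learns the boundary between real and synthetic motion while still sharpening its decision among the K real intention classes, which is how the extra synthetic data can improve accuracy without extra annotation.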