Invited Speakers

Nikolaus Correll, University of Colorado at Boulder
Jelizaveta Konstantinova, Queen Mary University of London and Ocado Technology
Frédéric Boyer, IMT Atlantique Bretagne-Pays de la Loire École Mines-Télécom
Akihiko Yamaguchi, Tohoku University
Hansruedi Früh, F&P Robotics
Norbert Druml, Infineon Technologies AG
Vincent Lebastard, IMT Atlantique Bretagne-Pays de la Loire École Mines-Télécom


Speaker Nikolaus Correll

Invited Talk

Full body proximity sensing and human-robot interaction via sensor/actuator augmented skins
Abstract We describe our recent progress in developing a full-body sensor skin for the Universal Robots UR5 arm. The skin is equipped with a high-density array of distance sensors and color LEDs, enabling the robot arm to detect objects as far as 2 meters away and to provide visual feedback to a user. We will describe lessons learned in moving from an 8x8 research prototype, which could differentiate between objects and user gestures using deep learning, to a full-body sensor suit, and discuss possible applications for mobile manipulation and further research.
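As a rough illustration of the kind of processing such a skin enables, the sketch below feeds one hypothetical 8x8 frame of distance readings into a small convolutional classifier. The network shape, sensing range, and gesture classes are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch (hypothetical, not the speaker's code): classifying user
# gestures from an 8x8 array of distance readings with a small CNN.
import numpy as np
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    """Tiny CNN over one 8x8 'depth image' from the skin's distance sensors."""
    def __init__(self, n_gestures: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, n_gestures),
        )

    def forward(self, x):
        return self.net(x)

# One synthetic frame: 8x8 distances in meters, clipped to a 2 m sensing range.
frame = np.clip(np.random.rand(8, 8) * 2.5, 0.0, 2.0).astype(np.float32)
x = torch.from_numpy(frame).view(1, 1, 8, 8) / 2.0  # normalize to [0, 1]
logits = GestureNet()(x)  # untrained here; weights would come from training
print("predicted gesture:", int(logits.argmax()))
```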

 

Speaker Jelizaveta Konstantinova

Invited Talk

Proximity sensors for grasping applications in robotics
Abstract Proximity and distance estimation sensors are broadly used in robotic hands to enhance the quality of grasping during grasp planning, grasp correction and in-hand manipulation. In this talk I will present my work on short-range fibre-optic proximity sensors for grasping and manipulation, and I will talk about the design and calibration challenges of these sensors. In the second half of my talk I will describe examples of robotic platforms that would benefit from such systems, as well as cover the associated challenges. Specifically, I will talk about the system developed for the EU H2020 SecondHands project.
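To make the calibration problem concrete, here is a minimal sketch of one common approach for reflective fibre-optic proximity sensors: fit an inverse-square intensity model to measured intensity/distance pairs, then invert it to estimate distance. The model form, units, and data points are illustrative assumptions, not the speaker's published method.

```python
# Hedged sketch: calibrating a reflective fibre-optic proximity sensor by
# fitting I(d) = a / (d + b)^2 + c, then inverting the fit to get distance.
import numpy as np
from scipy.optimize import curve_fit

def intensity_model(d, a, b, c):
    # Reflected intensity falls off roughly with the square of distance.
    return a / (d + b) ** 2 + c

# Hypothetical calibration data: distances in mm, sensor readings in volts.
d_mm = np.array([1.0, 2.0, 3.0, 5.0, 8.0, 12.0, 16.0, 20.0])
v = np.array([3.20, 1.55, 0.95, 0.48, 0.24, 0.14, 0.10, 0.08])

(a, b, c), _ = curve_fit(intensity_model, d_mm, v, p0=(5.0, 1.0, 0.05))

def estimate_distance(v_meas):
    # Invert the fitted model: d = sqrt(a / (v - c)) - b.
    return np.sqrt(a / (v_meas - c)) - b

print(f"fit: a={a:.2f}, b={b:.2f}, c={c:.3f}")
print(f"0.30 V -> {estimate_distance(0.30):.1f} mm")
```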

 

Speaker Frédéric Boyer / Vincent Lebastard

Invited Talk

Artificial electric sense for underwater robotics: state of the art and perspectives
Abstract Fish that can electrocute their prey have been known since antiquity and inspired Volta to design the first battery. However, the ability of other, so-called weakly electric fish to perceive their near surroundings by sensing, through a dense array of transcutaneous electroreceptors, the perturbations of a self-generated electric field was only discovered in the 1950s. Remarkably, these fish are able to detect, localize and analyze objects in confined environments and turbid waters, where neither vision nor sonar works. Named Active Electrolocation by biologists, this perceptual ability has recently drawn the attention of roboticists aiming to design a novel generation of underwater robots able to navigate and operate in harsh environmental conditions. With this in mind, the keynote attempts to give a comprehensive overview of recent progress in artificial e-sense for underwater robotics. Starting from the fish, we will progressively move toward robotics and address several issues ranging from reactive autonomous navigation, localization and shape recognition to haptic-feedback teleoperation. Along the way, we will attempt to reveal further insight into how nature can inspire engineering.
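For intuition about the underlying physics, the sketch below uses the standard small-sphere (dipole) approximation from the electrolocation literature: a sphere in a locally uniform field perturbs the potential like a dipole whose sign depends on whether the object conducts better or worse than the surrounding water. The geometry and values are illustrative assumptions, not the speakers' code.

```python
# Dipole approximation of active electrolocation: a sphere of radius `a`
# in a locally uniform field E0 perturbs the potential at a receptor as
# dphi = chi * a^3 * (E0 . r) / |r|^3, with contrast chi = +1 for a
# perfect conductor and -1/2 for an insulator.
import numpy as np

def sphere_perturbation(receptor, sphere, a, E0, chi):
    """Potential perturbation at `receptor` caused by a small sphere."""
    r = np.asarray(receptor, float) - np.asarray(sphere, float)
    dist = np.linalg.norm(r)
    return chi * a**3 * np.dot(E0, r) / dist**3

E0 = np.array([1.0, 0.0, 0.0])            # V/m, emitter field at the sphere
sphere_pos = np.array([0.10, 0.05, 0.0])  # m, object position
a = 0.01                                  # 1 cm sphere radius

for chi, label in [(+1.0, "conductor"), (-0.5, "insulator")]:
    dphi = sphere_perturbation([0.0, 0.0, 0.0], sphere_pos, a, E0, chi)
    print(f"{label}: perturbation {dphi:+.2e} V at the receptor")
```

The opposite signs of the two outputs are what let the fish (and a robot) tell conductive from insulating objects apart.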

 

Speaker Akihiko Yamaguchi

Invited Talk

Vision-based tactile sensor FingerVision for fragile object manipulation
Abstract

FingerVision is a vision-based tactile sensor consisting of an elastic, transparent skin and cameras. It provides multimodal sensing to robots, including force and slip distributions as well as information about nearby objects such as position, orientation, and texture.
In this talk, I will demonstrate the use of FingerVision in the manipulation of deformable and fragile objects. Its high-resolution slip detection increases the robustness of grasping; it also enables robots to grasp objects with a sense of touch, which improves the sample efficiency of learning to grasp. Another feature of FingerVision is that it provides additional modalities by analyzing the (proximity) vision stream; one example application is inspecting manipulated objects such as food products.
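As a simplified sketch of marker-based slip sensing in the spirit of FingerVision (the author's actual open-source pipeline differs), the code below tracks dark markers embedded in the transparent skin with a blob detector and treats large frame-to-frame marker motion as a slip cue. The camera index and threshold are assumptions.

```python
# Simplified marker-tracking slip cue, not the real FingerVision pipeline.
import cv2
import numpy as np

detector = cv2.SimpleBlobDetector_create()  # default params find dark blobs

def marker_positions(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return np.array([kp.pt for kp in detector.detect(gray)])

def slip_score(prev_pts, curr_pts):
    """Mean nearest-neighbour displacement between consecutive marker sets."""
    if len(prev_pts) == 0 or len(curr_pts) == 0:
        return 0.0
    d = np.linalg.norm(prev_pts[:, None, :] - curr_pts[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

cap = cv2.VideoCapture(0)  # hypothetical index for the in-finger camera
ok, prev = cap.read()
prev_pts = marker_positions(prev) if ok else np.empty((0, 2))
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    pts = marker_positions(frame)
    if slip_score(prev_pts, pts) > 2.0:  # pixels/frame, tuned per setup
        print("slip detected")
    prev_pts = pts
cap.release()
```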

 

Speaker Hansruedi Früh

Invited Talk

Reliable and context related grasping for autonomous mobile robots
Abstract

Mobile robot assistants whose mission is to manually serve people in all kinds of situations are challenged by many factors. What should be picked up, and where? How should it be grasped? What should be done with it? How can the robot make sure not to lose it on the way? Once this works, the final act of placing or handing over the object correctly is another challenging task. Is the space free for putting it there? Is the person attentive and close enough to the robot's working range? How should the object be placed when released? All these sub-tasks of object handling require careful consideration if the whole skill is to be performed reliably and appropriately in the actual situation.
To reach such a level, several fields of robotics and AI research have to be combined: context management, attentional systems, situation planning, decision making and sophisticated movement control. All of them involve signal processing and sensor data analysis. Robust behavior requires different sensor modalities, e.g. combinations of visual, tactile and proximity sensors, and the execution plan has to be updated continuously to provide flexibility.
F&P has developed a hierarchical scripting system that allows its mobile robot assistants, equipped with sensorized finger grippers, to perform a skill from different initial situations. This talk will show how skills are recombined according to different situations and how reliable performance can be reached by using both sensor fusion and context management.
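To give a flavor of what a hierarchical, precondition-gated skill script might look like, here is a hypothetical Python sketch. It illustrates the general idea only, not F&P's actual system; all skill names and context fields are invented.

```python
# Hypothetical illustration of hierarchical skills gated by context checks.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Context = Dict[str, object]

@dataclass
class Skill:
    name: str
    precondition: Callable[[Context], bool] = lambda ctx: True
    action: Callable[[Context], None] = lambda ctx: None
    children: List["Skill"] = field(default_factory=list)

    def run(self, ctx: Context) -> bool:
        if not self.precondition(ctx):
            print(f"skip {self.name}: precondition failed")
            return False
        self.action(ctx)
        # Run sub-skills in order, stopping at the first failure.
        return all(child.run(ctx) for child in self.children)

handover = Skill(
    "handover object",
    children=[
        Skill("approach person",
              precondition=lambda ctx: ctx["person_distance_m"] < 1.5),
        Skill("extend arm"),
        Skill("release grasp",
              precondition=lambda ctx: ctx["person_attentive"]),
    ],
)

handover.run({"person_distance_m": 0.8, "person_attentive": True})
```

The same skill tree can be re-entered from different initial situations: the precondition checks decide which branches actually execute.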

 

Speaker Norbert Druml

Invited Talk

LiDAR and 3D Imaging for Robotics and Automated Mobility
Abstract Highly automated driving will usher in a major paradigm shift in transportation. It will not only enable radically new use-cases and applications, but will also significantly increase safety for passengers and road users in general. In order to achieve the future goal of highly automated vehicles and automated robots, various redundant and diverse sensor types are required to enable robust environment perception during all possible weather conditions in driving. According to industry and academia, the Light Detection and Ranging (LiDAR) technology will be the key enabler, in conjunction with Radar and cameras, for robust and holistic environment perception. This talk will provide an overview of the currently most promising LiDAR and 3D imaging technologies and will discuss the challenges to be solved in order to achieve highly automated driving in the future.