About me

I live with my girlfriend in Martigny (Valais, Switzerland) and work as a research assistant at the Idiap Research Institute. I grew up in the Chablais valaisan, not far from where I work today, and I studied at EPFL, where I obtained a Bachelor's degree in Microengineering (2014) and then a Master's degree in Robotics and Autonomous Systems (2016).

Rémy Siegfried

In my free time, I practice judo at the Judo Club Saint-Maurice, of which I am vice-chairman. I have been studying this martial art for 22 years and reached the 2nd Dan (black belt) this year. I train once or twice a week, teach children, and coach them at tournaments.
I also like to play video games and tabletop role-playing games with my friends.

Current Research

I am a PhD student at EPFL (EDEE), currently working at Idiap as a research assistant in the Perception and Activity Understanding group under the supervision of Jean-Marc Odobez.

Idiap Research Institute
Perception and Activity Understanding

In the frame of the MuMMER project (which funds my work), I am working on modeling and inferring attention in human-robot interactions. Exploiting color and depth images as well as audio data, my goal is to estimate the individual attention of each member of a group of people interacting with the robot. This information will allow the robot to better understand the conversation dynamics and to react appropriately in social interactions.

Visual focus of attention estimation

More and more intelligent systems, like robots, have to interact with humans. In order to participate in a conversation and behave naturally, those systems need to understand the conversation dynamics. A key factor of these dynamics is the attention of people, which is useful to estimate addressees, turn changes, the engagement of participants, and so on. However, attention is a complex behavioral and cognitive process, which is difficult for a computer to observe.

In order to tackle this challenge, the visual focus of attention (VFOA), i.e. where the person is looking, can be used as a proxy for attention. Thus, estimating the VFOA of participants in a discussion, and more generally modeling the conversation dynamics, will improve the capacity of intelligent systems like robots to understand humans and to react properly in a social interaction. Implementing those models on a real robot adds constraints to the already challenging task of attention estimation, such as access to only partial scene information, a highly dynamic environment, and the need for real-time processing.
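As a minimal sketch of the idea (not the actual system: the geometry, the angular threshold, and the target set below are illustrative assumptions), VFOA estimation can be framed as assigning the estimated gaze direction to the closest candidate target in the scene, with a fallback label when no target is close enough:

import numpy as np

def estimate_vfoa(eye_pos, gaze_dir, targets, max_angle_deg=15.0):
    """Assign the visual focus of attention to the candidate target
    whose direction is angularly closest to the estimated gaze.

    eye_pos:  (3,) 3D position of the person's eyes
    gaze_dir: (3,) unit vector of the estimated gaze direction
    targets:  dict mapping target name -> (3,) 3D position
    Returns the attended target's name, or "aversion" if no target
    lies within max_angle_deg of the gaze direction.
    """
    best_name, best_angle = "aversion", max_angle_deg
    for name, pos in targets.items():
        to_target = pos - eye_pos
        to_target = to_target / np.linalg.norm(to_target)
        cos_angle = np.clip(np.dot(gaze_dir, to_target), -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_angle))
        if angle < best_angle:
            best_name, best_angle = name, angle
    return best_name

# Hypothetical scene: the robot and another participant as targets
targets = {
    "robot":    np.array([0.0, 0.0, 1.0]),
    "person_B": np.array([1.0, 0.0, 1.2]),
}
eye_pos = np.array([0.0, 1.5, 0.0])
gaze_dir = np.array([0.0, -0.8, 0.6])  # already unit length
print(estimate_vfoa(eye_pos, gaze_dir, targets))  # -> "robot"

In practice the gaze direction itself is uncertain, so probabilistic assignments over targets are common, but the nearest-target formulation above captures the core geometry.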

Gaze bias estimation

Gaze is an important non-verbal cue involved in many facets of social interactions, such as communication, attentiveness, or attitudes. Nevertheless, extracting gaze directions visually and remotely usually suffers from large errors because of low-resolution images, inaccurate eye cropping, or large eye shape variations across the population, among others. We proposed a method that exploits the interaction context to compensate for the gaze estimation bias, relying mainly on the speaking status of participants.
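The publications below describe the actual method; purely as a sketch of the weak-supervision idea (the data, angle representation, and robust estimator here are my assumptions, not the published model), a constant bias can be estimated by comparing gaze estimates against the direction of the active speaker on frames where the subject is presumed to be looking at them:

import numpy as np

def estimate_gaze_bias(est_gaze, speaker_dirs):
    """Estimate a constant additive bias on gaze estimates, using
    frames where another participant is speaking as weak labels:
    we assume the subject is then looking at the speaker.

    est_gaze:     (N, 2) estimated gaze angles (yaw, pitch), degrees
    speaker_dirs: (N, 2) yaw/pitch of the direction to the speaker
    Returns the (2,) bias to subtract from future gaze estimates.
    """
    residuals = est_gaze - speaker_dirs
    # The median is robust to frames where the "looking at the
    # speaker" assumption does not hold (e.g. gaze aversion).
    return np.median(residuals, axis=0)

# Hypothetical data: gaze estimates corrupted by a (+6, -3) degree bias
rng = np.random.default_rng(0)
true_dirs = rng.uniform(-20, 20, size=(200, 2))
est_gaze = true_dirs + np.array([6.0, -3.0]) + rng.normal(0, 2, size=(200, 2))

bias = estimate_gaze_bias(est_gaze, true_dirs)
corrected = est_gaze - bias
print(bias)  # roughly [ 6., -3.]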

Short presentation video (YouTube)

Past Activities

Thymio Robot Visual Programming Language (VPL)

2016 I did my Master's project at MOBOTS (EPFL) under the supervision of Francesco Mondada, in the field of learning analytics with mobile robots. I developed methods that use the logs recorded during a robot programming lecture to provide useful information to teachers and students, in order to increase the learning outcome of lectures. I was then hired for 6 more months to build on these results and develop a tool that provides online hints to students learning robot programming.

2015 I worked for 7 months at senseFly (in Cheseaux-sur-Lausanne) on the motor control of their quadrotor and on the development of an interface between a new camera and a fixed-wing drone.

Albris, the senseFly quadrotor
Ecole Polytechnique Fédérale de Lausanne

2014-2015 During my studies, I performed two semester projects: one on the implementation of safety behaviors for quadrotor formations (at DISAL (EPFL)), and a second on the design of legs for a quadruped robot (at BIOROB (EPFL)).

Media

29.08.2018 - Idiap's Innovation Days 2018 (video from the "20 Minutes" newspaper), during which I presented a demo of a head and gaze tracker (head position and orientation, gaze direction, attention target).

Publications

Facing Employers and Customers: What Do Gaze and Expressions Tell About Soft Skills?
S. Muralidhar, R. Siegfried, J.-M. Odobez and D. Gatica-Perez
International Conference on Mobile and Ubiquitous Multimedia (MUM), Cairo, November 2018

Towards the Use of Social Interaction Conventions As Prior for Gaze Model Adaptation 
R. Siegfried, Y. Yu and J.-M. Odobez 
ACM International Conference on Multimodal Interaction (ICMI), Glasgow, November 2017

Supervised Gaze Bias Correction for Gaze Coding in Interactions 
R. Siegfried and J.-M. Odobez 
Communication by Gaze Interaction (COGAIN) Symposium, Wuppertal, August 2017

Improved mobile robot programming performance through real-time program assessment 
R. Siegfried, S. Klinger, M. Gross, R. W. Sumner, F. Mondada and S. Magnenat 
ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE), Bologna, July 2017