About me

I live with my girlfriend in Martigny (Valais, Switzerland) and work as a research assistant at the Idiap Research Institute. I grew up in the Chablais valaisan, not far from where I work today, and I studied at EPFL, where I obtained a Bachelor's degree in Microengineering (2014) and then a Master's degree in Robotics and Autonomous Systems (2016).

In my free time, I practice judo at the Judo Club Saint-Maurice, where I am vice-chairman and technical director. I have studied this martial art for 23 years and reached the 2nd Dan (black belt) in 2018. I train once or twice a week, teach children, and assist them during tournaments.
I also like to play video games and tabletop role-playing games with my friends.

Rémy Siegfried

Current Research

I am a PhD student at EPFL (EDEE) and currently work at Idiap as a research assistant in the Perception and Activity Understanding group, under the supervision of Jean-Marc Odobez.

Within the MuMMER project (which funds my work), I am working on modeling and inferring attention in human-robot interactions. Exploiting color and depth images as well as audio data, my goal is to estimate the individual attention of each member of a group of people interacting with the robot. This information will allow the robot to better understand conversation dynamics and to react appropriately in social interactions.

Idiap Research Institute

Visual focus of attention estimation

More and more intelligent systems, such as robots, have to interact with humans. In order to participate in a conversation and behave naturally, these systems need to understand the conversation dynamics. A key factor of these dynamics is people's attention, which helps to estimate addressees, turn changes, participant engagement, and so on. However, attention is a complex behavior and cognitive process that remains difficult for a computer to observe.

To tackle this challenge, the visual focus of attention (VFOA), i.e. where a person is looking, can be used as a proxy for attention. Estimating the VFOA of participants in a discussion, and more generally modeling the conversation dynamics, will improve the capacity of intelligent systems such as robots to understand humans and react appropriately in social interactions. Implementing these models on a real robot adds constraints to the already challenging task of attention estimation, such as access to only partial scene information, a highly dynamic environment, and the need for real-time processing.
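As a rough illustration of what VFOA estimation boils down to, the sketch below (my own simplified Python example, not the actual Idiap pipeline; the geometry, threshold, and target list are assumptions) assigns a person's focus to the candidate target whose direction is angularly closest to the estimated gaze, or to "unfocused" when no target is close enough.

import numpy as np

# Toy VFOA assignment (simplified illustration, not the actual system):
# pick the candidate target whose direction is angularly closest to the
# estimated gaze, or "unfocused" if nothing is within the tolerance.
def estimate_vfoa(eye_pos, gaze_dir, targets, max_angle_deg=15.0):
    """eye_pos: (3,) eye position; gaze_dir: (3,) gaze vector;
    targets: dict mapping target name to its (3,) position."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best_name, best_angle = "unfocused", max_angle_deg
    for name, pos in targets.items():
        to_target = np.asarray(pos, dtype=float) - eye_pos
        to_target /= np.linalg.norm(to_target)
        angle = np.degrees(np.arccos(np.clip(gaze_dir @ to_target, -1.0, 1.0)))
        if angle < best_angle:
            best_name, best_angle = name, angle
    return best_name

# A person at the origin, looking almost straight at the robot:
targets = {"robot": (0.0, 0.0, 1.0), "screen": (1.0, 0.0, 1.0)}
print(estimate_vfoa(np.zeros(3), np.array([0.05, 0.0, 1.0]), targets))  # robot

In practice the gaze signal is noisy and head pose often has to stand in for the true gaze direction, so probabilistic models are preferred over such a hard nearest-target rule.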

Gaze bias estimation

Gaze is an important non-verbal cue involved in many facets of social interaction, such as communication, attentiveness, and attitudes. Nevertheless, extracting gaze directions visually and remotely usually suffers from large errors, due to low-resolution images, inaccurate eye cropping, or large eye shape variations across the population, among others. We proposed a method that exploits the interaction context, relying mainly on the speaking status of participants, to compensate for the gaze estimation bias.
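The core idea can be sketched as follows (a simplified illustration under my own assumptions about array shapes, not the published method): frames where the other participant speaks provide weak ground truth, since listeners tend to look at the speaker, so the average offset between the measured gaze and the speaker direction estimates a systematic bias that can then be subtracted.

import numpy as np

# Simplified sketch: when the other participant speaks, assume the observed
# person is likely looking at the speaker, treat the speaker direction as
# weak ground truth, and estimate a constant additive bias.
def estimate_bias(measured_gaze, speaker_dirs, speaking_mask):
    """measured_gaze: (T, 2) yaw/pitch estimates in degrees;
    speaker_dirs: (T, 2) directions toward the current speaker;
    speaking_mask: (T,) bool, True when the other participant speaks."""
    residuals = measured_gaze[speaking_mask] - speaker_dirs[speaking_mask]
    return residuals.mean(axis=0)  # systematic offset of the gaze estimator

def correct_gaze(measured_gaze, bias):
    return measured_gaze - bias  # bias-compensated gaze estimates

Of course, the "listener looks at the speaker" assumption only holds part of the time, so such weak labels have to be filtered or weighted rather than trusted frame by frame.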

Eye movements recognition

Beyond the sheer instantaneous estimation of gaze direction, gaze analytics can often benefit from the recognition of the actual eye movements, such as fixations, saccades (changes of fixation), and blinks. These not only provide a good way to denoise the gaze signal, improving attention inference, but also a better characterization of eye activity that is useful for behavior understanding. We are interested in the challenging case where remote RGB-D sensors (low sampling rate, low eye image resolution) are used to record people behaving in natural conditions. We investigated deep learning methods that directly process the eye image video streams to recognize eye movements.
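To make the setup concrete, here is a toy per-frame classifier in PyTorch (a minimal sketch under my own assumptions about input size and labels, not the architecture of our ETRA 2019 paper): a small CNN encodes each eye crop, a GRU adds temporal context, and a linear head labels every frame as fixation, saccade, or blink.

import torch
import torch.nn as nn

# Toy per-frame eye-movement classifier (illustrative sketch only):
# CNN encoder per eye image, GRU over time, linear head per frame.
class EyeMovementNet(nn.Module):
    def __init__(self, n_classes=3, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (N, 32)
        )
        self.gru = nn.GRU(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames):           # frames: (B, T, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.gru(feats)
        return self.head(out)            # per-frame logits: (B, T, n_classes)

# Example: a batch of 2 sequences of 30 grayscale 36x60 eye crops.
logits = EyeMovementNet()(torch.randn(2, 30, 1, 36, 60))
print(logits.shape)  # torch.Size([2, 30, 3])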

Past Activities

Thymio Robot Visual Programming Language (VPL)

2016 I did my master project at MOBOTS (EPFL) under the supervision of Francesco Mondada, in the field of learning analytics with mobile robots. I worked on methods that use the logs collected during a robot programming lecture to provide useful information to teachers and students, in order to increase the learning outcome of lectures. I was then hired for 6 more months to build on these results and develop a tool that provides online hints to students learning robot programming.

2015 I worked for 7 months at senseFly (in Cheseaux-sur-Lausanne) on the motor control of their quadrotor and on the development of an interface between a new camera and a fixed-wing drone.

Albris, senseFly's quadrotor
École Polytechnique Fédérale de Lausanne

2014-2015 During my studies, I carried out two semester projects: one on the implementation of safety behaviors for quadrotor formations (at DISAL, EPFL) and a second on the design of legs for a quadruped robot (at BIOROB, EPFL).

Media

29.03.2019 - "Portrait de chercheur" (YouTube video in French) made by Sciences Valais

29.08.2018 - Idiap's Innovation Days 2018 (video from the "20 Minutes" newspaper), during which I presented a demo of a head and gaze tracker (head position and orientation, gaze direction, attention target).

Publications

MuMMER: Socially Intelligent Human-Robot Interaction in Public Spaces
M. E. Foster, O. Lemon, J.-M. Odobez, R. Alami, A. Mazel, M. Niemela, et al.
AAAI Fall Symposium on Artificial Intelligence for Human-Robot Interaction (AI-HRI), Arlington, November 2019

A Deep Learning Approach for Robust Head Pose Independent Eye Movements Recognition from Videos
R. Siegfried, Y. Yu and J.-M. Odobez
ACM Symposium on Eye Tracking Research & Applications (ETRA), Denver, June 2019

Facing Employers and Customers: What Do Gaze and Expressions Tell About Soft Skills?
S. Muralidhar, R. Siegfried, J.-M. Odobez and D. Gatica-Perez
International Conference on Mobile and Ubiquitous Multimedia (MUM), Cairo, November 2018

Towards the Use of Social Interaction Conventions As Prior for Gaze Model Adaptation 
R. Siegfried, Y. Yu and J.-M. Odobez 
ACM International Conference on Multimodal Interaction (ICMI), Glasgow, November 2017

Supervised Gaze Bias Correction for Gaze Coding in Interactions 
R. Siegfried and J.-M. Odobez 
Communication by Gaze Interaction (COGAIN) Symposium, Wuppertal, August 2017

Improved mobile robot programming performance through real-time program assessment 
R. Siegfried, S. Klinger, M. Gross, R. W. Sumner, F. Mondada and S. Magnenat 
ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE), Bologna, July 2017