About me

I live with my fiancée in Martigny (Valais, Switzerland) and work as a research assistant at the Idiap Research Institute. I grew up in the Chablais valaisan, not far from where I work today, and I studied at EPFL, where I obtained a Bachelor's in Microengineering (2014) and then a Master's in Robotics and Autonomous Systems (2016).

In my free time, I practice judo at the Judo Club Saint-Maurice, where I am vice-chairman and technical director. I have been studying this martial art for 23 years and reached the 2nd Dan (black belt) in 2018. I train once or twice a week, teach children, and coach them during tournaments.
I also enjoy playing video games and tabletop role-playing games with my friends.

Rémy Siegfried

Current Research

I am a PhD student at EPFL (EDEE), currently working at Idiap as a research assistant in the Perception and Activity Understanding group under the supervision of Jean-Marc Odobez.

Idiap Research Institute

Visual focus of attention estimation

More and more intelligent systems have to interact with humans. To communicate efficiently, these systems need to perceive and understand us. A key cue in communication is a person's visual focus of attention (VFOA), which is useful to estimate addressees and engagement, among other things. Thus, estimating the VFOA of participants in a discussion, and more generally modeling the conversation dynamics, improves the capacity of intelligent systems such as robots to understand humans and react appropriately in social interactions.

Gaze bias estimation

Gaze is an important non-verbal cue involved in many facets of social interaction, such as communication, attentiveness, and attitudes. Nevertheless, extracting gaze directions visually and remotely usually suffers from large errors, due to low-resolution images, inaccurate eye cropping, or large eye shape variations across the population, among other factors. We proposed a method that exploits the interaction context to compensate for the gaze estimation bias, relying mainly on the speaking status of participants.
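To give a rough idea of the principle, here is a minimal sketch: assuming that a listener tends to look at the current speaker, such frames provide weak labels from which a constant angular bias can be estimated and subtracted. The function names, coordinate conventions, and simple averaging below are illustrative assumptions, not the method published in our papers.

    import numpy as np

    def direction_to_angles(vec):
        # Convert a 3D direction into (yaw, pitch) angles; the axis
        # conventions are an assumption made for this sketch.
        x, y, z = vec / np.linalg.norm(vec)
        return np.array([np.arctan2(x, z), np.arcsin(-y)])

    def estimate_gaze_bias(gaze_angles, eye_positions, speaker_positions, listening_mask):
        # gaze_angles:       (N, 2) estimated gaze (yaw, pitch) per frame
        # eye_positions:     (N, 3) 3D eye position of the participant per frame
        # speaker_positions: (N, 3) 3D head position of the current speaker per frame
        # listening_mask:    (N,) True when the participant listens and is thus
        #                    assumed to look at the speaker (weak label)
        residuals = []
        for t in np.flatnonzero(listening_mask):
            target_angles = direction_to_angles(speaker_positions[t] - eye_positions[t])
            residuals.append(gaze_angles[t] - target_angles)
        # The bias is the mean deviation between the estimated gaze and the
        # direction of the assumed attention target (the speaker).
        return np.mean(residuals, axis=0)

    # Corrected gaze at frame t: gaze_angles[t] - estimated bias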

Eye movements recognition

Beyond the instantaneous estimation of gaze direction, gaze analytics often benefits from the recognition of the actual eye movements, such as fixations, saccades (changes of fixation), and blinks. These not only provide a good way to denoise the gaze signal, improving attention inference, but also give a better characterization of eye activities that is useful for behavior understanding. We are interested in the challenging case where remote RGB-D sensors (low sampling rate, low eye image resolution) are used to record people behaving in natural conditions. We investigated deep learning methods that directly process eye image video streams to recognize eye movements in eye image sequences.
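As an illustration of what such a model can look like, here is a minimal sketch of a per-frame eye-movement classifier over sequences of eye crops, assuming PyTorch; the architecture, input size, and label set are illustrative assumptions, not the published model.

    import torch
    import torch.nn as nn

    class EyeMovementClassifier(nn.Module):
        # Labels each frame of a sequence of grayscale eye crops as
        # fixation / saccade / blink. Illustrative architecture only.
        def __init__(self, num_classes=3, hidden=64):
            super().__init__()
            # Small CNN encoding each low-resolution eye image into a feature vector.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # Recurrent layer capturing the temporal dynamics of eye movements.
            self.rnn = nn.GRU(32, hidden, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, num_classes)

        def forward(self, frames):
            # frames: (batch, time, 1, H, W) sequence of eye crops
            b, t = frames.shape[:2]
            feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
            out, _ = self.rnn(feats)
            return self.head(out)  # (batch, time, num_classes) per-frame logits

    # Example: 4 sequences of 30 frames of 36x60 eye crops
    logits = EyeMovementClassifier()(torch.randn(4, 30, 1, 36, 60))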

Resources

VFOA module - a Python package for basic visual focus of attention estimation of people in a 3D scene (geometrical and statistical models); the sketch below illustrates the geometric idea.
ManiGaze dataset - a dataset collected to evaluate gaze estimation from remote sensors (RGB-D camera).
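The geometric idea behind such models can be sketched as follows: cast the gaze as a ray from the eyes and pick the target whose direction is angularly closest, falling back to gaze aversion when nothing is close enough. The names, threshold, and interface below are assumptions for illustration, not the actual API of the package.

    import numpy as np

    def estimate_vfoa(eye_position, gaze_direction, targets, max_angle_deg=15.0):
        # Pick the attention target whose direction is angularly closest to the
        # gaze ray; return "aversion" if no target lies within max_angle_deg.
        gaze = gaze_direction / np.linalg.norm(gaze_direction)
        best_name, best_angle = "aversion", np.radians(max_angle_deg)
        for name, position in targets.items():
            direction = position - eye_position
            direction = direction / np.linalg.norm(direction)
            angle = np.arccos(np.clip(np.dot(gaze, direction), -1.0, 1.0))
            if angle < best_angle:
                best_name, best_angle = name, angle
        return best_name

    # Example: a person looking roughly toward the robot
    print(estimate_vfoa(
        eye_position=np.array([0.0, 0.0, 0.0]),
        gaze_direction=np.array([0.1, 0.0, 1.0]),
        targets={"robot": np.array([0.2, 0.0, 1.5]),
                 "screen": np.array([-1.0, 0.2, 1.0])},
    ))  # prints "robot"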

Past Activities

Recording of the MuMMER dataset

2018-2020 As part of the MuMMER project, I worked on modeling and inferring attention in human-human and human-robot interactions. Exploiting color and depth images as well as audio data, my goal was to estimate the individual attention of a group of people involved in a discussion that may include a robot. Leveraging this information, intelligent systems such as robots or intelligent personal assistants will be able to better understand us and to react appropriately in social interactions.
15.10.2020 - Interview by Canal 9 about the MuMMER project
17.09.2019 - Paper presenting the latest version of the robot system developed during the MuMMER project (arXiv paper)
29.04.2019 - Demo of the Idiap perception module in the MuMMER project (YouTube video)

2017 During my PhD studies, I participated in the UBImpressed project, funded by the interdisciplinary Sinergia program of the Swiss National Science Foundation (SNSF). Its objective was to study the relationship between non-verbal behaviors and the transmission of favorable impressions, by integrating research on non-verbal communication with mobile computing, perceptual computing, and machine learning. In this context, I worked on improving eye-tracking software to extract the gaze direction of the recorded participants, which was then analyzed by our project partners.

UBImpressed 'Interviews' setup
Thymio Robot Visual Programming Language (VPL)

2016 I did my Master's project at MOBOTS (EPFL) under the supervision of Francesco Mondada, in the field of learning analytics with mobile robots. I worked on methods that use the logs recorded during a robot programming lecture to provide useful information to teachers and students, in order to increase the learning outcome of lectures. I was then hired for six more months to build on these results and develop a tool that gives students on-line hints while they learn robot programming.

2015 I worked for seven months at senseFly (in Cheseaux-sur-Lausanne) on the motor control of their quadrotor and on the development of an interface between a new camera and a fixed-wing drone.

Albris, the senseFly quadrotor
École Polytechnique Fédérale de Lausanne

2014-2015 During my studies, I carried out two semester projects: one on implementing safety behaviours for a quadrotor formation (at DISAL, EPFL) and a second on designing legs for a quadruped robot (at BIOROB, EPFL).

Talk and Media

08.09.2020 (postponed due to COVID-19) - Talk entitled "Thinking without brain: the robotic logic", organised by Sciences Valais in Sion, as part of the international Pint of Science Festival

29.03.2019 - "Portrait de chercheur" (youtube video in french) made by Sciences Valais

29.08.2018 - Idiap's Innovation Days 2018 (video from the "20 Minutes" newspaper), during which I presented a demo of a head and gaze tracker (head position and orientation, gaze direction, attention target).

Publications

Visual Focus of Attention Estimation in 3D Scene with an Arbitrary Number of Targets
R. Siegfried and J.-M. Odobez
IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPR-GAZE2021), Virtual, June 2021

ManiGaze: a Dataset for Evaluating Remote Gaze Estimator in Object Manipulation Situations
R. Siegfried, B. Aminian, and J.-M. Odobez
ACM Symposium on Eye Tracking Research and Applications (ETRA), Stuttgart, June 2020

MuMMER: Socially Intelligent Human-Robot Interaction in Public Spaces
M. E. Foster, O. Lemon, J.-M. Odobez, R. Alami, A. Mazel, M. Niemela, et al.
AAAI Fall Symposium on Artificial Intelligence for Human-Robot Interaction (AI-HRI), Arlington, November 2019

A Deep Learning Approach for Robust Head Pose Independent Eye Movements Recognition from Videos
R. Siegfried, Y. Yu and J.-M. Odobez
ACM Symposium on Eye Tracking Research & Applications (ETRA), Denver, June 2019

Facing Employers and Customers: What Do Gaze and Expressions Tell About Soft Skills?
S. Muralidhar, R. Siegfried, J.-M. Odobez and D. Gatica-Perez
International Conference on Mobile and Ubiquitous Multimedia (MUM), Cairo, November 2018

Towards the Use of Social Interaction Conventions As Prior for Gaze Model Adaptation 
R. Siegfried, Y. Yu and J.-M. Odobez 
ACM International Conference on Multimodal Interaction (ICMI), Glasgow, November 2017

Supervised Gaze Bias Correction for Gaze Coding in Interactions 
R. Siegfried and J.-M. Odobez 
Communication by Gaze Interaction (COGAIN) Symposium, Wuppertal, August 2017

Improved mobile robot programming performance through real-time program assessment 
R. Siegfried, S. Klinger, M. Gross, R. W. Sumner, F. Mondada and S. Magnenat 
ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE), Bologna, July 2017