Overview

EgoApp aims to bring together diverse communities related to egocentric vision, such as computer science, robotics, and social science, to discuss the current and next generation of related technologies. To this end, both the research and industry communities are invited to submit their recent work in the form of a research or position paper.

The current edition of the workshop, in addition to covering the conventional topics around applications of egocentric vision, aims to address visual perception of the robot's environment, with special emphasis on methodologies and approaches for analysing images and videos acquired from the point of view of a robot. In contrast to conventional computer vision settings, robots sense the environment while interacting with it. They can acquire large amounts of information, which is, however, challenging to process. Images from robots typically have much lower resolution and may be affected by motion blur. Objects seen by a robot may undergo significant transformations (e.g., in scale and point of view) and may be partially occluded by the robot's hands during manipulation. Overall, this poses important challenges for learning and recognition.

Submissions are expected to deal with egocentric vision and robotic visual perception, including, but not limited to:

  • Wearable technologies for egocentric vision acquisition, computation, and perception
  • Assistive technologies for personalized monitoring
  • Object detection and recognition from egocentric vision
  • Augmented and Mixed Reality applications
  • Person identification in egocentric videos
  • Activity recognition in egocentric vision
  • Visual lifelogging and summarization of egocentric videos
  • Scene understanding from egocentric vision
  • Human-computer and human-robot interaction using egocentric vision
  • Social signal analysis and behaviour modelling in egocentric vision
  • Affective computing in egocentric vision
  • Understanding of group dynamics in egocentric vision
  • Attention, fixation, and saliency modelling and prediction
  • Perception in virtual and augmented reality
  • Ethics and social issues in egocentric vision
  • Active learning
  • Domain adaptation from allocentric to egocentric vision
  • Image segmentation and object pose tracking

Information regarding the previous EgoApp workshop can be found HERE.