1. Multimodal open-domain conversations with robotic platforms
2. Audio-motor integration for robot audition
3. Audio source separation into the wild
4. Designing audio-visual tools to support multisensory disabilities
5. Audio-visual learning for body-worn cameras
6. Activity recognition from visual lifelogs: State of the art and future challenges
7. Lifelog retrieval for memory stimulation of people with memory impairment
8. Integrating signals for reasoning about visitors' behavior in cultural heritage
9. Wearable systems for improving tourist experience
10. Recognizing social relationships from an egocentric vision perspective
11. Complex conversational scene analysis using wearable sensors
12. Detecting conversational groups in images using clustering games
13. We are less free than how we think: Regular patterns in nonverbal communication
14. Crowd behavior analysis from fixed and moving cameras
15. Towards multi-modality invariance: A study in visual representation
16. Sentiment concept embedding for visual affect recognition
17. Video-based emotion recognition in the wild
18. Real-world automatic continuous affect recognition from audiovisual signals
19. Affective facial computing: Generalizability across domains
20. Automatic recognition of self-reported and perceived emotions