This book contains a selection of refereed papers presented at the 1st Workshop on Machine Learning for Multimodal Interaction (MLMI 2004), held at the Centre du Parc, Martigny, Switzerland, during June 21-23, 2004. The workshop was organized and sponsored jointly by three European projects, AMI (Augmented Multiparty Interaction), PASCAL (Pattern Analysis, Statistical Modeling and Computational Learning), and M4 (Multi-modal Meeting Manager), as well as by the Swiss National Centre of Competence in Research (NCCR) IM2 (Interactive Multimodal Information Management). MLMI 2004 was thus sponsored by the European Commission and the Swiss National Science Foundation.

Given the multiple links between the above projects and several related research areas, it was decided to organize a joint workshop bringing together researchers from the different communities working around the common theme of advanced machine learning algorithms for processing and structuring multimodal human interaction in meetings. The motivation for creating such a forum, which brings together papers from different research disciplines, evolved from a real need that arose from these projects and from the strong motivation of their partners for such a multidisciplinary workshop. This assessment was indeed confirmed by the success of this first MLMI workshop, which attracted more than 200 participants.
Contents

MLMI 2004
- Accessing Multimodal Meeting Data: Systems, Problems and Possibilities
- Browsing Recorded Meetings with Ferret
- Meeting Modelling in the Context of Multimodal Research
- Artificial Companions
- Zakim: A Multimodal Software System for Large-Scale Teleconferencing
- Towards Computer Understanding of Human Interactions
- Multistream Dynamic Bayesian Network for Meeting Segmentation
- Using Static Documents as Structured and Thematic Interfaces to Multimedia Meeting Archives
- An Integrated Framework for the Management of Video Collection
- The NITE XML Toolkit Meets the ICSI Meeting Corpus: Import, Annotation, and Browsing
- S-SEER: Selective Perception in a Multimodal Office Activity Recognition System
- Mapping from Speech to Images Using Continuous State Space Models
- An Online Algorithm for Hierarchical Phoneme Classification
- Towards Predicting Optimal Fusion Candidates: A Case Study on Biometric Authentication Tasks
- Mixture of SVMs for Face Class Modeling
- AV16.3: An Audio-Visual Corpus for Speaker Localization and Tracking
- The 2004 ICSI-SRI-UW Meeting Recognition System
- On the Adequacy of Baseform Pronunciations and Pronunciation Variants
- Tandem Connectionist Feature Extraction for Conversational Speech Recognition
- Long-Term Temporal Features for Conversational Speech Recognition
- Speaker Indexing in Audio Archives Using Gaussian Mixture Scoring Simulation
- Speech Transcription and Spoken Document Retrieval in Finnish
- A Mixed-Lingual Phonological Component Which Drives the Statistical Prosody Control of a Polyglot TTS Synthesis System
- Shallow Dialogue Processing Using Machine Learning Algorithms (or Not)
- ARCHIVUS: A System for Accessing the Content of Recorded Multimodal Meetings
- Piecing Together the Emotion Jigsaw
- Emotion Analysis in Man-Machine Interaction Systems
- A Hierarchical System for Recognition, Tracking and Pose Estimation
- Automatic Pedestrian Tracking Using Discrete Choice Models and Image Correlation Techniques
- A Shape Based, Viewpoint Invariant Local Descriptor
Machine Learning for Multimodal Interaction
First International Workshop, MLMI 2004, Martigny, Switzerland, June 21-23, 2004, Revised Selected Papers