Crowdsourcing for Multimedia Retrieval: CUbRIK

CUbRIK was a research project that explored ways to increase the precision and relevance of artificial intelligence algorithms. In the project, I investigated the potential of crowdsourcing for multimedia retrieval in machine learning, with a special emphasis on task design and ethical implications. I led the work package on Human Computation, which entailed managerial and organizational responsibilities. The work in this project led to publications in top conferences and journals, such as this crowdsourced Creative Commons dataset and this crowdsourcing procedure for non-obvious attributes.
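To give a flavor of what crowdsourcing for multimedia retrieval involves in practice, the sketch below shows one standard quality-control step: aggregating redundant worker judgments on an item by majority vote. This is a generic illustration, not the CUbRIK pipeline itself, and the item IDs and labels are made up.

```python
from collections import Counter

def aggregate_labels(judgments):
    """Aggregate redundant crowd judgments per item by majority vote.

    judgments: dict mapping item_id -> list of labels from different workers.
    Returns dict mapping item_id -> (winning label, agreement ratio).
    """
    results = {}
    for item_id, labels in judgments.items():
        label, votes = Counter(labels).most_common(1)[0]
        results[item_id] = (label, votes / len(labels))
    return results

# Hypothetical example: three workers label two images with a non-obvious attribute.
judgments = {
    "img_001": ["indoor", "indoor", "outdoor"],
    "img_002": ["outdoor", "outdoor", "outdoor"],
}
print(aggregate_labels(judgments))
# {'img_001': ('indoor', 0.67), 'img_002': ('outdoor', 1.0)}
```

Low-agreement items can then be routed back to the crowd or flagged for expert review, which is where task design choices matter most.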

Multimedia Retrieval for Recommender Systems: Near2me

Near2me is a travel recommender concept that generates recommendations through multimedia retrieval algorithms. It sought to provide “authentic” recommendations by mining users’ photos on Flickr. I co-led the user evaluation of the prototype as part of the Petamedia Network of Excellence. The design was envisioned by Luz Caballero and co-evaluated with Valentina Occhialini.
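As a rough illustration of how photo collections can drive this kind of recommendation, the sketch below ranks candidate places by the similarity of their photo tags to a user’s own photo tags. It is not the Near2me algorithm; the place names, tags, and profile format are illustrative assumptions.

```python
import math
from collections import Counter

def tag_profile(photo_tag_lists):
    """Build a tag-frequency profile for a place (or user) from its photos' tags."""
    profile = Counter()
    for tags in photo_tag_lists:
        profile.update(tags)
    return profile

def cosine_similarity(p, q):
    """Cosine similarity between two tag-frequency profiles."""
    dot = sum(p[t] * q[t] for t in set(p) & set(q))
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Hypothetical example: recommend places whose photos resemble what the user photographs.
user_profile = tag_profile([["canal", "bridge", "sunset"], ["canal", "houseboat"]])
candidates = {
    "Giethoorn": tag_profile([["canal", "boat", "village"]]),
    "Rotterdam": tag_profile([["skyline", "architecture"]]),
}
ranked = sorted(candidates, key=lambda c: cosine_similarity(user_profile, candidates[c]), reverse=True)
print(ranked)  # ['Giethoorn', 'Rotterdam']
```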

Gesture-Based Interaction: Behand

Behand was a novel interaction concept that allowed mobile phone users to manipulate virtual three-dimensional objects inside the phone by gesturing with their hand. Behand provided a straightforward 3D interface and extended the phone’s input and display space, while remaining ergonomically appropriate and technically feasible. Behand was the outcome of a design case project with Luz Caballero and Valentina Occhialini at the Industrial Design department of the Eindhoven University of Technology. Our supervisor was Andrés Lucero, who worked for Nokia at the time. Behand won the Student Design Competition at MobileHCI 2010.

Automatic Assessment of User Experience: UX_Mate

UX_Mate (UX Motion Activation Tracking Engine) is a software tool for the automatic assessment of user experience (UX) by means of facial motion tracking. UX_Mate combines the advantages of EMG with those of video-analysis approaches: it does not require invasive devices and can be used in natural settings, including situations with critical or varying illumination. Moreover, it exploits fine-grained facial motion tracking instead of relying on a fixed emotion classifier, which makes it possible to capture the low-intensity, mixed emotions typically elicited in HCI. A database of annotated, synchronized videos of interactive behavior and facial expressions, together with details of the evaluation, is available in our research paper.
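UX_Mate’s own pipeline is described in the paper; purely as an illustration of the general idea of fine-grained facial motion tracking, the sketch below tracks facial landmarks per frame with MediaPipe Face Mesh and logs their displacement, flagging frames with unusually high motion. The video file name and the peak threshold are arbitrary assumptions.

```python
# Illustrative sketch (not the original UX_Mate implementation): per-frame facial
# landmark motion as a proxy signal that can later be related to interaction events.
import cv2
import mediapipe as mp
import numpy as np

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1)
cap = cv2.VideoCapture("interaction_session.mp4")  # hypothetical recording

prev_landmarks = None
motion_trace = []  # per-frame mean landmark displacement (normalized image coordinates)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        motion_trace.append(0.0)  # no face detected in this frame
        continue
    landmarks = np.array([(lm.x, lm.y) for lm in results.multi_face_landmarks[0].landmark])
    if prev_landmarks is not None:
        # Mean displacement of all landmarks between consecutive frames.
        motion_trace.append(float(np.linalg.norm(landmarks - prev_landmarks, axis=1).mean()))
    prev_landmarks = landmarks

cap.release()
# Frames with motion well above the session average are candidate emotional reactions.
trace = np.array(motion_trace)
peaks = np.where(trace > trace.mean() + 2 * trace.std())[0]
print(f"Candidate reaction frames: {peaks.tolist()}")
```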