MediaEval 2021

The MediaEval Multimedia Evaluation benchmark offers tasks that are related to multimedia retrieval, analysis, and exploration. Participation is open to interested researchers who register. MediaEval focuses specifically on the human and social aspects of multimedia, and on multimedia systems that serve users. MediaEval tasks offer the opportunity for researchers to tackle challenges that bring together multiple modalities (visual, text, music, sensor data).

Registration

Register to participate via the MediaEval 2021 Registration page. Participation is open to anyone who registers.

Once you have registered, you will be asked to return the MediaEval 2021 Usage Agreement (and possibly an additional task-specific agreement, depending on the task you register for). Please follow the directions carefully.

Working notes paper

MediaEval participants write a 2-page working notes paper that is presented at the MediaEval 2021 workshop and published in the MediaEval Workshop Proceedings. Please see the Instructions for the MediaEval 2021 Working Notes Paper.

Workshop registration

The workshop registration website will open later in the autumn.

Important Dates

Task List

Driving Road Safety Forward: Video Data Privacy

This task aims to explore methods for obscuring driver identity in driver-facing video recordings while preserving human behavioral information.

Read more.

Emerging News: Detecting emerging stories from social media and news feeds

The Emerging News task aims to explore novel ways to detect emerging stories from semantic streams of social media messages and news feeds.

Read more.

Emotional Mario: A Games Analytics Challenge

Carry out emotion analysis on videos and biometric data of players to predict key events in the gameplay. Optionally, use these predictions to create a highlights video containing the best moments of the gameplay.

Read more.

Emotions and Themes in Music

We invite participants to try their skills at building a classifier to predict the emotions and themes conveyed in a music recording, using our dataset of music audio, pre-computed audio features, and tag annotations (e.g., happy, sad, melancholic). All data we provide comes from Jamendo, an online platform for music under Creative Commons licenses.

Read more.

FakeNews: Corona Virus and Conspiracies Multimedia Analysis Task

The FakeNews task explores various machine-learning approaches to automatically detect misinformation and its spreaders in social networks.

Read more.

Insight for Wellbeing: Cross-Data Analytics for (transboundary) Haze Prediction

The task is organized as multiple subtasks that encourage multi-disciplinary research considering additional data sources (cross-data), such as environmental factors, satellite remote sensing, and social/news data, to improve prediction and/or find insights for wellbeing. The problems this task addresses are air pollution and transboundary haze.

Read more.

Medico: Transparency in Medical Image Segmentation

The Medico task explores the use of transparent approaches to automatically segment images collected from the human colon.

Read more.

NewsImages

Images play an important role in online news articles and news consumption patterns. This task aims to gain additional insight into this role. Participants are supplied with a large set of articles (including text body and headlines) and the accompanying images. The task requires participants to predict which image was used to accompany each article, and to predict frequently clicked articles on the basis of their accompanying images.

Read more.

Predicting Media Memorability

The task requires participants to automatically predict memorability scores for videos, which reflect the probability that a video will be remembered. Participants will be provided with an extensive dataset of videos with memorability annotations, related information, and pre-extracted state-of-the-art visual features.

Read more.

Sports Video: Fine-Grained Action Detection and Classification of Table Tennis Strokes from Videos

Participants are provided with a set of videos of table tennis games and are required to analyze them (i.e., carry out classification and detection of strokes). The ultimate goal of this research is to produce automatic annotation tools for sports faculties, local clubs, and associations, helping coaches better assess and advise athletes during training.

Read more.

Visual Sentiment Analysis: A Natural Disaster Use-case

The Visual Sentiment Analysis task aims to find methods that can predict the emotional response evoked by disaster-related images.

Read more.

WaterMM: Water Quality in Social Multimedia

The quality of drinking water can have a direct effect on people's health. In this task, participants are asked to automatically determine which social media posts (i.e., tweets) are relevant to water quality, safety, and security, using their text, images, and metadata. The dataset is bilingual (English and Italian tweets), and the ground-truth labels have been provided by experts in the water domain.

Read more.