MediaEval 2021

The MediaEval Multimedia Evaluation benchmark offers tasks that are related to multimedia retrieval, analysis, and exploration. Participation is open to interested researchers who register. MediaEval focuses specifically on the human and social aspects of multimedia, and on multimedia systems that serve users. MediaEval tasks offer the opportunity for researchers to tackle challenges that bring together multiple modalities (visual, text, music, sensor data).

Workshop

[Workshop group photo]

Important Dates (Updated)

The MediaEval Organization

MediaEval is made possible by the efforts of a large number of task organizers, each responsible for organizing their own task. Please see the individual task pages for their names. The overall organization is carried out by the MediaEval Coordination Committee and guided by the Community Council.

The MediaEval Coordination Committee (2021)
Special Thanks to
The MediaEval Community Council (2021)

MediaEval is grateful for the support of the ACM Special Interest Group on Multimedia.

For more information, contact m.larson (at) cs.ru.nl. You can also follow us on Twitter: @multimediaeval.

Task List

Driving Road Safety Forward: Video Data Privacy

This task aims to explore methods for obscuring driver identity in driver-facing video recordings while preserving human behavioral information.

Read more.

Emerging News: Detecting emerging stories from social media and news feeds

The Emerging News task aims to explore novel ways to detect emerging stories from semantic streams of social media messages and news feeds.

Read more.

Emotional Mario: A Games Analytics Challenge

Carry out emotion analysis on videos and biometric data of players to predict key events in the gameplay. Optionally, use these predictions to create a highlights video containing the best moments of gameplay.

Read more.

Emotions and Themes in Music

We invite participants to try their skills at building a classifier to predict the emotions and themes conveyed in a music recording, using our dataset of music audio, pre-computed audio features, and tag annotations (e.g., happy, sad, melancholic). All the data we provide comes from Jamendo, an online platform for music under Creative Commons licenses.

Read more.
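As a rough illustration of the kind of baseline this task invites, the sketch below trains one binary classifier per tag on pre-computed audio features. The file names, array shapes, and evaluation choice are assumptions made for the example, not the task's actual data format; see the task page for the official data and metrics.

# Minimal multi-label tagging baseline sketch (assumed data layout, not the
# official MediaEval format): X holds pre-computed audio features, one row per
# track; Y holds binary indicators for mood/theme tags such as "happy" or "sad".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

X = np.load("audio_features.npy")   # shape (n_tracks, n_features) -- assumed file
Y = np.load("tag_labels.npy")       # shape (n_tracks, n_tags)     -- assumed file

X_train, X_val, Y_train, Y_val = train_test_split(X, Y, test_size=0.2, random_state=0)

# One logistic-regression classifier per tag.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_train, Y_train)

# Macro-averaged PR-AUC (average precision) is a common measure for music tagging.
scores = clf.predict_proba(X_val)
print("macro PR-AUC:", average_precision_score(Y_val, scores, average="macro"))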

FakeNews: Corona Virus and Conspiracies Multimedia Analysis Task

The FakeNews task explores various machine-learning approaches to automatically detect misinformation and its spreaders in social networks.

Read more.

Insight for Wellbeing: Cross-Data Analytics for (transboundary) Haze Prediction

The task is organized as multiple subtasks that encourage multi-disciplinary research considering additional data sources (cross-data), such as environmental factors, satellite remote sensing, and social/news data, to improve prediction and/or find insights for wellbeing. The problems this task tackles are air pollution and transboundary haze.

Read more.

Medico: Transparency in Medical Image Segmentation

The Medico task explores the use of transparent approaches to automatically segment images collected from the human colon.

Read more.

NewsImages

Images play an important role in online news articles and news consumption patterns, and this task aims to gain additional insight into that role. Participants are supplied with a large set of articles (including text bodies and headlines) and the accompanying images. The task requires participants to predict which image was used to accompany each article, and also to predict which articles are frequently clicked on the basis of their accompanying images.

Read more.

Predicting Media Memorability

The task requires participants to automatically predict memorability scores for videos, which reflect the probability that a video will be remembered. Participants will be provided with an extensive dataset of videos with memorability annotations, related information, and pre-extracted state-of-the-art visual features.

Read more.
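As a loose sketch of what a starting point could look like, the example below fits a simple regressor on pre-extracted visual features and reports Spearman rank correlation, a measure commonly used for memorability prediction. The file names and shapes are assumptions for illustration only.

# Simple memorability-regression sketch (assumed data layout, not the official
# MediaEval format): each video is represented by a pre-extracted feature vector
# and annotated with a numeric memorability score.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X = np.load("video_features.npy")       # shape (n_videos, n_features) -- assumed file
y = np.load("memorability_scores.npy")  # shape (n_videos,)            -- assumed file

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

reg = RandomForestRegressor(n_estimators=200, random_state=0)
reg.fit(X_train, y_train)

# Spearman rank correlation compares the predicted and annotated rankings.
rho, _ = spearmanr(y_val, reg.predict(X_val))
print("Spearman correlation:", rho)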

Sports Video: Fine Grained Action Detection and Classification of Table Tennis Strokes from videos

Participants are provided with a set of videos of table tennis games and are required to analyze them (i.e., carry out classification and detection of strokes). The ultimate goal of this research is to produce automatic annotation tools for sports faculties, local clubs, and associations, helping coaches better assess and advise athletes during training.

Read more.

Visual Sentiment Analysis: A Natural Disaster Use-case

The Visual Sentiment Analysis task aims to find methods that can predict the emotional response evoked by disaster-related images.

Read more.

WaterMM: Water Quality in Social Multimedia

The quality of drinking water can have a direct effect on people's health. In this task, participants are asked to automatically determine which social media posts (i.e., tweets) are relevant to water quality, safety, and security, using their text, images, and metadata. The dataset is bilingual (English and Italian tweets), and the ground-truth labels have been provided by experts in the water domain.

Read more.