MediaEval Philosophy
MediaEval is a benchmark: it provides standardized task descriptions, data sets, and evaluation procedures. Benchmarks make it possible to systematically compare the performance of different approaches to problems of retrieving or analyzing multimedia content. MediaEval promotes the benefits that arise from methodical comparison: the identification of state-of-the-art algorithms and the promotion of research progress.
MediaEval pursues its objective of offering a benchmark to the multimedia research community within a broader philosophy that emphasizes understanding. Specifically, we seek to transcend conventional benchmarks by striving to gain qualitative insight about our data, our evaluation metrics, our ground truth or evaluation procedure, and the fundamental nature of the challenges that our tasks represent.
Targeting such insight helps us avoid devoting a disproportionate amount of attention to tasks for which performance is easy or obvious to measure using conventional quantitative metrics. It also encourages the researchers who participate in the benchmark to take leaps of innovation that may not pay off in the short run but are promising in the long run. We try to identify and appreciate the value of innovative ideas before they are developed enough to improve the metrics, and thereby to prevent promising but still immature approaches from being abandoned prematurely.
MediaEval expresses the importance that it places on understanding with MediaEval Distinctive Mentions (MDMs). Each year, the organizers of the tasks single out participating teams that have gone beyond solving the task to make a further contribution. Recipients of MDMs are teams that stand out because their approach to the MediaEval task was highly innovative and/or delivered important insight with potential for the future. Teams that receive MDMs are not necessarily those that scored at the top of the task ranking/leaderboard.
From 2013 to 2019, MDMs were informal: they were announced at the workshop but not otherwise publicized. Starting in 2020, we publish the MDMs online (see the list below). (Note: The task rankings can be found in the workshop overview presentations of the individual tasks.)
In 2022, MediaEval introduced the Quest for Insight, which encourages explicit discussion of how participants can explore tasks in a way that goes beyond exclusive focus on the evaluation metric. We hope that the Quest for Insight will lead to many inspiring MDMs moving forward.
MediaEval Distinctive Mentions (MDMs)
These are the distinctive mentions awarded at MediaEval.
2021
The papers, videos, and presentations have not yet been published. We will update the details as they become available.
To Omar Meriwani (Real Sciences)
- For: His creativity in taking a step forward with the task data and exploring other approaches.
- From: The organizers of the Emerging News task
To Hao Hao Tan (team Mirable)
- For: Showing the value of additional input representations describing tonality
- From: The organizers of the Emotions and Themes in Music task
To Cláudio, Rui, and David (Team NewsSeek-NOVA)
- For: Teaching a Transformer to Recognize Faces
- From: The Organizers of the News Images Task
To SELAB-HCMUS
- For: Their considerable effort and diverse out-of-the-box approaches, which are a testament to how they achieved the best results
- From: Organizers of the task: Visual Sentiment Analysis: A Natural Disaster Use-case
To Youri Peskine, Giulio Alfarano, Ismail Harrando, Paolo Papotti and Raphaël Troncy (team D2KLab)
- For: The best task results, achieved through the effective ensembling of a swarm of transformer-based models
- From: Organizers of the task: FakeNews: Corona Virus and Conspiracies Multimedia Analysis Task
To Thomas Girault, Cheikh Brahim El Vaigh, Cyrielle Mallart and Duc Hau Nguyen (team Deltamap)
- For: The successful use of the prompt-based learning NLP paradigm on top of a toxicity-trained model
- From: Organizers of the task: FakeNews: Corona Virus and Conspiracies Multimedia Analysis Task
To Zeynep Pehlivan (team FakeINA)
- For: A fresh look at the task and successfully solving a pure NLP task in the GNN domain
- From: Organizers of the task: FakeNews: Corona Virus and Conspiracies Multimedia Analysis Task
To Muhammad Asif Ayub, Khubaib Ahmad, Kashif Ahmad, Nasir Ahmad, and Ala Al-Fuqaha (CSEI team)
- For: Their idea of translating Italian tweets with a positive label to English and back to Italian to augment and balance the dataset
- From: Organizers of the task WaterMM: Water Quality in Social Multimedia
To Yijun Qian, Lijun Yu, Wenhe Liu and Alexander Hauptmann
- For: Their ablation method, high-quality paper, and top rank in the task
- From: Organizers of the Sports Video task
To Alison Reboud, Ismail Harrando, Jorma Laaksonen and Raphaël Troncy
- For: Exploring interesting concepts such as perplexity and achieving top scores in a number of subtasks.
- From: Organizers of the task: Predicting Media Memorability
To Ali Akbar, Muhammad Atif Tahir, Muhammad Rafi
- For: Their detailed preprocessing and validation steps, the risk they took in attempting all three subtasks, and their top ranking in the task.
- From: Organizers of the task: Insight for Wellbeing: Cross-Data Analytics for (transboundary) Haze Prediction
To Andrea Storås (SimulaMet)
- For: Her unique submission on generating segmentation masks in an unsupervised manner.
- From: Organizers of the task: Medico: Transparency in Medical Image Segmentation
To Felicia Ly Jacobsen (University of Oslo)
- For: Making her submission more transparent by measuring uncertainty in the predicted segmentation masks.
- From: Organizers of the task: Medico: Transparency in Medical Image Segmentation
2020
To SAIL-MiM-USC
- For: Using different loss functions to overcome the dataset imbalance and showing the importance of optimization.
- From: Emotions and Themes in Music Task organizers
- Paper: MediaEval 2020 Emotion and Theme Recognition in Music Task: Loss Function Approaches for Multi-label Music Tagging
- Presentation: SlideShare link
- Video: YouTube link
To Linkmedia
- For: Their overall performance in both text and graph classification by using artificial examples and user reputation.
- From: FakeNews Task organizers
- Paper: Detecting Fake News in Tweets from Text and Propagation Graph: IRISA’s Participation to the FakeNews Task at MediaEval 2020
- Video: YouTube link
To MG-UCB
- For: Their creative graph sampling approach exploring retweet cascade properties in small subgraphs.
- From: FakeNews Task organizers
- Paper: Detecting Conspiracy Theories from Tweets: Textual and Structural Approaches
To DL-TXST
- For: Their implementation of auxiliary optical character recognition for text classification and the creation of a website with an overview of task-related information and resources.
- From: FakeNews Task organizers
- Paper: Enriching Content Analysis of Tweets Using Community Discovery Graph Analysis
- Video: YouTube link
To FIG
- For: Using only textual information and tackling class imbalance with random undersampling instead of oversampling.
- From: Flood-related Multimedia organizers
- Paper: A Tweet Text Binary Artificial Neural Network Classifier
To FAST-NU-DS
- For: Using fused textual and visual information and ensembling different classifiers.
- From: Flood-related Multimedia organizers
- Paper: An Ensemble Based Method for the Classification of Flooding Event Using Social Media Data
- Presentation: SlideShare link
- Video: YouTube link
To AISIA
- For: Their overall scores and the best utilization of lifelog data characteristics.
- From: Insight for Wellbeing Task organizers
- Paper: A2QI: An Approach for Air Pollution Estimation in MediaEval 2020
- Presentation: SlideShare link
- Video: YouTube link
To QHL-UIT
- For: Their simple but efficient approach to capturing the spatio-temporal-concept correlations of cross-data.
- From: Insight for Wellbeing Task organizers
- Paper: Insights for Wellbeing: Predicting Personal Air Quality Index Using Regression Approach
To AI-JMU
- For: Not only showing where their method succeeded, but also showing where it failed.
- From: Medico Task Organizers
- Paper: Bigger Networks are not Always Better: Deep Convolutional Neural Networks for Automated Polyp Segmentation
To All Medico Participants
- For: Making this the biggest year for Medico despite the year's challenges and the workshop being held online.
- From: Medico Task Organizers
To DCU-Audio
- For: Their overall performance and the use of the Memento dataset as external data.
- From: Predicting Media Memorability Task organisers
- Paper: Leveraging Audio Gestalt to Predict Media Memorability
- Video: YouTube link
To MG-UCB
- For: Their good results and their use of multiple sources (audio, text, visual) in their approach.
- From: Predicting Media Memorability Task organisers
- Paper: Multi-Modal Ensemble Models for Predicting Video Memorability
- Video: YouTube link
To HCMUS_Team
- For: Their novel approach to achieving both the adversarial effects and image enhancement in an end-to-end manner by optimizing an image-to-image translation model.
- From: Pixel Privacy Task Organizers
- Paper: HCMUS at Pixel Privacy 2020: Quality Camouflage with Back Propagation and Image Enhancement
To Linkmedia
- For: Their novel attempt to specifically address the JPEG compression by quantization in the DCT domain.
- From: Pixel Privacy Task Organizers
- Paper: Fooling an Automatic Image Quality Estimator
To iCV-UT
- For: Their proposed cascade method, based on stroke decomposition, for stroke classification.
- From: Sports Video Classification Task organizers
- Paper: Spatio-Temporal Based Table Tennis Hit Assessment Using LSTM Algorithm
To Toyohashi University of Technology
- For: Their ablation study on stroke classification using different sources of information.
- From: Sports Video Classification Task organizers
- Paper: Leveraging Human Pose Estimation Model for Stroke Classification in Table Tennis