MediaEval Philosophy

MediaEval is a benchmark: it provides standardized task descriptions, data sets, and evaluation procedures. Benchmarks make it possible to systematically compare the performance of different approaches to problems of retrieving or analyzing multimedia content. MediaEval promotes the benefits that arise from methodical comparison: identification of state-of-the-art algorithms and promotion of research progress.

MediaEval pursues its objective of offering a benchmark to the multimedia research community within a broader philosophy that emphasizes understanding. Specifically, we seek to transcend conventional benchmarks by striving to gain qualitative insight about our data, our evaluation metrics, our ground truth or evaluation procedure, and the fundamental nature of the challenges that our tasks represent.

Targeting such insight helps us avoid devoting a disproportionate amount of attention to tasks whose performance is easy or obvious to measure with conventional quantitative metrics. It also encourages the researchers who participate in the benchmark to take leaps of innovation that may not pay off in the short run, but are promising in the long run. We try to identify and appreciate the value of innovative ideas before they are developed enough to improve metrics, and to prevent promising but still immature approaches from being abandoned prematurely.

MediaEval expresses the importance that it places on understanding with MediaEval Distinctive Mentions (MDMs). Each year, the organizers of the tasks single out participating teams that have gone beyond solving the task to make a further contribution. Recipients of MDMs are teams that stand out because their approach to the MediaEval task was highly innovative and/or delivered important insight with potential for the future. Teams that receive MDMs are not necessarily those that scored at the top of the task ranking/leaderboard.

From 2013 to 2019, MDMs were informal: they were announced at the workshop but not otherwise publicized. Starting in 2020, we are publishing the MDMs online (see list below). (Note: The task rankings can be found in the workshop overview presentations of the individual tasks.)

In 2022, MediaEval introduced the Quest for Insight, which encourages explicit discussion of how participants can explore tasks in a way that goes beyond exclusive focus on the evaluation metric. We hope that the Quest for Insight will lead to many inspiring MDMs moving forward.

MediaEval Distinctive Mentions (MDMs)

These are the distinctive mentions awarded at MediaEval.

The papers, videos, and presentations have not yet been published. We will update the details as they become available.

To Omar Meriwani (Real Sciences)

  • For: His creativity in taking a step forward with the task data and exploring other approaches.
  • From: The organizers of the Emerging News task

To Hao Hao Tan (team Mirable)

  • For: Showing the value of additional input representations describing tonality
  • From: The Organizers of Emotions and Themes in Music Task

To Cláudio, Rui, and David (Team NewsSeek-NOVA)

  • For: Teaching a Transformer to Recognize Faces
  • From: The Organizers of the News Images Task


  • For: The substantial effort and diverse out-of-the-box approaches that led them to the best results
  • From: Organizers of the task: Visual Sentiment Analysis: A Natural Disaster Use-case

To Youri Peskine, Giulio Alfarano, Ismail Harrando, Paolo Papotti and Raphaël Troncy (team D2KLab)

  • For: The best task results, achieved by effectively ensembling a swarm of transformer-based models
  • From: Organizers of the task: FakeNews: Corona Virus and Conspiracies Multimedia Analysis Task

To Thomas Girault, Cheikh Brahim El Vaigh, Cyrielle Mallart and Duc Hau Nguyen (team Deltamap)

  • For: The successful use of the prompt-based learning NLP paradigm on top of a toxicity-trained model
  • From: Organizers of the task: FakeNews: Corona Virus and Conspiracies Multimedia Analysis Task

To Zeynep Pehlivan (team FakeINA)

  • For: A fresh look at the task, successfully solving a pure NLP task in the GNN domain
  • From: Organizers of the task: FakeNews: Corona Virus and Conspiracies Multimedia Analysis Task

To Muhammad Asif Ayub, Khubaib Ahmad, Kashif Ahmad, Nasir Ahmad, and Ala Al-Fuqaha (CSEI team)

  • For: Their idea of translating Italian tweets with a positive label to English and back to Italian in order to enlarge and balance the dataset
  • From: Organizers of the task WaterMM: Water Quality in Social Multimedia

To Yijun Qian, Lijun Yu, Wenhe Liu and Alexander Hauptmann

  • For: Their ablation method, high-quality paper, and top ranking in the task
  • From: Organizers of the Sports Video task

To Alison Reboud, Ismail Harrando, Jorma Laaksonen and Raphaël Troncy

  • For: Exploring interesting concepts such as perplexity, and achieving top scores in a number of subtasks.
  • From: Organizers of the task: Predicting Media Memorability

To Ali Akbar, Muhammad Atif Tahir, Muhammad Rafi

  • For: Their detailed preprocessing and validation steps, the risk they took in attempting all three subtasks, and their top ranking in the task.
  • From: Organizers of the task: Insight for Wellbeing: Cross-Data Analytics for (transboundary) Haze Prediction

To Andrea Storås (SimulaMet)

  • For: Her unique submission on generating segmentation masks in an unsupervised manner.
  • From: Organizers of the task: Medico: Transparency in Medical Image Segmentation

To Felicia Ly Jacobsen (University of Oslo)

  • For: Making her submission more transparent by measuring uncertainty in the predicted segmentation masks.
  • From: Organizers of the task: Medico: Transparency in Medical Image Segmentation


To Linkmedia

To All Medico Participants

  • For: Making this the biggest year for Medico despite this year’s challenges and it being online.
  • From: Medico Task Organizers

To DCU-Audio

To Linkmedia

To Toyohashi University of Technology