MediaEval Philosophy

MediaEval is a benchmark that provides standardized task descriptions, data sets, and evaluation procedures. Benchmarks make possible the systematic comparison of different approaches to problems of retrieving or analyzing multimedia content. MediaEval promotes the benefits that arise from methodical comparison: identification of state-of-the-art algorithms and promotion of research progress.

MediaEval pursues its objective of offering a benchmark to the multimedia research community within a broader philosophy that emphasizes understanding. Specifically, we seek to transcend conventional benchmarks by striving to gain qualitative insight about our data, our evaluation metrics, our ground truth or evaluation procedure, and the fundamental nature of the challenges that our tasks represent.

Targeting such insight helps us avoid devoting a disproportionate amount of attention to tasks whose performance is easy or obvious to measure with conventional quantitative metrics. It also encourages the researchers who participate in the benchmark to take leaps of innovation that may not pay off in the short run but are promising in the long run. We try to recognize the value of innovative ideas before they are developed enough to improve metrics, and to prevent promising, but still immature, approaches from being abandoned prematurely.

MediaEval expresses the importance that it places on understanding through MediaEval Distinctive Mentions (MDMs). Each year, the task organizers single out participating teams that have gone beyond solving the task to make a further contribution. Recipients of MDMs are teams that stand out because their approach to the MediaEval task was highly innovative and/or delivered important insight with potential for the future. Teams that receive MDMs are not necessarily those that scored at the top of the task ranking/leaderboard.

From 2013 to 2019, MDMs were informal: they were announced at the workshop but not otherwise publicized. Starting in 2020, we publish the MDMs online (see the list below). (Note: The task rankings can be found in the workshop overview presentations of the individual tasks.)

In 2022, MediaEval introduced the Quest for Insight, which encourages explicit discussion of how participants can explore tasks in ways that go beyond an exclusive focus on the evaluation metric. We hope that the Quest for Insight will lead to many inspiring MDMs moving forward.

MediaEval Distinctive Mentions (MDMs)

These are the distinctive mentions awarded at MediaEval.

2023

The papers for 2023 are not yet officially published. We will add links to the papers once they become available.

To Team SELAB-HCMUS

  • For: Properly evaluating and addressing the data imbalance problem
  • From: The organizers of Medical Multimedia Task - Transparent Tracking of Spermatozoa

To Sci-LAB, Sensory-Cognitive Interaction Lab, Stockholm University

  • For: Participating in all four subtasks, obtaining the best results in all, and reporting their work in detail
  • From: The organizers of MUSTI - Multimodal Understanding of Smells in Texts and Images with Zero-shot Evaluation

To Lucien Heitz, Abraham Bernstein, and Luca Rossetto

  • For: Giving new insights into user preferences based on empirical studies analyzing diverse matching strategies.
  • From: The organizers of NewsImages

To Bhuvana Jayaraman, Mirnalinee TT, Harshida Sujatha Palaniraj, Mohith Adluru and Sanjjit Sounderrajan, Nadar College of Engineering, India

  • For: Providing an efficient solution to text extraction from swimming scoreboard images
  • From: The organizers of SportsVideo

To Team FAST-NUCES-KHI: Muhammad Mustafa Ali Usmani, Humna Faisal and Muhammad Atif Tahir

  • For: Their interesting analysis of the differences between our datasets and the problems these differences pose
  • From: The organizers of Memorability

2022

To Jane Arleth Dela Cruz and the InDeep-RU team (Radboud University)

To Damir Korenčić, Ivan Grubišić, Gretel Liz De La Peña Sarracén, Alejandro Hector Toselli, Berta Chulvi and Paolo Rosso (Universitat Politècnica de València and Ruđer Bošković Institute, Zagreb)

To Yi Shao, Yang Zhang, Wenbo Wan, Jing Li and Jiande Sun, Shandong Normal University (SDNU), China

To Team HCMUS, University of Science, VNU-HCM, Viet Nam National University Ho Chi Minh City, John von Neumann Institute, VNU-HCM, Vietnam

To Team Erectus, Intellectus Inc., Poznan University of Technology, University of Warsaw

  • For: A novel method for predicting the motility level of sperm samples (subtask 2) and for participating in subtask 3
  • From: The organizers of the Medico Medical Multimedia task
  • Paper:

To Damianos Galanopoulos and Vasileios Mezaris from ITI-CERTH, Greece

To Mirko Agarla, Luigi Celona and Raimondo Schettini from University of Milano-Bicocca

To Finn Bartels and Leonard Hacker from the University of Leipzig, Germany

To Khanh-Linh Vo, Gia-Nghi Phuc-Nguyen, Tuong-Nghiem Diep, and Nhat-Hao Pham from VNU-HCMUS

To Team DCU-Insight-AQ from Dublin City University and University College Dublin, Ireland

To Birk Torpmann-Hagen from SimulaMet, Norway

To Tor-Arne Schmidt Nordmo, UiT, Norway and the NjordVid team

  • For: Daring and dedication in organizing a new video analysis and privacy task to continue an important tradition at MediaEval
  • From: The MediaEval 2022 Benchmark Coordinating Committee

To The MUSTI organization team

  • For: Opening the eyes (and noses) of MediaEval participants to the modality of smell with the MUSTI task on Multimodal Understanding of Smells in Texts and Images
  • From: The MediaEval 2022 Benchmark Coordinating Committee

To The Memorability Task

  • For: Their impressive contribution to promoting insight into data and approaches at MediaEval in the first year of Quest for Insight papers
  • From: The MediaEval 2022 Benchmark Coordinating Committee

2021

To Omar Meriwani (Real Sciences)

To Hao Hao Tan (team Mirable)

To Cláudio, Rui, and David (Team NewsSeek-NOVA)

To SELAB-HCMUS

To Youri Peskine, Giulio Alfarano, Ismail Harrando, Paolo Papotti and Raphaël Troncy (team D2KLab)

To Thomas Girault, Cheikh Brahim El Vaigh, Cyrielle Mallart and Duc Hau Nguyen (team Deltamap)

To Zeynep Pehlivan (team FakeINA)

To Muhammad Asif Ayub, Khubaib Ahmad, Kashif Ahmad, Nasir Ahmad, and Ala Al-Fuqaha (CSEI team)

To Yijun Qian, Lijun Yu, Wenhe Liu and Alexander Hauptmann

To Alison Reboud, Ismail Harrando, Jorma Laaksonen and Raphaël Troncy

To Ali Akbar, Muhammad Atif Tahir, Muhammad Rafi

To Andrea Storås (SimulaMet)

To Felicia Ly Jacobsen (University of Oslo)

2020

To SAIL-MiM-USC

To Linkmedia

To MG-UCB

To DL-TXST

To FIG

To FAST-NU-DS

To AISIA

To QHL-UIT

To AI-JMU

To All Medico Participants

  • For: Making this the biggest year for Medico, despite the year's challenges and the workshop being held online
  • From: Medico Task Organizers

To DCU-Audio

To MG-UCB

To HCMUS_Team

To Linkmedia

To iCV-UT

To Toyohashi University of Technology