What is MediaEval?
MediaEval is a benchmarking initiative dedicated to evaluating new algorithms for multimedia analysis, retrieval and exploration. It emphasizes the ‘multi’ in multimedia by involving multiple modalities (visual, textual, audio, and sensor data) and focuses on the human and social aspects of multimedia tasks. Our larger aim is to promote reproducible research that makes multimedia a positive force for society. MediaEval attracts participants who are interested in multimodal approaches to multimedia involving, for example, speech recognition, multimedia content analysis, music and audio analysis, user-contributed information (tags, tweets), viewer affective response, environmental sensing, social networks, and temporal and geo-coordinates.
For more information about past years, see:
- SIGMM Records, Volume 12, Issue 2, June 2020
- MediaEval 2017: Overview of the Year
- IEEE MultiMedia, Vol. 24, No. 1, pp. 93–96, 2016: The Benchmarking Initiative for Multimedia Evaluation: MediaEval 2016
- ERCIM News 97, April 2014
- IEEE Speech & Language Processing Technical Committee Newsletter, February 2014
- SIGMM Records, Volume 5, Issue 2, June 2013
- ERCIM News 94, July 2013
- ERCIM News 88, January 2012
- SIGMM Records, Volume 3, Issue 2, June 2011
- SIGMM Records, Volume 2, Issue 2, June 2010
Who runs MediaEval?
MediaEval is a community-driven benchmark run by the MediaEval organizing committee, which consists of the task organizers of all the individual tasks in a given year. MediaEval tasks are largely autonomous, and each team of task organizers is responsible for running its own task.
Two groups work in the background to guide MediaEval and keep it running smoothly.
The MediaEval Logistics Committee (2021)
- Mihai Gabriel Constantin, University Politehnica of Bucharest, Romania
- Steven Hicks, SimulaMet, Norway
- Ngoc-Thanh Nguyen, University of Information Technology, Vietnam
- Ricardo Manhães Savii, Dafiti Group, Brazil (Website)
The MediaEval Community Council (2021)
- Martha Larson, Radboud University, Netherlands (Coordinator and contact person)
- Minh-Son Dao, National Institute of Information and Communications Technology, Tokyo, Japan
- Bogdan Ionescu, University Politehnica of Bucharest, Romania
- Gareth J. F. Jones, Dublin City University, Ireland
How can I get involved?
MediaEval is an open initiative, meaning that any interested research group is free to sign up and participate. Groups sign up for one or more tasks; they then receive task definitions, data sets, and supporting resources, which they use to develop their algorithms. At the end of the season, groups submit their results and attend the MediaEval workshop. See also Why Participate? or watch some videos on the MediaEval video page.
MediaEval welcomes new task proposals. At the beginning of the year, a call for new task proposals appears. Groups of researchers can propose to organize a new task or to continue a task organized in past years. Proposing a task requires assembling a task organization team, creating a task design (a task definition that fits the user scenario, plus an evaluation methodology), and laying the groundwork for task logistics (a source of data, a source of ground truth, and an evaluation metric). If you have an idea for a task that is not yet fully developed, you can propose a MediaEval Task Force, a group of people working informally towards a task to be proposed in a future year of MediaEval.
What is a MediaEval task?
A MediaEval task consists of four parts:
- Data provided to the benchmark participants,
- A task definition that describes the problem to be solved,
- Ground truth against which participants’ algorithms are evaluated,
- An evaluation metric.
MediaEval tasks are oriented towards user needs in specific application settings and, to the extent possible, are based on scenarios of use derived from real-world problems.
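To make these four parts concrete, below is a minimal sketch of what happens at evaluation time: a participant's run on the task data is scored against the ground truth using the task's metric. The file names, the run format (one whitespace-separated item–label pair per line), and the use of accuracy as the metric are illustrative assumptions only; every MediaEval task defines its own data formats and evaluation metrics.

```python
# Minimal sketch (not an official MediaEval script): score a submitted run
# against the ground truth. The file formats and the accuracy metric are
# assumptions for illustration; each task specifies its own.

def load_labels(path):
    """Read whitespace-separated (item_id, label) pairs into a dict."""
    labels = {}
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue  # skip blank lines
            item_id, label = line.split()
            labels[item_id] = label
    return labels

def accuracy(ground_truth, predictions):
    """Fraction of ground-truth items that the run labeled correctly."""
    correct = sum(
        1 for item_id, label in ground_truth.items()
        if predictions.get(item_id) == label
    )
    return correct / len(ground_truth)

if __name__ == "__main__":
    ground_truth = load_labels("ground_truth.txt")  # hypothetical file, held by organizers
    run = load_labels("team_run.txt")               # hypothetical file, submitted by a team
    print(f"accuracy: {accuracy(ground_truth, run):.3f}")
```

Task organizers often release an evaluation script of this kind together with the task definition, so that participants can score their own development runs before submitting.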
What is the MediaEval Workshop?
The culmination of the yearly MediaEval benchmarking cycle is the MediaEval Workshop. The workshop brings together task participants to present their findings, discuss their approaches, learn from each other, and make plans for future work.
The MediaEval Workshop was co-located with the ACM Multimedia conference in 2010, 2013, 2016, and 2019, and with the European Conference on Computer Vision in 2012. It was an official satellite event of the Interspeech conference in 2011 and 2015. In 2017, CLEF and MediaEval were held back to back with an overlapping joint session. Since 2017, MediaEval has offered opportunities for remote participation, and in 2020 the workshop took place fully online.
Each year, the workshop publishes a working notes proceedings containing papers written by the task organizers and task participants. The aims of the MediaEval Working Notes Proceedings are described in more detail in this paper.
The MediaEval workshop also welcomes attendees who did not participate in specific tasks, but who are interested in multimedia research, or getting involved in MediaEval in the future.
How did MediaEval come into being?
Martha Larson and Gareth Jones founded MediaEval in 2008 as VideoCLEF, a track in the CLEF Campaign. Martha Larson serves as the overall coordinator.
MediaEval became an independent benchmarking initiative in 2010 under the auspices of the PetaMedia Network of Excellence. In 2011, it also received support from ICT Labs of EIT. Since 2012, MediaEval has run as a fully bottom-up benchmark, in that it is not associated with a single “parent project”.
For support over the years we would particularly like to thank ELIAS (Evaluating Information Access Systems), an ESF Research Networking Programme; ACM SIGIR, the Special Interest Group on Information Retrieval; and ACM SIGMM, the Special Interest Group on Multimedia. We are especially grateful for this support because it enables us to offer travel grants to students and other researchers who need them. The Multimedia Computing Group at TU Delft has made an important contribution to MediaEval, and we would especially like to thank Saskia Peters. Please refer to the pages of the individual years for complete lists of supporters.
Why participate?
MediaEval is innovative: Expand your research horizons by trying out a new task. We make every effort to keep the threshold low for entering new tasks.
MediaEval is flexible: Choose the tasks that interest you. Devise your own combination of visual, text, speech, audio, sensor, and social features.
MediaEval is both competitive and supportive: Develop or refine your techniques by comparing your results to those of others. Both newcomers and seasoned researchers participate.
MediaEval brings researchers together: Meet other people working on similar topics.
MediaEval strengthens projects: Join MediaEval as a task organizer. Propose a new task based on your project.
MediaEval is cost effective: We bundle resources to keep costs low. Participation in MediaEval is free of charge, and we make every effort to keep the yearly workshop reasonably priced.
MediaEval lets you meet new people and see the world: MediaEval welcomes new participants and, in the past, has held events in some pretty interesting places!