# Submission results

## Leaderboard - PR-AUC-macro
Rank | Team | Run | PR-AUC-macro | ROC-AUC-macro | External data
---|---|---|---|---|---
1 | lileonardo | 3-average | 0.150872 | 0.774789 | - |
2 | lileonardo | 2-filters | 0.147867 | 0.770340 | - |
3 | lileonardo | 1-convs | 0.146867 | 0.769089 | - |
4 | SELAB-HCMUS | Run4 | 0.143531 | 0.759913 | - |
5 | SELAB-HCMUS | Run1 | 0.141580 | 0.757486 | - |
6 | Mirable | ensemble | 0.135657 | 0.768738 | MTG-Jamendo |
7 | SELAB-HCMUS | Run2 | 0.134324 | 0.750496 | - |
8 | Mirable | short-chunk | 0.127531 | 0.754153 | - |
9 | SELAB-HCMUS | Run3 | 0.126185 | 0.746375 | - |
10 | Mirable | noisy-student | 0.123515 | 0.761398 | MTG-Jamendo |
11 | UIBK-DBIS | run1_ensemble_vggish_kmeans | 0.108741 | 0.704668 | - |
12 | baseline | vggish | 0.107734 | 0.725821 | - |
13 | UIBK-DBIS | run3_ensemble_vggish_dikmeans_4 | 0.098415 | 0.682911 | - |
14 | UIBK-DBIS | run5_ensemble_resnet_linear_4 | 0.092162 | 0.699669 | - |
15 | UIBK-DBIS | run2_ensemble_resnet_kmeans | 0.091039 | 0.691661 | - |
16 | UIBK-DBIS | run4_ensemble_resnet_dikmeans_4 | 0.079915 | 0.680712 | - |
17 | baseline | popular | 0.031924 | 0.500000 | - |
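Both ranking metrics in this leaderboard are threshold-free and macro-averaged over tags. As a point of reference (not the official evaluation script), they can be reproduced with scikit-learn; the sketch below uses hypothetical toy data with 4 tracks and 3 tags:

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# Hypothetical toy data: rows are tracks, columns are emotion/theme tags.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_score = np.array([[0.9, 0.2, 0.7],
                    [0.8, 0.8, 0.3],
                    [0.7, 0.6, 0.2],
                    [0.1, 0.1, 0.9]])

# Macro averaging: compute the metric independently per tag,
# then take the unweighted mean over tags.
pr_auc_macro = average_precision_score(y_true, y_score, average="macro")
roc_auc_macro = roc_auc_score(y_true, y_score, average="macro")
```

Because the average is unweighted, rare tags count as much as frequent ones, which is why PR-AUC-macro values in the table stay low even for strong systems.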
## Leaderboard - F-score-macro
Rank | Team | Run | F-score-macro | External data
---|---|---|---|---
1 | lileonardo | 3-average | 0.209059 | - |
2 | lileonardo | 2-filters | 0.205304 | - |
3 | lileonardo | 1-convs | 0.204764 | - |
4 | SELAB-HCMUS | Run4 | 0.202348 | - |
5 | SELAB-HCMUS | Run1 | 0.201774 | - |
6 | Mirable | ensemble | 0.197800 | MTG-Jamendo |
7 | SELAB-HCMUS | Run2 | 0.191837 | - |
8 | Mirable | short-chunk | 0.186429 | - |
9 | SELAB-HCMUS | Run3 | 0.184957 | - |
10 | Mirable | noisy-student | 0.183349 | MTG-Jamendo |
11 | baseline | vggish | 0.165694 | - |
12 | UIBK-DBIS | run3_ensemble_vggish_dikmeans_4 | 0.110302 | - |
13 | UIBK-DBIS | run1_ensemble_vggish_kmeans | 0.109937 | - |
14 | UIBK-DBIS | run5_ensemble_resnet_linear_4 | 0.105877 | - |
15 | UIBK-DBIS | run2_ensemble_resnet_kmeans | 0.103954 | - |
16 | UIBK-DBIS | run4_ensemble_resnet_dikmeans_4 | 0.097726 | - |
17 | baseline | popular | 0.002642 | - |
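Unlike the AUC metrics, F-score-macro requires binarizing the per-tag scores first. A minimal sketch, again on hypothetical toy data and with an illustrative 0.5 threshold (the submissions may tune their own thresholds):

```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical toy data: 4 tracks x 3 tags.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_score = np.array([[0.9, 0.2, 0.7],
                    [0.8, 0.8, 0.3],
                    [0.7, 0.6, 0.2],
                    [0.1, 0.1, 0.9]])

# Binarize at an illustrative threshold, then average per-tag F1 over tags.
y_pred = (y_score >= 0.5).astype(int)
f_macro = f1_score(y_true, y_pred, average="macro")
```

The choice of threshold matters a lot for this metric, which explains why team rankings can shift between the AUC and F-score leaderboards.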
## Precision vs recall (macro)

*(Figure: scatter plot of precision-macro vs recall-macro for all submissions; not reproduced here.)*
## Mirable
Source code: https://github.com/gudgud96/noisy-student-emotion-training
Paper: https://2021.multimediaeval.com/paper17.pdf
Run | PR-AUC-macro | ROC-AUC-macro | F-score-macro | precision-macro | recall-macro | PR-AUC-micro | ROC-AUC-micro | F-score-micro | precision-micro | recall-micro
---|---|---|---|---|---|---|---|---|---|---
ensemble | 0.135657 | 0.768738 | 0.197800 | 0.167775 | 0.409949 | 0.157657 | 0.809137 | 0.173493 | 0.106134 | 0.474881 |
noisy-student | 0.123515 | 0.761398 | 0.183349 | 0.150479 | 0.364476 | 0.124938 | 0.801377 | 0.175060 | 0.110089 | 0.427155 |
short-chunk | 0.127531 | 0.754153 | 0.186429 | 0.150843 | 0.384173 | 0.162436 | 0.798875 | 0.170853 | 0.105537 | 0.448308 |
## SELAB-HCMUS
Source code: https://github.com/phoaiphuthinh/MediaEval2021Emotions
Paper: https://2021.multimediaeval.com/paper44.pdf
Run | PR-AUC-macro | ROC-AUC-macro | F-score-macro | precision-macro | recall-macro | PR-AUC-micro | ROC-AUC-micro | F-score-micro | precision-micro | recall-micro
---|---|---|---|---|---|---|---|---|---|---
Run1 | 0.141580 | 0.757486 | 0.201774 | 0.169763 | 0.393467 | 0.159171 | 0.800374 | 0.172549 | 0.106947 | 0.446325 |
Run2 | 0.134324 | 0.750496 | 0.191837 | 0.164775 | 0.358637 | 0.141714 | 0.791295 | 0.179068 | 0.113607 | 0.422528 |
Run3 | 0.126185 | 0.746375 | 0.184957 | 0.155460 | 0.368292 | 0.150162 | 0.794956 | 0.165861 | 0.104359 | 0.403887 |
Run4 | 0.143531 | 0.759913 | 0.202348 | 0.171926 | 0.378606 | 0.161228 | 0.801871 | 0.180733 | 0.114378 | 0.430460 |
## UIBK-DBIS
Source code: https://github.com/dbis-uibk/MediaEval2021
Paper: https://2021.multimediaeval.com/paper14.pdf
Run | PR-AUC-macro | ROC-AUC-macro | F-score-macro | precision-macro | recall-macro | PR-AUC-micro | ROC-AUC-micro | F-score-micro | precision-micro | recall-micro
---|---|---|---|---|---|---|---|---|---|---
run1_ensemble_vggish_kmeans | 0.108741 | 0.704668 | 0.109937 | 0.063610 | 0.667232 | 0.132411 | 0.764757 | 0.103718 | 0.056186 | 0.673321 |
run2_ensemble_resnet_kmeans | 0.091039 | 0.691661 | 0.103954 | 0.059228 | 0.660818 | 0.104683 | 0.749825 | 0.103206 | 0.055820 | 0.683104 |
run3_ensemble_vggish_dikmeans_4 | 0.098415 | 0.682911 | 0.110302 | 0.065357 | 0.614513 | 0.118523 | 0.748293 | 0.102883 | 0.056046 | 0.626124 |
run4_ensemble_resnet_dikmeans_4 | 0.079915 | 0.680712 | 0.097726 | 0.055330 | 0.665770 | 0.096578 | 0.734685 | 0.096959 | 0.052217 | 0.677287 |
run5_ensemble_resnet_linear_4 | 0.092162 | 0.699669 | 0.105877 | 0.061531 | 0.631955 | 0.110435 | 0.759330 | 0.105549 | 0.057528 | 0.638683 |
## baseline
Source code: https://github.com/MTG/mtg-jamendo-dataset
Paper: https://2021.multimediaeval.com/paper6.pdf
Run | PR-AUC-macro | ROC-AUC-macro | F-score-macro | precision-macro | recall-macro | PR-AUC-micro | ROC-AUC-micro | F-score-micro | precision-micro | recall-micro
---|---|---|---|---|---|---|---|---|---|---
popular | 0.031924 | 0.500000 | 0.002642 | 0.001427 | 0.017857 | 0.034067 | 0.513856 | 0.057312 | 0.079887 | 0.044685 |
vggish | 0.107734 | 0.725821 | 0.165694 | 0.138216 | 0.308650 | 0.140913 | 0.775029 | 0.177133 | 0.116097 | 0.373480 |
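The `popular` baseline's ROC-AUC-macro of exactly 0.500000 is what any constant-per-tag scorer produces (see the linked repo for the actual implementation): if every track gets the same score for a given tag, the ranking is all ties and each tag's ROC-AUC is 0.5. A sketch of this effect, on hypothetical random labels:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical labels: 100 tracks x 5 tags.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, 5))

# Score every track with the per-tag label frequency (a constant per tag).
tag_freq = y_true.mean(axis=0)
y_score = np.tile(tag_freq, (len(y_true), 1))

# Constant scores within each tag -> all ties -> per-tag AUC of 0.5.
auc = roc_auc_score(y_true, y_score, average="macro")
```

PR-AUC under such a scorer reduces to the mean tag prevalence, which matches the small but nonzero 0.031924 in the table.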
## lileonardo
Source code: https://github.com/vibour/emotion-theme-recognition
Paper: https://2021.multimediaeval.com/paper21.pdf
Run | PR-AUC-macro | ROC-AUC-macro | F-score-macro | precision-macro | recall-macro | PR-AUC-micro | ROC-AUC-micro | F-score-micro | precision-micro | recall-micro
---|---|---|---|---|---|---|---|---|---|---
1-convs | 0.146867 | 0.769089 | 0.204764 | 0.173581 | 0.386601 | 0.166450 | 0.799516 | 0.181043 | 0.113733 | 0.443548 |
2-filters | 0.147867 | 0.770340 | 0.205304 | 0.174992 | 0.368903 | 0.173629 | 0.799787 | 0.197082 | 0.127036 | 0.439318 |
3-average | 0.150872 | 0.774789 | 0.209059 | 0.183774 | 0.408175 | 0.174246 | 0.804873 | 0.184954 | 0.115387 | 0.465759 |