[Research] Evaluation Examples Are Not Equally Informative: How Should That Change NLP Leaderboards?
Leaderboards are widely used in NLP and push the field forward. While leaderboards are a straightforward ranking of NLP models, this simplicity can mask nuances in evaluation items (examples) and subjects (NLP models). Rather than replace leaderboards, we advocate a re-imagining so that they better highlight if and where progress is made. Building on educational testing, we create a Bayesian leaderboard model where latent subject skill and latent item difficulty predict correct responses. Using this model, we analyze the ranking reliability of leaderboards. Afterwards, we show the model can guide what to annotate, identify annotation errors, detect overfitting, and identify informative examples. We conclude with recommendations for future benchmark tasks.
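As a rough illustration of the leaderboard model described in the abstract, the sketch below fits a simple Rasch-style (1-parameter logistic) item response model: each subject i has a latent skill theta_i, each item j a latent difficulty b_j, and the probability of a correct response is sigmoid(theta_i - b_j). This is a minimal sketch written for this summary, not the authors' released implementation; the function name fit_irt, the toy response matrix, and the learning-rate and regularization settings are illustrative assumptions.

    # Minimal Rasch-style item response model (illustrative sketch, not the paper's code).
    # P(subject i answers item j correctly) = sigmoid(skill_i - difficulty_j).
    import numpy as np

    def fit_irt(responses, n_iters=2000, lr=0.05, reg=1e-2):
        """Fit latent skills and difficulties to a binary response matrix
        (subjects x items) by regularized gradient ascent on the Bernoulli
        log-likelihood (the L2 penalty plays the role of a Gaussian prior)."""
        n_subjects, n_items = responses.shape
        skill = np.zeros(n_subjects)
        difficulty = np.zeros(n_items)
        for _ in range(n_iters):
            logits = skill[:, None] - difficulty[None, :]
            p = 1.0 / (1.0 + np.exp(-logits))
            err = responses - p                      # gradient of log-likelihood w.r.t. logits
            skill += lr * (err.sum(axis=1) - reg * skill)
            difficulty += lr * (-err.sum(axis=0) - reg * difficulty)
        return skill, difficulty

    # Toy usage: 3 models (subjects) answering 5 evaluation examples (items).
    responses = np.array([[1, 1, 1, 0, 1],
                          [1, 0, 1, 0, 0],
                          [0, 0, 1, 0, 0]], dtype=float)
    skill, difficulty = fit_irt(responses)
    print("skills:", skill.round(2))
    print("difficulties:", difficulty.round(2))

Point estimation with an L2 penalty is only a crude stand-in for the Bayesian treatment the paper describes; the recovered difficulties nonetheless show the key idea that items which every model gets right (or wrong) contribute little to ranking, while items that separate strong from weak subjects are the informative ones.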