Task name | Result | Metric |
---|---|---|
LCS | 0.56 | Accuracy |
RCB | 0.565 / 0.587 | Avg. F1 / Accuracy |
USE | 0.701 | Grade Norm |
RWSD | 0.835 | Accuracy |
PARus | 0.982 | Accuracy |
ruTiE | 0.942 | Accuracy |
MultiQ | 0.928 / 0.91 | F1-score/EM |
CheGeKa | 0.719 / 0.645 | F1 / EM |
ruModAr | 0.999 | EM |
ruMultiAr | 0.998 | EM |
MathLogicQA | 0.99 | Accuracy |
ruWorldTree | 0.935 / 0.935 | Avg. F1 / Accuracy |
ruOpenBookQA | 0.875 / 0.865 | Avg. F1 / Accuracy |
Task name | Result | Metric |
---|---|---|
BPS | 1.0 | Accuracy |
ruMMLU | 0.844 | Accuracy |
SimpleAr | 1.0 | EM |
ruHumanEval | 1 / 1 / 1 | pass@k |
ruHHH | 0.815 | Accuracy |
ruHateSpeech | 0.985 | Accuracy |
ruDetox | | Overall average score (J) / Meaning preservation (SIM) / Naturalness (FL) / Style Transfer Accuracy (STA) |
ruEthics | [[0.813, 0.864, 0.88, 0.748, 0.684], …] | 5 MCC |
Human Benchmark
We provide human baselines for the datasets included in the MERA benchmark. Most of the baselines were obtained on crowdsourcing platforms (Yandex.Toloka and ABC Elementary), where the annotators were shown the same instructions and tasks as those intended for the language models in MERA. The exceptions are the USE (Unified State Exam) dataset and the ruHumanEval dataset; the USE baseline is approximated with the average scores earned in real exams. The details of the human assessment for Russian SuperGLUE and TAPE are covered in the corresponding papers (RSG paper, TAPE paper); we reuse the previously published human evaluation results for these two datasets.
The general procedure for assessing human performance is as follows. Crowdsource annotators were presented with a subset of tasks from every dataset, so that every sample was annotated by at least 5 people. The target test tasks were interleaved with control tasks; answers to the latter were used to filter out fraudulent annotations (if an annotator scores below 50% accuracy on the control tasks, all of their answers are removed from the set).
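A minimal sketch of this control-task filter is shown below. The function name, record fields (`annotator`, `sample_id`, `answer`), and data layout are illustrative assumptions, not taken from the humanbenchmarks code:

```python
from collections import defaultdict

def filter_annotators(annotations, control_answers, threshold=0.5):
    """Drop all annotations from annotators whose accuracy on control tasks
    falls below the threshold (50% in the MERA setup).

    annotations: list of dicts like {"annotator": ..., "sample_id": ..., "answer": ...}
    control_answers: dict mapping a control sample_id to its gold answer
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for ann in annotations:
        if ann["sample_id"] in control_answers:
            total[ann["annotator"]] += 1
            correct[ann["annotator"]] += int(ann["answer"] == control_answers[ann["sample_id"]])
    # Annotators who passed the control-task check.
    reliable = {a for a in total if correct[a] / total[a] >= threshold}
    # Keep only target-task annotations from reliable annotators.
    return [ann for ann in annotations
            if ann["annotator"] in reliable and ann["sample_id"] not in control_answers]
```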
We calculated the final human answer for each sample by majority vote (MV), i.e., at least 3 of the 5 individual answers per sample must agree. Samples that did not receive 3+ consistent answers were eliminated from the set (see the table below for the resulting subset sizes). The aggregated human answers were compared with the gold answers to obtain the total human performance metric value for each task. The humanbenchmarks repository shares the code used to conduct the human evaluation, but we intentionally removed the files with gold answers from the repo (except for the open diagnostic datasets: ruDetox, ruEthics, ruHateSpeech, ruHHH).
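The aggregation and scoring step can be sketched as follows (again with hypothetical field and function names; accuracy is used as the example metric, while other tasks use F1, EM, etc.):

```python
from collections import Counter

def aggregate_majority_vote(annotations, min_votes=3):
    """Collapse the 5 individual answers per sample into one human answer by
    majority vote; samples without min_votes consistent answers are dropped."""
    by_sample = {}
    for ann in annotations:
        by_sample.setdefault(ann["sample_id"], []).append(ann["answer"])
    aggregated = {}
    for sample_id, answers in by_sample.items():
        answer, count = Counter(answers).most_common(1)[0]
        if count >= min_votes:
            aggregated[sample_id] = answer
    return aggregated

def human_accuracy(aggregated, gold):
    """Compare the aggregated human answers with the gold answers."""
    kept = [sid for sid in aggregated if sid in gold]
    return sum(aggregated[sid] == gold[sid] for sid in kept) / len(kept)
```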
License: MIT