Human Benchmark

MERA · Created at 21.11.2023 14:04
Overall result: 0.852
Place in the rating: 1
In the top by tasks (place in the rating):
1 RWSD (main task)
1 PARus (main task)
1 ruEthics
1 MultiQ (main task)
1 CheGeKa (main task)
2 ruMMLU
1 ruHateSpeech
1 ruDetox
1 ruTiE (main task)
1 ruHumanEval
1 USE (main task)
1 MathLogicQA (main task)
1 ruMultiAr (main task)
1 SimpleAr
1 LCS (main task)
1 BPS
1 ruModAr (main task)
1 ruCodeEval (main task)
Weak tasks (place in the rating):
74 RCB
112 ruWorldTree
79 ruOpenBookQA
64 ruHHH
52 MaMuRAMu

Ratings for leaderboard tasks

Task name Result Metric
LCS 0.56 Accuracy
RCB 0.565 / 0.587 Accuracy / F1 macro
USE 0.701 Grade norm
RWSD 0.835 Accuracy
PARus 0.982 Accuracy
ruTiE 0.942 Accuracy
MultiQ 0.928 / 0.91 F1 / Exact match
CheGeKa 0.719 / 0.645 F1 / Exact match
ruModAr 0.999 Exact match
ruMultiAr 0.998 Exact match
MathLogicQA 0.99 Accuracy
ruWorldTree 0.935 / 0.935 Accuracy / F1 macro
ruOpenBookQA 0.875 / 0.865 Accuracy / F1 macro
MaMuRAMu 0.796 Accuracy
ruCodeEval 1.0 Pass@k

Evaluation on open tasks:

See the ratings by subcategory below.

Task name Result Metric
BPS 1.0 Accuracy
ruMMLU 0.844 Accuracy
SimpleAr 1.0 Exact match
ruHumanEval 1 / 1 / 1 Pass@k
ruHHH 0.815 Accuracy
ruHateSpeech 0.985 Accuracy
ruDetox 0.447 Overall average score (J)
ruEthics
Criterion Correct Good Ethical
Virtue 0.813 0.802 0.771
Law 0.864 0.832 0.817
Moral 0.88 0.837 0.811
Justice 0.748 0.789 0.729
Utilitarianism 0.684 0.675 0.665

Information about the submission:

MERA version: -
Torch version: -
Codebase version: -
CUDA version: -
Model weights precision: -
Seed: -
Batch: -
Transformers version: -
Number and type of GPUs: -
Architecture: -

Team:

MERA

Name of the ML model:

Human Benchmark

Model type:

Closed / API / Opened / Pretrain / SFT / MoE

Architecture description:

We provide human baselines for the datasets included in the MERA benchmark. Most of the baselines were obtained on crowdsourcing platforms (Yandex.Toloka & ABC Elementary), where annotators were presented with the same instructions and tasks as are intended for the language models in MERA. The exceptions are the USE (Unified State Exam) and ruHumanEval datasets; the USE baseline is approximated by the average scores earned in real exams. The details of the human assessment for Russian SuperGLUE and TAPE are covered in the corresponding papers (the RSG and TAPE papers); we reuse the previously published human evaluation results for these two datasets.

Description of the training:

The general procedure for assessing human performance is as follows. Crowdsource annotators were presented with a subset of tasks from every dataset, so that every sample was annotated by at least 5 people. The target test tasks alternated with control tasks; the answers to the latter were used to filter out fraudulent annotations (if an annotator scored below 50% accuracy on the control tasks, all of their answers were removed from the set).
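
For illustration only, the sketch below implements such a control-task filter in Python. The record layout (per-annotation dictionaries plus a dictionary of gold answers for the control tasks) and the function name are assumptions made for the example; they are not taken from the humanbenchmarks code.

```python
from collections import defaultdict

def filter_fraud_annotators(annotations, control_gold, threshold=0.5):
    """Drop every answer from annotators whose accuracy on the control
    tasks falls below `threshold`, then return only the target-task answers.

    annotations  -- list of dicts: {"annotator": str, "task_id": str, "answer": str}
    control_gold -- dict: task_id -> gold answer (control tasks only)
    Both layouts are assumptions for this sketch.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for ann in annotations:
        gold = control_gold.get(ann["task_id"])
        if gold is None:  # not a control task
            continue
        total[ann["annotator"]] += 1
        correct[ann["annotator"]] += int(ann["answer"] == gold)

    # Annotators scoring below the threshold on control tasks are discarded entirely.
    banned = {a for a in total if correct[a] / total[a] < threshold}
    return [
        ann for ann in annotations
        if ann["annotator"] not in banned and ann["task_id"] not in control_gold
    ]
```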

Pretrain data:

We calculated the final human answer for each sample by majority vote (MV), i.e. at least 3 consistent votes out of the 5 individual answers per sample. The samples that did not get 3 or more consistent answers were eliminated from the set (see the table below for the resulting subset sizes). The aggregated human answers were compared with the gold answers to obtain the overall human performance metric for each task. The humanbenchmarks repository shares the code used to conduct the human evaluation, but we intentionally removed the files with gold answers from the repo (except for the open diagnostic datasets: ruDetox, ruEthics, ruHateSpeech, ruHHH).
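
As a rough illustration of this aggregation step, the sketch below applies the 3-of-5 majority vote and then scores the retained subset against the gold answers with plain accuracy (the actual metric depends on the task). The data layout and function names are assumptions for the example; the real evaluation code is the one shared in the humanbenchmarks repository.

```python
from collections import Counter

def aggregate_by_majority(votes_per_sample, min_votes=3):
    """Keep a sample only if its most frequent answer collected at least
    `min_votes` of the (typically 5) individual votes.

    votes_per_sample -- dict: sample_id -> list of individual answers
    (layout is an assumption for this sketch)
    """
    aggregated = {}
    for sample_id, votes in votes_per_sample.items():
        answer, count = Counter(votes).most_common(1)[0]
        if count >= min_votes:
            aggregated[sample_id] = answer  # samples without a 3+ majority are dropped
    return aggregated

def human_score(aggregated, gold):
    """Compare aggregated human answers with gold labels on the retained subset."""
    if not aggregated:
        return 0.0
    return sum(aggregated[s] == gold[s] for s in aggregated) / len(aggregated)
```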

Training Details:

-

License:

MIT

Strategy, generation and parameters:

-

Ratings by subcategory

ruHHH (Metric: Accuracy)
Model, team Honest Helpful Harmless
Human Benchmark (MERA) 0.705 0.797 0.948

ruMMLU
Human Benchmark (MERA), accuracy by subject:
Anatomy 0.4
Virology 0.25
Astronomy 0.2
Marketing 0.143
Nutrition 0.286
Sociology 0.4
Management 0.133
Philosophy 0.412
Prehistory 0.2
Human aging 0.2
Econometrics 0.091
Formal logic 0.3
Global facts 0.4
Jurisprudence 0.231
Miscellaneous 0.227
Moral disputes 0.3
Business ethics 0.2
Biology (college) 0.296
Physics (college) 0.1
Human Sexuality 0.1
Moral scenarios 0.1
World religions 0.212
Abstract algebra 0.2
Medicine (college) 0.412
Machine learning 0.2
Medical genetics 0.091
Professional law 0.125
PR 0.143
Security studies 0.2
Chemistry (school) 0.273
Computer security 0.2
International law 0.389
Logical fallacies 0.2
Politics 0.2
Clinical knowledge 0.455
Conceptual physics 0.2
Math (college) 0.2
Biology (high school) 0.238
Physics (high school) 0.6
Chemistry (high school) 0.2
Geography (high school) 0.266
Professional medicine 0.1
Electrical engineering 0.4
Elementary mathematics 0.4
Psychology (high school) 0.188
Statistics (high school) 0.2
History (high school) 0.2
Math (high school) 0.3
Professional accounting 0.2
Professional psychology 0
Computer science (college) 0.227
World history (high school) 0.313
Macroeconomics 0.294
Microeconomics 0.267
Computer science (high school) 0.125
European history 0.303
Government and politics 0.148

ruDetox
Model, team SIM FL STA
Human Benchmark (MERA) 0.728 0.822 0.758

ruEthics
Human Benchmark (MERA), results for the "Correct", "Good", and "Ethical" labels:
Label Virtue Law Moral Justice Utilitarianism
Correct 0.813 0.864 0.88 0.748 0.684
Good 0.802 0.832 0.837 0.789 0.675
Ethical 0.771 0.817 0.811 0.729 0.665

ruHateSpeech
Model, team Women Men LGBT Nationalities Migrants Other
Human Benchmark (MERA) 1 0.914 1 1 1 0.984