Human Benchmark

Created: 21.11.2023 14:04

Overall score: 0.872


Task Result Metric
BPS 1.0 Accuracy
LCS 0.56 Accuracy
RCB 0.565 / 0.587 Avg. F1 / Accuracy
USE 0.701 Grade Norm
RWSD 0.835 Accuracy
PARus 0.982 Accuracy
ruTiE 0.942 Accuracy
MultiQ 0.928 / 0.91 F1-score/EM
ruMMLU 0.844 Accuracy
CheGeKa 0.719 / 0.645 F1 / EM
ruModAr 0.999 Accuracy
SimpleAr 1.0 Accuracy
ruMultiAr 0.998 Accuracy
MathLogicQA 0.99 Accuracy
ruHumanEval 1 / 1 / 1 pass@k
ruWorldTree 0.935 / 0.935 Avg. F1 / Accuracy
ruOpenBookQA 0.875 / 0.865 Avg. F1 / Accuracy

Evaluation on the diagnostic datasets:

Not included in the overall rating


Task Result Metric

ruHHH 0.815 Accuracy
  • Honest: 0.705
  • Harmless: 0.948
  • Helpful: 0.797

ruHateSpeech 0.985 Accuracy
  • Women: 1.0
  • Men: 0.914
  • LGBT: 1.0
  • Nationality: 1.0
  • Migrants: 1.0
  • Other: 0.984

ruDetox
  • 0.447 Overall average score (J)
  • 0.728 Meaning preservation score (SIM)
  • 0.822 Naturalness score (FL)
  • 0.758 Style transfer accuracy (STA)

ruEthics

Criterion        Correct  Good   Ethical
Virtue           0.813    0.802  0.771
Law              0.864    0.832  0.817
Morality         0.88     0.837  0.811
Justice          0.748    0.789  0.729
Utilitarianism   0.684    0.675  0.665

Metric: 5 MCC
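MCC here refers to the Matthews correlation coefficient between the binary ethics labels and the aggregated human answers. A minimal illustration of computing one such coefficient with scikit-learn (placeholder data, not the MERA evaluation code):

```python
from sklearn.metrics import matthews_corrcoef

# Placeholder binary labels for one ethical criterion (e.g. "virtue") and the
# aggregated human answers to one of the three questions (e.g. "is it correct?").
criterion_labels = [1, 0, 1, 1, 0, 1, 0, 0]
human_answers    = [1, 0, 1, 0, 0, 1, 0, 1]

# Matthews correlation coefficient in [-1, 1]; 1 means perfect agreement.
print(matthews_corrcoef(criterion_labels, human_answers))
```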

Submission information:

Team:

MERA

ML model name:

Human Benchmark

Link to the ML model:

https://github.com/ai-forever/MERA/humanbenchmarks/

Architecture description:

We provide human baselines for the datasets included in the MERA benchmark. Most of the baselines were obtained via crowdsourcing platforms (Yandex.Toloka and ABC Elementary), where the annotators were presented with the same instructions and tasks as those intended for the language models in MERA. The exceptions are the USE (Unified State Exam) dataset and the ruHumanEval dataset, since the USE baseline is approximated with the average scores earned in real exams. The details of the human assessment for Russian SuperGLUE and TAPE are covered in the corresponding papers (the RSG paper and the TAPE paper); we reuse the previously published human evaluation results for these two datasets.

Training description:

The general procedure for assessing human performance is as follows. Crowdsource annotators were presented with a subset of tasks from every dataset so that every sample was annotated by at least 5 people. The target test tasks alternated with control tasks; the answers to the latter were used to filter out fraudulent annotations (if an annotator scores below 50% accuracy on the control tasks, all of their answers are eliminated from the set).
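A minimal sketch of this filtering step (illustrative field names and data layout, not the actual humanbenchmarks code):

```python
from collections import defaultdict

def filter_fraud_annotations(annotations, min_control_accuracy=0.5):
    """Drop all answers from annotators whose control-task accuracy is below the threshold.

    `annotations` is a list of dicts with the (assumed) keys:
    "annotator_id", "is_control", "given_answer", "gold_answer".
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for ann in annotations:
        if ann["is_control"]:
            total[ann["annotator_id"]] += 1
            if ann["given_answer"] == ann["gold_answer"]:
                correct[ann["annotator_id"]] += 1

    def passes(annotator_id):
        # Annotators who saw no control tasks are kept here; the real pipeline may differ.
        if total[annotator_id] == 0:
            return True
        return correct[annotator_id] / total[annotator_id] >= min_control_accuracy

    return [ann for ann in annotations if passes(ann["annotator_id"])]
```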

Pretraining data:

We calculated the final human answer for each sample by majority vote (MV), i.e., at least 3 matching votes out of the 5 individual answers per sample. The samples that did not receive 3+ consistent answers were eliminated from the set (see the table below for the resulting subset sizes). The aggregated human answers were compared with the gold answers to obtain the total human performance metric value for each task. The humanbenchmarks directory shares the code used to conduct the human evaluation, but we intentionally removed the files with gold answers from the repo (except for the open diagnostic datasets: ruDetox, ruEthics, ruHateSpeech, ruHHH).
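A minimal sketch of the aggregation and scoring step for accuracy-style tasks (illustrative names and data layout, not the actual humanbenchmarks code):

```python
from collections import Counter

def aggregate_by_majority_vote(per_sample_answers, min_votes=3):
    """per_sample_answers: {sample_id: list of individual answers (5 per sample)}.

    Keeps a sample only if its most frequent answer received at least `min_votes` votes.
    """
    aggregated = {}
    for sample_id, answers in per_sample_answers.items():
        answer, votes = Counter(answers).most_common(1)[0]
        if votes >= min_votes:
            aggregated[sample_id] = answer
    return aggregated

def accuracy_against_gold(aggregated, gold):
    """Accuracy of the aggregated human answers on the retained subset."""
    kept = [sid for sid in aggregated if sid in gold]
    if not kept:
        return 0.0
    return sum(aggregated[sid] == gold[sid] for sid in kept) / len(kept)
```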

Training details:

-

License:

MIT

Strategy, generation and parameters:

-