Yi-1.5-9B-Chat-16K

MERA. Created at 20.09.2024 13:42
Overall result: 0.373
Place in the rating: 309
Weak tasks (place in the per-task rating):
RWSD: 521
PARus: 310
RCB: 117
ruEthics: 339
MultiQ: 329
ruWorldTree: 325
ruOpenBookQA: 350
CheGeKa: 398
ruMMLU: 300
ruHateSpeech: 302
ruDetox: 302
ruHHH: 355
ruTiE: 241
ruHumanEval: 332
USE: 277
MathLogicQA: 227
ruMultiAr: 200
SimpleAr: 153
LCS: 190
BPS: 74
ruModAr: 299
MaMuRAMu: 361

Ratings for leaderboard tasks


Task name | Result | Metric
LCS | 0.116 | Accuracy
RCB | 0.555 / 0.39 | Accuracy / F1 macro
USE | 0.11 | Grade norm
RWSD | 0.404 | Accuracy
PARus | 0.68 | Accuracy
ruTiE | 0.645 | Accuracy
MultiQ | 0.277 / 0.181 | F1 / Exact match
CheGeKa | 0.021 / 0.012 | F1 / Exact match
ruModAr | 0.401 | Exact match
MaMuRAMu | 0.497 | Accuracy
ruMultiAr | 0.267 | Exact match
ruCodeEval | 0 / 0 / 0 | Pass@k
MathLogicQA | 0.405 | Accuracy
ruWorldTree | 0.73 / 0.73 | Accuracy / F1 macro
ruOpenBookQA | 0.623 / 0.622 | Accuracy / F1 macro
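Several rows above report two values, Accuracy and macro-averaged F1. For readers unfamiliar with the latter, here is a minimal pure-Python sketch of macro F1; it illustrates the standard definition and is not the MERA scoring code:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then take the unweighted mean."""
    labels = set(y_true) | set(y_pred)
    f1_scores = []
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1_scores.append(2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
    return sum(f1_scores) / len(f1_scores)
```

Because macro averaging weights every class equally, it can diverge noticeably from accuracy on imbalanced label sets, which is why tasks such as RCB report both numbers.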

Evaluation on open tasks:



Task name | Result | Metric
BPS | 0.978 | Accuracy
ruMMLU | 0.461 | Accuracy
SimpleAr | 0.977 | Exact match
ruHumanEval | 0.001 / 0.003 / 0.006 | Pass@k
ruHHH | 0.539 |
ruHateSpeech | 0.63 |
ruDetox | 0.136 |
ruEthics (correlation of model answers with the "Correct" / "Good" / "Ethical" labels):
Criterion | Correct | Good | Ethical
Virtue | 0.232 | 0.192 | 0.14
Law | 0.218 | 0.176 | 0.142
Moral | 0.245 | 0.207 | 0.146
Justice | 0.207 | 0.152 | 0.121
Utilitarianism | 0.153 | 0.173 | 0.109
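The ruHumanEval and ruCodeEval rows above report three Pass@k values (typically k = 1, 5, 10). A common way to compute this metric is the unbiased estimator over n generated samples, c of which pass the tests; the sketch below assumes that estimator and is not necessarily the exact MERA implementation:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased Pass@k estimate: probability that at least one of k samples
    drawn without replacement from n generations (c correct) passes."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws, so at least one passes
    return 1.0 - comb(n - c, k) / comb(n, k)
```

Sampling-based decoding (as used for ruhumaneval here, temperature 0.6) makes c a random quantity, which is why the estimator averages over draws rather than reporting a single greedy result.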

Information about the submission:

MERA version: v.1.2.0
Torch version: 2.4.0
Codebase version: 9b26db97
CUDA version: 12.1
Precision of the model weights: bfloat16
Seed: 1234
Batch size: 1
Transformers version: 4.43.2
Number of GPUs and their type: 1 x NVIDIA H100 80GB HBM3
Architecture: vllm

Team:

MERA

Name of the ML model:

Yi-1.5-9B-Chat-16K

Model size

8.8B

Model type:

Open

SFT

Architecture description:

The Yi series models adopt the same model architecture as Llama but are NOT derivatives of Llama.

Description of the training:

Yi independently built its own high-quality training datasets, efficient training pipelines, and robust training infrastructure from the ground up.

Pretrain data:

Yi-1.5 is an upgraded version of Yi (which was trained on 3T multilingual corpus). It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.

License:

apache-2.0

Inference parameters

Generation Parameters:
simplear - do_sample=false; until=["\n"]
chegeka - do_sample=false; until=["\n"]
rudetox - do_sample=false; until=["\n"]
rumultiar - do_sample=false; until=["\n"]
use - do_sample=false; until=["\n", "."]
multiq - do_sample=false; until=["\n"]
rumodar - do_sample=false; until=["\n"]
ruhumaneval - do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
rucodeeval - do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
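The per-task generation settings above can be restated as a plain Python mapping purely for readability; the dictionary layout here is hypothetical and is not the MERA configuration format:

```python
# Per-task decoding settings, transcribed from the report text above.
GREEDY = {"do_sample": False, "until": ["\n"]}

GEN_PARAMS = {
    "simplear": GREEDY,
    "chegeka": GREEDY,
    "rudetox": GREEDY,
    "rumultiar": GREEDY,
    "use": {"do_sample": False, "until": ["\n", "."]},
    "multiq": GREEDY,
    "rumodar": GREEDY,
    # The two code tasks sample at temperature 0.6 and stop at the start
    # of any new top-level statement.
    "ruhumaneval": {"do_sample": True, "temperature": 0.6,
                    "until": ["\nclass", "\ndef", "\n#", "\nif", "\nprint"]},
    "rucodeeval": {"do_sample": True, "temperature": 0.6,
                   "until": ["\nclass", "\ndef", "\n#", "\nif", "\nprint"]},
}
```

All non-code tasks decode greedily (`do_sample=false`), stopping at a newline; only USE additionally stops at a period.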

The size of the context:
simplear, bps, lcs, chegeka, mathlogicqa, parus, rcb, rudetox, ruhatespeech, rummlu, ruworldtree, ruopenbookqa, rumultiar, use, rwsd, mamuramu, multiq, rumodar, ruethics, ruhumaneval, rucodeeval, rutie - 16384
ruhhh - 12000
ruhhh, rutie - 8000

System prompt:
Реши задачу по инструкции ниже. Не давай никаких объяснений и пояснений к своему ответу. Не пиши ничего лишнего. Пиши только то, что указано в инструкции. Если по инструкции нужно решить пример, то напиши только числовой ответ без хода решения и пояснений. Если по инструкции нужно вывести букву, цифру или слово, выведи только его. Если по инструкции нужно выбрать один из вариантов ответа и вывести букву или цифру, которая ему соответствует, то выведи только эту букву или цифру, не давай никаких пояснений, не добавляй знаки препинания, только 1 символ в ответе. Если по инструкции нужно дописать код функции на языке Python, пиши сразу код, соблюдая отступы так, будто ты продолжаешь функцию из инструкции, не давай пояснений, не пиши комментарии, используй только аргументы из сигнатуры функции в инструкции, не пробуй считывать данные через функцию input. Не извиняйся, не строй диалог. Выдавай только ответ и ничего больше.

Description of the template:
{% if messages[0]['role'] == 'system' %}{% set system_message = messages[0]['content'] %}{% endif %}{% if system_message is defined %}{{ system_message }}{% endif %}{% for message in messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ '<|im_start|>user\n' + content + '<|im_end|>\n<|im_start|>assistant\n' }}{% elif message['role'] == 'assistant' %}{{ content + '<|im_end|>' + '\n' }}{% endif %}{% endfor %}
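For reference, here is a pure-Python sketch of the string this Jinja template produces for a ChatML-style exchange; `apply_chatml_template` is a hypothetical helper written for illustration, not part of the evaluation code:

```python
def apply_chatml_template(messages):
    """Mimic the Jinja chat template above: the system message (if any) is
    emitted bare, each user turn is wrapped in <|im_start|>user ... <|im_end|>
    followed by an opened assistant block, and assistant turns are closed
    with <|im_end|>."""
    out = ""
    if messages and messages[0]["role"] == "system":
        out += messages[0]["content"]
    for m in messages:
        if m["role"] == "user":
            out += ("<|im_start|>user\n" + m["content"]
                    + "<|im_end|>\n<|im_start|>assistant\n")
        elif m["role"] == "assistant":
            out += m["content"] + "<|im_end|>\n"
    return out
```

Note that the template leaves the final `<|im_start|>assistant\n` block open, so the model's completion continues the assistant turn directly.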


Ratings by subcategory

USE (Metric: Grade Norm, by subtask)
Subtask: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 8_0 8_1 8_2 8_3 8_4
Yi-1.5-9B-Chat-16K (MERA): 0.233 0.167 0.533 0.033 0 0.133 0 - 0.1 0.033 0.033 0.033 0.1 0 0.1 0.233 0 0 0.033 0 0 0.333 0.133 0.033 0 0.233 0.033 0.033 0.167 0.033 0.033
ruHHH
Model, team | Honest | Helpful | Harmless
Yi-1.5-9B-Chat-16K (MERA) | 0.574 | 0.475 | 0.569
ruMMLU subcategories
Subcategory: Anatomy Virology Astronomy Marketing Nutrition Sociology Management Philosophy Prehistory Human aging Econometrics Formal logic Global facts Jurisprudence Miscellaneous Moral disputes Business ethics Biology (college) Physics (college) Human sexuality Moral scenarios World religions Abstract algebra Medicine (college) Machine learning Medical genetics Professional law PR Security studies Chemistry (school) Computer security International law Logical fallacies Politics Clinical knowledge Conceptual physics Math (college) Biology (high school) Physics (high school) Chemistry (high school) Geography (high school) Professional medicine Electrical engineering Elementary mathematics Psychology (high school) Statistics (high school) History (high school) Math (high school) Professional accounting Professional psychology Computer science (college) World history (high school) Macroeconomics Microeconomics Computer science (high school) European history Government and politics
Yi-1.5-9B-Chat-16K (MERA): 0.311 0.416 0.48 0.705 0.474 0.627 0.573 0.502 0.497 0.507 0.456 0.405 0.38 0.556 0.487 0.514 0.59 0.431 0.367 0.405 0.258 0.368 0.25 0.422 0.393 0.41 0.375 0.583 0.641 0.33 0.64 0.686 0.515 0.616 0.442 0.449 0.39 0.526 0.391 0.399 0.596 0.346 0.455 0.531 0.539 0.417 0.608 0.385 0.401 0.389 0.38 0.633 0.526 0.525 0.69 0.612 0.503
ruDetox
Model, team | SIM (meaning preservation) | FL (fluency) | STA (style transfer accuracy)
Yi-1.5-9B-Chat-16K (MERA) | 0.517 | 0.531 | 0.608
MaMuRAMu subcategories
Subcategory: Anatomy Virology Astronomy Marketing Nutrition Sociology Management Philosophy Prehistory Gerontology Econometrics Formal logic Global facts Jurisprudence Miscellaneous Moral disputes Business ethics Biology (college) Physics (college) Human sexuality Moral scenarios World religions Abstract algebra Medicine (college) Machine learning Genetics Professional law PR Security Chemistry (college) Computer security International law Logical fallacies Politics Clinical knowledge Conceptual physics Math (college) Biology (high school) Physics (high school) Chemistry (high school) Geography (high school) Professional medicine Electrical engineering Elementary mathematics Psychology (high school) Statistics (high school) History (high school) Math (high school) Professional accounting Professional psychology Computer science (college) World history (high school) Macroeconomics Microeconomics Computer science (high school) European history Government and politics
Yi-1.5-9B-Chat-16K (MERA): 0.222 0.455 0.467 0.509 0.474 0.534 0.431 0.333 0.269 0.523 0.564 0.575 0.358 0.504 0.363 0.284 0.636 0.4 0.439 0.632 0.298 0.61 0.556 0.456 0.667 0.424 0.603 0.474 0.719 0.511 0.689 0.487 0.446 0.579 0.394 0.518 0.467 0.556 0.421 0.354 0.482 0.381 0.667 0.733 0.81 0.711 0.448 0.773 0.585 0.667 0.756 0.362 0.696 0.584 0.535 0.292 0.622
ruEthics: correlation of model answers with the "Correct", "Good", and "Ethical" labels.

Correlation with "Correct"
Model, team | Virtue | Law | Moral | Justice | Utilitarianism
Yi-1.5-9B-Chat-16K (MERA) | 0.232 | 0.218 | 0.245 | 0.207 | 0.153

Correlation with "Good"
Model, team | Virtue | Law | Moral | Justice | Utilitarianism
Yi-1.5-9B-Chat-16K (MERA) | 0.192 | 0.176 | 0.207 | 0.152 | 0.173

Correlation with "Ethical"
Model, team | Virtue | Law | Moral | Justice | Utilitarianism
Yi-1.5-9B-Chat-16K (MERA) | 0.14 | 0.142 | 0.146 | 0.121 | 0.109
ruHateSpeech (by target group)
Model, team | Women | Men | LGBT | Nationalities | Migrants | Other
Yi-1.5-9B-Chat-16K (MERA) | 0.731 | 0.6 | 0.471 | 0.676 | 0.286 | 0.525