Yi-Coder-9B-Chat

MERA. Created at 04.11.2024 18:03.
Overall result: 0.308
Place in the rating: 417
Weak tasks (place in the per-task rating):
RWSD: 516
PARus: 373
RCB: 391
ruEthics: 413
MultiQ: 287
ruWorldTree: 408
ruOpenBookQA: 406
CheGeKa: 344
ruMMLU: 428
ruHateSpeech: 318
ruDetox: 310
ruHHH: 354
ruTiE: 305
ruHumanEval: 221
USE: 296
MathLogicQA: 289
ruMultiAr: 319
SimpleAr: 308
LCS: 63
BPS: 142
ruModAr: 487
MaMuRAMu: 401
ruCodeEval: 248

Ratings for leaderboard tasks

Task name Result Metric
LCS 0.152 Accuracy
RCB 0.443 / 0.302 Accuracy / F1 macro
USE 0.103 Grade norm
RWSD 0.423 Accuracy
PARus 0.614 Accuracy
ruTiE 0.583 Accuracy
MultiQ 0.319 / 0.17 F1 / Exact match
CheGeKa 0.035 / 0.012 F1 / Exact match
ruModAr 0.017 Exact match
MaMuRAMu 0.446 Accuracy
ruMultiAr 0.189 Exact match
ruCodeEval 0.001 / 0.006 / 0.012 Pass@k
MathLogicQA 0.367 Accuracy
ruWorldTree 0.608 / 0.6 Accuracy / F1 macro
ruOpenBookQA 0.533 / 0.424 Accuracy / F1 macro
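The three Pass@k values reported for the code-generation tasks (ruCodeEval above, ruHumanEval below) are commonly pass@1 / pass@5 / pass@10. A minimal sketch of the standard unbiased pass@k estimator (function name is ours, for illustration):

```python
from math import prod

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples
    drawn from n generations (c of them correct) passes all tests."""
    if n - c < k:  # fewer than k failing samples exist, so success is certain
        return 1.0
    return 1.0 - prod((n - c - i) / (n - i) for i in range(k))
```

For example, 10 generations with exactly 1 correct give pass@1 = 0.1.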

Evaluation on open tasks:

Task name Result Metric
BPS 0.962 Accuracy
ruMMLU 0.369 Accuracy
SimpleAr 0.905 Exact match
ruHumanEval 0.01 / 0.012 / 0.012 Pass@k
ruHHH 0.545 Accuracy
ruHateSpeech 0.619 Accuracy
ruDetox 0.135 Overall average score (J)
ruEthics
Criterion Correct Good Ethical
Virtue 0.079 0.145 0.135
Law 0.047 0.142 0.139
Moral 0.086 0.163 0.145
Justice 0.055 0.137 0.111
Utilitarianism 0.086 0.133 0.139
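The ruEthics numbers are correlation coefficients between the model's binary answers and each of the three annotation labels (Correct, Good, Ethical) across five ethical criteria. Assuming the Matthews correlation coefficient is used (our assumption; the table does not name the coefficient), a self-contained sketch:

```python
import math

def matthews_corr(y_true, y_pred):
    """Matthews correlation between two binary label sequences."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Values near zero, as in the table above, indicate the model's answers are close to uncorrelated with the human labels.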

Information about the submission:

MERA version: v.1.2.0
Torch version: 2.4.0
Codebase version: 9b26db97
CUDA version: 12.1
Model weights precision: bfloat16
Seed: 1234
Batch size: 1
Transformers version: 4.44.2
Number and type of GPUs: 1 x NVIDIA H100 80GB HBM3
Architecture: vllm

Team:

MERA

Name of the ML model:

Yi-Coder-9B-Chat

Model size:

9.0B

Model type:

Open, SFT

Additional links:

https://01-ai.github.io/blog.html?post=en/2024-09-05-A-Small-but-Mighty-LLM-for-Code.md

Architecture description:

Yi-Coder-9B builds upon Yi-9B with an additional 2.4T high-quality tokens, meticulously sourced from a repository-level code corpus on GitHub and code-related data filtered from CommonCrawl.

Description of the training:

-

Pretrain data:

Continually pretrained on 2.4 trillion high-quality tokens covering 52 major programming languages.

License:

apache-2.0

Inference parameters

Generation parameters:
simplear: do_sample=false; until=["\n"]
chegeka: do_sample=false; until=["\n"]
rudetox: do_sample=false; until=["\n"]
rumultiar: do_sample=false; until=["\n"]
use: do_sample=false; until=["\n", "."]
multiq: do_sample=false; until=["\n"]
rumodar: do_sample=false; until=["\n"]
ruhumaneval: do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
rucodeeval: do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
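The until=[...] lists above are stop sequences: generation is truncated at the first occurrence of any of them. A minimal sketch of that truncation step (helper name is ours; the evaluation harness does this internally):

```python
def truncate_at_stop(text: str, stops: list[str]) -> str:
    """Cut generated text just before the earliest stop sequence, if any."""
    cut = len(text)
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

# For the code tasks, stops like "\nclass", "\ndef", "\n#", "\nif", "\nprint"
# end generation once the model starts a new top-level statement.
```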

The size of the context:
simplear, bps, lcs, chegeka, mathlogicqa, parus, rcb, rudetox, ruhatespeech, rummlu, ruworldtree, ruopenbookqa, rumultiar, use, rwsd, mamuramu, multiq, rumodar, ruethics, ruhhh, ruhumaneval, rucodeeval, rutie - 131072

System prompt (translated from Russian):
Solve the task following the instruction below. Do not give any explanations or clarifications for your answer. Do not write anything extra; write only what the instruction asks for. If the instruction asks you to solve an arithmetic problem, write only the numeric answer, without the solution steps or explanations. If the instruction asks you to output a letter, digit, or word, output only that. If the instruction asks you to choose one of the answer options and output the letter or digit corresponding to it, output only that letter or digit: no explanations, no punctuation marks, a single character in the answer. If the instruction asks you to complete the code of a Python function, write the code right away, keeping the indentation as if you are continuing the function from the instruction; give no explanations, write no comments, use only the arguments from the function signature in the instruction, and do not try to read data via the input function. Do not apologize and do not start a dialogue. Output only the answer and nothing else.

Ratings by subcategory

USE (Grade norm by exam task number):
Model, team 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 8_0 8_1 8_2 8_3 8_4
Yi-Coder-9B-Chat
MERA
0.333 0.067 0.4 0.1 0 0 0 - 0.033 0.033 0.033 0.067 0.233 0.033 0.133 0.2 0 0.033 0.033 0 0 0.067 0.067 0 0 0.233 0.067 0.167 0.1 0.033 0.133
ruHHH:
Model, team Honest Helpful Harmless
Yi-Coder-9B-Chat
MERA
0.541 0.593 0.5
ruMMLU (Accuracy by subject):
Model, team Anatomy Virology Astronomy Marketing Nutrition Sociology Management Philosophy Prehistory Human aging Econometrics Formal logic Global facts Jurisprudence Miscellaneous Moral disputes Business ethics Biology (college) Physics (college) Human Sexuality Moral scenarios World religions Abstract algebra Medicine (college) Machine learning Medical genetics Professional law PR Security studies Chemistry (college) Computer security International law Logical fallacies Politics Clinical knowledge Conceptual physics Math (college) Biology (high school) Physics (high school) Chemistry (high school) Geography (high school) Professional medicine Electrical engineering Elementary mathematics Psychology (high school) Statistics (high school) History (high school) Math (high school) Professional accounting Professional psychology Computer science (college) World history (high school) Macroeconomics Microeconomics Computer science (high school) European history Government and politics
Yi-Coder-9B-Chat
MERA
0.304 0.307 0.382 0.56 0.376 0.418 0.515 0.392 0.383 0.39 0.36 0.365 0.3 0.463 0.446 0.387 0.34 0.319 0.244 0.366 0.238 0.333 0.25 0.364 0.321 0.33 0.299 0.463 0.473 0.29 0.51 0.587 0.405 0.475 0.392 0.423 0.29 0.381 0.305 0.36 0.379 0.276 0.476 0.414 0.401 0.375 0.382 0.348 0.358 0.334 0.38 0.498 0.313 0.345 0.65 0.442 0.368
ruDetox (SIM: meaning preservation, FL: fluency, STA: style transfer accuracy):
Model, team SIM FL STA
Yi-Coder-9B-Chat
MERA
0.679 0.523 0.432
MaMuRAMu (Accuracy by subject):
Model, team Anatomy Virology Astronomy Marketing Nutrition Sociology Management Philosophy Prehistory Gerontology Econometrics Formal logic Global facts Jurisprudence Miscellaneous Moral disputes Business ethics Biology (college) Physics (college) Human sexuality Moral scenarios World religions Abstract algebra Medicine (college) Machine Learning Genetics Professional law PR Security Chemistry (college) Computer security International law Logical fallacies Politics Clinical knowledge Conceptual physics Math (college) Biology (high school) Physics (high school) Chemistry (high school) Geography (high school) Professional medicine Electrical Engineering Elementary mathematics Psychology (high school) Statistics (high school) History (high school) Math (high school) Professional Accounting Professional psychology Computer science (college) World history (high school) Macroeconomics Microeconomics Computer science (high school) European history Government and politics
Yi-Coder-9B-Chat
MERA
0.289 0.376 0.367 0.463 0.355 0.31 0.397 0.368 0.308 0.354 0.577 0.508 0.333 0.465 0.357 0.383 0.486 0.333 0.351 0.561 0.246 0.508 0.444 0.396 0.556 0.394 0.474 0.579 0.702 0.533 0.578 0.333 0.491 0.421 0.318 0.5 0.533 0.6 0.368 0.492 0.457 0.286 0.556 0.556 0.707 0.733 0.397 0.591 0.569 0.561 0.8 0.319 0.532 0.558 0.419 0.351 0.389
ruEthics, correlation with the "Correct" label:
Model, team Virtue Law Moral Justice Utilitarianism
Yi-Coder-9B-Chat
MERA
0.079 0.047 0.086 0.055 0.086

ruEthics, correlation with the "Good" label:
Model, team Virtue Law Moral Justice Utilitarianism
Yi-Coder-9B-Chat
MERA
0.145 0.142 0.163 0.137 0.133

ruEthics, correlation with the "Ethical" label:
Model, team Virtue Law Moral Justice Utilitarianism
Yi-Coder-9B-Chat
MERA
0.135 0.139 0.145 0.111 0.139
ruHateSpeech (Accuracy by target group):
Model, team Women Men LGBT Nationalities Migrants Other
Yi-Coder-9B-Chat
MERA
0.648 0.543 0.588 0.595 0.286 0.672