Qwen1.5-MoE-A2.7B-Chat

MERA · Created 22.09.2024 21:41

Overall result: 0.326
Place in the rating: 391
Weak tasks (place in the rating per task):
RWSD: 539
PARus: 404
RCB: 401
MultiQ: 401
ruWorldTree: 339
ruOpenBookQA: 304
CheGeKa: 338
ruMMLU: 335
ruHateSpeech: 385
ruDetox: 333
ruHHH: 507
ruTiE: 375
ruHumanEval: 317
USE: 385
MathLogicQA: 359
ruMultiAr: 293
SimpleAr: 253
LCS: 298
BPS: 424
ruModAr: 431
MaMuRAMu: 306

Ratings for leaderboard tasks


Task name | Result | Metric
LCS | 0.094 | Accuracy
RCB | 0.429 / 0.401 | Accuracy / F1 macro
USE | 0.069 | Grade norm
RWSD | 0.377 | Accuracy
PARus | 0.56 | Accuracy
ruTiE | 0.532 | Accuracy
MultiQ | 0.232 / 0.132 | F1 / Exact match
CheGeKa | 0.038 / 0.019 | F1 / Exact match
ruModAr | 0.136 | Exact match
MaMuRAMu | 0.547 | Accuracy
ruMultiAr | 0.22 | Exact match
ruCodeEval | 0 / 0 / 0 | Pass@k
MathLogicQA | 0.338 | Accuracy
ruWorldTree | 0.724 / 0.724 | Accuracy / F1 macro
ruOpenBookQA | 0.665 / 0.665 | Accuracy / F1 macro
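ruCodeEval (and ruHumanEval) report Pass@k. A minimal sketch of the standard unbiased estimator, assuming n sampled generations per problem of which c pass the tests; this is the usual formulation from code-generation benchmarks, not code taken from the MERA harness:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations with c correct,
    passes. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer failures than draws: some draw must be correct
    return 1.0 - math.prod((n - c - i) / (n - i) for i in range(k))
```

With all three reported ruCodeEval values at 0, none of the sampled generations for any problem passed its tests.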

Evaluation on open tasks:



Task name | Result | Metric
BPS | 0.596 | Accuracy
ruMMLU | 0.447 | Accuracy
SimpleAr | 0.938 | Exact match
ruHumanEval | 0.002 / 0.006 / 0.006 | Pass@k
ruHHH | 0.461 |
ruHateSpeech | 0.551 |
ruDetox | 0.125 |
ruEthics
Criterion | Correct | Good | Ethical
Virtue | -0 | 0.013 | -0.016
Law | -0.004 | 0.023 | -0.036
Moral | 0.009 | 0.007 | -0.024
Justice | -0.017 | -0.031 | -0.03
Utilitarianism | -0.032 | -0.009 | 0.01
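The ruEthics scores are correlations between the model's binary answers and the five ethics criteria. Assuming the harness uses the Matthews correlation coefficient (a common choice for binary agreement; treat this as an assumption, not something stated on this page), a minimal sketch:

```python
import math

def mcc(preds, labels):
    """Matthews correlation coefficient between two 0/1 sequences.
    Returns 0.0 when the coefficient is undefined (a degenerate column)."""
    tp = sum(p == 1 and l == 1 for p, l in zip(preds, labels))
    tn = sum(p == 0 and l == 0 for p, l in zip(preds, labels))
    fp = sum(p == 1 and l == 0 for p, l in zip(preds, labels))
    fn = sum(p == 0 and l == 1 for p, l in zip(preds, labels))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Values near zero, as in the table above, mean the model's answers are essentially uncorrelated with every criterion.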

Information about the submission:

MERA version: v.1.2.0
Torch version: 2.4.0
Codebase version: 9b26db97
CUDA version: 12.1
Model weights precision: bfloat16
Seed: 1234
Batch size: 1
Transformers version: 4.43.2
GPUs: 2 x NVIDIA H100 80GB HBM3
Inference engine: vllm

Team:

MERA

Name of the ML model:

Qwen1.5-MoE-A2.7B-Chat

Model size:

14.3B

Model type:

Open

SFT

Additional links:

https://qwenlm.github.io/blog/qwen-moe/

Architecture description:

Qwen1.5-MoE is a transformer-based MoE decoder-only language model pretrained on a large amount of data.
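The routing idea behind an MoE decoder can be illustrated with a toy top-k router: every expert gets a score for the current input, only the best TOP_K experts run, and their outputs are mixed by softmax weight. This is a pure-Python sketch with scalar stand-ins for experts, not the actual Qwen1.5-MoE layer (which routes among many full feed-forward experts, so that only the ~2.7B "activated" parameters of the 14.3B total run per token):

```python
import math
import random

random.seed(0)
N_EXPERTS, TOP_K = 4, 2

# Toy scalar "experts" standing in for full FFN experts.
experts = [lambda x, a=a: math.tanh(a * x) for a in (0.5, 1.0, 1.5, 2.0)]
# Router: one learned score per expert (randomly initialised here).
router = [random.uniform(-1.0, 1.0) for _ in range(N_EXPERTS)]

def moe(x):
    """Score experts, keep the top-k, and mix their outputs by softmax weight."""
    scores = [r * x for r in router]
    top = sorted(range(N_EXPERTS), key=scores.__getitem__)[-TOP_K:]
    m = max(scores[i] for i in top)
    exps = [math.exp(scores[i] - m) for i in top]
    z = sum(exps)
    return sum((e / z) * experts[i](x) for e, i in zip(exps, top))
```

The point of the design is that compute per token scales with TOP_K, not with N_EXPERTS, while total capacity scales with N_EXPERTS.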

Description of the training:

The model was pretrained with a large amount of data, and then it was post-trained with both supervised finetuning and direct preference optimization.

Pretrain data:

-

License:

tongyi-qianwen

Inference parameters

Generation parameters:
simplear: do_sample=false; until=["\n"]
chegeka: do_sample=false; until=["\n"]
rudetox: do_sample=false; until=["\n"]
rumultiar: do_sample=false; until=["\n"]
use: do_sample=false; until=["\n", "."]
multiq: do_sample=false; until=["\n"]
rumodar: do_sample=false; until=["\n"]
ruhumaneval: do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
rucodeeval: do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
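The per-task generation parameters above can be read as a config mapping. The sketch below (hypothetical helper, not the MERA harness) shows how do_sample/until/temperature translate into common sampler arguments: greedy decoding when do_sample is false, temperature sampling otherwise, with `until` strings as stop sequences:

```python
# Hypothetical restatement of the per-task generation settings listed above.
GEN_PARAMS = {
    "simplear":    {"do_sample": False, "until": ["\n"]},
    "chegeka":     {"do_sample": False, "until": ["\n"]},
    "rudetox":     {"do_sample": False, "until": ["\n"]},
    "rumultiar":   {"do_sample": False, "until": ["\n"]},
    "use":         {"do_sample": False, "until": ["\n", "."]},
    "multiq":      {"do_sample": False, "until": ["\n"]},
    "rumodar":     {"do_sample": False, "until": ["\n"]},
    "ruhumaneval": {"do_sample": True, "temperature": 0.6,
                    "until": ["\nclass", "\ndef", "\n#", "\nif", "\nprint"]},
    "rucodeeval":  {"do_sample": True, "temperature": 0.6,
                    "until": ["\nclass", "\ndef", "\n#", "\nif", "\nprint"]},
}

def sampling_kwargs(task: str) -> dict:
    """Map one task entry onto generic sampler arguments:
    temperature 0.0 (greedy) when do_sample is false, and the
    `until` strings as stop sequences."""
    p = GEN_PARAMS[task]
    return {
        "temperature": p.get("temperature", 0.0) if p["do_sample"] else 0.0,
        "stop": p["until"],
    }
```

The code tasks sample at temperature 0.6 (needed for meaningful Pass@k over multiple generations); all other tasks decode greedily.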

The size of the context:
simplear, bps, lcs, chegeka, mathlogicqa, parus, rcb, rudetox, ruhatespeech, rummlu, ruworldtree, ruopenbookqa, rumultiar, use, rwsd, mamuramu, multiq, rumodar, ruethics, ruhhh, ruhumaneval, rucodeeval: 32768
rutie: 2000
rutie: 5000

System prompt:
Solve the task according to the instruction below. Do not give any explanations or clarifications for your answer. Do not write anything extra. Write only what the instruction asks for. If the instruction asks to solve a problem, write only the numeric answer without the solution steps or explanations. If the instruction asks to output a letter, digit, or word, output only it. If the instruction asks to choose one of the answer options and output the letter or digit corresponding to it, output only that letter or digit, give no explanations, add no punctuation, only 1 character in the answer. If the instruction asks to complete Python function code, write the code right away, keeping the indentation as if continuing the function from the instruction, give no explanations, write no comments, use only the arguments from the function signature in the instruction, and do not try to read data via the input function. Do not apologise, do not build a dialogue. Output only the answer and nothing else.


Ratings by subcategory

USE subcategories (Metric: Grade norm)
Model, team 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 8_0 8_1 8_2 8_3 8_4
Qwen1.5-MoE-A2.7B-Chat
MERA
0.3 0.1 0.233 0.167 0 0.2 0 - 0.033 0 0 0 0.033 0 0.067 0.317 0 0 0 0 0 0 0 0 0 0.125 0 0.033 0.033 0 0
ruHHH
Model, team Honest Helpful Harmless
Qwen1.5-MoE-A2.7B-Chat
MERA
0.557 0.424 0.397
ruMMLU
Model, team Anatomy Virology Astronomy Marketing Nutrition Sociology Management Philosophy Prehistory Human aging Econometrics Formal logic Global facts Jurisprudence Miscellaneous Moral disputes Business ethics Biology (college) Physics (college) Human Sexuality Moral scenarios World religions Abstract algebra Medicine (college) Machine learning Medical genetics Professional law PR Security studies Chemistry (school) Computer security International law Logical fallacies Politics Clinical knowledge Conceptual physics Math (college) Biology (high school) Physics (high school) Chemistry (high school) Geography (high school) Professional medicine Electrical engineering Elementary mathematics Psychology (high school) Statistics (high school) History (high school) Math (high school) Professional accounting Professional psychology Computer science (college) World history (high school) Macroeconomics Microeconomics Computer science (high school) European history Government and politics
Qwen1.5-MoE-A2.7B-Chat
MERA
0.341 0.398 0.474 0.701 0.497 0.667 0.583 0.518 0.531 0.516 0.254 0.27 0.3 0.528 0.557 0.497 0.55 0.41 0.244 0.504 0.265 0.567 0.25 0.428 0.375 0.48 0.351 0.565 0.535 0.31 0.6 0.678 0.497 0.626 0.472 0.462 0.29 0.555 0.358 0.394 0.571 0.327 0.441 0.379 0.558 0.315 0.529 0.352 0.34 0.374 0.31 0.599 0.431 0.496 0.57 0.57 0.544
ruDetox
Model, team SIM FL STA
Qwen1.5-MoE-A2.7B-Chat
MERA
0.613 0.549 0.469
MaMuRAMu
Model, team Anatomy Virology Astronomy Marketing Nutrition Sociology Management Philosophy Pre-History Gerontology Econometrics Formal logic Global facts Jurisprudence Miscellaneous Moral disputes Business ethics Biology (college) Physics (college) Human sexuality Moral scenarios World religions Abstract algebra Medicine (college) Machine Learning Genetics Professional law PR Security Chemistry (college) Computer security International law Logical fallacies Politics Clinical knowledge Conceptual physics Math (college) Biology (high school) Physics (high school) Chemistry (high school) Geography (high school) Professional medicine Electrical Engineering Elementary mathematics Psychology (high school) Statistics (high school) History (high school) Math (high school) Professional Accounting Professional psychology Computer science (college) World history (high school) Macroeconomics Microeconomics Computer science (high school) European history Government and politics
Qwen1.5-MoE-A2.7B-Chat
MERA
0.378 0.554 0.383 0.583 0.553 0.621 0.414 0.561 0.635 0.492 0.628 0.492 0.425 0.581 0.48 0.407 0.579 0.511 0.316 0.579 0.263 0.678 0.444 0.503 0.578 0.576 0.654 0.474 0.842 0.644 0.733 0.564 0.446 0.684 0.439 0.571 0.267 0.778 0.404 0.477 0.584 0.556 0.644 0.4 0.793 0.778 0.707 0.568 0.646 0.737 0.756 0.609 0.595 0.519 0.349 0.444 0.689
ruEthics, correlation with each question:

Correct
Model, team Virtue Law Moral Justice Utilitarianism
Qwen1.5-MoE-A2.7B-Chat
MERA
-0 -0.004 0.009 -0.017 -0.032

Good
Model, team Virtue Law Moral Justice Utilitarianism
Qwen1.5-MoE-A2.7B-Chat
MERA
0.013 0.023 0.007 -0.031 -0.009

Ethical
Model, team Virtue Law Moral Justice Utilitarianism
Qwen1.5-MoE-A2.7B-Chat
MERA
-0.016 -0.036 -0.024 -0.03 0.01
ruHateSpeech
Model, team Women Men LGBT Nationalities Migrants Other
Qwen1.5-MoE-A2.7B-Chat
MERA
0.611 0.486 0.471 0.649 0.143 0.492