Qwen2-1.5B-Instruct

MERA Created at 22.09.2024 21:50
Overall result: 0.319
Place in the rating: 400
Weak tasks (place in the rating for each task):
RWSD: 501
PARus: 376
RCB: 441
ruEthics: 446
MultiQ: 366
ruWorldTree: 362
ruOpenBookQA: 346
CheGeKa: 400
ruMMLU: 358
ruHateSpeech: 377
ruDetox: 323
ruHHH: 524
ruTiE: 356
ruHumanEval: 348
USE: 330
MathLogicQA: 405
ruMultiAr: 342
SimpleAr: 275
LCS: 424
BPS: 387
ruModAr: 422
MaMuRAMu: 349
ruCodeEval: 247

Ratings for leaderboard tasks


Task name: result (metric)
LCS: 0.07 (Accuracy)
RCB: 0.395 / 0.216 (Accuracy / F1 macro)
USE: 0.084 (Grade norm)
RWSD: 0.442 (Accuracy)
PARus: 0.612 (Accuracy)
ruTiE: 0.549 (Accuracy)
MultiQ: 0.259 / 0.169 (F1 / Exact match)
CheGeKa: 0.022 / 0.007 (F1 / Exact match)
ruModAr: 0.157 (Exact match)
MaMuRAMu: 0.508 (Accuracy)
ruMultiAr: 0.178 (Exact match)
ruCodeEval: 0.001 / 0.003 / 0.006 (Pass@k)
MathLogicQA: 0.314 (Accuracy)
ruWorldTree: 0.697 / 0.695 (Accuracy / F1 macro)
ruOpenBookQA: 0.633 / 0.632 (Accuracy / F1 macro)

Evaluation on open tasks:



Task name: result (metric)
BPS: 0.695 (Accuracy)
ruMMLU: 0.43 (Accuracy)
SimpleAr: 0.928 (Exact match)
ruHumanEval: 0.001 / 0.003 / 0.006 (Pass@k)
ruHHH: 0.438
ruHateSpeech: 0.555
ruDetox: 0.129
ruEthics (by criterion, for the questions Correct / Good / Ethical):
                Correct  Good   Ethical
Virtue          0.035    0.03   0.036
Law             0.047    0.069  0.051
Moral           0.046    0.043  0.027
Justice         0.018    0.054  0.034
Utilitarianism  0.025    0.024  0.032
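
The three numbers reported for ruHumanEval and ruCodeEval above are pass@k scores for increasing k. As a reference point only, the sketch below shows the commonly used unbiased pass@k estimator from the original HumanEval paper; it is not a claim about MERA's exact implementation, and the sample counts in the usage lines are hypothetical.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    completions drawn from n generations (c of them passing) is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical numbers for illustration: 10 samples per task, 1 passing.
print(pass_at_k(n=10, c=1, k=1))   # 0.1
print(pass_at_k(n=10, c=1, k=10))  # 1.0
```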

Information about the submission:

MERA version
v.1.2.0
Torch Version
2.4.0
Codebase version
9b26db97
CUDA version
12.1
Precision of the model weights
bfloat16
Seed
1234
Batch size
1
Transformers version
4.43.2
Number and type of GPUs
1 x NVIDIA H100 80GB HBM3
Architecture
vllm

Team:

MERA

Name of the ML model:

Qwen2-1.5B-Instruct

Model size

1.5B

Model type:

Open

SFT

Additional links:

https://arxiv.org/pdf/2407.10671

Architecture description:

Qwen2-1.5B-Instruct is based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped query attention (GQA), RoPE, and RMSNorm.
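
These architectural choices can be checked against the published Hugging Face configuration; the sketch below only reads the relevant Qwen2Config fields (field names come from the transformers library, and no specific values are asserted here):

```python
from transformers import AutoConfig

# Inspect the published config for the features listed above.
# Field names follow transformers' Qwen2Config; values are whatever
# the checkpoint ships with.
cfg = AutoConfig.from_pretrained("Qwen/Qwen2-1.5B-Instruct")

print(cfg.hidden_act)           # gate activation of the SwiGLU MLP
print(cfg.num_attention_heads)  # query heads
print(cfg.num_key_value_heads)  # fewer KV heads => grouped query attention
print(cfg.rope_theta)           # RoPE base frequency
print(cfg.rms_norm_eps)         # RMSNorm epsilon
```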

Description of the training:

SFT and RLHF on a mix of manually and synthetically annotated samples, after pretraining with a next-token prediction objective.

Pretrain data:

The post-training data primarily consists of two components: demonstration data $D = \{(x_i, y_i)\}$ and preference data $P = \{(x_i, y_i^+, y_i^-)\}$, where $x_i$ represents the instruction, $y_i$ represents a satisfactory response, and $y_i^+$ and $y_i^-$ are two responses to $x_i$, with $y_i^+$ being the preferred choice over $y_i^-$. The set $D$ is utilized in SFT, whereas $P$ is employed in RLHF. We have assembled an extensive instruction dataset featuring more than 500,000 examples that cover skills such as instruction following, coding, mathematics, logical reasoning, role-playing, multilingualism, and safety.
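
Purely as an illustration of the two components (the concrete records are not published here, and the field names below are hypothetical), D and P can be thought of as:

```python
from dataclasses import dataclass

@dataclass
class DemonstrationExample:   # element of D, used for SFT
    instruction: str          # x_i
    response: str             # y_i, a satisfactory response

@dataclass
class PreferenceExample:      # element of P, used for RLHF
    instruction: str          # x_i
    chosen: str               # y_i^+, preferred response
    rejected: str             # y_i^-, rejected response

# Toy records, not taken from the actual training mix.
d = DemonstrationExample("Translate 'hello' into French.", "bonjour")
p = PreferenceExample("2 + 2 = ?", chosen="4", rejected="5")
```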

License:

Apache-2.0

Inference parameters

Generation Parameters:
simplear - do_sample=false;until=["\n"]
chegeka - do_sample=false;until=["\n"]
rudetox - do_sample=false;until=["\n"]
rumultiar - do_sample=false;until=["\n"]
use - do_sample=false;until=["\n","."]
multiq - do_sample=false;until=["\n"]
rumodar - do_sample=false;until=["\n"]
ruhumaneval - do_sample=true;until=["\nclass","\ndef","\n#","\nif","\nprint"];temperature=0.6
rucodeeval - do_sample=true;until=["\nclass","\ndef","\n#","\nif","\nprint"];temperature=0.6
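
Since the submission lists vllm as the inference architecture, the strings above map naturally onto vLLM sampling settings: do_sample=false corresponds to greedy decoding (temperature 0) and until=[...] to stop sequences. A minimal sketch under that assumption (illustrative, not MERA's evaluation harness):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2-1.5B-Instruct", dtype="bfloat16")

# Greedy decoding stopped at a newline, as used for e.g. simplear or chegeka.
greedy_until_newline = SamplingParams(temperature=0.0, stop=["\n"])

# Sampling at temperature 0.6 with code-oriented stop strings,
# as used for ruhumaneval and rucodeeval.
code_sampling = SamplingParams(
    temperature=0.6,
    stop=["\nclass", "\ndef", "\n#", "\nif", "\nprint"],
)

outputs = llm.generate(["2 + 3 ="], greedy_until_newline)
print(outputs[0].outputs[0].text)
```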

The size of the context:
simplear, bps, lcs, chegeka, mathlogicqa, parus, rcb, rudetox, ruhatespeech, ruworldtree, ruopenbookqa, rumultiar, use, rwsd, mamuramu, multiq, rumodar, ruethics, ruhumaneval, rucodeeval, rummlu, ruhhh - 32768
rutie - 5000
rutie - 10000

System prompt:
Solve the task according to the instruction below. Do not give any explanations or comments on your answer. Do not write anything extra. Write only what the instruction asks for. If the instruction asks you to solve an arithmetic problem, write only the numeric answer, without the solution steps or explanations. If the instruction asks you to output a letter, a digit, or a word, output only that. If the instruction asks you to choose one of the answer options and output the letter or digit corresponding to it, output only that letter or digit; do not give any explanations, do not add punctuation, only 1 character in the answer. If the instruction asks you to complete the code of a Python function, write the code right away, keeping the indentation as if you were continuing the function from the instruction; do not give explanations, do not write comments, use only the arguments from the function signature in the instruction, and do not try to read data via the input function. Do not apologize and do not start a dialogue. Output only the answer and nothing more.

Description of the template:
{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n' }}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}
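
The same template ships with the model's tokenizer, so prompts can be assembled with apply_chat_template; the sketch below combines it with a truncated version of the system prompt above. It assumes the standard transformers API and the hosted Qwen/Qwen2-1.5B-Instruct tokenizer, nothing MERA-specific:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")

messages = [
    {"role": "system", "content": "Solve the task according to the instruction below. ..."},
    {"role": "user", "content": "2 + 2 = ?"},
]

# Renders the ChatML-style template shown above and appends the
# assistant header so the model starts generating the answer.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```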


Ratings by subcategory

All values below are for Qwen2-1.5B-Instruct (team MERA).

USE (Metric: Grade norm), results by subcategory:
1: 0.2; 2: 0.067; 3: 0.333; 4: 0.233; 5: 0; 6: 0.133; 7: 0; 8: -; 9: 0; 10: 0; 11: 0; 12: 0.033; 13: 0; 14: 0; 15: 0.133; 16: 0.25; 17: 0.033; 18: 0.033; 19: 0; 20: 0.033; 21: 0; 22: 0.033; 23: 0.067; 24: 0; 25: 0; 26: 0.075; 8_0: 0.133; 8_1: 0.1; 8_2: 0.2; 8_3: 0.133; 8_4: 0.167
ruHHH, results by category:
Honest: 0.426; Helpful: 0.492; Harmless: 0.397
ruMMLU, accuracy by subject:
Anatomy: 0.407; Virology: 0.398; Astronomy: 0.408; Marketing: 0.692; Nutrition: 0.523; Sociology: 0.642;
Management: 0.563; Philosophy: 0.489; Prehistory: 0.451; Human aging: 0.489; Econometrics: 0.272; Formal logic: 0.349;
Global facts: 0.32; Jurisprudence: 0.556; Miscellaneous: 0.497; Moral disputes: 0.448; Business ethics: 0.46; Biology (college): 0.354;
Physics (college): 0.367; Human sexuality: 0.534; Moral scenarios: 0.238; World religions: 0.538; Abstract algebra: 0.4; Medicine (college): 0.462;
Machine learning: 0.375; Medical genetics: 0.46; Professional law: 0.336; PR: 0.481; Security studies: 0.543; Chemistry (college): 0.4;
Computer security: 0.57; International law: 0.628; Logical fallacies: 0.479; Politics: 0.606; Clinical knowledge: 0.479; Conceptual physics: 0.427;
Math (college): 0.33; Biology (high school): 0.477; Physics (high school): 0.298; Chemistry (high school): 0.34; Geography (high school): 0.576; Professional medicine: 0.331;
Electrical engineering: 0.517; Elementary mathematics: 0.363; Psychology (high school): 0.545; Statistics (high school): 0.333; History (high school): 0.48; Math (high school): 0.289;
Professional accounting: 0.333; Professional psychology: 0.389; Computer science (college): 0.37; World history (high school): 0.57; Macroeconomics: 0.459; Microeconomics: 0.424;
Computer science (high school): 0.48; European history: 0.545; Government and politics: 0.446
ruDetox, results by submetric:
SIM (similarity): 0.657; FL (fluency): 0.633; STA (style transfer accuracy): 0.392
MaMuRAMu, accuracy by subject:
Anatomy: 0.422; Virology: 0.485; Astronomy: 0.317; Marketing: 0.509; Nutrition: 0.487; Sociology: 0.517;
Management: 0.466; Philosophy: 0.439; Prehistory: 0.423; Gerontology: 0.492; Econometrics: 0.551; Formal logic: 0.517;
Global facts: 0.392; Jurisprudence: 0.589; Miscellaneous: 0.398; Moral disputes: 0.432; Business ethics: 0.579; Biology (college): 0.4;
Physics (college): 0.386; Human sexuality: 0.561; Moral scenarios: 0.281; World religions: 0.542; Abstract algebra: 0.422; Medicine (college): 0.533;
Machine learning: 0.556; Genetics: 0.561; Professional law: 0.551; PR: 0.474; Security: 0.772; Chemistry (college): 0.556;
Computer security: 0.756; International law: 0.59; Logical fallacies: 0.455; Politics: 0.491; Clinical knowledge: 0.439; Conceptual physics: 0.482;
Math (college): 0.4; Biology (high school): 0.689; Physics (high school): 0.316; Chemistry (high school): 0.538; Geography (high school): 0.518; Professional medicine: 0.492;
Electrical engineering: 0.6; Elementary mathematics: 0.6; Psychology (high school): 0.793; Statistics (high school): 0.689; History (high school): 0.517; Math (high school): 0.386;
Professional accounting: 0.585; Professional psychology: 0.667; Computer science (college): 0.6; World history (high school): 0.406; Macroeconomics: 0.709; Microeconomics: 0.455;
Computer science (high school): 0.326; European history: 0.421; Government and politics: 0.644
ruEthics, results by question (Correct / Good / Ethical) and ethical criterion:
                Correct  Good   Ethical
Virtue          0.035    0.03   0.036
Law             0.047    0.069  0.051
Moral           0.046    0.043  0.027
Justice         0.018    0.054  0.034
Utilitarianism  0.025    0.024  0.032
ruHateSpeech, results by target group:
Women: 0.565; Men: 0.486; LGBT: 0.588; Nationalities: 0.514; Migrants: 0.571; Other: 0.59