Qwen2.5 1.5B Instruct

НГУ · Created at 23.10.2024 17:34
Overall result: 0.358
Place in the rating: 323
Weak tasks:
RWSD: 373
PARus: 394
RCB: 424
ruEthics: 421
MultiQ: 330
ruWorldTree: 299
ruOpenBookQA: 294
CheGeKa: 399
ruMMLU: 309
ruHateSpeech: 464
ruDetox: 289
ruHHH: 387
ruTiE: 266
ruHumanEval: 252
USE: 358
MathLogicQA: 234
ruMultiAr: 252
SimpleAr: 260
LCS: 130
BPS: 409
ruModAr: 358
MaMuRAMu: 305
ruCodeEval: 207

Ratings for leaderboard tasks


Task name | Result | Metric
LCS | 0.13 | Accuracy
RCB | 0.404 / 0.248 | Accuracy / F1 macro
USE | 0.073 | Grade norm
RWSD | 0.496 | Accuracy
PARus | 0.564 | Accuracy
ruTiE | 0.616 | Accuracy
MultiQ | 0.275 / 0.188 | F1 / Exact match
CheGeKa | 0.021 / 0.012 | F1 / Exact match
ruModAr | 0.289 | Exact match
MaMuRAMu | 0.537 | Accuracy
ruMultiAr | 0.241 | Exact match
ruCodeEval | 0.003 / 0.015 / 0.03 | Pass@k
MathLogicQA | 0.401 | Accuracy
ruWorldTree | 0.758 / 0.758 | Accuracy / F1 macro
ruOpenBookQA | 0.668 / 0.669 | Accuracy / F1 macro
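The Pass@k rows (ruCodeEval, ruHumanEval) report three values at increasing k; the exact k values are not stated on this page. Pass@k is conventionally computed with the unbiased estimator 1 - C(n-c, k)/C(n, k) over n generated samples of which c pass the tests; a minimal sketch:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn without replacement from n generations (c correct) passes."""
    if n - c < k:
        # fewer than k incorrect samples exist, so every draw of k contains a pass
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

The estimator averages this quantity over all problems in the benchmark.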

Evaluation on open tasks:



Task name | Result | Metric
BPS | 0.619 | Accuracy
ruMMLU | 0.455 | Accuracy
SimpleAr | 0.932 | Exact match
ruHumanEval | 0.006 / 0.012 / 0.012 | Pass@k
ruHHH | 0.517
ruHateSpeech | 0.502
ruDetox | 0.142
ruEthics
Criterion | Correct | Good | Ethical
Virtue | 0.067 | 0.11 | 0.072
Law | 0.111 | 0.141 | 0.091
Moral | 0.084 | 0.113 | 0.081
Justice | 0.054 | 0.089 | 0.087
Utilitarianism | 0.047 | 0.071 | 0.041

Information about the submission:

MERA version: v.1.2.0
Torch version: 2.3.1
Codebase version: 430295f
CUDA version: 11.8
Model weights precision: float32
Seed: 1234
Batch size: 1
Transformers version: 4.41.2
Number of GPUs and their type: 1 x NVIDIA A100 80GB PCIe
Architecture: hf

Team: НГУ
Name of the ML model: Qwen2.5 1.5B Instruct
Model size: 1.5B
Model type: open, SFT

Architecture description:

Qwen2.5 1.5B Instruct is a decoder-only language model with 1.5B parameters. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped-query attention, and other refinements. The tokenizer has also been improved for better coverage of multiple natural languages and of code. Qwen2.5 brings a number of improvements over Qwen2.
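The grouped-query attention mentioned above lets several query heads share one key/value head, shrinking the KV cache at inference time. A minimal NumPy sketch of the mechanism (head counts and dimensions here are illustrative, not the model's actual configuration):

```python
import numpy as np

def grouped_query_attention(q, k, v, n_groups):
    """q: (n_q_heads, T, d); k, v: (n_kv_heads, T, d).
    Each consecutive group of n_groups query heads shares one key/value head."""
    n_q_heads, T, d = q.shape
    n_kv_heads = k.shape[0]
    assert n_q_heads == n_kv_heads * n_groups
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // n_groups  # the shared key/value head for this query head
        scores = q[h] @ k[kv].T / np.sqrt(d)
        # causal mask: position t may only attend to positions <= t
        scores += np.triu(np.full((T, T), -np.inf), k=1)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[h] = weights @ v[kv]
    return out
```

With n_groups=1 this reduces to standard multi-head attention; larger groups trade a little quality for a proportionally smaller KV cache.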

Description of the training:

The model was pretrained on a large amount of data and then post-trained with both supervised fine-tuning and direct preference optimization (DPO).
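The DPO stage mentioned above optimizes the policy directly from preference pairs, without a separate reward model. A minimal sketch of the per-pair DPO loss (function name and scalar inputs are illustrative; in practice the log-probabilities are summed over response tokens and batched):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair: inputs are log-probabilities of the
    chosen/rejected responses under the trained policy and a frozen reference."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(margin)), written stably as log(1 + exp(-margin))
    return math.log1p(math.exp(-margin))
```

The loss is log 2 when the policy matches the reference and shrinks as the policy assigns relatively more probability to the chosen response.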

Pretrain data:

The model was pretrained on a large amount of data in English, Chinese, and over 27 additional languages, including Russian. Pretraining was done with a context length of 32K tokens.

License:

Apache 2.0

Inference parameters

Generation Parameters:
simplear - do_sample=false; until=["\n"]
chegeka - do_sample=false; until=["\n"]
rudetox - do_sample=false; until=["\n"]
rumultiar - do_sample=false; until=["\n"]
use - do_sample=false; until=["\n", "."]
multiq - do_sample=false; until=["\n"]
rumodar - do_sample=false; until=["\n"]
ruhumaneval - do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
rucodeeval - do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
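The `until` lists act as stop sequences: the generated text is truncated at the first occurrence of any of the listed strings. A minimal sketch of that truncation (the function name is illustrative, not part of the evaluation code):

```python
def truncate_at_stop(text, stop_sequences):
    """Cut generated text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```

For the code tasks, stopping at "\nclass", "\ndef", etc. keeps only the completion of the current function body.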

The size of the context:
simplear, chegeka, rudetox, rumultiar, use, multiq, rumodar, ruhumaneval, rucodeeval - 32768

System prompt (translated from Russian):
Solve the task according to the instruction below. Do not give any explanations or comments on your answer. Do not write anything superfluous. Write only what the instruction asks for. If the instruction asks to solve a problem, write only the numerical answer without the solution steps or explanations. If the instruction asks to output a letter, digit, or word, output only it. If the instruction asks to choose one of the answer options and output the letter or digit corresponding to it, output only that letter or digit, give no explanations, add no punctuation, only 1 character in the answer. If the instruction asks to complete the code of a Python function, write the code right away, keeping the indentation as if you are continuing the function from the instruction, give no explanations, write no comments, use only the arguments from the function signature in the instruction, and do not try to read data via the input function. Do not apologize and do not start a dialogue. Output only the answer and nothing else.

Description of the template:
{%- if tools %}
    {{- '<|im_start|>system\n' }}
    {%- if messages[0]['role'] == 'system' %}
        {{- messages[0]['content'] }}
    {%- else %}
        {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}
    {%- endif %}
    {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
    {%- if messages[0]['role'] == 'system' %}
        {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
    {%- else %}
        {{- '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n' }}
    {%- endif %}
{%- endif %}
{%- for message in messages %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
        {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {{- '<|im_start|>' + message.role }}
        {%- if message.content %}
            {{- '\n' + message.content }}
        {%- endif %}
        {%- for tool_call in message.tool_calls %}
            {%- if tool_call.function is defined %}
                {%- set tool_call = tool_call.function %}
            {%- endif %}
            {{- '\n<tool_call>\n{"name": "' }}
            {{- tool_call.name }}
            {{- '", "arguments": ' }}
            {{- tool_call.arguments | tojson }}
            {{- '}\n</tool_call>' }}
        {%- endfor %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- message.content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
{%- endif %}
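When no tools are passed, the template above reduces to plain ChatML with a default system message. A minimal sketch of that rendering (the helper name is illustrative; tool-call branches are omitted for brevity):

```python
DEFAULT_SYSTEM = "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."

def render_chatml(messages, add_generation_prompt=True):
    """Render messages into the ChatML format produced by the template's
    no-tools branch: <|im_start|>role\\ncontent<|im_end|> blocks."""
    if not messages or messages[0]["role"] != "system":
        messages = [{"role": "system", "content": DEFAULT_SYSTEM}] + messages
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        out.append("<|im_start|>assistant\n")
    return "".join(out)
```

The trailing `<|im_start|>assistant\n` is what cues the model to produce its reply.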


Ratings by subcategory

USE subcategories (metric: Grade Norm), Qwen2.5 1.5B Instruct (НГУ):
1: 0.1, 2: 0.1, 3: 0.433, 4: 0.2, 5: 0.033, 6: 0.033, 7: 0, 8: -, 9: 0, 10: 0,
11: 0, 12: 0, 13: 0, 14: 0, 15: 0.133, 16: 0.117, 17: 0.033, 18: 0.033, 19: 0.033, 20: 0.033,
21: 0, 22: 0, 23: 0, 24: 0, 25: 0.033, 26: 0.1, 8_0: 0.1, 8_1: 0.1, 8_2: 0.333, 8_3: 0.1, 8_4: 0
ruHHH:
Model, team | Honest | Helpful | Harmless
Qwen2.5 1.5B Instruct, НГУ | 0.508 | 0.542 | 0.5
ruMMLU subcategories, Qwen2.5 1.5B Instruct (НГУ):
Anatomy: 0.422, Virology: 0.44, Astronomy: 0.566, Marketing: 0.709, Nutrition: 0.529,
Sociology: 0.667, Management: 0.612, Philosophy: 0.524, Prehistory: 0.451, Human aging: 0.525,
Econometrics: 0.263, Formal logic: 0.468, Global facts: 0.27, Jurisprudence: 0.667, Miscellaneous: 0.524,
Moral disputes: 0.497, Business ethics: 0.54, Biology (college): 0.375, Physics (college): 0.444, Human sexuality: 0.496,
Moral scenarios: 0.247, World religions: 0.503, Abstract algebra: 0.25, Medicine (college): 0.445, Machine learning: 0.295,
Medical genetics: 0.44, Professional law: 0.325, PR: 0.5, Security studies: 0.531, Chemistry (college): 0.43,
Computer security: 0.53, International law: 0.686, Logical fallacies: 0.46, Politics: 0.646, Clinical knowledge: 0.517,
Conceptual physics: 0.462, Math (college): 0.39, Biology (high school): 0.516, Physics (high school): 0.305, Chemistry (high school): 0.453,
Geography (high school): 0.53, Professional medicine: 0.393, Electrical engineering: 0.434, Elementary mathematics: 0.454, Psychology (high school): 0.582,
Statistics (high school): 0.5, History (high school): 0.529, Math (high school): 0.389, Professional accounting: 0.365, Professional psychology: 0.395,
Computer science (college): 0.41, World history (high school): 0.591, Macroeconomics: 0.479, Microeconomics: 0.496, Computer science (high school): 0.59,
European history: 0.606, Government and politics: 0.492
ruDetox:
Model, team | SIM | FL | STA
Qwen2.5 1.5B Instruct, НГУ | 0.547 | 0.58 | 0.576
MaMuRAMu subcategories, Qwen2.5 1.5B Instruct (НГУ):
Anatomy: 0.444, Virology: 0.515, Astronomy: 0.45, Marketing: 0.565, Nutrition: 0.474,
Sociology: 0.534, Management: 0.431, Philosophy: 0.526, Prehistory: 0.442, Gerontology: 0.492,
Econometrics: 0.641, Formal logic: 0.508, Global facts: 0.358, Jurisprudence: 0.527, Miscellaneous: 0.45,
Moral disputes: 0.432, Business ethics: 0.579, Biology (college): 0.467, Physics (college): 0.456, Human sexuality: 0.632,
Moral scenarios: 0.263, World religions: 0.559, Abstract algebra: 0.489, Medicine (college): 0.556, Machine learning: 0.556,
Genetics: 0.5, Professional law: 0.628, PR: 0.456, Security: 0.754, Chemistry (college): 0.622,
Computer security: 0.8, International law: 0.603, Logical fallacies: 0.518, Politics: 0.579, Clinical knowledge: 0.53,
Conceptual physics: 0.482, Math (college): 0.6, Biology (high school): 0.644, Physics (high school): 0.439, Chemistry (high school): 0.508,
Geography (high school): 0.551, Professional medicine: 0.492, Electrical engineering: 0.622, Elementary mathematics: 0.667, Psychology (high school): 0.81,
Statistics (high school): 0.844, History (high school): 0.569, Math (high school): 0.568, Professional accounting: 0.754, Professional psychology: 0.719,
Computer science (college): 0.689, World history (high school): 0.391, Macroeconomics: 0.696, Microeconomics: 0.455, Computer science (high school): 0.442,
European history: 0.368, Government and politics: 0.678
ruEthics, Qwen2.5 1.5B Instruct (НГУ):
Correct:
Virtue: 0.067, Law: 0.111, Moral: 0.084, Justice: 0.054, Utilitarianism: 0.047
Good:
Virtue: 0.11, Law: 0.141, Moral: 0.113, Justice: 0.089, Utilitarianism: 0.071
Ethical:
Virtue: 0.072, Law: 0.091, Moral: 0.081, Justice: 0.087, Utilitarianism: 0.041
ruHateSpeech:
Model, team | Women | Men | LGBT | Nationalities | Migrants | Other
Qwen2.5 1.5B Instruct, НГУ | 0.5 | 0.4 | 0.647 | 0.541 | 0.286 | 0.525