RuadaptQwen2.5-7B-Lite-v1

RCC MSU · Created 27.01.2025, 11:27

Overall result: 0.536
Place in the rating: 78
Weak tasks (per-task rating positions):

RWSD: 463
PARus: 105
RCB: 123
ruEthics: 100
MultiQ: 99
ruWorldTree: 79
ruOpenBookQA: 85
CheGeKa: 37
ruMMLU: 132
ruHateSpeech: 177
ruDetox: 181
ruHHH: 120
ruTiE: 99
ruHumanEval: 194
USE: 195
MathLogicQA: 39
ruMultiAr: 64
SimpleAr: 92
LCS: 90
BPS: 181
ruModAr: 180
MaMuRAMu: 103
ruCodeEval: 113

Ratings for leaderboard tasks


Task name | Result | Metric
LCS | 0.142 | Accuracy
RCB | 0.553 / 0.458 | Accuracy / F1 macro
USE | 0.162 | Grade norm
RWSD | 0.465 | Accuracy
PARus | 0.892 | Accuracy
ruTiE | 0.774 | Accuracy
MultiQ | 0.479 / 0.342 | F1 / Exact match
CheGeKa | 0.379 / 0.308 | F1 / Exact match
ruModAr | 0.492 | Exact match
MaMuRAMu | 0.751 | Accuracy
ruMultiAr | 0.347 | Exact match
ruCodeEval | 0.099 / 0.276 / 0.378 | Pass@k
MathLogicQA | 0.658 | Accuracy
ruWorldTree | 0.96 / 0.96 | Accuracy / F1 macro
ruOpenBookQA | 0.883 / 0.883 | Accuracy / F1 macro
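The three ruCodeEval numbers (and the three ruHumanEval numbers in the next table) are Pass@k values for increasing k. A common way to estimate pass@k from n generated samples of which c pass the tests is the unbiased estimator pass@k = 1 − C(n−c, k)/C(n, k); a minimal sketch (the exact n used for this submission is not stated on this page):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: the probability that at least one of k
    samples drawn without replacement from n generations is correct,
    given that c of the n generations are correct."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 generations per task, 2 of them correct:
print(pass_at_k(10, 2, 1))  # 0.2
```

Averaging this quantity over all tasks gives the leaderboard's reported Pass@k.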

Evaluation on open tasks:



Task name | Result | Metric
BPS | 0.949 | Accuracy
ruMMLU | 0.64 | Accuracy
SimpleAr | 0.986 | Exact match
ruHumanEval | 0.024 / 0.059 / 0.085 | Pass@k
ruHHH | 0.764 | Accuracy
ruHateSpeech | 0.747 | Accuracy
ruDetox | 0.196 | Joint score (STA, SIM, FL)

ruEthics (per-criterion correlations):
Criterion | Correct | Good | Ethical
Virtue | 0.404 | 0.38 | 0.404
Law | 0.422 | 0.374 | 0.397
Moral | 0.444 | 0.391 | 0.416
Justice | 0.386 | 0.334 | 0.367
Utilitarianism | 0.346 | 0.313 | 0.33

Information about the submission:

MERA version: v.1.2.0
Torch version: 2.4.0
Codebase version: 430295f
CUDA version: 12.1
Model weights precision: -
Seed: 1234
Batch size: 4
Transformers version: 4.45.2
Number and type of GPUs: -
Architecture: vllm

Team:

RCC MSU

Name of the ML model:

RuadaptQwen2.5-7B-Lite-v1

Model size

7.0B

Model type:

Open

Pretrain

Architecture description:

An instruction-tuned version of Qwen2.5-7B adapted to Russian (T-lite-it-1.0). The tokenizer was replaced, the model then underwent continued pretraining on a Russian-language corpus, after which the LEP (Learned Embedding Propagation) technique was applied. Thanks to the new tokenizer (tiktoken cl100k extended with a 48K-token unigram tokenizer), the generation speed* for Russian-language text increased by up to 60% relative to the original Qwen-2.5-7B-Instruct model. Tikhomirov M., Chernyshov D. Facilitating Large Language Model Russian Adaptation with Learned Embedding Propagation // Journal of Language and Education. 2024. Vol. 10, No. 4. Pp. 130-145. *Generation speed here means the number of Russian-language characters/words per second on identical text sequences.
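The speed claim above is a tokenizer-fertility effect: at a fixed decoding rate in tokens per second, a tokenizer that packs more characters into each token emits more characters per second. A minimal sketch with hypothetical chars-per-token figures (the submission does not publish the underlying measurements):

```python
def generation_speedup(chars_per_token_old: float, chars_per_token_new: float) -> float:
    """Relative character-level generation speedup when the decoding rate
    (tokens/second) is unchanged but the new tokenizer packs more
    characters into each token."""
    return chars_per_token_new / chars_per_token_old - 1.0

# Hypothetical illustration, not measured values from this submission:
# if the original tokenizer averages 2.5 chars/token on Russian text and
# the extended one averages 4.0 chars/token, generation is 60% faster.
print(f"{generation_speedup(2.5, 4.0):.0%}")  # 60%
```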

License:

Apache License 2.0

Inference parameters

Generation Parameters:
simplear: do_sample=false; until=["\n"]
chegeka: do_sample=false; until=["\n"]
rudetox: do_sample=false; until=["\n"]
rumultiar: do_sample=false; until=["\n"]
use: do_sample=false; until=["\n", "."]
multiq: do_sample=false; until=["\n"]
rumodar: do_sample=false; until=["\n"]
ruhumaneval: do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
rucodeeval: do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6

The size of the context:
simplear, chegeka, rudetox, rumultiar, use, multiq, rumodar, ruhumaneval, rucodeeval - 4096
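The per-task settings above can be collected into a single config mapping. This is a sketch: the keys follow the task ids used on this page, and the dictionary shape is illustrative rather than the schema of any particular evaluation harness:

```python
# Greedy decoding, stop at newline: shared by most generative tasks.
GREEDY = {"do_sample": False, "until": ["\n"]}

# Code tasks sample at temperature 0.6 and stop at top-level constructs.
CODE_SAMPLING = {
    "do_sample": True,
    "until": ["\nclass", "\ndef", "\n#", "\nif", "\nprint"],
    "temperature": 0.6,
}

GENERATION_CONFIG = {
    "simplear": GREEDY,
    "chegeka": GREEDY,
    "rudetox": GREEDY,
    "rumultiar": GREEDY,
    "use": {"do_sample": False, "until": ["\n", "."]},
    "multiq": GREEDY,
    "rumodar": GREEDY,
    "ruhumaneval": CODE_SAMPLING,
    "rucodeeval": CODE_SAMPLING,
}

# All nine generative tasks were run with a 4096-token context window.
MAX_CONTEXT = {task: 4096 for task in GENERATION_CONFIG}
```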

System prompt:
Solve the task according to the instruction below. Do not give any explanations or clarifications for your answer. Do not write anything extra. Write only what the instruction asks for. If the instruction asks you to solve a problem, write only the numeric answer without the solution steps or explanations. If the instruction asks you to output a letter, digit, or word, output only that. If the instruction asks you to choose one of the answer options and output the letter or digit corresponding to it, output only that letter or digit, give no explanations, add no punctuation, only 1 character in the answer. If the instruction asks you to complete Python function code, write the code right away, keeping the indentation as if you were continuing the function from the instruction, give no explanations, write no comments, use only the arguments from the function signature in the instruction, and do not try to read data via the input function. Do not apologize, do not hold a dialogue. Output only the answer and nothing else.

Description of the template:
{%- if tools %}
    {{- '<|im_start|>system\n' }}
    {%- if messages[0]['role'] == 'system' %}
        {{- messages[0]['content'] }}
    {%- else %}
        {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}
    {%- endif %}
    {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
    {%- if messages[0]['role'] == 'system' %}
        {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
    {%- endif %}
{%- endif %}
{%- for message in messages %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
        {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {{- '<|im_start|>' + message.role }}
        {%- if message.content %}
            {{- '\n' + message.content }}
        {%- endif %}
        {%- for tool_call in message.tool_calls %}
            {%- if tool_call.function is defined %}
                {%- set tool_call = tool_call.function %}
            {%- endif %}
            {{- '\n<tool_call>\n{"name": "' }}
            {{- tool_call.name }}
            {{- '", "arguments": ' }}
            {{- tool_call.arguments | tojson }}
            {{- '}\n</tool_call>' }}
        {%- endfor %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- message.content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
{%- endif %}
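For a plain system-plus-user exchange with no tools, the template above renders the ChatML layout that Qwen-family models expect. A minimal sketch of the equivalent string construction for that simple case (a hypothetical helper, not part of the evaluation code):

```python
def render_chatml(messages, add_generation_prompt=True):
    """Sketch of what the chat template produces in the no-tools case:
    each message becomes an <|im_start|>role ... <|im_end|> block,
    optionally followed by an open assistant turn for generation."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = render_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "2+2?"},
])
```

The tool-calling branches of the real template additionally wrap function signatures in `<tools>` tags and tool invocations in `<tool_call>` tags.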


Ratings by subcategory

USE subtasks (metric: Grade Norm), RuadaptQwen2.5-7B-Lite-v1 (RCC MSU):
1: 0.333, 2: 0.467, 3: 0.733, 4: 0.133, 5: 0.1, 6: 0.133, 7: 0.033, 8: -, 9: 0, 10: 0.033, 11: 0, 12: 0, 13: 0.2, 14: 0, 15: 0.1, 16: 0.183, 17: 0, 18: 0, 19: 0, 20: 0, 21: 0, 22: 0.267, 23: 0.4, 24: 0.167, 25: 0, 26: 0.283, 8_0: 0, 8_1: 0.067, 8_2: 0.333, 8_3: 0.167, 8_4: 0.333
ruHHH, RuadaptQwen2.5-7B-Lite-v1 (RCC MSU):
Honest: 0.738, Helpful: 0.78, Harmless: 0.776
ruMMLU subcategories, RuadaptQwen2.5-7B-Lite-v1 (RCC MSU):
Anatomy: 0.585, Virology: 0.524, Astronomy: 0.796, Marketing: 0.85, Nutrition: 0.712, Sociology: 0.811, Management: 0.816, Philosophy: 0.669, Prehistory: 0.753, Human aging: 0.682, Econometrics: 0.544, Formal logic: 0.452, Global facts: 0.47, Jurisprudence: 0.676, Miscellaneous: 0.769, Moral disputes: 0.668, Business ethics: 0.72, Biology (college): 0.75, Physics (college): 0.467, Human sexuality: 0.672, Moral scenarios: 0.283, World religions: 0.789, Abstract algebra: 0.5, Medicine (college): 0.636, Machine learning: 0.589, Medical genetics: 0.72, Professional law: 0.441, PR: 0.611, Security studies: 0.694, Chemistry (college): 0.47, Computer security: 0.77, International law: 0.719, Logical fallacies: 0.675, Politics: 0.828, Clinical knowledge: 0.709, Conceptual physics: 0.722, Math (college): 0.47, Biology (high school): 0.842, Physics (high school): 0.53, Chemistry (high school): 0.596, Geography (high school): 0.818, Professional medicine: 0.64, Electrical engineering: 0.683, Elementary mathematics: 0.671, Psychology (high school): 0.828, Statistics (high school): 0.671, History (high school): 0.824, Math (high school): 0.474, Professional accounting: 0.426, Professional psychology: 0.662, Computer science (college): 0.61, World history (high school): 0.797, Macroeconomics: 0.713, Microeconomics: 0.777, Computer science (high school): 0.8, European history: 0.764, Government and politics: 0.772
ruDetox, RuadaptQwen2.5-7B-Lite-v1 (RCC MSU):
SIM: 0.339, FL: 0.718, STA: 0.82
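The headline ruDetox score combines these three components multiplicatively: style-transfer accuracy (STA), content similarity (SIM), and fluency (FL). The official score averages the per-sample product, so multiplying the aggregate means shown here only approximates it, but the check lines up with the reported 0.196:

```python
# ruDetox component means from this submission.
sim, fl, sta = 0.339, 0.718, 0.82

# Joint detox score as the product of the three components. Computed on
# aggregate means this approximates, but does not exactly equal, the
# per-sample average reported on the leaderboard (0.196).
joint = sta * sim * fl
print(round(joint, 3))  # 0.2
```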
MaMuRAMu subcategories, RuadaptQwen2.5-7B-Lite-v1 (RCC MSU):
Anatomy: 0.6, Virology: 0.832, Astronomy: 0.667, Marketing: 0.648, Nutrition: 0.75, Sociology: 0.672, Management: 0.655, Philosophy: 0.737, Prehistory: 0.673, Gerontology: 0.723, Econometrics: 0.705, Formal logic: 0.767, Global facts: 0.467, Jurisprudence: 0.775, Miscellaneous: 0.667, Moral disputes: 0.741, Business ethics: 0.701, Biology (college): 0.711, Physics (college): 0.632, Human sexuality: 0.789, Moral scenarios: 0.509, World religions: 0.864, Abstract algebra: 0.8, Medicine (college): 0.817, Machine learning: 0.778, Genetics: 0.788, Professional law: 0.808, PR: 0.649, Security: 0.895, Chemistry (college): 0.844, Computer security: 0.844, International law: 0.833, Logical fallacies: 0.705, Politics: 0.825, Clinical knowledge: 0.682, Conceptual physics: 0.821, Math (college): 0.911, Biology (high school): 0.8, Physics (high school): 0.649, Chemistry (high school): 0.662, Geography (high school): 0.829, Professional medicine: 0.841, Electrical engineering: 0.844, Elementary mathematics: 0.844, Psychology (high school): 0.845, Statistics (high school): 0.889, History (high school): 0.897, Math (high school): 0.841, Professional accounting: 0.815, Professional psychology: 0.877, Computer science (college): 0.889, World history (high school): 0.797, Macroeconomics: 0.835, Microeconomics: 0.675, Computer science (high school): 0.674, European history: 0.649, Government and politics: 0.822
ruEthics correlations, RuadaptQwen2.5-7B-Lite-v1 (RCC MSU):

Criterion "Correct": Virtue: 0.404, Law: 0.422, Moral: 0.444, Justice: 0.386, Utilitarianism: 0.346
Criterion "Good": Virtue: 0.38, Law: 0.374, Moral: 0.391, Justice: 0.334, Utilitarianism: 0.313
Criterion "Ethical": Virtue: 0.404, Law: 0.397, Moral: 0.416, Justice: 0.367, Utilitarianism: 0.33
ruHateSpeech by target group, RuadaptQwen2.5-7B-Lite-v1 (RCC MSU):
Women: 0.759, Men: 0.686, LGBT: 0.765, Nationalities: 0.73, Migrants: 0.429, Other: 0.803