Qwen2.5-Math-72B-Instruct

MERA · Created 04.11.2024 17:59

Overall result: 0.462
Place in the rating: 166
Weak tasks (place in the per-task rating):
RWSD: 65, PARus: 155, RCB: 197, ruEthics: 256, MultiQ: 371, ruWorldTree: 109, ruOpenBookQA: 126, CheGeKa: 275, ruMMLU: 92, ruHateSpeech: 179, ruDetox: 346, ruHHH: 239, ruTiE: 86, ruHumanEval: 229, USE: 157, MathLogicQA: 44, ruMultiAr: 22, LCS: 124, BPS: 115, MaMuRAMu: 126, ruCodeEval: 234

Ratings for leaderboard tasks

Task name      Result                 Metric
LCS            0.132                  Accuracy
RCB            0.534 / 0.476          Accuracy / F1 macro
USE            0.177                  Grade norm
RWSD           0.604                  Accuracy
PARus          0.87                   Accuracy
ruTiE          0.783                  Accuracy
MultiQ         0.254 / 0.117          F1 / Exact match
CheGeKa        0.074 / 0.043          F1 / Exact match
ruModAr        0                      Exact match
MaMuRAMu       0.729                  Accuracy
ruMultiAr      0.429                  Exact match
ruCodeEval     0.001 / 0.003 / 0.006  Pass@k
MathLogicQA    0.645                  Accuracy
ruWorldTree    0.949 / 0.948          Accuracy / F1 macro
ruOpenBookQA   0.86 / 0.86            Accuracy / F1 macro
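
The Pass@k rows report three values at increasing k (on MERA code tasks these are conventionally read as pass@1 / pass@5 / pass@10; treated here as an assumption). A minimal sketch of the standard unbiased pass@k estimator from Chen et al. (2021), where n is the number of sampled completions per problem and c the number that pass the unit tests:

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k (Chen et al., 2021): 1 - C(n-c, k) / C(n, k).
    # n: completions sampled per task, c: completions passing the tests.
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples per task, 1 of which passes.
print(pass_at_k(10, 1, 1))   # 0.1
print(pass_at_k(10, 1, 5))   # 0.5
print(pass_at_k(10, 1, 10))  # 1.0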

Evaluation on open tasks:

Task name      Result                 Metric
BPS            0.967                  Accuracy
ruMMLU         0.665                  Accuracy
SimpleAr       0.998                  Exact match
ruHumanEval    0.008 / 0.012 / 0.012  Pass@k
ruHHH          0.64                   Accuracy
ruHateSpeech   0.747                  Accuracy
ruDetox        0.118                  Joint score (J)
ruEthics (correlation with the "correct" / "good" / "ethical" labels):

Criterion       Correct  Good   Ethical
Virtue          0.207    0.392  0.426
Law             0.277    0.452  0.42
Moral           0.253    0.429  0.44
Justice         0.204    0.377  0.414
Utilitarianism  0.191    0.333  0.34
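
Per the MERA task description, each ruEthics cell is a Matthews correlation coefficient (MCC) between the model's binary answers for one ethical criterion and one of the three ground-truth label sets. A toy sketch with hypothetical labels, assuming MCC as the metric:

from sklearn.metrics import matthews_corrcoef

# Hypothetical binary annotations: does the actor act correctly?
labels_correct = [1, 0, 1, 1, 0, 1, 0, 0]
# Hypothetical model answers for one criterion, e.g. "virtue".
model_virtue = [1, 0, 0, 1, 0, 1, 1, 0]

# One cell of the table above = MCC(label column, criterion column).
print(matthews_corrcoef(labels_correct, model_virtue))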

Information about the submission:

MERA version: v.1.2.0
Torch version: 2.4.0
Codebase version: fea61e4
CUDA version: 12.1
Model weights precision: bfloat16
Seed: 1234
Batch size: 1
Transformers version: 4.45.1
GPUs: 8 x NVIDIA H100 80GB HBM3
Architecture: hf

Team:

MERA

Model name:

Qwen2.5-Math-72B-Instruct

Model size:

72.7B

Model type:

Open, SFT

Additional links:

https://qwenlm.github.io/blog/qwen2.5-math/

Architecture description:

-

Description of the training:

-

Pretrain data:

First, the Qwen2-Math base models are trained on a high-quality mathematical pre-training dataset, Qwen Math Corpus v1, which contains approximately 700 billion tokens. Second, we train a math-specific reward model, Qwen2-Math-RM, derived from Qwen2-Math-72B, to create the Qwen2-Math-Instruct models; this reward model is used to construct the Supervised Fine-Tuning (SFT) data through rejection sampling. Third, leveraging the Qwen2-Math-72B-Instruct model, we synthesize additional high-quality mathematical pre-training data, which serves as the foundation for Qwen Math Corpus v2. This updated corpus contains over 1 trillion tokens and is used to pre-train the Qwen2.5-Math models. Lastly, mirroring the process used for the Qwen2-Math-Instruct models, we construct the Qwen2.5-Math-RM and Qwen2.5-Math-Instruct models.
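
A minimal sketch of the rejection-sampling step described above, with hypothetical generate and reward_model callables standing in for the actual Qwen pipeline:

def build_sft_data(problems, generate, reward_model, n_samples=8):
    # For each problem, sample several candidate solutions and keep the
    # one the reward model scores highest (rejection sampling).
    sft_pairs = []
    for problem in problems:
        candidates = [generate(problem) for _ in range(n_samples)]
        best = max(candidates, key=lambda sol: reward_model(problem, sol))
        sft_pairs.append((problem, best))
    return sft_pairs

(n_samples=8 is illustrative; the actual sampling budget and filtering criteria are not stated here.)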

License:

apache-2.0

Inference parameters

Generation parameters:
simplear - do_sample=false; until=["\n"]
chegeka - do_sample=false; until=["\n"]
rudetox - do_sample=false; until=["\n"]
rumultiar - do_sample=false; until=["\n"]
use - do_sample=false; until=["\n", "."]
multiq - do_sample=false; until=["\n"]
rumodar - do_sample=false; until=["\n"]
ruhumaneval - do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
rucodeeval - do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
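
In Hugging Face terms, the ruhumaneval/rucodeeval settings above correspond roughly to the generate call below; a sketch assuming the public Qwen/Qwen2.5-Math-72B-Instruct checkpoint and a placeholder prompt (max_new_tokens is an assumption, not part of the listed config; stop_strings needs a recent transformers release such as the 4.45.1 used for this submission):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Math-72B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="bfloat16", device_map="auto"
)

prompt = "def add(a, b):"  # placeholder code-completion prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    do_sample=True,     # do_sample=true
    temperature=0.6,    # temperature=0.6
    stop_strings=["\nclass", "\ndef", "\n#", "\nif", "\nprint"],  # until=[...]
    tokenizer=tokenizer,  # generate needs the tokenizer to match stop_strings
    max_new_tokens=512,   # assumption
)
print(tokenizer.decode(output[0], skip_special_tokens=True))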

Context size:
simplear, bps, lcs, chegeka, mathlogicqa, parus, rcb, rudetox, ruhatespeech, rummlu, ruworldtree, ruopenbookqa, rumultiar, use, rwsd, mamuramu, multiq, rumodar, ruethics, ruhhh, ruhumaneval, rucodeeval, rutie - 4096

System prompt:
Solve the task following the instruction below. Do not give any explanations or comments on your answer. Do not write anything extra. Write only what the instruction asks for. If the instruction asks you to solve a numeric problem, write only the numeric answer, without the solution steps or explanations. If the instruction asks you to output a letter, digit, or word, output only that. If the instruction asks you to choose one of the answer options and output the letter or digit corresponding to it, output only that letter or digit, give no explanations, add no punctuation, just 1 character in the answer. If the instruction asks you to complete Python function code, write the code right away, keeping the indentation as if you were continuing the function from the instruction; give no explanations, write no comments, use only the arguments from the function signature in the instruction, and do not try to read data via the input function. Do not apologize and do not start a dialogue. Output only the answer and nothing else.

Description of the template:
{%- if tools %}
    {{- '<|im_start|>system\n' }}
    {%- if messages[0]['role'] == 'system' %}
        {{- messages[0]['content'] }}
    {%- else %}
        {{- 'Please reason step by step, and put your final answer within \\boxed{}.' }}
    {%- endif %}
    {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
    {%- if messages[0]['role'] == 'system' %}
        {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
    {%- else %}
        {{- '<|im_start|>system\nPlease reason step by step, and put your final answer within \\boxed{}.<|im_end|>\n' }}
    {%- endif %}
{%- endif %}
{%- for message in messages %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
        {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {{- '<|im_start|>' + message.role }}
        {%- if message.content %}
            {{- '\n' + message.content }}
        {%- endif %}
        {%- for tool_call in message.tool_calls %}
            {%- if tool_call.function is defined %}
                {%- set tool_call = tool_call.function %}
            {%- endif %}
            {{- '\n<tool_call>\n{"name": "' }}
            {{- tool_call.name }}
            {{- '", "arguments": ' }}
            {{- tool_call.arguments | tojson }}
            {{- '}\n</tool_call>' }}
        {%- endfor %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- message.content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
{%- endif %}
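
A sketch of how this template is exercised at inference time via tokenizer.apply_chat_template, with the MERA system prompt elided for brevity:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Math-72B-Instruct")

messages = [
    {"role": "system", "content": "..."},  # the MERA system prompt above
    {"role": "user", "content": "2 + 2 * 2 = ?"},
]
# The Jinja template renders the messages into the <|im_start|>/<|im_end|>
# chat format; add_generation_prompt appends the opening assistant header.
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(text)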

Ratings by subcategory

All rows below refer to Qwen2.5-Math-72B-Instruct (MERA).

USE (Grade norm), by exam task number:
1: 0.4, 2: 0.3, 3: 0.667, 4: 0.067, 5: 0.067, 6: 0.233, 7: 0.033, 8: -, 9: 0, 10: 0, 11: 0.033, 12: 0, 13: 0.067, 14: 0, 15: 0.067, 16: 0.217, 17: 0, 18: 0, 19: 0, 20: 0, 21: 0, 22: 0.233, 23: 0.1, 24: 0.033, 25: 0, 26: 0.492, 8_0: 0, 8_1: 0.167, 8_2: 0.6, 8_3: 0.267, 8_4: 0.3

ruHHH:
Honest: 0.541, Helpful: 0.661, Harmless: 0.724
ruMMLU, by subject:
Anatomy: 0.504; Virology: 0.488; Astronomy: 0.855; Marketing: 0.829; Nutrition: 0.686; Sociology: 0.771; Management: 0.738; Philosophy: 0.701; Prehistory: 0.63; Human aging: 0.691; Econometrics: 0.64; Formal logic: 0.556; Global facts: 0.53; Jurisprudence: 0.713; Miscellaneous: 0.745; Moral disputes: 0.665; Business ethics: 0.68; Biology (college): 0.778; Physics (college): 0.678; Human sexuality: 0.618; Moral scenarios: 0.398; World religions: 0.731; Abstract algebra: 0.65; Medicine (college): 0.676; Machine learning: 0.625; Medical genetics: 0.7; Professional law: 0.439; PR: 0.639; Security studies: 0.706; Chemistry (college): 0.59; Computer security: 0.75; International law: 0.777; Logical fallacies: 0.693; Politics: 0.808; Clinical knowledge: 0.653; Conceptual physics: 0.821; Math (college): 0.72; Biology (high school): 0.884; Physics (high school): 0.695; Chemistry (high school): 0.714; Geography (high school): 0.798; Professional medicine: 0.621; Electrical engineering: 0.683; Elementary mathematics: 0.78; Psychology (high school): 0.824; Statistics (high school): 0.773; History (high school): 0.765; Math (high school): 0.589; Professional accounting: 0.521; Professional psychology: 0.651; Computer science (college): 0.69; World history (high school): 0.759; Macroeconomics: 0.795; Microeconomics: 0.887; Computer science (high school): 0.88; European history: 0.752; Government and politics: 0.767
ruDetox:
SIM: 0.42, FL: 0.665, STA: 0.536
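
In the RUSSE Detox methodology that ruDetox inherits, the headline score is the per-sample product STA · SIM · FL averaged over the test set (assumed here); this need not equal the product of the averaged subscores, which is why 0.536 · 0.42 · 0.665 ≈ 0.15 differs from the reported 0.118. A sketch under that assumption:

def joint_score(sta, sim, fl):
    # Joint detox score J: average over samples of STA * SIM * FL.
    per_sample = [s * m * f for s, m, f in zip(sta, sim, fl)]
    return sum(per_sample) / len(per_sample)

# Hypothetical per-sample subscores for three detoxified outputs:
print(joint_score([0.9, 0.2, 0.5], [0.8, 0.6, 0.4], [0.7, 0.9, 0.5]))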
MaMuRAMu, by subject:
Anatomy: 0.444; Virology: 0.723; Astronomy: 0.733; Marketing: 0.639; Nutrition: 0.75; Sociology: 0.741; Management: 0.672; Philosophy: 0.719; Prehistory: 0.673; Gerontology: 0.723; Econometrics: 0.821; Formal logic: 0.75; Global facts: 0.508; Jurisprudence: 0.674; Miscellaneous: 0.614; Moral disputes: 0.642; Business ethics: 0.682; Biology (college): 0.8; Physics (college): 0.789; Human sexuality: 0.789; Moral scenarios: 0.579; World religions: 0.763; Abstract algebra: 0.889; Medicine (college): 0.734; Machine learning: 0.844; Genetics: 0.773; Professional law: 0.615; PR: 0.649; Security: 0.895; Chemistry (college): 0.8; Computer security: 0.822; International law: 0.808; Logical fallacies: 0.732; Politics: 0.842; Clinical knowledge: 0.53; Conceptual physics: 0.839; Math (college): 0.844; Biology (high school): 0.844; Physics (high school): 0.737; Chemistry (high school): 0.723; Geography (high school): 0.816; Professional medicine: 0.73; Electrical engineering: 0.822; Elementary mathematics: 0.933; Psychology (high school): 0.862; Statistics (high school): 0.889; History (high school): 0.862; Math (high school): 0.932; Professional accounting: 0.677; Professional psychology: 0.86; Computer science (college): 0.889; World history (high school): 0.594; Macroeconomics: 0.848; Microeconomics: 0.727; Computer science (high school): 0.605; European history: 0.538; Government and politics: 0.789
ruEthics, by label:

Correct:  Virtue: 0.207, Law: 0.277, Moral: 0.253, Justice: 0.204, Utilitarianism: 0.191
Good:     Virtue: 0.392, Law: 0.452, Moral: 0.429, Justice: 0.377, Utilitarianism: 0.333
Ethical:  Virtue: 0.426, Law: 0.42, Moral: 0.44, Justice: 0.414, Utilitarianism: 0.34
ruHateSpeech, by target group:
Women: 0.713, Men: 0.686, LGBT: 0.824, Nationalities: 0.784, Migrants: 0.571, Other: 0.82