T-pro-it-2.0

T-Tech Created at 14.07.2025 03:24
Overall result: 0.66
Leaderboard rank: 30

Ratings for leaderboard tasks


Task name Result Metric
LCS 0.186 Accuracy
RCB 0.553 / 0.498 Accuracy / F1 macro
USE 0.352 Grade norm
RWSD 0.55 Accuracy
PARus 0.932 Accuracy
ruTiE 0.876 Accuracy
MultiQ 0.561 / 0.429 F1 / Exact match
CheGeKa 0.518 / 0.433 F1 / Exact match
ruModAr 0.764 Exact match
MaMuRAMu 0.851 Accuracy
ruMultiAr 0.487 Exact match
ruCodeEval 0.543 / 0.766 / 0.823 Pass@k
MathLogicQA 0.766 Accuracy
ruWorldTree 0.992 / 0.992 Accuracy / F1 macro
ruOpenBookQA 0.94 / 0.94 Accuracy / F1 macro
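The ruCodeEval row (and ruHumanEval below) reports three Pass@k values; the page does not state which k values they correspond to (pass@1 / pass@5 / pass@10 is the common convention). As a reference point, a minimal sketch of the standard unbiased pass@k estimator, assuming n generations per task of which c pass the tests:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn (without replacement) from n generations, c of which
    are correct, passes."""
    if n - c < k:
        return 1.0  # fewer failures than draws: a correct sample is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 4 generations, 2 correct, k=2 -> 1 - C(2,2)/C(4,2) = 5/6
print(pass_at_k(4, 2, 2))  # 0.8333333333333334
```

With do_sample=true and temperature=0.6 (see the inference parameters below), multiple generations per task make this estimator meaningful; greedy decoding would collapse it to a 0/1 outcome.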

Evaluation on open tasks:



Task name Result Metric
BPS 0.979 Accuracy
ruMMLU 0.79 Accuracy
SimpleAr 0.993 Exact match
ruHumanEval 0.268 / 0.503 / 0.585 Pass@k
ruHHH 0.876 Accuracy
ruHateSpeech 0.864 Accuracy
ruDetox 0.324 Joint score (J)
ruEthics 5 MCC
 Correct Good Ethical
Virtue 0.476 0.42 0.524
Law 0.488 0.389 0.495
Moral 0.524 0.438 0.557
Justice 0.442 0.374 0.476
Utilitarianism 0.438 0.415 0.465

Information about the submission

MERA version
v.1.2.0
Torch version
2.7.1
Codebase version
1ad991d
CUDA version
12.6
Model weights precision
bfloat16
Seed
1234
Batch size
1
Transformers version
4.53.2
GPU count and type
4xH100
Architecture
openai-chat-completions
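The "openai-chat-completions" architecture entry means the evaluation harness queried the model through an OpenAI-compatible chat-completions endpoint rather than loading weights directly. A sketch of what one such request payload might look like (the model identifier, prompt text, and server details here are illustrative assumptions; only the backend name is stated on this page):

```python
import json

# Hypothetical payload; the actual endpoint and parameters are not disclosed here.
payload = {
    "model": "t-tech/T-pro-it-2.0",  # assumed identifier
    "messages": [
        {"role": "system", "content": "..."},  # the Russian system prompt from this card
        {"role": "user", "content": "2 + 2 = ?"},
    ],
    "temperature": 0.0,  # greedy tasks use do_sample=false
    "stop": ["\n"],      # per-task stop sequences ("until" in the config below)
}
print(json.dumps(payload, ensure_ascii=False)[:60])
```

Serving the model behind a standard chat API lets the same harness config run against any OpenAI-compatible backend (e.g. vLLM or SGLang servers).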

Team:

T-Tech

Name of the ML model:

T-pro-it-2.0

Model size

32.8B

Model type:

Open

SFT

Architecture description:

T-pro-it-2.0 is a 32B hybrid reasoning model optimized for the Russian language. This submission was made in Non-Thinking mode.

Description of the training:

The model is built upon the Qwen 3 model family and incorporates both continual pre-training and alignment techniques.

Pretrain data:

A diverse, high-quality data mix of pre-training and synthetic data.

License:

Apache 2.0

Inference parameters

Generation parameters:
simplear - do_sample=false; until=["\n"]
chegeka - do_sample=false; until=["\n"]
rudetox - do_sample=false; until=["\n"]
rumultiar - do_sample=false; until=["\n"]
use - do_sample=false; until=["\n", "."]
multiq - do_sample=false; until=["\n"]
rumodar - do_sample=false; until=["\n"]
ruhumaneval - do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
rucodeeval - do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
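Each line above follows a simple "task - key=value;..." convention. A best-effort sketch of parsing one such line into a settings dict (this is not the official MERA parser, just an illustration of the format):

```python
import json

def parse_task_params(line: str):
    """Parse one 'task - key=value;key=value;...' generation-config line.
    Stop-sequence lists (until=[...]) are read as JSON arrays."""
    task, _, rest = line.partition(" - ")
    params = {}
    for part in filter(None, (p.strip() for p in rest.split(";"))):
        key, _, value = part.partition("=")
        if value.startswith("["):
            params[key] = json.loads(value)       # e.g. until=["\nclass", ...]
        elif value in ("true", "false"):
            params[key] = value == "true"
        else:
            try:
                params[key] = float(value)        # e.g. temperature=0.6
            except ValueError:
                params[key] = value
    return task, params

task, params = parse_task_params('use - do_sample=false;until=["\\n","."]')
# params -> {'do_sample': False, 'until': ['\n', '.']}
```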

System prompt:
Реши задачу по инструкции ниже. Не давай никаких объяснений и пояснений к своему ответу. Не пиши ничего лишнего. Пиши только то, что указано в инструкции. Если по инструкции нужно решить пример, то напиши только числовой ответ без хода решения и пояснений. Если по инструкции нужно вывести букву, цифру или слово, выведи только его. Если по инструкции нужно выбрать один из вариантов ответа и вывести букву или цифру, которая ему соответствует, то выведи только эту букву или цифру, не давай никаких пояснений, не добавляй знаки препинания, только 1 символ в ответе. Если по инструкции нужно дописать код функции на языке Python, пиши сразу код, соблюдая отступы так, будто ты продолжаешь функцию из инструкции, не давай пояснений, не пиши комментарии, используй только аргументы из сигнатуры функции в инструкции, не пробуй считывать данные через функцию input. Не извиняйся, не строй диалог. Выдавай только ответ и ничего больше.

(English translation: Solve the task according to the instruction below. Do not give any explanations or clarifications for your answer. Do not write anything extra. Write only what the instruction asks for. If the instruction asks to solve an arithmetic problem, write only the numeric answer without the solution steps or explanations. If the instruction asks to output a letter, digit, or word, output only that. If the instruction asks to choose one of the answer options and output the letter or digit corresponding to it, output only that letter or digit, with no explanations or punctuation, just one character in the answer. If the instruction asks to complete Python function code, write the code immediately, keeping the indentation as if you are continuing the function from the instruction; do not give explanations, do not write comments, use only the arguments from the function signature in the instruction, and do not try to read data via the input function. Do not apologize, do not start a dialogue. Output only the answer and nothing else.)

Description of the template:
{%- if tools %} {{- '<|im_start|>system \n' }} {%- if messages[0].role == 'system' %} {{- messages[0].content + ' \n \n' }} {%- endif %} {{- "# Tools \n \nYou may call one or more functions to assist with the user query. \n \nYou are provided with function signatures within <tools></tools> XML tags: \n<tools>" }} {%- for tool in tools %} {{- " \n" }} {{- tool | tojson }} {%- endfor %} {{- " \n</tools> \n \nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags: \n<tool_call> \n{"name": <function-name>, "arguments": <args-json-object>} \n</tool_call><|im_end|> \n" }} {%- else %} {%- if messages[0].role == 'system' %} {{- '<|im_start|>system \n' + messages[0].content + '<|im_end|> \n' }} {%- endif %} {%- endif %} {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %} {%- for message in messages[::-1] %} {%- set index = (messages|length - 1) - loop.index0 %} {%- if ns.multi_step_tool and message.role == "user" and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %} {%- set ns.multi_step_tool = false %} {%- set ns.last_query_index = index %} {%- endif %} {%- endfor %} {%- for message in messages %} {%- if (message.role == "user") or (message.role == "system" and not loop.first) %} {{- '<|im_start|>' + message.role + ' \n' + message.content + '<|im_end|>' + ' \n' }} {%- elif message.role == "assistant" %} {%- set content = message.content %} {%- set reasoning_content = '' %} {%- if message.reasoning_content is defined and message.reasoning_content is not none %} {%- set reasoning_content = message.reasoning_content %} {%- else %} {%- if '</think>' in message.content %} {%- set content = message.content.split('</think>')[-1].lstrip(' \n') %} {%- set reasoning_content = message.content.split('</think>')[0].rstrip(' \n').split('<think>')[-1].lstrip(' \n') %} {%- endif %} {%- endif %} {%- if loop.index0 > ns.last_query_index %} {%- 
if loop.last or (not loop.last and reasoning_content) %} {{- '<|im_start|>' + message.role + ' \n<think> \n' + reasoning_content.strip(' \n') + ' \n</think> \n \n' + content.lstrip(' \n') }} {%- else %} {{- '<|im_start|>' + message.role + ' \n' + content }} {%- endif %} {%- else %} {{- '<|im_start|>' + message.role + ' \n' + content }} {%- endif %} {%- if message.tool_calls %} {%- for tool_call in message.tool_calls %} {%- if (loop.first and content) or (not loop.first) %} {{- ' \n' }} {%- endif %} {%- if tool_call.function %} {%- set tool_call = tool_call.function %} {%- endif %} {{- '<tool_call> \n{"name": "' }} {{- tool_call.name }} {{- '", "arguments": ' }} {%- if tool_call.arguments is string %} {{- tool_call.arguments }} {%- else %} {{- tool_call.arguments | tojson }} {%- endif %} {{- '} \n</tool_call>' }} {%- endfor %} {%- endif %} {{- '<|im_end|> \n' }} {%- elif message.role == "tool" %} {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %} {{- '<|im_start|>user' }} {%- endif %} {{- ' \n<tool_response> \n' }} {{- message.content }} {{- ' \n</tool_response>' }} {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %} {{- '<|im_end|> \n' }} {%- endif %} {%- endif %} {%- endfor %} {%- if add_generation_prompt %} {{- '<|im_start|>assistant \n<think> \n \n</think> \n \n' }} {%- endif %}
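The Jinja template above renders a ChatML-style transcript (`<|im_start|>role ... <|im_end|>` blocks) and, for this Non-Thinking submission, pre-fills an empty `<think>` block in the generation prompt. A simplified Python sketch of what it produces in the plain no-tools, no-reasoning case (the real template also handles tool calls and `</think>` splitting, and the stray spaces around `\n` above look like scrape artifacts):

```python
def render_chatml(messages, add_generation_prompt=True):
    """Illustrative rendering of the chat template for simple
    system/user/assistant turns; not the full Jinja logic."""
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Non-Thinking mode: the template appends an empty reasoning block,
        # so the model starts generating after a closed <think></think>.
        out.append("<|im_start|>assistant\n<think>\n\n</think>\n\n")
    return "".join(out)

print(render_chatml([{"role": "system", "content": "s"},
                     {"role": "user", "content": "u"}]))
```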

Ratings by subcategory

USE (metric: Grade Norm, by exam task number)
Model, team 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 8_0 8_1 8_2 8_3 8_4
T-pro-it-2.0
T-Tech
0.567 0.533 0.767 0.1 0.1 0.333 0.067 - 0.167 0.1 0.033 0.033 0.167 0.067 0.1 0.517 0 0.067 0 0.033 0.133 0.767 0.433 0.333 0.233 0.742 0.4 0.367 0.733 0.567 0.767
ruHHH
Model, team Honest Helpful Harmless
T-pro-it-2.0
T-Tech
0.82 0.898 0.914
ruMMLU
Model, team Anatomy Virology Astronomy Marketing Nutrition Sociology Management Philosophy Prehistory Human aging Econometrics Formal logic Global facts Jurisprudence Miscellaneous Moral disputes Business ethics Biology (college) Physics (college) Human sexuality Moral scenarios World religions Abstract algebra Medicine (college) Machine learning Medical genetics Professional law PR Security studies Chemistry (college) Computer security International law Logical fallacies Politics Clinical knowledge Conceptual physics Math (college) Biology (high school) Physics (high school) Chemistry (high school) Geography (high school) Professional medicine Electrical engineering Elementary mathematics Psychology (high school) Statistics (high school) History (high school) Math (high school) Professional accounting Professional psychology Computer science (college) World history (high school) Macroeconomics Microeconomics Computer science (high school) European history Government and politics
T-pro-it-2.0
T-Tech
0.77 0.47 0.914 0.885 0.866 0.871 0.854 0.775 0.827 0.776 0.781 0.683 0.56 0.815 0.857 0.78 0.78 0.924 0.856 0.832 0.665 0.877 0.72 0.775 0.75 0.89 0.569 0.676 0.8 0.59 0.82 0.843 0.84 0.869 0.804 0.897 0.76 0.955 0.781 0.823 0.894 0.871 0.848 0.91 0.908 0.824 0.897 0.752 0.645 0.78 0.81 0.886 0.879 0.958 0.92 0.861 0.907
ruDetox
Model, team SIM FL STA
T-pro-it-2.0
T-Tech
0.733 0.715 0.659
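The ruDetox subscores are meaning preservation (SIM), fluency (FL), and style transfer accuracy (STA). The overall joint score J is conventionally computed per sample as the product STA·SIM·FL and then averaged (the RUSSE-2022 detoxification convention; assumed here, not stated on this page). A minimal sketch:

```python
def joint_score(sta, sim, fl):
    """Joint detox score: per-sample product STA * SIM * FL, averaged
    over the evaluation set (assumed convention, see lead-in)."""
    return sum(s * m * f for s, m, f in zip(sta, sim, fl)) / len(sta)

# Because J averages per-sample products, the product of the corpus-level
# means above (0.733 * 0.715 * 0.659 ≈ 0.345) need not equal the reported
# overall J of 0.324.
print(round(joint_score([0.9], [0.8], [0.7]), 3))  # 0.504
```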
MaMuRAMu
Model, team Anatomy Virology Astronomy Marketing Nutrition Sociology Management Philosophy Prehistory Gerontology Econometrics Formal logic Global facts Jurisprudence Miscellaneous Moral disputes Business ethics Biology (college) Physics (college) Human sexuality Moral scenarios World religions Abstract algebra Medicine (college) Machine learning Genetics Professional law PR Security studies Chemistry (college) Computer security International law Logical fallacies Politics Clinical knowledge Conceptual physics Math (college) Biology (high school) Physics (high school) Chemistry (high school) Geography (high school) Professional medicine Electrical engineering Elementary mathematics Psychology (high school) Statistics (high school) History (high school) Math (high school) Professional accounting Professional psychology Computer science (college) World history (high school) Macroeconomics Microeconomics Computer science (high school) European history Government and politics
T-pro-it-2.0
T-Tech
0.711 0.931 0.783 0.676 0.947 0.828 0.776 0.789 0.904 0.8 0.833 0.85 0.633 0.853 0.836 0.753 0.794 0.822 0.877 0.825 0.842 0.864 0.911 0.929 0.911 0.909 0.897 0.667 0.93 0.889 0.844 0.923 0.839 0.93 0.727 0.929 0.933 0.889 0.842 0.862 0.898 0.905 0.889 1 0.966 0.911 0.897 0.955 0.877 0.965 0.933 0.87 0.848 0.779 0.721 0.801 0.878
ruEthics (scores against each ground-truth basis: Correct, Good, Ethical)

Correct
Model, team Virtue Law Moral Justice Utilitarianism
T-pro-it-2.0
T-Tech
0.476 0.488 0.524 0.442 0.438

Good
Model, team Virtue Law Moral Justice Utilitarianism
T-pro-it-2.0
T-Tech
0.42 0.389 0.438 0.374 0.415

Ethical
Model, team Virtue Law Moral Justice Utilitarianism
T-pro-it-2.0
T-Tech
0.524 0.495 0.557 0.476 0.465
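MERA describes the ruEthics metric as "5 MCC": the model's binary judgments are correlated, per ethical criterion, with each of the three annotation bases (Correct, Good, Ethical) via the Matthews correlation coefficient. A minimal MCC sketch for binary labels, assuming that convention:

```python
from math import sqrt

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary (0/1) labels.
    Returns 0.0 when any confusion-matrix margin is empty."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(mcc([1, 1, 0, 0], [1, 0, 0, 1]))  # 0.0
```

MCC ranges from -1 to 1, so the ~0.4-0.56 values above indicate moderate positive agreement with all three annotation bases.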
ruHateSpeech
Model, team Women Men LGBT Nationalities Migrants Other
T-pro-it-2.0
T-Tech
0.861 0.771 0.824 0.892 0.714 0.934