Qwen3.6-35B-A3B

MERA submission, created 03.05.2026 12:42

Overall result: 0.792
Leaderboard rank: 9

Ratings for leaderboard tasks


Task name Result Metric
LCS 0.546 Accuracy
RCB 0.6 / 0.568 Accuracy / F1 macro
USE 0.67 Grade norm
RWSD 0.781 Accuracy
PARus 0.9 Accuracy
ruTiE 0.87 Accuracy
MultiQ 0.598 / 0.44 F1 / Exact match
CheGeKa 0.427 / 0.341 F1 / Exact match
ruModAr 0.993 Exact match
MaMuRAMu 0.887 Accuracy
ruMultiAr 0.988 Exact match
ruCodeEval 0.704 / 0.874 / 0.89 Pass@k
MathLogicQA 0.992 Accuracy
ruWorldTree 0.992 / 0.992 Accuracy / F1 macro
ruOpenBookQA 0.953 / 0.952 Accuracy / F1 macro
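The Pass@k rows (ruCodeEval here, ruHumanEval below) report three values: the estimated probability that at least one of k sampled completions passes the task's tests, for three increasing k. A minimal sketch of the standard unbiased estimator commonly used for this metric (the particular n and k values used for this submission are not stated on this page and are assumptions in the example):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for one task: n samples drawn, c of them passed."""
    if n - c < k:
        return 1.0  # every size-k subset must contain at least one passing sample
    # 1 - C(n-c, k) / C(n, k) = P(a random size-k subset contains >= 1 passing sample)
    return 1.0 - comb(n - c, k) / comb(n, k)

def mean_pass_at_k(results: list[tuple[int, int]], k: int) -> float:
    """Benchmark score: average the per-task estimates over all tasks."""
    return sum(pass_at_k(n, c, k) for n, c in results) / len(results)

# e.g. 2 samples, 1 passing: pass@1 = 0.5
print(pass_at_k(2, 1, 1))  # 0.5
```

Averaging per-task estimates rather than pooling samples keeps every task weighted equally regardless of how many completions were drawn for it.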

Evaluation on open tasks:

(Per-task subcategory breakdowns are given under "Ratings by subcategory" below.)


Task name Result Metric
BPS 0.997 Accuracy
ruMMLU 0.872 Accuracy
SimpleAr 0.999 Exact match
ruHumanEval 0.766 / 0.871 / 0.878 Pass@k
ruHHH 0.893 Accuracy
ruHateSpeech 0.921 Accuracy
ruDetox 0.342 Overall average score (J)
ruEthics
Criterion Correct Good Ethical
Virtue 0.416 0.453 0.624
Law 0.452 0.427 0.634
Moral 0.461 0.473 0.667
Justice 0.385 0.398 0.581
Utilitarianism 0.356 0.391 0.522

Information about the submission

MERA version: v1.2.0
Torch version: 2.10.0
Codebase version: d539716
CUDA version: 12.8
Model weights precision: bfloat16
Seed: 1234
Batch size: 1
Transformers version: 4.57.1
GPUs: 8 x NVIDIA A100-SXM4-80GB
Architecture: local-chat-completions

Team: MERA

Name of the ML model: Qwen3.6-35B-A3B

Model size: 36.0B

Model type: open, SFT, MoE

Additional links:

https://qwen.ai/blog?id=qwen3.6-35b-a3b

Architecture description:

Qwen3.6-35B-A3B demonstrates that sparse MoE models can deliver strong agentic coding and reasoning capabilities. With only 3B active parameters, it rivals dense models with several times as many active parameters, while also performing well on multimodal benchmarks. As a fully open-source checkpoint, it sets a new standard for what is possible at this scale.

Description of the training:

Type: Causal Language Model with Vision Encoder
Training Stage: Pre-training & Post-training

Language Model:
Number of Parameters: 35B total, 3B activated
Hidden Dimension: 2048
Token Embedding: 248320 (padded)
Number of Layers: 40
Hidden Layout: 10 × (3 × (Gated DeltaNet → MoE) → 1 × (Gated Attention → MoE))

Gated DeltaNet:
Number of Linear Attention Heads: 32 for V, 16 for QK
Head Dimension: 128

Gated Attention:
Number of Attention Heads: 16 for Q, 2 for KV
Head Dimension: 256
Rotary Position Embedding Dimension: 64

Mixture of Experts:
Number of Experts: 256
Number of Activated Experts: 8 routed + 1 shared
Expert Intermediate Dimension: 512
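As a back-of-envelope check, the hybrid layout can be counted in code. This sketch uses only the numbers quoted in the description above; the per-block split is my reading of the "10 × (3 × (Gated DeltaNet → MoE) → 1 × (Gated Attention → MoE))" notation:

```python
# Hybrid stack: 10 repeated blocks, each = 3 linear-attention (Gated DeltaNet)
# layers followed by 1 full-attention (Gated Attention) layer.
blocks = 10
deltanet_per_block = 3
attention_per_block = 1

deltanet_layers = blocks * deltanet_per_block      # 30 linear-attention layers
attention_layers = blocks * attention_per_block    # 10 full-attention layers
total_layers = deltanet_layers + attention_layers  # should match "Number of Layers: 40"

# Every layer is followed by an MoE block; 8 routed + 1 shared of 256 experts
# fire per token, so only a small fraction of expert weights is active.
experts_total = 256
experts_active = 8 + 1
active_fraction = experts_active / experts_total

print(total_layers, active_fraction)  # 40 0.03515625
```

The roughly 3.5% active-expert fraction is what lets a 35B-parameter checkpoint run with only about 3B activated parameters per token, as the spec states.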

License:

Apache license 2.0

Inference parameters

Description of the template:
{%- set image_count = namespace(value=0) %} {%- set video_count = namespace(value=0) %} {%- macro render_content(content, do_vision_count, is_system_content=false) %} {%- if content is string %} {{- content }} {%- elif content is iterable and content is not mapping %} {%- for item in content %} {%- if 'image' in item or 'image_url' in item or item.type == 'image' %} {%- if is_system_content %} {{- raise_exception('System message cannot contain images.') }} {%- endif %} {%- if do_vision_count %} {%- set image_count.value = image_count.value + 1 %} {%- endif %} {%- if add_vision_id %} {{- 'Picture ' ~ image_count.value ~ ': ' }} {%- endif %} {{- '<|vision_start|><|image_pad|><|vision_end|>' }} {%- elif 'video' in item or item.type == 'video' %} {%- if is_system_content %} {{- raise_exception('System message cannot contain videos.') }} {%- endif %} {%- if do_vision_count %} {%- set video_count.value = video_count.value + 1 %} {%- endif %} {%- if add_vision_id %} {{- 'Video ' ~ video_count.value ~ ': ' }} {%- endif %} {{- '<|vision_start|><|video_pad|><|vision_end|>' }} {%- elif 'text' in item %} {{- item.text }} {%- else %} {{- raise_exception('Unexpected item type in content.') }} {%- endif %} {%- endfor %} {%- elif content is none or content is undefined %} {{- '' }} {%- else %} {{- raise_exception('Unexpected content type.') }} {%- endif %} {%- endmacro %} {%- if not messages %} {{- raise_exception('No messages provided.') }} {%- endif %} {%- if tools and tools is iterable and tools is not mapping %} {{- '<|im_start|>system\n' }} {{- "# Tools\n\nYou have access to the following functions:\n\n<tools>" }} {%- for tool in tools %} {{- "\n" }} {{- tool | tojson }} {%- endfor %} {{- "\n</tools>" }} {{- '\n\nIf you choose to call a function ONLY reply in the following format with NO suffix:\n\n<tool_call>\n<function=example_function_name>\n<parameter=example_parameter_1>\nvalue_1\n</parameter>\n<parameter=example_parameter_2>\nThis is the value for the second 
parameter\nthat can span\nmultiple lines\n</parameter>\n</function>\n</tool_call>\n\n<IMPORTANT>\nReminder:\n- Function calls MUST follow the specified format: an inner <function=...></function> block must be nested within <tool_call></tool_call> XML tags\n- Required parameters MUST be specified\n- You may provide optional reasoning for your function call in natural language BEFORE the function call, but NOT after\n- If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls\n</IMPORTANT>' }} {%- if messages[0].role == 'system' %} {%- set content = render_content(messages[0].content, false, true)|trim %} {%- if content %} {{- '\n\n' + content }} {%- endif %} {%- endif %} {{- '<|im_end|>\n' }} {%- else %} {%- if messages[0].role == 'system' %} {%- set content = render_content(messages[0].content, false, true)|trim %} {{- '<|im_start|>system\n' + content + '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %} {%- for message in messages[::-1] %} {%- set index = (messages|length - 1) - loop.index0 %} {%- if ns.multi_step_tool and message.role == "user" %} {%- set content = render_content(message.content, false)|trim %} {%- if not(content.startswith('<tool_response>') and content.endswith('</tool_response>')) %} {%- set ns.multi_step_tool = false %} {%- set ns.last_query_index = index %} {%- endif %} {%- endif %} {%- endfor %} {%- if ns.multi_step_tool %} {{- raise_exception('No user query found in messages.') }} {%- endif %} {%- for message in messages %} {%- set content = render_content(message.content, true)|trim %} {%- if message.role == "system" %} {%- if not loop.first %} {{- raise_exception('System message must be at the beginning.') }} {%- endif %} {%- elif message.role == "user" %} {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }} {%- elif message.role == "assistant" %} {%- 
set reasoning_content = '' %} {%- if message.reasoning_content is string %} {%- set reasoning_content = message.reasoning_content %} {%- else %} {%- if '</think>' in content %} {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %} {%- set content = content.split('</think>')[-1].lstrip('\n') %} {%- endif %} {%- endif %} {%- set reasoning_content = reasoning_content|trim %} {%- if (preserve_thinking is defined and preserve_thinking is true) or (loop.index0 > ns.last_query_index) %} {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content + '\n</think>\n\n' + content }} {%- else %} {{- '<|im_start|>' + message.role + '\n' + content }} {%- endif %} {%- if message.tool_calls and message.tool_calls is iterable and message.tool_calls is not mapping %} {%- for tool_call in message.tool_calls %} {%- if tool_call.function is defined %} {%- set tool_call = tool_call.function %} {%- endif %} {%- if loop.first %} {%- if content|trim %} {{- '\n\n<tool_call>\n<function=' + tool_call.name + '>\n' }} {%- else %} {{- '<tool_call>\n<function=' + tool_call.name + '>\n' }} {%- endif %} {%- else %} {{- '\n<tool_call>\n<function=' + tool_call.name + '>\n' }} {%- endif %} {%- if tool_call.arguments is defined %} {%- for args_name, args_value in tool_call.arguments|items %} {{- '<parameter=' + args_name + '>\n' }} {%- set args_value = args_value | string if args_value is string else args_value | tojson | safe %} {{- args_value }} {{- '\n</parameter>\n' }} {%- endfor %} {%- endif %} {{- '</function>\n</tool_call>' }} {%- endfor %} {%- endif %} {{- '<|im_end|>\n' }} {%- elif message.role == "tool" %} {%- if loop.previtem and loop.previtem.role != "tool" %} {{- '<|im_start|>user' }} {%- endif %} {{- '\n<tool_response>\n' }} {{- content }} {{- '\n</tool_response>' }} {%- if not loop.last and loop.nextitem.role != "tool" %} {{- '<|im_end|>\n' }} {%- elif loop.last %} {{- '<|im_end|>\n' }} {%- endif %} {%- else %} {{- 
raise_exception('Unexpected message role.') }} {%- endif %} {%- endfor %} {%- if add_generation_prompt %} {{- '<|im_start|>assistant\n' }} {%- if enable_thinking is defined and enable_thinking is false %} {{- '<think>\n\n</think>\n\n' }} {%- else %} {{- '<think>\n' }} {%- endif %} {%- endif %}
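One non-obvious piece of the template above is how it recovers `reasoning_content` when the field is not supplied separately: it splits the assistant message text on the `<think>`/`</think>` markers. The same logic in plain Python (a sketch mirroring the Jinja expressions, not code shipped with the model):

```python
def split_reasoning(content: str) -> tuple[str, str]:
    """Mirror the template: pull reasoning out of '<think>...</think>' if present."""
    if '</think>' not in content:
        return '', content  # no think block: everything is the final answer
    # Text before '</think>', then inside the last '<think>' marker
    reasoning = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n')
    # Text after the last '</think>' is the visible answer
    answer = content.split('</think>')[-1].lstrip('\n')
    return reasoning.strip(), answer

print(split_reasoning('<think>\nstep 1\n</think>\nfinal answer'))
# ('step 1', 'final answer')
```

Note that the template then re-emits the `<think>` block only for messages after the last user query (or when `preserve_thinking` is set), so earlier reasoning is dropped from the rendered context.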

System prompt:
Solve the task according to the instructions below. Do not give any explanations or clarifications for your answer. Do not write anything extra. Write only what the instructions ask for. If the instructions ask you to solve a problem, write only the numeric answer, without the solution steps or explanations. If the instructions ask you to output a letter, digit, or word, output only that. If the instructions ask you to choose one of the answer options and output the letter or digit corresponding to it, output only that letter or digit; give no explanations and add no punctuation, just 1 character in the answer. If the instructions ask you to complete the code of a Python function, write the code right away, keeping the indentation as if you were continuing the function from the instructions; give no explanations, write no comments, use only the arguments from the function signature in the instructions, and do not try to read data via the input function. Do not apologize and do not start a dialogue. Output only the answer and nothing else.

Generation Parameters:
rucodeeval, ruhumaneval: do_sample=true, temperature=0.6, until=["<|im_end|>"], max_gen_toks=10000
All other tasks (rutie_gen, bps_gen, chegeka, lcs_gen, mamuramu_gen, mathlogicqa_gen, multiq, parus_gen, rcb_gen, rudetox, ruethics_gen, ruhatespeech_gen, ruhhh_gen, rummlu_gen, rumodar, rumultiar, ruopenbookqa_gen, ruworldtree_gen, rwsd_gen, simplear, use): do_sample=false, until=["<|im_end|>"], max_gen_toks=10000
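The per-task parameter strings can be turned into keyword arguments with a small parser. A sketch under stated assumptions: the key names follow the lm-evaluation-harness-style conventions used on this page, values separated by "," or ";", and the `until` stop-sequence list is decoded as JSON:

```python
import json
import re

def parse_gen_params(spec: str) -> dict:
    """Parse 'key=value' pairs; a value is either a JSON list or a bare scalar."""
    params = {}
    # A value is a [...] list (stop sequences) or a run with no separator chars.
    for key, value in re.findall(r'(\w+)=(\[[^\]]*\]|[^,;]+)', spec):
        if value.startswith('['):
            params[key] = json.loads(value)       # e.g. ["<|im_end|>"]
        elif value in ('true', 'false'):
            params[key] = value == 'true'         # booleans
        else:
            try:
                params[key] = int(value)          # e.g. max_gen_toks
            except ValueError:
                try:
                    params[key] = float(value)    # e.g. temperature
                except ValueError:
                    params[key] = value           # fall back to raw string
    return params

print(parse_gen_params('do_sample=true;temperature=0.6;until=["<|im_end|>"],max_gen_toks=10000'))
```

The mixed ";"/"," separators in the source strings are handled uniformly, since the regex only needs a separator to terminate a bare scalar.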

Context size: 262144

Ratings by subcategory

USE (metric: Grade norm)
Model, team 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 8_0 8_1 8_2 8_3 8_4
Qwen3.6-35B-A3B
MERA
0.767 0.667 0.867 0.2 0.667 0.9 0.567 - 0.133 0.167 0.333 0.333 0.767 0.633 0.467 0.867 0.6 0.433 0.867 0.533 0.767 0.733 0.767 0.733 0.7 0.908 0.6 0.867 0.833 0.667 0.833
ruHHH
Model, team Honest Helpful Harmless
Qwen3.6-35B-A3B
MERA
0.836 0.898 0.948
ruMMLU
Model, team Anatomy Virology Astronomy Marketing Nutrition Sociology Management Philosophy Prehistory Human aging Econometrics Formal logic Global facts Jurisprudence Miscellaneous Moral disputes Business ethics Biology (college) Physics (college) Human sexuality Moral scenarios World religions Abstract algebra Medicine (college) Machine learning Medical genetics Professional law PR Security studies Chemistry (college) Computer security International law Logical fallacies Politics Clinical knowledge Conceptual physics Math (college) Biology (high school) Physics (high school) Chemistry (high school) Geography (high school) Professional medicine Electrical engineering Elementary mathematics Psychology (high school) Statistics (high school) History (high school) Math (high school) Professional accounting Professional psychology Computer science (college) World history (high school) Macroeconomics Microeconomics Computer science (high school) European history Government and politics
Qwen3.6-35B-A3B
MERA
0.881 0.56 0.967 0.919 0.902 0.915 0.864 0.862 0.917 0.749 0.842 0.921 0.6 0.889 0.943 0.835 0.81 0.965 0.978 0.878 0.792 0.877 0.95 0.861 0.866 0.98 0.711 0.713 0.8 0.74 0.83 0.926 0.865 0.909 0.925 0.957 0.97 0.961 0.94 0.936 0.899 0.956 0.841 0.955 0.948 0.894 0.931 0.981 0.911 0.869 0.94 0.911 0.931 0.958 0.97 0.861 0.948
ruDetox
Model, team SIM FL STA
Qwen3.6-35B-A3B
MERA
0.709 0.656 0.771
MaMuRAMu
Model, team Anatomy Virology Astronomy Marketing Nutrition Sociology Management Philosophy Pre-History Gerontology Econometrics Formal logic Global facts Jurisprudence Miscellaneous Moral disputes Business ethics Biology (college) Physics (college) Human sexuality Moral scenarios World religions Abstract algebra Medicine (college) Machine learning Genetics Professional law PR Security Chemistry (college) Computer security International law Logical fallacies Politics Clinical knowledge Conceptual physics Math (college) Biology (high school) Physics (high school) Chemistry (high school) Geography (high school) Professional medicine Electrical engineering Elementary mathematics Psychology (high school) Statistics (high school) History (high school) Math (high school) Professional accounting Professional psychology Computer science (college) World history (high school) Macroeconomics Microeconomics Computer science (high school) European history Government and politics
Qwen3.6-35B-A3B
MERA
0.822 0.921 0.8 0.759 0.947 0.81 0.828 0.719 0.962 0.8 0.859 0.858 0.658 0.907 0.848 0.827 0.832 0.8 0.895 0.825 0.912 0.983 0.933 0.935 0.956 0.955 0.91 0.754 0.947 0.956 0.911 0.923 0.929 0.965 0.909 0.893 0.933 0.867 0.93 0.954 0.963 0.937 0.889 1 0.948 0.911 0.914 0.909 0.938 0.965 0.911 0.957 0.886 0.792 0.814 0.877 0.956
ruEthics, criterion: Correct
Model, team Virtue Law Moral Justice Utilitarianism
Qwen3.6-35B-A3B
MERA
0.416 0.452 0.461 0.385 0.356
ruEthics, criterion: Good
Model, team Virtue Law Moral Justice Utilitarianism
Qwen3.6-35B-A3B
MERA
0.453 0.427 0.473 0.398 0.391
ruEthics, criterion: Ethical
Model, team Virtue Law Moral Justice Utilitarianism
Qwen3.6-35B-A3B
MERA
0.624 0.634 0.667 0.581 0.522
ruHateSpeech
Model, team Women Men LGBT Nationalities Migrants Other
Qwen3.6-35B-A3B
MERA
0.944 0.8 0.941 0.973 1 0.902