Qwen3-Next-80B-A3B-Thinking

MERA
Created at 03.03.2026 19:26

Ratings for leaderboard tasks


Task name                  Result   Place in the rating
Agricultural industry      0.627    3
Medicine and healthcare    0.844    2

Information about the submission

MERA version
v1.0.0
Torch version
2.9.1
Codebase version
7c56310
CUDA version
12.8
Model weights precision
bfloat16
Seed
1234
Batch size
1
Transformers version
4.56.1
Number and type of GPUs
8 x NVIDIA A100-SXM4-80GB
Architecture
local-chat-completions
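The local-chat-completions architecture means the evaluation harness scores the model through an OpenAI-compatible chat-completions endpoint rather than calling it in-process. A minimal sketch of such a request body, using the generation settings reported below; the served model name is an assumption:

```python
# Sketch of the request body an OpenAI-compatible chat-completions
# endpoint expects; the served model name is an assumption.
import json

def build_chat_request(prompt: str) -> dict:
    """Build a chat-completions payload matching the reported
    generation settings (greedy decoding, stop on <|im_end|>)."""
    return {
        "model": "Qwen3-Next-80B-A3B-Thinking",  # assumed served name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,          # do_sample=false -> greedy decoding
        "max_tokens": 10000,         # max_gen_toks=10000
        "stop": ["<|im_end|>"],      # until=["<|im_end|>"]
    }

payload = build_chat_request("2 + 2 = ?")
print(json.dumps(payload, indent=2))
```

The large max_tokens budget matters for thinking models: the reasoning trace is generated before the answer and counts against the same limit.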

Team:

MERA

Name of the ML model:

Qwen3-Next-80B-A3B-Thinking

Model size

80.0B

Model type:

Open

SFT

Additional links:

https://arxiv.org/abs/2505.09388

Architecture description:

Qwen3-Next-80B-A3B-Thinking is a large decoder-only transformer language model from the Qwen3 family. It is a sparse mixture-of-experts model: of its roughly 80 billion total parameters, about 3 billion are activated per token (the "A3B" in the name). The model is designed for advanced reasoning tasks and generates intermediate reasoning steps inside a <think> block before producing the final answer. It supports long-context inputs and offers improved reasoning capabilities compared to earlier Qwen models.
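Because the model emits its reasoning between <think> tags before the final answer, downstream code typically separates the two. A minimal sketch, mirroring the tag handling in the model's chat template (shown at the end of this page):

```python
def split_reasoning(text: str) -> tuple[str, str]:
    """Separate <think>...</think> reasoning from the final answer,
    mirroring the tag-splitting logic in the model's chat template."""
    if "</think>" not in text:
        # No closing tag: treat the whole output as the answer.
        return "", text.strip()
    reasoning = text.split("</think>")[0].split("<think>")[-1].strip()
    answer = text.split("</think>")[-1].strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>\n4 is even.\n</think>\n\nThe answer is 4."
)
print(answer)  # The answer is 4.
```

Benchmark scoring normally runs only on the answer part; the reasoning trace is discarded or logged separately.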

Description of the training:

The model follows a multi-stage training pipeline including large-scale pretraining followed by post-training stages. Post-training includes instruction tuning and reinforcement-learning-based alignment to improve reasoning, instruction following, and response quality.

Pretrain data:

The model was pretrained on a large multilingual corpus of approximately 36 trillion tokens covering 119 languages, and then post-trained on instruction-following and reasoning-oriented datasets.

License:

Apache License 2.0

Inference parameters

Generation Parameters:
agro_bench - do_sample=false; until=["<|im_end|>"]; max_gen_toks=10000
aqua_bench - do_sample=false; until=["<|im_end|>"]; max_gen_toks=10000
med_bench - do_sample=false; until=["<|im_end|>"]; max_gen_toks=10000
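The parameter strings above follow a simple semicolon-separated key=value format. A small helper to turn one into Python keyword arguments (a convenience sketch, not part of the MERA codebase):

```python
import json

def parse_gen_params(spec: str) -> dict:
    """Parse a 'key=value;key=value;' generation-parameter string,
    JSON-decoding values where possible (booleans, numbers, lists)."""
    params = {}
    for pair in filter(None, spec.split(";")):
        key, _, value = pair.partition("=")
        try:
            params[key.strip()] = json.loads(value)
        except json.JSONDecodeError:
            params[key.strip()] = value  # keep undecodable values as raw strings
    return params

spec = 'do_sample=false;until=["<|im_end|>"];max_gen_toks=10000;'
print(parse_gen_params(spec))
# {'do_sample': False, 'until': ['<|im_end|>'], 'max_gen_toks': 10000}
```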

Description of the template:
{%- if tools %}
    {{- '<|im_start|>system\n' }}
    {%- if messages[0].role == 'system' %}
        {{- messages[0].content + '\n\n' }}
    {%- endif %}
    {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
    {%- if messages[0].role == 'system' %}
        {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
    {%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for message in messages[::-1] %}
    {%- set index = (messages|length - 1) - loop.index0 %}
    {%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
        {%- set ns.multi_step_tool = false %}
        {%- set ns.last_query_index = index %}
    {%- endif %}
{%- endfor %}
{%- for message in messages %}
    {%- if message.content is string %}
        {%- set content = message.content %}
    {%- else %}
        {%- set content = '' %}
    {%- endif %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
        {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {%- set reasoning_content = '' %}
        {%- if message.reasoning_content is string %}
            {%- set reasoning_content = message.reasoning_content %}
        {%- else %}
            {%- if '</think>' in content %}
                {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
                {%- set content = content.split('</think>')[-1].lstrip('\n') %}
            {%- endif %}
        {%- endif %}
        {%- if loop.index0 > ns.last_query_index %}
            {%- if loop.last or (not loop.last and reasoning_content) %}
                {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
            {%- else %}
                {{- '<|im_start|>' + message.role + '\n' + content }}
            {%- endif %}
        {%- else %}
            {{- '<|im_start|>' + message.role + '\n' + content }}
        {%- endif %}
        {%- if message.tool_calls %}
            {%- for tool_call in message.tool_calls %}
                {%- if (loop.first and content) or (not loop.first) %}
                    {{- '\n' }}
                {%- endif %}
                {%- if tool_call.function %}
                    {%- set tool_call = tool_call.function %}
                {%- endif %}
                {{- '<tool_call>\n{"name": "' }}
                {{- tool_call.name }}
                {{- '", "arguments": ' }}
                {%- if tool_call.arguments is string %}
                    {{- tool_call.arguments }}
                {%- else %}
                    {{- tool_call.arguments | tojson }}
                {%- endif %}
                {{- '}\n</tool_call>' }}
            {%- endfor %}
        {%- endif %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n<think>\n' }}
{%- endif %}
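For the common no-tools case, the template above reduces to standard ChatML formatting plus a <think> generation prompt. A simplified Python re-implementation of that branch (illustrative only; in practice the tokenizer's apply_chat_template renders the Jinja template itself):

```python
def render_prompt(messages: list[dict], add_generation_prompt: bool = True) -> str:
    """Simplified rendering of the template's no-tools branch:
    each message becomes '<|im_start|>role\\ncontent<|im_end|>\\n',
    then the assistant turn is opened with an empty <think> block."""
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Matches the template's final branch: the model is prompted to
        # start its reply with a reasoning block.
        out.append("<|im_start|>assistant\n<think>\n")
    return "".join(out)

prompt = render_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi"},
])
print(prompt)
```

This skips tool handling and the logic that strips <think> blocks from earlier assistant turns, but shows why generation stops on <|im_end|>: it is the turn delimiter.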