Qwen3-Omni-30B-A3B-Instruct

MERA · Created at 22.01.2026 05:12

Ratings for leaderboard tasks


Board | Result | Attempted Score | Coverage | Place in the rating
Multi | 0.5 | 0.563 | 0.889 | 2
Images | 0.554 | 0.554 | 1 | 2
Audio | 0.561 | 0.561 | 1 | 2
Video | 0.41 | 0.615 | 0.667 | -
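The board-level numbers are consistent with the per-board Result being the attempted score weighted by coverage. A quick sanity check (this relation is inferred from the table values above, not documented by MERA):

```python
# Verify that Result ≈ Attempted Score × Coverage for each board.
# Values copied from the ratings table; tolerance allows for rounding.
rows = {
    "Multi":  (0.5,   0.563, 0.889),
    "Images": (0.554, 0.554, 1.0),
    "Audio":  (0.561, 0.561, 1.0),
    "Video":  (0.41,  0.615, 0.667),
}
for board, (result, attempted, coverage) in rows.items():
    assert abs(result - attempted * coverage) < 0.005, board
```

This explains why the Video board, with only 0.667 coverage, shows a Result well below its attempted score.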

Tasks


Task | Modality | Result | Metric
- | - | 0.741 | EM JudgeScore
- | - | 0.335 | EM F1
- | - | 0.73 | EM JudgeScore
- | - | 0.491 | EM JudgeScore
- | - | 0.562 | EM JudgeScore
- | - | 0.743 | EM JudgeScore
- | - | 0.588 | EM JudgeScore
- | - | 0.154 | EM JudgeScore
- | - | 0.678 | EM JudgeScore
- | - | 0.709 | EM JudgeScore
- | - | 0.51 | EM JudgeScore
- | - | 0.551 | EM JudgeScore
- | - | 0.435 | EM JudgeScore
- | - | 0.637 | EM JudgeScore
- | - | 0.224 | EM JudgeScore
    culture 0.095 / 0.292
    business 0.129 / 0.421
    medicine 0.094 / 0.318
    social_sciences 0.119 / 0.386
    fundamental_sciences 0.107 / 0.32
    applied_sciences 0.137 / 0.402
- | - | 0.673 | EM JudgeScore
    biology 0.709 / 0.776
    chemistry 0.703 / 0.746
    physics 0.749 / 0.824
    economics 0.605 / 0.669
    ru 0.489 / 0.541
    all 0.586 / 0.637
- | - | 0.807 | EM JudgeScore
    biology 0.684 / 0.754
    chemistry 0.731 / 0.791
    physics 0.778 / 0.874
    science 0.902 / 0.927

Information about the submission

MERA version: v1.0.0
Torch version: 2.8.0
Codebase version: 7e640aa
CUDA version: 12.8
Model weight precision: bfloat16
Seed: 1234
Batch size: 1
Transformers version: 4.57.1
GPUs: 1 x NVIDIA A100-SXM4-80GB
Architecture: openai-chat-completions
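The `openai-chat-completions` architecture means the model was queried through an OpenAI-compatible chat endpoint rather than loaded in-process. A minimal sketch of the request payload such a setup would send, using the greedy settings and seed reported in this submission (the model id here mirrors the card; the helper itself is illustrative, not MERA's actual client code):

```python
# Build a payload for an OpenAI-compatible /v1/chat/completions call.
# temperature=0 corresponds to do_sample=false (greedy decoding);
# `stop` carries the `until` stop sequences from the generation parameters.
def build_request(prompt: str, stop: list, max_tokens: int) -> dict:
    return {
        "model": "Qwen3-Omni-30B-A3B-Instruct",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,        # greedy decoding, as in this run
        "stop": stop,            # e.g. ["\n\n"] or ["<|endoftext|>"]
        "max_tokens": max_tokens,
        "seed": 1234,            # seed reported in the submission
    }

payload = build_request("2+2=?", stop=["\n\n"], max_tokens=64)
```

Each task's `until` and `max_gen_toks` values from the generation parameters below would be mapped into `stop` and `max_tokens` per request.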

Team: MERA
Name of the ML model: Qwen3-Omni-30B-A3B-Instruct
Model size: 35.0B
Model type: open, SFT

Inference parameters

Generation Parameters:
realvideoqa - until=["\n\n"];do_sample=false;temperature=0;
ruhhh_video - until=["\n\n"];do_sample=false;temperature=0;
ruslun - until=["\n\n"];do_sample=false;temperature=0;
realvqa - until=["\n\n"];do_sample=false;temperature=0;
runaturalsciencevqa_biology - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=64;
runaturalsciencevqa_chemistry - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=64;
runaturalsciencevqa_earth_science - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=64;
runaturalsciencevqa_physics - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=64;
ruenvaqa - until=["\n\n"];do_sample=false;temperature=0;
labtabvqa - until=["\n\n"];do_sample=false;temperature=0;
weird - until=["\n\n"];do_sample=false;temperature=0;
aquaria - until=["\n\n"];do_sample=false;temperature=0;
ruhhh_image - until=["\n\n"];do_sample=false;temperature=0;
ruclevr - until=["\n\n"];do_sample=false;temperature=0;
rumathvqa - until=["\n\n"];do_sample=false;temperature=0;
rucommonvqa - until=["\n\n"];do_sample=false;temperature=0;
schoolsciencevqa_biology - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=256;
schoolsciencevqa_chemistry - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=256;
schoolsciencevqa_earth_science - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=256;
schoolsciencevqa_economics - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=256;
schoolsciencevqa_history_all - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=256;
schoolsciencevqa_history_ru - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=256;
schoolsciencevqa_physics - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=256;
unisciencevqa_applied_sciences - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=256;
unisciencevqa_business - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=256;
unisciencevqa_cultural_studies - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=256;
unisciencevqa_fundamental_sciences - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=256;
unisciencevqa_health_and_medicine - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=256;
unisciencevqa_social_sciences - until=["<|endoftext|>"];temperature=0;do_sample=false;max_gen_toks=256;
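Each per-task string above is a semicolon-separated list of `key=value` pairs, with JSON-style lists for stop sequences. A small parser for these strings (a sketch for working with the format as shown here, not part of MERA's codebase):

```python
import json

def parse_gen_params(spec: str) -> dict:
    """Parse 'until=["\\n\\n"];do_sample=false;temperature=0;' into a dict."""
    params = {}
    for part in filter(None, spec.strip().split(";")):
        key, _, value = part.partition("=")
        if value in ("true", "false"):
            params[key.strip()] = value == "true"   # booleans like do_sample
        elif value.startswith("["):
            params[key.strip()] = json.loads(value)  # stop-sequence lists
        else:
            # numeric values like temperature=0 or max_gen_toks=256
            params[key.strip()] = float(value) if "." in value else int(value)
    return params

p = parse_gen_params('until=["\\n\\n"];do_sample=false;temperature=0;')
```

Note that every task runs with `do_sample=false` and `temperature=0`, i.e. fully greedy decoding; only the stop sequences and token budget vary.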

Context size: 8000