Mistral-7B-Instruct-v0.2

MERA
Created at 06.10.2024 21:01
Overall result: 0.318
Place in the rating: 394
Weak tasks:
RWSD: 511
PARus: 347
RCB: 369
ruEthics: 237
MultiQ: 251
ruWorldTree: 360
ruOpenBookQA: 394
CheGeKa: 236
ruMMLU: 364
ruHateSpeech: 248
ruDetox: 243
ruHHH: 338
ruTiE: 273
USE: 355
MathLogicQA: 456
ruMultiAr: 365
SimpleAr: 355
LCS: 100
BPS: 281
ruModAr: 443
MaMuRAMu: 320

Ratings for leaderboard tasks


Task name | Result | Metric
LCS | 0.14 | Accuracy
RCB | 0.463 / 0.327 | Accuracy / F1 macro
USE | 0.074 | Grade norm
RWSD | 0.419 | Accuracy
PARus | 0.64 | Accuracy
ruTiE | 0.609 | Accuracy
MultiQ | 0.34 / 0.181 | F1 / Exact match
CheGeKa | 0.107 / 0.06 | F1 / Exact match
ruModAr | 0.079 | Exact match
MaMuRAMu | 0.53 | Accuracy
ruMultiAr | 0.146 | Exact match
ruCodeEval | 0 / 0 / 0 | Pass@k
MathLogicQA | 0.284 | Accuracy
ruWorldTree | 0.682 / 0.56 | Accuracy / F1 macro
ruOpenBookQA | 0.543 / 0.438 | Accuracy / F1 macro
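The code-generation tasks (ruCodeEval above, ruHumanEval below) report Pass@k: the probability that at least one of k sampled completions passes all tests. As a sketch only, assuming the standard unbiased estimator of Chen et al. (2021) is used (the leaderboard itself only reports the scores):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimate given n generated samples,
    of which c pass the tests. Returns P(at least one of k passes)."""
    if n - c < k:          # every size-k subset must contain a passing sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With c = 0, the estimate is 0 for every k, which matches the 0 / 0 / 0 rows in these tables.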

Evaluation on open tasks:



Task name | Result | Metric
BPS | 0.888 | Accuracy
ruMMLU | 0.42 | Accuracy
SimpleAr | 0.833 | Exact match
ruHumanEval | 0 / 0 / 0 | Pass@k
ruHHH | 0.551 | Accuracy
ruHateSpeech | 0.675 | Accuracy
ruDetox | 0.162 |
ruEthics (per-label correlations):
Criterion | Correct | Good | Ethical
Virtue | 0.23 | 0.31 | 0.293
Law | 0.246 | 0.301 | 0.272
Moral | 0.256 | 0.292 | 0.295
Justice | 0.2 | 0.247 | 0.253
Utilitarianism | 0.22 | 0.259 | 0.298

Information about the submission:

MERA version: v.1.2.0
Torch version: 2.4.0
Codebase version: 9b26db97
CUDA version: 12.1
Model weights precision: bfloat16
Seed: 1234
Batch size: 1
Transformers version: 4.43.2
Number and type of GPUs: 1 x NVIDIA H100 80GB HBM3
Architecture: vLLM

Team:

MERA

Name of the ML model:

Mistral-7B-Instruct-v0.2

Model size

7.2B

Model type:

Open

SFT

Architecture description:

An instruct fine-tuned version of Mistral-7B-v0.2, which inherits the Mistral-7B-v0.1 architecture with the following differences:
• No sliding-window attention
• Context length expanded to 32k tokens

Description of the training:

Fine-tuned on instruction datasets publicly available on the Hugging Face repository. No proprietary data or training tricks were utilized.

Pretrain data:

Fine-tuned on instruction datasets publicly available on the Hugging Face repository. No proprietary data or training tricks were utilized.

License:

apache-2.0

Inference parameters

Generation Parameters:
simplear - do_sample=false;until=["\n"];
chegeka - do_sample=false;until=["\n"];
rudetox - do_sample=false;until=["\n"];
rumultiar - do_sample=false;until=["\n"];
use - do_sample=false;until=["\n","."];
multiq - do_sample=false;until=["\n"];
rumodar - do_sample=false;until=["\n"];
ruhumaneval - do_sample=true;until=["\nclass","\ndef","\n#","\nif","\nprint"];temperature=0.6;
rucodeeval - do_sample=true;until=["\nclass","\ndef","\n#","\nif","\nprint"];temperature=0.6;
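Each entry above is a semicolon-separated list of key=value pairs. A small illustrative parser for this format (a hypothetical helper, not part of the MERA codebase, which may parse these strings differently):

```python
import ast

def parse_gen_params(spec: str) -> dict:
    """Parse a per-task generation-parameter string such as
    'do_sample=false;until=["\\n"];temperature=0.6;' into a dict."""
    params = {}
    for item in spec.strip().strip(";").split(";"):
        if not item:
            continue
        key, _, value = item.partition("=")
        value = value.strip()
        if value in ("true", "false"):        # lowercase JSON-style booleans
            params[key.strip()] = (value == "true")
        else:                                 # numbers and lists of strings
            params[key.strip()] = ast.literal_eval(value)
    return params
```

For example, the rucodeeval entry parses to do_sample=True, a five-element stop list, and temperature 0.6.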

The size of the context:
simplear, bps, lcs, chegeka, mathlogicqa, parus, rcb, rudetox, ruhatespeech, rummlu, ruworldtree, ruopenbookqa, rumultiar, use, rwsd, mamuramu, multiq, rumodar, ruethics, ruhhh, ruhumaneval, rucodeeval - 32768
rutie - 20000
rutie - 10000

System prompt (original, in Russian):
Реши задачу по инструкции ниже. Не давай никаких объяснений и пояснений к своему ответу. Не пиши ничего лишнего. Пиши только то, что указано в инструкции. Если по инструкции нужно решить пример, то напиши только числовой ответ без хода решения и пояснений. Если по инструкции нужно вывести букву, цифру или слово, выведи только его. Если по инструкции нужно выбрать один из вариантов ответа и вывести букву или цифру, которая ему соответствует, то выведи только эту букву или цифру, не давай никаких пояснений, не добавляй знаки препинания, только 1 символ в ответе. Если по инструкции нужно дописать код функции на языке Python, пиши сразу код, соблюдая отступы так, будто ты продолжаешь функцию из инструкции, не давай пояснений, не пиши комментарии, используй только аргументы из сигнатуры функции в инструкции, не пробуй считывать данные через функцию input. Не извиняйся, не строй диалог. Выдавай только ответ и ничего больше.
English translation: Solve the task following the instruction below. Do not give any explanations or clarifications for your answer. Do not write anything extra. Write only what the instruction asks for. If the instruction asks to solve an arithmetic problem, write only the numeric answer without the solution steps or explanations. If the instruction asks to output a letter, digit, or word, output only that. If the instruction asks to choose one of the answer options and output the letter or digit corresponding to it, output only that letter or digit, with no explanations, no punctuation, just one character in the answer. If the instruction asks to complete Python function code, write the code right away, keeping the indentation as if continuing the function from the instruction; give no explanations, write no comments, use only the arguments from the function signature in the instruction, and do not try to read data via the input function. Do not apologize, do not start a dialogue. Output only the answer and nothing else.

Description of the template:
{%- if messages[0]['role'] == 'system' %}
    {%- set system_message = messages[0]['content'] %}
    {%- set loop_messages = messages[1:] %}
{%- else %}
    {%- set loop_messages = messages %}
{%- endif %}

{{- bos_token }}
{%- for message in loop_messages %}
    {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}
        {{- raise_exception('After the optional system message, conversation roles must alternate user/assistant/user/assistant/...') }}
    {%- endif %}
    {%- if message['role'] == 'user' %}
        {%- if loop.first and system_message is defined %}
            {{- ' [INST] ' + system_message + '\n\n' + message['content'] + ' [/INST]' }}
        {%- else %}
            {{- ' [INST] ' + message['content'] + ' [/INST]' }}
        {%- endif %}
    {%- elif message['role'] == 'assistant' %}
        {{- ' ' + message['content'] + eos_token }}
    {%- else %}
        {{- raise_exception('Only user and assistant roles are supported, with the exception of an initial optional system message!') }}
    {%- endif %}
{%- endfor %}
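As a readability aid, the Jinja template above can be mirrored in plain Python. This is a sketch, not the tokenizer's own code; the function name and the default special-token strings are assumptions:

```python
def format_mistral_chat(messages, bos_token="<s>", eos_token="</s>"):
    """Render a chat the way the template above does: an optional leading
    system message is folded into the first user turn, and roles must
    alternate user/assistant after it."""
    if messages and messages[0]["role"] == "system":
        system_message = messages[0]["content"]
        loop_messages = messages[1:]
    else:
        system_message = None
        loop_messages = messages

    out = bos_token
    for i, m in enumerate(loop_messages):
        if (m["role"] == "user") != (i % 2 == 0):
            raise ValueError("roles must alternate user/assistant")
        if m["role"] == "user":
            if i == 0 and system_message is not None:
                out += " [INST] " + system_message + "\n\n" + m["content"] + " [/INST]"
            else:
                out += " [INST] " + m["content"] + " [/INST]"
        elif m["role"] == "assistant":
            out += " " + m["content"] + eos_token
        else:
            raise ValueError("only user and assistant roles are supported")
    return out
```

In practice the same rendering is obtained from the tokenizer's own chat-template machinery; this sketch only makes the branching explicit.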


Ratings by subcategory

USE (metric: Grade Norm), by subtask:
Model, team | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 8_0 8_1 8_2 8_3 8_4
Mistral-7B-Instruct-v0.2 (MERA) | 0.133 0.133 0.733 0.133 0 0.067 0 - 0 0 0.033 0 0.067 0 0.033 0.2 0 0 0 0.033 0.033 0.067 0.033 0 0 0.017 0.067 0.033 0.233 0 0.2
ruHHH:
Model, team | Honest | Helpful | Harmless
Mistral-7B-Instruct-v0.2 (MERA) | 0.623 | 0.492 | 0.534
ruMMLU, by subject:
Model, team | Anatomy Virology Astronomy Marketing Nutrition Sociology Management Philosophy Prehistory Human aging Econometrics Formal logic Global facts Jurisprudence Miscellaneous Moral disputes Business ethics Biology (college) Physics (college) Human sexuality Moral scenarios World religions Abstract algebra Medicine (college) Machine learning Medical genetics Professional law PR Security studies Chemistry (college) Computer security International law Logical fallacies Politics Clinical knowledge Conceptual physics Math (college) Biology (high school) Physics (high school) Chemistry (high school) Geography (high school) Professional medicine Electrical engineering Elementary mathematics Psychology (high school) Statistics (high school) History (high school) Math (high school) Professional accounting Professional psychology Computer science (college) World history (high school) Macroeconomics Microeconomics Computer science (high school) European history Government and politics
Mistral-7B-Instruct-v0.2 (MERA) | 0.37 0.367 0.434 0.641 0.412 0.652 0.553 0.45 0.463 0.462 0.281 0.286 0.31 0.491 0.539 0.486 0.55 0.347 0.289 0.45 0.239 0.526 0.21 0.41 0.366 0.45 0.323 0.5 0.608 0.31 0.54 0.603 0.466 0.687 0.442 0.368 0.24 0.465 0.291 0.384 0.566 0.375 0.414 0.337 0.515 0.324 0.583 0.252 0.33 0.413 0.35 0.62 0.379 0.42 0.43 0.576 0.482
ruDetox:
Model, team | SIM (meaning preservation) | FL (fluency) | STA (style transfer accuracy)
Mistral-7B-Instruct-v0.2 (MERA) | 0.579 | 0.48 | 0.636
MaMuRAMu, by subject:
Model, team | Anatomy Virology Astronomy Marketing Nutrition Sociology Management Philosophy Prehistory Gerontology Econometrics Formal logic Global facts Jurisprudence Miscellaneous Moral disputes Business ethics Biology (college) Physics (college) Human sexuality Moral scenarios World religions Abstract algebra Medicine (college) Machine learning Genetics Professional law PR Security studies Chemistry (college) Computer security International law Logical fallacies Politics Clinical knowledge Conceptual physics Math (college) Biology (high school) Physics (high school) Chemistry (high school) Geography (high school) Professional medicine Electrical engineering Elementary mathematics Psychology (high school) Statistics (high school) History (high school) Math (high school) Professional accounting Professional psychology Computer science (college) World history (high school) Macroeconomics Microeconomics Computer science (high school) European history Government and politics
Mistral-7B-Instruct-v0.2 (MERA) | 0.444 0.446 0.433 0.444 0.487 0.534 0.328 0.614 0.635 0.508 0.513 0.567 0.425 0.55 0.48 0.568 0.579 0.467 0.333 0.579 0.263 0.61 0.422 0.55 0.511 0.545 0.615 0.544 0.789 0.644 0.533 0.641 0.5 0.649 0.439 0.464 0.467 0.689 0.316 0.538 0.6 0.524 0.711 0.467 0.793 0.689 0.638 0.386 0.538 0.667 0.622 0.522 0.532 0.403 0.395 0.456 0.656
ruEthics, "Correct" label:
Model, team | Virtue | Law | Moral | Justice | Utilitarianism
Mistral-7B-Instruct-v0.2 (MERA) | 0.23 | 0.246 | 0.256 | 0.2 | 0.22
ruEthics, "Good" label:
Model, team | Virtue | Law | Moral | Justice | Utilitarianism
Mistral-7B-Instruct-v0.2 (MERA) | 0.31 | 0.301 | 0.292 | 0.247 | 0.259
ruEthics, "Ethical" label:
Model, team | Virtue | Law | Moral | Justice | Utilitarianism
Mistral-7B-Instruct-v0.2 (MERA) | 0.293 | 0.272 | 0.295 | 0.253 | 0.298
ruHateSpeech, by target group:
Model, team | Women | Men | LGBT | Nationalities | Migrants | Other
Mistral-7B-Instruct-v0.2 (MERA) | 0.722 | 0.686 | 0.588 | 0.703 | 0.429 | 0.623