Mistral-7B-Instruct-v0.3

MERA Created at 22.09.2024 20:56
The overall result: 0.311
Place in the rating: 407
Weak tasks (place in the rating for each task):
RWSD: 556
PARus: 325
RCB: 306
ruEthics: 275
MultiQ: 352
ruWorldTree: 345
ruOpenBookQA: 367
CheGeKa: 300
ruMMLU: 358
ruHateSpeech: 309
ruDetox: 151
ruHHH: 376
ruTiE: 371
USE: 305
MathLogicQA: 333
ruMultiAr: 361
SimpleAr: 335
LCS: 308
BPS: 241
ruModAr: 417
MaMuRAMu: 311

Ratings for leaderboard tasks


Task name Result Metric
LCS 0.092 Accuracy
RCB 0.498 / 0.315 Accuracy / F1 macro
USE 0.095 Grade norm
RWSD 0.127 Accuracy
PARus 0.666 Accuracy
ruTiE 0.531 Accuracy
MultiQ 0.264 / 0.149 F1 / Exact match
CheGeKa 0.052 / 0.031 F1 / Exact match
ruModAr 0.152 Exact match
MaMuRAMu 0.535 Accuracy
ruMultiAr 0.152 Exact match
ruCodeEval 0 / 0 / 0 Pass@k
MathLogicQA 0.349 Accuracy
ruWorldTree 0.712 / 0.711 Accuracy / F1 macro
ruOpenBookQA 0.598 / 0.593 Accuracy / F1 macro
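ruCodeEval (and ruHumanEval below) report three Pass@k values; the specific k values are not stated on this card. Assuming the standard unbiased pass@k estimator from the HumanEval methodology (n samples per task, c of which pass the tests), a minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    where n = generations per task and c = generations passing the tests."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# When no generation passes (c = 0), pass@k is 0 for every k,
# which matches a 0 / 0 / 0 row.
print(pass_at_k(10, 0, 1))  # 0.0
print(pass_at_k(10, 3, 5))
```

The estimator averages this quantity over tasks; a single all-failing task alone yields the zero row shown above.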

Evaluation on open tasks:



Task name Result Metric
BPS 0.92 Accuracy
ruMMLU 0.426 Accuracy
SimpleAr 0.86 Exact match
ruHumanEval 0 / 0 / 0 Pass@k
ruHHH 0.528
ruHateSpeech 0.623
ruDetox 0.206
ruEthics
Correct Good Ethical
Virtue 0.204 0.326 0.307
Law 0.21 0.307 0.291
Moral 0.213 0.327 0.317
Justice 0.166 0.277 0.263
Utilitarianism 0.188 0.262 0.267
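The ruEthics numbers above are correlations between the model's binary answers and annotator labels along each of the five ethical criteria. Assuming Matthews correlation on binary labels (an assumption; the exact coefficient is not stated on this card), a minimal sketch:

```python
from math import sqrt

def matthews_corr(y_true, y_pred):
    """Matthews correlation coefficient for binary 0/1 labels.
    Returns 0.0 for degenerate cases where the denominator is zero."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

print(matthews_corr([1, 0, 1, 0], [1, 0, 1, 0]))  # 1.0 (perfect agreement)
```

Values near 0, as in the table, indicate the model's answers are close to uncorrelated with the ethical labels.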

Information about the submission:

MERA version: v.1.2.0
Torch version: 2.4.0
Codebase version: 9b26db97
CUDA version: 12.1
Model weights precision: float16
Seed: 1234
Batch size: 1
Transformers version: 4.43.2
Number and type of GPUs: 1 x NVIDIA H100 80GB HBM3
Architecture: vllm

Team:

MERA

Name of the ML model:

Mistral-7B-Instruct-v0.3

Model size

7.2B

Model type:

Open

SFT

Architecture description:

An instruct fine-tuned version of Mistral-7B-v0.3, which inherits the Mistral-7B-v0.1 architecture with the following differences:
• No sliding-window attention
• Context length expanded to 32k tokens
• Vocabulary extended to 32768 tokens

Description of the training:

Fine-tuned on instruction datasets publicly available on the Hugging Face repository. No proprietary data or training tricks were utilized.

Pretrain data:

Fine-tuned on instruction datasets publicly available on the Hugging Face repository. No proprietary data or training tricks were utilized.

License:

apache-2.0

Inference parameters

Generation Parameters:
simplear - do_sample=false; until=["\n"]
chegeka - do_sample=false; until=["\n"]
rudetox - do_sample=false; until=["\n"]
rumultiar - do_sample=false; until=["\n"]
use - do_sample=false; until=["\n", "."]
multiq - do_sample=false; until=["\n"]
rumodar - do_sample=false; until=["\n"]
ruhumaneval - do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
rucodeeval - do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
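The per-task settings above can be transcribed into a plain data structure; `sampling_kwargs` is a hypothetical helper (keyword names for stop sequences and sampling vary between frameworks such as transformers and vLLM):

```python
# Per-task decoding settings transcribed from the list above.
CODE_STOPS = ["\nclass", "\ndef", "\n#", "\nif", "\nprint"]
GENERATION_PARAMS = {
    "simplear":    {"do_sample": False, "until": ["\n"]},
    "chegeka":     {"do_sample": False, "until": ["\n"]},
    "rudetox":     {"do_sample": False, "until": ["\n"]},
    "rumultiar":   {"do_sample": False, "until": ["\n"]},
    "use":         {"do_sample": False, "until": ["\n", "."]},
    "multiq":      {"do_sample": False, "until": ["\n"]},
    "rumodar":     {"do_sample": False, "until": ["\n"]},
    "ruhumaneval": {"do_sample": True, "until": CODE_STOPS, "temperature": 0.6},
    "rucodeeval":  {"do_sample": True, "until": CODE_STOPS, "temperature": 0.6},
}

def sampling_kwargs(task: str) -> dict:
    """Map a task's settings onto generic generate() keyword arguments:
    greedy decoding when do_sample is false, sampling with the given
    temperature otherwise. Illustrative, not a real framework API."""
    cfg = GENERATION_PARAMS[task]
    kwargs = {"do_sample": cfg["do_sample"], "stop": cfg["until"]}
    if "temperature" in cfg:
        kwargs["temperature"] = cfg["temperature"]
    return kwargs

print(sampling_kwargs("ruhumaneval"))
```

Only the two code-generation tasks sample (temperature 0.6); every other listed task decodes greedily and stops at a newline (USE also stops at a period).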

The size of the context:
32768 - simplear, bps, lcs, chegeka, mathlogicqa, parus, rcb, rudetox, ruhatespeech, rummlu, ruworldtree, ruopenbookqa, rumultiar, use, rwsd, mamuramu, multiq, rumodar, ruethics, ruhhh, ruhumaneval, rucodeeval
20000 - rutie

System prompt (translated from Russian):
Solve the task according to the instruction below. Do not give any explanations or clarifications for your answer. Do not write anything extra. Write only what the instruction asks for. If the instruction asks to solve an arithmetic problem, write only the numeric answer without the solution steps or explanations. If the instruction asks to output a letter, digit, or word, output only that. If the instruction asks to choose one of the answer options and output the letter or digit corresponding to it, output only that letter or digit; do not give any explanations, do not add punctuation, only 1 character in the answer. If the instruction asks to complete Python function code, write the code right away, keeping the indentation as if you were continuing the function from the instruction; do not give explanations, do not write comments, use only the arguments from the function signature in the instruction, and do not try to read data via the input function. Do not apologize, do not build a dialogue. Output only the answer and nothing more.

Description of the template:
{%- if messages[0]["role"] == "system" %}
    {%- set system_message = messages[0]["content"] %}
    {%- set loop_messages = messages[1:] %}
{%- else %}
    {%- set loop_messages = messages %}
{%- endif %}
{%- if not tools is defined %}
    {%- set tools = none %}
{%- endif %}
{%- set user_messages = loop_messages | selectattr("role", "equalto", "user") | list %}

{#- This block checks for alternating user/assistant messages, skipping tool calling messages #}
{%- set ns = namespace() %}
{%- set ns.index = 0 %}
{%- for message in loop_messages %}
    {%- if not (message.role == "tool" or message.role == "tool_results" or (message.tool_calls is defined and message.tool_calls is not none)) %}
        {%- if (message["role"] == "user") != (ns.index % 2 == 0) %}
            {{- raise_exception("After the optional system message, conversation roles must alternate user/assistant/user/assistant/...") }}
        {%- endif %}
        {%- set ns.index = ns.index + 1 %}
    {%- endif %}
{%- endfor %}

{{- bos_token }}
{%- for message in loop_messages %}
    {%- if message["role"] == "user" %}
        {%- if tools is not none and (message == user_messages[-1]) %}
            {{- "[AVAILABLE_TOOLS] [" }}
            {%- for tool in tools %}
                {%- set tool = tool.function %}
                {{- '{"type": "function", "function": {' }}
                {%- for key, val in tool.items() if key != "return" %}
                    {%- if val is string %}
                        {{- '"' + key + '": "' + val + '"' }}
                    {%- else %}
                        {{- '"' + key + '": ' + val|tojson }}
                    {%- endif %}
                    {%- if not loop.last %}
                        {{- ", " }}
                    {%- endif %}
                {%- endfor %}
                {{- "}}" }}
                {%- if not loop.last %}
                    {{- ", " }}
                {%- else %}
                    {{- "]" }}
                {%- endif %}
            {%- endfor %}
            {{- "[/AVAILABLE_TOOLS]" }}
        {%- endif %}
        {%- if loop.last and system_message is defined %}
            {{- "[INST] " + system_message + "\n\n" + message["content"] + "[/INST]" }}
        {%- else %}
            {{- "[INST] " + message["content"] + "[/INST]" }}
        {%- endif %}
    {%- elif message.tool_calls is defined and message.tool_calls is not none %}
        {{- "[TOOL_CALLS] [" }}
        {%- for tool_call in message.tool_calls %}
            {%- set out = tool_call.function|tojson %}
            {{- out[:-1] }}
            {%- if not tool_call.id is defined or tool_call.id|length != 9 %}
                {{- raise_exception("Tool call IDs should be alphanumeric strings with length 9!") }}
            {%- endif %}
            {{- ', "id": "' + tool_call.id + '"}' }}
            {%- if not loop.last %}
                {{- ", " }}
            {%- else %}
                {{- "]" + eos_token }}
            {%- endif %}
        {%- endfor %}
    {%- elif message["role"] == "assistant" %}
        {{- " " + message["content"]|trim + eos_token }}
    {%- elif message["role"] == "tool_results" or message["role"] == "tool" %}
        {%- if message.content is defined and message.content.content is defined %}
            {%- set content = message.content.content %}
        {%- else %}
            {%- set content = message.content %}
        {%- endif %}
        {{- '[TOOL_RESULTS] {"content": ' + content|string + ", " }}
        {%- if not message.tool_call_id is defined or message.tool_call_id|length != 9 %}
            {{- raise_exception("Tool call IDs should be alphanumeric strings with length 9!") }}
        {%- endif %}
        {{- '"call_id": "' + message.tool_call_id + '"}[/TOOL_RESULTS]' }}
    {%- else %}
        {{- raise_exception("Only user and assistant roles are supported, with the exception of an initial optional system message!") }}
    {%- endif %}
{%- endfor %}
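For conversations without tools, the template above reduces to wrapping user turns in [INST] ... [/INST] and merging the system prompt into the last user message. A minimal Python sketch of that non-tool path (`build_prompt` is illustrative, not part of the model's tooling):

```python
def build_prompt(messages, bos="<s>", eos="</s>"):
    """Render a tool-free chat the way the template's non-tool branch does:
    an optional leading system message is prepended (with a blank line) to
    the user message only when that user message is the last message."""
    system = None
    if messages and messages[0]["role"] == "system":
        system = messages[0]["content"]
        messages = messages[1:]
    out = bos
    for i, m in enumerate(messages):
        if m["role"] == "user":
            if i == len(messages) - 1 and system is not None:
                out += "[INST] " + system + "\n\n" + m["content"] + "[/INST]"
            else:
                out += "[INST] " + m["content"] + "[/INST]"
        elif m["role"] == "assistant":
            # Assistant turns are emitted with a leading space and EOS.
            out += " " + m["content"].strip() + eos
    return out

print(build_prompt([
    {"role": "system", "content": "Answer briefly."},
    {"role": "user", "content": "2+2?"},
]))
```

Note the template injects the system message only into the final user turn, so in multi-turn conversations earlier turns are rendered without it.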


Ratings by subcategory

USE. Metric: Grade Norm
Model, team 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 8_0 8_1 8_2 8_3 8_4
Mistral-7B-Instruct-v0.3
MERA
0.067 0.133 0.633 0.033 0 0.167 0 - 0.033 0 0 0 0.167 0.033 0.1 0.183 0 0 0 0 0 0.2 0.033 0.033 0.033 0.117 0.033 0.133 0.3 0.033 0.233
ruHHH
Model, team Honest Helpful Harmless
Mistral-7B-Instruct-v0.3
MERA
0.508 0.542 0.534
ruMMLU
Model, team Anatomy Virology Astronomy Marketing Nutrition Sociology Management Philosophy Prehistory Human aging Econometrics Formal logic Global facts Jurisprudence Miscellaneous Moral disputes Business ethics Biology (college) Physics (college) Human sexuality Moral scenarios World religions Abstract algebra Medicine (college) Machine learning Medical genetics Professional law PR Security studies Chemistry (college) Computer security International law Logical fallacies Politics Clinical knowledge Conceptual physics Math (college) Biology (high school) Physics (high school) Chemistry (high school) Geography (high school) Professional medicine Electrical engineering Elementary mathematics Psychology (high school) Statistics (high school) History (high school) Math (high school) Professional accounting Professional psychology Computer science (college) World history (high school) Macroeconomics Microeconomics Computer science (high school) European history Government and politics
Mistral-7B-Instruct-v0.3
MERA
0.378 0.398 0.441 0.62 0.464 0.577 0.476 0.473 0.466 0.466 0.289 0.317 0.29 0.509 0.543 0.488 0.55 0.417 0.267 0.489 0.242 0.608 0.3 0.451 0.393 0.46 0.342 0.481 0.502 0.32 0.6 0.62 0.393 0.677 0.445 0.355 0.27 0.5 0.265 0.374 0.545 0.386 0.372 0.353 0.513 0.287 0.623 0.248 0.34 0.385 0.4 0.684 0.403 0.399 0.58 0.612 0.466
ruDetox
Model, team SIM FL STA
Mistral-7B-Instruct-v0.3
MERA
0.63 0.515 0.689
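ruDetox's components are content similarity (SIM), fluency (FL), and style transfer accuracy (STA). Assuming the joint score is the per-sample product averaged over samples (the convention of the RUSSE detox methodology; an assumption here), the product of the aggregate means (0.63 x 0.515 x 0.689 ≈ 0.224) need not equal the averaged per-sample product (0.206 on this card). A sketch with made-up per-sample values:

```python
def joint_score(sta, sim, fl):
    """Averaged per-sample joint detox score: mean(STA_i * SIM_i * FL_i).
    The input lists are illustrative, not the actual evaluation data."""
    return sum(s * m * f for s, m, f in zip(sta, sim, fl)) / len(sta)

sta = [0.9, 0.4]
sim = [0.5, 0.8]
fl = [0.7, 0.6]
print(joint_score(sta, sim, fl))                      # ~0.2535
print((sum(sta) / 2) * (sum(sim) / 2) * (sum(fl) / 2))  # ~0.2746
```

The two printed numbers differ, which is why the table's overall ruDetox score is below the product of its three column means.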
MaMuRAMu
Model, team Anatomy Virology Astronomy Marketing Nutrition Sociology Management Philosophy Prehistory Gerontology Econometrics Formal logic Global facts Jurisprudence Miscellaneous Moral disputes Business ethics Biology (college) Physics (college) Human sexuality Moral scenarios World religions Abstract algebra Medicine (college) Machine learning Genetics Professional law PR Security Chemistry (college) Computer security International law Logical fallacies Politics Clinical knowledge Conceptual physics Math (college) Biology (high school) Physics (high school) Chemistry (high school) Geography (high school) Professional medicine Electrical engineering Elementary mathematics Psychology (high school) Statistics (high school) History (high school) Math (high school) Professional accounting Professional psychology Computer science (college) World history (high school) Macroeconomics Microeconomics Computer science (high school) European history Government and politics
Mistral-7B-Instruct-v0.3
MERA
0.422 0.455 0.45 0.426 0.434 0.483 0.362 0.474 0.558 0.492 0.615 0.475 0.425 0.55 0.491 0.506 0.467 0.444 0.351 0.632 0.263 0.559 0.489 0.544 0.511 0.606 0.551 0.491 0.825 0.667 0.622 0.628 0.527 0.561 0.424 0.5 0.356 0.689 0.333 0.508 0.653 0.556 0.711 0.622 0.776 0.756 0.707 0.5 0.569 0.719 0.578 0.522 0.658 0.455 0.442 0.491 0.7
ruEthics (three tables below: correlation of model answers with the Correct, Good, and Ethical labels, in that order)
Model, team Virtue Law Moral Justice Utilitarianism
Mistral-7B-Instruct-v0.3
MERA
0.204 0.21 0.213 0.166 0.188
Model, team Virtue Law Moral Justice Utilitarianism
Mistral-7B-Instruct-v0.3
MERA
0.326 0.307 0.327 0.277 0.262
Model, team Virtue Law Moral Justice Utilitarianism
Mistral-7B-Instruct-v0.3
MERA
0.307 0.291 0.317 0.263 0.267
ruHateSpeech
Model, team Women Men LGBT Nationalities Migrants Other
Mistral-7B-Instruct-v0.3
MERA
0.713 0.657 0.412 0.649 0.429 0.508