Mixtral-8x22B-Instruct-v0.1

MERA · Created 06.10.2024 21:01

Overall result: 0.486
Place in the rating: 123

Ratings for leaderboard tasks


Task name | Result | Metric
LCS | 0.144 | Accuracy
RCB | 0.578 / 0.372 | Accuracy / F1 macro
USE | 0.269 | Grade norm
RWSD | 0.473 | Accuracy
PARus | 0.87 | Accuracy
ruTiE | 0.743 | Accuracy
MultiQ | 0.521 / 0.366 | F1 / Exact match
CheGeKa | 0.338 / 0.267 | F1 / Exact match
ruModAr | 0.523 | Exact match
MaMuRAMu | 0.747 | Accuracy
ruMultiAr | 0.317 | Exact match
ruCodeEval | 0 / 0 / 0 | Pass@k (k = 1 / 5 / 10)
MathLogicQA | 0.413 | Accuracy
ruWorldTree | 0.916 / 0.733 | Accuracy / F1 macro
ruOpenBookQA | 0.835 / 0.669 | Accuracy / F1 macro
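
The Pass@k row above (ruCodeEval; ruHumanEval in the open-task table below reports the same metric) lists three values, corresponding to pass@1 / pass@5 / pass@10. For reference, a minimal Python sketch of the standard unbiased pass@k estimator from Chen et al. (2021); the function and the sample counts are illustrative, not part of the MERA harness:

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # n: samples generated per task, c: samples that pass all unit tests,
    # k: evaluation budget. Unbiased estimate of P(at least one of k passes).
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With no passing samples (as in the ruCodeEval row), pass@k is 0 for any k:
print(pass_at_k(10, 0, 1), pass_at_k(10, 0, 5), pass_at_k(10, 0, 10))  # 0.0 0.0 0.0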

Evaluation on open tasks:



Task name | Result | Metric
BPS | 0.995 | Accuracy
ruMMLU | 0.628 | Accuracy
SimpleAr | 0.989 | Exact match
ruHumanEval | 0 / 0 / 0 | Pass@k (k = 1 / 5 / 10)
ruHHH | 0.893 | Accuracy
ruHateSpeech | 0.63 | Accuracy
ruDetox | 0.33 | Joint score (STA · SIM · FL)
ruEthics | see breakdown below | MCC

ruEthics breakdown (question x ethical norm):
Norm | Correct | Good | Ethical
Virtue | 0.417 | 0.428 | 0.571
Law | 0.414 | 0.424 | 0.559
Moral | 0.448 | 0.459 | 0.609
Justice | 0.374 | 0.384 | 0.524
Utilitarianism | 0.357 | 0.406 | 0.499
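
Each ruEthics cell correlates the model's binary verdicts with crowd annotations for one of three questions ("Is it correct / good / ethical?") under one of five ethical norms. A sketch of scoring one such cell, assuming Matthews correlation as the scoring function (sklearn's implementation; the labels below are made up):

from sklearn.metrics import matthews_corrcoef

# Hypothetical labels for one norm (e.g. Virtue) under one question (e.g. Correct):
# crowd annotation vs. the model's yes/no answer, one entry per example.
annotator_labels = [1, 0, 1, 1, 0, 1, 0, 0]
model_answers = [1, 0, 0, 1, 0, 1, 1, 0]
print(matthews_corrcoef(annotator_labels, model_answers))  # 0.5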

Information about the submission:

MERA version: v.1.2.0
Torch version: 2.4.0
Codebase version: 9b26db97
CUDA version: 12.1
Model weights precision: bfloat16
Seed: 1234
Batch size: 1
Transformers version: 4.44.2
Number and type of GPUs: 1 x NVIDIA H100 80GB HBM3
Inference framework: vLLM
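
The submission reports vLLM as the backend, bfloat16 weights, and seed 1234. Below is a minimal loading sketch using vLLM's offline API under those settings; this is an assumed reconstruction, not the MERA evaluation code, and in practice a 140.6B-parameter bf16 checkpoint will not fit on the single reported H100:

from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mixtral-8x22B-Instruct-v0.1",
    dtype="bfloat16",        # matches the reported weight precision
    seed=1234,               # matches the reported seed
    tensor_parallel_size=1,  # the card reports 1 x H100; bf16 weights alone
                             # are ~280 GB, so raise this to shard across GPUs
)

# Smoke test with greedy decoding, stopping at a newline.
out = llm.generate(["2 + 2 ="], SamplingParams(temperature=0.0, stop=["\n"]))
print(out[0].outputs[0].text)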

Team:

MERA

Name of the ML model:

Mixtral-8x22B-Instruct-v0.1

Model size

140.6B

Model type:

Open

SFT

Additional links:

https://mistral.ai/news/mixtral-8x22b/

Architecture description:

The Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x22B-v0.1 base model (https://huggingface.co/mistralai/Mixtral-8x22B-v0.1).

Description of the training:

This model has been optimised through supervised fine-tuning and direct preference optimisation (DPO) for careful instruction following.

Pretrain data:

The base Mixtral-8x22B Large Language Model (LLM) is a pretrained generative Sparse Mixture-of-Experts model.

License:

Apache 2.0

Inference parameters

Generation parameters:
simplear - do_sample=false; until=["\n"]
chegeka - do_sample=false; until=["\n"]
rudetox - do_sample=false; until=["\n"]
rumultiar - do_sample=false; until=["\n"]
use - do_sample=false; until=["\n", "."]
multiq - do_sample=false; until=["\n"]
rumodar - do_sample=false; until=["\n"]
ruhumaneval - do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
rucodeeval - do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
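
In sampling-parameter terms, do_sample=false means greedy decoding and until=[...] is a list of stop strings; only the two code-generation tasks sample, at temperature 0.6. A sketch of the three distinct configurations, expressed with vLLM's SamplingParams as an assumption (the harness's own parameter objects may differ):

from vllm import SamplingParams

# Most tasks: greedy decoding, stop at end of line.
greedy = SamplingParams(temperature=0.0, stop=["\n"])

# use: greedy, stopping at a newline or a period.
use_params = SamplingParams(temperature=0.0, stop=["\n", "."])

# ruhumaneval / rucodeeval: sampled generation at temperature 0.6, stopping
# where a new top-level code block would begin.
code_params = SamplingParams(
    temperature=0.6,
    stop=["\nclass", "\ndef", "\n#", "\nif", "\nprint"],
)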

Context size:
simplear, bps, lcs, chegeka, mathlogicqa, parus, rcb, rudetox, ruhatespeech, rummlu, ruworldtree, ruopenbookqa, rumultiar, use, rwsd, mamuramu, multiq, rumodar, ruethics, ruhhh, ruhumaneval, rucodeeval - 65536
rutie - 20000

System prompt:
Solve the task according to the instruction below. Do not give any explanations or commentary on your answer. Do not write anything superfluous. Write only what the instruction asks for. If the instruction asks you to solve an arithmetic example, write only the numeric answer, without the solution steps or explanations. If the instruction asks you to output a letter, digit, or word, output only that. If the instruction asks you to choose one of the answer options and output the letter or digit corresponding to it, output only that letter or digit, give no explanations, add no punctuation, just 1 character in the answer. If the instruction asks you to complete Python function code, write the code right away, keeping the indentation as if you were continuing the function from the instruction; give no explanations, write no comments, use only the arguments from the function signature in the instruction, and do not try to read data via the input function. Do not apologize and do not engage in dialogue. Output only the answer and nothing else.

Description of the template:
{%- if messages[0]["role"] == "system" %}
    {%- set system_message = messages[0]["content"] %}
    {%- set loop_messages = messages[1:] %}
{%- else %}
    {%- set loop_messages = messages %}
{%- endif %}
{%- if not tools is defined %}
    {%- set tools = none %}
{%- endif %}
{%- set user_messages = loop_messages | selectattr("role", "equalto", "user") | list %}

{#- This block checks for alternating user/assistant messages, skipping tool calling messages #}
{%- set ns = namespace() %}
{%- set ns.index = 0 %}
{%- for message in loop_messages %}
    {%- if not (message.role == "tool" or message.role == "tool_results" or (message.tool_calls is defined and message.tool_calls is not none)) %}
        {%- if (message["role"] == "user") != (ns.index % 2 == 0) %}
            {{- raise_exception("After the optional system message, conversation roles must alternate user/assistant/user/assistant/...") }}
        {%- endif %}
        {%- set ns.index = ns.index + 1 %}
    {%- endif %}
{%- endfor %}

{{- bos_token }}
{%- for message in loop_messages %}
    {%- if message["role"] == "user" %}
        {%- if tools is not none and (message == user_messages[-1]) %}
            {{- "[AVAILABLE_TOOLS] [" }}
            {%- for tool in tools %}
                {%- set tool = tool.function %}
                {{- '{"type": "function", "function": {' }}
                {%- for key, val in tool.items() if key != "return" %}
                    {%- if val is string %}
                        {{- '"' + key + '": "' + val + '"' }}
                    {%- else %}
                        {{- '"' + key + '": ' + val|tojson }}
                    {%- endif %}
                    {%- if not loop.last %}
                        {{- ", " }}
                    {%- endif %}
                {%- endfor %}
                {{- "}}" }}
                {%- if not loop.last %}
                    {{- ", " }}
                {%- else %}
                    {{- "]" }}
                {%- endif %}
            {%- endfor %}
            {{- "[/AVAILABLE_TOOLS]" }}
        {%- endif %}
        {%- if loop.last and system_message is defined %}
            {{- "[INST] " + system_message + "\n\n" + message["content"] + "[/INST]" }}
        {%- else %}
            {{- "[INST] " + message["content"] + "[/INST]" }}
        {%- endif %}
    {%- elif message.tool_calls is defined and message.tool_calls is not none %}
        {{- "[TOOL_CALLS] [" }}
        {%- for tool_call in message.tool_calls %}
            {%- set out = tool_call.function|tojson %}
            {{- out[:-1] }}
            {%- if not tool_call.id is defined or tool_call.id|length != 9 %}
                {{- raise_exception("Tool call IDs should be alphanumeric strings with length 9!") }}
            {%- endif %}
            {{- ', "id": "' + tool_call.id + '"}' }}
            {%- if not loop.last %}
                {{- ", " }}
            {%- else %}
                {{- "]" + eos_token }}
            {%- endif %}
        {%- endfor %}
    {%- elif message["role"] == "assistant" %}
        {{- " " + message["content"]|trim + eos_token}}
    {%- elif message["role"] == "tool_results" or message["role"] == "tool" %}
        {%- if message.content is defined and message.content.content is defined %}
            {%- set content = message.content.content %}
        {%- else %}
            {%- set content = message.content %}
        {%- endif %}
        {{- '[TOOL_RESULTS] {"content": ' + content|string + ", " }}
        {%- if not message.tool_call_id is defined or message.tool_call_id|length != 9 %}
            {{- raise_exception("Tool call IDs should be alphanumeric strings with length 9!") }}
        {%- endif %}
        {{- '"call_id": "' + message.tool_call_id + '"}[/TOOL_RESULTS]' }}
    {%- else %}
        {{- raise_exception("Only user and assistant roles are supported, with the exception of an initial optional system message!") }}
    {%- endif %}
{%- endfor %}
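
This is the stock Mixtral-Instruct chat template: the optional system message is folded into the final [INST] block, user turns are wrapped in [INST]...[/INST], and assistant turns are closed with the EOS token. A short rendering sketch via the transformers tokenizer, assuming the Hugging Face checkpoint ships this template:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x22B-Instruct-v0.1")

messages = [
    {"role": "system", "content": "Solve the task per the instruction below."},
    {"role": "user", "content": "2 + 2 = ?"},
]

# Render as text to inspect the exact prompt the model receives.
prompt = tok.apply_chat_template(messages, tokenize=False)
print(prompt)
# -> '<s>[INST] Solve the task per the instruction below.\n\n2 + 2 = ?[/INST]'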


Ratings by subcategory

USE (Metric: Grade norm, by exam task)
Model, team: Mixtral-8x22B-Instruct-v0.1, MERA
1: 0.467 | 2: 0.367 | 3: 0.767 | 4: 0.167 | 5: 0 | 6: 0.4 | 7: 0.067 | 8: - | 9: 0 | 10: 0.067 | 11: 0.033 | 12: 0 | 13: 0.333 | 14: 0.067 | 15: 0.133 | 16: 0.517 | 17: 0.033 | 18: 0.033 | 19: 0.033 | 20: 0.033 | 21: 0.067 | 22: 0.7 | 23: 0.033 | 24: 0.3 | 25: 0.067 | 26: 0.55 | 8_0: 0.1 | 8_1: 0.167 | 8_2: 0.6 | 8_3: 0.333 | 8_4: 0.533
ruHHH (Accuracy by category)
Model, team | Honest | Helpful | Harmless
Mixtral-8x22B-Instruct-v0.1, MERA | 0.836 | 0.881 | 0.966
ruMMLU (Accuracy by subject)
Model, team: Mixtral-8x22B-Instruct-v0.1, MERA
Anatomy: 0.563; Virology: 0.494; Astronomy: 0.77; Marketing: 0.825; Nutrition: 0.735; Sociology: 0.806; Management: 0.767; Philosophy: 0.691; Prehistory: 0.722; Human aging: 0.677; Econometrics: 0.544; Formal logic: 0.508; Global facts: 0.43; Jurisprudence: 0.741; Miscellaneous: 0.784; Moral disputes: 0.65; Business ethics: 0.66; Biology (college): 0.764; Physics (college): 0.444; Human sexuality: 0.718; Moral scenarios: 0.364; World religions: 0.784; Abstract algebra: 0.39; Medicine (college): 0.653; Machine learning: 0.482; Medical genetics: 0.73; Professional law: 0.467; PR: 0.648; Security studies: 0.698; Chemistry (college): 0.45; Computer security: 0.71; International law: 0.769; Logical fallacies: 0.626; Politics: 0.818; Clinical knowledge: 0.657; Conceptual physics: 0.573; Math (college): 0.37; Biology (high school): 0.787; Physics (high school): 0.45; Chemistry (high school): 0.571; Geography (high school): 0.818; Professional medicine: 0.651; Electrical engineering: 0.634; Elementary mathematics: 0.525; Psychology (high school): 0.785; Statistics (high school): 0.56; History (high school): 0.843; Math (high school): 0.411; Professional accounting: 0.482; Professional psychology: 0.611; Computer science (college): 0.59; World history (high school): 0.819; Macroeconomics: 0.649; Microeconomics: 0.697; Computer science (high school): 0.78; European history: 0.824; Government and politics: 0.813
ruDetox (score components)
Model, team | SIM (meaning preservation) | FL (fluency) | STA (style transfer accuracy)
Mixtral-8x22B-Instruct-v0.1, MERA | 0.69 | 0.718 | 0.7
MaMuRAMu (Accuracy by subject)
Model, team: Mixtral-8x22B-Instruct-v0.1, MERA
Anatomy: 0.6; Virology: 0.762; Astronomy: 0.717; Marketing: 0.546; Nutrition: 0.816; Sociology: 0.69; Management: 0.621; Philosophy: 0.702; Prehistory: 0.808; Gerontology: 0.662; Econometrics: 0.782; Formal logic: 0.7; Global facts: 0.542; Jurisprudence: 0.736; Miscellaneous: 0.772; Moral disputes: 0.704; Business ethics: 0.692; Biology (college): 0.711; Physics (college): 0.649; Human sexuality: 0.825; Moral scenarios: 0.614; World religions: 0.814; Abstract algebra: 0.822; Medicine (college): 0.846; Machine learning: 0.778; Genetics: 0.773; Professional law: 0.705; PR: 0.649; Security: 0.912; Chemistry (college): 0.756; Computer security: 0.867; International law: 0.846; Logical fallacies: 0.723; Politics: 0.947; Clinical knowledge: 0.636; Conceptual physics: 0.732; Math (college): 0.733; Biology (high school): 0.756; Physics (high school): 0.509; Chemistry (high school): 0.615; Geography (high school): 0.841; Professional medicine: 0.794; Electrical engineering: 0.867; Elementary mathematics: 0.667; Psychology (high school): 0.862; Statistics (high school): 0.8; History (high school): 0.897; Math (high school): 0.795; Professional accounting: 0.738; Professional psychology: 0.877; Computer science (college): 0.822; World history (high school): 0.812; Macroeconomics: 0.81; Microeconomics: 0.688; Computer science (high school): 0.628; European history: 0.725; Government and politics: 0.867
ruEthics, "Correct" question
Model, team | Virtue | Law | Moral | Justice | Utilitarianism
Mixtral-8x22B-Instruct-v0.1, MERA | 0.417 | 0.414 | 0.448 | 0.374 | 0.357

ruEthics, "Good" question
Model, team | Virtue | Law | Moral | Justice | Utilitarianism
Mixtral-8x22B-Instruct-v0.1, MERA | 0.428 | 0.424 | 0.459 | 0.384 | 0.406

ruEthics, "Ethical" question
Model, team | Virtue | Law | Moral | Justice | Utilitarianism
Mixtral-8x22B-Instruct-v0.1, MERA | 0.571 | 0.559 | 0.609 | 0.524 | 0.499
ruHateSpeech (Accuracy by target group)
Model, team | Women | Men | LGBT | Nationalities | Migrants | Other
Mixtral-8x22B-Instruct-v0.1, MERA | 0.537 | 0.686 | 0.588 | 0.784 | 0.571 | 0.689