Mistral-Large-Instruct-2407

MERA Created at 22.09.2024 21:05
0.574
The overall result
42
Place in the rating
In the top by tasks:
5
MultiQ
The task is one of the main ones
Weak tasks:
38
RWSD
31
PARus
126
RCB
46
ruEthics
41
ruWorldTree
47
ruOpenBookQA
37
ruMMLU
96
ruHateSpeech
48
ruTiE
21
USE
59
MathLogicQA
52
ruMultiAr
23
SimpleAr
486
LCS
130
BPS
80
ruModAr
31
MaMuRAMu
+13
Hide

Ratings for leaderboard tasks

The table will scroll to the left

Task name Result Metric
LCS 0.042 Accuracy
RCB 0.55 / 0.531 Accuracy F1 macro
USE 0.362 Grade norm
RWSD 0.635 Accuracy
PARus 0.932 Accuracy
ruTiE 0.833 Accuracy
MultiQ 0.63 / 0.471 F1 Exact match
CheGeKa 0.458 / 0.356 F1 Exact match
ruModAr 0.665 Exact match
MaMuRAMu 0.832 Accuracy
ruMultiAr 0.355 Exact match
ruCodeEval 0 / 0 / 0 Pass@k
MathLogicQA 0.564 Accuracy
ruWorldTree 0.975 / 0.975 Accuracy F1 macro
ruOpenBookQA 0.915 / 0.915 Accuracy F1 macro

Evaluation on open tasks:

Go to the ratings by subcategory

The table will scroll to the left

Task name Result Metric
BPS 0.963 Accuracy
ruMMLU 0.753 Accuracy
SimpleAr 0.996 Exact match
ruHumanEval 0 / 0 / 0 Pass@k
ruHHH 0.871
ruHateSpeech 0.789
ruDetox 0.345
ruEthics
Correct God Ethical
Virtue 0.476 0.397 0.52
Law 0.49 0.375 0.512
Moral 0.511 0.415 0.555
Justice 0.446 0.349 0.468
Utilitarianism 0.425 0.375 0.445

Information about the submission:

Mera version
v.1.2.0
Torch Version
2.4.0
The version of the codebase
9b26db97
CUDA version
12.1
Precision of the model weights
bfloat16
Seed
1234
Butch
1
Transformers version
4.44.2
The number of GPUs and their type
8 x NVIDIA H100 80GB HBM3
Architecture
vllm

Team:

MERA

Name of the ML model:

Mistral-Large-Instruct-2407

Model size

123.0B

Model type:

Opened

SFT

Additional links:

https://mistral.ai/news/mistral-large-2407/

Architecture description:

Mistral-Large-Instruct-2407 is an advanced dense Large Language Model Mistral

Description of the training:

-

Pretrain data:

-

License:

mrl

Inference parameters

Generation Parameters:
simplear - do_sample=false;until=["\n"]; \nchegeka - do_sample=false;until=["\n"]; \nrudetox - do_sample=false;until=["\n"]; \nrumultiar - do_sample=false;until=["\n"]; \nuse - do_sample=false;until=["\n","."]; \nmultiq - do_sample=false;until=["\n"]; \nrumodar - do_sample=false;until=["\n"]; \nruhumaneval - do_sample=true;until=["\nclass","\ndef","\n#","\nif","\nprint"];temperature=0.6; \nrucodeeval - do_sample=true;until=["\nclass","\ndef","\n#","\nif","\nprint"];temperature=0.6;

The size of the context:
simplear, bps, lcs, chegeka, mathlogicqa, parus, rcb, rudetox, ruhatespeech, rummlu, ruworldtree, ruopenbookqa, rumultiar, use, rwsd, mamuramu, multiq, rumodar, ruethics, ruhhh, ruhumaneval, rucodeeval, rutie - 131072 \nrutie - 20000

System prompt:
Реши задачу по инструкции ниже. Не давай никаких объяснений и пояснений к своему ответу. Не пиши ничего лишнего. Пиши только то, что указано в инструкции. Если по инструкции нужно решить пример, то напиши только числовой ответ без хода решения и пояснений. Если по инструкции нужно вывести букву, цифру или слово, выведи только его. Если по инструкции нужно выбрать один из вариантов ответа и вывести букву или цифру, которая ему соответствует, то выведи только эту букву или цифру, не давай никаких пояснений, не добавляй знаки препинания, только 1 символ в ответе. Если по инструкции нужно дописать код функции на языке Python, пиши сразу код, соблюдая отступы так, будто ты продолжаешь функцию из инструкции, не давай пояснений, не пиши комментарии, используй только аргументы из сигнатуры функции в инструкции, не пробуй считывать данные через функцию input. Не извиняйся, не строй диалог. Выдавай только ответ и ничего больше.

Expand information

Ratings by subcategory

Metric: Grade Norm
Model, team 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 8_0 8_1 8_2 8_3 8_4
Mistral-Large-Instruct-2407
MERA
0.567 0.467 0.8 0.067 0.2 0.333 0.1 - 0.067 0.033 0 0.033 0.4 0.133 0.033 0.483 0.033 0.1 0 0.033 0.1 0.767 0.4 0.6 0.267 0.708 0.3 0.567 0.667 0.7 0.733
Model, team Honest Helpful Harmless
Mistral-Large-Instruct-2407
MERA
0.836 0.864 0.914
Model, team Anatomy Virology Astronomy Marketing Nutrition Sociology Management Philosophy Prehistory Human aging Econometrics Formal logic Global facts Jurisprudence Miscellaneous Moral disputes Business ethics Biology (college) Physics (college) Human Sexuality Moral scenarios World religions Abstract algebra Medicine (college) Machine learning Medical genetics Professional law PR Security studies Chemistry (школьная) Computer security International law Logical fallacies Politics Clinical knowledge Conceptual_physics Math (college) Biology (high school) Physics (high school) Chemistry (high school) Geography (high school) Professional medicine Electrical engineering Elementary mathematics Psychology (high school) Statistics (high school) History (high school) Math (high school) Professional accounting Professional psychology Computer science (college) World history (high school) Macroeconomics Microeconomics Computer science (high school) European history Government and politics
Mistral-Large-Instruct-2407
MERA
0.704 0.524 0.908 0.859 0.85 0.811 0.816 0.768 0.864 0.758 0.605 0.571 0.71 0.796 0.871 0.777 0.76 0.896 0.633 0.908 0.632 0.889 0.65 0.757 0.607 0.92 0.565 0.685 0.759 0.51 0.8 0.826 0.736 0.929 0.804 0.816 0.5 0.894 0.669 0.734 0.828 0.879 0.697 0.78 0.891 0.704 0.892 0.544 0.553 0.744 0.77 0.899 0.81 0.87 0.91 0.848 0.891
Model, team SIM FL STA
Mistral-Large-Instruct-2407
MERA
0.662 0.756 0.731
Model, team Anatomy Virology Astronomy Marketing Nutrition Sociology Managment Philosophy Pre-History Gerontology Econometrics Formal logic Global facts Jurisprudence Miscellaneous Moral disputes Business ethics Bilology (college) Physics (college) Human sexuality Moral scenarios World religions Abstract algebra Medicine (college) Machine Learning Genetics Professional law PR Security Chemistry (college) Computer security International law Logical fallacies Politics Clinical knowledge Conceptual physics Math (college) Biology (high school) Physics (high school) Chemistry (high school) Geography (high school) Professional medicine Electrical Engineering Elementary mathematics Psychology (high school) Statistics (high school) History (high school) Math (high school) Professional Accounting Professional psychology Computer science (college) World history (high school) Macroeconomics Microeconomics Computer science (high school) Europe History Government and politics
Mistral-Large-Instruct-2407
MERA
0.644 0.871 0.783 0.694 0.921 0.828 0.69 0.754 0.962 0.8 0.821 0.808 0.567 0.884 0.836 0.716 0.738 0.844 0.772 0.825 0.86 0.932 0.844 0.882 0.867 0.833 0.859 0.807 0.912 0.844 0.889 0.872 0.839 0.947 0.712 0.839 0.8 0.822 0.719 0.769 0.914 0.889 0.867 0.889 0.948 0.889 0.931 0.932 0.862 0.965 0.867 0.942 0.785 0.727 0.721 0.836 0.922
Coorect
Good
Ethical
Model, team Virtue Law Moral Justice Utilitarianism
Mistral-Large-Instruct-2407
MERA
0.476 0.49 0.511 0.446 0.425
Model, team Virtue Law Moral Justice Utilitarianism
Mistral-Large-Instruct-2407
MERA
0.397 0.375 0.415 0.349 0.375
Model, team Virtue Law Moral Justice Utilitarianism
Mistral-Large-Instruct-2407
MERA
0.52 0.512 0.555 0.468 0.445
Model, team Women Men LGBT Nationalities Migrants Other
Mistral-Large-Instruct-2407
MERA
0.759 0.714 0.765 0.811 0.714 0.885