Meta-Llama-3.1-405B-Instruct

MERA | Created at 20.09.2024 17:54

Overall result: 0.59
Place in the rating: 29

Ratings for leaderboard tasks


Task name       Result                  Metric
LCS             0.258                   Accuracy
RCB             0.598 / 0.548           Accuracy / F1 macro
USE             0.357                   Grade norm
RWSD            0.677                   Accuracy
PARus           0.902                   Accuracy
ruTiE           0.588                   Accuracy
MultiQ          0.623 / 0.453           F1 / Exact match
CheGeKa         0.506 / 0.413           F1 / Exact match
ruModAr         0.573                   Exact match
MaMuRAMu        0.868                   Accuracy
ruMultiAr       0.437                   Exact match
ruCodeEval      0 / 0 / 0               Pass@k
MathLogicQA     0.772                   Accuracy
ruWorldTree     0.981 / 0.981           Accuracy / F1 macro
ruOpenBookQA    0.955 / 0.765           Accuracy / F1 macro
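The code tasks (ruCodeEval above, ruHumanEval below) report pass@k: the probability that at least one of k sampled completions passes all unit tests, with the three reported values presumably corresponding to increasing k budgets. It is conventionally computed with the unbiased estimator from the original HumanEval paper; a minimal sketch in Python:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): n samples drawn
    per problem, c of them pass the tests, k is the evaluation budget."""
    if n - c < k:
        return 1.0  # fewer than k failing samples -> at least one pass is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=10, c=0, k=1))  # 0.0 -- no passing samples, as in ruCodeEval here
```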

Evaluation on open tasks:



Task name       Result                  Metric
BPS             0.99                    Accuracy
ruMMLU          0.813                   Accuracy
SimpleAr        0.997                   Exact match
ruHumanEval     0.006 / 0.006 / 0.006   Pass@k
ruHHH           0.809                   Accuracy
ruHateSpeech    0.845                   Accuracy
ruDetox         0.381                   Joint score

ruEthics (Correct / Good / Ethical):
Virtue          0.525 / 0.489 / 0.692
Law             0.538 / 0.47 / 0.689
Moral           0.574 / 0.522 / 0.749
Justice         0.491 / 0.44 / 0.633
Utilitarianism  0.467 / 0.413 / 0.61

Information about the submission:

MERA version: v.1.2.0
Torch version: -
Codebase version: 9b26db97
CUDA version: -
Model weights precision: bfloat16
Seed: 1234
Batch size: 1
Transformers version: 4.44.2
GPUs: 16 x NVIDIA H100 80GB HBM3
Architecture: openai-completions, vllm
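
Since the submission lists transformers 4.44.2, bfloat16 weights, and a fixed seed, here is a hedged sketch of loading the checkpoint with that stack. It is illustrative only: the actual evaluation served the model through vLLM's openai-completions backend, and device sharding details are an assumption.

```python
# Hedged sketch: loading the checkpoint with the stack listed above.
# The submission itself ran via vLLM; this shows the equivalent transformers setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

set_seed(1234)  # the submission's seed

MODEL_ID = "meta-llama/Meta-Llama-3.1-405B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # weight precision from the submission
    device_map="auto",           # shard across the available GPUs (assumption)
)
```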

Team:

MERA

Name of the ML model:

Meta-Llama-3.1-405B-Instruct

Model size

405.0B

Model type:

Open

SFT

Architecture description:

A pretrained and instruction-tuned 405B-parameter auto-regressive generative model with Grouped-Query Attention (GQA), optimized for multilingual dialogue use cases.
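
GQA reduces the key/value cache by letting each group of query heads attend through a single shared key/value head. A minimal PyTorch sketch of the mechanism, with toy shapes rather than the 405B model's real configuration:

```python
import torch
import torch.nn.functional as F

def grouped_query_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Causal attention where groups of query heads share one KV head.
    q: (B, H_q, T, D); k, v: (B, H_kv, T, D), with H_q divisible by H_kv."""
    group = q.shape[1] // k.shape[1]
    k = k.repeat_interleave(group, dim=1)  # broadcast each KV head to its query group
    v = v.repeat_interleave(group, dim=1)
    return F.scaled_dot_product_attention(q, k, v, is_causal=True)

# Toy dimensions for illustration only (not Meta's implementation or real dims).
q = torch.randn(1, 16, 8, 64)   # 16 query heads
k = torch.randn(1, 4, 8, 64)    # 4 shared key/value heads -> groups of 4
v = torch.randn(1, 4, 8, 64)
print(grouped_query_attention(q, k, v).shape)  # torch.Size([1, 16, 8, 64])
```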

Description of the training:

The tuned versions use supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to align with human preferences for helpfulness and safety.

Pretrain data:

Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples. The pretraining data has a cutoff of December 2023.

License:

https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE

Inference parameters

Generation parameters (per task):
simplear: do_sample=false; until=["\n"]
chegeka: do_sample=false; until=["\n"]
rudetox: do_sample=false; until=["\n"]
rumultiar: do_sample=false; until=["\n"]
use: do_sample=false; until=["\n", "."]
multiq: do_sample=false; until=["\n"]
rumodar: do_sample=false; until=["\n"]
ruhumaneval: do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
rucodeeval: do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6

Context size:
2047 tokens for all tasks (simplear, chegeka, rudetox, rumultiar, use, multiq, rumodar, ruhumaneval, rucodeeval, bps, lcs, mathlogicqa, parus, rcb, ruhatespeech, rummlu, ruworldtree, ruopenbookqa, rwsd, mamuramu, ruethics, ruhhh, rutie).
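
These settings map naturally onto vLLM sampling parameters, the backend listed under Architecture. A minimal sketch assuming the stock vllm API; the task names mirror the table above, and this is an illustration rather than the MERA harness's own configuration code:

```python
# Hedged sketch: per-task generation settings expressed as vLLM SamplingParams.
from vllm import SamplingParams

CODE_STOPS = ["\nclass", "\ndef", "\n#", "\nif", "\nprint"]

TASK_PARAMS = {
    # do_sample=false corresponds to greedy decoding (temperature 0)
    "simplear":    SamplingParams(temperature=0.0, stop=["\n"]),
    "chegeka":     SamplingParams(temperature=0.0, stop=["\n"]),
    "use":         SamplingParams(temperature=0.0, stop=["\n", "."]),
    # code tasks sample at temperature 0.6 and stop at top-level constructs
    "ruhumaneval": SamplingParams(temperature=0.6, stop=CODE_STOPS),
    "rucodeeval":  SamplingParams(temperature=0.6, stop=CODE_STOPS),
}
# The context window was capped at 2047 tokens for every task, e.g.
# LLM(model=..., max_model_len=2047) when serving with vLLM.
```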

System prompt (translated from Russian):
Solve the task following the instruction below. Do not give any explanations or clarifications for your answer. Do not write anything extra; write only what the instruction asks for. If the instruction asks you to solve an arithmetic example, write only the numeric answer, without the solution steps or explanations. If the instruction asks you to output a letter, digit, or word, output only that. If the instruction asks you to choose one of the answer options and output the letter or digit corresponding to it, output only that letter or digit, with no explanations and no punctuation: a single character in the answer. If the instruction asks you to complete the code of a Python function, write the code right away, keeping the indentation as if you were continuing the function from the instruction; give no explanations, write no comments, use only the arguments from the function signature in the instruction, and do not try to read data via the input function. Do not apologize and do not start a dialogue. Output only the answer and nothing else.
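
For reference, a hedged sketch of how this system prompt might be sent through an OpenAI-compatible completions endpoint served by vLLM (the backend listed under Architecture). The endpoint URL, prompt layout, and token limit are assumptions for illustration, not details taken from the MERA harness:

```python
# Hypothetical request against a vLLM OpenAI-compatible server; base_url,
# prompt layout, and max_tokens are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

SYSTEM_PROMPT = "..."  # the (Russian-language) system prompt quoted above

def run_task(instruction: str) -> str:
    response = client.completions.create(
        model="meta-llama/Meta-Llama-3.1-405B-Instruct",
        prompt=f"{SYSTEM_PROMPT}\n\n{instruction}",
        temperature=0.0,   # greedy, as for the non-code tasks
        max_tokens=64,
        stop=["\n"],
    )
    return response.choices[0].text.strip()
```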


Ratings by subcategory

USE, Grade Norm by exam task number:
1: 0.6; 2: 0.667; 3: 0.867; 4: 0.167; 5: 0.167; 6: 0.633; 7: 0.167; 8: -; 9: 0.033; 10: 0.1; 11: 0.067; 12: 0.033; 13: 0.3; 14: 0.233; 15: 0.1; 16: 0.533; 17: 0.067; 18: 0.067; 19: 0.133; 20: 0.133; 21: 0.067; 22: 0.7; 23: 0.467; 24: 0.333; 25: 0.233; 26: 0.75; 8_0: 0.1; 8_1: 0.167; 8_2: 0.5; 8_3: 0.3; 8_4: 0.667
ruHHH: Honest 0.803; Helpful 0.864; Harmless 0.759
ruMMLU, accuracy by subject:
Anatomy 0.778; Virology 0.56; Astronomy 0.908; Marketing 0.915; Nutrition 0.895; Sociology 0.915; Management 0.883; Philosophy 0.849; Prehistory 0.858; Human aging 0.794; Econometrics 0.693; Formal logic 0.706; Global facts 0.69; Jurisprudence 0.852; Miscellaneous 0.925; Moral disputes 0.827; Business ethics 0.84; Biology (college) 0.965; Physics (college) 0.689; Human sexuality 0.863; Moral scenarios 0.768; World religions 0.901; Abstract algebra 0.68; Medicine (college) 0.757; Machine learning 0.741; Medical genetics 0.92; Professional law 0.654; PR 0.75; Security studies 0.816; Chemistry (college) 0.62; Computer security 0.82; International law 0.893; Logical fallacies 0.828; Politics 0.949; Clinical knowledge 0.83; Conceptual physics 0.842; Math (college) 0.57; Biology (high school) 0.923; Physics (high school) 0.722; Chemistry (high school) 0.768; Geography (high school) 0.904; Professional medicine 0.908; Electrical engineering 0.752; Elementary mathematics 0.846; Psychology (high school) 0.923; Statistics (high school) 0.782; History (high school) 0.902; Math (high school) 0.689; Professional accounting 0.642; Professional psychology 0.816; Computer science (college) 0.78; World history (high school) 0.941; Macroeconomics 0.859; Microeconomics 0.92; Computer science (high school) 0.94; European history 0.879; Government and politics 0.922
ruDetox: SIM (content preservation) 0.718; FL (fluency) 0.721; STA (style transfer accuracy) 0.756
MaMuRAMu, accuracy by subject:
Anatomy 0.778; Virology 0.921; Astronomy 0.817; Marketing 0.787; Nutrition 0.934; Sociology 0.845; Management 0.793; Philosophy 0.789; Prehistory 0.981; Gerontology 0.754; Econometrics 0.808; Formal logic 0.817; Global facts 0.583; Jurisprudence 0.915; Miscellaneous 0.883; Moral disputes 0.827; Business ethics 0.794; Biology (college) 0.844; Physics (college) 0.842; Human sexuality 0.842; Moral scenarios 0.912; World religions 0.932; Abstract algebra 0.911; Medicine (college) 0.905; Machine learning 0.8; Genetics 0.894; Professional law 0.897; PR 0.789; Security 0.93; Chemistry (college) 0.867; Computer security 0.911; International law 0.936; Logical fallacies 0.92; Politics 0.965; Clinical knowledge 0.773; Conceptual physics 0.839; Math (college) 0.844; Biology (high school) 0.889; Physics (high school) 0.737; Chemistry (high school) 0.892; Geography (high school) 0.951; Professional medicine 0.952; Electrical engineering 0.911; Elementary mathematics 0.978; Psychology (high school) 0.948; Statistics (high school) 0.933; History (high school) 0.948; Math (high school) 0.977; Professional accounting 0.8; Professional psychology 0.93; Computer science (college) 0.933; World history (high school) 0.957; Macroeconomics 0.823; Microeconomics 0.779; Computer science (high school) 0.767; European history 0.877; Government and politics 0.933
ruEthics, correlation of model answers with each annotation criterion (Correct / Good / Ethical):
Correct: Virtue 0.525; Law 0.538; Moral 0.574; Justice 0.491; Utilitarianism 0.467
Good: Virtue 0.489; Law 0.47; Moral 0.522; Justice 0.44; Utilitarianism 0.413
Ethical: Virtue 0.692; Law 0.689; Moral 0.749; Justice 0.633; Utilitarianism 0.61
ruHateSpeech, accuracy by target group:
Women 0.88; Men 0.686; LGBT 0.824; Nationalities 0.838; Migrants 0.714; Other 0.902