Phi-3-medium-4k-instruct

MERA Created at 22.09.2024 21:09
Overall result: 0.465
Place in the rating: 154
Weak tasks:
RWSD: 401
PARus: 94
RCB: 309
ruEthics: 154
MultiQ: 213
ruWorldTree: 74
ruOpenBookQA: 102
CheGeKa: 191
ruMMLU: 108
ruHateSpeech: 140
ruDetox: 230
ruHHH: 87
ruTiE: 107
USE: 154
MathLogicQA: 359
ruMultiAr: 158
SimpleAr: 106
LCS: 126
BPS: 161
ruModAr: 156
MaMuRAMu: 137

Ratings for leaderboard tasks


Task name      Result          Metric
LCS            0.13            Accuracy
RCB            0.495 / 0.435   Accuracy / F1 macro
USE            0.175           Grade norm
RWSD           0.488           Accuracy
PARus          0.896           Accuracy
ruTiE          0.758           Accuracy
MultiQ         0.361 / 0.174   F1 / Exact match
CheGeKa        0.135 / 0.091   F1 / Exact match
ruModAr        0.507           Exact match
MaMuRAMu       0.719           Accuracy
ruMultiAr      0.287           Exact match
ruCodeEval     0 / 0 / 0       Pass@k
MathLogicQA    0.334           Accuracy
ruWorldTree    0.962 / 0.962   Accuracy / F1 macro
ruOpenBookQA   0.873 / 0.873   Accuracy / F1 macro
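The Pass@k rows (ruCodeEval above, ruHumanEval below) are usually computed with the standard unbiased estimator over n generated samples of which c pass the tests. A minimal sketch; the function name, and the assumption that MERA uses this estimator, are ours:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k).
    if n - c < k:
        # Fewer than k failing samples: any draw of k must contain a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With no sample passing (c = 0), pass@k is 0 for every k, which is consistent with the 0 / 0 / 0 rows in the tables.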

Evaluation on open tasks:


Task name      Result       Metric
BPS            0.956        Accuracy
ruMMLU         0.653        Accuracy
SimpleAr       0.983        Exact match
ruHumanEval    0 / 0 / 0    Pass@k
ruHHH          0.809        Accuracy
ruHateSpeech   0.77         Accuracy
ruDetox        0.171        Joint score
ruEthics       (see the breakdown below)

ruEthics:
Criterion        Correct   Good    Ethical
Virtue           0.355     0.337   0.461
Law              0.379     0.346   0.459
Moral            0.422     0.373   0.499
Justice          0.339     0.278   0.437
Utilitarianism   0.339     0.294   0.407
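To our understanding, the ruEthics cells are correlation coefficients between the model's binary answers and each annotation label. A minimal sketch of the Matthews correlation commonly used for such tables, assuming 0/1 labels (the helper name is ours):

```python
def mcc(y_true, y_pred):
    # Matthews correlation coefficient for binary 0/1 labels.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    # Degenerate confusion matrices (a zero row or column) yield 0 by convention.
    return (tp * tn - fp * fn) / denom if denom else 0.0
```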

Information about the submission:

MERA version: v.1.2.0
Torch version: 2.4.0
Codebase version: 9b26db97
CUDA version: 12.1
Model weights precision: bfloat16
Seed: 1234
Batch size: 1
Transformers version: 4.43.2
Number and type of GPUs: 2 x NVIDIA H100 80GB HBM3
Architecture: vllm

Team:

MERA

Name of the ML model:

Phi-3-medium-4k-instruct

Model size:

14.0B

Model type:

Open

SFT

Additional links:

https://arxiv.org/pdf/2404.14219

Architecture description:

Phi-3-Medium-4K-Instruct has 14B parameters and is a dense decoder-only Transformer model with a context length of 4K tokens.

Description of the training:

The model was fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines. It was trained between February and April 2024.

Pretrain data:

The training data (cutoff date: October 2023) totals 4.8 trillion tokens (including 10% multilingual) drawn from a wide variety of sources, combining:
1. Publicly available documents rigorously filtered for quality, selected high-quality educational data, and code;
2. Newly created synthetic, "textbook-like" data for teaching math, coding, common-sense reasoning, and general knowledge of the world (science, daily activities, theory of mind, etc.);
3. High-quality chat-format supervised data covering various topics to reflect human preferences on aspects such as instruction-following, truthfulness, honesty, and helpfulness.

License:

MIT

Inference parameters

Generation Parameters:
simplear: do_sample=false; until=["\n"]
chegeka: do_sample=false; until=["\n"]
rudetox: do_sample=false; until=["\n"]
rumultiar: do_sample=false; until=["\n"]
use: do_sample=false; until=["\n", "."]
multiq: do_sample=false; until=["\n"]
rumodar: do_sample=false; until=["\n"]
ruhumaneval: do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
rucodeeval: do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
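The parameter strings follow a simple "key=value;key=value;" convention. A hypothetical parser (the function name and the assumption that values are JSON literals are ours, not part of the MERA codebase):

```python
import json

def parse_gen_params(spec: str) -> dict:
    # Split on ";" and decode each value as a JSON literal where possible
    # (handles false, 0.6, and stop-token lists like ["\n", "."]);
    # anything else is kept as a raw string.
    params = {}
    for part in filter(None, (p.strip() for p in spec.split(";"))):
        key, _, raw = part.partition("=")
        try:
            params[key.strip()] = json.loads(raw)
        except json.JSONDecodeError:
            params[key.strip()] = raw
    return params
```

For example, parse_gen_params('do_sample=true;temperature=0.6;') yields {"do_sample": True, "temperature": 0.6}.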

The size of the context:
simplear, bps, lcs, chegeka, mathlogicqa, parus, rcb, rudetox, ruhatespeech, rummlu, ruworldtree, ruopenbookqa, rumultiar, use, rwsd, mamuramu, multiq, rumodar, ruethics, ruhhh, ruhumaneval, rucodeeval, rutie - 4096

System prompt (translated from Russian):
Solve the task following the instruction below. Do not give any explanations or comments on your answer. Do not write anything extra; write only what the instruction requires. If the instruction asks you to solve an arithmetic example, write only the numeric answer, without the solution steps or explanations. If the instruction asks you to output a letter, digit, or word, output only that. If the instruction asks you to choose one of the answer options and output the corresponding letter or digit, output only that letter or digit, with no explanations and no punctuation, exactly 1 character in the answer. If the instruction asks you to complete Python function code, write the code immediately, with indentation as if you were continuing the function from the instruction, without explanations or comments, using only the arguments from the function signature in the instruction, and do not try to read data via the input function. Do not apologize, do not hold a dialogue. Output only the answer and nothing else.

Description of the template:
{% for message in messages %}{% if (message['role'] == 'user') %}{{'<|user|>' + ' \n' + message['content'] + '<|end|>' + ' \n' + '<|assistant|>' + ' \n'}}{% elif (message['role'] == 'assistant') %}{{message['content'] + '<|end|>' + ' \n'}}{% endif %}{% endfor %}
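The Jinja template above can be mirrored in plain Python to make the exact prompt layout visible (a sketch; the function name is ours):

```python
def render_phi3_prompt(messages):
    # Plain-Python equivalent of the Jinja chat template above.
    # Each user turn is wrapped in <|user|> ... <|end|> and followed by an
    # <|assistant|> header; assistant turns are closed with <|end|>.
    out = []
    for message in messages:
        if message["role"] == "user":
            out.append("<|user|>" + " \n" + message["content"]
                       + "<|end|>" + " \n" + "<|assistant|>" + " \n")
        elif message["role"] == "assistant":
            out.append(message["content"] + "<|end|>" + " \n")
    return "".join(out)
```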


Ratings by subcategory

USE (Metric: Grade Norm)
Task number: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 8_0 8_1 8_2 8_3 8_4
Phi-3-medium-4k-instruct (MERA):
0.467 0.167 0.767 0.267 0.067 0.2 0.067 - 0 0.067 0.067 0.033 0.3 0 0.133 0.183 0.033 0.033 0 0.067 0.1 0.5 0.067 0.067 0.033 0.258 0.067 0.133 0.5 0.267 0.1
ruHHH (Honest / Helpful / Harmless):
Phi-3-medium-4k-instruct (MERA): 0.803 / 0.797 / 0.828
ruMMLU subcategories (Accuracy):
Subjects: Anatomy, Virology, Astronomy, Marketing, Nutrition, Sociology, Management, Philosophy, Prehistory, Human aging, Econometrics, Formal logic, Global facts, Jurisprudence, Miscellaneous, Moral disputes, Business ethics, Biology (college), Physics (college), Human sexuality, Moral scenarios, World religions, Abstract algebra, Medicine (college), Machine learning, Medical genetics, Professional law, PR, Security studies, Chemistry (college), Computer security, International law, Logical fallacies, Politics, Clinical knowledge, Conceptual physics, Math (college), Biology (high school), Physics (high school), Chemistry (high school), Geography (high school), Professional medicine, Electrical engineering, Elementary mathematics, Psychology (high school), Statistics (high school), History (high school), Math (high school), Professional accounting, Professional psychology, Computer science (college), World history (high school), Macroeconomics, Microeconomics, Computer science (high school), European history, Government and politics
Phi-3-medium-4k-instruct (MERA):
0.63 0.458 0.796 0.821 0.732 0.806 0.738 0.688 0.682 0.641 0.465 0.579 0.33 0.787 0.71 0.688 0.73 0.764 0.511 0.771 0.56 0.754 0.52 0.636 0.58 0.71 0.5 0.657 0.751 0.47 0.73 0.81 0.755 0.798 0.675 0.722 0.43 0.855 0.457 0.635 0.833 0.662 0.6 0.48 0.852 0.523 0.799 0.3 0.496 0.625 0.52 0.827 0.751 0.815 0.73 0.752 0.839
ruDetox (SIM / FL / STA):
Phi-3-medium-4k-instruct (MERA): 0.312 / 0.744 / 0.784
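The aggregate ruDetox score reported in the open-tasks table (0.171) is conventionally the per-sample product of style transfer accuracy (STA), content similarity (SIM), and fluency (FL), averaged over the dataset. Note that the product of the column means, 0.312 × 0.744 × 0.784 ≈ 0.182, only approximates it, because the mean of products differs from the product of means. A sketch under that assumed aggregation (the function name is ours):

```python
def joint_score(sta, sim, fl):
    # Average of per-sample STA * SIM * FL (assumed aggregation rule).
    assert len(sta) == len(sim) == len(fl) and sta
    return sum(s * m * f for s, m, f in zip(sta, sim, fl)) / len(sta)
```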
MaMuRAMu subcategories (Accuracy):
Subjects: Anatomy, Virology, Astronomy, Marketing, Nutrition, Sociology, Management, Philosophy, Prehistory, Gerontology, Econometrics, Formal logic, Global facts, Jurisprudence, Miscellaneous, Moral disputes, Business ethics, Biology (college), Physics (college), Human sexuality, Moral scenarios, World religions, Abstract algebra, Medicine (college), Machine learning, Genetics, Professional law, PR, Security, Chemistry (college), Computer security, International law, Logical fallacies, Politics, Clinical knowledge, Conceptual physics, Math (college), Biology (high school), Physics (high school), Chemistry (high school), Geography (high school), Professional medicine, Electrical engineering, Elementary mathematics, Psychology (high school), Statistics (high school), History (high school), Math (high school), Professional accounting, Professional psychology, Computer science (college), World history (high school), Macroeconomics, Microeconomics, Computer science (high school), European history, Government and politics
Phi-3-medium-4k-instruct (MERA):
0.556 0.822 0.717 0.63 0.711 0.759 0.655 0.702 0.692 0.677 0.744 0.683 0.583 0.736 0.608 0.556 0.729 0.711 0.614 0.772 0.825 0.797 0.756 0.751 0.733 0.803 0.731 0.579 0.912 0.733 0.822 0.744 0.67 0.807 0.591 0.786 0.689 0.756 0.579 0.708 0.8 0.794 0.778 0.667 0.914 0.867 0.845 0.795 0.692 0.86 0.889 0.739 0.722 0.701 0.651 0.591 0.744
ruEthics by label type (Phi-3-medium-4k-instruct, MERA):
Label     Virtue  Law    Moral  Justice  Utilitarianism
Correct   0.355   0.379  0.422  0.339    0.339
Good      0.337   0.346  0.373  0.278    0.294
Ethical   0.461   0.459  0.499  0.437    0.407
ruHateSpeech target groups:
Target: Women, Men, LGBT, Nationalities, Migrants, Other
Phi-3-medium-4k-instruct (MERA): 0.806 0.686 0.647 0.784 0.571 0.803