Phi-3-mini-4k-instruct

BODBE LLM · Created at 08.05.2024 13:28

Overall result: 0.387
Note: the submission does not contain all the required tasks.

Ratings for leaderboard tasks


Task name    | Result               | Metric
LCS          | 0.086                | Accuracy
RCB          | 0.511 / 0.425        | Accuracy / F1 macro
USE          | 0.052                | Grade norm
RWSD         | 0.496                | Accuracy
PARus        | 0.672                | Accuracy
ruTiE        | 0.551                | Accuracy
MultiQ       | 0.103 / 0.003        | F1 / Exact match
CheGeKa      | 0.005 / 0            | F1 / Exact match
ruModAr      | 0.49                 | Exact match
ruMultiAr    | 0.271                | Exact match
MathLogicQA  | 0.391                | Accuracy
ruWorldTree  | 0.621 / 0.62         | Accuracy / F1 macro
ruOpenBookQA | 0.558 / 0.558        | Accuracy / F1 macro
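Several tasks above (RCB, ruWorldTree, ruOpenBookQA) report both accuracy and macro-averaged F1. The two can diverge sharply on imbalanced label sets, since macro F1 weights every class equally. A minimal pure-Python sketch of both metrics (the label values in the usage note are illustrative, not taken from the benchmark):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that exactly match the gold labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Macro F1: compute F1 per class, then average with equal class weights."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)
```

For example, predicting the majority class "e" on gold labels `["e", "e", "e", "c", "n"]` yields accuracy 0.6 but macro F1 of only 0.25, because the two missed classes contribute F1 = 0 each.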

Evaluation on open tasks:


Task name    | Result               | Metric
BPS          | 0.381                | Accuracy
ruMMLU       | 0.478                | Accuracy
SimpleAr     | 0.91                 | Exact match
ruHumanEval  | 0.02 / 0.101 / 0.201 | Pass@k
ruHHH        | 0.539                |
ruHateSpeech | 0.638                |
ruDetox      | 0.05                 |

ruEthics (per-criterion correlations):

Criterion      | Correct | Good   | Ethical
Virtue         | -0.119  | -0.147 | -0.006
Law            | -0.124  | -0.174 | -0.004
Moral          | -0.139  | -0.161 | -0.014
Justice        | -0.111  | -0.155 | 0.044
Utilitarianism | -0.087  | -0.128 | -0.002
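ruHumanEval reports three Pass@k values; the row does not label which k each value corresponds to, so they are left as-is above. For reference, a sketch of the standard unbiased pass@k estimator (Chen et al., 2021), assuming n generations per problem of which c pass the tests — an illustration of the metric, not necessarily the exact scoring code used by this harness:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples drawn
    without replacement from n generations is correct, given that c of
    the n generations passed the tests."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For instance, with 2 generations of which 1 is correct, `pass_at_k(2, 1, 1)` gives 0.5, and the estimate rises monotonically with k.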

Information about the submission:

Mera version: -
Torch version: -
Codebase version: -
CUDA version: -
Model weights precision: -
Seed: -
Batch: -
Transformers version: -
Number of GPUs and their type: -
Architecture: -

Team:

BODBE LLM

Name of the ML model:

Phi-3-mini-4k-instruct

Architecture description:

Phi-3 Mini-4K-Instruct has 3.8 billion parameters and is a dense, decoder-only transformer model.

Description of the training:

The model was fine-tuned with SFT and DPO to align it with human preferences and safety guidelines.

Pretrain data:

The training dataset spans a wide range of sources, totaling 3.3 trillion tokens, and combines:
- publicly available documents rigorously filtered for quality, including high-quality educational data and code;
- newly created synthetic, "textbook-like" data for teaching math, coding, and general-language reasoning (common knowledge of the world, science, daily activities, theory of mind, etc.);
- high-quality chat-format data covering various topics, reflecting human preferences on aspects such as instruction following, truthfulness, honesty, and helpfulness.

Training Details:

GPUs: 512× H100-80G
Training time: 7 days
Training data: 3.3T tokens

License:

https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE

Strategy, generation and parameters:

PyTorch: 2.2.1 (CUDA 12.1)
Transformers: 4.40.1
lm-harness: v1.1.0
GPU: NVIDIA A100-SXM4-80GB


Ratings by subcategory

ruHHH (Metric: Accuracy)

Model, team                       | Honest | Helpful | Harmless
Phi-3-mini-4k-instruct, BODBE LLM | 0.492  | 0.559   | 0.569
ruMMLU by subject (Phi-3-mini-4k-instruct, BODBE LLM):

Anatomy: 0.3
Virology: 0.625
Astronomy: 0.4
Marketing: 0.514
Nutrition: 0.429
Sociology: 0.5
Management: 0.533
Philosophy: 0.471
Prehistory: 0.7
Human aging: 0.6
Econometrics: 0.636
Formal logic: 0.2
Global facts: 0.2
Jurisprudence: 0.385
Miscellaneous: 0.227
Moral disputes: 0.5
Business ethics: 0.6
Biology (college): 0.333
Physics (college): 0.5
Human sexuality: 0.9
Moral scenarios: 0.3
World religions: 0.327
Abstract algebra: 0.4
Medicine (college): 0.373
Machine learning: 0.5
Medical genetics: 0.636
Professional law: 0.688
PR: 0.429
Security studies: 0.8
Chemistry (school): 0.273
Computer security: 0.4
International law: 0.5
Logical fallacies: 0.5
Politics: 0.4
Clinical knowledge: 0.636
Conceptual physics: 0.4
Math (college): 0.6
Biology (high school): 0.571
Physics (high school): 0.3
Chemistry (high school): 0.4
Geography (high school): 0.468
Professional medicine: 0.4
Electrical engineering: 0.8
Elementary mathematics: 0.6
Psychology (high school): 0.563
Statistics (high school): 0.8
History (high school): 0.4
Math (high school): 0.5
Professional accounting: 0.7
Professional psychology: 0.9
Computer science (college): 0.455
World history (high school): 0.5
Macroeconomics: 0.588
Microeconomics: 0.6
Computer science (high school): 0.417
European history: 0.333
Government and politics: 0.556
ruDetox components

Model, team                       | SIM   | FL    | STA
Phi-3-mini-4k-instruct, BODBE LLM | 0.236 | 0.541 | 0.218
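The ruDetox aggregate reported above (0.05) differs from the product of the three averaged components (0.236 × 0.541 × 0.218 ≈ 0.028). Assuming the common detoxification J-score convention, the joint score is computed per sample as STA · SIM · FL and then averaged, which in general does not equal the product of the averages — a sketch:

```python
def joint_detox_score(samples):
    """Average per-sample joint score J = STA * SIM * FL.

    `samples` is a list of (sta, sim, fl) tuples. The per-sample product
    is averaged over the dataset, which generally differs from multiplying
    the three dataset-level averages together.
    """
    return sum(sta * sim * fl for sta, sim, fl in samples) / len(samples)
```

For example, two samples (1, 1, 1) and (0, 0, 0) give a joint score of 0.5, while the product of the per-component means would be 0.5³ = 0.125.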
ruEthics correlations (Phi-3-mini-4k-instruct, BODBE LLM)

Label   | Virtue | Law    | Moral  | Justice | Utilitarianism
Correct | -0.119 | -0.124 | -0.139 | -0.111  | -0.087
Good    | -0.147 | -0.174 | -0.161 | -0.155  | -0.128
Ethical | -0.006 | -0.004 | -0.014 | 0.044   | -0.002
ruHateSpeech by target group

Model, team                       | Women | Men   | LGBT  | Nationalities | Migrants | Other
Phi-3-mini-4k-instruct, BODBE LLM | 0.63  | 0.743 | 0.647 | 0.649         | 0.286    | 0.623