SOLAR 10.7B Instruct

Russian_NLP, created at 03.02.2024 13:43

Overall result: 0.469
Note: the submission does not contain all the required tasks.

Ratings for leaderboard tasks


Task name | Result | Metric
LCS | 0.078 | Accuracy
RCB | 0.523 / 0.503 | Accuracy / F1 macro
USE | 0.04 | Grade norm
RWSD | 0.654 | Accuracy
PARus | 0.828 | Accuracy
ruTiE | 0.7 | Accuracy
MultiQ | 0.205 / 0.097 | F1 / Exact match
CheGeKa | 0.206 / 0.139 | F1 / Exact match
ruModAr | 0.459 | Exact match
ruMultiAr | 0.2 | Exact match
MathLogicQA | 0.396 | Accuracy
ruWorldTree | 0.884 / 0.884 | Accuracy / F1 macro
ruOpenBookQA | 0.825 / 0.824 | Accuracy / F1 macro

Evaluation on open tasks:



Task name | Result | Metric
BPS | 0.359 | Accuracy
ruMMLU | 0.698 | Accuracy
SimpleAr | 0.946 | Exact match
ruHumanEval | 0.013 / 0.067 / 0.134 | Pass@k
ruHHH | 0.702 | Accuracy
ruHateSpeech | 0.747 | Accuracy
ruDetox | 0.041 | J (joint score)
ruEthics | see table below | MCC

ruEthics (correlation of the model's answers with each ethical criterion):

Criterion | Correct | Good | Ethical
Virtue | -0.349 | -0.391 | -0.479
Law | -0.374 | -0.327 | -0.451
Moral | -0.374 | -0.385 | -0.484
Justice | -0.343 | -0.339 | -0.465
Utilitarianism | -0.297 | -0.32 | -0.384
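The ruHumanEval row reports three Pass@k values; in MERA-style code evaluation these typically correspond to k = 1, 5, and 10 (an assumption here, since the exact k values are not stated in this card). A minimal sketch of the standard unbiased pass@k estimator, not MERA's actual implementation:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    drawn samples passes, given n generated samples with c correct."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable running product
    return 1.0 - math.prod((n - c - i) / (n - i) for i in range(k))
```

For example, with 10 samples of which 1 passes, `pass_at_k(10, 1, 1)` gives 0.1.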

Information about the submission:

Mera version: -
Torch version: -
Codebase version: -
CUDA version: -
Model weights precision: -
Seed: -
Batch size: -
Transformers version: -
Number and type of GPUs: -
Architecture: -

Team:

Russian_NLP

Name of the ML model:

SOLAR 10.7B Instruct

Additional links:

https://arxiv.org/abs/2312.15166
https://huggingface.co/upstage/SOLAR-10.7B-v1.0

Architecture description:

SOLAR 10.7B Instruct is the instruction-tuned version of SOLAR-10.7B, a large language model (LLM) with 10.7 billion parameters that demonstrates strong performance across a range of natural language processing (NLP) tasks.

Description of the training:

The model was trained with state-of-the-art instruction fine-tuning methods, including supervised fine-tuning (SFT) and direct preference optimization (DPO).
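To illustrate the DPO stage mentioned above, here is a minimal sketch of the per-pair DPO loss (Rafailov et al., 2023), not Upstage's actual training code; the function name and scalar inputs are assumptions for the example:

```python
import math

def dpo_loss(pi_chosen: float, pi_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    """Per-pair DPO loss: -log sigmoid(beta * implicit reward margin).

    Inputs are summed log-probabilities of the chosen and rejected
    responses under the policy (pi_*) and a frozen reference model (ref_*).
    """
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy matches the reference, the margin is zero and the loss is log 2; the loss shrinks as the policy raises the chosen response's probability relative to the rejected one.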

Pretrain data:

The base model is SOLAR-10.7B, a large language model (LLM) with 10.7 billion parameters that demonstrates strong performance across a range of natural language processing (NLP) tasks.

Training Details:

The following datasets were used:
- c-s-ale/alpaca-gpt4-data (SFT)
- Open-Orca/OpenOrca (SFT)
- in-house generated data utilizing Metamath (SFT, DPO)
- Intel/orca_dpo_pairs (DPO)
- allenai/ultrafeedback_binarized_cleaned (DPO)

License:

cc-by-nc-4.0

Strategy, generation and parameters:

Code version v.1.1.0. All parameters were left unchanged and used as prepared by the organizers. Details:
- 1 x NVIDIA A100
- dtype: auto
- PyTorch 2.1.2 + CUDA 12.1
- Transformers 4.36.2
- Context length: 4096


Ratings by subcategory

ruHHH (Metric: Accuracy)

Model, team | Honest | Helpful | Harmless
SOLAR 10.7B Instruct, Russian_NLP | 0.623 | 0.729 | 0.759
ruMMLU by subject (Metric: Accuracy), SOLAR 10.7B Instruct, Russian_NLP:

Anatomy | 0.7
Virology | 0.75
Astronomy | 0.5
Marketing | 0.629
Nutrition | 0.857
Sociology | 0.9
Management | 0.8
Philosophy | 0.824
Prehistory | 0.6
Human aging | 0.9
Econometrics | 0.727
Formal logic | 0.6
Global facts | 0.6
Jurisprudence | 0.654
Miscellaneous | 0.727
Moral disputes | 0.4
Business ethics | 0.9
Biology (college) | 0.704
Physics (college) | 0.7
Human sexuality | 1
Moral scenarios | 0.2
World religions | 0.769
Abstract algebra | 0.9
Medicine (college) | 0.608
Machine learning | 0.5
Medical genetics | 0.818
Professional law | 0.563
PR | 0.786
Security studies | 0.9
Chemistry (school) | 0.727
Computer security | 0.4
International law | 0.667
Logical fallacies | 0.5
Politics | 0.8
Clinical knowledge | 0.727
Conceptual physics | 0.8
Math (college) | 0.7
Biology (high school) | 0.762
Physics (high school) | 0.3
Chemistry (high school) | 0.6
Geography (high school) | 0.759
Professional medicine | 0.8
Electrical engineering | 0.9
Elementary mathematics | 0.6
Psychology (high school) | 0.813
Statistics (high school) | 0.4
History (high school) | 1
Math (high school) | 0.4
Professional accounting | 0.7
Professional psychology | 1
Computer science (college) | 0.727
World history (high school) | 0.875
Macroeconomics | 0.853
Microeconomics | 0.733
Computer science (high school) | 0.375
European history | 0.455
Government and politics | 0.741
ruDetox

Model, team | SIM | FL | STA
SOLAR 10.7B Instruct, Russian_NLP | 0.339 | 0.572 | 0.183
ruEthics (Correct)

Model, team | Virtue | Law | Moral | Justice | Utilitarianism
SOLAR 10.7B Instruct, Russian_NLP | -0.349 | -0.374 | -0.374 | -0.343 | -0.297

ruEthics (Good)

Model, team | Virtue | Law | Moral | Justice | Utilitarianism
SOLAR 10.7B Instruct, Russian_NLP | -0.391 | -0.327 | -0.385 | -0.339 | -0.32

ruEthics (Ethical)

Model, team | Virtue | Law | Moral | Justice | Utilitarianism
SOLAR 10.7B Instruct, Russian_NLP | -0.479 | -0.451 | -0.484 | -0.465 | -0.384
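The ruEthics numbers above are correlations between the model's binary answers and per-criterion expert labels, reported as Matthews correlation coefficients (a negative value means the model's answers anti-correlate with that criterion). A minimal sketch of MCC for two 0/1 sequences, not MERA's actual scoring code:

```python
def mcc(preds: list[int], labels: list[int]) -> float:
    """Matthews correlation coefficient for two binary (0/1) sequences."""
    tp = sum(p and l for p, l in zip(preds, labels))
    tn = sum((not p) and (not l) for p, l in zip(preds, labels))
    fp = sum(p and (not l) for p, l in zip(preds, labels))
    fn = sum((not p) and l for p, l in zip(preds, labels))
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    # MCC is undefined when any marginal is zero; return 0.0 by convention
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

MCC ranges from -1 (perfect anti-correlation) through 0 (no better than chance) to +1 (perfect agreement).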
ruHateSpeech

Model, team | Women | Men | LGBT | Nationalities | Migrants | Other
SOLAR 10.7B Instruct, Russian_NLP | 0.769 | 0.657 | 0.647 | 0.784 | 0.429 | 0.803