Yi 34B 200K

LM Research Created at 03.02.2024 14:05
Overall result: 0.455

Note: the submission does not contain all the required tasks.

Ratings for leaderboard tasks


Task name       Result          Metric
LCS             0.108           Accuracy
RCB             0.498 / 0.402   Accuracy / F1 macro
USE             0.049           Grade norm
RWSD            0.562           Accuracy
PARus           0.74            Accuracy
ruTiE           0.602           Accuracy
MultiQ          0.185 / 0.107   F1 / Exact match
CheGeKa         0.01 / 0        F1 / Exact match
ruModAr         0.635           Exact match
ruMultiAr       0.277           Exact match
MathLogicQA     0.473           Accuracy
ruWorldTree     0.838 / 0.838   Accuracy / F1 macro
ruOpenBookQA    0.748 / 0.746   Accuracy / F1 macro
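Several tasks above (RCB, ruWorldTree, ruOpenBookQA) report both accuracy and macro-averaged F1. A minimal sketch of the difference (not the MERA scoring code; toy labels only): macro F1 averages per-class F1 with equal weight per class, so it penalizes models that do well only on the majority class.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that exactly match the reference label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 for each class separately,
    then average with equal weight per class."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)
```

On a balanced task the two numbers are close (as for ruWorldTree above, 0.838 / 0.838); a gap between them, as for RCB (0.498 / 0.402), usually indicates uneven per-class performance.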

Evaluation on open tasks:



Task name       Result                  Metric
BPS             0.426                   Accuracy
ruMMLU          0.676                   Accuracy
SimpleAr        0.981                   Exact match
ruHumanEval     0.004 / 0.021 / 0.043   Pass@k
ruHHH           0.601
ruHateSpeech    0.626
ruDetox         0.161
ruEthics

                 Correct   Good     Ethical
Virtue           -0.12     -0.199   -0.161
Law              -0.113    -0.144   -0.145
Moral            -0.108    -0.164   -0.132
Justice          -0.125    -0.153   -0.159
Utilitarianism   -0.082    -0.154   -0.113
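ruHumanEval above reports three pass@k values (the specific k values are not stated on this page). pass@k is commonly estimated with the unbiased estimator introduced with HumanEval: given n generated samples per problem, of which c pass the tests, it gives the probability that at least one of k drawn samples passes. A sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn without replacement from n generations of which
    c are correct, passes the tests."""
    if n - c < k:
        # Fewer incorrect samples than k draws: a correct one is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with n = 10 generations and c = 3 passing, pass@1 is 0.3, matching the intuition that a single random sample passes 30% of the time.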

Information about the submission:

MERA version: -
Torch version: -
Codebase version: -
CUDA version: -
Model weights precision: -
Seed: -
Batch size: -
Transformers version: -
Number and type of GPUs: -
Architecture: -

Team:

LM Research

Name of the ML model:

Yi 34B 200K

Additional links:

https://github.com/01-ai/Yi

Architecture description:

Yi 34B follows the same model architecture as LLaMA, with a 200K context window.

Description of the training:

Yi independently built its own efficient training pipelines and robust training infrastructure from the ground up.

Pretrain data:

Trained on a 3T-token multilingual corpus.

Training Details:

Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up.

License:

Apache 2.0 license

Strategy, generation and parameters:

Code version v1.1.0. All parameters were left unchanged, as prepared by the organizers.

Details:
- 2 × NVIDIA A100
- dtype: float16
- PyTorch 2.1.2 + CUDA 12.1
- Transformers 4.36.2
- Context length: 11000


Ratings by subcategory

ruHHH (Metric: Accuracy)

Model, team                Honest   Helpful   Harmless
Yi 34B 200K, LM Research   0.607    0.61      0.586
ruMMLU subcategories, Yi 34B 200K (LM Research):

Anatomy 0.6, Virology 0.688, Astronomy 0.7, Marketing 0.686, Nutrition 0.667,
Sociology 0.9, Management 0.733, Philosophy 0.588, Prehistory 0.4, Human aging 0.7,
Econometrics 0.818, Formal logic 0.7, Global facts 0.6, Jurisprudence 0.615,
Miscellaneous 0.545, Moral disputes 0.5, Business ethics 0.6, Biology (college) 0.593,
Physics (college) 0.6, Human sexuality 1, Moral scenarios 0.3, World religions 0.712,
Abstract algebra 0.9, Medicine (college) 0.627, Machine learning 0.7,
Medical genetics 0.545, Professional law 0.75, PR 0.714, Security studies 1,
Chemistry (school) 0.545, Computer security 0.6, International law 0.778,
Logical fallacies 0.5, Politics 0.8, Clinical knowledge 0.636, Conceptual physics 0.8,
Math (college) 0.8, Biology (high school) 0.762, Physics (high school) 0.6,
Chemistry (high school) 0.4, Geography (high school) 0.684, Professional medicine 0.7,
Electrical engineering 0.9, Elementary mathematics 0.9, Psychology (high school) 0.875,
Statistics (high school) 0.5, History (high school) 0.9, Math (high school) 0.5,
Professional accounting 0.5, Professional psychology 0.9,
Computer science (college) 0.773, World history (high school) 0.688,
Macroeconomics 0.765, Microeconomics 0.867, Computer science (high school) 0.625,
European history 0.424, Government and politics 0.667
ruDetox subcategories:

Model, team                SIM     FL      STA
Yi 34B 200K, LM Research   0.433   0.636   0.379
ruEthics subcategories, Yi 34B 200K (LM Research):

Criterion        Correct   Good     Ethical
Virtue           -0.12     -0.199   -0.161
Law              -0.113    -0.144   -0.145
Moral            -0.108    -0.164   -0.132
Justice          -0.125    -0.153   -0.159
Utilitarianism   -0.082    -0.154   -0.113
ruHateSpeech subcategories:

Model, team                Women   Men     LGBT    Nationalities   Migrants   Other
Yi 34B 200K, LM Research   0.657   0.629   0.706   0.703           0.429      0.525