mGPT 13B

MERA Created at 12.01.2024 11:17
Overall result: 0.196
The submission does not contain all the required tasks.

Ratings for leaderboard tasks


Task name      Result          Metric
LCS            0.132           Accuracy
RCB            0.333 / 0.167   Accuracy / F1 macro
USE            0.002           Grade norm
RWSD           0.485           Accuracy
PARus          0.498           Accuracy
ruTiE          0.5             Accuracy
MultiQ         0.062 / 0.023   F1 / Exact match
CheGeKa        0.006 / 0       F1 / Exact match
ruModAr        0.0             Exact match
ruMultiAr      0.019           Exact match
MathLogicQA    0.263           Accuracy
ruWorldTree    0.232 / 0.172   Accuracy / F1 macro
ruOpenBookQA   0.25 / 0.193    Accuracy / F1 macro

Evaluation on open tasks:

See the ratings by subcategory below.


Task name      Result       Metric
BPS            0.463        Accuracy
ruMMLU         0.235        Accuracy
SimpleAr       0.023        Exact match
ruHumanEval    0 / 0 / 0    Pass@k
ruHHH          0.478        -
ruHateSpeech   0.543        -
ruDetox        0.343        -
ruEthics       see breakdown below

ruEthics breakdown:
                 Correct   Good    Ethical
Virtue           -0.088    0.045   0.004
Law              -0.1      0.066   0.016
Moral            -0.083    0.042   0.014
Justice          -0.106    0.074   -0.008
Utilitarianism   -0.066    0.036   0.018

Information about the submission:

MERA version: -
Torch version: -
Codebase version: -
CUDA version: -
Precision of the model weights: -
Seed: -
Batch size: -
Transformers version: -
Number of GPUs and their type: -
Architecture: -

Team:

MERA

Name of the ML model:

mGPT 13B

Additional links:

https://arxiv.org/pdf/2204.07580.pdf

Architecture description:

The mGPT architecture is based on GPT-3. We use the architecture description by Brown et al. (2020), the GPT-2 code base (Radford et al., 2019) from HuggingFace (Wolf et al., 2020), and Megatron-LM (Shoeybi et al., 2020). With all other hyperparameters equal, GPT-3 has fewer layers (24 vs. 48) but a larger hidden size (dmodel: 2048 vs. 1600) compared to GPT-2. GPT-3 also alternates the classic dense attention layers with sparse ones (Child et al., 2019).
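To make the layer/width trade-off concrete, here is a minimal sketch using Hugging Face GPT2Config objects with the figures quoted above; the head counts and the omission of GPT-3's alternating sparse attention are assumptions of this illustration, not a description of the released mGPT checkpoints.

```python
from transformers import GPT2Config

# GPT-2 shape as quoted above: deeper but narrower (48 layers, hidden size 1600).
gpt2_style = GPT2Config(
    n_layer=48,
    n_embd=1600,
    n_head=25,   # assumed head count (1600 / 25 = 64-dim heads)
)

# GPT-3-style shape that mGPT follows: shallower but wider
# (24 layers, hidden size 2048). GPT2Config cannot express GPT-3's
# alternating dense/sparse attention, so that detail is omitted here.
gpt3_style = GPT2Config(
    n_layer=24,
    n_embd=2048,
    n_head=16,   # assumed head count (2048 / 16 = 128-dim heads)
)

print(gpt2_style.n_layer, gpt2_style.n_embd)
print(gpt3_style.n_layer, gpt3_style.n_embd)
```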

Description of the training:

The LM was pretrained with a total batch size of 2048 and a context window of 512 tokens. The total number of training steps is 600k, and the models have seen 400B tokens during pretraining. Pretraining took 22 days on a cluster of 512 V100 GPUs for mGPT 13B.

Pretrain data:

The pretraining corpus is a collection of documents from Wikipedia and C4. The Wikipedia texts are extracted from the dumps (v. 20201101) with WikiExtractor (Attardi, 2015). The C4 data is downloaded using TensorFlow Datasets (Paper, 2021).
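As a rough illustration of that download path (a sketch, not the authors' actual pipeline), the C4 data can be pulled through the TensorFlow Datasets API; the "c4/multilingual" config name and the split used below are assumptions.

```python
import tensorflow_datasets as tfds

# Assumed TFDS config: the multilingual C4 build. The exact config used for
# mGPT's pretraining corpus is not stated in this card.
ds = tfds.load("c4/multilingual", split="train", shuffle_files=True)

# Each record carries the raw document text along with URL and timestamp.
for example in ds.take(2):
    text = example["text"].numpy().decode("utf-8")
    print(text[:200])
```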

Training Details:

Fixed hyperparameters: vocabulary size of 100k, learning rate of 2e−4, and batch size of 4.
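For reference, the stated figures can be collected in one place. This is a descriptive summary rather than a released configuration file, and reconciling the batch size of 4 with the total batch size of 2048 (via 512 GPUs) is an inference from the numbers above.

```python
# Pretraining setup as stated in this card (illustrative summary only).
mgpt_13b_pretraining = {
    "vocab_size": 100_000,        # vocabulary size of 100k
    "learning_rate": 2e-4,
    "per_device_batch_size": 4,   # "batch size of 4"
    "num_gpus": 512,              # 512 x V100
    "global_batch_size": 2048,    # total batch size
    "context_window": 512,        # tokens
    "training_steps": 600_000,
    "tokens_seen": 400e9,         # ~400B tokens
}

# 4 per device x 512 GPUs = 2048, matching the stated total batch size.
assert (
    mgpt_13b_pretraining["per_device_batch_size"]
    * mgpt_13b_pretraining["num_gpus"]
    == mgpt_13b_pretraining["global_batch_size"]
)
```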

License:

MIT

Strategy, generation and parameters:

Code version v.1.1.0. All parameters were left unchanged and are used as prepared by the organizers.

Details:
- 1 x NVIDIA A100
- dtype: auto
- PyTorch 2.1.2 + CUDA 12.1
- Transformers 4.36.2
- Context length: 2048
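A minimal loading sketch under the stated stack (Transformers 4.36.2, PyTorch 2.1.2, dtype auto, a single A100); the Hugging Face repo id and the prompt/decoding settings are assumptions for illustration and not the evaluation harness itself.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ai-forever/mGPT-13B"  # assumed Hugging Face repo id for mGPT 13B

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",  # mirrors the "dtype auto" setting above
    device_map="auto",   # requires accelerate; places the weights on the A100
)

prompt = "Москва - это"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=16, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```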


Ratings by subcategory

ruHHH (Metric: Accuracy)
Model, team      Honest   Helpful   Harmless
mGPT 13B, MERA   0.492    0.475     0.466
ruMMLU subcategories (mGPT 13B, MERA), Metric: Accuracy
Subject                           Result
Anatomy                           0.4
Virology                          0.375
Astronomy                         0.1
Marketing                         0.114
Nutrition                         0.286
Sociology                         0.2
Management                        0.267
Philosophy                        0.353
Prehistory                        0.1
Human aging                       0.5
Econometrics                      0.364
Formal logic                      0.7
Global facts                      0.3
Jurisprudence                     0.308
Miscellaneous                     0.091
Moral disputes                    0.2
Business ethics                   0.2
Biology (college)                 0.222
Physics (college)                 0.2
Human Sexuality                   0.7
Moral scenarios                   0.2
World religions                   0.135
Abstract algebra                  0.2
Medicine (college)                0.176
Machine learning                  0.2
Medical genetics                  0.182
Professional law                  0.438
PR                                0.143
Security studies                  0.3
Chemistry (school)                0.273
Computer security                 0.1
International law                 0.278
Logical fallacies                 0.4
Politics                          0.2
Clinical knowledge                0.182
Conceptual physics                0.2
Math (college)                    0.3
Biology (high school)             0.048
Physics (high school)             0.2
Chemistry (high school)           0.2
Geography (high school)           0.177
Professional medicine             0.2
Electrical engineering            0.1
Elementary mathematics            0.2
Psychology (high school)          0.25
Statistics (high school)          0.4
History (high school)             0.2
Math (high school)                0.2
Professional accounting           0.3
Professional psychology           0.1
Computer science (college)        0.227
World history (high school)       0.625
Macroeconomics                    0.324
Microeconomics                    0.333
Computer science (high school)    0.125
European history                  0.273
Government and politics           0.111
ruDetox
Model, team      SIM    FL      STA
mGPT 13B, MERA   0.7    0.607   0.728
ruEthics (mGPT 13B, MERA)
             Virtue    Law      Moral    Justice   Utilitarianism
Correct      -0.088    -0.1     -0.083   -0.106    -0.066
Good         0.045     0.066    0.042    0.074     0.036
Ethical      0.004     0.016    0.014    -0.008    0.018
ruHateSpeech
Model, team      Women   Men     LGBT    Nationalities   Migrants   Other
mGPT 13B, MERA   0.519   0.686   0.588   0.595           0.286      0.492