ruGPT-3-small

MERA · Created at 12.01.2024 14:47

Overall result: 0.191
The submission does not contain all the required tasks.

Ratings for leaderboard tasks


Task name      Result          Metric
LCS            0.08            Accuracy
RCB            0.333 / 0.167   Accuracy / F1 macro
USE            0.001           Grade norm
RWSD           0.492           Accuracy
PARus          0.498           Accuracy
ruTiE          0.5             Accuracy
MultiQ         0.063 / 0.009   F1 / Exact match
CheGeKa        0.007 / 0       F1 / Exact match
ruModAr        0.001           Exact match
ruMultiAr      0.009           Exact match
MathLogicQA    0.244           Accuracy
ruWorldTree    0.257 / 0.254   Accuracy / F1 macro
ruOpenBookQA   0.258 / 0.253   Accuracy / F1 macro

Evaluation on open tasks:



Task name      Result          Metric
BPS            0.367           Accuracy
ruMMLU         0.263           Accuracy
SimpleAr       0.0             Exact match
ruHumanEval    0 / 0 / 0       Pass@k
ruHHH          0.478
ruHateSpeech   0.54
ruDetox        0.316
ruEthics       (correlations with the "Correct", "Good", and "Ethical" labels)

                 Correct   Good   Ethical
Virtue           0         0      0
Law              0         0      0
Moral            0         0      0
Justice          0         0      0
Utilitarianism   0         0      0

Information about the submission:

MERA version: -
Torch version: -
Codebase version: -
CUDA version: -
Precision of the model weights: -
Seed: -
Batch size: -
Transformers version: -
Number of GPUs and their type: -
Architecture: -

Team:

MERA

Name of the ML model:

ruGPT-3-small

Additional links:

https://arxiv.org/abs/2309.10931

Architecture description:

ruGPT-3 is a Russian counterpart of GPT-3 (Brown et al., 2020). We use the model architecture description by Brown et al. and the GPT-2 code base (Radford et al., 2019) from the Transformers library. ruGPT-3 is pretrained on the language modeling objective. A BBPE tokenizer with a vocabulary size of 5 · 10⁴ tokens is used.
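
For reference, a minimal sketch of loading such a checkpoint through the GPT-2 code base in the Transformers library; the Hugging Face hub identifier below is an assumption, not something stated in this card, and should be replaced with the actual checkpoint path:

```python
# Minimal sketch: loading a ruGPT-3 checkpoint via the GPT-2 classes in Transformers.
# The hub id "ai-forever/rugpt3small_based_on_gpt2" is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ai-forever/rugpt3small_based_on_gpt2"  # assumed checkpoint id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# The BBPE vocabulary should be on the order of 5 * 10^4 tokens.
print("vocab size:", tokenizer.vocab_size)
print("parameters:", sum(p.numel() for p in model.parameters()))
```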

Description of the training:

The model was trained with a sequence length of 1024 using the transformers library by the SberDevices team on 80B tokens for 3 epochs. After that, the model was fine-tuned for 1 epoch with a sequence length of 2048. Total training time was around 14 days on 128 GPUs for the 1024-token context and a few days on 16 GPUs for the 2048-token context. The final perplexity on the test set is 13.6.
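
As a rough illustration of how a test-set perplexity figure like this is typically computed (exponentiated mean token-level negative log-likelihood), here is a small sketch; the checkpoint id and the sample texts are placeholders, not the authors' actual test set:

```python
# Sketch: perplexity as exp of the mean per-token NLL over held-out text.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ai-forever/rugpt3small_based_on_gpt2"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID).eval()

texts = ["Пример текста для оценки перплексии."]  # placeholder held-out texts

total_nll, total_tokens = 0.0, 0
with torch.no_grad():
    for text in texts:
        ids = tokenizer(text, return_tensors="pt").input_ids
        out = model(ids, labels=ids)   # out.loss is the mean NLL per predicted token
        n_pred = ids.size(1) - 1       # number of predicted (shifted) tokens
        total_nll += out.loss.item() * n_pred
        total_tokens += n_pred

print("perplexity:", math.exp(total_nll / total_tokens))
```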

Pretrain data:

450GB of texts. The corpus includes texts from various publicly available resources, which represent diverse domains: Wikipedia, News, Books, Colossal Clean Crawled Corpus, OpenSubtitles.

Training Details:

The ruGPT-3 models are pretrained with a maximum sequence length of 1024 tokens for three epochs and of 2048 tokens for one epoch. We use an initial learning rate of 1e-4 and the Adam optimizer with β1 = 0.9, β2 = 0.99, and ε = 1e-8. The total number of tokens seen during pretraining is 80B. The pretraining of ruGPT-3-large took 16 days on a cluster of 32 V100-SXM3 GPUs.
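
A minimal sketch of the optimizer settings quoted above, expressed with torch.optim; the learning-rate schedule and other training-loop details are not specified in this card and are omitted:

```python
# Adam with the hyperparameters quoted above: lr = 1e-4, betas = (0.9, 0.99), eps = 1e-8.
import torch

def make_optimizer(model: torch.nn.Module) -> torch.optim.Adam:
    return torch.optim.Adam(
        model.parameters(),
        lr=1e-4,
        betas=(0.9, 0.99),
        eps=1e-8,
    )
```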

License:

MIT

Strategy, generation and parameters:

Code version v.1.1.0. All parameters were left unchanged and are used as prepared by the organizers. Details:
- 1 x NVIDIA A100
- dtype: auto
- PyTorch 2.1.2 + CUDA 12.1
- Transformers 4.36.2
- Context length 2048
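
A small sketch for checking that a local environment matches the reported evaluation setup; this only verifies versions and the dtype/context settings, it is not the MERA evaluation entry point, and the checkpoint id is an assumption:

```python
# Environment check mirroring the reported setup:
# PyTorch 2.1.2 + CUDA 12.1, Transformers 4.36.2, dtype "auto", 1 x NVIDIA A100, context 2048.
import torch
import transformers
from transformers import AutoModelForCausalLM

print("torch:", torch.__version__, "cuda:", torch.version.cuda)
print("transformers:", transformers.__version__)
print("gpu:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")

model = AutoModelForCausalLM.from_pretrained(
    "ai-forever/rugpt3small_based_on_gpt2",  # assumed checkpoint id
    torch_dtype="auto",                      # corresponds to the "dtype auto" setting
)
print("context length:", model.config.n_positions)  # expected: 2048
```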


Ratings by subcategory

ruHHH (Metric: Accuracy)
Model, team: ruGPT-3-small, MERA
Honest    Helpful   Harmless
0.475     0.492     0.466
ruMMLU (Accuracy by subject)
Model, team: ruGPT-3-small, MERA
Anatomy 0.4
Virology 0.375
Astronomy 0.4
Marketing 0.286
Nutrition 0.238
Sociology 0.2
Management 0.267
Philosophy 0.176
Prehistory 0.4
Human aging 0.2
Econometrics 0.364
Formal logic 0.4
Global facts 0.3
Jurisprudence 0.346
Miscellaneous 0.182
Moral disputes 0.3
Business ethics 0.1
Biology (college) 0.333
Physics (college) 0.7
Human Sexuality 0.2
Moral scenarios 0.3
World religions 0.327
Abstract algebra 0.1
Medicine (college) 0.216
Machine learning 0
Medical genetics 0.182
Professional law 0.313
PR 0.143
Security studies 0.2
Chemistry (school) 0.273
Computer security 0.3
International law 0.333
Logical fallacies 0.7
Politics 0
Clinical knowledge 0.091
Conceptual physics 0.5
Math (college) 0
Biology (high school) 0.095
Physics (high school) 0.2
Chemistry (high school) 0.3
Geography (high school) 0.215
Professional medicine 0.2
Electrical engineering 0.5
Elementary mathematics 0.4
Psychology (high school) 0.438
Statistics (high school) 0
History (high school) 0.1
Math (high school) 0.5
Professional accounting 0.5
Professional psychology 0
Computer science (college) 0.182
World history (high school) 0.375
Macroeconomics 0.294
Microeconomics 0.2
Computer science (high school) 0.208
European history 0.273
Government and politics 0.185
ruDetox
Model, team: ruGPT-3-small, MERA
SIM (meaning preservation): 0.676
FL (fluency): 0.612
STA (style transfer accuracy): 0.713
ruEthics (correlations with the "Correct", "Good", and "Ethical" labels)
Model, team: ruGPT-3-small, MERA

Label      Virtue   Law   Moral   Justice   Utilitarianism
Correct    0        0     0       0         0
Good       0        0     0       0         0
Ethical    0        0     0       0         0
ruHateSpeech (by target group)
Model, team: ruGPT-3-small, MERA
Women: 0.519
Men: 0.657
LGBT: 0.588
Nationalities: 0.595
Migrants: 0.286
Other: 0.492