Task name | Result | Metric |
---|---|---|
LCS | 0.122 | Accuracy |
RCB | 0.333 / 0.167 | Avg. F1 / Accuracy |
USE | 0 | Grade Norm |
RWSD | 0.515 | Accuracy |
PARus | 0.498 | Accuracy |
ruTiE | 0.5 | Accuracy |
MultiQ | 0.099 / 0.026 | F1-score/EM |
CheGeKa | 0.007 / 0 | F1 / EM |
ruModAr | 0.001 | EM |
ruMultiAr | 0.007 | EM |
MathLogicQA | 0.251 | Accuracy |
ruWorldTree | 0.232 / 0.191 | Avg. F1 / Accuracy |
ruOpenBookQA | 0.21 / 0.178 | Avg. F1 / Accuracy |
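Several of the generative tasks above (MultiQ, CheGeKa) are scored with exact match (EM) and token-level F1. The sketch below illustrates how such scores are typically computed; the normalization (lowercasing, whitespace tokenization) is an assumption for illustration, not the official MERA scoring code.

```python
# Illustrative EM / token-level F1 scoring for generative QA answers.
# Normalization here is an assumption, not the official MERA implementation.

def normalize(text: str) -> list[str]:
    return text.lower().split()

def exact_match(prediction: str, reference: str) -> float:
    return float(normalize(prediction) == normalize(reference))

def token_f1(prediction: str, reference: str) -> float:
    pred, ref = normalize(prediction), normalize(reference)
    ref_counts: dict[str, int] = {}
    for tok in ref:
        ref_counts[tok] = ref_counts.get(tok, 0) + 1
    common = 0
    for tok in pred:
        if ref_counts.get(tok, 0) > 0:
            common += 1
            ref_counts[tok] -= 1
    if common == 0:
        return 0.0
    precision = common / len(pred)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

print(exact_match("москва", "Москва"))               # 1.0
print(round(token_f1("город москва", "Москва"), 3))  # 0.667
```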
Task name | Result | Metric |
---|---|---|
BPS | 0.416 | Accuracy
ruMMLU | 0.245 | Accuracy
SimpleAr | 0.004 | EM
ruHumanEval | 0 / 0 / 0 | pass@k
ruHHH | 0.478 | Accuracy
ruHateSpeech | 0.543 | Accuracy
ruDetox | | Overall average score (J) / Assessment of the preservation of meaning (SIM) / Assessment of naturalness (FL) / Style Transfer Accuracy (STA)
ruEthics | Table results: [[0.039, 0.032, 0.042, 0.029, 0.03], | 5 MCC
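ruHumanEval is reported as pass@k for several values of k. A minimal sketch of the standard unbiased pass@k estimator (Chen et al., 2021) follows, assuming n generations per problem of which c pass the tests; it is shown for illustration and is not the MERA scoring code.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 10 generations per problem, none passing -> pass@k is 0 for any k,
# matching the 0 / 0 / 0 result reported above.
print(pass_at_k(10, 0, 1), pass_at_k(10, 0, 5), pass_at_k(10, 0, 10))
```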
ruGPT-3-large
ruGPT-3 is a Russian counterpart of GPT-3 (Brown et al., 2020). We use the model architecture description by Brown et al. and the GPT-2 code base (Radford et al., 2019) from the Transformers library. ruGPT-3 is pretrained on the language modeling objective. A BBPE tokenizer with a vocabulary size of 5 · 10⁴ tokens was used.
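Because the model reuses the GPT-2 code base from the Transformers library, it can be loaded with the standard causal-LM classes. A minimal sketch follows; the checkpoint ID `ai-forever/rugpt3large_based_on_gpt2` is an assumption about the published Hugging Face name.

```python
# Minimal loading sketch; the checkpoint ID below is an assumption.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ai-forever/rugpt3large_based_on_gpt2"    # assumed HF checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)  # BBPE tokenizer, ~50k vocabulary
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Москва — столица", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```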
The model was trained with a sequence length of 1024 using the Transformers library by the SberDevices team on 80B tokens for 3 epochs. After that, the model was fine-tuned for 1 epoch with a sequence length of 2048. Total training time was around 14 days on 128 GPUs for the 1024 context and a few days on 16 GPUs for the 2048 context. The final perplexity on the test set is 13.6.
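For reference, perplexity is the exponentiated mean per-token cross-entropy; the loss value below is purely illustrative, chosen only to show how a reported perplexity of 13.6 relates to the average language-modeling loss.

```python
import math

# Perplexity = exp(mean per-token cross-entropy, in nats).
mean_ce_loss = 2.61            # illustrative value; exp(2.61) ≈ 13.6
perplexity = math.exp(mean_ce_loss)
print(round(perplexity, 1))    # ~13.6
```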
The model was pretrained on 450 GB of texts. The corpus includes texts from various publicly available resources representing diverse domains: Wikipedia, news, books, the Colossal Clean Crawled Corpus, and OpenSubtitles.
The ruGPT-3 models are pretrained with a maximum sequence length of 1024 tokens for three epochs and of 2048 tokens for one epoch. We use an initial learning rate of 1e−4 and the Adam optimizer with β1 = 0.9, β2 = 0.99, and ϵ = 1e−8. The total number of tokens seen during pretraining is 80B. The pretraining of ruGPT-3-large took 16 days on a cluster of 32 V100-SXM3 GPUs.
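The stated optimization hyperparameters map directly onto PyTorch's Adam. A minimal sketch under that assumption is shown below; the tiny linear model and single step are placeholders, not the original SberDevices training code.

```python
# Hyperparameters from the description: lr 1e-4, Adam with beta1=0.9,
# beta2=0.99, eps=1e-8. The model below is a placeholder, not ruGPT-3-large.
import torch
from torch import nn

model = nn.Linear(16, 16)
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.99),
    eps=1e-8,
)

# One illustrative optimization step on random data.
loss = model(torch.randn(4, 16)).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```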
License: MIT
Code version v.1.1.0. All parameters were left unchanged and are used as prepared by the organizers. Details:
- 1 × NVIDIA A100
- dtype: auto
- PyTorch 2.1.2 + CUDA 12.1
- Transformers 4.36.2
- Context length: 2048
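The listed evaluation environment corresponds to a straightforward Transformers setup. A minimal sketch, reusing the assumed checkpoint ID from above, of loading the model with automatic dtype selection on a single GPU and capping the context at 2048 tokens:

```python
# Sketch of the evaluation-time loading setup: single GPU, dtype auto,
# context length 2048. The checkpoint ID is an assumption.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ai-forever/rugpt3large_based_on_gpt2"                           # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id, model_max_length=2048)  # context length 2048
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",    # "dtype: auto" from the setup details
).to("cuda:0")             # 1 x NVIDIA A100
model.eval()
```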