ruT5-base (222M)

MERA. Created at 12.01.2024 11:20.

Overall result: 0.193
Note: the submission does not contain all the required tasks.

Ratings for leaderboard tasks


Task name | Result | Metric
LCS | 0.1 | Accuracy
RCB | 0.336 / 0.269 | Accuracy / F1 macro
USE | 0 | Grade norm
RWSD | 0.481 | Accuracy
PARus | 0.508 | Accuracy
ruTiE | 0.493 | Accuracy
MultiQ | 0.008 / 0 | F1 / Exact match
CheGeKa | 0.001 / 0 | F1 / Exact match
ruModAr | 0.0 | Exact match
ruMultiAr | 0.0 | Exact match
MathLogicQA | 0.259 | Accuracy
ruWorldTree | 0.234 / 0.151 | Accuracy / F1 macro
ruOpenBookQA | 0.265 / 0.183 | Accuracy / F1 macro

Evaluation on open tasks:



Task name | Result | Metric
BPS | 0.486 | Accuracy
ruMMLU | 0.237 | Accuracy
SimpleAr | 0.0 | Exact match
ruHumanEval | 0 / 0 / 0 | Pass@k (k = 1, 5, 10)
ruHHH | 0.478 | Accuracy
ruHateSpeech | 0.498 | Accuracy
ruDetox | 0.003 | Joint score (J)
ruEthics | see table below | MCC

ruEthics (MCC of model answers with each criterion):
Criterion | Correct | Good | Ethical
Virtue | 0.008 | -0.001 | 0.038
Law | 0.001 | -0.018 | 0.032
Moral | 0.013 | 0.014 | 0.042
Justice | 0.012 | 0.019 | 0.055
Utilitarianism | -0.026 | 0.01 | 0.033

Information about the submission:

MERA version: -
Torch version: -
Codebase version: -
CUDA version: -
Model weights precision: -
Seed: -
Batch size: -
Transformers version: -
Number of GPUs and their type: -
Architecture: -

Team:

MERA

Name of the ML model:

ruT5-base (222M)

Additional links:

https://arxiv.org/abs/2309.10931

Architecture description:

ruT5 is one of the first encoder-decoder language models pretrained exclusively on Russian textual data. The ruT5 model is designed analogously to the T5 model.
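
For reference, a minimal sketch of loading the model with the Hugging Face transformers library. The checkpoint id "ai-forever/ruT5-base" is an assumption based on the public release of this model; it is not stated in this card.

    # Minimal loading sketch; "ai-forever/ruT5-base" is the assumed public
    # checkpoint id (formerly published as "sberbank-ai/ruT5-base").
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("ai-forever/ruT5-base")
    model = T5ForConditionalGeneration.from_pretrained("ai-forever/ruT5-base")

    # Encoder-decoder inference: fill a masked span, T5-style.
    inputs = tokenizer("Москва - столица <extra_id_0>.", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=10)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))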

Description of the training:

The model is pretrained on a masked language modeling "span corruption" objective, where consecutive spans of the input tokens are masked and the model is trained to reconstruct the masked tokens. The authors use the SentencePiece tokenizer with a vocabulary size of 32,000 tokens.
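
To make the objective concrete, a small illustration of T5-style span corruption; the text and the choice of masked spans below are invented for illustration only.

    # Span corruption, T5-style: consecutive spans of input tokens are replaced
    # with sentinel tokens, and the target reproduces the masked spans, each
    # preceded by its sentinel and closed by a final sentinel.
    tokens = ["Мама", "мыла", "раму", "вчера", "вечером"]

    # Suppose the spans ("мыла", "раму") and ("вечером",) are selected for masking:
    corrupted_input = ["Мама", "<extra_id_0>", "вчера", "<extra_id_1>"]
    target = ["<extra_id_0>", "мыла", "раму",
              "<extra_id_1>", "вечером", "<extra_id_2>"]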

Pretrain data:

300 GB of texts. The corpus includes texts from various publicly available resources covering diverse domains: Wikipedia, news, books, and the Colossal Clean Crawled Corpus.

Training Details:

The ruT5 model is pretrained using a linear scheduler with a learning rate of 1e-4 and the Adam optimizer with β1 = 0.9, β2 = 0.99, and ϵ = 1e-8. The sequence length is set to 512/512 for inputs and targets.
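
A minimal PyTorch sketch of the stated optimizer and schedule, reusing `model` from the loading sketch above; the total number of steps is an assumed placeholder, not a value reported in this card.

    import torch

    # Adam as described: lr=1e-4, beta1=0.9, beta2=0.99, eps=1e-8.
    optimizer = torch.optim.Adam(
        model.parameters(), lr=1e-4, betas=(0.9, 0.99), eps=1e-8
    )

    # Linear learning-rate decay to zero; total_steps is illustrative only.
    total_steps = 1_000_000
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer, lr_lambda=lambda step: max(0.0, 1.0 - step / total_steps)
    )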

License:

MIT

Strategy, generation and parameters:

Code version v.1.1.0. No parameters were changed; everything is used as prepared by the organizers. Details:
- 1 x NVIDIA A100
- dtype: auto
- PyTorch 2.1.2 + CUDA 12.1
- Transformers 4.36.2
- Context length: 512
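
For context, a minimal sketch of loading the model under the stated settings. The checkpoint id is the same assumption as above, and this is a sketch, not the organizers' actual evaluation harness.

    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("ai-forever/ruT5-base")
    # "dtype auto" in the reported settings maps to torch_dtype="auto".
    model = T5ForConditionalGeneration.from_pretrained(
        "ai-forever/ruT5-base", torch_dtype="auto"
    ).to("cuda")  # the reported setup used 1 x NVIDIA A100

    text = "Пример запроса."
    # Inputs are truncated to the reported context length of 512 tokens.
    inputs = tokenizer(
        text, truncation=True, max_length=512, return_tensors="pt"
    ).to("cuda")
    outputs = model.generate(**inputs)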


Ratings by subcategory

ruHHH (Metric: Accuracy)
Model, team | Honest | Helpful | Harmless
ruT5-base (222M), MERA | 0.475 | 0.475 | 0.483
ruMMLU (Metric: Accuracy), ruT5-base (222M), MERA
Subject | Accuracy
Anatomy | 0.1
Virology | 0.375
Astronomy | 0.2
Marketing | 0.257
Nutrition | 0.286
Sociology | 0.3
Management | 0.267
Philosophy | 0.176
Prehistory | 0.3
Human aging | 0.2
Econometrics | 0.182
Formal logic | 0.3
Global facts | 0.1
Jurisprudence | 0.231
Miscellaneous | 0.318
Moral disputes | 0.2
Business ethics | 0.3
Biology (college) | 0.407
Physics (college) | 0.4
Human sexuality | 0.2
Moral scenarios | 0.2
World religions | 0.154
Abstract algebra | 0.5
Medicine (college) | 0.294
Machine learning | 0.2
Medical genetics | 0.091
Professional law | 0.188
PR | 0.214
Security studies | 0.2
Chemistry (college) | 0.182
Computer security | 0.4
International law | 0.111
Logical fallacies | 0
Politics | 0.2
Clinical knowledge | 0.273
Conceptual physics | 0.1
Math (college) | 0.3
Biology (high school) | 0.429
Physics (high school) | 0.1
Chemistry (high school) | 0.1
Geography (high school) | 0.228
Professional medicine | 0.2
Electrical engineering | 0.3
Elementary mathematics | 0.2
Psychology (high school) | 0.313
Statistics (high school) | 0.1
History (high school) | 0.4
Math (high school) | 0.2
Professional accounting | 0.3
Professional psychology | 0.2
Computer science (college) | 0.273
World history (high school) | 0.063
Macroeconomics | 0.324
Microeconomics | 0.4
Computer science (high school) | 0.083
European history | 0.182
Government and politics | 0.185
ruDetox (SIM: meaning preservation, FL: fluency, STA: style transfer accuracy)
Model, team | SIM | FL | STA
ruT5-base (222M), MERA | 0.182 | 0.235 | 0.079
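
The aggregate ruDetox score in the open-task table is consistent with these components. Assuming the usual ruDetox aggregation (the per-sample product of STA, SIM, and FL, averaged over the dataset), the product of the column means gives the same ballpark:

    J ≈ STA × SIM × FL = 0.079 × 0.182 × 0.235 ≈ 0.003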
ruEthics, MCC with the "Correct" label:
Model, team | Virtue | Law | Moral | Justice | Utilitarianism
ruT5-base (222M), MERA | 0.008 | 0.001 | 0.013 | 0.012 | -0.026

ruEthics, MCC with the "Good" label:
Model, team | Virtue | Law | Moral | Justice | Utilitarianism
ruT5-base (222M), MERA | -0.001 | -0.018 | 0.014 | 0.019 | 0.01

ruEthics, MCC with the "Ethical" label:
Model, team | Virtue | Law | Moral | Justice | Utilitarianism
ruT5-base (222M), MERA | 0.038 | 0.032 | 0.042 | 0.055 | 0.033
ruHateSpeech (Metric: Accuracy)
Model, team | Women | Men | LGBT | Nationalities | Migrants | Other
ruT5-base (222M), MERA | 0.491 | 0.657 | 0.588 | 0.486 | 0.286 | 0.426