FRED-T5 large 820M

MERA. Created at 12.01.2024 11:15
Overall result: 0.194
Note: the submission does not contain all the required tasks.

Ratings for leaderboard tasks


Task name      Result          Metric
LCS            0.086           Accuracy
RCB            0.354 / 0.248   Accuracy / F1 macro
USE            0               Grade norm
RWSD           0.492           Accuracy
PARus          0.492           Accuracy
ruTiE          0.493           Accuracy
MultiQ         0.052 / 0       F1 / Exact match
CheGeKa        0.001 / 0       F1 / Exact match
ruModAr        0.0             Exact match
ruMultiAr      0.0             Exact match
MathLogicQA    0.24            Accuracy
ruWorldTree    0.232 / 0.174   Accuracy / F1 macro
ruOpenBookQA   0.265 / 0.215   Accuracy / F1 macro

Evaluation on open tasks:



Task name      Result       Metric
BPS            0.475        Accuracy
ruMMLU         0.248        Accuracy
SimpleAr       0.0          Exact match
ruHumanEval    0 / 0 / 0    Pass@k
ruHHH          0.472        Accuracy
ruHateSpeech   0.543        Accuracy
ruDetox        0.003        Joint score (STA × SIM × FL)
ruEthics       see below

ruEthics, by label type and ethics criterion:

Criterion        Correct   Good   Ethical
Virtue           0         0      0
Law              0         0      0
Moral            0         0      0
Justice          0         0      0
Utilitarianism   0         0      0

Information about the submission:

MERA version: -
Torch version: -
Codebase version: -
CUDA version: -
Model weights precision: -
Seed: -
Batch size: -
Transformers version: -
Number of GPUs and their type: -
Architecture: -

Team:

MERA

Name of the ML model:

FRED-T5 large 820M

Additional links:

https://arxiv.org/abs/2309.10931

Architecture description:

FRED-T5 (Full-scale Russian Enhanced Denoisers) is an encoder-decoder model based on T5 and UL2. It has 16 attention heads, a hidden-layer dimension of 1024, a fully connected (feed-forward) dimension of 2816, and uses the GELU activation function.
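
For orientation, the stated dimensions can be expressed as a HuggingFace `T5Config` along the following lines. This is a minimal sketch: the vocabulary size is taken from the tokenizer description below, and the layer count is an assumption based on the published FRED-T5 checkpoints; neither is stated in this paragraph.

```python
from transformers import T5Config

# Illustrative only: dimensions as stated above; num_layers and
# vocab_size are assumptions (public checkpoints / tokenizer
# description below), not from this paragraph.
config = T5Config(
    d_model=1024,              # hidden-layer dimension
    d_ff=2816,                 # fully connected (feed-forward) dimension
    num_heads=16,              # attention heads
    feed_forward_proj="gelu",  # GELU activation as described
    vocab_size=50364,          # 50257 BBPE tokens + 107 special tokens
    num_layers=24,             # assumption: FRED-T5 large layer count
)
print(config)
```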

Description of the training:

A byte-level BPE (BBPE) tokenizer is used, with a vocabulary of 50257 tokens plus 107 special tokens. Prefix tokens: '<LM>', '<SC1>', ..., '<SC6>'. Drawing inspiration from Tay et al. (2022), the FRED-T5 1.7B (or XL) model was pretrained on a mixture of denoisers (MoD), a pretraining objective that represents a set of diverse pretraining objectives. The R-denoiser is the masked language modeling span-corruption objective used in T5. The S-denoiser follows the language modeling objective, where the input sequence is split into context and target tokens so that the targets do not rely on future information. The X-denoiser aims to recover a large part of the input based on the span-corruption and language modeling objectives.
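
As a usage illustration, here is a minimal inference sketch with the HuggingFace stack. The `ai-forever/FRED-T5-large` checkpoint id and the GPT2-style tokenizer class come from the public model card, not from this page, so treat them as assumptions.

```python
import torch
from transformers import GPT2Tokenizer, T5ForConditionalGeneration

# Assumption: checkpoint id per the public model card.
name = "ai-forever/FRED-T5-large"
tokenizer = GPT2Tokenizer.from_pretrained(name, eos_token="</s>")
model = T5ForConditionalGeneration.from_pretrained(name)

# "<LM>" selects the language-modeling (S-denoiser) mode;
# "<SC1>".."<SC6>" select span-corruption denoiser modes.
text = "<LM>Москва - столица"
input_ids = torch.tensor([tokenizer.encode(text)])
out = model.generate(input_ids, max_new_tokens=16,
                     eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```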

Pretrain data:

The model was trained on a 300 GB Russian-language corpus.

Training Details:

FRED-T5 is pretrained using a linear scheduler with an initial learning rate of 1e-4 and the Adafactor optimizer (Shazeer and Stern, 2018) with β1 = 0.9, β2 = 0.99, and ε = 1e-8. The sequence length is set to 512/512 for inputs and targets. The FRED-T5-XL model was pretrained with a total batch size of 2048 for 35 days on 160 V100 GPUs, followed by 5 days on 80 A100 GPUs.
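
A minimal sketch of that optimizer/scheduler setup with the HuggingFace implementations follows. Note that the reported β2 and ε have no direct counterparts in the `Adafactor` signature, which exposes only `beta1`, so this is an approximation; the warmup and total step counts are placeholder assumptions.

```python
from transformers import T5ForConditionalGeneration
from transformers.optimization import Adafactor, get_linear_schedule_with_warmup

model = T5ForConditionalGeneration.from_pretrained("ai-forever/FRED-T5-large")

# Fixed external lr (1e-4, as stated) instead of Adafactor's
# internal relative-step schedule.
optimizer = Adafactor(
    model.parameters(),
    lr=1e-4,
    beta1=0.9,
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)
# Linear decay of the learning rate, as described above;
# step counts are placeholder assumptions.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=100_000
)
```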

License:

MIT

Strategy, generation and parameters:

Code version v.1.1.0. All parameters were left unchanged and used as prepared by the organizers. Details:
- 1 x NVIDIA A100
- dtype: auto
- PyTorch 2.1.2 + CUDA 12.1
- Transformers 4.36.2
- Context length: 512


Ratings by subcategory

All subcategory tables below are for FRED-T5 large 820M (team MERA).

USE (metric: Grade norm)
Subcategories 1-26 and 8_0-8_4: no results ("-" in all 31 subcategories).

ruHHH
Honest   Helpful   Harmless
0.492    0.458     0.466

ruMMLU (Accuracy by subject)
Anatomy                          0.5
Virology                         0.25
Astronomy                        0.2
Marketing                        0.143
Nutrition                        0.286
Sociology                        0.3
Management                       0.133
Philosophy                       0.235
Prehistory                       0.1
Human aging                      0.4
Econometrics                     0.182
Formal logic                     0
Global facts                     0.6
Jurisprudence                    0.192
Miscellaneous                    0.136
Moral disputes                   0.3
Business ethics                  0.1
Biology (college)                0.296
Physics (college)                0.7
Human sexuality                  0.2
Moral scenarios                  0.4
World religions                  0.212
Abstract algebra                 0.2
Medicine (college)               0.255
Machine learning                 0.3
Medical genetics                 0.364
Professional law                 0.25
PR                               0.214
Security studies                 0.1
Chemistry (college)              0.091
Computer security                0.3
International law                0.056
Logical fallacies                0.4
Politics                         0.1
Clinical knowledge               0.091
Conceptual physics               0.2
Math (college)                   0.2
Biology (high school)            0.286
Physics (high school)            0.4
Chemistry (high school)          0
Geography (high school)          0.278
Professional medicine            0.1
Electrical engineering           0.3
Elementary mathematics           0.4
Psychology (high school)         0.25
Statistics (high school)         0.3
History (high school)            0.4
Math (high school)               0.4
Professional accounting          0.2
Professional psychology          0.5
Computer science (college)       0.318
World history (high school)      0.188
Macroeconomics                   0.265
Microeconomics                   0.267
Computer science (high school)   0.167
European history                 0.242
Government and politics          0.296

ruDetox
SIM     FL      STA
0.098   0.55    0.051

ruEthics (rows: label type; columns: ethics criterion)
Label      Virtue   Law   Moral   Justice   Utilitarianism
Correct    0        0     0       0         0
Good       0        0     0       0         0
Ethical    0        0     0       0         0

ruHateSpeech
Women   Men     LGBT    Nationalities   Migrants   Other
0.519   0.686   0.588   0.595           0.286      0.492