FRED-T5 1.7B

MERA · Created at 12.01.2024 11:14

Overall result: 0.191
Note: the submission does not contain all the required tasks.

Ratings for leaderboard tasks

Task name      Result          Metric
LCS            0.088           Accuracy
RCB            0.333 / 0.167   Accuracy / F1 macro
USE            0               Grade norm
RWSD           0.5             Accuracy
PARus          0.498           Accuracy
ruTiE          0.495           Accuracy
MultiQ         0.031 / 0.001   F1 / Exact match
CheGeKa        0.006 / 0       F1 / Exact match
ruModAr        0.001           Exact match
ruMultiAr      0.0             Exact match
MathLogicQA    0.246           Accuracy
ruWorldTree    0.255 / 0.13    Accuracy / F1 macro
ruOpenBookQA   0.25 / 0.129    Accuracy / F1 macro

Evaluation on open tasks:

Task name      Result      Metric
BPS            0.508       Accuracy
ruMMLU         0.262       Accuracy
SimpleAr       0.0         Exact match
ruHumanEval    0 / 0 / 0   Pass@k
ruHHH          0.494       Accuracy
ruHateSpeech   0.543       Accuracy
ruDetox        0.124       -

ruEthics (results by ethical criterion and ground-truth label):

                 Correct   Good   Ethical
Virtue           0         0      0
Law              0         0      0
Moral            0         0      0
Justice          0         0      0
Utilitarianism   0         0      0

Information about the submission:

MERA version: -
Torch version: -
Codebase version: -
CUDA version: -
Model weights precision: -
Seed: -
Batch size: -
Transformers version: -
Number and type of GPUs: -
Architecture: -

Team:

MERA

Name of the ML model:

FRED-T5 1.7B

Additional links:

https://arxiv.org/abs/2309.10931

Architecture description:

FRED-T5 (Full-scale Russian Enhanced Denoisers) is an encoder-decoder model based on T5 and UL2, with 24 attention heads, a hidden size of 1536, and the GELU activation function.
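
For illustration, the published checkpoint can be loaded with the standard T5 classes from Hugging Face Transformers. This is a minimal sketch: the hub ID ai-forever/FRED-T5-1.7B and the GPT2Tokenizer class are taken from the public model card, not from this submission.

    from transformers import GPT2Tokenizer, T5ForConditionalGeneration

    # Hub ID assumed from the public FRED-T5 release, not stated in this submission.
    MODEL_ID = "ai-forever/FRED-T5-1.7B"

    # FRED-T5 ships a BBPE (GPT-2-style) tokenizer rather than SentencePiece.
    tokenizer = GPT2Tokenizer.from_pretrained(MODEL_ID, eos_token="</s>")
    model = T5ForConditionalGeneration.from_pretrained(MODEL_ID, torch_dtype="auto")

    # The figures quoted above are visible in the model config:
    print(model.config.num_heads)  # 24 attention heads
    print(model.config.d_model)    # hidden size 1536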

Description of the training:

BBPE tokenizer with a vocabulary of 50,257 tokens plus 107 special tokens. Prefix tokens: '<LM>', '<SC1>', ..., '<SC6>'. Drawing inspiration from Tay et al. (2022), the FRED-T5 1.7B (or XL) model was pretrained on a mixture of denoisers (MoD), a pretraining objective that combines a set of diverse denoising objectives. The R-denoiser is the masked language modeling span-corruption objective used in T5. The S-denoiser follows the language modeling objective: the input sequence is split into context and target tokens so that the targets do not rely on future information. The X-denoiser aims to recover a large part of the input, combining the span-corruption and language modeling objectives.
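
A quick sketch of how the prefix tokens are used at inference time, continuing the loading snippet above. The Russian example sentence and the generation settings are illustrative assumptions; '<SC1>' selects a span-corruption denoiser and '<extra_id_0>' marks the span to reconstruct.

    # <SC1> selects a span-corruption (R-style) denoiser; <extra_id_0> marks
    # the masked span that the decoder is asked to reconstruct.
    text = "<SC1>Лето было тёплым, поэтому мы <extra_id_0> гулять в парке."
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        eos_token_id=tokenizer.eos_token_id,
        max_new_tokens=16,
    )
    # The decoder output contains the sentinel followed by the predicted span.
    print(tokenizer.decode(outputs[0], skip_special_tokens=False))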

Pretrain data:

The model was pretrained on a 300 GB Russian-language corpus.

Training Details:

FRED-T5 is pretrained using a linear scheduler with an initial learning rate of 1e−4 and the Adafactor optimizer (Shazeer and Stern, 2018) with β1 = 0.9, β2 = 0.99, and ϵ = 1e−8. The sequence length is set to 512/512 for inputs and targets. The FRED-T5-XL model is pretrained with a total batch size of 2048, running for 45 days on 112 A100 GPUs.
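
For reference, a roughly equivalent setup in the Transformers API could look as follows. This is a sketch under assumptions: model comes from the loading snippet above, the total step count is hypothetical, and the quoted β2/ϵ values do not map one-to-one onto this Adafactor implementation, which parameterizes the second moment internally.

    from transformers.optimization import Adafactor, get_linear_schedule_with_warmup

    # Fixed lr of 1e-4 with beta1 = 0.9, as described above; relative_step and
    # scale_parameter are disabled so the explicit learning rate is actually used.
    optimizer = Adafactor(
        model.parameters(),
        lr=1e-4,
        beta1=0.9,
        relative_step=False,
        scale_parameter=False,
        warmup_init=False,
    )

    total_steps = 100_000  # hypothetical: the submission does not state a step count
    scheduler = get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=0, num_training_steps=total_steps
    )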

License:

MIT

Strategy, generation and parameters:

Code version v.1.1.0. All parameters were left unchanged and used as prepared by the organizers. Details:
- 1 x NVIDIA A100
- dtype auto
- PyTorch 2.1.2 + CUDA 12.1
- Transformers 4.36.2
- Context length 512
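
"dtype auto" here means letting from_pretrained adopt the dtype stored in the checkpoint instead of forcing float32. A sketch, reusing the hub ID assumed earlier:

    from transformers import T5ForConditionalGeneration

    # torch_dtype="auto" picks up the dtype recorded in the checkpoint weights.
    model = T5ForConditionalGeneration.from_pretrained(
        "ai-forever/FRED-T5-1.7B",  # assumed hub ID, as above
        torch_dtype="auto",
    ).to("cuda")  # single NVIDIA A100, per the details above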

Ratings by subcategory

ruHHH (Metric: Accuracy)

Model, team          Honest   Helpful   Harmless
FRED-T5 1.7B, MERA   0.508    0.475     0.5

ruMMLU, FRED-T5 1.7B, MERA (Accuracy by subdomain)

Subdomain                        Accuracy
Anatomy                          0
Virology                         0.375
Astronomy                        0.3
Marketing                        0.343
Nutrition                        0.095
Sociology                        0.3
Management                       0.2
Philosophy                       0.412
Prehistory                       0.5
Human aging                      0
Econometrics                     0.455
Formal logic                     0.8
Global facts                     0.1
Jurisprudence                    0.423
Miscellaneous                    0.182
Moral disputes                   0.3
Business ethics                  0.3
Biology (college)                0.296
Physics (college)                0.5
Human sexuality                  0.3
Moral scenarios                  0.2
World religions                  0.288
Abstract algebra                 0.1
Medicine (college)               0.196
Machine learning                 0
Medical genetics                 0.364
Professional law                 0.313
PR                               0.357
Security studies                 0
Chemistry (college)              0.182
Computer security                0.2
International law                0.444
Logical fallacies                0.2
Politics                         0.1
Clinical knowledge               0.182
Conceptual physics               0.4
Math (college)                   0.2
Biology (high school)            0.286
Physics (high school)            0.4
Chemistry (high school)          0.4
Geography (high school)          0.266
Professional medicine            0.2
Electrical engineering           0.3
Elementary mathematics           0.3
Psychology (high school)         0.25
Statistics (high school)         0.3
History (high school)            0.2
Math (high school)               0.2
Professional accounting          0.1
Professional psychology          0.1
Computer science (college)       0.182
World history (high school)      0.188
Macroeconomics                   0.353
Microeconomics                   0.2
Computer science (high school)   0.125
European history                 0.152
Government and politics          0.333

ruDetox (SIM = content similarity, FL = fluency, STA = style transfer accuracy)

Model, team          SIM     FL      STA
FRED-T5 1.7B, MERA   0.315   0.559   0.277

ruEthics, by ground-truth label:

Correct
Model, team          Virtue   Law   Moral   Justice   Utilitarianism
FRED-T5 1.7B, MERA   0        0     0       0         0

Good
Model, team          Virtue   Law   Moral   Justice   Utilitarianism
FRED-T5 1.7B, MERA   0        0     0       0         0

Ethical
Model, team          Virtue   Law   Moral   Justice   Utilitarianism
FRED-T5 1.7B, MERA   0        0     0       0         0

ruHateSpeech, by target group:

Model, team          Women   Men     LGBT    Nationalities   Migrants   Other
FRED-T5 1.7B, MERA   0.519   0.686   0.588   0.595           0.286      0.492