Google's UMT5 XL

MERA. Created at 12.01.2024 14:48.

Overall result: 0.201
The submission does not contain all the required tasks.

Ratings for leaderboard tasks

Task name      Result          Metric
LCS            0.12            Accuracy
RCB            0.326 / 0.185   Accuracy / F1 macro
USE            0.001           Grade norm
RWSD           0.5             Accuracy
PARus          0.506           Accuracy
ruTiE          0.528           Accuracy
MultiQ         0.013 / 0.003   F1 / Exact match
CheGeKa        0.001 / 0       F1 / Exact match
ruModAr        0.0             Exact match
ruMultiAr      0.0             Exact match
MathLogicQA    0.261           Accuracy
ruWorldTree    0.269 / 0.255   Accuracy / F1 macro
ruOpenBookQA   0.23 / 0.223    Accuracy / F1 macro

Evaluation on open tasks:

Task name      Result      Metric
BPS            0.494       Accuracy
ruMMLU         0.254       Accuracy
SimpleAr       0.0         Exact match
ruHumanEval    0 / 0 / 0   Pass@k
ruHHH          0.489
ruHateSpeech   0.525
ruDetox        0.209
ruEthics
                 Correct   Good     Ethical
Virtue           -0.005    0.007    -0.034
Law               0.011    0.021    -0.058
Moral            -0.005   -0.005    -0.061
Justice           0        -0.003   -0.025
Utilitarianism   -0.031   -0.006   -0.055

Information about the submission:

MERA version: -
Torch version: -
Codebase version: -
CUDA version: -
Model weights precision: -
Seed: -
Batch: -
Transformers version: -
Number and type of GPUs: -
Architecture: -

Team:

MERA

Name of the ML model:

Google's UMT5 XL

Additional links:

https://openreview.net/forum?id=kXwdL1cWOAi

Architecture description:

The authors closely follow mT5 (Xue et al., 2021) for the model architecture and training procedure. Specifically, they use an encoder-decoder Transformer architecture and the span corruption pretraining objective from T5 (Raffel et al., 2020) on a multilingual corpus covering 101 languages plus 6 Latin-script variants (e.g. ru-Latn). They use a batch size of 1024 sequences, where each sequence is defined by selecting a chunk of 568 tokens from the training corpus; this is then split into 512 input and 114 target tokens. The number of training steps is 250,000.
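
For readers unfamiliar with the objective, the following is a minimal sketch of T5-style span corruption on a toy token sequence. The sentinel ID range, the 15% corruption rate, and the mean span length are illustrative assumptions, not values taken from this submission.

import random

def span_corrupt(tokens, corruption_rate=0.15, mean_span_len=3, sentinel_start=32_000):
    """Toy T5-style span corruption: drop random spans, mark them with sentinels.

    Returns (encoder_input, decoder_target); parameter values are illustrative.
    """
    n_to_corrupt = max(1, int(len(tokens) * corruption_rate))
    corrupted = set()
    while len(corrupted) < n_to_corrupt:
        start = random.randrange(len(tokens))
        corrupted.update(range(start, min(start + mean_span_len, len(tokens))))

    encoder_input, decoder_target = [], []
    sentinel, in_span = sentinel_start, False
    for i, tok in enumerate(tokens):
        if i in corrupted:
            if not in_span:                      # open a new corrupted span
                encoder_input.append(sentinel)
                decoder_target.append(sentinel)
                sentinel += 1
            decoder_target.append(tok)           # dropped tokens go to the target
            in_span = True
        else:
            encoder_input.append(tok)
            in_span = False
    return encoder_input, decoder_target

# Example on a toy "chunk" of 20 token IDs (the real pretraining chunks have 568).
enc, dec = span_corrupt(list(range(20)))
print(enc)
print(dec)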

Description of the training:

The model architectures used in this study are the same as the mT5 models, except that relative position embeddings are not shared across layers. The vocabulary size is 256,000 subwords, and byte-level fallback is enabled, so unknown tokens are broken down into UTF-8 bytes. The authors use the T5X library (Roberts et al., 2022) to train the models on Google Cloud TPUs. For pretraining, they use the Adafactor optimizer (Shazeer & Stern, 2018) with a constant learning rate of 0.01 for the first 10,000 steps and inverse square root decay afterwards. For finetuning, they use Adafactor with a constant learning rate of 5e-5.
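
The pretraining learning-rate schedule described above can be summarized in a single function; this is a small sketch, and the function name lr_at_step is purely illustrative.

import math

def lr_at_step(step, warmup_steps=10_000, peak_lr=0.01):
    """Constant peak_lr for the first warmup_steps, then inverse square root decay.

    With peak_lr = 1 / sqrt(warmup_steps) = 0.01 the two phases join smoothly,
    matching the schedule described for UMT5 pretraining.
    """
    return peak_lr if step < warmup_steps else peak_lr * math.sqrt(warmup_steps / step)

print(lr_at_step(5_000))   # 0.01  (constant phase)
print(lr_at_step(40_000))  # 0.005 (decayed by a factor of 1/sqrt(4))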

Pretrain data:

UMT5 is pretrained on an updated version of the mC4 corpus, covering 107 languages.

Training Details:

Unlike mT5, the authors do not use a loss normalization factor; instead, the number of real target tokens serves as the effective loss normalization. Finally, they do not factorize the second moment of the Adafactor states, and they use momentum; neither of these choices was made in the T5 and mT5 studies.
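
The loss normalization described above can be made concrete with a short PyTorch sketch. The actual UMT5 training code uses T5X/JAX; this snippet, including the pad_id convention, is an illustrative assumption rather than the authors' implementation.

import torch
import torch.nn.functional as F

def seq2seq_loss(logits, targets, pad_id=0):
    """Summed cross-entropy divided by the number of real (non-padding) target tokens.

    logits:  (batch, target_len, vocab)
    targets: (batch, target_len), padded with pad_id
    """
    loss_sum = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=pad_id,
        reduction="sum",
    )
    num_real_tokens = (targets != pad_id).sum().clamp(min=1)
    return loss_sum / num_real_tokens

# Toy check: batch of 2 target sequences of length 5 over an 11-token vocabulary.
logits = torch.randn(2, 5, 11)
targets = torch.tensor([[3, 5, 7, 0, 0], [2, 4, 6, 8, 10]])
print(seq2seq_loss(logits, targets))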

License:

Apache 2.0

Strategy, generation and parameters:

Code version v.1.1.0. All parameters were left unchanged and used as prepared by the organizers; a sketch of loading the checkpoint under these settings is given below. Details:
- 1 x NVIDIA A100
- dtype auto
- PyTorch 2.1.2 + CUDA 12.1
- Transformers 4.36.2
- Context length 512
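
The following is a hedged sketch of loading the evaluated checkpoint under the settings above (Transformers 4.36.2, dtype auto, 512-token context). The Hugging Face model id google/umt5-xl and the sample prompt are assumptions; the submission does not state them explicitly.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# "google/umt5-xl" is assumed to be the public checkpoint behind this submission;
# torch_dtype="auto" mirrors the "dtype auto" setting listed above.
model_name = "google/umt5-xl"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, torch_dtype="auto").to("cuda")

# Toy generation call, truncating the prompt to the 512-token context limit.
prompt = "Translate to English: кошка сидит на ковре."
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Since the checkpoint is only pretrained with span corruption and not instruction-tuned, such raw generations are not expected to be polished answers; the scores above were produced by the MERA evaluation code with the organizers' fixed settings.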

Ratings by subcategory

ruHHH (Metric: Accuracy)

Model, team              Honest   Helpful   Harmless
Google's UMT5 XL, MERA   0.492    0.458     0.517

ruMMLU by domain (Metric: Accuracy), Google's UMT5 XL, MERA:

Anatomy 0.3; Virology 0.188; Astronomy 0.3; Marketing 0.429; Nutrition 0.333; Sociology 0.1; Management 0.067; Philosophy 0.176;
Prehistory 0.3; Human aging 0.1; Econometrics 0.182; Formal logic 0.3; Global facts 0.2; Jurisprudence 0.308; Miscellaneous 0.182; Moral disputes 0.3;
Business ethics 0.4; Biology (college) 0.185; Physics (college) 0.4; Human sexuality 0.2; Moral scenarios 0.2; World religions 0.288; Abstract algebra 0.2; Medicine (college) 0.235;
Machine learning 0.1; Medical genetics 0.091; Professional law 0.188; PR 0.071; Security studies 0.5; Chemistry (school-level) 0.091; Computer security 0.4; International law 0.111;
Logical fallacies 0.3; Politics 0.2; Clinical knowledge 0.364; Conceptual physics 0.2; Math (college) 0.2; Biology (high school) 0.19; Physics (high school) 0.2; Chemistry (high school) 0.3;
Geography (high school) 0.241; Professional medicine 0; Electrical engineering 0.5; Elementary mathematics 0.5; Psychology (high school) 0.375; Statistics (high school) 0.2; History (high school) 0.3; Math (high school) 0.6;
Professional accounting 0.1; Professional psychology 0.4; Computer science (college) 0.273; World history (high school) 0.25; Macroeconomics 0.294; Microeconomics 0.333; Computer science (high school) 0.292; European history 0.212; Government and politics 0.222

ruDetox

Model, team              SIM     FL      STA
Google's UMT5 XL, MERA   0.589   0.466   0.667

ruEthics

Question: Correct
Model, team              Virtue   Law      Moral    Justice   Utilitarianism
Google's UMT5 XL, MERA   -0.005   0.011    -0.005   0         -0.031

Question: Good
Model, team              Virtue   Law      Moral    Justice   Utilitarianism
Google's UMT5 XL, MERA   0.007    0.021    -0.005   -0.003    -0.006

Question: Ethical
Model, team              Virtue   Law      Moral    Justice   Utilitarianism
Google's UMT5 XL, MERA   -0.034   -0.058   -0.061   -0.025    -0.055

ruHateSpeech

Model, team              Women   Men     LGBT    Nationalities   Migrants   Other
Google's UMT5 XL, MERA   0.509   0.629   0.588   0.568           0.286      0.475