Google's UMT5 Base

Created: 13.01.2024 00:02

Overall score: 0.195

Task Result Metric
BPS 0.523 Accuracy
LCS 0.106 Accuracy
RCB 0.336 / 0.306 Avg. F1 / Accuracy
USE 0 Grade Norm
RWSD 0.5 Accuracy
PARus 0.468 Accuracy
ruTiE 0.526 Accuracy
MultiQ 0.002 / 0 F1-score/EM
ruMMLU 0.231 Accuracy
CheGeKa 0.001 / 0 F1 / EM
ruModAr 0.0 Accuracy
SimpleAr 0.0 Accuracy
ruMultiAr 0.0 Accuracy
MathLogicQA 0.252 Accuracy
ruHumanEval 0 / 0 / 0 pass@k
ruWorldTree 0.238 / 0.147 Avg. F1 / Accuracy
ruOpenBookQA 0.243 / 0.148 Avg. F1 / Accuracy

Evaluation on diagnostic datasets:

Not included in the overall rating

Task Result Metric
ruHHH 0.528 Accuracy
  • Honest: 0.459
  • Harmless: 0.569
  • Helpful: 0.559
ruHateSpeech 0.521 Accuracy
  • Women: 0.565
  • Men: 0.429
  • LGBT: 0.412
  • Nationality: 0.595
  • Migrants: 0.286
  • Other: 0.508
ruDetox
  • Overall average score (J): 0.005
  • Meaning preservation score (SIM): 0.178
  • Naturalness score (FL): 0.352
  • Style transfer accuracy (STA): 0.077
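For context (not stated on this page): in the Detox evaluation setup the joint score is typically computed per sample as the product of the three components and then averaged, J = mean_i(STA_i × SIM_i × FL_i), which is why J is much lower than any individual component score.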

ruEthics
                Correct  Good   Ethical
Virtue          0.052    0.075  0.019
Law             0.049    0.087  0.014
Morality        0.034    0.058  0.001
Justice         0.041    0.065  0.006
Utilitarianism  0.054    0.054  0.035

Table results (rows: correct / good / ethical; columns: virtue / law / morality / justice / utilitarianism):

[[0.052, 0.049, 0.034, 0.041, 0.054],
 [0.075, 0.087, 0.058, 0.065, 0.054],
 [0.019, 0.014, 0.001, 0.006, 0.035]]

Metric: 5 MCC (Matthews correlation coefficient)
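As an illustration only (an assumption about how such a matrix is assembled, not the MERA evaluation code), the sketch below computes a Matthews correlation between the model's binary answers to each question and the binary labels for each ethical norm; `answers` and `labels` are hypothetical containers of 0/1 values.

```python
# Hedged sketch: Matthews correlation between binary model answers and binary
# gold labels, one value per (question, norm) pair, matching the 3x5 matrix above.
from sklearn.metrics import matthews_corrcoef

QUESTIONS = ["correct", "good", "ethical"]                            # matrix rows
NORMS = ["virtue", "law", "morality", "justice", "utilitarianism"]    # matrix columns

def ethics_matrix(answers, labels):
    # answers: dict question -> list of 0/1 model answers (one per example)
    # labels:  dict norm -> list of 0/1 gold labels for that norm
    return [
        [matthews_corrcoef(labels[norm], answers[question]) for norm in NORMS]
        for question in QUESTIONS
    ]
```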

Submission information:

Team:

MERA

ML model name:

Google's UMT5 Base

Link to the ML model:

https://huggingface.co/google/umt5-base

Additional links:

https://openreview.net/forum?id=kXwdL1cWOAi

Architecture description:

The authors closely follow mT5 (Xue et al., 2021) for the model architecture and training procedure. Specifically, they use an encoder-decoder Transformer architecture and the span corruption pretraining objective from T5 (Raffel et al., 2020) on a multilingual corpus consisting of 101 languages plus 6 Latin-script variants (e.g., ru-Latn). They use a batch size of 1024 sequences, where each sequence is formed by selecting a chunk of 568 tokens from the training corpus; this chunk is then split into 512 input and 114 target tokens. The number of training steps is 250,000.
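To illustrate the split described above, here is a simplified sketch of T5-style span corruption (not the authors' preprocessing code): spans of tokens in a 568-token chunk are replaced by sentinel ids in the encoder input, and the dropped spans, each preceded by its sentinel, become the decoder target. The sentinel ids, corruption rate, and span length below are illustrative assumptions.

```python
# Simplified span-corruption sketch: a 568-token chunk yields roughly 512
# encoder-input tokens and ~112 decoder-target tokens with these settings.
import random

def span_corrupt(chunk, sentinel_ids, corruption_rate=0.15, mean_span_len=3):
    """Replace spans with sentinels; return (encoder_input, decoder_target)."""
    n_corrupt = max(1, int(len(chunk) * corruption_rate))
    n_spans = max(1, n_corrupt // mean_span_len)
    # Pick candidate span starts; overlapping spans are simply skipped here.
    starts = sorted(random.sample(range(len(chunk) - mean_span_len), n_spans))
    inputs, targets, pos, s = [], [], 0, 0
    for start in starts:
        if start < pos:
            continue  # skip overlaps in this simplified version
        inputs.extend(chunk[pos:start])
        inputs.append(sentinel_ids[s])          # sentinel marks the removed span
        targets.append(sentinel_ids[s])
        targets.extend(chunk[start:start + mean_span_len])
        pos = start + mean_span_len
        s += 1
    inputs.extend(chunk[pos:])
    return inputs, targets

chunk = list(range(568))                   # a 568-token chunk, as described above
sentinels = list(range(250_000, 250_100))  # hypothetical sentinel token ids
enc_in, dec_tgt = span_corrupt(chunk, sentinels)
print(len(enc_in), len(dec_tgt))           # roughly 512 and ~112 with these settings
```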

Training description:

The model architectures used in this study are the same as the mT5 models, except that relative position embeddings are not shared across layers. The vocabulary size is 256,000 subwords, and byte-level fallback is enabled, so unknown tokens are broken down into UTF-8 bytes. The authors use the T5X library (Roberts et al., 2022) to train the models on Google Cloud TPUs. For pretraining, they use the Adafactor optimizer (Shazeer & Stern, 2018) with a constant learning rate of 0.01 for the first 10,000 steps and inverse square root decay afterwards. For finetuning, they use Adafactor with a constant learning rate of 5e−5.
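As a rough illustration of that pretraining schedule (not the authors' T5X configuration): the learning rate is held at 0.01 for the first 10,000 steps and then decays as the inverse square root of the step; since 0.01 = 1/sqrt(10,000), the decay continues the same 1/sqrt(step) curve.

```python
# Illustrative sketch of the pretraining learning-rate schedule described above.
def pretraining_lr(step: int, warmup_steps: int = 10_000, base_lr: float = 0.01) -> float:
    if step < warmup_steps:
        return base_lr                              # constant 0.01 for the first 10k steps
    return base_lr * (warmup_steps / step) ** 0.5   # inverse square root decay afterwards

FINETUNING_LR = 5e-5  # constant finetuning learning rate, as stated above

if __name__ == "__main__":
    for step in (1, 10_000, 40_000, 250_000):
        print(step, round(pretraining_lr(step), 5))
```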

Pretraining data:

UMT5 is pretrained on an updated version of the mC4 corpus, covering 107 languages.

Training details:

Unlike mT5, the authors do not use a loss normalization factor; instead, the number of real target tokens serves as the effective loss normalization. Finally, they do not factorize the second moment of the Adafactor states, and they use momentum; neither of these choices was made in the T5 and mT5 studies.
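A minimal sketch of that normalization as read from the paragraph above (an assumption, not the authors' code): the cross-entropy is summed over target positions and divided by the number of non-padding target tokens.

```python
# Loss normalized by the number of real (non-padding) target tokens,
# rather than by a fixed loss-normalization factor.
import torch
import torch.nn.functional as F

def token_normalized_loss(logits: torch.Tensor, targets: torch.Tensor, pad_id: int = 0) -> torch.Tensor:
    # logits: (batch, target_len, vocab); targets: (batch, target_len)
    per_token = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1), reduction="none"
    ).reshape(targets.shape)
    mask = (targets != pad_id).float()              # 1 for real target tokens, 0 for padding
    return (per_token * mask).sum() / mask.sum()    # normalize by real target token count
```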

License:

Apache 2.0

Strategy, generation, and parameters:

Code version v.1.1.0. All parameters were left unchanged and used as prepared by the organizers. Details:
  • 1 x NVIDIA A100
  • dtype auto
  • PyTorch 2.1.2 + CUDA 12.1
  • Transformers 4.36.2
  • Context length 512
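For reference, a minimal loading sketch under a setup similar to the one listed above (Transformers with dtype auto and a 512-token context); this is an illustration, not the MERA evaluation code, and the prompt is a placeholder. Note that the base checkpoint is only pretrained with span corruption, so it is not expected to follow instructions without finetuning.

```python
# Illustrative loading sketch (not the MERA evaluation harness): dtype auto,
# inputs truncated to a 512-token context.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/umt5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, torch_dtype="auto")
model.to("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder input; truncation mirrors the 512-token context length above.
inputs = tokenizer(
    "Кошка сидит на ковре.", return_tensors="pt", truncation=True, max_length=512
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```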