mGPT 1.3B

Created at 12.01.2024 11:17

Overall score: 0.198

Task name Result Metric
BPS 0.449 Accuracy
LCS 0.136 Accuracy
RCB 0.333 / 0.167 Avg. F1 / Accuracy
USE 0 Grade Norm
RWSD 0.519 Accuracy
PARus 0.498 Accuracy
ruTiE 0.5 Accuracy
MultiQ 0.055 / 0.014 F1 / EM
ruMMLU 0.241 Accuracy
CheGeKa 0.004 / 0 F1 / EM
ruModAr 0.001 EM
SimpleAr 0.007 EM
ruMultiAr 0.012 EM
MathLogicQA 0.258 Accuracy
ruHumanEval 0 / 0 / 0 pass@k (k = 1, 5, 10)
ruWorldTree 0.251 / 0.225 Avg. F1 / Accuracy
ruOpenBookQA 0.245 / 0.193 Avg. F1 / Accuracy

Evaluation on diagnostic datasets:

These results are not included in the overall score.

Task name Result Metric

ruHHH 0.478 Accuracy
  • Honest: 0.492
  • Harmless: 0.466
  • Helpful: 0.475

ruHateSpeech 0.543 Accuracy
  • Women: 0.519
  • Men: 0.686
  • LGBT: 0.588
  • Nationality: 0.595
  • Migrants: 0.286
  • Other: 0.492

ruDetox
  • Overall average score (J): 0.35
  • Preservation of meaning (SIM): 0.721
  • Naturalness (FL): 0.601
  • Style transfer accuracy (STA): 0.734

ruEthics (metric: 5 MCC, Matthews correlation coefficient)

Criterion        Correct  Good   Ethical
Virtue           0.071    0.055  0.030
Law              0.083    0.074  0.051
Moral            0.092    0.079  0.053
Justice          0.075    0.075  0.046
Utilitarianism   0.120    0.085  0.075

Information about the submission:

Team:

MERA

Name of the ML model:

mGPT 1.3B

Additional links:

https://arxiv.org/pdf/2204.07580.pdf

Architecture description:

The mGPT architecture is based on GPT-3. We use the architecture description by Brown et al. (2020), the GPT-2 code base (Radford et al., 2019) from HuggingFace Transformers (Wolf et al., 2020), and MegatronLM (Shoeybi et al., 2020). With all other hyperparameters equal, the GPT-3 design has fewer layers than GPT-2 (24 vs. 48) but a larger hidden size (d_model: 2048 vs. 1600). GPT-3 also alternates the classic dense and sparse attention layers (Child et al., 2019).
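
For reference, a minimal sketch of this configuration expressed through the GPT-2 implementation in HuggingFace Transformers mentioned above; the head count of 16 is an assumption not stated on this page, while the remaining values follow the descriptions here (24 layers, hidden size 2048, 100k vocabulary, 512-token context).

    from transformers import GPT2Config, GPT2LMHeadModel

    # GPT-3-style ~1.3B configuration built on the GPT-2 code base;
    # values are taken from the architecture and training descriptions above.
    config = GPT2Config(
        vocab_size=100_000,  # "vocabulary size of 100k"
        n_positions=512,     # pretraining context window
        n_embd=2048,         # hidden size (d_model)
        n_layer=24,          # number of transformer layers
        n_head=16,           # assumed head count, not stated here
    )
    model = GPT2LMHeadModel(config)  # randomly initialized, for inspection only
    print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters")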

Description of the training:

The LM was pretrained with a total batch size of 2048 and a context window of 512 tokens. The total number of training steps is 600k, and the models saw 400B tokens during pretraining. Pretraining took 22 days on a cluster of 512 V100 GPUs for mGPT 13B.

Pretrain data:

The pretraining corpus is a collection of documents from Wikipedia and C4. The Wikipedia texts are extracted from the dumps (v. 20201101) with WikiExtractor (Attardi, 2015). The C4 data is downloaded using TensorFlow Datasets (Paper, 2021).
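
A minimal sketch of pulling the C4 portion with TensorFlow Datasets, as referenced above; the config name "c4/multilingual" is an assumption, and depending on the TFDS version the C4 builder may require a separate Apache Beam preparation step, so treat this as illustrative rather than a turnkey download.

    import tensorflow_datasets as tfds

    # Load a C4 training split through TFDS; "c4/multilingual" is the assumed
    # config name for the multilingual C4 variant.
    ds = tfds.load("c4/multilingual", split="train", shuffle_files=True)
    for example in ds.take(1):  # each example is a dict of tensors with a "text" field
        print(example["text"].numpy()[:200])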

Training Details:

Fixed hyperparameters: a vocabulary size of 100k, a learning rate of 2e-4, and a batch size of 4 (presumably per GPU; 512 GPUs × 4 sequences matches the total batch size of 2048 noted above).

License:

MIT

Strategy, generation and parameters:

Code version: v.1.1.0. All parameters were left unchanged and used as prepared by the organizers.

Details:
  • 1 x NVIDIA A100
  • dtype: auto
  • PyTorch 2.1.2 + CUDA 12.1
  • Transformers 4.36.2
  • Context length: 2048
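
A minimal usage sketch matching the listed environment (single NVIDIA A100, dtype "auto", Transformers 4.36.x, PyTorch 2.1.x); the HuggingFace hub id "ai-forever/mGPT" is an assumption, since the exact checkpoint path is not given in this submission.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "ai-forever/mGPT"  # assumed hub id for the mGPT 1.3B checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto").to("cuda")
    model.eval()

    prompt = "Машинное обучение позволяет"
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
    print(tokenizer.decode(output[0], skip_special_tokens=True))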