Qwen 0.5B Instruct

Created at 17.08.2024 07:57

Overall score on the main tasks: 0.311

The submission does not contain all the required tasks


Task name Result Metric
LCS 0.098 Accuracy
RCB 0.336 / 0.219 Avg. F1 / Accuracy
USE 0.008 Grade Norm
RWSD 0.515 Accuracy
PARus 0.488 Accuracy
ruTiE 0.5 Accuracy
MultiQ 0.075 / 0.021 F1-score / EM
CheGeKa 0.003 / 0 F1 / EM
ruModAr 0.243 EM
ruMultiAr 0.142 EM
MathLogicQA 0.321 Accuracy
ruWorldTree 0.49 / 0.484 Avg. F1 / Accuracy
ruOpenBookQA 0.508 / 0.506 Avg. F1 / Accuracy

Evaluation on open tasks:

These results are not included in the overall rating


Task name Result Metric
BPS 0.568 Accuracy
ruMMLU 0.396 Accuracy
SimpleAr 0.679 EM
ruHumanEval 0 / 0 / 0 pass@k
ruHHH 0.466 (Honest: 0.41, Harmless: 0.534, Helpful: 0.458) Accuracy
ruHateSpeech 0.487 (Women: 0.454, Men: 0.571, LGBT: 0.588, Nationality: 0.432, Migrants: 0.286, Other: 0.525) Accuracy
ruDetox 0.109 / 0.37 / 0.568 / 0.417 Overall average score (J) / Meaning preservation (SIM) / Naturalness (FL) / Style transfer accuracy (STA)

ruEthics
Criterion Correct Good Ethical
Virtue 0.071 0.046 0.026
Law 0.071 0.058 0.041
Moral 0.078 0.053 0.021
Justice 0.102 0.066 0.052
Utilitarianism 0.069 0.043 0.025


Metric: 5 MCC (a Matthews correlation coefficient for each of the five ethical criteria, reported against each of the three annotation types: correct, good, ethical)
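For reference, the standard binary-classification form of the Matthews correlation coefficient (the general definition, not a detail disclosed in this submission) is:

```latex
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}
{\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}}
```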

Information about the submission:

Team:

НГУ

Name of the ML model:

Qwen 0.5B Instruct

Architecture description:

Qwen2 is a decoder-only language model with 0.5B parameters. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped-query attention, and related improvements. In addition, the tokenizer has been improved to better handle multiple natural languages and code.
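As an illustrative sketch (not part of the submission), most of these architecture details can be read off the published model configuration with the `transformers` library; the Hugging Face model id `Qwen/Qwen2-0.5B-Instruct` is an assumption based on the model name.

```python
# Sketch: inspect the published configuration to confirm the architecture
# details mentioned above (SwiGLU-style MLP, grouped-query attention,
# tied embeddings, long context).
# Assumption: the model is published as "Qwen/Qwen2-0.5B-Instruct".
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2-0.5B-Instruct")

print(config.hidden_act)              # "silu" -> SwiGLU-style gated MLP
print(config.num_attention_heads)     # number of query heads
print(config.num_key_value_heads)     # fewer KV heads -> grouped-query attention
print(config.tie_word_embeddings)     # True -> input/output embeddings are tied
print(config.max_position_embeddings) # supported context length
```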

Description of the training:

The model was pretrained on a large amount of data and then post-trained with both supervised fine-tuning and direct preference optimization.
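For context, the standard direct preference optimization objective (the generic formulation, not training details disclosed for this model) trains the policy \(\pi_\theta\) against a frozen reference \(\pi_{\mathrm{ref}}\) on preference pairs \((x, y_w, y_l)\):

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
-\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}\!\left[
  \log \sigma\!\left(
    \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
    - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
  \right)
\right]
```

Here \(y_w\) is the preferred response, \(y_l\) the rejected one, \(\sigma\) the logistic function, and \(\beta\) a temperature controlling how strongly the policy is kept close to the reference model.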

Pretrain data:

The model was pretrained on a large amount of data covering English, Chinese, and 27 additional languages, including Russian. It was pretrained with a context length of 32K tokens.

Training Details:

Grouped-query attention was applied so that the model benefits from faster inference and lower memory usage. Tied embeddings were also used, since the large sparse embeddings account for a large proportion of the total model parameters.
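As a rough, illustrative sketch of why grouped-query attention reduces inference memory: the key/value cache scales with the number of key/value heads rather than the number of query heads. The layer and head counts below are assumptions about a ~0.5B Qwen2 configuration, not figures taken from the submission.

```python
# Toy estimate of KV-cache size with and without grouped-query attention.
# All concrete numbers below are illustrative assumptions.
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_value=4):
    # 2x for keys and values, stored per layer, per KV head, per position;
    # bytes_per_value=4 corresponds to float32 storage.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value

layers, query_heads, kv_heads, head_dim, seq_len = 24, 14, 2, 64, 32768

full_mha = kv_cache_bytes(layers, query_heads, head_dim, seq_len)  # one KV head per query head
gqa = kv_cache_bytes(layers, kv_heads, head_dim, seq_len)          # KV heads shared across query groups

print(f"MHA KV cache: {full_mha / 2**20:.0f} MiB")
print(f"GQA KV cache: {gqa / 2**20:.0f} MiB")
print(f"Reduction factor: {full_mha / gqa:.1f}x")
```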

License:

Apache 2.0

Strategy, generation and parameters:

All generation parameters were left unchanged and used as prepared by the model's authors. Details:
- 1 x NVIDIA A100 80GB
- dtype: float32
- PyTorch 2.3.1 + CUDA 11.7
- Transformers 4.38.2
- Context length: 32768
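A minimal sketch of an equivalent setup is shown below (assuming the Hugging Face id `Qwen/Qwen2-0.5B-Instruct` and the model's default generation settings; this is not the submission's actual evaluation harness).

```python
# Sketch: load the model as described above (single GPU, float32, default
# generation parameters). Assumptions: model id "Qwen/Qwen2-0.5B-Instruct",
# transformers >= 4.37 (Qwen2 support).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,   # dtype float32, as in the details above
).to("cuda")                     # 1 x NVIDIA A100 80GB

messages = [{"role": "user", "content": "2 + 2 = ?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generation parameters are left at the defaults shipped with the model.
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```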