Qwen 1.5B Instruct

Created: 17.08.2024 08:01

Score on the main tasks: 0.381

The submission does not include all of the required tasks.


Task           Result          Metric
LCS            0.122           Accuracy
RCB            0.418 / 0.32    Avg. F1 / Accuracy
USE            0.012           Grade Norm
RWSD           0.5             Accuracy
PARus          0.648           Accuracy
ruTiE          0.521           Accuracy
MultiQ         0.101 / 0.021   F1-score / EM
CheGeKa        0.01 / 0        F1 / EM
ruModAr        0.392           EM
ruMultiAr      0.185           EM
MathLogicQA    0.353           Accuracy
ruWorldTree    0.749 / 0.749   Avg. F1 / Accuracy
ruOpenBookQA   0.678 / 0.677   Avg. F1 / Accuracy

Score on the open tasks:

Not counted in the overall ranking


Task           Result      Metric
BPS            0.349       Accuracy
ruMMLU         0.601       Accuracy
SimpleAr       0.93        EM
ruHumanEval    0 / 0 / 0   pass@k
ruHHH          0.545       Accuracy
  • Honest: 0.525
  • Harmless: 0.569
  • Helpful: 0.542
ruHateSpeech   0.657       Accuracy
  • Women: 0.602
  • Men: 0.686
  • LGBT: 0.588
  • Nationality: 0.622
  • Migrants: 0.286
  • Other: 0.82
ruDetox
  • Overall average score (J): 0.083
  • Meaning preservation score (SIM): 0.325
  • Naturalness score (FL): 0.688
  • Style transfer accuracy (STA): 0.273
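
For context: ruDetox follows the RUSSE Detox evaluation scheme, in which, to our understanding, the joint score J is the per-sample product of the three component scores averaged over the test set (a hedged reconstruction; the exact formula is not stated on this page):

\[ J = \frac{1}{N} \sum_{i=1}^{N} \mathrm{STA}_i \cdot \mathrm{SIM}_i \cdot \mathrm{FL}_i \]

This is why J (0.083) is markedly lower than any individual component.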

ruEthics
Criterion        Correct   Good     Ethical
Virtue           -0.18     -0.183   -0.208
Law              -0.144    -0.172   -0.172
Morality         -0.169    -0.198   -0.192
Justice          -0.14     -0.152   -0.162
Utilitarianism   -0.147    -0.198   -0.164


Metric: 5 MCC (Matthews correlation coefficient)

Submission information:

Team:

НГУ

ML model name:

Qwen 1.5B Instruct

Link to the ML model:

https://huggingface.co/Qwen/Qwen2-1.5B-Instruct

Architecture description:

Qwen2 1.5B Instruct is a decoder-only language model with 1.5B parameters. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped-query attention, and related refinements. In addition, the tokenizer has been improved to better handle multiple natural languages and code.
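
To make the SwiGLU reference concrete, below is a minimal PyTorch sketch of a SwiGLU feed-forward block as commonly used in Qwen2-style models. The layer names mirror common open implementations; the dimensions are illustrative assumptions, not the model's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUMLP(nn.Module):
    """SwiGLU feed-forward block: down(silu(gate(x)) * up(x))."""
    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SiLU-gated linear unit, then projection back to the hidden size.
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

# Illustrative sizes only; see the model config on the Hub for the real values.
mlp = SwiGLUMLP(hidden_size=1536, intermediate_size=8960)
print(mlp(torch.randn(1, 4, 1536)).shape)  # torch.Size([1, 4, 1536])
```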

Training description:

The model was pretrained on a large amount of data and then post-trained with both supervised fine-tuning and direct preference optimization (DPO).
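
For reference, the standard DPO objective (Rafailov et al., 2023; not quoted from this page) optimizes the policy directly on preference pairs:

\[ \mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right] \]

where y_w and y_l are the preferred and rejected responses, \pi_ref is the frozen reference (SFT) model, and \beta controls the strength of the KL-style constraint.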

Pretraining data:

The model was pretrained on a large corpus of English, Chinese, and 27 additional languages, including Russian. The pretraining used a context length of 32K tokens.

Training details:

Grouped-query attention was used so that the model benefits from faster inference and lower memory usage. Tied input/output embeddings were also used, because large sparse embeddings account for a substantial share of the total model parameters.
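
As an illustration of the grouped-query attention idea mentioned above, here is a minimal PyTorch sketch (no masking, RoPE, or KV caching; not Qwen2's actual implementation):

```python
import torch

def grouped_query_attention(q, k, v, group_size):
    """Each group of `group_size` query heads shares one K/V head.

    q: (batch, n_q_heads, seq, head_dim)
    k, v: (batch, n_kv_heads, seq, head_dim), n_q_heads = n_kv_heads * group_size
    """
    # Expand the smaller set of K/V heads to line up with the query heads.
    k = k.repeat_interleave(group_size, dim=1)
    v = v.repeat_interleave(group_size, dim=1)
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ v

# Toy shapes: 12 query heads sharing 2 K/V heads (6 query heads per K/V head).
q = torch.randn(1, 12, 8, 64)
k = torch.randn(1, 2, 8, 64)
v = torch.randn(1, 2, 8, 64)
print(grouped_query_attention(q, k, v, group_size=6).shape)  # (1, 12, 8, 64)
```

Because each K/V head is shared by a whole group of query heads, the KV cache shrinks by the group factor, which is where the inference-time memory savings come from.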

License:

Apache 2.0

Strategy, generation, and parameters:

All parameters were left unchanged, exactly as provided by the model's authors. Details:

  • 1 × NVIDIA A100 80GB
  • dtype: float32
  • PyTorch 2.3.1 + CUDA 11.7
  • Transformers 4.38.2
  • Context length: 32768
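
As a reproducibility aid, below is a minimal sketch of loading the model with the reported settings via Hugging Face Transformers. The evaluation harness itself is not shown on this page, so the generation call and prompt are illustrative assumptions, not the benchmark's actual pipeline.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen2-1.5B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# float32 matches the dtype reported in the submission details.
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float32)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Illustrative prompt; the benchmark's real prompts are task-specific.
messages = [{"role": "user", "content": "Столица России?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```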