Qwen 7B Instruct

Created 17.08.2024 08:05

Score on the main tasks: 0.443

The submission does not include all of the mandatory tasks


Task Result Metric
LCS 0.094 Accuracy
RCB 0.505 / 0.417 Avg. F1 / Accuracy
USE 0.023 Grade Norm
RWSD 0.527 Accuracy
PARus 0.84 Accuracy
ruTiE 0.64 Accuracy
MultiQ 0.112 / 0.019 F1-score/EM
CheGeKa 0.022 / 0 F1 / EM
ruModAr 0.381 EM
ruMultiAr 0.299 EM
MathLogicQA 0.5 Accuracy
ruWorldTree 0.933 / 0.933 Avg. F1 / Accuracy
ruOpenBookQA 0.835 / 0.834 Avg. F1 / Accuracy

Score on the open tasks:

Not counted in the overall ranking


Task Result Metric
BPS 0.162 Accuracy
ruMMLU 0.777 Accuracy
SimpleAr 0.991 EM
ruHumanEval 0 / 0 / 0 pass@k
ruHHH 0.635 Accuracy
  • Honest: 0.525
  • Harmless: 0.69
  • Helpful: 0.695
ruHateSpeech 0.747 Accuracy
  • Women: 0.769
  • Men: 0.743
  • LGBT: 0.706
  • Nationality: 0.622
  • Migrants: 0.429
  • Other: 0.836
ruDetox
  • Overall average score (J): 0.149
  • Meaning preservation score (SIM): 0.38
  • Naturalness score (FL): 0.714
  • Style transfer accuracy (STA): 0.43

ruEthics (metric: 5 MCC)

                Correct   Good     Ethical
Virtue          -0.37     -0.313   -0.316
Law             -0.369    -0.298   -0.292
Morality        -0.362    -0.312   -0.33
Justice         -0.327    -0.268   -0.283
Utilitarianism  -0.284    -0.25    -0.252

Submission information:

Team:

НГУ

ML model name:

Qwen 7B Instruct

Link to the ML model:

https://huggingface.co/Qwen/Qwen2-7B-Instruct

Architecture description:

Qwen2 7B Instruct is a decoder-only language model with 7B parameters. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped query attention, etc. Additionally, the tokenizer has been improved for better adaptation to multiple natural languages and code.
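For illustration, below is a minimal PyTorch sketch of the SwiGLU feed-forward block mentioned in this description; the layer sizes are hypothetical and do not reflect Qwen2-7B's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    """SwiGLU feed-forward block: a SiLU-gated linear unit (illustrative sizes)."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_ff, bias=False)  # gating branch
        self.up_proj = nn.Linear(d_model, d_ff, bias=False)    # value branch
        self.down_proj = nn.Linear(d_ff, d_model, bias=False)  # back to model dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU(x) = (SiLU(x W_gate) * x W_up) W_down
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

ffn = SwiGLU(d_model=512, d_ff=1376)    # hypothetical dimensions
out = ffn(torch.randn(2, 16, 512))      # (batch, seq, d_model)
```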

Training description:

The model was pretrained on a large amount of data in English, Chinese, and 27 additional languages, including Russian. As for context length, the model was pretrained on data with a context length of 128K tokens.

Pretraining data:

The model was pretrained on a large amount of data, after which it was post-trained with both supervised fine-tuning and direct preference optimization.

Training details:

Grouped Query Attention was applied so that the model benefits from faster inference speed and lower memory usage.
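As a rough sketch of where this saving comes from: in grouped query attention several query heads share each key/value head, so the KV cache stores fewer heads. The head counts below are illustrative, not Qwen2-7B's actual configuration.

```python
import torch
import torch.nn.functional as F

batch, seq, head_dim = 1, 8, 64
n_q_heads, n_kv_heads = 8, 2              # illustrative: 4 query heads per KV head
group = n_q_heads // n_kv_heads

q = torch.randn(batch, n_q_heads, seq, head_dim)
k = torch.randn(batch, n_kv_heads, seq, head_dim)   # KV cache holds 4x fewer heads
v = torch.randn(batch, n_kv_heads, seq, head_dim)

# Each KV head is repeated so that its group of query heads can attend over it
k = k.repeat_interleave(group, dim=1)
v = v.repeat_interleave(group, dim=1)

out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 8, 8, 64])
```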

License:

Apache 2.0

Strategy, generation and parameters:

All parameters were left unchanged and used as set by the model's authors. Details: 1 x NVIDIA A100 80GB, dtype float32, PyTorch 2.3.1 + CUDA 11.7, Transformers 4.38.2, context length 32768.
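A minimal sketch, assuming the listed libraries, of how this setup could be reproduced; the chat prompt is an illustrative example and not part of the benchmark.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,   # dtype used in this submission
    device_map="auto",           # place the model on the available GPU (requires accelerate)
)

messages = [{"role": "user", "content": "Привет! Кто ты?"}]  # example prompt, not from the benchmark
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generation parameters are left as shipped in the model's generation_config
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```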