Qwen 72B Instruct GPTQ Int4

Created 25.08.2024 08:52

Score on the main tasks: 0.524

The submission does not include all of the mandatory tasks.

Task Result Metric
LCS 0.15 Accuracy
RCB 0.543 / 0.507 Avg. F1 / Accuracy
USE 0.257 Grade Norm
RWSD 0.7 Accuracy
PARus 0.958 Accuracy
ruTiE 0.874 Accuracy
MultiQ 0.305 / 0.182 F1-score/EM
CheGeKa 0.08 / 0.002 F1 / EM
ruModAr 0.194 EM
ruMultiAr 0.403 EM
MathLogicQA 0.681 Accuracy
ruWorldTree 0.989 / 0.989 Avg. F1 / Accuracy
ruOpenBookQA 0.963 / 0.962 Avg. F1 / Accuracy

Score on the open tasks:

Not counted toward the overall rating

Task Result Metric
BPS 0.038 Accuracy
ruMMLU 0.871 Accuracy
SimpleAr 0.995 EM
ruHumanEval 0.005 / 0.024 / 0.049 pass@k
ruHHH 0.848 Accuracy
  • Honest: 0.869
  • Harmless: 0.828
  • Helpful: 0.847

ruHateSpeech 0.864 Accuracy
  • Women: 0.861
  • Men: 0.743
  • LGBT: 0.941
  • Nationality: 0.892
  • Migrants: 0.714
  • Other: 0.918

ruDetox (see the note on J below)
  • Overall average score (J): 0.028
  • Meaning preservation (SIM): 0.301
  • Naturalness / fluency (FL): 0.798
  • Style transfer accuracy (STA): 0.103
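For context, the aggregate J score in detoxification evaluations of this kind is usually defined as the per-sample product of the three component scores averaged over the test set (a hedged description of the common definition, not taken from the submission itself):

```latex
J = \frac{1}{N}\sum_{i=1}^{N} \mathrm{STA}_i \cdot \mathrm{SIM}_i \cdot \mathrm{FL}_i
```

Under this reading, the product of the averages reported above, 0.103 · 0.301 · 0.798 ≈ 0.025, only approximates J = 0.028, since averaging and multiplication do not commute.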

ruEthics
Criterion        Correct   Good      Ethical
Virtue           -0.582    -0.581    -0.675
Law              -0.551    -0.592    -0.679
Morality         -0.595    -0.634    -0.726
Justice          -0.517    -0.529    -0.617
Utilitarianism   -0.445    -0.492    -0.552

Metric: 5 MCC
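For reference, the 5 MCC label appears to denote a Matthews correlation coefficient computed against each of the five ethical criteria listed above (one value per criterion and question); for binary labels MCC has the standard form:

```latex
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}
```

MCC ranges from -1 to 1, so the uniformly negative values indicate anti-correlation between the model's answers and the annotators' labels.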

Submission information:

Team:

НГУ

ML model name:

Qwen 72B Instruct GPTQ Int4

Architecture description:

Qwen2 72B Instruct GPTQ Int4 is a decoder-only language model with 72B parameters whose weights are quantized to Int4 using the GPTQ method. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, the tokenizer is improved for adaptation to multiple natural languages and code.
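As an illustration only (not part of the submission), here is a minimal sketch of loading and querying such a GPTQ Int4 checkpoint with Hugging Face Transformers; the model ID, prompt, and generation settings are assumptions, and a GPTQ backend such as optimum/auto-gptq must be installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face model ID for the quantized checkpoint.
model_id = "Qwen/Qwen2-72B-Instruct-GPTQ-Int4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # let Transformers pick dtypes suitable for the GPTQ kernels
    device_map="auto",    # spread layers across the available GPU(s)
)

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```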

Training description:

The model was pretrained on a large amount of data in English, Chinese, and 27 additional languages, including Russian, with a context length of 128K tokens. After pretraining, the weights were quantized with GPTQ, a one-shot weight quantization method based on approximate second-order information.
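For context, a GPTQ-style one-shot quantization run can be sketched with the Transformers/Optimum integration as below; the source checkpoint, calibration dataset, and group size are illustrative assumptions rather than the authors' exact recipe:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_id = "Qwen/Qwen2-72B-Instruct"  # assumed full-precision source checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_id)

# One-shot 4-bit GPTQ: layers are quantized sequentially using approximate
# second-order (Hessian) information collected on a calibration set.
gptq_config = GPTQConfig(
    bits=4,
    dataset="c4",      # illustrative calibration corpus
    tokenizer=tokenizer,
    group_size=128,    # a typical group size; the released checkpoint may differ
)

# Requires the optimum and auto-gptq packages; quantization happens during loading.
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=gptq_config,
    device_map="auto",
)
model.save_pretrained("qwen2-72b-instruct-gptq-int4")
```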

Pretraining data:

The model was pretrained on a large amount of data and then post-trained with both supervised fine-tuning and direct preference optimization.
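As a schematic of the DPO stage mentioned above (not the authors' actual pipeline), a minimal sketch with the TRL library; the dataset path, hyperparameters, and argument names are assumptions and vary between TRL versions:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Qwen/Qwen2-72B-Instruct"  # assumed starting point after the SFT stage
# Placeholder preference data with "prompt", "chosen" and "rejected" columns.
pref_data = load_dataset("json", data_files="preferences.jsonl")["train"]

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

args = DPOConfig(output_dir="qwen2-dpo", beta=0.1, per_device_train_batch_size=1)
trainer = DPOTrainer(
    model=model,                 # a frozen reference copy is created internally
    args=args,
    train_dataset=pref_data,
    processing_class=tokenizer,  # called `tokenizer` in older TRL releases
)
trainer.train()
```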

Training details:

Group query attention was applied so that the model benefits from faster inference and lower memory usage.
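To illustrate the mechanism only (not the model's actual implementation), a minimal grouped-query attention sketch in PyTorch, where several query heads share each key/value head and the KV cache shrinks accordingly:

```python
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v):
    """q: (batch, n_q_heads, seq, dim); k, v: (batch, n_kv_heads, seq, dim).

    n_q_heads must be a multiple of n_kv_heads; each key/value head is shared
    by n_q_heads // n_kv_heads query heads, which reduces KV-cache memory.
    """
    repeat = q.shape[1] // k.shape[1]
    # Expand K/V so every query head has a matching key/value head.
    k = k.repeat_interleave(repeat, dim=1)
    v = v.repeat_interleave(repeat, dim=1)
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return F.softmax(scores, dim=-1) @ v

# Toy shapes: 8 query heads sharing 2 key/value heads.
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
print(grouped_query_attention(q, k, v).shape)  # torch.Size([1, 8, 16, 64])
```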

License:

Apache 2.0

Strategy, generation, and parameters:

All parameters were left unchanged and are used as prepared by the model's authors; a loading sketch follows the list below. Details:

  • 1 x NVIDIA A100 80GB
  • dtype: auto
  • PyTorch 2.3.1 + CUDA 11.7
  • Transformers 4.38.2
  • Context length: 128K
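For reproducibility, the stated setup corresponds roughly to the loading sketch below; the model ID is an assumption, and package versions belong to the environment rather than the script:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Reported environment: 1 x NVIDIA A100 80GB, PyTorch 2.3.1 + CUDA 11.7,
# Transformers 4.38.2, 128K context. The model ID below is an assumption.
model_id = "Qwen/Qwen2-72B-Instruct-GPTQ-Int4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # "dtype auto" as stated in the submission
    device_map="auto",   # Int4 weights fit on a single 80GB A100
)

# Generation parameters are left exactly as shipped by the model's authors,
# i.e. whatever the checkpoint's generation_config.json provides.
print(model.generation_config)
```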