Task | Result | Metric
---|---|---
LCS | 0.122 | Accuracy |
RCB | 0.418 / 0.32 | Avg. F1 / Accuracy |
USE | 0.012 | Grade Norm |
RWSD | 0.5 | Accuracy |
PARus | 0.648 | Accuracy |
ruTiE | 0.521 | Accuracy |
MultiQ | 0.101 / 0.021 | F1-score/EM |
CheGeKa | 0.01 / 0 | F1 / EM |
ruModAr | 0.392 | EM |
ruMultiAr | 0.185 | EM |
MathLogicQA | 0.353 | Accuracy |
ruWorldTree | 0.749 / 0.749 | Avg. F1 / Accuracy |
ruOpenBookQA | 0.678 / 0.677 | Avg. F1 / Accuracy |
Task | Result | Metric
---|---|---
BPS | 0.349 | Accuracy
ruMMLU | 0.601 | Accuracy
SimpleAr | 0.93 | EM
ruHumanEval | 0 / 0 / 0 | pass@k
ruHHH | 0.545 | Accuracy
ruHateSpeech | 0.657 | Accuracy
ruDetox | | Overall average score (J) / Meaning preservation (SIM) / Fluency (FL) / Style transfer accuracy (STA)
ruEthics | [[-0.18, -0.144, -0.169, -0.14, -0.147], | 5 MCC
НГУ
Qwen2 1.5B Instruct
Qwen2 1.5B Instruct is a decoder-only language model with 1.5B parameters. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped query attention, and related improvements. The tokenizer has also been improved to better handle multiple natural languages and code.
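These architectural choices can be checked directly from the published model configuration. A minimal sketch, assuming the Hugging Face Hub ID Qwen/Qwen2-1.5B-Instruct and the `transformers` library:

```python
from transformers import AutoConfig

# Assumed Hub ID of the evaluated checkpoint.
config = AutoConfig.from_pretrained("Qwen/Qwen2-1.5B-Instruct")

print(config.hidden_act)               # activation inside the SwiGLU MLP ("silu")
print(config.num_attention_heads)      # number of query heads
print(config.num_key_value_heads)      # fewer K/V heads -> grouped query attention
print(config.tie_word_embeddings)      # input/output embeddings are tied
print(config.max_position_embeddings)  # pretraining context length
print(config.vocab_size)               # enlarged multilingual vocabulary
```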
The model was pretrained on a large amount of data covering English, Chinese, and 27 additional languages, including Russian, with a context length of 32K tokens. It was then post-trained with both supervised fine-tuning and direct preference optimization.
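For context, direct preference optimization trains the policy on pairs of preferred and rejected responses without a separate reward model. A minimal sketch of the DPO objective (not the authors' training code; all names are illustrative):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss: -log sigmoid(beta * (policy margin - reference margin)).

    Each argument is the summed log-probability of a response under the
    trainable policy or the frozen reference model; beta controls how far
    the policy may drift from the reference.
    """
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()
```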
Grouped query attention is applied so that the model benefits from faster inference and lower memory usage. Tied input/output embeddings are used because the large sparse embeddings take up a large proportion of the total model parameters.
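As an illustration of why grouped query attention reduces memory, here is a minimal sketch in which several query heads share one key/value head; the actual Qwen2 implementation additionally applies rotary embeddings, the QKV bias, and a causal mask:

```python
import math
import torch

def grouped_query_attention(q, k, v, n_heads: int, n_kv_heads: int):
    """q: (batch, seq, n_heads, head_dim); k, v: (batch, seq, n_kv_heads, head_dim)."""
    b, s, _, d = q.shape
    group = n_heads // n_kv_heads
    # Each K/V head is shared by `group` query heads, so the KV cache is
    # n_heads / n_kv_heads times smaller than in standard multi-head attention.
    k = k.repeat_interleave(group, dim=2)              # (b, s, n_heads, d)
    v = v.repeat_interleave(group, dim=2)
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))   # (b, n_heads, s, d)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)
    weights = scores.softmax(dim=-1)
    return (weights @ v).transpose(1, 2).reshape(b, s, n_heads * d)
```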
Apache 2.0
All parameters were left unchanged and used as prepared by the model's authors. Details:
- 1 x NVIDIA A100 80GB
- dtype float32
- PyTorch 2.3.1 + CUDA 11.7
- Transformers 4.38.2
- Context length 32768
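The setup above maps onto a straightforward `transformers` loading call. A minimal sketch, assuming the Hub ID Qwen/Qwen2-1.5B-Instruct; the MERA evaluation harness itself is omitted:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-1.5B-Instruct"  # assumed Hub ID of the evaluated checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # dtype float32, as in the listed setup
).to("cuda")                    # single NVIDIA A100 80GB

# Quick smoke test of the loaded checkpoint.
inputs = tokenizer("Какая столица Франции?", return_tensors="pt").to("cuda")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```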