Task name | Result | Metric |
---|---|---|
LCS | 0.15 | Accuracy |
RCB | 0.543 / 0.507 | Avg. F1 / Accuracy |
USE | 0.257 | Grade Norm |
RWSD | 0.7 | Accuracy |
PARus | 0.958 | Accuracy |
ruTiE | 0.874 | Accuracy |
MultiQ | 0.305 / 0.182 | F1-score/EM |
CheGeKa | 0.08 / 0.002 | F1 / EM |
ruModAr | 0.194 | EM |
ruMultiAr | 0.403 | EM |
MathLogicQA | 0.681 | Accuracy |
ruWorldTree | 0.989 / 0.989 | Avg. F1 / Accuracy |
ruOpenBookQA | 0.963 / 0.962 | Avg. F1 / Accuracy |
Task name | Result | Metric |
---|---|---|
BPS | 0.038 | Accuracy
ruMMLU | 0.871 | Accuracy
SimpleAr | 0.995 | EM
ruHumanEval | 0.005 / 0.024 / 0.049 | pass@k
ruHHH | 0.848 | Accuracy
ruHateSpeech | 0.864 | Accuracy
ruDetox | | Overall average score (J) / Preservation of meaning (SIM) / Naturalness (FL) / Style Transfer Accuracy (STA)
ruEthics | Table results: [[-0.582, -0.551, -0.595, -0.517, -0.445], … | 5 MCC
НГУ (Novosibirsk State University)
Qwen2 72B Instruct GPTQ Int4
Qwen2 72B Instruct GPTQ Int4 is a 72B-parameter decoder-only language model quantized to Int4 with the GPTQ method. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped-query attention, and other improvements. Its tokenizer has also been improved for better adaptation to multiple natural languages and to code.
The model was pretrained on a large corpus of English, Chinese, and 27 additional languages, including Russian, with a context length of 128K tokens. After pretraining, the weights were quantized with GPTQ, a one-shot weight-quantization method based on approximate second-order information.
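For reference, this kind of quantization can be performed through the GPTQ integration in Hugging Face Transformers. The sketch below is illustrative only: the source checkpoint ID, calibration dataset, and group size are assumptions, not the authors' actual settings.

```python
# Illustrative GPTQ Int4 quantization sketch (not the authors' actual pipeline).
# Assumptions: source model ID, calibration dataset ("c4"), and group_size.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "Qwen/Qwen2-72B-Instruct"  # assumed source checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)

# One-shot weight quantization to 4 bits using approximate second-order information.
gptq_config = GPTQConfig(
    bits=4,            # Int4 weights
    dataset="c4",      # calibration data (assumption)
    group_size=128,    # per-group quantization granularity (assumption)
    tokenizer=tokenizer,
)

# Weights are quantized layer by layer while the model is loaded.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=gptq_config,
)

model.save_pretrained("qwen2-72b-instruct-gptq-int4")
tokenizer.save_pretrained("qwen2-72b-instruct-gptq-int4")
```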
After pretraining, the model was post-trained with both supervised fine-tuning and direct preference optimization.
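Direct preference optimization fits the policy to chosen/rejected completion pairs with a contrastive objective. The snippet below is a minimal sketch of that loss on precomputed sequence log-probabilities, not the authors' training code; beta=0.1 is an assumed default.

```python
# Minimal sketch of the DPO loss on precomputed sequence log-probabilities.
# Not the authors' training code; beta=0.1 is an assumed, commonly used value.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss: -log sigmoid(beta * (policy log-ratio - reference log-ratio))."""
    policy_logratio = policy_chosen_logps - policy_rejected_logps
    ref_logratio = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (policy_logratio - ref_logratio)).mean()
```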
Grouped-query attention is used so that inference runs faster and requires less memory.
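Grouped-query attention shares key/value heads across groups of query heads, which shrinks the KV cache kept during inference. The arithmetic below illustrates the saving; the layer count, head counts, and head dimension are assumptions for illustration, not values taken from the model's published config.

```python
# Illustrative KV-cache size comparison: multi-head vs. grouped-query attention.
# Layer count, head counts, and head_dim are assumptions for illustration only.
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # Two tensors (K and V) per layer, each of shape [seq_len, num_kv_heads, head_dim].
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem

num_layers, num_q_heads, head_dim, seq_len = 80, 64, 128, 32_768  # assumed

mha_cache = kv_cache_bytes(num_layers, num_q_heads, head_dim, seq_len)  # KV heads == query heads
gqa_cache = kv_cache_bytes(num_layers, 8, head_dim, seq_len)            # 8 shared KV heads (assumed)

print(f"MHA KV cache: {mha_cache / 2**30:.1f} GiB")
print(f"GQA KV cache: {gqa_cache / 2**30:.1f} GiB (8x smaller)")
```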
Apache 2.0
All parameters were left unchanged and used as prepared by the model's authors. Details:
- 1 x NVIDIA A100 80GB
- dtype auto
- PyTorch 2.3.1 + CUDA 11.7
- Transformers 4.38.2
- Context length 128K
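For context, an evaluation environment like the one listed above amounts to loading the already-quantized checkpoint with Transformers on a single GPU. The snippet below is a sketch of such a setup; the Hugging Face repository ID is an assumption.

```python
# Sketch of loading the quantized checkpoint with the environment listed above
# (Transformers + PyTorch on a single A100 80GB). The repo ID is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-72B-Instruct-GPTQ-Int4"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # matches "dtype auto" above
    device_map="auto",    # place layers on the available GPU
)

messages = [{"role": "user", "content": "Привет! Как дела?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```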