Qwen 72B Instruct GPTQ Int4

НГУ · Created 25.08.2024 08:52

Overall result: 0.524
The submission does not contain all the required tasks.

Ratings for leaderboard tasks


Task name | Result | Metric
LCS | 0.15 | Accuracy
RCB | 0.543 / 0.507 | Accuracy / F1 macro
USE | 0.257 | Grade norm
RWSD | 0.7 | Accuracy
PARus | 0.958 | Accuracy
ruTiE | 0.874 | Accuracy
MultiQ | 0.305 / 0.182 | F1 / Exact match
CheGeKa | 0.08 / 0.002 | F1 / Exact match
ruModAr | 0.194 | Exact match
ruMultiAr | 0.403 | Exact match
MathLogicQA | 0.681 | Accuracy
ruWorldTree | 0.989 / 0.989 | Accuracy / F1 macro
ruOpenBookQA | 0.963 / 0.962 | Accuracy / F1 macro

Evaluation on open tasks:


Task name | Result | Metric
BPS | 0.038 | Accuracy
ruMMLU | 0.871 | Accuracy
SimpleAr | 0.995 | Exact match
ruHumanEval | 0.005 / 0.024 / 0.049 | Pass@k (see the note after this table)
ruHHH | 0.848 | Accuracy
ruHateSpeech | 0.864 | Accuracy
ruDetox | 0.028 | Joint score
ruEthics | see the table below | Correlation per criterion

ruEthics:

Criterion | Correct | Good | Ethical
Virtue | -0.582 | -0.581 | -0.675
Law | -0.551 | -0.592 | -0.679
Moral | -0.595 | -0.634 | -0.726
Justice | -0.517 | -0.529 | -0.617
Utilitarianism | -0.445 | -0.492 | -0.552
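
A note on ruHumanEval: the three results are Pass@k at increasing values of k. Assuming the standard unbiased estimator from the HumanEval methodology (n completions sampled per task, of which c pass the tests; the exact n and k for this run are not stated), the metric is:

```latex
\mathrm{pass@}k \;=\; \mathbb{E}_{\text{tasks}}\!\left[\, 1 - \frac{\binom{n-c}{k}}{\binom{n}{k}} \,\right]
```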

Information about the submission:

MERA version: -
Torch version: -
Codebase version: -
CUDA version: -
Model weights precision: -
Seed: -
Batch size: -
Transformers version: -
Number and type of GPUs: -
Architecture: -

Team:

НГУ

Name of the ML model:

Qwen 72B Instruct GPTQ Int4

Architecture description:

Qwen2 72B Instruct GPTQ Int4 is a 72B-parameter decoder-only language model whose weights have been quantized to Int4 with the GPTQ method. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, and grouped-query attention, among other refinements. The tokenizer is additionally adapted to multiple natural languages and to code.
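
For reference, the SwiGLU feed-forward block mentioned above gates one linear projection with the SiLU of another. A minimal PyTorch sketch, with hypothetical dimensions rather than Qwen2-72B's actual sizes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    """SwiGLU-style gated feed-forward block (illustrative, not Qwen2's code)."""
    def __init__(self, d_model: int = 1024, d_ff: int = 2816):  # hypothetical sizes
        super().__init__()
        self.gate = nn.Linear(d_model, d_ff, bias=False)
        self.up = nn.Linear(d_model, d_ff, bias=False)
        self.down = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU(x) = down(SiLU(gate(x)) * up(x))
        return self.down(F.silu(self.gate(x)) * self.up(x))

print(SwiGLU()(torch.randn(2, 8, 1024)).shape)  # torch.Size([2, 8, 1024])
```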

Description of the training:

The model was pretrained on a large amount of data in English, Chinese, and 27 additional languages, including Russian, with a context length of 128K tokens. After pretraining, the weights were quantized using one-shot weight quantization based on approximate second-order information, known as GPTQ.
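
For context, GPTQ (as described in its original paper; the calibration setup used for this checkpoint is not stated here) solves a layer-wise reconstruction problem: for each layer with full-precision weights W and calibration inputs X, it picks quantized weights that best preserve the layer's outputs, using the Hessian H = 2XX^T of this objective to order and compensate per-weight rounding errors:

```latex
\hat{W} \;=\; \arg\min_{\hat{W}\,\in\,\text{Int4 grid}} \; \lVert W X - \hat{W} X \rVert_2^2
```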

Pretrain data:

The model was pretrained on a large amount of data and afterwards post-trained with both supervised finetuning and direct preference optimization.
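
The preference stage is not detailed in the submission; assuming the standard DPO objective (Rafailov et al.), with preferred and rejected responses y_w and y_l and the SFT model as the reference policy, the loss minimized is:

```latex
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} \;-\; \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
```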

Training Details:

Grouped-query attention was applied so that the model benefits from faster inference and lower memory usage (a smaller KV cache).
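
To illustrate the mechanism (a sketch, not the model's actual implementation), grouped-query attention shares each key/value head across a group of query heads. The head counts below match the published Qwen2-72B configuration but should be treated as assumptions here:

```python
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v):
    """q: (batch, n_q_heads, seq, head_dim); k, v: (batch, n_kv_heads, seq, head_dim)."""
    group = q.shape[1] // k.shape[1]  # query heads served by each KV head
    # Expand each KV head to its group of query heads; the KV cache itself stays
    # only n_kv_heads wide, which is where the memory saving comes from.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    return F.scaled_dot_product_attention(q, k, v, is_causal=True)

b, s, d = 1, 16, 128          # toy batch, sequence, head dimensions
q = torch.randn(b, 64, s, d)  # 64 query heads (assumed, per Qwen2-72B config)
k = torch.randn(b, 8, s, d)   # 8 KV heads -> 8x smaller KV cache
v = torch.randn(b, 8, s, d)
print(grouped_query_attention(q, k, v).shape)  # torch.Size([1, 64, 16, 128])
```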

License:

Apache 2.0

Strategy, generation and parameters:

All generation parameters were left unchanged and used as prepared by the model's authors. Details:
- 1 x NVIDIA A100 80GB
- dtype auto
- PyTorch 2.3.1 + CUDA 11.7
- Transformers 4.38.2
- Context length 128K
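
A minimal inference sketch matching this setup. The repository id and chat-template usage are assumed from the public Qwen2 release, and loading the GPTQ checkpoint additionally requires the optimum/auto-gptq stack; this is not the team's actual evaluation harness:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-72B-Instruct-GPTQ-Int4"  # assumed public repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # "dtype auto", as in the submission details
    device_map="auto",   # Int4 weights fit on a single A100 80GB
)

messages = [{"role": "user", "content": "Привет! Кто ты?"}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```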


Ratings by subcategory

ruHHH (Accuracy):

Honest | Helpful | Harmless
0.869 | 0.847 | 0.828

ruMMLU subcategories (Accuracy):
Anatomy 0.8; Virology 0.875; Astronomy 0.8; Marketing 0.8; Nutrition 0.952; Sociology 1; Management 0.867; Philosophy 0.765; Prehistory 0.9; Human aging 1; Econometrics 0.818; Formal logic 0.8; Global facts 0.8; Jurisprudence 0.846; Miscellaneous 0.727; Moral disputes 0.5; Business ethics 0.8; Biology (college) 0.889; Physics (college) 0.9; Human Sexuality 1; Moral scenarios 0.7; World religions 0.962; Abstract algebra 1; Medicine (college) 0.922; Machine learning 0.8; Medical genetics 0.909; Professional law 0.875; PR 0.857; Security studies 1; Chemistry (college) 0.818; Computer security 0.5; International law 0.833; Logical fallacies 1; Politics 1; Clinical knowledge 0.909; Conceptual physics 0.9; Math (college) 1; Biology (high school) 0.81; Physics (high school) 0.8; Chemistry (high school) 0.9; Geography (high school) 0.949; Professional medicine 1; Electrical engineering 0.9; Elementary mathematics 0.9; Psychology (high school) 0.938; Statistics (high school) 0.9; History (high school) 1; Math (high school) 1; Professional accounting 0.9; Professional psychology 1; Computer science (college) 0.864; World history (high school) 1; Macroeconomics 0.941; Microeconomics 0.867; Computer science (high school) 0.667; European history 0.667; Government and politics 0.778

ruDetox subcategories:

SIM (content similarity) | FL (fluency) | STA (style transfer accuracy)
0.301 | 0.798 | 0.103
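
Following the detox evaluation methodology, the overall ruDetox result (0.028 above) is a Joint score: the per-sample product STA × SIM × FL averaged over samples, which is why it need not equal the product of the three averaged components. A toy illustration with made-up per-sample scores:

```python
# Hypothetical per-sample scores; real values come from the task's evaluators.
sta = [0.0, 0.2, 0.1]   # style transfer accuracy
sim = [0.3, 0.4, 0.2]   # content similarity
fl  = [0.9, 0.8, 0.7]   # fluency

joint = sum(s * m * f for s, m, f in zip(sta, sim, fl)) / len(sta)
product_of_means = (sum(sta) / 3) * (sum(sim) / 3) * (sum(fl) / 3)
print(joint, product_of_means)  # averaging products != multiplying averages
```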

ruEthics subcategories (correlation with each ground-truth label):

Label | Virtue | Law | Moral | Justice | Utilitarianism
Correct | -0.582 | -0.551 | -0.595 | -0.517 | -0.445
Good | -0.581 | -0.592 | -0.634 | -0.529 | -0.492
Ethical | -0.675 | -0.679 | -0.726 | -0.617 | -0.552

ruHateSpeech subcategories (Accuracy by target group):

Women | Men | LGBT | Nationalities | Migrants | Other
0.861 | 0.743 | 0.941 | 0.892 | 0.714 | 0.918