Mistral 7B

MERA, created at 12.01.2024 11:18

Overall result: 0.4
The submission does not contain all the required tasks.

Ratings for leaderboard tasks


Task name Result Metric
LCS 0.098 Accuracy
RCB 0.372 / 0.344 Accuracy / F1 macro
USE 0.022 Grade norm
RWSD 0.512 Accuracy
PARus 0.518 Accuracy
ruTiE 0.502 Accuracy
MultiQ 0.124 / 0.067 F1 / Exact match
CheGeKa 0.038 / 0 F1 / Exact match
ruModAr 0.516 Exact match
ruMultiAr 0.195 Exact match
MathLogicQA 0.344 Accuracy
ruWorldTree 0.81 / 0.811 Accuracy / F1 macro
ruOpenBookQA 0.735 / 0.732 Accuracy / F1 macro

Evaluation on open tasks:



Task name Result Metric
BPS 0.392 Accuracy
ruMMLU 0.676 Accuracy
SimpleAr 0.95 Exact match
ruHumanEval 0.012 / 0.058 / 0.116 Pass@k
ruHHH 0.556 Accuracy
ruHateSpeech 0.619
ruDetox 0.375
ruEthics
               Correct  Good    Ethical
Virtue         -0.12    -0.065  -0.114
Law            -0.091   -0.061  -0.115
Moral          -0.114   -0.056  -0.122
Justice        -0.141   -0.047  -0.104
Utilitarianism -0.129   -0.081  -0.089

Information about the submission:

MERA version: -
Torch version: -
Codebase version: -
CUDA version: -
Model weights precision: -
Seed: -
Batch size: -
Transformers version: -
Number and type of GPUs: -
Architecture: -

Team:

MERA

Name of the ML model:

Mistral 7B

Additional links:

https://arxiv.org/abs/2310.06825

Architecture description:

The Mistral-7B-v0.1 large language model (LLM) is a pretrained generative text model with 7 billion parameters. According to its authors, Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks they tested.

Description of the training:

Mistral 7B leverages grouped-query attention (GQA) and sliding-window attention (SWA). GQA significantly accelerates inference and reduces memory requirements during decoding, allowing larger batch sizes and hence higher throughput, a crucial factor for real-time applications. SWA is designed to handle longer sequences more effectively at reduced computational cost, alleviating a common limitation of LLMs. Together, these attention mechanisms contribute to the performance and efficiency of Mistral 7B.
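For illustration, here is a minimal sketch of a sliding-window causal attention mask in generic PyTorch; it is not Mistral's actual implementation, and the tensor shapes and function names are assumptions. Mistral 7B is reported to use a window of 4096 tokens, so each layer attends to at most the 4096 preceding positions, while stacked layers extend the effective receptive field.

import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    # True marks key positions a query may attend to:
    # causal (j <= i) and within the last `window` tokens (i - j < window).
    i = torch.arange(seq_len).unsqueeze(1)  # query positions
    j = torch.arange(seq_len).unsqueeze(0)  # key positions
    return (j <= i) & (i - j < window)

def sliding_window_attention(q, k, v, window):
    # q, k, v: (batch, heads, seq_len, head_dim)
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    mask = sliding_window_mask(q.shape[-2], window).to(q.device)
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v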

Pretrain data:

-

Training Details:

Mistral-7B-v0.1 is a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
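As a hedged illustration of the grouped-query attention choice (a generic sketch, not the model's code), each key/value head is shared by a group of query heads; the head counts in the example follow the published Mistral-7B-v0.1 configuration of 32 query heads and 8 key/value heads.

import torch

def grouped_query_attention(q, k, v):
    # q: (batch, n_q_heads, seq, head_dim); k, v: (batch, n_kv_heads, seq, head_dim)
    # Each of the n_kv_heads key/value heads serves n_q_heads // n_kv_heads query heads.
    groups = q.shape[1] // k.shape[1]
    k = k.repeat_interleave(groups, dim=1)  # expand KV heads to match query heads
    v = v.repeat_interleave(groups, dim=1)
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ v  # causal / sliding-window mask omitted for brevity

# Example with Mistral-7B-like head counts: 32 query heads sharing 8 KV heads.
q = torch.randn(1, 32, 16, 128)
k = torch.randn(1, 8, 16, 128)
v = torch.randn(1, 8, 16, 128)
out = grouped_query_attention(q, k, v)  # shape (1, 32, 16, 128)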

License:

Apache 2.0 license

Strategy, generation and parameters:

Code version v.1.1.0. All parameters were left unchanged and used as prepared by the organizers. Details:
- 1 x NVIDIA A100
- dtype auto
- PyTorch 2.1.2 + CUDA 12.1
- Transformers 4.36.2
- Context length 11500
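For context, a minimal sketch of loading the model with these settings through Hugging Face Transformers; the repository id and the toy prompt are assumptions, and the scores above were produced by the MERA evaluation code rather than by this snippet.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # assumed Hugging Face repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # matches the "dtype auto" setting above
    device_map="auto",    # single NVIDIA A100 in this submission
)

# Toy generation call; the MERA tasks supply their own prompts and decoding settings.
prompt = "Вопрос: сколько будет 2 + 2? Ответ:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=8, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))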


Ratings by subcategory

ruHHH (Mistral 7B, MERA). Metric: Accuracy
Honest 0.541
Helpful 0.542
Harmless 0.586
ruMMLU by subcategory (Mistral 7B, MERA). Metric: Accuracy
Anatomy 0.7
Virology 0.688
Astronomy 0.4
Marketing 0.629
Nutrition 0.857
Sociology 0.8
Management 0.733
Philosophy 0.765
Prehistory 0.6
Human aging 0.7
Econometrics 0.545
Formal logic 0.6
Global facts 0.7
Jurisprudence 0.5
Miscellaneous 0.636
Moral disputes 0.5
Business ethics 0.8
Biology (college) 0.704
Physics (college) 0.4
Human sexuality 1
Moral scenarios 0.2
World religions 0.808
Abstract algebra 0.8
Medicine (college) 0.686
Machine learning 0.7
Medical genetics 0.818
Professional law 0.688
PR 0.786
Security studies 0.9
Chemistry (school) 0.636
Computer security 0.5
International law 0.833
Logical fallacies 0.6
Politics 0.6
Clinical knowledge 0.636
Conceptual physics 0.9
Math (college) 0.4
Biology (high school) 0.667
Physics (high school) 0.6
Chemistry (high school) 0.4
Geography (high school) 0.747
Professional medicine 0.7
Electrical engineering 0.9
Elementary mathematics 0.6
Psychology (high school) 0.875
Statistics (high school) 0.8
History (high school) 1
Math (high school) 0.3
Professional accounting 0.6
Professional psychology 0.8
Computer science (college) 0.545
World history (high school) 0.875
Macroeconomics 0.794
Microeconomics 0.733
Computer science (high school) 0.5
European history 0.364
Government and politics 0.593
ruDetox (Mistral 7B, MERA)
SIM (content similarity) 0.779
FL (fluency) 0.594
STA (style transfer accuracy) 0.775
ruEthics (Mistral 7B, MERA)
               Correct  Good    Ethical
Virtue         -0.12    -0.065  -0.114
Law            -0.091   -0.061  -0.115
Moral          -0.114   -0.056  -0.122
Justice        -0.141   -0.047  -0.104
Utilitarianism -0.129   -0.081  -0.089
ruHateSpeech (Mistral 7B, MERA)
Women 0.593
Men 0.686
LGBT 0.588
Nationalities 0.595
Migrants 0.429
Other 0.672