Llama 2 13B

MERA. Created at 12.01.2024 11:16

Overall result: 0.368
The submission does not contain all the required tasks.

Ratings for leaderboard tasks


Task name Result Metric
LCS 0.09 Accuracy
RCB 0.329 / 0.258 Accuracy / F1 macro
USE 0.01 Grade norm
RWSD 0.5 Accuracy
PARus 0.478 Accuracy
ruTiE 0.493 Accuracy
MultiQ 0.098 / 0.014 F1 / Exact match
CheGeKa 0.043 / 0 F1 / Exact match
ruModAr 0.486 Exact match
ruMultiAr 0.156 Exact match
MathLogicQA 0.314 Accuracy
ruWorldTree 0.703 / 0.703 Accuracy / F1 macro
ruOpenBookQA 0.638 / 0.637 Accuracy / F1 macro

Evaluation on open tasks:


Task name Result Metric
BPS 0.507 Accuracy
ruMMLU 0.563 Accuracy
SimpleAr 0.911 Exact match
ruHumanEval 0.008 / 0.04 / 0.079 Pass@k
ruHHH 0.466 Accuracy
ruHateSpeech 0.581 Accuracy
ruDetox 0.349 Joint score
ruEthics
Criterion Correct Good Ethical
Virtue -0.102 0.037 -0.128
Law -0.076 0.03 -0.14
Moral -0.132 0.013 -0.157
Justice -0.122 0.027 -0.121
Utilitarianism -0.142 0.027 -0.085

Information about the submission:

MERA version
-
Torch Version
-
Codebase version
-
CUDA version
-
Precision of the model weights
-
Seed
-
Batch size
-
Transformers version
-
Number and type of GPUs
-
Architecture
-

Team:

MERA

Name of the ML model:

Llama 2 13B

Additional links:

https://arxiv.org/abs/2307.09288

Architecture description:

Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Number of parameters: 13B.
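
For reference, a minimal sketch of loading this model with the Hugging Face Transformers library and checking its parameter count. The checkpoint ID meta-llama/Llama-2-13b-hf and the Transformers loading path are assumptions for illustration; the submission does not state which checkpoint was evaluated.

# Minimal sketch (assumption: the gated Hugging Face checkpoint
# "meta-llama/Llama-2-13b-hf" is the evaluated model).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# A 13B-parameter decoder-only transformer should report roughly 1.3e10 parameters.
n_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {n_params / 1e9:.1f}B")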

Description of the training:

The authors used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining; fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. Pretraining the 13B model took 368,640 GPU hours.

Pretrain data:

Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources.

Training Details:

Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens.
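
As a back-of-the-envelope consequence of those two figures (an illustrative calculation, not a number reported by the authors), one pass over the 2-trillion-token corpus at a 4M-token global batch corresponds to roughly 500,000 optimizer steps:

\[
\frac{2 \times 10^{12}\ \text{tokens}}{4 \times 10^{6}\ \text{tokens/step}} = 5 \times 10^{5}\ \text{steps}
\]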

License:

A custom commercial license is available at: https://ai.meta.com/resources/models-and-libraries/llama-downloads/

Strategy, generation and parameters:

Code version v.1.1.0. All parameters were left unchanged and used as prepared by the organizers. Details:
- 1 x NVIDIA A100
- dtype: auto
- PyTorch 2.1.2 + CUDA 12.1
- Transformers 4.36.2
- Context length: 4096
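
For concreteness, a sketch of what generation under the listed settings could look like (dtype auto, 4096-token context, a single A100). The checkpoint ID, the prompt, and greedy decoding are assumptions for illustration; this is not the actual MERA evaluation harness.

# Sketch under the settings listed above; checkpoint ID, prompt, and greedy
# decoding are illustrative assumptions, not the MERA harness itself.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # "dtype auto": keep the precision stored in the checkpoint
).to("cuda:0")  # 1 x NVIDIA A100

prompt = "Question: What is 2 + 2? Answer:"
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=4096).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16, do_sample=False)  # greedy decoding (assumption)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))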


Ratings by subcategory

ruHHH (Metric: Accuracy)
Model, team: Llama 2 13B, MERA
Honest 0.475
Helpful 0.458
Harmless 0.466
ruMMLU subcategories (Metric: Accuracy)
Model, team: Llama 2 13B, MERA
Anatomy 0.7
Virology 0.563
Astronomy 0.5
Marketing 0.514
Nutrition 0.571
Sociology 0.7
Management 0.6
Philosophy 0.529
Prehistory 0.8
Human aging 0.9
Econometrics 0.636
Formal logic 0.2
Global facts 0.3
Jurisprudence 0.308
Miscellaneous 0.409
Moral disputes 0.4
Business ethics 0.7
Biology (college) 0.519
Physics (college) 0.2
Human Sexuality 0.9
Moral scenarios 0.3
World religions 0.635
Abstract algebra 0.7
Medicine (college) 0.51
Machine learning 0.3
Medical genetics 0.636
Professional law 0.375
PR 0.571
Security studies 1
Chemistry (school) 0.455
Computer security 0.3
International law 0.611
Logical fallacies 0.6
Politics 0.5
Clinical knowledge 0.455
Conceptual physics 0.9
Math (college) 0.3
Biology (high school) 0.762
Physics (high school) 0.5
Chemistry (high school) 0.5
Geography (high school) 0.658
Professional medicine 0.5
Electrical engineering 0.3
Elementary mathematics 0.4
Psychology (high school) 0.813
Statistics (high school) 0.5
History (high school) 0.8
Math (high school) 0.3
Professional accounting 0.6
Professional psychology 0.8
Computer science (college) 0.636
World history (high school) 0.688
Macroeconomics 0.676
Microeconomics 0.6
Computer science (high school) 0.542
European history 0.424
Government and politics 0.593
ruDetox subscores
Model, team: Llama 2 13B, MERA
SIM 0.72
FL 0.612
STA 0.742
ruEthics
Model, team: Llama 2 13B, MERA
Question Virtue Law Moral Justice Utilitarianism
Correct -0.102 -0.076 -0.132 -0.122 -0.142
Good 0.037 0.03 0.013 0.027 0.027
Ethical -0.128 -0.14 -0.157 -0.121 -0.085
ruHateSpeech target groups
Model, team: Llama 2 13B, MERA
Women 0.556
Men 0.714
LGBT 0.588
Nationalities 0.649
Migrants 0.286
Other 0.541