Meta-Llama-3.1-8B-Instruct

Team MERA · created at 22.09.2024 20:54
The overall result: 0.401
Place in the rating: 309
Weak tasks (place in the per-task rating):
- RWSD: 186
- PARus: 267
- RCB: 168
- ruEthics: 370
- MultiQ: 138
- ruWorldTree: 285
- ruOpenBookQA: 254
- CheGeKa: 200
- ruMMLU: 278
- ruHateSpeech: 333
- ruDetox: 115
- ruHHH: 399
- ruTiE: 187
- ruHumanEval: 272
- USE: 296
- MathLogicQA: 327
- ruMultiAr: 298
- SimpleAr: 308
- LCS: 267
- BPS: 244
- ruModAr: 511
- MaMuRAMu: 265

Ratings for leaderboard tasks


Task name | Result | Metric
LCS | 0.108 | Accuracy
RCB | 0.55 / 0.474 | Accuracy / F1 macro
USE | 0.123 | Grade norm
RWSD | 0.562 | Accuracy
PARus | 0.776 | Accuracy
ruTiE | 0.71 | Accuracy
MultiQ | 0.459 / 0.309 | F1 / Exact match
CheGeKa | 0.148 / 0.108 | F1 / Exact match
ruModAr | 0.043 | Exact match
MaMuRAMu | 0.629 | Accuracy
ruMultiAr | 0.239 | Exact match
ruCodeEval | 0 / 0 / 0 | Pass@k
MathLogicQA | 0.366 | Accuracy
ruWorldTree | 0.827 / 0.668 | Accuracy / F1 macro
ruOpenBookQA | 0.76 / 0.609 | Accuracy / F1 macro
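Several tasks above report a paired Accuracy / F1 macro score. Macro F1 averages per-class F1 with equal weight, so it penalizes models that ignore minority classes even when plain accuracy looks fine. A minimal self-contained sketch of the metric (illustrative only, not MERA's evaluation code):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 scores averaged with equal class weight."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

On a task like RCB, a gap between accuracy (0.55) and macro F1 (0.474) suggests uneven performance across the three entailment classes.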

Evaluation on open tasks:



Task name | Result | Metric
BPS | 0.937 | Accuracy
ruMMLU | 0.523 | Accuracy
SimpleAr | 0.926 | Exact match
ruHumanEval | 0.007 / 0.011 / 0.012 | Pass@k
ruHHH | 0.534 | -
ruHateSpeech | 0.634 | -
ruDetox | 0.263 | -

ruEthics | Correct | Good | Ethical
Virtue | 0.166 | 0.243 | 0.235
Law | 0.166 | 0.236 | 0.221
Moral | 0.197 | 0.258 | 0.249
Justice | 0.114 | 0.201 | 0.199
Utilitarianism | 0.17 | 0.237 | 0.212
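ruHumanEval and ruCodeEval report three Pass@k values; the page does not label the k values, but triples like 0.007 / 0.011 / 0.012 conventionally correspond to k = 1, 5, 10 (an assumption here). The standard unbiased Pass@k estimator from the HumanEval benchmark can be sketched as:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations of which c pass the
    unit tests, is correct."""
    if n - c < k:
        return 1.0  # every size-k draw must include a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, if 1 of 10 generations passes, pass@1 is 0.1 and pass@10 is 1.0, which is why the reported scores rise with k.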

Information about the submission

MERA version: v.1.2.0
Torch version: 2.4.0
Codebase version: 9b26db97
CUDA version: 12.1
Model weights precision: bfloat16
Seed: 1234
Batch size: 1
Transformers version: 4.44.2
GPUs: 1 x NVIDIA H100 80GB HBM3
Inference engine: vllm

Team:

MERA

Name of the ML model:

Meta-Llama-3.1-8B-Instruct

Model size:

8.0B

Model type:

Open

SFT

Additional links:

https://ai.meta.com/blog/meta-llama-3-1/

Architecture description:

An auto-regressive language model that uses an optimized transformer architecture, with a context length of 128K tokens. The model uses Grouped-Query Attention (GQA).

Description of the training:

Training took 1.46M GPU hours on H100-80GB hardware.

Pretrain data:

Llama 3.1 was pretrained on over 15 trillion tokens. The pretraining data comes from publicly available sources and has a knowledge cutoff of December 2023. The fine-tuning data includes publicly available instruction datasets as well as over 25M synthetically generated examples.

License:

https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE

Inference parameters

Generation Parameters:
simplear - do_sample=false; until=["\n"]
chegeka - do_sample=false; until=["\n"]
rudetox - do_sample=false; until=["\n"]
rumultiar - do_sample=false; until=["\n"]
use - do_sample=false; until=["\n", "."]
multiq - do_sample=false; until=["\n"]
rumodar - do_sample=false; until=["\n"]
ruhumaneval - do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
rucodeeval - do_sample=true; until=["\nclass", "\ndef", "\n#", "\nif", "\nprint"]; temperature=0.6
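MERA serialises these settings as "task - key=value;key=value" fragments. A hypothetical helper (the function name and shape are my own, not part of MERA's codebase) that turns them into keyword dicts, e.g. to feed vllm's SamplingParams or transformers' generate:

```python
import json

def parse_gen_params(raw: str) -> dict:
    """Parse 'task - key=value;key=value' lines into {task: {param: value}}.
    Values are parsed as JSON, so false -> False, ["\\n"] -> list, 0.6 -> float."""
    out = {}
    for frag in raw.strip().splitlines():
        frag = frag.strip().rstrip(";")
        if not frag:
            continue
        task, _, body = frag.partition(" - ")
        params = {}
        for pair in body.split(";"):
            if not pair.strip():
                continue
            key, _, val = pair.partition("=")
            params[key.strip()] = json.loads(val)
        out[task.strip()] = params
    return out

cfg = parse_gen_params(
    'simplear - do_sample=false; until=["\\n"]\n'
    'ruhumaneval - do_sample=true; until=["\\nclass", "\\ndef"]; temperature=0.6'
)
```

Note that the greedy tasks disable sampling and stop at a newline, while the two code tasks sample at temperature 0.6 with code-structure stop strings.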

The size of the context:
simplear, bps, lcs, chegeka, mathlogicqa, parus, rcb, rudetox, ruhatespeech, rummlu, ruworldtree, ruopenbookqa, rumultiar, use, rwsd, mamuramu, multiq, rumodar, ruethics, ruhhh, ruhumaneval, rucodeeval, rutie - 131072

System prompt (translated from Russian):
Solve the task according to the instruction below. Do not give any explanations or clarifications for your answer. Do not write anything extra; write only what the instruction asks for. If the instruction asks you to solve an arithmetic problem, write only the numeric answer, without the solution steps or explanations. If the instruction asks you to output a letter, digit, or word, output only that. If the instruction asks you to choose one of the answer options and output the letter or digit that corresponds to it, output only that letter or digit: no explanations, no punctuation, exactly 1 character in the answer. If the instruction asks you to complete the code of a Python function, write the code immediately, keeping the indentation as if you were continuing the function from the instruction; give no explanations, write no comments, use only the arguments from the function signature in the instruction, and do not try to read data via the input function. Do not apologize and do not start a dialogue. Output only the answer and nothing else.

Description of the template:
{{- bos_token }}
{%- if custom_tools is defined %}
    {%- set tools = custom_tools %}
{%- endif %}
{%- if not tools_in_user_message is defined %}
    {%- set tools_in_user_message = true %}
{%- endif %}
{%- if not date_string is defined %}
    {%- set date_string = "26 Jul 2024" %}
{%- endif %}
{%- if not tools is defined %}
    {%- set tools = none %}
{%- endif %}

{#- This block extracts the system message, so we can slot it into the right place. #}
{%- if messages[0]['role'] == 'system' %}
    {%- set system_message = messages[0]['content']|trim %}
    {%- set messages = messages[1:] %}
{%- else %}
    {%- set system_message = "" %}
{%- endif %}

{#- System message + builtin tools #}
{{- "<|start_header_id|>system<|end_header_id|>\n\n" }}
{%- if builtin_tools is defined or tools is not none %}
    {{- "Environment: ipython\n" }}
{%- endif %}
{%- if builtin_tools is defined %}
    {{- "Tools: " + builtin_tools | reject('equalto', 'code_interpreter') | join(", ") + "\n\n"}}
{%- endif %}
{{- "Cutting Knowledge Date: December 2023\n" }}
{{- "Today Date: " + date_string + "\n\n" }}
{%- if tools is not none and not tools_in_user_message %}
    {{- "You have access to the following functions. To call a function, please respond with JSON for a function call." }}
    {{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}
    {{- "Do not use variables.\n\n" }}
    {%- for t in tools %}
        {{- t | tojson(indent=4) }}
        {{- "\n\n" }}
    {%- endfor %}
{%- endif %}
{{- system_message }}
{{- "<|eot_id|>" }}

{#- Custom tools are passed in a user message with some extra guidance #}
{%- if tools_in_user_message and not tools is none %}
    {#- Extract the first user message so we can plug it in here #}
    {%- if messages | length != 0 %}
        {%- set first_user_message = messages[0]['content']|trim %}
        {%- set messages = messages[1:] %}
    {%- else %}
        {{- raise_exception("Cannot put tools in the first user message when there's no first user message!") }}
    {%- endif %}
    {{- '<|start_header_id|>user<|end_header_id|>\n\n' -}}
    {{- "Given the following functions, please respond with a JSON for a function call " }}
    {{- "with its proper arguments that best answers the given prompt.\n\n" }}
    {{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}
    {{- "Do not use variables.\n\n" }}
    {%- for t in tools %}
        {{- t | tojson(indent=4) }}
        {{- "\n\n" }}
    {%- endfor %}
    {{- first_user_message + "<|eot_id|>"}}
{%- endif %}

{%- for message in messages %}
    {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}
        {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' }}
    {%- elif 'tool_calls' in message %}
        {%- if not message.tool_calls|length == 1 %}
            {{- raise_exception("This model only supports single tool-calls at once!") }}
        {%- endif %}
        {%- set tool_call = message.tool_calls[0].function %}
        {%- if builtin_tools is defined and tool_call.name in builtin_tools %}
            {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}
            {{- "<|python_tag|>" + tool_call.name + ".call(" }}
            {%- for arg_name, arg_val in tool_call.arguments | items %}
                {{- arg_name + '="' + arg_val + '"' }}
                {%- if not loop.last %}
                    {{- ", " }}
                {%- endif %}
            {%- endfor %}
            {{- ")" }}
        {%- else %}
            {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}
            {{- '{"name": "' + tool_call.name + '", ' }}
            {{- '"parameters": ' }}
            {{- tool_call.arguments | tojson }}
            {{- "}" }}
        {%- endif %}
        {%- if builtin_tools is defined %}
            {#- This means we're in ipython mode #}
            {{- "<|eom_id|>" }}
        {%- else %}
            {{- "<|eot_id|>" }}
        {%- endif %}
    {%- elif message.role == "tool" or message.role == "ipython" %}
        {{- "<|start_header_id|>ipython<|end_header_id|>\n\n" }}
        {%- if message.content is mapping or message.content is iterable %}
            {{- message.content | tojson }}
        {%- else %}
            {{- message.content }}
        {%- endif %}
        {{- "<|eot_id|>" }}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' }}
{%- endif %}
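For a plain system-plus-user exchange with no tools, the template above reduces to a fixed header layout. A minimal sketch of the rendered prompt (assuming bos_token is "<|begin_of_text|>", as in Llama 3 tokenizers; in practice you would call tokenizer.apply_chat_template rather than build this by hand):

```python
def llama31_prompt(system: str, user: str, date_string: str = "26 Jul 2024") -> str:
    """Render the no-tools path of the chat template above by hand."""
    return (
        "<|begin_of_text|>"  # bos_token (assumed value)
        "<|start_header_id|>system<|end_header_id|>\n\n"
        "Cutting Knowledge Date: December 2023\n"
        f"Today Date: {date_string}\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"  # add_generation_prompt
    )

prompt = llama31_prompt("Answer briefly.", "2+2?")
```

The trailing assistant header is what makes the model generate the next turn; the evaluation harness then cuts generation at the task's stop strings.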

Ratings by subcategory

All subcategory scores below are for Meta-Llama-3.1-8B-Instruct (team MERA).

USE subcategories (Grade Norm):
1: 0.267, 2: 0.133, 3: 0.6, 4: 0.133, 5: 0, 6: 0.067, 7: 0, 8: -, 9: 0, 10: 0, 11: 0.067, 12: 0.033, 13: 0, 14: 0.033, 15: 0.067, 16: 0.217, 17: 0.033, 18: 0.033, 19: 0, 20: 0.033, 21: 0, 22: 0.433, 23: 0.1, 24: 0.067, 25: 0, 26: 0.308, 8_0: 0, 8_1: 0.033, 8_2: 0.3, 8_3: 0.033, 8_4: 0.033
ruHHH:
Honest: 0.541, Helpful: 0.559, Harmless: 0.5
ruMMLU subcategories (Accuracy):
Anatomy: 0.459, Virology: 0.446, Astronomy: 0.566, Marketing: 0.709, Nutrition: 0.601, Sociology: 0.687, Management: 0.65, Philosophy: 0.55, Prehistory: 0.559, Human aging: 0.529, Econometrics: 0.377, Formal logic: 0.413, Global facts: 0.31, Jurisprudence: 0.574, Miscellaneous: 0.623, Moral disputes: 0.572, Business ethics: 0.64, Biology (college): 0.549, Physics (college): 0.367, Human sexuality: 0.618, Moral scenarios: 0.358, World religions: 0.643, Abstract algebra: 0.35, Medicine (college): 0.509, Machine learning: 0.339, Medical genetics: 0.57, Professional law: 0.403, PR: 0.62, Security studies: 0.71, Chemistry (college): 0.36, Computer security: 0.65, International law: 0.752, Logical fallacies: 0.509, Politics: 0.677, Clinical knowledge: 0.558, Conceptual physics: 0.534, Math (college): 0.34, Biology (high school): 0.665, Physics (high school): 0.417, Chemistry (high school): 0.488, Geography (high school): 0.677, Professional medicine: 0.496, Electrical engineering: 0.531, Elementary mathematics: 0.432, Psychology (high school): 0.637, Statistics (high school): 0.444, History (high school): 0.627, Math (high school): 0.378, Professional accounting: 0.397, Professional psychology: 0.472, Computer science (college): 0.42, World history (high school): 0.722, Macroeconomics: 0.518, Microeconomics: 0.55, Computer science (high school): 0.7, European history: 0.745, Government and politics: 0.596
ruDetox:
SIM (content similarity): 0.631, FL (fluency): 0.659, STA (style transfer accuracy): 0.688
MaMuRAMu subcategories (Accuracy):
Anatomy: 0.511, Virology: 0.663, Astronomy: 0.567, Marketing: 0.583, Nutrition: 0.579, Sociology: 0.724, Management: 0.483, Philosophy: 0.632, Prehistory: 0.654, Gerontology: 0.523, Econometrics: 0.718, Formal logic: 0.583, Global facts: 0.508, Jurisprudence: 0.628, Miscellaneous: 0.608, Moral disputes: 0.556, Business ethics: 0.673, Biology (college): 0.6, Physics (college): 0.404, Human sexuality: 0.684, Moral scenarios: 0.386, World religions: 0.712, Abstract algebra: 0.578, Medicine (college): 0.633, Machine learning: 0.6, Genetics: 0.636, Professional law: 0.577, PR: 0.614, Security: 0.912, Chemistry (college): 0.644, Computer security: 0.756, International law: 0.641, Logical fallacies: 0.625, Politics: 0.772, Clinical knowledge: 0.47, Conceptual physics: 0.589, Math (college): 0.511, Biology (high school): 0.733, Physics (high school): 0.526, Chemistry (high school): 0.6, Geography (high school): 0.731, Professional medicine: 0.603, Electrical engineering: 0.689, Elementary mathematics: 0.667, Psychology (high school): 0.879, Statistics (high school): 0.822, History (high school): 0.759, Math (high school): 0.705, Professional accounting: 0.631, Professional psychology: 0.807, Computer science (college): 0.822, World history (high school): 0.652, Macroeconomics: 0.658, Microeconomics: 0.545, Computer science (high school): 0.558, European history: 0.509, Government and politics: 0.678
ruEthics, "Correct" criterion:
Virtue: 0.166, Law: 0.166, Moral: 0.197, Justice: 0.114, Utilitarianism: 0.17

ruEthics, "Good" criterion:
Virtue: 0.243, Law: 0.236, Moral: 0.258, Justice: 0.201, Utilitarianism: 0.237

ruEthics, "Ethical" criterion:
Virtue: 0.235, Law: 0.221, Moral: 0.249, Justice: 0.199, Utilitarianism: 0.212
ruHateSpeech by target group:
Women: 0.676, Men: 0.514, LGBT: 0.471, Nationalities: 0.703, Migrants: 0.429, Other: 0.656