Llama3-70B-EnSecAI-Ru-Chat

Created at 28.08.2024 13:16

Assessment of the main tasks: 0.57

The submission does not contain all the required tasks


Task name Result Metric
LCS 0.144 Accuracy
RCB 0.548 / 0.448 Avg. F1 / Accuracy
USE 0.138 Grade Norm
RWSD 0.677 Accuracy
PARus 0.926 Accuracy
ruTiE 0.828 Accuracy
MultiQ 0.541 / 0.421 F1-score/EM
CheGeKa 0.306 / 0.231 F1 / EM
ruModAr 0.709 EM
ruMultiAr 0.357 EM
MathLogicQA 0.571 Accuracy
ruWorldTree 0.987 / 0.987 Avg. F1 / Accuracy
ruOpenBookQA 0.933 / 0.932 Avg. F1 / Accuracy

Evaluation on open tasks:

These results are not counted toward the overall rating


Task name Result Metric
BPS 0.07 Accuracy
ruMMLU 0.847 Accuracy
SimpleAr 0.997 EM
ruHumanEval 0.049 / 0.244 / 0.488 pass@1 / pass@5 / pass@10
ruHHH 0.82 Accuracy
  • Honest: 0.787
  • Harmless: 0.879
  • Helpful: 0.797
ruHateSpeech 0.83 Accuracy
  • Women: 0.852
  • Men: 0.743
  • LGBT: 0.824
  • Nationality: 0.838
  • Migrants: 0.571
  • Other: 0.869
ruDetox
  • Overall average score (J): 0.072
  • Assessment of the preservation of meaning (SIM): 0.521
  • Assessment of naturalness (FL): 0.761
  • Style Transfer Accuracy (STA): 0.171
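As commonly defined for detoxification benchmarks, the joint score J is the average over samples of the per-sample product STA × SIM × FL, which is why J (0.072) is much lower than any of the three component averages. A minimal sketch of that aggregation, using made-up per-sample scores rather than real benchmark data:

```python
# Hypothetical per-sample (STA, SIM, FL) scores; the real benchmark
# produces these with trained classifiers and similarity models.
samples = [
    (0.9, 0.6, 0.8),
    (0.1, 0.5, 0.7),
]

def joint_score(samples):
    """J: mean over samples of the product STA * SIM * FL."""
    return sum(sta * sim * fl for sta, sim, fl in samples) / len(samples)

print(round(joint_score(samples), 4))
```

A single bad component (e.g. low STA on a sample) zeroes out that sample's contribution, so J penalizes any failure mode.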

ruEthics 5 MCC

                 Correct   Good     Ethical
Virtue           -0.354    -0.36    -0.468
Law              -0.324    -0.335   -0.449
Moral            -0.366    -0.379   -0.481
Justice          -0.305    -0.318   -0.405
Utilitarianism   -0.273    -0.325   -0.385
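The "5 MCC" metric reports a Matthews correlation coefficient per ethical criterion between the model's answers and each ground-truth labeling; negative values mean the model's answers anti-correlate with the labels. A minimal sketch of MCC for binary labels (the standard confusion-matrix formula, not the benchmark's actual harness):

```python
import math

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

MCC ranges from -1 (perfect anti-correlation) through 0 (chance) to +1 (perfect agreement), so the uniformly negative values in the table indicate systematic disagreement with every labeling.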

Information about the submission:

Team:

EnSec AI

Name of the ML model:

Llama3-70B-EnSecAI-Ru-Chat

Architecture description:

A version of Llama-3-70B (meta-llama/Meta-Llama-3-70B-Instruct) fine-tuned for Russian.

Description of the training:

Open datasets were used to fine-tune the model with SFT and DPO.

Pretrain data:

Llama 3 was pretrained on more than 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets as well as over 10 million human-annotated examples.

Training Details:

The model was fine-tuned on 4 x A100 (80GB) GPUs.

License:

This model was trained from Meta-Llama-3-70B-Instruct, and therefore is subject to the META LLAMA 3 COMMUNITY LICENSE AGREEMENT. (https://llama.meta.com/llama3/license/)

Strategy, generation and parameters:

temperature=0.6 top_p=0.9
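These decoding settings correspond to standard temperature scaling followed by nucleus (top-p) sampling. A minimal pure-Python sketch of what the two parameters do to a toy next-token distribution (the logits below are illustrative, not from the model):

```python
import math

def softmax(logits, temperature):
    """Temperature-scaled softmax: temperature < 1 sharpens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def top_p_filter(probs, top_p):
    """Nucleus sampling: keep the smallest set of highest-probability
    tokens whose cumulative mass reaches top_p, then renormalize."""
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

logits = [2.0, 1.0, 0.5, -1.0]            # toy next-token logits
probs = softmax(logits, temperature=0.6)  # the reported temperature
nucleus = top_p_filter(probs, top_p=0.9)  # the reported top_p
```

With these values the low-probability tail is cut off and sampling happens only over the renormalized nucleus, which is what makes generation less prone to degenerate continuations than full-distribution sampling.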