Random submission

Created at 09.01.2024 08:34

Overall score on the main tasks: 0.205

The submission does not contain all the required tasks

Task name Result Metric
LCS 0.096 Accuracy
RCB 0.361 / 0.36 Avg. F1 / Accuracy
USE 0.064 Grade Norm
RWSD 0.519 Accuracy
PARus 0.482 Accuracy
ruTiE 0.472 Accuracy
MultiQ 0.014 / 0.001 F1 / EM
CheGeKa 0.002 / 0 F1 / EM
ruModAr 0.0 EM
ruMultiAr 0.0 EM
MathLogicQA 0.244 Accuracy
ruWorldTree 0.23 / 0.229 Avg. F1 / Accuracy
ruOpenBookQA 0.245 / 0.245 Avg. F1 / Accuracy

Evaluation on open tasks:

These results are not included in the overall rating

Task name Result Metric
BPS 0.5 Accuracy
ruMMLU 0.258 Accuracy
SimpleAr 0.0 EM
ruHumanEval 0 / 0 / 0 pass@k
ruHHH 0.522 Accuracy
  • Honest: 0.492
  • Harmless: 0.552
  • Helpful: 0.525
ruHateSpeech 0.468 Accuracy
  • Women: 0.463
  • Men: 0.543
  • LGBT: 0.529
  • Nationality: 0.459
  • Migrants: 0.857
  • Other: 0.377
ruDetox
  • Overall average score (J): 0.382
  • Assessment of the preservation of meaning (SIM): 0.805
  • Assessment of naturalness (FL): 0.558
  • Style Transfer Accuracy (STA): 0.841

ruEthics
Criterion      Correct  Good    Ethical
Virtue          0.013    0.026  -0.022
Law             0.014    0.029   0.016
Moral          -0.01    -0.023  -0.017
Justice        -0.038   -0.045  -0.053
Utilitarianism  0.014    0.044   0.019

Metric: 5 MCC (one Matthews correlation coefficient per ethical criterion)
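The ruEthics metric above is the Matthews correlation coefficient. As a minimal sketch of how MCC can be computed for binary labels (the function name and example labels below are illustrative, not MERA's evaluation code):

```python
from math import sqrt

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Convention: MCC is 0 when any confusion-matrix margin is empty.
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(mcc([1, 0, 1, 0], [1, 0, 1, 0]))  # 1.0 (perfect agreement)
print(mcc([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.0 (chance-level agreement)
```

MCC ranges from -1 to 1, with 0 meaning no correlation between predictions and labels, which is why the near-zero values in the table above are expected for a random baseline.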

Information about the submission:

Team:

MERA

Name of the ML model:

Random submission

Link to the ML model:

https://github.com/ai-forever/MERA

Architecture description:

Random submission. A basic lower-bound baseline to compare against.

Description of the training:

Random submission. For each task, an answer is chosen uniformly at random and scored.
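The procedure above can be sketched as follows for a multiple-choice task scored with accuracy (the item format and function name are illustrative assumptions, not MERA's actual harness):

```python
import random

# Hypothetical item format: (list of answer options, index of the gold answer).
def random_baseline_accuracy(items, seed=0):
    """Pick an answer uniformly at random for each item and score accuracy."""
    rng = random.Random(seed)
    correct = 0
    for options, gold_idx in items:
        guess = rng.randrange(len(options))
        if guess == gold_idx:
            correct += 1
    return correct / len(items)

# 10,000 four-option items: the expected accuracy is 1/4.
items = [(["A", "B", "C", "D"], 0) for _ in range(10_000)]
print(random_baseline_accuracy(items))  # close to 0.25
```

This explains the scores in the tables above: roughly 0.25 on four-option tasks, near 0.5 on binary tasks (BPS, RWSD, PARus), and essentially 0 on exact-match generation tasks (ruModAr, ruMultiAr, SimpleAr), where a random string almost never matches the reference.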

Pretrain data:

No data.

Training Details:

No training.

License:

-

Strategy, generation and parameters:

-