T-lite-0.1

Created 16.08.2024 09:45

Score on the main tasks: 0.492

The submission does not include all of the mandatory tasks


Task            Result                   Metric
LCS             0.14                     Accuracy
RCB             0.511 / 0.418            Avg. F1 / Accuracy
USE             0.05                     Grade Norm
RWSD            0.585                    Accuracy
PARus           0.858                    Accuracy
ruTiE           0.681                    Accuracy
MultiQ          0.383 / 0.29             F1-score / EM
CheGeKa         0.118 / 0.06             F1 / EM
ruModAr         0.667                    EM
ruMultiAr       0.269                    EM
MathLogicQA     0.37                     Accuracy
ruWorldTree     0.88 / 0.88              Avg. F1 / Accuracy
ruOpenBookQA    0.783 / 0.782            Avg. F1 / Accuracy

Score on the open tasks:

Not counted toward the overall rating


Task            Result                   Metric
BPS             0.358                    Accuracy
ruMMLU          0.759                    Accuracy
SimpleAr        0.955                    EM
ruHumanEval     0.023 / 0.113 / 0.226    pass@k
ruHHH           0.596                    Accuracy
  • Honest: 0.541
  • Harmless: 0.707
  • Helpful: 0.542
ruHateSpeech    0.732                    Accuracy
  • Women: 0.731
  • Men: 0.743
  • LGBT: 0.706
  • Nationality: 0.649
  • Migrants: 0.571
  • Other: 0.803
ruDetox
  • Overall average score (J): 0.197
  • Meaning preservation score (SIM): 0.511
  • Naturalness score (FL): 0.727
  • Style transfer accuracy (STA): 0.419

ruEthics
                  Correct   Good     Ethical
Virtue            -0.076    -0.187   -0.225
Law               -0.091    -0.213   -0.23
Morality          -0.092    -0.193   -0.222
Justice           -0.071    -0.185   -0.215
Utilitarianism    -0.11     -0.153   -0.231


Metric: 5 MCC (Matthews correlation coefficient, one per ethics criterion)

Submission information:

Team:

T-Bank AI

ML model name:

T-lite-0.1

Link to the ML model:

https://huggingface.co/AnatoliiPotapov/T-lite-0.1

Additional links:

https://huggingface.co/AnatoliiPotapov/T-lite-instruct-0.1

Architecture description:

T-lite is a decoder language model with:
  • pre-normalization via RMSNorm
  • SwiGLU activation function
  • rotary positional embeddings (RoPE)
  • grouped query attention (GQA)
T-lite was trained in bf16.
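As a rough illustration of the components listed above, the published checkpoint's configuration can be inspected with Hugging Face Transformers. This is a minimal sketch that assumes a Llama-style config; the exact field values are not stated in this submission.

    from transformers import AutoConfig

    # Minimal sketch: inspect the published checkpoint's configuration for the
    # architectural components listed above. Field names assume a Llama-style
    # config; they are not confirmed by this submission.
    cfg = AutoConfig.from_pretrained("AnatoliiPotapov/T-lite-0.1")

    for field in (
        "hidden_act",            # SwiGLU gating is usually exposed as "silu"
        "rms_norm_eps",          # pre-normalization via RMSNorm
        "rope_theta",            # rotary positional embeddings (RoPE)
        "num_attention_heads",
        "num_key_value_heads",   # fewer KV heads than attention heads => GQA
        "torch_dtype",           # expected to be bfloat16, matching the bf16 note
    ):
        print(field, getattr(cfg, field, "n/a"))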

Training description:

We employed the Decoupled AdamW optimizer with β1 = 0.9, β2 = 0.95, and eps = 1.0e-8. The learning rate was set to 1.0e-5, with a constant schedule and a warmup period of 10 steps during stage 1, and a cosine schedule during stage 2. Weight decay was applied at a rate of 1.0e-6, and gradient clipping was performed with a maximum norm of 1.0. The maximum sequence length was set to 8192. Each batch contained approximately 6 million tokens. Training was conducted using Fully Sharded Data Parallel (FSDP) with full-shard/hybrid-shard strategies. The setup achieved a throughput of 3000 tokens/sec/GPU and a Model FLOPs Utilization (MFU) of 0.59.
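A minimal PyTorch sketch of the optimizer and gradient-clipping settings described above, using torch.optim.AdamW as a stand-in for the Decoupled AdamW variant and a placeholder model and loss:

    import torch
    from torch import nn

    model = nn.Linear(8, 8)  # placeholder standing in for the actual T-lite model

    # Hyperparameters as stated in the training description above.
    optimizer = torch.optim.AdamW(
        model.parameters(),
        lr=1.0e-5,
        betas=(0.9, 0.95),
        eps=1.0e-8,
        weight_decay=1.0e-6,
    )

    def training_step(batch: torch.Tensor) -> None:
        loss = model(batch).sum()  # placeholder loss
        loss.backward()
        # Gradient clipping with a maximum norm of 1.0, as described above.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()
        optimizer.zero_grad()

    training_step(torch.randn(4, 8))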

Pretraining data:

Stage 1: massive continual pre-training
  • 300B tokens × 0.3 epoch
  • The proportion of data in Russian is 85%, as a trade-off between language adaptation and English-language performance
  • Styles and topics in Common Crawl (CC) data were downsampled
  • Domains in book datasets were balanced
  • The proportion of code data was increased
Stage 2: focuses on refining the quality of the dataset
  • 20B tokens × 3 epochs
  • Includes instruction datasets of smaller volume
  • Advertisements and news were aggressively downsampled
  • Instructions and articles were upsampled
  • Educational content was balanced
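Read literally, these budgets work out to roughly 300B × 0.3 ≈ 90B tokens seen in stage 1 and 20B × 3 = 60B tokens in stage 2, i.e. about 150B training tokens overall; this interpretation of "tokens × epochs" is an assumption and is not stated explicitly in the submission.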

Training details:

-

License:

WTFPL

Strategy, generation and parameters:

1× Nvidia A100
Context length: 8192
dtype: bfloat16
PyTorch==2.3.1 + Transformers 4.44.0 + CUDA 12.1
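A minimal inference sketch matching the setup above (single GPU, bfloat16, Hugging Face Transformers); the prompt and generation parameters are illustrative only and are not specified in this submission:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "AnatoliiPotapov/T-lite-0.1"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # matches the bfloat16 dtype listed above
    ).to("cuda")                     # single Nvidia A100

    # T-lite-0.1 is a base (non-instruct) model, so plain text continuation is used.
    inputs = tokenizer("Столица России —", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))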

Inference comments:

🚨 T-lite is designed for further fine-tuning and is not intended as a ready-to-use conversational assistant. Users are advised to exercise caution and are responsible for any additional training and oversight required to ensure the model's responses meet acceptable ethical and safety standards. The responsibility for incorporating this model into industrial or commercial solutions lies entirely with those who choose to deploy it.