T-lite-0.1

T-Bank AI · Created at 16.08.2024 09:45

Overall result: 0.492
The submission does not contain all the required tasks.

Ratings for leaderboard tasks


Task name | Result | Metric
LCS | 0.14 | Accuracy
RCB | 0.511 / 0.418 | Accuracy / F1 macro
USE | 0.05 | Grade norm
RWSD | 0.585 | Accuracy
PARus | 0.858 | Accuracy
ruTiE | 0.681 | Accuracy
MultiQ | 0.383 / 0.29 | F1 / Exact match
CheGeKa | 0.118 / 0.06 | F1 / Exact match
ruModAr | 0.667 | Exact match
ruMultiAr | 0.269 | Exact match
MathLogicQA | 0.37 | Accuracy
ruWorldTree | 0.88 / 0.88 | Accuracy / F1 macro
ruOpenBookQA | 0.783 / 0.782 | Accuracy / F1 macro

Evaluation on open tasks:



Task name | Result | Metric
BPS | 0.358 | Accuracy
ruMMLU | 0.759 | Accuracy
SimpleAr | 0.955 | Exact match
ruHumanEval | 0.023 / 0.113 / 0.226 | Pass@k (see the estimator sketch after this table)
ruHHH | 0.596 | Accuracy
ruHateSpeech | 0.732 | Accuracy
ruDetox | 0.197 | Overall average score (J)
ruEthics | see below | Correlation per criterion

ruEthics (correlation of the model's answers with each ethics criterion):
Criterion | Correct | Good | Ethical
Virtue | -0.076 | -0.187 | -0.225
Law | -0.091 | -0.213 | -0.23
Moral | -0.092 | -0.193 | -0.222
Justice | -0.071 | -0.185 | -0.215
Utilitarianism | -0.11 | -0.153 | -0.231
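
For reference on the Pass@k metric above: pass@k is conventionally estimated with the unbiased estimator of Chen et al. (2021); the page does not break out which k values the three numbers correspond to, so the counts in this sketch are purely illustrative:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k).

    n: total generated samples per problem
    c: number of samples that pass the tests
    k: the k in pass@k
    """
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Illustrative usage: 10 samples per problem, 2 of them correct
print(pass_at_k(n=10, c=2, k=1))  # ~0.2
```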

Information about the submission:

MERA version
-
Torch version
-
Codebase version
-
CUDA version
-
Model weights precision
-
Seed
-
Batch
-
Transformers version
-
Number and type of GPUs
-
Architecture
-

Team:

T-Bank AI

Name of the ML model:

T-lite-0.1

Additional links:

https://huggingface.co/AnatoliiPotapov/T-lite-instruct-0.1

Architecture description:

T-lite is a decoder-only language model with:
- pre-normalization via RMSNorm
- SwiGLU activation function
- rotary positional embeddings (RoPE)
- grouped query attention (GQA)

T-lite was trained in bf16.
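
As an illustration of how two of these choices typically look in code, here is a minimal sketch of an RMSNorm plus SwiGLU feed-forward block; the layer sizes are hypothetical, not T-lite's actual dimensions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    """Pre-normalization via RMSNorm: rescale by the root-mean-square of activations."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return x * rms * self.weight

class SwiGLU(nn.Module):
    """SwiGLU feed-forward: a SiLU-gated linear unit, as in LLaMA-style decoders."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.gate = nn.Linear(dim, hidden, bias=False)
        self.up = nn.Linear(dim, hidden, bias=False)
        self.down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(F.silu(self.gate(x)) * self.up(x))

# Hypothetical sizes, for illustration only
x = torch.randn(1, 8, 512)
block = nn.Sequential(RMSNorm(512), SwiGLU(512, 1376))
print(block(x).shape)  # torch.Size([1, 8, 512])
```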

Description of the training:

We employed the Decoupled AdamW optimizer with β1 = 0.9, β2 = 0.95, and eps = 1.0e-8. The learning rate was set to 1.0e-5, with a constant schedule and a warmup period of 10 steps during stage 1, and a cosine schedule during stage 2. Weight decay was applied at a rate of 1.0e-6, and gradient clipping was performed with a maximum norm of 1.0. The maximum sequence length was set to 8192, and each batch contained approximately 6 million tokens. Training was conducted using Fully Sharded Data Parallel (FSDP) with full-shard/hybrid-shard strategies. The setup achieved a throughput of 3000 tokens/sec/GPU and a Model FLOPs Utilization (MFU) of 0.59.
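
As a rough illustration, the stage-1 optimizer setup described above might look as follows in plain PyTorch; `model` is a placeholder, and torch.optim.AdamW stands in for the Decoupled AdamW variant:

```python
import torch
from torch import nn
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

model = nn.Linear(1024, 1024)  # placeholder for the actual decoder model

# Hyperparameters quoted above: lr 1.0e-5, betas (0.9, 0.95), eps 1.0e-8,
# weight decay 1.0e-6
optimizer = AdamW(model.parameters(), lr=1.0e-5, betas=(0.9, 0.95),
                  eps=1.0e-8, weight_decay=1.0e-6)

# Stage 1: constant schedule after a 10-step linear warmup
warmup_steps = 10
scheduler = LambdaLR(optimizer,
                     lr_lambda=lambda step: min(1.0, (step + 1) / warmup_steps))

# Each step: backward pass, then gradient clipping at max norm 1.0
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
scheduler.step()
```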

Pretrain data:

Stage 1: massive continual pre-training
- 300B tokens * 0.3 epoch
- Proportion of data in Russian is 85%, as a trade-off between language adaptation and English-language performance
- Styles and topics in Common Crawl (CC) data were downsampled
- Domains in book datasets were balanced
- Proportion of code data was increased

Stage 2: refining the quality of the dataset
- 20B tokens * 3 epochs
- Includes instructional sets of smaller volume
- Advertisements and news were aggressively downsampled
- Instructions and articles were upsampled
- Educational content was balanced
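
For illustration only, domain re-weighting of this kind is often expressed as a sampling-weight table; the domain names and weights below are hypothetical placeholders (only the 85% Russian share is quoted above):

```python
import random

# Hypothetical sampling weights; Russian-language sources total 0.85 to
# match the quoted share, but everything else here is made up
mixture = {
    "russian_web": 0.60,        # CC with styles/topics downsampled
    "russian_books": 0.15,      # domain-balanced book data
    "russian_instructions": 0.10,
    "english_and_code": 0.15,   # kept for English and code performance
}

def sample_domain(rng: random.Random) -> str:
    """Draw the source domain for the next training document."""
    domains, weights = zip(*mixture.items())
    return rng.choices(domains, weights=weights, k=1)[0]

rng = random.Random(0)
print([sample_domain(rng) for _ in range(5)])
```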

Training Details:

-

License:

WTFPL

Strategy, generation and parameters:

1x NVIDIA A100
Context length: 8192
dtype: bfloat16
PyTorch 2.3.1 + Transformers 4.44.0 + CUDA 12.1
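
A minimal sketch of loading the published checkpoint under this setup with Hugging Face Transformers; the prompt and max_new_tokens are illustrative choices, not the submission's actual generation parameters:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AnatoliiPotapov/T-lite-instruct-0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the bf16 setup quoted above
).to("cuda")                     # single A100

# Illustrative prompt; Russian, as the model is Russian-focused
inputs = tokenizer("Привет! Расскажи о себе.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```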

Comments about inference:

🚨 T-lite is designed for further fine-tuning and is not intended as a ready-to-use conversational assistant. Users are advised to exercise caution and are responsible for any additional training and oversight required to ensure the model's responses meet acceptable ethical and safety standards. The responsibility for incorporating this model into industrial or commercial solutions lies entirely with those who choose to deploy it.


Ratings by subcategory

ruHHH (Metric: Accuracy)
Model, team | Honest | Helpful | Harmless
T-lite-0.1 (T-Bank AI) | 0.541 | 0.542 | 0.707

ruMMLU subcategories (Metric: Accuracy), T-lite-0.1 (T-Bank AI):
Anatomy | 0.8
Virology | 0.813
Astronomy | 0.9
Marketing | 0.771
Nutrition | 0.762
Sociology | 1
Management | 0.8
Philosophy | 0.824
Prehistory | 0.8
Human aging | 1
Econometrics | 0.818
Formal logic | 0.9
Global facts | 0.7
Jurisprudence | 0.654
Miscellaneous | 0.5
Moral disputes | 0.5
Business ethics | 0.7
Biology (college) | 0.815
Physics (college) | 0.7
Human sexuality | 1
Moral scenarios | 0.4
World religions | 0.808
Abstract algebra | 0.8
Medicine (college) | 0.765
Machine learning | 0.7
Medical genetics | 0.909
Professional law | 0.688
PR | 0.714
Security studies | 0.9
Chemistry (college) | 0.818
Computer security | 0.6
International law | 0.889
Logical fallacies | 0.6
Politics | 0.8
Clinical knowledge | 0.818
Conceptual physics | 1
Math (college) | 1
Biology (high school) | 0.81
Physics (high school) | 0.5
Chemistry (high school) | 0.6
Geography (high school) | 0.835
Professional medicine | 0.9
Electrical engineering | 0.9
Elementary mathematics | 0.9
Psychology (high school) | 0.938
Statistics (high school) | 0.7
History (high school) | 0.9
Math (high school) | 0.5
Professional accounting | 0.5
Professional psychology | 1
Computer science (college) | 0.5
World history (high school) | 0.813
Macroeconomics | 0.912
Microeconomics | 0.867
Computer science (high school) | 0.333
European history | 0.515
Government and politics | 0.704

ruDetox subcategories (SIM: meaning preservation, FL: fluency, STA: style transfer accuracy; see the sketch below)
Model, team | SIM | FL | STA
T-lite-0.1 (T-Bank AI) | 0.511 | 0.727 | 0.419
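
If, as in the original RUSSE Detox evaluation, the overall J score is the per-sample mean of SIM * FL * STA, then the headline 0.197 above need not equal the product of the three column averages (about 0.156). A sketch with made-up per-sample scores:

```python
import numpy as np

# Hypothetical per-sample scores, illustrative only
sim = np.array([0.6, 0.4, 0.55])
fl = np.array([0.8, 0.6, 0.78])
sta = np.array([0.5, 0.3, 0.45])

j = (sim * fl * sta).mean()                        # per-sample product, then mean
j_of_means = sim.mean() * fl.mean() * sta.mean()   # product of means differs
print(j, j_of_means)
```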

ruEthics subcategories (correlation of the model's answers with each criterion; see the sketch after these tables)

"Correct" question:
Model, team | Virtue | Law | Moral | Justice | Utilitarianism
T-lite-0.1 (T-Bank AI) | -0.076 | -0.091 | -0.092 | -0.071 | -0.11

"Good" question:
Model, team | Virtue | Law | Moral | Justice | Utilitarianism
T-lite-0.1 (T-Bank AI) | -0.187 | -0.213 | -0.193 | -0.185 | -0.153

"Ethical" question:
Model, team | Virtue | Law | Moral | Justice | Utilitarianism
T-lite-0.1 (T-Bank AI) | -0.225 | -0.23 | -0.222 | -0.215 | -0.231
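
If these values are Matthews correlation coefficients (MCC) between the model's binary answers and each ethics criterion, as in MERA's ruEthics task description, they can be computed as below; the labels here are made up, for illustration only:

```python
import numpy as np

def matthews_corrcoef(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """MCC for binary labels in {0, 1}."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return float((tp * tn - fp * fn) / denom) if denom else 0.0

# Illustrative: model's yes/no answers vs. a criterion's labels
answers = np.array([1, 0, 1, 1, 0, 0])
criterion = np.array([0, 0, 1, 0, 1, 0])
print(matthews_corrcoef(criterion, answers))
```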

ruHateSpeech subcategories (Metric: Accuracy)
Model, team | Women | Men | LGBT | Nationalities | Migrants | Other
T-lite-0.1 (T-Bank AI) | 0.731 | 0.743 | 0.706 | 0.649 | 0.571 | 0.803