Task name | Result | Metric |
---|---|---|
LCS | 0.14 | Accuracy |
RCB | 0.511 / 0.418 | Avg. F1 / Accuracy |
USE | 0.05 | Grade Norm |
RWSD | 0.585 | Accuracy |
PARus | 0.858 | Accuracy |
ruTiE | 0.681 | Accuracy |
MultiQ | 0.383 / 0.29 | F1 / EM |
CheGeKa | 0.118 / 0.06 | F1 / EM |
ruModAr | 0.667 | EM |
ruMultiAr | 0.269 | EM |
MathLogicQA | 0.37 | Accuracy |
ruWorldTree | 0.88 / 0.88 | Avg. F1 / Accuracy |
ruOpenBookQA | 0.783 / 0.782 | Avg. F1 / Accuracy |
Task name | Result | Metric |
---|---|---|
BPS | 0.358 | Accuracy |
ruMMLU | 0.759 | Accuracy |
SimpleAr | 0.955 | EM |
ruHumanEval | 0.023 / 0.113 / 0.226 | pass@k |
ruHHH | 0.596 | Accuracy |
ruHateSpeech | 0.732 | Accuracy |
ruDetox | | Overall average score (J) / meaning preservation (SIM) / naturalness (FL) / style transfer accuracy (STA) |
ruEthics | [[-0.076, -0.091, -0.092, -0.071, -0.11], | 5 MCC |
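The ruHumanEval row reports three pass@k values, presumably for increasing k. As a reference, the standard unbiased pass@k estimator (used for HumanEval-style benchmarks) can be sketched as follows; the function name is illustrative, and the specific k values used by the benchmark are not stated in this card:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k estimator: probability that at least one of k samples,
    # drawn without replacement from n generations (c of which are correct),
    # passes the tests. Computed as 1 - C(n-c, k) / C(n, k).
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k must
        # contain at least one correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 10 generations of which 3 are correct, `pass_at_k(10, 3, 1)` gives 0.3, matching the intuition that pass@1 is just the fraction of correct samples.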
T-Bank AI
T-lite-0.1
T-lite is a decoder language model with:
- pre-normalization via RMSNorm
- SwiGLU activation function
- rotary positional embeddings (RoPE)
- grouped query attention (GQA)

T-lite was trained in bf16.
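Two of the components above, RMSNorm and SwiGLU, are compact enough to sketch directly. This is a minimal NumPy illustration of the math, not T-lite's actual implementation; the hidden sizes are arbitrary:

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm: rescale activations by their root-mean-square
    # (no mean subtraction, unlike LayerNorm), then apply a learned gain.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return x / rms * weight

def swiglu(x, w_gate, w_up, w_down):
    # SwiGLU feed-forward block: a SiLU-gated elementwise product
    # of two linear projections, followed by a down-projection.
    silu = lambda z: z / (1.0 + np.exp(-z))
    return (silu(x @ w_gate) * (x @ w_up)) @ w_down
```

With `weight` set to ones, `rms_norm` maps any input to activations with unit RMS along the last axis, which is what makes it a drop-in pre-normalization layer.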
We employed the Decoupled AdamW optimizer with β1 = 0.9, β2 = 0.95, and eps = 1.0e-8. The learning rate was set to 1.0e-5, with a constant schedule and a warmup period of 10 steps during stage 1 and a cosine schedule during stage 2. Weight decay was applied at a rate of 1.0e-6, and gradients were clipped to a maximum norm of 1.0. The maximum sequence length was 8192, and each batch contained approximately 6 million tokens. Training used Fully Sharded Data Parallel (FSDP) with full-shard/hybrid-shard strategies. This setup achieved a throughput of 3000 tokens/sec/GPU and a Model FLOPs Utilization (MFU) of 0.59.
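The two learning-rate schedules described above can be sketched in a few lines. This is an illustrative reconstruction: the warmup shape (linear) and the cosine floor (decay to zero) are assumptions, as the card does not specify them:

```python
import math

BASE_LR = 1.0e-5

def lr_stage1(step, warmup_steps=10):
    # Stage 1: warm up over 10 steps (linear warmup assumed),
    # then hold constant at 1e-5.
    if step < warmup_steps:
        return BASE_LR * (step + 1) / warmup_steps
    return BASE_LR

def lr_stage2(step, total_steps, min_lr=0.0):
    # Stage 2: cosine decay from 1e-5 toward min_lr
    # (decay to zero assumed) over the whole stage.
    progress = step / max(1, total_steps)
    return min_lr + 0.5 * (BASE_LR - min_lr) * (1.0 + math.cos(math.pi * progress))
```

In PyTorch these would typically be wired up via `torch.optim.lr_scheduler.LambdaLR` on top of an `AdamW(betas=(0.9, 0.95), eps=1e-8, weight_decay=1e-6)` optimizer, with `torch.nn.utils.clip_grad_norm_(..., 1.0)` for the gradient clipping.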
Stage 1: massive continual pre-training (300B tokens × 0.3 epoch)
- The proportion of Russian-language data is 85%, a trade-off between language adaptation and English-language performance
- Styles and topics in Common Crawl (CC) data were downsampled
- Domains in the book datasets were balanced
- The proportion of code data was increased

Stage 2: refining the quality of the dataset (20B tokens × 3 epochs)
- Includes smaller-volume instruction sets
- Advertisements and news were aggressively downsampled
- Instructions and articles were upsampled
- Educational content was balanced
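Under one reading of the figures above (a 300B-token corpus of which 0.3 of an epoch is seen, and a 20B-token corpus repeated 3 times), the effective token budget works out as follows; the step-count estimate assumes the stated ~6M-token batch size is constant:

```python
# Effective tokens seen per stage (assumed reading of "tokens * epochs")
stage1_tokens = 300e9 * 0.3   # 300B corpus, 0.3 epoch -> 90B tokens seen
stage2_tokens = 20e9 * 3      # 20B corpus, 3 epochs   -> 60B tokens seen
total_tokens = stage1_tokens + stage2_tokens  # 150B tokens overall

# Rough optimizer-step estimate at ~6M tokens per batch
steps_stage1 = stage1_tokens / 6e6  # ~15,000 steps
steps_stage2 = stage2_tokens / 6e6  # ~10,000 steps
```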
-
WTFPL
1 Nvidia A100 · Context length: 8192 · dtype: bfloat16 · PyTorch 2.3.1 + Transformers 4.44.0 + CUDA 12.1
🚨 T-lite is designed for further fine-tuning and is not intended as a ready-to-use conversational assistant. Users are advised to exercise caution and are responsible for any additional training and oversight required to ensure the model's responses meet acceptable ethical and safety standards. The responsibility for incorporating this model into industrial or commercial solutions lies entirely with those who choose to deploy it.