News

24 Sep 2025

AI Alliance Launches Dynamic SWE-MERA Benchmark for Evaluating Code Models

The AI Alliance has expanded its benchmark lineup with SWE-MERA, a dynamic benchmark designed for comprehensive evaluation of coding models on tasks that approximate real-world development conditions. SWE-MERA was created through a collaboration among leading Russian AI teams: MWS AI (part of MTS Web Services), Sber, and ITMO University.

18 Jul 2025

The AI Alliance Russia launches MERA Code: A Unified Framework for Evaluating Code Generation Across Tasks

04 Jun 2025

The AI Alliance Russia launches MERA Industrial: A New Standard for Assessing Industry LLMs to Solve Business Problems

The AI Alliance Russia has announced the launch of a new MERA section, MERA Industrial, a benchmark for evaluating large language models (LLMs) on business tasks across various industries.