Modern LLM Code Benchmark

[Leaderboard table: rank, model name, model size, team, total overall score, private total score, and per-task scores for each submission.]

The new standard for independent evaluation of models

Quick and accurate evaluation of models in a couple of steps

The expert approach

The methodology was created by industry and academic experts

Multi-task and multilingual

A variety of code evaluation tasks, from code review to unit testing, across 8 programming languages

Accessibility

Open access to the source code, fixed prompts, and launch parameters
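As a rough illustration of what fixed prompts and launch parameters mean in practice, the following Python sketch pins a prompt template and decoding settings so that every model is queried identically. The template text, parameter names, and values are assumptions for illustration, not the benchmark's actual configuration.

# Hypothetical illustration of "fixed prompts and launch parameters" for a
# reproducible evaluation run; names and values are illustrative assumptions.
GENERATION_PARAMS = {
    "temperature": 0.0,      # deterministic decoding for reproducibility
    "top_p": 1.0,
    "max_new_tokens": 1024,
    "stop": ["```"],         # stop at the end of the generated code block
}

PROMPT_TEMPLATE = (
    "You are reviewing the following {language} code.\n"
    "{code}\n"
    "Write unit tests that cover its behavior."
)

def build_prompt(language: str, code: str) -> str:
    """Fill the fixed template so every model receives an identical prompt."""
    return PROMPT_TEMPLATE.format(language=language, code=code)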

Partners and participants

Comprehensive expertise for your solutions

The approach combines quantitative metrics and qualitative analysis, making it possible to identify deviations, limits of generalization, and potential sources of error at different stages
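For the quantitative side, code benchmarks commonly report pass@k, the probability that at least one of k sampled generations passes a task's tests. The sketch below shows the standard unbiased estimator of this quantity; treating it as representative of the benchmark's metrics is an assumption, and the actual scoring may differ.

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k samples
    drawn from n generations (of which c pass the tests) is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 generations per problem, 3 passed the unit tests, k = 1
print(pass_at_k(n=10, c=3, k=1))  # 0.3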

Independent leaderboard for evaluating modern models

  • Comparison of the latest frontier AI models
  • Identifying the best models in specific domains and areas of knowledge
  • A useful tool for developers to analyze and select the optimal model for their needs

Tasks for any level of expertise

A catalog of coding problems with detailed information about each test and how it was created
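To make the idea of a task catalog concrete, here is a minimal sketch of what a catalog entry could hold; the class and field names are hypothetical and do not reflect the benchmark's actual schema.

from dataclasses import dataclass, field

# Hypothetical shape of a catalog entry; fields are illustrative assumptions.
@dataclass
class TaskEntry:
    task_id: str                       # stable identifier used in submissions
    title: str                         # human-readable task name
    languages: list[str] = field(default_factory=list)  # e.g. ["python", "java"]
    description: str = ""              # what the model is asked to do
    metric: str = "pass@1"             # how outputs are scored
    source: str = ""                   # how the test set was created

catalog = [
    TaskEntry(task_id="unit_tests_py", title="Unit test generation",
              languages=["python"], metric="pass@1"),
]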

Manage submissions in your personal account

  • Quick registration
  • All active submissions at hand
  • Detailed assessment results for each task

A Transparent Methodology for Testing Generative Models

Read the detailed description of the benchmark creation methodology

Evaluate models in minutes, not weeks

Upload submissions, track results, and compare models in one place

Uniting leaders for the future of technology

The Alliance for Artificial Intelligence is a unique organization created to unite the efforts of leading technology companies, researchers, and experts. Our mission is to accelerate the development and adoption of artificial intelligence in key areas such as education, science, and business.

Learn More About The Alliance

18 Jul 2025

The AI Alliance Russia launches MERA Code — the first open benchmark for evaluating code generation across tasks

The AI Alliance Russia launches MERA Code: A Unified Framework for Evaluating Code Generation Across Tasks

04 Jun 2025

The AI Alliance Russia launches MERA Industrial: A New Standard for Assessing Industry LLMs to Solve Business Problems

The AI Alliance Russia has announced the launch of a new MERA section, MERA Industrial, a unique benchmark for assessing large language models (LLMs) in various industries.