
Lingueo

EdTech | LMS

Building an adaptive AI engine that evaluates language proficiency across 15+ languages, from A1 to C2.

Lingueo runs LILATE, a certification standard for language proficiency used by schools and institutions across France. But traditional testing was static, time-consuming, and couldn't adapt to each candidate's level in real time. They needed AI that could generate exercises, evaluate answers, and adjust difficulty dynamically.

Rakam built ELATE, the adaptive test engine at the core of LILATE, powering 8 exercise types with multimodal capabilities (text-to-speech, speech recognition), CEFR-aligned evaluation, and intelligent time management. Over 2+ years, the system has delivered hundreds of tests across 15+ languages.

"Rakam transformed our approach to language learning with personalised AI solutions. Our courses are now more interactive and better adapted to each student."

Guillaume Le Dieu de Ville

Cofounder, Lingueo

15+

Languages supported

Hundreds

Tests delivered

2+ yrs

Ongoing partnership

A1-C2

Full CEFR range

Business

Business Impact

ELATE powers the LILATE certification, which is Lingueo's primary revenue driver. The AI engine transformed a manual, time-intensive testing process into a fully automated, scalable certification pipeline that handles 15+ languages without additional human evaluators.

The system is fully AI Act compliant, ensuring Lingueo's certification remains valid under upcoming European regulations. The 2+ year partnership continues to expand with new exercise types and evaluation models.

Product

What We Built

ELATE Adaptive Test Engine

The core engine that generates, delivers, and evaluates language proficiency tests in real time. Adapts difficulty dynamically based on candidate performance, covering 8 distinct exercise types across all CEFR levels from A1 to C2.
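To make the adaptive behaviour concrete, here is a minimal sketch of difficulty adjustment: the candidate steps up or down the CEFR scale based on rolling accuracy over recent answers. The class and method names (`AdaptiveLevel`, `record_answer`) and the thresholds are illustrative assumptions, not Lingueo's actual engine.

```python
# Hypothetical sketch: step the candidate up or down the CEFR scale
# based on rolling accuracy over the last few answers.
from collections import deque

CEFR = ["A1", "A2", "B1", "B2", "C1", "C2"]

class AdaptiveLevel:
    def __init__(self, start="B1", window=4, up=0.75, down=0.4):
        self.idx = CEFR.index(start)
        self.recent = deque(maxlen=window)   # last `window` results
        self.up, self.down = up, down        # accuracy thresholds (assumed)

    def record_answer(self, correct: bool) -> str:
        """Record one result; return the level for the next exercise."""
        self.recent.append(1 if correct else 0)
        if len(self.recent) == self.recent.maxlen:
            accuracy = sum(self.recent) / len(self.recent)
            if accuracy >= self.up and self.idx < len(CEFR) - 1:
                self.idx += 1                # promote on a strong streak
                self.recent.clear()          # restart window at new level
            elif accuracy <= self.down and self.idx > 0:
                self.idx -= 1                # demote on a weak streak
                self.recent.clear()
        return CEFR[self.idx]

level = AdaptiveLevel(start="B1")
for ok in [True, True, True, True]:          # strong streak at B1
    current = level.record_answer(ok)
print(current)  # climbs to "B2"
```

Clearing the window after each level change means the candidate is re-measured from scratch at the new difficulty, which keeps single lucky streaks from skipping levels.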

Multimodal Capabilities (TTS / ASR)

Text-to-speech and automatic speech recognition integrated into exercises, enabling oral comprehension and production testing. Candidates hear native-quality audio and respond verbally, with AI evaluating pronunciation and fluency.

CEFR Evaluation v3 (LLM-as-Judge)

Third-generation evaluation system using LLM-as-judge architecture for nuanced, consistent CEFR-level grading. Evaluates grammar, vocabulary, coherence, and fluency with human-level accuracy across all supported languages.
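A hedged sketch of the LLM-as-judge grading step: the judge model returns per-criterion scores as JSON, which are validated and mapped to a CEFR band. The prompt wording, criterion list, and score-to-CEFR cutoffs are assumptions for illustration, not Lingueo's actual rubric; `reply` stands in for a real GPT-4o response.

```python
# Illustrative LLM-as-judge grading: parse a JSON reply from the judge
# model and map the mean 0-5 score onto the six CEFR bands.
import json

CRITERIA = ["grammar", "vocabulary", "coherence", "fluency"]

JUDGE_PROMPT = (  # assumed prompt shape, not the production prompt
    "You are a CEFR examiner. Score the answer 0-5 on each criterion "
    f"({', '.join(CRITERIA)}) and reply with JSON only."
)

def parse_judgment(raw: str) -> dict:
    """Validate the judge's JSON reply; every criterion must be present."""
    scores = json.loads(raw)
    missing = [c for c in CRITERIA if c not in scores]
    if missing:
        raise ValueError(f"judge reply missing criteria: {missing}")
    return {c: float(scores[c]) for c in CRITERIA}

def to_cefr(scores: dict) -> str:
    """Map a mean 0-5 score onto CEFR bands (illustrative cutoffs)."""
    mean = sum(scores.values()) / len(scores)
    bands = ["A1", "A2", "B1", "B2", "C1", "C2"]
    return bands[min(int(mean * 6 / 5), 5)]  # 0-5 mean -> band index 0-5

reply = '{"grammar": 4, "vocabulary": 4, "coherence": 3, "fluency": 4}'
print(to_cefr(parse_judgment(reply)))  # -> C1
```

Forcing structured JSON out of the judge and validating it before scoring is what makes grading consistent across 15+ languages without per-language fine-tuning: the rubric, not the model's free text, is the contract.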

Time Management & Pre-Generation

Intelligent time allocation per exercise type and pre-generation of test content to ensure zero latency during examinations. Candidates experience a seamless, responsive testing flow even with complex AI-generated exercises.
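The pre-generation idea above can be sketched as a per-(language, level, exercise type) buffer that is topped up ahead of time, so the candidate never waits on the model mid-test. Buffer size, class and method names are assumptions; in production the refill would run as an async task rather than inline.

```python
# Sketch of exercise pre-generation: keep a small buffer of ready
# exercises per (language, level, type) so serving has zero model latency.
from collections import defaultdict, deque

class ExerciseBuffer:
    def __init__(self, generator, target=3):
        self.generator = generator            # callable producing one exercise
        self.target = target                  # exercises kept ready per key
        self.buffers = defaultdict(deque)

    def refill(self, language: str, level: str, kind: str) -> None:
        """Top the buffer up to `target`; in production this runs async."""
        buf = self.buffers[(language, level, kind)]
        while len(buf) < self.target:
            buf.append(self.generator(language, level, kind))

    def next_exercise(self, language: str, level: str, kind: str):
        """Serve a pre-generated exercise immediately."""
        buf = self.buffers[(language, level, kind)]
        if not buf:                           # cold start: generate inline
            self.refill(language, level, kind)
        return buf.popleft()

def fake_generator(language, level, kind):   # stand-in for a GPT-4o call
    return {"language": language, "level": level, "type": kind}

buffer = ExerciseBuffer(fake_generator)
buffer.refill("spanish", "B1", "gap_fill")
exercise = buffer.next_exercise("spanish", "B1", "gap_fill")
print(exercise["level"])  # B1
```

Because the adaptive engine knows which levels a candidate can move to next, only the adjacent levels need warm buffers, keeping pre-generation cost bounded.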

Technical

Technical Architecture

The system is built on Django with GPT-4o as the primary language model, handling exercise generation, evaluation, and adaptive logic. Celery and Redis manage asynchronous task queues for pre-generation and evaluation pipelines.

The LLM-as-judge evaluation pattern ensures consistent grading across languages without per-language fine-tuning. Observability via Logfire and Zabbix provides real-time monitoring of test delivery and model performance.
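The shape of that async pipeline, generation and evaluation queued off the request path, can be sketched as follows. To keep the example self-contained, stdlib `concurrent.futures` stands in for Celery workers backed by Redis; the task names and payloads are assumptions.

```python
# Sketch of the pipeline shape: generate a batch of exercises off the
# request path, then queue evaluation when answers arrive. In production
# these would be Celery tasks on a Redis broker; ThreadPoolExecutor is a
# self-contained stand-in for illustration only.
from concurrent.futures import ThreadPoolExecutor

def generate_exercise(language: str, level: str) -> dict:
    # In production: a Celery task calling GPT-4o.
    return {"language": language, "level": level, "prompt": "Fill the gap ..."}

def evaluate_answer(exercise: dict, answer: str) -> dict:
    # In production: a Celery task running the LLM-as-judge pass.
    return {"exercise": exercise["prompt"], "answer": answer,
            "passed": bool(answer.strip())}

with ThreadPoolExecutor(max_workers=4) as pool:
    # Pre-generate a batch of exercises concurrently...
    futures = [pool.submit(generate_exercise, "german", "B2") for _ in range(3)]
    exercises = [f.result() for f in futures]
    # ...then queue evaluation once an answer arrives.
    verdict = pool.submit(evaluate_answer, exercises[0], "die Antwort").result()

print(verdict["passed"])  # True
```

Keeping model calls behind a queue is also what makes the Logfire/Zabbix monitoring described above straightforward: each task is a discrete, observable unit with its own latency and failure rate.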

# Stack

Django + GPT-4o (core engine)

Celery / Redis (async tasks)

Kubernetes OVH (orchestration)

LLM-as-judge (CEFR evaluation)

Logfire (observability)

Zabbix (infrastructure monitoring)

# Capabilities

8 exercise types

TTS + ASR (multimodal)

15+ languages supported

CEFR A1-C2 full range
