{"id":2296,"date":"2026-02-18T16:58:38","date_gmt":"2026-02-18T15:58:38","guid":{"rendered":"https:\/\/ecoles-cea-edf-inria.fr\/?page_id=2296"},"modified":"2026-04-01T14:52:31","modified_gmt":"2026-04-01T12:52:31","slug":"ecole-informatique-de-2026","status":"publish","type":"page","link":"https:\/\/ecoles-cea-edf-inria.fr\/en\/schools\/ecole-informatique-de-2026\/","title":{"rendered":"Informatics School 2026"},"content":{"rendered":"<div id=\"headerredbloc\">Ecole informatique de 2026 : contenu, programmme, dates, informations pratiques.<\/div>\n<div id=\"initiative\">\n<h1 class=\"textebleu\">Ecole d\u2019\u00e9t\u00e9 d\u2019informatique 2026<\/h1>\n<h2>Fondements des LLMs et applications \u00e0 la programmation<\/h2>\n<div style=\"width: 65%; padding: 0 10px 0 0; float: left;\">\n<h4><span style=\"color: #ff0000;\">Intervenant<\/span><\/h4>\n<p>Marc Lelarge (Inria), Nathana\u00ebl Fijalkow, Guillaume Baudart (Inria), Xavier Hinaut (Inria), Philippe Suignard (EDF), Pierre-Yves Oudeyer (Inria), Yannis Bendi-Ouis (Inria), Th\u00e9o Stoskopf (ENS Lyon\/LIP), R\u00e9mi Louf (.txt)<\/p>\n<h4><span style=\"color: #ff0000;\">Contexte scientifique<\/span><\/h4>\n<p>&#8211; Architecture \/ Transformer \/ Attention<br \/>\n&#8211; Reasoning \/ RLHF<br \/>\n&#8211; Agent \/ RAG \/ Prompt augmentation \/ Tooling<br \/>\n&#8211; Quantization \/ Constrained Inference<br \/>\n&#8211; Code generation<br \/>\n&#8211; Test generation<br \/>\n&#8211; Proof generation<br \/>\n&#8211; S\u00e9minaires \/ Retour d&#8217;exp\u00e9rience<\/p>\n<h4><span style=\"color: #ff0000;\">Programme pr\u00e9liminaire<\/span><\/h4>\n<p>Monday 9-12:<br \/>\nIntroduction to Transformers<br \/>\n[Nathana\u00ebl Fijalkow](https:\/\/games-automata-play.com\/) (CNRS)<br \/>\nThis session introduces the Transformer architecture and the self-attention mechanism that revolutionized NLP. We will explore how these models process sequential data, providing the necessary foundation for understanding how LLMs handle both natural languages and structured programming languages.<\/p>\n<p>Monday 15-18:<br \/>\nData cleaning and pre-training<br \/>\n[Wissam Antoun](https:\/\/wissamantoun.com\/) (INRIA Paris)<br \/>\nWe examine the lifecycle of a model before it is &#8220;ready,&#8221; from massive-scale data collection to self-supervised learning. The course covers the challenges of data quality and diversity, including the specific role that source code plays in enhancing a model\u2019s logical reasoning capabilities.<\/p>\n<p>Tuesday 9-12:<br \/>\nPost-training<br \/>\n[Nathana\u00ebl Fijalkow](https:\/\/games-automata-play.com\/) (CNRS)<br \/>\nThis lecture covers the transition from a raw base model to a functional assistant through Supervised Fine-Tuning (SFT) and Reinforcement Learning. We will discuss how these techniques are used to align models with human intent, specifically for following complex technical instructions.<\/p>\n<p>Tuesday 15-18:<br \/>\nLLM and AI Safety<br \/>\n[Wissam Antoun](https:\/\/wissamantoun.com\/) (INRIA Paris)<br \/>\nThis session explores the ethical and technical safeguards required for deploying LLMs. 
**Monday 15-18: Data cleaning and pre-training**
[Wissam Antoun](https://wissamantoun.com/) (INRIA Paris)

We examine the lifecycle of a model before it is "ready", from massive-scale data collection to self-supervised learning. The course covers the challenges of data quality and diversity, including the specific role that source code plays in enhancing a model's logical reasoning capabilities.

**Tuesday 9-12: Post-training**
[Nathanaël Fijalkow](https://games-automata-play.com/) (CNRS)

This lecture covers the transition from a raw base model to a functional assistant through Supervised Fine-Tuning (SFT) and Reinforcement Learning. We will discuss how these techniques are used to align models with human intent, specifically for following complex technical instructions.

**Tuesday 15-18: LLM and AI Safety**
[Wissam Antoun](https://wissamantoun.com/) (INRIA Paris)

This session explores the ethical and technical safeguards required for deploying LLMs. We will discuss alignment, bias mitigation, and "red teaming", with a focus on ensuring the reliability and security of model outputs in sensitive contexts like software development.

**Wednesday 9-12: State Space Models: Mathematical Foundations and Computational Efficiency**
[Yannis Bendi-Ouis](https://www.naowak.fr/) (INRIA Bordeaux)

Can we combine the parallel training of Transformers with the fast inference of RNNs? That is the promise of modern State Space Models (SSMs). This course offers a deep dive into SSMs, a powerful alternative to attention-based architectures. We will explore how these models bridge continuous dynamical systems and discrete deep learning. Through the study of emblematic models such as S4, H3, and Mamba, we will detail the key mathematical mechanisms, notably diagonalization and selective discretization, that enable fast inference and optimal memory management. The goal is to provide the theoretical tools to understand why and how these architectures are redefining the state of the art in sequence modeling.
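As a pointer to what "discretization" means here, the following is a minimal sketch, not Mamba itself: a diagonal, time-invariant linear SSM x'(t) = A x(t) + B u(t), y(t) = C x(t), discretized by zero-order hold and unrolled as a recurrence. All dimensions and parameter values are illustrative assumptions.

```python
import numpy as np

def discretize(A_diag, B, dt):
    """Zero-order-hold discretization of x'(t) = A x(t) + B u(t), with diagonal A."""
    Abar = np.exp(dt * A_diag)                    # matrix exponential is elementwise here
    Bbar = ((Abar - 1.0) / A_diag)[:, None] * B   # A^{-1} (exp(dt*A) - I) B, per state
    return Abar, Bbar

def ssm_scan(A_diag, B, C, u, dt=0.1):
    """Recurrent (inference) mode: x_k = Abar * x_{k-1} + Bbar u_k, y_k = C x_k."""
    Abar, Bbar = discretize(A_diag, B, dt)
    x = np.zeros_like(A_diag)                     # constant-size state, independent of k
    ys = []
    for u_k in u:
        x = Abar * x + (Bbar @ np.atleast_1d(u_k))
        ys.append(C @ x)
    return np.array(ys)

A_diag = -np.linspace(0.5, 2.0, 4)                # stable diagonal state matrix
B = np.ones((4, 1)); C = np.ones(4) / 4
y = ssm_scan(A_diag, B, C, np.sin(np.linspace(0, 6, 50)))
print(y.shape)                                    # (50,): one output per input step
```

S4 trains this same system in parallel as a long convolution, and Mamba additionally makes the step size and the B, C parameters functions of the input (the "selective" part); the constant-size recurrent state above is what gives these models their fast inference.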
**Thursday 9-12: Training Agents**
[Laetitia Teodorescu](https://dblp.org/pid/261/3333.html) (Adaptive-ML)

Moving beyond simple text generation, this session introduces LLM-based agents that can use tools and interact with environments. Participants will learn how models are trained to plan and execute multi-step tasks, such as navigating a codebase or interacting with a compiler.

**Thursday 15-18: World Models**
[Václav Volhejn](https://vvolhejn.com/) (Kyutai)

This lecture delves into the concept of "World Models", the internal representations LLMs build of the processes they describe. We will discuss how understanding the underlying "rules" of a system (physical or logical) allows models to predict and simulate complex outcomes.

**Friday 9-12: LLMs for test generation**
[Xavier Blanc](https://www.labri.fr/perso/xblanc/) (Université de Bordeaux)

This session focuses on the practical application of LLMs to software quality assurance. We will study how models can be leveraged to automatically generate unit tests, identify edge cases, and assist in formal verification, bridging the gap between natural language requirements and executable code.

**Friday 15-18: Mechanistic interpretability**
[David Louapre](https://scienceetonnante.com/) (Hugging Face)

The final session asks: how does a model think? By "opening the black box", we explore methods to reverse-engineer the neurons and circuits of a Transformer. This understanding is crucial for verifying the internal logic of models used in high-stakes programming and mathematical proof tasks.

#### Practical information

**Date**
June 15-19, 2026

**Venue**
CNRS center La Vieille Perrotine.

**Registration**
To participate, please fill in the [registration form](https://ecoles-cea-edf-inria.fr/files/2026/02/registration_form_computer_science_2026.doc) (Word document) and send it before **May 15, 2026** to [Régis Vizet](mailto:regis.vizet@cea.fr) and [Tifenn Baril-Graffin](mailto:tifenn.graffin@inria.fr).

**Prerequisites**

**Contacts**

Schools secretariat:
[Régis Vizet](mailto:regis.vizet@cea.fr) - CEA
[Tifenn Baril-Graffin](mailto:tifenn.graffin@inria.fr) - INRIA
Tel: 01 69 26 47 45
Fax: 01 69 26 70 05

Coordinators of the 2026 computer science school:
[Nathanaël Fijalkow](mailto:nathanael.fijalkow@gmail.com)
[Marc Lelarge](mailto:marc.lelarge@inria.fr)
[Philippe Suignard](mailto:philippe.suignard@edf.fr)
[Guillaume Baudart](mailto:guillaume.baudart@inria.fr)
[Xavier Hinaut](mailto:xavier.hinaut@inria.fr)