{"id":2298,"date":"2026-02-18T16:48:58","date_gmt":"2026-02-18T15:48:58","guid":{"rendered":"https:\/\/ecoles-cea-edf-inria.fr\/?page_id=2298"},"modified":"2026-04-09T14:55:37","modified_gmt":"2026-04-09T12:55:37","slug":"ecole-analyse-numerique-de-2026","status":"publish","type":"page","link":"https:\/\/ecoles-cea-edf-inria.fr\/en\/schools\/ecole-analyse-numerique-de-2026\/","title":{"rendered":"Numerical Analysis School 2026"},"content":{"rendered":"<!-- Configuration MathJax --><br \/>\n<script><br \/>\nwindow.MathJax = {<br \/>\n  tex: {<br \/>\n    inlineMath: [['$', '$'], ['\\\\(', '\\\\)']],<br \/>\n    displayMath: [['$$', '$$'], ['\\\\[', '\\\\]']]\n  },<br \/>\n  options: {<br \/>\n    skipHtmlTags: ['script','noscript','style','textarea','pre','code']\n  }<br \/>\n};<br \/>\n<\/script><\/p>\n<p><!-- Script MathJax v3 --><br \/>\n<script id=\"MathJax-script\" async\n  src=\"https:\/\/cdn.jsdelivr.net\/npm\/mathjax@3\/es5\/tex-mml-chtml.js\"><br \/>\n<\/script><\/p>\n<div id=\"headerredbloc\">\u00c9cole analyse num\u00e9rique de 2026: contenu, programme, dates, informations pratiques.<\/div>\n<div id=\"initiative\">\n<h1 class=\"textebleu\">\u00c9cole d\u2019\u00e9t\u00e9 d\u2019analyse num\u00e9rique 2026<\/h1>\n<h2>Optimisation distribu\u00e9e. Application aux syst\u00e8mes \u00e9nerg\u00e9tiques de demain.<\/h2>\n<div style=\"width: 65%; padding: 0 10px 0 0; float: left;\">\n<h4><span style=\"color: #ff0000;\">Contexte scientifique<\/span><\/h4>\n<h3><span style=\"color: #ff0000;\">Contenu<\/span><\/h3>\n<h3><span style=\"color: #ff0000;\">Orateurs\/-trices<\/span><\/h3>\n<p>\u2022 Francis Bach (Inria)<br \/>\n\u2022 Claire Monteleoni (Inria)<br \/>\n\u2022 Gilles Stoltz (CNRS)<br \/>\n\u2022 Michael Jordan (Univ. 
of California, Berkeley)<br \/>\n\u2022 Olivier Wintenberger (LPSM\/Sorbonne Universit\u00e9)<br \/>\n\u2022 Emmanuel Rachelson (ISAE-SUPAERO)<br \/>\n\u2022 Vianney Perchet (ENSAE\/CREST)<br \/>\n\u2022 Paul Strang (EDF)<br \/>\n\u2022 Laurent Pfeiffer (Inria)<br \/>\n\u2022 Benjamin Dubois-Taine (Inria)<\/p>\n<h4><span style=\"color: #ff0000;\">Programme pr\u00e9visionnel<\/span><\/h4>\n<p><!-- Programme --><\/p>\n<ul>\n<li><u>Lundi 22 juin 2026<\/u>\n<ul>\n<li><strong>9h-9h45<\/strong>, Ouverture<\/li>\n<li><strong>10h-11h<\/strong>, Bandits (Stoltz, Br\u00e9g\u00e8re)<\/li>\n<li><strong>11h00-11h15<\/strong>, Coffee Break<\/li>\n<li><strong>11h15-12h15<\/strong>, Bandits (Stoltz, Br\u00e9g\u00e8re)<\/li>\n<li><strong>12h15-14h<\/strong>, Lunch Break<\/li>\n<li><strong>14h-16h<\/strong>, Bandits (Stoltz, Br\u00e9g\u00e8re)<\/li>\n<li><strong>16h-16h15<\/strong>, Break<\/li>\n<li><strong>16h15-17h<\/strong>, Bandits (Stoltz, Br\u00e9g\u00e8re)<\/li>\n<li><strong>17h-18h<\/strong>, Bandits Recherche (Perchet)<\/li>\n<\/ul>\n<\/li>\n<li><u>Mardi 23 juin 2026<\/u>\n<ul>\n<li><strong>9h-(15min Break)-12h<\/strong>, Optimisation (Bach)<\/li>\n<li><strong>12h15-14h<\/strong>, Lunch Break<\/li>\n<li><strong>14h-16h<\/strong>, Optimisation TP (Dubois-Taine)<\/li>\n<li><strong>16h-16h30<\/strong>, Break<\/li>\n<li><strong>16h30-17h30<\/strong>, Online Optimisation (Wintenberger)<\/li>\n<\/ul>\n<\/li>\n<li><u>Mercredi 24 juin 2026<\/u>\n<ul>\n<li><strong>9h-(15min Break)-11h45<\/strong>, Climat (Monteleoni)<\/li>\n<li><strong>12h-13h<\/strong>, AI for economics (Jordan)<\/li>\n<li><strong>13h-17h30<\/strong>, Social Activity<\/li>\n<\/ul>\n<\/li>\n<li><u>Jeudi 25 juin 2026<\/u>\n<ul>\n<li><strong>9h-(15min Break)-12h15<\/strong>, RL (Rachelson)<\/li>\n<li><strong>12h15-14h<\/strong>, Lunch Break<\/li>\n<li><strong>14h-16h<\/strong>, RL TP (Rachelson)<\/li>\n<li><strong>16h-16h30<\/strong>, Break<\/li>\n<li><strong>16h30-17h30<\/strong>, RL Recherche 
(Strang)<\/li>\n<\/ul>\n<\/li>\n<li><u>Vendredi 26 juin 2026<\/u>\n<ul>\n<li><strong>9h-(15min Break)-12h15<\/strong>, Optim distribu\u00e9e (Pfeiffer)<\/li>\n<li><strong>12h15-14h<\/strong>, Lunch Break<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><!-- Programme --><\/p>\n<h4><span style=\"color: #ff0000;\">Abstracts<\/span><\/h4>\n<p><strong>Olivier Wintenberger<\/strong> &#8211; Stochastic Online Convex Optimization with Applications to Adaptive Forecasting<br \/>\nWe propose a general framework for stochastic online convex optimization, embedding aggregation of experts and probabilistic forecasting.<br \/>\nCertain algorithms, including Bernstein Online Aggregation and Online Newton Steps, achieve the optimal rates in stochastic environments.<br \/>\nWe apply our framework to calibrating parametric probabilistic forecasters of non-stationary conditionally sub-Gaussian time series.<br \/>\nWe illustrate the benefit of our approach on real-world data such as electricity load and temperatures.<\/p>\n<p><strong>Paul Strang<\/strong> &#8211; Model-based reinforcement learning for exact combinatorial optimization<br \/>\nModern mixed-integer linear programming (MILP) solvers are built upon the branch-and-bound (B&amp;B) paradigm. Since the 1980s, considerable research and engineering effort has gone into refining these solvers, resulting in highly optimized systems driven by expert-designed heuristics tuned over large benchmarks. However, in operational settings where structurally similar problems are solved repeatedly, adapting solver heuristics to the distribution of encountered MILPs can lead to substantial gains in efficiency, beyond what static, hand-crafted heuristics can offer. Recent research has thus turned to machine learning (ML) to design efficient, data-driven B&amp;B heuristics tailored to specific instance distributions. 
Here, we propose adapting recent reinforcement learning (RL) approaches, known for having achieved breakthroughs in complex combinatorial games such as chess or Go, to the setting of exact combinatorial optimization. Drawing on the MuZero architecture (Schrittwieser et al.), we introduce Plan-and-Branch-and-Bound (PlanB&amp;B), a model-based reinforcement learning (MBRL) agent that leverages an internal model of the B&amp;B dynamics to learn improved variable selection strategies.<\/p>\n<p><strong>Gilles Stoltz<\/strong> &#8211; A Tutorial on Stochastic Bandits, with Applications to the Management of Electricity Consumption<br \/>\nAbstract: Stochastic bandits form a simple case of online learning, where at each round, a learner must select an action and then observes a stochastic reward depending on this action, but does not observe the rewards that would have been achieved with the other actions available. The rewards are generated according to some unknown stochastic model. The goal of the learner is to maximize the cumulative reward or, equivalently, to minimize a quantity called the regret, defined as the difference between the cumulative reward achieved by the best action and the cumulative reward actually achieved.<br \/>\nThe first part of this series of lectures is devoted to vanilla K-armed bandits, which correspond to facing K actions, each associated with a fixed but unknown distribution. We review simple strategies like ETC (explore then commit) and UCB (upper confidence bound): we formally define them, state regret bounds, and provide proofs thereof. The UCB strategy (Auer, Cesa-Bianchi, Fischer, 2002) is an optimistic strategy that builds upper confidence bounds on the expectations of the distributions associated with the arms and picks at each round the action associated with the largest such upper confidence bound. 
Thompson sampling (Thompson, 1933; Chapelle and Li, 2011; Scott, 2010), a Bayesian-sampling-based strategy, will also be studied, if time permits.<br \/>\nThe second part considers an extension where actions lie in a continuous set, typically a subset of \\( \\mathbb{R}^d \\), and where rewards are, in expectation, linear functions of the actions picked. We review the classic LinUCB strategy (Abbasi-Yadkori, Pal, Szepesvari, 2011), which sequentially builds confidence regions over the unknown linear parameter of the reward function, from which it derives confidence regions over reward functions. The actions are again picked in an optimistic fashion, by focusing on the largest values compatible with the confidence regions. We will finally explain how finite-armed contextual bandits, where rewards depend on a (possibly continuous) context and on the arm picked in a finite set, may be handled in a similar way.<br \/>\nThe third and final part presents an adaptation of the linear setting designed to manage electricity consumption by sending tariff incentives to customers (Bregere, Gaillard, Goude, Stoltz, 2019). This setting corresponds to contextual bandits with finitely many actions given by tariff levels, but where instead of picking one tariff the learner rather picks the shares of the population receiving each tariff level. Imposing higher tariffs reduces consumption and offering lower tariffs increases it.<br \/>\nEach part will be a mix of theory and practice, through notebooks in Python. In particular, we will first simulate the vanilla K-armed bandit problem to compare ETC, UCB and possibly other strategies. 
Then, we will consider electrical demand data (gathered by UK Power Networks) in which price incentives were offered, to design a stochastic bandit algorithm for demand-side management.<\/p>\n<p><strong>Vianney Perchet<\/strong> &#8211; Learning to Compete: Auction Design and Bidding Strategies in Energy Markets<br \/>\nAbstract: The transition to renewable energy has made electricity markets more dynamic and competitive than ever. Central to these markets are repeated auctions, where producers must constantly adjust their bids to account for fluctuating demand and the strategic moves of their rivals. This talk explores how we can use &#8220;online learning&#8221; to understand and optimize behavior in these high-stakes environments.<br \/>\nI will discuss two key pillars of our recent research. First, I will present our work from NeurIPS 2024 on uniform-price auctions, where we developed a new mathematical framework that allows bidders to learn optimal strategies much faster by better modeling the available &#8220;bid space.&#8221; Second, I will touch upon our comparative work (with Marius Potfer) examining the two giants of auction design: uniform-price vs. discriminatory (pay-as-bid) auctions. We compare these formats through the lens of &#8220;regret minimization&#8221;\u2014measuring how much a bidder loses by not knowing the future. By bridging theoretical machine learning with the practical realities of energy markets, we provide new insights into which auction structures lead to more efficient outcomes and how participants can best navigate them.<\/p>\n<p><strong>Francis Bach &#8211; <\/strong>Learning theory from first principles<br \/>\nSummary: Data have become ubiquitous in science, engineering, industry, and personal life, leading to the need for automated processing. 
Machine learning is concerned with making predictions from training examples and is used in all of these areas, in small and large problems, with a variety of learning models, ranging from simple linear models to deep neural networks. It has now become an important part of the algorithmic toolbox.<br \/>\nHow can we make sense of these practical successes? Can we extract a few principles to understand current learning methods and guide the design of new techniques for new applications or to adapt to new computational environments? This is precisely the goal of learning theory and of this series of lectures, with a particular eye toward adaptivity to specific structures that make learning faster (such as smoothness of the prediction functions or dependence on low-dimensional subspaces).<\/p>\n<p><strong>Claire Monteleoni &#8211;<\/strong>\u00a0AI for Climate<br \/>\nAbstract: The stunning recent advances in AI content generation rely on cutting-edge, generative deep learning algorithms and architectures trained on massive amounts of text, image, and video data. With different training data, these algorithms and architectures can benefit a variety of applications for addressing climate change. As opposed to text and video, the relevant training data include weather and climate data from observations, reanalyses, and even physical simulations.<br \/>\nUsing AI methods \u2014 especially generative AI \u2014 in climate and weather introduces additional sources of uncertainty beyond those already identified. However, the potential for such methods is great. Generative AI methods can address fundamental tasks including data fusion, interpolation, downscaling, and domain alignment. 
This course will provide a survey of recent work applying such methods to problems including weather forecasting (including extreme events), climate model emulation and scenario generation, and renewable energy planning.<\/p>\n<p><strong>Michael Jordan &#8211;<\/strong>\u00a0Contracts, Uncertainty, and Incentives in Machine Learning Ecosystems<br \/>\nAbstract: Contract theory is the study of economic incentives when parties transact in the presence of private information, and where prior distributions may not be common knowledge. We augment classical contract theory to incorporate a role for learning from data, where the overall goal of the adaptive mechanism is to obtain desired statistical behavior. We consider applications of this framework to problems in principal-agent regulatory mechanisms. We also consider systems in which data is a valued good, and principals and agents are arranged in markets consisting of multiple layers.<\/p>\n<p><strong>Emmanuel Rachelson &#8211;<\/strong>\u00a0A Brief Introduction to Reinforcement Learning<br \/>\nAbstract: Making a sequence of optimal decisions to control a system &#8212; whether winning a video game, balancing a bicycle, or managing the electricity output of a network of power plants &#8212; falls under the domain of optimal control. Reinforcement learning (RL) asks the question: &#8220;Can we learn an optimal control strategy directly through interaction with the system, without prior knowledge of its dynamics or properties?&#8221; This question is rooted in human experience: we do not inspect a video game&#8217;s source code or write down the physics equations of a bicycle, yet we can achieve precise control, based on past experience and trial-and-error interaction with the system.<br \/>\nRL seeks to solve the optimal control problem using interaction data. 
This brings together challenges in optimization (of the control function), exploration (to gather system dynamics data), and estimation and approximation (of the objective and the control function), connecting RL to related areas such as optimal control, online optimization, stochastic approximation, and supervised learning.<br \/>\nThis course introduces the formal foundations of reinforcement learning and its key problems, using modern terminology that provides a direct pathway to deep reinforcement learning methods, which currently dominate the field.<\/p>\n<p><strong>Laurent Pfeiffer &#8211; <\/strong>Decomposition methods<br \/>\nThe course will focus on large-scale optimization problems of the form:<br \/>\n\\[ \\inf_{x_1, \\dots, x_N} \\sum_{i=1}^N f_i(x_i) + g\\Big(\\sum_{i=1}^N A_i x_i\\Big) \\]\nwhere the functions \\( f_i, g \\) are assumed to be convex and valued in \\( \\mathbb{R} \\cup \\{+\\infty\\} \\). As an example of application, one may think of an energy management problem involving \\(N\\) production units and associated decisions \\(x_i\\), coupled through the term \\( g(\\sum_{i=1}^N A_i x_i) \\), modelling for example a constraint on the total production.<br \/>\nThese problems enjoy a separability structure, which is in particular visible at the level of the dual problem, whose associated cost is a sum of \\(N+1\\) functions. This structure allows for the design of decomposition methods: iterative numerical methods that scale well for large values of \\(N\\), insofar as each of their iterations requires \\(N\\) parallelizable operations.<br \/>\nThe course will review and analyze several decomposition methods, relying on various regularity assumptions and oracles on the data functions. They include in particular the Frank-Wolfe algorithm, the dual subgradient algorithm, and the primal-dual hybrid gradient algorithm. 
The course will conclude with a presentation of stochastic variants of these algorithms, involving a sampling of the units at each iteration.<\/p>\n<\/div>\n<div style=\"width: 35%; padding: 0 10px 0 30px; float: left;\">\n<h4><span style=\"color: #ff0000;\">Informations pratiques<\/span><\/h4>\n<h3>Dates<\/h3>\n<p>22 juin &#8211; 26 juin 2026<\/p>\n<h3>Lieu<\/h3>\n<p><a href=\"https:\/\/www.ines-solaire.org\/decouvrir-ines\/\">CEA Liten\/Institut National de l\u2019Energie Solaire\u00a0<\/a><br \/>\nLe Bourget du Lac<\/p>\n<h3>Contacts<\/h3>\n<p><strong>Organisateurs<\/strong><br \/>\nYannig Goude (EDF)<br \/>\nC\u00f4me Bissuel (EDF)<br \/>\nPierre Gaillard (Inria)<br \/>\nMathieu Vall\u00e9e (CEA)<br \/>\nGr\u00e9goire Pichenot (CEA)<br \/>\nEtienne Wurtz (CEA)<\/p>\n<h3>Inscription<\/h3>\n<p>Pour pouvoir participer, merci de remplir le <a href=\"https:\/\/ecoles-cea-edf-inria.fr\/files\/2026\/02\/registration_form_numerical_analysis_2026.doc\">formulaire d&#8217;inscription <img decoding=\"async\" class=\"alignnone size-medium\" src=\"https:\/\/ecoles-cea-edf-inria.fr\/files\/2021\/10\/word-e1635282179335.png\" alt=\"Word icon\" width=\"70\" \/><\/a> et l&#8217;envoyer avant le <strong>22 mai 2026<\/strong> \u00e0 <a href=\"mailto:regis.vizet@cea.fr\">R\u00e9gis Vizet<\/a> et <a href=\"mailto:tifenn.graffin@inria.fr\">Tifenn Baril-Graffin<\/a>.<\/p>\n<h3>Contacts<\/h3>\n<p><strong>Secr\u00e9tariat des \u00e9coles<\/strong><br \/>\n<a href=\"mailto:regis.vizet@cea.fr\">R\u00e9gis Vizet<\/a> &#8211; CEA<br \/>\n<a href=\"mailto:tifenn.graffin@inria.fr\">Tifenn Baril-Graffin<\/a> &#8211; INRIA<br \/>\ntel: 01 69 26 47 45<br \/>\nFax: 01 69 26 70 05<\/p>\n<h3>Se loger<\/h3>\n<p>H\u00f4tels \u00e0 Chamb\u00e9ry (<em>Ligne de bus : <\/em>Gare de <strong>Chamb\u00e9ry<\/strong>\u00a0: prendre la ligne <strong>CHRONO A<\/strong>, r\u00e9seau<strong>\u00a0STAC<\/strong>)<\/p>\n<p><strong>H\u00f4tel Mercure*** <\/strong><em>(en face de la gare SNCF)<\/em><\/p>\n<p>83 
Place de la Gare &#8211; 73000 Chamb\u00e9ry<br \/>\nPhone: +33 (0)4 79 62 10 11<\/p>\n<p>Email\u00a0: <u><a href=\"mailto:h1541@accor.com\" data-ogsc=\"rgb(5, 99, 193)\" data-linkindex=\"0\">h1541@accor.com<\/a><\/u><\/p>\n<p><u><a href=\"http:\/\/www.accorhotels.com\/fr\/hotel-1541-mercure-chambery-centre\/index.shtml\" data-ogsc=\"rgb(0, 112, 192)\" data-linkindex=\"1\" data-auth=\"NotApplicable\">www.accorhotels.com\/fr\/hotel-1541-mercure-chambery-centre\/index.shtml<\/a><\/u><\/p>\n<p><strong>IBIS STYLES HOTEL **<\/strong>\u00a0<em>(50 m de la gare SNCF)<\/em><\/p>\n<p>154 rue Sommeiller &#8211; 73000 CHAMBERY<\/p>\n<p>Phone: +33 (0)4 79 62 37 26<\/p>\n<p>Email: <u><a href=\"mailto:h9541@accor.com\" data-ogsc=\"rgb(5, 99, 193)\" data-linkindex=\"2\">h9541@accor.com<\/a><\/u><\/p>\n<p><u>\u00a0<\/u><\/p>\n<p><strong>H\u00f4tel LE 5\u00a0****<\/strong>\u00a0<em>(700 m de la gare SNCF)<\/em><\/p>\n<p>22 faubourg Reclus &#8211; 73000 CHAMBERY<\/p>\n<p><strong>Phone: +33 (0)4 79 33 51 18<\/strong><\/p>\n<p><strong>Email: <\/strong><strong><u><a href=\"mailto:reception@hotel-chambery.com\" data-ogsc=\"rgb(5, 99, 193)\" data-linkindex=\"3\">reception@hotel-chambery.com<\/a><\/u><\/strong><\/p>\n<p><u><a href=\"http:\/\/www.hotel-chambery.com\/\" data-ogsc=\"rgb(0, 112, 192)\" data-linkindex=\"4\" data-auth=\"NotApplicable\">http:\/\/www.hotel-chambery.com\/<\/a><\/u><\/p>\n<p><u>\u00a0<\/u><\/p>\n<p><strong>Inter H\u00f4tel des PRINCES*** <\/strong><em>(1km de la gare SNCF)<\/em><\/p>\n<p>4 Rue de Boigne &#8211; 73000 Chamb\u00e9ry<\/p>\n<p>Phone\u00a0: +33 (0)4\u00a0 79.33.45.36<\/p>\n<p><strong>Email: <\/strong><strong><u><a href=\"mailto:reception@hoteldesprinces.com\" data-ogsc=\"rgb(5, 99, 193)\" data-linkindex=\"5\">reception@hoteldesprinces.com<\/a><\/u><\/strong><\/p>\n<p><u><a href=\"http:\/\/www.hoteldesprinces.com\/\" data-ogsc=\"rgb(0, 112, 192)\" data-linkindex=\"6\" 
data-auth=\"NotApplicable\">http:\/\/www.hoteldesprinces.com\/<\/a><\/u><\/p>\n<p><u>\u00a0<\/u><\/p>\n<p><strong><u>Kyriad Chamb\u00e9ry centre Carr\u00e9 Curial ** <\/u><\/strong><em>(1.5km de la gare SNCF)<\/em><\/p>\n<p>371 rue de la r\u00e9publique<\/p>\n<p>73000 Chamb\u00e9ry<\/p>\n<p>Phone: +33 (0)8.92.23.48.13<\/p>\n<p>Email : <u><a href=\"mailto:chambery.centre@kyriad.fr\" data-ogsc=\"rgb(5, 99, 193)\" data-linkindex=\"7\">chambery.centre@kyriad.fr<\/a><\/u><\/p>\n<p><u><a href=\"http:\/\/www.kyriad-chambery-centre-curial.fr\/fr\" data-ogsc=\"rgb(5, 99, 193)\" data-linkindex=\"8\" data-auth=\"NotApplicable\">http:\/\/www.kyriad-chambery-centre-curial.fr\/fr<\/a><\/u><\/p>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>\u00c9cole analyse num\u00e9rique de 2026: contenu, programme, dates, informations pratiques. \u00c9cole d\u2019\u00e9t\u00e9 d\u2019analyse num\u00e9rique 2026 Optimisation distribu\u00e9e. Application aux syst\u00e8mes \u00e9nerg\u00e9tiques de demain. 
Contexte scientifique &hellip;<\/p>\n","protected":false},"author":2785,"featured_media":0,"parent":1161,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-2298","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ecoles-cea-edf-inria.fr\/en\/wp-json\/wp\/v2\/pages\/2298","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ecoles-cea-edf-inria.fr\/en\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ecoles-cea-edf-inria.fr\/en\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ecoles-cea-edf-inria.fr\/en\/wp-json\/wp\/v2\/users\/2785"}],"replies":[{"embeddable":true,"href":"https:\/\/ecoles-cea-edf-inria.fr\/en\/wp-json\/wp\/v2\/comments?post=2298"}],"version-history":[{"count":81,"href":"https:\/\/ecoles-cea-edf-inria.fr\/en\/wp-json\/wp\/v2\/pages\/2298\/revisions"}],"predecessor-version":[{"id":2468,"href":"https:\/\/ecoles-cea-edf-inria.fr\/en\/wp-json\/wp\/v2\/pages\/2298\/revisions\/2468"}],"up":[{"embeddable":true,"href":"https:\/\/ecoles-cea-edf-inria.fr\/en\/wp-json\/wp\/v2\/pages\/1161"}],"wp:attachment":[{"href":"https:\/\/ecoles-cea-edf-inria.fr\/en\/wp-json\/wp\/v2\/media?parent=2298"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}