{"id":4863,"date":"2025-10-19T21:14:26","date_gmt":"2025-10-19T21:14:26","guid":{"rendered":"https:\/\/palmer-consulting.com\/large-reasoning-models-lrm\/"},"modified":"2025-10-19T21:14:26","modified_gmt":"2025-10-19T21:14:26","slug":"large-reasoning-models-lrm","status":"publish","type":"post","link":"https:\/\/palmer-consulting.com\/en\/large-reasoning-models-lrm\/","title":{"rendered":"Large Reasoning Models (LRM)"},"content":{"rendered":"<h1 data-start=\"268\" data-end=\"299\">LRM: what is it?<\/h1>\n<p data-start=\"300\" data-end=\"1107\">Large Reasoning Models (LRMs) are a new category of artificial intelligence systems that go far beyond the simple generation of fluent text. Whereas large language models (LLMs) such as GPT-4 or LLaMA focus on word or sentence prediction using training statistics, LRMs are designed for <strong data-start=\"658\" data-end=\"671\">reasoning<\/strong>: multi-step inference, chain-of-thought, tree-of-thought or reasoning graphs.<br data-start=\"852\" data-end=\"855\">They combine LLM-type architectures with explicit inference modules, reflection mechanisms, sometimes heuristic search, and even reinforcement learning to better simulate human or quasi-human reasoning. 
<\/p>\n<p data-start=\"1109\" data-end=\"1148\">In concrete terms, an LRM can, for example:<\/p>\n<ul data-start=\"1149\" data-end=\"1538\">\n<li data-start=\"1149\" data-end=\"1315\">\n<p data-start=\"1151\" data-end=\"1315\">not answer a complex math or logic problem directly, but generate a sequence of &#8220;thinking steps&#8221; before arriving at the answer,<\/p>\n<\/li>\n<li data-start=\"1316\" data-end=\"1410\">\n<p data-start=\"1318\" data-end=\"1410\">explore several possible solutions, compare and verify them, and choose the most appropriate,<\/p>\n<\/li>\n<li data-start=\"1411\" data-end=\"1538\">\n<p data-start=\"1413\" data-end=\"1538\">specialize in structured reasoning tasks, such as medical diagnosis, programming, planning and simulation.<\/p>\n<\/li>\n<\/ul>\n<p data-start=\"1540\" data-end=\"1672\">In short: an LRM is a model trained or fine-tuned for <em data-start=\"1610\" data-end=\"1624\">reasoning<\/em> rather than for simple text prediction.<\/p>\n<hr data-start=\"1674\" data-end=\"1677\">\n<h1 data-start=\"1679\" data-end=\"1724\">Why this distinction (LLM vs LRM)?<\/h1>\n<p data-start=\"1725\" data-end=\"2229\">The rise of LRMs addresses an important limitation of conventional LLMs: even the most powerful are often weak in tasks that require a real chain of reasoning &#8211; multi-step inference, verification, planning or abstract logic. They can generate fluent text, but don&#8217;t &#8220;think&#8221; like someone who questions, explores alternatives and checks their hypotheses.<br data-start=\"2113\" data-end=\"2116\">LRMs seek to fill this gap: they aim to be more robust and more reliable in demanding contexts. 
<\/p>\n<p data-start=\"2231\" data-end=\"2265\">A few points of distinction:<\/p>\n<ul data-start=\"2266\" data-end=\"2831\">\n<li data-start=\"2266\" data-end=\"2391\">\n<p data-start=\"2268\" data-end=\"2391\"><strong data-start=\"2268\" data-end=\"2289\">Core function<\/strong>: LLM \u2192 fluent text generation. LRM \u2192 complex problem solving, reasoning. <\/p>\n<\/li>\n<li data-start=\"2392\" data-end=\"2550\">\n<p data-start=\"2394\" data-end=\"2550\"><strong data-start=\"2394\" data-end=\"2418\">Typical use cases<\/strong>: LLM \u2192 translation, summarization, conversation, generation. LRM \u2192 mathematics, logic, programming, diagnostics, decision-making. <\/p>\n<\/li>\n<li data-start=\"2551\" data-end=\"2689\">\n<p data-start=\"2553\" data-end=\"2689\"><strong data-start=\"2553\" data-end=\"2575\">Time &amp; efficiency<\/strong>: LRMs are often slower and more computationally expensive, as they perform internal reflection steps.<\/p>\n<\/li>\n<li data-start=\"2690\" data-end=\"2831\">\n<p data-start=\"2692\" data-end=\"2831\"><strong data-start=\"2692\" data-end=\"2713\">Internal structure<\/strong>: LRMs incorporate &#8220;thinking steps&#8221;, sometimes explicitly, while LLMs remain more of a &#8220;black box&#8221;.<\/p>\n<\/li>\n<\/ul>\n<hr data-start=\"2833\" data-end=\"2836\">\n<h1 data-start=\"2838\" data-end=\"2872\">How do LRMs work?<\/h1>\n<p data-start=\"2873\" data-end=\"2936\">The operation of an LRM is based on several key elements:<\/p>\n<h2 data-start=\"2938\" data-end=\"2966\">Encoding + reasoning<\/h2>\n<p data-start=\"2967\" data-end=\"3078\">Like an LLM, an LRM starts by encoding the input (text, possibly image or structure). 
Then:<\/p>\n<ul data-start=\"3079\" data-end=\"3445\">\n<li data-start=\"3079\" data-end=\"3190\">\n<p data-start=\"3081\" data-end=\"3190\">it generates a &#8220;<strong data-start=\"3095\" data-end=\"3118\">chain of thought<\/strong>&#8221; in which several intermediate steps are formulated,<\/p>\n<\/li>\n<li data-start=\"3191\" data-end=\"3318\">\n<p data-start=\"3193\" data-end=\"3318\">it can use search strategies (e.g. exploring several hypotheses via &#8220;tree of thought&#8221; or &#8220;graph of thought&#8221;),<\/p>\n<\/li>\n<li data-start=\"3319\" data-end=\"3445\">\n<p data-start=\"3321\" data-end=\"3445\">it can incorporate a verification or revision loop, comparing different paths before choosing the final solution.<\/p>\n<\/li>\n<\/ul>\n<h2 data-start=\"3447\" data-end=\"3490\">Specialized training techniques<\/h2>\n<p data-start=\"3491\" data-end=\"3574\">For a model to become an LRM, it&#8217;s not enough to train a standard LLM:<\/p>\n<ul data-start=\"3575\" data-end=\"4071\">\n<li data-start=\"3575\" data-end=\"3726\">\n<p data-start=\"3577\" data-end=\"3726\"><strong data-start=\"3592\" data-end=\"3618\">Training data<\/strong> are used that contain not only answers but also <em data-start=\"3671\" data-end=\"3695\">traces of reasoning<\/em> (intermediate steps).<\/p>\n<\/li>\n<li data-start=\"3727\" data-end=\"3940\">\n<p data-start=\"3729\" data-end=\"3940\">We apply methods such as <strong data-start=\"3763\" data-end=\"3816\">reinforcement learning with human feedback (RLHF)<\/strong>, but adapted to reasoning: we reward logical chains of thought and correct paths, and penalize errors.<\/p>\n<\/li>\n<li data-start=\"3941\" data-end=\"4071\">\n<p data-start=\"3943\" data-end=\"4071\">We sometimes use hybrid architectures combining symbolic or heuristic methods with neural learning.<\/p>\n<\/li>\n<\/ul>\n<h2 data-start=\"4073\" data-end=\"4104\">Dependence on complexity<\/h2>\n<p data-start=\"4105\" data-end=\"4232\">Studies show that LRMs 
enter different &#8220;performance regimes&#8221; depending on the complexity of the task:<\/p>\n<ul data-start=\"4233\" data-end=\"4678\">\n<li data-start=\"4233\" data-end=\"4391\">\n<p data-start=\"4235\" data-end=\"4391\">for simple tasks, a conventional LLM can sometimes do as well or even better than an LRM, because the additional reasoning doesn&#8217;t add any value.<\/p>\n<\/li>\n<li data-start=\"4392\" data-end=\"4521\">\n<p data-start=\"4394\" data-end=\"4521\">for moderately complex tasks, the advantage of LRMs is felt &#8211; their reasoning ability adds value.<\/p>\n<\/li>\n<li data-start=\"4522\" data-end=\"4678\">\n<p data-start=\"4524\" data-end=\"4678\">for very complex tasks, LRMs can &#8220;fall apart&#8221;: their accuracy drops; they expend a great deal of effort without producing good results.<\/p>\n<\/li>\n<\/ul>\n<hr data-start=\"4680\" data-end=\"4683\">\n<h1 data-start=\"4685\" data-end=\"4724\">What are the advantages of LRMs?<\/h1>\n<p data-start=\"4725\" data-end=\"4787\">Here are the main benefits of this category of models:<\/p>\n<h3 data-start=\"4789\" data-end=\"4841\">Better performance on complex tasks<\/h3>\n<p data-start=\"4842\" data-end=\"5113\">When the problem requires multiple steps, hypotheses, deduction or induction, LRMs outperform conventional LLMs. They are better equipped for diagnostics, programming, logical reasoning or mathematical tasks. 
<\/p>\n<h3 data-start=\"5115\" data-end=\"5156\">Increased traceability and explainability<\/h3>\n<p data-start=\"5157\" data-end=\"5406\">Thanks to the generation of visible chains of thought, it becomes possible to track <em data-start=\"5240\" data-end=\"5249\">how<\/em> the model arrived at an answer &#8211; which reinforces trust, auditability, and alignment (a critical need in sectors such as healthcare and finance).<\/p>\n<h3 data-start=\"5408\" data-end=\"5459\">Adaptation to sensitive business use cases<\/h3>\n<p data-start=\"5460\" data-end=\"5724\">In fields such as law, medicine and finance, where a &#8220;right&#8221; answer is essential and must be well-founded, the LRM approach is more appropriate. LRMs enable decision-making processes to be modeled, hypotheses to be verified and choices to be justified. <\/p>\n<h3 data-start=\"5726\" data-end=\"5765\">Potential for AI in general<\/h3>\n<p data-start=\"5766\" data-end=\"5992\">LRMs represent a step towards systems that don&#8217;t just generate text, but can <em data-start=\"5870\" data-end=\"5878\">think<\/em> &#8211; or at least simulate reasoning &#8211; which is a key element towards more general artificial intelligence.<\/p>\n<hr data-start=\"5994\" data-end=\"5997\">\n<h1 data-start=\"5999\" data-end=\"6038\">What are the limits and challenges?<\/h1>\n<p data-start=\"6039\" data-end=\"6114\">Despite their power, LRMs still present significant obstacles:<\/p>\n<h3 data-start=\"6116\" data-end=\"6138\">Cost and latency<\/h3>\n<p data-start=\"6139\" data-end=\"6336\">Generating intermediate steps, exploring branches of reasoning, checking or revising, involves much more computation, memory and time than &#8220;simple&#8221; LLMs require.<\/p>\n<h3 data-start=\"6338\" data-end=\"6375\">High-complexity collapse<\/h3>\n<p data-start=\"6376\" data-end=\"6672\">As mentioned, recent studies show that above a certain complexity threshold, even LRMs falter: they may actually reduce their reasoning effort, 
and their performance drops precipitously. This raises questions about the fundamental limits of automated reasoning. <\/p>\n<h3 data-start=\"6674\" data-end=\"6714\">Real vs. simulated understanding<\/h3>\n<p data-start=\"6715\" data-end=\"6997\">Even when the answers are good, there&#8217;s still a debate about <strong data-start=\"6776\" data-end=\"6819\">whether the AI really &#8220;reasons&#8221;<\/strong> or simply applies powerful heuristics. Some research shows that chains of thought can be superficial or contain logical errors. <\/p>\n<h3 data-start=\"6999\" data-end=\"7035\">Explainability still limited<\/h3>\n<p data-start=\"7036\" data-end=\"7255\">Even with intermediate steps exposed, the underlying logic can remain opaque: why did the model choose one branch rather than another? We don&#8217;t yet have the level of transparency we&#8217;d like for critical decisions. <\/p>\n<h3 data-start=\"7257\" data-end=\"7293\">Training data &amp; bias<\/h3>\n<p data-start=\"7294\" data-end=\"7464\">The need for long chains of reasoning and complex scenarios makes data acquisition costly. There is also the risk of bias or of coverage gaps. 
<\/p>\n<hr data-start=\"7466\" data-end=\"7469\">\n<h1 data-start=\"7471\" data-end=\"7505\">Quick comparison: LLM vs LRM<\/h1>\n<div class=\"_tableContainer_1rjym_1\">\n<div class=\"group _tableWrapper_1rjym_13 flex w-fit flex-col-reverse\" tabindex=\"-1\">\n<table class=\"w-fit min-w-(--thread-content-width)\" data-start=\"7506\" data-end=\"8775\">\n<thead data-start=\"7506\" data-end=\"7640\">\n<tr data-start=\"7506\" data-end=\"7640\">\n<th data-start=\"7506\" data-end=\"7540\" data-col-size=\"sm\">Criteria<\/th>\n<th data-start=\"7540\" data-end=\"7586\" data-col-size=\"sm\">LLM (Large Language Model)<\/th>\n<th data-start=\"7586\" data-end=\"7640\" data-col-size=\"md\">LRM (Large Reasoning Model)<\/th>\n<\/tr>\n<\/thead>\n<tbody data-start=\"7776\" data-end=\"8775\">\n<tr data-start=\"7776\" data-end=\"7910\">\n<td data-start=\"7776\" data-end=\"7810\" data-col-size=\"sm\">Main objective<\/td>\n<td data-col-size=\"sm\" data-start=\"7810\" data-end=\"7857\">Fluent text generation<\/td>\n<td data-col-size=\"md\" data-start=\"7857\" data-end=\"7910\">Multi-step structured reasoning<\/td>\n<\/tr>\n<tr data-start=\"7911\" data-end=\"8045\">\n<td data-start=\"7911\" data-end=\"7945\" data-col-size=\"sm\">Response times<\/td>\n<td data-col-size=\"sm\" data-start=\"7945\" data-end=\"7992\">Fast, optimized<\/td>\n<td data-col-size=\"md\" data-start=\"7992\" data-end=\"8045\">Slower, more computation<\/td>\n<\/tr>\n<tr data-start=\"8046\" data-end=\"8180\">\n<td data-start=\"8046\" data-end=\"8080\" data-col-size=\"sm\">Best domain<\/td>\n<td data-col-size=\"sm\" data-start=\"8080\" data-end=\"8127\">Simple text-generation tasks<\/td>\n<td data-col-size=\"md\" data-start=\"8127\" data-end=\"8180\">Complex, logical, diagnostic tasks<\/td>\n<\/tr>\n<tr data-start=\"8181\" data-end=\"8367\">\n<td data-start=\"8181\" data-end=\"8215\" data-col-size=\"sm\">Explainability<\/td>\n<td data-start=\"8215\" data-end=\"8262\" data-col-size=\"sm\">Limited to the output<\/td>\n<td 
data-start=\"8262\" data-end=\"8314\" data-col-size=\"md\">Visible, accessible chain of thought<\/td>\n<\/tr>\n<tr data-start=\"8368\" data-end=\"8503\">\n<td data-start=\"8368\" data-end=\"8402\" data-col-size=\"sm\">Latency &amp; cost<\/td>\n<td data-start=\"8402\" data-end=\"8449\" data-col-size=\"sm\">Relatively low<\/td>\n<td data-start=\"8449\" data-end=\"8503\" data-col-size=\"md\">Relatively high<\/td>\n<\/tr>\n<tr data-start=\"8504\" data-end=\"8638\">\n<td data-start=\"8504\" data-end=\"8538\" data-col-size=\"sm\">Efficient for simple tasks<\/td>\n<td data-col-size=\"sm\" data-start=\"8538\" data-end=\"8585\">Yes<\/td>\n<td data-col-size=\"md\" data-start=\"8585\" data-end=\"8638\">Not optimized for very simple tasks<\/td>\n<\/tr>\n<tr data-start=\"8639\" data-end=\"8775\">\n<td data-start=\"8639\" data-end=\"8677\" data-col-size=\"sm\">Efficient for highly complex tasks<\/td>\n<td data-start=\"8677\" data-end=\"8722\" data-col-size=\"sm\">Limited<\/td>\n<td data-start=\"8722\" data-end=\"8775\" data-col-size=\"md\">Better, but collapses above a certain threshold<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<hr data-start=\"8777\" data-end=\"8780\">\n<h1 data-start=\"8782\" data-end=\"8846\">How are LRMs strategic for the AI ecosystem?<\/h1>\n<h3 data-start=\"8847\" data-end=\"8895\">For professionals and businesses<\/h3>\n<p data-start=\"8896\" data-end=\"9251\">Organizations dealing with decision, logic, verification or compliance problems are well advised to turn to LRMs: they offer a qualitative leap over conventional LLMs.<br data-start=\"9110\" data-end=\"9113\">This means greater reliability, better traceability, and stronger alignment with sensitive uses where error is costly.<\/p>\n<h3 data-start=\"9253\" data-end=\"9292\">For innovation and research<\/h3>\n<p data-start=\"9293\" data-end=\"9684\">LRMs are a field of intense research: how to simulate human reasoning, how to structure chains of thought, how to demonstrate true robustness? 
All this is helping to push forward the frontier of AI.<br data-start=\"9525\" data-end=\"9528\">Reaching &#8220;AGI&#8221; (artificial general intelligence) will undoubtedly require a greater capacity for reasoning &#8211; and LRMs are an important milestone.<\/p>\n<h3 data-start=\"9686\" data-end=\"9748\">For sovereignty and technological differentiation<\/h3>\n<p data-start=\"9749\" data-end=\"10001\">Mastering LRMs, their architectures, data and uses, is becoming a strategic asset for public and private technology players. Those able to build, adapt or control such models have a competitive advantage. <\/p>\n<hr data-start=\"10003\" data-end=\"10006\">\n<h1 data-start=\"10008\" data-end=\"10033\">Future prospects<\/h1>\n<p data-start=\"10034\" data-end=\"10089\">A few major trends emerge for LRMs:<\/p>\n<ul data-start=\"10090\" data-end=\"10911\">\n<li data-start=\"10090\" data-end=\"10269\">\n<p data-start=\"10092\" data-end=\"10269\"><strong data-start=\"10092\" data-end=\"10131\">Efforts to improve efficiency<\/strong>: reduce latency and token consumption, and manage the chain of thought more intelligently so as not to generate unnecessary text.<\/p>\n<\/li>\n<li data-start=\"10270\" data-end=\"10409\">\n<p data-start=\"10272\" data-end=\"10409\"><strong data-start=\"10272\" data-end=\"10292\">Hybrid models<\/strong>: combine LRM approaches with agents, knowledge bases and symbolic systems to boost robustness.<\/p>\n<\/li>\n<li data-start=\"10410\" data-end=\"10555\">\n<p data-start=\"10412\" data-end=\"10555\"><strong data-start=\"10412\" data-end=\"10448\">Adapting to real-time uses<\/strong>: moving away from research towards industrial applications where time and cost count.<\/p>\n<\/li>\n<li data-start=\"10556\" data-end=\"10727\">\n<p data-start=\"10558\" data-end=\"10727\"><strong data-start=\"10558\" data-end=\"10583\">Multimodal extension<\/strong>: reasoning not only on text, but also on images, video, audio and structured data, with multimodal 
thought chains.<\/p>\n<\/li>\n<li data-start=\"10728\" data-end=\"10911\">\n<p data-start=\"10730\" data-end=\"10911\"><strong data-start=\"10730\" data-end=\"10765\">Governance, ethics, reliability<\/strong>: guaranteeing that LRM decisions are transparent, auditable and secure is of paramount importance, especially in sensitive areas.<\/p>\n<\/li>\n<\/ul>\n<hr data-start=\"10913\" data-end=\"10916\">\n<h1 data-start=\"10918\" data-end=\"10932\">Conclusion<\/h1>\n<p data-start=\"10933\" data-end=\"11571\">Large Reasoning Models represent a significant step forward in AI innovation: they no longer seek merely to <em data-start=\"11049\" data-end=\"11057\">formulate<\/em> or <em data-start=\"11061\" data-end=\"11070\">generate<\/em> text, but to <em data-start=\"11087\" data-end=\"11095\">think<\/em>, <em data-start=\"11099\" data-end=\"11110\">reason<\/em> and <em data-start=\"11114\" data-end=\"11124\">analyze<\/em>. For tasks of medium to high complexity, this is a clear advantage over pure generation models.<br data-start=\"11241\" data-end=\"11244\">However, these models are not yet perfect: latency, cost, collapse at high complexity and limited explainability remain challenges.<br data-start=\"11385\" data-end=\"11388\">For any company, researcher or decision-maker interested in AI systems for critical use &#8211; reasoning, decision, diagnosis &#8211; LRMs are today an avenue to follow closely. <\/p>\n","protected":false},"excerpt":{"rendered":"<p>LRM: what is it? Large Reasoning Models (LRMs) are a new category of artificial intelligence systems that go far beyond the simple generation of fluent text. 
Whereas large language models (LLMs) such as GPT-4 or LLaMA focus on word or sentence prediction using training statistics, LRMs are designed for reasoning, multi-step inference cascades, chain-of-thought, tree-of-thought [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[78],"tags":[],"class_list":["post-4863","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Large Reasoning Models (LRM) | Palmer<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/palmer-consulting.com\/en\/large-reasoning-models-lrm\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Large Reasoning Models (LRM) | Palmer\" \/>\n<meta property=\"og:description\" content=\"LRM: what is it? Large Reasoning Models (LRMs) are a new category of artificial intelligence systems that go far beyond the simple generation of fluent text. 
Whereas large language models (LLMs) such as GPT-4 or LLaMA focus on word or sentence prediction using training statistics, LRMs are designed for reasoning, multi-step inference cascades, chain-of-thought, tree-of-thought [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/palmer-consulting.com\/en\/large-reasoning-models-lrm\/\" \/>\n<meta property=\"og:site_name\" content=\"Palmer\" \/>\n<meta property=\"article:published_time\" content=\"2025-10-19T21:14:26+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/palmer-consulting.com\/wp-content\/uploads\/2023\/09\/social-graph-palmer.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1200\" \/>\n\t<meta property=\"og:image:height\" content=\"675\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Laurent Zennadi\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Laurent Zennadi\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/large-reasoning-models-lrm\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/large-reasoning-models-lrm\\\/\"},\"author\":{\"name\":\"Laurent Zennadi\",\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#\\\/schema\\\/person\\\/7ea52877fd35814d1d2f8e6e03daa3ed\"},\"headline\":\"Large Reasoning Models (LRM)\",\"datePublished\":\"2025-10-19T21:14:26+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/large-reasoning-models-lrm\\\/\"},\"wordCount\":1297,\"publisher\":{\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#organization\"},\"articleSection\":[\"Artificial intelligence\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/large-reasoning-models-lrm\\\/\",\"url\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/large-reasoning-models-lrm\\\/\",\"name\":\"Large Reasoning Models (LRM) | Palmer\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#website\"},\"datePublished\":\"2025-10-19T21:14:26+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/large-reasoning-models-lrm\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/large-reasoning-models-lrm\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/large-reasoning-models-lrm\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/home\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Large Reasoning Models 
(LRM)\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#website\",\"url\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/\",\"name\":\"Palmer\",\"description\":\"Evolve at the speed of change\",\"publisher\":{\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#organization\",\"name\":\"Palmer\",\"url\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/palmer-consulting.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Palmer_Logo_Full_PenBlue_1x1-2.jpg\",\"contentUrl\":\"https:\\\/\\\/palmer-consulting.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Palmer_Logo_Full_PenBlue_1x1-2.jpg\",\"width\":480,\"height\":480,\"caption\":\"Palmer\"},\"image\":{\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/company\\\/palmer-consulting\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#\\\/schema\\\/person\\\/7ea52877fd35814d1d2f8e6e03daa3ed\",\"name\":\"Laurent 
Zennadi\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/110e8a99f01ca2c88c3d23656103640dc17e08eac86e26d0617937a6846b4007?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/110e8a99f01ca2c88c3d23656103640dc17e08eac86e26d0617937a6846b4007?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/110e8a99f01ca2c88c3d23656103640dc17e08eac86e26d0617937a6846b4007?s=96&d=mm&r=g\",\"caption\":\"Laurent Zennadi\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Large Reasoning Models (LRM) | Palmer","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/palmer-consulting.com\/en\/large-reasoning-models-lrm\/","og_locale":"en_US","og_type":"article","og_title":"Large Reasoning Models (LRM) | Palmer","og_description":"LRM: what is it? Large Reasoning Models (LRMs) are a new category of artificial intelligence systems that go far beyond the simple generation of fluent text. Whereas large language models (LLMs) such as GPT-4 or LLaMA focus on word or sentence prediction using training statistics, LRMs are designed for reasoning, multi-step inference cascades, chain-of-thought, tree-of-thought [&hellip;]","og_url":"https:\/\/palmer-consulting.com\/en\/large-reasoning-models-lrm\/","og_site_name":"Palmer","article_published_time":"2025-10-19T21:14:26+00:00","og_image":[{"width":1200,"height":675,"url":"https:\/\/palmer-consulting.com\/wp-content\/uploads\/2023\/09\/social-graph-palmer.png","type":"image\/png"}],"author":"Laurent Zennadi","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Laurent Zennadi","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/palmer-consulting.com\/en\/large-reasoning-models-lrm\/#article","isPartOf":{"@id":"https:\/\/palmer-consulting.com\/en\/large-reasoning-models-lrm\/"},"author":{"name":"Laurent Zennadi","@id":"https:\/\/palmer-consulting.com\/en\/#\/schema\/person\/7ea52877fd35814d1d2f8e6e03daa3ed"},"headline":"Large Reasoning Models (LRM)","datePublished":"2025-10-19T21:14:26+00:00","mainEntityOfPage":{"@id":"https:\/\/palmer-consulting.com\/en\/large-reasoning-models-lrm\/"},"wordCount":1297,"publisher":{"@id":"https:\/\/palmer-consulting.com\/en\/#organization"},"articleSection":["Artificial intelligence"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/palmer-consulting.com\/en\/large-reasoning-models-lrm\/","url":"https:\/\/palmer-consulting.com\/en\/large-reasoning-models-lrm\/","name":"Large Reasoning Models (LRM) | Palmer","isPartOf":{"@id":"https:\/\/palmer-consulting.com\/en\/#website"},"datePublished":"2025-10-19T21:14:26+00:00","breadcrumb":{"@id":"https:\/\/palmer-consulting.com\/en\/large-reasoning-models-lrm\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/palmer-consulting.com\/en\/large-reasoning-models-lrm\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/palmer-consulting.com\/en\/large-reasoning-models-lrm\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/palmer-consulting.com\/en\/home\/"},{"@type":"ListItem","position":2,"name":"Large Reasoning Models (LRM)"}]},{"@type":"WebSite","@id":"https:\/\/palmer-consulting.com\/en\/#website","url":"https:\/\/palmer-consulting.com\/en\/","name":"Palmer","description":"Evolve at the speed of 
change","publisher":{"@id":"https:\/\/palmer-consulting.com\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/palmer-consulting.com\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/palmer-consulting.com\/en\/#organization","name":"Palmer","url":"https:\/\/palmer-consulting.com\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/palmer-consulting.com\/en\/#\/schema\/logo\/image\/","url":"https:\/\/palmer-consulting.com\/wp-content\/uploads\/2023\/08\/Palmer_Logo_Full_PenBlue_1x1-2.jpg","contentUrl":"https:\/\/palmer-consulting.com\/wp-content\/uploads\/2023\/08\/Palmer_Logo_Full_PenBlue_1x1-2.jpg","width":480,"height":480,"caption":"Palmer"},"image":{"@id":"https:\/\/palmer-consulting.com\/en\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/company\/palmer-consulting\/"]},{"@type":"Person","@id":"https:\/\/palmer-consulting.com\/en\/#\/schema\/person\/7ea52877fd35814d1d2f8e6e03daa3ed","name":"Laurent Zennadi","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/110e8a99f01ca2c88c3d23656103640dc17e08eac86e26d0617937a6846b4007?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/110e8a99f01ca2c88c3d23656103640dc17e08eac86e26d0617937a6846b4007?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/110e8a99f01ca2c88c3d23656103640dc17e08eac86e26d0617937a6846b4007?s=96&d=mm&r=g","caption":"Laurent 
Zennadi"}}]}},"_links":{"self":[{"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/posts\/4863","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/comments?post=4863"}],"version-history":[{"count":0,"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/posts\/4863\/revisions"}],"wp:attachment":[{"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/media?parent=4863"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/categories?post=4863"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/tags?post=4863"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}