{"id":4711,"date":"2025-10-19T13:07:20","date_gmt":"2025-10-19T13:07:20","guid":{"rendered":"https:\/\/palmer-consulting.com\/ai-deepseek\/"},"modified":"2025-10-19T13:07:20","modified_gmt":"2025-10-19T13:07:20","slug":"ai-deepseek","status":"publish","type":"post","link":"https:\/\/palmer-consulting.com\/en\/ai-deepseek\/","title":{"rendered":"AI deepseek"},"content":{"rendered":"<h1 data-start=\"0\" data-end=\"55\">DeepSeek: China&#8217;s open-source AI revolution<\/h1>\n<p data-start=\"57\" data-end=\"913\">The artificial intelligence scene was turned upside down in early 2025 by the arrival of <strong data-start=\"143\" data-end=\"155\">DeepSeek<\/strong>. In record time, this young Chinese entity, an offshoot of the High-Flyer hedge fund, released several open-source models capable of rivaling the American giants at a derisory cost. Its technological innovations and transparent approach have aroused enthusiasm among developers, but also fears among its competitors. This article takes an in-depth look at the definition of <strong data-start=\"572\" data-end=\"584\">DeepSeek<\/strong> AI, traces its history, details its models (LLM, V2, V3, R1), compares its performance to models such as GPT-4 and examines the economic and geopolitical impact of this emergence. The information presented here is drawn from the first five research results of 2025, and surpasses existing articles in quality.    <\/p>\n<h2 data-start=\"915\" data-end=\"942\">What is DeepSeek?<\/h2>\n<p data-start=\"944\" data-end=\"1794\">DeepSeek is an <strong data-start=\"960\" data-end=\"1017\">artificial intelligence research laboratory<\/strong> founded in May 2023 in Hangzhou, China. Originally the AI arm of High-Flyer, a quantitative portfolio management company, it has been transformed into an independent entity to devote itself entirely to fundamental research. 
The company takes a different approach from the American behemoths: it favors <strong data-start=\"1349\" data-end=\"1362\">openness<\/strong> and <strong data-start=\"1368\" data-end=\"1396\">algorithmic efficiency<\/strong> over the pursuit of immediate profit. Its first models were published under the MIT license and made freely available to developers via a website and mobile applications. DeepSeek employs around 200 people, compared with several thousand at its competitors, and benefits from the financial backing of the High-Flyer fund (around $15 billion in assets under management).    <\/p>\n<p data-start=\"1796\" data-end=\"2165\">This open-source strategy has enabled DeepSeek to quickly become one of the leaders in large language models (LLMs). By January 2025, its mobile application had overtaken ChatGPT in the Apple App Store, with more than 2.6 million downloads. The company claims between 5 and 6 million users, evidence of worldwide enthusiasm.  <\/p>\n<h2 data-start=\"2167\" data-end=\"2218\">Timeline: the rise of DeepSeek<\/h2>\n<p data-start=\"2220\" data-end=\"2335\">DeepSeek&#8217;s meteoric rise is reflected in the succession of its models. 
Here are the main milestones: <\/p>\n<ul data-start=\"2337\" data-end=\"3955\">\n<li data-start=\"2337\" data-end=\"2416\">\n<p data-start=\"2339\" data-end=\"2416\"><strong data-start=\"2339\" data-end=\"2353\">May 2023:<\/strong> creation of DeepSeek, heir to High-Flyer&#8217;s AI division.<\/p>\n<\/li>\n<li data-start=\"2417\" data-end=\"2523\">\n<p data-start=\"2419\" data-end=\"2523\"><strong data-start=\"2419\" data-end=\"2438\">November 2023:<\/strong> release of <strong data-start=\"2454\" data-end=\"2472\">DeepSeek Coder<\/strong>, the first open-source code generation model.<\/p>\n<\/li>\n<li data-start=\"2524\" data-end=\"2652\">\n<p data-start=\"2526\" data-end=\"2652\"><strong data-start=\"2526\" data-end=\"2542\">Early 2024:<\/strong> release of <strong data-start=\"2553\" data-end=\"2569\">DeepSeek LLM<\/strong> (67 billion parameters) and start of a price war on the Chinese market.<\/p>\n<\/li>\n<li data-start=\"2653\" data-end=\"2959\">\n<p data-start=\"2655\" data-end=\"2959\"><strong data-start=\"2655\" data-end=\"2669\">May 2024:<\/strong> launch of the <strong data-start=\"2692\" data-end=\"2707\">DeepSeek-V2<\/strong> series, featuring Mixture-of-Experts (MoE) models and a context length extended to 128,000 tokens. This version is trained on 8.1 trillion tokens and uses a two-stage reinforcement learning (RL) process to improve safety and relevance. <\/p>\n<\/li>\n<li data-start=\"2960\" data-end=\"3494\">\n<p data-start=\"2962\" data-end=\"3494\"><strong data-start=\"2962\" data-end=\"2981\">December 2024:<\/strong> release of <strong data-start=\"2997\" data-end=\"3012\">DeepSeek-V3<\/strong>, a Mixture-of-Experts model with <strong data-start=\"3046\" data-end=\"3077\">671 billion parameters<\/strong>, activating only 37 billion per token. It introduces an auxiliary-loss-free load-balancing strategy and a <strong data-start=\"3198\" data-end=\"3225\">multi-token prediction<\/strong> objective. 
The team uses FP8 precision training and pre-trains the model on 14.8 trillion tokens, achieving performance comparable to proprietary models while requiring only <strong data-start=\"3429\" data-end=\"3465\">2.788 million H800 GPU hours<\/strong>, or around <strong data-start=\"3480\" data-end=\"3493\">5.5 million USD<\/strong>.  <\/p>\n<\/li>\n<li data-start=\"3495\" data-end=\"3955\">\n<p data-start=\"3497\" data-end=\"3955\"><strong data-start=\"3497\" data-end=\"3515\">January 2025:<\/strong> release of <strong data-start=\"3526\" data-end=\"3546\">DeepSeek-R1-Zero<\/strong> and <strong data-start=\"3550\" data-end=\"3565\">DeepSeek-R1<\/strong>, two models specialized in <strong data-start=\"3600\" data-end=\"3616\">reasoning<\/strong>. R1-Zero is trained solely via <strong data-start=\"3655\" data-end=\"3689\">reinforcement learning<\/strong>, without supervised fine-tuning, but suffers from repetition and language mixing. R1 corrects these shortcomings using cold-start data and a multi-stage pipeline incorporating several RL and fine-tuning phases. The cost of training R1 is estimated at <strong data-start=\"3941\" data-end=\"3954\">\u2248 6 M USD<\/strong>.   <\/p>\n<\/li>\n<\/ul>\n<p data-start=\"3957\" data-end=\"4114\">In the space of twenty months, DeepSeek launched a complete range of models, competing with GPT-4 while adopting a very aggressive pricing policy.<\/p>\n<h2 data-start=\"4116\" data-end=\"4145\">Technological innovations<\/h2>\n<h3 data-start=\"4147\" data-end=\"4200\">Mixture-of-Experts and Multi-head Latent Attention<\/h3>\n<p data-start=\"4202\" data-end=\"4813\">Versions V2 and V3 are distinguished by their use of a <strong data-start=\"4263\" data-end=\"4304\">Mixture-of-Experts (MoE) architecture<\/strong>. In this approach, the model is composed of many specialized sub-networks (&#8220;experts&#8221;); only a subset is activated for each token, which considerably reduces computing costs. 
DeepSeek-V2 also uses <strong data-start=\"4540\" data-end=\"4577\">multi-head latent attention (MLA)<\/strong> to approximate classical attention using low-rank matrices. These innovations make it possible to increase the context length to 128,000 tokens without exploding costs, while maintaining a high level of performance.   <\/p>\n<h3 data-start=\"4815\" data-end=\"4862\">Multi-token prediction and FP8 training<\/h3>\n<p data-start=\"4864\" data-end=\"5668\">DeepSeek-V3 innovates by eliminating the <strong data-start=\"4900\" data-end=\"4929\">auxiliary load-balancing loss<\/strong> used in other MoE architectures. Its engineers implement a <strong data-start=\"5019\" data-end=\"5046\">multi-token prediction<\/strong> system, which involves predicting several tokens at once rather than just one. This approach speeds up inference and can serve as a basis for <strong data-start=\"5180\" data-end=\"5203\">speculative decoding<\/strong>. The team also adopts a mixed <strong data-start=\"5262\" data-end=\"5269\">FP8<\/strong> training framework, validating for the first time the effectiveness of this reduced precision on such large models. Through hardware\/algorithm co-design, DeepSeek manages to overlap communication and computation, significantly reducing pre-training costs. As a result, the V3 model is pre-trained in just <strong data-start=\"5598\" data-end=\"5620\">2.664 M GPU hours<\/strong>, then refined with a further 0.1 M hours.     <\/p>\n<h3 data-start=\"5670\" data-end=\"5718\">Reinforcing and distilling reasoning<\/h3>\n<p data-start=\"5720\" data-end=\"6424\">The <strong data-start=\"5730\" data-end=\"5745\">DeepSeek-R1<\/strong> model focuses on reasoning and problem solving. The researchers demonstrate that it is possible to incentivize the reasoning ability of an LLM <strong data-start=\"5906\" data-end=\"5957\">solely via reinforcement learning<\/strong>, without going through supervised fine-tuning. 
The pipeline includes two reinforcement learning stages, which discover improved reasoning patterns and align the output with human preferences, and two supervised fine-tuning stages that provide the starting points. <strong data-start=\"6233\" data-end=\"6254\">Distilled models<\/strong> (1.5 B &#8211; 70 B parameters) learn R1&#8217;s reasoning behavior; at these smaller sizes, they outperform models trained directly with RL.   <\/p>\n<h2 data-start=\"6426\" data-end=\"6459\">DeepSeek&#8217;s flagship models<\/h2>\n<p data-start=\"6461\" data-end=\"6606\">The table below, to be inserted as an image, summarizes the main features of the major DeepSeek versions: LLM, V2, V3 and R1.<\/p>\n<p data-start=\"6608\" data-end=\"6654\">[Insert DeepSeek version table here].<\/p>\n<h3 data-start=\"6656\" data-end=\"6679\">DeepSeek LLM 7B\/67B<\/h3>\n<p data-start=\"6681\" data-end=\"7316\">DeepSeek&#8217;s first general-purpose model, <strong data-start=\"6722\" data-end=\"6738\">DeepSeek LLM<\/strong> comes in two sizes (7B and 67B parameters). Both models use a dense architecture with layer normalization, SwiGLU feed-forward layers and rotary positional embeddings. The vocabulary size is 102,400 tokens and the context length 4,096 tokens. According to the specification table, the 7B model has 30 layers and a hidden dimension of 4,096. This model has been trained on 2 trillion English and Chinese tokens. The 67B version increases capacity to 95 layers and a hidden dimension of 8,192. These models serve as the basis for subsequent MoE versions.      <\/p>\n<h3 data-start=\"7318\" data-end=\"7333\">DeepSeek V2<\/h3>\n<p data-start=\"7335\" data-end=\"8021\">Launched in May 2024, <strong data-start=\"7355\" data-end=\"7370\">DeepSeek-V2<\/strong> applies <strong data-start=\"7383\" data-end=\"7414\">multi-head latent attention<\/strong> and a Mixture-of-Experts architecture. Versions V2 and V2-Lite, with 236 B and 15.7 B parameters respectively, extend the context to <strong data-start=\"7548\" data-end=\"7566\">128,000 tokens<\/strong>. 
Training takes place on 8.1 T tokens, with a dataset comprising 12% more Chinese text than English. A two-stage reinforcement learning cycle is used: a first phase to solve mathematical and programming problems, then a second phase to improve the model&#8217;s helpfulness and safety. This approach, coupled with MoE architectures, considerably reduces operating costs.    <\/p>\n<h3 data-start=\"8023\" data-end=\"8038\">DeepSeek V3<\/h3>\n<p data-start=\"8040\" data-end=\"8897\">The most highly publicized version, <strong data-start=\"8068\" data-end=\"8083\">DeepSeek-V3<\/strong>, is based on a MoE architecture with 671 billion parameters and 37 billion activated per token. The model introduces an <strong data-start=\"8222\" data-end=\"8248\">auxiliary-loss-free<\/strong> load-balancing strategy and a <strong data-start=\"8284\" data-end=\"8311\">multi-token prediction<\/strong> objective, improving performance without an auxiliary loss term. The team pre-trained it on <strong data-start=\"8403\" data-end=\"8420\">14.8 T tokens<\/strong>, then applied supervised fine-tuning and reinforcement learning to exploit its capabilities. Despite its size, the full training requires just <strong data-start=\"8571\" data-end=\"8601\">2.788 M H800 GPU hours<\/strong>, or <strong data-start=\"8608\" data-end=\"8623\">\u2248 5.5 M USD<\/strong>. Benchmarks show that V3 outperforms other open-source models and comes close to proprietary models on evaluation sets such as MMLU and ARC. The cost per million output tokens is around <strong data-start=\"8840\" data-end=\"8852\">0.28 USD<\/strong>, well below competitors&#8217; rates.     <\/p>\n<h3 data-start=\"8899\" data-end=\"8925\">DeepSeek R1 and R1-Zero<\/h3>\n<p data-start=\"8927\" data-end=\"9844\">Presented in January 2025, <strong data-start=\"8954\" data-end=\"8974\">DeepSeek-R1-Zero<\/strong> and <strong data-start=\"8978\" data-end=\"8993\">DeepSeek-R1<\/strong> are reasoning models. 
R1-Zero is trained solely via large-scale reinforcement learning, without any supervised fine-tuning, which brings out complex reasoning behaviors but causes repetition and language mixing. The <strong data-start=\"9261\" data-end=\"9276\">DeepSeek-R1<\/strong> model corrects these shortcomings by integrating <strong data-start=\"9313\" data-end=\"9347\">cold-start data<\/strong> and a multi-stage pipeline with two RL phases and two supervised fine-tuning phases. The researchers show that reasoning ability can be <strong data-start=\"9507\" data-end=\"9520\">distilled<\/strong> down to smaller models: distilled versions from 1.5 B to 70 B outperform models of the same size trained directly with RL. DeepSeek-R1 achieves performance comparable to the OpenAI-o1 model on mathematical, programming and reasoning tasks, while costing around <strong data-start=\"9804\" data-end=\"9843\">50 times less per million tokens<\/strong>.    <\/p>\n<h2 data-start=\"9846\" data-end=\"9895\">Comparison with OpenAI: cost and performance<\/h2>\n<p data-start=\"9897\" data-end=\"10649\">DeepSeek models stand out for their low development and operating costs. According to several analyses, <strong data-start=\"10027\" data-end=\"10042\">DeepSeek-V3<\/strong> cost <strong data-start=\"10049\" data-end=\"10062\">USD 5.5 million<\/strong> to train, compared with <strong data-start=\"10071\" data-end=\"10089\">USD 50-100 million<\/strong> for <strong data-start=\"10095\" data-end=\"10104\">GPT-4<\/strong>. Similarly, <strong data-start=\"10134\" data-end=\"10140\">R1<\/strong> is estimated to have cost <strong data-start=\"10154\" data-end=\"10165\">USD 6 million<\/strong> to train, while its competitor <strong data-start=\"10193\" data-end=\"10206\">OpenAI-o1<\/strong> is said to have cost over USD 100 million. In operation, DeepSeek charges around <strong data-start=\"10282\" data-end=\"10325\">USD 0.14 per million input tokens<\/strong> and <strong data-start=\"10329\" data-end=\"10373\">USD 0.28 per million output tokens<\/strong>. 
By comparison, <strong data-start=\"10400\" data-end=\"10410\">GPT-4o<\/strong> costs around <strong data-start=\"10425\" data-end=\"10471\">2.50 USD per 1 million input tokens<\/strong> and <strong data-start=\"10475\" data-end=\"10520\">10 USD per 1 million output tokens<\/strong>. This difference explains why some companies can reportedly reduce their AI costs by <strong data-start=\"10616\" data-end=\"10624\">98%<\/strong> by opting for DeepSeek.     <\/p>\n<p data-start=\"10651\" data-end=\"10733\">The following table, to be inserted as an image, summarizes the main differences:<\/p>\n<p data-start=\"10735\" data-end=\"10789\">[Insert DeepSeek vs OpenAI comparison chart here].<\/p>\n<p data-start=\"10791\" data-end=\"11194\">In addition to price, DeepSeek offers a context length of <strong data-start=\"10851\" data-end=\"10867\">128 K tokens<\/strong>, matching GPT-4o but far beyond the 8 K of standard GPT-4. Its models are released under the <strong data-start=\"10964\" data-end=\"10971\">MIT<\/strong> license, whereas OpenAI&#8217;s models remain proprietary. Finally, the MoE architecture activates just 37 B parameters per token, reducing the energy footprint compared with dense 405 B parameter models such as Llama 3.1.  <\/p>\n<h2 data-start=\"11196\" data-end=\"11232\">Economic and geopolitical impact<\/h2>\n<p data-start=\"11234\" data-end=\"12133\">The arrival of the DeepSeek models had international repercussions. On January 20, 2025, the release of R1 and R1-Zero created a media frenzy; Nvidia&#8217;s market capitalization plunged 17% in one day. Some observers describe DeepSeek as an AI that is &#8220;cheaper and more efficient&#8221; than its American rivals, calling into question the technological dominance of the USA. The cost per query is said to be <strong data-start=\"11672\" data-end=\"11693\">27 times lower<\/strong> than GPT-4&#8217;s, and the cost of developing the R1 model around <strong data-start=\"11751\" data-end=\"11769\">96% lower<\/strong> than OpenAI-o1&#8217;s. 
Despite the US semiconductor embargo, DeepSeek managed to source H100 GPUs via alternative channels, notably in India, Taiwan and Singapore. This feat has fueled fears of a <strong data-start=\"12008\" data-end=\"12030\">&#8220;Sputnik moment&#8221;<\/strong>, with some observers seeing it as a signal of a reversal in the global AI hierarchy.     <\/p>\n<p data-start=\"12135\" data-end=\"12918\">In its analysis, <strong data-start=\"12153\" data-end=\"12169\">Lux Research<\/strong> argues that DeepSeek has proven the <strong data-start=\"12202\" data-end=\"12251\">commoditization of large language models<\/strong>. The development cost of V3 (\u2248 5.7 M USD) is ten times less than Llama 3 and twenty times less than GPT-4. Improvements include the <strong data-start=\"12418\" data-end=\"12460\">compression of training data<\/strong>, the use of <strong data-start=\"12479\" data-end=\"12497\">8-bit storage<\/strong> and the partial activation of &#8220;experts&#8221; for each task. This efficiency is largely due to hardware constraints: the researchers used the less powerful but less expensive H800 GPUs, the downgraded variant designed for the Chinese market under US export restrictions. In total, V3 requires <strong data-start=\"12770\" data-end=\"12792\">2.78 M H800 hours<\/strong>, compared with <strong data-start=\"12800\" data-end=\"12820\">30 M H100 hours<\/strong> for Llama 3.1. This shows that algorithmic innovation can compensate for a hardware deficit.     <\/p>\n<h2 data-start=\"12920\" data-end=\"12948\">Reception and controversy<\/h2>\n<p data-start=\"12950\" data-end=\"13748\">Although praised for its efficiency, DeepSeek has also attracted criticism. Some rumors claim that the company <strong data-start=\"13077\" data-end=\"13089\">distilled<\/strong> Western models by training on responses they generated. In particular, OpenAI suggests that DeepSeek trained its own model on GPT output. IRIS also points out that DeepSeek was able to acquire high-end GPUs before the American embargo. 
These suspicions raise ethical questions about intellectual property and the transparency of training data. However, DeepSeek claims to have used mainly public and open-source data. Its open-source approach and publication of detailed reports (on GitHub and arXiv) contrast with the more closed practices of some of its competitors.      <\/p>\n<h2 data-start=\"13750\" data-end=\"13787\">Future prospects and developments<\/h2>\n<p data-start=\"13789\" data-end=\"14635\">DeepSeek is constantly improving its models. The company released <strong data-start=\"13879\" data-end=\"13887\">V3.1<\/strong> in August 2025, combining &#8220;thinking&#8221; and &#8220;non-thinking&#8221; modes, followed by <strong data-start=\"13952\" data-end=\"13964\">V3.2-Exp<\/strong> in September 2025, with improved computational efficiency and reduced API pricing (according to official announcements). The next challenges will be to integrate multimodal capabilities (vision, audio) and enhance reliability in sensitive contexts. According to market studies, the democratization of open-source models such as DeepSeek could lead to a lasting drop in AI costs, making these tools accessible to SMEs and emerging countries. In Europe, these developments also call for reflection on digital sovereignty and the importance of supporting local research to avoid dependence on American and Chinese giants.    <\/p>\n<h2 data-start=\"14637\" data-end=\"14650\">Conclusion<\/h2>\n<p data-start=\"14652\" data-end=\"15705\" data-is-last-node=\"\" data-is-only-node=\"\"><strong data-start=\"14652\" data-end=\"14664\">DeepSeek<\/strong> represents a major turning point for artificial intelligence. In less than two years, this Chinese start-up has succeeded in designing massive, high-performance, open-source models while defying cost expectations. 
Its innovations &#8211; Mixture-of-Experts, multi-token prediction, FP8 training and reinforcement learning &#8211; demonstrate that it is possible to compete with the incumbents using more modest resources. The economic and geopolitical impact of DeepSeek is already reflected in a fall in the market capitalization of hardware suppliers and a debate on technological sovereignty. In the future, the rise of open-source AI could encourage a more equitable distribution of technologies and stimulate creativity worldwide. However, questions remain about the origin of training data and competition between Western and Chinese models. In the meantime, DeepSeek stands out as the symbol of a new wave of AI: more open, more efficient and more accessible.      <\/p>\n","protected":false},"excerpt":{"rendered":"<p>DeepSeek: China&#8217;s open-source AI revolution The artificial intelligence scene was turned upside down in early 2025 by the arrival of DeepSeek. In record time, this young Chinese entity, an offshoot of the High-Flyer hedge fund, released several open-source models capable of rivaling the American giants at a derisory cost. 
Its technological innovations and transparent approach [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[78],"tags":[],"class_list":["post-4711","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>AI deepseek | Palmer<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/palmer-consulting.com\/en\/ai-deepseek\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI deepseek | Palmer\" \/>\n<meta property=\"og:description\" content=\"DeepSeek: China&#8217;s open-source AI revolution The artificial intelligence scene was turned upside down in early 2025 by the arrival of DeepSeek. In record time, this young Chinese entity, an offshoot of the High-Flyer hedge fund, released several open-source models capable of rivaling the American giants at a derisory cost. 
Its technological innovations and transparent approach [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/palmer-consulting.com\/en\/ai-deepseek\/\" \/>\n<meta property=\"og:site_name\" content=\"Palmer\" \/>\n<meta property=\"article:published_time\" content=\"2025-10-19T13:07:20+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/palmer-consulting.com\/wp-content\/uploads\/2023\/09\/social-graph-palmer.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1200\" \/>\n\t<meta property=\"og:image:height\" content=\"675\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Laurent Zennadi\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Laurent Zennadi\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/ai-deepseek\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/ai-deepseek\\\/\"},\"author\":{\"name\":\"Laurent Zennadi\",\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#\\\/schema\\\/person\\\/7ea52877fd35814d1d2f8e6e03daa3ed\"},\"headline\":\"AI deepseek\",\"datePublished\":\"2025-10-19T13:07:20+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/ai-deepseek\\\/\"},\"wordCount\":2000,\"publisher\":{\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#organization\"},\"articleSection\":[\"Artificial intelligence\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/ai-deepseek\\\/\",\"url\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/ai-deepseek\\\/\",\"name\":\"AI 
deepseek | Palmer\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#website\"},\"datePublished\":\"2025-10-19T13:07:20+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/ai-deepseek\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/ai-deepseek\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/ai-deepseek\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/home\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI deepseek\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#website\",\"url\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/\",\"name\":\"Palmer\",\"description\":\"Evolve at the speed of change\",\"publisher\":{\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#organization\",\"name\":\"Palmer\",\"url\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/palmer-consulting.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Palmer_Logo_Full_PenBlue_1x1-2.jpg\",\"contentUrl\":\"https:\\\/\\\/palmer-consulting.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Palmer_Logo_Full_PenBlue_1x1-2.jpg\",\"width\":480,\"height\":480,\"caption\":\"Palmer\"},\"ima
ge\":{\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/company\\\/palmer-consulting\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/palmer-consulting.com\\\/en\\\/#\\\/schema\\\/person\\\/7ea52877fd35814d1d2f8e6e03daa3ed\",\"name\":\"Laurent Zennadi\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/110e8a99f01ca2c88c3d23656103640dc17e08eac86e26d0617937a6846b4007?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/110e8a99f01ca2c88c3d23656103640dc17e08eac86e26d0617937a6846b4007?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/110e8a99f01ca2c88c3d23656103640dc17e08eac86e26d0617937a6846b4007?s=96&d=mm&r=g\",\"caption\":\"Laurent Zennadi\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"AI deepseek | Palmer","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/palmer-consulting.com\/en\/ai-deepseek\/","og_locale":"en_US","og_type":"article","og_title":"AI deepseek | Palmer","og_description":"DeepSeek: China&#8217;s open-source AI revolution The artificial intelligence scene was turned upside down in early 2025 by the arrival of DeepSeek. In record time, this young Chinese entity, an offshoot of the High-Flyer hedge fund, released several open-source models capable of rivaling the American giants at a derisory cost. 
Its technological innovations and transparent approach [&hellip;]","og_url":"https:\/\/palmer-consulting.com\/en\/ai-deepseek\/","og_site_name":"Palmer","article_published_time":"2025-10-19T13:07:20+00:00","og_image":[{"width":1200,"height":675,"url":"https:\/\/palmer-consulting.com\/wp-content\/uploads\/2023\/09\/social-graph-palmer.png","type":"image\/png"}],"author":"Laurent Zennadi","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Laurent Zennadi","Est. reading time":"10 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/palmer-consulting.com\/en\/ai-deepseek\/#article","isPartOf":{"@id":"https:\/\/palmer-consulting.com\/en\/ai-deepseek\/"},"author":{"name":"Laurent Zennadi","@id":"https:\/\/palmer-consulting.com\/en\/#\/schema\/person\/7ea52877fd35814d1d2f8e6e03daa3ed"},"headline":"AI deepseek","datePublished":"2025-10-19T13:07:20+00:00","mainEntityOfPage":{"@id":"https:\/\/palmer-consulting.com\/en\/ai-deepseek\/"},"wordCount":2000,"publisher":{"@id":"https:\/\/palmer-consulting.com\/en\/#organization"},"articleSection":["Artificial intelligence"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/palmer-consulting.com\/en\/ai-deepseek\/","url":"https:\/\/palmer-consulting.com\/en\/ai-deepseek\/","name":"AI deepseek | Palmer","isPartOf":{"@id":"https:\/\/palmer-consulting.com\/en\/#website"},"datePublished":"2025-10-19T13:07:20+00:00","breadcrumb":{"@id":"https:\/\/palmer-consulting.com\/en\/ai-deepseek\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/palmer-consulting.com\/en\/ai-deepseek\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/palmer-consulting.com\/en\/ai-deepseek\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/palmer-consulting.com\/en\/home\/"},{"@type":"ListItem","position":2,"name":"AI 
deepseek"}]},{"@type":"WebSite","@id":"https:\/\/palmer-consulting.com\/en\/#website","url":"https:\/\/palmer-consulting.com\/en\/","name":"Palmer","description":"Evolve at the speed of change","publisher":{"@id":"https:\/\/palmer-consulting.com\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/palmer-consulting.com\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/palmer-consulting.com\/en\/#organization","name":"Palmer","url":"https:\/\/palmer-consulting.com\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/palmer-consulting.com\/en\/#\/schema\/logo\/image\/","url":"https:\/\/palmer-consulting.com\/wp-content\/uploads\/2023\/08\/Palmer_Logo_Full_PenBlue_1x1-2.jpg","contentUrl":"https:\/\/palmer-consulting.com\/wp-content\/uploads\/2023\/08\/Palmer_Logo_Full_PenBlue_1x1-2.jpg","width":480,"height":480,"caption":"Palmer"},"image":{"@id":"https:\/\/palmer-consulting.com\/en\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/company\/palmer-consulting\/"]},{"@type":"Person","@id":"https:\/\/palmer-consulting.com\/en\/#\/schema\/person\/7ea52877fd35814d1d2f8e6e03daa3ed","name":"Laurent Zennadi","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/110e8a99f01ca2c88c3d23656103640dc17e08eac86e26d0617937a6846b4007?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/110e8a99f01ca2c88c3d23656103640dc17e08eac86e26d0617937a6846b4007?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/110e8a99f01ca2c88c3d23656103640dc17e08eac86e26d0617937a6846b4007?s=96&d=mm&r=g","caption":"Laurent 
Zennadi"}}]}},"_links":{"self":[{"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/posts\/4711","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/comments?post=4711"}],"version-history":[{"count":0,"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/posts\/4711\/revisions"}],"wp:attachment":[{"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/media?parent=4711"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/categories?post=4711"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/palmer-consulting.com\/en\/wp-json\/wp\/v2\/tags?post=4711"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}