DeepSeek, the Chinese AI Behind the 600 Billion Dollars NVIDIA Lost
How a rising startup has shaken the foundations of giants like OpenAI and NVIDIA, and why all eyes are on the future of artificial intelligence.
January 28, 2025

Index
The dawn of a new era
The birth of DeepSeek
OpenAI and the challenged hegemony
DeepSeek-R1: A disruptive reasoning model
NVIDIA's great shake-up: How did they lose 600 billion dollars?
Advanced reasoning models: What makes them so special?
Global reactions: Echoes in networks and power
Future prospects: Impact on education, the economy, and society
Conclusion: An inevitable paradigm shift
References
1. The dawn of a new era
“When a small spark sets off a fire, the world cannot ignore it.”
Over the past decade, we have witnessed a dizzying acceleration in the field of artificial intelligence (AI). From algorithms capable of recognizing faces with superhuman precision to text generation systems whose output is nearly indistinguishable from human writing, the advancement of AI has been a source of fascination and controversy.
However, in January 2025, an unexpected event changed the game: the Chinese startup DeepSeek launched an advanced reasoning model, DeepSeek-R1, which not only promised to compete with giants like OpenAI, but also dared to challenge them in terms of efficiency, collaboration, and cost. What followed was a seismic shake-up in the tech market, evidenced by the drop in NVIDIA's stocks and the surprising reactions from high-profile figures like Sam Altman and Donald Trump.
Join us on this journey where we will unravel the influence of DeepSeek-R1 and the potential that a truly democratized AI holds. It is a journey that pits the "David" of technology against the "Goliaths" of Silicon Valley, in a clash that is already rewriting the history of artificial intelligence.

Image 1. DeepSeek matches or surpasses o1 in almost all tests.
2. The birth of DeepSeek
2.1 The origin of the vision
In 2023, Liang Wenfeng, a former Alibaba researcher, asked a fundamental question: "Why is high-end artificial intelligence still so inaccessible to most of the world?" With this reflection in mind, Wenfeng resigned from his position and decided to found DeepSeek in the city of Hangzhou, a metropolis that is becoming one of China's tech hubs.
The main motivation behind DeepSeek was to create AI models that did not require million-dollar investments in infrastructure, highlighting the idea that innovation does not depend solely on massive budgets, but on the ability to find new ways of collaboration and optimization.
2.2 Funding and collaborative spirit
Unlike OpenAI — backed by billions from giants like Microsoft — DeepSeek began its journey with just 5.6 million dollars. This capital was mostly obtained through crowdfunding and collaborations with Chinese universities, which saw in the project an opportunity to advance the state of the art of AI openly.
First steps: The first prototypes were developed in academic environments and tested in university labs in China.
The open source factor: From day one, DeepSeek bet on an open development model, inviting the global community to contribute. In a short time, more than 3,000 developers worldwide joined this initiative.
With these foundations, the company advanced quietly until January 20, 2025, when it published its first major achievement: the DeepSeek-R1 model, designed for logical inference tasks and complex problem-solving.
3. OpenAI and the challenged hegemony
3.1 A look at the o1 and o3 models
Since its inception, OpenAI has been the company to beat in high-performance language and AI models. Its o1 and o3 models, launched in 2024, raised industry standards, excelling in deep reasoning and coherent text generation.
OpenAI o1
Launched in September 2024.
Specialized in inference and solving complex problems.
Cost per million input tokens: 15 USD.
Cost per million output tokens: 60 USD.
OpenAI o3
Presented in December 2024.
Optimized version of o1, with greater energy efficiency and focus on specific tasks.
These developments placed OpenAI in an almost hegemonic position, with few companies able to compete on an equal footing, especially because of the costly infrastructure these models require.
3.2 The limitations of giants
Despite their successes, OpenAI's models have been criticized on several fronts:
High implementation costs: Access to these models remains prohibitive for startups and countries with limited resources.
Infrastructure dependency: High-end data centers and GPUs (like NVIDIA H100) are needed for optimal performance.
Lack of openness: Although OpenAI defines itself as a company oriented toward the benefit of humanity, most of its recent developments have remained closed or under restrictive licenses.
It was in this context of high costs and technological power concentration that DeepSeek-R1 emerged, offering a lighter and more open approach.
4. DeepSeek-R1: A disruptive reasoning model
4.1 Comparative costs and accessibility
The first major difference that drew the attention of the tech community was the disparity in costs per million tokens between OpenAI and DeepSeek: while o1 charges 15 USD per million input tokens and 60 USD per million output tokens, DeepSeek-R1's API launched at a small fraction of those prices.
This vast cost difference democratizes access to AI, opening the door to small companies, local governments, and research centers with limited budgets.
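To make the gap concrete, the following minimal sketch estimates the bill for a single workload under the o1 list prices quoted above and under DeepSeek-R1's launch API prices. The R1 figures (roughly 0.55 USD per million input tokens and 2.19 USD per million output tokens) are not stated in this article and should be treated as assumptions to verify against current pricing.

```python
# Back-of-the-envelope API cost comparison for one workload.
# o1 prices come from the figures quoted above; the DeepSeek-R1 prices
# are assumed launch list prices (USD per million tokens) and may differ
# from current pricing.

PRICES = {
    "openai-o1":   {"input": 15.00, "output": 60.00},
    "deepseek-r1": {"input": 0.55,  "output": 2.19},   # assumed
}

def workload_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a given number of input/output tokens."""
    p = PRICES[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

if __name__ == "__main__":
    # Example workload: 10 million input tokens, 2 million output tokens.
    for model in PRICES:
        cost = workload_cost(model, input_tokens=10_000_000, output_tokens=2_000_000)
        print(f"{model:12s} -> ${cost:,.2f}")
```

Under these assumed prices, the same workload that costs 270 USD on o1 comes in under 10 USD on DeepSeek-R1, which is the order-of-magnitude difference driving the accessibility argument.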
4.2 Technology: From “mixture of experts” to open source
The technology behind DeepSeek-R1 is based on two pillars:
Architecture with “mixture of experts”
DeepSeek-R1 uses a system that activates only the necessary modules to solve each specific problem. This implies more efficient use of computing resources and much lower energy consumption than other large-scale models.
Open source approach
From the moment of its launch, DeepSeek-R1 was offered with an open license, encouraging developers worldwide to improve and expand the model. This community spirit contrasts with the more closed trajectory of OpenAI's models.

Image 2. How the DeepSeek R1 "mixture of experts" architecture works
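To give a feel for how a mixture-of-experts layer saves compute, here is a toy routing sketch in which a gating function scores eight small "experts" and only the top two are evaluated for a given token. It illustrates the general technique, not DeepSeek-R1's actual architecture; the expert count, gating rule, and dimensions are made up for the example.

```python
import numpy as np

# Toy mixture-of-experts router: only the top-k scoring "experts" run per input,
# so compute scales with k rather than with the total number of experts.
# Illustrative sketch only; not DeepSeek-R1's real architecture.

rng = np.random.default_rng(0)
N_EXPERTS, D_MODEL, TOP_K = 8, 16, 2

# Each "expert" is just a small weight matrix in this toy example.
experts = [rng.normal(size=(D_MODEL, D_MODEL)) for _ in range(N_EXPERTS)]
gate_w = rng.normal(size=(D_MODEL, N_EXPERTS))  # gating network

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector x through the top-k experts only."""
    logits = x @ gate_w                      # score every expert
    top = np.argsort(logits)[-TOP_K:]        # indices of the k best experts
    weights = np.exp(logits[top])            # softmax over the selected experts
    weights /= weights.sum()
    # Only the selected experts are evaluated; the rest stay idle.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=D_MODEL)
print(moe_forward(token).shape)   # (16,): same output size, a fraction of the compute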
5. NVIDIA's great shake-up: How did they lose 600 billion dollars?
5.1 A volatile market
The launch of DeepSeek-R1 caused an earthquake in the stock market: NVIDIA, the leading manufacturer of AI chips, saw its shares fall nearly 17% in the days following the announcement, equivalent to around 600 billion dollars in market value.

Image 3. The biggest drop in market capitalization ever recorded. Equivalent to the entire value of the Mexican Stock Exchange.
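A quick back-of-the-envelope check ties the percentage drop to the dollar figure. Assuming a pre-drop market capitalization of roughly 3.5 trillion USD (an approximation, not a figure from this article), a fall of about 17% works out to close to 600 billion USD:

```python
# Rough sanity check of the reported loss in market value.
# The pre-drop market cap below is an approximation, not an official figure.

market_cap_before = 3.5e12   # ~3.5 trillion USD (assumed)
drop_pct = 0.17              # ~17% decline

loss = market_cap_before * drop_pct
print(f"Estimated loss: ${loss / 1e9:,.0f} billion")   # ~ $595 billion
```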
Why did a model developed with NVIDIA technology have a negative impact on its shares? At first glance, it seems paradoxical, but the logic behind it is clear: DeepSeek demonstrated that outstanding results can be achieved without relying on the most expensive chips (like the H100), opting for cheaper solutions (H800) and intelligent software design.
5.2 Turning point: hardware democratization
For many analysts, NVIDIA's fall marked a before and after in the industry, reaffirming the trend towards hardware democratization. A horizon opens where, thanks to software optimization, it is feasible to compete at the highest level without incurring exorbitant infrastructure expenses.
6. Advanced reasoning models: What makes them so special?
6.1 Key differences from standard models
Conventional language models focus on tasks such as text generation, sentiment analysis, or machine translation. On the other hand, advanced reasoning models like DeepSeek-R1, o1, and o3 go further, integrating logical processes that emulate the way a human would reflect on a problem:
Inference and logical deduction: They break down complex problems into manageable steps.
Iterative learning: They analyze past errors to refine their future results (see the sketch after this list).
Abstract and creative thinking: They can solve puzzles, advanced mathematical problems, and even competitive programming tasks.
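The propose-check-refine loop behind these behaviors can be caricatured in a few lines. The sketch below uses a deliberately trivial task, finding an integer square root from "too low / too high" feedback, to show the shape of the loop; it says nothing about how DeepSeek-R1 or o1 are actually trained or run.

```python
# Toy propose-check-refine loop: a caricature of how feedback on a wrong answer
# can guide the next attempt. Illustrative only.

def check(candidate: int, target: int) -> str:
    """'Verifier' that only says whether candidate squared is low, high, or correct."""
    square = candidate * candidate
    if square == target:
        return "correct"
    return "too low" if square < target else "too high"

def solve(target: int) -> int:
    low, high = 0, target
    while low <= high:
        candidate = (low + high) // 2          # propose
        feedback = check(candidate, target)    # verify
        if feedback == "correct":
            return candidate
        if feedback == "too low":              # refine using the feedback
            low = candidate + 1
        else:
            high = candidate - 1
    raise ValueError("no integer square root")

print(solve(2116))   # 46
```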
6.2 Application fields and concrete examples
Solving mathematical problems: DeepSeek-R1 can tackle advanced equations or theorems with multiple reasoning steps.
Programming and code debugging: It is of great help to software engineers in identifying and correcting bugs (see the API sketch after this list).
Medical diagnosis: Although still in early stages, advanced logic could be applied in analyzing complex symptoms.
Strategic decision-making: Financial organizations and consultancies could use these models to simulate scenarios and predict market behaviors.
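For the programming use case, a typical integration path is to call the model through an OpenAI-compatible client. The sketch below assumes DeepSeek's documented endpoint (https://api.deepseek.com), the model id "deepseek-reasoner" for R1, and an environment variable named DEEPSEEK_API_KEY; all three should be checked against the current API documentation.

```python
# Minimal sketch: asking DeepSeek-R1 to review a buggy function via the
# OpenAI-compatible API. Endpoint, model name, and environment variable
# are assumptions to verify against DeepSeek's current documentation.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],   # assumed env var name
    base_url="https://api.deepseek.com",      # assumed OpenAI-compatible endpoint
)

buggy_snippet = """
def average(xs):
    return sum(xs) / len(xs) if xs else 0
"""

response = client.chat.completions.create(
    model="deepseek-reasoner",                # assumed model id for DeepSeek-R1
    messages=[
        {"role": "user",
         "content": f"Find edge cases this function mishandles and propose a fix:\n{buggy_snippet}"},
    ],
)
print(response.choices[0].message.content)
```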
6.3 Benchmarks and comparisons: o1, o3 and DeepSeek-R1
To measure the effectiveness of these models, specific benchmarks are used to evaluate reasoning ability:
SWE-bench Verified: Measures the ability to resolve real-world software engineering tasks.
FrontierMath: Measures performance on research-level mathematics problems.
ARC (Abstraction and Reasoning Corpus): Assesses the model's capacity for abstract and creative reasoning.
In tests conducted in 2025, OpenAI's o3 scored 87.5% on ARC, narrowly surpassing the typical human score (85%). DeepSeek-R1, despite its far lower training cost, registered 85.2% on the same benchmark, an impressive result for a model built on such a modest budget.

Image 4. Initial benchmarks of R1-lite-preview vs o1-preview
7. Global reactions: Echoes in networks and power
The event did not go unnoticed on social networks or in traditional media. Several prominent figures weighed in on Twitter:
Sam Altman (@sama) - CEO of OpenAI
"DeepSeek's latest model is impressive. We're accelerating our efforts to bring even better models to the world."
— January 28, 2025
Marc Andreessen (@pmarca) - Co-founder of Andreessen Horowitz
"The open-source approach of DeepSeek could redefine the AI landscape. Exciting times ahead."
— January 27, 2025
Yann LeCun (@ylecun) - Chief AI Scientist at Meta
"DeepSeek demonstrates that innovation doesn't always require massive budgets. Kudos to the team!"
— January 27, 2025
Alexandr Wang (@alexandr_wang) - CEO of Scale AI
"DeepSeek's R1 model is a wake-up call for the AI industry. We need to rethink our strategies."
— January 28, 2025
Donald Trump (@realDonaldTrump) - President of the United States
"DeepSeek's AI chatbot is a wake-up call for Silicon Valley. We must stay competitive!"
— January 28, 2025
With these statements, personalities from different spheres —technology, investment, and politics— spoke out, making it clear that DeepSeek-R1 had captured global attention.
8. Future prospects: Impact on education, the economy, and society
8.1 Transformation in talent development
The availability of an open source and low-cost model can revolutionize how universities and technical schools approach AI training. Instead of limiting themselves to theories and modest simulations, they could use DeepSeek-R1 as a basis for advanced research projects and laboratory practices, fostering a generation of highly qualified professionals.
8.2 New business models and collaboration
The entry of DeepSeek-R1 into the market demonstrates the viability of scalable projects without requiring prohibitive investments. Emerging companies could launch services based on DeepSeek-R1, offering data analysis, customer service, and logistics optimization at a competitive cost. Moreover, the open source nature encourages global collaboration, reducing the gap between large corporations and small startups.
8.3 The geopolitics of AI
The success of DeepSeek-R1, born in China, underscores a rebalancing of the global technological race. The U.S. maintains its lead thanks to companies like OpenAI and NVIDIA, but DeepSeek's sudden rise suggests that the monopoly on innovation may be starting to fragment. Europe, Asia, and Latin America are also interested in adopting more accessible and open models.
9. Conclusion: An inevitable paradigm shift
The emergence of DeepSeek-R1 is much more than a technological achievement. It is a milestone that shatters the assumption that only deep-pocketed players can shape the future of artificial intelligence. In an environment where the price of innovation seemed unattainable, DeepSeek is setting a powerful precedent:
Efficiency and sustainability: High performance can be achieved without spending billions.
Universal collaboration: The union of global talent can break down walls established by technology giants.
Real alternatives: Governments, companies, and universities are beginning to see reasoning models as viable options that do not depend on high licensing costs.
The moral of this story is summed up in the idea that, despite limited budgets, the passion for innovation, technical ingenuity, and open collaboration can shake giants. The future of AI will be shaped not only by scale but by the versatility, openness, and vision of those looking to expand the boundaries of what's possible.
10. References
DeepSeek (2025). DeepSeek-R1 Official Press Release.
OpenAI (2024). o1 and o3 Launch Announcements.
NVIDIA (2025). Market Response and Investor Relations Data.
ARC Benchmark (n.d.). Abstraction and Reasoning Corpus.
SWE-bench Verified & FrontierMath (n.d.). Official Documentation.
The cost data, dates, and reactions come from official press releases and social media postings by executives involved at the time of publication.