
Q: What does DeepSeek claim to have accomplished?
A: DeepSeek, a startup from China, claims that their AI model is on par with or even better than top U.S. models, but at a significantly lower cost. Their newest model, DeepSeek-V3, was trained with under $6 million worth of computing power using Nvidia H800 chips. This accomplishment has caught the eye of the global AI community and has been praised as "super impressive" by industry experts.
Q: Why does it affect Nvidia negatively?
A: If DeepSeek can prove that high-performance AI models can be trained at much lower cost than previously believed, and on older Nvidia GPUs, then other companies could achieve top-tier AI performance using fewer Nvidia chips. This could lead to a decrease in demand for Nvidia's costly hardware, thereby affecting its sales and profit margins.
Q: Does this signify a new era for AI development?
A: Yes, in a way. It challenges the notion that massive computing power is necessary for advanced models: bigger is not necessarily better. If others adopt similar approaches, it could democratize AI, making sophisticated models available to more companies and regions. This could move the market into a new era where optimization and efficiency are just as crucial as raw computing power in AI development. It is also a sign of how fast the AI arena is moving, with innovations appearing at a remarkable pace.
Q: How will this affect the financial industry?
A: As for many industries, it reminds us that there can be many approaches to solving a problem. Cheaper and more energy-efficient AI is an advantage for all industries, but in particular for financial services, which is very computing-intensive. It means that many solutions may become more accessible to a wider range of players. It also means that models can be customized much more easily, which is good news for smaller players. For investors, it means that the race to large compute and the focus on hardware are not as straightforward as they seemed two weeks ago.