Starting on February 6, 2025, France hosted the Artificial Intelligence (AI) Action Week, a multi-day interdisciplinary event where heads of state and government, leaders of international organizations, CEOs of companies large and small, representatives of academia and non-governmental organizations, artists, and members of civil society gathered to talk about one thing only: AI.
The AI Action Week started with a scientific conference on Thursday and Friday, continued over the weekend with cultural events, and culminated on Monday and Tuesday with the AI Action Summit.
Sina Molavipour and Georg Schuppe, two of SEBx’s researchers, attended the scientific conference titled “AI, Science and Society”. Hosted at the Institut Polytechnique de Paris (IP Paris), the conference addressed the transformations AI is bringing to science and society. Fostering an interdisciplinary dialogue, the meeting featured presentations by leading researchers, including Turing Award laureates Yann LeCun and Yoshua Bengio.
In this article, we briefly discuss several hot topics that stood out at the conference.
AI and society
As artificial intelligence continues to evolve at an unprecedented pace, concerns about its societal and environmental impact are becoming increasingly prominent. Addressing these challenges was one of the central themes of the summit: speakers explored key risks, ethical considerations, and the “actions” needed to mitigate potential harm.
Yoshua Bengio presented the main findings of the 2025 International AI Safety Report, which brought together 100 experts from 30 countries and international organizations (UN, EU, OECD) to guide policymakers on AI safety. Future generations of AI, often referred to as general-purpose AI, carry great potential, but whatever shape they take, they also introduce risks to society, which Bengio categorized as follows:
Risks from malicious use:
- Biological and cyber attacks
- Fake content used to harm individuals

Risks from malfunctions:
- Bias
- Loss of control

Systemic risks:
- Privacy violations
- Large-scale labor market disruptions
Bengio emphasized that all of these risk areas are important and that none can be neglected if the well-being of society is to be safeguarded.
A primary concern surrounding AI systems is the uncertainty of their outputs. Just as social and economic systems rely on structured interactions to reduce uncertainty, AI systems must also engage with one another to become more robust. Professor Michael Jordan (UC Berkeley/Inria), the chair of the conference, offered a new perspective on these interactions by drawing an analogy to economic markets: incentive-driven AI agents, each with its own capabilities, constraints, and objectives, participate in a dynamic game, both among themselves and with humans. This market-driven view suggests that deploying a larger number of smaller, more specialized AI agents could help distribute risk and improve overall system stability.
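To make the analogy concrete, here is a minimal toy sketch of such a market in Python. The agent types, skill and cost values, and the scoring rule are all invented for illustration and are not taken from the talk: each agent bids on a task according to its capabilities and costs, and the market assigns the task to the agent offering the best net utility.

```python
# Toy market of incentive-driven AI agents (illustrative sketch only;
# the agents, numbers, and scoring rule are invented, not from the talk).
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    specialty: str  # task type this agent is built for ("any" = generalist)
    skill: float    # quality delivered on its specialty, in [0, 1]
    cost: float     # price the agent charges per task

    def utility(self, task_type: str) -> float:
        # Full quality on the agent's specialty, heavily degraded elsewhere,
        # discounted by the agent's cost.
        quality = self.skill if self.specialty in (task_type, "any") else 0.2 * self.skill
        return quality - self.cost

agents = [
    Agent("translator", "translation", skill=0.90, cost=0.10),    # small specialist
    Agent("summarizer", "summarization", skill=0.85, cost=0.08),  # small specialist
    Agent("generalist", "any", skill=0.60, cost=0.30),            # large, expensive model
]

def allocate(task_type: str) -> Agent:
    # The "market": each task goes to the agent with the best net utility.
    return max(agents, key=lambda a: a.utility(task_type))

for task in ["translation", "summarization", "coding"]:
    winner = allocate(task)
    print(f"{task}: {winner.name} (utility {winner.utility(task):.2f})")
```

In this toy allocation, the specialists win the tasks they were built for, while the expensive generalist only wins tasks no specialist covers, hinting at how a market of many small agents could spread risk across the system.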
Elephants and bees
Multiple speakers emphasized that, amid the current hype, LLMs, which are famously cost- and resource-intensive, get used in all kinds of scenarios, even when they are not the optimal tool for the job. One speaker illustrated the issue by comparing LLMs to elephants, whereas smaller machine-learning models, the so-called bees, can be tailor-made for the tasks they are meant to solve. Bees can not only be more cost-effective and sustainable, but in their niche they can also outperform the elephant in the room. The idea of deploying many bees instead of a few large elephants also ties in nicely with the economic view of AI agents interacting in a dynamic market.
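As a rough illustration of the bee idea, the sketch below trains a tiny task-specific sentiment classifier with scikit-learn. The handful of training examples is invented for illustration; a real bee would be trained on a proper labeled dataset for its task.

```python
# A "bee": a small, task-specific model instead of a general-purpose LLM.
# Minimal sketch; the tiny dataset below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "great product, works perfectly",
    "absolutely love it, fast and reliable",
    "terrible, broke after one day",
    "waste of money, very disappointed",
]
labels = ["positive", "positive", "negative", "negative"]

# A few thousand parameters instead of billions: cheap to train, cheap to run.
bee = make_pipeline(TfidfVectorizer(), LogisticRegression())
bee.fit(texts, labels)

print(bee.predict(["this works great"]))  # likely: ['positive']
```

For a narrow, well-defined task like this, such a model runs on a laptop CPU in milliseconds, at a tiny fraction of the energy and cost of an LLM call.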
Yann LeCun, who has repeatedly argued that academic research should look beyond LLMs, contended in his talk that these LLM elephants do not even display all of the characteristics necessary for human-like artificial intelligence. In his view, LLMs carry technological limitations that cannot be overcome simply by scaling with more money or data.
AI and sustainability
As alluded to, a critical topic at the summit was the environmental footprint of AI development, highlighted by organizations such as Climate Change AI. AI systems contribute to environmental impact at multiple levels, including infrastructure and operations.
At the infrastructure level, significant costs and carbon emissions arise from:
- Mining and production of hardware
- Transportation and logistics
- Disposal and recycling of AI-related components
At the operational level, emissions occur throughout various AI lifecycle stages, including model development, training, and deployment. While individual AI inferences may consume relatively little energy, the cumulative environmental impact scales with the growing number of users and their usage patterns. To illustrate, estimates suggest that by 2029, AI accelerators could account for approximately 1.5% of global electricity consumption (link).
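A back-of-envelope calculation makes this scaling effect tangible. All the numbers below are illustrative assumptions, not measurements:

```python
# Back-of-envelope: how modest per-query energy accumulates at global scale.
# All figures are illustrative assumptions, not measured values.
ENERGY_PER_QUERY_WH = 0.3       # assumed energy per LLM inference (Wh)
QUERIES_PER_USER_PER_DAY = 15   # assumed usage pattern
USERS = 500_000_000             # assumed number of active users

daily_kwh = ENERGY_PER_QUERY_WH * QUERIES_PER_USER_PER_DAY * USERS / 1_000
yearly_gwh = daily_kwh * 365 / 1_000_000

print(f"Daily:  {daily_kwh:,.0f} kWh")   # ~2,250,000 kWh per day
print(f"Yearly: {yearly_gwh:,.0f} GWh")  # ~821 GWh per year
```

Even under these modest per-query assumptions, the yearly total lands in the hundreds of GWh, which is why inference efficiency matters at global scale.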
Takeaways
The recent advances demonstrated by DeepSeek highlight the potential to train smaller models at reduced computational cost while maintaining competitive performance. This development signals a shift in AI trends, away from the dominance of massive foundation models and toward more specialized AI agents. Such task-specific agents, capable of interacting with humans and with other AI systems, could redefine how AI is deployed, making systems more efficient and adaptable.
Moreover, the social and environmental impact of AI development must be carefully considered at all levels. Regulatory frameworks such as the “AI Act” play a crucial role in translating awareness into actionable policies, ensuring that AI advancements align with ethical, sustainable, and socially responsible practices.
