Artificial intelligence: a transformative technology with risks and opportunities

11 Jul 2025 | Salome K

Artificial intelligence—or AI for short—has become a truly general-purpose technology, reshaping every corner of the economy and society. The speed at which applications like ChatGPT have taken the world by storm is unprecedented. Within just two months of its launch, ChatGPT surpassed 100 million users, a sign of the impact that generative AI promises. Research from consulting firms predicts that AI could generate up to $15.7 trillion in additional value by 2030, an economic boost comparable to the combined GDP of China and India in 2018.

The core of this impact lies in two pillars: productivity gains and the creation of new goods and services. Labor productivity could rise substantially; GDP gains of 14.5% and 26% are expected in North America and China, respectively, while Europe and developed parts of Asia could achieve increases of between 9% and 12%.

This growth potential has several dimensions: from increased efficiency in the business world to advanced medical screening. For example, a joint study by MIT and Stanford showed that introducing AI assistants in call centers increased output by an average of 14%, and up to 35% for inexperienced employees. AI is also gaining ground in the medical sector: the US Food and Drug Administration (FDA) has approved more than a thousand medical AI devices, with a heavy emphasis on radiology, where algorithms help detect conditions like lung cancer or fatty liver disease.

In addition, AI contributes to the energy transition: intelligent predictions based on machine learning optimize electricity grids and improve the efficiency of battery-charging systems. In the public sector, “smart” applications such as self-regulating traffic lights and fraud detection systems are becoming increasingly common. Small and medium-sized enterprises also benefit, thanks to generative tools that make marketing, customer service, and software development significantly cheaper and more accessible.
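As a toy illustration of the grid idea above, the sketch below schedules battery charging from a demand forecast. Everything here is a hypothetical assumption: the data is synthetic, the threshold is invented, and a simple moving average stands in for a real machine-learning forecaster, which would use far richer features.

```python
# Illustrative sketch only: predict next-hour electricity demand and
# decide whether to charge grid batteries. Synthetic data, hypothetical
# threshold; a moving average stands in for a real ML model.

def moving_average_forecast(history, window=3):
    """Predict the next value as the mean of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Synthetic hourly demand in megawatts
demand_mw = [620, 605, 590, 600, 640, 710, 780]

forecast = moving_average_forecast(demand_mw)
print(f"Forecast next hour: {forecast:.1f} MW")

# Charge batteries only when forecast demand is below a (hypothetical) threshold
CHARGE_THRESHOLD_MW = 650
action = "charge" if forecast < CHARGE_THRESHOLD_MW else "hold"
print(f"Battery action: {action}")
```

The design choice mirrors the article's point: even a crude forecast lets an operator shift battery charging toward predicted low-demand hours, which is where real ML-based load forecasting delivers its efficiency gains.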

Risks on multiple fronts

But with great promise come profound risks. Generative AI facilitates new forms of disinformation and manipulation. In January 2024, tens of thousands of voters in New Hampshire received robocalls using a deepfake voice impersonating President Biden, urging them to abstain from voting. Although the consultant responsible was later acquitted, the incident demonstrated how cheap and dangerous such manipulations can be.

The downsides also extend to the economy and geopolitics. International reports, such as those from Europol, warn that criminal networks, including proxies of hostile states, are using AI to write malware, scale up identity fraud, and attack critical infrastructure.

Ethical questions arise when AI systems are trained on biased data, which can perpetuate existing discrimination. UN Secretary-General António Guterres warned that AI could pose “an existential threat” if human oversight is lost, especially in military domains.

The labor market is also being disrupted: routine jobs are disappearing while demand for highly skilled AI specialists explodes. Without significant retraining, a divide threatens to open between “AI-skilled” and “AI-poor” professionals. Regulation is reshaping this playing field: the European AI Act, in force since August 2024, bans applications with unacceptable risk, imposes strict requirements on high-risk systems, and provides for penalties of up to 7% of global turnover for offenders.

Towards a responsible AI policy

The regulation flowing from the European AI Act is based on a risk-based approach, categorizing applications from minimal to unacceptable risk. This creates room for innovation while protecting citizens. At the same time, the EU is investing in public infrastructure, such as open datasets and trusted cloud environments, to ensure that the benefits do not accrue exclusively to tech giants.

Education plays a crucial role: from primary school to adult education, basic digital skills such as statistics, algorithmic thinking, and ethics should be central. Experiments in AI sandboxes promote safe innovation, while international frameworks, such as the OECD AI Principles and UNESCO's ethical guidelines, are crucial to preventing misuse.

A recent development is the legal codification of these ethical principles in the Framework Convention on Artificial Intelligence, adopted in May 2024 under the auspices of the Council of Europe. The convention, now signed by more than 50 countries, enshrines guidelines on transparency, human rights, and democratic values in AI applications.

In April 2025, the EU also announced an ambitious program: the construction of so-called AI gigafactories, supercomputing facilities with more than 100,000 GPUs per site. These facilities, backed by the €20 billion earmarked for gigafactories within the broader InvestAI initiative, are intended to place Europe at the forefront of AI development, competing with the US and China.

At the same time, the urgency is high: at the AI Action Summit in Paris, world leaders called for lighter oversight in exchange for more room for investment, amid growing concerns about the balance between innovation and protection. The transatlantic debate shows that the stakes are now higher than ever, with political and scientific efforts underway to create a balanced, future-proof framework.

Russia and AI: Between Potential and Strategic Isolation

AI is also at the top of the political agenda in Russia. President Putin stated in 2017 that “whoever masters AI will dominate the world.” However, implementation is proving more difficult than the ambitious rhetoric suggests.

Knowledge gains versus hardware shortage

Russia boasts strong scientific institutes, such as MIPT, Skoltech, and ITMO, which enjoy international prestige for their theoretical work in algorithmics, robotics, and cybersecurity. Large companies like Sberbank and Yandex have demonstrated that Russian AI can compete internationally with projects like Kandinsky (a text-to-image model), although Russian models have ranked only a mediocre 12th to 17th among 44 models in international benchmarks.

The bottleneck appears to lie elsewhere: a lack of regulations, investment, hardware, and staff. Sanctions imposed after the 2022 invasion of Ukraine have severely limited access to modern GPUs. For example, Sberbank has only been able to acquire approximately 9,000 GPUs since 2022, compared to hundreds of thousands in the US. This hardware shortage is limiting the development of large-scale models. Russian companies are therefore turning to cheaper, often outdated Chinese resources, which are suitable for smaller inference tasks but insufficient for training large models.

Brain drain and talent loss

Another sore point is the emigration of knowledge workers. Since 2022, an estimated 10% of IT professionals have left the country, many settling in Armenia, Georgia, Israel, and Europe. In this vacuum, Russia is choosing to concentrate on state-backed enterprises like Sberbank, Yandex, and Rostec, which benefit from market distortions and government support but simultaneously suffer from capital flight, labor shortages, and limited scope for innovation.

Strategic dependence on China

Moscow is trying to close the gap through intensified cooperation with China. At Putin’s behest, Sberbank has announced a broad AI partnership with Chinese institutes, focusing on joint projects and access to Chinese models like DeepSeek-V3. Russia is also seeking closer AI cooperation within BRICS, but remains in 31st place in global AI-capacity rankings, just behind smaller countries like Portugal and Ireland.

However, the effectiveness of this friendship is uncertain: Chinese chips are often manufactured at process nodes of 7 nm or larger, far behind the American state of the art (such as Nvidia’s A100/H100), which limits their military and scientific potential.

Economic and geopolitical perspective

Despite the challenges, the AI market in Russia is growing: forecasts for 2025 estimate a revenue increase from 130–300 billion rubles to a potential 600–800 billion rubles thanks to AI integration in consumer applications. However, robot density in industry remains exceptionally low (6 robots per 10,000 employees, versus a global average of 141), suggesting limited productivity gains.

Without rapid access to hardware, capital, and talent, Russia will deepen its strategic dependence on China, and the risk remains that domestic AI policy will serve only control, surveillance, and military applications rather than broad economic innovation.

Conclusion: AI as a geopolitical lever

AI is developing not only as an economic engine but also as a geopolitical lever. With InvestAI and gigafactories, and by signing international frameworks, the EU is taking a leading role in shaping a responsible technological ecosystem.

Russia, on the other hand, teeters on the brink of technological isolation. Despite its intellectual potential and targeted investments, it continues to struggle with sanctions, brain drain, and a lack of infrastructure. The alliance with China provides much-needed resources, but unless this balance is structurally improved, Russia risks losing significant ground.

The crucial question, therefore, is not whether AI will advance, but how countries ensure that the technology leads to broad prosperity, resilience, and sovereignty. Not participating is not an option; choosing dependency, however, is.

ⓒ Antonio Georgopalis