Artificial Intelligence (AI) is increasingly being presented as a transformative force capable of solving some of humanity's most pressing challenges. From healthcare to climate modelling, its potential is undeniable. Yet, a closer look at how AI is currently being deployed across the world suggests a different reality—one that resembles not optimal progress, but a growing misallocation of global resources.
The issue is not whether AI is powerful. The issue is where that power is being directed, and at what cost.
In economic terms, AI today is behaving less like a productivity tool and more like liquidity in an under-regulated system—efficient at the micro level but potentially distortionary at the macro level.
Efficiency in conflict, but no reduction in cost
One of the most visible applications of AI is in modern conflict. AI-enabled drones, automated surveillance systems, and predictive targeting mechanisms have significantly reduced the cost per operation. A drone costing a few thousand dollars can now perform tasks that once required multi-million-dollar systems.
From a purely financial standpoint, this appears to be an efficiency gain.
However, as seen repeatedly in financial systems, lower marginal cost does not necessarily translate into lower total expenditure. Instead, it often produces the opposite effect, a pattern economists recognise as the Jevons paradox: as efficiency improves, consumption tends to expand rather than contract.
When the cost of deployment falls:
* the frequency of use increases
* the threshold for engagement declines
* and overall expenditure either remains constant or expands
AI in conflict is following this exact pattern. It is not reducing the cost of war—it is changing the structure of spending while sustaining its scale.
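The arithmetic behind this rebound dynamic can be sketched with hypothetical figures (the specific numbers below are illustrative assumptions, not data from the article): a steep fall in cost per operation, offset by a rise in frequency of use, leaves total expenditure unchanged.

```python
# Stylised illustration with hypothetical numbers: a fall in unit cost
# does not imply a fall in total expenditure if usage expands.

legacy_unit_cost = 2_000_000   # cost per operation, legacy system ($)
legacy_operations = 50         # operations per year
legacy_total = legacy_unit_cost * legacy_operations   # $100M per year

drone_unit_cost = 5_000        # cost per operation, AI-enabled drone ($)
drone_operations = 20_000      # lower threshold for engagement -> more use
drone_total = drone_unit_cost * drone_operations      # also $100M per year

# Micro-level efficiency: each operation is 400x cheaper.
assert drone_unit_cost < legacy_unit_cost
# Macro-level outcome: overall expenditure is sustained, not reduced.
assert drone_total >= legacy_total
```

Under these assumed figures, the cost per operation falls by a factor of 400 while annual spending stays constant; only the structure of the spending changes.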
More importantly, it raises a deeper question. If global capital allocation were aligned with long-term human welfare, one would expect the largest investments in AI to be directed towards healthcare, education, or climate resilience. Instead, a significant portion continues to flow into defence and conflict-related applications, where returns are strategic or political rather than social.
This is not a technological issue. It is a question of priorities.
Low-cost influence and rising political risk
A similar dynamic is emerging in political systems.
AI has dramatically reduced the cost of influencing public opinion. Through targeted messaging, automated content generation, and algorithm-driven amplification, it is now possible to shape narratives at a scale and speed that were previously unimaginable.
Again, this represents efficiency—lower cost per unit of influence.
But, as in poorly regulated financial markets, efficiency without safeguards introduces systemic risk.
The consequences are already visible:
* the rapid spread of misinformation
* declining trust in institutions
* increasing political polarisation
What is particularly concerning is that these influence systems are not marginal. They are attracting significant financial resources globally, often running into tens of billions of dollars during major political cycles.
At the same time, critical sectors such as public health systems, education reform, and climate adaptation remain underfunded in many parts of the world.
This raises a fundamental question:
Why is capital increasingly being deployed to shape perception, rather than to address structural economic challenges?
Labour markets: disruption without transition
Unlike conflict or politics, AI's impact on labour markets is more measurable. Estimates suggest that AI could generate trillions of dollars in economic value while simultaneously displacing hundreds of millions of jobs over time.
Such transitions are not new. Technological progress has always reallocated labour. What is different this time is the speed and scale of change.
In well-functioning systems, transitions are supported by:
* reskilling mechanisms
* institutional buffers
* adaptive education systems
However, current labour markets—particularly in developing economies—lack these mechanisms at the required scale.
The result is not just job displacement, but transition risk:
* workers lose roles faster than they can acquire new skills
* inequality widens
* and economic stability is tested
Exposure without preparedness
This gap is particularly visible among younger populations.
Across the world, and increasingly in countries like Nepal, young people are highly exposed to AI tools. They use them frequently and adapt quickly. However, this exposure often lacks depth.
Familiarity with AI does not equate to the ability to apply it productively within a professional context. The risk is similar to what is observed in shallow financial markets—high participation without sufficient understanding of underlying fundamentals.
Without structured skill development, this can lead to over-reliance without capability.
The structural lag in education
At the centre of this issue lies the education system.
Most systems continue to emphasise memorisation and standardised testing, even as AI increasingly automates these very functions. Critical skills such as analytical thinking, problem-solving, and interdisciplinary application remain underdeveloped.
This is particularly problematic for developing economies. As seen in capital markets, importing structures without adapting them to local realities often leads to inefficiency. The same applies to education.
Without reform, the gap between what the market demands and what the system produces will continue to widen.
A question of allocation, not innovation
Across conflict, politics, and labour markets, a common pattern emerges.
AI improves efficiency at the micro level. It reduces costs, increases speed, and enhances capability. But at the macro level, it is contributing to misalignment in resource allocation.
Significant global capital—financial, intellectual, and institutional—is being directed towards:
* defence systems
* political influence mechanisms
While comparatively less is invested in:
* human capital development
* public goods
* long-term economic resilience
This is not a failure of technology. It is a failure of how priorities are being set.
Conclusion: Direction will define outcome
AI will undoubtedly reshape the global economy. It has the capacity to enhance productivity, improve services, and solve complex problems.
But efficiency alone does not guarantee positive outcomes.
Without clear direction, even the most powerful systems can amplify existing distortions rather than resolve them.
The real challenge, therefore, is not technological advancement—but strategic allocation.
Whether AI becomes a tool for inclusive growth or a driver of inequality and instability will depend on one critical factor: whether global resources are aligned with long-term societal needs, or short-term strategic interests.
That is a policy question. And it is one that cannot be deferred.