Trust is the new currency in the AI agent economy


As Nobel Laureate Kenneth Arrow once observed, every economic transaction has an element of trust. Today, as more transactions are handled by AI agents, that trust is facing new pressures. Global trust levels are in decline, while the presence of AI agents in our daily lives and business systems is rapidly increasing.
In a pessimistic scenario, this could erode confidence. In an optimistic one, it opens pathways to reimagine trust and to fuel economic growth.
The connection between societal trust and economic performance is well documented. According to Deloitte Insights, a 10-percentage-point increase in the share of trusting people within a country should raise annual per capita real GDP growth by about half a percentage point. However, that relationship is evolving as we move beyond human-to-human interactions toward agentic exchanges.
Trust will continue to shape outcomes in the AI-powered economy. The real question is: What kind of trust will matter most – and how do we build it?
Towards an AI agent economy
The digital economy is becoming agentic. AI agents are moving from assistive tools to autonomous entities, executing transactions, allocating resources and making decisions.
AI has matured over decades, but today marks a tipping point. According to Gartner's Hype Cycle for Artificial Intelligence, AI agents sit at the very peak of expectations, with mainstream adoption expected within two to five years. Gartner also predicts that by 2028, about 33% of enterprise software applications will include agentic AI, with at least 15% of day-to-day work decisions made autonomously by AI agents. The AI agent economy will be a fully fledged reality, requiring entirely new forms of accountability, collaboration and trust.
At the heart of trust are two foundational components: competence (the ability to execute) and intent (the purpose behind actions). While few now question the competence of advanced technologies, intent remains a foggy frontier.
Why trust varies – and why it matters
Research shows that trust in AI varies significantly across regions and demographics. According to research by KPMG and the University of Melbourne, people in advanced economies are less trusting of AI (39% vs. 57%) and less accepting of it (65% vs. 84%) than people in emerging economies. From a sociological perspective, trust in these advanced economies remains grounded in interpersonal relationships and traditional institutions.
The latest Edelman Trust Barometer Global Report describes the current environment as a “crisis of grievance”. The greater the sense of grievance, the deeper the suspicion toward AI. Individuals who feel a heightened sense of injustice or discontent are significantly less likely to trust AI – and are notably more uneasy with its use by businesses. As trust in institutions erodes, so too does comfort with AI’s growing role in business and governance.
Understanding how trust is formed will be essential.
Ways to earn trust
As autonomous agents proliferate, we must rethink how trust functions across three key domains:
•    Human-to-human trust. The foundations of interpersonal trust remain shared values, reciprocity and past experience. But the digital layer is changing how we perceive others. When a familiar face on a video call could be an AI-generated avatar, the psychological cues we rely on to form trust are challenged. This raises new questions about authenticity, identity and interaction norms in a hybrid human-AI world.
•    Agent-to-agent trust. Trust between AI agents is formed through the exchange of signals – performance history, reputational data and predictable behaviour. Agents will evaluate one another based on competence (technical execution, reliability) and intent (alignment of goals, transparency of decision-making). Trust in this space becomes an engineering problem: how to design systems that can assess, verify and adapt trust over time (see the sketch after this list).
•    Human-to-agent trust. For humans to trust AI agents, those agents must display persistent identity and predictable behaviour. People trust consistency. Just as we remember reliable partners, AI agents must remember and adapt to users, offering continuity and coherence in interaction. Trust erodes when AI behaves erratically or pretends to be something it’s not. Authenticity and memory must be built into agent design.
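To make that engineering framing concrete, here is a minimal illustrative sketch in Python of how one agent might keep adaptive trust scores for its counterparties. It assumes a simple Beta-reputation rule, one common approach among many; the TrustLedger class and the agent IDs are hypothetical, not a reference to any real protocol or product.

from dataclasses import dataclass, field

@dataclass
class TrustLedger:
    # Hypothetical ledger: per-counterparty trust via a Beta-reputation rule,
    # trust = (successes + 1) / (successes + failures + 2).
    successes: dict = field(default_factory=dict)
    failures: dict = field(default_factory=dict)

    def record(self, agent_id: str, ok: bool) -> None:
        # Log one interaction outcome with a counterparty agent.
        book = self.successes if ok else self.failures
        book[agent_id] = book.get(agent_id, 0) + 1

    def trust(self, agent_id: str) -> float:
        # Unknown agents start at 0.5 (maximum uncertainty); the score
        # adapts as evidence of reliable behaviour accumulates.
        s = self.successes.get(agent_id, 0)
        f = self.failures.get(agent_id, 0)
        return (s + 1) / (s + f + 2)

ledger = TrustLedger()
ledger.record("agent-42", ok=True)
ledger.record("agent-42", ok=True)
ledger.record("agent-42", ok=False)
print(f"trust(agent-42) = {ledger.trust('agent-42'):.2f}")  # prints 0.60

The design choice worth noting is that unknown agents start at 0.5 rather than 0 or 1: trust is neither granted nor denied by default, but earned interaction by interaction.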
The foundations of AI-era trust
One of the greatest challenges to trust in AI remains a lack of clarity around agent intent. For example, autonomous vehicles may be statistically safer than human drivers, yet they are still distrusted by many due to uncertainty about the values guiding their decisions.
This points to a broader need for transparent, explainable intent within AI systems – not just capabilities, but motivations. From a systems perspective, we also face technical challenges: how to ensure seamless and secure data exchange, how to verify agent identity across platforms, and how to create common protocols that allow for the transmission of not just information, but trust itself.
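Of those technical challenges, verifying agent identity already has well-understood building blocks in public-key cryptography. The sketch below is an assumption-laden illustration, not a prescribed protocol: it uses Ed25519 signatures from Python's third-party cryptography package, and the agent name and claim text are invented for the example.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The agent generates a long-lived keypair; the public half is registered
# with every platform the agent operates on.
agent_key = Ed25519PrivateKey.generate()
public_key = agent_key.public_key()

# The agent signs a claim about itself (in practice this would be a
# structured, timestamped credential, not a bare string).
claim = b"agent-42 requests access to the payments API"
signature = agent_key.sign(claim)

# A receiving platform checks that the claim really came from the holder
# of the private key before extending any trust.
try:
    public_key.verify(signature, claim)
    print("identity verified")
except InvalidSignature:
    print("identity check failed")

In a real cross-platform system, the signed payload would carry structured, timestamped and revocable credentials – and key distribution itself would need a trust framework of its own.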
But perhaps the most difficult challenge lies in mindset. As the venture firm Sequoia Capital has observed, success in the agent economy requires more than new technology – it demands a new kind of leadership, one that understands what AI agents can and cannot do, and how they should be governed.
AI window of opportunity
The next five years offer a narrow but critical window to shape how trust functions in a world of autonomous agents. The global AI agents market size is projected to reach $50.31 billion by 2030, according to Grand View Research. As the agent economy evolves, the stakes will be higher than ever. Fraud and security threats could multiply exponentially unless robust trust frameworks are established.
Experts called 2024 “the year of the deepfake”. As autonomous AI agents develop, the opportunities for fakes and fraud will only expand, dragging down both trust and economic indicators. For example, Deloitte’s Center for Financial Services predicts that generative AI could enable fraud losses to reach $40 billion in the United States alone. Scaled to the global economy, such losses would exceed the size of the AI agent market itself.
We can choose to let mistrust grow, driven by confusion, manipulation and digital overload. Or we can build new trust architectures, grounded in clarity, consistency and shared human values, augmented by intelligent agents.
Weforum
Jul 29, 2025 11:46