March 18, 2026

The AI Productivity Paradox: How You Use AI Matters More Than Whether You Use It

By Fady Abouelghit – Manager, AI Services Engineering

Based on: Shen & Tamkin (2026) | Yang et al. (2025), Perplexity / Harvard

AI tools are spreading fast, and the productivity case for them is well documented. But buried in two recent studies is a question that deserves more attention: are we using AI in ways that make us sharper, or in ways that quietly make us more dependent and less capable? The answer, it turns out, is not whether you use AI; it is how you use it. This article draws on those two studies to map the terrain: one shows the scale and shape of AI agent adoption in the real world, and one measures what different modes of AI use actually do to the people relying on them. The key message is simple but consequential: cognitive engagement is the variable that separates AI use that builds capability from AI use that erodes it.

Setting the Scene

If 2024 was the year of the chatbot, 2025 has firmly become the year of the AI agent. Autonomous systems that don’t just answer questions but actually browse the web, manage files, and execute multi-step tasks on your behalf are no longer a research curiosity. They are in production, being used by millions every day. The global agentic AI market, estimated at $8 billion in 2025, is projected to reach $199 billion by 2034.

Two recent papers offer complementary lenses on this shift: a large-scale field study from Perplexity and Harvard examining who uses AI agents and what for, and a randomized controlled experiment from Anthropic researchers asking a more uncomfortable question: what does heavy AI reliance actually do to the people using it?

Together, they sketch a picture that’s both exciting and worth paying attention to.

AI Agents in the Wild

Yang et al. (2025) provide the first large-scale behavioral study of a general-purpose AI agent, built on Perplexity’s Comet browser and its embedded Comet Assistant. Drawing on hundreds of millions of anonymized user interactions collected between July and October 2025, the study asks three deceptively simple questions: Who is using AI agents? How intensively? And for what?

- 57% of queries: Productivity & Learning combined
- 36%: Productivity & Workflow topic share
- 21%: Learning & Research topic share
- 9× more queries from early adopters than from the general-availability (GA) cohort

Figure 1 · Key adoption & usage metrics from Yang et al. (2025). Source: Perplexity Comet, n = hundreds of millions of queries.

The study introduces a hierarchical agentic taxonomy organizing use cases across three levels: topic, subtopic, and task. The top-level distribution is striking: productivity and learning alone account for 57% of all agentic queries, ahead of shopping, media, travel, and everything else.

Topic                                   | Share | Top subtopics
Productivity & Workflow                 | 36%   | Document editing (8%), email management (7%), coding
Learning & Research                     | 21%   | Courses (13%), research summarization & analysis (8%)
Media & Entertainment                   | 16%   | Social media, videos, news, sports
Shopping & Commerce                     | 10%   | Goods search & filtering (9%), services booking
Travel & Leisure / Job & Career / Other | 17%   | Flights, trips, job search, professional networking

Figure 2 · Agentic query distribution by topic, Yang et al. (2025) hierarchical taxonomy.
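To make the three-level structure concrete, the taxonomy can be sketched as a nested mapping. This is an illustrative sketch only: the shares are the topic-level figures reported above, and the subtopic lists are abbreviated, not the paper's full taxonomy.

```python
# Illustrative sketch of the topic -> subtopic hierarchy from Yang et al. (2025).
# Shares are the topic-level numbers quoted in this article; subtopics abbreviated.
taxonomy = {
    "Productivity & Workflow": {
        "share": 0.36,
        "subtopics": {"Document editing": 0.08, "Email management": 0.07},
    },
    "Learning & Research": {
        "share": 0.21,
        "subtopics": {"Courses": 0.13, "Research summarization & analysis": 0.08},
    },
}

# The two cognitively heavy topics together reproduce the 57% headline figure.
top_two = sum(t["share"] for t in taxonomy.values())
print(f"Productivity + Learning = {top_two:.0%}")  # Productivity + Learning = 57%
```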

The study also finds that earlier adopters, users in higher-GDP countries, and knowledge workers (digital tech, academia, finance, marketing) are disproportionately represented, both in adoption and in usage intensity. Over time, a telling longitudinal pattern emerges: users gradually shift away from travel and media toward productivity, learning, and career topics, suggesting that as people get more comfortable with AI agents, they gravitate toward cognitively heavier use.

The Learning Cost Nobody Talks About

Shen & Tamkin (2026), working through Anthropic’s Fellows Program, take a different starting point. Instead of asking what people do with AI, they ask what it costs them. Their randomized experiment assigned 52 experienced Python developers to complete tasks using an unfamiliar async library (Trio), with or without AI assistance. After the task, everyone took the same comprehension quiz. No AI allowed.

The results were sobering. The AI-assisted group scored 17 percentage points lower on the knowledge quiz (Cohen’s d = 0.738, p = 0.010). Even more striking: they did not actually finish faster. Task completion time was not significantly different between the two groups (p = 0.391). The productivity gains that AI assistance is famous for? In this setting, learning something genuinely new, they largely did not materialize.

Outcome              | AI Group | No-AI Group | Result
Task completion time | ~24 min  | ~25 min     | No significant difference (p = 0.391)
Knowledge quiz score | ~53%     | ~70%        | −17 pts (p = 0.010)
Debugging sub-score  | Lowest   | Highest     | Largest gap observed

Figure 3 · Main study results (n = 52), Shen & Tamkin (2026). Skill gap holds across all experience levels.

Debugging showed the largest performance gap between groups. That’s worth pausing on: debugging is precisely the skill you need to supervise AI-generated code and catch its errors. The capability that AI most threatens to erode is the one we’d need most in a world full of AI. 
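To make that stake concrete, here is an invented example (not from the study) of the kind of subtle defect that plausible-looking generated Python can ship with, and that only practiced debugging skill reliably catches:

```python
# Invented example, not from Shen & Tamkin: a classic Python pitfall. The
# default list is created once, at function definition time, so it is
# silently shared across every call.
def append_tag(tag, tags=[]):          # BUG: mutable default argument
    tags.append(tag)
    return tags

first = append_tag("a")
second = append_tag("b")               # carries state over from the first call
print(second)                          # ['a', 'b'], not the expected ['b']

# The fix a supervising developer should spot:
def append_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []                      # fresh list on every call
    tags.append(tag)
    return tags

print(append_tag_fixed("b"))           # ['b']
```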

It’s Not “Use AI or Don’t”: It’s How

Here’s where the story gets more nuanced, and more hopeful. Shen & Tamkin did not just measure outcomes; they also analyzed the qualitative interaction patterns of every participant through screen recordings. They identified six distinct AI usage patterns, and three of them actually preserved learning outcomes despite using AI.

Learning-preserving patterns:

Interaction Pattern       | Quiz Score | What it means
Conceptual Inquiry        | 86%        | Asks AI to explain concepts; builds real understanding
Hybrid Code + Explanation | 65%        | Uses AI for code but demands the reasoning behind it
Iterative AI Debugging    | 68%        | Engages actively, testing and refining AI output

Learning-degrading patterns:

Interaction Pattern           | Quiz Score | What it means
AI Delegation                 | 39%        | Hands off the task entirely; copy-pastes AI output
Generation then Comprehension | 24%        | Gets the full solution first, barely reviews it
Progressive AI Reliance       | 35%        | Starts independently, drifts into full dependence

Figure 4 · Six AI interaction patterns and their knowledge quiz scores, Shen & Tamkin (2026).

The dividing line is cognitive engagement. Learners who asked AI to explain its reasoning, challenged its output, or used it as a thinking partner rather than an answer machine retained their skills. Those who simply delegated, treating AI as a vending machine for solutions, paid a real learning cost. 
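As a hedged illustration, the dividing line often shows up in the prompt itself. The pattern names below come from Shen & Tamkin (2026), but the example phrasings are our own sketch, not the study's prompts.

```python
# Illustrative prompt templates only; pattern names from Shen & Tamkin (2026),
# phrasings invented here for illustration.
PROMPTS = {
    # Learning-preserving: the user stays inside the reasoning loop.
    "conceptual_inquiry": "Explain why this library structures tasks this way.",
    "hybrid_code_explanation": "Write the function, then walk me through each line.",
    "iterative_debugging": "Here is my failing attempt and the traceback; what is wrong?",
    # Learning-degrading: the user delegates the thinking entirely.
    "delegation": "Write the whole solution. I will paste it in as-is.",
}

def is_learning_preserving(pattern: str) -> bool:
    """Toy classifier reflecting the study's dividing line: does the prompt
    keep the user cognitively engaged with the reasoning?"""
    return pattern != "delegation"

print([p for p in PROMPTS if is_learning_preserving(p)])
```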

What Both Papers Are Really Saying

Taken together, the two papers complement each other. Yang et al. establish the scale of the moment: people are already using AI agents heavily for cognitively demanding tasks like learning, research, and coding. Shen & Tamkin add the warning label: engagement quality is not automatic, and passive delegation erodes the very skills (debugging, critical understanding) you would need to keep AI in check. The conclusion is not anti-AI; it is that how you use AI determines whether it amplifies or quietly hollows out your capabilities. The hopeful note is that users naturally drift toward more cognitively demanding AI use over time, and the practical takeaway is simple: use AI to extend your thinking, not replace it. Ask why, not just what, and the productivity gains come without the skill cost.

Ultimately, every practitioner sits somewhere on a spectrum. At one end: maintaining knowledge, staying in control, and challenging AI to deliver more relevant and efficient business outcomes. At the other: gradually losing knowledge, becoming increasingly dependent on AI, and eventually accepting lower-quality results without the skills to recognize or correct them. That is not a technological choice; it is a professional posture. And it is one that individuals and organizations need to make deliberately, rather than by default.

What We Are Doing Differently at Finaira

At Finaira, this is not an abstract debate. We operate at the intersection of financial services and AI adoption, which means we encounter both sides of the spectrum every day: teams eager to unlock productivity gains, and the very real risk that poorly structured adoption quietly degrades the institutional knowledge those teams have spent years building.

Our approach is built around what we call structured AI engagement: a set of adoption practices designed to ensure that AI tools enhance the capability of our people and our clients’ teams, rather than replace the thinking that makes that capability valuable.

1. Explain-Back Protocol: AI outputs are never accepted at face value. Team members are expected to articulate what the model produced, why it makes sense, and where it might be wrong, before acting on it.

2. Deliberate Skill Rings: Certain core competencies are designated as AI-free zones in training and onboarding, preserving the foundational knowledge needed to supervise and challenge AI effectively.

3. Client Adoption Framework: In client implementations, we embed AI engagement guidelines from day one: structured prompting habits, validation checkpoints, and periodic reviews to assess whether AI use is building or eroding domain expertise.

Figure 5 · Finaira’s three-pillar approach to structured AI engagement.

The research is clear that productivity and skill development are not mutually exclusive, but they do not come automatically either. Getting both requires intentional design: at the individual level in how people prompt and engage with AI, and at the organizational level in how teams structure their workflows and safeguard their expertise. That deliberate approach is what we are committed to building, both for ourselves and for the clients we work with.
