Study: Professionals Judge Colleagues Who Use AI Tools Negatively

Staff Reporter

A new study from Duke University finds that employees who use AI tools at work may face negative judgments from coworkers and managers, putting their professional reputation at risk.

According to Ars Technica, AI tools like ChatGPT, Claude, and Google Gemini are becoming more common in workplaces, promising to enhance productivity. However, research published in the Proceedings of the National Academy of Sciences by Duke’s Fuqua School of Business indicates that using these AI tools can carry hidden social costs.

The study, titled “Evidence of a Social Evaluation Penalty for Using AI,” involved four experiments with over 4,400 participants to assess both expected and actual evaluations of AI users. Results consistently showed that employees who relied on AI were viewed as lazier, less competent, less diligent, less independent, and less confident compared to those who used traditional methods or no assistance.

Notably, this social stigma against AI use cut across demographic boundaries, indicating a widespread bias. It could pose a significant obstacle to AI adoption in workplaces, as employees may hesitate to use these tools for fear of how they will be perceived by peers and superiors.

The research also highlighted that workers were less inclined to disclose their use of AI to colleagues and managers due to worries about negative repercussions. This aligns with reports of “secret cyborgs”—employees who secretly use AI due to company restrictions on AI-generated content.

The bias against AI usage even influenced hiring decisions. In simulations, managers who did not use AI themselves were less likely to hire candidates who did. In contrast, managers who frequently used AI showed a preference for candidates who also utilized these tools, underscoring how personal experience shapes perceptions.

The study found that perceptions of laziness largely explained the social penalty associated with AI use. However, this penalty diminished significantly when AI use was clearly beneficial for the task at hand.

These findings create a challenge for organizations promoting AI integration. While AI tools promise efficiency and increased productivity, the accompanying social stigma could hinder their acceptance and impose additional burdens on users and non-users alike, such as verifying the quality of AI-generated outputs or detecting AI use in academic assignments.
