
Study: Professionals Judge Colleagues Who Use AI Tools Negatively


Staff Reporter

A new study from Duke University finds that employees who use AI tools at work may face negative judgments from coworkers and managers, putting their professional reputations at risk.

According to Ars Technica, AI tools like ChatGPT, Claude, and Google Gemini are becoming more common in workplaces, promising to enhance productivity. However, research published in the Proceedings of the National Academy of Sciences by Duke’s Fuqua School of Business indicates that using these AI tools can carry hidden social costs.

The study, titled “Evidence of a Social Evaluation Penalty for Using AI,” involved four experiments with over 4,400 participants to assess both expected and actual evaluations of AI users. Results consistently showed that employees who relied on AI were viewed as lazier, less competent, less diligent, less independent, and less confident compared to those who used traditional methods or no assistance.

Interestingly, this social stigma against AI use transcended demographic boundaries, indicating a widespread bias. This could pose a significant obstacle to AI adoption in workplaces, as employees might hesitate to use these tools out of fear of how they’ll be perceived by peers and superiors.

The research also found that workers were less inclined to disclose their AI use to colleagues and managers for fear of negative repercussions. This aligns with reports of “secret cyborgs”: employees who use AI covertly because their companies restrict AI-generated content.

The bias against AI usage even influenced hiring decisions. In simulations, managers who did not use AI themselves were less likely to hire candidates who did. In contrast, managers who frequently used AI showed a preference for candidates who also utilized these tools, underscoring how personal experience shapes perceptions.


The study found that perceptions of laziness largely explained the social penalty associated with AI use. However, this penalty diminished significantly when AI use was clearly beneficial for the task at hand.

These findings pose a challenge for organizations promoting AI integration. While AI tools promise efficiency and productivity gains, the accompanying social stigma could hinder their acceptance and impose additional burdens on users and non-users alike, such as verifying the quality of AI-generated output or detecting AI use in academic assignments.
