
Study: Professionals Judge Colleagues Who Use AI Tools Negatively


Staff Reporter

A new study from Duke University reveals that employees who use AI tools at work may face negative judgments from coworkers and managers, putting their professional reputations at risk.

According to Ars Technica, AI tools like ChatGPT, Claude, and Google Gemini are becoming more common in workplaces, promising to enhance productivity. However, research from Duke's Fuqua School of Business, published in the Proceedings of the National Academy of Sciences, indicates that using these AI tools can carry hidden social costs.

The study, titled “Evidence of a Social Evaluation Penalty for Using AI,” involved four experiments with over 4,400 participants to assess both expected and actual evaluations of AI users. Results consistently showed that employees who relied on AI were viewed as lazier, less competent, less diligent, less independent, and less confident compared to those who used traditional methods or no assistance.

Interestingly, this social stigma against AI use transcended demographic boundaries, indicating a widespread bias. This could pose a significant obstacle to AI adoption in workplaces, as employees might hesitate to use these tools out of fear of how they’ll be perceived by peers and superiors.

The research also found that workers were less inclined to disclose their AI use to colleagues and managers for fear of negative repercussions. This aligns with reports of "secret cyborgs": employees who use AI covertly because their companies restrict AI-generated content.

The bias against AI usage even influenced hiring decisions. In simulations, managers who did not use AI themselves were less likely to hire candidates who did. In contrast, managers who frequently used AI showed a preference for candidates who also utilized these tools, underscoring how personal experience shapes perceptions.

The study found that perceptions of laziness largely explained the social penalty associated with AI use. However, this penalty diminished significantly when AI use was clearly beneficial for the task at hand.

These findings create a challenge for organizations promoting AI integration. While AI tools promise efficiency and increased productivity, the accompanying social stigma could hinder their acceptance and impose additional burdens on users and non-users alike, who must verify the quality of AI-generated outputs or detect AI use in academic assignments.
