
Study: Professionals Judge Colleagues Who Use AI Tools Negatively


Staff Reporter

A new study from Duke University reveals that employees who use AI tools at work may face negative judgments from coworkers and managers, putting their professional reputation at risk.

According to Ars Technica, AI tools like ChatGPT, Claude, and Google Gemini are becoming more common in workplaces, promising to enhance productivity. However, research published in the Proceedings of the National Academy of Sciences by Duke’s Fuqua School of Business indicates that using these AI tools can carry hidden social costs.

The study, titled “Evidence of a Social Evaluation Penalty for Using AI,” involved four experiments with over 4,400 participants to assess both expected and actual evaluations of AI users. Results consistently showed that employees who relied on AI were viewed as lazier, less competent, less diligent, less independent, and less confident compared to those who used traditional methods or no assistance.

Interestingly, this social stigma against AI use transcended demographic boundaries, indicating a widespread bias. This could pose a significant obstacle to AI adoption in workplaces, as employees might hesitate to use these tools out of fear of how they’ll be perceived by peers and superiors.

The research also highlighted that workers were less inclined to disclose their use of AI to colleagues and managers due to worries about negative repercussions. This aligns with reports of “secret cyborgs”—employees who secretly use AI due to company restrictions on AI-generated content.

The bias against AI usage even influenced hiring decisions. In simulations, managers who did not use AI themselves were less likely to hire candidates who did. In contrast, managers who frequently used AI showed a preference for candidates who also utilized these tools, underscoring how personal experience shapes perceptions.


The study found that perceptions of laziness largely explained the social penalty associated with AI use. However, this penalty diminished significantly when AI use was clearly beneficial for the task at hand.

These findings pose a challenge for organizations promoting AI integration. While AI tools promise efficiency and increased productivity, the accompanying social stigma could hinder their acceptance. It may also impose additional burdens on users, who feel pressure to conceal their AI use, and on evaluators, who must verify the quality of AI-generated outputs.
