Staff Reporter
A new study from Duke University reveals that employees who utilize AI tools at work may face negative judgments from coworkers and managers, putting their professional reputation at risk.
According to Ars Technica, AI tools like ChatGPT, Claude, and Google Gemini are becoming more common in workplaces, promising to enhance productivity. However, research published in the Proceedings of the National Academy of Sciences by Duke’s Fuqua School of Business indicates that using these AI tools can carry hidden social costs.
The study, titled “Evidence of a Social Evaluation Penalty for Using AI,” involved four experiments with over 4,400 participants to assess both expected and actual evaluations of AI users. Results consistently showed that employees who relied on AI were viewed as lazier, less competent, less diligent, less independent, and less confident compared to those who used traditional methods or no assistance.
Notably, this social stigma against AI use held consistently across demographic groups, indicating a widespread bias rather than one confined to particular ages, genders, or occupations. That breadth could pose a significant obstacle to AI adoption in workplaces, as employees may hesitate to use these tools for fear of how they will be perceived by peers and superiors.
The research also found that workers were reluctant to disclose their AI use to colleagues and managers, citing worries about negative repercussions. This aligns with reports of "secret cyborgs": employees who quietly use AI without disclosing it, often because their companies restrict AI-generated content.
The bias against AI use even influenced hiring decisions. In simulated hiring scenarios, managers who did not use AI themselves were less likely to select candidates who did. By contrast, managers who frequently used AI preferred candidates who also used these tools, underscoring how personal experience shapes perceptions.
The study found that perceptions of laziness largely explained the social penalty associated with AI use. However, this penalty diminished significantly when AI use was clearly beneficial for the task at hand.
These findings pose a challenge for organizations promoting AI integration. While AI tools promise efficiency and increased productivity, the accompanying social stigma could hinder their acceptance and impose additional burdens on evaluators, who must verify the quality of AI-generated output or detect undisclosed AI use, as in academic assignments.