AI is probably going to affect your job. We just don’t yet know when or how much — or how to feel about it exactly.
Most Americans agree that AI will have a major impact on workers in the next 20 years, and they’re more likely to say it will hurt more than help, according to a new survey from the Pew Research Center.
But at the same time, most Americans think AI will have little or no impact on them personally.
In other words, AI will harm thee, but not me.
That’s a similar sentiment to what Vox heard talking to workers who’ve deployed generative AI tools like ChatGPT, Bing, and Bard at work. Knowledge workers said the software helps them save time and avoid drudge work, allowing them to write code more quickly or spin up business memos or marketing copy with just a few prompts. But, to a person, these workers felt that even though others’ jobs might be at risk of being obviated by AI, theirs was likely safe thanks in part to their mastery over those tools.
The refrain was frequently a version of this tweet: “AI will not replace you. A person using AI will.” While people are certainly embracing some forms of AI, they find some types, like those that would hire, fire, or monitor them, distasteful. That could be an issue depending on how exactly AI becomes integrated into the workplace.
The truth is that while AI tools show a remarkable ability to replicate what was often high-paid human work, we don’t yet know if that will translate into less work for humans or simply different — and perhaps even better — work. A recent study by OpenAI, the makers of ChatGPT and its more advanced successor GPT-4, found that high-paid jobs that require degrees had the most exposure to the capabilities of these tools. The study didn’t say whether those jobs would be erased or augmented by the technology.
Other forms of AI have been incorporated into various workplace applications in both manual and computer-assisted work for the last decade, according to Julia Dhar, managing director, partner, and global lead of the Behavioral Science Lab at Boston Consulting Group. In manufacturing, that’s meant AI decides when to start producing one good instead of another based on sales and other demand forecasts. In services, it’s shown up in call centers, prompting workers to offer different responses based on how the interaction progresses or even the tone of a person’s voice.
But so far, given the costs and technical capabilities needed to scale it, AI penetration in the workplace is still low. Dhar sees that as an opportunity to make sure that the way AI is used at work is beneficial both to companies and to workers.
“I think that we have focused not enough of the public conversation around trust,” Dhar said. “We’ve talked about trustworthy AI, but we have talked hardly at all about trust between employers and employees, and how this could be a trust-building opportunity rather than a trust-destroying opportunity.”
The Pew study, which surveyed more than 11,000 Americans, suggests that trust is lacking. While people like certain types of AI at work, lots of it has them on edge, specifically when it’s used in hiring, firing, and monitoring.
Seventy-one percent opposed the idea of AI making final hiring decisions (just 7 percent were for it) and 55 percent were against its use in making firing decisions, according to the Pew report. A plurality didn’t want it to be used to review applications at all or to decide who gets promotions. Many felt AI lacked the human touch that would allow it to see things like the potential in a candidate who didn’t exactly match a job description or how well a person might get along with their coworkers.
The use of AI is already commonplace in so-called applicant tracking software, which most major companies use in their hiring process. This widespread technology allows companies to use keywords or criteria — like whether or not they have a college degree or a gap in their resume — to automatically winnow down the mass of incoming online applications. But many, including employers themselves, fear that those broad strokes could end up excluding people who would be perfectly good candidates.
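The broad-strokes filtering described above can be sketched as a toy keyword screen. This is a minimal illustration only; the criteria, names, and flag structure here are hypothetical, not any vendor's actual logic:

```python
# Toy sketch of keyword-based applicant screening.
# All criteria below are hypothetical examples, not a real ATS's rules.

REQUIRED_KEYWORDS = {"python", "sql"}     # hypothetical must-have terms
DISQUALIFIERS = {"employment_gap": True}  # e.g., auto-reject resume gaps

def passes_screen(resume_text: str, flags: dict) -> bool:
    """Return True if the resume clears the automated keyword screen."""
    words = set(resume_text.lower().split())
    # Reject if any required keyword is missing from the resume text.
    if not REQUIRED_KEYWORDS.issubset(words):
        return False
    # Reject if any disqualifying flag matches.
    return not any(flags.get(k) == v for k, v in DISQUALIFIERS.items())

applicants = [
    ("Ada",  "Experienced in python and sql analytics", {}),
    ("Ben",  "Marketing specialist with sql skills",    {}),
    ("Cara", "python and sql engineer",                 {"employment_gap": True}),
]
shortlist = [name for name, text, flags in applicants
             if passes_screen(text, flags)]
print(shortlist)  # only Ada clears the screen
```

The rigidity is the point: Ben and Cara might both be strong candidates, but a missing keyword or a flagged resume gap removes them before a human ever looks.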
Jason Schloetzer, associate professor at Georgetown University’s McDonough School of Business, found that more than half of human resource managers are either using AI-based tech in hiring or intend to do so very soon. He says that AI is creeping into more advanced stages of the hiring process, like the first round of interviews, where candidates record answers to employer questions on a webcam and employers use AI to analyze their responses, and even their body language, to decide whether they make it to the next round.
“It’s prevalent enough that our students are being trained by career services on how to handle those interviews,” Schloetzer added.
In a way, these practices have encouraged workers and candidates to use AI themselves. Many are turning to tools like ChatGPT to write their resumes or cover letters — in part to offload a tedious task, but also as a way to fight back in the hiring process, bot to bot.
Unsurprisingly, most people surveyed by Pew also oppose using AI in creepier ways, like monitoring their movements and facial expressions while they work or tracking when they’re at their desks and what exactly they’re doing. This sort of technology has become increasingly common in the workplace, from Amazon warehouses to the office, since the start of the pandemic as bosses, leery of remote work and quiet quitting, try to ensure productivity. But as the Wall Street Journal reported, there’s little proof the technology works, and some evidence suggests it can even be counterproductive, causing people to be demoralized and less productive. Such so-called productivity trackers have also led to a rise in people trying to outwit them, with hacks like mouse jigglers — devices that can physically move a mouse on a desk with no human present — that make it look like they’re working.
Vox spoke with a professional services worker at a midsize marketing company that uses activity monitoring software to track remote workers’ keystrokes and mouse movements and takes occasional photographs to check that she’s at her computer. The employee, who asked that we not use her name so as not to get her in trouble at work, said that while she finds the software annoying, she’s developed ways to get around it. She browses social media on her personal phone and makes sure not to slack off for more than 10 minutes at a time so the software doesn’t flag her to her bosses.
Overall, she’s ambivalent and doesn’t think it affects her productivity one way or another. She does believe there’s a bright side in that she doesn’t do any work that’s not at her computer, so when she’s off she’s really off.
“I hate that I don’t hate it,” she said, but added that she’d probably opt next time for a job that didn’t track her. As for the facial monitoring, she said, “If they want to see how busted I show up for my remote job, then good for them.”
BCG’s Dhar, however, warns against such monitoring AI, saying it causes companies to “mistake activity for productivity.”
“It really sends a message to people that doing anything that is observable is better than doing the very often unobservable hard work of human cognition or relationship building or spotting safety hazards,” she said.
For now, it’s impossible to predict how exactly AI adoption will impact the workplace.
Georgetown’s Schloetzer said it will likely mean some jobs are lost and some are added, but that, for the most part, existing jobs will be reconfigured. What’s certain is that prominent use of AI in the workplace will eventually happen.
“I don’t even think it’s worth debating this stuff,” Schloetzer said. “I think we just need to be prepared for it to be rolled out.”