How Walmart, Delta, Chevron and Starbucks are using AI to monitor employee messages

LeonardoBjj

Hayden Field @HAYDENFIELD


  • Aware, an AI firm specializing in analyzing employee messages, said companies including Walmart, Delta, T-Mobile, Chevron and Starbucks are using its technology.
  • Aware said its data repository contains messages that represent about 20 billion individual interactions across more than 3 million employees.
  • “A lot of this becomes thought crime,” Jutta Williams, co-founder of Humane Intelligence, said of AI employee surveillance technology in general. She added, “This is treating people like inventory in a way I’ve not seen.”

Cue the George Orwell reference.

Depending on where you work, there’s a significant chance that artificial intelligence is analyzing your messages on Slack, Microsoft Teams, Zoom and other popular apps.

Huge U.S. employers such as Walmart, Delta Air Lines, T-Mobile, Chevron and Starbucks, as well as European brands including Nestle and AstraZeneca, have turned to a seven-year-old startup, Aware, to monitor chatter among their rank and file, according to the company.

Jeff Schumann, co-founder and CEO of the Columbus, Ohio-based startup, says the AI helps companies “understand the risk within their communications,” getting a read on employee sentiment in real time, rather than depending on an annual or twice-per-year survey.

Using the anonymized data in Aware’s analytics product, clients can see how employees of a certain age group or in a particular geography are responding to a new corporate policy or marketing campaign, according to Schumann. Aware’s dozens of AI models, built to read text and process images, can also identify bullying, harassment, discrimination, noncompliance, pornography, nudity and other behaviors, he said.
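CNBC doesn’t detail how Aware’s models actually work. As a rough illustration of what message-level screening can look like in general, here is a minimal sketch using the Hugging Face transformers library and the public unitary/toxic-bert toxicity model; the model choice, threshold and sample messages are assumptions for illustration, not anything Aware has disclosed.

```python
# Sketch: score workplace messages for toxicity with a public model.
# Illustrative only; Aware's actual models are proprietary.
from transformers import pipeline

# unitary/toxic-bert is a public BERT model fine-tuned on toxic-comment data.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

messages = [
    "Great work on the launch, team!",
    "Send that report again and I'll make your life miserable.",
]

for msg in messages:
    result = classifier(msg)[0]  # e.g. {'label': 'toxic', 'score': 0.98}
    verdict = "FLAG" if result["label"] == "toxic" and result["score"] > 0.90 else "ok"
    print(f"{verdict:>4}  {result['score']:.2f}  {msg}")
```

In a real deployment, scores like these would presumably feed the kind of client-defined policy thresholds described later in the article, rather than triggering action on their own.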

Aware’s analytics tool — the one that monitors employee sentiment and toxicity — doesn’t have the ability to flag individual employee names, according to Schumann. But its separate eDiscovery tool can, in the event of extreme threats or other risk behaviors that are predetermined by the client, he added.

Aware said Walmart, T-Mobile, Chevron and Starbucks use its technology for governance, risk and compliance, and that type of work accounts for about 80% of the company’s business.

CNBC didn’t receive a response from Walmart, T-Mobile, Chevron, Starbucks or Nestle regarding their use of Aware. A representative from AstraZeneca said the company uses the eDiscovery product but that it doesn’t use analytics to monitor sentiment or toxicity. Delta told CNBC that it uses Aware’s analytics and eDiscovery for monitoring trends and sentiment as a way to gather feedback from employees and other stakeholders, and for legal records retention in its social media platform.

It doesn’t take a dystopian novel enthusiast to see where it could all go very wrong.

Jutta Williams, co-founder of AI accountability nonprofit Humane Intelligence, said AI adds a new and potentially problematic wrinkle to so-called insider risk programs, which have existed for years to evaluate things like corporate espionage, especially within email communications.

Speaking broadly about employee surveillance AI rather than Aware’s technology specifically, Williams told CNBC: “A lot of this becomes thought crime.” She added, “This is treating people like inventory in a way I’ve not seen.”

Employee surveillance AI is a rapidly expanding but niche piece of a larger AI market that’s exploded in the past year, following the launch of OpenAI’s ChatGPT chatbot in late 2022. Generative AI quickly became the buzzy phrase for corporate earnings calls, and some form of the technology is automating tasks in just about every industry, from financial services and biomedical research to logistics, online travel and utilities.
 
Aware’s revenue has jumped 150% per year on average over the past five years, Schumann told CNBC, and its typical customer has about 30,000 employees. Top competitors include Qualtrics, Relativity, Proofpoint, Smarsh and Netskope.

By industry standards, Aware is staying quite lean. The company last raised money in 2021, when it pulled in $60 million in a round led by Goldman Sachs Asset Management. Compare that with large language model, or LLM, companies such as OpenAI and Anthropic, which have raised billions of dollars each, largely from strategic partners.

‘Tracking real-time toxicity’

Schumann started the company in 2017 after spending almost eight years working on enterprise collaboration at insurance company Nationwide.

Before that, he was an entrepreneur. And Aware isn’t the first company he’s started that’s elicited thoughts of Orwell.
In 2005, Schumann founded a company called BigBrotherLite.com. According to his LinkedIn profile, the business developed software that “enhanced the digital and mobile viewing experience” of the CBS reality series “Big Brother.” In Orwell’s classic novel “1984,” Big Brother was the leader of a totalitarian state in which citizens were under perpetual surveillance.

“I built a simple player focused on a cleaner and easier consumer experience for people to watch the TV show on their computer,” Schumann said in an email.


At Aware, he’s doing something very different.

Every year, the company puts out a report aggregating insights from the billions — in 2023, the number was 6.5 billion — of messages sent across large companies, tabulating perceived risk factors and workplace sentiment scores. Schumann refers to the trillions of messages sent across workplace communication platforms every year as “the fastest-growing unstructured data set in the world.”

When including other types of content being shared, such as images and videos, Aware’s analytics AI analyzes more than 100 million pieces of content every day. In so doing, the technology creates a company social graph, looking at which teams internally talk to each other more than others.
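A “company social graph” of which teams talk to which maps naturally onto a weighted graph: nodes are teams, edge weights count messages between them. A minimal sketch using the networkx library, with invented team names and message pairs:

```python
# Sketch: build a team-level communication graph from message metadata.
import networkx as nx

# Hypothetical (sender_team, recipient_team) pairs extracted from messages.
interactions = [
    ("engineering", "product"), ("engineering", "product"),
    ("product", "marketing"), ("engineering", "support"),
]

G = nx.Graph()
for src, dst in interactions:
    if G.has_edge(src, dst):
        G[src][dst]["weight"] += 1  # one more message between these teams
    else:
        G.add_edge(src, dst, weight=1)

# Which team pairs talk to each other the most?
for src, dst, data in sorted(G.edges(data=True), key=lambda e: -e[2]["weight"]):
    print(f"{src} <-> {dst}: {data['weight']} messages")
```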

“It’s always tracking real-time employee sentiment, and it’s always tracking real-time toxicity,” Schumann said of the analytics tool. “If you were a bank using Aware and the sentiment of the workforce spiked in the last 20 minutes, it’s because they’re talking about something positively, collectively. The technology would be able to tell them whatever it was.”
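Mechanically, a sentiment “spike in the last 20 minutes” is a sliding-window aggregate compared against a baseline. A minimal standard-library sketch; the baseline, threshold and scores here are invented for illustration:

```python
# Sketch: flag a sentiment spike within a rolling 20-minute window.
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=20)
window = deque()   # (timestamp, sentiment_score) pairs, oldest first
BASELINE = 0.1     # hypothetical long-run average sentiment

def observe(ts: datetime, score: float) -> bool:
    """Add one scored message; return True if the window mean spikes."""
    window.append((ts, score))
    while window and ts - window[0][0] > WINDOW:
        window.popleft()  # drop messages older than 20 minutes
    mean = sum(s for _, s in window) / len(window)
    return mean > BASELINE + 0.5  # arbitrary spike threshold

now = datetime.now()
for i, score in enumerate([0.2, 0.9, 0.95, 0.8]):
    if observe(now + timedelta(minutes=i), score):
        print("sentiment spike detected")
```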

Aware confirmed to CNBC that it uses data from its enterprise clients to train its machine-learning models. The company’s data repository contains about 6.5 billion messages, representing about 20 billion individual interactions across more than 3 million unique employees, the company said.

When a new client signs up for the analytics tool, it takes Aware’s AI models about two weeks to train on employee messages and get to know the patterns of emotion and sentiment within the company so it can see what’s normal versus abnormal, Schumann said.

“It won’t have names of people, to protect the privacy,” Schumann said. Rather, he said, clients will see that “maybe the workforce over the age of 40 in this part of the United States is seeing the changes to [a] policy very negatively because of the cost, but everybody else outside of that age group and location sees it positively because it impacts them in a different way.”
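The cohort-level reporting Schumann describes is, mechanically, a group-by aggregation over scored messages. A minimal sketch with pandas; the column names, cohorts and scores are invented for illustration:

```python
# Sketch: cohort-level sentiment without individual names.
import pandas as pd

df = pd.DataFrame({  # invented, pre-anonymized per-message scores
    "age_band":  ["40+", "40+", "under_40", "under_40"],
    "region":    ["midwest", "midwest", "west", "west"],
    "sentiment": [-0.6, -0.4, 0.5, 0.3],
})

# Aggregate by cohort so no row maps back to one person.
print(df.groupby(["age_band", "region"])["sentiment"].mean())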


But Aware’s eDiscovery tool operates differently. A company can set up role-based access to employee names depending on the “extreme risk” category of the company’s choice, which instructs Aware’s technology to pull an individual’s name, in certain cases, for human resources or another company representative.

“Some of the common ones are extreme violence, extreme bullying, harassment, but it does vary by industry,” Schumann said, adding that in financial services, suspected insider trading would be tracked.

For instance, a client can specify a “violent threats” policy, or any other category, using Aware’s technology, Schumann said, and have the AI models monitor for violations in Slack, Microsoft Teams and Workplace by Meta. The client could also couple that with rule-based flags for certain phrases, statements and more. If the AI found something that violated a company’s specified policies, it could provide the employee’s name to the client’s designated representative.
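The combination described here, fixed phrase rules layered on top of a model score, is a common screening pattern. A standard-library sketch; the policy name, patterns and threshold are hypothetical, not Aware’s actual rules:

```python
# Sketch: combine a phrase-rule list with a model score to flag a message.
import re

POLICY_PHRASES = {  # hypothetical per-policy phrase rules
    "violent_threats": [r"\bI('ll| will) hurt you\b", r"\bwatch your back\b"],
}

def flag(message: str, model_score: float, threshold: float = 0.9):
    """Return (policy, reason) if either the rules or the model trip a policy."""
    for policy, patterns in POLICY_PHRASES.items():
        for pat in patterns:
            if re.search(pat, message, re.IGNORECASE):
                return policy, f"matched phrase rule {pat!r}"
    if model_score >= threshold:
        return "violent_threats", f"model score {model_score:.2f} >= {threshold}"
    return None

print(flag("You'd better watch your back.", model_score=0.4))
```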

This type of practice has been used for years within email communications. What’s new is the use of AI and its application across workplace messaging platforms such as Slack and Teams.

Amba Kak, executive director of the AI Now Institute at New York University, worries about using AI to help determine what’s considered risky behavior.

“It results in a chilling effect on what people are saying in the workplace,” said Kak, adding that the Federal Trade Commission, Justice Department and Equal Employment Opportunity Commission have all expressed concerns on the matter, though she wasn’t speaking specifically about Aware’s technology. “These are as much worker rights issues as they are privacy issues.”

Schumann said that though Aware’s eDiscovery tool allows security or HR investigations teams to use AI to search through massive amounts of data, a “similar but basic capability already exists today” in Slack, Teams and other platforms.

“A key distinction here is that Aware and its AI models are not making decisions,” Schumann said. “Our AI simply makes it easier to comb through this new data set to identify potential risks or policy violations.”
 

Privacy concerns

Even when data is aggregated or anonymized, research suggests the protection is flawed. A landmark study on data privacy using 1990 U.S. Census data showed that 87% of Americans could be identified solely by ZIP code, birth date and gender. Aware clients using its analytics tool have the power to add metadata to message tracking, such as employee age, location, division, tenure or job function.
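The re-identification risk comes from how few people share any given combination of such quasi-identifiers. A toy sketch of checking that uniqueness in a dataset; the records are invented:

```python
# Sketch: count how many records are unique on their quasi-identifiers.
from collections import Counter

records = [  # invented (zip, birth_date, gender) tuples
    ("43215", "1980-03-14", "F"),
    ("43215", "1980-03-14", "F"),
    ("43215", "1991-07-02", "M"),
    ("90210", "1975-11-30", "F"),
]

counts = Counter(records)
unique = sum(1 for c in counts.values() if c == 1)
print(f"{unique}/{len(records)} records are uniquely identifiable "
      f"by (ZIP, birth date, gender) alone")
```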


“What they’re saying is relying on a very outdated and, I would say, entirely debunked notion at this point that anonymization or aggregation is like a magic bullet through the privacy concern,” Kak said.

Additionally, the type of AI model Aware uses can be effective at generating inferences from aggregate data, making accurate guesses, for instance, about personal identifiers based on language, context, slang terms and more, according to recent research.

“No company is essentially in a position to make any sweeping assurances about the privacy and security of LLMs and these kinds of systems,” Kak said. “There is no one who can tell you with a straight face that these challenges are solved.”

And what about employee recourse? If an interaction is flagged and a worker is disciplined or fired, it’s difficult for them to offer a defense if they’re not privy to all of the data involved, Williams said.

“How do you face your accuser when we know that AI explainability is still immature?” Williams said.

Schumann said in response: “None of our AI models make decisions or recommendations regarding employee discipline.”

“When the model flags an interaction,” Schumann said, “it provides full context around what happened and what policy it triggered, giving investigation teams the information they need to decide next steps consistent with company policies and the law.”

https://www.cnbc.com/2024/02/09/ai-...ack-teams-messages-using-tech-from-aware.html
 
As the resident HR guy, this is old news. All internal messages have always been accessible and always monitored for sentiment.

If you think we AREN’T reading everything you say, you’re simply naive.

It’s very important for retention efforts and sentiment analysis.

It’s also really funny when people get fired to be able to read them the things that led to their firing.
 
Let’s just do away with privacy across the board. If nobody anywhere has any secrets then we will be on a level playing field. I’m fine with it, as long as it includes politicians, business executives, religious leaders, and government officials.
 
Sounds like a shitty company to work for. The only time messages should actively be reviewed is for legal matters or similar instances. It seems like a waste of resources to be reviewing employees’ messages for the sake of it. If this is what it takes to manage people, that appears to be a management issue.
 
Yeah, that’s a shit company led by shit people for sure.
 
It’s every company. If you think otherwise, you’re mistaken.
 
Yep. Many companies have to for regulatory reasons. And we all agree to it when we sign our employment contracts; it’s in the code of conduct, etc. The company owns the hardware and software we use to send those messages.

Exactly. My company has the right to discipline me if I post anything on social media that might bring the company into disrepute. I've known employees to be given Final Written Warnings for something they posted on Facebook.

Thank God the company doesn't know about my Stormfront account...:oops:
 
Not surprised. As the tech becomes more widely available they'll use this to surveil and gamify every single aspect of employee life.

At some point we'll have to go full Battlestar Galactica and completely disconnect from anything that connects to the internet to retain some level of privacy. Analog everything. Can't surveil something written on a piece of paper. The NPCs will allow themselves to be fully sucked into the panopticon, but that's to be expected as there's truly nothing going on in those noggins of theirs.
 