Hayden Field @HAYDENFIELD
Cue the George Orwell reference.
Depending on where you work, there’s a significant chance that artificial intelligence is analyzing your messages on Slack, Microsoft Teams, Zoom and other popular apps.
Huge U.S. employers such as Walmart, Delta Air Lines, T-Mobile, Chevron and Starbucks, as well as European brands including Nestle and AstraZeneca, have turned to a seven-year-old startup, Aware, to monitor chatter among their rank and file, according to the company.
Jeff Schumann, co-founder and CEO of the Columbus, Ohio-based startup, says the AI helps companies “understand the risk within their communications,” getting a read on employee sentiment in real time, rather than depending on an annual or twice-per-year survey.
Using the anonymized data in Aware’s analytics product, clients can see how employees of a certain age group or in a particular geography are responding to a new corporate policy or marketing campaign, according to Schumann. Aware’s dozens of AI models, built to read text and process images, can also identify bullying, harassment, discrimination, noncompliance, pornography, nudity and other behaviors, he said.
Aware’s analytics tool — the one that monitors employee sentiment and toxicity — doesn’t have the ability to flag individual employee names, according to Schumann. But its separate eDiscovery tool can, in the event of extreme threats or other risk behaviors that are predetermined by the client, he added.
Aware said Walmart, T-Mobile, Chevron and Starbucks use its technology for governance, risk and compliance, and that type of work accounts for about 80% of the company’s business.
CNBC didn’t receive a response from Walmart, T-Mobile, Chevron, Starbucks or Nestle regarding their use of Aware. A representative from AstraZeneca said the company uses the eDiscovery product but that it doesn’t use analytics to monitor sentiment or toxicity. Delta told CNBC that it uses Aware’s analytics and eDiscovery for monitoring trends and sentiment as a way to gather feedback from employees and other stakeholders, and for legal records retention in its social media platform.
It doesn’t take a dystopian novel enthusiast to see where it could all go very wrong.
Jutta Williams, co-founder of AI accountability nonprofit Humane Intelligence, said AI adds a new and potentially problematic wrinkle to so-called insider risk programs, which have existed for years to evaluate things like corporate espionage, especially within email communications.
Speaking broadly about employee surveillance AI rather than Aware’s technology specifically, Williams told CNBC: “A lot of this becomes thought crime.” She added, “This is treating people like inventory in a way I’ve not seen.”
Employee surveillance AI is a rapidly expanding but niche piece of a larger AI market that’s exploded in the past year, following the launch of OpenAI’s ChatGPT chatbot in late 2022. Generative AI quickly became the buzzy phrase for corporate earnings calls, and some form of the technology is automating tasks in just about every industry, from financial services and biomedical research to logistics, online travel and utilities.
- Aware, an AI firm specializing in analyzing employee messages, said companies including Walmart, Delta, T-Mobile, Chevron and Starbucks are using its technology.
- Aware said its data repository contains messages that represent about 20 billion individual interactions across more than 3 million employees.
- “A lot of this becomes thought crime,” Jutta Williams, co-founder of Humane Intelligence, said of AI employee surveillance technology in general. She added, “This is treating people like inventory in a way I’ve not seen.”