
Slack has been siphoning user data to train AI models without asking permission

Smarter collaboration, but at what cost?


FACEPALM: For organizations, the specter of internal data being used to train AI models raises serious concerns around security and compliance. But Slack has still apparently been slurping up messages, files, and data to train its AI features behind the scenes. Even worse, users were automatically opted into this arrangement without knowledge or consent.

The revelation, which blew up online this week after a user called it out on X/Twitter, has plenty of people peeved that Slack didn’t make this clearer from the jump. Corey Quinn, an executive at Duckbill Group, kicked up the fuss with an angry post asking, “I’m sorry Slack, you’re doing f**king WHAT with user DMs, messages, files, etc?”

Quinn was referring to an excerpt from Slack’s Privacy Principles that reads, “To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as Other Information (including usage information) as defined in our Privacy Policy and in your customer agreement.”

Slack was quick to respond under the same post, confirming that it is indeed using customer content to train certain AI tools in the app. But it drew a line: that data isn’t going toward its premium AI offering, which it bills as completely isolated from user information.

Still, most were caught off guard by Slack’s main AI features relying on an open tap into everyone’s private conversations and files. Several users argued there should have been a prominent heads-up letting people opt out before any data collection commenced.

The opt-out process itself is…

Read The Full Article at Techspot

