🚨 Stanford University researchers studied the privacy policies of six major AI companies, including OpenAI, Google, and Meta, and the conclusions are chilling: all six use your conversations to train their models. This is enabled by default, and none of them have meaningfully sought your consent.

Here are the paper's actual findings. The researchers evaluated these documents against the California Consumer Privacy Act, the most comprehensive privacy law in the US, and the results are worse than you might imagine:

- Every company collects your chat data by default and feeds it into model training.
- Some companies retain your conversations indefinitely, with no expiration date and no automatic deletion. Your data just sits there forever, feeding future versions of the model.
- Some even have human employees read your chat logs as part of the training process. Not anonymized summaries, but your actual, unedited conversations.

But the real danger lies elsewhere. Google, Meta, Microsoft, and Amazon also operate search engines, social media platforms, e-commerce websites, and cloud services. Your AI conversations don't just stay inside the chatbot; they're merged with everything else these companies already know about you:

- Your search history
- Your shopping data
- Your social media activity
- Files you upload

The researchers describe a real-world scenario that should make you think. You ask an AI chatbot for a heart-healthy dinner recipe. The model infers you might have cardiovascular disease. That inference flows into the company's wider ecosystem: you start seeing drug ads, the information reaches insurance databases, and the effect accumulates over time. You just asked a question about dinner, but the system built a health profile on you.

The problem gets even worse with children's data. Four of the six companies appear to incorporate children's chat data into model training. Google has announced it will use teenagers' data for training on an opt-in basis. Anthropic says it doesn't collect children's data but doesn't verify user age. Microsoft says it collects data from users under 18 but claims it won't use it for training. Children are legally unable to consent to any of this, and most parents are completely unaware that it is happening.

The opt-out mechanism is a complete maze. Some companies offer an opt-out option; others don't. Those that do bury the button deep in settings pages that most users will never find. The privacy policies themselves are written in obscure legal jargon, making them difficult to parse even for researchers who read these documents for a living.

And there's an unresolved structural problem: there is currently no comprehensive federal privacy law regulating how AI companies handle chat data, and the patchwork of state laws leaves huge loopholes.

The researchers explicitly call for three measures:

1. Mandatory federal regulation;
2. Active opt-in, rather than passive opt-out, for model training;
3. Automatic filtering of personal information before chat input enters the training pipeline (see the sketch below).

Currently, none of these three measures exists.

The deeply unsettling truth is this: every time you type something into ChatGPT, Gemini, Claude, Meta AI, Copilot, or Alexa, you're contributing to a training dataset. Your medical issues, your emotional state, your financial details, your uploaded private documents. And the companies doing this are doing everything they can to make it impossible for you to stop them.
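To make that third recommendation concrete, here is a minimal sketch of what automatic PII filtering could look like before a chat log enters a training corpus. This is a toy illustration, not how any of these companies actually operate: the `PII_PATTERNS` table and the `redact_pii` helper are assumptions invented for this example, and the regexes cover only a few obvious identifier formats.

```python
import re

# Toy PII scrubber: redact obvious identifiers from a chat message
# before it is added to a training corpus. The patterns below are
# illustrative assumptions, not an exhaustive or production-grade set.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(text: str) -> str:
    """Replace every detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Scrub a message on its way into a (hypothetical) training pipeline.
message = "My card is 4111 1111 1111 1111, email me at jane@example.com"
print(redact_pii(message))
# -> My card is [CARD], email me at [EMAIL]
```

A real pipeline would have to go much further, for example using named-entity recognition to catch names, addresses, and health details that fixed regexes can never cover. The point of the researchers' recommendation is that even this baseline scrubbing is not required of anyone today.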
PS: We're not paying users; we're teaching materials that pay!