Anthropic has made sweeping changes to how it handles the data of its consumer-tier users. The company behind the popular AI chatbot Claude announced on Thursday that it will start training its AI on user data unless users opt out by 28 September.
The AI startup also announced that it is extending its data retention period for messages sent to Claude to five years for users who don't opt out of AI training.
Notably, Anthropic did not previously use consumer messages to train its AI models, and the company said it deleted all user prompts and outputs within 30 days unless legally required to keep them. However, inputs and outputs flagged for policy violations were retained for up to two years.
The new policy will affect all of Claude's consumer-tier users, including Claude Free, Pro, and Max subscribers, as well as their use of Claude Code from linked accounts. However, the company's commercial offerings, such as Claude for Work, Claude Gov, Claude for Education, and the API, won't be affected by the change, including when the service is accessed via third parties like Amazon Bedrock and Google Cloud’s Vertex AI.
Anthropic says that by participating in model training, users will “help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations.”
“You’ll also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users,” the company added.
Existing Claude users have until 28 September
Anthropic will soon start showing existing users a pop-up window headlined “Updates to Consumer Terms and Policy”. The subtext includes a “You can now improve Claude” toggle that is turned on by default, so unsuspecting users may click Accept and have their data used to train the AI models.
Users can also click on the ‘Not Now’ option and defer the decision until 28 September, when they will be forced to make a choice in order to continue using Claude.
If users choose to accept the new terms, their data will be used for training AI models immediately, though Anthropic says this applies only to new or resumed chats and coding sessions, not to past conversations. However, if you revisit an old conversation and type a new message, the entire chat may be used for training.
Meanwhile, new users will be asked to select their data-training preferences during the sign-up process.
How to stop Claude from training on your chats
If you weren't aware of the new changes, it is likely that you have been opted in by default and your conversations are being used to train Anthropic's AI. Here's how you can opt out of Anthropic's AI training.
On the website:
Click your profile icon in the bottom-left corner and go to Settings
Click Privacy and you should see a ‘Help improve Claude’ toggle
Make sure the toggle is turned off
On the app:
Tap the three stacked lines in the top-left corner
Tap Settings
Tap Privacy and make sure the ‘Help improve Claude’ toggle is turned off
