{"id":16298,"date":"2023-05-02T15:55:24","date_gmt":"2023-05-02T15:55:24","guid":{"rendered":"https:\/\/entertainment.runfyers.com\/index.php\/2023\/05\/02\/how-to-ask-openai-for-your-personal-data-to-be-deleted-or-not-used-to-train-its-ais\/"},"modified":"2023-05-02T15:55:24","modified_gmt":"2023-05-02T15:55:24","slug":"how-to-ask-openai-for-your-personal-data-to-be-deleted-or-not-used-to-train-its-ais","status":"publish","type":"post","link":"https:\/\/entertainment.runfyers.com\/index.php\/2023\/05\/02\/how-to-ask-openai-for-your-personal-data-to-be-deleted-or-not-used-to-train-its-ais\/","title":{"rendered":"How to ask OpenAI for your personal data to be deleted or not used to train its AIs"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p id=\"speakable-summary\">Users of ChatGPT in Europe can now use web forms or other means provided by OpenAI to request deletion of their personal data in order to stop the chatbot processing (and producing) information about them. They can also request an opt-out of having their data used to train its AIs.<\/p>\n<p>Why might someone not want their personal data to become fodder for AI? There is a long list of possible reasons, not least the fact OpenAI never asked permission in the first place \u2014 despite <a href=\"https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/?uri=CELEX:12012P\/TXT\" target=\"_blank\" rel=\"noopener\">privacy being a human right<\/a>. Put another way, people may be concerned about what such a powerful and highly accessible technology could be used to reveal about named individuals. 
Or indeed take issue with the core flaw of large language models (LLMs) making up false information.<\/p>\n<p>ChatGPT has quickly shown itself to be an accomplished liar, including <a href=\"https:\/\/www.theguardian.com\/technology\/2023\/apr\/06\/australian-mayor-prepares-worlds-first-defamation-lawsuit-over-chatgpt-content\" target=\"_blank\" rel=\"noopener\">about named individuals<\/a> \u2014 with the risk of <a href=\"https:\/\/techcrunch.com\/2022\/06\/01\/whos-liable-for-ai-generated-lies\/\" target=\"_blank\" rel=\"noopener\">reputational damage or other types of harm<\/a> flowing if AI is able to automate fake news about you or people close to you.<\/p>\n<p>And just imagine what a highly trained mimic of how you personally converse might be able to do to you (or to your loved ones) were such an AI model to be misused.<\/p>\n<p>Another batch of issues relate to intellectual property rights. If you have a white collar job you might be worried about generative AI driving push-button commercial exploitation of a particular writing style or some other core professional expertise which could make your own labor redundant or less valuable. And, again, the tech giants behind these AI models typically aren\u2019t offering individuals any compensation for exploiting their data for profit.<\/p>\n<p>You may also have a non-individual concern \u2014 such as the risk of AI chatbots scaling bias and discrimination \u2014 and simply wish for your information not to play any part.<\/p>\n<p>Or perhaps you worry about the future of competitive markets and innovation if vast amounts of data continue to accumulate with a handful of tech giants in an era of data-dependent AI services. 
And while removing your own data from the pool is just a drop in the ocean, it\u2019s one way to register active dissent which could also encourage others to do the same \u2014 scaling into an act of collective protest.<\/p>\n<p>Additionally, you might be uncomfortable that your data is being used so opaquely \u2014 before more robust laws have been passed to govern how AI can be applied. So ahead of a proper legal governance framework for safe and trustworthy usage of such a powerful technology you may prefer to hold back your data; i.e. to wait until there are <a href=\"https:\/\/techcrunch.com\/2023\/04\/21\/eu-ai-act-generative-ai\/\" target=\"_blank\" rel=\"noopener\">stronger checks and balances applied to generative AI<\/a> operators.<\/p>\n<p>While there are lots of reasons why individuals might want to shield their information from big data mining AI giants, there are \u2014 for now \u2014 only limited controls on offer. And these limited controls are mostly only available to users in Europe, where data protection laws do already apply.<\/p>\n<p>Scroll down for details on how to exercise available data rights \u2014 or read on for the context.<\/p>\n<h2>From viral sensation to regulatory intervention<\/h2>\n<p>ChatGPT has been impossible to miss this year. The virality of the ask-it-anything \u201cgeneral purpose\u201d AI chatbot has seen the tech travelling all over the mainstream media in recent months as commentators from across the subject spectrum kick its tyres and get wowed by a simulacrum of human responsiveness which, nonetheless, is not human. 
It\u2019s just been trained on lots of our web-based chatter (among other data sources) to function as an accomplished mimic of how people communicate.<\/p>\n<p>However, the existence of such a capable-seeming natural language technology has directed attention onto the detail of how ChatGPT was developed.<\/p>\n<p>Notably, the buzz around ChatGPT has drawn particular attention from privacy and data protection regulators in the European Union \u2014 where an early intervention by Italy\u2019s data protection watchdog at the end of March, acting on powers it has under the bloc\u2019s General Data Protection Regulation (GDPR), led to a temporary suspension of ChatGPT at the start of last month.<\/p>\n<p>A major concern raised by the watchdog is whether OpenAI used people\u2019s data lawfully when it developed the technology. And it is continuing to investigate this question.<\/p>\n<p>Italy\u2019s watchdog has also taken issue with the quality of information OpenAI provides about how it\u2019s using people\u2019s data. Without proper disclosures there are questions about whether it\u2019s meeting the GDPR\u2019s fairness and accountability requirements, too.<\/p>\n<p>Additionally, the regulator has said it\u2019s worried about the safety of minors accessing ChatGPT. And it wants the company to add age verification tech.<\/p>\n<p>The GDPR also provides people in the region with a suite of data control rights \u2014 such as the ability to ask for incorrect info about them to be corrected or for their data to be deleted. And if we\u2019ve learnt anything about AI chatbots over the last few months it\u2019s how readily they lie. 
(Aka \u201challucinate\u201d in techno-solutionist speak).<\/p>\n<p>Shortly after Italy\u2019s DPA stepped in to warn OpenAI that it suspected a series of GDPR breaches, the company<a href=\"https:\/\/techcrunch.com\/2023\/04\/25\/openai-previews-business-plan-for-chatgpt-launches-new-privacy-controls\/\" target=\"_blank\" rel=\"noopener\"> launched some new privacy tools<\/a> \u2014 giving users a button to switch off a chat history feature which logged all their interactions with the chatbot, saying this would result in conversations started when the history feature had been disabled <em>not<\/em> being used to train and improve its AI models.<\/p>\n<p>That was followed by OpenAI making some privacy disclosures and presenting additional controls \u2014 timed to meet <a href=\"https:\/\/techcrunch.com\/2023\/04\/12\/chatgpt-italy-gdpr-order\/\" target=\"_blank\" rel=\"noopener\">a deadline set by the Italian DPA for it to implement a preliminary package of measures<\/a> in order to comply with the bloc\u2019s privacy rules. The upshot is OpenAI now provides web users with <em>some<\/em> say over what it does with their information \u2014 although most of the concessions it\u2019s offered are region-specific. So the first step to protecting your information from big data-driven AI miners is to live in the European Union (or European Economic Area), where data protection rights exist and are being actively enforced.<\/p>\n<p>As it stands, UK citizens still benefit from the EU data protection framework being embedded in their national law \u2014 so also have the full sweep of GDPR rights \u2014 although the <a href=\"https:\/\/techcrunch.com\/2023\/03\/08\/uk-data-reform-bill-no-2\/\" target=\"_blank\" rel=\"noopener\">government\u2019s post-Brexit reforms look set to water down<\/a> the national data protection regime, so it remains to be seen how the domestic approach might change. 
(UK ministers also <a href=\"https:\/\/techcrunch.com\/2023\/03\/29\/uk-ai-white-paper\/\" target=\"_blank\" rel=\"noopener\">recently signalled<\/a> they don\u2019t intend to bring in any bespoke rules for applying AI for the foreseeable future.)<\/p>\n<p>Beyond Europe, <a href=\"https:\/\/www.theregister.com\/2023\/04\/06\/canadas_privacy_chatgpt\/\" target=\"_blank\" rel=\"noopener\">Canada\u2019s privacy commissioner is investigating complaints<\/a> about the technology. Other countries have passed GDPR-style data protection regimes, so powers exist there for regulators to flex too.<\/p>\n<h2>How to ask OpenAI to delete personal data about you<\/h2>\n<p>OpenAI has <a href=\"https:\/\/help.openai.com\/en\/articles\/7842364-how-chatgpt-and-our-language-models-are-developed\" target=\"_blank\" rel=\"noopener\">said<\/a> that individuals in \u201ccertain jurisdictions\u201d (such as the EU) can object to the processing of their personal information by its AI models by filling out <a href=\"https:\/\/share.hsforms.com\/1UPy6xqxZSEqTrGDh4ywo_g4sk30\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">this form<\/a>. This includes the ability to make requests for deletion of AI-generated references about you, although OpenAI notes it may not grant every request since it must balance privacy requests against freedom of expression \u201cin accordance with applicable laws\u201d.<\/p>\n<p>The web form for requesting deletion of data about you is entitled \u201cOpenAI Personal Data Removal Request\u201d. 
Here\u2019s the link to it: <a href=\"https:\/\/share.hsforms.com\/1UPy6xqxZSEqTrGDh4ywo_g4sk30\" target=\"_blank\" rel=\"noopener\">https:\/\/share.hsforms.com\/1UPy6xqxZSEqTrGDh4ywo_g4sk30<\/a><\/p>\n<p>Web users are asked to provide their contact data and details of the data subject for whom the request is being made; the country whose laws apply in this person\u2019s case; to specify whether the data subject is a public figure or not (and if the former, to provide more context about what type of public figure they are); and to provide evidence of data processing in the form of prompts that generated responses from the model which mentioned the data subject, plus screenshots of relevant outputs.<\/p>\n<p>Before submitting the form, users are also asked to make sworn statements that the information provided is accurate and to acknowledge that incomplete submissions may not be acted upon by OpenAI.<\/p>\n<p>The process is similar to the \u2018right to be forgotten\u2019\u00a0<a href=\"https:\/\/reportcontent.google.com\/forms\/rtbf\" target=\"_blank\" rel=\"noopener\">web form Google has provided<\/a> for years \u2014 initially for Europeans seeking to exercise GDPR rights by having inaccurate, outdated or irrelevant personal data delisted from its search engine results.<\/p>\n<p>The GDPR provides individuals with several rights other than requesting data deletion \u2014 such as asking for their information to be corrected, restricted, or for a transfer of their personal data.<\/p>\n<p>OpenAI stipulates that individuals can seek to exercise such rights over personal information that may be included in its training data by emailing <a href=\"mailto:dsar@openai.com\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">dsar@openai.com<\/a>. However, the company told the Italian regulator that correcting inaccurate data generated by its models is not technically feasible at this point. 
So it will presumably respond to emailed requests for a correction of AI-generated disinformation by offering to delete personal data instead. (<em>But if you\u2019ve made such a request and had a response from OpenAI, get in touch by emailing tips@techcrunch.com<\/em>.)<\/p>\n<p>In its blog post, OpenAI also warns that it could deny (and\/or otherwise only partially act on) requests for other reasons, writing: \u201cPlease be aware that, in accordance with privacy laws, some rights may not be absolute. We may decline a request if we have a lawful reason for doing so. However, we strive to prioritize the protection of personal information and comply with all applicable privacy laws. If you feel we have not adequately addressed an issue, you have the right to lodge a complaint with your local supervisory authority.\u201d<\/p>\n<p>How the company handles Europeans\u2019 Data Subject Access Requests (DSARs) may determine whether ChatGPT faces a wave of user complaints which could lead to more regulatory enforcement in the region in future.<\/p>\n<p>Since OpenAI has not established a local legal entity that\u2019s responsible for its processing of EU user data, watchdogs in any Member State are empowered to act on concerns on their patch. Hence Italy\u2019s quick action.<\/p>\n<h2>How to ask OpenAI not to use your data for training AIs<\/h2>\n<p>Following the Italian DPA\u2019s intervention, OpenAI revised its privacy policy to state that the legal basis it relies upon for processing people\u2019s data to train its AIs is something that\u2019s referred to in the GDPR as \u201clegitimate interests\u201d (LI).<\/p>\n<p>In its privacy policy, OpenAI writes that its legal bases for processing \u201cyour Personal Information\u201d include [emphasis ours]:<\/p>\n<blockquote>\n<p>Our legitimate interests in protecting our Services from abuse, fraud, or security risks, or in <strong>developing, improving, or promoting our Services, including when we train our models<\/strong>. 
This may include the processing of Account Information, Content, Social Information, and Technical Information.<\/p>\n<\/blockquote>\n<p>There is still a question mark over whether relying on LI for a general purpose AI chatbot will be deemed an appropriate and valid legal basis for the processing under the GDPR, as the Italian watchdog (<a href=\"https:\/\/techcrunch.com\/2023\/04\/13\/chatgpt-spain-gdpr\/\" target=\"_blank\" rel=\"noopener\">and others<\/a>) continues to investigate.<\/p>\n<p>These detailed investigations are likely to take some time before we have any final decisions \u2014 which could, potentially, lead to orders that it stop using LI for this processing (which would leave OpenAI with the option of asking users for their consent, complicating its ability to develop the technology at the kind of scale and velocity it has to date). Although EU DPAs may ultimately decide its use of LI in this context is okay.<\/p>\n<p>In the meanwhile, OpenAI is legally required to provide users with certain rights as a consequence of its claim to be relying upon LI \u2014 notably this means it must offer a right to object to the processing.<\/p>\n<p>Facebook was also <a href=\"https:\/\/techcrunch.com\/2023\/04\/04\/facebook-tracking-ads-opt-out-eu\/\" target=\"_blank\" rel=\"noopener\">recently forced to offer such an opt out to European users<\/a> \u2014 i.e. to its processing of their data for targeting behavioral ads \u2014 also after switching to claiming LI as its legal basis for processing people\u2019s information. 
(Additionally, the company is facing <a href=\"https:\/\/techcrunch.com\/2022\/11\/21\/meta-surveillance-gdpr-right-to-object-lawsuit\/\" target=\"_blank\" rel=\"noopener\">a class action style lawsuit in the UK<\/a> for its prior failure to offer an opt out for ad targeting processing, given the GDPR contains an absolute right to object to any data processing for direct marketing \u2014 which perhaps goes some way to explaining OpenAI\u2019s keenness to emphasize it\u2019s not in the same business as adtech giant Facebook, hence its claim that: \u201cWe don\u2019t use data for selling our services, advertising, or building profiles of people \u2014 we use data to make our models more helpful for people.\u201d)<\/p>\n<p>In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on LI, pointing users towards more information about requesting an opt out \u2014 when it writes: \u201cSee\u00a0<a href=\"https:\/\/help.openai.com\/en\/articles\/5722486-how-your-data-is-used-to-improve-model-performance\" target=\"_blank\" rel=\"noopener noreferrer\">here<\/a> for instructions on how you can opt out of our use of your information to train our models.\u201d<\/p>\n<p>This link leads to another blog post where it promotes the notion that AI will \u201cimprove over time\u201d, at the same time as encouraging users not to exercise their right to object to the personal data processing by claiming that \u201cshar[ing] your data with us\u2026 helps our models become more accurate and better at solving your specific problems and it also helps improve their general capabilities and safety\u201d. 
(But, well, can we call it sharing data if the stuff was already taken without asking?)<\/p>\n<p>OpenAI then offers users a couple of choices for opting their data out of its training: either via (another) web form or directly in account settings.<\/p>\n<p>You can opt out of your data being used to train its AI by filling in this <a href=\"https:\/\/docs.google.com\/forms\/d\/e\/1FAIpQLScrnC-_A7JFs4LbIuzevQ_78hVERlNqqCPCt3d8XqnKOfdRdQ\/viewform\" target=\"_blank\" rel=\"noopener\">web form<\/a> \u2014 which is for individual users of ChatGPT \u2014 and called a \u201cUser content opt out request\u201d.<\/p>\n<p>Users can also disable training on their data via ChatGPT account settings (under \u201cData Controls\u201d). Assuming they have an account.<\/p>\n<p>But \u2014 be warned! \u2014 the settings route to opt out is replete with dark patterns seeking to discourage the user from shutting off OpenAI\u2019s ability to use their data to train its AI models.<\/p>\n<p>(And in neither case is it clear how non-users of ChatGPT can opt out of their data being processed, since the company either requires you to have an account or requests your account details via the form; so we\u2019ve asked it for clarity.)<\/p>\n<p>To find the Data Controls menu, you click on the three dots next to your account name at the bottom left of the screen (under the chat history bar); then click \u201cSettings\u201d; then click to \u201cShow\u201d the aforementioned Data Controls (nice dark pattern hiding this toggle!); then slide the toggle to switch off \u201cChat History &amp; Training\u201d.<\/p>\n<p>To say OpenAI is discouraging users from using the settings route to opt out of training is an understatement. Not least because it\u2019s linked this action to the inconvenience of losing access to your ChatGPT history. 
But the moment you toggle it back on, your chats reappear (at least if you re-enable history within 30 days, per its <a href=\"https:\/\/techcrunch.com\/2023\/04\/25\/openai-previews-business-plan-for-chatgpt-launches-new-privacy-controls\/\" target=\"_blank\" rel=\"noopener\">previously disclosed data retention policy<\/a>.)<\/p>\n<p>Additionally, after you\u2019ve disabled training, the sidebar of your historical chats is replaced by a brightly colored button displayed near the eyeline, permanently nudging users to \u201cEnable chat history\u201d. There\u2019s no mention on this button that clicking it toggles back on OpenAI\u2019s ability to train on your data. Instead, OpenAI has found space for a meaningless power button icon \u2014 presumably as another visual trick to encourage users to power up the feature so it can regain access to their data.<\/p>\n<div id=\"attachment_2536581\" style=\"width: 418px\" class=\"wp-caption aligncenter\">\n<p id=\"caption-attachment-2536581\" class=\"wp-caption-text\">Image credit: Natasha Lomas\/TechCrunch<\/p>\n<\/div>\n<p>Given that users who opt for the settings method to block training will lose ChatGPT\u2019s chat history functionality, submitting the web form looks to offer a better path \u2014 since, in theory, you might be able to retain the functionality despite asking for your conversations not to be training fodder. (And, at the least, you have recorded your objection in a formal format which should perhaps count for more than toggling on\/off a bright green button.)<\/p>\n<p>That said, at the time of writing it\u2019s not clear whether OpenAI will, in the case of objecting via the form, disable chat history functionality anyway, once it\u2019s processed a web form submission asking for data not to be used for training AIs. 
(Again, we\u2019ve asked the company for clarity on this point and will update this report if we get it.)<\/p>\n<p>There\u2019s a further caveat in OpenAI\u2019s blog post \u2014 where it writes of opting out that:<\/p>\n<blockquote>\n<p>We retain certain data from your interactions with us, but we take steps to reduce the amount of personal information in our training datasets before they are used to improve our models. This data helps us better understand user needs and preferences, allowing our model to become more efficient over time.<\/p>\n<\/blockquote>\n<p>So it\u2019s also not clear exactly which personal data are being firewalled from its training pool when users ask for their info not to be AI training fodder, versus other types of information you input which it may still carry on processing anyway\u2026<br \/>In short, this smells like fudge. (Or what\u2019s known in the industry as compliance theatre.)<\/p>\n<p>Thing is, the GDPR has a broad definition of personal data \u2014 meaning it\u2019s not just direct identifiers (such as names and email addresses) which fall under the regulation\u2019s framework but many types of information that could be used and\/or combined to identify a natural person. So another key question here is how much of a reduction OpenAI is actually applying to its data processing activities when users opt out. Transparency and fairness are other key principles within the GDPR. So these sorts of questions are likely to keep European data protection agencies busy for the foreseeable future.<\/p>\n<\/div>\n<p><br \/>\n<br \/><a href=\"https:\/\/techcrunch.com\/2023\/05\/02\/chatgpt-delete-data\/\" target=\"_blank\" rel=\"noopener\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Users of ChatGPT in Europe can now use web forms or other means provided by OpenAI to request deletion of their personal data in order to stop the chatbot processing (and producing) information about them. 
They can also request an opt-out of having their data used to train its AIs. Why might someone not want [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":16299,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[14],"tags":[],"class_list":{"0":"post-16298","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-tech"},"_links":{"self":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/16298","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/comments?post=16298"}],"version-history":[{"count":0,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/16298\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media\/16299"}],"wp:attachment":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media?parent=16298"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/categories?post=16298"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/tags?post=16298"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}