{"id":11017,"date":"2023-04-01T00:18:03","date_gmt":"2023-04-01T00:18:03","guid":{"rendered":"https:\/\/entertainment.runfyers.com\/index.php\/2023\/04\/01\/ethicists-fire-back-at-ai-pause-letter-they-say-ignores-the-actual-harms\/"},"modified":"2023-04-01T00:18:03","modified_gmt":"2023-04-01T00:18:03","slug":"ethicists-fire-back-at-ai-pause-letter-they-say-ignores-the-actual-harms","status":"publish","type":"post","link":"https:\/\/entertainment.runfyers.com\/index.php\/2023\/04\/01\/ethicists-fire-back-at-ai-pause-letter-they-say-ignores-the-actual-harms\/","title":{"rendered":"Ethicists fire back at &#8216;AI Pause&#8217; letter they say &#8216;ignores the actual harms&#8217;"},"content":{"rendered":"<div>\n<p id=\"speakable-summary\">A group of well-known AI ethicists have written a counterpoint to this week\u2019s controversial letter asking for a six-month \u201cpause\u201d on AI development, criticizing it for a focus on hypothetical future threats when real harms are attributable to misuse of the tech today.<\/p>\n<p>Thousands of people, including such familiar names as Steve Wozniak and Elon Musk, <a href=\"https:\/\/techcrunch.com\/2023\/03\/28\/1100-notable-signatories-just-signed-an-open-letter-asking-all-ai-labs-to-immediately-pause-for-at-least-6-months\/\" target=\"_blank\" rel=\"noopener\">signed the open letter from the Future of Life Institute earlier this week,<\/a> proposing that development of AI models like GPT-4 should be put on hold in order to avoid \u201closs of control of our civilization,\u201d among other threats.<\/p>\n<p>Timnit Gebru, Emily M. Bender, Angelina McMillan-Major and Margaret Mitchell are all major figures in the domains of AI and ethics, known (in addition to their work) for being pushed out of Google over a <a href=\"https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3442188.3445922\" target=\"_blank\" rel=\"noopener\">paper<\/a> criticizing the capabilities of AI. 
They are currently working together at the DAIR Institute, <a href=\"https:\/\/techcrunch.com\/2021\/12\/02\/google-timnit-gebru-ai-research-dair\/\" target=\"_blank\" rel=\"noopener\">a new research outfit<\/a> aimed at studying, exposing, and preventing AI-associated harms.<\/p>\n<p>But they were not to be found on the list of signatories, and now have <a href=\"https:\/\/www.dair-institute.org\/blog\/letter-statement-March2023\" target=\"_blank\" rel=\"noopener\">published a rebuke<\/a> calling out the letter\u2019s failure to engage with existing problems caused by the tech.<\/p>\n<p>\u201cThose hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today,\u201d they wrote, citing worker exploitation, data theft, synthetic media that props up existing power structures and the further concentration of those power structures in fewer hands.<\/p>\n<p>The choice to worry about a Terminator- or Matrix-esque robot apocalypse is a red herring when we have, in the same moment, reports of companies like Clearview AI <a href=\"https:\/\/www.nytimes.com\/2023\/03\/31\/technology\/facial-recognition-false-arrests.html\" target=\"_blank\" rel=\"noopener\">being used by the police to essentially frame an innocent man<\/a>. No need for a T-1000 when you\u2019ve got Ring cams on every front door accessible via online rubber-stamp warrant factories.<\/p>\n<p>While the DAIR crew agree with some of the letter\u2019s aims, like identifying synthetic media, they emphasize that action must be taken now, on today\u2019s problems, with remedies we have available to us:<\/p>\n<blockquote>\n<p>What we need is regulation that enforces transparency. Not only should it always be clear when we are encountering synthetic media, but organizations building these systems should also be required to document and disclose the training data and model architectures. 
The onus of creating tools that are safe to use should be on the companies that build and deploy generative systems, which means that builders of these systems should be made accountable for the outputs produced by their products.<\/p>\n<p class=\"TextBlock__paragraph body\">The current race towards ever larger \u201cAI experiments\u201d is not a preordained path where our only choice is how fast to run, but rather a set of decisions driven by the profit motive. The actions and choices of corporations must be shaped by regulation which protects the rights and interests of people.<\/p>\n<p class=\"TextBlock__paragraph body\">It is indeed time to act: but the focus of our concern should not be imaginary \u201cpowerful digital minds.\u201d Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.<\/p>\n<\/blockquote>\n<p>Incidentally, this letter echoes a sentiment I heard from Uncharted Power founder Jessica Matthews at yesterday\u2019s AfroTech event in Seattle: \u201cYou should not be afraid of AI. You should be afraid of the people building it.\u201d (Her solution: become the people building it.)<\/p>\n<p>While it is vanishingly unlikely that any major company would ever agree to pause its research efforts in accordance with the open letter, it\u2019s clear judging from the engagement it received that the risks \u2014 real and hypothetical \u2014 of AI are of great concern across many segments of society. 
But if they won\u2019t do it, perhaps someone will have to do it for them.<\/p>\n<\/div>\n<p><a href=\"https:\/\/techcrunch.com\/2023\/03\/31\/ethicists-fire-back-at-ai-pause-letter-they-say-ignores-the-actual-harms\/\" target=\"_blank\" rel=\"noopener\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>A group of well-known AI ethicists have written a counterpoint to this week\u2019s controversial letter asking for a six-month \u201cpause\u201d on AI development, criticizing it for a focus on hypothetical future threats when real harms are attributable to misuse of the tech today. Thousands of people, including such familiar names as Steve Wozniak and Elon [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":11018,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[14],"tags":[],"class_list":{"0":"post-11017","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-tech"},"_links":{"self":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/11017","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/comments?post=11017"}],"version-history":[{"count":0,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/11017\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media\/11018"}],"wp:attachment":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\
/v2\/media?parent=11017"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/categories?post=11017"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/tags?post=11017"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}