{"id":98244,"date":"2024-05-18T13:31:00","date_gmt":"2024-05-18T13:31:00","guid":{"rendered":"https:\/\/entertainment.runfyers.com\/index.php\/2024\/05\/18\/this-week-in-ai-openai-moves-away-from-safety-techcrunch\/"},"modified":"2024-05-18T13:31:00","modified_gmt":"2024-05-18T13:31:00","slug":"this-week-in-ai-openai-moves-away-from-safety-techcrunch","status":"publish","type":"post","link":"https:\/\/entertainment.runfyers.com\/index.php\/2024\/05\/18\/this-week-in-ai-openai-moves-away-from-safety-techcrunch\/","title":{"rendered":"This Week in AI: OpenAI moves away from safety | TechCrunch"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p id=\"speakable-summary\" class=\"wp-block-paragraph\">Keeping up with an industry as fast-moving as\u00a0<a href=\"https:\/\/techcrunch.com\/2023\/08\/04\/age-of-ai-everything-you-need-to-know-about-artificial-intelligence\/\" target=\"_blank\" rel=\"noopener\">AI<\/a>\u00a0is a tall order. So until an AI can do it for you, here\u2019s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn\u2019t cover on their own.<\/p>\n<p class=\"wp-block-paragraph\">By the way, TechCrunch plans to launch an AI newsletter soon. Stay tuned. In the meantime, we\u2019re upping the cadence of our semiregular AI column, which was previously twice a month (or so), to weekly \u2014 so be on the lookout for more editions.<\/p>\n<p class=\"wp-block-paragraph\">This week in AI, OpenAI once again dominated the news cycle (despite Google\u2019s best efforts) with a product launch, but also, with some palace intrigue. The company unveiled GPT-4o, its most capable generative model yet, and just days later effectively disbanded a team working on the problem of developing controls to prevent \u201csuperintelligent\u201d AI systems from going rogue.<\/p>\n<p class=\"wp-block-paragraph\">The dismantling of the team generated a lot of headlines, predictably. 
Reporting \u2014 <a href=\"https:\/\/techcrunch.com\/2024\/05\/17\/openai-created-a-team-to-control-superintelligent-ai-then-let-it-wither-source-says\/\" target=\"_blank\" rel=\"noopener\">including ours<\/a> \u2014 suggests that OpenAI deprioritized the team\u2019s safety research in favor of launching new products like the aforementioned GPT-4o, ultimately leading to the <a href=\"https:\/\/techcrunch.com\/2024\/05\/14\/ilya-sutskever-openai-co-founder-and-longtime-chief-scientist-departs\/\" target=\"_blank\" rel=\"noopener\">resignation<\/a> of the team\u2019s two co-leads, Jan Leike and OpenAI co-founder Ilya Sutskever. <\/p>\n<p class=\"wp-block-paragraph\">Superintelligent AI is more theoretical than real at this point; it\u2019s not clear when \u2014 or whether \u2014 the tech industry will achieve the breakthroughs necessary in order to create AI capable of accomplishing any task a human can. But the coverage from this week would seem to confirm one thing: that OpenAI\u2019s leadership \u2014 in particular CEO Sam Altman \u2014 has increasingly chosen to prioritize products over safeguards. <\/p>\n<p class=\"wp-block-paragraph\">Altman reportedly \u201c<a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2023-11-20\/sam-altman-openai-latest-inside-his-shock-firing-by-the-board\" target=\"_blank\" rel=\"noreferrer noopener\">infuriated<\/a>\u201d Sutskever by rushing the launch of AI-powered features at OpenAI\u2019s first dev conference last November. And he\u2019s <a href=\"https:\/\/www.nytimes.com\/2023\/11\/21\/technology\/openai-altman-board-fight.html\" target=\"_blank\" rel=\"noreferrer noopener\">said to have been<\/a>\u00a0critical of Helen Toner, director at Georgetown\u2019s Center for Security and Emerging Technologies and a former member of OpenAI\u2019s board, over a paper she co-authored that cast OpenAI\u2019s approach to safety in a critical light \u2014 to the point where he attempted to push her off the board. 
<\/p>\n<p class=\"wp-block-paragraph\">Over the past year or so, OpenAI\u2019s let its chatbot store <a href=\"https:\/\/techcrunch.com\/2024\/03\/20\/openais-chatbot-store-is-filling-up-with-spam\/\" target=\"_blank\" rel=\"noopener\">fill up with spam<\/a> and (allegedly) <a href=\"https:\/\/www.theverge.com\/2024\/4\/6\/24122915\/openai-youtube-transcripts-gpt-4-training-data-google\" target=\"_blank\" rel=\"noopener\">scraped data from YouTube<\/a> against the platform\u2019s terms of service while voicing ambitions to let its AI generate depictions of <a href=\"https:\/\/techcrunch.com\/2024\/05\/10\/this-week-in-ai-openai-considers-allowing-ai-porn\/\" target=\"_blank\" rel=\"noopener\">porn<\/a> and gore. Certainly, safety seems to have taken a back seat at the company \u2014 and a growing number of OpenAI safety researchers have come to the conclusion that their work would be better supported elsewhere.  <\/p>\n<p class=\"wp-block-paragraph\">Here are some other AI stories of note from the past few days:<\/p>\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"><strong><a href=\"https:\/\/techcrunch.com\/2024\/05\/16\/openai-inks-deal-to-train-ai-on-reddit-data\/\" target=\"_blank\" rel=\"noopener\">OpenAI + Reddit:<\/a><\/strong> In more OpenAI news, the company reached an agreement with Reddit to use the social site\u2019s data for AI model training. Wall Street welcomed the deal with open arms \u2014 but Reddit users may not be so pleased. <\/li>\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/techcrunch.com\/2024\/05\/15\/the-top-ai-announcements-from-google-i-o\/\" target=\"_blank\" rel=\"noopener\"><strong>Google\u2019s AI:<\/strong><\/a> Google hosted its annual I\/O developer conference this week, during which it debuted <em>a ton<\/em> of AI products. 
We rounded them up <a href=\"https:\/\/techcrunch.com\/2024\/05\/15\/the-top-ai-announcements-from-google-i-o\/\" target=\"_blank\" rel=\"noopener\">here<\/a>, from the video-generating Veo to AI-organized results in Google Search to upgrades to Google\u2019s Gemini chatbot apps. <\/li>\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/techcrunch.com\/2024\/05\/15\/anthropic-hires-instagram-co-founder-as-head-of-product\/\" target=\"_blank\" rel=\"noopener\"><strong>Anthropic hires Krieger:<\/strong><\/a> Mike Krieger, one of the co-founders of Instagram and, more recently, the co-founder of personalized news app\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/03\/26\/instagram-co-founders-ai-powered-news-app-artifact-may-not-be-shutting-down-after-all\/\" target=\"_blank\" rel=\"noopener\">Artifact<\/a>\u00a0(which TechCrunch corporate parent Yahoo recently acquired), is joining Anthropic as the company\u2019s first chief product officer. He\u2019ll oversee both the company\u2019s consumer and enterprise efforts. <\/li>\n<li class=\"wp-block-list-item\"><strong><a href=\"https:\/\/techcrunch.com\/2024\/05\/10\/anthropic-now-lets-kids-use-its-ai-tech-within-limits\/\" target=\"_blank\" rel=\"noopener\">AI for kids:<\/a> <\/strong>Anthropic announced last week that it would begin allowing developers to create kid-focused apps and tools built on its AI models \u2014 so long as they follow certain rules. Notably, rivals like Google disallow their AI from being built into apps aimed at younger ages. <\/li>\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/techcrunch.com\/2024\/05\/11\/at-the-ai-film-festival-humanity-triumphed-over-tech\/\" target=\"_blank\" rel=\"noopener\"><strong>AI film festival:<\/strong><\/a> AI startup Runway held its second-ever AI film festival earlier this month. The takeaway? Some of the more powerful moments in the showcase came not from AI, but the more human elements. 
<\/li>\n<\/ul>\n<h2 class=\"wp-block-heading\" id=\"h-more-machine-learnings\">More machine learnings<\/h2>\n<p class=\"wp-block-paragraph\">AI safety is obviously top of mind this week with the OpenAI departures, but Google DeepMind is plowing onwards <a href=\"https:\/\/deepmind.google\/discover\/blog\/introducing-the-frontier-safety-framework\/\" target=\"_blank\" rel=\"noopener\">with a new \u201cFrontier Safety Framework.\u201d<\/a> Basically it\u2019s the organization\u2019s strategy for identifying and hopefully preventing any runaway capabilities \u2014 it doesn\u2019t have to be AGI, it could be a malware generator gone mad or the like.<\/p>\n<p class=\"wp-block-paragraph\">The framework has three steps: 1. Identify potentially harmful capabilities in a model by simulating its paths of development. 2. Evaluate models regularly to detect when they have reached known \u201ccritical capability levels.\u201d 3. Apply a mitigation plan to prevent exfiltration (by another or itself) or problematic deployment. <a href=\"https:\/\/storage.googleapis.com\/deepmind-media\/DeepMind.com\/Blog\/introducing-the-frontier-safety-framework\/fsf-technical-report.pdf\" target=\"_blank\" rel=\"noopener\">There\u2019s more detail here<\/a>. It may sound like an obvious series of actions, but it\u2019s important to formalize them or everyone is just kind of winging it. That\u2019s how you get the bad AI.<\/p>\n<p class=\"wp-block-paragraph\">A rather different risk has been identified by Cambridge researchers, who are rightly concerned about the proliferation of chatbots trained on a dead person\u2019s data to provide a superficial simulacrum of that person. You may (as I do) find the whole concept somewhat abhorrent, but it could be used in grief management and other scenarios if we are careful. 
The problem is we are not being careful.<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1304\" height=\"704\" src=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/manana.jpg?w=680\" alt=\"\" class=\"wp-image-2781359\" srcset=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/manana.jpg 1304w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/manana.jpg?resize=150,81 150w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/manana.jpg?resize=300,162 300w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/manana.jpg?resize=768,415 768w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/manana.jpg?resize=680,367 680w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/manana.jpg?resize=1200,648 1200w\" sizes=\"auto, (max-width: 1304px) 100vw, 1304px\"\/><figcaption class=\"wp-element-caption\"><strong>Image Credits:<\/strong> Cambridge University \/ T. Hollanek<\/figcaption><\/figure>\n<p class=\"wp-block-paragraph\">\u201cThis area of AI is an ethical minefield,\u201d <a href=\"https:\/\/www.cam.ac.uk\/research\/news\/call-for-safeguards-to-prevent-unwanted-hauntings-by-ai-chatbots-of-dead-loved-ones\" target=\"_blank\" rel=\"noopener\">said lead researcher Katarzyna Nowaczyk-Basi\u0144ska<\/a>. \u201cWe need to start thinking now about how we mitigate the social and psychological risks of digital immortality, because the technology is already here.\u201d The team identifies numerous scams, potential bad and good outcomes, and discusses the concept generally (including fake services) in a <a href=\"https:\/\/link.springer.com\/article\/10.1007\/s13347-024-00744-w\" target=\"_blank\" rel=\"noopener\">paper published in Philosophy &amp; Technology<\/a>. 
Black Mirror predicts the future once again!<\/p>\n<p class=\"wp-block-paragraph\">In less creepy applications of AI, <a href=\"https:\/\/news.mit.edu\/2024\/scientists-use-generative-ai-complex-questions-physics-0516\" target=\"_blank\" rel=\"noopener\">physicists at MIT <\/a>are looking at a useful (to them) tool for predicting a physical system\u2019s phase or state, normally a statistical task that can grow onerous with more complex systems. But train up a machine learning model on the right data, ground it with some known material characteristics of the system, and you have yourself a considerably more efficient way to go about it. Just another example of how ML is finding niches even in advanced science.<\/p>\n<p class=\"wp-block-paragraph\">Over at CU Boulder, they\u2019re talking about how AI can be used in disaster management. The tech may be useful for quick prediction of where resources will be needed, mapping damage, even helping train responders, but people are (understandably) hesitant to apply it in life-and-death scenarios.<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"750\" height=\"565\" src=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/cu-boulder-ai-workshop.webp?w=680\" alt=\"\" class=\"wp-image-2781383\" srcset=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/cu-boulder-ai-workshop.webp 750w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/cu-boulder-ai-workshop.webp?resize=150,113 150w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/cu-boulder-ai-workshop.webp?resize=300,226 300w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/cu-boulder-ai-workshop.webp?resize=680,512 680w\" sizes=\"auto, (max-width: 750px) 100vw, 750px\"\/><figcaption class=\"wp-element-caption\">Attendees at the workshop.<\/figcaption><figcaption class=\"wp-element-caption\"><strong>Image Credits:<\/strong> CU Boulder<\/figcaption><\/figure>\n<p 
class=\"wp-block-paragraph\"><a href=\"https:\/\/www.colorado.edu\/ceae\/2024\/05\/08\/cu-boulder-pioneers-culturally-sensitive-ai-solutions-disasters\" target=\"_blank\" rel=\"noopener\">Professor Amir Behzadan <\/a>is trying to move the ball forward on that, saying \u201cHuman-centered AI leads to more effective disaster response and recovery practices by promoting collaboration, understanding and inclusivity among team members, survivors and stakeholders.\u201d They\u2019re still at the workshop phase, but it\u2019s important to think deeply about this stuff before trying to, say, automate aid distribution after a hurricane.<\/p>\n<p class=\"wp-block-paragraph\"><a href=\"https:\/\/studios.disneyresearch.com\/2024\/05\/08\/cads-unleashing-the-diversity-of-diffusion-models-through-condition-annealed-sampling\/\" target=\"_blank\" rel=\"noopener\">Lastly some interesting work out of Disney Research<\/a>, which was looking at how to diversify the output of diffusion image generation models, which can produce similar results over and over for some prompts. Their solution? 
\u201cOur sampling strategy anneals the conditioning signal by adding scheduled, monotonically decreasing Gaussian noise to the conditioning vector during inference to balance diversity and condition alignment.\u201d I simply could not put it better myself.<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"684\" height=\"556\" src=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/CADS-Image.png?w=680\" alt=\"\" class=\"wp-image-2781386\" srcset=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/CADS-Image.png 684w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/CADS-Image.png?resize=150,122 150w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/CADS-Image.png?resize=300,244 300w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/05\/CADS-Image.png?resize=680,553 680w\" sizes=\"auto, (max-width: 684px) 100vw, 684px\"\/><figcaption class=\"wp-element-caption\"><strong>Image Credits:<\/strong> Disney Research<\/figcaption><\/figure>\n<p class=\"wp-block-paragraph\">The result is a much wider diversity in angles, settings, and general look in the image outputs. Sometimes you want this, sometimes you don\u2019t, but it\u2019s nice to have the option.<\/p>\n<\/div>\n<p><br \/>\n<br \/><a href=\"https:\/\/techcrunch.com\/2024\/05\/18\/this-week-in-ai-openai-moves-away-from-safety\/\" target=\"_blank\" rel=\"noopener\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Keeping up with an industry as fast-moving as\u00a0AI\u00a0is a tall order. So until an AI can do it for you, here\u2019s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn\u2019t cover on their own. 
By the way, TechCrunch plans to launch an AI newsletter [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":98245,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[14],"tags":[],"class_list":{"0":"post-98244","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-tech"},"_links":{"self":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/98244","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/comments?post=98244"}],"version-history":[{"count":0,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/98244\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media\/98245"}],"wp:attachment":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media?parent=98244"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/categories?post=98244"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/tags?post=98244"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}