{"id":98156,"date":"2024-05-17T16:01:00","date_gmt":"2024-05-17T16:01:00","guid":{"rendered":"https:\/\/entertainment.runfyers.com\/index.php\/2024\/05\/17\/openai-created-a-team-to-control-superintelligent-ai-then-let-it-wither-source-says-techcrunch\/"},"modified":"2024-05-17T16:01:00","modified_gmt":"2024-05-17T16:01:00","slug":"openai-created-a-team-to-control-superintelligent-ai-then-let-it-wither-source-says-techcrunch","status":"publish","type":"post","link":"https:\/\/entertainment.runfyers.com\/index.php\/2024\/05\/17\/openai-created-a-team-to-control-superintelligent-ai-then-let-it-wither-source-says-techcrunch\/","title":{"rendered":"OpenAI created a team to control &#8216;superintelligent&#8217; AI \u2014 then let it wither, source says | TechCrunch"},"content":{"rendered":"<div>\n<p class=\"wp-block-paragraph\">OpenAI\u2019s <a href=\"https:\/\/techcrunch.com\/2023\/07\/05\/openai-is-forming-a-new-team-to-bring-superintelligent-ai-under-control\/\" target=\"_blank\" rel=\"noopener\">Superalignment team<\/a>, responsible for developing ways to govern and steer \u201csuperintelligent\u201d AI systems, was promised 20% of the company\u2019s compute resources, according to a person from that team. But requests for a fraction of that compute were often denied, blocking the team from doing their work.<\/p>\n<p class=\"wp-block-paragraph\">That issue, among others, pushed several team members to resign this week, including co-lead Jan Leike, a former DeepMind researcher who while at OpenAI was involved with the development of ChatGPT, GPT-4 and ChatGPT\u2019s predecessor, InstructGPT.<\/p>\n<p class=\"wp-block-paragraph\">Leike went public with some reasons for his resignation on Friday morning. \u201cI have been disagreeing with OpenAI leadership about the company\u2019s core priorities for quite some time, until we finally reached a breaking point,\u201d Leike wrote in a series of posts on X. 
\u201cI believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren\u2019t on a trajectory to get there.\u201d<\/p>\n<figure class=\"wp-block-embed aligncenter is-type-rich is-provider-twitter wp-block-embed-twitter\">\n<div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"twitter-tweet\" data-width=\"500\" data-dnt=\"true\">\n<p lang=\"en\" dir=\"ltr\">Building smarter-than-human machines is an inherently dangerous endeavor.<\/p>\n<p>OpenAI is shouldering an enormous responsibility on behalf of all of humanity.<\/p>\n<p>\u2014 Jan Leike (@janleike) <a href=\"https:\/\/twitter.com\/janleike\/status\/1791498183543251017?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">May 17, 2024<\/a><\/p><\/blockquote>\n<\/div>\n<\/figure>\n<p class=\"wp-block-paragraph\">OpenAI did not immediately return a request for comment about the resources promised and allocated to that team. <\/p>\n<p class=\"wp-block-paragraph\">OpenAI formed the Superalignment team last July, and it was led by Leike and OpenAI co-founder Ilya Sutskever, <a href=\"https:\/\/techcrunch.com\/2024\/05\/14\/ilya-sutskever-openai-co-founder-and-longtime-chief-scientist-departs\/\" target=\"_blank\" rel=\"noopener\">who also resigned from the company this week<\/a>. It had the ambitious goal of solving the core technical challenges of controlling superintelligent AI in the next four years. 
Joined by scientists and engineers from OpenAI\u2019s previous alignment division as well as researchers from other orgs across the company, the team was to contribute research informing the safety of both in-house and non-OpenAI models, and, through initiatives including a research grant program, solicit from and share work with the broader AI industry.<\/p>\n<p class=\"wp-block-paragraph\">The Superalignment team did manage to publish a body of safety research and funnel millions of dollars in grants to outside researchers. But, as product launches began to take up an increasing amount of OpenAI leadership\u2019s bandwidth, the Superalignment team found itself having to fight for more upfront investments \u2014 investments it believed were critical to the company\u2019s stated mission of developing superintelligent AI for the benefit of all humanity.<\/p>\n<p class=\"wp-block-paragraph\">\u201cBuilding smarter-than-human machines is an inherently dangerous endeavor,\u201d Leike continued. \u201cBut over the past years, safety culture and processes have taken a backseat to shiny products.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Sutskever\u2019s battle with OpenAI CEO Sam Altman served as a major added distraction.<\/p>\n<p class=\"wp-block-paragraph\">Sutskever, along with OpenAI\u2019s old board of directors, moved to abruptly fire Altman late last year over concerns that Altman hadn\u2019t been \u201cconsistently candid\u201d with the board\u2019s members. 
Under pressure from OpenAI\u2019s investors, including Microsoft, and many of the company\u2019s own employees, Altman was eventually reinstated, much of the board resigned and Sutskever <a href=\"https:\/\/www.nytimes.com\/2024\/05\/14\/technology\/ilya-sutskever-leaving-openai.html\" target=\"_blank\" rel=\"noreferrer noopener\">reportedly<\/a> never returned to work.<\/p>\n<p class=\"wp-block-paragraph\">According to the source, Sutskever was instrumental to the Superalignment team \u2014 not only contributing research but serving as a bridge to other divisions within OpenAI. He would also serve as an ambassador of sorts, impressing the importance of the team\u2019s work on key OpenAI decision makers.<\/p>\n<figure class=\"wp-block-embed aligncenter is-type-rich is-provider-twitter wp-block-embed-twitter\">\n<div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"twitter-tweet\" data-width=\"500\" data-dnt=\"true\">\n<p lang=\"en\" dir=\"ltr\">i&#8217;m super appreciative of <a href=\"https:\/\/twitter.com\/janleike?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">@janleike<\/a>&#8216;s contributions to openai&#8217;s alignment research and safety culture, and very sad to see him leave. he&#8217;s right we have a lot more to do; we are committed to doing it. 
i&#8217;ll have a longer post in the next couple of days.<\/p>\n<p>\ud83e\udde1 <a href=\"https:\/\/t.co\/t2yexKtQEk\" target=\"_blank\">https:\/\/t.co\/t2yexKtQEk<\/a><\/p>\n<p>\u2014 Sam Altman (@sama) <a href=\"https:\/\/twitter.com\/sama\/status\/1791543264090472660?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">May 17, 2024<\/a><\/p><\/blockquote>\n<\/div>\n<\/figure>\n<p class=\"wp-block-paragraph\">Following the departures of Leike and Sutskever, John Schulman, another OpenAI co-founder, has moved to head up the type of work the Superalignment team was doing, but there will no longer be a dedicated team \u2014 instead, it will be a loosely associated group of researchers embedded in divisions throughout the company. An OpenAI spokesperson described it as \u201cintegrating [the team] more deeply.\u201d<\/p>\n<p class=\"wp-block-paragraph\">The fear is that, as a result, OpenAI\u2019s AI development won\u2019t be as safety-focused as it could\u2019ve been.<\/p>\n<p class=\"wp-block-paragraph\"><em>We\u2019re launching an AI newsletter! Sign up\u00a0<\/em><em><a target=\"_blank\" href=\"https:\/\/techcrunch.com\/newsletters\/techcrunch-ai\/\" rel=\"noreferrer noopener\">here<\/a><\/em><em>\u00a0to start receiving it in your inboxes on June 5.<\/em><\/p>\n<\/div>\n<p><script async src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n<a href=\"https:\/\/techcrunch.com\/2024\/05\/17\/openai-created-a-team-to-control-superintelligent-ai-then-let-it-wither-source-says\/\" target=\"_blank\" rel=\"noopener\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI\u2019s Superalignment team, responsible for developing ways to govern and steer \u201csuperintelligent\u201d AI systems, was promised 20% of the company\u2019s compute resources, according to a person from that team. But requests for a fraction of that compute were often denied, blocking the team from doing their work. 
That issue, among others, pushed several team members [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":98157,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[14],"tags":[],"class_list":{"0":"post-98156","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-tech"},"_links":{"self":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/98156","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/comments?post=98156"}],"version-history":[{"count":0,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/98156\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media\/98157"}],"wp:attachment":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media?parent=98156"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/categories?post=98156"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/tags?post=98156"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}