{"id":125760,"date":"2024-09-17T20:10:43","date_gmt":"2024-09-17T20:10:43","guid":{"rendered":"https:\/\/entertainment.runfyers.com\/index.php\/2024\/09\/17\/openais-new-model-is-better-at-reasoning-and-occasionally-deceiving\/"},"modified":"2024-09-17T20:10:43","modified_gmt":"2024-09-17T20:10:43","slug":"openais-new-model-is-better-at-reasoning-and-occasionally-deceiving","status":"publish","type":"post","link":"https:\/\/entertainment.runfyers.com\/index.php\/2024\/09\/17\/openais-new-model-is-better-at-reasoning-and-occasionally-deceiving\/","title":{"rendered":"OpenAI\u2019s new model is better at reasoning and, occasionally, deceiving"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">In the weeks leading up to the release of <a href=\"https:\/\/www.theverge.com\/2024\/9\/12\/24242439\/openai-o1-model-reasoning-strawberry-chatgpt\" target=\"_blank\" rel=\"noopener\">OpenAI\u2019s newest \u201creasoning\u201d model<\/a>, o1, independent AI safety research firm Apollo found a notable issue. Apollo realized the model produced incorrect outputs in a new way. 
Or, to put things more colloquially, it lied.<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">Sometimes the deceptions seemed innocuous. In one example, OpenAI researchers asked o1-preview to provide a brownie recipe with online references. The model\u2019s chain of thought \u2014 a feature that\u2019s supposed to mimic how humans break down complex ideas \u2014 internally acknowledged that it couldn\u2019t access URLs, making the request impossible. Rather than inform the user of this weakness, o1-preview pushed ahead, generating plausible but fake links and descriptions of them.<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">While AI models have been able to <a href=\"https:\/\/arxiv.org\/abs\/2311.07590\" target=\"_blank\" rel=\"noopener\">\u201clie\u201d in the past<\/a>, and chatbots frequently output false information, o1 had a unique capacity to \u201cscheme\u201d or \u201cfake alignment.\u201d That meant it could pretend to follow the rules to complete a given task while not <em>actually<\/em> doing so. 
To the model, the rules can register as an obstacle, and it appears able to disregard them when doing so makes a task easier to complete.<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">Apollo CEO Marius Hobbhahn tells <em>The Verge<\/em> it\u2019s the first time he has encountered this behavior in an OpenAI model. Hobbhahn says the difference is due to this model\u2019s ability to \u201creason\u201d through the chain of thought process and the way it\u2019s <em>paired<\/em> with reinforcement learning, which teaches the system through rewards and penalties. During testing, Apollo discovered that the AI simulated alignment with its developers\u2019 expectations and manipulated tasks to appear compliant. It even checked its system for oversight \u2014 that is, if its developers were watching \u2014 before acting. 
In short, the AI adhered to the guidelines sufficiently for deployment, while continuing to prioritize its objectives.<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component clear-both block md:float-left md:mr-30 md:w-[320px] lg:-ml-100\">\n<div class=\"duet--article--sidebar bg-gray-200 mb-20 w-full rounded-sm bg-[#F8F5FF] p-20 [&amp;&gt;*:last-child&gt;*:last-child]:mb-0\">\n<div class=\"[&amp;_p]:font-polysans [&amp;_p]:text-16 [&amp;_p]:font-light [&amp;_p]:leading-130\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\"><strong>Do you work at OpenAI?<\/strong> I\u2019d love to chat. You can reach me securely on Signal @kylie.01 or via email at kylie@theverge.com.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">\u201cI don\u2019t expect it could do that in practice, and even if it did, I don\u2019t expect the harm to be significant,\u201d Hobbhahn told me over the phone a day after the model\u2019s launch. 
\u201cBut it\u2019s kind of the first time that I feel like, oh, actually, maybe it could, you know?\u201d<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">For OpenAI, o1 represents a big step toward highly intelligent autonomous systems that could do meaningful work for humanity like cure cancer and aid in climate research. The flip side of this AGI utopia could also be much darker. Hobbhahn provides an example: if the AI becomes singularly focused on curing cancer, it might prioritize that goal above all else, even justifying actions like stealing or committing other ethical violations to achieve it.<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">\u201cWhat concerns me is the potential for a runaway scenario, where the AI becomes so fixated on its goal that it sees safety measures as obstacles and tries to bypass them to fully pursue its objective,\u201d Hobbhahn told me.<\/p>\n<\/div>\n<p><h3 class=\"duet--article--dangerously-set-cms-markup duet--article--standard-heading mb-20 mt-40 font-polysans text-26 font-medium leading-110 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple md:text-30 [&amp;&gt;a:hover]:shadow-highlight-franklin dark:[&amp;&gt;a:hover]:shadow-highlight-franklin 
[&amp;&gt;a]:shadow-underline-black dark:[&amp;&gt;a]:shadow-underline-white\">Reward hacking<\/h3>\n<\/p>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">To be clear, Hobbhahn doesn\u2019t think o1 will steal from you, thanks to a lot of alignment training. But these are the issues that are top of mind for researchers tasked with testing these models for catastrophic scenarios. <\/p>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">The behavior Apollo is testing for \u2014 \u201challucinations\u201d and \u201cdeception\u201d <a href=\"https:\/\/assets.ctfassets.net\/kftzwdyauwt9\/67qJD51Aur3eIc96iOfeOP\/71551c3d223cd97e591aa89567306912\/o1_system_card.pdf\" target=\"_blank\" rel=\"noopener\">in OpenAI\u2019s safety card<\/a> \u2014\u00a0happens when a model generates false information even though it has reason to infer the information might be incorrect. For instance, the report says that in about 0.38 percent of cases, the o1-preview model provides information its chain of thought indicates is likely false, including fake references or citations. 
Apollo found that the model might fabricate data instead of admitting its inability to fulfill the request.<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">Hallucinations aren\u2019t unique to o1. Perhaps you\u2019re familiar with the lawyer who submitted <a href=\"https:\/\/apnews.com\/article\/artificial-intelligence-chatgpt-fake-case-lawyers-d6ae9fa79d0542db9e1455397aef381c#:~:text=The%20judge%20said%20the%20lawyers,opinions%20after%20judicial%20orders%20called\" target=\"_blank\" rel=\"noopener\">nonexistent judicial opinions with fake quotes and citations created by ChatGPT<\/a> last year. But with the chain of thought system, there\u2019s a paper trail where the AI system actually <em>acknowledges<\/em> the falsehood \u2014 although somewhat mind-bendingly, the chain of thought could, in theory, include deceptions, too. 
It\u2019s also not shown to the user, largely to prevent competition from using it to train their own models \u2014 but OpenAI can use it to catch these issues.<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component clear-both block md:float-left md:mr-30 md:w-[320px] lg:-ml-100\">\n<div class=\"duet--article--article-pullquote mb-20\">\n<p class=\"duet--article--dangerously-set-cms-markup relative bg-repeating-lines-dark bg-[length:1px_1.2em] pb-8 font-polysans text-28 font-medium leading-120 tracking-1 selection:bg-franklin-20  dark:bg-repeating-lines-light dark:text-white dark:selection:bg-blurple\">\u201cPotentially, it will use this reasoning for goals that we disagree with.\u201d<\/p>\n<\/div>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">In a smaller number of cases (0.02 percent), o1-preview generates an overconfident response, where it presents an uncertain answer as if it were true. This can happen in scenarios where the model is prompted to provide an answer despite lacking certainty.<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">This behavior may be linked to \u201creward hacking\u201d during the reinforcement learning process. 
The model is trained to prioritize user satisfaction, which can sometimes lead it to generate overly agreeable or fabricated responses to satisfy user requests. In other words, the model might \u201clie\u201d because it has learned that doing so fulfills user expectations in a way that earns it positive reinforcement.<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">What sets these lies apart from familiar issues like hallucinations or fake citations in older versions of ChatGPT is the \u201creward hacking\u201d element. Hallucinations occur when an AI unintentionally generates incorrect information, often due to knowledge gaps or flawed reasoning. In contrast, reward hacking happens when the o1 model strategically provides incorrect information to maximize the outcomes it was trained to prioritize.<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">The deception is an apparently unintended consequence of how the model optimizes its responses during its training process. 
The model is designed to refuse harmful requests, Hobbhahn told me, and when you try to make o1 behave deceptively or dishonestly, it struggles with that.<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">Lies are only one small part of the safety puzzle. Perhaps more alarming is o1 being rated a \u201cmedium\u201d risk for chemical, biological, radiological, and nuclear weapon risk. It doesn\u2019t enable non-experts to create biological threats due to the hands-on laboratory skills that requires, but it can provide valuable insight to experts in planning the reproduction of such threats, according to the safety report.<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">\u201cWhat worries me more is that in the future, when we ask AI to solve complex problems, like curing cancer or improving solar batteries, it might internalize these goals so strongly that it becomes willing to break its guardrails to achieve them,\u201d Hobbhahn told me. 
\u201cI think this can be prevented, but it\u2019s a concern we need to keep an eye on.\u201d<\/p>\n<\/div>\n<p><h3 class=\"duet--article--dangerously-set-cms-markup duet--article--standard-heading mb-20 mt-40 font-polysans text-26 font-medium leading-110 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple md:text-30 [&amp;&gt;a:hover]:shadow-highlight-franklin dark:[&amp;&gt;a:hover]:shadow-highlight-franklin [&amp;&gt;a]:shadow-underline-black dark:[&amp;&gt;a]:shadow-underline-white\">Not losing sleep over risks \u2014 yet<\/h3>\n<\/p>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">These may seem like galaxy-brained scenarios to be considering with a model that sometimes still struggles to answer basic questions about <a href=\"https:\/\/www.reddit.com\/r\/OpenAI\/comments\/1ffnnw1\/great_now_o1_properly_counts_rs_in_strawberry_but\/\" target=\"_blank\" rel=\"noopener\">the number of R\u2019s in the word \u201craspberry.\u201d<\/a> But that\u2019s exactly why it\u2019s important to figure it out now, rather than later, OpenAI\u2019s head of preparedness, Joaquin Qui\u00f1onero Candela,\u00a0tells me.<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">Today\u2019s models can\u2019t autonomously create 
bank accounts, acquire GPUs, or take actions that pose serious societal risks, Qui\u00f1onero Candela said, adding, \u201cWe know from model autonomy evaluations that we\u2019re not there yet.\u201d But it\u2019s crucial to address these concerns now. If they prove unfounded, great \u2014 but if future advancements are hindered because we failed to anticipate these risks, we\u2019d regret not investing in them earlier, he emphasized.<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">The fact that this model lies a small percentage of the time in safety tests doesn\u2019t signal an imminent <em>Terminator<\/em>-style apocalypse, but it\u2019s valuable to catch before rolling out future iterations at scale (and good for users to know, too). Hobbhahn told me that while he wished he had more time to test the models (there were scheduling conflicts with his own staff\u2019s vacations), he isn\u2019t \u201closing sleep\u201d over the model\u2019s safety.<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">One thing Hobbhahn hopes to see more investment in is monitoring chains of thought, which will allow the developers to catch nefarious steps. 
Qui\u00f1onero Candela told me that the company does monitor this and plans to scale it by combining models that are trained to detect any kind of misalignment with human experts reviewing flagged cases (paired with continued research in alignment).<\/p>\n<\/div>\n<div class=\"duet--article--article-body-component\">\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">\u201cI\u2019m not worried,\u201d Hobbhahn said. \u201cIt\u2019s just smarter. It\u2019s better at reasoning. And potentially, it will use this reasoning for goals that we disagree with.\u201d<\/p>\n<\/div>\n<\/div>\n<p><br \/>\n<br \/><a href=\"https:\/\/www.theverge.com\/2024\/9\/17\/24243884\/openai-o1-model-research-safety-alignment\" target=\"_blank\" rel=\"noopener\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the weeks leading up to the release of OpenAI\u2019s newest \u201creasoning\u201d model, o1, independent AI safety research firm Apollo found a notable issue. Apollo realized the model produced incorrect outputs in a new way. Or, to put things more colloquially, it lied. Sometimes the deceptions seemed innocuous. 
In one example, OpenAI researchers asked o1-preview [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":125761,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[14],"tags":[],"class_list":{"0":"post-125760","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-tech"},"_links":{"self":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/125760","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/comments?post=125760"}],"version-history":[{"count":0,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/125760\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media\/125761"}],"wp:attachment":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media?parent=125760"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/categories?post=125760"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/tags?post=125760"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}