{"id":234965,"date":"2026-04-16T20:26:44","date_gmt":"2026-04-16T20:26:44","guid":{"rendered":"https:\/\/entertainment.runfyers.com\/index.php\/2026\/04\/16\/physical-intelligence-a-hot-robotics-startup-says-its-new-robot-brain-can-figure-out-tasks-it-was-never-taught-techcrunch\/"},"modified":"2026-04-16T20:26:44","modified_gmt":"2026-04-16T20:26:44","slug":"physical-intelligence-a-hot-robotics-startup-says-its-new-robot-brain-can-figure-out-tasks-it-was-never-taught-techcrunch","status":"publish","type":"post","link":"https:\/\/entertainment.runfyers.com\/index.php\/2026\/04\/16\/physical-intelligence-a-hot-robotics-startup-says-its-new-robot-brain-can-figure-out-tasks-it-was-never-taught-techcrunch\/","title":{"rendered":"Physical Intelligence, a hot robotics startup, says its new robot brain can figure out tasks it was never taught | TechCrunch"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p id=\"speakable-summary\" class=\"wp-block-paragraph\"><a href=\"https:\/\/www.pi.website\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Physical Intelligence<\/a>, the two-year-old, San Francisco-based robotics startup that has quietly become one of the most closely watched AI companies in the Bay Area, published <a href=\"https:\/\/www.pi.website\/blog\/pi07\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">new research<\/a> Thursday showing that its latest model can direct robots to perform tasks they were never explicitly trained on \u2014 a capability the company\u2019s own researchers say caught them off guard.<\/p>\n<p class=\"wp-block-paragraph\">The new model, called \u03c00.7, represents what the company describes as an early but meaningful step toward the long-sought goal of a general-purpose robot brain: One that can be pointed at an unfamiliar task, coached through it in plain language, and actually pull it off. 
If the findings hold up to scrutiny, they suggest that robotic AI may be approaching an inflection point similar to what the field saw with large language models \u2014 where capabilities begin compounding in ways that outpace what the underlying data would seem to predict.<\/p>\n<p class=\"wp-block-paragraph\">But first: The core claim in the paper is compositional generalization \u2014 the ability to combine skills learned in different contexts to solve problems the model has never encountered. Until now, the standard approach to robot training has been essentially rote memorization \u2014 collect data on a specific task, train a specialist model on that data, then repeat for every new task. \u03c00.7, Physical Intelligence says, breaks that pattern.<\/p>\n<p class=\"wp-block-paragraph\">\u201cOnce it crosses that threshold where it goes from only doing exactly the stuff that you collect the data for to actually remixing things in new ways,\u201d says Sergey Levine, a co-founder of Physical Intelligence and a UC Berkeley professor focused on AI for robotics, \u201cthe capabilities are going up more than linearly with the amount of data. That much more favorable scaling property is something we\u2019ve seen in other domains, like language and vision.\u201d<\/p>\n<p class=\"wp-block-paragraph\">The paper\u2019s most striking demonstration involves an air fryer the model had essentially never seen in training. When the research team investigated, they found only two relevant episodes in the entire training dataset: One where a different robot merely pushed the air fryer closed, and one from an open source dataset where yet another robot placed a plastic bottle inside one on someone\u2019s instructions. 
The model had somehow synthesized those fragments, plus broader web-based pretraining data, into a functional understanding of how the appliance works.<\/p>\n<p class=\"wp-block-paragraph\">\u201cIt\u2019s very hard to track down where the knowledge is coming from, or where it will succeed or fail,\u201d says Lucy Shi, a researcher at Pi and a Stanford computer science Ph.D. student. Still, with zero coaching, the model made a passable attempt at using the appliance to cook a sweet potato. With step-by-step verbal instructions \u2014 essentially, a human walking the robot through the task the way you might explain something to a new employee \u2014 it performed successfully.<\/p>\n<p class=\"wp-block-paragraph\">That coaching capability matters because it suggests robots could be deployed in new environments and improved in real time without additional data collection or model retraining.<\/p>\n<p class=\"wp-block-paragraph\">So what does it all mean? The researchers aren\u2019t shy about the model\u2019s limitations and are careful not to get ahead of themselves. In at least one case, they point the finger squarely at their own team.<\/p>\n<p class=\"wp-block-paragraph\">\u201cSometimes the failure mode is not on the robot or on the model,\u201d says Shi. \u201cIt\u2019s on us. Not being good at prompt engineering.\u201d She describes an early air fryer experiment that produced a 5% success rate. After she spent about half an hour refining how the task was explained to the model, the success rate jumped to 95%, she says.<\/p>\n<figure class=\"wp-block-image alignfull size-large\"><figcaption class=\"wp-element-caption\"><span class=\"wp-block-image__credits\"><strong>Image Credits:<\/strong> Physical Intelligence<\/span><\/figcaption><\/figure>\n<p class=\"wp-block-paragraph\">The model also isn\u2019t yet capable of executing complex multi-step tasks autonomously from a single high-level command. 
\u201cYou can\u2019t tell it, \u2018Hey, go make me some toast,\u2019\u201d Levine says. \u201cBut if you walk it through \u2014 \u2018for the toaster, open this part, push that button, do this\u2019 \u2014 then it actually tends to work pretty well.\u201d<\/p>\n<p class=\"wp-block-paragraph\">The team also acknowledges that standardized benchmarks for robotics don\u2019t really exist, which makes external validation of their claims difficult. Instead, the company measured \u03c00.7 against its own previous specialist models \u2014 purpose-built systems trained on individual tasks \u2014 and found that the generalist model matched their performance across a range of complex work, including making coffee, folding laundry, and assembling boxes.<\/p>\n<p class=\"wp-block-paragraph\">What may be most notable about the research \u2014 if you take the researchers at their word \u2014 is not any single demo but the degree to which the results surprised them, people whose job it is to know exactly what is in the training data and therefore what the model should and shouldn\u2019t be able to do.<\/p>\n<p class=\"wp-block-paragraph\">\u201cMy experience has always been that when I deeply know what\u2019s in the data, I can kind of just guess what the model will be able to do,\u201d says Ashwin Balakrishna, a research scientist at Physical Intelligence. \u201cI\u2019m rarely surprised. But the last few months have been the first time where I\u2019m genuinely surprised. I just bought a gear set randomly and asked the robot, \u2018Hey, can you rotate this gear?\u2019 And it just worked.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Levine recalls the moment researchers first encountered GPT-2 generating a story about <a href=\"https:\/\/openai.com\/index\/better-language-models\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">unicorns in the Andes<\/a>. \u201cWhere the heck did it learn about unicorns in Peru?\u201d he says. \u201cThat\u2019s such a weird combination. 
And I think that seeing that in robotics is really special.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Naturally, critics will point to an uncomfortable asymmetry here: Language models had the entire internet to learn from. Robots don\u2019t, and no amount of clever prompting fully closes that gap. But when asked where he expects the skepticism, Levine points somewhere else entirely.<\/p>\n<p class=\"wp-block-paragraph\">\u201cThe criticism that can always be leveled at any robotic generalization demo is that the tasks are kind of boring,\u201d he says. \u201cThe robot is not doing a backflip.\u201d He pushes back on that framing, arguing that the distinction between an impressive robot demo and a robotic system that actually generalizes is precisely the point. Generalization, he suggests, will always look less dramatic than a carefully choreographed stunt \u2014 but it is considerably more useful.<\/p>\n<p class=\"wp-block-paragraph\">The paper itself uses careful hedging language throughout, describing \u03c00.7 as showing \u201cearly signs\u201d of generalization and \u201cinitial demonstrations\u201d of new capabilities. These are research results, not a deployed product.<\/p>\n<p class=\"wp-block-paragraph\">When asked directly when a system based on these findings might be ready for real-world deployment, Levine declines to speculate. \u201cI think there\u2019s good reason to be optimistic, and certainly it\u2019s progressing faster than I expected a couple of years ago,\u201d he says. \u201cBut it\u2019s very hard for me to answer that question.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Physical Intelligence has raised over $1 billion to date and was most recently valued at $5.6 billion. 
A significant part of the investor enthusiasm around the company traces to Lachy Groom, a co-founder who spent years as one of Silicon Valley\u2019s most well-regarded angel investors \u2014 backing Figma, Notion, and Ramp, among others \u2014 before deciding that Physical Intelligence was the company he\u2019d been looking for. That pedigree has helped the startup attract serious institutional money even as it has refused to offer investors a commercialization timeline. <\/p>\n<p class=\"wp-block-paragraph\">The company is now said to be in discussions for a new round that would nearly double that valuation figure to <a href=\"https:\/\/techcrunch.com\/2026\/03\/27\/physical-intelligence-is-reportedly-in-talks-to-raise-1-billion-again\/\" target=\"_blank\" rel=\"noopener\">$11 billion<\/a>. The team declined to comment.<\/p>\n<\/div>\n<p><br \/>\n<br \/><a href=\"https:\/\/techcrunch.com\/2026\/04\/16\/physical-intelligence-a-hot-robotics-startup-says-its-new-robot-brain-can-figure-out-tasks-it-was-never-taught\/\" target=\"_blank\" rel=\"noopener\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Physical Intelligence, the two-year-old, San Francisco-based robotics startup that has quietly become one of the most closely watched AI companies in the Bay Area, published new research Thursday showing that its latest model can direct robots to perform tasks they were never explicitly trained on \u2014 a capability the company\u2019s own researchers say caught them 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":234966,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[14],"tags":[],"class_list":{"0":"post-234965","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-tech"},"_links":{"self":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/234965","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/comments?post=234965"}],"version-history":[{"count":0,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/234965\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media\/234966"}],"wp:attachment":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media?parent=234965"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/categories?post=234965"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/tags?post=234965"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}