{"id":151312,"date":"2025-02-20T16:56:40","date_gmt":"2025-02-20T16:56:40","guid":{"rendered":"https:\/\/entertainment.runfyers.com\/index.php\/2025\/02\/20\/figures-humanoid-robot-takes-voice-orders-to-help-around-the-house-techcrunch\/"},"modified":"2025-02-20T16:56:40","modified_gmt":"2025-02-20T16:56:40","slug":"figures-humanoid-robot-takes-voice-orders-to-help-around-the-house-techcrunch","status":"publish","type":"post","link":"https:\/\/entertainment.runfyers.com\/index.php\/2025\/02\/20\/figures-humanoid-robot-takes-voice-orders-to-help-around-the-house-techcrunch\/","title":{"rendered":"Figure\u2019s humanoid robot takes voice orders to help around the house | TechCrunch"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p id=\"speakable-summary\" class=\"wp-block-paragraph\">Figure founder and CEO Brett Adcock Thursday <a rel=\"nofollow noopener\" href=\"https:\/\/www.figure.ai\/news\/helix\" target=\"_blank\">revealed<\/a> a new machine learning model for humanoid robots. The news, which arrives two weeks after Adcock announced the Bay Area robotics firm\u2019s<a href=\"https:\/\/techcrunch.com\/2025\/02\/04\/figure-drops-openai-in-favor-of-in-house-models\/\" target=\"_blank\" rel=\"noopener\"> decision to step away from an OpenAI collaboration<\/a>, is centered around Helix, a \u201cgeneralist\u201d Vision-Language-Action (VLA) model.<\/p>\n<p class=\"wp-block-paragraph\">VLAs are a new phenomenon for robotics, leveraging vision and language commands to process information. 
Currently, the best-known example of the category is <a href=\"https:\/\/techcrunch.com\/2024\/01\/04\/google-outlines-new-methods-for-training-robots-with-video-and-large-language-models\/\" target=\"_blank\" rel=\"noopener\">Google DeepMind\u2019s RT-2<\/a>, which trains robots through a combination of video and large language models (LLMs).<\/p>\n<p class=\"wp-block-paragraph\">Helix works in a similar fashion, combining visual data and language prompts to control a robot in real time. Figure writes, \u201cHelix displays strong object generalization, being able to pick up thousands of novel household items with varying shapes, sizes, colors, and material properties never encountered before in training, simply by asking in natural language.\u201d<\/p>\n<figure class=\"wp-block-image size-large\"><figcaption class=\"wp-element-caption\"><span class=\"wp-block-image__credits\"><strong>Image Credits:<\/strong>Figure<\/span><\/figcaption><\/figure>\n<p class=\"wp-block-paragraph\">In an ideal world, you could simply tell a robot to do something and it would just do it. That is where Helix comes in, according to Figure. The platform is designed to bridge the gap between vision and language processing. After receiving a natural language voice prompt, the robot visually assesses its environment and then performs the task.<\/p>\n<p class=\"wp-block-paragraph\">Figure offers examples like, \u201cHand the bag of cookies to the robot on your right\u201d or, \u201cReceive the bag of cookies from the robot on your left and place it in the open drawer.\u201d Both of these examples involve a pair of robots working together. This is because Helix is designed to control two robots at once, with one assisting the other to perform various household tasks.<\/p>\n<p class=\"wp-block-paragraph\">Figure is showcasing the VLA by highlighting the work the company has been doing with its 02 humanoid robot in the home environment. 
Houses are notoriously tricky for robots, given that they lack the structure and consistency of warehouses and factories.<\/p>\n<p class=\"wp-block-paragraph\">Difficulties with learning and control are major hurdles standing between complex robot systems and the home. These issues, along with five- to six-digit price tags, are why the home robot hasn\u2019t taken precedence for most humanoid robotics companies. Generally speaking, the approach is to build robots for industrial clients, both improving reliability and bringing down costs before tackling dwellings. Housework is a conversation for a few years from now. <\/p>\n<p class=\"wp-block-paragraph\">When TechCrunch <a href=\"https:\/\/techcrunch.com\/2024\/09\/12\/face-to-face-with-figures-new-humanoid-robot\/\" target=\"_blank\" rel=\"noopener\">toured Figure\u2019s Bay Area offices<\/a> in 2024, Adcock showed off some of the paces the company was putting its humanoid through in the home setting. It appeared at the time that the work was not being prioritized, as Figure focused on workplace pilots with corporations like BMW.<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"450\" src=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2025\/02\/VLA_Full_Quality_MASTER_21925A.2025-02-20-11_33_54.gif?w=680\" alt=\"\" class=\"wp-image-2968721\"\/><figcaption class=\"wp-element-caption\"><span class=\"wp-block-image__credits\"><strong>Image Credits:<\/strong>Figure<\/span><\/figcaption><\/figure>\n<p class=\"wp-block-paragraph\">With Thursday\u2019s Helix announcement, Figure is making it clear that the home should be a priority in its own right. It\u2019s a challenging and complex setting for testing these sorts of training models. 
Teaching robots to do complex tasks in the kitchen \u2014 for example \u2014 opens them up to a broad range of actions in different settings.<\/p>\n<p class=\"wp-block-paragraph\">\u201cFor robots to be useful in households, they will need to be capable of generating intelligent new behaviors on-demand, especially for objects they\u2019ve never seen before,\u201d Figure says. \u201cTeaching robots even a single new behavior currently requires substantial human effort: either hours of PhD-level expert manual programming or thousands of demonstrations.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Manual programming won\u2019t scale for the home. There are simply too many unknowns. Kitchens, living rooms, and bathrooms vary dramatically from one to the next. The same can be said for the tools used for cooking and cleaning. Besides, people leave messes, rearrange furniture, and prefer a range of different environmental lighting. This method takes far too much time and money \u2014 though Figure <a href=\"https:\/\/techcrunch.com\/2025\/02\/14\/figure-ai-is-in-talks-to-raise-1-5b-at-15x-its-last-valuation\/\" target=\"_blank\" rel=\"noopener\">certainly has plenty of the latter<\/a>.<\/p>\n<p class=\"wp-block-paragraph\">The other option is training \u2013 and lots of it. Robotic arms trained to pick and place objects in labs often use this method. What you don\u2019t see are the hundreds of hours of repetition it takes to make a demo robust enough to take on highly variable tasks. To pick something up right the first time, a robot needs to have done so hundreds of times in the past.<\/p>\n<p class=\"wp-block-paragraph\">Like so much surrounding humanoid robotics at the moment, work on Helix is still at a very early stage. Viewers should be advised that a lot of work happens behind the scenes to create the kinds of short, well-produced videos seen in this post. 
Today\u2019s announcement is, in essence, a recruiting tool designed to bring more engineers on board to help grow the project.<\/p>\n<\/div>\n<p><br \/>\n<br \/><a href=\"https:\/\/techcrunch.com\/2025\/02\/20\/figures-humanoid-robot-takes-voice-orders-to-help-around-the-house\/\" target=\"_blank\" rel=\"noopener\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Figure founder and CEO Brett Adcock Thursday revealed a new machine learning model for humanoid robots. The news, which arrives two weeks after Adcock announced the Bay Area robotics firm\u2019s decision to step away from an OpenAI collaboration, is centered around Helix, a \u201cgeneralist\u201d Vision-Language-Action (VLA) model. VLAs are a new phenomenon for robotics, leveraging [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":151313,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[14],"tags":[],"class_list":{"0":"post-151312","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-tech"},"_links":{"self":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/151312","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/comments?post=151312"}],"version-history":[{"count":0,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/151312\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media\/151313"}],"wp:attachment"
:[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media?parent=151312"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/categories?post=151312"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/tags?post=151312"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}