{"id":96632,"date":"2024-05-11T18:20:39","date_gmt":"2024-05-11T18:20:39","guid":{"rendered":"https:\/\/entertainment.runfyers.com\/index.php\/2024\/05\/11\/openai-could-debut-a-multimodal-ai-digital-assistant-soon\/"},"modified":"2024-05-11T18:20:39","modified_gmt":"2024-05-11T18:20:39","slug":"openai-could-debut-a-multimodal-ai-digital-assistant-soon","status":"publish","type":"post","link":"https:\/\/entertainment.runfyers.com\/index.php\/2024\/05\/11\/openai-could-debut-a-multimodal-ai-digital-assistant-soon\/","title":{"rendered":"OpenAI could debut a multimodal AI digital assistant soon"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">OpenAI has been showing some of its customers a new multimodal AI model that can both talk to you and recognize objects, according to a <a href=\"https:\/\/www.theinformation.com\/articles\/openai-develops-ai-voice-assistant-as-it-chases-google-apple?rc=r6gev9\" target=\"_blank\" rel=\"noopener\">new report from <em>The Information<\/em><\/a>. 
Citing unnamed sources who\u2019ve seen it, the outlet says this could be part of what the company <a href=\"https:\/\/www.theverge.com\/2024\/5\/10\/24153421\/openai-chatgpt-google-search-competitor-service-io\" target=\"_blank\" rel=\"noopener\">plans to show on Monday<\/a>.<\/p>\n<\/div>\n<div>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">The new model reportedly offers faster, more accurate interpretation of images and audio than its existing separate transcription and text-to-speech models. It would apparently be able to help customer service agents \u201cbetter understand the intonation of callers\u2019 voices or whether they\u2019re being sarcastic,\u201d and \u201ctheoretically,\u201d the model can help students with math or translate real-world signs, writes <em>The Information<\/em>.<\/p>\n<\/div>\n<div>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">The outlet\u2019s sources say the model can outdo GPT-4 Turbo at \u201canswering some types of questions\u201d but is still susceptible to confidently getting things wrong. 
<\/p>\n<\/div>\n<div>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">It\u2019s possible OpenAI is also readying a new built-in ChatGPT ability to make phone calls, according to developer Ananay Arora, who posted a screenshot of call-related code. Arora also <a href=\"https:\/\/x.com\/ananayarora\/status\/1789085434779259331\" target=\"_blank\">spotted evidence<\/a> that OpenAI had provisioned servers intended for real-time audio and video communication.<\/p>\n<\/div>\n<div>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white\">None of this would be GPT-5, if it\u2019s being unveiled next week. CEO Sam Altman has <a href=\"https:\/\/www.theverge.com\/2024\/5\/10\/24153767\/sam-altman-openai-google-io-search-engine-launch\" target=\"_blank\" rel=\"noopener\">explicitly denied<\/a> that its upcoming announcement has anything to do with the model that\u2019s supposed to be \u201c<a href=\"https:\/\/www.businessinsider.com\/openai-launch-better-gpt-5-chatbot-2024-3\" target=\"_blank\" rel=\"noopener\">materially better<\/a>\u201d than GPT-4. <em>The Information<\/em> writes that GPT-5 may be publicly released by the end of the year. 
<\/p>\n<\/div>\n<p><br \/>\n<br \/><a href=\"https:\/\/www.theverge.com\/2024\/5\/11\/24154307\/openai-multimodal-digital-assistant-chatgpt-phone-calls\" target=\"_blank\" rel=\"noopener\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI has been showing some of its customers a new multimodal AI model that can both talk to you and recognize objects, according to a new report from The Information. Citing unnamed sources who\u2019ve seen it, the outlet says this could be part of what the company plans to show on Monday. The new model [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":96633,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[14],"tags":[],"class_list":{"0":"post-96632","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-tech"},"_links":{"self":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/96632","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/comments?post=96632"}],"version-history":[{"count":0,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/96632\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media\/96633"}],"wp:attachment":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media?parent=96632"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-js
on\/wp\/v2\/categories?post=96632"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/tags?post=96632"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}