{"id":78225,"date":"2024-02-24T14:45:12","date_gmt":"2024-02-24T14:45:12","guid":{"rendered":"https:\/\/entertainment.runfyers.com\/index.php\/2024\/02\/24\/this-week-in-ai-addressing-racism-in-ai-image-generators-techcrunch\/"},"modified":"2024-02-24T14:45:12","modified_gmt":"2024-02-24T14:45:12","slug":"this-week-in-ai-addressing-racism-in-ai-image-generators-techcrunch","status":"publish","type":"post","link":"https:\/\/entertainment.runfyers.com\/index.php\/2024\/02\/24\/this-week-in-ai-addressing-racism-in-ai-image-generators-techcrunch\/","title":{"rendered":"This Week in AI: Addressing racism in AI image generators | TechCrunch"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p id=\"speakable-summary\">Keeping up with an industry as fast-moving as\u00a0<a href=\"https:\/\/techcrunch.com\/2023\/08\/04\/age-of-ai-everything-you-need-to-know-about-artificial-intelligence\/\" data-mrf-link=\"https:\/\/techcrunch.com\/2023\/08\/04\/age-of-ai-everything-you-need-to-know-about-artificial-intelligence\/\" target=\"_blank\" rel=\"noopener\">AI<\/a>\u00a0is a tall order. So until an AI can do it for you, here\u2019s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn\u2019t cover on their own.<\/p>\n<p id=\"speakable-summary\">This week in AI, Google <a href=\"https:\/\/techcrunch.com\/2024\/02\/22\/google-gemini-image-pause-people\/\" target=\"_blank\" rel=\"noopener\">paused<\/a> its AI chatbot Gemini\u2019s ability to generate images of people after a segment of users complained about historical inaccuracies. 
Told to depict \u201ca Roman legion,\u201d for instance, Gemini would show an anachronistic, cartoonish group of racially diverse foot soldiers while rendering \u201cZulu warriors\u201d as Black.<\/p>\n<p>It appears that Google \u2014 like some other AI vendors, including OpenAI \u2014 had implemented clumsy hardcoding under the hood to attempt to \u201ccorrect\u201d for biases in its model. In response to prompts like \u201cshow me images of only women\u201d or \u201cshow me images of only men,\u201d Gemini would refuse, asserting such images could \u201ccontribute to the exclusion and marginalization of other genders.\u201d Gemini was also loath to generate images of people identified solely by their race \u2014 e.g. \u201cwhite people\u201d or \u201cblack people\u201d \u2014 out of ostensible concern for \u201creducing individuals to their physical characteristics.\u201d<\/p>\n<p>Right wingers have latched on to the bugs as evidence of a \u201cwoke\u201d agenda being perpetuated by the tech elite. But it doesn\u2019t take Occam\u2019s razor to see the less nefarious truth: Google, burned by its tools\u2019 biases before (see: <a href=\"https:\/\/www.nytimes.com\/2023\/05\/22\/technology\/ai-photo-labels-google-apple.html\" target=\"_blank\" rel=\"noopener\">classifying Black men as gorillas<\/a>, mistaking thermal guns in Black people\u2019s hands <a href=\"https:\/\/algorithmwatch.org\/en\/google-vision-racism\/\" target=\"_blank\" rel=\"noopener\">as weapons<\/a>, etc.), is so desperate to avoid history repeating itself that it\u2019s manifesting a less biased world in its image-generating models \u2014 however erroneous.<\/p>\n<p>In her best-selling book \u201cWhite Fragility,\u201d anti-racist educator Robin DiAngelo writes about how the erasure of race \u2014 \u201ccolor blindness,\u201d by another phrase \u2014 contributes to systemic racial power imbalances rather than mitigating or alleviating them. 
By purporting to \u201cnot see color\u201d or reinforcing the notion that simply acknowledging the struggle of people of other races is sufficient to label oneself \u201cwoke,\u201d people <em>perpetuate<\/em> harm by avoiding any substantive conversation on the topic, DiAngelo says.<\/p>\n<p>Google\u2019s ginger treatment of race-based prompts in Gemini didn\u2019t avoid the issue, per se \u2014 but disingenuously attempted to conceal the worst of the model\u2019s biases. One could argue (<a href=\"https:\/\/www.technologyreview.com\/2019\/02\/04\/137602\/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix\/\" target=\"_blank\" rel=\"noopener\">and many have<\/a>) that these biases shouldn\u2019t be ignored or glossed over, but addressed in the broader context of the training data from which they arise \u2014 i.e. society on the world wide web.<\/p>\n<p>Yes, the data sets used to train image generators generally contain more white people than Black people, and yes, the images of Black people in those data sets reinforce negative stereotypes. That\u2019s why image generators <a href=\"https:\/\/www.washington.edu\/news\/2023\/11\/29\/ai-image-generator-stable-diffusion-perpetuates-racial-and-gendered-stereotypes-bias\/\" target=\"_blank\" rel=\"noopener\">sexualize certain women of color<\/a>, <a href=\"https:\/\/www.businessinsider.com\/ai-art-generators-dalle-stable-diffusion-racial-gender-bias-ceo-2023-3#:~:text=The%20study%20found%2097%25%20of,suite%20research%20company%20Cristkolder%20Associates.\" target=\"_blank\" rel=\"noopener\">depict white men in positions of authority<\/a> and generally favor <a href=\"https:\/\/news.engin.umich.edu\/2023\/12\/biases-in-large-image-text-ai-model-favor-wealthier-western-perspectives\/\" target=\"_blank\" rel=\"noopener\">wealthy Western perspectives<\/a>.<\/p>\n<p>Some may argue that there\u2019s no winning for AI vendors. 
Whether they tackle \u2014 or choose not to tackle \u2014 models\u2019 biases, they\u2019ll be criticized. And that\u2019s true. But I posit that, either way, these models are lacking in explanation \u2014 packaged in a fashion that minimizes the ways in which their biases manifest.<\/p>\n<p>Were AI vendors to address their models\u2019 shortcomings head on, in humble and transparent language, it\u2019d go a lot further than haphazard attempts at \u201cfixing\u201d what\u2019s essentially unfixable bias. We all have bias, the truth is \u2014 and we don\u2019t treat people the same as a result. Nor do the models we\u2019re building. And we\u2019d do well to acknowledge that.<\/p>\n<p>Here are some other AI stories of note from the past few days:<\/p>\n<ul>\n<li><strong><a href=\"https:\/\/techcrunch.com\/2024\/02\/22\/the-women-in-ai-making-a-difference\/\" target=\"_blank\" rel=\"noopener\">Women in AI:<\/a> <\/strong>TechCrunch launched a series highlighting notable women in the field of AI. 
Read the list <a href=\"https:\/\/techcrunch.com\/2024\/02\/22\/the-women-in-ai-making-a-difference\/\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/li>\n<li><strong><a href=\"https:\/\/techcrunch.com\/2024\/02\/22\/stable-diffusion-3-arrives-to-solidify-early-lead-in-ai-imagery-against-sora-and-gemini\/\" target=\"_blank\" rel=\"noopener\">Stable Diffusion v3:<\/a> <\/strong>Stability AI has announced Stable Diffusion 3, the latest and most powerful version of the company\u2019s image-generating AI model, based on a new architecture.<\/li>\n<li><strong><a href=\"https:\/\/techcrunch.com\/2024\/02\/22\/help-me-write-chrome-gets-a-built-in-ai-writing-tool-powered-by-gemini\/\" target=\"_blank\" rel=\"noopener\">Chrome gets GenAI:<\/a><\/strong> Google\u2019s new Gemini-powered tool in Chrome allows users to rewrite existing text on the web \u2014 or generate something completely new.<\/li>\n<li><a href=\"https:\/\/techcrunch.com\/2024\/02\/21\/are-you-blacker-than-chatgpt-take-this-quiz-to-find-out\/\" target=\"_blank\" rel=\"noopener\"><b>Blacker than ChatGPT:<\/b><\/a> Creative ad agency McKinney developed a quiz game, Are You Blacker than ChatGPT?, to shine a light on AI bias.<\/li>\n<li><strong><a href=\"https:\/\/techcrunch.com\/2024\/02\/21\/hundreds-of-ai-luminaries-sign-letter-calling-for-anti-deepfake-legislation\/\" target=\"_blank\" rel=\"noopener\">Calls for laws:<\/a> <\/strong>Hundreds of AI luminaries signed a public letter earlier this week calling for anti-deepfake legislation in the U.S.<\/li>\n<li><strong><a href=\"https:\/\/techcrunch.com\/2024\/02\/21\/match-group-inks-deal-with-openai-says-press-release-written-by-chatgpt\/\" target=\"_blank\" rel=\"noopener\">Match made in AI:<\/a>\u00a0<\/strong>OpenAI has a new customer in Match Group, the owner of apps including Hinge, Tinder and Match, whose employees will use OpenAI\u2019s AI tech to accomplish work-related tasks.<\/li>\n<li><strong><a 
href=\"https:\/\/techcrunch.com\/2024\/02\/21\/google-deepmind-forms-a-new-org-focused-on-ai-safety\/\" target=\"_blank\" rel=\"noopener\">DeepMind safety:<\/a>\u00a0<\/strong>DeepMind, Google\u2019s AI research division, has formed a new org, AI Safety and Alignment, made up of existing teams working on AI safety but also broadened to encompass new, specialized cohorts of GenAI researchers and engineers.<\/li>\n<li><strong><a href=\"https:\/\/techcrunch.com\/2024\/02\/21\/google-launches-two-new-open-llms\/\" target=\"_blank\" rel=\"noopener\">Open models:<\/a> <\/strong>Barely a week after launching the latest iteration of its\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/02\/16\/what-is-google-gemini-ai\/\" data-mrf-link=\"https:\/\/techcrunch.com\/2024\/02\/16\/what-is-google-gemini-ai\/\" target=\"_blank\" rel=\"noopener\">Gemini models<\/a>, Google released Gemma, a new family of lightweight open-weight models.<\/li>\n<li><strong><a href=\"https:\/\/techcrunch.com\/2024\/02\/20\/house-punts-on-ai-with-directionless-new-task-force\/\" target=\"_blank\" rel=\"noopener\">House task force:<\/a> <\/strong>The U.S. House of Representatives has founded a task force on AI that \u2014 as Devin writes \u2014 feels like a punt after years of indecision that show no sign of ending.<\/li>\n<\/ul>\n<h2>More machine learnings<\/h2>\n<p>AI models seem to know a lot, but what do they actually know? Well, the answer is nothing. But if you phrase the question slightly differently\u2026 they do seem to have internalized some \u201cmeanings\u201d that are similar to what humans know. Although no AI truly understands what a cat or a dog is, could it have some sense of similarity encoded in its embeddings of those two words that is different from, say, cat and bottle? 
<a href=\"https:\/\/arxiv.org\/abs\/2310.18348\" target=\"_blank\" rel=\"noopener\">Amazon researchers believe so.<\/a><\/p>\n<p>Their research compared the \u201ctrajectories\u201d of similar but distinct sentences, like \u201cthe dog barked at the burglar\u201d and \u201cthe burglar caused the dog to bark,\u201d with those of grammatically similar but different sentences, like \u201ca cat sleeps all day\u201d and \u201ca girl jogs all afternoon.\u201d They found that the ones humans would find similar were indeed internally treated as more similar despite being grammatically different, and vice versa for the grammatically similar ones. OK, I feel like this paragraph was a little confusing, but suffice it to say that the meanings encoded in LLMs appear to be more robust and sophisticated than expected, not totally naive.<\/p>\n<p>Neural encoding is proving useful in prosthetic vision, <a href=\"https:\/\/actu.epfl.ch\/news\/a-machine-learning-framework-that-encodes-images-2\/\" target=\"_blank\" rel=\"noopener\">Swiss researchers at EPFL have found<\/a>. Artificial retinas and other ways of replacing parts of the human visual system generally have very limited resolution due to the limitations of microelectrode arrays. So no matter how detailed the image is coming in, it has to be transmitted at a very low fidelity. But there are different ways of downsampling, and this team found that machine learning does a great job at it.<\/p>\n<div id=\"attachment_2670034\" style=\"width: 1034px\" class=\"wp-caption aligncenter\"><\/p>\n<p id=\"caption-attachment-2670034\" class=\"wp-caption-text\"><strong>Image Credits:<\/strong> EPFL<\/p>\n<\/div>\n<p>\u201cWe found that if we applied a learning-based approach, we got improved results in terms of optimized sensory encoding. But more surprising was that when we used an unconstrained neural network, it learned to mimic aspects of retinal processing on its own,\u201d said Diego Ghezzi in a news release. 
It does perceptual compression, basically. They tested it on mouse retinas, so it isn\u2019t just theoretical.<\/p>\n<p>An interesting application of computer vision by Stanford researchers hints at a mystery in how children develop their drawing skills. The team solicited and analyzed 37,000 drawings by kids of various objects and animals, and also (based on kids\u2019 responses) how recognizable each drawing was. Interestingly, it wasn\u2019t just the inclusion of signature features like a rabbit\u2019s ears that made drawings more recognizable by other kids.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-2670051\" src=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/02\/Drawings_Examples.webp\" alt=\"\" width=\"1024\" height=\"423\" srcset=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/02\/Drawings_Examples.webp 1500w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/02\/Drawings_Examples.webp?resize=150,62 150w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/02\/Drawings_Examples.webp?resize=300,124 300w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/02\/Drawings_Examples.webp?resize=768,317 768w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/02\/Drawings_Examples.webp?resize=680,281 680w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/02\/Drawings_Examples.webp?resize=1200,496 1200w, https:\/\/techcrunch.com\/wp-content\/uploads\/2024\/02\/Drawings_Examples.webp?resize=50,21 50w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\"\/><\/p>\n<p>\u201cThe kinds of features that lead drawings from older children to be recognizable don\u2019t seem to be driven by just a single feature that all the older kids learn to include in their drawings. 
It\u2019s something much more complex that these machine learning systems are picking up on,\u201d said lead researcher Judith Fan.<\/p>\n<p><a href=\"https:\/\/actu.epfl.ch\/news\/gpt-3-transforms-chemical-research\/\" target=\"_blank\" rel=\"noopener\">Chemists (also at EPFL) found<\/a> that LLMs are also surprisingly adept at helping out with their work after minimal training. It\u2019s not just doing chemistry directly, but rather being fine-tuned on a body of work that chemists individually can\u2019t possibly know all of. For instance, in thousands of papers there may be a few hundred statements about whether a high-entropy alloy is single or multiple phase (you don\u2019t have to know what this means \u2014 they do). The system (based on GPT-3) can be trained on this type of yes\/no question and answer, and soon is able to extrapolate from that.<\/p>\n<p>It\u2019s not some huge advance, just more evidence that LLMs are a useful tool in this sense. \u201cThe point is that this is as easy as doing a literature search, which works for many chemical problems,\u201d said researcher Berend Smit. \u201cQuerying a foundational model might become a routine way to bootstrap a project.\u201d<\/p>\n<p>Last, <a href=\"https:\/\/newsroom.haas.berkeley.edu\/research\/internet-images-may-be-turning-back-the-clock-on-gender-bias-research-finds\/\" target=\"_blank\" rel=\"noopener\">a word of caution from Berkeley researchers<\/a>, though now that I\u2019m reading the post again I see EPFL was involved with this one too. Go Lausanne! The group found that imagery found via Google was much more likely to enforce gender stereotypes for certain jobs and words than text mentioning the same thing. And there were also just way more men present in both cases.<\/p>\n<p>Not only that, but in an experiment, they found that people who viewed images rather than reading text when researching a role associated those roles with one gender more reliably, even days later. 
\u201cThis isn\u2019t only about the frequency of gender bias online,\u201d said researcher Douglas Guilbeault. \u201cPart of the story here is that there\u2019s something very sticky, very potent about images\u2019 representation of people that text just doesn\u2019t have.\u201d<\/p>\n<p>With stuff like the Google image generator diversity fracas going on, it\u2019s easy to lose sight of the established and frequently verified fact that the source of data for many AI models shows serious bias, and this bias has a real effect on people.<\/p>\n<\/p><\/div>\n<p><br \/>\n<br \/><a href=\"https:\/\/techcrunch.com\/2024\/02\/24\/this-week-in-ai-addressing-racism-in-ai-image-generators\/\" target=\"_blank\" rel=\"noopener\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Keeping up with an industry as fast-moving as\u00a0AI\u00a0is a tall order. So until an AI can do it for you, here\u2019s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn\u2019t cover on their own. 
This week in AI, Google paused its AI chatbot Gemini\u2019s [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":78226,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[14],"tags":[],"class_list":{"0":"post-78225","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-tech"},"_links":{"self":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/78225","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/comments?post=78225"}],"version-history":[{"count":0,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/78225\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media\/78226"}],"wp:attachment":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media?parent=78225"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/categories?post=78225"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/tags?post=78225"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}