{"id":53875,"date":"2023-11-14T09:53:36","date_gmt":"2023-11-14T09:53:36","guid":{"rendered":"https:\/\/entertainment.runfyers.com\/index.php\/2023\/11\/14\/giskards-open-source-framework-evaluates-ai-models-before-theyre-pushed-into-production-techcrunch\/"},"modified":"2023-11-14T09:53:36","modified_gmt":"2023-11-14T09:53:36","slug":"giskards-open-source-framework-evaluates-ai-models-before-theyre-pushed-into-production-techcrunch","status":"publish","type":"post","link":"https:\/\/entertainment.runfyers.com\/index.php\/2023\/11\/14\/giskards-open-source-framework-evaluates-ai-models-before-theyre-pushed-into-production-techcrunch\/","title":{"rendered":"Giskard\u2019s open-source framework evaluates AI models before they\u2019re pushed into production | TechCrunch"},"content":{"rendered":"<div>\n<p id=\"speakable-summary\"><a href=\"https:\/\/www.giskard.ai\/\" target=\"_blank\" rel=\"noopener\">Giskard<\/a> is a French startup working on an open-source testing framework for large language models. It can alert developers to the risks of bias, security holes and a model\u2019s ability to generate harmful or toxic content.<\/p>\n<p>While there\u2019s a lot of hype around AI models, ML testing systems are also set to become a hot topic as regulation is about to be enforced in the EU with the AI Act, and in other countries. Companies that develop AI models will have to prove that they comply with a set of rules and mitigate risks, or face hefty fines.<\/p>\n<p>Giskard is an AI startup that embraces regulation, and one of the first examples of a developer tool focused specifically on more efficient testing.<\/p>\n<p>\u201cI worked at Dataiku before, particularly on NLP model integration. 
And I could see that, when I was in charge of testing, there were things that didn\u2019t work well when you wanted to apply them to practical cases, and that it was very difficult to compare the performance of suppliers with one another,\u201d Giskard co-founder and CEO Alex Combessie told me.<\/p>\n<p>There are three components behind Giskard\u2019s testing framework. First, the company has released <a href=\"https:\/\/github.com\/Giskard-AI\/giskard\" target=\"_blank\" rel=\"noopener\">an open-source Python library<\/a> that can be integrated into an LLM project \u2014 and more specifically retrieval-augmented generation (RAG) projects. It is already quite popular on GitHub, and it is compatible with other tools in the ML ecosystem, such as Hugging Face, MLflow, Weights &amp; Biases, PyTorch, TensorFlow and LangChain.<\/p>\n<p>After the initial setup, Giskard helps you generate a test suite that will be run regularly against your model. Those tests cover a wide range of issues, such as performance, hallucinations, misinformation, non-factual output, bias, data leakage, harmful content generation and prompt injection.<\/p>\n<p>\u201cAnd there are several aspects: you\u2019ll have the performance aspect, which will be the first thing on a data scientist\u2019s mind. But more and more, you have the ethical aspect, both from a brand image point of view and now from a regulatory point of view,\u201d Combessie said.<\/p>\n<p>Developers can then integrate the tests into the continuous integration and continuous delivery (CI\/CD) pipeline so that they run every time there\u2019s a new iteration on the code base. If something is wrong, developers receive a scan report in their GitHub repository, for instance.<\/p>\n<p>Tests are customized based on the end use case of the model. Companies working on RAG can give Giskard access to their vector databases and knowledge repositories so that the test suite is as relevant as possible. 
For instance, if you\u2019re building a chatbot that can give you information on climate change based on the most recent report from the IPCC and using an LLM from OpenAI, Giskard\u2019s tests will check whether the model generates misinformation about climate change, contradicts itself, and so on.<\/p>\n<div id=\"attachment_2628335\" style=\"width: 1034px\" class=\"wp-caption alignnone\">\n<p id=\"caption-attachment-2628335\" class=\"wp-caption-text\"><strong>Image Credits:<\/strong> Giskard<\/p>\n<\/div>\n<p>Giskard\u2019s second product is an AI quality hub that helps you debug a large language model and compare it to other models. This quality hub is part of Giskard\u2019s <a href=\"https:\/\/www.giskard.ai\/pricing\" target=\"_blank\" rel=\"noopener\">premium offering<\/a>. In the future, the startup hopes it will be able to generate documentation proving that a model complies with regulation.<\/p>\n<p>\u201cWe\u2019re starting to sell the AI Quality Hub to companies like the Banque de France and L\u2019Or\u00e9al \u2014 to help them debug and find the causes of errors. In the future, this is where we\u2019re going to put all the regulatory features,\u201d Combessie said.<\/p>\n<p>The company\u2019s third product is called LLMon. It\u2019s a real-time monitoring tool that can evaluate LLM answers for the most common issues (toxicity, hallucination, fact checking\u2026) before the response is sent back to the user.<\/p>\n<p>It currently works with companies that use OpenAI\u2019s APIs and LLMs as their foundation model, but the company is working on integrations with Hugging Face, Anthropic, etc.<\/p>\n<h2>Regulating use cases<\/h2>\n<p>There are several ways to regulate AI models. 
Based on conversations with people in the AI ecosystem, it\u2019s still unclear whether the AI Act will apply to foundation models from OpenAI, Anthropic, Mistral and others, or only to applied use cases.<\/p>\n<p>In the latter case, Giskard seems particularly well positioned to alert developers to potential misuses of LLMs enriched with external data (retrieval-augmented generation, or RAG, as AI researchers call it).<\/p>\n<p>There are currently 20 people working for Giskard. \u201cWe see a very clear market fit with customers on LLMs, so we\u2019re going to roughly double the size of the team to be the best LLM antivirus on the market,\u201d Combessie said.<\/p>\n<\/div>\n<p><a href=\"https:\/\/techcrunch.com\/2023\/11\/14\/giskards-open-source-framework-evaluates-ai-models-before-theyre-pushed-into-production\/\" target=\"_blank\" rel=\"noopener\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Giskard is a French startup working on an open-source testing framework for large language models. It can alert developers to the risks of bias, security holes and a model\u2019s ability to generate harmful or toxic content. 
While there\u2019s a lot of hype around AI models, ML testing systems will also quickly become a hot topic as [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":53876,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[14],"tags":[],"class_list":{"0":"post-53875","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-tech"},"_links":{"self":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/53875","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/comments?post=53875"}],"version-history":[{"count":0,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/53875\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media\/53876"}],"wp:attachment":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media?parent=53875"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/categories?post=53875"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/tags?post=53875"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}