<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>NLP on XAI Today</title>
    <link>https://xai.today/tags/nlp/</link>
    <description>Recent content in NLP on XAI Today</description>
    <generator>Hugo</generator>
    <language>en-US</language>
    <lastBuildDate>Sun, 30 Jun 2024 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://xai.today/tags/nlp/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Gender Controlled Data Sets for XAI Research</title>
      <link>https://xai.today/posts/gender-controlled-data-sets-for-xai-research/</link>
      <pubDate>Sun, 30 Jun 2024 00:00:00 +0000</pubDate>
      <guid>https://xai.today/posts/gender-controlled-data-sets-for-xai-research/</guid>
      <description>&lt;p&gt;The paper &lt;a href=&#34;https://arxiv.org/pdf/2406.11547v1&#34;&gt;&amp;ldquo;GECOBench: A Gender-Controlled Text Dataset and Benchmark for Quantifying Biases in Explanations&amp;rdquo;&lt;/a&gt; introduces a novel dataset, GECO, to evaluate biases in AI explanations, specifically focusing on gender. The authors constructed the dataset with sentence pairs that differ only in gendered pronouns or names, enabling a controlled analysis of gender biases in AI-generated text. GECOBench, an accompanying benchmark, assesses different explainable AI (XAI) methods by measuring their ability to detect and mitigate biases within this context.&lt;/p&gt;&#xA;&lt;p&gt;The study investigates biases in language models, emphasizing that traditional AI systems often produce biased explanations due to their training on unbalanced datasets. By employing GECO, the researchers show how these biases manifest and affect AI explanations. They demonstrate that existing XAI methods, which aim to make AI decisions more transparent, also carry biases, potentially reinforcing stereotypes or presenting skewed explanations.&lt;/p&gt;&#xA;&lt;p&gt;Moreover, the authors evaluate several fine-tuning and debiasing strategies to reduce bias in AI models. Their findings suggest that certain fine-tuning approaches can significantly decrease gender bias in explanations without compromising the model&amp;rsquo;s overall performance. This highlights the importance of combining XAI methods with robust debiasing techniques to create fairer and more trustworthy AI systems.&lt;/p&gt;&#xA;&lt;p&gt;The paper also provides a comprehensive framework for evaluating bias in XAI methods by using GECOBench. This benchmark allows for a standardized comparison across different methods, providing insights into their strengths and limitations concerning gender bias. 
It helps identify which methods are more susceptible to biases and under what conditions, promoting the development of better XAI techniques.&lt;/p&gt;&#xA;&lt;p&gt;Overall, the paper underscores the critical need for datasets like GECO and benchmarks like GECOBench in understanding and mitigating biases in AI explanations. It calls for further research and development in the field of fair and explainable AI, providing resources and guidelines for future studies to build upon. The dataset and code are made publicly available, fostering community efforts toward more equitable AI systems. The paper&amp;rsquo;s findings have broad implications for the design of AI systems, particularly those deployed in sensitive or high-stakes environments.&lt;/p&gt;&#xA;</description>
    </item>
    <item>
      <title>Combatting Fake News With XAI</title>
      <link>https://xai.today/posts/combatting-fake-news-with-xai/</link>
      <pubDate>Mon, 13 Mar 2023 00:00:00 +0000</pubDate>
      <guid>https://xai.today/posts/combatting-fake-news-with-xai/</guid>
      <description>&lt;p&gt;With all the unhinged hype over ChatGPT stealing everyone&amp;rsquo;s jobs and AI taking over the world, it&amp;rsquo;s great to see positive use cases for Machine Learning (ML) technologies. As usual, eXplainable Artificial Intelligence (XAI) has something to contribute to the ethical landscape of fairness and transparency. &lt;a href=&#34;https://www.europapress.es/sociedad/noticia-using-explainable-artificial-intelligence-to-combat-fake-news-20220922102227.html&#34;&gt;In this recent news article&lt;/a&gt; we see a concerted attempt to combat fake news with XAI and a pretty sophisticated tech stack.&lt;/p&gt;&#xA;&lt;p&gt;With the rise of social media and other online platforms, the spread of fake news has become a major problem. Fake news is defined as news stories that are intentionally false and designed to mislead readers. It is often spread through social media and can have serious consequences, such as influencing public opinion and even swaying elections.&lt;/p&gt;&#xA;&lt;p&gt;XAI is a field of AI that focuses on making machine learning models transparent and explainable. By using XAI, it is possible to detect and filter out fake news while also providing a clear explanation of how the model came to its decision. One way XAI can help in combatting fake news is through the generation of counterfactual explanations. Counterfactual explanations describe how a model would have made a different decision if the input data had been different. In the context of fake news detection, counterfactual explanations can be used to highlight words and phrases that were critical to the classification as fake or not fake. The counterfactual interpretation is that if those phrases were substituted, the article&amp;rsquo;s classification would flip.
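&lt;/p&gt;&#xA;&lt;p&gt;As a rough sketch of the idea (the &lt;code&gt;classify&lt;/code&gt; function and the candidate phrases here are hypothetical stand-ins, not details of the system described in the article), a counterfactual check might look like this in Python-style pseudocode:&lt;/p&gt;&#xA;&lt;pre&gt;&lt;code&gt;# Illustrative sketch only: 'classify' and the substitutions are
# hypothetical stand-ins, not the article's actual system.
original = "Scientists confirm miracle cure hidden by doctors"
label = classify(original)  # suppose this returns "fake"

# Substitute candidate phrases; if the predicted label changes,
# that phrase was critical to the "fake" classification.
for phrase, substitute in [("miracle cure", "new treatment"),
                           ("hidden by doctors", "under clinical review")]:
    variant = original.replace(phrase, substitute)
    if classify(variant) != label:
        print(phrase, "was critical: substituting it flips the label")
&lt;/code&gt;&lt;/pre&gt;&#xA;&lt;p&gt;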
In the article, you can see a good example of a SHAP force plot highlighting specific text elements that add to or detract from the suspect nature of the text extract (the figure labelled &amp;ldquo;&lt;em&gt;Explainability module developed for multiclass classification&lt;/em&gt;&amp;rdquo;).&lt;/p&gt;&#xA;&lt;p&gt;If we are going to have any chance of finding solutions to the technology-driven problems of today, then we need to embrace positive applications of ML and not get carried along with all the negative hype. XAI provides us with many such positive examples.&lt;/p&gt;&#xA;</description>
    </item>
  </channel>
</rss>
