<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Methods on XAI Today</title>
    <link>https://xai.today/tags/methods/</link>
    <description>Recent content in Methods on XAI Today</description>
    <generator>Hugo</generator>
    <language>en-US</language>
    <lastBuildDate>Tue, 09 Jul 2024 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://xai.today/tags/methods/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Explainable AI for Improved Heart Disease Prediction</title>
      <link>https://xai.today/posts/optimized-ensemble-heart-disease-prediction/</link>
      <pubDate>Tue, 09 Jul 2024 00:00:00 +0000</pubDate>
      <guid>https://xai.today/posts/optimized-ensemble-heart-disease-prediction/</guid>
      <description>&lt;p&gt;The paper &amp;ldquo;&lt;a href=&#34;https://www.mdpi.com/2078-2489/15/7/394&#34;&gt;Optimized Ensemble Learning Approach with Explainable AI for Improved Heart Disease Prediction&lt;/a&gt;&amp;rdquo; focuses on explaining machine learning models in healthcare, similar to my original work in &amp;ldquo;&lt;a href=&#34;https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-020-01201-2&#34;&gt;Ada-WHIPS: explaining AdaBoost classification with applications in the health sciences&lt;/a&gt;&amp;rdquo;. The newer paper uses a novel Bayesian method to optimally tune the hyper-parameters of ensemble models such as AdaBoost, XGBoost and Random Forest, and then applies the now well-established SHAP method to assign Shapley values to each feature. The authors use their method to analyse three heart disease prediction datasets, including the well-known Cleveland set used as a benchmark in many ML research papers.&lt;/p&gt;&#xA;&lt;p&gt;SHAP (&lt;a href=&#34;https://arxiv.org/abs/1705.07874&#34;&gt;Lundberg and Lee&lt;/a&gt;) came hot on the heels of the revolutionary LIME method (&lt;a href=&#34;https://arxiv.org/abs/1602.04938&#34;&gt;Ribeiro, Singh and Guestrin&lt;/a&gt;), which together delivered a paradigm shift in the usefulness and feasibility of eXplainable Artificial Intelligence (XAI). In fact, LIME was published at exactly the time I was becoming interested in the topic of XAI and served as inspiration for my own Ph.D. journey. Both methods fall into the category of Additive Feature Attribution Methods (AFAM) and work by assigning a unitless value to each of the input features for a given prediction. The main benefits of AFAM become clear when viewing a beeswarm plot of their responses across a larger dataset, such as the whole training data. Patterns emerge showing which input variables affect the response variable most strongly, and in which direction. This usage is much more sophisticated than classic variable importance plots, which lack the direction and mathematical guarantees offered by SHAP (a minimal code sketch of this workflow appears at the end of this post).&lt;/p&gt;&#xA;&lt;p&gt;In the clinical setting, these mathematical guarantees mean that the resulting variable sensitivity information could be used to create a broader diagnostic tool. However, while this approach can provide a general understanding of which variables drive a model&amp;rsquo;s predictions, it lacks the fine-grained, instance-specific clarity offered by perfect-fidelity, decompositional methods.&lt;/p&gt;&#xA;&lt;p&gt;On the other hand, my original method Ada-WHIPS (firmly within the decompositional methods category) enhances interpretability in clinical settings by providing direct, case-specific explanations, making it a powerful tool for clinicians needing detailed transparency for patient-specific decision-making. Given the choice of an AdaBoost model (or a Gradient Boosted Model, or a Random Forest), it makes sense to use an XAI method that is highly targeted to these decomposable ensembles. Ada-WHIPS digs deep into the internal structure of AdaBoost models, redistributing the adaptive classifier weights generated during model training (and therefore a function of the training data distribution) to extract interpretable rules at the decision node level.&lt;/p&gt;&#xA;&lt;p&gt;One area where Ada-WHIPS could benefit from the techniques in the new paper is the use of Bayesian methods to tune hyper-parameters. 
Their approach potentially leads to improved model accuracy, a crucial factor in high-stakes environments like healthcare, and could also &amp;ldquo;juice up&amp;rdquo; the model internals, yielding more accurate decision nodes. However, the paper appears to omit any detail about how this approach is deployed. This omission is indeed a great pity because, from what I understood, the Bayesian parameter selection was actually the authors&amp;rsquo; novel contribution (the use of ensembles and SHAP on these particular datasets being nothing particularly new).&lt;/p&gt;&#xA;&lt;p&gt;In conclusion, the SHAP-based approach offers valuable insights at a macro level, the new paper boasts improvements in model accuracy through Bayesian tuning, and my Ada-WHIPS method&amp;rsquo;s per-instance clarity and actionable insights should prove practical in scenarios where clinicians require detailed explanations of specific cases. I would be delighted to see some confluence of the three ideas, so that the benefits from each can combine and reinforce the use of highly targeted explainability in clinical applications.&lt;/p&gt;&#xA;
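&lt;p&gt;To make the beeswarm workflow above concrete, here is a minimal sketch using the shap and xgboost Python packages. It is illustrative only: the dataset file name and target column are hypothetical stand-ins, and the Bayesian hyper-parameter tuning described in the paper is not reproduced here.&lt;/p&gt;&#xA;&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;# Minimal sketch of the SHAP beeswarm workflow described in this post.
# Assumes a tabular heart-disease-style dataset in heart.csv with a
# binary target column; the file name and column name are hypothetical.
import pandas as pd
import shap
import xgboost
from sklearn.model_selection import train_test_split

df = pd.read_csv('heart.csv')
X, y = df.drop(columns='target'), df['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any tree ensemble works here; the paper tunes hyper-parameters with
# Bayesian optimisation, which is omitted from this sketch.
model = xgboost.XGBClassifier(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)

# TreeExplainer assigns a Shapley value to every feature of every row.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# The beeswarm (summary) plot shows which features drive predictions
# most strongly, and in which direction, across the whole test set.
shap.summary_plot(shap_values, X_test)
&lt;/code&gt;&lt;/pre&gt;&#xA;</description>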
    </item>
    <item>
      <title>Algebraic Aggregation of Random Forests</title>
      <link>https://xai.today/posts/algebraic-aggregation-forests/</link>
      <pubDate>Thu, 10 Aug 2023 00:00:00 +0000</pubDate>
      <guid>https://xai.today/posts/algebraic-aggregation-forests/</guid>
      <description>&lt;p&gt;In my paper, &amp;ldquo;&lt;a href=&#34;https://link.springer.com/article/10.1007/s10462-020-09833-6&#34;&gt;CHIRPS: Explaining random forest classification&lt;/a&gt;&amp;rdquo;, I took an empirical approach to addressing model transparency by extracting rules that make Random Forest (RF) models more interpretable. Importantly, this was done without sacrificing the high levels of accuracy achieved by RF models.&lt;/p&gt;&#xA;&lt;p&gt;The recently published &amp;ldquo;&lt;a href=&#34;https://link.springer.com/article/10.1007/s10009-021-00635-x?fromPaywallRec=false&#34;&gt;Algebraic aggregation of random forests: towards explainability and rapid evaluation&lt;/a&gt;&amp;rdquo; by Gossen and Steffen provides a theoretical counterpart, offering essential proofs and a mathematical framework for achieving explainability with RF models.&lt;/p&gt;&#xA;&lt;p&gt;While my paper focused on simplifying complex models by rule extraction on a per-instance basis, this subsequent work introduces Algebraic Decision Diagrams (ADDs) to aggregate Random Forests, optimizing their structure and enhancing interpretability at the model level. Both papers aim to improve model transparency, though by different means: my approach is empirical, leveraging rule extraction to clarify black-box models, whereas the latter introduces algebraic methods to combine decision trees into efficient, understandable diagrams.&lt;/p&gt;&#xA;&lt;p&gt;The mathematical concepts in Gossen and Steffen&amp;rsquo;s paper, such as path reduction and algebraic operations, support model simplification. Importantly, the authors provide formal proofs that this aggregation retains the original model&amp;rsquo;s accuracy. This complements the practical focus in my paper, where the goal was also to maintain accuracy while increasing explainability.&lt;/p&gt;&#xA;&lt;p&gt;Ultimately, the two papers reach the same destination, improving the transparency of RF models, but by different routes. While my paper uses rule extraction to bring clarity to complex models, the subsequent work constructs a theoretical basis using algebraic tools, providing formal assurances for the outcomes I demonstrated empirically. Together, they offer complementary perspectives on making RF models more understandable and efficient.&lt;/p&gt;&#xA;</description>
    </item>
    <item>
      <title>Explaining Random Forests with Representative Trees</title>
      <link>https://xai.today/posts/forest-for-trees/</link>
      <pubDate>Thu, 15 Jun 2023 00:00:00 +0000</pubDate>
      <guid>https://xai.today/posts/forest-for-trees/</guid>
      <description>&lt;p&gt;The paper &amp;ldquo;&lt;a href=&#34;https://link.springer.com/article/10.1007/s41237-023-00205-2?fromPaywallRec=false&#34;&gt;Can’t see the forest for the trees: Analyzing groves to explain random forests&lt;/a&gt;&amp;rdquo; explores a novel take on model-specific explanations, as outlined in my own research (see, for example, &amp;ldquo;&lt;a href=&#34;https://link.springer.com/article/10.1007/s10462-020-09833-6&#34;&gt;CHIRPS: Explaining random forest classification&lt;/a&gt;&amp;rdquo;). This new paper by Szepannek and von Holt seeks to make Random Forests (RF) more interpretable. RF models are notoriously hard to explain due to their complexity, and the methods proposed here work well for both classification and regression, which is a very useful extension to the field.&lt;/p&gt;&#xA;&lt;p&gt;The authors introduce most representative trees (MRT) and surrogate trees, essentially distilling a simpler model to run side by side with the black box RF. MRTs focus on highlighting individual trees within a random forest that best explain the overall model behavior, while surrogate trees mimic the forest with simpler, more digestible versions. I have some reservations about the latter approach, because my own research showed that any surrogate model comes with a failure rate: the proportion of examples that the surrogate classifies differently from the black box model under scrutiny (a small code sketch at the end of this post shows how to measure it). I also question the assertion that a model of 10 or 24 decision trees really is so interpretable. Even a model of this reduced size still likely contains far too many components for a human-in-the-loop to consider and understand.&lt;/p&gt;&#xA;&lt;p&gt;In any case, to give the authors their due credit, they navigate the trade-offs between accuracy and interpretability of both MRT and surrogate tree methods, and propose a novel concept called &lt;em&gt;groves&lt;/em&gt;: small collections of decision trees that balance the need for interpretability with predictive accuracy. Groves provide a middle ground by combining the benefits of MRTs and surrogate models, reducing the overall complexity while still offering meaningful insights into how the model operates. This approach aligns with the goal of making models more transparent and trustworthy.&lt;/p&gt;&#xA;&lt;p&gt;Through various case studies, the paper shows how groves and surrogate trees can be effectively applied to real-world datasets. The trade-off between model accuracy and explainability remains a central challenge. Yet, in these studies, groves provide a workable compromise by making it easier for humans to understand what is driving the model’s predictions without overwhelming them with unnecessary detail.&lt;/p&gt;&#xA;&lt;p&gt;The discussion also highlights a key challenge in using groves: deciding on the right number of trees to use for explanation. Using too many trees risks overwhelming the user with information (as I have already pointed out), while too few might fail to capture the complexity of the underlying model and run with an untenable failure rate. I discuss ways to achieve a zero failure rate in my thesis. Keeping explanations concise and accessible is just a part of the complete picture.&lt;/p&gt;&#xA;&lt;p&gt;In conclusion, this paper underscores the crucial need for enhancing the interpretability of machine learning models, particularly in high-stakes fields like healthcare and finance, where decision transparency is essential. 
By extending work on interpretability through methods like groves and surrogate trees, it addresses the challenge of making powerful models like random forests more understandable.&lt;/p&gt;&#xA;
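&lt;p&gt;As a concrete illustration of the surrogate failure rate mentioned above, here is a minimal Python sketch. It uses a stand-in scikit-learn dataset rather than the case-study data from the paper, and a single shallow surrogate tree rather than the groves method itself.&lt;/p&gt;&#xA;&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;# Minimal sketch of a surrogate failure rate: the share of examples
# where a simple surrogate tree disagrees with the random forest it is
# meant to explain. The dataset here is a stand-in.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X_train, y_train)

# Fit the surrogate on the forest predictions, not the true labels,
# so that it mimics the black box rather than the data.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_train, forest.predict(X_train))

disagree = surrogate.predict(X_test) != forest.predict(X_test)
print('surrogate failure rate:', np.mean(disagree))
&lt;/code&gt;&lt;/pre&gt;&#xA;</description>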
    </item>
    <item>
      <title>Explaining Random Forests with Boolean Satisfiability</title>
      <link>https://xai.today/posts/explaining-rf-with-sat/</link>
      <pubDate>Mon, 21 Jun 2021 00:00:00 +0000</pubDate>
      <guid>https://xai.today/posts/explaining-rf-with-sat/</guid>
      <description>&lt;p&gt;The paper &amp;ldquo;&lt;a href=&#34;https://arxiv.org/pdf/2105.10278#page=7&amp;zoom=100,72,821&#34;&gt;On Explaining Random Forests with SAT&lt;/a&gt;&amp;rdquo; uses Boolean satisfiability (SAT) methods to provide a formal framework for generating explanations of Random Forest (RF) predictions. A key result in the paper is that abductive explanations (AXp) and contrastive explanations (CXp) can be derived by encoding the RF’s decision paths into propositional logic.&lt;/p&gt;&#xA;&lt;p&gt;Encoding a decision path as propositional logic is an entirely reasonable approach, and quite straightforward, as I showed in my paper &lt;a href=&#34;https://link.springer.com/article/10.1007/s10462-020-09833-6&#34;&gt;CHIRPS: Explaining random forest classification&lt;/a&gt;. The decision paths of an RF model can be transformed into a Boolean formula in Conjunctive Normal Form (CNF). For example, each decision tree in the forest is represented as a set of clauses. Following the paths for a single example prediction essentially carves out a region of the feature space with a set of step functions, resulting in a sub-region that must return the target response. When the clauses in this set of step functions involve only a subset of the features, a change in the remaining feature inputs has no effect on the model prediction. This subset is a prime implicant (PI) explanation.&lt;/p&gt;&#xA;&lt;p&gt;A PI-explanation is a minimal subset of features that is sufficient to guarantee a particular prediction made by a machine learning model. It represents the smallest set of conditions that, if held constant, would lead to the same classification result. Essentially, it&amp;rsquo;s the most concise explanation of why the model arrived at its decision, highlighting the critical features responsible for that prediction (a small illustrative sketch of this idea appears at the end of this post). In fact, my own research centred on finding soft PI-explanations, and on revealing the limits where they no longer hold true for extreme outliers and unusual examples.&lt;/p&gt;&#xA;&lt;p&gt;The authors of this paper show that finding AXp and CXp by this PI-explanation method reduces to solving a SAT problem, and is therefore NP-hard in general, but can be polynomial under specific conditions. This insight into the problem complexity is significant because it establishes that generating explanations is feasible when those assumptions are met and opens up the method to practical applications with real-world data.&lt;/p&gt;&#xA;&lt;p&gt;Overall, the SAT-based methodology enables a structured, efficient way to uncover the decision-making process of Random Forests, ensuring that their predictions are not just accurate but also explainable, which is crucial for domains requiring transparency like healthcare and finance.&lt;/p&gt;&#xA;
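&lt;p&gt;To illustrate the PI-explanation idea (though not the SAT encoding itself), here is a minimal Python sketch that looks for a small feature subset that appears sufficient to fix a random forest prediction for one instance. It estimates sufficiency by sampling, so unlike the SAT-based method it gives no formal guarantees of minimality or soundness, and the dataset is a stand-in.&lt;/p&gt;&#xA;&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;# Illustrative sketch of a PI-explanation: a small feature subset that
# appears sufficient to fix the prediction for one instance. This is a
# sampling heuristic, not the SAT encoding from the paper, so it gives
# no formal guarantee of minimality or soundness.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

rng = np.random.default_rng(0)
x = X[0]                                  # the instance to explain
target = forest.predict([x])[0]

def seems_sufficient(kept, n_samples=500):
    # Hold the kept features at their observed values and resample the
    # rest from the training data; check the prediction never changes.
    samples = X[rng.integers(0, len(X), n_samples)].copy()
    samples[:, kept] = x[kept]
    return np.all(forest.predict(samples) == target)

# Greedily try to drop features, least important first.
kept = list(range(X.shape[1]))
for f in sorted(kept, key=lambda j: forest.feature_importances_[j]):
    trial = [k for k in kept if k != f]
    if trial and seems_sufficient(trial):
        kept = trial

print('candidate PI-explanation (feature indices):', kept)
&lt;/code&gt;&lt;/pre&gt;&#xA;</description>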
    </item>
    <item>
      <title>How Subsets of the Training Data Affect a Prediction</title>
      <link>https://xai.today/posts/training-subsets-affect-prediction/</link>
      <pubDate>Sun, 20 Dec 2020 00:00:00 +0000</pubDate>
      <guid>https://xai.today/posts/training-subsets-affect-prediction/</guid>
      <description>&lt;p&gt;I was quite excited by the title of a new paper, released as a preprint this month. &lt;a href=&#34;https://www.academia.edu/84191713/Explainable_Artificial_Intelligence_How_Subsets_of_the_Training_Data_Affect_a_Prediction&#34;&gt;&amp;ldquo;Explainable Artificial Intelligence: How Subsets of the Training Data Affect a Prediction&amp;rdquo;&lt;/a&gt; by Andreas Brandsæter and Ingrid K. Glad, at first glance, appeared to have some close alignment to my own work &lt;a href=&#34;https://link.springer.com/article/10.1007/s10462-020-09833-6&#34;&gt;CHIRPS: Explaining random forest classification&lt;/a&gt;, published earlier this year in June. It&amp;rsquo;s generally highly desirable to connect with other researchers with whom you share common ground and who are working contemporaneously. Often, fruitful collaborations are born.&lt;/p&gt;&#xA;&lt;p&gt;As it turns out, the authors have taken a fairly different approach to mine. The CHIRPS method uses a minimal number of constraints to discover a large, high-precision subset of neighbours in the training data that share the same classification from the model, and returns robust statistics that proxy for precision and coverage. Brandsæter and Glad&amp;rsquo;s method is a novel approach that works with regression and time series problems, and presupposes that there are subsets in the data (that may or may not be adjacent) that can be set up &lt;em&gt;in advance&lt;/em&gt; to reveal regions of influence on the final prediction of a given data point. We share a recognition of the importance of interpretability in AI and machine learning, especially in critical applications.&lt;/p&gt;&#xA;&lt;p&gt;The authors propose a methodology that uses Shapley values to measure the importance of different training data subsets in shaping model predictions. Shapley values, originating from coalitional game theory, are adapted here to quantify the contribution of each subset of training data as if each subset were a “player” influencing the outcome of the model&amp;rsquo;s prediction (a small code sketch at the end of this post illustrates this). This approach offers a fresh perspective by directly associating predictions with specific training data subsets, which can reveal patterns or biases that feature-based explanations might miss.&lt;/p&gt;&#xA;&lt;p&gt;The paper delves into the theoretical framework of Shapley values in a coalitional game context and extends this to analyze subset importance. The authors describe how their methodology can pinpoint the impact of specific subsets on predictions, facilitating insights into model behavior, training data errors, and potential biases. By using subsets rather than individual data points or features, this approach is particularly well-suited to models that rely on large, high-dimensional datasets where feature importance alone may not fully capture influential patterns. This method is demonstrated to be useful in understanding how similar predictions may stem from different subsets of data, emphasizing the complex interactions within training data that influence predictions.&lt;/p&gt;&#xA;&lt;p&gt;Through several case studies, the paper demonstrates how Shapley values for subset importance can be applied in real-world scenarios. For example, in time series data and autonomous vehicle predictions, subsets of training data based on chronological segmentation reveal how specific periods contribute to model outputs. This approach is shown to be valuable for identifying anomalies or segment-specific patterns that could affect model accuracy or introduce biases. 
Additionally, by explaining the squared error of predictions, the authors illustrate how this methodology can also diagnose errors in training data, which could improve overall model reliability.&lt;/p&gt;&#xA;&lt;p&gt;The authors discuss limitations and challenges, particularly around the computational complexity of retraining models on multiple subsets to calculate Shapley values. They suggest that, while computationally intensive, this process can be optimized with parallel processing and may not need to be repeated for each new test instance. They also propose potential applications of this methodology in tailoring training data acquisition strategies, such as prioritizing cases where predictions are most critical, which can improve model performance by selectively sampling from influential subsets.&lt;/p&gt;&#xA;&lt;p&gt;In conclusion, Brandsæter and Glad’s paper represents a significant advancement in explainable AI by emphasizing the training data’s impact on model predictions. By shifting focus to data-centric explanations, their approach highlights how subsets within the data contribute directly to individual predictions, expanding the interpretative toolkit beyond traditional feature importance. This approach aligns with my own work on CHIRPS, underscoring the notion that providing contextual information from training data strengthens model transparency and interpretability. Using training data as a reference framework enables explainable AI methods to draw on established statistical theory, which ultimately lends robustness to explanations, even in black-box models. Together, these methods suggest a promising direction for explainable AI, wherein training data subsets serve as crucial elements to understand and elucidate model behavior effectively.&lt;/p&gt;&#xA;
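&lt;p&gt;For readers who want to see the subset-Shapley idea in code, here is a minimal sketch. It uses synthetic data and three arbitrary blocks standing in for the chronological segments discussed in the paper, and it enumerates all coalitions exactly, which is only feasible for a handful of subsets.&lt;/p&gt;&#xA;&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;# Illustrative sketch of Shapley values over training-data subsets:
# each subset is a player, and its value is its contribution to the
# prediction for one test point. Exact enumeration over coalitions, so
# only feasible for a handful of subsets; data and model are stand-ins.
from itertools import combinations
from math import factorial
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=300)
x_test = rng.normal(size=(1, 5))

subsets = np.array_split(np.arange(len(X)), 3)  # stand-in chronological blocks

def value(coalition):
    # Prediction when the model is trained only on the coalition rows;
    # an empty coalition falls back to the global mean of y.
    if not coalition:
        return float(np.mean(y))
    rows = np.concatenate([subsets[i] for i in coalition])
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[rows], y[rows])
    return float(model.predict(x_test)[0])

n = len(subsets)
for i in range(n):
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for size in range(n):
        for coal in combinations(others, size):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            phi += weight * (value(list(coal) + [i]) - value(list(coal)))
    print(f'Shapley value of training subset {i}: {phi:.3f}')
&lt;/code&gt;&lt;/pre&gt;&#xA;</description>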
    </item>
    <item>
      <title>Counterfactual Explanations Help Identify Sources of Bias</title>
      <link>https://xai.today/posts/counterfactual-explanations-help-identify-bias/</link>
      <pubDate>Sun, 08 Mar 2020 00:00:00 +0000</pubDate>
      <guid>https://xai.today/posts/counterfactual-explanations-help-identify-bias/</guid>
      <description>&lt;p&gt;By 2020, the topic of eXplainable Artificial Intelligence (XAI) has become quite mainstream. One important development is counterfactual explanations, which (among other benefits) can help to identify and reduce bias in machine learning models. Counterfactual explanations provide insights by showing how minimal changes in input features can alter model predictions. This approach has been crucial in exposing biased behavior, especially in sensitive applications like credit scoring or hiring. By identifying how protected attributes (e.g., gender or race) affect outcomes, practitioners can better address and mitigate unfair biases in AI systems (Verma et al., 2020).&lt;/p&gt;&#xA;&lt;p&gt;&lt;em&gt;Reference: Verma, S., &amp;amp; Rubin, J. (2020). Fairness Definitions Explained. Proceedings of the 2020 ACM/IEEE International Workshop on Software Fairness.&lt;/em&gt;&lt;/p&gt;&#xA;</description>
    </item>
    <item>
      <title>Faithful and Customizable Explanations of Black Box Models</title>
      <link>https://xai.today/posts/faithful-customizable-explanations/</link>
      <pubDate>Sun, 05 Jan 2020 00:00:00 +0000</pubDate>
      <guid>https://xai.today/posts/faithful-customizable-explanations/</guid>
      <description>&lt;p&gt;The authors of &amp;ldquo;&lt;a href=&#34;https://dl.acm.org/doi/10.1145/3306618.3314229&#34;&gt;Faithful and Customizable Explanations of Black Box Models&lt;/a&gt;&amp;rdquo; (MUSE) share a common goal with my own research: addressing the challenge of making machine learning models interpretable. Both emphasize the importance of transparency in decision-making, particularly in scenarios where human trust and understanding are critical, such as healthcare, judicial decisions, and financial assessments. Both they and I see decision rule structures as the ideal format for explaining model behaviour.&lt;/p&gt;&#xA;&lt;p&gt;MUSE uses a two-level decision set framework, which combines subspace descriptors and decision logic to generate explanations for different regions of the feature space. This is useful for zooming in on specific features and observation subsets of interest. Just like my own research, this is a highly user-centric approach, emphasising a human-in-the-loop process of expert review of model decisions. My method differs in that it facilitates a detailed review of individual cases, potentially allowing the expert user to respond to individuals seeking some kind of review of, or redress for, an automated decision. In essence, this is a response to the “computer says no” problem. The explanations are tailored to specific needs or contexts.&lt;/p&gt;&#xA;&lt;p&gt;This focus on end-user interaction reflects a broader effort in both frameworks to build trust in machine learning outputs by providing meaningful insights. Despite these similarities, the research ideas diverge in significant ways. MUSE has a broader scope, offering global explanations as well as targeted insights into specific subspaces of the model&amp;rsquo;s behaviour. It is designed to be model-agnostic, meaning it can work with any type of predictive system. My research has a specific focus on Decision Tree ensembles (Random Forest and Boosting methods), explaining how such a classifier reached a decision for a particular data point, emphasising precision and counterfactual reasoning.&lt;/p&gt;&#xA;&lt;p&gt;The methodologies also differ. MUSE employs optimization techniques to create compact and interpretable decision sets that balance fidelity, unambiguity, and interpretability. My approach, in contrast, extracts decision paths from random forests using frequent pattern mining, constructing rules that highlight the most influential attributes in a model&amp;rsquo;s classification. These distinct methods reflect their differing objectives: MUSE aims to provide a comprehensive view of a model&amp;rsquo;s behaviour, while I seek to zero in on individual classifications with a high degree of local accuracy.&lt;/p&gt;&#xA;&lt;p&gt;Together, these research approaches represent two sides of the same coin: one offering a high-level overview and the other delivering precise, localised explanations. There is a lot of scope for combining the two methods in a collaborative framework for holistic explanations.&lt;/p&gt;&#xA;</description>
    </item>
  </channel>
</rss>
