The Impact of GHG Emissions on Human Health and the Environment Using XAI
Stanley Ziweritin1, David Waheed Idowu2
1S. Ziweritin, Department of Estate Management and Valuation, Akanu Ibiam Federal Polytechnic, Unwana-Afikpo, Nigeria.
2D. W. Idowu, Department of Computer Science, University of Port Harcourt, Nigeria.
Manuscript received on 20 July 2024 | Revised Manuscript received on 26 July 2024 | Manuscript Accepted on 15 September 2024 | Manuscript published on 30 September 2024 | PP: 7-14 | Volume-13 Issue-3, September 2024 | Retrieval Number: 100.1/ijrte.C814013030924 | DOI: 10.35940/ijrte.C8140.13030924
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: Explainable AI (XAI) is a revolutionary concept in artificial intelligence that helps professionals build human trust in the decisions of learning models. Greenhouse gases (GHGs) accumulating in the atmosphere are driving weather to become more irregular and intense, endangering human health and damaging crops and plants. XAI techniques remain popular, yet many cannot disclose system behavior in a way that supports analysis. Predicting GHG emissions and their impact on human health is an important aspect of monitoring the emission rates of industries and other sectors. However, only a handful of investigations have examined the collective effect of sectors such as construction and transportation on CO2 emission patterns. This research addresses that knowledge gap by offering an explainable machine learning model. The framework employs a random forest classifier combined with two explainable AI methodologies to give insight into the viability of the proposed learning model. The goal is to use XAI to determine the impact of GHG emissions on human health and the environment. A quantitative survey was carried out to investigate how predictions of GHG emission rates can be made more explainable. We built a random forest model trained on GHG emission data and explained it with the SHAP and LIME techniques, which provided local and global explanations ordering samples by similarity, output value, and original sample ranking. The model achieved high accuracy and, with XAI, enhanced interpretability, allowing decision makers to comprehend what the AI system actually tells them. LIME exceeded SHAP in terms of comprehensibility and satisfaction, while SHAP surpassed LIME in trustworthiness.
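The pipeline the abstract describes (a random forest classifier explained globally with SHAP and locally with LIME) can be illustrated with a minimal sketch. The feature names, synthetic data, and risk label below are illustrative assumptions, not the paper's actual dataset or code.

```python
# Minimal sketch of the abstract's pipeline: random forest + SHAP (global)
# + LIME (local). Features, data, and labels are hypothetical stand-ins
# for the GHG emission dataset used in the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import shap
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(42)

# Hypothetical sector-level emission features.
feature_names = ["co2_ppm", "ch4_ppb", "transport_index", "construction_index"]
X = rng.normal(size=(500, len(feature_names)))
# Illustrative label: "high health risk" when combined emissions are large.
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2] > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# Global explanation: SHAP values from a tree explainer.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_test)
# Depending on the shap version, sv is a list (one array per class)
# or a 3-D array (samples, features, classes); take the positive class.
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]
# Mean absolute SHAP value per feature approximates global importance.
for name, imp in zip(feature_names, np.abs(sv_pos).mean(axis=0)):
    print(f"{name}: {imp:.3f}")

# Local explanation: LIME fits a surrogate around one test instance.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["low risk", "high risk"], mode="classification")
exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=len(feature_names))
print(exp.as_list())
```

TreeExplainer computes SHAP values efficiently for tree ensembles, while LIME perturbs a single instance and fits a local surrogate; together they supply the global and local views the study compares.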
Keywords: LIME, SHAP, Random Forest, Explainable AI, Interpretability
Scope of the Article: Computer Science and Applications