{"id":3329,"date":"2023-03-01T07:00:00","date_gmt":"2023-03-01T13:00:00","guid":{"rendered":"https:\/\/baylor.ai\/?p=3329"},"modified":"2023-11-04T20:00:34","modified_gmt":"2023-11-05T01:00:34","slug":"editorial-designing-ai-using-a-human-centered-approach-explainability-and-accuracy-toward-trustworthiness","status":"publish","type":"post","link":"https:\/\/lab.rivas.ai\/?p=3329","title":{"rendered":"(Editorial) Designing AI Using a Human-Centered Approach: Explainability and Accuracy Toward Trustworthiness"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"585\" src=\"https:\/\/baylor.ai\/wp-content\/uploads\/2023\/11\/DALL\u00b7E-2023-11-04-19.47.45-A-collage-in-Baylor-University-colors-of-green-and-gold.-The-central-element-is-a-human-brain-intertwined-with-circuitry-and-AI-symbols.-This-is-juxta-1024x585.png\" alt=\"\" class=\"wp-image-3331\" srcset=\"https:\/\/lab.rivas.ai\/wp-content\/uploads\/2023\/11\/DALL\u00b7E-2023-11-04-19.47.45-A-collage-in-Baylor-University-colors-of-green-and-gold.-The-central-element-is-a-human-brain-intertwined-with-circuitry-and-AI-symbols.-This-is-juxta-1024x585.png 1024w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2023\/11\/DALL\u00b7E-2023-11-04-19.47.45-A-collage-in-Baylor-University-colors-of-green-and-gold.-The-central-element-is-a-human-brain-intertwined-with-circuitry-and-AI-symbols.-This-is-juxta-300x171.png 300w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2023\/11\/DALL\u00b7E-2023-11-04-19.47.45-A-collage-in-Baylor-University-colors-of-green-and-gold.-The-central-element-is-a-human-brain-intertwined-with-circuitry-and-AI-symbols.-This-is-juxta-768x439.png 768w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2023\/11\/DALL\u00b7E-2023-11-04-19.47.45-A-collage-in-Baylor-University-colors-of-green-and-gold.-The-central-element-is-a-human-brain-intertwined-with-circuitry-and-AI-symbols.-This-is-juxta-1536x878.png 1536w, 
https:\/\/lab.rivas.ai\/wp-content\/uploads\/2023\/11\/DALL\u00b7E-2023-11-04-19.47.45-A-collage-in-Baylor-University-colors-of-green-and-gold.-The-central-element-is-a-human-brain-intertwined-with-circuitry-and-AI-symbols.-This-is-juxta-863x493.png 863w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2023\/11\/DALL\u00b7E-2023-11-04-19.47.45-A-collage-in-Baylor-University-colors-of-green-and-gold.-The-central-element-is-a-human-brain-intertwined-with-circuitry-and-AI-symbols.-This-is-juxta-189x108.png 189w, https:\/\/lab.rivas.ai\/wp-content\/uploads\/2023\/11\/DALL\u00b7E-2023-11-04-19.47.45-A-collage-in-Baylor-University-colors-of-green-and-gold.-The-central-element-is-a-human-brain-intertwined-with-circuitry-and-AI-symbols.-This-is-juxta.png 1792w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>In the rapidly evolving world of artificial intelligence (AI), the IEEE Transactions on Technology and Society recently published a special issue that delves into the heart of AI&#8217;s most pressing challenges and opportunities. This editorial piece has garnered widespread attention. <a href=\"https:\/\/ieeexplore.ieee.org\/document\/10086944\">Read the full editorial here<\/a>.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><\/p>\n<cite>J. R. Schoenherr, R. Abbas, K. Michael, P. Rivas and T. D. Anderson, &#8220;Designing AI Using a Human-Centered Approach: Explainability and Accuracy Toward Trustworthiness,&#8221; in IEEE Transactions on Technology and Society, vol. 4, no. 1, pp. 
9-23, March 2023, doi: <a href=\"https:\/\/ieeexplore.ieee.org\/document\/10086944\" data-type=\"link\" data-id=\"https:\/\/ieeexplore.ieee.org\/document\/10086944\">10.1109\/TTS.2023.3257627<\/a>.<\/cite><\/blockquote>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The Essence of the Special Issue<\/strong><\/h3>\n\n\n\n<p>This special issue comprises eight thought-provoking papers that collectively address the multifaceted nature of AI. The journey begins with a reconceptualization of AI, leading to discussions on the pivotal role of explainability and accuracy in AI systems. The papers emphasize that designing AI with a human-centered approach while recognizing the importance of ethics is not a zero-sum game.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Key Highlights<\/strong><\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Reconceptualizing AI:<\/strong> Clarke, a Fellow of the Australian Computer Society, revisits the original conception of AI and proposes a fresh perspective, emphasizing the synergy between human and artifact capabilities.<\/li>\n\n\n\n<li><strong>The Challenge of Explainability:<\/strong> Adamson, a Past President of the IEEE Society on Social Implications of Technology, delves into the complexities of AI systems, highlighting the concealed nature of many AI algorithms and the need for post-hoc reasoning.<\/li>\n\n\n\n<li><strong>Trustworthy AI:<\/strong> Petkovic underscores that trustworthy AI requires both accuracy and explainability. 
He emphasizes the importance of explainable AI (XAI) in ensuring user trust, especially in high-stakes applications.<\/li>\n\n\n\n<li><strong>Bias in AI:<\/strong> A team of researchers, including Nagpal, Singh, Singh, Vatsa, and Ratha, evaluates the behavior of face recognition models, shedding light on potential biases related to age and ethnicity.<\/li>\n\n\n\n<li><strong>AI in Healthcare:<\/strong> Dhar, Siuly, Borra, and Sherratt discuss the challenges and opportunities of deep learning in the healthcare domain, emphasizing the ethical considerations surrounding medical data.<\/li>\n\n\n\n<li><strong>AI in Education:<\/strong> Tham and Verhulsdonck introduce the &#8220;stack&#8221; analogy for designing ubiquitous learning, emphasizing the importance of a human-centered approach in smart city contexts.<\/li>\n\n\n\n<li><strong>Ethics in Computer Science Education:<\/strong> Peterson, Ferreira, and Vardi discuss the role of ethics in computer science education, emphasizing the need for emotional engagement to understand the potential impacts of technology.<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>A Call to Action<\/strong><\/h3>\n\n\n\n<p>As guest editors deeply engaged in human-centric approaches to AI, we challenge all stakeholders in the AI design process to consider the multidimensionality of AI. It&#8217;s crucial to move beyond the trade-offs mindset and prioritize accuracy and explainability. If a decision made by an AI system cannot be explained, especially in critical sectors like finance and healthcare, should it even be proposed?<\/p>\n\n\n\n<p>This special issue is a testament to the importance of ethics, accuracy, explainability, and trustworthiness in AI. It underscores the need for a human-centered approach to designing AI systems that benefit society. 
<em>For a deeper understanding of each paper and to explore the insights shared by the authors, <a href=\"https:\/\/ieeexplore.ieee.org\/xpl\/tocresult.jsp?isnumber=10086685&amp;punumber=8566059\">check out the full special issue in IEEE Transactions on Technology and Society<\/a>.<\/em><\/p>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the rapidly evolving world of artificial intelligence (AI), the IEEE Transactions on Technology and Society recently published a special issue that delves into the heart of AI&#8217;s most pressing challenges and opportunities. This editorial piece has garnered widespread attention. Read the full editorial here. J. R. Schoenherr, R. Abbas, K. Michael, P. Rivas and &hellip; <a href=\"https:\/\/lab.rivas.ai\/?p=3329\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">(Editorial) Designing AI Using a Human-Centered Approach: Explainability and Accuracy Toward Trustworthiness<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1],"tags":[3,5],"class_list":["post-3329","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-ai-ethics-standards","tag-ai-orthopraxy"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/3329","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"h
ttps:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3329"}],"version-history":[{"count":5,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/3329\/revisions"}],"predecessor-version":[{"id":3335,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=\/wp\/v2\/posts\/3329\/revisions\/3335"}],"wp:attachment":[{"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3329"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3329"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lab.rivas.ai\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3329"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}