Explainable Artificial Intelligence [electronic resource] : Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part I / edited by Luca Longo, Sebastian Lapuschkin, Christin Seifert.

Contributors: Longo, Luca [editor] | Lapuschkin, Sebastian [editor] | Seifert, Christin [editor] | SpringerLink (Online service)
Material type: Text
Series: Communications in Computer and Information Science ; 2153
Publisher: Cham : Springer Nature Switzerland : Imprint: Springer, 2024
Edition: 1st ed. 2024
Description: XVII, 494 p. 143 illus., 137 illus. in color. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783031637872
Subject(s): Artificial intelligence | Natural language processing (Computer science) | Application software | Computer networks | Artificial Intelligence | Natural Language Processing (NLP) | Computer and Information Systems Applications | Computer Communication Networks
Additional physical formats: Printed edition: No title; Printed edition: No title
DDC classification: 006.3
LoC classification: Q334-342 ; TA347.A78
Online resources: E-book | Full text
Contents:
-- Intrinsically interpretable XAI and concept-based global explainability.
-- Seeking Interpretability and Explainability in Binary Activated Neural Networks.
-- Prototype-based Interpretable Breast Cancer Prediction Models: Analysis and Challenges.
-- Evaluating the Explainability of Attributes and Prototypes for a Medical Classification Model.
-- Revisiting FunnyBirds evaluation framework for prototypical parts networks.
-- CoProNN: Concept-based Prototypical Nearest Neighbors for Explaining Vision Models.
-- Unveiling the Anatomy of Adversarial Attacks: Concept-based XAI Dissection of CNNs.
-- AutoCL: AutoML for Concept Learning.
-- Locally Testing Model Detections for Semantic Global Concepts.
-- Knowledge graphs for empirical concept retrieval.
-- Global Concept Explanations for Graphs by Contrastive Learning.
-- Generative explainable AI and verifiability.
-- Augmenting XAI with LLMs: A Case Study in Banking Marketing Recommendation.
-- Generative Inpainting for Shapley-Value-Based Anomaly Explanation.
-- Challenges and Opportunities in Text Generation Explainability.
-- NoNE Found: Explaining the Output of Sequence-to-Sequence Models when No Named Entity is Recognized.
-- Notion, metrics, evaluation and benchmarking for XAI.
-- Benchmarking Trust: A Metric for Trustworthy Machine Learning.
-- Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI.
-- Conditional Calibrated Explanations: Finding a Path between Bias and Uncertainty.
-- Meta-evaluating stability measures: MAX-Sensitivity & AVG-Sensitivity.
-- Xpression: A unifying metric to evaluate Explainability and Compression of AI models.
-- Evaluating Neighbor Explainability for Graph Neural Networks.
-- A Fresh Look at Sanity Checks for Saliency Maps.
-- Explainability, Quantified: Benchmarking XAI techniques.
-- BEExAI: Benchmark to Evaluate Explainable AI.
-- Associative Interpretability of Hidden Semantics with Contrastiveness Operators in Face Classification tasks.
In: Springer Nature eBook. Summary: This four-volume set constitutes the refereed proceedings of the Second World Conference on Explainable Artificial Intelligence, xAI 2024, held in Valletta, Malta, during July 17-19, 2024. The 95 full papers presented were carefully reviewed and selected from 204 submissions. The conference papers are organized in topical sections on: Part I - intrinsically interpretable XAI and concept-based global explainability; generative explainable AI and verifiability; notion, metrics, evaluation and benchmarking for XAI. Part II - XAI for graphs and computer vision; logic, reasoning, and rule-based explainable AI; model-agnostic and statistical methods for eXplainable AI. Part III - counterfactual explanations and causality for eXplainable AI; fairness, trust, privacy, security, accountability and actionability in eXplainable AI. Part IV - explainable AI in healthcare and computational neuroscience; explainable AI for improved human-computer interaction and software engineering for explainability; applications of explainable artificial intelligence.
Holdings
Item type: Electronic Book
Current library: Electronic Library
Collection: Electronic Books Collection
Copy number: 1
Status: Not for loan

UABC ; Perpetual access
