The item is made available for download at the end of the ordering process.

Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

Available immediately | Delivery time: available immediately
ISBN-13: 9783030289546
Published: 2019
Pages: 439
Author: Wojciech Samek
Series: Lecture Notes in Artificial Intelligence (Lecture Notes in Computer Science), vol. 11700
eBook type: PDF
eBook format: EPUB
Copy protection: PDF watermark
Language: English
Description:

The development of "intelligent" systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for a broader adoption of AI technology is the inherent risks that come with giving up human control and oversight to "intelligent" machines. Forsensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI, focused on exposing complex AI models to humans in a systematic and interpretable manner.
Contents:
Towards Explainable Artificial Intelligence
Transparency: Motivations and Challenges
Interpretability in Intelligent Systems: A New Concept?
Understanding Neural Networks via Feature Visualization: A Survey
Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation
Unsupervised Discrete Representation Learning
Towards Reverse-Engineering Black-Box Neural Networks
Explanations for Attributing Deep Neural Network Predictions
Gradient-Based Attribution Methods
Layer-Wise Relevance Propagation: An Overview
Explaining and Interpreting LSTMs
Comparing the Interpretability of Deep Networks via Network Dissection
Gradient-Based vs. Propagation-Based Explanations: An Axiomatic Comparison
The (Un)reliability of Saliency Methods
Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation
Understanding Patch-Based Learning of Video Data by Explaining Predictions
Quantum-Chemical Insights from Interpretable Atomistic Neural Networks
Interpretable Deep Learning in Drug Discovery
Neural Hydrology: Interpreting LSTMs in Hydrology
Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI
Current Advances in Neural Decoding
Software and Application Patterns for Explanation Methods
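
To give a flavor of the gradient-based attribution methods covered in the book, the following is a minimal sketch (not code from the book) of a plain gradient saliency map, assuming a PyTorch classifier; the names model, x and saliency_map are hypothetical and chosen only for illustration.

# Minimal sketch of gradient-based attribution (saliency), assuming a
# PyTorch classifier; all names here are illustrative, not from the book.
import torch
import torch.nn as nn

def saliency_map(model: nn.Module, x: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return |d score_target / d input| as a per-feature relevance estimate."""
    model.eval()
    x = x.clone().requires_grad_(True)       # track gradients w.r.t. the input
    score = model(x)[:, target_class].sum()  # logit of the class to be explained
    score.backward()                          # backpropagate to the input
    return x.grad.detach().abs()              # gradient magnitude as attribution

# Hypothetical usage with a toy model and random data:
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.randn(1, 3, 32, 32)
relevance = saliency_map(model, x, target_class=3)
print(relevance.shape)  # torch.Size([1, 3, 32, 32])

More elaborate techniques discussed in the book, such as Layer-Wise Relevance Propagation, replace the raw gradient with propagation rules designed to produce more faithful relevance scores.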

