MAKE-xAI 2020

CD-MAKE 2020 Workshop on explainable Artificial Intelligence 

The workshop page can be found here: https://human-centered.ai/explainable-ai-2020/

GOAL

In this cross-disciplinary workshop we aim to bring together international experts from across domains who are interested in making machine decisions transparent, interpretable, reproducible, replicable, re-traceable, re-enactive, comprehensible and explainable, towards ethical and responsible AI/machine learning.

All submissions will be peer-reviewed by three members of our international scientific committee – see the author instructions here: https://cd-make.net/authors-area/submission

Accepted papers will be presented at the workshop orally or as a poster and published in the IFIP CD-MAKE volume of Springer Lecture Notes in Computer Science (LNCS); see LNCS 11015 as an example.

Additionally, there is the opportunity to submit to our thematic collection “Explainable AI in Medical Informatics and Decision Making” in Springer/Nature BMC Medical Informatics and Decision Making (MIDM), SCI impact factor 2.134:
https://human-centered.ai/special-issue-explainable-ai-medical-informatics-decision-making/

TOPICS:

In line with the general theme of the CD-MAKE conference of augmenting human intelligence with artificial intelligence – science is to test crazy ideas, engineering is to bring these ideas into business – we foster cross-disciplinary and interdisciplinary work in order to bring together experts from different fields (e.g. computer science, psychology, sociology, philosophy, law, business, …) who would otherwise possibly not meet. This cross-domain integration and appraisal of different fields of science and industry shall provide an atmosphere that fosters different perspectives and opinions; it will offer a platform for novel, crazy ideas and a fresh look at the methodologies to put these ideas into business.

Topics include but are not limited to (alphabetically – not prioritized):

  • Abstraction of human explanations (“How can xAI learn from human explanation strategies?”)
  • Acceptance (“How to ensure acceptance of AI/ML among end users?”)
  • Accountability and responsibility (“Who is to blame if something goes wrong?”)
  • Action Influence Graphs
  • Active machine learning algorithm design for interpretability purposes
  • Adversarial attack detection, explanation and defense (“How can we interpret adversarial examples?”)
  • Adaptive personal xAI systems (“To whom, when, how to provide explanations?”)
  • Adaptable explanation interfaces (“How can we adapt xAI-interfaces to the needs, demands, requirements of end-users and domain experts?”)
  • Affective computing for successful human-AI interaction and human-robot interaction
  • Argumentation theories of explanations
  • Artificial advice givers
  • Bayesian rule lists
  • Bayesian modeling and optimization (“How to design efficient methods for learning interpretable models?”)
  • Bias and fairness in explainable AI (“How to avoid bias in machine learning applications?”)
  • Bridging the gap between humans and machines (concepts, methods, tools, …)
  • Causal learning, causal discovery, causal reasoning, causal explanations, and causal inference
  • Causality and causability research (“measuring understanding”, benchmarking, evaluation of interpretable systems)
  • Cognitive issues of explanation and understanding (“understanding understanding”)
  • Combination of statistical learning approaches with large knowledge repositories (ontologies, terminologies)
  • Combination of deep learning approaches with traditional AI approaches
  • Comparison of human intelligence vs. artificial intelligence (HCI-KDD)
  • Computational behavioural science (“How are people thinking, judging, making decisions – and explaining it?”)
  • Constraints-based explanations
  • Counterfactual explanations (“What did not happen?”, “How can we provide counterfactual explanations?”)
  • Contrastive explanation methods (CEM), e.g. in criminology or medical diagnosis
  • Cyber security, cyber defense and malicious use of adversarial examples
  • Data dredging explainability and causal inference
  • Decision making and decision support systems (“Is a human-like decision good enough?”)
  • Dialogue systems for enhanced human-AI interaction
  • Emotional intelligence (“Emotion AI”) and emotional UI
  • Ethical aspects of AI in general and human-AI interaction in particular
  • Evaluation criteria
  • Explanation agents and recommender systems
  • Explanatory user interfaces and Human-Computer Interaction (HCI) for explainable AI
  • Explainable reinforcement learning
  • Explainable and verifiable activity recognition
  • Explaining agent behaviour (“How to know if the agent is going to make a mistake and when?”)
  • Explaining robot behaviour (“Why did you take action x in state s?”)
  • Fairness, accountability and trust (“How to ensure trust in AI?”)
  • Frameworks, architectures, algorithms and tools to support post-hoc and ante-hoc explainability
  • Frameworks for reasoning about causality
  • Gradient-based interpretability to understand data sensitivity (a minimal code sketch follows after this list)
  • Graphical causal inference and graphical models for explanation and causality
  • Ground truth
  • Group recommender systems
  • Human-AI interaction and intelligent interfaces
  • Human-AI teaming for ensuring trustworthy AI systems
  • Human-centered AI
  • Human-in-the-loop learning approaches, methodologies, tools and systems
  • Human rights vs. robot rights
  • Implicit knowledge elicitation
  • Industrial applications of xAI, e.g. in medicine, autonomous driving, production, finance, ambient assisted living, etc.
  • Integration of deep learning approaches with grammars of graphical models
  • Interactive machine learning with a human-in-the-loop
  • Interactive machine learning with (many) humans-in-the-loop (crowd intelligence)
  • Interpretability in ranking algorithms (“How to explain ranking algorithms, e.g. patient ranking in health, in human-interpretable ways?”)
  • Interpretability in reinforcement learning
  • Interpretable representation learning (“How to make sense of data assigned to similar representations?”)
  • Kandinsky Patterns experiments and extensions
  • Legal aspects of AI/ML (“Who is to blame if an error occurs?”)
  • Metrics for evaluation of the quality of explanations
  • Misinformation in Social Media, Ground truth evaluation and explanation
  • Model explainability, quality and provenance
  • Moral principles and moral dilemmas of current and future AI
  • Natural Language Argumentation interfaces for explanation generation
  • Natural Language generation for explanatory models
  • Non-IID learning models, algorithms, analytics, recommenders
  • Novel intelligent future user interfaces (e.g. affective mobile interfaces)
  • Novel methods, algorithms, tools, procedures for supporting explainability in the AI/ML pipeline
  • Personalized xAI
  • Philosophical approaches to explainability and theories of mind (“When has enough been explained? Is there a degree of saturation?”)
  • Policy explanations to humans (“How to explain why the next step is the best action to select?”)
  • Proof-of-concepts and demonstrators of how to integrate explainable AI into real-world workflows and industrial processes
  • Privacy, surveillance, control and agency
  • Psychology of human concept learning and transfer to machine learning
  • Python for nerds (Python tricks of the trade – relevant for explainable AI)
  • Quality of explanations and how to measure quality of explanations
  • Rendering of reasoning processes
  • Reproducibility, replicability, retraceability, reenactivity
  • Self-explanatory agents and decision support systems
  • Similarity measures for xAI
  • Social implications of AI (“What does AI impact?”), e.g. labour trends, human-human interaction, machine-machine interaction
  • Soft Decision Trees (SDT)
  • Spartan approaches to explanation (“What is the simplest explanation?”)
  • Structural causal models (SCM)
  • Theoretical approaches of explainability (“What makes a good explanation?”)
  • Theories of explainable/interpretable models
  • Tools for model understanding (diagnostic, debugging, introspection, visualization, …)
  • Transparent reasoning
  • Trustworthy human-AI teaming under uncertainties
  • Understanding understanding
  • Understanding Markov decision processes and partially observable Markov decision processes
  • Usability of Human-AI interfaces
  • Visualizing learned representations
  • Web- and mobile-based cooperative intelligent information systems and tools
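
To make one of the topics above more concrete, the bullet on gradient-based interpretability refers to techniques that use the gradient of a model's prediction with respect to its input as a simple sensitivity (saliency) score. Below is a minimal sketch of this idea for a hypothetical logistic-regression model in Python/NumPy; the weights, bias and input are made-up placeholders for illustration only, not a reference implementation for workshop submissions.

# Minimal sketch of gradient-based interpretability (saliency) for a
# hypothetical logistic-regression model. Weights and inputs are made up
# for illustration; a real xAI pipeline would use a trained model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    # Probability that input x belongs to the positive class.
    return sigmoid(np.dot(w, x) + b)

def input_gradient(x, w, b):
    # d p / d x for logistic regression: p * (1 - p) * w.
    # The magnitude of each component indicates how sensitive the
    # prediction is to the corresponding input feature.
    p = predict(x, w, b)
    return p * (1.0 - p) * w

rng = np.random.default_rng(0)
w = rng.normal(size=4)   # hypothetical "learned" weights
b = 0.1                  # hypothetical bias
x = rng.normal(size=4)   # one input example
saliency = np.abs(input_gradient(x, w, b))
print("prediction:", predict(x, w, b))
print("features ranked by sensitivity:", np.argsort(saliency)[::-1])

For deep models the same principle applies, with the input gradient obtained via backpropagation rather than in closed form.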

Workshop Organizers:

Randy GOEBEL, University of Alberta, Edmonton, CA (workshop co-chair)
Andreas HOLZINGER, University of Alberta, Edmonton, CA, and Medical University Graz, AT (workshop co-chair)
Peter KIESEBERG, University of Applied Sciences St. Pölten, AT
Freddy LECUE, Thales, Montreal, CA, and INRIA, Sophia Antipolis, FR
Luca LONGO, Knowledge & Data Engineering Group, Trinity College, Dublin, IE

Please send inquiries directly to a.holzinger AT human-centered.ai

Program Committee 2020:

Jose Maria ALONSO, CiTiUS – University of Santiago de Compostela, Spain

Tarek R. BESOLD, Telefonica Innovation Alpha, Barcelona, Spain

Guido BOLOGNA, Computer Vision and Multimedia Lab, Université de Genève, Geneva, Switzerland

Federico CABITZA, Università degli Studi di Milano-Bicocca, DISCO, Milano, Italy

Ajay CHANDER, Computer Science Department, Stanford University and Fujitsu Labs of America, United States

Freddy LECUE, Accenture Technology Labs, Dublin, IE and INRIA Sophia Antipolis, France

Daniele MAGAZZENI, Trusted Autonomous Systems Hub, King’s College London, United Kingdom

Tim MILLER, School of Computing and Information Systems, The University of Melbourne, Australia

Huamin QU, Human-Computer Interaction Group & HKUST VIS, Hong Kong University of Science & Technology, China

Andrea VEDALDI, Visual Geometry Group, University of Oxford, United Kingdom

Jianlong ZHOU, Faculty of Engineering and Information Technology, University of Technology Sydney, Australia

Christian BAUCKHAGE, Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS), Sankt Augustin, and University of Bonn, Germany

Hani HAGRAS, Computational Intelligence Centre, School of Computer Science & Electronic Engineering, University of Essex, United Kingdom

Barbara HAMMER, Machine Learning Group, Center of Excellence & Faculty of Technology, Bielefeld University, Germany

Brian Y. LIM, Department of Computer Science, National University of Singapore, Singapore

Luca LONGO, School of Computer Science, Technological University Dublin, IE

Vaishak BELLE, Belle Lab, Centre for Intelligent Systems and their Applications, School of Informatics, University of Edinburgh, UK

Benoît FRÉNAY, Université de Namur, BE

Enrico BERTINI, New York University, Tandon School of Engineering, US

Aldo FAISAL, Department of Computing, Brain and Behaviour Lab, Imperial College London, UK

Bryce GOODMAN, Oxford Internet Institute and San Francisco Bay Area, CA, US

Shujun LI, Cyber Security Group, University of Kent, Canterbury, UK

Fabio MERCORIO, University of Milano-Bicocca, CRISP Research Centre, Milano, IT

Brian RUTTENBERG, Charles River Analytics, Cambridge, MA, US

Wojciech SAMEK, Machine Learning Group, Fraunhofer Heinrich Hertz Institute, Berlin, DE

Gerhard SCHURZ, Düsseldorf Center for Logic and Philosophy of Science, University Düsseldorf, DE

Janusz WOJTUSIAK, Machine Learning and Inference Lab, George Mason University, Fairfax, US

Alison SMITH, University of Maryland, MD, US

Mohan SRIDHARAN, University of Auckland, NZ

Simone STUMPF, City, University of London, UK

Ramya Malur SRINIVASAN, Fujitsu Labs of America, Sunnyvale, CA, US