Special Session on Explainability of Learning Machines
Anchorage, Alaska
May 2017 (exact day TBA)
** PREPARE YOUR PAPERS NOW: submission deadline November 15, 2016 **
(submit through the IJCNN website and select our special session)
Research progress in machine learning and pattern recognition has led to a variety of modeling techniques with (almost) human-like performance on a wide range of tasks. A clear example is neural networks, whose deep variants dominate the arenas of computer vision and natural language processing, among other fields. Although these models have obtained astounding results on a variety of tasks (e.g., face recognition with FaceNet [1]), they are limited in their explainability and interpretability. That is, in general, users cannot say much about how such models reach their decisions or what they have actually learned.
This, in turn, raises multiple questions about decisions (why one decision is preferred over another, how confident the learning machine is in its decision, and the series of steps that led the learning machine to a given decision) and about model structure (why a particular parameter configuration was chosen, what the parameters mean, how a user could interpret the learned model, and what additional knowledge would be required from the user/world to improve the model). Hence, while good performance is a critical requirement for learning machines, explainability/interpretability capabilities are highly needed if one wants to take learning machines to the next step, and in particular to include them in decision support systems involving human supervision (for instance, in medicine or security). Only recently have there been efforts from the community in this direction; see, e.g., [2,3,4] (there is even an open call on this topic from DARPA, see [5]). We therefore think it is the perfect time to organize a special session around this relevant topic.
We are organizing a special session on explainable machine learning. This session aims to compile the latest efforts and research advances from the scientific community in enhancing traditional machine learning algorithms with explainability capabilities at both the learning and decision stages. Likewise, the special session targets novel methodologies and algorithms that implement explanatory mechanisms.
We foresee that this special session will capture a snapshot of cutting-edge research on explainable learning machines and will serve to identify priority research directions in this novel and timely research topic.
Topics
The scope of the special session comprises all aspects of explainability of learning machines, including but not limited to the following topics:
In addition, given the theme of an associated competition (currently under evaluation), we consider the following topics also relevant to the special session:
Important dates
Paper submission deadline: November 15, 2016
Conference: May 2017, Anchorage, Alaska (exact day TBA)
Please prepare and submit your paper according to the guidelines at:
http://www.ijcnn.org/paper-submission
Make sure to select the special session on Explainability of Learning Machines.
Organizers
Thanks to our IJCNN 2017 special session sponsors: Microsoft Research, ChaLearn, University of Barcelona, INAOE, Université Paris-Saclay, and more TBA. This research has been partially supported by projects TIN2012-39051 and TIN2013-43478-P.