
2 Data Science PhD positions on Explainable AI funded by the ERC grant XAI

The ERC Advanced Grant XAI, “Science & technology for the eXplanation of AI decision making”, led by Fosca Giannotti of the Italian CNR, in collaboration with the PhD program in “Data Science” at Scuola Normale Superiore in Pisa, invites applications for two fully funded three-year PhD positions for doctoral research in the rapidly growing field of “Explainable Machine Learning for Transparent and Trustworthy Artificial Intelligence”.

The PhD positions in “Data Science” starting in the academic year 2019-2020 have been expanded with two additional three-year grants funded by the EU ERC project "Science and technology for the eXplanation of AI decision making" (XAI). The successful candidates will receive a research grant corresponding to a gross salary of approximately 25,000 Euro per year.

Info on the application procedure: https://www.sns.it/en/admissions/phd/how-to-apply-for-the-phd-courses
Online application at: https://serse.sns.it
Deadline for applications: 11:59 pm (CEST) on Thursday, 29 August 2019.

More information on the Data Science PhD, a multi-disciplinary, multi-institution PhD program jointly offered by Scuola Normale Superiore, the University of Pisa, CNR (the Italian National Research Council), Scuola Superiore Sant’Anna, and Scuola IMT Lucca, can be found here:

https://www.sns.it/en/data-science
https://datasciencephd.eu/

For more information on the application procedure, contact: phd@sns.it
For more information on the research topic, contact the Principal Investigator of the XAI ERC project, Fosca Giannotti, at fosca.giannotti@isti.cnr.it


EU ERC Advanced Grant 2019 - "Science and technology for the eXplanation of AI decision making" (XAI) – Principal investigator: Fosca Giannotti, ISTI-CNR, Pisa


Black-box AI systems for automated decision making and classification, often based on machine learning over (big) data, map a user's features into a class or a score, or classify images and text, without exposing the reasons for their decisions. This is problematic not only for the lack of transparency, but also because the algorithms may inherit biases from the training data, such as human prejudices and collection artifacts, which can lead to unfair or wrong decisions.
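To make the problem concrete, here is a minimal, illustrative sketch in Python (not the project's own method): an accurate "black box" classifier returns a label with no accompanying reasons; a shallow decision tree trained to mimic its outputs, a post-hoc technique known as a global surrogate and covered in the survey cited below, recovers a human-readable approximation of its logic. The scikit-learn dataset and models are arbitrary choices for illustration.

```python
# Illustrative sketch only: the dataset and models are arbitrary choices,
# not the XAI project's methods.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The black box: accurate, but its prediction carries no explanation.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(black_box.predict(X[:1]))  # e.g. [0]: a label, with no reasons attached

# A global surrogate: a shallow tree fit to the black box's *predictions*,
# yielding a human-readable approximation of its decision logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=list(data.feature_names)))
```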


The XAI ERC project focuses on the urgent open challenge of constructing meaningful explanations of opaque AI/ML systems, following several research lines: devising machine learning models that are transparent by design; auditing and explaining black-box models; revealing data and algorithmic bias; learning causal relationships; and making explainable AI work in concrete domains such as healthcare, risk assessment, and justice. See the PI's recent survey for more information:

Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2018. A Survey of Methods for Explaining Black Box Models. ACM Comput. Surv. 51, 5, Article 93 (August 2018), 42 pages. DOI: https://doi.org/10.1145/3236009