Gabriele Nocco

Data Scientist
AS Roma, Italy

Areas of Expertise:
Big Data
Machine Learning and graph algorithms

Short Bio:
Gabriele Nocco holds a master’s degree in Mathematics from the University of Roma Tre. He worked for ten years in IT consulting companies on security and data-enhancement projects for the largest Italian public entities, with a particular focus on security, Big Data, Machine Learning and graph algorithms.

During this period he specialized in Machine Learning and Artificial Intelligence at Sapienza University of Rome, working with the research group of professors Uncini and Scardapane and producing several publications for international conferences.

He has also been engaged in the dissemination of Machine Learning, Data Science, Big Data and Artificial Intelligence, organizing both one-off and monthly outreach events.

He speaks at individual academic and industrial events as well as at regularly scheduled community events, and teaches Data Science topics in public schools, universities and a second-level Master’s programme at CERN.

He is the founder of the “Meetup Machine Learning / Data Science Rome” event series, now in its fifth year.

He is also the founder of the “IAML: Italian Association for Machine Learning”, which aims to be a hub for Italian Machine Learning communities.

Since 2018 he has been Head of Data at AS Roma, responsible for the Data Lake project and for all of the company’s Analytics projects (the “ABACUS” project).


Explainable Machine Learning
The course introduces the branch of machine learning research that attempts to explain the behavior of a machine learning model, which is often considered a black box.

To follow the course, the student should at least be familiar with the most common supervised and unsupervised algorithms, such as regressions, tree-based models, SVMs and neural networks. Basic programming knowledge is strongly recommended.


  • Motivation: Risks of black-box models in industry, Need for Transparency, Model Validation, Scientific Consistency, Feature Importance
  • Evaluation of Explainability: Accuracy, Coverage / Representativeness, Complexity, Human-friendliness
  • Explainability By Design
  • Algorithms: Perturbation-based methods, LIME and its variants, SHAP, GAMs, etc.
  • Exercises
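To give a flavor of the perturbation-based methods listed above, here is a minimal sketch of permutation feature importance: shuffle one feature at a time and measure how much the model's error grows. The synthetic data and the least-squares "black box" below are illustrative assumptions, not the actual course exercises.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends strongly on x0, weakly on x1, not at all on x2
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit an ordinary least-squares model; the procedure itself is model-agnostic
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(A):
    return A @ coef

baseline_mse = np.mean((y - predict(X)) ** 2)

# Permutation importance: destroying a feature's relationship with y
# should hurt the error in proportion to how much the model relies on it
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    permuted_mse = np.mean((y - predict(Xp)) ** 2)
    importances.append(permuted_mse - baseline_mse)

for j, imp in enumerate(importances):
    print(f"feature {j}: importance {imp:.3f}")
```

The ranking of the scores mirrors the coefficients used to generate the data: the strongly used feature dominates, the weak one contributes a little, and the irrelevant one scores near zero.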

The final exam will consist of practical exercises.