About

Pietro Barbiero is a computational scientist and researcher at the University of Cambridge with more than three years of experience in machine learning, neural networks, and evolutionary algorithms, with a particular focus on precision medicine. His work combines advanced AI methods with mathematical modeling to address complex challenges in healthcare and the computational sciences.

He is currently pursuing a Doctor of Philosophy (Ph.D.) in Artificial Intelligence at the University of Cambridge, where he has also worked as a Research Assistant since August 2020. He holds a Master of Engineering (MEng) in Mathematical Engineering from Politecnico di Torino, reflecting a strong foundation in quantitative disciplines.

At Cambridge, Pietro's contributions center on projects such as the "Digital Patient", an initiative that aims to build a "digital twin" of a patient by combining AI techniques with mathematical modeling into a framework for predicting and monitoring physiological conditions. Such a framework could advance personalized medicine by enabling real-time diagnostics and tailored treatment plans.

Another notable project, "Deep Competitive Learning", highlights Pietro's work on unsupervised learning: by designing gradient-based competitive layers that integrate with deep learning models, it extends the ability of AI systems to handle unstructured data effectively. These projects reflect his commitment to pushing the boundaries of AI research and its practical applications.

Pietro's professional experience also includes a period as an Algorithm Developer at S.d.O Servizi di Organizzazione in Italy, where he honed his skills in computational algorithms and software development. This blend of academic rigor and industry exposure positions him to bridge theoretical concepts with real-world implementations.
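To give a flavor of the idea behind gradient-based competitive learning, the sketch below shows a minimal winner-take-all layer trained by gradient descent: each input activates its nearest prototype, and the gradient of the squared distance pulls only that prototype toward the input. This is an illustrative toy example, not Pietro's actual method; all names (`competitive_step`, `protos`) and the toy data are hypothetical.

```python
import numpy as np

def competitive_step(prototypes, x, lr=0.1):
    """One gradient step of a winner-take-all competitive layer.

    The loss is 0.5 * ||x - w||^2 for the nearest prototype w;
    its gradient (w - x) moves only the winner toward the input.
    """
    dists = np.linalg.norm(prototypes - x, axis=1)
    winner = int(np.argmin(dists))
    grad = prototypes[winner] - x          # d(loss)/d(w_winner)
    prototypes[winner] -= lr * grad        # gradient descent update
    return prototypes, winner

# Toy run: two prototypes, samples drawn around two cluster centers
rng = np.random.default_rng(0)
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
centers = np.array([[-2.0, -2.0], [3.0, 3.0]])
for _ in range(200):
    c = rng.integers(0, 2)
    x = centers[c] + 0.1 * rng.standard_normal(2)
    protos, _ = competitive_step(protos, x)
# After training, each prototype has drifted toward one cluster center
```

Because the update is expressed as a gradient of a differentiable loss, such a layer can in principle be stacked with standard deep learning components and trained end-to-end, which is the appeal the project description points to.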
His technical proficiency spans Artificial Intelligence, Machine Learning, and Neural Networks, and he has been endorsed for his ability to design and execute complex AI-driven solutions. He has a collaborative ethos, evident in his involvement in interdisciplinary projects and his engagement with the academic community. Pietro's work has drawn attention for its innovation and its potential to transform sectors such as healthcare and data science, and he continues to make significant strides in computational science.



Experience

Research Assistant

University of Cambridge

Aug 2020 to Present

Publications

  • December 2024

From Charts to Atlas: Merging Latent Spaces into One

Models trained on semantically related datasets and tasks exhibit comparable inter-sample relations within their latent spaces. We investigate in this study the aggregation of such latent sp...

  • September 2023

Categorical Foundations of Explainable AI: A Unifying Theory

Explainable AI (XAI) aims to address the human need for safe and reliable AI systems. However, numerous surveys emphasize the absence of a sound mathematical formalization of key XAI notions...

  • July 2023

Bridging Equational Properties and Patterns on Graphs: an AI-Based Approach

Journal: Proceedings of Machine Learning Research

AI-assisted solutions have recently proven successful when applied to Mathematics and have opened new possibilities for exploring unsolved problems that have eluded traditional approaches fo...

  • July 2023

SHARCS: Shared Concept Space for Explainable Multimodal Learning

Multimodal learning is an essential paradigm for addressing complex real-world problems, where individual data modalities are typically insufficient to accurately solve a given modelling tas...

  • May 2023

Interpretable Graph Networks Formulate Universal Algebra Conjectures

The rise of Artificial Intelligence (AI) recently empowered researchers to investigate hard mathematical problems which eluded traditional approaches for decades. Yet, the use of AI in Unive...

  • April 2023

Global Explainability of GNNs via Logic Combination of Learned Concepts

While instance-level explanation of GNN is a well-studied problem with plenty of approaches being developed, providing a global explanation for the behaviour of a GNN is much less explored, ...

  • February 2023

GCI: A Graph Concept Interpretation Framework

Explainable AI (XAI) underwent a recent surge in research on concept extraction, focusing on extracting human-interpretable concepts from Deep Neural Networks. An important challenge facing ...

  • January 2023

Logic Explained Networks

Journal: Artificial Intelligence (ISSN 1872-7921)

The large and still increasing popularity of deep learning clashes with a major limit of neural network architectures, that consists in their lack of capability in providing human-understand...

  • January 2023

Extending Logic Explained Networks to Text Classification

Recently, Logic Explained Networks (LENs) have been proposed as explainable-by-design neural models providing logic explanations for their predictions. However, these models have only been a...

  • December 2022

Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off

Deploying AI-powered systems requires trustworthy models supporting effective human interactions, going beyond raw prediction accuracy. Concept bottleneck models promote trustworthiness by c...

Scholar9 Profile ID: S9-122024-2007269

Publications (13)
Articles Reviewed (0)
Citations (0)
Network (2)
Conferences/Seminars (0)
