Privacy-Preserving Training

The Data Sharing Problem

Banks, hospitals, and telecom providers all face network attacks — but they can’t pool their data to train better detectors. Privacy regulations, competitive concerns, and security policies prevent it. Yet a model trained on one organization’s data may not generalize to another’s network. How do you train a robust classifier across organizations without anyone revealing their data?

What You’ll Work On

This theme explores the intersection of secure multiparty computation (MPC), federated learning, and differentiable logics. The goal is collaborative training in which logical constraints improve robustness and cryptographic protocols protect privacy.
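To make the setting concrete, here is a minimal sketch of federated averaging with a differentiable-logic penalty, in plain Python. Everything in it is illustrative: the data, the constraint point `x_con`, and the hinge-style penalty are invented for the example, and a real deployment would add secure aggregation or MPC so the server never sees individual client updates.

```python
# Illustrative FedAvg sketch with a differentiable-logic penalty.
# Model: linear regression with squared loss. The "logical constraint"
# is relaxed to a hinge term: score(x_con) >= margin.

def grad(w, X, y, x_con, lam=1.0, margin=1.0):
    # Squared-loss gradient on this client's local data.
    g = [0.0] * len(w)
    for xi, yi in zip(X, y):
        err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
        for j in range(len(w)):
            g[j] += 2 * err * xi[j] / len(X)
    # Differentiable-logic penalty: "x_con must score at least `margin`",
    # relaxed as lam * max(0, margin - w.x_con); subgradient added below.
    score = sum(wj * xj for wj, xj in zip(w, x_con))
    if score < margin:
        for j in range(len(w)):
            g[j] -= lam * x_con[j]
    return g

def local_update(w, X, y, x_con, lr=0.1, steps=20):
    # A client refines the global model on its own data, locally.
    w = list(w)
    for _ in range(steps):
        g = grad(w, X, y, x_con)
        w = [wj - lr * gj for wj, gj in zip(w, g)]
    return w

def fedavg(clients, x_con, rounds=10, dim=2):
    # The server only ever sees model parameters, never raw data.
    # (In practice the averaging step itself would be secured with MPC.)
    w = [0.0] * dim
    for _ in range(rounds):
        updates = [local_update(w, X, y, x_con) for X, y in clients]
        w = [sum(u[j] for u in updates) / len(updates) for j in range(dim)]
    return w

# Two synthetic "organizations" whose data both follow y = x1 + x2:
clients = [
    ([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [1.0, 1.0, 2.0]),
    ([[2.0, 0.0], [0.0, 2.0], [1.0, 2.0]], [2.0, 2.0, 3.0]),
]
w = fedavg(clients, x_con=[1.0, 1.0])
# w approaches [1, 1], which also satisfies the constraint score >= 1.
```

The thesis-relevant questions start where this sketch stops: replacing the plain averaging step with a cryptographic protocol, and replacing the ad-hoc hinge with a principled differentiable-logic encoding of richer constraints.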

Possible thesis directions:

What You’ll Learn

Relevant Literature

Supervisors: Alessandro Bruni (ITU), Nicola Dragoni (DTU) – see Team