
Keynote Lectures

Privacy-preserving Data Processing at Scale: How Much Can You Trust Your Cloud Provider?
Pascal Felber, Université de Neuchâtel, Switzerland

Trustworthy Federated Learning Systems
Lydia Chen, Delft University of Technology, Netherlands

Available Soon
Valeria Cardellini, Università degli Studi di Roma "Tor Vergata", Italy


Privacy-preserving Data Processing at Scale: How Much Can You Trust Your Cloud Provider?

Pascal Felber
Université de Neuchâtel
Switzerland

Brief Bio
Pascal Felber received his M.Sc. and Ph.D. degrees in Computer Science from the Swiss Federal Institute of Technology. From 1998 to 2002, he worked at Oracle Corporation and Bell Labs (Lucent Technologies) in the USA. From 2002 to 2004, he was an Assistant Professor at Institut EURECOM in France. Since October 2004, he has been a Professor of Computer Science at the University of Neuchâtel, Switzerland, working in the field of dependable, concurrent, and distributed computing. He has published over 200 research papers in various journals and conferences.


Abstract
The processing of large amounts of data requires significant computing power and scalable architectures. This trend makes the use of Cloud computing and off-premises data centres particularly attractive, but exposes companies to the risk of data theft. This risk is a key obstacle to outsourcing data processing to external Cloud providers, as for many companies data represents their most valuable asset. In this talk, we will discuss recent and emerging mechanisms to support privacy-preserving data processing, i.e., confidential computing, on untrusted architectures.



Trustworthy Federated Learning Systems

Lydia Chen
Delft University of Technology
Netherlands

Brief Bio
Lydia Y. Chen is a Professor in the Department of Computer Science at the University of Neuchâtel in Switzerland and at Delft University of Technology in the Netherlands. Prior to joining TU Delft, she was a research staff member at the IBM Research Zurich Lab from 2007 to 2018. She holds a PhD from Pennsylvania State University and a BA from National Taiwan University. Her research interests include distributed machine learning, dependability management, and large-scale data processing systems and services. More specifically, her work focuses on developing machine learning and stochastic models, and applying these techniques to application domains such as data centers and AI systems.
She has published more than 100 papers in peer-reviewed journals, serves on the technical program committees of systems and AI conferences, and sits on the editorial boards of multiple IEEE Transactions journals.


Abstract
Federated learning (FL) is an emerging collaborative learning paradigm that enables data owners to jointly extract knowledge and learn models. FL democratizes model building by inviting contributions from the crowd, and protects data privacy by keeping the data on premises. In this talk, I will first discuss its value and the technical challenges of training diversified learning models, from classification and graph models to generative ones. Through concrete examples, I will then demonstrate the vulnerabilities stemming from malicious participants, covering poisoning attacks, free-rider attacks, and data reconstruction attacks. I will conclude with a discussion of defense strategies to thwart adversaries and strengthen trust in FL.




Keynote Lecture

Valeria Cardellini
Università degli Studi di Roma "Tor Vergata"
Italy

Brief Bio
Available Soon


Abstract
Available Soon

