Announcements

Class on 22/12, 3pm-6pm: talk by Stratis Ioannidis + exercise session

 
by Άρης Παγουρτζής - Thursday, 21 December 2017, 10:11 PM
 

Tomorrow's class will consist of two parts: in the first, Stratis Ioannidis (Northeastern) will give a talk -- see the title and abstract below. In the second part we will solve and discuss exercises from problem sets 2, 3, and 4. As last time, come prepared with your solutions so that you can present them to us!


===========================================================
Talk: Friday, 22 Dec 2017, 15:00-16:30, room 1.1.29 (Old ECE Bld. NTUA)

Speaker: Stratis Ioannidis (Northeastern University)

Title: Secure Function Evaluation at Scale

Abstract: Secure Function Evaluation (SFE) allows an interested party to evaluate a function over private data without learning anything about the inputs other than the outcome of this computation. This offers a strong privacy guarantee: SFE enables, e.g., a medical researcher, a statistician, or a data analyst, to conduct a study over private, sensitive data, without jeopardizing the privacy of the study's participants (patients, online users, etc.). Nevertheless, applying SFE to “big data” poses several challenges. First, beyond any computational overheads due to encryptions and decryptions, executing an algorithm securely may lead to a polynomial blowup in the total work compared to execution in the clear. For large datasets, even going from linear to quadratic complexity is prohibitive. Second, secure evaluations of algorithms should also maintain parallelizability: an algorithm that is easy to parallelize in the clear should also maintain this property in its SFE version, if its execution is to scale. This poses a challenge, as communication patterns between processors may reveal a lot about the private inputs. In this talk, we show that several machine learning and data mining algorithms can be executed securely while leading to only (a) a logarithmic blow-up in total work and (b) a logarithmic increase in the execution time when executed in parallel. Our techniques generalize to any algorithm that is graph-parallel, i.e., can be expressed through a sequence of scatter, gather, and apply operations. This includes several algorithms of practical interest, including page rank, matrix factorization, and training neural networks.
============================================================
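For intuition on the graph-parallel model the abstract refers to, here is a minimal, insecure (in-the-clear) sketch of the scatter/gather/apply pattern, using PageRank as the example algorithm. The function name, edge-list representation, and parameter defaults are our own illustrative choices, not taken from the talk; a real SFE version would run these same rounds under encryption with data-oblivious communication.

```python
def pagerank_gas(edges, n, damping=0.85, iters=50):
    """PageRank expressed as repeated scatter / gather / apply rounds.

    edges: list of directed (u, v) pairs over vertices 0..n-1.
    Illustrative sketch only -- computes in the clear, with no security.
    """
    out_deg = [0] * n
    for u, _ in edges:
        out_deg[u] += 1

    rank = [1.0 / n] * n
    for _ in range(iters):
        # Scatter: each vertex sends rank / out_degree along its out-edges.
        messages = [(v, rank[u] / out_deg[u]) for u, v in edges if out_deg[u]]
        # Gather: each vertex sums its incoming messages.
        acc = [0.0] * n
        for v, m in messages:
            acc[v] += m
        # Apply: each vertex updates its rank from the gathered sum.
        rank = [(1 - damping) / n + damping * acc[v] for v in range(n)]
    return rank
```

Each of the three phases is embarrassingly parallel across vertices or edges, which is exactly the structure that the talk's techniques aim to preserve when the computation is carried out securely.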