Distributed Machine Learning: Communication, Efficiency, and Privacy
Avrim Blum, Carnegie Mellon University
Joint work with Maria-Florina Balcan, Shai Fine, and Yishay Mansour

[RaviKannan60] Happy birthday Ravi! And thank you for many enjoyable years working together on challenging problems where machine learning meets high-dimensional geometry.

This talk
Algorithms for machine learning in a distributed, cloud-computing context. Related to Ravi's interest in algorithms for cloud computing. For full details see [Balcan-B-Fine-Mansour, COLT 2012].

Machine Learning
What is machine learning about? Making useful, accurate generalizations or predictions from data. Given access to a sample of some population, classified in some way, we want to learn a rule that will have high accuracy over the population as a whole. Typical ML problems: Given a sample of images, classified as male or female, learn a rule to classify new images.

Given a set of protein sequences, labeled by function, learn a rule to predict the functions of new proteins.

Distributed Learning
Many ML problems today involve massive amounts of data distributed across multiple locations: click data, customer data, scientific data. Each location has only a piece of the overall data pie, so in order to learn over the combined distribution D, the data holders will need to communicate.

The classic ML question is: how much data is needed to learn a given type of function well? These settings bring up a new question: how much communication? Plus issues like privacy, etc. That is the focus of this talk.

Distributed Learning: Scenarios
Two natural high-level scenarios:
1. Each location has data from the same distribution, so each could in principle learn on its own.

   But we want to use limited communication to speed up, ideally to the centralized learning rate. [Dekel, Gilad-Bachrach, Shamir, Xiao]
2. The overall distribution is arbitrarily partitioned, so learning without communication is impossible. This will be our focus here.

The distributed PAC learning model
Goal: learn an unknown function f ∈ C given labeled data from some probability distribution D. However, D is arbitrarily partitioned among k entities (players) 1, 2, ..., k. [Even k=2 is interesting.] That is, D = (D1 + D2 + ... + Dk)/k, and each player i can sample (x, f(x)) from its own Di.

The goal is to learn a good rule over the combined D.

The distributed PAC learning model
An interesting special case to think about: k=2, where one player holds the positives and the other holds the negatives. How much communication is needed to learn, e.g., a good linear separator? In general, view k as small compared to the sample size needed for learning.

The distributed PAC learning model

Some simple baselines.
Baseline #1: Based on the fact that any class of VC-dimension d can be learned to error ε from O((d/ε) log(1/ε)) samples. Each player sends a 1/k fraction of such a sample to player 1. Player 1 finds a good rule h over the combined sample and sends h to the others. Total: 1 round, O((d/ε) log(1/ε)) examples sent.
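Baseline #1 can be sketched in a few lines. The 1-D threshold target and the `learn_threshold` routine below are stand-ins of my own (the talk's setting is a general class of VC-dimension d), chosen so the one-round sample-and-centralize protocol fits in a single runnable sketch.

```python
import random

# Hypothetical stand-ins: a 1-D threshold target plays the role of the
# unknown f in C, and a trivial consistent learner plays player 1's algorithm.
def target(x):
    return 1 if x >= 0.5 else 0

def player_sample(rng, m):
    # player i draws m labeled examples (x, f(x)) from its own D_i
    xs = [rng.random() for _ in range(m)]
    return [(x, target(x)) for x in xs]

def learn_threshold(sample):
    # player 1's centralized learner: smallest positive example as threshold
    pos = [x for x, y in sample if y == 1]
    return min(pos) if pos else 1.0

k, m_total = 4, 400          # m_total plays the role of O((d/eps) log(1/eps))
rng = random.Random(0)
pooled = []
for _ in range(k):           # one round: each player sends m_total/k examples
    pooled.extend(player_sample(rng, m_total // k))
h = learn_threshold(pooled)  # player 1 learns h and broadcasts it to the others

test_xs = [rng.random() for _ in range(2000)]
err = sum((1 if x >= h else 0) != target(x) for x in test_xs) / len(test_xs)
```

The communication here is exactly one round of raw examples plus one broadcast hypothesis, matching the baseline's accounting.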

The distributed PAC learning model
Baseline #2: Suppose the function class has an online algorithm A with mistake bound M; e.g., the Perceptron algorithm learns linear separators of margin γ with mistake bound O(1/γ²). Player 1 runs A and broadcasts its current hypothesis. If any player has a counterexample, it sends it to player 1. Player 1 updates and re-broadcasts. At most M examples and rules are communicated.
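A minimal sketch of baseline #2, on a made-up 2-D dataset with an invented target separator: player 1 runs the Perceptron as the online algorithm A, broadcasts its current hypothesis w, any player holding a counterexample sends it back, and player 1 updates and re-broadcasts.

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

rng = random.Random(1)
w_star = [1.0, -1.0]                      # hidden separator (illustrative only)
players = []
for _ in range(3):                        # arbitrary partition across 3 players
    pts = []
    for _ in range(50):
        x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
        if abs(dot(w_star, x)) > 0.1:     # keep a margin so Perceptron converges
            pts.append((x, 1 if dot(w_star, x) > 0 else -1))
    players.append(pts)

w = [0.0, 0.0]
mistakes = 0                              # examples sent back to player 1
while True:
    cex = next(((x, y) for pts in players for (x, y) in pts
                if y * dot(w, x) <= 0), None)
    if cex is None:                       # nobody has a counterexample: done
        break
    x, y = cex                            # one example communicated
    w = [wi + y * xi for wi, xi in zip(w, x)]   # Perceptron update; re-broadcast w
    mistakes += 1
```

Total communication is at most M examples plus M re-broadcast rules; for this data the classic bound gives M ≤ (R/γ)² ≤ 400 updates.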

Dependence on 1/ε
So far: linear dependence on d and 1/ε (baseline #1), or on M with no dependence on 1/ε (baseline #2). [ε = final error rate] Can you get O(d log 1/ε) examples of communication? Yes: distributed boosting.

Distributed Boosting
Idea: Run baseline #1 with ε set to a constant. [Everyone sends a small amount of data to player 1, enough to learn to that constant error.] Get an initial rule h1; send it to the others.

Distributed Boosting
Idea: Players then reweight their Di to focus on the regions where h1 did poorly, and repeat: a distributed implementation of the Adaboost algorithm. Some additional low-order communication is needed too (players send their current performance level to player 1, so it can request more data from the players where h is doing badly). Key point: each round uses only O(d) samples and lowers the error multiplicatively.

Distributed Boosting
Final result: O(d) examples of communication per round, plus low-order extra bits. O(log 1/ε) rounds of communication. So, O(d log 1/ε) examples of communication in total, plus low-order extra info.
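The boosting loop above can be sketched compactly. Everything concrete below is an invented stand-in of mine: 1-D data, an interval target, and threshold "stumps" as the weak hypothesis class. Each round, player 1 learns a weak rule h_t and broadcasts it; players then reweight their own data locally, AdaBoost-style, so no raw data moves during reweighting.

```python
import math, random

rng = random.Random(2)

def target(x):
    return 1 if -0.2 < x < 0.5 else -1    # an interval: no single stump is perfect

players = [[rng.uniform(-1, 1) for _ in range(20)] for _ in range(3)]
weights = [[1.0] * 20 for _ in range(3)]

def stump(thr, sgn):
    return lambda x: sgn if x > thr else -sgn

def weak_learn(xs, ws):
    # player 1's weak learner: best threshold stump on a fixed grid
    best, best_err = (0.0, 1), float("inf")
    for thr in [i / 10 - 1 for i in range(21)]:
        for sgn in (1, -1):
            err = sum(w for x, w in zip(xs, ws) if stump(thr, sgn)(x) != target(x))
            if err < best_err:
                best, best_err = (thr, sgn), err
    return stump(*best)

hyps = []
for _ in range(30):                        # O(log 1/eps) rounds
    # for brevity this sketch pools all weighted data each round; the actual
    # protocol requests only O(d) fresh examples, weighted toward hard regions
    xs = [x for pl in players for x in pl]
    ws = [w for wl in weights for w in wl]
    h = weak_learn(xs, ws)
    err = sum(w for x, w in zip(xs, ws) if h(x) != target(x)) / sum(ws)
    err = min(max(err, 1e-9), 0.5 - 1e-9)
    alpha = 0.5 * math.log((1 - err) / err)
    hyps.append((alpha, h))                # broadcast h_t and its weight
    for i, pl in enumerate(players):       # local reweighting: no data sent
        for j, x in enumerate(pl):
            weights[i][j] *= math.exp(-alpha * target(x) * h(x))

def H(x):                                  # final weighted-majority rule
    return 1 if sum(a * h(x) for a, h in hyps) > 0 else -1

train_err = sum(H(x) != target(x) for pl in players for x in pl)
```

The per-round error drops multiplicatively, so a logarithmic number of rounds suffices, which is where the O(d log 1/ε) total comes from.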

Agnostic learning (no perfect h)
[Balcan-Hanneke] give a robust halving algorithm that can be implemented in the distributed setting, based on the analysis of a generalized active learning model; the algorithms are especially suited to the distributed setting. It gets error 2·OPT(C) + ε using a total of only O(k log|C| log(1/ε)) examples. Not computationally efficient, but it says a logarithmic dependence is possible in principle.

Can we do better for specific classes of functions?

Interesting class: parity functions

Examples x ∈ {0,1}^d; f(x) = x·v_f mod 2, for an unknown vector v_f. Interesting even for k=2. A classic communication lower bound for determining whether two subspaces intersect implies an Ω(d²)-bit lower bound to output a single good vector v. What if we allow the players' rules to look different?

Interesting class: parity functions

Parity has the interesting property that:
(a) It can be learned using O(d/ε) examples. [Given a dataset S of size O(d/ε), just solve the linear system to get a vector v_h.]
(b) It can be learned in the reliable-useful model of Rivest-Sloan '88: if x is in the subspace spanned by S, predict accordingly; else say "??".

Interesting class: parity functions
Algorithm for k=2: Each player i PAC-learns over its Di to get a parity function gi, and also reliable-useful-learns to get a rule hi. It sends gi to the other player and uses the rule: if hi predicts, use it; else use g_{3-i}. Open question: can one extend this to k=3?
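An illustrative sketch of this k=2 protocol (my own minimal implementation, not the paper's): each player solves its linear system over GF(2) to get a consistent parity g_i, keeps a reliable-useful rule h_i built from its own sample, sends only g_i across, and answers with h_i when it can, falling back on the other player's parity otherwise. (In 0-indexed code, g_{3-i} becomes g[1-i].)

```python
import random

d = 8
rng = random.Random(3)
v_f = [rng.randint(0, 1) for _ in range(d)]          # hidden parity vector

def f(x):
    return sum(a * b for a, b in zip(v_f, x)) % 2

def reduce_system(S, y):
    """Row-reduce the augmented system [S | y] over GF(2)."""
    rows = [r[:] for r in S]
    rhs = y[:]
    piv, r = [], 0
    for c in range(d):
        pr = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if pr is None:
            continue
        rows[r], rows[pr] = rows[pr], rows[r]
        rhs[r], rhs[pr] = rhs[pr], rhs[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[r])]
                rhs[i] ^= rhs[r]
        piv.append(c)
        r += 1
    return rows[:r], rhs[:r], piv

def parity_from(reduced):
    """g_i: any vector consistent with the player's sample (free vars = 0)."""
    _, rhs, piv = reduced
    v = [0] * d
    for i, c in enumerate(piv):
        v[c] = rhs[i]
    return v

def ru_predict(reduced, x):
    """h_i: if x is in span(S_i) its label is forced; otherwise return None."""
    rows, rhs, piv = reduced
    x, t = x[:], 0
    for i, c in enumerate(piv):
        if x[c]:
            x = [(a + b) % 2 for a, b in zip(x, rows[i])]
            t ^= rhs[i]
    return t if not any(x) else None      # forced label, or "??"

samples = []
for _ in range(2):                         # two players, arbitrary samples
    S = [[rng.randint(0, 1) for _ in range(d)] for _ in range(12)]
    samples.append((S, [f(x) for x in S]))
reduced = [reduce_system(S, y) for S, y in samples]
g = [parity_from(rd) for rd in reduced]    # only these cross the wire

def predict(i, x):
    """Player i's combined rule: h_i if it predicts, else the other's g."""
    p = ru_predict(reduced[i], x)
    if p is not None:
        return p
    return sum(a * b for a, b in zip(g[1 - i], x)) % 2
```

When h_i does answer, linearity of f forces the label to be correct, which is exactly the reliable-useful property the protocol exploits.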

Linear Separators
Consider linear separators through the origin (we can assume points lie on the unit sphere), and say we have a near-uniform probability distribution D over the sphere S^d. The VC bound, margin bound, and Perceptron mistake bound all give O(d) examples needed to learn, so O(d) examples of communication using the baselines (for constant k and ε). Can one do better?

Linear Separators

Idea: Use the margin version of the Perceptron algorithm [update until f(x)(w·x) ≥ 1 for all x] and run it round-robin. So long as the examples xi of player i and xj of player j are reasonably orthogonal, player j's updates don't mess up too much the progress made on player i's data. Few updates ⇒ no damage. Many updates ⇒ lots of progress!

Linear Separators
If the overall distribution D is near uniform [density bounded by c_unif], then the total communication (for constant k and ε) is O((d log d)^{1/2}) examples rather than O(d).
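The round-robin margin Perceptron can be sketched as below, on made-up spherical data with an invented target: players take turns, each updating the shared w on its own points until f(x)(w·x) ≥ 1 holds locally, then passing w on. Only the updates (examples) cross between players.

```python
import math, random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def unit(x):
    n = math.sqrt(dot(x, x))
    return [a / n for a in x]

rng = random.Random(4)
d = 5
w_star = unit([rng.gauss(0, 1) for _ in range(d)])     # hidden separator
players = []
for _ in range(3):
    pts = []
    while len(pts) < 40:
        x = unit([rng.gauss(0, 1) for _ in range(d)])  # points on the sphere
        if abs(dot(w_star, x)) > 0.1:                  # keep a margin
            pts.append((x, 1 if dot(w_star, x) > 0 else -1))
    players.append(pts)

w = [0.0] * d
total_updates = 0                  # = number of examples communicated
while True:
    updates_this_round = 0
    for pts in players:            # round-robin: each player takes a turn
        done = False
        while not done:            # update until y*(w.x) >= 1 on all local data
            done = True
            for x, y in pts:
                if y * dot(w, x) < 1:
                    w = [wi + y * xi for wi, xi in zip(w, x)]
                    total_updates += 1
                    updates_this_round += 1
                    done = False
    if updates_this_round == 0:    # a clean full round: everyone is satisfied
        break
```

The communication cost is exactly `total_updates` examples plus the handoffs of w, which is what the near-uniform analysis bounds.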

Can one get similar savings for general distributions?

Preserving Privacy of Data
It is natural also to consider privacy in this setting. Data elements could be patient records, customer records, or click data, and we want to preserve the privacy of the individuals involved. A compelling notion is differential privacy [Dwork, Nissim, ...]: if we replace any one record in Si with a fake record, nobody else can tell.

Preserving Privacy of Data
Formally, with Si ~ Di for each player i: for all sequences of interactions σ, and neighboring samples Si, Si' differing in one record,
  e^{-ε} ≤ Pr(A(Si) = σ) / Pr(A(Si') = σ) ≤ e^{ε},
where the probability is over the randomness in A. (For small ε, e^{±ε} ≈ 1 ± ε.)

Preserving Privacy of Data

A number of algorithms have been developed for differentially-private learning in the centralized setting. One can ask how to maintain differential privacy without increasing the communication overhead.
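The e^ε condition can be seen concretely in the simplest private mechanism, randomized response on a single bit (a textbook example, not from the talk): report the true bit with probability e^ε / (1 + e^ε), otherwise flip it.

```python
import math

eps = math.log(3)                         # keep the truth with probability 3/4
p_true = math.exp(eps) / (1 + math.exp(eps))

ratios = []
for o in (0, 1):                          # every possible output o
    p_if_0 = p_true if o == 0 else 1 - p_true    # Pr(A(0) = o)
    p_if_1 = p_true if o == 1 else 1 - p_true    # Pr(A(1) = o)
    ratios.append(p_if_0 / p_if_1)

# Each ratio lies in [e^-eps, e^eps], i.e. [1/3, 3] here, so swapping the
# input bit changes no output's probability by more than a factor of e^eps.
ok = all(math.exp(-eps) - 1e-9 <= r <= math.exp(eps) + 1e-9 for r in ratios)
```

Here the ratio is exactly e^ε for one output and e^{-ε} for the other, so the bound is tight.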

Preserving Privacy of Data
Another notion that is natural to consider in this setting, a kind of privacy for the data holder: view the distribution Di as non-sensitive (statistical information about the population of people who are sick in city i), but the sample Si ~ Di as sensitive (the actual patients). Can a protocol reveal no more information about Si than is inherent in Di?

Preserving Privacy of Data

Formally, compare the actual sample Si with an independent "ghost sample" Si' drawn from the same Di, and ask that
  Pr_{Si,Si'}[ for all σ, Pr(A(Si) = σ) / Pr(A(Si') = σ) ∈ 1 ± ε ] ≥ 1 − δ.
One can get algorithms with this guarantee.

Conclusions
As we move to large distributed datasets, communication issues become important. Rather than only asking how much data is needed to learn well, we should also ask how much communication we need. Issues like privacy also become more critical. Quite a number of open questions remain.