Invited Talk, Speaker: Prof. 陳素雲 (Institute of Statistical Science, Academia Sinica)

  • Event date: 2022-06-10


Title: Robust aggregation for federated learning by minimum gamma-divergence estimation

Speaker: Prof. 陳素雲 (Institute of Statistical Science, Academia Sinica)

Time: Friday, June 10, 2022, 10:40-11:30 AM

Venue: Online talk via Google Meet

The talk will be live-streamed via Google Meet.

The meeting room opens 20 minutes before the talk begins; click the link below and press "Ask to join".
Abstract
 
Federated learning is a framework in which multiple devices or institutions, called local clients, collaboratively train a global model without sharing their data. In federated learning with a central server, an aggregation algorithm integrates the model information sent from local clients to update the parameters of the global model. The sample mean is the simplest and most commonly used aggregation method. However, it is not robust to data with outliers or under the Byzantine problem, where Byzantine clients send malicious messages to interfere with the learning process. Several robust aggregation methods have been introduced in the literature, including the marginal median, the geometric median, and the trimmed mean. In this talk, we will introduce an alternative robust aggregation method, named the gamma-mean, which is obtained by minimum divergence estimation based on a robust density power divergence. The gamma-mean aggregation mitigates the influence of Byzantine clients by assigning them smaller weights. This weighting scheme is data-driven and controlled by the value of gamma. Robustness is discussed from the viewpoint of the influence function, and some numerical results are presented.
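To make the contrast in the abstract concrete, the sketch below compares the plain sample mean with two classical robust aggregators (marginal median and trimmed mean) and with a gamma-weighted mean in the spirit of minimum gamma-divergence estimation. The Gaussian-kernel weighting, the fixed scale parameter, the fixed-point iteration, and all function names are illustrative assumptions for this sketch, not the speaker's implementation; it only shows how a data-driven, gamma-controlled weighting can down-weight Byzantine updates.

```python
# Minimal sketch (assumptions, not the talk's code): aggregation rules for
# federated learning and a gamma-weighted mean that down-weights outlying
# client updates.
import numpy as np

def sample_mean(updates):
    """Plain average of client updates (not robust to Byzantine clients)."""
    return np.mean(updates, axis=0)

def marginal_median(updates):
    """Coordinate-wise (marginal) median, a classical robust aggregator."""
    return np.median(updates, axis=0)

def trimmed_mean(updates, trim_ratio=0.1):
    """Coordinate-wise mean after trimming the smallest/largest values."""
    k = int(trim_ratio * len(updates))
    sorted_updates = np.sort(updates, axis=0)
    return np.mean(sorted_updates[k:len(updates) - k], axis=0)

def gamma_mean(updates, gamma=0.5, scale=1.0, n_iters=50, tol=1e-8):
    """Gamma-weighted mean (illustrative form): clients far from the current
    center receive exponentially smaller weights, so the weighting is
    data-driven and controlled by gamma (gamma -> 0 recovers the plain mean)."""
    updates = np.asarray(updates, dtype=float)
    center = np.median(updates, axis=0)            # robust starting point
    for _ in range(n_iters):
        dist2 = np.sum((updates - center) ** 2, axis=1)
        w = np.exp(-gamma * dist2 / (2.0 * scale ** 2))
        new_center = (w[:, None] * updates).sum(axis=0) / w.sum()
        if np.linalg.norm(new_center - center) < tol:
            break
        center = new_center
    return center

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = rng.normal(0.0, 0.1, size=(18, 5))      # honest client updates
    byzantine = rng.normal(10.0, 0.1, size=(2, 5))   # malicious updates
    updates = np.vstack([honest, byzantine])
    print("sample mean   :", np.round(sample_mean(updates), 2))
    print("marginal med. :", np.round(marginal_median(updates), 2))
    print("trimmed mean  :", np.round(trimmed_mean(updates), 2))
    print("gamma-mean    :", np.round(gamma_mean(updates), 2))
```

Running the toy example, the sample mean is pulled toward the two Byzantine updates, while the robust aggregators, including the gamma-weighted mean, stay close to the honest clients' center; in the gamma-mean the outlying updates simply receive near-zero weights.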