Berivan Işık gave a seminar at OGAM on 30.12.2022. The presentation material can be accessed here.

Berivan Işık OGAM Seminar

Title: Sparsity in Neural Networks

Date: 30 December 2022, 13:30, Ayaslı Research Center, ARC211

Abstract: I will talk about my two recent works on sparsity in neural networks. In the first part, I will describe our information-theoretic formulation of the neural network compression problem. In addition to characterizing the theoretical limits of neural network compression, this formulation shows that pruning (sparsification), implicitly or explicitly, must be a part of a good compression algorithm. This observation bridges a gap between the parts of the literature pertaining to neural network compression and data compression, respectively, providing insight into the empirical success of model pruning. We also propose a novel pruning strategy derived from our information-theoretic formulation and show that it outperforms the relevant baselines on the CIFAR-10 and ImageNet datasets. In the second part, I will talk about our recent communication-efficient federated learning strategy: Federated Probabilistic Mask Training (FedPM). While prior work has made great progress in compressing the weight updates through gradient compression methods, we propose a radically different approach that does not update the weights at all. Instead, our method freezes the weights at their initial random values and learns how to sparsify the random network for the best performance. To this end, the clients collaborate in training a stochastic binary mask to find the optimal sparse random network within the original one. At the end of training, the final model is a sparse network with random weights, i.e., a subnetwork inside the dense random network. We show improvements in accuracy, communication cost (less than 1 bit per parameter (bpp)), convergence speed, and final model size (less than 1 bpp) over relevant baselines on the MNIST, EMNIST, CIFAR-10, and CIFAR-100 datasets, in the low-bitrate regime under various system configurations.
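To give a rough feel for the idea of learning a probabilistic mask over frozen random weights, here is a minimal NumPy sketch. The single linear layer, the toy data, the straight-through-style gradient for the mask scores, and the server-side averaging of scores are illustrative simplifications and assumptions, not the FedPM algorithm as published (which, in particular, aggregates sampled binary masks at under 1 bpp rather than averaging scores).

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Frozen random weights, shared by all clients and never updated.
d_in, d_out, n_clients, rounds, lr = 8, 1, 4, 200, 0.5
W = rng.standard_normal((d_out, d_in))

# Only the mask scores theta are learned; keep-probability = sigmoid(theta).
theta = np.zeros((d_out, d_in))

# Toy regression data for each client (purely illustrative).
target = rng.standard_normal((d_out, d_in))
client_x = [rng.standard_normal((32, d_in)) for _ in range(n_clients)]
client_y = [x @ target.T for x in client_x]

for _ in range(rounds):
    local_thetas = []
    for x, y in zip(client_x, client_y):
        th = theta.copy()
        p = sigmoid(th)
        m = (rng.random(p.shape) < p).astype(float)  # sample a binary mask
        W_eff = m * W                                # sparse random subnetwork
        err = x @ W_eff.T - y                        # forward pass + residual
        grad_W_eff = err.T @ x / len(x)              # gradient w.r.t. effective weights
        # Straight-through-style estimate: push the gradient to the mask scores.
        grad_theta = grad_W_eff * W * p * (1 - p)
        local_thetas.append(th - lr * grad_theta)
    # Server step (simplified here): average the clients' mask scores.
    theta = np.mean(local_thetas, axis=0)

# Final model: a deterministic sparse subnetwork of the frozen random weights.
final_mask = (sigmoid(theta) > 0.5).astype(float)
print("kept fraction of weights:", final_mask.mean())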

Links:
[first paper] An Information-Theoretic Justification for Model Pruning: https://arxiv.org/pdf/2102.08329.pdf
[second paper] Sparse Random Networks for Communication-Efficient Federated Learning: https://arxiv.org/pdf/2209.15328.pdf

Bio: Berivan Işık is a PhD student at Stanford University with a focus on information theory, machine learning, compression, and privacy. She received her MS degree from Stanford (2021) and her BS degree from Middle East Technical University (2019), both in Electrical Engineering. She was a research intern at Google in 2021 and 2022 and is currently an applied scientist intern at Amazon. Her recent research interests are model compression, federated learning, learned data compression, differential privacy, and robustness and fairness in machine learning. She is the recipient of the Stanford Graduate Fellowship and the 2022 ICML Outstanding Reviewer Award.


Last Updated:
01/01/2023 - 21:55