Chamara Sandeepa
With the proliferation of data and Artificial Intelligence (AI), privacy has become a growing concern in recent years. This concern will become even more significant with upcoming network infrastructures such as Beyond 5G (B5G)/6G, since the means of collecting and making inferences from user data will continue to increase. A Privacy-Preserving Machine Learning (PPML) technique called Federated Learning (FL) is emerging as a solution that provides high-quality ML models while maintaining data privacy. Data used in FL does not need to be moved from its original source. Instead, ML models are trained locally near the data source and aggregated remotely to create a global model. Although FL appears promising for PPML, the latest research on FL privacy shows that it is vulnerable to numerous privacy attacks, including reconstruction, model inversion, and inference attacks. Several mechanisms, such as Differential Privacy (DP), Secure Multiparty Computation (SMC), and Blockchain-based mechanisms, are being actively investigated as potential solutions to these privacy vulnerabilities in FL. However, their trade-offs between model accuracy, privacy, and performance, as well as their practicality in resource-constrained environments such as Edge AI and IoT applications, remain challenging problems to investigate and address. Therefore, a robust, lightweight, and privacy-enhanced FL platform for Edge AI applications is essential to fulfil the privacy requirements expected in future B5G/6G networks.
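The local-training-plus-remote-aggregation idea described above can be sketched with the classic Federated Averaging (FedAvg) scheme. The following is a minimal illustrative sketch, not the author's implementation: the toy linear model, the gradient-descent local update, and all function names are assumptions chosen for demonstration. Only model weights leave each client; the raw data stays local.

```python
# Minimal FedAvg sketch (illustrative; toy linear-regression clients).
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One client's local update: gradient descent on a linear model (MSE loss).
    Raw data (X, y) never leaves the client; only the updated weights do."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average client models weighted by dataset size."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])          # hidden ground-truth model
global_w = np.zeros(2)                  # initial global model

# Simulate 3 clients, each holding a private local dataset.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=50)
    clients.append((X, y))

# Each round: broadcast the global model, train locally, aggregate remotely.
for _ in range(20):
    updates = [local_train(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
```

After a few rounds the aggregated `global_w` approaches the ground-truth weights, illustrating how a useful global model emerges without centralizing any client's data. Privacy mechanisms such as DP would typically be layered on top, e.g. by clipping and noising each client's update before aggregation.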