Great Models with Great Privacy: Optimizing ML and AI Over Sensitive Data (continues)

Spark Open Source Community
There is a growing feeling that privacy concerns dampen innovation in machine learning and AI applied to personal and sensitive data. After all, ML and AI are hungry for rich, detailed data, and sanitizing data to improve privacy typically involves redacting or fuzzing inputs, which multiple studies have shown can seriously degrade model quality and predictive power. While this is true for some privacy-safe modeling techniques, it is not true in general. The root cause of the problem is two-fold: first, most data scientists have never learned how to produce great models with great privacy; second, most companies lack the systems to make privacy-preserving machine learning and AI easy.

This talk will challenge the implicit assumption that more privacy means worse predictions. Using practical examples from production environments involving personal and sensitive data, the speakers will introduce a wide range of techniques, from simple hashing to advanced embeddings, for high-accuracy, privacy-safe model development. Key topics include pseudonymous ID generation, semantic scrubbing, structure-preserving data fuzzing, task-specific vs. task-independent sanitization, and ensuring downstream privacy in multi-party collaborations. In addition, the talk will dig into embeddings as a unique, deep learning-based approach for privacy-preserving modeling over unstructured data. Special attention will be given to Spark-based production environments.
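To make one of the named techniques concrete, below is a minimal sketch of pseudonymous ID generation in a Spark pipeline: a raw identifier is replaced with a salted SHA-256 digest so that joins across tables sharing the same salt still work, but the original identifier cannot be recovered without the secret. The DataFrame schema, column names, and salt handling here are illustrative assumptions, not the speakers' actual implementation.

```python
# Sketch: pseudonymous ID generation with a keyed (salted) hash in PySpark.
# Assumes a DataFrame with a raw "user_id" column; names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pseudonymize-ids").getOrCreate()

df = spark.createDataFrame(
    [("alice@example.com", 42.0), ("bob@example.com", 17.5)],
    ["user_id", "spend"],
)

# Hypothetical secret salt; in practice it would come from a vault and be
# rotated, never stored alongside the data.
SECRET_SALT = "rotate-me-regularly"

# Replace the raw identifier with a salted SHA-256 digest. Rows with the same
# user_id (and salt) map to the same pseudo_id, preserving joinability.
pseudonymized = (
    df.withColumn(
        "pseudo_id",
        F.sha2(F.concat(F.lit(SECRET_SALT), F.col("user_id")), 256),
    )
    .drop("user_id")
)

pseudonymized.show(truncate=False)
```

A plain unsalted hash would be vulnerable to dictionary attacks on low-entropy identifiers such as email addresses, which is why the salt (or a keyed construction like HMAC) matters; downstream consumers see only the pseudonymous key.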