Image and Video
Privacy Protection of Face Image Data
In recent years, with the development of AI technology, face recognition has been widely deployed in applications such as attendance and access control, identity authentication, and video surveillance. Face images, however, are highly sensitive personal information. Throughout a face recognition system (both the training stage and the recognition stage), they must be protected safely and effectively to prevent attackers from stealing, modifying, or forging them.
In recent years, data protection regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have been enacted to regulate the collection, storage, transmission, processing, and use of face data, addressing data abuse, data theft, privacy disclosure, and other security issues. In practice, data encryption is the most common safeguard, but plaintext face data is still needed after decryption during face recognition training. Moreover, encryption is reversible: the original face data can be recovered after decryption. The data therefore remains vulnerable to leakage and abuse.
For face-related tasks, researchers have introduced various face de-identification and k-anonymity methods to help protect privacy. Inspired by generative adversarial networks (GANs), some works train an obfuscation network in an adversarial manner to protect the input data from reconstruction attacks. Client-server split inference frameworks for DNNs have also been proposed as privacy-preserving schemes: the model is split into two parts, where the first part is deployed on the client side and applies a differential privacy (DP) mechanism to generate DP outputs. However, this line of research mainly focuses on classification tasks, incurs a certain drop in classification performance, and does not consider storage resource utilization.
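The client-server DP scheme mentioned above can be sketched as follows. This is a minimal illustration, not any specific published system: the client-side "model" is a single toy linear layer, and `privatize` follows the standard Gaussian-mechanism recipe (clip the feature norm to bound sensitivity, then add calibrated noise). All function names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_features(image, weights):
    """Client-side part of the split model: one toy linear layer + ReLU.
    A real system would run the first blocks of a face-recognition DNN here;
    `weights` stands in for those pre-trained parameters."""
    return np.maximum(image @ weights, 0.0)

def privatize(features, clip_norm=1.0, sigma=0.5):
    """Gaussian mechanism: clip the feature vector to bound its sensitivity,
    then add Gaussian noise. Only this noisy vector leaves the client."""
    norm = np.linalg.norm(features)
    clipped = features * min(1.0, clip_norm / norm)
    return clipped + rng.normal(0.0, sigma * clip_norm, size=features.shape)

# Toy usage: a flattened 8x8 "face" image and random client weights.
image = rng.random(64)
weights = rng.standard_normal((64, 16))
noisy = privatize(client_features(image, weights))
# The server receives only `noisy` and finishes inference with its own layers.
print(noisy.shape)
```

The reconstruction-attack resistance of such a scheme depends on the clipping bound and noise scale; larger `sigma` gives stronger privacy but hurts downstream accuracy, which is exactly the trade-off this project targets.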
This project focuses on a privacy protection scheme for face images, mainly for the face recognition task, with two goals: preventing face image reconstruction attacks while maintaining high recognition accuracy.
- A face image data privacy protection system, including an algorithm design document and source code
- The protected face image data should be de-identified and anonymized, and robust enough to resist reconstruction attacks.
- The protected face image data should have low information loss, with a recognition rate drop of no more than 5% at FAR = 10^-6.
- The storage footprint should be less than 50 KB.
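The accuracy requirement above is evaluated by comparing the true-accept rate (TAR) of protected versus unprotected features at a fixed false-accept rate (FAR). A rough sketch of that measurement, assuming similarity scores for genuine and impostor pairs are available (`tar_at_far` is a hypothetical helper; a real evaluation of FAR = 10^-6 needs well over a million impostor pairs, which this toy sample size cannot provide):

```python
import numpy as np

def tar_at_far(genuine, impostor, target_far=1e-6):
    """True-accept rate at a fixed false-accept rate.
    The threshold is chosen so that at most a `target_far` fraction of
    impostor-pair scores exceeds it."""
    scores = np.sort(impostor)
    # Index below which a (1 - target_far) fraction of impostor scores lies.
    k = int(np.ceil((1.0 - target_far) * len(scores)))
    threshold = scores[min(k, len(scores) - 1)]
    return np.mean(genuine >= threshold), threshold

# Toy scores: genuine pairs score higher than impostor pairs on average.
rng = np.random.default_rng(1)
genuine = rng.normal(0.7, 0.1, 10_000)
impostor = rng.normal(0.2, 0.1, 10_000)
tar, thr = tar_at_far(genuine, impostor)
print(f"TAR at FAR={1e-6:g}: {tar:.3f} (threshold {thr:.3f})")
```

The acceptance criterion would then compare `tar` for protected features against the unprotected baseline and require the drop to stay within 5%.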
Related Research Topics
- face de-identification
- differential privacy
- homomorphic encryption
- generative adversarial networks (GAN)
- adversarial training
- client-server inference
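As a point of reference for the de-identification topic above, the classic non-learned baseline is pixelation: replacing each image patch with its mean intensity. It is simple but known to be weak against model-based reconstruction attacks, which motivates the learned (GAN/adversarial) obfuscation approaches listed here. A minimal sketch:

```python
import numpy as np

def pixelate(face, block=4):
    """Pixelation de-identification baseline: collapse each `block` x `block`
    patch of a grayscale face image to its mean intensity."""
    h, w = face.shape
    out = face.copy()
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i + block, j:j + block] = face[i:i + block, j:j + block].mean()
    return out

# Toy 8x8 "image" with values 0..63; each 4x4 block becomes its mean.
face = np.arange(64, dtype=float).reshape(8, 8)
anon = pixelate(face)
print(anon[0, 0], anon[0, 4])  # -> 13.5 17.5
```

Note that pixelation preserves the global mean but destroys high-frequency identity cues; it is included only as a baseline, not as the protection scheme this project aims to deliver.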
Suggested Collaboration Method
AIR (Alibaba Innovative Research), a one-year collaboration project.