Privacy Preserving Computational Cameras
The ongoing transformation of computer vision research is driven by two important trends. The mobile revolution has made billions of networked cameras available, bringing computer vision to the Internet of Things (IoT). In addition, the advent of deep learning has enabled inference on large datasets, improving existing vision techniques and enabling novel applications. These advances have the potential to positively impact a wide range of fields, including security, search and rescue, agriculture, environmental monitoring, exploration, health, and energy. However, releasing millions of networked vision sensors into the world without privacy safeguards would likely provoke significant societal push-back and legal restrictions. We aim to expand the range of places and personal devices where connected cameras can be deployed by developing privacy preserving computational cameras that perform efficient and robust privacy processing at the camera level. To this end, we present novel computational cameras that perform privacy processing via optical filtering of the incident light-field and via sensor-level application-specific integrated circuits (ASICs). We also present a novel learning framework that, through adversarial training, yields an encoder that permanently limits inference of a chosen private attribute while preserving either a generic notion of information or the estimation of a different desired attribute.
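The adversarial-training idea behind the learning framework can be illustrated with a toy example: an encoder is updated to support a utility task while driving an adversary's predictions of the private attribute toward chance. The sketch below is a minimal linear/logistic NumPy version on synthetic data; it illustrates the general principle only, not the networks, objectives, or hyperparameters of the actual work, and every name and constant here is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: column 0 carries the useful signal, column 1 the private attribute.
n = 2000
X = rng.normal(size=(n, 2))
y = X[:, 0]                       # desired (utility) target
s = (X[:, 1] > 0).astype(float)   # private binary attribute

W = np.eye(2)                     # linear "encoder" (learned)
a = 0.1 * rng.normal(size=2)      # adversary: logistic classifier on the encoding
v = 0.1 * rng.normal(size=2)      # utility head: linear regressor on the encoding

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr, lam = 0.05, 2.0               # lam trades utility against privacy

for _ in range(2000):
    Z = X @ W
    # Adversary step: fit the private attribute s from the current encoding.
    p = sigmoid(Z @ a)
    a -= lr * Z.T @ (p - s) / n
    # Utility step: fit the desired target y from the encoding.
    v -= lr * Z.T @ (Z @ v - y) / n
    # Encoder step: keep the utility loss low while pushing the adversary's
    # predictions toward chance (p -> 0.5); constants are folded into lam.
    p = sigmoid(Z @ a)
    g_util = X.T @ np.outer(Z @ v - y, v) / n
    g_priv = X.T @ np.outer((p - 0.5) * p * (1 - p), a) / n
    W -= lr * (g_util + lam * g_priv)

Z = X @ W
adv_acc = np.mean((sigmoid(Z @ a) > 0.5) == (s > 0.5))
util_mse = np.mean((Z @ v - y) ** 2)
print(f"adversary accuracy ~{adv_acc:.2f}, utility MSE ~{util_mse:.3f}")
```

With the privacy term active, the encoder learns to suppress the column that carries the private attribute, so the adversary degrades toward chance while the utility regressor stays accurate.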
Learning Privacy Preserving Encodings through Adversarial Training
Francesco Pittaluga, Sanjeev J. Koppal, Ayan Chakrabarti
arXiv Preprint 2018
We present a framework to learn privacy-preserving encodings of images (or other high-dimensional data) that inhibit inference of a chosen private attribute. Rather than encoding a fixed dataset or inhibiting a fixed estimator, we aim to learn an encoding function such that, even after this function is fixed, an estimator with knowledge of the encoding is unable to learn to accurately predict the private attribute when generalizing beyond a training set. We formulate this as adversarial optimization of an encoding function against a classifier for the private attribute, with both modeled as deep neural networks. We describe an optimization approach that successfully yields an encoder that permanently limits inference of the private attribute while preserving either a generic notion of information or the estimation of a different, desired attribute. We experimentally validate the efficacy of our approach on private tasks of real-world complexity by learning to prevent detection of scene classes from the Places-365 dataset.
Paper (arXiv Preprint 2018)

Pre-capture Privacy for Small Vision Sensors
Francesco Pittaluga, Sanjeev J. Koppal
PAMI 2017, CVPR 2015
The next wave of micro and nano devices will create a world with trillions of small networked cameras, leading to increased concerns about privacy and security. Most privacy preserving algorithms for computer vision are applied after image/video data has been captured. We propose privacy preserving optics that filter or block sensitive information directly from the incident light-field before sensor measurements are made, adding a new layer of privacy. In addition to balancing the privacy and utility of the captured data, we address trade-offs unique to miniature vision sensors, such as achieving a high-quality field-of-view and resolution within tight mass and volume constraints. Our privacy preserving optics enable applications such as depth sensing, full-body motion tracking, people counting, blob detection, and privacy preserving face recognition. While we demonstrate these applications on macro-scale devices (smartphones, webcams, etc.), our theory applies to much smaller devices.
Paper (PAMI 2017)
Paper (CVPR 2015)
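The pre-capture idea, removing sensitive detail optically so it never reaches the sensor, can be illustrated in simulation: convolving the scene with a defocus kernel destroys fine, identifying texture while coarse, blob-level information (useful for, e.g., people counting or motion tracking) survives. Below is a toy NumPy sketch with an invented scene and a simple box kernel standing in for the optical blur; it is not the paper's optical design.

```python
import numpy as np

def box_blur(img, k):
    """Simulate a defocusing optic: average over a k x k support before 'capture'."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# Toy 'scene': a bright person-sized blob plus fine high-frequency texture
# (standing in for identifying detail such as a face).
rng = np.random.default_rng(0)
scene = np.zeros((64, 64))
scene[20:40, 25:35] = 1.0                        # coarse blob (the person)
scene[24:32, 27:33] += 0.5 * rng.random((8, 6))  # fine identifying texture

# The optics remove fine detail before the sensor measures anything.
captured = box_blur(scene, 9)

def centroid(im):
    """Intensity-weighted centroid, a blob-level statistic that blur preserves."""
    ys, xs = np.indices(im.shape)
    return (ys * im).sum() / im.sum(), (xs * im).sum() / im.sum()

print("centroid shift:", np.subtract(centroid(scene), centroid(captured)))
```

The blurred capture retains the blob's location (its centroid barely moves) while its high-frequency content, and hence the identifying texture, is strongly attenuated.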
Sensor-level Privacy for Thermal Cameras
Francesco Pittaluga, Aleksandar Zivkovic, Sanjeev J. Koppal
As cameras become ubiquitous, balancing privacy and utility becomes crucial. To achieve both, we enforce privacy at the sensor level, as incident photons are converted into an electrical signal and digitized into image measurements. We present sensor protocols and accompanying algorithms that degrade facial information for thermal sensors, where there is usually a clear distinction between humans and the scene. By manipulating the sensor processes of gain, digitization, exposure time, and bias voltage, we provide privacy during the image formation process itself, so the original face data is never directly captured or stored. We demonstrate privacy-preserving thermal imaging applications such as temperature segmentation, night vision, gesture recognition, and HDR imaging.
Paper (ICCP 2016)
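The gain/bias idea can be sketched in simulation: if gain and bias are chosen so that the human temperature band maps at or above the ADC's ceiling, every pixel on a person saturates to the same digital count during image formation, and facial detail is never digitized. The toy NumPy model below uses invented temperatures, gain, and bias values for illustration; it is not the paper's calibrated protocol.

```python
import numpy as np

def digitize_with_bias_gain(temps, gain, bias, bits=8):
    """Simulate on-sensor digitization: analog temperature signal -> ADC counts.
    Gain/bias chosen so the human band maps above the ADC ceiling will
    saturate people to one value during image formation itself."""
    counts = gain * temps + bias
    return np.clip(np.round(counts), 0, 2 ** bits - 1)

rng = np.random.default_rng(1)
background = rng.uniform(5.0, 25.0, size=(32, 32))       # scene temperatures (deg C)
frame = background.copy()
frame[10:22, 12:20] = rng.uniform(30.0, 37.0, (12, 8))   # a person, with facial variation

# Hypothetical settings: ~30 deg C already hits the 8-bit ceiling (9*30 - 15 = 255),
# so the whole human band clips to 255 while the cooler background stays linear.
img = digitize_with_bias_gain(frame, gain=9.0, bias=-15.0)
```

After digitization, the person's region is a single flat value (no facial structure was ever recorded), while background temperature variation, useful for tasks like temperature segmentation, is preserved.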