Event-based Dual Photography for Transparent Scene Reconstruction

Light transport captures all of the information about how light travels between the light source and the image sensor. Dual photography, an important application of light transport, has been a popular research topic, but it is challenged by long acquisition times, low signal-to-noise ratio, and the storage and computation demands of a large number of measurements. In this paper, we propose a novel hardware setup that combines a flying-spot MEMS-modulated projector with an event camera to implement dual photography for 3D scanning in both line-of-sight (LoS) and non-line-of-sight (NLoS) scenes containing a transparent object. In particular, using the event light transport, we achieve depth extraction from the LoS scenes and 3D reconstruction of the object in an NLoS scene.
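
The core relation behind dual photography is Helmholtz reciprocity: if the light-transport matrix T maps projector pixels to camera pixels, then its transpose T^T renders the scene from the projector's point of view. The sketch below illustrates only this classical relation, using a random placeholder matrix and illustrative resolutions; it does not model the event-based acquisition described in the paper.

```python
# Minimal sketch of the classical dual-photography relation, not the
# paper's event-based pipeline. T is placeholder data; in practice each
# column of T is measured by illuminating one projector pixel at a time.
import numpy as np

n_proj, n_cam = 32 * 32, 64 * 64        # illustrative pixel counts
T = np.random.rand(n_cam, n_proj)       # light-transport matrix: c = T @ p

def primal_image(p):
    """Camera image under projector pattern p."""
    return T @ p

def dual_image(c_prime):
    """By Helmholtz reciprocity, T.T renders the scene as if the
    projector were the camera and the camera were the light source."""
    return T.T @ c_prime

dual_photo = dual_image(np.ones(n_cam))  # dual view under uniform virtual lighting
```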

Read Article →
PPMGR Dataset: Plausible Planetary Material Geometry and Reflectance Imaging Dataset

Find the full dataset here: PPMGR Dataset. Characterization of objects in all dimensions at a microscopic level is important in numerous applications, including surface analysis on…

Read Article →
Optical MEMS Enable Next Generation Solutions for Robot Vision and Human-Robot Interaction

MEMS mirror-based sensing and interaction systems designed for robots and drones are essential, as they offer solutions with the lowest power consumption, weight, and cost at high volume. However, existing MEMS mirror-based solutions have not achieved the compactness and efficiency necessary for robotics. In this paper, we describe and demonstrate MEMS mirror-based 3D perception sensing (SyMPL 3D Lidar) and animated visual messaging (Vector Graphics Laser Projection with Playzer) systems optimized for robots and drones. Each of these subsystems consumes <1 W of power, at least 10x lower than other solutions on the market, weighs <50 g, and has a small form factor. Furthermore, we show that combining these two systems leads to new capabilities and functionalities that meet the demands of robot vision and human-robot interaction.

Read Article →
SaccadeCam: Adaptive Visual Attention for Monocular Depth Sensing

Most monocular depth sensing methods use conventionally captured images that are created without considering scene content. In contrast, animal eyes have fast mechanical motions, called saccades, that control how the scene is imaged by the fovea, where resolution is highest. In this paper, we present the SaccadeCam framework for adaptively distributing resolution onto regions of interest in the scene. Our algorithm for adaptive resolution is a self-supervised network, and we demonstrate results for end-to-end learning of monocular depth estimation. We also show preliminary results with a real SaccadeCam hardware prototype.
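
As a rough illustration of distributing resolution onto a region of interest, the sketch below composites a high-resolution fovea crop, placed at the peak of an attention map, over a low-resolution wide-field image. The attention map and the `refovea` re-imaging function are hypothetical placeholders; the paper's self-supervised network is not reproduced here.

```python
# Hedged sketch of foveated compositing: re-image the region where an
# attention map peaks at high resolution and paste it into the wide view.
# `attention` and `refovea` are hypothetical stand-ins, not the paper's
# learned network or optics.
import numpy as np

def place_fovea(wide_img, attention, refovea, size=64):
    H, W = attention.shape
    cy, cx = np.unravel_index(np.argmax(attention), attention.shape)
    y0 = int(np.clip(cy - size // 2, 0, H - size))
    x0 = int(np.clip(cx - size // 2, 0, W - size))
    out = wide_img.copy()
    out[y0:y0 + size, x0:x0 + size] = refovea(y0, x0, size)  # high-res re-capture
    return out
```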

Read Article →
Dense Lissajous Sampling and Interpolation for Dynamic Light-Transport

Light transport represents the complex interactions of light in a scene. Fast, compressed, and accurate light-transport capture for dynamic scenes is an open challenge in vision and graphics. In this paper, we integrate the classical idea of Lissajous sampling with novel control strategies for dynamic light-transport applications such as relighting water drops and seeing around corners. In particular, this paper introduces an improved Lissajous projector hardware design and discusses calibration and capture for a microelectromechanical systems (MEMS) mirror-based projector. Further, we show progress towards speeding up hardware-based Lissajous subsampling for dual light-transport frames, and we investigate interpolation algorithms for recovering the missing data. Our captured dynamic light-transport results show complex light-scattering effects under dense angular sampling, and we also show dual non-line-of-sight (NLoS) capture of dynamic scenes. This work is a first step towards adaptive Lissajous control for dynamic light transport. Please see the accompanying video for all results.
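
A Lissajous scan drives the mirror's two axes sinusoidally, x(t) = sin(2π f_x t) and y(t) = sin(2π f_y t + φ); when f_x and f_y are coprime (or nearly so), the curve covers the field densely over one period. The sketch below generates such a sample pattern on a pixel grid; the frequencies, phase, and grid size are illustrative values, not the paper's hardware parameters.

```python
# Minimal sketch of Lissajous subsampling: which projector pixels a
# resonant two-axis MEMS mirror visits. fx, fy, phase, and grid size are
# illustrative, not the paper's hardware values.
import numpy as np

def lissajous_mask(fx, fy, phase, n_samples, grid=128):
    t = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    x = np.sin(2 * np.pi * fx * t)             # fast-axis deflection in [-1, 1]
    y = np.sin(2 * np.pi * fy * t + phase)     # slow-axis deflection in [-1, 1]
    ix = np.round((x + 1) / 2 * (grid - 1)).astype(int)
    iy = np.round((y + 1) / 2 * (grid - 1)).astype(int)
    mask = np.zeros((grid, grid), dtype=bool)
    mask[iy, ix] = True                        # pixels (transport columns) sampled
    return mask

mask = lissajous_mask(fx=157, fy=163, phase=0.5, n_samples=20000)
# Coprime fx, fy trace a dense, non-repeating pattern; the unsampled
# pixels are what the interpolation algorithms must later recover.
```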

Read Article →
Foveating Cameras

Most cameras today photograph their entire visual field. In contrast, decades of active vision research have proposed foveating camera designs, which allow for selective scene viewing. However, active vision’s impact is limited by slow options for mechanical camera movement. We propose a new design, called FoveaCam, which works by capturing reflections off a tiny, fast-moving mirror. FoveaCams can obtain high-resolution imagery on multiple regions of interest, even if these are at different depths and viewing directions. We first discuss our prototype and optical calibration strategies. We then outline a control algorithm for the mirror to track target pairs. Finally, we demonstrate a practical application of the full system to enable eye tracking at a distance for frontal faces.
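
To show the flavor of time-multiplexed mirror control, the sketch below alternates the mirror's fixation between two targets each frame and nudges the commanded angles toward the latest target estimate. The proportional gain and the strict frame-by-frame alternation are illustrative assumptions, not the paper's controller.

```python
# Hedged sketch of alternating mirror control for a target pair. The
# proportional gain and frame-by-frame alternation are assumptions; the
# paper's actual control algorithm is not reproduced here.
import numpy as np

def mirror_commands(target_xy, n_frames, gain=0.6):
    """target_xy: list of two 2-vectors of estimated target positions."""
    aim = np.zeros(2)                       # current commanded mirror angles
    commands = []
    for k in range(n_frames):
        tgt = np.asarray(target_xy[k % 2])  # alternate fixation each frame
        aim = aim + gain * (tgt - aim)      # proportional step toward the target
        commands.append(aim.copy())
    return commands
```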

Read Article →
Towards a MEMS-based Adaptive LIDAR

Most active depth sensors sample their visual field using a fixed pattern, decided by accuracy, speed, and cost trade-offs rather than by scene content. However, a number of recent works have demonstrated that adapting measurement patterns to scene content can offer significantly better trade-offs. We propose a hardware LIDAR design that allows flexible real-time measurements according to dynamically specified measurement patterns. Our flexible depth sensor design consists of a controllable scanning LIDAR that can foveate, or increase resolution in regions of interest, and that can fully leverage the power of adaptive depth sensing. We describe our optical setup and calibration, which enable fast sparse depth measurements using a scanning MEMS (micro-electromechanical) mirror. We validate the efficacy of our prototype LIDAR design by testing on over 75 static and dynamic scenes spanning a range of environments. We also show CNN-based depth-map completion from measurements obtained by our sensor. Our experiments show that our sensor can realize adaptive depth sensing systems.
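
As a toy example of foveated sampling under a fixed measurement budget, the sketch below spends most of the budget inside a region-of-interest mask and scatters the remainder over the background. The 80/20 split and uniform-random placement are illustrative assumptions, not the sensor's actual scan policy.

```python
# Hedged sketch of budgeted, foveated LIDAR sampling. The ROI/background
# split and random placement are assumptions for illustration only.
import numpy as np

def adaptive_pattern(roi_mask, budget, roi_frac=0.8, seed=0):
    rng = np.random.default_rng(seed)
    ry, rx = np.nonzero(roi_mask)          # candidate ROI pixels
    by, bx = np.nonzero(~roi_mask)         # candidate background pixels
    n_roi = min(int(budget * roi_frac), len(ry))
    n_bg = min(budget - n_roi, len(by))
    i = rng.choice(len(ry), size=n_roi, replace=False)
    j = rng.choice(len(by), size=n_bg, replace=False)
    pts = np.concatenate([np.stack([ry[i], rx[i]], axis=1),
                          np.stack([by[j], bx[j]], axis=1)])
    return pts                             # (row, col) pairs for the MEMS scanner
```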

Read Article →
Flying-Dot Photography

Light transport captures a scene’s visual complexity. Acquiring light transport for dynamic scenes is difficult, since any change in viewpoint, materials, illumination, or geometry also varies the transport. One strategy for capturing dynamic light transport is to use a fast “flying-dot” projector, i.e., one in which an impulse light probe is quickly scanned across the scene. We have built a novel fast flying-dot projector prototype using a high-speed camera and a scanning MEMS (micro-electromechanical system) mirror. Our contributions are calibration strategies that enable dynamic light-transport acquisition at near video rates with such a system. We develop new methods for overcoming the effects of MEMS mirror resonance. We utilize new algorithms for denoising impulse scanning at high frame rates and compare the trade-offs in visual quality between frame rate and illumination power. Finally, we show the utility of our calibrated setup by demonstrating graphics applications such as video relighting, direct/global separation, and dual videography for dynamic scenes such as fog, water, and glass.
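
With impulse scanning, each high-speed camera frame captured while a single dot is lit forms one column of the transport matrix, after which relighting reduces to a matrix-vector product and dual videography uses the transpose. The sketch below assumes a hypothetical `capture_frame` readout synchronized to the dot position; it is an idealized illustration, not the calibrated pipeline described in the paper.

```python
# Idealized sketch of flying-dot transport capture: one lit dot per frame
# yields one column of T. `capture_frame` is a hypothetical stand-in for
# the synchronized high-speed camera readout.
import numpy as np

def acquire_transport(n_proj, n_cam, capture_frame):
    T = np.zeros((n_cam, n_proj))
    for j in range(n_proj):                # scan the impulse dot over pixels
        T[:, j] = capture_frame(dot=j)     # frame under impulse illumination j
    return T

# Relighting a frozen scene under pattern p is then T @ p, and dual
# videography renders the projector's view via T.T (Helmholtz reciprocity).
```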

Read Article →
Revealing Scenes by Inverting Structure from Motion Reconstructions

Read Article →
Learning Privacy Preserving Encodings through Adversarial Training

Read Article →