Research Interest


Panoptes: Infrastructure Camera Control. Steerable surveillance cameras offer a unique opportunity to support multiple vision applications simultaneously. However, state-of-the-art camera systems do not support this, as they are typically limited to one application per camera. We believe the one-to-one binding between a steerable camera and its application should be broken: the camera can then be quickly moved to a new view needed to support a different vision application. When done well, the scheduling algorithm can support a larger number of applications over an existing network of surveillance cameras. With this in mind, we developed Panoptes, a technique that virtualizes a camera view and presents a different fixed view to each application. A scheduler uses camera controls to move the camera appropriately, providing each application its expected view in a timely manner while minimizing the impact on application performance. Experiments with a live camera setup demonstrate that Panoptes can support multiple applications, capturing up to 80% more events of interest in a wide scene compared to a fixed-view camera. [IPSN 2017]
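The scheduling idea can be illustrated with a minimal sketch, not taken from the paper: applications request their virtual views by a deadline, and a greedy earliest-deadline-first scheduler steers the camera between view presets, paying a (hypothetical) fixed move cost. The class and parameter names here are illustrative assumptions, not the Panoptes API.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ViewRequest:
    deadline: float                  # latest time the view is still useful
    app: str = field(compare=False)  # requesting application (illustrative)
    view: int = field(compare=False) # preset pan/tilt/zoom position

def schedule(requests, move_time=1.0):
    """Greedy earliest-deadline-first sketch: serve each application's
    virtual view before its deadline when possible, charging a fixed
    cost each time the camera must be steered to a different preset."""
    heap = list(requests)
    heapq.heapify(heap)              # orders requests by deadline
    t, served, missed = 0.0, [], []
    current_view = None
    while heap:
        req = heapq.heappop(heap)
        if req.view != current_view:
            t += move_time           # cost of physically steering the camera
            current_view = req.view
        (served if t <= req.deadline else missed).append(req.app)
    return served, missed
```

A real scheduler would also weigh per-application utility and periodic re-requests; this sketch only shows why breaking the one-to-one camera-application binding lets one camera serve several views.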
Activity Sensing using Ceiling Photosensors. This project explores the feasibility of tracking human motion and activities using visible light. Shadows created by casting visible light on humans and objects are sensed using sensors that are embedded along with the light sources. Existing Visible Light Sensing (VLS) techniques require either light sensors deployed on the floor or a person carrying a device. Our approach instead measures light reflected off the floor, yielding an entirely device-free, light-source-based system. We co-locate photosensors with LED light sources to observe the changes in light level occurring on the floor. The system employs a highly sensitive light measurement technique, together with a time-division flickering scheme, to differentiate between light nodes. [ACM VLCS 2016]
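The time-division idea can be sketched as follows, under the assumption (mine, not the paper's) of a simple round-robin slot assignment: each LED node flickers in its own slot, so a sensor sample can be attributed to exactly one node, and a shadow shows up as a drop in that node's reflected-light level relative to its baseline. Function names and the threshold are illustrative.

```python
def demux_slots(samples, n_nodes):
    """Separate an interleaved sample stream into per-node streams,
    assuming round-robin slots: sample i belongs to node i % n_nodes."""
    per_node = [[] for _ in range(n_nodes)]
    for i, s in enumerate(samples):
        per_node[i % n_nodes].append(s)
    return per_node

def detect_shadow(per_node, baseline, threshold=0.2):
    """Flag a node when its mean reflected level drops more than
    `threshold` (fraction) below that node's empty-room baseline."""
    flags = []
    for levels, base in zip(per_node, baseline):
        mean = sum(levels) / len(levels)
        flags.append(mean < base * (1 - threshold))
    return flags
```

This omits the high-sensitivity measurement front end entirely; it only shows how slot-based flickering makes per-node light levels separable at a shared photosensor.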
TextureCode. Embedded screen–camera communication techniques encode information in screen imagery that can be decoded with a camera receiver yet remains unobtrusive to the human observer. We study the design space for flicker-free embedded screen–camera communication. In particular, we identify a dimension orthogonal to prior work, spatial content-adaptive encoding, and observe that combining multiple dimensions is essential to achieve both high capacity and minimal flicker. Building on these insights, we develop TextureCode, a set of content-adaptive encoding techniques that exploit visual features such as edges and texture to communicate information unobtrusively. TextureCode achieves an average goodput of about 22 kbps, significantly outperforming existing work while remaining flicker-free. [INFOCOM 2016]
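A toy sketch of content-adaptive embedding, assuming (my simplification, not TextureCode's actual encoder) that per-block intensity variance stands in for texture: bits are embedded only in textured blocks, where small brightness modulation is least visible to the eye.

```python
def texture_score(block):
    """Crude texture proxy: variance of pixel intensities in a block."""
    mean = sum(block) / len(block)
    return sum((p - mean) ** 2 for p in block) / len(block)

def select_blocks(blocks, threshold):
    """Content-adaptive selection: keep only blocks textured enough
    to hide temporal modulation from the human observer."""
    return [i for i, b in enumerate(blocks) if texture_score(b) > threshold]

def embed(blocks, bits, threshold, delta=2):
    """Embed one bit per selected block by brightening (+delta) for 1
    or darkening (-delta) for 0; smooth blocks are left untouched."""
    idx = select_blocks(blocks, threshold)
    out = [list(b) for b in blocks]
    for i, bit in zip(idx, bits):
        sign = delta if bit else -delta
        out[i] = [p + sign for p in out[i]]
    return out, idx
```

A receiver would recover the bits by comparing successive frames in the selected blocks; the real system layers this spatial adaptivity with temporal encoding to stay flicker-free at high goodput.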


Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.