Researchers at the Indian Institute of Technology Madras (IIT-Madras) and Rice University in the United States have developed algorithms for lensless, miniature cameras. Such cameras have vision applications in augmented and virtual reality (AR/VR), security, smart wearables, and robotics, where cost, form factor, and weight are major constraints.
Lensless cameras do not have a lens, which in a conventional camera acts as the focusing element that allows the sensor to capture a sharp photograph. Without this focusing element, a lensless camera captures a multiplexed, globally blurred measurement of the scene. The IIT-Madras and Rice University researchers have developed a deep learning algorithm that produces photo-realistic images from this blurred lensless capture. Removing the lens allows the camera to be miniaturised, and researchers worldwide are searching for substitutes for lenses.
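To make the idea of a multiplexed capture concrete, the sketch below simulates what such a camera records, under the common simplifying assumption that a mask-based lensless measurement is approximately a convolution of the scene with a large, pseudorandom point spread function (PSF). The function names, the random PSF, and the noise level are illustrative placeholders, not the calibrated optical model used in the actual work.

```python
import numpy as np
from numpy.fft import fft2, ifft2

# Toy illustration (not the authors' exact optical model): a mask-based
# lensless camera is often approximated as a convolution of the scene with
# a large pseudorandom PSF, so every scene point spreads over most of the
# sensor -- producing the "globally blurred" capture described above.

rng = np.random.default_rng(0)

def simulate_lensless_capture(scene, psf, noise_sigma=0.01):
    """Circular convolution of the scene with the PSF, plus sensor noise."""
    H = fft2(psf, s=scene.shape)
    measurement = np.real(ifft2(fft2(scene) * H))
    measurement += noise_sigma * rng.standard_normal(scene.shape)
    return measurement

scene = rng.random((256, 256))   # stand-in for a real image
psf = rng.random((256, 256))     # hypothetical pseudorandom mask PSF
psf /= psf.sum()                 # normalise the PSF energy
capture = simulate_lensless_capture(scene, psf)
```

Recovering the scene from such a capture amounts to inverting this multiplexing, which is the role of the reconstruction algorithm described next.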
The IIT-Madras team was led by Kaushik Mitra, Assistant Professor in the Department of Electrical Engineering, while Professor Ashok Veeraraghavan led the research team at Rice University. In 2016, Veeraraghavan's lab at Rice University developed a lensless camera in which a thin optical mask was placed approximately 1 mm in front of the sensor. However, because it lacked a focusing element, the camera produced blurred images, which restricted its commercial use.
IIT-Madras and Rice researchers have now developed a computational solution to this problem: a de-blurring algorithm that corrects the blurred images captured by a lensless camera. The findings were presented as a paper at the IEEE International Conference on Computer Vision, and an extended version appeared in IEEE Transactions on Pattern Analysis and Machine Intelligence.
Mitra explained that existing deblurring algorithms based on traditional optimisation schemes yield low-resolution, noisy images. The research teams instead used deep learning to develop a reconstruction algorithm for lensless cameras, called FlatNet, which delivers a significant improvement in image quality. FlatNet was tested on a variety of real and challenging scenes and was found to be effective at de-blurring images captured by the lensless camera, he added.
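As a rough illustration of how such a learned reconstruction can be organised, the hedged sketch below pairs a trainable inversion stage with a small convolutional refinement network. The layer sizes, initialisation, and refinement network here are placeholders chosen for brevity, not the exact FlatNet architecture described in the paper.

```python
import torch
import torch.nn as nn

# Illustrative two-stage learned reconstruction (assumed structure, not the
# authors' exact design): a trainable linear inversion maps the multiplexed
# measurement to an intermediate image, and a small CNN refines it.

class TwoStageReconstructor(nn.Module):
    def __init__(self, size=64):
        super().__init__()
        # Stage 1: trainable separable inversion applied to rows and columns
        # (hypothetical identity initialisation for this sketch).
        self.w_left = nn.Parameter(torch.eye(size))
        self.w_right = nn.Parameter(torch.eye(size))
        # Stage 2: lightweight refinement CNN (stand-in for a larger network).
        self.refine = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, measurement):                       # (B, 1, H, W)
        x = self.w_left @ measurement @ self.w_right.T    # learned inversion
        return self.refine(x)                             # refinement stage

model = TwoStageReconstructor(size=64)
fake_measurement = torch.randn(1, 1, 64, 64)
reconstruction = model(fake_measurement)                  # (1, 1, 64, 64)
```

In practice, both stages would be trained end to end on pairs of lensless captures and ground-truth images so that the inversion and the refinement network learn to undo the mask's multiplexing together.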
Lensless imaging is a new technology, and its potential for solving imaging problems has not been fully explored. The researchers are designing lensless cameras using data-driven techniques, devising efficient algorithms for inference on lensless captures, and exploring applications such as endoscopy and smart surveillance.
The basic architecture of a camera has remained the same for many years: a lens focuses the incoming light rays from the scene onto the sensor, which records a 2D projection of the scene. Although conventional imaging is ubiquitous, it suffers from a bulky form factor, weight, and expensive optics. Lensless imaging systems do away with the bulky lens, replacing it with an ultra-thin optical mask and the accompanying computation. The result is a lensless camera that is only a few millimetres thick, lightweight, and cost-effective.
The research was funded by the National Science Foundation (NSF) CAREER and NSF Expeditions programmes and the Defense Advanced Research Projects Agency (DARPA) Neural Engineering System Design programme in the United States, among others.