Cameras have become a ubiquitous interface between the real world and computers. Although their applications span disciplines, today's cameras acquire information much as they did in the 19th century: they aim to record an ideal image of the scene through a complex stack of lenses, and computation is performed only after capture. I will investigate computational cameras that learn to replace the function of lenses with differentiable nanophotonic computation, allowing us to lift fundamental limitations of conventional cameras. Learning wavefront manipulation at a sub-wavelength scale will make it possible not only to develop ultra-thin cameras, two orders of magnitude thinner and lighter than today's, but also to perform neural-network computation in the optics before sensing, at the speed of light, and even computation in the scene itself, conceptually turning diffuse scene surfaces into mirrors that become part of tomorrow's cameras.
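To make the idea of optics as differentiable computation concrete, the following is a minimal sketch, not the proposed system: a single learnable phase mask is simulated with the standard angular-spectrum propagation method, and gradients of an imaging loss flow back into the optical design. The grid size, pixel pitch, propagation distance, simple focusing objective, and step size are all illustrative assumptions.

```python
import jax
import jax.numpy as jnp

N = 128                 # simulation grid (pixels), assumed
PITCH = 1e-6            # grid pitch: 1 um, assumed
WAVELEN = 550e-9        # design wavelength: green light
DIST = 1e-3             # mask-to-sensor distance: 1 mm, assumed

def angular_spectrum(field, dist):
    """Propagate a complex field by `dist` via the angular-spectrum method."""
    fx = jnp.fft.fftfreq(N, d=PITCH)
    fxx, fyy = jnp.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components are clipped.
    arg = 1.0 / WAVELEN**2 - fxx**2 - fyy**2
    kz = 2 * jnp.pi * jnp.sqrt(jnp.maximum(arg, 0.0))
    return jnp.fft.ifft2(jnp.fft.fft2(field) * jnp.exp(1j * kz * dist))

def sensor_psf(phase):
    """Point-spread function: plane wave -> phase mask -> sensor intensity."""
    field = jnp.exp(1j * phase)              # unit-amplitude incident wave
    field = angular_spectrum(field, DIST)
    psf = jnp.abs(field) ** 2
    return psf / psf.sum()

def loss(phase, target_psf):
    """Match the optical PSF to a desired target."""
    return jnp.mean((sensor_psf(phase) - target_psf) ** 2)

# Illustrative objective: focus energy into the central pixel, like a lens.
target = jnp.zeros((N, N)).at[N // 2, N // 2].set(1.0)

phase = jnp.zeros((N, N))                    # flat mask to start
grad_fn = jax.jit(jax.grad(loss))
for step in range(200):                      # plain gradient descent;
    phase = phase - 1e2 * grad_fn(phase, target)  # step size is arbitrary
```

In practice, the hand-picked focusing target would be replaced by a task loss, for instance the reconstruction error of a neural network consuming the sensor measurements, which is what makes this end-to-end framing more powerful than classical lens design.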