School of Computing Science
CS Researchers Develop a New Tool That Brings Blender-like Lighting Control to Any Photograph
Lighting plays a crucial role in visual storytelling. Whether it's film or photography, creators spend countless hours, and often significant budgets, crafting the perfect illumination for their shot. But once a photograph or video is captured, the illumination is essentially fixed. Adjusting it afterward, a task called relighting, typically demands time-consuming manual work by skilled artists.
While some generative AI tools attempt to tackle this task, they rely on large-scale neural networks and billions of training images to guess how light might interact with a scene. The process is often a black box: users can't control the lighting directly or understand how the result was generated, which leads to unpredictable outputs that can stray from the original content of the scene. Getting the result one envisions often requires prompt engineering and trial-and-error, hindering the user's creative vision.
In a new paper to be presented at this year's SIGGRAPH conference in Vancouver, researchers in the Computational Photography Lab at SFU offer a different approach to relighting. Their work, , brings explicit control over lights, typically available in computer graphics software such as Blender or Unreal Engine, to image and photo editing.
Given a photograph, the method begins by estimating a 3D version of the scene. This 3D model represents the shape and surface colors of the scene, while intentionally leaving out any lighting. Creating this 3D representation is made possible by prior works, including previously developed research from the Computational Photography Lab.
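The idea of a lighting-free scene representation can be illustrated with the classic intrinsic decomposition model, in which an image is the per-pixel product of albedo (surface color) and shading (the effect of illumination). The sketch below is purely illustrative, with invented toy data; it is not the lab's actual estimation method, which uses learned networks on real photographs.

```python
import numpy as np

# Toy illustration of the intrinsic model I = A * S: an image I is the
# per-pixel product of albedo A (lighting-free surface color) and a
# grayscale shading layer S (the effect of illumination).
rng = np.random.default_rng(0)

albedo = rng.uniform(0.2, 0.9, size=(4, 4, 3))   # surface color, no lighting
shading = rng.uniform(0.1, 1.0, size=(4, 4, 1))  # illumination effect

image = albedo * shading  # the "observed photograph" under this model

# If the shading can be estimated, the lighting-free albedo is
# recovered by per-pixel division, ready to be relit from scratch.
recovered_albedo = image / shading
print(np.allclose(recovered_albedo, albedo))  # True
```

In practice, estimating the shading layer from a single photograph is the hard part; that is what the lab's earlier intrinsic decomposition research addresses.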
"After creating the 3D scene, users can place virtual light sources into it, much like they would in a real photo studio or 3D modeling software," explains Chris Careaga, a PhD student at SFU and the lead author of the work. "We then interactively simulate the light sources defined by the user with well-established techniques from computer graphics."
"The result is a rough preview of the scene under the new lighting, but it doesn't quite look realistic on its own," Careaga explains. In this new work, the researchers have developed a neural network that transforms this rough preview into a realistic photograph.
"What makes our approach unique is that it gives users the same kind of lighting control youd expect in 3D tools like Blender or Unreal Engine," Careaga adds. "By simulating the lights, we ensure our result is a physically accurate rendition of the user's desired lighting."
Their approach makes it possible to insert new light sources into images and have them interact realistically with the scene, enabling relit images that were previously impossible to achieve.
The team's relighting system currently works with static images, but they are interested in extending it to video in the future, which would make it an invaluable tool for VFX artists and filmmakers.
"As this technology continues to develop, it could save independent filmmakers and content creators a significant amount of time and money," explains Dr. Yağız Aksoy, who leads the Computational Photography Lab at SFU. "Instead of buying expensive lighting gear or reshooting scenes, they can make realistic lighting changes after the fact, without having to filter their creative vision through a generative AI model."
This paper is the latest in a series of illumination-aware research projects from the Computational Photography Lab. The group's earlier work on intrinsic decomposition lays the groundwork for their new relighting method, and they break down how it all connects in their explainer video.
You can find out more about the Computational Photography Lab's research on their .
Available SFU Experts
Chris Careaga, PhD Student, Computing Science | chris_careaga@sfu.ca
Yagiz Aksoy, Assistant Professor, Computing Science | yagiz@sfu.ca