The "Friends" footage you haven't seen, but the AI has made up for it|ECCV 2022

By    3 Aug 2022

After that, NeRF (Neural Radiance Fields) is used to reconstruct detailed 3D scene information. A neural radiance field optimizes a volumetric representation of the scene as a continuous vector-valued function of 5D coordinates, consisting of a 3D position and a 2D viewing direction. To render an image, these 5D coordinates are sampled along camera rays, and the predicted colors and densities are composited into pixel values.
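To make that concrete, below is a minimal NumPy sketch of the rendering step just described: 5D samples (3D position plus viewing direction) are taken along a camera ray, a radiance field returns a color and density for each sample, and the samples are alpha-composited into a pixel color. This is only an illustrative sketch, not the paper's implementation, and the `radiance_field` callable is a hypothetical stand-in for the trained network.

```python
import numpy as np

def composite_ray(ray_o, ray_d, radiance_field, near=0.1, far=6.0, n_samples=64):
    """Render one pixel by sampling a radiance field along a camera ray.

    ray_o, ray_d   : (3,) ray origin and unit direction
    radiance_field : callable (positions (N,3), directions (N,3)) -> (rgb (N,3), sigma (N,))
                     -- hypothetical stand-in for the trained NeRF network
    """
    # Sample depths along the ray and build the 5D inputs (3D position + view direction).
    t = np.linspace(near, far, n_samples)                        # (N,)
    positions = ray_o[None, :] + t[:, None] * ray_d[None, :]     # (N, 3)
    directions = np.repeat(ray_d[None, :], n_samples, axis=0)    # (N, 3)

    rgb, sigma = radiance_field(positions, directions)           # per-sample color and density

    # Volume rendering: alpha-composite the samples from front to back.
    dists = np.append(np.diff(t), (far - near) / (n_samples - 1))  # segment lengths
    alpha = 1.0 - np.exp(-sigma * dists)                         # opacity of each segment
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = transmittance * alpha                              # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)                  # final pixel color
```

A full renderer repeats this for every pixel, and training optimizes the field's parameters so that the composited colors match the input frames.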


The next step is to process the information about the characters in the scene. For multi-camera shots, NeRF can reconstruct the 3D information of the human body directly once the characters' locations have been determined.
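As an illustration of why multiple synchronized views make this step straightforward, the sketch below triangulates a character keypoint seen in several calibrated cameras back into 3D with a standard linear (DLT) least-squares solve. The projection matrices and 2D keypoints are assumed inputs; the paper's actual human reconstruction pipeline is more involved than this.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Triangulate one 3D point from its 2D projections in several calibrated cameras.

    proj_mats : list of (3, 4) camera projection matrices P = K [R | t]
    points_2d : list of (2,) pixel coordinates of the same keypoint in each view
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous 3D point X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)        # (2 * n_views, 4)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                # least-squares (null-space) solution, homogeneous
    return X[:3] / X[3]       # de-homogenize to a 3D point
```

With enough views, these constraints pin down each point, which is why the multi-camera case needs little extra reasoning.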


In the case of a single shot, reconstruction instead has to rely on how the human pose, the camera position, and the structure of the environment change across adjacent frames.
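One simple single-view cue of this kind, sketched below under the assumption that the floor plane of the reconstructed environment is known, is to place a character in 3D by intersecting the camera ray through their feet with that plane. This is an illustrative stand-in, not the paper's actual single-shot method.

```python
import numpy as np

def place_on_floor(cam_center, ray_dir, plane_normal, plane_point):
    """Estimate a character's 3D foot position from a single view.

    Intersects the camera ray through the character's detected feet with the
    reconstructed floor plane (given by a point on the plane and its normal).
    """
    denom = ray_dir @ plane_normal
    if abs(denom) < 1e-8:
        raise ValueError("ray is parallel to the floor plane")
    t = ((plane_point - cam_center) @ plane_normal) / denom
    if t < 0:
        raise ValueError("floor plane is behind the camera")
    return cam_center + t * ray_dir   # 3D point where the character stands
```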


As the experimental results show, the method can ultimately combine the recovered 3D information and re-render new images of the scene.


In the ablation experiment, if the camera and character information is not estimated, the final results are noticeably worse.

The researchers also analyzed the reconstructed scenes, including camera distances and the distribution of character positions, and provided editing options for deleting or inserting selected objects.


The team has now open-sourced the code and the paper's data. The researchers are from UC Berkeley's Artificial Intelligence Research lab, and they say the method is equally applicable to areas such as film and sports broadcasting.

