BSLIVE X / Turning 2D Video Into 3D Animated Depth Relief Mesh
I am sharing a breakdown of this ongoing project of mine, which uses Google AI depth estimation and Blender 3D.
Basically I started from here:
http://stereo.jpn.org/jpn/stphmkr/google/colabe.html
Above is a tutorial by "spmaker", the creator of StereoPhotoMaker. He made this tool, which uses Google AI to predict and generate 3D depth-map images from a static 2D image or a sequence of 2D images.
His tutorial uses the Google Colab virtual environment, which gives you GPU power as well as prebuilt neural-network modules. Google Colab is essentially an online Jupyter Notebook for Python that you can mount to your Google Drive, which is pretty insane in itself.
Then I use the Blender VSE (Video Sequence Editor) and a bit of Blender compositing to extract frames, which I use to create animated 3D mesh relief data.
NOTE: If you have a powerful computer with a good GPU, you can actually run all of this on your own machine, or directly from Blender!
GITHUB:
https://github.com/enzyme69/blendersushi/issues/576
1. A .blend to extract photos from a Live Photo or video
2. A .blend to crop the image sequence
3. A .blend to project onto a plane and generate a 3D relief from the depth map
Enjoy.