Project author: p-ranav

Project description: Monocular Depth Estimation - Weighted-average prediction from multiple pre-trained depth estimation models
Language: Python
Repository: git://github.com/p-ranav/merged_depth.git
Created: 2021-03-19T19:57:12Z
Project homepage: https://github.com/p-ranav/merged_depth

License: MIT License

merged_depth runs (1) AdaBins, (2) DiverseDepth, (3) MiDaS, (4) SGDepth, and (5) Monodepth2, and calculates a weighted-average per-pixel absolute depth estimation.
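
In outline, the merging step stacks each model's per-pixel depth map and takes a weighted average. The sketch below only illustrates that idea; the function name and the equal weights are illustrative assumptions, not the repository's actual code.

  import numpy as np

  def merge_depths(depth_maps, weights):
      """Combine per-model depth maps (H x W arrays, in meters) into a single
      estimate using a per-pixel weighted average."""
      stacked = np.stack(depth_maps, axis=0)            # (N, H, W)
      w = np.asarray(weights, dtype=np.float64)         # one weight per model
      w = w / w.sum()                                    # normalize so weights sum to 1
      return (w[:, None, None] * stacked).sum(axis=0)   # (H, W) merged depth

  # Hypothetical usage with equal weights for the five models:
  # merged = merge_depths([adabins, diversedepth, midas, sgdepth, monodepth2],
  #                       weights=[1, 1, 1, 1, 1])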

Quick Start

First, download the pretrained models using the download_models script.

Next, run the infer script - this will run on all images in test/input and save the results to test/output.

  1. python3 -m pip install -r requirements.txt
  2. python3 -m merged_depth.utils.download_models
  3. python3 -m merged_depth.infer

If you’re using Anaconda3, the following has been tested to work on Windows:

  1. conda create --name merged_depth python=3.6
  2. conda activate merged_depth
  3. conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge
  4. python3 -m pip install -r requirements.txt
  5. python3 -m merged_depth.utils.download_models
  6. python3 -m merged_depth.infer

The results include (1) a _depth.npy file that you can load (see load_and_display_depth.py) and (2) a _stacked.png file that shows the original and colorized depth images.
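
For example, a minimal way to load and inspect one of the _depth.npy files (the path below is a placeholder; load_and_display_depth.py in the repository is the reference script):

  import numpy as np
  import matplotlib.pyplot as plt

  # Placeholder path; actual files follow the "<image-name>_depth.npy" pattern in test/output
  depth = np.load("test/output/example_depth.npy")   # (H, W) array of absolute depth in meters

  print("depth range (m):", depth.min(), "-", depth.max())
  plt.imshow(depth, cmap="plasma")
  plt.colorbar(label="depth (m)")
  plt.show()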

To run the predictor on a single input, use infer_single.py:

  1. python3 -m merged_depth.infer_single ~/foo/bar/test.png

Sample Output

The output depth is absolute depth in meters. The colorizer range is [0, 20].
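
As a rough sketch of how such a colorized view can be produced (the clipping range follows the [0, 20] meter note above; the colormap choice is an assumption, not necessarily what the repository uses):

  import numpy as np
  import matplotlib.pyplot as plt

  def colorize(depth, vmin=0.0, vmax=20.0, cmap="plasma"):
      """Map absolute depth in meters to an RGB image, clipping to [vmin, vmax]."""
      normalized = np.clip((depth - vmin) / (vmax - vmin), 0.0, 1.0)
      rgba = plt.get_cmap(cmap)(normalized)            # (H, W, 4) floats in [0, 1]
      return (rgba[..., :3] * 255).astype(np.uint8)    # drop alpha, convert to 8-bit RGB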