# MARF: The Medial Atom Ray Field Object Representation
Publication | arXiv | Training data | Network weights
**TL;DR:** We achieve fast surface rendering by predicting *n* maximally inscribed spherical intersection candidates for each camera ray.
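For intuition: each candidate is a sphere, so its hit point follows from standard ray-sphere intersection. Below is a minimal NumPy sketch of how one predicted sphere (center `c`, radius `r`) yields an intersection candidate along a ray; the helper name and values are hypothetical, not the repository's code:

```python
import numpy as np

def sphere_intersection(origin, direction, center, radius):
    """First intersection of the ray origin + t * direction (unit direction)
    with a sphere; returns t, or None on a miss."""
    oc = center - origin
    t_mid = np.dot(direction, oc)            # closest approach along the ray
    h2 = radius**2 - (np.dot(oc, oc) - t_mid**2)
    if h2 < 0:
        return None                          # ray misses the sphere
    return t_mid - np.sqrt(h2)               # near intersection

t = sphere_intersection(
    origin=np.zeros(3),
    direction=np.array([0.0, 0.0, 1.0]),
    center=np.array([0.0, 0.0, 2.0]),
    radius=0.5,
)
print(t)  # 1.5
```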
## Entering the Virtual Environment
The environment is defined in `pyproject.toml` using Poetry, and reproducibly locked in `poetry.lock`.

We provide the following ways to enter the venv:
```bash
# Requires Python 3.10 and Poetry
poetry install
poetry shell
```
```bash
# Will bootstrap a Miniconda 3.10 environment into .env/ if needed, then run poetry
source .localenv
```
## Evaluation
### Pretrained models
You can download our pre-trained models from <https://mega.nz/file/t01AyTLK#7ZNMNgbqT9x2mhq5dxLuKeKyP7G0slfQX1RaZxifayw>. The archive should be unpacked into the root directory, such that the `experiments` folder gets merged.
### The interactive renderer
We automatically create experiment names with a schema of `{{model}}-{{experiment-name}}-{{hparams-summary}}-{{date}}-{{random-uid}}`.
You can load experiment weights using either the full path or just the `random-uid` bit.
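Per the schema above, the `random-uid` is simply the last dash-separated component of the full name. A hypothetical helper (not part of the repository) that recovers it:

```python
def random_uid(experiment_name: str) -> str:
    """Return the trailing random-uid of a full experiment name.

    >>> random_uid("marf-bunny-lr1e-4-2023-01-31-nqzh")
    'nqzh'
    """
    return experiment_name.rsplit("-", 1)[-1]
```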
From the `experiments` directory:
```bash
./marf.py model {{experiment}} viewer
```
If you have downloaded our pre-trained network weights, consider trying:
```bash
./marf.py model nqzh viewer  # Stanford Bunny     (single-shape)
./marf.py model wznx viewer  # Stanford Buddha    (single-shape)
./marf.py model mxwd viewer  # Stanford Armadillo (single-shape)
./marf.py model camo viewer  # Stanford Dragon    (single-shape)
./marf.py model ksul viewer  # Stanford Lucy      (single-shape)
./marf.py model oxrf viewer  # COSEG four-legged  (multi-shape)
```
## Training and Evaluation Data
You can download a pre-computed archive from <https://mega.nz/file/9tsz3SbA#V6SIXpCFC4hbqWaFFvKmmS8BKir7rltXuhsqpEpE9wo>. It should be extracted into the root directory, such that a `data` directory appears at the repository root.
Alternatively, you can compute the data yourself:
Single-shape training data:
```bash
# takes about 23 minutes, mainly due to lucy
download-stanford bunny happy_buddha dragon armadillo lucy
preprocess-stanford bunny happy_buddha dragon armadillo lucy \
    --precompute-mesh-sv-scan-uv \
    --compute-miss-distances \
    --fill-missing-uv-points
```
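The `--compute-miss-distances` flag suggests that rays which do not hit the surface are annotated with how far they pass from it. A rough sketch of that quantity using `trimesh` (an assumed dependency; this is not the repository's preprocessing code):

```python
import numpy as np
import trimesh

def miss_distance(mesh, origin, direction, t_max=4.0, n_samples=256):
    """Approximate the closest distance between a missing ray and the
    mesh surface by sampling points along the ray."""
    t = np.linspace(0.0, t_max, n_samples)
    points = origin[None, :] + t[:, None] * direction[None, :]
    # Distance from each sample point to its closest point on the surface.
    _, dists, _ = mesh.nearest.on_surface(points)
    return dists.min()

mesh = trimesh.load("data/stanford/bunny.obj", force="mesh")  # hypothetical path
d = miss_distance(mesh, np.array([0.0, 0.0, -2.0]), np.array([1.0, 0.0, 0.0]))
```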
Multi-shape training data:
```bash
# takes about 29 minutes
download-coseg four-legged --shapes
preprocess-coseg four-legged \
    --precompute-mesh-sv-scan-uv \
    --compute-miss-distances \
    --fill-missing-uv-points
```
Evaluation data:
```bash
# takes about 2 hours 20 minutes, mainly due to lucy
preprocess-stanford bunny happy_buddha dragon armadillo lucy \
    --precompute-mesh-sphere-scan \
    --compute-miss-distances
```
```bash
# takes about 4 hours
preprocess-coseg four-legged \
    --precompute-mesh-sphere-scan \
    --compute-miss-distances
```
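The `--precompute-mesh-sphere-scan` name suggests casting rays at the mesh from viewpoints on a surrounding sphere. A minimal sketch of such a scan with `trimesh`, again hypothetical rather than the repository's implementation:

```python
import numpy as np
import trimesh

mesh = trimesh.load("data/stanford/bunny.obj", force="mesh")  # hypothetical path
mesh.apply_translation(-mesh.bounding_box.centroid)           # center the mesh

# Ray origins sampled on a sphere around the mesh, all aimed at the origin.
origins = trimesh.sample.sample_surface_sphere(1024) * 2.0
directions = -origins / np.linalg.norm(origins, axis=1, keepdims=True)

# First-hit locations for every ray that intersects the mesh.
locations, ray_ids, _ = mesh.ray.intersects_location(
    origins, directions, multiple_hits=False)
```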
## Training
Our experiments are defined using YAML config files, optionally templated using Jinja2 as a preprocessor.
These templates accept additional input from the command line in the form of `-Okey=value` options.
Our whole experiment matrix is defined in `marf.yaml.j2`. We select between different experiment groups using `-Omode={single,ablation,multi}`, and which experiment using `-Oselect={{integer}}`.
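Mechanically, this amounts to rendering the Jinja2 template with the `-O` options as context and parsing the result as YAML. A sketch of that pipeline (the option names mirror the examples above; the exact template variables are illustrative):

```python
import yaml
from jinja2 import Template

# Hypothetical equivalents of -Omode=single -Oselect=0 on the command line.
options = {"mode": "single", "select": 0}

with open("marf.yaml.j2") as f:
    rendered = Template(f.read()).render(**options)

config = yaml.safe_load(rendered)  # the final experiment configuration
```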
From the `experiments` directory:
CPU mode:

```bash
./marf.py model marf.yaml.j2 -Oexperiment_name=cpu_test -Omode=single -Oselect=0 fit
```
GPU mode:

```bash
./marf.py model marf.yaml.j2 -Oexperiment_name=gpu_test -Omode=single -Oselect=0 fit --accelerator gpu --devices 1
```
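The `--accelerator` and `--devices` flags mirror PyTorch Lightning's `Trainer` arguments. Assuming the training loop is Lightning-based (an assumption on our part), the GPU invocation roughly corresponds to:

```python
from pytorch_lightning import Trainer

# Hypothetical: `model` and `datamodule` stand in for whatever
# ./marf.py constructs from the rendered YAML config.
trainer = Trainer(accelerator="gpu", devices=1)
# trainer.fit(model, datamodule=datamodule)
```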