
gaussian_surfels's Introduction

High-quality Surface Reconstruction using Gaussian Surfels

| Project | Paper | arXiv | Data |

The code builds upon the fantastic 3DGS code base and borrows the data preprocessing/loading part from IDR.

Environment Setup

We did our experiments on Ubuntu 22.04.3 with CUDA 11.8, in a conda environment with Python 3.7.

Clone this repository:

git clone https://github.com/turandai/gaussian_surfels.git
cd gaussian_surfels

Create conda environment:

conda env create --file environment.yml
conda activate gaussian_surfels

If you need to recompile and reinstall the CUDA rasterizer:

cd submodules/diff-gaussian-rasterization
python setup.py install && pip install .

Data Preparation

We test our method on subsets of the DTU and BlendedMVS datasets. We select 15 scenes from DTU and 18 scenes from BlendedMVS, then preprocess and normalize the data following the IDR data convention. We also adopt Omnidata to generate monocular normal priors. You can download the data from here.

To test on your own unposed data, we recommend using COLMAP for SfM initialization. To estimate monocular normals for your own data, please follow Omnidata for additional environment setup. Download the pretrained model and run the normal estimation with:

cd submodules/omnidata
sh tools/download_surface_normal_models.sh
python estimate_normal.py --img_path path/to/your/image/directory

Note that precomputed normals for the aforementioned DTU and BlendedMVS scenes are included in the downloaded dataset, so you don't have to run the normal estimation for them.

Training

To train a scene:

python train.py -s path/to/your/data/directory

The trained model will be saved in output/. To render images and reconstruct a mesh from a trained model:

python render.py -m path/to/your/trained/model --img --depth 10

We use screened Poisson surface reconstruction to extract the mesh, at this line in render.py:

poisson_mesh(path, wpos, normal, color, poisson_depth, prune_thrsh)

In your output folder, xxx_plain.ply is the original mesh after Poisson reconstruction with the default depth of 10. For larger-scale scenes you may use a higher depth level; for a smoother mesh, decrease the depth value. We prune the Poisson mesh with a certain threshold to remove outlying faces and output xxx_pruned.ply. This process may sometimes over-prune the mesh and leave holes; in that case, increase the prune_thrsh parameter accordingly.
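
Editor's note: for orientation, a minimal hedged sketch of the same extract-then-prune idea using Open3D (this is not the repo's poisson_mesh; points and normals are assumed (N, 3) NumPy arrays, and the density quantile plays a role loosely analogous to prune_thrsh):

import numpy as np
import open3d as o3d

# Screened Poisson reconstruction from an oriented point cloud, then pruning
# of weakly supported (low-density) vertices. A sketch, not the repo's code.
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.normals = o3d.utility.Vector3dVector(normals)

# depth controls the octree resolution: higher gives more detail, lower a smoother mesh.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10)

# Remove vertices with low Poisson density; raising the quantile prunes more
# outlying geometry but, as noted above, risks punching holes in the mesh.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.01))
o3d.io.write_triangle_mesh("mesh_pruned.ply", mesh)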

Evaluation

To evaluate geometry accuracy on DTU, you have to download the DTU ground-truth point clouds. For BlendedMVS evaluation, we fused, denoised, and normalized the ground-truth multi-view depth maps into a global point cloud as the ground-truth geometry, which is included in our provided dataset for download. Following previous work, we use this code to calculate the Chamfer distance between the ground-truth point cloud and the reconstructed mesh.

# DTU evaluation:
python eval.py --dataset dtu --source_path path/to/your/data/directory --mesh_path path/to/your/mesh --dtu_gt_path path/to/DTU/MVSData --dtu_scanId ID
# BlendedMVS evaluation:
python eval.py --dataset bmvs --source_path path/to/your/data/directory --mesh_path path/to/your/mesh
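
Editor's note: for orientation, a minimal hedged sketch of a symmetric Chamfer distance between a reconstructed mesh and a ground-truth point cloud. The linked evaluation code additionally applies masking, culling, and DTU-specific conventions, so this illustrative version will not reproduce its numbers (file names are placeholders):

import numpy as np
import open3d as o3d
from scipy.spatial import cKDTree

mesh = o3d.io.read_triangle_mesh("mesh_pruned.ply")                # reconstruction
pred = np.asarray(mesh.sample_points_uniformly(1_000_000).points)  # sampled points
gt = np.asarray(o3d.io.read_point_cloud("gt_points.ply").points)   # ground truth

d_pred_to_gt = cKDTree(gt).query(pred)[0]   # accuracy: prediction to nearest GT
d_gt_to_pred = cKDTree(pred).query(gt)[0]   # completeness: GT to nearest prediction
chamfer = 0.5 * (d_pred_to_gt.mean() + d_gt_to_pred.mean())
print(f"Chamfer distance: {chamfer:.4f}")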

BibTeX

@inproceedings{Dai2024GaussianSurfels,
  author    = {Dai, Pinxuan and Xu, Jiamin and Xie, Wenxiang and Liu, Xinguo and Wang, Huamin and Xu, Weiwei},
  title     = {High-quality Surface Reconstruction using Gaussian Surfels},
  publisher = {Association for Computing Machinery},
  booktitle = {SIGGRAPH 2024 Conference Papers},
  year      = {2024},
  doi       = {10.1145/3641519.3657441}
}


gaussian_surfels's Issues

Error with custom datasets

Hi, thanks for your great work. I get some errors when training with custom datasets:

Loading cameras: 3500 for training and 500 for testing
Number of points at initialisation : 1269404
Training progress:   0%| | 0/40000 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "gaussian_surfels/train.py", line 300, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
  File "gaussian_surfels/train.py", line 145, in training
    loss.backward()
  File "conda_envs/envs/surfel_gs/lib/python3.10/site-packages/torch/_tensor.py", line 396, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "conda_envs/envs/surfel_gs/lib/python3.10/site-packages/torch/autograd/__init__.py", line 173, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Function _RasterizeGaussiansBackward returned an invalid gradient at index 2 - got [0, 0, 3] but expected shape compatible with [0, 16, 3]
@turandai Looking forward to your reply, thanks.
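
Editor's note (not the authors' reply): the 16 in [0, 16, 3] matches the 3DGS spherical-harmonics layout, where a maximum SH degree of 3 gives (3 + 1)^2 = 16 coefficients per Gaussian, and the leading 0 suggests zero Gaussians reached the backward pass. A hedged sanity check, assuming 3DGS-style attribute names:

import torch

# Hedged sketch: in the 3DGS convention, features_dc is (N, 1, 3) and
# features_rest is (N, (D+1)**2 - 1, 3), so the concatenated SH tensor should
# be (N, 16, 3) for max degree D = 3, with N > 0.
D = gaussians.max_sh_degree
shs = torch.cat([gaussians._features_dc, gaussians._features_rest], dim=1)
assert shs.shape[1] == (D + 1) ** 2, f"expected {(D + 1) ** 2} SH coefficients, got {shs.shape[1]}"
assert shs.shape[0] > 0, "no Gaussians to rasterize (empty or fully pruned point cloud?)"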

Could not recognize scene type!

I have followed all the steps properly with my own dataset, but during training I am getting the following error:
Traceback (most recent call last):
  File "train.py", line 300, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
  File "train.py", line 39, in training
    scene = Scene(dataset, gaussians, opt.camera_lr, shuffle=False, resolution_scales=[1, 2, 4])
  File "/home/ghost/gaussian_surfels-main/scene/__init__.py", line 55, in __init__
    assert False, "Could not recognize scene type!"
AssertionError: Could not recognize scene type!

Should I specify the path to the dataset anywhere in the code, or how can I solve this error?
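
Editor's note: a hedged sketch of the kind of dispatch the Scene loader appears to perform, reconstructed from the assertion message here and the "Found camera.npz file, assuming IDR data format!" log elsewhere on this page; the exact checks in scene/__init__.py may differ:

import os

def guess_scene_type(source_path: str) -> str:
    # COLMAP data is presumably detected via a sparse/ folder, IDR data via a
    # cameras.npz file; the exact file names are assumptions.
    if os.path.exists(os.path.join(source_path, "sparse")):
        return "colmap"
    if os.path.exists(os.path.join(source_path, "cameras.npz")):
        return "idr"
    assert False, "Could not recognize scene type!"

If this reading is right, the directory passed via -s should itself contain either sparse/ (COLMAP) or cameras.npz (IDR); no path needs to be edited in the code.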

GPU OOMs

Training on ~600 images at 1080p leads to OOM during training.
I reduced the last scale from 1 to 2 during training to make it fit; this needs to be fixed. However, after that I get OOM during the Poisson meshing:
However, after that I get OOM during the poisson meshing:

File "render.py", line 106, in render_sets
    render_set(dataset.model_path, True, "train", scene.loaded_iter, scene.getTrainCameras(scales[0]), gaussians, pipeline, background, write_image, poisson_depth)
  File "render.py", line 88, in render_set
    poisson_mesh(mesh_path, resampled[:, :3], resampled[:, 3:6], resampled[:, 6:], poisson_depth, 1 * 1e-4)
  File "./gaussian_surfels/utils/general_utils.py", line 234, in poisson_mesh
    nn_dist, nn_idx, _ = knn_points(torch.from_numpy(vert).to(torch.float32).cuda()[None], vtx.cuda()[None], K=4)
RuntimeError: CUDA out of memory. Tried to allocate 14.17 GiB (GPU 0; 44.38 GiB total capacity; 39.34 GiB already allocated; 1.34 GiB free; 41.92 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation

Update:
Reducing the scale in the render script makes it work, still wondering how it is possible to reduce the usage.
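
Editor's note: one way to bound the memory of that knn_points call is to process the query points in chunks; a hedged sketch using pytorch3d's knn_points, as in the traceback above (the chunk size is a tunable placeholder):

import torch
from pytorch3d.ops import knn_points

def chunked_knn(query: torch.Tensor, ref: torch.Tensor, K: int = 4, chunk: int = 200_000):
    # Run knn_points over slices of the query set so the pairwise workspace
    # stays bounded; the concatenated results match a single full-batch call.
    ref = ref[None].cuda()                              # (1, M, 3), kept resident
    dists, idxs = [], []
    for start in range(0, query.shape[0], chunk):
        q = query[start:start + chunk][None].cuda()     # (1, <=chunk, 3)
        nn = knn_points(q, ref, K=K)
        dists.append(nn.dists[0].cpu())
        idxs.append(nn.idx[0].cpu())
    return torch.cat(dists), torch.cat(idxs)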

A side note: It would be great if you could share the differences against the original GS repo.

Chamfer distance calculation for 3DGS

Hello,

Thanks for your awesome work! I am running the code to reproduce the results in the paper. I am facing issues getting the Chamfer distance for 3DGS (splatting); I can only get the result for surfels. Could you please tell me the steps to generate the Chamfer distance for 3DGS, or share the code with me? Also, could you tell me how to generate the Chamfer distance on customized data after training and rendering with surfels?

Thanks.

GPU OOM problem

Why does my GPU memory usage keep increasing during training? It causes an OOM at around iteration 5k. My GPU has 24 GB of memory.
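
Editor's note: a cheap first step for issues like this is to log allocator statistics during training, to see whether the growth is in live tensors or just allocator caching; a minimal hedged sketch:

import torch

def log_gpu_mem(iteration: int, every: int = 500):
    # 'allocated' counts live tensors; 'reserved' includes the caching
    # allocator's pool. Steady growth in 'allocated' points to retained tensors
    # (e.g. accumulating losses with autograd history) rather than fragmentation.
    if iteration % every == 0:
        alloc = torch.cuda.memory_allocated() / 2**30
        reserved = torch.cuda.memory_reserved() / 2**30
        print(f"[it {iteration}] allocated {alloc:.2f} GiB / reserved {reserved:.2f} GiB")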

How long for Poisson meshing?

Hello,

first of all, thank you for your work! I am excited to test your code on my dataset.

I first tried testing it on a DTU scene (scan37); the training ran smoothly and was pretty quick. However, I am now trying to extract the mesh by running the render.py script as you mention, but once it gets to the Poisson meshing part, it seems to get stuck. How long is the Poisson meshing supposed to take? The script has been stuck at:
Poisson meshing: 25%|████████████████ | 1/4 [00:00<00:00, 64527.75it/s]
for the past 20 minutes. Is this normal? How long should it take?

Best regards,
Brianne

Possibility of GPU memory leak

Hello, I found that when using render.py to calculate PSNR, the GPU memory usage kept increasing until OOM. To find the cause, I commented out everything after this render call:

render_pkg = render(view, gaussians, pipeline, background, [float('inf'), float('inf')])

but the GPU memory usage still kept increasing until OOM, so I suspect a memory leak in the render function.
I am not familiar with CUDA code, especially PyTorch CUDA extensions. I hope the author or others can confirm whether there is a memory leak, or fix the ever-growing memory usage.
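
Editor's note: a common cause of steadily growing memory in an evaluation loop is that each render call builds an autograd graph that is never freed, because results are accumulated together with their history. A hedged way to rule that out before suspecting the CUDA extension:

import torch

# If memory stops growing under no_grad, the culprit was retained autograd
# history (e.g. appending loss/PSNR tensors), not a leak in the rasterizer.
with torch.no_grad():
    for view in views:
        render_pkg = render(view, gaussians, pipeline, background,
                            [float('inf'), float('inf')])
        # ... compute PSNR here, keeping only Python floats, e.g. psnr.item() ...
        torch.cuda.empty_cache()  # optional: return cached blocks between views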

Indexed tensor mismatch

The training ran successfully, but when I then run

python render.py -m ./output/camera_2f66e742-c --img --depth 10

I get the error underneath and the process halts:

Looking for config file in ./output/camera_2f66e742-c/cfg_args
Config file found: ./output/camera_2f66e742-c/cfg_args
Rendering ./output/camera_2f66e742-c
Loading trained model at iteration 15000 [12/05 18:32:45]
Found camera.npz file, assuming IDR data format! [12/05 18:32:45]
Generating random point cloud (1000000)... [12/05 18:32:48]
Loading cameras: 144 for training and 0 for testing [12/05 18:32:48]
Rendering progress: 0it [00:00, ?it/s]
/home/user/miniconda3/envs/gaussian_surfels/lib/python3.10/site-packages/numpy/core/fromnumeric.py:3504: RuntimeWarning: Mean of empty slice.
  return _methods._mean(a, axis=axis, dtype=dtype,
/home/user/miniconda3/envs/gaussian_surfels/lib/python3.10/site-packages/numpy/core/_methods.py:129: RuntimeWarning: invalid value encountered in scalar divide
  ret = ret.dtype.type(ret / rcount)
Rendering progress:   0%| | 0/144 [00:00<?, ?it/s]
/home/user/Documents/gaussian_surfels/utils/image_utils.py:86: UserWarning: Using torch.cross without specifying the dim arg is deprecated. Please either pass the dim explicitly or simply use torch.linalg.cross. The default value of dim will change to agree with that of linalg.cross in a future release. (Triggered internally at /opt/conda/conda-bld/pytorch_1711403380909/work/aten/src/ATen/native/Cross.cpp:63.)
  n_ul = torch.cross(p_u, p_l)
Rendering progress: 100%|██████████████████████████████████████████████| 144/144 [01:34<00:00, 1.53it/s]
Mesh refining:  50%|███████████████████████████▌ | 2/4 [00:46<00:46, 23.43s/it]
Traceback (most recent call last):
  File "/home/user/Documents/gaussian_surfels/render.py", line 133, in <module>
    render_sets(model.extract(args), args.iteration, pipeline.extract(args), args.skip_train, args.skip_test, args.img, args.depth)
  File "/home/user/Documents/gaussian_surfels/render.py", line 112, in render_sets
    render_set(dataset.model_path, True, "train", scene.loaded_iter, scene.getTrainCameras(scales[0]), gaussians, pipeline, background, write_image, poisson_depth)
  File "/home/user/Documents/gaussian_surfels/render.py", line 88, in render_set
    poisson_mesh(mesh_path, resampled[:, :3], resampled[:, 3:6], resampled[:, 6:], poisson_depth, 1 * 1e-4)
  File "/home/user/Documents/gaussian_surfels/utils/general_utils.py", line 237, in poisson_mesh
    nn_color = torch.mean(color[nn_idx], axis=1)
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
Mesh refining:  50%|███████████████████████████ | 2/4 [04:23<04:23, 131.83s/it]

Based on "RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)", how can I make sure the indexed tensor is on the GPU instead of the CPU?
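
Editor's note: the advanced index (nn_idx) and the indexed tensor (color) simply have to share a device; the message says the indexed tensor is on the CPU while the index presumably sits on the GPU. A hedged fix at the failing line in utils/general_utils.py, in either direction:

nn_color = torch.mean(color.cuda()[nn_idx], axis=1)     # move the indexed tensor to the GPU
# or, equivalently:
# nn_color = torch.mean(color[nn_idx.cpu()], axis=1)    # move the index to the CPU

The Windows issue further down this page hits the same error and uses the second variant.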

Training with masks

Hello,

I am currently attempting to train your model on my own dataset using masks. I am using the COLMAP format and estimated the normals with your provided script. I first trained without the --with_mask argument; it worked well, but the results are not satisfying. So I tried training with the --with_mask argument, but it didn't seem to be using the masks. I realized that in this line:

mask = load_mask(f'{images_folder}/../mask/{image_name[-3:]}.png')[None]

that this was not the correct path to my mask data. My masks have exactly the same names as their corresponding images, so I changed the line to:

mask = load_mask(f'{images_folder}/../mask/{image_name}.png')[None]

But now, when training with the correct path to my masks, both with and without the --with_mask argument, I get the following error:

  File "train.py", line 302, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
  File "train.py", line 145, in training
    loss.backward()
  File "/home/imc/miniconda3/envs/gaussian_surfels/lib/python3.8/site-packages/torch/_tensor.py", line 525, in backward
    torch.autograd.backward(
  File "/home/imc/miniconda3/envs/gaussian_surfels/lib/python3.8/site-packages/torch/autograd/__init__.py", line 267, in backward
    _engine_run_backward(
  File "/home/imc/miniconda3/envs/gaussian_surfels/lib/python3.8/site-packages/torch/autograd/graph.py", line 744, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Function _RasterizeGaussiansBackward returned an invalid gradient at index 2 - got [0, 0, 3] but expected shape compatible with [0, 16, 3] 

Would you have any suggestions as to why this is happening?

NerfSyntheticInfo

Hello, sorry to bother you. When I use your code on input data containing a transforms.json, it always stops midway and reports that the process has been killed. Have you encountered this problem?

BlendedMVS evaluation script

Could you share the evaluation script you use to produce the BlendedMVS results? I assume it should be pretty similar to the DTU evaluation script, but since the ground truth data formats are pretty different, I'm wondering if you could share the actual script that you used to obtain the BlendedMVS results. I noticed that there is a function eval_bmvs commented out in the code but don't see it defined anywhere. Thanks!

Are the camera parameters learnable?

Very impressive work! I'm trying to implement my ideas based on your code. I noticed that the camera parameters q and T are defined as nn.Parameters in:

self.q = nn.Parameter(self.q.to(torch.float32).contiguous().requires_grad_(True))
self.T = nn.Parameter(self.T.to(torch.float32).contiguous().requires_grad_(True))

Are they updated during training, or are they just unused definitions?
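
Editor's note (not the authors' reply): the Scene constructor seen elsewhere on this page takes an opt.camera_lr argument, which suggests these are genuine learnable pose corrections rather than dead definitions. A hedged sketch of how such parameters are typically wired into the optimizer; gaussian_params and camera_lr below are placeholders, and the exact wiring in this repo is an assumption:

import torch

# Per-camera rotation (quaternion q) and translation T are nn.Parameters, so
# gradients can flow from the image loss through the projection that uses them.
cam_params = []
for cam in scene.getTrainCameras():
    cam_params += [cam.q, cam.T]

optimizer = torch.optim.Adam([
    {"params": gaussian_params, "lr": 1e-3},   # placeholder Gaussian attributes
    {"params": cam_params, "lr": camera_lr},   # e.g. opt.camera_lr
])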

Running on Windows

Thank you very much for releasing the code. It works on Windows (CUDA 11.8 and PyTorch 2.1.1) with the following changes:

  1. directory errors: several slash fixes using os.path.join and split ("/" vs "\").
    See the similar issue on SuGaR: Anttwo/SuGaR#66

  2. unistd.h error on ext.cpp:
    change #include <unistd.h> to #include <io.h>

  3. cuda_utils.cu error:
    change 'or' to || on line 527

I also received the following error during rendering:

File "C:...\gaussian_surfels\utils\general_utils.py", line 238, in poisson_mesh
nn_color = torch.mean(color[nn_idx], axis=1)
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

To fix it, simply add the following before line 238:

nn_idx = nn_idx.to('cpu')

and it works fine.

Hope this helps for those having similar issues.

Cuda error in diff-gaussian-rasterizer when passing a pre-computed cov3D

I passed cov3D_precomp into the rasterizer to apply a transform matrix.

if pipe.compute_cov3D_python:  # set to True
    cov3D_precomp = pc.get_covariance(scaling_modifier, transforms.squeeze())
else:
    scales = pc.get_scaling
    rotations = pc.get_rotation

But when I use the rasterizer, it reports a cuda error.

rendered_image, rendered_normal, rendered_depth, rendered_opac, radii = rasterizer(
        means3D = means3D,
        means2D = means2D,
        shs = shs,
        colors_precomp = colors_precomp,
        opacities = opacity,
        scales = scales,
        rotations = rotations,
        cov3D_precomp = cov3D_precomp)
Traceback (most recent call last):
  File "/home/shenwenhao/anaconda3/envs/pt2/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/shenwenhao/anaconda3/envs/pt2/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/shenwenhao/.vscode-server/extensions/ms-python.debugpy-2024.2.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
    cli.main()
  File "/home/shenwenhao/.vscode-server/extensions/ms-python.debugpy-2024.2.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
    run()
  File "/home/shenwenhao/.vscode-server/extensions/ms-python.debugpy-2024.2.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
    runpy.run_path(target, run_name="__main__")
  File "/home/shenwenhao/.vscode-server/extensions/ms-python.debugpy-2024.2.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/home/shenwenhao/.vscode-server/extensions/ms-python.debugpy-2024.2.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/home/shenwenhao/.vscode-server/extensions/ms-python.debugpy-2024.2.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
    exec(code, run_globals)
  File "train.py", line 393, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
  File "train.py", line 128, in training
    render_pkg = render(viewpoint_cam, gaussians, pipe, background, patch_size)
  File "/home/shenwenhao/GauHuman_gaussian_surfels/gaussian_renderer/__init__.py", line 144, in render
    rendered_image, rendered_normal, rendered_depth, rendered_opac, radii = rasterizer(
  File "/home/shenwenhao/anaconda3/envs/pt2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/shenwenhao/anaconda3/envs/pt2/lib/python3.8/site-packages/diff_gaussian_rasterization/__init__.py", line 234, in forward
    return rasterize_gaussians(
  File "/home/shenwenhao/anaconda3/envs/pt2/lib/python3.8/site-packages/diff_gaussian_rasterization/__init__.py", line 35, in rasterize_gaussians
    return _RasterizeGaussians.apply(
  File "/home/shenwenhao/anaconda3/envs/pt2/lib/python3.8/site-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/home/shenwenhao/anaconda3/envs/pt2/lib/python3.8/site-packages/diff_gaussian_rasterization/__init__.py", line 104, in forward
    num_rendered, color, normal, depth, opac, radii, geomBuffer, binningBuffer, imgBuffer = _C.rasterize_gaussians(*args)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

I made sure that all the input tensors to the rasterizer are on the same device. I wonder if there is an error in the rasterizer?

I am on PyTorch 2.0 and Python 3.8.
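
Editor's note: in the 3DGS convention this code builds on, get_covariance returns the covariance compressed to its 6 upper-triangular entries per Gaussian, and the CUDA kernel indexes it with that stride; passing a full (N, 3, 3) matrix, a non-contiguous tensor, or the wrong dtype could plausibly produce exactly this kind of illegal access. A hedged pre-flight check before the rasterizer call:

import torch

# Hedged sketch: validate cov3D_precomp against the 3DGS layout assumptions.
N = means3D.shape[0]
assert cov3D_precomp.shape == (N, 6), f"expected ({N}, 6), got {tuple(cov3D_precomp.shape)}"
assert cov3D_precomp.dtype == torch.float32 and cov3D_precomp.is_cuda
assert cov3D_precomp.is_contiguous()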

Download Dataset Failed

Request to upload dataset to Google Drive due to slow download from OneDrive

I’ve encountered an issue while attempting to download the dataset from the provided OneDrive link. The download speed has been exceedingly slow, and unfortunately, it has resulted in multiple failed download attempts. This seems to be a recurring problem for me, potentially due to geographical location or network congestion.

To ensure smooth access to this valuable resource for myself and potentially other users facing similar difficulties, I kindly request if the dataset could be uploaded to Google Drive as an alternative download option. Google Drive is generally known for its wider accessibility and more consistent download speeds across different regions.

Thank you very much for considering this request. Your efforts in maintaining and sharing this dataset are highly appreciated.

Mismatch in screen position calculation

Thanks for your great work!

I was trying to implement my idea on top of your code. But when I use your world2scrn() function to project the 3D points obtained with the depth2wpos() function into the camera views, I noticed an obvious gap between the ground-truth screen positions (pixel positions, uv) and the projected output. I want to know why this happens; I've checked your code but could not figure it out.

The functions I used:

def depth2wpos(depth, mask, camera):

def world2scrn(xyz, cams, pad):

Hoping to get answers from you!
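
Editor's note: a hedged way to localize such a gap is a round-trip test on one view with a plain pinhole model; if this self-consistent version matches the pixel grid but substituting world2scrn() for the reprojection step does not, the discrepancy lies in that function's conventions (axis order, principal point, NDC padding) rather than in depth2wpos():

import torch

def roundtrip_error(depth, K, w2c):
    # depth: (H, W); K: (3, 3) intrinsics; w2c: (4, 4) world-to-camera. All float.
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    uv1 = torch.stack([u, v, torch.ones_like(u)], dim=-1).float()          # (H, W, 3)
    cam = (torch.linalg.inv(K) @ uv1[..., None]).squeeze(-1) * depth[..., None]
    world = (torch.linalg.inv(w2c) @ torch.cat(
        [cam, torch.ones_like(depth)[..., None]], dim=-1)[..., None]).squeeze(-1)
    # Reproject; swap these two lines for world2scrn(world, cams, pad) to test it.
    reproj = (K @ (w2c @ world[..., None])[..., :3, :]).squeeze(-1)
    return (reproj[..., :2] / reproj[..., 2:3] - uv1[..., :2]).abs().max()  # ~0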

Differences and connections with 2DGS algorithm

I have recently been reading about the Gaussian surfel splatting (GSS) algorithm and the 2DGS algorithm, and found that their ideas are very similar; both are excellent algorithms. I have summarized the differences and connections between the two, which may not be entirely correct, and I hope to hear the author's opinion.
Common points:

  1. The main idea of both is to flatten 3DGS into surface elements to reconstruct object surfaces accurately
  2. Both use a depth-normal consistency constraint, which makes the depth map smoother

Differences:

  1. GSS is based on 3DGS and flattens it by compressing the z-axis scale to zero; the ray-splat intersection depth is approximated by a Taylor expansion. 2DGS models the surfel directly and computes the exact ray-splat intersection.
  2. The normal map is computed differently. GSS obtains each pixel's normal by weighted blending; 2DGS takes the normal of the surfel at which accumulated opacity reaches 0.5. However, looking at the 2DGS implementation, its normal also seems to use a weighted average.
  3. The GSS depth map uses a weighted average depth, while 2DGS uses the median depth.
  4. GSS supports monocularly estimated normals as a prior constraint.
  5. GSS also has an opacity loss and a mask loss. The former has relatively little impact, and the latter requires masks that are generally unavailable in the data, so it has no impact.
  6. 2DGS proposes a depth-distortion loss, which helps compress all surfels together instead of letting them form a thick layer of points on the object surface.

I did an experiment on the Waymo Segment-102751 scene (data from GaussianPro by kcheng1021: https://drive.google.com/file/d/1DXQRBcUIrnIC33WNq8pVLKZ_W1VwON3k/view?usp=sharing). GSS gives a good result with the monocular normal prior enabled; the result gets worse with it disabled, but the 2DGS result is much worse. Other custom datasets show similar results.
2DGS uses a more accurate projection model and its projection code is more concise, so I think transplanting the innovations of GSS onto 2DGS could theoretically give better results. I wonder what the author thinks?

Mask Loss

Hello,

I have a question about the mask loss. In the paper you mention that the mask loss is computed as the binary cross-entropy between $\sum_i T_i \alpha_i$ and the mask, but in the code the mask loss is computed as:

loss_mask = (opac * (1 - pool(mask_gt))).mean()

Is there a reason you changed it? Or is this somehow equivalent?

Thanks for your help!
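
Editor's note (not the authors' answer): the two are not equivalent. Binary cross-entropy penalizes both spurious opacity outside the mask and missing opacity inside it, while the coded loss is one-sided: it only pushes accumulated opacity down outside a max-pooled (slightly dilated) mask and never pushes it up inside. A hedged side-by-side on dummy tensors:

import torch
import torch.nn.functional as F

opac = torch.rand(1, 1, 64, 64)                      # accumulated alpha per pixel
mask_gt = (torch.rand(1, 1, 64, 64) > 0.5).float()   # binary foreground mask
pool = torch.nn.MaxPool2d(3, stride=1, padding=1)    # dilates the mask slightly

loss_bce = F.binary_cross_entropy(opac.clamp(1e-5, 1 - 1e-5), mask_gt)  # paper's description
loss_one_sided = (opac * (1 - pool(mask_gt))).mean()                    # the loss in the code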

Unable to calculate Chamfer distance

I cannot calculate the Chamfer distance for the BMVS dataset because the reconstructed mesh and the ground-truth points are in different coordinate spaces.
Please help me resolve the issue.
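
Editor's note: under the IDR data convention this repo follows, cameras.npz carries a scale_mat_X per view that maps the normalized (unit-sphere) space, in which the mesh is reconstructed, back to the original world frame; applying it to the mesh vertices is the usual way to bring the reconstruction into the GT coordinate space. A hedged sketch (key and file names follow IDR and may differ for the provided BMVS ground truth):

import numpy as np
import open3d as o3d

cams = np.load("path/to/your/data/cameras.npz")
S = cams["scale_mat_0"]                              # 4x4 similarity transform

mesh = o3d.io.read_triangle_mesh("xxx_pruned.ply")
v = np.asarray(mesh.vertices)
v_h = np.concatenate([v, np.ones((len(v), 1))], axis=1)  # homogeneous coordinates
mesh.vertices = o3d.utility.Vector3dVector((v_h @ S.T)[:, :3])
o3d.io.write_triangle_mesh("mesh_world.ply", mesh)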


no download_surface_normal_models

There is no download_surface_normal_models script:

cd submodules/omnidata
sh download_surface_normal_models
python estimate_normal.py --img_path path/to/your/image/directory

(Note: the README above invokes it as sh tools/download_surface_normal_models.sh, i.e. with the tools/ prefix and the .sh extension.)

Slow decrease in loss

Hello. When training other datasets, it performed well when I used the camera parameters from COLMAP (converted to JSON format through colmap2nerf here). But when I use the standard intrinsic and extrinsic parameters provided with the dataset, the results become very strange.

[image]

Above are the results obtained using the COLMAP data.

[image]

Above are the results obtained from the camera parameters provided with the dataset.

[image]

The biggest change after switching parameters is a much slower decrease in loss.

[image]

At first I thought it was a parameter issue, but when I changed the processing applied while reading the parameters, it started reporting errors.

[image]

This is like what others have encountered, so the parameters don't seem to be the problem, since they run normally when left unchanged. Here are the rendering results I saved every 1000 iterations:

[images: renders at test3001 through test12001]

Guidance Needed for Training on Custom Data

I'm encountering some issues while attempting to train on custom data using gaussian surfels. Here's my process so far:

  1. I've generated the COLMAP positions for my custom data and stored my images in the image directory. The COLMAP sparse model is also located at the same level as the image directory. /mnt/f/check/dff_100/sparse/0
  2. Upon running the command python estimate_normal.py --img_path /mnt/f/check/dff_100/image, a folder called normal is created at the same level as the image folder, i.e., /mnt/f/check/dff_100/normal, containing the same number of images as in the image folder. It seems that Omnidata is functioning correctly up to this point.

However, when I attempt to proceed to the training phase using the command python train.py -s /mnt/f/check/dff_100, I encounter the following error:
[error screenshot: Screenshot 2024-05-05 184117]

Could someone provide guidance on how to successfully train the model using my own custom data?
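
Editor's note: pulling this thread together with the "Could not recognize scene type!" issue above, a hedged sketch of the layout a COLMAP-format run appears to expect. The images/ folder name is an assumption based on standard 3DGS-style loaders, so the image/ name used above may be exactly what trips things up:

dff_100/
├── images/       # input photos (3DGS-style loaders usually expect "images")
├── normal/       # monocular normals written by estimate_normal.py
└── sparse/
    └── 0/        # COLMAP model: cameras.bin, images.bin, points3D.bin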
