
compact3d's People

Contributors

arghavan-kpm, klnavaneet


compact3d's Issues

kmeans.cls_ids causes division by zero error.

When training reaches about 23%, where save_kmeans() is called, I get a division-by-zero error on line 244 of train_kmeans.py:

244 n_bits = int(np.ceil(np.log2(len(kmeans.cls_ids))))

I checked the traceback: kmeans.cls_ids returned an empty tensor, matching its instantiation on line 20 of kmeans_quantize.py:

20 self.cls_ids = torch.empty(0)
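A minimal guard for this case could look like the sketch below, assuming the right fix is simply to skip the bit computation when no cluster assignments exist yet (log2 of zero is what blows up). The helper name safe_num_bits is hypothetical, not from the repository:

```python
import math

def safe_num_bits(num_cluster_ids):
    """Bits needed to index `num_cluster_ids` entries.

    Hypothetical guard for the empty-tensor case: when cls_ids is still the
    empty tensor from __init__, log2(0) triggers the divide-by-zero error,
    so return 0 instead of computing it.
    """
    if num_cluster_ids == 0:
        return 0
    return max(1, int(math.ceil(math.log2(num_cluster_ids))))
```

With this guard, line 244 would become `n_bits = safe_num_bits(len(kmeans.cls_ids))`; whether an empty cls_ids at 23% of training is itself expected is a separate question for the authors.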

Opacity regularization is broken

Hello - I'd like to run Compact3D with the opacity_reg parameter after the recent update on July 31, 2024. However, this change appears to be breaking: when I run the bash script run.sh, execution fails at the opacity-regularization step at iteration 16000 with the following error:

[ITER 16000] Evaluating train: L1 0.01976204663515091 PSNR 30.19481582641602 [08/08 17:43:24]
Num Gaussians:  616981 [08/08 17:43:24]
Traceback (most recent call last):
  File "train_kmeans.py", line 410, in <module>
    args.checkpoint_iterations, args.start_checkpoint, args.debug_from, args)
  File "train_kmeans.py", line 225, in training
    gaussians.prune(0.005, scene.cameras_extent, size_threshold)
AttributeError: 'GaussianModel' object has no attribute 'prune'

It seems the opacity-regularization version calls a method named prune on GaussianModel, which is not available in this code base or in the original 3D Gaussian Splatting code base.
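For anyone hitting this before an upstream fix: the missing method presumably removes low-opacity Gaussians (and, given the size_threshold argument at the call site, oversized ones), similar to prune_points in the original 3DGS code. A plain-Python sketch of that masking logic, with all names hypothetical:

```python
def prune_mask(opacities, min_opacity, screen_sizes=None, max_screen_size=None):
    """Return a keep-mask over Gaussians: drop those whose opacity is below
    `min_opacity` and, optionally, those whose screen-space radius exceeds
    `max_screen_size`. A sketch of what a `prune` method likely does, not the
    repository's actual implementation."""
    keep = [o >= min_opacity for o in opacities]
    if screen_sizes is not None and max_screen_size is not None:
        keep = [k and s <= max_screen_size for k, s in zip(keep, screen_sizes)]
    return keep
```

In the real model this mask would then be applied to every per-Gaussian tensor (positions, scales, rotations, SH coefficients, opacities) and to the optimizer state.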

forward_scale_rot ?

if 'scale_rot' in quantized_params:
    kmeans_scrot_q.forward_scale_rot(gaussians, assign=assign)

def forward_scale_rot(self, gaussian, assign=False):
    """Combine both scaling and rotation for a single k-Means"""
    if self.vec_dim == 0:
        self.vec_dim = gaussian._rotation.shape[1] + gaussian._scaling.shape[1]
    feat_scaled = torch.cat([self.rescale(gaussian._scaling), self.rescale(gaussian._rotation)], 1)
    feat = torch.cat([gaussian._scaling, gaussian._rotation], 1)
    if assign:
        self.cluster_assign(feat, feat_scaled)
    else:
        self.update_centers(feat)
    sampled_centers = torch.gather(self.centers, 0, self.nn_index.unsqueeze(-1).repeat(1, self.vec_dim))
    gaussian._scaling_q = gaussian._scaling - gaussian._scaling.detach() + sampled_centers[:, :3]
    gaussian._rotation_q = gaussian._rotation - gaussian._rotation.detach() + sampled_centers[:, 3:]

This configuration exists in the code. What is its effect?
I think it may be difficult to quantize scale and rotation jointly - is that the case?

comparing storage memory requirements

Hi,
I'd like to compare the on-disk storage of the quantized and non-quantized models.

I was thinking of comparing the size of the checkpoint file from non-quantized training (using train.py from the original 3DGS paper) with the files loaded via the --load_quant option.
However, since the files used by --load_quant are not the only files needed to restore the scene, I cannot compute the exact storage required to restore a scene.

So, I'd like to know if there is any way to compare the storage memory usage for both quantized and non-quantized models.
Thanks.
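One way to get a comparable number - assuming the quantized scene is restored from exactly the k-means artifacts plus the reduced point cloud - is to sum the on-disk sizes of every file each variant needs to restore. A stdlib sketch; the file lists in the comment are assumptions, not the repository's exact layout:

```python
import os

def total_storage_bytes(paths):
    """Sum of on-disk sizes (bytes) for the files needed to restore one model."""
    return sum(os.path.getsize(p) for p in paths)

# Hypothetical file lists for each variant:
# non_quant = ["point_cloud/iteration_30000/point_cloud.ply"]
# quantized = ["point_cloud.ply", "kmeans_centers.pth",
#              "kmeans_args.npy", "kmeans_inds.bin"]
```

Comparing `total_storage_bytes(non_quant)` against `total_storage_bytes(quantized)` then gives an apples-to-apples storage figure, provided the quantized list really covers everything --load_quant depends on.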

quant_params sh dc rot sc?

The default quant_params is [sh dc rot sc]. I found that the model is still large with this default, so it may not match the settings used in the paper. Could you provide the quant_params used in the paper?

monitor the memory usage

Nice work! I want to know how to monitor memory usage. I have run experiments on 3DGS and CompGS, but I don't know how to compare their memory usage.

How to visualize the trained model with SIBR viewer?

Hello,

Would it be possible to provide details on how to reconstruct the 3DGS model from the compressed file and view it in SIBR viewer?

I have downloaded your MipNeRF-360 trained models here, but I'm unable to view them with the SIBR viewer.

This is the command I use:
SIBR_gaussianViewer_app.exe -m "C:\Users\Downloads\compgs_4k\e041_a_kmeans_512_10it_st15000_tot30000_fr100diff_last5k\mipnerf360\bonsai\"

Here is the error (screenshots attached in the original issue).

Note that it is not a memory issue - I managed to visualize uncompressed (larger) models from the original 3DGS.

Is there something I need to do before viewing the compressed trained models? Thank you

Where is the RLE-like code located, and in which file?

I noticed the paper says, 'Moreover, since the Gaussians are a set of non-ordered elements, we compress the representation further by sorting the Gaussians based on one of the quantized parameters and storing the indices using the Run-Length-Encoding (RLE) method.' However, I couldn't find the sorting or RLE code in the repository. Could you tell me where this part is implemented?
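The described scheme - sort the Gaussians by one quantized parameter's cluster index, then run-length encode the now piecewise-constant index stream - can be sketched with the stdlib. This is an illustration of the idea from the quoted paper text, not the repository's implementation; the function names are hypothetical:

```python
from itertools import groupby

def rle_encode(indices):
    """Run-length encode a sequence of cluster indices as (value, count) pairs."""
    return [(value, len(list(run))) for value, run in groupby(indices)]

def rle_decode(pairs):
    """Invert rle_encode back into the flat index sequence."""
    return [value for value, count in pairs for _ in range(count)]
```

Sorting first is what makes this pay off: after `order = sorted(range(len(inds)), key=lambda i: inds[i])`, equal indices are adjacent and each cluster collapses to a single (value, count) pair.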

Acquiring for reconstruction script for your compression method

Hey, would you mind providing the reconstruction (restoring) Python script for your compression method? I mean converting point_cloud.ply, kmeans_centers.pth, kmeans_args.npy, and kmeans_inds.bin into a single point_cloud.ply file. That way, I could directly open the rebuilt 3DGS with the SIBR viewer.
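The core of such a script is a codebook gather: for each quantized parameter, look up every Gaussian's stored cluster index in the saved centers and write the resulting dense attributes back into a standard point_cloud.ply. A minimal pure-Python sketch of just the dequantization step, with the .pth/.bin/.ply parsing omitted and all names hypothetical:

```python
def dequantize(centers, indices):
    """Reconstruct per-Gaussian parameter vectors from a k-means codebook.

    centers: list of K codebook vectors (e.g. loaded from kmeans_centers.pth)
    indices: one cluster id per Gaussian (e.g. decoded from kmeans_inds.bin)
    Returns one dense parameter vector per Gaussian.
    """
    return [centers[i] for i in indices]
```

Repeating this for each entry in quant_params (sh, dc, rot, sc) and merging the results with the non-quantized attributes from point_cloud.ply would yield a viewer-compatible ply, assuming the attribute layout matches what SIBR expects.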
