
Text2Cinemagraph

This is the official PyTorch implementation of "Text-Guided Synthesis of Eulerian Cinemagraphs".


Method Details


We introduce a fully automated method, Text2Cinemagraph, for creating cinemagraphs from text descriptions - an especially challenging task when prompts feature imaginary elements and artistic styles, given the complexity of interpreting the semantics and motions of these images. In this method, we propose an idea of synthesizing image twins from a single text prompt using Stable Diffusion - a pair of an artistic image and its pixel-aligned corresponding natural-looking twin. While the artistic image depicts the style and appearance detailed in our text prompt, the realistic counterpart greatly simplifies layout and motion analysis. Leveraging existing natural image and video datasets, we accurately segment the realistic image and predict plausible motion given the semantic information. The predicted motion is then transferred to the artistic image to create the final cinemagraph.

Getting Started

Environment Setup

  • Run the following commands to set up the dependencies required for this project.
    git clone https://github.com/text2cinemagraph/artistic-cinemagraph.git
    cd text2cinemagraph
    conda create -n t2c python=3.9
    conda activate t2c
    conda install pytorch=1.13.1 torchvision=0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia
    conda install -c "nvidia/label/cuda-11.6.1" libcusolver-dev
    conda install -c conda-forge gxx_linux-64=11.2
    pip install git+https://github.com/NVlabs/ODISE.git
    pip install -r requirements.txt
    conda install -c anaconda cupy
    
    If there are ninja-related errors when installing mask2former, refer to this link

Download Pretrained Models

  • Run the following command to download the pretrained (Optical Flow Prediction, Text-Direction Guided Optical Flow Prediction, Video Generation) models,
    gdown 'https://drive.google.com/u/4/uc?id=1Cx64SC12wXzDjg8U0ujnKx8V2G6SbCIb&export=download'
    tar -xvf checkpoints.tar
    
  • Download sd-v1-4-full-ema.ckpt using,
    mkdir -p img2img/models/ldm/stable-diffusion-v1
    cd img2img/models/ldm/stable-diffusion-v1
    wget https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4-full-ema.ckpt
    cd ../../../../
    
  • Download the diffusers stable-diffusion-v1-4 checkpoint using,
    cd checkpoints
    git lfs install
    git clone https://huggingface.co/CompVis/stable-diffusion-v1-4
    cd ../
    
    If there are issues installing git-lfs, refer to this issue

Inference (Artistic Domain)

  • To generate the first result from the example above, run the following command,
    python inference_t2c.py --config configs/inference.yaml
    
  • To generate the text-guided direction results displayed above, run the following commands,
    #to generate the left example
    python  inference_t2c.py \
              --config configs/inference_directional.yaml \
              --use_hint \
              --prompt "a large river flowing in left to right, downwards direction in front of a mountain in the style of starry nights painting"
    
    #to generate the right example
    python  inference_t2c.py \
              --config configs/inference_directional.yaml \
              --use_hint \
              --prompt "a large river flowing in upwards, right to left direction in front of a mountain in the style of starry nights painting"
    
    Note that since we randomly sample a theta based on the quadrant that the text direction corresponds to, exact reproducibility might not be possible.
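
For intuition, below is a minimal sketch of quadrant-based angle sampling. The direction names, quadrant boundaries, and uniform sampling are our assumptions for illustration, not necessarily the repository's exact implementation.

    import numpy as np

    # Hypothetical sketch: map a text direction to a quadrant and draw a random
    # angle inside it (math convention: degrees counter-clockwise from +x, y up).
    QUADRANTS = {
        ("right", "up"):   (0, 90),
        ("left",  "up"):   (90, 180),
        ("left",  "down"): (180, 270),
        ("right", "down"): (270, 360),
    }

    def sample_theta(horizontal, vertical, rng=None):
        rng = rng or np.random.default_rng()
        lo, hi = QUADRANTS[(horizontal, vertical)]
        return rng.uniform(lo, hi)          # a different theta on every run

    theta = sample_theta("right", "down")   # e.g. "left to right, downwards"
    direction = np.array([np.cos(np.deg2rad(theta)), np.sin(np.deg2rad(theta))])
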
Intermediate outputs: Artistic Image (s1), Natural Image (s2), ODISE Mask (s3), Self-Attention Mask (s4), Optical Flow (s5), Cinemagraph (s6).
  • Since running all the components end-to-end can take a long time, and the final result may be unsatisfactory because of the output of some intermediate component, we suggest running each component separately in such cases. Below we show how to run inference in a stage-wise manner,
    # Generate the artistic image
    python inference_t2c.py --config configs/inference.yaml --stage s1
    
    # Generate the twin (natural) image
    python inference_t2c.py --config configs/inference.yaml --stage s2
    
    # Generate ODISE mask
    python inference_t2c.py --config configs/inference.yaml --stage s3
    
    # Generate Self-Attention mask (guided using ODISE mask)
    python inference_t2c.py --config configs/inference.yaml --stage s4
    
    # Predict optical flow
    python inference_t2c.py --config configs/inference.yaml --stage s5
    
    # Generate the cinemagraph
    python inference_t2c.py --config configs/inference.yaml --stage s6
    

Tips and Tricks for Achieving Better Results (Artistic Domain)

Change the following parameters in inference.yaml or inference_directional.yaml if you do not achieve the desired results,

  • twin_extraction:prompt: change the input text prompt if the images generated by --stage s1 are not desirable.
  • twin_extraction:seed: change the seed if the images generated by --stage s1 are not desirable and the user does not want to change the prompt.
  • twin_generation:prompt: by default this can be None. If the output of --stage s2 does not look semantically similar to the artistic image, try specifying the edit prompt manually.
  • odise:vocab: if the ODISE-generated mask includes regions that the user does not want, change the vocab to specify only the desired regions.
  • attn_mask:n_clusters: change the number of clusters if the generated mask from --stage s4 is not representative of the desired regions of motion in the final cinemagraph.
  • attn_mask:threshold: the minimum percentage of pixel overlap between the ODISE mask and a Self-Attention cluster for that cluster to be considered part of the mask. Increase the value to reduce the number of Self-Attention clusters included in the final mask, and vice versa (see the sketch after this list).
  • attn_mask:cluster_type: switch the cluster type between kmeans and spectral (this is only for very fine-grained refinement).
  • attn_mask:erosion_iter: if the mask slightly overlaps the boundaries of static regions in --stage s4, increase erosion_iter between [2, 5] in steps of 1 to retract the mask slightly.
  • video:n_frames: use 60 when motion is restricted to small regions (like a waterfall) and 120 for large-body motion (like the sea).
  • video:speed: change the speed (recommended range [0.25, 3]) of the motion in the generated cinemagraph. If grey regions appear in the cinemagraph, try lowering the speed.
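
To make the threshold and erosion_iter parameters concrete, here is a simplified sketch of the stage-s4 mask-merging idea. This is our re-implementation of the behavior described above, not the repository's exact code.

    import numpy as np
    from scipy.ndimage import binary_erosion

    def build_motion_mask(odise_mask, clusters, threshold=0.5, erosion_iter=0):
        """Simplified sketch: keep a self-attention cluster only if enough of its
        pixels fall inside the ODISE mask, then optionally erode the result.

        odise_mask: (H, W) bool array from ODISE.
        clusters:   list of (H, W) bool arrays, one per self-attention cluster.
        threshold:  minimum fraction of a cluster's pixels inside the ODISE mask.
        """
        mask = np.zeros_like(odise_mask, dtype=bool)
        for cluster in clusters:
            overlap = (cluster & odise_mask).sum() / max(cluster.sum(), 1)
            if overlap >= threshold:      # higher threshold -> fewer clusters kept
                mask |= cluster
        if erosion_iter > 0:              # retract the mask away from static regions
            mask = binary_erosion(mask, iterations=erosion_iter)
        return mask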

Data Preparation for Training

Optical Flow and Videos

  • The ground-truth optical flow and video dataset is taken from Animating Pictures with Eulerian Motion Fields. Download the train and validation data using,
    cd dataset
    gdown 'https://drive.google.com/u/0/uc?id=19f2PsKEaeAmspd1ceGkOEMhZsZNquZyF&export=download'
    unzip eulerian_data.zip
    
    Note that we use the entire validation dataset as the test dataset (and do not use it during the training process).

Masks (ODISE)

  • For testing on real-domain data, we use masks generated by ODISE. To generate the masks (after completing the above step), run the following command,
    python demo/gen_mask.py \
              --input dataset/eulerian_data/validation \
              --output dataset/eulerian_data/validation_masks_odise \
              --vocab "water, waterfall, river, ocean, lake, sea"
    

Text Guided Direction Control

  • For training the optical flow prediction model that can predict flow following the direction of motion in the input prompt, we generate dense optical flow hint maps, similar to Controllable Animation of Fluid Elements in Still Images. The optical flow hints are generated from the ground-truth optical flow with 1, 2, 3, 4, and 5 hints. The code for generating hints is taken from SLR-SFS; a simplified sketch of the hint-sampling idea follows the command below.
    python dataset/generate_flow_hint.py \
              --dataroot dataset/eulerian_data/train \
              --save_path dataset/eulerian_data/train_motion_hints \
              --n_clusters 5
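
For intuition, a simplified sketch of how a dense hint map can be derived from the ground-truth flow is shown below. The actual hint-generation code is adapted from SLR-SFS, so details such as cluster selection, hint placement, and patch size will differ.

    import numpy as np
    from sklearn.cluster import KMeans

    def sample_flow_hints(gt_flow, mask, n_clusters=5):
        """Illustrative sketch (not the SLR-SFS code): cluster the flow vectors of
        moving pixels and write one small hint patch per cluster into a dense map.

        gt_flow: (H, W, 2) ground-truth optical flow.
        mask:    (H, W) bool array marking moving-region pixels.
        """
        ys, xs = np.nonzero(mask)
        vectors = gt_flow[ys, xs]                      # flow at moving pixels
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)

        hint = np.zeros_like(gt_flow)
        for k in range(n_clusters):
            idx = np.flatnonzero(labels == k)
            y, x = ys[idx[len(idx) // 2]], xs[idx[len(idx) // 2]]
            hint[y - 2:y + 3, x - 2:x + 3] = vectors[idx].mean(axis=0)
        return hint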
    

Artistic Domain Prompts

  • The prompts used to generate artistic domain examples are located in dataset/prompts.txt and the corresponding edit prompts (used to generate the natural version of the artistic images) are located in dataset/prompts_twin.txt. Note that the edit prompts can be specified manually or can also be automatically derived from the artistic prompts if not specified otherwise.

Training

Optical Flow Prediction

  • For training the optical flow prediction model that predicts optical flow without text direction guidance, use the following command,

    python train_motion.py \
              --name <experiment-name-1> \
              --gpu_ids 0,1,2,3 \
              --no_instance \
              --label_nc 0 \
              --input_nc 4 \
              --output_nc 2 \
              --fineSize 512 \
              --batchSize 16 \
              --norm instance \
              --dataset_name motion \
              --motion_norm 64.0 \
              --netG spadexattnunetsd \
              --dataroot dataset/eulerian_data/train \
              --no_vgg_loss \
              --use_epe_loss \
              --use_prompts \
              --use_mask \
              --mask_path dataset/eulerian_data/train_masks_odise \
              --captions_file ./dataset/captions/file2captions-eularian-train-blip2-20-15.txt
    

    Note that in addition to the input image and mask, we condition the flow prediction on the text prompt. We generate the text prompts for the images in the train and validation datasets using BLIP2.
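
As a reference for how such captions can be produced, below is a minimal BLIP2 captioning sketch using the Hugging Face transformers library. The checkpoint name, image path, and generation settings are our assumptions and may not match those used to create the released captions file.

    from PIL import Image
    from transformers import Blip2Processor, Blip2ForConditionalGeneration

    # Assumed BLIP2 checkpoint; the released captions may use different weights.
    name = "Salesforce/blip2-opt-2.7b"
    processor = Blip2Processor.from_pretrained(name)
    model = Blip2ForConditionalGeneration.from_pretrained(name)

    # Hypothetical path to one training frame.
    image = Image.open("dataset/eulerian_data/train/00000.png").convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    ids = model.generate(**inputs, max_new_tokens=20)
    caption = processor.decode(ids[0], skip_special_tokens=True)
    print(caption)   # one caption per image goes into the captions file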

Optical Flow Prediction (for text guidance direction)

  • For training the optical flow prediction model that predicts optical flow conditioned on text direction guidance, use the following command,

    python train_motion.py \
              --name <experiment-name-2> \
              --gpu_ids 0,1,2,3 \
              --no_instance \
              --label_nc 0 \
              --input_nc 6 \
              --output_nc 2 \
              --fineSize 512 \
              --batchSize 16 \
              --norm sync:spectral_instance \
              --dataset_name motion \
              --motion_norm 64.0 \
              --netG spadeunet \
              --dataroot dataset/eulerian_data/train \
              --no_vgg_loss \
              --use_epe_loss \
              --use_mask \
              --mask_path dataset/eulerian_data/train_masks_odise \
              --use_hint \
              --hints_path dataset/eulerian_data/train_motion_hints
    

    Note that in our experiments, for predicting optical flow conditioned on text direction guidance, we do not use text conditioning via Cross-Attention layers, as the input consists of the image, mask, and dense optical flow hint. The motivation for using text conditioning along with the image and mask in the previous model was that text inherently contains class information, like ‘waterfall’ or ‘river’, which can be useful for determining the natural direction of the predicted flow. However, in this case the direction is already given by the input dense flow hint. This helps reduce the model size (as we do not need the expensive Cross-Attention layers).
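
Our reading of the --input_nc values above (an assumption based on the flags, not a statement of the exact code) is that the text-conditioned model takes 3 RGB channels plus a 1-channel mask, while the direction-guided model additionally takes a 2-channel dense flow hint:

    import torch

    B, H, W = 4, 512, 512
    image = torch.rand(B, 3, H, W)   # RGB input frame
    mask  = torch.rand(B, 1, H, W)   # motion mask
    hint  = torch.rand(B, 2, H, W)   # dense optical flow hint (direction guidance)

    # Presumed channel layouts matching the flags above.
    x_text_conditioned = torch.cat([image, mask], dim=1)        # --input_nc 4
    x_hint_conditioned = torch.cat([image, mask, hint], dim=1)  # --input_nc 6
    print(x_text_conditioned.shape, x_hint_conditioned.shape)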

Video Generation

  • For first-stage training (training using Ground-Truth Optical Flow) of the video generation model, use the following command,

    python train_video.py \
              --name <experiment-name-3> \
              --gpu_ids 0,1,2,3 \
              --no_instance \
              --label_nc 0 \
              --input_nc 8 \
              --output_nc 3 \
              --fineSize 512 \
              --batchSize 16 \
              --norm_G sync:spectral_instance \
              --dataset_name frame \
              --netG spadeunet4softmaxsplating \
              --dataroot dataset/eulerian_data/train \
              --use_l1_loss \
              --tr_stage stage1 \
              --frames_basepath dataset/eulerian_data/train
    
  • We train the video generation model for an additional 50 epochs on optical flow predicted by the Optical Flow Prediction model. To make the training process more efficient, we precompute and store all the optical flow predictions for the training data before starting training. To generate the optical flow using the Optical Flow Prediction model, use the following command,

    python test_motion.py \
              --name <experiment-name-1> \
              --phase train \
              --no_instance \
              --label_nc 0 \
              --input_nc 4 \
              --output_nc 2 \
              --fineSize 512 \
              --batchSize 1 \
              --which_epoch 200 \
              --netG spadexattnunetsd \
              --dataroot dataset/eulerian_data/train \
              --use_mask \
              --use_prompts \
              --captions_file ./dataset/captions/file2captions-eularian-train-blip2-20-15.txt
    
  • For second-stage training (training using optical flow predicted by the model) of the video generation model, use the following command,

    python train_video.py \
              --name <experiment-name-3> \
              --continue_train \
              --niter 150 \
              --gpu_ids 0,1,2,3 \
              --no_instance \
              --label_nc 0 \
              --input_nc 8 \
              --output_nc 3 \
              --fineSize 512 \
              --batchSize 16 \
              --norm_G sync:spectral_instance \
              --dataset_name frame \
              --netG spadeunet4softmaxsplating \
              --dataroot dataset/eulerian_data/train \
              --use_l1_loss \
              --tr_stage stage2 \
              --frames_basepath dataset/eulerian_data/train \
              --motion_basepath results/<experiment-name-1>/train_200/images \
              --motion_norm 64.0
    

    Note that we use the video generation model trained with the Optical Flow Prediction model (w/o text direction guidance) to generate videos for both scenarios, i.e., w/ and w/o text direction guidance.

Evaluation (Real Domain)

Generate Results

  • To predict optical flow on single images from the validation dataset, use the following command,
    python test_motion.py \
              --name <experiment-name-1> \
              --no_instance \
              --label_nc 0 \
              --input_nc 4 \
              --output_nc 2 \
              --fineSize 512 \
              --batchSize 1 \
              --netG spadexattnunetsd \
              --dataset_name motion \
              --dataroot dataset/eulerian_data/validation \
              --use_mask \
              --use_seg_mask \
              --use_prompts \
              --mask_path dataset/eulerian_data/validation_masks_odise \
              --captions_file dataset/captions/file2captions-eularian-validation-blip2-20-15.txt
    
    Note that to predict optical flows using our pretrained models, replace <experiment-name-1> with motion-pretrained after downloading the models.
  • To generate cinemagraphs for the validation dataset using the optical flows predicted in the previous step, use the following command,
    python test_video.py \
              --name <experiment-name-3> \
              --no_instance \
              --label_nc 0 \
              --input_nc 8 \
              --output_nc 3 \
              --fineSize 512 \
              --batchSize 1 \
              --dataset_name frame \
              --netG spadeunet4softmaxsplating \
              --dataroot dataset/eulerian_data/validation \
              --motion_basepath results/<experiment-name-1>/test_latest/images \
              --speed 1.0
    
    Note that to generate cinemagraphs using our pretrained models, replace <experiment-name-3> with video-pretrained after downloading the models.

Compute FVD on Real Domain Results

  • To compute FVD_16, where frames are sampled at a rate of 3 (16 frames sampled in total out of 60), use the following command,

    python evaluate/compute_fvd.py \
              --pred_path <generated-video-dir> \
              --gt_path dataset/eulerian_data/validation \
              --type fvd_16
    
  • To compute FVD_60, where frames are sampled at a rate of 1 (all 60 frames used; see the sampling sketch below), use the following command,

    python evaluate/compute_fvd.py \
              --pred_path <generated-video-dir> \
              --gt_path dataset/eulerian_data/validation \
              --type fvd_60
    

The code for FVD computation has been taken from StyleGAN-V.
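
The two settings differ only in how frames are taken from each 60-frame clip; a sketch of the sampling (assuming sampling starts at frame 0) is:

    # FVD_16: 16 frames sampled at a stride of 3 from the 60-frame clip.
    fvd_16_indices = list(range(0, 60, 3))[:16]   # [0, 3, 6, ..., 45]

    # FVD_60: all 60 frames, stride 1.
    fvd_60_indices = list(range(60))

    assert len(fvd_16_indices) == 16 and len(fvd_60_indices) == 60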

Citation

@article{mahapatra2023synthesizing,
    title={Text-Guided Synthesis of Eulerian Cinemagraphs},
    author={Mahapatra, Aniruddha and Siarohin, Aliaksandr and Lee, Hsin-Ying and Tulyakov, Sergey and Zhu, Jun-Yan},
    journal={arXiv preprint arXiv:2307.03190},
    year={2023}
}

Acknowledgments

The code for this project was built using the codebases of pix2pixHD, ODISE, plug-and-play, and SLR-SFS. The symmetric-splatting code was built on top of softmax-splatting. The code for the evaluation metric (FVD) was built on the codebase of StyleGAN-V. We are very thankful to the authors of these works for releasing their code.

We are also grateful to Nupur Kumari, Gaurav Parmar, Or Patashnik, Songwei Ge, Sheng-Yu Wang, Chonghyuk (Andrew) Song, Daohan (Fred) Lu, Richard Zhang, and Phillip Isola for fruitful discussions. This work is partly supported by Snap Inc. and was partly done while Aniruddha was an intern at Snap Inc.


text2cinemagraph's Issues

Can't load the configuration of '../Tune-A-Video/checkpoints/stable-diffusion-v1-4' at stage s5

Hi, I used Google Colab with V100 (24GB RAM) to run inference_t2c.py file.
I followed the same instructions to setup the environment (using conda to create virtual environment) and downloaded necessary pretrained models. Everything was fine until stage s5 (optical flow part). I got the following error messages:

Traceback (most recent call last):
  File "/content/text2cinemagraph/inference_t2c.py", line 149, in <module>
    main()
  File "/content/text2cinemagraph/inference_t2c.py", line 139, in main
    predict_flow(exp_config)
  File "/content/text2cinemagraph/test_motion.py", line 79, in predict_flow
    model = create_model(opt)
  File "/content/text2cinemagraph/models/models.py", line 13, in create_model
    model.initialize(opt)
  File "/content/text2cinemagraph/models/pix2pixHD_model.py", line 45, in initialize
    self.text_encoder = CLIPTextModel.from_pretrained(pretrained_model_path, subfolder="text_encoder")
  File "/usr/local/envs/t2c/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2079, in from_pretrained
    config, model_kwargs = cls.config_class.from_pretrained(
  File "/usr/local/envs/t2c/lib/python3.9/site-packages/transformers/models/clip/configuration_clip.py", line 134, in from_pretrained
    config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/usr/local/envs/t2c/lib/python3.9/site-packages/transformers/configuration_utils.py", line 565, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/usr/local/envs/t2c/lib/python3.9/site-packages/transformers/configuration_utils.py", line 641, in _get_config_dict
    raise EnvironmentError(
OSError: Can't load the configuration of '../Tune-A-Video/checkpoints/stable-diffusion-v1-4'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '../Tune-A-Video/checkpoints/stable-diffusion-v1-4' is the correct path to a directory containing a config.json file.

Is this a PATH problem or anything (model or config) I have to download as well?
Really appreciate your help!

Endless Loop *.gif

Thank you very much for the interesting code!

I tried it and the result is not an endlessly looping GIF, but a 2-second movie.
Will you provide the code to generate the Endless Loop?

SD 1.5?

Would this work with SD1.5? This really looks awesome

Encountered a problem when executing "conda install -c "nvidia/label/cuda-11.6.1" libcusolver-dev"

Firstly, I created a docker using the command docker run --gpus all -itd --ipc=host --name=guyu --rm -v /gdata/cold1/guyu:/userhome pytorch/pytorch:1.13.1-cuda11.6-cudnn8-devel, and then used the command docker exec -it guyu /bin/bash to enter the docker. After that, I configured following the instructions in README.md. But I encountered a problem when executing conda install -c "nvidia/label/cuda-11.6.1" libcusolver-dev. Here is the error report.

(t2c) root@535ca7f76d3b:/userhome/text2cinemagraph# conda install -c "nvidia/label/cuda-11.6.1" libcusolver-dev
Collecting package metadata (current_repodata.json): done                                                                                                                                                                                      
Solving environment: done                                                                                                                                                                                                                      
                                                                                                                                                                                                                                               
                                                                                                                                                                                                                                               
==> WARNING: A newer version of conda exists. <==                                                                                                                                                                                              
  current version: 22.11.1                                                                                                                                                                                                                     
  latest version: 23.5.2                                                                                                                                                                                                                       
                                                                                                                                                                                                                                               
Please update conda by running                                                                                                                                                                                                                 
                                                                                                                                                                                                                                               
    $ conda update -n base -c defaults conda                                                                                                                                                                                                   
                                                                                                                                                                                                                                               
Or to minimize the number of packages updated during conda update use                                                                                                                                                                          
                                                                                                                                                                                                                                               
     conda install conda=23.5.2                                                                                                                                                                                                                
                      


## Package Plan ##

  environment location: /opt/conda/envs/t2c

  added / updated specs:
    - libcusolver-dev


The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    libcusolver-dev-11.3.3.112 |       h9f1add7_0        55.0 MB  nvidia/label/cuda-11.6.1
    ------------------------------------------------------------
                                           Total:        55.0 MB

The following NEW packages will be INSTALLED:

  libcusolver-dev    nvidia/label/cuda-11.6.1/linux-64::libcusolver-dev-11.3.3.112-h9f1add7_0 


Proceed ([y]/n)? y


Downloading and Extracting Packages
                                                                                                                                                                                                                                               
class: CondaSSLError
message:
Encountered an SSL error. Most likely a certificate verification issue.

Exception: HTTPSConnectionPool(host='binstar-cio-packages-prod.s3.amazonaws.com', port=443): Max retries exceeded with url: /6137df1016c93053c05a6684/6215bb7c03b93eb2b1de2209?response-content-disposition=attachment%3B%20filename%3D%22libcusolver-dev-11.3.3.112-h9f1add7_0.tar.bz2%22%3B%20filename%2A%3DUTF-8%27%27libcusolver-dev-11.3.3.112-h9f1add7_0.tar.bz2&response-content-type=application%2Fx-tar&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=600&X-Amz-Date=20230724T083633Z&X-Amz-SignedHeaders=host&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEHcaCXVzLWVhc3QtMSJIMEYCIQC6yayOjRAUqP5Axp1ZbVZpmFJvAVu6nNAINJbMY8rISwIhAInfNg6ApcYSh9to9DsA5Wqjsc6YPTBwq9j3WUm3r5X0KrMFCBAQABoMNDU1ODY0MDk4Mzc4IgxWYvMUS44diKr6uXgqkAXYlysqO%2FevcawMaRTJtxdDkWNF7X5%2BLDvkWJU9hg0AYlZJzSgUPezN9mRBx8XfthV23QVrAQGLKaU%2Bcx9jVpcSXn6Bs7k5LLcbokSDSilV3%2BsijMpYAPILkqQxoQZu6AfKqrGs2mFdmJHS8rZ8xkBv4%2FjswzA9hAxwpR1eUZAZLk1nWeira%2B4W9%2F1SRR0oghZpr8YMPdWF18vYqxeHQKzp8LHYW5kV8ANt3MaVvkJMb1hUqAmiDzSZnzzKdTNWzfaumgbTOsPkajrbPrL8nlrYUteDWBVXSBLRujay07vDuPIfc3gOL0awXaHj0%2B5tSwvqtr4GvGQpBVBeIs90yPXKvKKPzBF%2Fau5zLfHkPIBCpNebz8Q%2BSPuGP%2FpKlnFNxnbnCpgSwuXGHKtoHvbaeD7lyHVOKJ759sNZkGh8JHWtRtcU5j4jFtnbgmbJp5xMmAKyyZrD3OhyLokCHC8negRz8R%2BUWzmUNCLqp9mp6r0zG%2FX6zZ1vm8Tqk4HGl2FsPj8WA%2BWAAIVFyXX0OOpifZNiZ9M3SCAXb4yO8o3lvQYgDTXwLRAlRno4dLKYMxDYWeA08o%2FqphBNMOI9%2BWuCQI%2FygYYaE38Nh%2BiPdGKwy2ToVZdiD1dookK%2BIalM31%2BHa7y0t0lvPaoTCixgX7KuVnLKo5BsvmOqWMedVJnHAgCttPcTMhJmpwTFSb6%2B3Qf3CEbCV%2BveC5slq5Z3ewoKMNGEHDzMX7Eus6eclL06fjfjQMtJHUoIhemMJKBZxPh2gnkCYFcvoqCJYZW7ghfsRvxyUd3InEI8vj77czLcmCe29ksvTpJgBTUFfZsQtCTvRhNr4JnMVSiwn%2BtTI9T%2BmwCvf13tN7Fo%2FTnk2CWSiBT8bzDlwvilBjqwARHV11O7SoYUIeRBVjriqPm5MJ0Po8QyB%2Fv9hM1qibDm9KZmuA4Dl0Q5jGGAedSFTDQmu8WzpTWDZB153DIzZ90PHCm8La5rwQW8G7Hua7YQV1GjiJHw6qjw5z%2BfrqWluj1Cfz8zNhZLaQz4QsQ08kwLN6WSx95Mkz%2FmQOYUIV7UC5yUNGPsozkP9bOcdiYMvi1A4dzW0jPWumYgilFFg%2FgD9KZliKkUQ35fiE%2Fd9w2%2B&X-Amz-Credential=ASIAWUI46DZFOYYALWVN%2F20230724%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=822a1e199d29eee1933475f1a4a60d5ee3d0104d9fc68548830fd71f3ca5a943 (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)')))

kwargs:
{}

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/conda/exceptions.py", line 1118, in __call__
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/conda/cli/main.py", line 69, in main_subshell
    exit_code = do_call(args, p)
  File "/opt/conda/lib/python3.10/site-packages/conda/cli/conda_argparse.py", line 91, in do_call
    return getattr(module, func_name)(args, parser)
  File "/opt/conda/lib/python3.10/site-packages/conda/notices/core.py", line 109, in wrapper
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/conda/cli/main_install.py", line 20, in execute
    install(args, parser, 'install')
  File "/opt/conda/lib/python3.10/site-packages/conda/cli/install.py", line 332, in install
    handle_txn(unlink_link_transaction, prefix, args, newenv)
  File "/opt/conda/lib/python3.10/site-packages/conda/cli/install.py", line 357, in handle_txn
    unlink_link_transaction.download_and_extract()
  File "/opt/conda/lib/python3.10/site-packages/conda/core/link.py", line 204, in download_and_extract
    self._pfe.execute()
  File "/opt/conda/lib/python3.10/site-packages/conda/core/package_cache_data.py", line 805, in execute
    raise CondaMultiError(exceptions)
conda.CondaMultiErrorclass: CondaSSLError
message:
Encountered an SSL error. Most likely a certificate verification issue.

Exception: HTTPSConnectionPool(host='binstar-cio-packages-prod.s3.amazonaws.com', port=443): Max retries exceeded with url: /6137df1016c93053c05a6684/6215bb7c03b93eb2b1de2209?response-content-disposition=attachment%3B%20filename%3D%22libcusolver-dev-11.3.3.112-h9f1add7_0.tar.bz2%22%3B%20filename%2A%3DUTF-8%27%27libcusolver-dev-11.3.3.112-h9f1add7_0.tar.bz2&response-content-type=application%2Fx-tar&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=600&X-Amz-Date=20230724T083633Z&X-Amz-SignedHeaders=host&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEHcaCXVzLWVhc3QtMSJIMEYCIQC6yayOjRAUqP5Axp1ZbVZpmFJvAVu6nNAINJbMY8rISwIhAInfNg6ApcYSh9to9DsA5Wqjsc6YPTBwq9j3WUm3r5X0KrMFCBAQABoMNDU1ODY0MDk4Mzc4IgxWYvMUS44diKr6uXgqkAXYlysqO%2FevcawMaRTJtxdDkWNF7X5%2BLDvkWJU9hg0AYlZJzSgUPezN9mRBx8XfthV23QVrAQGLKaU%2Bcx9jVpcSXn6Bs7k5LLcbokSDSilV3%2BsijMpYAPILkqQxoQZu6AfKqrGs2mFdmJHS8rZ8xkBv4%2FjswzA9hAxwpR1eUZAZLk1nWeira%2B4W9%2F1SRR0oghZpr8YMPdWF18vYqxeHQKzp8LHYW5kV8ANt3MaVvkJMb1hUqAmiDzSZnzzKdTNWzfaumgbTOsPkajrbPrL8nlrYUteDWBVXSBLRujay07vDuPIfc3gOL0awXaHj0%2B5tSwvqtr4GvGQpBVBeIs90yPXKvKKPzBF%2Fau5zLfHkPIBCpNebz8Q%2BSPuGP%2FpKlnFNxnbnCpgSwuXGHKtoHvbaeD7lyHVOKJ759sNZkGh8JHWtRtcU5j4jFtnbgmbJp5xMmAKyyZrD3OhyLokCHC8negRz8R%2BUWzmUNCLqp9mp6r0zG%2FX6zZ1vm8Tqk4HGl2FsPj8WA%2BWAAIVFyXX0OOpifZNiZ9M3SCAXb4yO8o3lvQYgDTXwLRAlRno4dLKYMxDYWeA08o%2FqphBNMOI9%2BWuCQI%2FygYYaE38Nh%2BiPdGKwy2ToVZdiD1dookK%2BIalM31%2BHa7y0t0lvPaoTCixgX7KuVnLKo5BsvmOqWMedVJnHAgCttPcTMhJmpwTFSb6%2B3Qf3CEbCV%2BveC5slq5Z3ewoKMNGEHDzMX7Eus6eclL06fjfjQMtJHUoIhemMJKBZxPh2gnkCYFcvoqCJYZW7ghfsRvxyUd3InEI8vj77czLcmCe29ksvTpJgBTUFfZsQtCTvRhNr4JnMVSiwn%2BtTI9T%2BmwCvf13tN7Fo%2FTnk2CWSiBT8bzDlwvilBjqwARHV11O7SoYUIeRBVjriqPm5MJ0Po8QyB%2Fv9hM1qibDm9KZmuA4Dl0Q5jGGAedSFTDQmu8WzpTWDZB153DIzZ90PHCm8La5rwQW8G7Hua7YQV1GjiJHw6qjw5z%2BfrqWluj1Cfz8zNhZLaQz4QsQ08kwLN6WSx95Mkz%2FmQOYUIV7UC5yUNGPsozkP9bOcdiYMvi1A4dzW0jPWumYgilFFg%2FgD9KZliKkUQ35fiE%2Fd9w2%2B&X-Amz-Credential=ASIAWUI46DZFOYYALWVN%2F20230724%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=822a1e199d29eee1933475f1a4a60d5ee3d0104d9fc68548830fd71f3ca5a943 (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)')))

kwargs:
{}

: <exception str() failed>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/bin/conda", line 13, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.10/site-packages/conda/cli/main.py", line 112, in main
    return conda_exception_handler(main, *args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/conda/exceptions.py", line 1418, in conda_exception_handler
    return_value = exception_handler(func, *args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/conda/exceptions.py", line 1121, in __call__
    return self.handle_exception(exc_val, exc_tb)
  File "/opt/conda/lib/python3.10/site-packages/conda/exceptions.py", line 1150, in handle_exception
    return self.handle_application_exception(exc_val, exc_tb)
  File "/opt/conda/lib/python3.10/site-packages/conda/exceptions.py", line 1164, in handle_application_exception
    self._print_conda_exception(exc_val, exc_tb)
  File "/opt/conda/lib/python3.10/site-packages/conda/exceptions.py", line 1168, in _print_conda_exception
    print_conda_exception(exc_val, exc_tb)
  File "/opt/conda/lib/python3.10/site-packages/conda/exceptions.py", line 1095, in print_conda_exception
    stderrlog.error("\n%r\n", exc_val)
  File "/opt/conda/lib/python3.10/logging/__init__.py", line 1506, in error
    self._log(ERROR, msg, args, **kwargs)
  File "/opt/conda/lib/python3.10/logging/__init__.py", line 1624, in _log
    self.handle(record)
  File "/opt/conda/lib/python3.10/logging/__init__.py", line 1633, in handle
    if (not self.disabled) and self.filter(record):
  File "/opt/conda/lib/python3.10/logging/__init__.py", line 821, in filter
    result = f.filter(record)
  File "/opt/conda/lib/python3.10/site-packages/conda/gateways/logging.py", line 48, in filter
    record.msg = record.msg % new_args
  File "/opt/conda/lib/python3.10/site-packages/conda/__init__.py", line 107, in __repr__
    errs.append(e.__repr__())
  File "/opt/conda/lib/python3.10/site-packages/conda/__init__.py", line 62, in __repr__
    return f"{self.__class__.__name__}: {self}"
  File "/opt/conda/lib/python3.10/site-packages/conda/__init__.py", line 66, in __str__
    return str(self.message % self._kwargs)
ValueError: unsupported format character 'B' (0x42) at index 289

I have tried the following solutions, but they didn't help:

  1. Change the conda channel to tsinghua.
  2. Use the command conda config --set ssl_verify false to close the ssl verification.
  3. Update the conda using either conda update -n base -c defaults conda or conda install conda=23.5.2.
  4. Follow https://stackoverflow.com/questions/33699577/conda-update-fails-with-ssl-error-certificate-verify-failed?rq=1, use the certification by the command conda config --set ssl_verify /etc/ssl/certs/ca-certificates.crt.

Could you help me with the problem?

How to determine ODISE vocab?

Hi, thanks a lot for your great work.

I'm wondering: if we want to generate a different scene, do we need to change the ODISE vocab? If so, how can we choose a good set of words?

git: 'lfs' is not a git command

Use sudo apt-get install git-lfs to install git-lfs. In case sudo is not allowed, use the following commands,

wget https://github.com/git-lfs/git-lfs/releases/download/v3.2.0/git-lfs-linux-amd64-v3.2.0.tar.gz
tar -xzf git-lfs-linux-amd64-v3.2.0.tar.gz
PATH=$PATH:/<absolute-path>/git-lfs-3.2.0/
git lfs install
git lfs version
