sted-gaze's People

Contributors

zhengyuf

sted-gaze's Issues

Problem when running your train/eval scripts

Hi, first of all, thanks for sharing the code; the project looks very cool.
When I try to run your scripts (following the README), I get this error:
ModuleNotFoundError: No module named 'models.PerceptualSimilarity'

How can I solve it?

Supplementary files

@zhengyuf
Hi, I am Yong.
I am very sorry to bother you. Could you upload EYEDIAP_supplementary.h5 and Columbia_supplementary.h5 for dataset pre-processing? I have already downloaded MPIIFaceGaze_supplementary.h5 and GazeCapture_supplementary.h5 from FAZE.
Thank you.

How to run with pre-trained model?

Thank you for this excellent repository!
I see that you shared your pre-trained model, but I don't know how to run the source code with it.
Can you help me?
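For readers with the same question: judging from the tracebacks quoted in other issues on this page, evaluation is launched with the training script pointed at the eval config:

python train_st_ed.py config/eval.json

utils.load_model then looks for the checkpoint at models/<load_step>.pt, with load_step set in config/eval.json. Whether this is the intended way to use the released pre-trained weights is an assumption, not confirmed by the author.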

PerceptualSimilarity

Hi Zhengyuf,

The train_st_ed.py script is looking for a folder named PerceptualSimilarity; can you provide the source for this?

STED-gaze-master# python train_st_ed.py config/eval.json
Traceback (most recent call last):
File "train_st_ed.py", line 20, in
from models import STED
File "/translation/STED-gaze-master/models/init.py", line 2, in
from .st_ed import STED
File "/translation/STED-gaze-master/models/st_ed.py", line 27, in
from .PerceptualSimilarity.lpips_models import PerceptualLoss
ModuleNotFoundError: No module named 'models.PerceptualSimilarity'
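Not the author's fix, but a likely workaround: the missing models/PerceptualSimilarity folder appears to be the LPIPS perceptual-similarity code, presumably https://github.com/richzhang/PerceptualSimilarity cloned into models/ (an assumption). The log in the "Encoder weights not saved" issue further down suggests the standalone lpips package on PyPI also works as a substitute. A minimal sketch under that assumption, with images as NCHW tensors scaled to [-1, 1]:

# pip install lpips
import torch
import lpips

# Same trunk and version as the "[LPIPS] ... trunk [alex], v[0.1]" log line below.
loss_fn = lpips.LPIPS(net='alex', version='0.1')

img0 = torch.rand(1, 3, 128, 128) * 2 - 1  # stand-in images in [-1, 1]
img1 = torch.rand(1, 3, 128, 128) * 2 - 1
distance = loss_fn(img0, img1)  # perceptual distance, one value per image pair

The import in st_ed.py would then need to point at lpips.LPIPS instead of PerceptualLoss.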

Number of training runs the results are averaged over

Hi. I'm very interested in your work.

I've run your training code several times and could confirm that there is some variation in the results.
Can I know how many training runs were averaged to obtain the results in your paper?

Can I get the source code to process the 'Columbia' and 'EYEDIAP' datasets?

Hello.

As the title says, I would like to know whether the source code for processing the 'Columbia' and 'EYEDIAP' datasets is available.
I know that EYEDIAP itself is not publicly available, but I would be very pleased to get the code that normalizes this dataset.

Additionally, how many frames did you use during evaluation from the EYEDIAP dataset, which consists of a set of videos?

Thank you

Generating images with random gaze directions

Hello Yufeng,

Thank you for your excellent work and the code.

I used the published pre-trained models to generate some images with manually set gaze directions based on the MPIIFaceGaze dataset. The synthetic images look great.
However, when I used face images I collected myself as input, the gaze redirection results were not good; in some cases the skin color or even the apparent gender changed.

What I did was replace test_visualize['image_a'] and test_visualize['image_b'] with my own images. The full-face images were rescaled to 128x128, but I did not apply the normalization step to my own images.

Could you offer some suggestions, given that the synthetic images differ so much from the source images?
Thank you again for your help.

Sincerely,
Shiwei
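A note for readers hitting the same problem: the skipped normalization step is the most likely culprit. STED builds on FAZE-style data normalization, which warps the face patch using the camera intrinsics and an estimated head pose, so a plain resize feeds the model inputs from a different distribution. The sketch below covers only the trivial part, resizing and rescaling; the [-1, 1] tensor format is an assumption about the expected input, and the geometric warp additionally requires camera calibration and head-pose estimates.

import cv2
import numpy as np
import torch

def to_model_input(bgr_image):
    # Resize to the 128x128 input size mentioned above; convert OpenCV's BGR
    # to RGB, scale to [-1, 1], and return a 1x3x128x128 tensor. This does
    # NOT perform FAZE-style geometric normalization.
    face = cv2.resize(bgr_image, (128, 128))
    face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB).astype(np.float32)
    face = face / 127.5 - 1.0
    return torch.from_numpy(face.transpose(2, 0, 1)).unsqueeze(0)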

Encoder weights not saved

It appears the model trained with ST-ED.json isn't saving the encoder weights properly: loading them when running with eval.json fails. Can you advise a fix for this issue?

python train_st_ed.py config/eval.json

config/eval.json
2021-01-28 11:48:38,787 Loading config/eval.json
Setting up [LPIPS] perceptual loss: trunk [alex], v[0.1], spatial [off]
Loading model from: /root/miniconda3/lib/python3.8/site-packages/lpips/weights/v0.1/alex.pth
Traceback (most recent call last):
File "train_st_ed.py", line 48, in
load_model(network, "models/"+ str(config.load_step) + '.pt')
File "/translation/STED-gaze-master/utils.py", line 250, in load_model
network.encoder.load_state_dict(checkpoint['encoder'])
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Encoder:
Missing key(s) in state_dict: "encoder.initial.conv1.weight", "encoder.initial.conv2.weight", "encoder.block1.compo1.conv.weight", "encoder.block1.compo2.conv.weight", "encoder.block1.compo3.conv.weight", "encoder.block1.compo4.conv.weight", "encoder.trans1.composite.conv.weight", "encoder.block2.compo1.conv.weight", "encoder.block2.compo2.conv.weight", "encoder.block2.compo3.conv.weight", "encoder.block2.compo4.conv.weight", "encoder.trans2.composite.conv.weight", "encoder.block3.compo1.conv.weight", "encoder.block3.compo2.conv.weight", "encoder.block3.compo3.conv.weight", "encoder.block3.compo4.conv.weight", "encoder.trans3.composite.conv.weight", "encoder.block4.compo1.conv.weight", "encoder.block4.compo2.conv.weight", "encoder.block4.compo3.conv.weight", "encoder.block4.compo4.conv.weight", "encoder.trans4.composite.conv.weight", "encoder.block5.compo1.conv.weight", "encoder.block5.compo2.conv.weight", "encoder.block5.compo3.conv.weight", "encoder.block5.compo4.conv.weight", "encoder_fc_pseudo_labels1.weight", "encoder_fc_pseudo_labels1.bias", "encoder_fc_pseudo_labels2.weight", "encoder_fc_pseudo_labels2.bias", "encoder_fc_embeddings1.weight", "encoder_fc_embeddings1.bias", "encoder_fc_embeddings2.weight", "encoder_fc_embeddings2.bias".
Unexpected key(s) in state_dict: "module.encoder.initial.conv1.weight", "module.encoder.initial.conv2.weight", "module.encoder.block1.compo1.conv.weight", "module.encoder.block1.compo2.conv.weight", "module.encoder.block1.compo3.conv.weight", "module.encoder.block1.compo4.conv.weight", "module.encoder.trans1.composite.conv.weight", "module.encoder.block2.compo1.conv.weight", "module.encoder.block2.compo2.conv.weight", "module.encoder.block2.compo3.conv.weight", "module.encoder.block2.compo4.conv.weight", "module.encoder.trans2.composite.conv.weight", "module.encoder.block3.compo1.conv.weight", "module.encoder.block3.compo2.conv.weight", "module.encoder.block3.compo3.conv.weight", "module.encoder.block3.compo4.conv.weight", "module.encoder.trans3.composite.conv.weight", "module.encoder.block4.compo1.conv.weight", "module.encoder.block4.compo2.conv.weight", "module.encoder.block4.compo3.conv.weight", "module.encoder.block4.compo4.conv.weight", "module.encoder.trans4.composite.conv.weight", "module.encoder.block5.compo1.conv.weight", "module.encoder.block5.compo2.conv.weight", "module.encoder.block5.compo3.conv.weight", "module.encoder.block5.compo4.conv.weight", "module.encoder_fc_pseudo_labels1.weight", "module.encoder_fc_pseudo_labels1.bias", "module.encoder_fc_pseudo_labels2.weight", "module.encoder_fc_pseudo_labels2.bias", "module.encoder_fc_embeddings1.weight", "module.encoder_fc_embeddings1.bias", "module.encoder_fc_embeddings2.weight", "module.encoder_fc_embeddings2.bias".
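A diagnosis for readers: every unexpected key carries a module. prefix, which is the signature of a checkpoint saved from a model wrapped in torch.nn.DataParallel and then loaded into an unwrapped module. This is not the author's fix, but a common workaround is to strip the prefix before loading; a minimal sketch, to be applied where utils.load_model calls load_state_dict:

def strip_dataparallel_prefix(state_dict):
    # torch.nn.DataParallel stores parameters as "module.<name>"; an unwrapped
    # model expects plain "<name>", hence the Missing/Unexpected key pairs above.
    return {k[len("module."):] if k.startswith("module.") else k: v
            for k, v in state_dict.items()}

# In utils.load_model (per the traceback), the failing line would become:
# network.encoder.load_state_dict(strip_dataparallel_prefix(checkpoint['encoder']))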

About functional loss

Hi, I'm very thankful for sharing your code and this awesome project.

According to the paper, the coefficient of the functional redirection loss was empirically set to 20.
I wonder why it is set to 200 in config/ST-ED.json.

Can you explain this? Thank you.

Question about semi-supervised cross dataset evaluation

Hi @zhengyuf, thank you for your great work, STED.
I have a question about the training phase for the semi-supervised evaluation.

In Section 4.5, the paper says, "we estimate the joint probability distribution function of the gaze and head orientation values of the labeled subset and sample random target conditions from it".

  1. Could you explain more about 'the joint probability distribution function' you applied for sampling (see the sketch after this list), and why this distribution needs to be considered? Is this meant to prevent unrealistic cases, such as images where the head pose vector and the gaze vector point in completely opposite directions?

  2. What is the purpose of using the rest of the GazeCapture dataset without labels? Are you assuming a situation where ground truth is hard to obtain? In this case, is the image-generation loss still applied to the model, while explicit gaze and head labels are used only for the sampled images? If so, how can you say that 'the model could augment new samples with just small amounts of training data', since you are using the full set of GazeCapture images?

I would appreciate your feedback.
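For readers puzzling over question 1: the paper does not name the estimator, but a kernel density estimate over the stacked gaze and head angles is one straightforward way to realize "estimate the joint probability distribution function ... and sample random target conditions from it". A minimal sketch under that assumption, with pitch/yaw angles in radians and random values standing in for the real labels:

import numpy as np
from scipy.stats import gaussian_kde

# labels: (N, 4) array of [gaze_pitch, gaze_yaw, head_pitch, head_yaw]
# from the labeled subset; random values are used here as a stand-in.
labels = np.random.randn(1000, 4) * 0.3

kde = gaussian_kde(labels.T)       # joint pdf over all four dimensions
targets = kde.resample(16).T       # 16 sampled target conditions, shape (16, 4)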

Preprocessing of 'Columbia' and 'EYEDIAP' datasets

Hi, I am trying to train STED following the README instructions, but I found that https://github.com/swook/faze_preprocess doesn't provide preprocessing for the 'Columbia' or 'EYEDIAP' datasets, which are needed for the evaluation of STED. Specifically, the respective supplementary data files are needed during preprocessing, and I don't know how to acquire them.
Could you tell me where to get the supplementary data, or how to generate it?
Looking forward to your reply. Thank you!

Getting high gaze_redirection loss

Hello,
First of all, thanks so much for your excellent work!
I am running the pipeline you provided, hoping to reproduce similar results.
I ran the data preprocessing for both MPIIGaze and GazeCapture.
I am getting a training loss of 4.3 degrees, whereas the testing error is nearly 17.2 degrees; this is on GazeCapture.
Please find both charts here.
This was after running the training on GC to completion. However, I did not run the semi-supervised version; I have triggered a run of it and will update the results here. Could you please let me know what could be causing the high error?

The reference would be Table 2, where the error on GC is 2.195.
Thanks in advance!

There appear to be a few errors in the names of some items in eval.json

There appear to be a few errors in the names of some items in eval.json; they currently read:
"size_0d_unit": 1024,
"num_1d_units": 4,
"size_1d_unnit": 16,
"num_2d_uits": 4,
"size_2d_unit": 16,
Should they read as follows?
"size_0d_unit": 1024,
"num_1d_units": 4,
"size_1d_unit": 16,
"num_2d_units": 4,
"size_2d_unit": 16,
