head-pose-estimation's Introduction

Hi there 👋

I'm Yin Guobing (in Chinese: 尹国冰), a real human being! 🏃

I was born and grew up in China, a place of long history and ancient culture. After getting my master's degree in physics, I unwittingly broke into the world of computer vision and started my adventure. This is a place where I keep the open source projects that you might find helpful. 🌱

These projects do not aim to replace existing mature solutions; instead, they treat ease of understanding as the first priority and were written with care and ❤️. Most of them also have companion posts and videos, which make them easier to understand through visual illustration. You can find them on my blog.

If you have any job opportunities in Guangzhou or Shenzhen, you are welcome to contact me!

Keywords: large language models, multimodal, deep learning image processing, Rust.

head-pose-estimation's Issues

Where is your model.txt from, and how was it made?

I'm curious where the 3D world-coordinate facial landmarks in model.txt come from, and how they were obtained.
Curiously, I find that different implementations use different values. Why is that?

What is the landmark order in model.txt?

In your model.txt file, the numbers are laid out horizontally.
What do the values represent? x0, y0, z0, x1, y1, z1?
If so, what image points (image_points.shape) should be passed to solvePnP? You said the 2D and 3D points must correspond.
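
For reference, a minimal sketch of how such a file could be paired with solvePnP, assuming (not confirmed by the author) that model.txt holds whitespace-separated numbers that flatten into 68 (x, y, z) triplets in landmark-index order:

import numpy as np

# Assumption: model.txt flattens into 68 (x, y, z) triplets in landmark order.
model_points_3d = np.loadtxt('assets/model.txt').reshape(-1, 3)  # shape (68, 3)

# For cv2.solvePnP, the detected 2D landmarks must pair one-to-one with these
# rows, i.e. image_points.shape == (68, 2) in the same landmark order.
print(model_points_3d.shape)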

'assets/pose_model' Is a directory

Running python3 estimate_head_pose.py --video '/path/to/file'

The last message is:
OSError: Unable to open file (file read failed: time = Tue Feb 9 18:05:26 2021 , filename = 'assets/pose_model', file descriptor = 4, errno = 21, error message = 'Is a directory', buf = 0x7ffd75ab19a8, total read size = 8, bytes this sub-read = 8, bytes actually read = 18446744073709551615, offset = 0)

It looks like this refers to a directory in the source code?

How to define Model points?

Model points are defined manually.
So how should these values be chosen?
If these values are changed, is accuracy affected?
Are there any conditions for choosing proper model points?

How can I solve this error?

OpenCV version: 3.3.1
Linux is fine! Python multiprocessing works.
OpenCV Error: Unspecified error (FAILED: fs.is_open(). Can't open "assets/deploy.prototxt") in ReadProtoFromTextFile, file /io/opencv/modules/dnn/src/caffe/caffe_io.cpp, line 1113
Traceback (most recent call last):
File "/content/head-pose-estimation/estimate_head_pose.py", line 161, in
main()
File "/content/head-pose-estimation/estimate_head_pose.py", line 57, in main
mark_detector = MarkDetector()
File "/content/head-pose-estimation/mark_detector.py", line 68, in init
self.face_detector = FaceDetector()
File "/content/head-pose-estimation/mark_detector.py", line 14, in init
self.face_net = cv2.dnn.readNetFromCaffe(dnn_proto_text, dnn_model)
cv2.error: /io/opencv/modules/dnn/src/caffe/caffe_io.cpp:1113: error: (-2) FAILED: fs.is_open(). Can't open "assets/deploy.prototxt" in function ReadProtoFromTextFile

Could you help me?

Invalid protobuf

Hello! I am getting this error. I would appreciate it if you could help with the problem.

onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError]: 7: INVALID_PROTOBUF: Load model from assets/face_detector.onnx failed:Protobuf parsing failed.

AttributeError: 'cv2.TickMeter' object has no attribute 'count'

Using TensorFlow 1.4, OpenCV 4.1, and Python 3.6.8, I executed:

python3 estimate_head_pose.py

And I got the following error:

OpenCV version: 4.1.0
Linux is fine! Python multiprocessing works.
WARNING: Logging before flag parsing goes to stderr.
W0724 14:13:06.219129 139818702903104 deprecation_wrapper.py:119] From /home/omar/Documents/head-pose-estimation/mark_detector.py:78: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.

W0724 14:13:06.219383 139818702903104 deprecation_wrapper.py:119] From /home/omar/Documents/head-pose-estimation/mark_detector.py:79: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

W0724 14:13:06.366628 139818702903104 deprecation_wrapper.py:119] From /home/omar/Documents/head-pose-estimation/mark_detector.py:84: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2019-07-24 14:13:06.367032: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-07-24 14:13:06.387688: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2400000000 Hz
2019-07-24 14:13:06.387975: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x56f77e0 executing computations on platform Host. Devices:
2019-07-24 14:13:06.387997: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): ,
2019-07-24 14:13:08.168135: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
Traceback (most recent call last):
File "estimate_head_pose.py", line 143, in
main()
File "estimate_head_pose.py", line 102, in main
print(tm.getTimeSec()/tm.count())
AttributeError: 'cv2.TickMeter' object has no attribute 'count'
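
A likely fix, assuming an OpenCV 4.x build where cv2.TickMeter exposes getCounter() rather than count():

import cv2

tm = cv2.TickMeter()
tm.start(); tm.stop()                     # one measured cycle for demonstration
print(tm.getTimeSec() / tm.getCounter())  # average seconds per cycle; count() does not exist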

Using images

Is there a way to make this work on images rather than video?
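
Two hedged options are sketched below: cv2.VideoCapture can also read a printf-style image sequence, so the existing video loop may work unchanged; alternatively, read a single image and run the same per-frame steps once.

import cv2

# Option 1: VideoCapture accepts printf-style image-sequence patterns, so the
# existing loop can consume frames/img_000.jpg, frames/img_001.jpg, ...
cap = cv2.VideoCapture('frames/img_%03d.jpg')
ok, frame = cap.read()

# Option 2: read a single image and apply the same per-frame steps
# (face detection, landmark detection, solvePnP) exactly once.
frame = cv2.imread('face.jpg')
if frame is None:
    raise FileNotFoundError('face.jpg')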

Algorithm of your estimation

Hi Yinguobing,

Thanks for your code.
I know you have listed the three major steps of your work in the wiki, but I was looking for more information.
Can you please share an abstract-level algorithm that explains the whole process?
I would really appreciate it and will cite your work.
Best regards
Hossain

Ask: Best Pose Suggestion

I came across this project on Google. Is it possible to pass in two pictures of the same person, one looking to the front and the other looking to the side, up, or down, and have it suggest the best frontal face pose? If so, how?
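
One hedged approach: estimate (pitch, yaw, roll) for each photo with this project's pipeline, then pick the photo whose angles are closest to zero. A minimal sketch, where the pose tuples are assumed to come from the repo's solvePnP step:

import numpy as np

def most_frontal(poses):
    """Return the index of the pose closest to frontal.

    poses: list of (pitch, yaw, roll) tuples in degrees.
    """
    return int(np.argmin([np.linalg.norm(p) for p in poses]))

# Example: index 0 (nearly frontal) wins over index 1 (looking aside).
print(most_frontal([(2.0, -3.5, 1.0), (5.0, 40.0, 2.0)]))  # -> 0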

Estimate head pose as vector of direction

How can I get the vector of the head's direction? Is it possible? Given the translation and rotation vectors and the 68 facial landmarks, how can I compute a 3D head-direction vector?
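
A hedged sketch: convert the solvePnP rotation vector into a rotation matrix with cv2.Rodrigues, then rotate the model's forward axis. This assumes the model's "forward" direction is +Z; flip the sign if your model points use the opposite convention.

import cv2
import numpy as np

rvec = np.array([[0.1], [0.4], [0.0]])       # example rotation vector from solvePnP
rmat, _ = cv2.Rodrigues(rvec)                # 3x3 rotation matrix
forward = rmat @ np.array([0.0, 0.0, 1.0])   # head direction in camera coordinates
forward /= np.linalg.norm(forward)           # unit direction vector
print(forward)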

Training and adding another class?

Thanks for sharing your great work!

Can I replace the file frozen_inference_graph.pb with another one that I trained myself? Or can I re-train it for another class besides faces?

Thanks!

IndexError: list index (0) out of range

Hello Yinguobing,

I was working with the former version of the repo, and when I updated my virtual environment for the latest repo, the former one stopped working. I need to figure out why, but I wanted to ask whether you have a clue about this error.

Traceback (most recent call last):
  File "estimate_head_pose.py", line 190, in <module>
    main()
  File "estimate_head_pose.py", line 66, in main
    mark_detector = MarkDetector()
  File "/Users/EUGENE/Documents/ADAS/HP/mark_detector.py", line 75, in __init__
    self.model = keras.models.load_model(saved_model)
  File "/Users/EUGENE/Documents/ADAS/HP/venv/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py", line 212, in load_model
    return saved_model_load.load(filepath, compile, options)
  File "/Users/EUGENE/Documents/ADAS/HP/venv/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 130, in load
    _read_legacy_metadata(object_graph_def, metadata)
  File "/Users/EUGENE/Documents/ADAS/HP/venv/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 179, in _read_legacy_metadata
    node_paths = _generate_object_paths(object_graph_def)
  File "/Users/EUGENE/Documents/ADAS/HP/venv/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 204, in _generate_object_paths
    for reference in object_graph_def.nodes[current_node].children:
IndexError: list index (0) out of range

I checked the model files' locations and they seem to be right.

-Youjin

Heatmap landmark network model

Hello, I was deeply inspired after reading your blog series on facial landmark detection. That series regresses landmark positions directly, and the detected landmarks jitter a little. I also noticed that you experimented with a heatmap-based method and that this project has a corresponding heatmap branch, but I could not find the model or the training code there. Would it be convenient for you to share the network model and training code? Thank you!

Inaccuracy when yaw is over around 80 degrees

The pose estimation seems to work fine in the -45 to 45 degree range; however, accuracy keeps dropping as yaw goes beyond about 80 degrees, and when yaw is 90 degrees, the estimated yaw is 0...

Is this caused by inaccuracy of the facial landmarks? (I spotted a lot of misplaced landmarks in my own test image sets.)

Not an issue: need help interpreting the angles.

Hi,

Thanks to your model, I was able to get the angles and head pose of the person in my videos, including pitch, yaw, and roll.
For example: pitch: -5.14, yaw: -0.35, roll: 2.94
However, I am not able to interpret this. How do we define how much pitch makes the face look down, or how much roll tilts the face to the left or right? The numbers above don't make sense to me, even though I used your code from pts_tool.py to get pitch, yaw, and roll.

Can you please clarify a bit?

Thanks.
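
A hedged sketch of one common way to read the numbers: decompose the solvePnP rotation into Euler angles in degrees with cv2.RQDecomp3x3. The sign conventions depend on the model and camera axes, so verify them against a few known poses (e.g., deliberately look down and watch which angle moves).

import cv2
import numpy as np

rvec = np.array([[0.1], [0.4], [0.0]])         # example rotation vector from solvePnP
rmat, _ = cv2.Rodrigues(rvec)
angles, _, _, _, _, _ = cv2.RQDecomp3x3(rmat)  # Euler angles in degrees
pitch, yaw, roll = angles                      # axis meanings depend on your conventions
print(f'pitch={pitch:.1f}, yaw={yaw:.1f}, roll={roll:.1f}')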

Error reading SavedModel

OS: Windows 11 Pro
python: 3.10.9
tensorflow: 2.12.0
openCV: 4.7.0.72

When running python main.py --cam 0 (or any command), I get the following error:

Traceback (most recent call last):
  File "C:\Users\test\python\head_pose_estimation\main.py", line 49, in <module>
    mark_detector = MarkDetector()
  File "C:\Users\test\python\head_pose_estimation\mark_detector.py", line 75, in __init__
    self.model = keras.models.load_model(saved_model)
  File "C:\Users\test\AppData\Local\r-miniconda\lib\site-packages\keras\saving\saving_api.py", line 212, in load_model
    return legacy_sm_saving_lib.load_model(
  File "C:\Users\test\AppData\Local\r-miniconda\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\test\AppData\Local\r-miniconda\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 703, in is_directory_v2
    return _pywrap_file_io.IsDirectory(compat.path_to_bytes(path))
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xed in position 47: invalid continuation byte

There seems to be a problem when reading saved_model.pb or the files in the variables folder. Re-downloading the files does not help.

Landmark detection accuracy?

Hello, thank you for the great code! I wonder if the landmark detection accuracy in this project is identical to that of https://github.com/yinguobing/cnn-facial-landmark
Is it the same CNN and training dataset?

I am working on facial expression detection, and I wonder if you have any suggestions for getting more accurate frontal-face landmark detection when the eyes are closed, the mouth is wide open, or one eyebrow is higher than the other?

Thanks!

no preview showing when using cam

Hello @yinguobing,

For some reason, when I run estimate_head_pose.py, it runs well with video inputs but shows nothing with a camera input.
There is no error message on the console, and the camera light is on.
When I run just a basic script, the cam preview shows up fine.
What I tried is a basic script like this:

import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame',gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

I am still trying to find the reason, but I wanted to check whether you have any hints about this.

Finding orientation

How do I know whether the head is looking to the left or right, or up or down?
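
A rough sketch, assuming pitch and yaw in degrees with yaw > 0 meaning the head is turned to one side and pitch > 0 meaning up (the signs depend on your axis conventions, so verify them first):

def orientation(pitch, yaw, threshold=15.0):
    """Coarsely classify head orientation from Euler angles in degrees."""
    horizontal = 'left' if yaw > threshold else 'right' if yaw < -threshold else 'center'
    vertical = 'up' if pitch > threshold else 'down' if pitch < -threshold else 'center'
    return horizontal, vertical

print(orientation(pitch=-2.0, yaw=25.0))  # -> ('left', 'center')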

3D model points to be made as dynamic from the input image or video

I would like to know how to change the 3D model points declared in Pose_estimator.py for a different image size, namely 1280 x 1084.

def __init__(self, img_size=(480, 640)):
    self.size = img_size

    # 3D model points.
    self.model_points = np.array([
        (0.0, 0.0, 0.0),             # Nose tip
        (0.0, -330.0, -65.0),        # Chin
        (-225.0, 170.0, -135.0),     # Left eye left corner
        (225.0, 170.0, -135.0),      # Right eye right corner
        (-150.0, -150.0, -125.0),    # Left Mouth corner
        (150.0, -150.0, -125.0)      # Right mouth corner
    ]) / 4.5

Please suggest a way to handle this and improve the accuracy of my landmark point detection.
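
A hedged note with a sketch: the 3D model points above are in model-space units, so they do not need to change with image size; what should track the image is the camera matrix passed to solvePnP. A common uncalibrated approximation sets the focal length to the image width and the principal point to the image center (assumptions, not calibrated values):

import numpy as np

width, height = 1280, 1084   # the image size from the question
focal_length = width         # rough focal-length guess for an uncalibrated camera
camera_matrix = np.array([
    [focal_length, 0.0,          width / 2.0],   # fx, 0, cx
    [0.0,          focal_length, height / 2.0],  # 0, fy, cy
    [0.0,          0.0,          1.0],
], dtype=np.float64)
print(camera_matrix)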

Unnecessary training ops in the final model after freeze_graph

@yinguobing I listed the ops from your prebuilt frozen_inference_graph.pb and from a frozen graph I created with freeze_graph from your saved model in head-pose-estimation/assets/pose_model/.

How did you do the optimization/pruning during freezing? What options did you use with tools.freeze_graph to remove the unnecessary training ops?

Below are the differences between my frozen saved model and your frozen_inference_graph.pb:

saved_model → frozen_inference_graph.pb

1. "image_tensor" (?, ?, ?, 3) → input_image_tensor (128, 128, 3)
2. All "map" nodes replaced by Reshape, input_to_float
3. layer6/final_dense removed
4. "layer" prefix removed from op names: how, and why?
5. conv2d names suffixed "conv2d_1" through "conv2d_8"

What are the steps for optimization and pruning to get a frozen graph like frozen_inference_graph.pb? (A hedged sketch follows the op listings below.)

python3 -m tensorflow.python.tools.freeze_graph --input_saved_model_dir ../head-pose-estimation/assets/pose_model/ --output_node_names layer6/final_dense --output_graph frozen_graph.pb
python3 demo.py --model frozen_graph.pb --list_ops true
butterfly/image_tensor
butterfly/map/Shape
butterfly/map/strided_slice/stack
butterfly/map/strided_slice/stack_1
butterfly/map/strided_slice/stack_2
butterfly/map/strided_slice
butterfly/map/TensorArray
butterfly/map/TensorArrayUnstack/Shape
butterfly/map/TensorArrayUnstack/strided_slice/stack
butterfly/map/TensorArrayUnstack/strided_slice/stack_1
butterfly/map/TensorArrayUnstack/strided_slice/stack_2
butterfly/map/TensorArrayUnstack/strided_slice
butterfly/map/TensorArrayUnstack/range/start
butterfly/map/TensorArrayUnstack/range/delta
butterfly/map/TensorArrayUnstack/range
butterfly/map/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3
butterfly/map/Const
butterfly/map/TensorArray_1
butterfly/map/while/iteration_counter
butterfly/map/while/Enter
butterfly/map/while/Enter_1
butterfly/map/while/Enter_2
butterfly/map/while/Merge
butterfly/map/while/Merge_1
butterfly/map/while/Merge_2
butterfly/map/while/Less/Enter
butterfly/map/while/Less
butterfly/map/while/Less_1
butterfly/map/while/LogicalAnd
butterfly/map/while/LoopCond
butterfly/map/while/Switch
butterfly/map/while/Switch_1
butterfly/map/while/Switch_2
butterfly/map/while/Identity
butterfly/map/while/Identity_1
butterfly/map/while/Identity_2
butterfly/map/while/add/y
butterfly/map/while/add
butterfly/map/while/TensorArrayReadV3/Enter
butterfly/map/while/TensorArrayReadV3/Enter_1
butterfly/map/while/TensorArrayReadV3
butterfly/map/while/resize/ExpandDims/dim
butterfly/map/while/resize/ExpandDims
butterfly/map/while/resize/size
butterfly/map/while/resize/ResizeBilinear
butterfly/map/while/resize/Squeeze
butterfly/map/while/TensorArrayWrite/TensorArrayWriteV3/Enter
butterfly/map/while/TensorArrayWrite/TensorArrayWriteV3
butterfly/map/while/add_1/y
butterfly/map/while/add_1
butterfly/map/while/NextIteration
butterfly/map/while/NextIteration_1
butterfly/map/while/NextIteration_2
butterfly/map/while/Exit_2
butterfly/map/TensorArrayStack/TensorArraySizeV3
butterfly/map/TensorArrayStack/range/start
butterfly/map/TensorArrayStack/range/delta
butterfly/map/TensorArrayStack/range
butterfly/map/TensorArrayStack/TensorArrayGatherV3
butterfly/layer1/conv2d/kernel
butterfly/layer1/conv2d/kernel/read
butterfly/layer1/conv2d/bias
butterfly/layer1/conv2d/bias/read
butterfly/layer1/conv2d/Conv2D
butterfly/layer1/conv2d/BiasAdd
butterfly/layer1/conv2d/Relu
butterfly/layer1/max_pooling2d/MaxPool
butterfly/layer2/conv2d/kernel
butterfly/layer2/conv2d/kernel/read
butterfly/layer2/conv2d/bias
butterfly/layer2/conv2d/bias/read
butterfly/layer2/conv2d/Conv2D
butterfly/layer2/conv2d/BiasAdd
butterfly/layer2/conv2d/Relu
butterfly/layer2/conv2d_1/kernel
butterfly/layer2/conv2d_1/kernel/read
butterfly/layer2/conv2d_1/bias
butterfly/layer2/conv2d_1/bias/read
butterfly/layer2/conv2d_1/Conv2D
butterfly/layer2/conv2d_1/BiasAdd
butterfly/layer2/conv2d_1/Relu
butterfly/layer2/max_pooling2d/MaxPool
butterfly/layer3/conv2d/kernel
butterfly/layer3/conv2d/kernel/read
butterfly/layer3/conv2d/bias
butterfly/layer3/conv2d/bias/read
butterfly/layer3/conv2d/Conv2D
butterfly/layer3/conv2d/BiasAdd
butterfly/layer3/conv2d/Relu
butterfly/layer3/conv2d_1/kernel
butterfly/layer3/conv2d_1/kernel/read
butterfly/layer3/conv2d_1/bias
butterfly/layer3/conv2d_1/bias/read
butterfly/layer3/conv2d_1/Conv2D
butterfly/layer3/conv2d_1/BiasAdd
butterfly/layer3/conv2d_1/Relu
butterfly/layer3/max_pooling2d/MaxPool
butterfly/layer4/conv2d/kernel
butterfly/layer4/conv2d/kernel/read
butterfly/layer4/conv2d/bias
butterfly/layer4/conv2d/bias/read
butterfly/layer4/conv2d/Conv2D
butterfly/layer4/conv2d/BiasAdd
butterfly/layer4/conv2d/Relu
butterfly/layer4/conv2d_1/kernel
butterfly/layer4/conv2d_1/kernel/read
butterfly/layer4/conv2d_1/bias
butterfly/layer4/conv2d_1/bias/read
butterfly/layer4/conv2d_1/Conv2D
butterfly/layer4/conv2d_1/BiasAdd
butterfly/layer4/conv2d_1/Relu
butterfly/layer4/max_pooling2d/MaxPool
butterfly/layer5/conv2d/kernel
butterfly/layer5/conv2d/kernel/read
butterfly/layer5/conv2d/bias
butterfly/layer5/conv2d/bias/read
butterfly/layer5/conv2d/Conv2D
butterfly/layer5/conv2d/BiasAdd
butterfly/layer5/conv2d/Relu
butterfly/layer6/flatten/Shape
butterfly/layer6/flatten/strided_slice/stack
butterfly/layer6/flatten/strided_slice/stack_1
butterfly/layer6/flatten/strided_slice/stack_2
butterfly/layer6/flatten/strided_slice
butterfly/layer6/flatten/Reshape/shape/1
butterfly/layer6/flatten/Reshape/shape
butterfly/layer6/flatten/Reshape
butterfly/layer6/dense/kernel
butterfly/layer6/dense/kernel/read
butterfly/layer6/dense/bias
butterfly/layer6/dense/bias/read
butterfly/layer6/dense/MatMul
butterfly/layer6/dense/BiasAdd
butterfly/layer6/dense/Relu
butterfly/layer6/logits/kernel
butterfly/layer6/logits/kernel/read
butterfly/layer6/logits/bias
butterfly/layer6/logits/bias/read
butterfly/layer6/logits/MatMul
butterfly/layer6/logits/BiasAdd
butterfly/layer6/final_dense

butterfly$ python3 demo.py --model frozen_inference_graph.pb --list_ops true
butterfly/input_image_tensor
butterfly/Reshape/shape
butterfly/Reshape
butterfly/input_to_float
butterfly/conv2d/kernel
butterfly/conv2d/kernel/read
butterfly/conv2d/bias
butterfly/conv2d/bias/read
butterfly/conv2d/Conv2D
butterfly/conv2d/BiasAdd
butterfly/conv2d/Relu
butterfly/max_pooling2d/MaxPool
butterfly/conv2d_1/kernel
butterfly/conv2d_1/kernel/read
butterfly/conv2d_1/bias
butterfly/conv2d_1/bias/read
butterfly/conv2d_2/Conv2D
butterfly/conv2d_2/BiasAdd
butterfly/conv2d_2/Relu
butterfly/conv2d_2/kernel
butterfly/conv2d_2/kernel/read
butterfly/conv2d_2/bias
butterfly/conv2d_2/bias/read
butterfly/conv2d_3/Conv2D
butterfly/conv2d_3/BiasAdd
butterfly/conv2d_3/Relu
butterfly/max_pooling2d_2/MaxPool
butterfly/conv2d_3/kernel
butterfly/conv2d_3/kernel/read
butterfly/conv2d_3/bias
butterfly/conv2d_3/bias/read
butterfly/conv2d_4/Conv2D
butterfly/conv2d_4/BiasAdd
butterfly/conv2d_4/Relu
butterfly/conv2d_4/kernel
butterfly/conv2d_4/kernel/read
butterfly/conv2d_4/bias
butterfly/conv2d_4/bias/read
butterfly/conv2d_5/Conv2D
butterfly/conv2d_5/BiasAdd
butterfly/conv2d_5/Relu
butterfly/max_pooling2d_3/MaxPool
butterfly/conv2d_5/kernel
butterfly/conv2d_5/kernel/read
butterfly/conv2d_5/bias
butterfly/conv2d_5/bias/read
butterfly/conv2d_6/Conv2D
butterfly/conv2d_6/BiasAdd
butterfly/conv2d_6/Relu
butterfly/conv2d_6/kernel
butterfly/conv2d_6/kernel/read
butterfly/conv2d_6/bias
butterfly/conv2d_6/bias/read
butterfly/conv2d_7/Conv2D
butterfly/conv2d_7/BiasAdd
butterfly/conv2d_7/Relu
butterfly/max_pooling2d_4/MaxPool
butterfly/conv2d_7/kernel
butterfly/conv2d_7/kernel/read
butterfly/conv2d_7/bias
butterfly/conv2d_7/bias/read
butterfly/conv2d_8/Conv2D
butterfly/conv2d_8/BiasAdd
butterfly/conv2d_8/Relu
butterfly/flatten/Shape
butterfly/flatten/strided_slice/stack
butterfly/flatten/strided_slice/stack_1
butterfly/flatten/strided_slice/stack_2
butterfly/flatten/strided_slice
butterfly/flatten/Reshape/shape/1
butterfly/flatten/Reshape/shape
butterfly/flatten/Reshape
butterfly/dense/kernel
butterfly/dense/kernel/read
butterfly/dense/bias
butterfly/dense/bias/read
butterfly/dense/MatMul
butterfly/dense/BiasAdd
butterfly/dense/Relu
butterfly/logits/kernel
butterfly/logits/kernel/read
butterfly/logits/bias
butterfly/logits/bias/read
butterfly/logits/MatMul
butterfly/logits/BiasAdd
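
As noted above, here is a hedged sketch of one way to strip training-only ops from a frozen graph using TF 1.x's optimize_for_inference_lib. This is not necessarily what the author did (the op renaming in frozen_inference_graph.pb suggests the inference graph was rebuilt rather than merely pruned); the node names below are taken from the freeze_graph command above:

import tensorflow as tf
from tensorflow.python.tools import optimize_for_inference_lib

# Load the frozen graph produced by freeze_graph.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('frozen_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Remove ops not needed for inference and fold constants along the path
# between the named input and output nodes.
optimized = optimize_for_inference_lib.optimize_for_inference(
    graph_def,
    ['image_tensor'],        # input node, as in the freeze_graph command above
    ['layer6/final_dense'],  # output node, as in the freeze_graph command above
    tf.dtypes.float32.as_datatype_enum)

with tf.io.gfile.GFile('optimized_graph.pb', 'wb') as f:
    f.write(optimized.SerializeToString())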
