Comments (3)
W and b are learnable parameters, and their values at the optimum (or at the final step) are not unique to a single run, so it is normal that the values differ slightly.
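A minimal sketch of that point (assumed toy setup, not the lab code): two gradient-descent runs from different initializations both land near the optimum after a fixed number of steps, but the final floating-point values are slightly different.

```python
import numpy as np

# Fit y = W*x + b by plain gradient descent on the mean squared error,
# starting from two different initializations. Both runs end close to the
# true (W=2, b=3), but not at bit-identical values.
x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x + 3.0  # "true" parameters: W = 2, b = 3

def fit(w, b, lr=0.1, steps=500):
    for _ in range(steps):
        pred = w * x + b
        w -= lr * (2 * (pred - y) * x).mean()  # dL/dW
        b -= lr * (2 * (pred - y)).mean()      # dL/db
    return w, b

w1, b1 = fit(0.0, 0.0)
w2, b2 = fit(5.0, -5.0)
print(w1, b1)  # both close to (2, 3), but the two runs differ slightly
print(w2, b2)
```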
Oops, never mind, I misread.
I was not able to reproduce this error.
Mine shows the same.
```
# list version
0 137.324 [-0.00311102] [ 0.]
100 0.0745313 [ 2.1351583] [ 2.42115235]
200 0.0168967 [ 2.0603478] [ 2.71034646]
300 0.00242139 [ 2.02283072] [ 2.89033508]
400 0.000215083 [ 2.00680494] [ 2.96731591]
500 1.19252e-05 [ 2.00160241] [ 2.99230409]
600 4.07653e-07 [ 2.00029635] [ 2.99857688]
700 8.39337e-09 [ 2.00004268] [ 2.99979568]
800 1.0849e-10 [ 2.00000477] [ 2.99997711]
900 1.0784e-11 [ 2.00000167] [ 2.99999261]
```

```
# placeholder
0 136.361 [ 0.00615465] [ 0.]
100 0.0755595 [ 2.13604784] [ 2.41677833]
200 0.0170177 [ 2.06055903] [ 2.70930576]
300 0.00241298 [ 2.02279139] [ 2.89052629]
400 0.000211428 [ 2.00674629] [ 2.96759462]
500 1.1529e-05 [ 2.00157547] [ 2.99243259]
600 3.8709e-07 [ 2.00028872] [ 2.9986136]
700 7.72112e-09 [ 2.00004077] [ 2.99980354]
800 8.94228e-11 [ 2.00000453] [ 2.99997878]
900 1.0784e-11 [ 2.00000167] [ 2.99999261]
```
from deeplearningzerotoall.
@kkweon, thanks for your kind help! I will double-check whether I missed something. If others run into the same problem, please let me know!
+++
I suppose it is related to the step or `feed_dict` timing, rather than to placeholder vs. list feeding.
```python
import tensorflow as tf

# Model parameters
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
linear_model = x * W + b
# cost/loss function
loss = tf.reduce_sum(tf.square(linear_model - y))  # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# training data
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)  # reset values to wrong
for i in range(1000):
    curr_W, curr_b, curr_loss, _ = sess.run([W, b, loss, train], {x: x_train, y: y_train})
print("W: %s b: %s loss: %s" % (curr_W, curr_b, curr_loss))

# evaluate training accuracy
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
print("W: %s b: %s loss: %s" % (curr_W, curr_b, curr_loss))
```

Output:

```
W: [-0.9999969] b: [ 0.99999082] loss: 5.77707e-11
W: [-0.9999969] b: [ 0.99999082] loss: 5.69997e-11
```
Can someone explain why the two losses above are different? I thought the second `sess.run` only evaluates the loss and does not train one more step.
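A hypothetical numpy re-enactment of what is commonly said to happen here: inside `sess.run([W, b, loss, train], ...)`, the fetched `loss` is computed from the variable values read before `train` applies its update in that same call, whereas the separate evaluation afterwards sees the already-updated parameters, so its loss is one step fresher and slightly smaller.

```python
import numpy as np

# Same toy problem as the TF code above, redone in numpy so the ordering
# is explicit: the loss printed inside the loop is computed BEFORE that
# step's update, the final evaluation AFTER the last update.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, -1.0, -2.0, -3.0])
W, b, lr = 0.3, -0.3, 0.01

for _ in range(1000):
    pred = W * x + b
    loss_in_loop = np.sum((pred - y) ** 2)  # what the fetched `loss` sees
    W -= lr * np.sum(2 * (pred - y) * x)    # the update `train` performs
    b -= lr * np.sum(2 * (pred - y))

loss_after = np.sum((W * x + b - y) ** 2)   # the separate evaluation
print(loss_in_loop, loss_after)             # loss_after is slightly smaller
```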
Stale issue message