- 💬 Ask me about Artificial Intelligence or Google
- 📫 How to reach me: [email protected]
- 😄 Pronouns: he/him
- ⚡ Fun fact: Father to Chris and Claudia Moroney
Learn more about what I do by visiting my website!
In Chapter 12's tflite-transferlearning.ipynb, in line 2 of the third-from-last code paragraph, the range function's end index should be 100, not 99. Since range excludes its end index, an end index of 99 means the last index visited is 98, not 99.
The same flaw is in line 2 of the last paragraph.
By the way, the statement "index=73" in the last paragraph is useless: in the loop "for index in range(0,99)", index gets reassigned on every iteration.
Sorry for my bad English; I hope I made this bug clear.
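A quick sketch of the off-by-one behavior, in plain Python, independent of the notebook:

```python
# range() excludes its end index, so range(0, 99) stops at index 98.
indices = list(range(0, 99))
print(indices[-1])   # 98 -- index 99 is never visited

# To cover indices 0..99 inclusive, the end index must be 100:
indices = list(range(0, 100))
print(indices[-1])   # 99
```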
Hi,
If anyone is downloading the sarcasm dataset directly from the Kaggle website, this alteration to loading the JSON data will execute properly (the Kaggle file is one JSON object per line, not a single JSON array):

import json
from bs4 import BeautifulSoup

datastore = []
with open("tmp/Sarcasm_Headlines_Dataset.json", "r") as f:  # forward slash so "\S" isn't read as an escape
    for line in f:
        datastore.append(json.loads(line))

sentences = []
labels = []
urls = []
for item in datastore:
    sentence = item['headline'].lower()
    sentence = sentence.replace(",", " , ")
    sentence = sentence.replace(".", " . ")
    sentence = sentence.replace("-", " - ")
    sentence = sentence.replace("/", " / ")
    soup = BeautifulSoup(sentence, "html.parser")  # explicit parser avoids a warning
    sentence = soup.get_text()
    words = sentence.split()
    filtered_sentence = ""
    for word in words:
        word = word.translate(table)  # 'table' and 'stopwords' are defined earlier in the notebook
        if word not in stopwords:
            filtered_sentence = filtered_sentence + word + " "
    sentences.append(filtered_sentence)
    labels.append(item['is_sarcastic'])
    urls.append(item['article_link'])
Hi Mr Moroney,
The savepar and savepar2 variables hold weights much different from the expected ~ -1 and 2.
Do you have any suggestions as to why the results are off?
Kind regards
It seems like the author is only selling a book that comes with broken code and is not willing to respond to readers about code issues.
I recently read Chapter 2 and tried the code provided. I am a bit confused about why the model is only fitting 1875 records per epoch, while in the book it is 60,000 per epoch.
I also measured the length of this fashion_mnist dataset from Keras: it is 60,000 for the training set and 10,000 for the test set.
I tried the same code on Google Colab, Jupyter Notebook, and PyCharm, but the issue is still the same.
(I also reshaped the input for the test and training sets, and ran the code with the old TensorFlow version 2.3.0 that was referenced in the book; previously I was using the latest TensorFlow version 2.7.0.)
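For what it's worth, the 1875 figure is consistent with Keras reporting batches rather than individual records: in TensorFlow 2.x the per-epoch progress bar counts steps, and with model.fit's default batch_size of 32 the arithmetic works out as sketched below (my assumption about the cause, not a confirmed answer):

```python
train_samples = 60000   # fashion_mnist training set size
batch_size = 32         # Keras model.fit default
steps_per_epoch = train_samples // batch_size  # divides evenly here
print(steps_per_epoch)  # 1875 -- matches the count shown per epoch
```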
In Chapter 10,
optimizer=SGD(lr=1e-6, momentum=0.9) breaks the model training, making it always predict "nan".
Leaving the optimizer out (which makes Keras fall back to its default, rmsprop) fixes the issue:
model.compile(loss="mse")
Hello
I was working on the horse-or-human transfer learning section, using the datasets for training and validation from the URLs you supply. No matter what horse image I try, the model classifies it as human. I figured my local setup might have issues, so I opened and ran the Colab file for transfer learning in the Chapter 3 folder. I received the same incorrect results using a random horse picture from the web, as well as when I uploaded some of the validation horse images from the Colab project.
The link contained in the repository's Chapter 3 gives me "not found", while this is the one that works for me: https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_5340.zip
I copied your code, but there was an error like this:
java.lang.IllegalArgumentException: Input error: 0-th input should have 2 dimensions, but found 1 dimensions
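For anyone hitting the same exception: it usually means the TFLite interpreter expects a batched, 2-D input while the code is passing a 1-D array. A minimal shape sketch in Python/NumPy (the same idea applies on the Java side: pass a float[][] such as {{10.0f}} instead of a float[]; the single-feature value is just an illustration):

```python
import numpy as np

# The interpreter reported: "0-th input should have 2 dimensions, but found 1".
bad = np.array([10.0])        # shape (1,)   -> 1-D, rejected
good = np.array([[10.0]])     # shape (1, 1) -> batch of one single-feature sample
print(bad.shape, good.shape)
```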
hi!
I got stuck in Chapter 19, Deployment with TensorFlow Serving, when starting TensorFlow Serving from the Docker image.
When I run
docker run -t --rm -p 8501:8501 -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" -e MODEL_NAME=half_plus_two tensorflow/serving &
I get this error:
$ /usr/bin/tf_serving_entrypoint.sh: line 3: 6 Illegal instruction (core dumped) tensorflow_model_server --port=8500 --rest_api_port=8501 --model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME} "$@"
And the curl command doesn't work either:
curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST http://localhost:8501/v1/models/half_plus_two:predict
error:
D:\Users\al>curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST http://localhost:8501/v1/models/half_plus_two:predict curl: (3) bad range in URL position 2: [1.0, ^
I'm on Windows 7.
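For the curl error specifically: Windows' cmd.exe does not treat single quotes as quoting characters, so the JSON body gets split at the spaces and curl sees "[1.0," as part of a URL. A sketch of the usual workaround, using double quotes with the inner quotes escaped (it assumes the TensorFlow Serving container from the chapter is actually running on localhost:8501):

```shell
# cmd.exe passes single-quoted text through literally. Build the body with
# escaped double quotes instead, so it survives as one argument:
PAYLOAD="{\"instances\": [1.0, 2.0, 5.0]}"
echo "$PAYLOAD"
# With the serving container up, the request then becomes:
#   curl -d "$PAYLOAD" -X POST http://localhost:8501/v1/models/half_plus_two:predict
```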
!wget --no-check-certificate \
  https://storage.googleapis.com/laurencemoroney-blog.appspot.com/glove.twitter.27B.25d.zip \
  -O /tmp/glove.zip
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Details>No such object: laurencemoroney-blog.appspot.com/glove.twitter.27B.25d.zip</Details>
</Error>
Hi,
Before I forget, thanks again for writing these tutorials. I'm finding them extremely helpful as I prepare for the TF Developer's Exam.
In Chapter 3, the URLs for the train and validation sets in Horse_or_Human_WithAugmentation.ipynb both point to the same zip file.
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip \
-O /tmp/horse-or-human.zip
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip \
-O /tmp/validation-horse-or-human.zip
The URL for the validation set is likely this one:
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip
There's also a typo a few cells below:
validation_generator = train_datagen.flow_from_directory(
    '/tmp/vallidation-horse-or-human',  # should be '/tmp/validation-horse-or-human'
    target_size=(300, 300),
    class_mode='binary')
Cheers,
The eighth cell in sarcasm_swivel.ipynb appears to be from a different notebook, since it references two variables (training_padded and testing_padded) that have not yet been defined.
# Need this block to get it to work with TensorFlow 2.x
import numpy as np
training_sentences = np.array(training_padded)
training_labels = np.array(training_labels)
testing_padded = np.array(testing_padded)
testing_labels = np.array(testing_labels)
I was working through the Iris example in Chapter 15 and trying to replicate it on my computer. When I tried to open the HTML file in my browser, I got an error about fetching a URL beginning with "file://". Digging around, I saw that tf.data.csv uses fetch, which does not support fetching local files. I tried to download Brackets, as suggested in the book, but it appears to be reaching end of support and can no longer be downloaded. Since my usual editor is VS Code, I found that the Live Server extension can be used as an alternative: it serves your files from localhost and fixes the fetch issue. Hope this helps anyone who faces this issue in the future.
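Python's built-in static server is another way to get the files onto localhost if you don't use VS Code; normally you would just run `python -m http.server` in the chapter folder, but the sketch below serves a throwaway page so it is self-contained end to end (the page content is made up for the demo):

```python
import http.server
import os
import socketserver
import tempfile
import threading
import urllib.request

# Serve a throwaway directory; in practice, cd to the chapter folder
# and run `python -m http.server 8080` instead.
os.chdir(tempfile.mkdtemp())
with open("index.html", "w") as f:
    f.write("<html><body>iris demo</body></html>")

# Port 0 asks the OS for any free port.
httpd = socketserver.TCPServer(("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# The page is now reachable over http:// instead of file://, so fetch()
# (and therefore tf.data.csv) can load it.
page = urllib.request.urlopen(f"http://127.0.0.1:{port}/index.html").read().decode()
print(page)
httpd.shutdown()
```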
Unable to open sarcasm_glove.ipynb (Chapter 7).
VS Code reports "Unexpected string in JSON at position 5705" when the notebook is opened.