
easy-bert's People

Contributors

dependabot[bot], robrua


easy-bert's Issues

Results difficult to explain

Dear Rob,
I don't know whether this is a bug, but I am getting strange results, as follows: when I compare the embeddings of two words, the average absolute difference (over the 768 values) is lower for an unrelated word than for a synonym.

I would have expected a lower difference for "rich" and a greater one for "poor". Where am I going wrong?
Thank you.

Example 1:

String 1: wealthy
String 2: poor
Embedding 1	Embedding 2	100 * absolute difference
0.21383394	0.23239951	2.0
-0.0073103756	-0.057594057	5.0
0.09099525	0.11997495	3.0
...
Average absolute difference (×100): 8

Example 2:

String 1: wealthy
String 2: blue
Embedding 1	Embedding 2	100 * absolute difference
0.21383394	0.29995522	9.0
-0.0073103756	-0.19767939	19.0
...
Average absolute difference (×100): 16

Example 3:

String 1: wealthy
String 2: rich
Embedding 1	Embedding 2	100 * absolute difference
0.21383394	0.14642045	7.0
-0.0073103756	-0.108990476	10.0
0.09099525	0.25123212	16.0
0.069340415	-0.12602457	20.0
...
Average absolute difference (×100): 11

Example 4:

String 1: wealthy
String 2: black
Embedding 1	Embedding 2	100 * absolute difference
0.21383394	0.22277042	1.0
-0.0073103756	-0.25720397	25.0
0.09099525	0.16640717	8.0
...
Average absolute difference (×100): 11
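For what it's worth, absolute differences between embedding values are scale-sensitive and hard to interpret; similarity between embeddings is usually measured with cosine similarity, and single words passed without context give noisy results in a contextual model like BERT, so synonyms are not guaranteed to come out closest. A rough sketch of the usual comparison, assuming the Python API (easybert.bert.Bert, as used elsewhere in these issues) and numpy:

    import numpy as np
    from easybert.bert import Bert

    def cosine_similarity(a, b):
        # Cosine similarity is scale-invariant, unlike a mean absolute difference
        a, b = np.asarray(a), np.asarray(b)
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    bert = Bert("https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1")
    with bert:
        wealthy = bert.embed("wealthy")
        rich = bert.embed("rich")
        poor = bert.embed("poor")

    print(cosine_similarity(wealthy, rich))  # synonyms: expect the higher value
    print(cosine_similarity(wealthy, poor))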

Bump for tensorflow version 2.7.1

This repository is wonderful, though unmaintained. I forked it, deleted the Docker and Python parts (shamelessly :( ), and converted it to Kotlin and Gradle. I can't open a merge request because of the extensive changes, but you can find it here.

Changing Max Sequence Length

If I wanted to change the max sequence length, particularly to reduce it from 128 to speed up token embedding, how would I go about doing this? I assume I'd pull bert.py and edit it to my needs. Could the max sequence length be exposed to users in a future version?
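Until it's exposed, editing bert.py is likely the way to go. A hypothetical sketch of the kind of change involved, based on the placeholder lines quoted in the TF 2.x issue below (the input placeholders are shaped by a max_sequence_length value, so shrinking it before the graph is built reduces the padding each sequence is embedded with):

    import tensorflow as tf

    # Hypothetical edit inside bert.py: a smaller max_sequence_length means
    # shorter padded inputs and less work per embedded sequence.
    max_sequence_length = 64  # reduced from the default 128

    input_ids = tf.compat.v1.placeholder(
        name="input_ids", shape=(None, max_sequence_length), dtype=tf.int32)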

No module named 'tensorflow.contrib' (Support for TF > 2.0)

Hi,

I have installed version 1.0.3 of easybert using pip.
$ pip3 install easybert

This downloads an old copy of the script, which is not compatible with the current Java version, because the older Python version doesn't produce the model.json that the Java side requires.

To resolve this problem, I updated bert.py from the source code.

When I execute the CLI command:

$ bert download

I get the following errors:

bert/modeling.py", line 29, in

from tensorflow.contrib import layers as contrib_layers
ModuleNotFoundError: No module named 'tensorflow.contrib'

I have tensorflow version 2.7.0.

The fix is relatively simple; I'll leave it here for others to follow until TF 2.x compatibility can be added upstream.

  • Edit the file .local/lib/python3.8/site-packages/bert/modeling.py
  • Change line 29 to
    from tensorflow_addons import layers as contrib_layers
  • Install tensorflow_addons with
    pip install tensorflow-addons
  • Run the command again.

Then I had to fix the problem with tf.Session. I did this by editing the following lines in bert.py (original line first, replacement second):

76: with tf.Session(graph=self._graph) as session:

76: with tf.compat.v1.Session(graph=self._graph) as session:

100: self._session = tf.Session(graph=self._graph)

100: self._session = tf.compat.v1.Session(graph=self._graph)

137: with tf.Session(graph=self._graph) as session:

137: with tf.compat.v1.Session(graph=self._graph) as session:

173: with tf.Session(graph=self._graph) as session:

173: with tf.compat.v1.Session(graph=self._graph) as session:

205: with tf.Session(graph=bert._graph) as session:

205: with tf.compat.v1.Session(graph=bert._graph) as session:

212: with tf.Session() as session:

212: with tf.compat.v1.Session() as session:

Now, the tf.placeholder issue:

86: self._input_ids = tf.placeholder(name="input_ids", shape=(None, max_sequence_length), dtype=tf.int32)

86: self._input_ids = tf.compat.v1.placeholder(name="input_ids", shape=(None, max_sequence_length), dtype=tf.int32)

87: self._input_mask = tf.placeholder(name="input_mask", shape=(None, max_sequence_length), dtype=tf.int32)

87: self._input_mask = tf.compat.v1.placeholder(name="input_mask", shape=(None, max_sequence_length), dtype=tf.int32)

88: self._segment_ids = tf.placeholder(name="segment_ids", shape=(None, max_sequence_length), dtype=tf.int32)

88: self._segment_ids = tf.compat.v1.placeholder(name="segment_ids", shape=(None, max_sequence_length), dtype=tf.int32)

Next, on to global_variables_initializer():

102: self._session.run(tf.global_variables_initializer())

102: self._session.run(tf.compat.v1.global_variables_initializer())

138: session.run(tf.global_variables_initializer())

138: session.run(tf.compat.v1.global_variables_initializer())

174: session.run(tf.global_variables_initializer())

174: session.run(tf.compat.v1.global_variables_initializer())

And saving:

167: tf.saved_model.simple_save(self._session, str(path), inputs={

167: tf.compat.v1.saved_model.simple_save(self._session, str(path), inputs={

176: tf.saved_model.simple_save(session, str(path), inputs={

176: tf.compat.v1.saved_model.simple_save(session, str(path), inputs={

206: bundle = tf.saved_model.load(session, ["serve"], str(path))

206: bundle = tf.compat.v1.saved_model.load(session, ["serve"], str(path))

I will edit this post with any other changes I make to comply with TF ≥ 2.0. Hopefully this will be useful for someone else.
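As an aside, a possibly simpler route that I have not verified against this codebase: TF 2.x ships a v1 compatibility module, so importing it under the tf name at the top of bert.py may cover most of the Session/placeholder/saver edits above in one go. It does not bring back tensorflow.contrib, so the tensorflow_addons change is still needed:

    # Sketch: replace "import tensorflow as tf" at the top of bert.py with:
    import tensorflow.compat.v1 as tf

    tf.disable_v2_behavior()  # restores TF1 graph-and-session semantics under TF 2.x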

Failed to locate the model in Java

I tried to use it via Maven, including a BERT model in the jar, but it failed to find the model:
Exception in thread "main" java.lang.IllegalArgumentException: resource /com/robrua/nlp/easy-bert/bert-chinese-L-12-H-768-A-12 not found.

Can you provide a working example? Thanks.

Hi Rob,

I am trying to run your code using one of the Maven models, but when I try to create the embeddings I get the following error:

Exception in thread "main" java.lang.NoSuchMethodError: org.tensorflow.Tensor.create([JLjava/nio/IntBuffer;)Lorg/tensorflow/Tensor;
at com.robrua.nlp.bert.Bert$Inputs.<init>(Bert.java:98)
at com.robrua.nlp.bert.Bert.getInputs(Bert.java:431)
at com.robrua.nlp.bert.Bert.embedTokens(Bert.java:353)

The following is the code that causes the error:
try(Bert bert = Bert.load("com/robrua/nlp/easy-bert/bert-uncased-L-12-H-768-A-12")) {
float[][] embedding = bert.embedTokens("Hello World.");
}

What should I do to get these embeddings?
Thanks in advance.

Java API model.json not found

I downloaded and extracted a model, but there is no model.json inside the folder.

Exception in thread "main" java.lang.RuntimeException: java.io.FileNotFoundException: F:\Desktop\aa\bert\bert-multi-cased\assets\model.json (The system cannot find the file specified)
	at com.robrua.nlp.bert.Bert.load(Bert.java:149)
	at com.robrua.nlp.bert.Bert.load(Bert.java:132)
	at BertTest.main(BertTest.java:8)
Caused by: java.io.FileNotFoundException: F:\Desktop\aa\bert\bert-multi-cased\assets\model.json (The system cannot find the file specified)
	at java.base/java.io.FileInputStream.open0(Native Method)
	at java.base/java.io.FileInputStream.open(FileInputStream.java:219)
	at java.base/java.io.FileInputStream.<init>(FileInputStream.java:157)
	at com.fasterxml.jackson.core.JsonFactory.createParser(JsonFactory.java:766)
	at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2902)
	at com.robrua.nlp.bert.Bert.load(Bert.java:147)
	... 2 more
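For anyone hitting this: model.json is written by easy-bert itself when it saves a model, so an archive downloaded directly from Google will not contain it. A sketch of one way to produce a Java-loadable directory, using the Python API's bert.save (mentioned in the GPU performance issue below; the TF Hub URL is just an example):

    from easybert.bert import Bert

    # Fetch a model from TF Hub and re-save it in easy-bert's on-disk format,
    # which includes the model.json that the Java API looks for.
    bert = Bert("https://tfhub.dev/google/bert_multi_cased_L-12_H-768_A-12/1")
    bert.save("/path/to/model")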

Allow dynamic allocation of GPU memory

Hi again,

I thought it might be worth a separate ticket: when running on GPU, all available memory is allocated, but the TensorFlow BERT model may not actually need it. This should be simple enough to configure. In the Java API, for example, the following code did the trick for me (replacing this line):

        ConfigProto configProto = ConfigProto.newBuilder()
                .setAllowSoftPlacement(true)
                .setGpuOptions(GPUOptions.newBuilder()
                                .setAllowGrowth(true)
                                .build())
                .build();
        SavedModelBundle bundle = SavedModelBundle.loader(path.toString())
                .withTags("serve")
                .withConfigProto(configProto.toByteArray())
                .load();

        return new Bert(bundle, model, path.resolve("assets").resolve(VOCAB_FILE));

Similarly, in the Python API it should be possible to start the TF session with an appropriately configured ConfigProto.
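For example, something along these lines; a sketch of the TF1-style session config, not wired into bert.py:

    import tensorflow as tf

    # Grow GPU memory on demand instead of claiming it all up front.
    config = tf.compat.v1.ConfigProto()
    config.gpu_options.allow_growth = True

    # bert.py would also pass its graph, i.e. Session(graph=..., config=config)
    session = tf.compat.v1.Session(config=config)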

Thanks

Performance of the Java API with GPU support

Hi,

Thanks for creating easy-bert! I'm experimenting with it on a GPU-equipped AWS machine (p2.xlarge with a pre-installed Deep Learning AMI), and I've noticed some performance differences between the Python and Java versions. I just wanted to let you know; maybe you can advise on what I'm doing wrong.

This is my sample code in Python:

sequences = ['First do it', 'then do it right', 'then do it better'] * 20
bert = Bert("https://tfhub.dev/google/bert_multi_cased_L-12_H-768_A-12/1")
with bert:
    for i in range(50):
        _ = bert.embed(sequences, per_token=True)

Assuming the model has already been downloaded from TF Hub and cached, the script claims all available GPU memory when the Bert object is initialized, and GPU utilization then quickly goes up to 100% while the sequences are processed. All in all, the script completes in ~30 seconds.

After saving the model with bert.save (let's say the path is /path/to/model), I load it inside a Java app; here is the relevant code snippet:

String[] sequencesOriginal = {"First do it", "then do it right", "then do it better"};
String[] sequences = new String[60];
for (int i = 0; i < sequences.length; i += sequencesOriginal.length) {
    System.arraycopy(sequencesOriginal, 0, sequences, i, 3);
}

String pathToModel = "/path/to/model";
try (Bert bert = Bert.load(Paths.get(pathToModel))) {
    for (int i = 0; i < 50; i++) {
        float[][][] output = bert.embedTokens(sequences);
    }
}

Like the Python version, this takes all of the GPU memory, but actual GPU utilization stays very low, with no more than occasional spikes. Most of the computation is done on the CPU cores, and the app takes forever to complete. After I rebuilt easy-bert locally, adding a ConfigProto with .setLogDevicePlacement(true) in Bert.load, the log indicated that most nodes in the computation graph are indeed placed on the CPU.

Do you have any idea why this could happen? Is it a TensorFlow issue, is something wrong with my installation, or could we tweak Bert.load in such a way that the GPU is utilized properly?

I would greatly appreciate any help. Thanks in advance!
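One guess, not something I have confirmed: easy-bert's default org.tensorflow dependency bundles the CPU-only native library (libtensorflow_jni), and the 1.x Java bindings only run on the GPU when the libtensorflow_jni_gpu artifact is on the classpath, e.g.:

    <dependency>
        <groupId>org.tensorflow</groupId>
        <artifactId>libtensorflow_jni_gpu</artifactId>
        <!-- match the version of the org.tensorflow artifacts already in use -->
        <version>1.15.0</version>
    </dependency>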

Could not find meta graph def matching supplied tags: { serve }

I added the following to the Bert class of the easy-bert-master project:

    public static void main(String[] args) {
        try (Bert bert = Bert.load(new File("G:\\Repositories My\\BERT-Models\\bert-multi-cased-L-12-H-768-A-12"))) {
            // Embed some sequences
        }
    }

I followed the instructions for loading the model by adding this to the pom file:

        <dependency>
            <groupId>com.robrua.nlp.models</groupId>
            <artifactId>easy-bert-multi-cased-L-12-H-768-A-12</artifactId>
            <version>1.0.0</version>
        </dependency>

and it seems to run fine (including reading the model.json file) until this point:

2020-03-16 00:28:13.869547: I tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: G:\Repositories My\BERT-Models\bert-multi-cased-L-12-H-768-A-12
2020-03-16 00:28:13.889888: I tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2020-03-16 00:28:13.900368: I tensorflow/cc/saved_model/loader.cc:285] SavedModel load for tags { serve }; Status: fail. Took 30795 microseconds.
Exception in thread "main" org.tensorflow.TensorFlowException: Could not find meta graph def matching supplied tags: { serve }. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`
	at org.tensorflow.SavedModelBundle.load(Native Method)

I thought Maven would have managed all the dependencies.
This is on Windows 10.
Thanks for any support

"logits must be 2-dimensional" error on TF 1.9

I am getting the following exception on TF 1.9 when I load a saved BERT model in Java. I saved the model in Python using easy-bert and loaded it in Java, again with easy-bert (https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1).

The model loads fine, but when I try to extract embeddings it throws the following. I have to stay on TF 1.9.

2020-04-16 18:00:31.003625: I tensorflow/cc/saved_model/loader.cc:291] SavedModel load for tags { serve }; Status: success. Took 1197645 microseconds.
Exception in thread "main" java.lang.IllegalArgumentException: logits must be 2-dimensional
[[Node: module_apply_tokens/bert/encoder/layer_0/attention/self/Softmax = SoftmaxT=DT_FLOAT, _output_shapes=[[?,12,?,?]], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
at org.tensorflow.Session.run(Native Method)
at org.tensorflow.Session.access$100(Session.java:48)
at org.tensorflow.Session$Runner.runHelper(Session.java:298)
at org.tensorflow.Session$Runner.run(Session.java:248)
at com.robrua.nlp.bert.Bert.embedSequence(Bert.java:252)
at .bert.TestEasyBert.main(TestEasyBert.java:17)

the model file

Could you show me the details of the Java version of the model? The model I downloaded from Google has no assets directory.

return length of the Java embedTokens() method

Hi,

first of all, well done for this great work and thanks for making it publicly available!

I have the following problem: I am using the Java version and I want to match each token to its embedding. I am loading the English uncased model and getting the embeddings of two strings (str1, str2) with
float[][][] embeddings = bert.embedTokens(str1, str2);
After that, I can get the embedding corresponding to each sequence/string by

float[][] firstSent = embeddings[0];
float[][] secondSent = embeddings[1];

However, firstSent and secondSent always have a standard length of 127, not the lengths of my strings str1 and str2. firstSent[0] then has a length of 768, which is the expected embedding size, but I don't understand why firstSent and secondSent have length 127. Given that length, I suspect firstSent[0] does NOT correspond to the first token of my first sentence, which is what I would like to get.

Any help is much appreciated! Thanks a lot!
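For what it's worth, this looks like padding rather than a bug: the per-token output appears to be padded to the model's maximum sequence length (128 by default, so 127 may be that minus a special token) instead of being trimmed to each sentence, and the rows correspond to WordPiece tokens, not whitespace-separated words. A sketch of the alignment idea, assuming the bert-tensorflow tokenizer that easy-bert builds on and a vocab path you would fill in:

    from bert import tokenization  # from the bert-tensorflow package
    from easybert.bert import Bert

    text = "My first sentence"

    # Assumption: vocab.txt is the WordPiece vocabulary shipped with the model.
    tokenizer = tokenization.FullTokenizer(vocab_file="/path/to/vocab.txt",
                                           do_lower_case=True)
    tokens = tokenizer.tokenize(text)

    bert = Bert("https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1")
    with bert:
        per_token = bert.embed(text, per_token=True)

    # If row 0 is the [CLS] marker, rows 1..len(tokens) line up with the tokens.
    aligned = list(zip(tokens, per_token[1:1 + len(tokens)]))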

Can I use it in my Java project?

I loaded the Maven project in Eclipse and used Bert.load() to load the BERT model, but it says "File not found".

Do I misunderstand the purpose of this project? Can BERT run in Eclipse? Our project is written entirely in Java, so I'd like to know. Thanks very much.
