
Comments (7)

amitness commented on May 31, 2024

@PetroIvaniuk Sadly it's not clear on their end. As far as I understand, you can't retrain it with your own vocab.

You can read more in this thread

from blog-comments.

setu4993 commented on May 31, 2024

The multilingual USE model may also be of interest to you: https://arxiv.org/pdf/1907.04307.pdf

It has neat improvements over the earlier USE model, especially for QA tasks. Plus, semantic similarity across various languages.


amitness commented on May 31, 2024

@setu4993 Thank you for recommending that paper. Looks very interesting.


PetroIvaniuk commented on May 31, 2024

Appreciate the post.
But I was wondering: is it possible to add my own text corpus and re-train the Universal Sentence Encoder so that the resulting vectors account for the features of my data?


DarianHarrison commented on May 31, 2024

Thank you Amit, I really appreciate you writing this. It's very informative.


bsathish11 commented on May 31, 2024

Hi Amit, for one of my projects, 'Document Search', I have used this multilingual USE, but the model is over-fitted to the most frequently occurring words in the document. For example, if I ask a question like "Why do we need ABC?", it returns the expected response. But when I ask "Need of ABC", I expect the same response; instead it returns other responses about ABC that happen to have the highest cosine similarity score. I want it to rank based on semantics and context. Any help would be appreciated.

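The retrieval step described above is essentially nearest-neighbour search over sentence embeddings using cosine similarity: a small paraphrase shifts the query vector, which can flip the ranking. A minimal numpy sketch of just the ranking step (the vectors here are made-up placeholders, not real USE outputs):

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: dot product of the vectors divided by
    # the product of their lengths.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder embeddings standing in for encoder outputs; in practice
# these would come from the (multilingual) USE model.
responses = {
    "why_abc": np.array([0.9, 0.1, 0.0]),
    "other_abc": np.array([0.6, 0.6, 0.1]),
}
query = np.array([0.8, 0.3, 0.0])

# Pick the stored response whose embedding is most similar to the query.
best = max(responses, key=lambda k: cosine(query, responses[k]))
print(best)  # -> why_abc
```

A slightly different query vector could make "other_abc" win instead, which is the behaviour being described: the ranking is only as good as the encoder's notion of similarity, so fine-tuning the encoder on in-domain paraphrase pairs is the usual remedy.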

PratyushGoel commented on May 31, 2024

Hi Amit, thanks for such an informative article. You mentioned that the DAN variant of the Universal Sentence Encoder feeds in the average of the word and bigram token embeddings. Can you shed some light on how these token embeddings are generated for each unigram or bigram? It would be helpful if you could point to the respective paper.

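As far as I understand, the word and bigram embeddings in the DAN variant are learned jointly with the rest of the network during training, not produced by a separate pre-trained model. A toy numpy sketch of the averaging step the question refers to (the embedding table here is random placeholder data, standing in for learned parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
vocab = {}

def embed(token):
    # Look up (or lazily create) a placeholder embedding for a token.
    # In the real model this table is a trained parameter matrix.
    if token not in vocab:
        vocab[token] = rng.normal(size=dim)
    return vocab[token]

def dan_input_embedding(sentence):
    words = sentence.lower().split()
    bigrams = [" ".join(pair) for pair in zip(words, words[1:])]
    # DAN input layer: average the word and bigram embeddings together;
    # the result is then fed through a feed-forward network.
    vectors = [embed(t) for t in words + bigrams]
    return np.mean(vectors, axis=0)

v = dan_input_embedding("universal sentence encoder")
print(v.shape)  # -> (4,)
```

The deep averaging network itself comes from Iyyer et al.'s "Deep Unordered Composition" paper, which the USE paper cites for this architecture.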
