
Comments (5)

glample commented on July 17, 2024

How long are your sequences? The position embeddings do not seem to contain enough positions. By default there are 512 positions: https://github.com/facebookresearch/XLM/blob/master/src/model/transformer.py#L17

Try replacing this value with 1024 if you are using longer sequences; that should fix the issue.
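For reference, the constant at the linked line is N_MAX_POSITIONS (at least in recent checkouts; double-check yours), so the fix suggested above is a one-line edit in src/model/transformer.py:

N_MAX_POSITIONS = 1024  # was 512; must cover the longest (post-BPE) sequence fed to the model

Note that raising this only helps if the model is then trained or fine-tuned with the larger value: a checkpoint trained with 512 positions only has a 512-row position embedding matrix.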


rbawden commented on July 17, 2024

Yes, thank you! There was a long sentence in the next batch, so I've filtered out sentences that are too long now. It's all running fine again!
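For anyone else hitting this, a minimal pre-filtering sketch along those lines (the names here are illustrative, not XLM code; measure length after BPE, since that is what the model actually sees, and leave room for the sentence-boundary tokens XLM adds):

MAX_POSITIONS = 512  # or 1024 if you raised N_MAX_POSITIONS

def filter_long_sentences(sentences, tokenize, max_positions=MAX_POSITIONS):
    # keep only sentences whose token count, plus 2 boundary tokens, fits the position table
    return [s for s in sentences if len(tokenize(s)) + 2 <= max_positions]

# usage sketch, on lines that are already BPE-split:
# short_enough = filter_long_sentences(bpe_lines, tokenize=str.split)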


bhardwaj1230 commented on July 17, 2024

> How long are your sequences? The position embeddings do not seem to contain enough positions. By default there are 512 positions: https://github.com/facebookresearch/XLM/blob/master/src/model/transformer.py#L17
>
> Try replacing this value with 1024 if you are using longer sequences; that should fix the issue.

Thank you for this, I had been struggling with this issue for a long time.


glample commented on July 17, 2024

Do you have the full traceback?

This looks like some out-of-bounds error, i.e. there is a word index that is higher than the number of embeddings, or something similar. Can you try adding CUDA_LAUNCH_BLOCKING=1 in front of your command to get a more detailed error message? If you can run the model on CPU, you would probably get a more explicit error message as well.
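Along those lines, a quick sanity check you can run before the forward pass (a sketch; encoder, batch and lengths are the objects from translate.py, and the attribute names follow src/model/transformer.py, so double-check them against your version):

n_words = encoder.embeddings.num_embeddings                 # vocabulary size
n_positions = encoder.position_embeddings.num_embeddings    # 512 by default
assert batch.max().item() < n_words, "word index out of range of the embedding table"
assert lengths.max().item() <= n_positions, "a sequence is longer than the position table"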


rbawden commented on July 17, 2024

Yes, here's the traceback (from running with CUDA_LAUNCH_BLOCKING=1):

Traceback (most recent call last):
  File "/home/username/tools/XLM/translate.py", line 150, in <module>
    main(params)
  File "/home/username/tools/XLM/translate.py", line 115, in main
    encoded = encoder('fwd', x=batch.cuda(), lengths=lengths.cuda(), langs=langs.cuda(), causal=False)
  File "/home/username/local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/username/tools/XLM/src/model/transformer.py", line 311, in forward
    return self.fwd(**kwargs)
  File "/home/username/tools/XLM/src/model/transformer.py", line 369, in fwd
    tensor = tensor + self.position_embeddings(positions).expand_as(tensor)
RuntimeError: CUDA error: device-side assert triggered

Could this be due to the way unknown words are handled?

Thank you!
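For context, the failing line is the position-embedding lookup, and on CPU the same out-of-range lookup raises an explicit error instead of the opaque device-side assert. A minimal sketch reproducing the symptom (not XLM code):

import torch
import torch.nn as nn

pos_emb = nn.Embedding(512, 16)                # 512 positions, like the default N_MAX_POSITIONS
positions = torch.arange(600).unsqueeze(1)     # positions for a length-600 "sequence"
try:
    pos_emb(positions)                         # indices 512..599 are out of range
except (IndexError, RuntimeError) as err:
    print(err)                                 # on CPU: an explicit index-out-of-range message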

