
CheXbert's Issues

Reports longer than 512 tokens?

In the paper, it is mentioned that the input is limited to 512 tokens, but that the method can easily be extended to work with longer reports. I was wondering how this would be done. Thanks.
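For reference, one possible workaround (not described in the paper; the function and variable names below are hypothetical) is to split a long report into overlapping windows of at most 512 tokens, run the labeler on each window, and merge the per-condition predictions afterwards, e.g. taking a positive label for a condition if any window predicts it:

from transformers import BertTokenizer
import torch

# Assumes the same tokenizer family as the released model (bert-base-uncased).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def chunk_report(report, max_len=512, stride=128):
    """Split a report into overlapping windows of at most max_len token ids."""
    ids = tokenizer.encode(report, add_special_tokens=False)
    body = max_len - 2                 # leave room for [CLS] and [SEP]
    step = body - stride               # consecutive windows overlap by `stride` tokens
    chunks = []
    for start in range(0, max(len(ids), 1), step):
        window = ids[start:start + body]
        chunks.append(torch.tensor(
            [tokenizer.cls_token_id] + window + [tokenizer.sep_token_id]))
        if start + body >= len(ids):
            break
    return chunks

Each chunk can then be padded and batched like a normal report, so the model itself should not need to change.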

CUDA error during inference

Thanks for this great work. I was following the setup instructions and came across this issue when trying to run inference on my data. It might be due to a mismatch between the dimensions of the input tensor and the weight matrix. Do you know how we can resolve this? Thanks.

Begin report impression labeling. The progress bar counts the # of batches completed:
The batch size is 18
  0%|                                                                                                                              | 0/1 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "label.py", line 147, in <module>
    y_pred = label(checkpoint_path, csv_path)
  File "label.py", line 96, in label
    out = model(batch, attn_mask)
  File "/home/ENTER/envs/chexbert/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ENTER/envs/chexbert/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/ENTER/envs/chexbert/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/CheXbert/src/models/bert_labeler.py", line 44, in forward
    final_hidden = self.bert(source_padded, attention_mask=attention_mask)[0]
  File "/home/ENTER/envs/chexbert/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ENTER/envs/chexbert/lib/python3.7/site-packages/transformers/modeling_bert.py", line 790, in forward
    encoder_attention_mask=encoder_extended_attention_mask,
  File "/home/ENTER/envs/chexbert/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ENTER/envs/chexbert/lib/python3.7/site-packages/transformers/modeling_bert.py", line 407, in forward
    hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask
  File "/home/ENTER/envs/chexbert/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ENTER/envs/chexbert/lib/python3.7/site-packages/transformers/modeling_bert.py", line 368, in forward
    self_attention_outputs = self.attention(hidden_states, attention_mask, head_mask)
  File "/home/ENTER/envs/chexbert/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ENTER/envs/chexbert/lib/python3.7/site-packages/transformers/modeling_bert.py", line 314, in forward
    hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask
  File "/home/ENTER/envs/chexbert/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ENTER/envs/chexbert/lib/python3.7/site-packages/transformers/modeling_bert.py", line 216, in forward
    mixed_query_layer = self.query(hidden_states)
  File "/home/ENTER/envs/chexbert/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ENTER/envs/chexbert/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 87, in forward
    return F.linear(input, self.weight, self.bias)
  File "/home/ENTER/envs/chexbert/lib/python3.7/site-packages/torch/nn/functional.py", line 1372, in linear
    output = input.matmul(weight.t())
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
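In case it helps anyone hitting the same error, here is a debugging sketch (not an official fix; it assumes model, batch, and attn_mask are the objects from label.py): re-run the failing batch on CPU, where the opaque cuBLAS error is usually replaced by a readable one such as an out-of-range token id, and sanity-check the token ids against the embedding table.

import torch

# Unwrap DataParallel if necessary, then move the model and batch to CPU.
m = model.module if isinstance(model, torch.nn.DataParallel) else model
m = m.cpu()
with torch.no_grad():
    out = m(batch.cpu(), attn_mask.cpu())   # a clearer error should surface here

# Every token id must be smaller than the embedding table, and bert-base
# position embeddings only cover 512 tokens.
vocab_size = m.bert.embeddings.word_embeddings.num_embeddings
print(int(batch.max()), vocab_size, tuple(batch.shape))
assert int(batch.max()) < vocab_size
assert batch.shape[1] <= 512

Launching label.py with the environment variable CUDA_LAUNCH_BLOCKING=1 set also tends to make the stack trace point at the kernel that actually failed.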

comparing an NLP model to CheXbert

Hi,

Suppose I have my own NLP labeling function and I want to compare my model's performance to CheXbert's. Is there a way to get access to the CheXpert reports dataset?
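For what it's worth, once both sets of labels exist as CSVs, a rough comparison along the lines of the paper's per-condition F1 can be sketched as below (the file names, column names, and integer label encoding are assumptions, not something this repo provides):

import pandas as pd
from sklearn.metrics import f1_score

# Both CSVs are assumed to have one column per CheXpert condition, with the
# blank/positive/negative/uncertain classes already encoded as integers.
truth = pd.read_csv("ground_truth_labels.csv")    # hypothetical file
preds = pd.read_csv("my_labeler_output.csv")      # hypothetical file

for condition in ["Cardiomegaly", "Edema", "Consolidation", "Atelectasis", "Pleural Effusion"]:
    score = f1_score(truth[condition], preds[condition], average="weighted")
    print(f"{condition}: weighted F1 = {score:.3f}")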

All The Best,

building the conda environment

Hello,
I'm trying to build the environment from the environment.yml you have in the repo, but I'm getting the following:

Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound: 
  - pyzmq==18.1.1=py37he6710b0_0
  - libgcc-ng==9.1.0=hdf63c60_0
  - glib==2.63.1=h5a9c865_0
  - gst-plugins-base==1.14.0=hbbd80ab_1
  ...

I think this happens because ResolvePackageNotFound is thrown when the exact package builds listed in the file aren't available for the target platform, similar to this post:

https://stackoverflow.com/questions/49154899/resolvepackagenotfound-create-env-using-conda-and-yml-file-on-macos

I'm thinking you might need to move all dependencies to the pip section, but I'm not positive. I'm attempting to set up on macOS.
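One other workaround that is sometimes suggested for this kind of error (not an official fix, and it assumes the failures come only from the Linux-specific build strings) is to strip the build suffix from each conda dependency before creating the environment, for example with a small script like this:

import re

# Turns lines like "  - pyzmq==18.1.1=py37he6710b0_0" into "  - pyzmq==18.1.1",
# leaving everything else (name, channels, pip section) untouched.
build_line = re.compile(r"^(\s*- \S+==\S+)=\S+\s*$")

with open("environment.yml") as src, open("environment_nobuild.yml", "w") as dst:
    for line in src:
        m = build_line.match(line)
        dst.write((m.group(1) + "\n") if m else line)

Creating the environment from environment_nobuild.yml then lets conda pick macOS builds, although Linux-only packages such as libgcc-ng may still need to be removed by hand.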

Thanks,
Ryan
