Comments (3)
I'm glad you were able to get Haste up and running.
I recommend using the latest code from `master`. PyTorch 1.5 uses a new build tool (ninja) that changes include paths, and it added new deprecation warnings like the ones you're seeing. Both of these are fixed on `master`, so you'll have an easier time if you use the latest code.
We may take on bidirectional RNNs in PyTorch depending on interest. That said, you could use this code to implement it yourself. It would look roughly like:
```python
import torch
import haste_pytorch as haste

x = torch.Tensor(...)          # batch of padded sequences
x_lengths = torch.Tensor(...)  # unpadded length of each sequence in the batch

rnn_fwd = haste.LayerNormLSTM(...)
rnn_bwd = haste.LayerNormLSTM(...)

y_fwd, _ = rnn_fwd(x)
y_bwd, _ = rnn_bwd(reverse_padded_sequence(x, x_lengths))
y_bwd = reverse_padded_sequence(y_bwd, x_lengths)

y = torch.cat([y_fwd, y_bwd], dim=-1)
```
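Note that `reverse_padded_sequence` isn't something haste ships; it's a small helper you'd write yourself. A minimal sketch, assuming time-major `[T, N, C]` input and an int64 `lengths` tensor on the same device as `x`:

```python
import torch

def reverse_padded_sequence(x, lengths):
    """Reverse each sequence in a padded batch along the time axis.

    x: [T, N, C] time-major padded batch.
    lengths: [N] int64 tensor of valid lengths, on the same device as x.
    Padding positions (t >= lengths[n]) are left in place.
    """
    T = x.size(0)
    t = torch.arange(T, device=x.device).unsqueeze(1)    # [T, 1]
    rev = lengths.unsqueeze(0) - 1 - t                   # [T, N], valid where t < length
    idx = torch.where(t < lengths.unsqueeze(0), rev, t)  # identity on padding
    return x.gather(0, idx.unsqueeze(-1).expand(-1, -1, x.size(2)))

# Quick check with one sequence of length 3 padded to 4:
x = torch.arange(4).float().view(4, 1, 1)
print(reverse_padded_sequence(x, torch.tensor([3])).flatten())  # [2., 1., 0., 3.]
```

Reversing only the valid region matters here: a plain `torch.flip` would move the padding to the front of each sequence and corrupt the backward RNN's inputs.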
The `bdist_wheel` issue is probably just a missing `wheel` package. Can you run `pip install wheel` and then try building with the standard instructions (`make haste_pytorch`)?
I wasn't able to repro the dynamic linking issue with a fresh environment, but I know it can sometimes be an issue if you don't `import torch` before you `import haste_pytorch`. This was fixed in a later commit on the `master` branch. Can you try the same code snippet as above but with an `import torch` first?
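That is, with the imports ordered like this:

```python
import torch                   # load torch's shared libraries into the process first
import haste_pytorch as haste  # the haste extension resolves against them
```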
Installing `wheel` did work.

However, I forgot to mention in my initial bug report that I still needed to modify `setup.py` to include the full path to `lib` for it to compile, since it could not find `haste.h`. I can see the `-Ilib` flag, which is odd considering the directory exists both under `/tmp` and in the source directory where `make` is run from. Even after manually editing `setup.py` (sketched after the error output below), I still get a few thousand lines of errors when running setup with `bdist_wheel`, starting with:
```
/tmp/tmp.gThspIl2RQ/pytorch/indrnn.cc: In function ‘at::Tensor {anonymous}::indrnn_forward(bool, float, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor)’:
/tmp/tmp.gThspIl2RQ/pytorch/support.h:18:42: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   18 | #define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
      |                                     ^
.../testing/lib64/python3.7/site-packages/torch/include/c10/macros/Macros.h:141:65: note: in definition of macro ‘C10_UNLIKELY’
  141 | #define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
      |                                                                 ^~~~
.../testing/lib64/python3.7/site-packages/torch/include/c10/util/Exception.h:262:7: note: in expansion of macro ‘C10_UNLIKELY_OR_CONST’
  262 |   if (C10_UNLIKELY_OR_CONST(!(cond))) { \
      |       ^~~~~~~~~~~~~~~~~~~~~
```
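For reference, the `setup.py` edit I made was along these lines (an illustrative sketch, not the exact diff; `extra_compile_args` is just where such a flag typically lives, not necessarily haste's exact variable):

```python
import os

# Before: a relative include path, which the compiler can't resolve
# when the build is staged under /tmp
extra_compile_args = ['-Ilib']

# After: anchor the include path to the directory containing setup.py
base_dir = os.path.dirname(os.path.abspath(__file__))
extra_compile_args = ['-I' + os.path.join(base_dir, 'lib')]
```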
Good news though: just running `setup.py install` and importing `torch` before `haste_pytorch` does the trick. I manually fed some random data through a `LayerNormLSTM` and it worked fine.
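A minimal version of that smoke test, roughly (the sizes are arbitrary, and I'm assuming a CUDA device is available):

```python
import torch                   # import torch before haste_pytorch
import haste_pytorch as haste

rnn = haste.LayerNormLSTM(input_size=128, hidden_size=256).cuda()
x = torch.rand(50, 32, 128, device='cuda')  # [T, N, C] random input
y, state = rnn(x)
print(y.shape)  # expect torch.Size([50, 32, 256])
```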
Any plans on including bidirectional functionality in the PyTorch branch?