Comments (9)
The app you linked is using PyTorch. We don't have a DirectML backend for PyTorch at the moment, but this is definitely something we could be interested in supporting in the future if there is a demand from the community.
from directml.
@SomeAB I'm not super familiar with the app you refer to, but as for ONNX runtime, we work closely with them. If your question is how to get a version of ONNX runtime that works with DirectML, the easiest way currently is to install this nuget package on your Windows PC. This package comes with a version of DirectML that works well with it, but would work on Windows only. Good luck with your experiment. Let us know how it goes.
There are preview builds of PyTorch-DirectML now: https://devblogs.microsoft.com/windowsai/introducing-pytorch-directml-train-your-machine-learning-models-on-any-gpu/
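The preview builds linked above are installed as a separate `torch-directml` package. A minimal sketch of picking a device with it, falling back to CPU when the package is absent (the `torch_directml.device()` call is the package's documented entry point; the fallback logic is this sketch's own):

```python
def select_device():
    """Return a torch_directml device if the preview package is
    installed, otherwise the plain CPU device string."""
    try:
        import torch_directml  # preview package: pip install torch-directml
        return torch_directml.device()
    except ImportError:
        return "cpu"

# Tensors and models are then moved with .to(select_device()),
# exactly as with any other PyTorch device.
print(select_device())
```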
Hi Patrice. That's good to hear, but I briefly read at one point that I can achieve this using onnxruntime. ONNX Runtime does support DirectML, but a person needs to build it manually. Any idea how? Also, if there were an easy pip3 command for ONNX-DirectML, that would be great.
Also, are there any video/walkthrough tutorials for DirectML yet? Perhaps some showing the samples listed in the DirectML docs section on Microsoft's website?
I'm not very familiar with this app, but as far as I can tell, it exclusively uses PyTorch. It's true that you can convert PyTorch models to ONNX models and use onnxruntime to run them, but you would need to swap the PyTorch calls that the app uses for onnxruntime calls.
To use DirectML with onnxruntime, you will need to follow the Building from source instructions.
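For later readers: building from source is no longer the only route, since prebuilt `onnxruntime-directml` wheels are published on PyPI for Windows (`pip install onnxruntime-directml`). A minimal sketch of preferring the DirectML execution provider when it is present; the provider list is injected here so the sketch runs anywhere, and the session creation is shown only in comments because it needs a real model file:

```python
def pick_provider(available):
    """Prefer DirectML when present, otherwise fall back to CPU."""
    for p in ("DmlExecutionProvider", "CPUExecutionProvider"):
        if p in available:
            return p
    raise RuntimeError("no usable execution provider")

# With onnxruntime-directml installed, a session would be created like:
#   import onnxruntime as ort
#   provider = pick_provider(ort.get_available_providers())
#   sess = ort.InferenceSession("model.onnx", providers=[provider])

print(pick_provider(["CPUExecutionProvider"]))  # CPUExecutionProvider
```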
@SomeAB I'm not super familiar with the app you refer to, but as for ONNX runtime, we work closely with them. If your question is how to get a version of ONNX runtime that works with DirectML, the easiest way currently is to install this nuget package on your Windows PC. This package comes with a version of DirectML that works well with it, but would work on Windows only. Good luck with your experiment. Let us know how it goes.
That's only useful when you have a VS solution; it doesn't work with PyTorch.
I read about DirectML today and I must say I'm much impressed by Microsoft for this one. It might make me boot into Windows more often if this works better than TF on CUDA in Linux. All AMD APU owners will probably do so, because of the non-existent support AMD provides for its APUs (if this works on APUs). Eh, if this were built on Vulkan it would be more portable and we could use it on a Raspberry Pi...
The app you linked is using PyTorch. We don't have a DirectML backend for PyTorch at the moment, but this is definitely something we could be interested in supporting in the future if there is a demand from the community.
The future never came.
They made it. Nowadays PyTorch just works on AMD GPUs and APUs. I use it on a Ryzen 5700G APU.
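Worth noting for anyone following this route: on ROCm builds of PyTorch, AMD support reuses the regular `torch.cuda` API, so CUDA-targeted scripts run unchanged. A sketch of the usual device check, with the availability flag injected so it runs without torch installed; in real code the flag comes from `torch.cuda.is_available()`:

```python
def pick_torch_device(gpu_available: bool) -> str:
    # On a ROCm build, torch.cuda.is_available() returns True for an
    # AMD GPU/APU, and the "cuda" device string addresses the AMD device.
    return "cuda" if gpu_available else "cpu"

print(pick_torch_device(True))   # cuda
print(pick_torch_device(False))  # cpu
```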
Related Issues (20)
- torch.nn.DataParallel(net).to(dml) raised an error HOT 1
- Replace model in DirectMLNpuInference sample: The specified device interface or feature level is not supported on this system HOT 2
- Support frame buffering without descriptor overwrite
- torch.lstm raised an error with backend HOT 5
- Outputs of masks are different between onnxruntime and onnxruntime-directml, from onnxruntime-directml==1.15, when using detectron2 Mask R-CNN model. HOT 3
- Python build fails to compile
- Microsoft.ML.OnnxRuntime.DirectML and Microsoft.AI.DirectML C++ API got incorrect mask output (detectron2 Mask R-CNN model) when using GPU
- Memory leak in DirectMLNpuInference sample
- UnicodeDecodeError (utf-8 codec can't decode byte 0xcf) when using torch.uint8 in torch_directml
- Is there a new path in DirectML for supporting NPU devices?
- An NPU device driver developer wants to ask some questions
- RuntimeError: Cannot set version_counter for inference tensor HOT 1
- DirectMLNpuInference fails to run on the intel NPU HOT 4
- Need GPU support for aten::_foreach_mul_.Scalar operator
- ImportError: */torch_directml_native.cpython-312-x86_64-linux-gnu.so: undefined symbol *
- Possible to release pypi package for windows on arm64 platform? HOT 1
- The operator 'aten::native_dropout_backward' is not currently supported on the DML backend
- Is it the same way to populate d3d resource between NPU and GPU inference?
- AssertionError when i try to move model to direct ml HOT 1
- DirectMLNpuInference fails to run on the ARM64 NPU HOT 3