Comments (6)
Oh, I see :-)
This feature is not exposed, but it should be easy to implement, as compiling from source is what the backends actually do (for example, in the CUDACompiler::compile() method). I would be glad to help refactor this into a standalone method if you need it.
The tricky part would be making the source compatible with other parts of the DSL and runtime. For example, we would need some mechanism to "reflect" the arguments and resource usages from the source string, so that the argument-binding and command-scheduling functionalities could still work.
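To give a feel for what such reflection might involve, here is a toy sketch in plain Python that parses argument types and names out of a kernel signature in a source string. Everything here is invented for illustration (the `kernel void name(type name, ...)` syntax is not LuisaCompute's actual DSL grammar, and a real implementation would reuse the DSL parser rather than a regex):

```python
import re

def reflect_arguments(source: str):
    """Extract (type, name) pairs from the first kernel signature found.

    Assumes a hypothetical `kernel void <name>(<type> <name>, ...)` syntax;
    this is a sketch, not LuisaCompute's real grammar or API.
    """
    m = re.search(r"kernel\s+void\s+\w+\s*\(([^)]*)\)", source)
    if m is None:
        raise ValueError("no kernel signature found")
    args = []
    for piece in m.group(1).split(","):
        piece = piece.strip()
        if not piece:
            continue
        type_, name = piece.rsplit(" ", 1)  # last token is the argument name
        args.append((type_.strip(), name))
    return args

src = "kernel void saxpy(buffer<float> x, buffer<float> y, float a) { ... }"
print(reflect_arguments(src))
# [('buffer<float>', 'x'), ('buffer<float>', 'y'), ('float', 'a')]
```

With the reflected (type, name) list in hand, the runtime could in principle build the same argument-binding metadata it would normally get from the traced AST.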
Another (possibly) useful approach is to support injecting "custom" operations into the DSL as part of a kernel, using a syntax similar to inline assembly. We think this might be a valid solution for easily extending the system, and we are planning the feature for the next version of LuisaCompute.
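Mechanically, such an injection feature could amount to splicing a user-supplied native code segment into the generated shader source at a marked point, much like inline assembly. A toy sketch in Python (the placeholder convention is invented here; it is not how LuisaCompute implements anything):

```python
PLACEHOLDER = "/*__CUSTOM_OP__*/"  # invented marker, for illustration only

def inject_custom_op(generated_source: str, snippet: str) -> str:
    """Replace the placeholder left by the code generator with a raw
    user-provided snippet written in the native shading language."""
    if PLACEHOLDER not in generated_source:
        raise ValueError("no injection point in generated source")
    return generated_source.replace(PLACEHOLDER, snippet, 1)

generated = "void kernel_main() { float x = f(); /*__CUSTOM_OP__*/ g(x); }"
print(inject_custom_op(generated, "x = x * 2.0f;"))
```

The hard part, as above, would be type-checking the snippet and tracking the resources it touches, since the snippet bypasses the traced AST.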
from luisacompute.
Yes.
LuisaCompute traces and records (unified) kernel ASTs at runtime, then generates shader sources and compiles them into PSOs/CUDA modules with the different backends. This design should fit well with the use cases of node-based systems that dynamically assemble and compile kernels/shaders.
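The record-then-generate flow can be illustrated with a toy expression tracer in plain Python. Nothing below is LuisaCompute's actual API; it only mimics the idea that running ordinary host code over symbolic values records an AST, from which source is generated afterwards:

```python
class Expr:
    """A tiny traced expression: operator overloads build up a textual AST
    instead of computing values, mimicking a runtime-traced DSL."""
    def __init__(self, text):
        self.text = text
    def __add__(self, other):
        return Expr(f"({self.text} + {_wrap(other).text})")
    def __mul__(self, other):
        return Expr(f"({self.text} * {_wrap(other).text})")

def _wrap(v):
    return v if isinstance(v, Expr) else Expr(repr(v))

def codegen(name, params, body: Expr) -> str:
    """Emit CUDA-like source from the recorded expression."""
    args = ", ".join(f"float {p}" for p in params)
    return (f"__global__ void {name}({args}, float *out) {{\n"
            f"    out[threadIdx.x] = {body.text};\n"
            f"}}\n")

# "Trace" a kernel by running ordinary Python code over symbolic values.
a, b = Expr("a"), Expr("b")
source = codegen("fma_kernel", ["a", "b"], a * b + 1.0)
print(source)
```

In the real system the recorded AST is a proper data structure (not strings), and each backend walks it to emit its own shading language.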
For the CUDA backend (which I assume you are interested in), code generation is implemented in src/backends/cuda/cuda_codegen.[cpp|h], and compilation in src/backends/cuda/cuda_compiler.[cpp|h] (which also uses NVRTC).
For more information (e.g., motivations, design principles, and technical details), you might be interested in the original paper. We also have a proof-of-concept renderer that implements a dynamic node-based system.
Hi, thanks for the information. I read your SIGGRAPH paper first, and that is what brought me here!
I understand that LuisaCompute performs JIT compilation at runtime. However, what we are wondering is whether it is possible to compile a kernel from a procedurally generated string (assuming the content is source code written in LuisaCompute's DSL). As far as I am aware, the Device::compile function currently only takes kernel objects. I would guess there is a way to generate a kernel from a string, but I don't know where it is. Of course LuisaCompute is fully capable of it; I just wonder whether this feature is already implemented.
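For context, what we are after looks roughly like this: a node graph is flattened into DSL source text, which we would then like to hand to the device for compilation. A toy sketch in Python (the surface syntax below is invented for illustration; it is not LuisaCompute's actual DSL):

```python
def emit_kernel_source(nodes):
    """Assemble kernel source from a node graph, where each node is a
    (output_name, op, inputs) tuple. The syntax emitted here is made up
    for illustration and is not LuisaCompute's actual DSL."""
    lines = ["kernel void generated(buffer<float> in, buffer<float> out) {",
             "    float v0 = in[dispatch_id()];"]
    for out_name, op, args in nodes:
        lines.append(f"    float {out_name} = {op}({', '.join(args)});")
    last = nodes[-1][0] if nodes else "v0"
    lines.append(f"    out[dispatch_id()] = {last};")
    lines.append("}")
    return "\n".join(lines)

graph = [("v1", "sin", ["v0"]),
         ("v2", "mul", ["v1", "v1"])]
print(emit_kernel_source(graph))
```

The missing piece is the last step: something like a `Device::compile` overload that accepts the resulting string.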
Hi, yes, it seems both approaches would work. Previously we did something quite similar to your second approach. Either way, we will first experiment a bit further with LuisaCompute and see what we really want. Maybe we can contribute a bit to this project, too.
Thanks! Looking forward to your good news!
We have recently (experimentally) introduced a simple "native include" feature, which allows users to include code segments written in the native shading language and call the imported ExternalCallables. An example is here: test_native_include.
The feature still has some limitations; e.g., resources are not supported and no type checks are performed. We will try to improve it in the future.
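On the missing type checks: a minimal check could compare the declared signature of an external callable against each call site. A sketch in Python (the signature representation as a list of type-name strings is invented for illustration, not LuisaCompute's internal form):

```python
def check_call(declared, call_args):
    """Compare an external callable's declared parameter types (list of
    type-name strings) against the argument types at a call site.
    Returns None on success, or an error message. Illustrative only."""
    if len(declared) != len(call_args):
        return (f"arity mismatch: expected {len(declared)} arguments, "
                f"got {len(call_args)}")
    for i, (want, got) in enumerate(zip(declared, call_args)):
        if want != got:
            return f"argument {i}: expected {want}, got {got}"
    return None

print(check_call(["float", "uint"], ["float", "uint"]))   # None
print(check_call(["float", "uint"], ["float", "float"]))
```

Even a simple check like this would turn a backend compiler error into a readable diagnostic at the call site.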
Related Issues (20)
- Open Shading Language to LuisaCompute AST translator HOT 1
- CPU backend shared memory emulation via LLVM coroutine
- Build (linking) error on macOS Ventura 13.5.2 Apple M2 HOT 10
- Support offset and size in texture copy commands
- test_denoiser crash with DX backend HOT 2
- Vulkan / HIP backend(s) HOT 5
- Minimal cross platform example for blit to screen HOT 2
- Not building with msc HOT 9
- Export `luisa-config.h` for other build systems than CMake
- Better error handling when RTX support is not supported
- Simple lambda causes LESBAR HOT 2
- default variable initialization codegen wrong in loop scope HOT 1
- Motion Blur Support HOT 1
- ImageView.copy_from() crashes with specific resolution HOT 1
- Should TLAS accept primitives only? HOT 1
- Incorrect Buffer copy_from and copy_to on CPU/DX and possibly Metal backends
- Compilation of the DirectML parts fails when building with CMake against "Windows 10 SDK" version "10.0.19041.0" HOT 2
- 'swapchain context is not initialized' error when building with xmake and running with the CPU backend HOT 2
- python confusing behavior due to implicit casting HOT 2
- Pass values between RT shaders with OptiX's built-in registers rather than local memory