difftaichi's People

Contributors

ailzhang, archibate, coffeiersama, domnomnom, ehannigan, erizmr, feisuzhu, izackwu, jim19930609, lin-hitonami, md2perpe, samuela, squarefk, strongoier, xumingkuan, yuanming-hu


difftaichi's Issues

DeprecationWarning

I get many DeprecationWarnings such as 'ti.sqr(x) is deprecated, please use x ** 2 instead', and then the program stops working. I would not expect a warning alone to cause a crash. I tried to fix the warnings myself, but I still run into problems.

Version
[Taichi] version 0.6.7, supported archs: [cpu, cuda, opengl], commit ca4d9dda, python 3.7.4
Ubuntu
Linux version 5.3.0-53-generic (buildd@lgw01-amd64-016) (gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04)) #47~18.04.1-Ubuntu SMP Thu May 7 13:10:50 UTC 2020
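For reference, here is a minimal sketch of mine (not from the original report) of how the deprecated call can be replaced mechanically; the field names and sizes are illustrative:

import numpy as np
import taichi as ti

ti.init(arch=ti.cpu)

x = ti.field(ti.f32, shape=4)
y = ti.field(ti.f32, shape=4)

@ti.kernel
def square():
    for i in x:
        # Older examples used ti.sqr(x[i]); current Taichi expects x[i] ** 2.
        y[i] = x[i] ** 2

x.from_numpy(np.arange(4, dtype=np.float32))
square()
print(y.to_numpy())  # [0. 1. 4. 9.]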

import error, please help!

Ubuntu 16.04
Anaconda 3.5.0.1
cuda 10.0

I pip-installed the package into an Anaconda pyenv, but when I import taichi, it shows:

[Release mode]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/__init__.py", line 1, in <module>
    from taichi.main import main
  File "/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/main.py", line 6, in <module>
    from taichi.tools.video import make_video, interpolate_frames, mp4_to_gif, scale_video, crop_video, accelerate_video
  File "/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/tools/video.py", line 3, in <module>
    import taichi.core as core
  File "/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/core/__init__.py", line 1, in <module>
    from .util import tc_core, build, format, load_module, start_memory_monitoring, \
  File "/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/core/util.py", line 158, in <module>
    import_tc_core()
  File "/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/core/util.py", line 31, in import_tc_core
    import taichi_core as core
ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `CXXABI_1.3.11' not found (required by /home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/core/../lib/taichi_core.so)

How can I fix it? Thanks!
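A diagnostic I would try (my own suggestion, not from the issue thread, and it assumes the strings utility from binutils is installed): check whether the system libstdc++ actually exports the CXXABI_1.3.11 symbol version that taichi_core.so requires.

import subprocess

# List the CXXABI symbol versions provided by the system libstdc++; if
# CXXABI_1.3.11 is missing, the system libstdc++ is older than what the
# prebuilt taichi_core.so was compiled against.
out = subprocess.run(
    ["strings", "/usr/lib/x86_64-linux-gnu/libstdc++.so.6"],
    stdout=subprocess.PIPE, universal_newlines=True,
).stdout
print(sorted({line for line in out.splitlines() if line.startswith("CXXABI_")}))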

Running difftaichi on Linux with ncurses 6

Hi,
I want to try the examples on Linux (because of CUDA support) but got stuck because ncurses 5 is required and my Gentoo Linux only has ncurses 6. Is there a time frame for Taichi to be ported to the new ncurses?
Running on Win10 is OK but slow.
Thanks,
LSha

pip install not working?

python3 -m pip install taichi-nightly
ERROR: Could not find a version that satisfies the requirement taichi-nightly (from versions: none)
ERROR: No matching distribution found for taichi-nightly

Why does the input state contain sinusoids?

This is not necessarily a coding issue, but I don't know where else to ask. In mass_spring.py, the input to the neural network controller (of length n_input_states) contains n_sin_waves sine waves.

https://github.com/yuanming-hu/difftaichi/blob/242905d81d0814911d8f3c376f35b5045446bf61/examples/mass_spring.py#L102-L108

What is the purpose of feeding sine waves into the system? I read the DiffTaichi paper and I do not see any reference to the design of the mass-spring system or why you would want to input a sine wave. If there is no simple explanation, could you point me towards a source that I could read? I appreciate it.
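My own reading (not stated in the issue): the sine waves act as a periodic time encoding, so the controller can produce rhythmic, gait-like actuation even before any useful state feedback develops. A rough sketch of that kind of input construction, with illustrative constants:

import math

n_sin_waves = 4      # illustrative value
spring_omega = 10.0  # illustrative value
dt = 0.004           # illustrative value

def controller_inputs(t, state_features):
    # Periodic time features: n_sin_waves phase-shifted sinusoids of the
    # current simulation time, followed by the robot's state features.
    time_features = [math.sin(spring_omega * t * dt + 2 * math.pi / n_sin_waves * j)
                     for j in range(n_sin_waves)]
    return time_features + list(state_features)

print(controller_inputs(t=5, state_features=[0.1, -0.2]))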

Loss becomes NaN with Taichi 0.4.0

I installed taichi-nightly on OS X and tried to run the demos diffmpm.py, diffmpm3d.py, and liquid.py.

However, the loss becomes NaN on the second iteration.

WHY???

No examples working (Ubuntu 19.01 and 18.01)

I can load the taichi library, but all examples crash with:

[Taichi] version 0.5.3, cuda 10.1, commit 6fa3be84, python 3.6.7
n_objects= 6 n_springs= 11
[W 02/29/20 10:15:14.815] [taichi_llvm_context.cpp:module_from_bitcode_file@186] Bitcode loading error message: Invalid bitcode signature
[E 02/29/20 10:15:14.815] [taichi_llvm_context.cpp:module_from_bitcode_file@188] Bitcode /home/theresa/.conda/envs/conda_env/lib/python3.6/site-packages/taichi/core/../lib/runtime_x64.bc load failure.
[E 02/29/20 10:15:14.815] Received signal 6 (Aborted)
On my 18.01 Ubuntu install, it crashes with:

[Taichi version 0.5.2, cuda 10.1, commit 4d56959a]
[E 02/28/20 17:16:09.152] Received signal 11 (Segmentation fault)

Taichi Compiler Stack Traceback
/home/tbarton1/anaconda3/envs/py37/lib/python3.7/site-packages/taichi/core/../lib/taichi_core.so: taichi::signal_handler(int)
/lib/x86_64-linux-gnu/libc.so.6(+0x3ef20) [0x7fdb8e6abf20]
/home/tbarton1/anaconda3/envs/py37/lib/python3.7/site-packages/taichi/core/../lib/taichi_core.so: taichi::GUI::create_window()
/home/tbarton1/anaconda3/envs/py37/lib/python3.7/site-packages/taichi/core/../lib/taichi_core.so: taichi::GUI::GUI(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, int, int, bool)
/home/tbarton1/anaconda3/envs/py37/lib/python3.7/site-packages/taichi/core/../lib/taichi_core.so(+0xcc6d8c) [0x7fdb6e8ded8c]
/home/tbarton1/anaconda3/envs/py37/lib/python3.7/site-packages/taichi/core/../lib/taichi_core.so(+0xcc6c8f) [0x7fdb6e8dec8f]
/home/tbarton1/anaconda3/envs/py37/lib/python3.7/site-packages/taichi/core/../lib/taichi_core.so(+0xb25504) [0x7fdb6e73d504]
python3(_PyMethodDef_RawFastCallDict+0x24d) [0x55877c836afd]
python3(_PyCFunction_FastCallDict+0x21) [0x55877c836c81]
python3(_PyObject_Call_Prepend+0x63) [0x55877c835313]
python3(PyObject_Call+0x6e) [0x55877c82706e]
python3(+0xacc70) [0x55877c7acc70]
python3(_PyObject_FastCallKeywords+0x128) [0x55877c8804e8]
python3(_PyEval_EvalFrameDefault+0x5379) [0x55877c8d50b9]
python3(_PyEval_EvalCodeWithName+0x2f9) [0x55877c814729]
python3(_PyFunction_FastCallDict+0x400) [0x55877c815a90]
python3(_PyObject_Call_Prepend+0x63) [0x55877c835313]
python3(+0x17f72a) [0x55877c87f72a]
python3(_PyObject_FastCallKeywords+0x128) [0x55877c8804e8]
python3(_PyEval_EvalFrameDefault+0x5787) [0x55877c8d54c7]
python3(_PyEval_EvalCodeWithName+0x2f9) [0x55877c814729]
python3(PyEval_EvalCodeEx+0x44) [0x55877c815654]
python3(PyEval_EvalCode+0x1c) [0x55877c81567c]
python3(+0x22bcb4) [0x55877c92bcb4]
python3(PyRun_FileExFlags+0xa1) [0x55877c936191]
python3(PyRun_SimpleFileExFlags+0x1c3) [0x55877c936383]
python3(+0x237475) [0x55877c937475]
python3(_Py_UnixMain+0x3c) [0x55877c93759c]
/lib/x86_64-linux-gnu/libc.so.6: __libc_start_main
python3(+0x1dfb50) [0x55877c8dfb50]
Have you guys seen either of these issues?

CUDA extremely slow on example where CPU is fast

First of all: cool library. I am trying to familiarize myself with it.

I tried to make a simple example. The code below creates an image with a black-to-white gradient and uses a loss function to darken the image.
It runs fast on the CPU, but it cannot even render the first frame on the GPU (an RTX 2080 Ti). It keeps the GPU at 100% utilization, but nothing happens. I can run other examples just fine on the GPU.

Is there anything I am obviously misunderstanding?

import taichi as ti

# ti.init(arch=ti.x86_64, debug=False)  # works
ti.init(arch=ti.cuda, debug=False)  # extremely slow

n = 320
pixels = ti.var(dt=ti.f32, shape=(n * 2, n), needs_grad=True)
loss = ti.var(dt=ti.f32, shape=(), needs_grad=True)

@ti.kernel
def paint(t: ti.f32):
    for i, j in pixels:
        loss[None] += ti.sqr(pixels[i, j])

@ti.kernel
def init():
    for i, j in pixels:
        pixels[i, j] = i/500. + j/500.

@ti.kernel
def apply_grad():
    for i, j in pixels:
        pixels[i, j] -= learning_rate * pixels.grad[i, j]

gui = ti.GUI("Tester", (n * 2, n))
init()

learning_rate = 0.01

for i in range(1000000):
    print(i)
    with ti.Tape(loss):
        paint(i * 0.1)
    apply_grad()
    print(pixels.grad[5, 5])

    gui.set_image(pixels)
    gui.show()

Thank you in advance for your help.

libcudart.so.10.0 cannot be found

When I try to run any Python script in the examples, it gives the error:
libcudart.so.10.0: cannot open shared object file: No such file or directory

taichi.lang.kernel.KernelDefError: No more for loops allowed

When I run billiards.py, I get this error, but I don't know how to deal with it.

[Taichi] mode=release
[Taichi] version 0.5.7, cpu only, commit 568f6651, python 3.7.3
Traceback (most recent call last):
File "D:\workspace\Python\difftaichi-master\examples\billiards.py", line 215, in
optimize()
File "D:\workspace\Python\difftaichi-master\examples\billiards.py", line 172, in optimize
forward(visualize=True, output=output)
File "F:\Python\Anaconda3\lib\site-packages\taichi\lang\tape.py", line 19, in exit
self.grad()
File "F:\Python\Anaconda3\lib\site-packages\taichi\lang\tape.py", line 28, in grad
func.grad(*args)
File "F:\Python\Anaconda3\lib\site-packages\taichi\lang\kernel.py", line 399, in call
self.materialize(key=key, args=args, arg_features=arg_features)
File "F:\Python\Anaconda3\lib\site-packages\taichi\lang\kernel.py", line 247, in materialize
KernelSimplicityASTChecker(self.func).visit(tree)
File "F:\Python\Anaconda3\lib\ast.py", line 262, in visit
return visitor(node)
File "F:\Python\Anaconda3\lib\site-packages\taichi\lang\ast_checker.py", line 66, in generic_visit
super().generic_visit(node)
File "F:\Python\Anaconda3\lib\ast.py", line 270, in generic_visit
self.visit(item)
File "F:\Python\Anaconda3\lib\ast.py", line 262, in visit
return visitor(node)
File "F:\Python\Anaconda3\lib\site-packages\taichi\lang\ast_checker.py", line 66, in generic_visit
super().generic_visit(node)
File "F:\Python\Anaconda3\lib\ast.py", line 270, in generic_visit
self.visit(item)
File "F:\Python\Anaconda3\lib\ast.py", line 262, in visit
return visitor(node)
File "F:\Python\Anaconda3\lib\site-packages\taichi\lang\ast_checker.py", line 99, in visit_For
super().generic_visit(node)
File "F:\Python\Anaconda3\lib\ast.py", line 270, in generic_visit
self.visit(item)
File "F:\Python\Anaconda3\lib\ast.py", line 262, in visit
return visitor(node)
File "F:\Python\Anaconda3\lib\site-packages\taichi\lang\ast_checker.py", line 96, in visit_For
f'No more for loops allowed, at {self.get_error_location(node)}')
taichi.lang.kernel.KernelDefError: No more for loops allowed, at file=D:\workspace\Python\difftaichi-master\examples\billiards.py kernel=collide line=78

What's wrong with it? Hope someone can help me. Thanks very much!
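For context, and this is my own sketch rather than an official answer: the error comes from Taichi's "kernel simplicity rule" for differentiable kernels, which (in those versions) allows at most one for loop at the outermost scope of a kernel used under ti.Tape. The usual workaround is to split such a kernel into several kernels, each with a single top-level loop, roughly like this:

import taichi as ti

ti.init(arch=ti.cpu)

n = 8
x = ti.field(ti.f32, shape=n, needs_grad=True)
y = ti.field(ti.f32, shape=n, needs_grad=True)
loss = ti.field(ti.f32, shape=(), needs_grad=True)

# Instead of one differentiable kernel with two top-level for loops,
# give each kernel a single outermost loop.
@ti.kernel
def scale():
    for i in x:
        y[i] = 2 * x[i]

@ti.kernel
def reduce():
    for i in y:
        loss[None] += y[i]

for i in range(n):
    x[i] = i

with ti.Tape(loss=loss):
    scale()
    reduce()
print(loss[None], x.grad[0])  # gradient of sum(2 * x) w.r.t. x[0] is 2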

python smoke_taichi_gpu.py error

Error encountered when running smoke_taichi_gpu.py


[Release mode]
[T 01/13/20 12:10:09.233] [logging.cpp:Logger@68] Taichi core started. Thread ID = 13909
[Taichi version 0.3.20, cuda 10.0, commit 1c85d8e1]
Loading initial and target states...
Using CUDA Device [0]: GeForce GTX 1080
Device Compute Capability: 6.1
[I 01/13/20 12:10:11.626] [taichi_llvm_context.cpp:TaichiLLVMContext@59] Creating llvm context for arch: x86_64
[I 01/13/20 12:10:11.652] [/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/lang/impl.py:materialize@125] Materializing layout...
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 700 promoted to 1024.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 700 promoted to 1024.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 100 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[D 01/13/20 12:10:11.652] [snode.cpp:create_node@48] Non-power-of-two node size 110 promoted to 128.
[I 01/13/20 12:10:11.769] [struct_llvm.cpp:operator()@276] Allocating data structure of size 235012104 B
Initializing runtime with 38 elements
Runtime initialized.
[I 01/13/20 12:10:11.777] [taichi_llvm_context.cpp:TaichiLLVMContext@59] Creating llvm context for arch: cuda
[I 01/13/20 12:10:12.048] [/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/lang/kernel.py:materialize@180] Compiling kernel clear_gradients_c26_0_...
[I 01/13/20 12:10:12.136] [llvm_jit_ptx.cpp:compile@179] PTX size: 26.97KB
[I 01/13/20 12:10:12.139] [/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/lang/kernel.py:materialize@180] Compiling kernel clear_gradients_c26_1_...
[I 01/13/20 12:10:12.227] [llvm_jit_ptx.cpp:compile@179] PTX size: 27.06KB
[I 01/13/20 12:10:12.229] [/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/lang/kernel.py:materialize@180] Compiling kernel clear_gradients_c26_2_...
[I 01/13/20 12:10:12.318] [llvm_jit_ptx.cpp:compile@179] PTX size: 27.06KB
[I 01/13/20 12:10:12.320] [/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/lang/kernel.py:materialize@180] Compiling kernel clear_gradients_c26_3_...
[I 01/13/20 12:10:12.408] [llvm_jit_ptx.cpp:compile@179] PTX size: 27.06KB
[I 01/13/20 12:10:12.410] [/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/lang/kernel.py:materialize@180] Compiling kernel clear_gradients_c26_4_...
[I 01/13/20 12:10:12.500] [llvm_jit_ptx.cpp:compile@179] PTX size: 27.06KB
[I 01/13/20 12:10:12.502] [/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/lang/kernel.py:materialize@180] Compiling kernel clear_gradients_c26_5_...
[I 01/13/20 12:10:12.590] [llvm_jit_ptx.cpp:compile@179] PTX size: 27.06KB
[I 01/13/20 12:10:12.592] [/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/lang/kernel.py:materialize@180] Compiling kernel clear_gradients_c26_6_...
[I 01/13/20 12:10:12.681] [llvm_jit_ptx.cpp:compile@179] PTX size: 27.06KB
[I 01/13/20 12:10:12.683] [/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/lang/kernel.py:materialize@180] Compiling kernel clear_gradients_c26_7_...
[I 01/13/20 12:10:12.759] [llvm_jit_ptx.cpp:compile@179] PTX size: 17.42KB
[I 01/13/20 12:10:12.761] [/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/lang/kernel.py:materialize@180] Compiling kernel clear_gradients_c26_8_...
[I 01/13/20 12:10:12.835] [llvm_jit_ptx.cpp:compile@179] PTX size: 16.25KB
[I 01/13/20 12:10:12.870] [/home/lukuan/.pyenv/versions/anaconda3-5.0.1/envs/lkconda/lib/python3.6/site-packages/taichi/lang/kernel.py:materialize@180] Compiling kernel advect_c10_0_...
[E 01/13/20 12:10:12.879] [type_check.cpp:visit@113] Taichi tensors must be accessed with integral indices (e.g., i32/i64). It seems that you have used a float point number as an index. You can cast that to an integer using int(). Also note that ti.floor(ti.f32) returns f32.
[E 01/13/20 12:10:12.879] Received signal 6 (Aborted)
***********************************

llvm version is 6.0.0
cuda 10
ubuntu 16.04
Thanks!
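The type_check error above complains about indexing a field with a floating-point value. This is my own minimal illustration (not taken from smoke_taichi_gpu.py) of the cast the message asks for:

import taichi as ti

ti.init(arch=ti.cpu)

n = 16
grid = ti.field(ti.f32, shape=n)

@ti.kernel
def sample(p: ti.f32) -> ti.f32:
    # ti.floor returns a floating-point value, so cast it before indexing.
    i = ti.cast(ti.floor(p * n), ti.i32)
    return grid[i]

grid[3] = 1.0
print(sample(0.2))  # 0.2 * 16 = 3.2 -> index 3 -> 1.0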

CUDA not working

[Taichi version 0.3.25, cuda 10.0, commit c5ce590f]
Using CUDA Device [0]: GeForce GTX 980 Ti
Device Compute Capability: 5.2
[E 01/30/20 00:43:20.664] [unified_allocator.cpp:UnifiedAllocator@23] Cuda Error cudaErrorMemoryAllocation: out of memory
[E 01/30/20 00:43:20.664] Received signal 6 (Aborted)
***********************************
* Taichi Compiler Stack Traceback *
***********************************
/home/thom/taichi/lib/python3.7/site-packages/taichi/core/../lib/taichi_core.so: taichi::signal_handler(int)
/usr/lib/libc.so.6(+0x3bfb0) [0x7fd8a2b08fb0]
/usr/lib/libc.so.6: gsignal
/home/thom/taichi/lib/python3.7/site-packages/taichi/core/../lib/taichi_core.so: taichi::Tlang::UnifiedAllocator::UnifiedAllocator(unsigned long, bool)
/home/thom/taichi/lib/python3.7/site-packages/taichi/core/../lib/taichi_core.so: taichi::Tlang::UnifiedAllocator::create(bool)
/home/thom/taichi/lib/python3.7/site-packages/taichi/core/../lib/taichi_core.so: taichi::Tlang::Program::Program(taichi::Tlang::Arch)
/home/thom/taichi/lib/python3.7/site-packages/taichi/core/../lib/taichi_core.so(+0x81ca79) [0x7fd87f0aba79]
/home/thom/taichi/lib/python3.7/site-packages/taichi/core/../lib/taichi_core.so(+0x612fe4) [0x7fd87eea1fe4]
/usr/lib/libpython3.7m.so.1.0: _PyMethodDef_RawFastCallDict
/usr/lib/libpython3.7m.so.1.0: _PyCFunction_FastCallDict
/usr/lib/libpython3.7m.so.1.0: _PyObject_Call_Prepend
/usr/lib/libpython3.7m.so.1.0: PyObject_Call
/usr/lib/libpython3.7m.so.1.0(+0x152813) [0x7fd8a28dd813]
/usr/lib/libpython3.7m.so.1.0: _PyObject_FastCallKeywords
/usr/lib/libpython3.7m.so.1.0(+0x156fb2) [0x7fd8a28e1fb2]
/usr/lib/libpython3.7m.so.1.0: _PyEval_EvalFrameDefault
/usr/lib/libpython3.7m.so.1.0: _PyEval_EvalCodeWithName
/usr/lib/libpython3.7m.so.1.0: _PyFunction_FastCallKeywords
/usr/lib/libpython3.7m.so.1.0(+0x156e30) [0x7fd8a28e1e30]
/usr/lib/libpython3.7m.so.1.0: _PyEval_EvalFrameDefault
/usr/lib/libpython3.7m.so.1.0: _PyFunction_FastCallDict
/usr/lib/libpython3.7m.so.1.0: _PyObject_FastCall_Prepend
/usr/lib/libpython3.7m.so.1.0(+0x152893) [0x7fd8a28dd893]
/usr/lib/libpython3.7m.so.1.0(+0x152a88) [0x7fd8a28dda88]
/usr/lib/libpython3.7m.so.1.0: _PyEval_EvalFrameDefault
/usr/lib/libpython3.7m.so.1.0: _PyFunction_FastCallKeywords
/usr/lib/libpython3.7m.so.1.0(+0x156e30) [0x7fd8a28e1e30]
/usr/lib/libpython3.7m.so.1.0: _PyEval_EvalFrameDefault
/usr/lib/libpython3.7m.so.1.0: _PyEval_EvalCodeWithName
/usr/lib/libpython3.7m.so.1.0: PyEval_EvalCodeEx
/usr/lib/libpython3.7m.so.1.0: PyEval_EvalCode
/usr/lib/libpython3.7m.so.1.0(+0x1fee85) [0x7fd8a2989e85]
/usr/lib/libpython3.7m.so.1.0: PyRun_FileExFlags
/usr/lib/libpython3.7m.so.1.0: PyRun_SimpleFileExFlags
/usr/lib/libpython3.7m.so.1.0(+0x206610) [0x7fd8a2991610]
/usr/lib/libpython3.7m.so.1.0: _Py_UnixMain
/usr/lib/libc.so.6: __libc_start_main
python(_start+0x2e) [0x563762dac05e]

I have tried using both cuda 10.0 and 10.1. Sorry if this is a known issue, I've tried searching for a solution online.

Thank you :) would really like to try and mess around with all this.

Question about the speed of the billiards example

I downloaded the billiards example and it takes an entire minute to finish. I thought it was something simple that would figure out the solution in 100 iterations, basically within milliseconds. Are my expectations a little too high?

PyTorch interface? (not an issue)

Hi, I've found your simulator and it looks really nice.

I wanted to ask whether there is a possibility to interface the environment (including gradients) with PyTorch. And if not, how complicated might this be to do on my own?

Best,
Jarda
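Not an official answer, but for what it's worth: Taichi fields expose to_torch() and from_torch() helpers, so one common approach is to copy data between fields and tensors at the boundary and wrap the Taichi forward/backward kernels in a custom torch.autograd.Function for end-to-end gradients. A minimal data-exchange sketch, with an illustrative field:

import taichi as ti
import torch

ti.init(arch=ti.cpu)

x = ti.field(ti.f32, shape=8, needs_grad=True)

# Copy a PyTorch tensor into a Taichi field and read it back as a tensor.
t = torch.arange(8, dtype=torch.float32)
x.from_torch(t)
print(x.to_torch())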

Unneeded clear_states() in mass_spring.py

In the mass_spring.py optimization loop, clear() is called before each iteration, but I want to know whether it needs to be.
https://github.com/yuanming-hu/difftaichi/blob/d5a1bb8b19eba4859e4c03d19c3ffdd39c4eeee8/examples/mass_spring.py#L323-L327

https://github.com/yuanming-hu/difftaichi/blob/d5a1bb8b19eba4859e4c03d19c3ffdd39c4eeee8/examples/mass_spring.py#L274-L281

I was trying to clear other gradient values besides those listed in clear_states(), and I realized that all of the gradients were set to 0 when I entered "with ti.Tape()". I looked into the taichi code, and I found that tape clears gradients by default.

https://github.com/taichi-dev/taichi/blob/b0b60a7da36ef2fb3a93924ebe8a44b4d2778622/python/taichi/lang/__init__.py#L266-L273

I figured I would just remove the call to clear() in my own code, but I wanted to double check before I did so. Is there another reason that clear() needs to be called? Or is it leftover code from older versions of taichi?
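To double-check the behavior described above, this is a small experiment I would run (my own sketch): set a stale gradient value by hand and confirm that entering ti.Tape resets it.

import taichi as ti

ti.init(arch=ti.cpu)

x = ti.field(ti.f32, shape=(), needs_grad=True)
loss = ti.field(ti.f32, shape=(), needs_grad=True)

@ti.kernel
def compute():
    loss[None] = x[None] * x[None]

x[None] = 3.0
x.grad[None] = 123.0       # stale gradient from a previous run
with ti.Tape(loss=loss):   # Tape clears all gradients before replaying
    compute()
print(x.grad[None])        # 6.0, not 129.0: the stale value was wiped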

examples don't work

I tried to run a few examples and this error came out:
ModuleNotFoundError: No module named 'matplotlib'

What license does this code repository use?

"The Tao gives birth to One, One gives birth to Two, Two gives birth to Three, and Three gives birth to all things."

A piece of computational software that can draw such a Way out of ancient philosophy must be very impressive.

What rules should users and developers follow?

How to apply force during for-loop in diffmpm

Taking diffmpm_simple.py as an example, I was thinking of adding an additional force during the p2g step, that is (at line 87):

grid_v_in[f, base + offset] += weight * (p_mass * v[f, p] - dt * x.grad[f, p] + affine @ dpos)

where x.grad is supposed to be ∂(total_energy)/∂x.
However, since we already use ti.Tape(loss=loss) to store the gradient of init_v, I am wondering how to get both ∂(total_energy)/∂x inside the loop and ∂(loss)/∂init_v after the loop.

Generating random numbers

Is it possible to generate random numbers within the Tape? I've tried taichi.random(dt) and numpy; the first alternative crashes (see below for a minimal working example), and numpy-generated random numbers don't seem to vary between iterations.

MWE with taichi.random(dt)

import taichi as ti

real = ti.f32
ti.set_default_fp(real)

scalar = lambda: ti.var(dt=real)
loss = scalar()
value = scalar()

@ti.layout
def place():
  ti.root.place(value)
  ti.root.place(loss)
  ti.root.lazy_grad()

@ti.kernel
def sample():
  value[None] = ti.random(dt=real)

def main():
  with ti.Tape(loss):
    sample()
    print(value[None])

main()

This example prints the generated random value and then crashes.

Run-time generated log is here, and Taichi Compiler Stack Traceback is here. Also I'm using Ubuntu 18.04 with PyCharm and anaconda.
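A workaround I would try (my own sketch, not a confirmed fix): draw the random number on the Python side with numpy, write it into a field before entering the Tape, and regenerate it every iteration so it actually varies between runs.

import numpy as np
import taichi as ti

ti.init(arch=ti.cpu)

value = ti.field(ti.f32, shape=(), needs_grad=True)
loss = ti.field(ti.f32, shape=(), needs_grad=True)

@ti.kernel
def compute():
    loss[None] = value[None] * value[None]

for it in range(3):
    # Regenerate the random sample on the host each iteration.
    value[None] = np.random.uniform()
    with ti.Tape(loss=loss):
        compute()
    print(it, value[None], value.grad[None])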

diffmpm3d.py doesn't work with Metal

(difftaichi-1-EJh8dB-py3.8) ➜  examples git:(test) ✗ python diffmpm3d.py
[Taichi] mode=release
[Taichi] preparing sandbox at /var/folders/2c/1fvvzc9145j4b2kl3d539vvr0000gn/T/taichi-980668x5
[Taichi] version 0.7.16, llvm 10.0.0, commit 8d24c2f1, osx, python 3.8.9
[Taichi] Starting on arch=metal
[W 04/23/21 14:24:06.938] [impl.py:layout@451] @ti.layout will be deprecated in the future, use ti.root directly to specify data layout anytime before the data structure materializes.
n_particles 30495
n_solid 30495
[Taichi] materializing...
[E 04/23/21 14:24:08.203] Received signal 11 (Segmentation fault: 11)



                            * Taichi Core - Stack Traceback *
==========================================================================================
|                       Module |  Offset | Function                                      |
|----------------------------------------------------------------------------------------|
*               taichi_core.so |     126 | taichi::Logger::error(std::__1::basic_string< |
                                         | char, std::__1::char_traits<char>, std::__1:: |
                                         | allocator<char> > const&, bool)               |
*               taichi_core.so |     228 | taichi::(anonymous namespace)::signal_handler |
                                         | (int)                                         |
*     libsystem_platform.dylib |      29 | (null)                                        |
* AppleIntelKBLGraphicsMTLDriver |       0 | (null)                                      |
*               taichi_core.so |      68 | taichi::lang::metal::(anonymous namespace)::B |
                                         | ufferMemoryView::BufferMemoryView(unsigned lo |
                                         | ng, taichi::lang::MemoryPool*)                |
*               taichi_core.so |     629 | taichi::lang::metal::KernelManager::Impl::Imp |
                                         | l(taichi::lang::metal::KernelManager::Params) |
                                         |                                               |
*               taichi_core.so |     289 | taichi::lang::metal::KernelManager::KernelMan |
                                         | ager(taichi::lang::metal::KernelManager::Para |
                                         | ms)                                           |
*               taichi_core.so |    3122 | taichi::lang::Program::materialize_layout()   |
*               taichi_core.so |     111 | taichi::lang::layout(std::__1::function<void  |
                                         | ()> const&)                                   |
*               taichi_core.so |      93 | void pybind11::cpp_function::initialize<void  |
                                         | (*&)(std::__1::function<void ()> const&), voi |
                                         | d, std::__1::function<void ()> const&, pybind |
                                         | 11::name, pybind11::scope, pybind11::sibling> |
                                         | (void (*&)(std::__1::function<void ()> const& |
                                         | ), void (*)(std::__1::function<void ()> const |
                                         | &), pybind11::name const&, pybind11::scope co |
                                         | nst&, pybind11::sibling const&)::'lambda'(pyb |
                                         | ind11::detail::function_call&)::operator()(py |
                                         | bind11::detail::function_call&) const         |
*               taichi_core.so |    4408 | pybind11::cpp_function::dispatcher(_object*,  |
                                         | _object*, _object*)                           |
*                       Python |     171 | (null)                                        |
*                       Python |     274 | (null)                                        |
*                       Python |     804 | (null)                                        |
*                       Python |   29861 | (null)                                        |
*                       Python |    1947 | (null)                                        |
*                       Python |     227 | (null)                                        |
*                       Python |     346 | (null)                                        |
*                       Python |   29833 | (null)                                        |
*                       Python |     106 | (null)                                        |
*                       Python |     108 | (null)                                        |
*                       Python |   30754 | (null)                                        |
*                       Python |    1947 | (null)                                        |
*                       Python |     227 | (null)                                        |
*                       Python |     155 | (null)                                        |
*                       Python |      61 | (null)                                        |
*                       Python |      78 | (null)                                        |
*                       Python |   10133 | (null)                                        |
*                       Python |     106 | (null)                                        |
*                       Python |     346 | (null)                                        |
*                       Python |   30050 | (null)                                        |
*                       Python |    1947 | (null)                                        |
*                       Python |      51 | (null)                                        |
*                       Python |     102 | (null)                                        |
*                       Python |      82 | (null)                                        |
*                       Python |     133 | (null)                                        |
*                       Python |     660 | (null)                                        |
*                       Python |    1870 | (null)                                        |
*                       Python |     306 | (null)                                        |
*                       Python |      42 | (null)                                        |
*                libdyld.dylib |       1 | (null)                                        |
*                          ??? |       2 | (null)                                        |
==========================================================================================


Internal error occurred. Check out this page for possible solutions:
https://taichi.readthedocs.io/en/stable/install.html#troubleshooting

Taichi has no installable version `0.6.26`

While the readme tells folks to use taichi==0.6.26 to run misc examples, I got:

pip install taichi==0.6.26
ERROR: Could not find a version that satisfies the requirement taichi==0.6.26 (from versions: 0.5.15, 0.6.32, 0.6.33, 0.6.35, 0.6.36, 0.6.37, 0.6.39, 0.6.40, 0.6.41, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.7.4, 0.7.5, 0.7.8, 0.7.10, 0.7.14, 0.7.15, 0.7.16, 0.7.17, 0.7.18)
ERROR: No matching distribution found for taichi==0.6.26

which seems to indicate that Taichi has no installable version 0.6.26 anymore (on macOS)?

License of repo content

Hi,
I am working on a talk and I was wondering whether I can use images from this repository and, if yes, what the best attribution would be.

Best, Vassil

How to fix the "field(s) are not placed" error

So I'm trying to run the examples, yet many of them throw the error shown below (diffmpm as an example):
File "diffmpm.py", line 386, in
main()
File "diffmpm.py", line 348, in main
weights[i, j] = np.random.randn() * 0.01
...
RuntimeError: These field(s) are not placed:
File "diffmpm.py", line 32, in
actuator_id = ti.field(ti.i32)
File "diffmpm.py", line 33, in
particle_type = ti.field(ti.i32)
...............many similar errors...................
File "diffmpm.py", line 44, in
x_avg = vec()

So I tried to fix these errors by assigning a shape manually (ti.Vector.field(..., shape=(xx, xx))), yet some of the shapes are not clear to me and this takes a lot of work. Is there a cleaner fix for these?

I tried on Windows and Ubuntu 18.04; both have the same issue. My Taichi version is 0.7.31.
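For reference, my own sketch of the two ways to satisfy the placement requirement in newer Taichi versions (the sizes here are illustrative, not the ones diffmpm.py actually needs): either give every field a shape at creation, or place shapeless fields under ti.root before the first kernel launch.

import taichi as ti

ti.init(arch=ti.cpu)

n_particles = 1024  # illustrative size

# Option 1: give the field a shape directly.
x_avg = ti.Vector.field(2, dtype=ti.f32, shape=())

# Option 2: create shapeless fields and place them explicitly.
actuator_id = ti.field(ti.i32)
particle_type = ti.field(ti.i32)
ti.root.dense(ti.i, n_particles).place(actuator_id, particle_type)

@ti.kernel
def init():
    for i in actuator_id:
        actuator_id[i] = -1
        particle_type[i] = 1

init()
print(actuator_id.shape, x_avg.shape)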

wave.py example does not compute a solution

Hey there, thank you for your work.

Running wave.py does not seem to produce a solution.
The loss begins at approximately -0.00012 or so, then over the iterations goes to -1, -2, ..., -50,
but the image of the solution is just a blank white square.

After toggling the setup on line 125, the first iteration shows waves propagating, but later ones are back to a blank white square.

Perhaps the values of p are simply clipping?

Particle-based examples (diffmpm.py) and water_renderer.py work fine.

Any ideas?

update difftaichi

When I install the current version of taichi and try to run difftaichi, it reports that some modules have been deprecated. Will difftaichi be updated for the current version of taichi? Thanks.

Diffmpm.py crashes during the forward simulation

Out-of-bounds Memory Access in the DiffMpm Example

In the example given in diffmpm.py, the code crashes during an iteration of my training process. I found that there is an array out-of-bounds access during the forward simulation.

How to reproduce

System specifications

  • OS: Ubuntu 18.04 LTS
  • Taichi version: 1.4.1
  • Python version: 3.9.16

To reproduce the bug, only the forward simulation is needed.

The necessary files are given in https://www.dropbox.com/s/obhpljd9qp8vj10/bug_report.zip?dl=0. The values of the parameters weights and bias at the moment of the crash are given in weights.npy and bias.npy, respectively.

To get a minimal example that reproduces the crash, run the file bug_report.py, which simply loads the values of weights and bias from their corresponding numpy files and runs the forward simulation for 1500 steps. You should observe output saying:

[Taichi] version 1.4.1, llvm 15.0.4, commit e67c674e, linux, python 3.9.16
[Taichi] Starting on arch=x64
n_particles 2976
n_solid 2976
Traceback (most recent call last):
  File "/home/user/bug_report.py", line 419, in <module>
    reproduce_bug()
  File "/home/user/bug_report.py", line 364, in reproduce_bug
    forward(1500)
  File "/home/user/bug_report.py", line 254, in forward
    advance(s)
  File "/home/user/miniconda3/envs/tai/lib/python3.9/site-packages/taichi/ad/_ad.py", line 311, in decorated
    func(*args, **kwargs)
  File "/home/user/bug_report.py", line 234, in advance
    p2g(s)
  File "/home/user/miniconda3/envs/tai/lib/python3.9/site-packages/taichi/lang/kernel_impl.py", line 976, in wrapped
    raise type(e)('\n' + str(e)) from None
taichi.lang.exception.TaichiAssertionError:
(kernel=p2g_c82_0) Accessing field (S25place<f32>) of size (128, 128) with indices (-5980, -8119)

Worshipping the master

Worshipping the master, worshipping the master, worshipping the master!

Code comments for the example diffmpm.py

Hello difftaichi Team,

First of all, I'd like to extend my gratitude for developing and maintaining such an innovative and impactful project.

While working with the difftaichi examples, I've been particularly focusing on the diffmpm.py file. While the functionality and performance of this script are impressive, I've found that the lack of comments makes it challenging to understand and extend certain parts of the code, especially for newcomers.

Understanding the underlying principles and the rationale behind certain coding decisions can significantly enhance the learning curve. Therefore, I kindly request if it would be possible to add some comments to the diffmpm.py file.

Thank you,

JYan

Unexpectedly bad results for billiards.py variant

In billiards.py, I've made some changes to the starting condition of the cue ball and the target position. These changes are available here.

What I observed was that in almost every iteration the result got worse rather than better,
i.e. the loss seems to be increasing:
(plot: the loss keeps increasing over iterations)

The ending stable state is not that surprising (no gradient), but the initial trend concerns me.

[question] can we make mass_spring.py repeatable (deterministic) between runs?

I was debugging some modifications I made to mass_spring.py when I realized that the result of each run is non-deterministic. I went back to the original mass_spring.py and made sure the controller network weights were initialized to the same value each time. But even when I can guarantee that there are no random variables being assigned anywhere, the resulting loss differs in each run.

Here are two different runs of the exact same code. You can see that the controller weights are exactly the same, but the loss values begin to diverge.

Run 1: mass_spring.py 2 train

n_objects= 20 n_springs= 46
weights1[0,0] -0.23413006961345673 weights2[0,0] 0.46663400530815125
Iter= 0 Loss= -0.2193218171596527 0.19502715683487248
Iter= 1 Loss= -0.21754804253578186 0.07976935930575488
Iter= 2 Loss= -0.3397877812385559 0.055776006347379746
Iter= 3 Loss= -0.3514309227466583 0.03870257399629174

Run 2: mass_spring.py 2 train

n_objects= 20 n_springs= 46
weights1[0,0] -0.23413006961345673 weights2[0,0] 0.46663400530815125
Iter= 0 Loss= -0.21932175755500793 0.1950520028177551
Iter= 1 Loss= -0.21754644811153412 0.07983238023710348
Iter= 2 Loss= -0.3397367000579834 0.055822440269175766
Iter= 3 Loss= -0.3514898419380188

In my own modifications, this was resulting in inconsistent failures of the simulation (v_inc will explode and all values will go to nan). I assume this is due to instabilities in Euler integration, but it would be nice to be able to get consistent results each time to make debugging easier.

Where could the non-deterministic behavior be coming from? Is it something we can fix, or are there stochastic processes that are a result of the compiler?
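One hypothesis (mine, not confirmed by the maintainers): the loss is accumulated with parallel floating-point atomic adds, whose summation order is not fixed, so tiny run-to-run differences get amplified by the stiff simulation. A quick way to test this is to force single-threaded CPU execution and see whether the runs become identical; the cpu_max_num_threads option is available in recent Taichi versions:

import taichi as ti

# Run all kernels on one CPU thread so reductions happen in a fixed order.
# If the losses now match between runs, the nondeterminism comes from the
# order of parallel floating-point accumulation, not from the model itself.
ti.init(arch=ti.cpu, cpu_max_num_threads=1)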

Mass spring loss is `nan` after repeatedly running

I am running taichi 1.2.1 on an arm64 machine with Python 3.10.8. I've written some code that repeatedly calls the mass_spring simulation with different mass-spring layouts, and the mass_spring loss becomes nan when this happens. However, this behavior is not deterministic; sometimes it happens after 1 iteration, other times after 4-5, and it does not fail on any particular mass_spring layout.
Before each call to mass_spring's main, I reload the mass_spring module to re-initialize the variables, and afterwards I tear down with ti.reset().
Could someone please shed some light on this error?

New examples not working

I updated taichi to the latest version and tried to run the examples as instructed in the readme, but I still can't get them to run. The compile and build phase gives all sorts of errors. I have tried this across two different devices and it still doesn't work.
